ISSN ONLINE(2319-8753)PRINT(2347-6710)

Face Recognition of Different Modalities Using SIFT and LBP Features

G. Srividhya1, Ms. B. Vijaya Lakshmi M.E (Ph.D) 2
  1. Final Year Research Student, Department of ECE, K.L.N.College of Information Technology, Madurai, Tamilnadu, India
  2. Associate Professor, Department of ECE, K.L.N.College of Information Technology, Madurai, Tamilnadu, India

International Journal of Innovative Research in Science, Engineering and Technology


ABSTRACT

The objective of this work is to transform samples in different modalities to a common feature space, where the discriminant features of the modalities are well aligned so that comparison between them is possible. In this paper, we propose a method for heterogeneous face recognition. The prototype subjects (i.e., the training set) have an image in each modality (probe and gallery), and the similarity of an image is measured against the prototype images from the corresponding modality. Images are initially filtered with three different image filters: Difference of Gaussians, CSDN (center-surround divisive normalization), and a Gaussian filter. After this process, SIFT and LBP features are extracted; SIFT is a computer-vision algorithm that detects and describes local features in images. The similarity between two pattern images is then measured by kernel similarity. Finally, the identified image is retrieved from the database.


KEYWORDS

Discriminant analysis, feature descriptors, heterogeneous face recognition, kernel similarity, prototypes, viewed sketch.


INTRODUCTION

The heterogeneous face recognition algorithm is not built for any specific HFR scenario; instead, it is designed to generalize to any HFR scenario. A frontal photograph exists for the majority of the population, yet many security and intelligence scenarios necessitate identification from other modalities of face images (e.g., viewed sketch, infrared image). Matching non-photograph face images (probe images) to large databases of frontal photographs (gallery images) is called heterogeneous face recognition (HFR). Current technology does not support this scenario well. HFR is one of the most challenging problems in face recognition because of the high intra-class variability caused by the change in modality. Successful solutions would greatly expand the opportunities to apply face recognition technology.
Common modalities:
Sketch – facilitates FR when no face image exists
NIR – nighttime and controlled condition face capture, close to visible spectrum
Thermal – passive sensing method, highly covert.
The motivation behind heterogeneous face recognition is that circumstances exist in which only a particular modality of a face image is available for querying a large database of mug shots (visible-band face images). In such a case, a sketch based on a verbal description provided by a witness or the victim is likely to be the only available source of a face image. Despite significant progress in the accuracy of face recognition systems, most commercial off-the-shelf (COTS) face recognition systems (FRS) are not designed to handle HFR scenarios. The need for face recognition systems specifically designed for the task of matching heterogeneous face images is therefore of substantial interest.


RELATED WORK

Work in this area began with sketch recognition using viewed sketches and has continued into other modalities such as near-infrared (NIR) and forensic sketches. We highlight a representative selection of studies in heterogeneous face recognition, as well as studies that use kernel-based approaches for classification. Early work in heterogeneous face recognition took several approaches to synthesizing a sketch from a photograph (or vice versa); one performs the transformation using local linear embedding to estimate the corresponding photo patch from a sketch patch. The proposed prototype framework is similar in spirit to these methods in that no direct comparison between face images in the probe and gallery modalities is needed.
These generative, transformation-based approaches have generally been surpassed by discriminative feature-based approaches. A number of discriminative feature-based approaches to HFR have been proposed that show good matching accuracies in both the sketch and NIR domains. These approaches first represent face images using local feature descriptors, such as variants of local binary patterns (LBP) and SIFT descriptors.
The core of the proposed approach involves using a relational feature representation for face images. By using kernel similarities between a face pattern and a set of prototypes, we are able to exploit the kernel trick, which allows us to generate a high dimensional, nonlinear representation of a face image using compact feature vectors. While it is not common to refer to kernel methods as prototype representations, in this work we emphasize the fact that kernel methods use a training set of images (which serve as prototypes) to implicitly estimate the distribution of the patterns in a non-linear feature space. One key to our framework is that each prototype has one pattern for each image modality.
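The relational representation described above can be sketched as follows. This is a minimal illustration: the Gaussian RBF kernel and the gamma value are assumptions for the sake of example, since the specific kernel is not fixed by this description.

```python
import numpy as np

def rbf_kernel(x, y, gamma=0.1):
    """Non-linear (Gaussian RBF) similarity between two feature vectors."""
    return float(np.exp(-gamma * np.sum((x - y) ** 2)))

def prototype_representation(x, prototypes, gamma=0.1):
    """Relational feature vector: the kernel similarity of pattern x to
    every prototype image from the same modality as x."""
    return np.array([rbf_kernel(x, p, gamma) for p in prototypes])
```

Because each prototype has one pattern per modality, a probe is compared against the probe-modality prototypes and a gallery image against the gallery-modality prototypes, yielding vectors that live in the same relational space.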


PROPOSED METHOD

(i) In the Difference of Gaussians (DoG) filter, blurred images are obtained by convolving the original grayscale image with Gaussian kernels having differing standard deviations. Blurring an image with a Gaussian kernel suppresses only high-frequency spatial information; subtracting one blurred image from the other preserves the spatial information that lies in the band of frequencies between those preserved in the two blurred images. Thus, the Difference of Gaussians is a band-pass filter that discards all but a narrow band of the spatial frequencies present in the original grayscale image.
(ii) CSDN (center-surround divisive normalization) models the interactions between the center and surround regions of visual receptive fields: each pixel is divided by a constant plus a measure of the local stimulus contrast.
(iii) The Gaussian filter is a windowed linear filter that computes a weighted mean of neighboring pixels; it is named after Carl Friedrich Gauss because the filter weights are calculated according to the Gaussian distribution.
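As a concrete illustration of the three filters above, here is a minimal NumPy sketch. The kernel radius, the sigma values, and the epsilon constant in the divisive normalization are illustrative choices, not parameters taken from this paper.

```python
import numpy as np

def gaussian_kernel(sigma):
    """1-D Gaussian weights, normalized to sum to one."""
    radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x ** 2 / (2.0 * sigma ** 2))
    return k / k.sum()

def gaussian_blur(img, sigma):
    """Filter (iii): separable Gaussian filter, i.e. a weighted mean of
    each pixel's neighbourhood with Gaussian weights."""
    k = gaussian_kernel(sigma)
    pad = len(k) // 2
    smooth = lambda v: np.convolve(np.pad(v, pad, mode="edge"), k, mode="valid")
    rows = np.apply_along_axis(smooth, 1, img)   # filter along rows
    return np.apply_along_axis(smooth, 0, rows)  # then along columns

def difference_of_gaussians(img, sigma1=1.0, sigma2=2.0):
    """Filter (i): band-pass obtained by subtracting two blurred copies."""
    return gaussian_blur(img, sigma1) - gaussian_blur(img, sigma2)

def csdn(img, sigma=8.0, eps=1e-6):
    """Filter (ii), center-surround divisive normalization: divide each
    pixel by its Gaussian-weighted surround mean (plus a small constant)."""
    return img / (gaussian_blur(img, sigma) + eps)
```

On a constant image, the DoG response is (near) zero and the CSDN response is (near) one, matching the intuition that both filters respond to local contrast rather than absolute intensity.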
SIFT is an algorithm in computer vision to detect and describe local features in images. For any object in an image, interesting points on the object can be extracted to provide a "feature description" of the object. This description, extracted from a training image, can then be used to identify the object when attempting to locate it in a test image containing many other objects. To perform reliable recognition, it is important that the features extracted from the training image be detectable even under changes in image scale, noise, and illumination. Such points usually lie on high-contrast regions of the image, such as object edges.
Another important characteristic of these features is that the relative positions between them in the original scene should not change from one image to another. First, the extrema of the scale space are detected. Then the key points are localized and nearby points are interpolated; finally, low-contrast key points and edge responses are eliminated.
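The detection steps just described (scale-space extrema detection followed by contrast filtering) can be sketched as follows. This is a simplified illustration of the SIFT detector only, not the full algorithm: sub-pixel interpolation, edge-response rejection, orientation assignment, and the descriptor itself are omitted, and the sigma values and contrast threshold are arbitrary choices.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_stack(img, sigmas):
    """Difference-of-Gaussians images at a sequence of increasing scales."""
    blurred = [gaussian_filter(img, s) for s in sigmas]
    return np.stack([b2 - b1 for b1, b2 in zip(blurred, blurred[1:])])

def scale_space_extrema(dogs, contrast_thresh=0.03):
    """Candidate keypoints: pixels that are extrema over their 3x3x3
    scale-space neighbourhood and exceed a contrast threshold."""
    kps = []
    for s in range(1, dogs.shape[0] - 1):
        for y in range(1, dogs.shape[1] - 1):
            for x in range(1, dogs.shape[2] - 1):
                v = dogs[s, y, x]
                if abs(v) < contrast_thresh:
                    continue  # low-contrast candidate, discard
                patch = dogs[s-1:s+2, y-1:y+2, x-1:x+2]
                if v == patch.max() or v == patch.min():
                    kps.append((s, y, x))
    return kps
```

A blob-shaped bright region produces a strong extremum at the scale that best matches the blob's size, which is the scale-selection property the detector relies on.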
The LBP operator assigns a label to every pixel of a gray-level image. The label mapped to a pixel is determined by the relationship between that pixel and its eight neighbors.
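A minimal sketch of this basic LBP operator, assuming the standard 3x3 neighbourhood; the clockwise bit ordering is a convention chosen here for illustration, not one specified in the text.

```python
import numpy as np

def lbp_image(img):
    """Basic LBP: threshold the eight 3x3 neighbours of each pixel
    against the centre and read the resulting bits as an 8-bit label."""
    # neighbour offsets, clockwise from the top-left corner
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = img.shape
    center = img[1:h-1, 1:w-1]
    labels = np.zeros((h - 2, w - 2), dtype=int)
    for bit, (dy, dx) in enumerate(offsets):
        neigh = img[1+dy:h-1+dy, 1+dx:w-1+dx]
        labels += (neigh >= center).astype(int) << bit
    return labels.astype(np.uint8)
```

Border pixels have no complete 3x3 neighbourhood, so the label image is two pixels smaller in each dimension.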
The viewed sketch images used here were each drawn by an artist while looking at the corresponding photograph of the subject.
A kernel function maps the features between testing and training images; it is also a similarity function. Instead of operating in an implicit Hilbert space, similarity-based predictors are given in terms of a weight function. Here we find the distance between the testing and training features, and the minimum distance identifies the match in the database.
The matching stage outputs a measure of similarity (or dissimilarity) between two face images, where the feature vectors used to compute such (dis)similarities are the outputs of the feature extraction stage discussed above. That is, a probe (or query) image is matched against a gallery (or database) by finding the face image in the gallery with the minimum distance (such as the Euclidean or cosine distance) or maximum similarity. The matching stage can often be augmented by an additional stage of statistical learning (that is, in addition to the learning that occurred in the feature extraction stage). A common approach here is to map the task of generating a measure of similarity between two face images to a binary classification problem that determines whether two face images are of the 'same subject' or 'different subjects'.
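Given the feature vectors from the previous stage, the matching stage reduces to a nearest-neighbour search over the gallery. A minimal sketch using the cosine similarity mentioned above:

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match(probe_vec, gallery_vecs):
    """Return the index of the gallery vector with maximum cosine
    similarity to the probe's feature vector."""
    scores = [cosine_similarity(probe_vec, g) for g in gallery_vecs]
    return int(np.argmax(scores))
```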


LITERATURE SURVEY

(i) Face Recognition Using Kernel Direct Discriminant Analysis Algorithms
The proposed method combines kernel-based methodologies with discriminant analysis techniques. The kernel function is utilized to map the original face patterns to a high-dimensional feature space, where the highly non-convex and complex distribution of face patterns is linearized and simplified, so that linear discriminant techniques can be used for feature extraction. The small sample size (SSS) problem caused by the high dimensionality of the mapped patterns is addressed by an improved D-LDA technique that finds the optimal discriminant subspace of the feature space without any loss of significant discriminant information. In conclusion, the kernel direct discriminant analysis (KDDA) algorithm is a general pattern recognition method for nonlinear feature extraction from high-dimensional input patterns that does not suffer from the SSS problem.
KDDA provides excellent performance in the recognition system.
However, the algorithm does not perform well on images with varying illumination.
(ii) Face Description with Local Binary Patterns Application to Face Recognition
In this paper, a novel and efficient facial representation is proposed. It is based on dividing a facial image into small regions and computing a description of each region using local binary patterns. These descriptors are then combined into a spatially enhanced histogram or feature vector. We provide a more detailed analysis of the proposed representation. The LBP operator was originally designed for texture description. The operator assigns a label to every pixel of an image by thresholding the 3x3-neighborhood of each pixel with the center pixel value and considering the result as a binary number. Then the histogram of the labels can be used as a texture descriptor.
LBP is used to extract the texture pattern of the face image.
However, the accuracy of the system is low.
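The spatially enhanced histogram described in (ii) can be sketched as follows, given an LBP label image; the 4x4 region grid is an illustrative choice rather than a value taken from the paper.

```python
import numpy as np

def spatial_lbp_histogram(lbp_labels, grid=(4, 4)):
    """Divide an LBP label image into grid regions and concatenate the
    per-region 256-bin histograms into one feature vector."""
    h, w = lbp_labels.shape
    gy, gx = grid
    feats = []
    for i in range(gy):
        for j in range(gx):
            region = lbp_labels[i*h//gy:(i+1)*h//gy, j*w//gx:(j+1)*w//gx]
            hist, _ = np.histogram(region, bins=256, range=(0, 256))
            feats.append(hist)
    return np.concatenate(feats)
```

Concatenating per-region histograms (rather than pooling one global histogram) is what makes the descriptor "spatially enhanced": it retains coarse information about where each texture pattern occurs on the face.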
(iii) Enhanced Local Texture Feature Sets for Face Recognition under Difficult Lighting Conditions
We have presented new methods for face recognition under uncontrolled lighting based on robust preprocessing and an extension of the Local Binary Pattern (LBP) local texture descriptor. There are three main contributions:
(i) A simple, efficient image preprocessing chain whose practical recognition performance is comparable to or better than current illumination normalization methods.
(ii) A rich descriptor for local texture called Local Ternary Patterns (LTP) that generalizes LBP while fragmenting less under noise in uniform regions.
(iii) A distance-transform-based similarity metric that captures the local structure and geometric variations of LBP/LTP face images.
The method is robust in recognizing images captured under uncontrolled lighting.
However, the process takes more time to execute.
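The LTP descriptor of contribution (ii) above can be sketched as follows; the threshold value t is an illustrative choice. Neighbours within t of the centre code to 0, which is why LTP fragments less than LBP in uniform regions.

```python
import numpy as np

def ltp_codes(img, t=5.0):
    """Local Ternary Patterns: each neighbour codes +1 if it exceeds the
    centre by more than t, -1 if it is below by more than t, else 0; the
    ternary code is split into positive and negative binary patterns."""
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = img.shape
    center = img[1:h-1, 1:w-1]
    pos = np.zeros((h - 2, w - 2), dtype=int)
    neg = np.zeros((h - 2, w - 2), dtype=int)
    for bit, (dy, dx) in enumerate(offsets):
        neigh = img[1+dy:h-1+dy, 1+dx:w-1+dx]
        pos += (neigh >= center + t).astype(int) << bit  # +1 ternary digits
        neg += (neigh <= center - t).astype(int) << bit  # -1 ternary digits
    return pos, neg
```

On a uniform region, both binary patterns stay zero regardless of small perturbations below the threshold, whereas plain LBP would flip bits arbitrarily.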
(iv) Coupled Spectral Regression for Matching Heterogeneous Faces
In this paper, coupled spectral regression (CSR) is developed as an effective and efficient framework for matching heterogeneous faces. CSR first models the properties of the different types of data separately and then learns two associated projections that project the heterogeneous data (e.g., VIS and NIR) into a discriminative common subspace in which classification is finally performed. The CSR method can also be integrated with the kernel trick to map the data into an implicit high- or even infinite-dimensional feature space.
It improves the generalization performance effectively and meanwhile greatly reduces the computational expenses.
However, the system works for only two modalities.
(v) Improving Kernel Fisher Discriminant Analysis for Face Recognition
In this paper, we present two new subspace methods (NLDA, NKFDA) based on the null space approach and the kernel technique. Both of them effectively solve the small sample size problem and eliminate the possibility of losing discriminative information.
The main contributions of this paper are summarized as follows:
(a) The essence of null space-based LDA under the SSS problem is revealed, and the most suitable situation for the null space method is identified.
(b) A more efficient Cosine kernel function is adopted to enhance the capability of the original polynomial kernel.
It is simpler than all other null space methods, saving computational cost while maintaining performance.
However, the algorithm extracts too many features per face; this could be reduced by adopting a different algorithm.
(vi) An Efficient Face Recognition and Retrieval Using LBP and SIFT
This paper presents a method for face recognition and retrieval. In most cases, existing methods are unable to increase the retrieval rate of face images, especially LFW images; with the proposed system the retrieval rate is drastically increased. In face recognition, inter-class objects should ideally have a larger distance than intra-class objects. LBP and SIFT features of training images are extracted and arranged in a sparse representation; shape context and inner-distance shape context methods are then applied to the test image to derive the relevant images with better performance.
Performance improvement on four large data sets has demonstrated the effectiveness of co-transduction/tri-transduction for shape/object retrieval.
The method is insensitive to articulation and sensitive to part structures.




CONCLUSION

In the proposed method, test and training images are initially filtered with three different image filters, and two different local feature descriptors are then extracted. A training set of prototypes is selected, in which each prototype subject has an image in both the gallery and probe modalities. The non-linear kernel similarity between an image and the prototypes is measured in the corresponding modality. This method achieves excellent matching accuracy for viewed sketch images.


REFERENCES

  1. B. F. Klare and A. K. Jain, “Heterogeneous Face Recognition Using Kernel Prototype Similarities,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 35, no. 6, June 2013.
  2. S. Z. Li, R. Chu, S. Liao, and L. Zhang, “Illumination Invariant Face Recognition Using Near-Infrared Images,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 29, no. 4, April 2007.
  3. X. Tang and X. Wang, “Face Sketch Recognition,” IEEE Trans. Circuits and Systems for Video Technology, vol. 14, no. 1, January 2004.
  4. B. Klare and A. K. Jain, “Heterogeneous Face Recognition: Matching NIR to Visible Light Images,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 29, no. 4, pp. 627-639, June 2007.
  5. L. Trujillo, G. Olague, R. Hammoud, and B. Hernandez, “Automatic Feature Localization in Thermal Images for Facial Expression Recognition,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 21, no. 12, December 2004.
  6. X. Huang, Z. Lei, M. Fan, X. Wang, and S. Z. Li, “Regularized Discriminative Spectral Regression Method for Heterogeneous Face Matching,” IEEE Trans. Image Processing, vol. 22, no. 1, January 2013.
  7. P. Phillips, H. Moon, P. Rauss, and S. Rizvi, “The FERET Evaluation Methodology for Face-Recognition Algorithms,” Proc. IEEE Conf. Computer Vision and Pattern Recognition, vol. 24, no. 5, September 1997.
  8. B. Klare and A. Jain, “Sketch to Photo Matching: A Feature-Based Approach,” Proc. SPIE Conf. Biometric Technology for Human Identification; IEEE Trans. Information Forensics and Security, vol. 7, no. 2, pp. 518-529, 2012.
  9. A. Quattoni, M. Collins, and T. Darrell, “Transfer Learning for Image Classification with Sparse Prototype Representations,” Proc. IEEE Conf. Computer Vision and Pattern Recognition, 2008.
  10. K. I. Kim, K. Jung, and H. J. Kim, “Face Recognition Using Kernel Principal Component Analysis,” IEEE Signal Processing Letters, vol. 9, no. 2, pp. 40-42, February 2002.
  11. M.-F. Balcan, A. Blum, and S. Vempala, “Kernels as Features: On Kernels, Margins, and Low-Dimensional Mappings,” Machine Learning, vol. 65, pp. 79-94, 2006.