
Review on Various Face Recognition Techniques

Ashlesha D. Kolap1, S. V. Shrikhande2, Nitin K. Jagtap1
  1. M.Tech Student, Dept. of Electrical, VJTI, Mumbai University, Maharashtra, India
  2. Software Development Section, Electronics and Instrumentation Group, BARC, Trombay, Mumbai, India

Visit for more related articles at International Journal of Innovative Research in Computer and Communication Engineering


Face recognition is an interesting field of research for the purpose of person identification, and multiple techniques have been used to carry out this task. Face recognition involves several pre-processing steps before person identification is performed: pre-processing removes noise and illumination effects and extracts the face region. This face region is then used to obtain feature vectors and recognize the person. In this paper, an overview of several techniques is presented along with their merits and demerits, which can help in building a face recognition algorithm.


Keywords: Face recognition, face detection, feature extraction, classification


In any application, person identification is an important task for the purpose of security. Many techniques can be used to identify an individual, such as Personal Identification Numbers (PINs), passwords and identity cards. But identity cards can be misplaced or stolen, and PINs and passwords can be forgotten. An individual's biological traits, however, cannot be stolen or misplaced, so biometric techniques play an essential role in person identification. These techniques are widely used since they give better results than identity cards, PINs or passwords. Biometric techniques such as iris detection, fingerprint matching and face recognition can be used. Among these, face recognition is one of the challenging areas in computer vision and image processing [1].
Face recognition has various applications and can be used for security purposes at various installations to identify whether a person resembles one of the images stored in a database. It may be deployed at critical government installations and also at public places such as airports and ATMs. In forensics [2], it can be used for image reconstruction and image generation from a forensic sketch. Face recognition must cope with various challenges, including facial orientation, spectacles or glasses, varying hairstyles, varying moustache and beard styles, facial aging and facial marks. Various techniques used to overcome these challenges are discussed below.


In [3], a rectangular block is used to detect feature orientation for face detection and recognition. This rectangular block system is based on the AdaBoost fast face detection technique and is used to locate the eye and lip regions. An artificial neural network is trained using distance measures between different features, and face recognition is carried out based on this. In [6], a face recognition method is implemented using face detection by an AdaBoost face detector, region of interest (ROI) extraction, feature extraction using the discrete wavelet transform (DWT), dimensionality reduction by independent component analysis (ICA) and classification using a k-nearest neighbour (k-NN) classifier.


Face recognition techniques include the following tasks:
A. Background removal and Face detection:
Images have slight variations in illumination and noise, and a large background region. Some images can have a different scale and orientation. Instead of taking the whole image, only the face region is considered, so that component-based recognition (e.g. of eyes or lips) can also be performed, which is useful in addressing forensic challenges. Hence, to carry out recognition, only the face region has to be detected and scaled in pre-processing: the background is removed and only the face region is considered for further processing.
B. Features extraction:
Once the face is extracted from an image, features are computed from it. Features need to be extracted so that they can be compared with the features of the training dataset: the training-set features are calculated and stored, and are then compared with the features of the test image. These features can be calculated by the various techniques discussed below.
C. Classification and recognition:
The final stage is classification, and based on it, recognition of a person is performed. Classification is done using either a dedicated classifier or simply a distance measured between the feature vectors of the training and test images. The test image is thus assigned to its respective class and recognition is done.
Techniques used for each of the above tasks are given below:
A. Background removal and Face detection:
a) Viola Jones method:
Viola and Jones proposed a face detection system based on constructing an integral image that allows fast feature extraction [3]. The value at any location (x, y) of the integral image is the sum of the image's pixels above and to the left of location (x, y), inclusive [4].


After integral image construction, the method calculates features and constructs a classifier by selecting a small number of important features using AdaBoost (Adaptive Boosting) [5]. The image is partitioned into sub-windows. Haar-like features (Fig. 2) are used to detect features in the image [6]; these features are scanned across all image sub-windows. The value for each rectangle or sub-window is calculated by applying the Haar-like features, and a cascade is applied to each possible face region to finally obtain the face part. The cascade constructs a 'strong' classifier as a linear combination of many 'weak' classifiers [7].
The advantages of this method are that it is robust to position and invariant to size, orientation and lighting conditions. The Viola-Jones method gives robust real-time face detection [8], and the classifier it uses is simple and efficient.
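The integral image and a rectangle sum computed from it can be sketched in a few lines. This is an illustrative NumPy sketch (function names are ours, not from the cited papers); it shows why any rectangle sum, and hence any two-rectangle Haar-like feature, costs only a handful of array lookups:

```python
import numpy as np

def integral_image(img):
    """Integral image: ii[y, x] = sum of img[0:y+1, 0:x+1]."""
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, y, x, h, w):
    """Sum of pixels in the h x w rectangle with top-left corner (y, x),
    computed in O(1) from four integral-image references."""
    total = ii[y + h - 1, x + w - 1]
    if y > 0:
        total -= ii[y - 1, x + w - 1]
    if x > 0:
        total -= ii[y + h - 1, x - 1]
    if y > 0 and x > 0:
        total += ii[y - 1, x - 1]
    return total

def haar_two_rect(ii, y, x, h, w):
    """Two-rectangle Haar-like feature: left half minus right half."""
    half = w // 2
    return rect_sum(ii, y, x, h, half) - rect_sum(ii, y, x + half, h, half)
```

Because each feature evaluation is constant-time regardless of window size, scanning many sub-windows at many scales remains fast, which is what makes the cascade practical in real time.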
b) Skin colour modeling:
Different people have different skin colours, but the difference lies mainly in intensity rather than chrominance. So, to detect the face region, skin detection is very useful. To build a skin colour model, the following threshold values are used in different colour spaces [9]:
YCbCr space: Cb: 80-120, Cr: 133-173
YIQ space: I: 30-100
YUV space: Hue (θ): 105°-150°
Using these values, the face or skin area can be bounded and the detected face region obtained.
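The YCbCr thresholds above can be applied as a simple mask. The sketch below is an assumption-laden illustration: it uses the standard ITU-R BT.601 RGB-to-YCbCr conversion (the cited work [9] does not specify its conversion matrix), and the function names are ours:

```python
import numpy as np

# Chrominance thresholds from the text (YCbCr space)
CB_RANGE = (80, 120)
CR_RANGE = (133, 173)

def rgb_to_ycbcr(rgb):
    """ITU-R BT.601 full-range RGB -> YCbCr conversion (assumed here)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return np.stack([y, cb, cr], axis=-1)

def skin_mask(rgb):
    """Boolean mask: True where Cb and Cr fall inside the skin ranges."""
    ycbcr = rgb_to_ycbcr(rgb.astype(np.float64))
    cb, cr = ycbcr[..., 1], ycbcr[..., 2]
    return ((CB_RANGE[0] <= cb) & (cb <= CB_RANGE[1]) &
            (CR_RANGE[0] <= cr) & (cr <= CR_RANGE[1]))
```

A skin-toned pixel such as RGB (200, 140, 120) falls inside both chrominance ranges, while a pure blue pixel does not, so the mask isolates candidate skin regions independent of brightness.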
In other techniques, the Mahalanobis distance from a colour vector c to the mean skin colour vector μs is calculated, with the covariance matrix Σs used to measure how 'skin-like' the colour c is:
λs(c) = (c − μs)^T Σs^(−1) (c − μs)
If x is an image pixel, then x is considered a skin pixel if λs(x) ≤ t, where t is a threshold [10]. Edges of the skin region are found, the region with the largest filled area is extracted, and this gives the face part of the image. The background is thus removed from the given image.
This method finds the face or skin region almost correctly, since it depends directly on the modelled colour values, which locate the skin region precisely. It works best for frontal face images. Its demerit is that it includes the neck region and cannot distinguish the face from skin-coloured backgrounds.
In skin colour modelling, colour-based mask generation can also be done [11]: the probability of each image pixel being part of a face is computed.
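The Mahalanobis skin test described above can be sketched directly from its definition. This is a generic illustration (the mean, covariance and threshold values below are made up for demonstration, not taken from [10]):

```python
import numpy as np

def mahalanobis_skin(pixels, mu_s, cov_s, t):
    """lambda_s(x) = (x - mu_s)^T Sigma_s^{-1} (x - mu_s);
    a pixel is labelled skin when lambda_s(x) <= t.
    pixels: (n, 3) array of colour vectors."""
    inv = np.linalg.inv(cov_s)
    d = pixels - mu_s
    lam = np.einsum('ni,ij,nj->n', d, inv, d)  # per-pixel quadratic form
    return lam <= t
```

Unlike fixed per-channel thresholds, the covariance matrix lets the acceptance region be an ellipsoid aligned with the actual spread of skin colours in the training data.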
c) Face detection techniques:
In [12, 13, 14], various techniques are discussed, such as template matching, knowledge-based, machine learning, feature-based and geometry-based methods.
In template matching, a face template of fixed size is used to scan the whole image; the greatest similarity is obtained at the face region, and thus the face can be detected. This method is easy to implement but is scale dependent, rotation dependent and computationally expensive.
Knowledge-based methods reduce computational cost but are rotation dependent. They use knowledge about the basic structure of a face; by comparing this knowledge from the training data with test samples, the face can be detected.
Machine learning techniques have high detection rates, but detection accuracy depends on the training samples. Most machine learning techniques use frontal faces or a fixed orientation.
Feature-based techniques use low-level features and are both scale and rotation independent, giving fast detection. They use predominant features such as the eyes, nose and lips to find the face. Separate templates for each feature can also be used for face detection.
Geometry-based techniques consider the relative poses and sizes of the important components of a face, with edge detection giving the direction of these components. The advantage of this method is that it concentrates on important components such as the eyes, nose and mouth, but it does not represent the global structure of the face. Using an ellipse shape, the face can be bounded and separated from the background.
d) Illumination invariant face detection:
The images in a database are not always taken under the same lighting conditions, so every image has a different illumination, which can affect face detection. Illumination-invariant face detection is therefore important. Instead of discarding illumination variations, the objective of the method in [15] is to correct them. In the discrete cosine transform (DCT) domain, illumination variations, which fall predominantly in the low-frequency band, are normalized. By controlling the properties of the odd and even constituents of the DCT, additional effects of illumination variation that create shadows and specular defects are also corrected [15]. Gamma correction can also be applied to remove illumination effects.
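Gamma correction, mentioned above as a simple illumination fix, can be sketched as follows (an illustrative snippet; the gamma value is an arbitrary example, not prescribed by [15]):

```python
import numpy as np

def gamma_correct(img, gamma=0.5):
    """Gamma correction for an 8-bit image: normalize to [0, 1],
    raise to the power gamma, rescale. gamma < 1 brightens shadows,
    gamma > 1 darkens highlights."""
    norm = img.astype(np.float64) / 255.0
    return (norm ** gamma * 255.0).astype(np.uint8)
```

For example, with gamma = 0.5 a dark pixel value of 64 is lifted to 127, while 0 and 255 are left fixed, compressing the illumination range before detection.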
B. Feature extraction:
a) Principal Component Analysis (PCA) or Eigen face method:
Principal Component Analysis (PCA) is an orthogonal transformation of correlated components into linearly uncorrelated components known as principal components: numerous possibly correlated variables are transformed into a smaller number of uncorrelated variables. PCA thus reduces the dimensionality of the feature vectors [16]. The training dataset is merged into a matrix and its covariance matrix is calculated. To find the principal components, the eigenvectors and eigenvalues of the covariance matrix are computed. The eigenvectors to keep as principal components can be selected based on entropy [17]: each eigenvector is removed in turn and the entropy of the remaining eigenvector matrix is calculated. If the removal of an eigenvector causes more disorder, that eigenvector is more important and is ranked higher. The top-ranking eigenvectors are selected. Once the principal components are found, the training and test samples are projected onto them and similarity is measured in the feature space.
In many cases, PCA outperforms Linear Discriminant Analysis (LDA). PCA is unsupervised, so it does not rely on class information.
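The eigenface pipeline above (covariance, eigen-decomposition, projection, nearest match) can be sketched compactly. This is a minimal illustration ranking eigenvectors by eigenvalue rather than by the entropy criterion of [17]; function names are ours:

```python
import numpy as np

def eigenfaces(train, k):
    """train: (n_samples, n_pixels) matrix of flattened face images.
    Returns the mean face and the top-k principal components."""
    mean = train.mean(axis=0)
    centered = train - mean
    cov = centered.T @ centered / (len(train) - 1)  # pixel covariance
    vals, vecs = np.linalg.eigh(cov)                # symmetric matrix
    order = np.argsort(vals)[::-1][:k]              # largest eigenvalues first
    return mean, vecs[:, order]

def project(x, mean, components):
    """Project a flattened image onto the eigenface subspace."""
    return (x - mean) @ components

def nearest_class(test, train_proj, labels, mean, components):
    """Classify by minimum Euclidean distance in the feature space."""
    t = project(test, mean, components)
    d = np.linalg.norm(train_proj - t, axis=1)
    return labels[int(np.argmin(d))]
```

Projection onto a handful of components replaces thousands of raw pixel values with a short feature vector, which is the dimensionality reduction the text describes.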
b) Independent Component Analysis (ICA) feature extraction:
Principal Component Analysis uses only second-order statistics, whereas Independent Component Analysis is a generalization of PCA that uses higher-order statistics.
Independent Component Analysis (ICA) attempts to decompose a multivariate signal into non-Gaussian independent components [18, 19, 20, 21]. Kurtosis is used in ICA to measure the non-Gaussianity of signals (images); in general, it gives an idea of the 'peakedness' of the probability distribution. The pixels are treated as random variables and the face images as outcomes, i.e. X = A·S, where S is the source matrix of statistically independent, non-Gaussian components, A is the mixing matrix and X contains the observed (correlated) mixtures. By maximizing a negentropy function, the independent vectors can be obtained and treated as a basis.
As with PCA, the training and test samples are projected onto the independent components to find the training sample most similar to the test case.
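The kurtosis measure of non-Gaussianity mentioned above is a one-liner; the sketch below uses excess kurtosis (zero for a Gaussian) so the sign directly indicates how "peaked" the distribution is:

```python
import numpy as np

def kurtosis(x):
    """Excess kurtosis of a 1-D sample: E[z^4] - 3 after standardizing.
    Approximately 0 for Gaussian data, positive for peaked (super-Gaussian)
    distributions, negative for flat (sub-Gaussian) ones."""
    z = (x - x.mean()) / x.std()
    return (z ** 4).mean() - 3.0
```

In ICA, components with kurtosis far from zero are the interesting, maximally non-Gaussian directions; for a Gaussian sample the value hovers near 0, while a Laplace (peaked) sample gives a clearly positive value.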
c) Gray Level Co-occurrence Matrix (GLCM):
The direct gray level co-occurrence matrix (GLCM) method is very competitive; the GLCM is used to extract the feature vector [22].
The GLCM is created by counting how often a pixel with gray level i occurs adjacent to a pixel with gray level j. The adjacency can have a chosen distance and direction: horizontal, vertical, left diagonal or right diagonal. The features can also be averaged over all four directions (0°, 45°, 90° and 135°). GLCM construction is shown in Fig. 4 below:
Fig. 4 shows gray level co-occurrence matrix construction horizontally and vertically; diagonal co-occurrences can be calculated in a similar way.
The gray level co-occurrence matrix also gives information about contrast, correlation, energy and homogeneity. Contrast measures the intensity difference between a pixel and its neighbour over the whole image; correlation indicates how correlated a pixel is with its neighbour over the whole image; and homogeneity is reflected in higher values along the GLCM diagonal.
The feature vector is generated by converting the GLCM into a vector, which can then be used for classification.
Using a smaller number of gray levels makes the algorithm faster while preserving a high recognition rate.
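The GLCM construction and the contrast statistic described above can be sketched as follows (an illustrative implementation, not the exact feature set of [22]; the offset (dy, dx) = (0, 1) gives the horizontal 0° direction):

```python
import numpy as np

def glcm(img, dy=0, dx=1, levels=8):
    """Count how often gray level i occurs at offset (dy, dx) from level j.
    img must contain integer gray levels in [0, levels)."""
    g = np.zeros((levels, levels), dtype=np.int64)
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            y2, x2 = y + dy, x + dx
            if 0 <= y2 < h and 0 <= x2 < w:
                g[img[y, x], img[y2, x2]] += 1
    return g

def contrast(g):
    """GLCM contrast: sum of (i - j)^2 * p(i, j) over the normalized matrix."""
    p = g / g.sum()
    i, j = np.indices(g.shape)
    return ((i - j) ** 2 * p).sum()
```

Because the matrix is levels x levels, quantizing the image to fewer gray levels shrinks it quadratically, which is why the text notes that fewer levels speed the method up.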
d) Discrete Wavelet Transform (DWT):
Feature extraction is done using discrete wavelet transform (DWT) coefficients [23, 24]. Haar, Daubechies, Coiflet, Meyer, Morlet and Mexican hat wavelets can be used as the mother wavelet. In DWT decomposition, the image is decomposed into different sub-bands, LL, LH, HL and HH (for a single-level decomposition), giving approximation and detail coefficients. The approximation coefficients (LL), the lowest-frequency sub-band, look like a replica of the input image; the three detail sub-bands give horizontal, vertical and diagonal features. Features can thus be extracted using wavelet decomposition.
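A single-level 2-D decomposition with the simplest of the listed wavelets, the Haar wavelet, can be sketched directly with averages and differences (an illustrative sketch using an averaging normalization; library implementations such as PyWavelets scale the coefficients differently):

```python
import numpy as np

def haar_dwt2(img):
    """Single-level 2-D Haar wavelet decomposition of an image with even
    dimensions. Returns the LL (approximation) and LH, HL, HH (detail)
    sub-bands, each half the size of the input."""
    a = img.astype(np.float64)
    # Filter along rows: pairwise averages (low-pass) and differences (high-pass)
    lo = (a[:, 0::2] + a[:, 1::2]) / 2
    hi = (a[:, 0::2] - a[:, 1::2]) / 2
    # Filter along columns, producing the four sub-bands
    ll = (lo[0::2, :] + lo[1::2, :]) / 2
    lh = (lo[0::2, :] - lo[1::2, :]) / 2
    hl = (hi[0::2, :] + hi[1::2, :]) / 2
    hh = (hi[0::2, :] - hi[1::2, :]) / 2
    return ll, lh, hl, hh
```

On a flat image the three detail bands are zero and LL reproduces the image at half resolution, matching the description of LL as a low-frequency replica of the input.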
e) Facial features detection:
Eyes can be detected based on the assumption that they are darker than the rest of the face. An ellipse is fitted to each potential eye region using connected component analysis [10]. To increase detection speed, the upper half of the detected face can be searched for the eyes and the lower third for the lips [3]. Other features such as the nose and its orientation can then be found; for example, the width of the nose is taken as roughly 80% of the width of the lips.
This method is advantageous because precise eye localization is obtained, and once the eyes are found, other features such as the nose and mouth can easily be obtained.
The nose tip as well as the eyes can also be found by considering corner points [25, 26], extracted using corner detection. To find the eyeballs, valley-point searching can be done using directional projection, and the symmetry of the two eyeballs gives the location of the eyes.
C. Classification and recognition:
a) Distance-wise classification:
Once the feature vectors of the training and test images are found, the test image must be classified into one of the training classes. This can be done by calculating the Euclidean distance between the training and test feature vectors [14, 16]; the minimum distance indicates the class the test image belongs to, thus recognizing the person.
b) K – nearest neighbor (K-NN):
The nearest neighbours are searched for among the feature vectors in feature space. k-NN is used for classification and can be employed on the images [15]. It is observed that as k increases, the recognition rate decreases; the method gives an 83.5% recognition rate for k = 1 [6].
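Both the minimum-distance rule and k-NN reduce to a few lines; the sketch below (an illustrative implementation with our own function name) covers both, since k = 1 is exactly the Euclidean minimum-distance classifier of the previous subsection:

```python
import numpy as np
from collections import Counter

def knn_classify(test_vec, train_vecs, labels, k=1):
    """Majority vote among the k nearest training vectors
    (Euclidean distance in feature space)."""
    d = np.linalg.norm(train_vecs - test_vec, axis=1)
    nearest = np.argsort(d)[:k]
    votes = Counter(labels[i] for i in nearest)
    return votes.most_common(1)[0][0]
```

With k = 1 the result is simply the label of the closest training vector; larger k trades sensitivity to outliers against blurring of class boundaries, consistent with the observation that the recognition rate drops as k grows.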
c) Neural Network:
The human way of recognizing faces is modelled, to some extent, by employing a neural network. A direct neural network maps face images to persons and their expression variations [25]. Moreover, an inverse network can approximately reconstruct the images from the persons' information and facial expression diversities.
d) Support Vector Machine (SVM):
A Support Vector Machine (SVM) is a supervised learning algorithm used for classification and regression analysis [26]. It constructs a hyperplane with n-dimensional parameters w and b, and the sign of (w·x + b) gives the result for bi-class classification. For multi-class recognition, multiple classifiers have to be employed.
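The bi-class decision rule and one way of combining multiple classifiers can be sketched as follows. This illustrates only the decision stage with a one-vs-rest scheme (an assumption on our part; [26] does not mandate this combination), taking trained (w, b) pairs as given:

```python
import numpy as np

def decision(w, b, x):
    """Bi-class SVM decision: sign of w.x + b, returned as +1 or -1."""
    return 1 if np.dot(w, x) + b >= 0 else -1

def one_vs_rest(classifiers, x):
    """Multi-class prediction from per-class hyperplanes: pick the class
    whose classifier gives the largest signed margin w.x + b.
    classifiers: dict mapping label -> (w, b)."""
    return max(classifiers,
               key=lambda c: np.dot(classifiers[c][0], x) + classifiers[c][1])
```

For face recognition with many identities, one such hyperplane is trained per person and the identity with the largest margin wins, which is the "multiple classifiers" arrangement the text refers to.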


A comparative study of numerous face recognition techniques has been carried out. Based on the observed merits and demerits, a hybrid system giving promising results can be designed. According to the literature survey, Viola-Jones face detection gives real-time performance and can track multiple faces in an image; skin colour modelling also works well for face detection. Principal Component Analysis and Independent Component Analysis work well for feature extraction: these techniques selectively keep only the features carrying the most information, thereby reducing computation time. The gray level co-occurrence matrix (GLCM) also gives good feature vectors, but if a large number of gray levels is used, the GLCM method becomes slower and computation time increases. For classification, Euclidean distance gives good results, and an SVM can also be used. By combining these techniques, a face recognition application will be developed.


1. Rabia Jafri and Hamid R. Arabnia, “A Survey of Face Recognition Techniques”, Journal of Information Processing Systems, Vol.5, No.2, June2009

2. Anil K. Jain, Brendan Klare and Unsang Park, “Face Recognition: Some Challenges in Forensics”

3. Dong-Liang Lee, Jen-Sheng Liang, “A Face Detection and Recognition System based on Rectangular Feature Orientation”, 2010 International Conference on System Science and Engineering

4. M.Gopi Krishna, A. Srinivasulu, “Face Detection System On AdaBoost Algorithm Using Haar Classifiers”, International Journal of Modern Engineering Research (IJMER) Vol. 2, Issue. 5, Sep.-Oct. 2012 pp-3556-3560

5. “Explaining AdaBoost” PDF, Robert E. Schapire

6. K M Poornima, Ajit Danti, S K Narasimhamurthy, “Wavelet Based Face Recognition using ROIs and k-NN”, International Journal of Innovations in Engineering and Technology (IJIET), Vol. 3 Issue 2 December 2013

7. “The Viola/Jones Face Detector” PDF (2001)

8. Paul Viola, Michael J. Jones, “Robust Real-Time Face Detection”, International Journal of Computer Vision 57(2), 137–154, 2004

9. PENG Zhao-yi, WEN Zhi-qiang, ZHOU Yu, “Application of Mean Shift Algorithm in Real-time Facial Expression Recognition”, 2009 IEEE

10. D. Sidibe, P. Montesinos, S. Janaqi, “A simple and efficient eye detection method in color images”, published in "International Conference Image and Vision Computing New Zealand 2006, Nouvelle-Zélande (2006)"

11. Gary Chern, Paul Gurney, and Jared Starman, “Face Detection”

12. Wu-Chih Hu, Ching-Yu Yang, Deng-Yuan Huang, and Chun-Hsiang Huang, “Feature-based Face Detection Against Skin-color Like Backgrounds with Varying Illumination”, Journal of Information Hiding and Multimedia Signal Processing, Volume 2, Number 2, April 2011

13. Bhumika G. Bhatt, Zankhana H. Shah, “Face Feature Extraction Techniques: A Survey”, National Conference on Recent Trends in Engineering & Technology, 13-14 May 2011

14. G. Prabhu Teja, S. Ravi, “Face Recognition using Subspaces Techniques”, 2012 IEEE, ICRTIT-2012

15. Shermina J., “Illumination invariant face recognition using discrete cosine transform and principal component analysis”, Proceedings of ICETECT 2011

16. Heng Fui Liau, Li-Minn Ang, Kah Phooi Seng, “A Multiview Face Recognition System Based on Eigenface Method”, 2008 IEEE

17. Manisha Satone, G.K.Kharate, “Selection of Eigenvectors for Face Recognition”, (IJACSA) International Journal of Advanced Computer Science and Applications, Vol. 4, No.3, 2013

18. Meftah Ur Rahman, “A comparative study on face recognition techniques and neural network”

19. Bruce A. Draper, Kyungim Baek, Marian Stewart Bartlett, J. Ross Beveridge, revised copy, “Recognizing Faces with PCA and ICA”

20. Issam Dagher and Rabih Nachar, “Face Recognition Using IPCA-ICA Algorithm”, IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, VOL. 28, NO. 6, JUNE 2006

21. Naresh Babu N T, Annis Fathima A, V. Vaidehi, “An Efficient Face Recognition System Using DWT-ICA features”, 2011 International Conference on Digital Image Computing: Techniques and Applications

22. Alaa Eleyan, Hasan Demirel, “Co-Occurrence based Statistical Approach for Face Recognition”, IEEE 2009

23. Kamarul Hawari Ghazali, Mohd Fais Mansor, Mohd. Marzuki Mustafa and Aini Hussain, “Feature Extraction Technique using Discrete Wavelet Transform for Image Classification”, The 5th Student Conference on Research and Development - SCOReD 2007, 11-12 December 2007, Malaysia

24. Ashish Sharma, Varsha Sharma, Sanjeev Sharma, “A Face Recognition Scheme Based On Principle Component Analysis and Wavelet Decomposition”, IOSR Journal of Computer Engineering (IOSR-JCE), e-ISSN: 2278-0661, p-ISSN: 2278-8727, Volume 16, Issue 2, Ver. VII (Mar-Apr. 2014), pp. 59-63

25. Jalil Mazloum, Ali Jalali and Javad Amiryan, “A Novel Bidirectional Neural Network for Face Recognition”, 2012 2nd International Conference on Computer and Knowledge Engineering (ICCKE), October 18-19, 2012

26. Wenhan Jiang, Xiaofei Zhou, Hongchuan Hou, Xinggang Lin, “A New Sampling-based SVM for Face Recognition”, 2009 IEEE

27. Yong-An Li, Yong-Jun Shen, Gui-Dong Zhang, Taohong Yuan, Xiu-Ji Xiao, Hua-Long Xu, “An Efficient 3D Face Recognition Method Using Geometric Features”, 2010 IEEE

28. Hua Gu Guangda, Su Cheng Du, “Feature Points Extraction from Faces”, Image and Vision Computing NZ, Palmerston North, November 2003