A face can be considered a multidimensional visual stimulus. Eunuchs are a special category of human being with respect to gender, and their faces differ in nature since they belong to neither the male nor the female gender. In this paper, a modified Local Binary Pattern (LBP) technique is used to extract features of eunuch faces, and a neural network based algorithm is used for face recognition. Recognition is performed by a Multilayer Feed Forward Neural Network with the back propagation learning rule. Real-life face images of eunuchs from North East India are used, and performance evaluation metrics such as acceptance ratio and execution time are calculated.
Keywords
Face recognition, modified Local Binary Pattern, Multilayer Feed Forward Neural Network, back propagation learning rule.
INTRODUCTION
A number of biometric approaches have been proposed for personal identification in the past. Among the vision based
ones, we can mention Face recognition, Fingerprint recognition, Iris Scanning and Retina Scanning [1 - 4]. Face
recognition is the most widely known among the vision-based approaches. The face is a physiological rather than a behavioral biometric; it is based on the physical properties of an individual. A face may change over time, yet its pattern remains unique and difficult to forge. The face is a primary focus of attention in social life and plays a major role in conveying identity and emotion. The human ability to recognize faces is remarkable, but it is very difficult to recognize all the faces of a particular group of people who are similar in appearance. Eunuchs are a special kind of human being addressed by various names: hijra,
kinnar, transsexuals, the third sex, or the other sex. Eunuchs live in their own communities - a separate world of their
own. Among these ostracized eunuchs, many are castrated, a few are genetically born hermaphrodites, that is, they have the genitals of both sexes, and a few are transvestites, that is, a female mind trapped in a male body or vice versa
[5]. Transgender communities have existed in most parts of the world with their own local identities, customs and
rituals. To date, face recognition has been performed on either male or female faces, but eunuch faces differ in dimension from the faces of other human beings. The face recognition problem is concerned with determining whether a particular face belongs to a person, i.e., deciding whether a record for that person already exists. Computer recognition of face images involves two important aspects: facial feature extraction and classification. Before recognition, it is necessary to derive from the original image a set of features that describe the face. Features may or may not relate to intuitive notions such as eyes, nose, lips and hair. If the features used for recognition are not adequate, even the best classifier will fail to achieve accurate recognition. Hence extensive knowledge is required to select adequate features that describe a face. Adequate facial features are desired to have the
following properties [6-7]: first, they should tolerate within-class variations while discriminating between different
classes well; second, they can be easily extracted from the raw images to allow fast processing; and finally, the features
should lie in a low-dimensional space in order to avoid computationally expensive classifiers. A large number of studies based on various algorithms have been reported on face recognition. M. Turk et al. proposed the Principal Component Analysis [8] approach for automated face recognition, which aims to capture the total variation in the set of training faces and to explain that variation by a few variables. Linear Discriminant Analysis (LDA) [9] and Independent Component Analysis (ICA) [10] have also been widely used for feature extraction and object recognition. Although these studies addressed the recognition of general human faces, no work on eunuch face recognition using a combination of various algorithms is found in the available published or online literature. A face recognition system generally includes a series of steps: (i) image acquisition, (ii) face pre-processing, including localization, segmentation and normalization, (iii) feature extraction, and (iv) matching and classification, as shown in Figure 1. Image acquisition is the first step of the eunuch face recognition system, in which a face image is captured; the second step is pre-processing, which includes localization, segmentation and normalization; the third step is feature extraction, which yields the feature vector; and finally a classifier is used for matching and classification to obtain the recognition rate. This paper is organized as
follows: the Local Binary Pattern (LBP) method and its variants are described in Section 2. The proposed modification of LBP and the feature extraction technique are discussed in Section 3. Section 4 deals with the application of the proposed algorithm to classify the extracted features. Experimental results and discussion are given in Section 5. The conclusion is given in Section 6.
LOCAL BINARY PATTERN METHOD: A BRIEF REVIEW
Local Binary Pattern (LBP) is an efficient method used for feature extraction and texture classification [11]. In this
section, we introduce the original LBP operator as well as several extensions and variants: multi-scale LBP, uniform LBP, Extended LBP and the Census Transform.
A. Original LBP
The original Local Binary Pattern (LBP) operator is a non-parametric 3x3 neighbourhood operator which summarizes
the local spatial structure of an image. It was first introduced by Ojala et al. [12] who showed the high discriminative
power of this operator for texture classification. Each pixel in the 3x3 neighbourhood is compared with the value of the centre pixel, the results are read as a binary number, and the corresponding decimal number is used for labelling. The derived binary numbers are called Local Binary Patterns or LBP codes.
The decimal form of the resulting 8-bit word (LBP code) can be expressed as follows:

$$\mathrm{LBP}(x_c, y_c) = \sum_{n=0}^{r-1} s(i_n - i_c)\, 2^n$$

where
n = index of the neighbour with respect to the centre pixel,
i_n = pixel intensity of the nth neighbouring pixel,
i_c = intensity of the centre pixel,
r = maximum number of neighbours surrounding the centre pixel.
Here r = 8, and the function s(x), which denotes the binary intensity of each cell, is defined as:

$$s(x) = \begin{cases} 1, & x \geq 0 \\ 0, & x < 0 \end{cases}$$
By the definition above, the LBP operator is invariant to monotonic gray-scale transformations which preserve the
pixel intensity order in local neighborhoods. The histogram of LBP labels calculated over a region can be exploited as a
texture descriptor. The limitation of the basic LBP operator is that its small 3x3 neighborhood cannot capture the
dominant features with large scale structures. As a result, to deal with the texture at different scales, the operator
requires extension to use neighborhoods of different sizes.
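As an illustration of the basic 3x3 operator defined above, the following Python sketch computes the LBP code image. It is not from the paper (which was implemented in MATLAB); the neighbour ordering and the skipping of border pixels are assumptions made for this example.

```python
# A minimal NumPy sketch of the basic 3x3 LBP operator described above.
import numpy as np

def lbp_3x3(image: np.ndarray) -> np.ndarray:
    """Return the LBP code image for a 2-D grayscale array (borders are skipped)."""
    img = image.astype(np.int32)
    h, w = img.shape
    codes = np.zeros((h, w), dtype=np.uint8)
    # Clockwise offsets of the 8 neighbours, starting at the top-left pixel.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            center = img[y, x]
            code = 0
            for n, (dy, dx) in enumerate(offsets):
                # s(i_n - i_c): 1 if the neighbour is >= the centre, else 0.
                if img[y + dy, x + dx] >= center:
                    code |= 1 << n
            codes[y, x] = code
    return codes

# The histogram of these codes over a region serves as the texture descriptor:
# hist, _ = np.histogram(lbp_3x3(gray_img), bins=256, range=(0, 256))
```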
B. Multiscale LBP
Multi-scale LBP [13] is an extension to the basic LBP, with respect to neighborhood of different sizes. In Multiscale-
LBP, a circle is made with radius R from the center pixel. P sampling points on the edge of this circle are taken and
compared with the value of the center pixel. Fig. 3 shows some examples of the Multiscale LBP operator, where the notation (P, R) denotes a neighborhood of P sampling points on a circle of radius R.
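A rough, unoptimized Python sketch of this circular sampling is given below; the bilinear interpolation of off-grid sampling points is a common choice rather than a detail taken from the paper, and scikit-image's skimage.feature.local_binary_pattern provides an equivalent optimized operator.

```python
# Sketch of the multiscale LBP(P, R) operator: P sampling points on a circle
# of radius R around each pixel, interpolated bilinearly when off-grid.
import numpy as np

def lbp_circular(image: np.ndarray, P: int = 8, R: float = 1.0) -> np.ndarray:
    img = image.astype(np.float64)
    h, w = img.shape
    m = int(np.ceil(R)) + 1                  # margin keeps samples inside the image
    codes = np.zeros((h, w), dtype=np.uint32)
    for p in range(P):
        # Offset of sampling point p on the circle around each pixel.
        dy = -R * np.sin(2.0 * np.pi * p / P)
        dx = R * np.cos(2.0 * np.pi * p / P)
        for y in range(m, h - m):
            for x in range(m, w - m):
                fy, fx = y + dy, x + dx
                y0, x0 = int(np.floor(fy)), int(np.floor(fx))
                ty, tx = fy - y0, fx - x0
                # Bilinear interpolation of the off-grid neighbour value.
                val = (img[y0, x0] * (1 - ty) * (1 - tx)
                       + img[y0, x0 + 1] * (1 - ty) * tx
                       + img[y0 + 1, x0] * ty * (1 - tx)
                       + img[y0 + 1, x0 + 1] * ty * tx)
                if val >= img[y, x]:
                    codes[y, x] |= 1 << p
    return codes

# Optimized equivalent:
# from skimage.feature import local_binary_pattern
# codes = local_binary_pattern(image, P=8, R=2.0)
```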
C. Uniform LBP
The LBP operator LBP(P, R) produces 2^P different output values, corresponding to the 2^P different binary patterns formed by the P pixels in the neighborhood. It has been shown that certain patterns contain more information than others [13]. It is possible to use only a subset of the 2^P binary patterns to describe the texture of the images. Ojala et al. named these patterns uniform patterns [13]. A local binary pattern is called uniform if it contains at most two bitwise transitions from 0 to 1 or vice versa when the corresponding bit string is considered circular. For instance, 11111111, 00000110 and 10000111 are uniform patterns.
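The uniformity test itself is easy to state in code; the small sketch below simply counts circular bit transitions (the function name is illustrative, not from the paper).

```python
# A P-bit LBP code is uniform if its circular bit string has at most two 0/1 transitions.
def is_uniform(code: int, P: int = 8) -> bool:
    bits = [(code >> i) & 1 for i in range(P)]
    transitions = sum(bits[i] != bits[(i + 1) % P] for i in range(P))
    return transitions <= 2

# Examples from the text: 11111111, 00000110 and 10000111 are all uniform.
assert is_uniform(0b11111111) and is_uniform(0b00000110) and is_uniform(0b10000111)
assert not is_uniform(0b01010101)  # many transitions -> non-uniform
```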
D. Extended LBP
Huang et al. [14] reported that LBP can only reflect the first-derivative information of images and cannot represent the velocity of local variations, so they proposed an extended LBP by applying the LBP operator to both the gradient magnitude image and the original image. For that purpose, they applied the kernels LBPu2(S1) and LBPu2(S2) to both the original image and the gradient image. At approximately the same time as the original LBP operator was introduced by Ojala [12], Zabih and Woodfill [26] proposed a very similar local structure feature. This feature, called the Census Transform, also maps the local neighborhood surrounding a pixel to a bit string. With respect to LBP, the Census Transform differs only in the order of the bit string.
FEATURE EXTRACTION USING MODIFIED LBP
Feature extraction reduces the input face data and transforms it into a feature vector. If the extracted features are carefully chosen, it is expected that the feature set will capture the relevant information from the input data, so that the desired task can be performed using this reduced representation instead of the full-size input face image. A face image can be seen as a composition of micro-patterns which are described by LBP. The histogram of LBP computed over the whole face image encodes only the occurrences of the micro-patterns, without any indication of their locations. To take the shape information of faces into account, Ahonen et al. [15] proposed dividing face images into m local regions, extracting an LBP histogram from each and concatenating them into a single, spatially enhanced feature histogram (Fig. 4). Eight main facial components, such as the eyebrows, eyes, pupils, nose and face boundary, have been selected as spatial templates (shown in Fig. 5) to preserve information about the shape of the facial components.
With only those spatial templates, all facial components can be described; for example, the nose can be described by a union of templates 0, 5 and 6. Moreover, spatial information and local texture information can be combined to improve the capacity to describe faces. Here, instead of comparing the central pixel P_C with each single neighborhood pixel as the original LBP operator does, we compare the central pixel P_C with each pair of neighborhood pixels (P_i1, P_i2) given by the spatial templates. The eight spatial templates form the 8 binary digits of the modified LBP number [15, 16], so the new LBP operator produces 256 different modified LBP values. Equation (1) gives the computation of the modified LBP number:

$$\mathrm{LBP}_{modified}(P_C) = \sum_{i=0}^{7} s(P_{i1}, P_{i2}, P_C)\, 2^i \qquad (1)$$

where s(P_i1, P_i2, P_C) equals 1 when the pair (P_i1, P_i2) satisfies the comparison with the central pixel P_C defined by the corresponding spatial template, and 0 otherwise [15, 16].
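The sketch below only illustrates how eight pairwise comparisons produce one 8-bit code per pixel; the neighbour-pair templates and the "mean-of-pair" test used here are placeholders, since the actual spatial templates (Fig. 5) and comparison rule are those of [15, 16].

```python
# Hedged sketch of a pairwise modified-LBP code: one bit per neighbour pair.
import numpy as np

# Placeholder pairs of neighbour offsets (dy, dx); the real templates differ.
PAIR_TEMPLATES = [((-1, -1), (-1, 0)), ((-1, 0), (-1, 1)),
                  ((-1, 1), (0, 1)), ((0, 1), (1, 1)),
                  ((1, 1), (1, 0)), ((1, 0), (1, -1)),
                  ((1, -1), (0, -1)), ((0, -1), (-1, -1))]

def modified_lbp_code(img: np.ndarray, y: int, x: int) -> int:
    """8-bit modified LBP code at (y, x); one bit per neighbour pair."""
    pc = float(img[y, x])
    code = 0
    for i, ((dy1, dx1), (dy2, dx2)) in enumerate(PAIR_TEMPLATES):
        pair_mean = (float(img[y + dy1, x + dx1]) + float(img[y + dy2, x + dx2])) / 2.0
        if pair_mean >= pc:          # assumed pairwise test, not the paper's exact rule
            code |= 1 << i
    return code
```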
The above-mentioned modified LBP histogram is used to represent a face. However, some parts of a human face can be occluded by sunglasses, body parts or other complex objects, and if we use only a single LBP histogram for the whole face candidate image, occlusion will seriously affect the matching algorithm. In general, a human face has two most important parts: the upper part from the nose up to the forehead, and the lower part from the nose down to the neck, which includes the top of the nose, mouth, lips, chin and neck. We therefore calculate an individual histogram for each part, and the two histograms are concatenated sequentially to create one mixed 256 x 2 histogram representing the face candidate image. In this way, we can effectively reduce the influence of occlusion. Fig. 7 shows a gray image sample, its modified LBP image and histogram.
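A minimal sketch of this two-part histogram representation is given below, assuming a modified-LBP code image has already been computed (for instance with one of the operators sketched earlier); the function name is illustrative.

```python
# Concatenate 256-bin histograms of the upper and lower face halves
# into one 256 x 2 = 512-dimensional feature vector.
import numpy as np

def two_part_histogram(code_image: np.ndarray) -> np.ndarray:
    h = code_image.shape[0]
    upper, lower = code_image[: h // 2], code_image[h // 2 :]
    hist_u, _ = np.histogram(upper, bins=256, range=(0, 256))
    hist_l, _ = np.histogram(lower, bins=256, range=(0, 256))
    return np.concatenate([hist_u, hist_l]).astype(np.float32)
```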
PROPOSED ALGORITHM
The main objective of the proposed algorithm is to combine the advantages of both statistical and neural network approaches in order to build a hybrid system. In this paper, a modified algorithm is proposed that uses LBP and histogram properties as the statistical approach for feature extraction and a Multilayer Feed Forward Network classifier as the neural network approach for classification [8, 17, 18]. This Multilayer Feed Forward Network is trained with the Back Propagation learning rule. Once the features are extracted using both LBP and histogram properties, a face image is transformed into a feature vector, which is fed to the Multilayer Feed Forward Network classifier for classification. A Multilayer Feed Forward Neural Network consists of neurons that are ordered into layers: the first layer is called the input layer, the last layer is called the output layer, and the layers between them are hidden layers. The mapping function Γ assigns to each neuron i a subset consisting of all ancestors of the given neuron. Each neuron in a particular layer is connected with all neurons in the next layer, and the connection between the ith and jth neurons is characterized by the weight coefficient w_ij. The value of an output neuron is considered as one pattern class. The Multilayer Feed Forward Neural Network operates in two modes, viz. training mode and test mode [27]. Training begins with arbitrary values of the weights and proceeds iteratively; in each iteration the network adjusts the weights in the direction that reduces the error by applying the Back Propagation learning rule [28]. In test mode, information flows in the forward direction through the network from inputs to outputs: the network produces an estimate of the output value(s) based on the input values and then finds the similarity between the test image and the trained images stored in the database, comparing the present output value with all the pattern classes formed during the training period. The proposed Multilayer Feed Forward Network is shown in Fig. 8.
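For concreteness, a small NumPy sketch of such a fully connected feed-forward pass is shown below; the layer sizes, tanh activations and random initialization are illustrative assumptions, not the paper's exact configuration.

```python
# Sketch of a multilayer feed-forward pass: every neuron in one layer is
# connected to every neuron in the next layer through weights w_ij.
import numpy as np

rng = np.random.default_rng(0)

def init_layers(sizes):
    """Random (weights, biases) for consecutive fully connected layers."""
    return [(rng.standard_normal((n_out, n_in)) * 0.1, np.zeros(n_out))
            for n_in, n_out in zip(sizes[:-1], sizes[1:])]

def forward(x, layers):
    """Propagate a feature vector x from the input layer to the output layer."""
    a = x
    for W, b in layers:
        a = np.tanh(W @ a + b)   # tanh ("tansig"-like) transfer function
    return a                     # output neuron values -> pattern class scores

layers = init_layers([512, 100, 50, 10, 1])   # e.g. a 512-d LBP feature vector
score = forward(np.zeros(512), layers)
```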
The algorithm for the proposed method is as follows:
Step 1: Upload the pre-processed input image.
Step 2: Extract face feature components such as the nose, eyes, etc. using the proposed modified LBP algorithm and its histogram properties to obtain the feature vector.
Step 3: The feature vector extracted using the modified LBP is then fed into the Multilayer Feed Forward Network classifier.
• Set the target value for accepting classifier performance: ≈ +1 for acceptance.
Step 4: The training and testing phase of the classifier:
• Select a suitable number of neurons in the input layer.
• Select the number of hidden layers and the number of neurons in each hidden layer.
• Select suitable transfer and training functions, such as “tansig” and “traingdm”, from one layer to another.
• Select the learning rate (lr) value, which is set under test conditions.
• Select the number of epochs.
• Apply the Back Propagation learning algorithm for each epoch.
Back Propagation algorithm for learning:
• Input: a set of training pairs {(x(k), d(k)) | k = 1, 2, …, p}, where
x(k) = feature vector
p = total number of training patterns
• Processing steps:
• Step 0: (Initialization) Choose η > 0 and E_max, set E = 0 and k = 1.
• Step 1: (Training loop) Apply the kth input pattern to the input layer.
• Step 2: (Forward propagation) Propagate the signal forward through the network.
• Step 3: (Output error measure) Compute the error value $E = \frac{1}{2}\sum_i \left(d_i^{(k)} - y_i^{(k)}\right)^2 + E$.
• Step 4: (Error propagation) Propagate the errors backward to update the weights.
• Step 5: (One-epoch looping) If k < p, then set k = k + 1 and go to Step 1.
• Step 6: (Total error checking) Check whether the current total error is acceptable: if E < E_max, terminate the training process and output the final weights; otherwise set E = 0, k = 1 and go to Step 1.
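The listed steps can be expressed compactly as the following Python sketch for a network with a single hidden layer; η, E_max, the layer sizes and the tanh activations are illustrative, and the paper's network uses more hidden layers.

```python
# Compact sketch of the back-propagation loop (Steps 0-6) for one hidden layer.
import numpy as np

def train_backprop(X, D, n_hidden=16, eta=0.05, E_max=1e-3, max_epochs=800):
    rng = np.random.default_rng(0)
    n_in, n_out = X.shape[1], D.shape[1]
    W1 = rng.standard_normal((n_hidden, n_in)) * 0.1    # Step 0: init weights
    W2 = rng.standard_normal((n_out, n_hidden)) * 0.1
    for epoch in range(max_epochs):
        E = 0.0                                          # Step 0: E = 0
        for x, d in zip(X, D):                           # Steps 1, 5: loop over k
            h = np.tanh(W1 @ x)                          # Step 2: forward pass
            y = np.tanh(W2 @ h)
            E += 0.5 * np.sum((d - y) ** 2)              # Step 3: error measure
            delta_o = (d - y) * (1.0 - y ** 2)           # Step 4: backward pass
            delta_h = (W2.T @ delta_o) * (1.0 - h ** 2)
            W2 += eta * np.outer(delta_o, h)
            W1 += eta * np.outer(delta_h, x)
        if E < E_max:                                    # Step 6: total error check
            break
    return W1, W2
```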
EXPERIMENTAL RESULT AND DISCUSSION
We implemented the proposed method and conducted experiments to evaluate its effectiveness. For these experiments we chose face images of eunuchs and pre-processed them before applying the proposed method. The proposed method is implemented in MATLAB version 10.0 on an Intel Core i3 PC with 4 GB of RAM. Changes in the parameters of the multilayer feed forward network classifier have a strong effect on the classification results. After trying different numbers of neurons in each layer, we found that the classifier works best with three hidden layers of 100, 50 and 10 neurons, respectively. From a number of trials made while keeping the number of hidden layers at three, it was found experimentally that the recognition rate is optimal when the number of neurons in the input layer is 10. The number of hidden layers was kept at three to reduce the hardware requirements of the system and to keep the propagation time of the input variables to a minimum; the saving in computational time (which includes both training and testing time) also leads to a saving in cost. Different learning parameters such as lr, mc and goal were tuned in order to obtain the best learning rate with 800 epochs. Fig. 9 shows that, for the same set of test images, the recognition rate increased as more and more input images per class were used during the training period of the network. Our best recognition rate is found to be 97.35%, where Recognition rate = (number of recognised faces / number of test faces) × 100. The results obtained by the proposed algorithm have been compared with those obtained using the original LBP operator along with a feed forward neural network, both applied to the eunuch face images obtained from real-life residents of India. It was found that the proposed algorithm outperforms the previous algorithm.
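As a rough analogue of this configuration (the paper used MATLAB's neural network toolbox), the sketch below sets up a classifier with three hidden layers of 100, 50 and 10 neurons, a tanh transfer function, gradient descent with momentum and 800 epochs, and computes the recognition rate as defined above; the data arrays (X_train, y_train, X_test, y_test) are assumed to be prepared separately from the LBP feature vectors.

```python
# Approximate scikit-learn counterpart of the reported classifier setup.
from sklearn.neural_network import MLPClassifier

clf = MLPClassifier(hidden_layer_sizes=(100, 50, 10), activation='tanh',
                    solver='sgd', momentum=0.9, learning_rate_init=0.01,
                    max_iter=800)
# clf.fit(X_train, y_train)
# predicted = clf.predict(X_test)
# recognition_rate = 100.0 * (predicted == y_test).mean()  # (recognised / test) * 100
```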
CONCLUSION
Face recognition has many important applications, and a large number of approaches have been proposed during the past several years. Several techniques have been proposed for the identification of human beings in general, but so far no work has been done on eunuch faces to categorise their gender. In this paper, we proposed a hybrid method that merges two different methods, one for feature extraction and another for classification. This hybrid method produces good results, with a high recognition rate, for eunuch identification and verification. We presented a modified LBP technique for extracting important features of the eunuch face and assembling them into a feature vector, which reduces the computational cost of classification. Finally, we used a multilayer feed forward network for classification, with optimised performance achieved through successive trials.
ACKNOWLEDGMENT
The author would like to thank Professor P. K. Bose, Director, National Institute of Technology, Agartala, and Dr. R. P. Sharma for their constant inspiration and support.
References
- W. Zhao, R. Chellappa, A. Rosenfeld, and P. J. Phillips, "Face recognition: a literature survey," ACM Computing Surveys, pp. 399-458, 2003.
- Jyoti Rajharia, P. C. Gupta, and Arvind Sharma, "Fingerprint based identification system: a survey," International Journal of Computer Technology and Electronics Engineering (ISSN 2249-6343), vol. 1, issue 3.
- J. Daugman, "How iris recognition works," IEEE Trans. Circuits and Systems for Video Technology, vol. 14, no. 1, pp. 21-30, Jan. 2004.
- Dibyendu Ghoshal, Parthasarathi De, and Bapi Saha, "Identification of tigers for census by the method of tiger iris pattern matching and recognition," International Journal of Computer Applications (0975-8887), vol. 49, no. 2, July 2012.
- en.wikipedia.org/wiki/Eunuch
- A. Hadid, M. Pietikäinen, and T. Ahonen, "A discriminative feature space for detecting and recognizing faces," in Proc. Int. Conf. Computer Vision and Pattern Recognition (CVPR), 2004, pp. 797-804.
- Borut Batagelj and Franc Solina, "Face recognition in different subspaces - a comparative study," 6th International Workshop on Pattern Recognition in Information Systems (PRIS 2006), in conjunction with ICEIS 2006, May 23-24, 2006, Paphos, Cyprus.
- M. Turk and A. Pentland, "Eigenfaces for recognition," J. Cognitive Neuroscience, vol. 13, no. 1, pp. 71-86, 1991.
- Fatma Zohra Chelali, "Linear discriminant analysis for face recognition," IEEE Conference on Multimedia Computing and Systems (ICMCS), 2009.
- M. S. Bartlett, J. R. Movellan, and T. J. Sejnowski, "Face recognition by independent component analysis," IEEE Trans. Neural Networks, vol. 13, no. 6, pp. 1450-1464, 2002.
- S. Marcel, Y. Rodriguez, and G. Heusch, "On the recent use of local binary patterns for face authentication," Int. J. Image and Video Processing, Special Issue on Facial Image Processing, 2007.
- T. Ojala, M. Pietikäinen, and D. Harwood, "A comparative study of texture measures with classification based on distributions," Pattern Recognition, vol. 29, pp. 51-59, 1996.
- T. Ojala, M. Pietikäinen, and T. Mäenpää, "Multiresolution gray-scale and rotation invariant texture classification with local binary patterns," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 24, no. 7, pp. 971-987, 2002.
- X. Huang, S. Z. Li, and Y. Wang, "Shape localization based on statistical method using extended local binary pattern," in Proc. Int. Conf. Image and Graphics (ICIG), 2004, pp. 184-187.
- T. Ahonen, A. Hadid, and M. Pietikäinen, "Face recognition with local binary patterns," European Conference on Computer Vision, Prague, pp. 469-481, 2004.
- Phuong-Trinh Pham-Ngoc and Kang-Hyun Jo, "Color-based face detection using combination of modified local binary patterns and embedded hidden Markov models," SICE-ICASE International Joint Conference, 2006.
- T. Yahagi and H. Takano, "Face recognition using neural networks with multiple combinations of categories," International Journal of Electronics Information and Communication Engineering, vol. J77-D-II, no. 11, pp. 2151-2159, 1994.
- Daniel Svozil, Vladimir Kvasnicka, and Jiri Pospichal, "Introduction to multi-layer feed-forward neural networks," Chemometrics and Intelligent Laboratory Systems, vol. 39, pp. 43-62, 1997.
- C. M. Bishop, Neural Networks for Pattern Recognition, London, U.K.: Oxford University Press, 1995.
- M. J. Lyons, J. Budynek, and S. Akamatsu, "Automatic classification of single facial images," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 21, no. 12, pp. 1357-1362, 1999.
- Weifeng Liu, Yanjiang Wang, and Shujuan Li, "LBP feature extraction for facial expression recognition," Journal of Information & Computational Science, vol. 8, no. 3, pp. 412-421, 2011.
- Howard Demuth and Mark Beale, Neural Network Toolbox User's Guide for Use with MATLAB, The MathWorks, Inc.
- Caifeng Shan, Shaogang Gong, and Peter W. McOwan, "Facial expression recognition based on local binary patterns: a comprehensive study," Image and Vision Computing, vol. 27, pp. 803-816, 2009.
- K. Rama Linga Reddy, G. R. Babu, Lal Kishore, and M. Maanasa, "Multiscale feature and single neural network based face recognition," Journal of Theoretical and Applied Information Technology, 2008.
- http://www.face-rec.org
- R. Zabih and J. Woodfill, "A non-parametric approach to visual correspondence," IEEE Transactions on Pattern Analysis and Machine Intelligence, 1996.
- Roya Asadi, Norwati Mustapha, Nasir Sulaiman, and Nematollaah Shiri, "New supervised multi layer feed forward neural network model to accelerate classification with high accuracy," European Journal of Scientific Research, vol. 33, no. 1, pp. 163-178, 2009.
- Rama Kishore and Taranjit Kaur, "Backpropagation algorithm: an artificial neural network approach for pattern recognition," International Journal of Scientific & Engineering Research, vol. 3, issue 6, June 2012.