ISSN: 2229-371X


MORPHOLOGICAL METHOD, PCA AND LDA WITH NEURAL NETWORKS FOR FACE RECOGNITION

Sushma Jaiswal1*, Dr. Sarita Singh Bhadauria2, Dr. Rakesh Singh Jadon3
  1. Lecturer, S.O.S. in Computer Science, Pt. Ravishankar Shukla University, Raipur (C.G.)
  2. Professor & Head, Department of Electronics Engineering, Madhav Institute of Technology & Science, Gwalior (M.P.)
  3. Professor & Head, Department of Computer Application, M.I.T.S., Gwalior, M.P. - 474005, India.
Corresponding Author: Sushma Jaiswal, E-mail: jaiswal1302@gmail.com


Abstract

The problem with conventional forms of machine identification and verification, such as passwords and identity cards, is that they are not very secure: they can be given away, taken away, or lost, and motivated people have found ways to forge or circumvent these credentials. The ultimate form of electronic verification of a person's identity is biometrics: using a physical attribute of the person to make a positive identification. We therefore need a system that, in some sense like the human eye, can identify a person. To cater to this need, and drawing on observations from human psychophysics, face recognition emerged as a field. Different approaches have been tried by several groups working worldwide to solve this problem, and many commercial products using one technique or another have found their way into the market. But so far no system or technique has shown satisfactory results in all circumstances, so a comparison of these techniques is needed. In this paper we present a comparative study of the performance of three face recognition algorithms: PCA, LDA and a morphological method.

Keywords

Morphological Method, PCA, LDA, Neural Networks, Face Recognition, LVQ.

INTRODUCTION AND MOTIVATION

Security is one of the main concerns in today's world. Whether the field is telecommunication, information, network or data security, airport or home security, or national or human security, various techniques exist, and biometrics is one of them. A biometric is "an automated method of recognizing an individual based on their unique physical or behavioral characteristics." Face recognition is a task humans perform remarkably easily and successfully. This apparent simplicity has been shown to be dangerously misleading: automatic face recognition is a problem that is still far from solved. In spite of more than 20 years of extensive research and a large number of papers published in journals and conferences dedicated to this area, we still cannot claim that artificial systems match human performance. Automatic face recognition is intricate primarily because of difficult imaging conditions (lighting and viewpoint changes induced by body movement) and because of various other effects such as aging, facial expressions and occlusions. Researchers from computer vision, image analysis and processing, pattern recognition, machine learning and other areas are working jointly, motivated largely by a number of possible practical applications.

A general statement of the face recognition problem (in computer vision) can be formulated as follows: given still or video images of a scene, identify or verify one or more persons in the scene using a stored database of faces. Face recognition is one of the most active and widely used techniques because of its reliability and accuracy in recognizing and verifying a person's identity, and the need for it is growing as people become more aware of security and privacy. For researchers, face recognition remains a demanding task, largely because the human face is highly variable: a person's face can change considerably over short periods of time (from one day to another) and over long periods of time (a difference of months or years). One problem of face recognition is that different faces can look very similar, so a discrimination task is needed; on the other hand, images of the same face may differ in many characteristics. These changes arise from several parameters: illumination, variability in facial expressions, the presence of accessories (glasses, beards, etc.), pose, age and, finally, background.

Face recognition techniques can be divided into two broad groups: applications that require face identification and those that need face verification. The difference is that identification matches a face against all others in a database, whereas verification checks a claimed identity against a given sample of that face. Principal component analysis (PCA) and linear discriminant analysis (LDA) are widely used in face recognition systems. These methods can efficiently reduce the dimensions of biometric data and improve robustness to disturbing factors such as expression variation, glasses and mimicry. Because of these advantages, they are popular with commercial face recognition system providers. However, the strong dimension reduction of PCA-LDA algorithms limits their integration with template protection techniques.
In our work, we selected three techniques for comparative study and evaluation, using a common face database that contains 360 images in total. The three techniques are Principal Component Analysis (eigenfaces), Regularized Linear Discriminant Analysis (R-LDA), and a Morphological Method, each coupled with artificial neural networks for training and classification of the extracted features. These techniques show apparently promising performance and are representative of new trends in face recognition; all three were reported to have recognition rates of more than 80-90% on databases of moderate size (e.g., 16-50 persons). We believe this work is a useful complement to [1]-[3], where the surveyed techniques were not evaluated on a common database of relatively large size. Through a more focused and detailed comparative study of three important techniques, our goal is to gain more insight into their underlying principles, interrelations, advantages, limitations, and design tradeoffs and, more generally, into what the critical issues really are for an effective recognition algorithm. Basically, we have used two different approaches for feature extraction:

MORPHOLOGICAL APPROACH

In the morphological approach, feature extraction methods can be divided into three types: (1) generic methods based on the analysis of edges, lines, and curves; (2) feature-template-based methods based on the detection of facial features such as the eyes; and (3) structural matching methods that take geometrical constraints on the features into consideration. The technique we propose here is independent of the aging factor, illumination and the presence of accessories (glasses, beards, etc.). In this technique we consider fiducial points, namely the distances between the eyes and between the eyes and the mouth; the distances between these facial points never change. After extracting the fiducial points (a sketch of the distance computation is given below), we apply a neural network (NN) to the system for training and classification.
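As a small illustration, the following Python sketch computes a distance-based signature from hypothetical fiducial-point coordinates. The point locations, names and the choice of ratios here are our own assumptions for illustration, not part of the original system.

    import numpy as np

    # Hypothetical fiducial points (pixel coordinates) found by the
    # feature-extraction stage: eye centres and mouth centre.
    left_eye = np.array([60.0, 85.0])
    right_eye = np.array([120.0, 85.0])
    mouth = np.array([90.0, 150.0])

    def dist(a, b):
        # Euclidean distance between two 2-D points.
        return float(np.linalg.norm(a - b))

    eye_to_eye = dist(left_eye, right_eye)
    eye_to_mouth = dist((left_eye + right_eye) / 2.0, mouth)
    # A ratio of distances is invariant to uniform scaling of the image.
    signature = np.array([eye_to_eye, eye_to_mouth, eye_to_mouth / eye_to_eye])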

NEURAL NETWORK FOR TRAINING

The Back Propagation algorithm looks for the minimum of the error function in weight space using the method of gradient descent. Properly trained back propagation networks tend to give reasonable answers when presented with inputs that they have never seen: typically, a new input leads to an output similar to the correct output for training inputs that resemble the new input. This generalization property makes it possible to train a network on a representative set of input pairs and get good results without training the network on all possible input/output pairs. The RBF network performs a similar function mapping to the BP network, but its structure and operation are quite different: an RBF network is a local network trained in a supervised manner, in contrast to the BP network, which is a global network. A BP network performs a global mapping, meaning all inputs cause an output, while an RBF network performs a local mapping, meaning only inputs near a receptive field produce activation. The LVQ network has two layers: a layer of input neurons and a layer of output neurons. For each training example the winning output neuron is found, and the weights of the connections to this neuron are then adapted, i.e. moved closer to the data point if the neuron classifies it correctly, or moved away if it classifies it incorrectly.

APPEARANCE BASED APPROACH

These approaches utilize pixel intensities or intensity-derived features. However, they may not perform well in many real-world situations where the test face appearance differs significantly from the training face data because of variations in pose, lighting and expression. Usually a face image of size p × q pixels is represented by a vector in a (p·q)-dimensional space. In practice, however, these (p·q)-dimensional spaces are too large to allow robust and fast object recognition, and a common way to resolve this problem is to use dimension reduction techniques. Two techniques for this purpose are Principal Component Analysis (PCA) and Regularized Linear Discriminant Analysis (R-LDA). In these approaches, the two-dimensional face image is treated as a vector by concatenating each row or column of the image. Each classifier has its own representation of basis vectors of the high-dimensional face vector space; the dimension is reduced by projecting the face vector onto the basis vectors, and the projection is used as the feature representation of each face image. A small sketch of this vectorization and projection follows.
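The sketch below shows, in Python with NumPy, how a p × q image becomes an n-dimensional column vector and how a basis matrix W with m columns reduces it to an m-dimensional feature vector. The random data and the basis matrix are placeholders for illustration only.

    import numpy as np

    p, q = 200, 180              # image height and width in pixels
    n = p * q                    # dimension of the raw face vector

    # One p x q image becomes an n-dimensional column vector by
    # concatenating its rows.
    img = np.random.rand(p, q)   # stand-in for a real grey-level image
    x = img.reshape(n, 1)

    # Dimension reduction: project onto the m columns of a basis matrix W,
    # however W was obtained (PCA, R-LDA, ...). W here is a placeholder.
    m = 20
    W = np.random.rand(n, m)
    y = W.T @ x                  # m-dimensional feature vector, shape (20, 1)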

BIOMETRIC MODALITIES

As already mentioned, biometric modalities are measurable attributes of humans. That individuals can be identified with a high rate of correctness has been demonstrated [64, 65], although in many cases there are problems differentiating identical twins. The most widely used biometric characteristics are fingerprint, face image, voice, hand geometry, iris image and signature [63]. A general distinction is drawn between behavioral and physiological biometric characteristics. Physiological characteristics are usually determined by the genes (like the face or vein pattern); in some cases (fingerprint, iris) they are also influenced by extra-genetic or environmental factors and can in theory be used to distinguish between identical twins (with identical genomes). Behavioral modalities are affected by the human genome as well, but their expression can be changed deliberately. The process of capturing behavioral modalities is a measurement of an activity; therefore these modalities are also called active modalities.

BEHAVIOURAL BIOMETRIC CHARACTERISTICS

In this subsection, common biometrics based on behavior are discussed. These characteristics are not defined by the genes alone but are developed over time.
Voice
Voice recognition is also referred to as speaker recognition. As mentioned in the introduction, identifying humans by speech is done on a day-to-day basis: after only a few words we are aware of our dialogue partner even if we cannot see the person. This process can be automated; properties of a specific voice are, for example, the fundamental frequency (which is determined by the length of the vocal tract), the inflection, nasal tone and speech rhythm. Moreover, it is remarkable that recognition is possible even over a phone line. Nevertheless, a fundamental problem of voice recognition is obvious: it is easy for attackers to record a target's voice and replay the sample to authenticate. Specifying the words for the recognition process can solve this problem: instead of a fixed word, the system can choose a new set of words that has to be spoken. Other possibilities are text-independent systems or systems combining speaker recognition with secret knowledge.
Handwritten signature
With a long tradition in the signing of contracts, signature recognition is a specialized form of writer recognition and can be done off-line or on-line. Off-line verification is done with images from paper documents, whereas verification with electronic pens or pads is called on-line. The on-line version has the advantage of liveness checking: in addition to the contours of the signature, the process of signing itself can be recorded over time and used to add information, and the pressure applied to the pen can be measured as well. These systems nevertheless remain vulnerable to professional imitation.
Keystroke
Keystroke dynamics are cheap to capture: no special hardware is needed, and systems can be realised with ordinary generic PCs (including the keyboard) and software. Not only the password itself is measured; the keystroke dynamics are added. Dwell time (the period a key is pressed) and flight time (the period between two key hits) can be used to identify a trained person. This biometric modality offers an adequate way of further securing login procedures, but is not designed for high-security environments.
Gait
Not much effort is spent on this modality yet, but it can be used to identify humans at high distances with standard video capturing devices. Variations of surface, footgear or carried objects may corrupt the results.

PHYSIOLOGICAL BIOMETRIC CHARACTERISTICS

Physiological characteristics are closely coupled to the genes and are in most cases stable over a long period of time, because there is no way of influencing them directly (except for recognition based on the face). Injuries and diseases may change these characteristics, and sometimes there is a unique relation between such conditions and the shape of the biometric. Access to medical information is delicate and private, so it should be protected; to increase confidentiality, these characteristics should not be stored and used unaltered for the purpose of authentication.
Face
Humans are specialists in recognizing faces. The automation of this intentional process is not easy, but research on 2D face recognition is sophisticated. This modality has a very high user acceptance because of its frequent employment, and after 30 years of research, 2D results are quite good [64]. Nevertheless, there are some issues that are hard to cope with when taking into account only "flat" images of faces. With 3D depth information, lighting conditions and pose variations can be handled more exactly. This type of face recognition is an evolving domain that is not yet well investigated. Another advantage of 3D models is that they are harder to copy than 2D images: simply holding a picture in front of a capture device could fool several 2D face recognition systems. Recent systems include liveness detection mechanisms to prevent this kind of attack; possible resolutions are two-camera systems, which are harder to fool.
Iris
Part of the human eye, situated between the pupil and the sclera. The rich texture of this modality provides very good performance results. Even differentiating identical twins is possible, because the modality is a phenotypic feature like fingerprints. User acceptance is problematic though.
Fingerprint
A historically evolved feature that uses the minutiae of the ridges of the finger skin to recognize humans. It is widely used in forensics as well as in everyday applications because of the uniqueness of the skin surface and cheap sensing devices. The samples can be acquired with different techniques (optical, capacitance, thermal or ultrasound sensors exist). When touching these sensors, fingerprint images are distorted because of the elasticity of the skin; feature extraction should be resistant to this effect.
DNA
Feature extraction is very expensive and takes a lot of time (up to several days), but DNA is referred to as the ultimate biometric characteristic. Deoxyribonucleic acid is available in every cell of each organism; a drawback is the equality of identical twins. Although 99.5 percent of the human genome overlaps between individuals, there is still enough information for exact identification. Alleles, alternate forms of the DNA, can be used for feature extraction. DNA can be misused to derive other information (e.g. medical conditions, race or paternity can be extracted) and is therefore absolutely critical with respect to privacy.
Retina
Blood vessel patterns in the back of the inner eye are taken as reference. This feature is very stable and does not alter. Sensors are expensive and must use visible light, which may annoy users. Retinal images are used in the medical domain to diagnose diseases and are therefore known to be one of the few biometric modalities that carry sensitive information.
Ear Shape
This uncommon modality can also be used for recognition. Employing thermograms instead of normal pictures improves system performance because hairstyle has no effect on them. Ear shape models are often combined with face recognition to improve overall performance.
Human bodies provide many more attributes to be captured and taken for comparison, to name a few: odour, sweat pores, vein patterns, lip motion and skin reflectance. Using multi-modal biometrics can improve a system's performance.

LITERATURE SURVEY

S. Jaiswal et al. [68] gave a comprehensive survey of image-based human and machine recognition of faces from 1987 to 2010. Machine recognition of faces has several applications, and as one of the most successful applications of image analysis and understanding, face recognition has recently received significant attention, especially during the past several years. In addition, relevant topics such as system evaluation and the issues of illumination and pose variation are covered, and numerous methods related to image-based 3D face recognition are discussed.
S. Jaiswal et al. [69] described an efficient method and algorithm for making individual faces for animation from various possible inputs. The proposed algorithm reconstructs a 3D facial model for animation from two projected pictures taken from front and side views, or from range data obtained from any available resource. It is based on extracting features of a face automatically and modifying a generic model with the detected feature points using conic sections and pixelization; fine modifications follow if range data is available. The reconstructed 3D face can be animated immediately with given parameters. Several faces obtained by applying one methodology to different input data to get a final animatable face are illustrated.
In the study proposed by S. Jaiswal et al. [70], a 2D photograph is divided into two parts: a front view (x, y) and a side view (y, z). A necessary condition of this method is that the positions or coordinates of both images be equal. Combining both images according to the coordinates yields a 3D model (x, y, z), but this model is not accurate in size or shape; in other words, we get a 3D animatable face, which is refined through pixelization and a smoothing process. Smoothing is performed to get a more realistic 3D face model of the person.
A formal method of classifying faces was first proposed in [4]. The author proposed collecting facial profiles as curves, finding their norm, and then classifying other profiles by their deviations from the norm. This classification is multi-modal, i.e. it results in a vector of independent measures that can be compared with other vectors in a database. Progress has advanced to the point that face recognition systems are being demonstrated in real-world settings [5]. The rapid development of face recognition is due to a combination of factors: active development of algorithms, the availability of large databases of facial images, and a method for evaluating the performance of face recognition algorithms. In the literature, the face recognition problem is formulated as: given static (still) or video images of a scene, identify or verify one or more persons in the scene by comparing them with faces stored in a database. In general, biometric devices can be explained with a three-step procedure.
A) A sensor takes an observation. The type of sensor and its observation depend on the type of biometric devices used. This observation gives us a “Biometric Signature” of the individual.
B) A computer algorithm “normalizes” the biometric signature so that it is in the same format (size, resolution, view, etc.) as the signatures on the system’s database. The normalization of the biometric signature gives us a “Normalized Signature” of the individual.
C) A matcher compares the normalized signature with the set (or sub-set) of normalized signatures on the system's database and provides a "similarity score" that compares the individual's normalized signature with each signature in the database set (or sub-set). What is then done with the similarity scores depends on the biometric system's application. Face recognition starts with the detection of face patterns in sometimes cluttered scenes, proceeds by normalizing the face images to account for geometrical and illumination changes (possibly using information about the location and appearance of facial landmarks), identifies the faces using appropriate classification algorithms, and post-processes the results using model-based schemes and logistic feedback [6].
All face recognition algorithms consist of two major parts:
a. Face detection and normalization, and
b. Face identification.
Algorithms that consist of both parts are referred to as fully automatic algorithms and those that consist of only the second part are called partially automatic algorithms. Partially automatic algorithms are given a facial image and the coordinates of the center of the eyes. Fully automatic algorithms are only given facial images. Another way to categorize face recognition techniques is to consider whether they are based on models or exemplars. Models are used in [7] to compute the Quotient Image, and in [8] to derive their Active Appearance Model. These models capture class information (the class face), and provide strong constraints when dealing with appearance variation. At the other extreme, exemplars may also be used for recognition.
The ARENA method in [9] simply stores all training images and matches each one against the task image. As far as we can tell, current methods that employ models do not use exemplars, and vice versa, even though these two approaches are by no means mutually exclusive. Recently, [10] proposed a way of combining models and exemplars for face recognition, in which models are used to synthesize additional training images that can then be used as exemplars in the learning stage of a face recognition system. Focusing on the aspect of pose invariance, face recognition approaches may be divided into two categories: (i) global approaches and (ii) component-based approaches.
In the global approach, a single feature vector that represents the whole face image is used as input to a classifier. Several classifiers have been proposed in the literature, e.g. minimum distance classification in the eigenspace [11, 12], Fisher's discriminant analysis [13], and neural networks [14]. Global techniques work well for classifying frontal views of faces. However, they are not robust against pose changes, since global features are highly sensitive to translation and rotation of the face. To avoid this problem, an alignment stage can be added before classifying the face.
Aligning an input face image with a reference face image requires computing the correspondence between the two face images. The correspondence is usually determined for a small number of prominent points in the face, such as the centers of the eyes, the nostrils, or the corners of the mouth. Based on these correspondences, the input face image can be warped to a reference face image. In [15], face recognition was performed by independently matching templates of three facial regions (eyes, nose and mouth). The configuration of the components during classification was unconstrained, since the system did not include a geometrical model of the face. A similar approach with an additional alignment stage was proposed in [16]. In [17], a geometrical model of a face was implemented by a 2D elastic graph, and recognition was based on wavelet coefficients computed on the nodes of the elastic graph. In [18], a window was shifted over the face image and the DCT coefficients computed within the window were fed into a 2D Hidden Markov Model. Face recognition research still faces challenges in some specific domains such as pose and illumination changes. Although numerous methods have been proposed to solve such problems and have demonstrated significant promise, the difficulties remain. For these reasons, the matching performance of current automatic face recognition is relatively poor compared to that achieved in fingerprint and iris matching, yet it may be the only available measuring tool for an application. Error rates of 2-25% are typical; face recognition is effective if combined with other biometric measurements. Current systems work very well whenever the test image to be recognized is captured under conditions similar to those of the training images. However, they are not robust enough if there is variation between test and training images [19].
Changes in incident illumination, head pose, facial expression, hairstyle (including facial hair), cosmetics (including eyewear) and age all confound the best systems today. We can make two important observations after surveying the research literature: (1) there does not appear to be any feature, set of features, or subspace that is simultaneously invariant to all the variations that a face image may exhibit; (2) given more training images, almost any technique will perform better. These two factors are the major reasons why face recognition is not widely used in real-world applications. The fact is that for many applications it is usual to require the ability to recognize faces under different variations, even when training images are severely limited.
Eigenfaces (PCA)
Hyun Hoi et al. describe eigenfaces as a set of standardized face components based on statistical analysis of many face images. Mathematically speaking, eigenfaces are a set of eigenvectors derived from the covariance matrix of a high-dimensional vector space that represents possible human faces. Any human face can be represented by a linear combination of eigenface images; for example, one person's face can be represented by some portion of one eigenface plus some portion of another, and so on. In Pentland's paper [66], motivated by principal component analysis (PCA), the authors propose this method, in which the principal components of a face are extracted, encoded, and compared with a database. PCA techniques are also known as Karhunen-Loeve methods; they choose a dimensionality-reducing linear projection that maximizes the scatter of all projected samples.
Calculating Eigenfaces. The eigenfaces approach is based on principal component analysis (PCA), finding the vectors that best represent the distribution of face images in image space. These vectors define a subspace of faces, the face space. An image is treated as a point (or vector) in a high-dimensional vector space: each vector describing an N-by-N image has length N² and is a linear combination of the original face images. For example, a typical image of size 256 by 256 gives a vector of dimension 65,536, i.e. a point in a 65,536-dimensional space, and an ensemble of images maps to a collection of points in this huge space. The average of the training set of face images is calculated first, and the difference of each face image from the average is formed. From these difference vectors one builds the covariance matrix, from which the eigenvectors are taken. The eigenvalues associated with the eigenvectors make it easy to rank the eigenvectors according to their usefulness in characterizing the differences among the images.
Using Eigenfaces to Classify a Face Image and Detect Faces. A new face image is projected onto the face space simply by taking the difference between the image and the average mentioned above and multiplying it by each eigenvector. The result of this operation is the weighted contribution of each eigenface in representing the input face image, treating the eigenfaces as a basis set for face images. The Euclidean distance from each face class then determines the class that best matches the input image (sketched below).
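A minimal Python sketch of this projection, nearest-class matching, and the "distance from face space" idea discussed next. The array shapes and function names are our own, and `class_weights` is assumed to hold one stored weight vector per known person.

    import numpy as np

    def project(img_vec, mean_face, eigenfaces):
        # Weights of a mean-adjusted image in face space.
        # img_vec: (n,), mean_face: (n,), eigenfaces: (n, m) columns.
        return eigenfaces.T @ (img_vec - mean_face)

    def classify(img_vec, mean_face, eigenfaces, class_weights):
        # Pick the face class whose stored weights are nearest (Euclidean)
        # to the projected input. class_weights: (C, m).
        w = project(img_vec, mean_face, eigenfaces)
        d = np.linalg.norm(class_weights - w, axis=1)
        return int(np.argmin(d))

    def faceness(img_vec, mean_face, eigenfaces):
        # Distance from face space: small values indicate a face.
        phi = img_vec - mean_face
        recon = eigenfaces @ project(img_vec, mean_face, eigenfaces)
        return float(np.linalg.norm(phi - recon))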
Through eigenfaces, the system can detect the presence of a face as well. A face image projected onto the face space does not change radically, while any non-face image will look quite different; therefore, it is easy to distinguish between face images and non-face images. Using this basic idea, an image is projected onto the face space and the Euclidean distance is calculated between the mean-adjusted input image and its projection onto the face space. This distance measures "faceness", so computing it over an image gives a "face map" in which low values indicate the presence of a face.
Evaluation and Issues. For the experiments, sixteen subjects were digitized at three head orientations, three head sizes or scales, and three lighting conditions. A six-level Gaussian pyramid was constructed for each image, reducing it from 512x512 pixels to 16x16 pixels. Various groups of sixteen images were selected and 2500 images were classified. The system achieved 96% correct classification over lighting variation, 85% over head orientation, and 64% over head size.
Since the eigenfaces method applies PCA directly, it does not destroy any image information by processing only certain points, and generally provides more accurate recognition results. But these techniques are sensitive to variation in position and scale; serious issues are the effects of background, head size and orientation. Eigenface analysis, described above, does not distinguish the face from the background. In many cases a significant part of the image consists of background, which leads to incorrect classification, because the background is not relevant information for detection. Another issue is due to differing head sizes in input images, because neighborhood pixel correlation is lost under head-size change: performance over size change decreased to 64% correct, which suggests the need for a multiscale approach in which each face class includes images of the individual at several different sizes. Note that variation in lighting can also still be a problem if the light source is positioned in certain specific directions; this problem is addressed in Belhumeur's paper [67].
This section gives an overview of the major human face recognition techniques that apply mostly to frontal faces; advantages and disadvantages of each method are also given. The methods considered are eigenfaces (eigenfeatures) and geometrical feature matching, analyzed in terms of the facial representations they use. Eigenface is one of the most thoroughly investigated approaches to face recognition. It is also known as Karhunen-Loève expansion, eigenpicture, eigenvector, and principal component analysis. References [20, 21] used principal component analysis to efficiently represent pictures of faces. They argued that any face image can be approximately reconstructed from a standard face picture (eigenpicture) and a small collection of weights for each face. The weights describing each face are obtained by projecting the face image onto the eigenpicture. Reference [28] used eigenfaces, motivated by the technique of Kirby and Sirovich, for face detection and identification. In mathematical terms, eigenfaces are the principal components of the distribution of faces, or the eigenvectors of the covariance matrix of the set of face images. The eigenvectors are ordered according to the amount of variation among the faces that each represents.
Each face can be represented exactly by a linear combination of the eigenfaces. It can also be approximated using only the “best” eigenvectors with the largest eigenvalues. The best M eigenfaces construct an M dimensional space, i.e., the “face space”. The authors reported 96 percent, 85 percent, and 64 percent correct classifications averaged over lighting, orientation, and size variations, respectively. Their database contained 2,500 images of 16 individuals. As the images include a large quantity of background area, the above results are influenced by background. The authors explained the robust performance of the system under different lighting conditions by significant correlation between images with changes in illumination. However, [22] showed that the correlation between images of the whole faces is not efficient for satisfactory recognition performance. Illumination normalization [21] is usually necessary for the eigenfaces approach.
Reference [23] proposed a new method to compute the covariance matrix using three images, each taken under different lighting conditions, to account for arbitrary illumination effects when the object is Lambertian. Reference [24] extended their early work on eigenfaces to eigenfeatures corresponding to face components such as the eyes, nose, and mouth, using a modular eigenspace composed of these eigenfeatures (i.e., eigeneyes, eigennose, and eigenmouth). This method is less sensitive to appearance changes than the standard eigenface method; the system achieved a recognition rate of 95 percent on the FERET database of 7,562 images of approximately 3,000 individuals. In summary, eigenface appears to be a fast, simple, and practical method. However, in general it does not provide invariance over changes in scale and lighting conditions. Recently, experiments in [25] with ear and face recognition using the standard principal component analysis approach showed that recognition performance is essentially identical using ear images or face images, and that combining the two for multimodal recognition gives a statistically significant performance improvement. For example, the rank-one recognition rate for the day-variation experiment using the 197-image training sets is 90.9% for the multimodal biometric versus 71.6% for the ear and 70.5% for the face. There is substantial related work in multimodal biometrics: for example, [26] used face and fingerprint in multimodal biometric identification, and [27] used face and voice. However, the use of face and ear in combination seems more relevant to surveillance applications.
Geometrical Feature Matching
Geometrical feature matching techniques are based on the computation of a set of geometrical features from the picture of a face. The fact that face recognition is possible even at a coarse resolution as low as 8x6 pixels [28], when the individual facial features are hardly revealed in detail, implies that the overall geometrical configuration of the face features is sufficient for recognition. The overall configuration can be described by a vector representing the position and size of the main facial features, such as the eyes and eyebrows, nose, mouth, and the shape of the face outline. One of the pioneering works on automated face recognition using geometrical features was done by [29] in 1973. Their system achieved a peak performance of 75% recognition rate on a database of 20 people using two images per person, one as the model and the other as the test image. References [30, 31] showed that a face recognition program provided with features extracted manually could perform recognition with apparently satisfactory results. Reference [49] automatically extracted a set of geometrical features from the picture of a face, such as nose width and length, mouth position, and chin shape; 35 features were extracted to form a 35-dimensional vector. Recognition was then performed with a Bayes classifier, and a recognition rate of 90% was reported on a database of 47 people.
Reference [32] introduced a mixture-distance technique that achieved a 95% recognition rate on a query database of 685 individuals, with each face represented by 30 manually extracted distances. Reference [33] used Gabor wavelet decomposition to detect feature points for each face image, which greatly reduced the storage requirement for the database. Typically, 35-45 feature points per face were generated. The matching process utilized the information in a topological graph representation of the feature points. After compensating for different centroid locations, two cost values, the topological cost and the similarity cost, were evaluated. The recognition accuracy in terms of the best match to the right person was 86%, and the correct person's face was among the top three candidate matches in 94% of cases. In summary, geometrical feature matching based on precisely measured distances between features may be most useful for finding possible matches in a large database such as a mug-shot album. However, it depends on the accuracy of the feature-location algorithms, and current automated face-feature location algorithms do not provide a high degree of accuracy and require considerable computational time.

PROPOSED MODEL

The proposed model of our work implements the appearance-based techniques (PCA and R-LDA) and a feature-based technique of face recognition for feature extraction and dimension reduction. For training and classification, the artificial neural networks Back Propagation, Radial Basis Function and Learning Vector Quantization are used. An overall performance comparison of the feature extraction algorithms and training algorithms is given at the end. The overall model of our work is shown in Fig. 2.
[Fig. 2: Overall model of the proposed face recognition system]

IMAGE PREPROCESSING

The face recognition task is performed on the Grimace face database, which contains 360 colored face images of 18 individuals, forming 18 classes with 20 images per subject. The database images vary in expression and position, and each image is 200 by 180 pixels. Half of the images (180 images, i.e. 10 from each subject) are selected as the training data set, and the remaining half (180 images) are used for testing. Images are converted to gray scale and processed with histogram equalization (a sketch of this step follows).
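A minimal preprocessing sketch in Python using OpenCV; the file name is a hypothetical stand-in for an image from the database.

    import cv2

    img = cv2.imread("subject01_pose05.jpg")       # BGR colour image
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)   # grey-scale conversion
    eq = cv2.equalizeHist(gray)                    # histogram equalization
    cv2.imwrite("subject01_pose05_eq.png", eq)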
PCA Preprocessing
PCA can be used to approximate the original data with lower-dimensional feature vectors. The basic approach is to compute the eigenvectors of the covariance matrix of the original data and approximate the data by a linear combination of the leading eigenvectors. Using the PCA procedure, a test image can be identified by first projecting the image onto the eigenface space to obtain the corresponding set of weights, and then comparing these with the sets of weights of the faces in the training set. The problem of low-dimensional feature representation can be stated as follows:
Let X = [x_1, x_2, \ldots, x_N] represent the n × N data matrix, where each x_i is a face vector of dimension n, concatenated from a p × q face image. Here n represents the total number of pixels (p·q) in the face image and N is the number of face images in the training set. PCA can be considered a linear transformation (1) from the original image vector to a projection feature vector, i.e.
Y = W^T X    (1)
where Y is the m × N feature-vector matrix, m is the dimension of the feature vector, and W is an n × m transformation matrix whose columns are the eigenvectors corresponding to the m largest eigenvalues, computed according to formula (2):
S e_i = \lambda_i e_i    (2)
where e_i and \lambda_i are the eigenvectors and eigenvalues of S, respectively. Here the total scatter matrix S and the mean image \mu of all samples are defined as
S = \sum_{i=1}^{N} (x_i - \mu)(x_i - \mu)^T, \qquad \mu = \frac{1}{N} \sum_{i=1}^{N} x_i.
Here {w_i | i = 1, 2, \ldots, m} is the set of n-dimensional eigenvectors of S corresponding to the m largest eigenvalues. In other words, the input vector (face) in an n-dimensional space is reduced to a feature vector in an m-dimensional subspace, and the dimension m of the reduced feature vector is much less than the dimension n of the input face vector.
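A compact Python sketch of this computation, assuming the columns of X are the training face vectors. It uses the standard eigenface trick of decomposing the small N × N matrix A^T A instead of the n × n scatter matrix, which yields the same leading eigenvectors.

    import numpy as np

    def pca_basis(X, m):
        # Mean face and the m leading eigenvectors of the total scatter
        # matrix of X (columns of X are the n-dimensional face vectors).
        mu = X.mean(axis=1, keepdims=True)      # mean image
        A = X - mu                              # centred data, n x N
        # Eigen-decompose the small N x N matrix A^T A instead of the
        # huge n x n scatter matrix S = A A^T (eigenface trick).
        vals, vecs = np.linalg.eigh(A.T @ A)
        order = np.argsort(vals)[::-1][:m]      # m largest eigenvalues
        W = A @ vecs[:, order]                  # back to n-dimensional space
        W /= np.linalg.norm(W, axis=0)          # unit-length columns
        return mu, W

    # Y = W.T @ (X - mu) then gives the m x N feature-vector matrix of Eq. (1).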
R-LDA Preprocessing
The R-LDA method is based on a novel regularized Fisher's discriminant criterion, which is particularly robust against the small-sample-size (SSS) problem compared to the traditional criterion used in LDA. The purpose of regularization is to reduce the high variance of the eigenvalue estimates of the within-class scatter matrix at the expense of potentially increased bias. The trade-off between variance and bias, depending on the severity of the SSS problem, is controlled by the strength of regularization. Given a training set Z containing C classes, with each class consisting of a number of localized face images z_ij, a total of N face images are available in the set. For computational convenience, each image is represented as a column vector of length J (= I_w × I_h) by lexicographic ordering of the pixel elements, i.e. z_ij ∈ R^J, where (I_w × I_h) is the image size and R^J denotes the J-dimensional real space. Let S_b and S_w be the between-class and within-class scatter matrices of the training set, respectively.
The regularized Fisher's criterion, which is utilized in this work instead of the conventional one, is
\psi = \arg\max_{\psi} \frac{|\psi^T S_b \psi|}{\eta (\psi^T S_b \psi) + (\psi^T S_w \psi)}
where 0 ≤ η ≤ 1 is a regularization parameter. The modified Fisher's criterion is a function of the parameter η, which controls the strength of the regularization. Within the variation range of η, two extremes should be noted. In one extreme, where η = 0, the modified Fisher's criterion reduces to the conventional one with no regularization. In contrast, rather strong regularization is introduced in the other extreme, where η = 1.
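As an illustration, the value of this criterion for a single candidate direction can be evaluated as below; this is a sketch of the formula only, not of the full R-LDA eigen-decomposition.

    import numpy as np

    def regularized_fisher(psi, Sb, Sw, eta):
        # Value of the regularized Fisher criterion for one direction psi;
        # eta = 0 recovers the conventional criterion, eta = 1 gives the
        # strongest regularization.
        between = float(psi.T @ Sb @ psi)
        within = float(psi.T @ Sw @ psi)
        return between / (eta * between + within)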
Preprocessing Output-
After preprocessing the images with PCA or R-LDA, feature vectors of reduced dimension are produced: PCA produces feature vectors of dimension 20 and R-LDA of dimension 14. As we have 180 samples for training, the input to the neural network is a feature-vector matrix of size 20 by 180 or 14 by 180, depending on whether PCA or R-LDA is used.
Morphological Approach-
In this section we explain the morphological technique for finding facial features in a still colored image. This methodology of face recognition involves four phases: preprocessing; segmentation of faces, which includes face detection in scenes; feature extraction from the face regions; and finally recognition of the face.
Preprocessing-
In this section we discuss the techniques used before finding the facial features in the image. Preprocessing is also known as normalization. The intensity of light in the image is not uniform, so our first step is to enhance the image so that it is uniformly illuminated. We take the input image as a color image.
Segmentation-
Segmentation is one of the very first steps in automatic face recognition systems; its goal is to make the image more analyzable. In simple words, object detection is segmentation. Up to the mid-1990s, the main focus of segmentation was single-face segmentation from a simple or complex background; significant advances have been made in recent years in achieving automatic face detection under various conditions. There is a difference in contrast between the object to be segmented and the background. By calculating changes in contrast within the image we can compute the gradient of the image. After computing the gradient image, the Sobel edge operator is used together with a threshold value, which in turn gives a binary gradient image. The binary gradient mask still shows gaps in the lines of high contrast in the image; by dilating the binary gradient image with linear structuring elements, these linear gaps can be removed. Then region filling gives a binary image with filled holes; extraction of 8-connected pixel components to suppress light structures connected to the image border, filtering, thinning and pruning [10] complete the segmentation (a sketch of this pipeline follows). The segmented image is then superimposed on the initial gray image.
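A possible rendering of this pipeline in Python with scikit-image and SciPy. The thresholding choice, the structuring-element sizes and the minimum object size are our assumptions, since the text does not specify them.

    import numpy as np
    from scipy import ndimage as ndi
    from skimage import color, filters, morphology, segmentation

    def segment_face(rgb):
        # Rough binary face mask following the steps described above;
        # a sketch only, with guessed parameters.
        gray = color.rgb2gray(rgb)
        grad = filters.sobel(gray)                        # gradient image
        mask = grad > filters.threshold_otsu(grad)        # binary gradient image
        # Close gaps in the contrast lines with linear structuring elements.
        mask = morphology.binary_dilation(mask, np.ones((3, 1), bool))
        mask = morphology.binary_dilation(mask, np.ones((1, 3), bool))
        mask = ndi.binary_fill_holes(mask)                # region filling
        mask = segmentation.clear_border(mask)            # drop border-connected blobs
        mask = morphology.remove_small_objects(mask, 64)  # filtering
        return morphology.thin(mask)                      # thinning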

Feature Extraction-

The extracted features, such as the eyebrows, eyes, nose and mouth, are now in enhanced form. The feature extraction algorithm includes: a) selection of the more accurate features, and b) determination of the Normal Center of Gravity (NCG). The segmented image is processed with the proposed algorithm for finding the more accurate features; the algorithm removes small objects and yields a morphologically opened binary image. Once the features are identified, the algorithm determines the Normal Center of Gravity (NCG), or intensity-weighted centroid, of each extracted feature. The Euclidean distance [11] between different facial points is then calculated. In 2-D, the Euclidean distance between points (x_1, y_1) and (x_2, y_2) is
d = \sqrt{(x_1 - x_2)^2 + (y_1 - y_2)^2}.
This is the default method for calculating the Euclidean distance. We consider the Euclidean distances between the different facial features: the distance between the eyes is calculated first, and then the distances between all the facial points are calculated accordingly (see the sketch below).
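A sketch of the centroid and distance computation using scikit-image region properties; the function name and the pairwise-distance layout are our own choices.

    import numpy as np
    from skimage import measure

    def feature_distances(feature_mask, gray):
        # Intensity-weighted centroids (NCGs) of the segmented features and
        # the Euclidean distances between every pair of them.
        labels = measure.label(feature_mask)
        props = measure.regionprops(labels, intensity_image=gray)
        centroids = np.array([p.weighted_centroid for p in props])
        d = np.linalg.norm(centroids[:, None, :] - centroids[None, :, :],
                           axis=-1)
        return centroids, d   # d[i, j] = distance between features i and j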

TRAINING AND CLASSIFICATION

The input matrix to the neural network is of size 20 by 180 or 14 by 180, while the target matrix size is determined by the number of classes. The target matrix is of size 18 by 180: if an input feature vector (column-wise) belongs to class 2, then the corresponding output vector has a 1 in the 2nd row and 0 in all other rows. The value 1 in any target vector denotes that an image belongs to the class given by the respective row of the target vector (a sketch of this construction follows). To classify input feature vectors into target vectors, we used Back Propagation (BP), Radial Basis Function (RBF) and Learning Vector Quantization (LVQ) networks. We configured and tested each neural network with various configurations, varying the following components: the number of inputs to the neural network, the number of hidden layers, the number of nodes in the hidden layers, and the learning rate. In the case of RBF, the SPREAD parameter is also varied, under the condition that SPREAD is large enough that the active input regions of the radial neurons overlap and several radial neurons always have fairly large outputs at any given moment; however, SPREAD should not be so large that each neuron effectively responds over the same large area of the input space.
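The one-hot target matrix described above can be built in a few lines of Python; the 10-samples-per-subject layout matches the training split described earlier.

    import numpy as np

    n_classes, n_samples = 18, 180
    # labels[j] in {0, ..., 17} gives the class of training column j;
    # here 10 consecutive samples per subject, as in the Grimace split.
    labels = np.repeat(np.arange(n_classes), 10)

    T = np.zeros((n_classes, n_samples))
    T[labels, np.arange(n_samples)] = 1.0  # 1 in the row of the true class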
Artificial Neural Network
An artificial neural network (ANN) is a mathematical model or computational model based on biological neural networks. It consists of an interconnected group of artificial neurons and processes information using a connectionist approach to computation. In most cases an ANN is an adaptive system that changes its structure based on external or internal information that flows through the network during the learning phase. In more practical terms neural networks are non-linear statistical data modeling tools. They can be used to model complex relationships between inputs and outputs or to find patterns in data.
Perceptron-
Consider a neuron with a single R-element input vector. The individual element inputs p_1, p_2, ..., p_R are multiplied by weights w_{1,1}, w_{1,2}, ..., w_{1,R}, and the weighted values are fed to the summing junction; their sum is simply Wp, the dot product of the (single-row) matrix W and the vector p. The neuron has a bias b, which is summed with the weighted inputs to form the net input n. This sum n is the argument of the transfer function f.
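In code, the net input and output of such a neuron are one line each; the sigmoid below is used only as an example transfer function f.

    import numpy as np

    R = 4                          # number of input elements
    p = np.random.rand(R)          # input vector
    W = np.random.rand(1, R)       # single-row weight matrix
    b = 0.5                        # bias

    n = W @ p + b                  # net input: dot product plus bias
    a = 1.0 / (1.0 + np.exp(-n))   # output after a sigmoid transfer function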
Back Propagation-Multi-Layer Neural Networks as Classifier-
The Back Propagation algorithm is a supervised learning method that looks for the minimum of the error function in weight space using the method of gradient descent; continuity and differentiability of the error function are therefore mandatory. The combination of weights that minimizes the error function is the solution of the learning problem. The calculated error is back-propagated from one layer to the previous one and is used to adjust the weights between connecting layers. Training stops when the error becomes acceptable or after a predetermined number of iterations. After training, the modified interconnection weights form a sort of internal representation that enables the ANN to generate desired outputs when given the training inputs, or even new inputs that are similar to the training inputs. Back propagation usually allows quick convergence to satisfactory local minima of the error in the kinds of networks to which it is suited. A multilayer perceptron with one input layer, one or more hidden layers and one output layer is required for a back propagation network. Networks trained using back propagation can have more than two hidden layers, which can make learning complex relationships easier for the network, and other architectures add more connections, which might help networks learn. The employed neural network is a feedforward multilayer neural network with one hidden layer. The weighting factors of the input-to-hidden neurons can be computed by (7)
w_{ij}(k+1) = w_{ij}(k) - \eta \frac{\partial E(k)}{\partial w_{ij}(k)}    (7)
where k is the iteration number, i and j are the indices of the input and hidden neurons, respectively, and \eta is the step size; the required quantities can be calculated from the following series of equations (8)-(11). The error function is given by
E = \frac{1}{2} \sum_{l=1}^{p} (t_l - o_l)^2    (8)
where p is the number of output neurons, l is the index of a neuron, and t_l and o_l are the target and output values, respectively. The activation function, net function and output function are given by equations (9)-(11):
f(x) = \frac{1}{1 + e^{-x}}    (9)
net_j = \sum_{i=1}^{n} w_{ij} x_i    (10)
o_l = f\Big( \sum_{j} v_{jl}\, f(net_j) \Big)    (11)
where n is the number of input neurons and m is the number of output neurons. Let us define
\delta_l = (t_l - o_l)\, f'(net_l)    (12)
\delta_j = f'(net_j) \sum_{l} \delta_l\, v_{jl}    (13)
Then we obtain the weight update equation (7) for the input-to-hidden layer by computing Eq. (12) and Eq. (13) together with Eqs. (8) to (11). Next, v_{ij}, the hidden-to-output neurons' weight update, can be derived in the same way. Back Propagation networks often have one or more hidden layers of sigmoid neurons followed by an output layer of linear neurons. Multiple layers of neurons with nonlinear transfer functions allow the network to learn nonlinear and linear relationships between input and output vectors, and the linear output layer lets the network produce values outside the range -1 to +1. A sketch of one such update step follows.
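A minimal Python sketch of one gradient-descent step for a network with one sigmoid hidden layer and linear outputs, matching the architecture described above; variable names and shapes are our own choices.

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def bp_step(x, t, W, V, eta):
        # One update for a 1-hidden-layer network with sigmoid hidden
        # units and linear outputs (sketch of Eqs. (7)-(13)).
        # x: (n,) input, t: (p,) target,
        # W: (m, n) input-to-hidden, V: (p, m) hidden-to-output weights.
        h = sigmoid(W @ x)                 # hidden activations
        o = V @ h                          # linear output layer
        e = t - o                          # output error
        delta_o = e                        # linear units: f'(net) = 1
        delta_h = (V.T @ delta_o) * h * (1.0 - h)  # back-propagated error
        V += eta * np.outer(delta_o, h)    # hidden-to-output update
        W += eta * np.outer(delta_h, x)    # input-to-hidden update, Eq. (7)
        return W, V, 0.5 * float(e @ e)    # updated weights and error E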
Radial Basis Function as Classifier –
The Radial Basis Function network performs a similar function mapping to the multi-layer neural network, but its structure and operation are much different. A Radial Basis Function network is a local network trained in a supervised manner, and it performs a local mapping, meaning only inputs near a receptive field produce an activation. The input layer of this network is a set of n units that accept the elements of an n-dimensional input feature vector. The n elements of the input vector x are fed to the l hidden functions; the output of each hidden function, multiplied by a weighting factor w(i, j), is input to the output layer of the network, y(x). For each RBF unit k, k = 1, 2, 3, ..., l, the center is selected as the mean value of the sample patterns belonging to class k, i.e.
\mu_k = \frac{1}{N_k} \sum_{i=1}^{N_k} x_i^{(k)}, \qquad k = 1, 2, \ldots, l
where x_i^{(k)} is the feature vector of the i-th image in class k, and N_k is the total number of training images in class k. Since the RBF neural network is a class of neural network in which the activation of the hidden units is determined by the distance between the input vector and a prototype vector, the activation function of the Radial Basis Function units (hidden-layer units) is typically chosen as a Gaussian function with mean vector \mu_i and variance \sigma_i, as follows
\phi_i(x) = \exp\!\left( -\frac{\| x - \mu_i \|^2}{2 \sigma_i^2} \right)
Note that x is an n-dimensional input feature vector, \mu_i is an n-dimensional vector called the center of the i-th Radial Basis Function unit, \sigma_i is the width of the i-th Radial Basis Function unit, and l is the number of Radial Basis Function units. The response of the j-th output unit for input x is given as:
y_j(x) = \sum_{i=1}^{l} w(i, j)\, \phi_i(x)
where w(i, j) is the connection weight of the i-th RBF unit to the j-th output node.
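A sketch of this forward pass in Python; the centers here would be the class means of the preceding equation, and the names and shapes are our own.

    import numpy as np

    def rbf_forward(x, centers, widths, Wout):
        # Output of an RBF network for one input vector x.
        # centers: (l, n) one Gaussian centre per hidden unit,
        # widths: (l,) width sigma_i of each unit,
        # Wout: (c, l) hidden-to-output weights w(i, j).
        d2 = np.sum((centers - x) ** 2, axis=1)   # squared distances
        phi = np.exp(-d2 / (2.0 * widths ** 2))   # Gaussian activations
        return Wout @ phi                         # response of each output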
Learning Vector Quantization as Classifier-
The Learning Vector Quantization (LVQ) neural network combines competitive learning with supervised learning, and it can realize nonlinear classification effectively. There are several variations of the basic LVQ algorithm; the most common are LVQ1, LVQ2 and LVQ3. The basic LVQ neural network classifier (LVQ1), which is adopted in our work, divides the input space into disjoint regions, with a prototype vector representing each region. To classify an input vector, it is compared with all prototypes; the Euclidean distance metric is used to select the prototype closest to the input vector, and the input vector is assigned to the same class as that nearest prototype. The LVQ classifier consists of an input layer, a hidden unsupervised competitive layer, which classifies input vectors into subclasses, and a supervised linear output layer, which combines the subclasses into the target classes. In the hidden layer, only the winning neuron has an output of one; the other neurons have outputs of zero. The weight vectors of the hidden-layer neurons are the prototypes. The number of hidden neurons is defined before training; it depends on the complexity of the input-output relationship and significantly affects the results of classification, so we select it carefully on the basis of extensive simulation experiments. The learning phase starts by initializing the weight vectors of the neurons in the hidden layer. The input vectors are presented to the network in turn; for each input vector x_j, the weight vector w_c of the winning neuron c is adjusted. The winning neuron is chosen according to:
c = \arg\min_i \| x - w_i \|
and its weight vector is updated as
w_c(n+1) = w_c(n) + \alpha(n) [x(n) - w_c(n)] if x is classified correctly,
w_c(n+1) = w_c(n) - \alpha(n) [x(n) - w_c(n)] otherwise,
where 0 < \alpha(n) < 1 is the learning rate. The training algorithm is stopped after reaching a pre-specified error limit. Because the neural network combines competitive learning with supervised learning, its learning speed is faster than that of the BP network. A sketch of one LVQ1 update follows.
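A minimal LVQ1 update step in Python; the function name and array layout are our own illustration of the rule above.

    import numpy as np

    def lvq1_step(x, label, prototypes, proto_labels, alpha):
        # Move the winning prototype towards the input if its class
        # matches the input's label, away from it otherwise.
        c = np.argmin(np.linalg.norm(prototypes - x, axis=1))  # winner
        sign = 1.0 if proto_labels[c] == label else -1.0
        prototypes[c] += sign * alpha * (x - prototypes[c])
        return prototypes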

EXPERIMENTS

Each neural network had a different configuration and also took a different amount of time for training on the input feature vectors. The RBF neural network was the fastest, while LVQ took much more time than the other neural networks. RBF creates radial-basis-layer neurons one at a time when training starts; in each iteration, the input vector that lowers the network error most is used to create a new neuron, and this procedure is repeated until the error goal is met or the maximum number of neurons is reached. In our case RBF creates 135 neurons for the PCA input vectors and 169 neurons for the R-LDA input vectors. The following are the optimized neural network configurations and the training graphs, including those of RBF applied to the PCA- and R-LDA-preprocessed training sets, for the best output matching:
BP Configuration and Plots with All Three Feature Extraction Techniques
[Configuration tables and training plots for BP, RBF and LVQ with each of the three feature extraction techniques]
The configuration tables of BPA, RBF and LVQ give their optimum configurations for the given input training feature vectors and expected target output. Analysis of the plots shows that all three neural networks are sufficiently trained and ready to be used for the classification task.

RESULTS

After training BPA, RBF and LVQ successfully for the given feature-vector inputs and targeted outputs, the stored weights of the trained NNs are used for testing. The following are the execution times and recognition results of the different combinations of preprocessing and classification techniques. Table IV shows the execution time of all combinations of feature extraction techniques with neural networks, and Table V shows the recognition rate of all such combinations.
[Table IV: Execution times; Table V: Recognition rates of all combinations of feature extraction techniques with neural networks]

CONCLUSION AND FUTURE WORK

Recognition performance using R-LDA with BP is superior to that of the morphological feature method with BP (Table II). The recognition performance of the morphological feature method with BP is in turn superior to that of PCA with BP (Table II). The recognition performance of PCA with RBF is superior to that of PCA with BP and PCA with LVQ (Table IV); similarly, recognition performance using R-LDA with RBF is superior to that of R-LDA with BP and R-LDA with LVQ (Table V). When we compare PCA with R-LDA, the recognition performance of the latter image-space reduction algorithm is better with all described classifiers.
Of the two dimensionality reduction and feature extraction algorithms, R-LDA has been more efficient with all three classifiers. This may be because some important information is removed when the null space of the eigenvectors is discarded to reduce the subspace in PCA, while R-LDA is able to resolve this problem through the effect of decreasing the larger eigenvalues and increasing the smaller ones, thereby counteracting the bias; another effect of the regularization is to stabilize the smallest eigenvalues. Although the BP network was nearly as effective in classification thanks to its mature back propagation mechanism, the RBF network achieved greater accuracy than BP and LVQ, and it also took less training time than the other methods. Hence, using R-LDA, which gives more effective feature vectors, together with the RBF classifier, both face recognition performance and speed can be improved significantly. Once we were able to extract the optimized and reduced feature vectors for further processing, we chose neural networks to build a knowledge base of the individual features of the images. When we presented the testing samples to the trained neural networks, we found that the recognition rate of R-LDA-extracted feature vectors coupled with BP, RBF and LVQ is superior to all other described combinations; and if we compare on the basis of the time complexity of the whole process, here too the R-LDA method coupled with a supervised neural network takes the lead.
