ISSN: Online 2320-9801, Print 2320-9798
N. Priya, Assistant Professor, Department of Computer Science, Bharath University, Chennai, India
This work, entitled “Comparative Analysis of Advanced Face Recognition Techniques”, is based on fuzzy c-means clustering and associated sub-neural networks. The face is a complex, multidimensional visual pattern, and developing a computational model for face recognition is difficult. This paper presents a method for face recognition based on parallel neural networks. Neural networks (NNs) have been widely used in various fields; however, computing effectiveness decreases rapidly as the scale of the NN increases. A new method of face recognition based on fuzzy clustering and parallel NNs is therefore proposed. The face patterns are divided among several small-scale neural networks based on fuzzy clustering, and their outputs are combined to obtain the recognition result. The facial feature vectors were compared using the PCA and LDA methods. In particular, the proposed method achieved 98.75% recognition accuracy for 240 patterns of 20 registrants and a 99.58% rejection rate for 240 patterns of 20 nonregistrants. Experimental results show that the performance of the new face-recognition method is better than that of the LDA-based face-recognition system.
INTRODUCTION
This work, entitled “Comparative Analysis of Advanced Face Recognition Techniques”, is based on fuzzy c-means clustering and associated sub-neural networks. Face recognition plays an important role in many applications such as building/store access control, suspect identification, and surveillance. Over the past 30 years, many different face-recognition techniques have been proposed, motivated by the increasing number of real-world applications requiring the recognition of human faces. Several problems make automatic face recognition a very difficult task. The face image of a person input to a face-recognition system is usually acquired under conditions different from those of the face image of the same person in the database.
Therefore, it is important that the automatic face-recognition system be able to cope with numerous variations of images of the same face. The image variations are mostly due to changes in the following parameters: pose, illumination, expression, age, disguise, facial hair, glasses, and background.

In many pattern-recognition systems, the statistical approach is frequently used. Although this paradigm has been successfully applied to various problems in pattern classification, it is difficult to express structural information unless an appropriate choice of features is possible. Furthermore, this approach requires much heuristic information to design a classifier. Neural-network (NN)-based paradigms, as new means of implementing various classifiers based on statistical and structural approaches, have been proven to possess many advantages for classification because of their learning ability and good generalization. Generally speaking, multilayered networks (MLNs), usually coupled with the back-propagation (BP) algorithm, are most widely used for face recognition. In this paper, we propose a new method of face recognition based on fuzzy clustering and parallel NNs. One drawback of the BP algorithm is that, as the scale of the NN increases, the computing efficiency decreases rapidly for various reasons, such as the appearance of local minima. Therefore, we propose a method in which the individuals in the training set are divided into several small-scale parallel NNs, and their outputs are combined to obtain the recognition result.

The hard c-means (HCM) algorithm is the most well-known conventional (hard) clustering method. The HCM algorithm executes a sharp classification, in which each object is either assigned to a cluster or not. Because HCM restricts each point of a data set to exactly one cluster and the individuals belonging to different clusters do not overlap, some similar individuals cannot be assigned to the same cluster and, hence, are not learned or recognized in the same NN. In this paper, fuzzy c-means (FCM) is used instead. In contrast to HCM, the application of fuzzy sets in a classification function causes the class membership to become a relative one, and an object can belong to several clusters at the same time but to different degrees. FCM introduces the notion of uncertainty of belonging, described by a membership function, and it enables an individual to belong to several networks. Then, all similar patterns can be thoroughly learned and recognized in one NN.
PREPROCESSING |
A. Lighting Compensation |
We adjusted the locations of the lamps to change the lighting conditions. The total energy of an image is the sum of the squares of the intensity values. The average energy of all the face images in the database is calculated. Then, each face image is normalized to have energy equal to the average energy.
Energy = Σ (Intensity)²
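As an illustration only, this energy normalization might be implemented as in the following NumPy sketch; the function names are ours, not from the paper.

```python
import numpy as np

def image_energy(img):
    """Total energy of a grayscale image: sum of squared intensities."""
    return float(np.sum(img.astype(np.float64) ** 2))

def normalize_energy(images):
    """Scale each face image so that its energy equals the average
    energy of all images in the database, as described above."""
    energies = np.array([image_energy(im) for im in images])
    avg_energy = energies.mean()
    normalized = []
    for im, e in zip(images, energies):
        # Multiplying intensities by sqrt(avg/e) rescales the energy to avg.
        scale = np.sqrt(avg_energy / e) if e > 0 else 1.0
        normalized.append(im.astype(np.float64) * scale)
    return normalized
```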
B. Facial-Region Extraction |
The method of detecting and extracting the facial features in a grayscale image is divided into two stages. First, the possible human eye regions are detected by testing all the valley regions in an image. A pair of eye candidates is selected by means of the genetic algorithm to form a possible face candidate. In our method, a square block is used to represent the detected face region. Fig. 1 shows an example of a selected face region based on the location of an eye pair. The relationships between the eye pair and the face size are defined as follows: |
Then, the symmetry measure of the face is calculated. The nose centerline (the perpendicular bisector of the line linking the two eyes) in each facial image is computed. The difference between the left half and the right half of a face region about the nose centerline should be small because of the face's symmetry. If the value of the symmetry measure is less than a threshold value, the face candidate is selected for further verification. After measuring the symmetry of a face candidate, the existence of the different facial features is also verified. The positions of the facial features are verified by analyzing the projection of the face-candidate region; the facial-feature regions exhibit low values in the projection. A face region is divided into three parts, each of which contains the respective facial features. The projection is the average of the gray-level intensities along each row of pixels in a window. In order to reduce the effect of the background in a face region, only the white windows, as shown in Fig. 3, are considered in computing the projections. The top window should contain the eyebrows and the eyes, the middle window should contain the nose, and the bottom window should contain the mouth. When a face candidate satisfies the aforementioned constraints, it is extracted as a face region. The extracted face image is shown in Fig. 4.
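The symmetry and projection checks can be sketched as follows. This is a minimal illustration, not the authors' implementation; the symmetry threshold, the valley ratio, and the equal three-way split of the face region are assumptions.

```python
import numpy as np

def symmetry_measure(face, center_col):
    """Mean absolute difference between the left half and the mirrored
    right half about the nose centerline (column index center_col)."""
    width = min(center_col, face.shape[1] - center_col)
    left = face[:, center_col - width:center_col].astype(np.float64)
    right = face[:, center_col:center_col + width].astype(np.float64)
    return float(np.mean(np.abs(left - np.fliplr(right))))

def row_projection(window):
    """Average gray-level intensity along each row of a window; facial
    features (eyes, nose, mouth) appear as low values in this profile."""
    return window.astype(np.float64).mean(axis=1)

def has_feature_valley(window, ratio=0.8):
    """True if the row projection shows a pronounced valley (a dark feature row)."""
    proj = row_projection(window)
    return proj.min() < ratio * proj.mean()

def is_face_candidate(face, center_col, sym_threshold=15.0):
    """Accept a candidate that is symmetric enough and whose three
    windows (eyes/eyebrows, nose, mouth) each contain a feature valley."""
    if symmetry_measure(face, center_col) >= sym_threshold:
        return False
    h = face.shape[0]
    windows = [face[:h // 3], face[h // 3:2 * h // 3], face[2 * h // 3:]]
    return all(has_feature_valley(w) for w in windows)
```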
D. Linear Discriminant Analysis (LDA)
In the PCA-based recognition method, the feature weights obtained identify the variations among the face images, and these include variations within the faces of the same subject as well, under differing illumination and facial-feature conditions.
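For concreteness, the following is a minimal sketch of how such PCA feature weights might be computed from flattened grayscale training faces; the number of components and the function names are illustrative assumptions, not part of the original method description.

```python
import numpy as np

def pca_feature_weights(faces, num_components=20):
    """faces: (n_samples, n_pixels) matrix of flattened training faces.
    Returns the average face, the eigenface basis, and the feature
    weights of every training face in that basis."""
    avg_face = faces.mean(axis=0)
    centered = faces - avg_face
    # Principal directions of variation via SVD of the centered data.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:num_components]          # eigenfaces (rows)
    weights = centered @ basis.T         # feature weights per training face
    return avg_face, basis, weights
```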
However, when we consider the different images of the same subject, these images have a lot of information in common, and this common information is not utilized by the PCA technique, which de-correlates the face images.
Hence, a class-based approach has been proposed in which the faces of the same subject are grouped into separate classes, and the variations between images of different classes are discriminated using the eigenvectors while at the same time minimizing the covariance within the same class [6]. This method is called Linear Discriminant Analysis. Here, the low-energy discriminant features of the face are used to classify the images.
The LDA algorithm described below summarizes the work published in [16] and the discussions in [12] and [14]. The transformation matrix W for LDA is given by the eigenvectors corresponding to the largest eigenvalues of the scatter-matrix function S_W^-1 · S_B, where S_W is the within-class scatter matrix and S_B is the between-class scatter matrix. These scatter matrices are defined as

S_W = Σ_c Σ_{x ∈ class c} (x − μ_c)(x − μ_c)^T,   S_B = Σ_c N_c (μ_c − μ)(μ_c − μ)^T,

where μ_c is the mean of class c, N_c is the number of samples in class c, and μ is the overall mean.
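The following is a minimal NumPy sketch of these scatter matrices and of taking the leading eigenvectors of S_W^-1 · S_B. It operates on generic labeled feature vectors (for example, the PCA weights above) and is an illustration under those assumptions, not the authors' implementation.

```python
import numpy as np

def lda_basis(samples, labels, num_components):
    """samples: (n, d) feature vectors; labels: (n,) class ids.
    Returns W whose columns are eigenvectors of inv(S_W) @ S_B
    with the largest eigenvalues."""
    overall_mean = samples.mean(axis=0)
    d = samples.shape[1]
    S_W = np.zeros((d, d))
    S_B = np.zeros((d, d))
    for c in np.unique(labels):
        class_samples = samples[labels == c]
        class_mean = class_samples.mean(axis=0)
        centered = class_samples - class_mean
        S_W += centered.T @ centered                     # within-class scatter
        diff = (class_mean - overall_mean).reshape(-1, 1)
        S_B += len(class_samples) * (diff @ diff.T)      # between-class scatter
    # pinv guards against a singular within-class scatter matrix.
    eigvals, eigvecs = np.linalg.eig(np.linalg.pinv(S_W) @ S_B)
    order = np.argsort(-eigvals.real)
    return eigvecs[:, order[:num_components]].real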
Recognition of a new face is done similarly to the PCA method. The given face is normalized by subtracting the AvgFace, and then the weights are calculated by projecting the image onto the basis vectors. The Euclidean measure is then used as the similarity measure to determine the closest match between the test image and the faces in the trained database.
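A hedged sketch of this matching step is shown below, reusing the avg_face and basis names from the PCA sketch above; the trained weight matrix and id list are illustrative inputs.

```python
import numpy as np

def recognize_face(test_face, avg_face, basis, trained_weights, trained_ids):
    """Normalize by subtracting the average face, project onto the basis,
    and return the id of the closest trained face by Euclidean distance."""
    weights = (test_face - avg_face) @ basis.T
    distances = np.linalg.norm(trained_weights - weights, axis=1)
    best = int(np.argmin(distances))
    return trained_ids[best], float(distances[best])
```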
IMPLEMENTATION |
1. FUZZY C-MEANS CLUSTERING:
In fuzzy clustering, each point has a degree of belonging to the clusters, as in fuzzy logic, rather than belonging completely to just one cluster. Thus, points on the edge of a cluster may be in the cluster to a lesser degree than points in the center of the cluster. For each point x we have a coefficient giving its degree of belonging to the kth cluster, u_k(x). Usually, the sum of those coefficients for any given x is defined to be 1:

Σ_k u_k(x) = 1.
With fuzzy c-means, the centroid of a cluster is the mean of all points, weighted by their degree of belonging to the cluster:

center_k = Σ_x u_k(x)^m · x / Σ_x u_k(x)^m.
The degree of belonging is related to the inverse of the distance to the cluster center:

u_k(x) = 1 / d(center_k, x).
Then the coefficients are normalized and fuzzified with a real parameter m > 1 so that their sum is 1, giving

u_k(x) = 1 / Σ_j ( d(center_k, x) / d(center_j, x) )^(2/(m−1)).
For m equal to 2, this is equivalent to normalizing the coefficients linearly so that their sum is 1. When m is close to 1, the cluster center closest to the point is given much more weight than the others, and the algorithm becomes similar to k-means. The fuzzy c-means algorithm is very similar to the k-means algorithm:
Choose a number of clusters |
Assign randomly to each point coefficients for being in the clusters. |
Repeat until the algorithm has converged (that is, the change in the coefficients between two iterations is no more than ε, the given sensitivity threshold):
1) Compute the centroid for each cluster, using the formula above. |
2) For each point, compute its coefficients of belonging to the clusters, using the formula above.

The algorithm also minimizes intra-cluster variance, but it has the same problems as k-means: the minimum is a local minimum, and the results depend on the initial choice of weights. The expectation-maximization algorithm is a more statistically formalized method that includes some of these ideas, such as partial membership in classes. It has better convergence properties and is in general preferred to fuzzy c-means.
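The iteration just described can be sketched as follows; the parameter values (m = 2, the threshold ε, the iteration cap) are illustrative assumptions.

```python
import numpy as np

def fuzzy_c_means(points, n_clusters, m=2.0, eps=1e-4, max_iter=300, seed=0):
    """points: (n, d) array. Returns (centroids, memberships u of shape (n, c))."""
    rng = np.random.default_rng(seed)
    n = points.shape[0]
    # Random initial memberships, normalized so each row sums to 1.
    u = rng.random((n, n_clusters))
    u /= u.sum(axis=1, keepdims=True)
    for _ in range(max_iter):
        um = u ** m
        # Centroid of each cluster: mean of all points weighted by u^m.
        centroids = (um.T @ points) / um.sum(axis=0)[:, None]
        # Distance from every point to every centroid.
        dist = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        dist = np.fmax(dist, 1e-12)            # avoid division by zero
        # Membership from inverse-distance ratios raised to 2/(m-1), normalized.
        inv = dist ** (-2.0 / (m - 1.0))
        new_u = inv / inv.sum(axis=1, keepdims=True)
        if np.abs(new_u - u).max() < eps:      # converged: change below threshold
            u = new_u
            break
        u = new_u
    return centroids, u
```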
2. PARALLEL NNs:
The parallel NNs are composed of three-layer BPNNs. A connected NN with 32 input neurons and six output neurons has been simulated (six individuals are permitted to belong to each subnet). The number of hidden units was selected by sixfold cross-validation over the range from 6 to 300 units, with the algorithm adding three nodes to the growing network at a time. The number of hidden units is selected based on the maximum recognition rate.
i. Learning Algorithm: A standard pattern (average pattern) is obtained from the 12 patterns per registrant. Based on the FCM algorithm, the 20 standard patterns are divided into several clusters. Similar patterns in one cluster are entered into one subnet. Then, the 12 patterns of a registrant are entered into the input layer of the NN to which the registrant belongs. In each subnet, the weights are adapted according to the negative gradient of the squared Euclidean distance between the desired and obtained outputs.
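A minimal sketch of one such three-layer subnet, trained by gradient descent on the squared Euclidean distance between desired and obtained outputs, is given below. The input and output sizes (32 and 6) follow the text; the sigmoid activation, hidden-layer size, and learning rate are assumptions not specified there.

```python
import numpy as np

class SubNet:
    """Three-layer back-propagation subnet: 32 inputs, H hidden, 6 outputs."""
    def __init__(self, n_in=32, n_hidden=30, n_out=6, lr=0.1, seed=0):
        rng = np.random.default_rng(seed)
        self.w1 = rng.normal(0.0, 0.1, (n_in, n_hidden))
        self.w2 = rng.normal(0.0, 0.1, (n_hidden, n_out))
        self.lr = lr

    @staticmethod
    def _sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def forward(self, x):
        self.h = self._sigmoid(x @ self.w1)
        self.y = self._sigmoid(self.h @ self.w2)
        return self.y

    def train_step(self, x, target):
        """One gradient step on the squared Euclidean distance ||target - y||^2."""
        y = self.forward(x)
        delta_out = (y - target) * y * (1 - y)              # output-layer error
        delta_hid = (delta_out @ self.w2.T) * self.h * (1 - self.h)
        self.w2 -= self.lr * np.outer(self.h, delta_out)
        self.w1 -= self.lr * np.outer(x, delta_hid)
        return float(np.sum((target - y) ** 2))
```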
ii. Recognition Algorithm: When a test pattern is input into the parallel NNs, as illustrated in Fig. 5, the final result is obtained from the outputs of each subnet and the similarity values as follows.

Step 1. Exclusion by the negation ability of an NN. First, all the registrants are regarded as candidates. If a candidate's maximum output value is less than the threshold value, that candidate is deleted. The threshold value is set to 0.5, which is determined based on the maximum output values of the patterns of the nonregistrants. Since similar individuals are distributed into one subnet, this step excludes candidates similar to the desired individual.

Step 2. Exclusion by the negation ability of the parallel NNs. Among the candidates remaining after Step 1, a candidate that has been excluded in one subnet is deleted from the other subnets. If all candidates are excluded in this step, the test pattern is judged to be a nonregistrant. When a candidate similar to the desired individual is assigned to several clusters at the same time, it may give the maximum output of a subnet to which the desired individual does not belong and may be selected as the final answer by mistake. Performing Step 2 avoids this possibility.

Step 3. Judgment by the similarity method. If some candidates remain after Step 2, the similarity measure is used for judgment. The similarity value between the patterns of each remaining candidate and the test pattern is calculated, and the candidate having the greatest similarity value is regarded as the final answer. If this value is less than the threshold value of similarity, the test pattern is regarded as a nonregistrant.
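The three decision steps can be sketched as follows. This is a hedged illustration: the data layout, the SubNet class from the previous sketch, the similarity threshold, and the cosine similarity used here are assumptions; only the 0.5 output threshold comes from the text.

```python
import numpy as np

def recognize_parallel(test_pattern, subnets, subnet_candidates,
                       standard_patterns, out_threshold=0.5, sim_threshold=0.9):
    """subnets: list of trained SubNet objects (one per cluster).
    subnet_candidates: for each subnet, the list of registrant ids mapped
    to its output units. standard_patterns: dict id -> average pattern.
    Returns the recognized id, or None for a nonregistrant."""
    surviving, excluded = set(), set()
    for net, ids in zip(subnets, subnet_candidates):
        outputs = net.forward(test_pattern)
        for cid, out in zip(ids, outputs):
            # Step 1: delete candidates whose output falls below the threshold.
            (surviving if out >= out_threshold else excluded).add(cid)
    # Step 2: a candidate excluded in any subnet is deleted everywhere.
    surviving -= excluded
    if not surviving:
        return None                                 # judged a nonregistrant
    # Step 3: similarity judgment (cosine similarity is an assumption).
    def similarity(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    best = max(surviving,
               key=lambda c: similarity(test_pattern, standard_patterns[c]))
    if similarity(test_pattern, standard_patterns[best]) < sim_threshold:
        return None
    return best
```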
Table 3.1: SUBNETS AFTER PARTIAL UNIFICATION
EXPERIMENTS |
The methods were analyzed in terms of time and performance. The faces were identified by the PCA and LDA methods. Table 4.1 shows the results of the performance and time analysis.
CONCLUSION |
This work proposed a fuzzy-clustering and parallel-NN method for face recognition using the PCA and LDA methods. The patterns are divided into several small-scale subnets based on FCM. Due to the negation ability of a single NN and of the parallel NNs, some candidates are excluded. The similarities between the remaining candidates and the test pattern are then calculated, and the candidate with the greatest similarity is judged to be the final answer when its similarity value is above the threshold value; otherwise, the test pattern is judged to be nonregistered. The approach was compared with the LDA and PCA methods and analyzed; the result of this analysis is that the PCA method performs better than the LDA method.
References |
|