

Unconstrained Face Recognition from Blurred, Poorly Illuminated, and Pose-Variant Face Images Using SVM

C. Indhumathi
Dept. of CSE, PG Student (SE), Sri Krishna College of Engineering and Technology, Coimbatore, Tamil Nadu, India

Abstract

Face recognition has been an intensely researched field of computer vision for the past couple of decades. Motivated by the problem of remote face recognition, this paper addresses the problem of recognizing blurred and poorly illuminated faces. It is shown that the set of all images obtained by blurring a given image is a convex set, namely the convex hull of shifted versions of that image. Based on this set-theoretic characterization, a blur-robust face recognition algorithm, DRBF, is proposed; prior knowledge about the type of blur can easily be incorporated into the algorithm as constraints. Using the low-dimensional linear subspace model for illumination, it is then shown that the set of all images obtained from a given image by blurring it and changing its illumination conditions is a bi-convex set. Based on this characterization, a blur- and illumination-robust algorithm, IRBF, is proposed. Faces under different poses are detected and normalized using affine transformation parameters that align the input pose image to a frontal view. After this pose normalization, the resulting image undergoes illumination normalization, performed using the SQI algorithm. Finally, the face is recognized, accounting for blur and illumination, by classifying the training and testing data with an SVM.

I. INTRODUCTION

Different problems in face image analysis, such as face detection, face recognition, and facial expression recognition, have received much attention in computer vision research. These problems are interesting both from the viewpoint of basic research aimed at efficient descriptors for facial images and from that of applications such as surveillance and human-computer interaction. A key issue in face analysis is finding efficient descriptors for face appearance. Given the low interpersonal variation in face images, ideal descriptors should be very discriminative. At the same time, they should be robust to perturbations and changes such as illumination and pose variations, aging of the subjects, etc. Despite the extensive research effort towards face descriptors robust to the aforementioned disturbances, the problems caused by blur, often present in real-world face images, have been mostly overlooked. Blur may be present in face images due to motion of the subject or the camera during exposure, the camera not being in focus, or low quality of the imaging device, such as an analog web camera. For example, in the Face Recognition Grand Challenge dataset, many of the “uncontrolled still” images appear blurred because the autofocus of the digital pocket camera did not focus on the face area. Until now, there has been little work on blur-invariant face descriptors. Facial image deblurring for recognition has been addressed in a few publications and the references therein, but to the best of the authors’ knowledge, explicitly constructing blur-invariant descriptors for face recognition has not been proposed before. In that line of work, the challenges caused by blur are addressed by applying the recently proposed blur-insensitive Local Phase Quantization method to face description.
Face recognition has been an intensely researched field of computer vision for the past couple of decades. Though significant strides have been made in tackling the problem in controlled domains, significant challenges remain in the unconstrained domain. One such scenario occurs when recognizing faces acquired from distant cameras. The main factors that make this a challenging problem are image degradations due to blur and noise, and variations in appearance due to illumination and pose. In this paper, we specifically address the problem of recognizing faces across blur and illumination. An obvious approach to recognizing blurred faces would be to deblur the image first and then recognize it using traditional face recognition techniques. However, this approach involves solving the challenging problem of blind image deconvolution. We avoid this unnecessary step and propose a direct approach to face recognition. We show that the set of all images obtained by blurring a given image forms a convex set; more specifically, this set is the convex hull of shifted versions of the original image. Thus, with each gallery image we can associate a corresponding convex set.
Based on this set-theoretic characterization, a blur-robust face recognition algorithm is proposed. In the basic version of the algorithm, we compute the distance of a given probe image from each of the convex sets and assign it the identity of the closest gallery image. The distance-computation steps are formulated as convex optimization problems over the space of blur kernels and do not assume any parametric or symmetric form for the blur kernel; however, if this information is available, it can easily be incorporated into the algorithm, resulting in improved recognition performance. Further, the algorithm is made robust to outliers and small pixel misalignments by replacing the Euclidean distance with a weighted L1-norm distance and by comparing the images in the local binary pattern (LBP) space. It has been shown that all the images of a convex Lambertian object, under all possible illumination conditions, lie on a low-dimensional linear subspace. Though faces are not exactly convex or Lambertian, they can be closely approximated as such. Thus each face can be characterized by a low-dimensional subspace, and this characterization has been used to design illumination-robust face recognition algorithms.
Based on this illumination model, we show that the set of all images of a face under all blur and illumination variations is a bi-convex set: if we fix the blur kernel, the set of images obtained by varying the illumination conditions forms a convex set, and if we fix the illumination condition, the set of all blurred images is also convex. Based on this set-theoretic characterization, this paper proposes a blur- and illumination-robust face recognition algorithm. The basic version of the algorithm computes the distance of a given probe image from each of the bi-convex sets and assigns it the identity of the closest gallery image. The distance-computation steps can be formulated as quadratically constrained quadratic programs (QCQPs), which are solved by alternately optimizing over the blur kernels and the illumination coefficients.
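To make the distance computations concrete, the following is a minimal Python sketch of the blur-only case: the probe's distance to the convex hull of shifted copies of one gallery image, with the blur kernel constrained to the simplex (non-negative taps summing to one). This is a sketch under illustrative assumptions (fixed kernel size, circular shifts, the SLSQP solver), not the paper's implementation.

```python
import numpy as np
from scipy.optimize import minimize

def blur_distance(gallery_img, probe_img, ksize=5):
    """Distance from a probe image to the convex hull of shifted copies of a
    gallery image, i.e., to the set of all blurred versions of that image."""
    r = ksize // 2
    # Columns of A are vectorized spatial shifts of the gallery image,
    # one per tap of a ksize x ksize blur kernel (circular shifts for simplicity).
    shifts = [np.roll(np.roll(gallery_img, dy, axis=0), dx, axis=1).ravel()
              for dy in range(-r, r + 1) for dx in range(-r, r + 1)]
    A = np.stack(shifts, axis=1)                    # (num_pixels, ksize**2)
    b = probe_img.ravel()

    k2 = ksize * ksize
    h0 = np.full(k2, 1.0 / k2)                      # start from a uniform kernel
    objective = lambda h: np.sum((A @ h - b) ** 2)
    constraints = ({'type': 'eq', 'fun': lambda h: h.sum() - 1.0},)
    bounds = [(0.0, 1.0)] * k2                      # non-negative kernel weights
    res = minimize(objective, h0, bounds=bounds,
                   constraints=constraints, method='SLSQP')
    return np.sqrt(res.fun), res.x.reshape(ksize, ksize)

# Recognition assigns the probe the identity of the gallery image whose
# set of blurred versions is closest:
# identity = np.argmin([blur_distance(g, probe)[0] for g in gallery_images])
```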
Restoration of missing areas in digital images has been intensively studied because of its many useful applications, such as removal of unwanted objects and error concealment, and many methods for realizing these applications have been proposed. The methods previously reported in the literature are broadly classified into two categories: structure-based reconstruction and texture-based reconstruction. Most algorithms that focus on texture reconstruction estimate missing areas by using statistical features of known textures within the target image as training patterns. Specifically, they approximate patches within the target image in lower-dimensional subspaces and derive the inverse projection of the corruption to estimate the missing intensities. In this scheme, several multivariate analyses, such as PCA and sparse representation, have been used to obtain the low-dimensional subspaces. In addition to the above reconstruction schemes, many texture-synthesis-based reconstruction methods have been proposed.
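As a rough illustration of the subspace-based scheme described above, the sketch below estimates the missing pixels of a patch by fitting PCA to intact training patches and inverting the projection using only the observed pixels. The patch dimensionality, component count, and least-squares fit are illustrative assumptions, not the method of any specific paper.

```python
import numpy as np
from sklearn.decomposition import PCA

def reconstruct_patch(training_patches, corrupted, observed_mask, n_components=20):
    """Estimate missing pixels of one vectorized patch.
    training_patches: (n, d) patches taken from the intact part of the image.
    corrupted:        (d,) patch with some unknown pixels.
    observed_mask:    (d,) boolean array, True where the pixel is known."""
    pca = PCA(n_components=n_components).fit(training_patches)
    mean, U = pca.mean_, pca.components_            # U has shape (k, d)
    # Fit the subspace coefficients using only the observed pixels ...
    coeffs, *_ = np.linalg.lstsq(U[:, observed_mask].T,
                                 corrupted[observed_mask] - mean[observed_mask],
                                 rcond=None)
    # ... then project back to the full patch and fill in the missing pixels.
    estimate = mean + coeffs @ U
    restored = corrupted.copy()
    restored[~observed_mask] = estimate[~observed_mask]
    return restored
```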

II. IMPLEMENTING DIRECT RECOGNITION OF BLURRED FACES

In this module, we first review the convolution model for blur; next, we show that the set of all images obtained by blurring a given image is convex; and finally, we present the algorithm for recognizing blurred faces.
RECOGNITION OF BLURRED FACES
Face recognition is also sensitive to small pixel misalignments, and hence the general consensus in the face recognition literature is to extract alignment-insensitive features, such as LBP, and then perform recognition based on these features. Following this convention, instead of performing recognition directly from r_j, we first compute the optimal blur kernel h_j for each gallery image.
We then blur each gallery image with its corresponding optimal blur kernel h_j and extract LBP features from the blurred gallery images. Finally, we compare the LBP features of the probe image with those of the gallery images to find the closest match. To make the algorithm robust to outliers, which could arise due to variations in expression, the Euclidean distance is replaced by a weighted L1-norm distance.
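The LBP-based matching step might look like the following Python sketch, assuming the per-gallery blur kernels have already been estimated (for example with blur_distance above). The LBP parameters and the plain (unweighted) L1 histogram distance are simplifications of the weighted distance described in the text.

```python
import numpy as np
from scipy.signal import convolve2d
from skimage.feature import local_binary_pattern

def lbp_hist(img, P=8, R=1):
    """Histogram of uniform LBP codes used as an alignment-insensitive feature."""
    codes = local_binary_pattern(img, P, R, method='uniform')
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
    return hist

def drbf_identify(gallery_images, probe_img, blur_kernels):
    """Blur each gallery image with its optimal kernel, extract LBP features,
    and return the index of the gallery face closest to the probe."""
    probe_feat = lbp_hist(probe_img)
    distances = []
    for img, k in zip(gallery_images, blur_kernels):
        blurred = convolve2d(img, k, mode='same', boundary='wrap')
        distances.append(np.abs(lbp_hist(blurred) - probe_feat).sum())  # L1 in LBP space
    return int(np.argmin(distances))
```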

III. INCORPORATING THE ILLUMINATION MODEL

In this module we note that facial images of a person under different illumination conditions can look very different; hence, for any recognition algorithm to work in practice, it must account for these variations. First, we review the low-dimensional subspace model for handling appearance variations due to illumination. Next, we use this model along with the convolution model to define the set of images of a face under all possible lighting conditions and blur, and then propose a recognition algorithm based on minimizing the distance of the probe image from such sets.
ILLUMINATION-ROBUST RECOGNITION OF BLURRED FACES (IRBF)
Corresponding to each sharp, well-lit gallery image I_j, j = 1, 2, . . . , M, we obtain the nine basis images I_j,m, m = 1, 2, . . . , 9. Given the vectorized probe image i_b, for each gallery image I_j we find the optimal blur kernel h_j and illumination coefficients α_j,m. We then transform (blur and re-illuminate) each gallery image I_j using the computed blur kernel h_j and illumination coefficients α_j,m. Finally, we compute the LBP features of these transformed gallery images and compare them with those of the probe image I_b to find the closest match.
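A minimal sketch of the alternating optimization, under the same illustrative assumptions as before (fixed kernel size, circular shifts, SLSQP for the kernel step), could look like the following; the nine illumination basis images of a gallery face are taken as given inputs.

```python
import numpy as np
from scipy.signal import convolve2d
from scipy.optimize import minimize

def irbf_fit(basis_images, probe_img, ksize=5, n_iter=5):
    """Alternately estimate a blur kernel h and illumination coefficients alpha
    so that sum_m alpha_m * (h convolved with B_m) approximates the probe.
    basis_images: list of the nine illumination basis images of one gallery face."""
    k2 = ksize * ksize
    r = ksize // 2
    h = np.full((ksize, ksize), 1.0 / k2)           # start from a uniform kernel
    b = probe_img.ravel()
    for _ in range(n_iter):
        # Step 1: fix h, solve an ordinary least-squares problem for alpha.
        B = np.stack([convolve2d(Bm, h, mode='same', boundary='wrap').ravel()
                      for Bm in basis_images], axis=1)        # (num_pixels, 9)
        alpha, *_ = np.linalg.lstsq(B, b, rcond=None)
        # Step 2: fix alpha, re-illuminate the face and solve for h on the simplex.
        relit = sum(a * Bm for a, Bm in zip(alpha, basis_images))
        shifts = [np.roll(np.roll(relit, dy, axis=0), dx, axis=1).ravel()
                  for dy in range(-r, r + 1) for dx in range(-r, r + 1)]
        A = np.stack(shifts, axis=1)
        objective = lambda k: np.sum((A @ k - b) ** 2)
        constraints = ({'type': 'eq', 'fun': lambda k: k.sum() - 1.0},)
        res = minimize(objective, h.ravel(), bounds=[(0.0, 1.0)] * k2,
                       constraints=constraints, method='SLSQP')
        h = res.x.reshape(ksize, ksize)
    return h, alpha, np.sqrt(res.fun)
```

Recognition would then proceed as in DRBF: transform each gallery face with its fitted kernel and coefficients, extract LBP features, and match against the probe.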
POSE AND ILLUMINATION NORMALIZATION
Pose normalization is often employed to improve classification accuracy. In principle, it allows us to simultaneously correct for both pose and illumination changes. This comes, however, at a significant computational cost, particularly when processing a high number of faces (regardless of their distribution within images). FACE exploits a less complex yet equally effective approach to pose normalization.
In this work, illumination normalization is performed using the Self-Quotient Image (SQI) algorithm.
Self-Quotient Image algorithm
The SQI technique has been proposed for synthesizing an illumination-normalized image from a single face image. The SQI Q(x, y) is defined in terms of a face image I(x, y) and a smoothed image S(x, y) as

Q(x, y) = I(x, y) / S(x, y), where S(x, y) = (F * I)(x, y) and F is a smoothing (weighted Gaussian) kernel.
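A simplified Python sketch of SQI is shown below. It uses a plain isotropic Gaussian for the smoothing filter, whereas the original SQI uses a weighted (anisotropic) Gaussian; the sigma, epsilon, and log compression are illustrative choices.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def self_quotient_image(img, sigma=3.0, eps=1e-6):
    """Illumination normalization via the Self-Quotient Image:
    Q(x, y) = I(x, y) / S(x, y), with S a smoothed version of I."""
    img = img.astype(np.float64)
    smoothed = gaussian_filter(img, sigma=sigma)      # stand-in for the weighted Gaussian
    q = img / (smoothed + eps)                        # eps avoids division by zero
    q = np.log1p(q)                                   # compress the dynamic range
    return (q - q.min()) / (q.max() - q.min() + eps)  # rescale to [0, 1]
```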
FACE RECOGNITION ACROSS BLUR AND ILLUMINATION USING SVM
In this module, we note that face recognition is a difficult task because of changeable illumination conditions; for example, the illumination change between indoor and outdoor environments is an unsolved problem for face recognition. Moreover, it is a multi-class problem, which can be solved by two familiar approaches, where N is the number of classes (e.g., N different individuals). The first is the "one-against-the-rest" approach: this technique trains N binary classifiers, each of which separates a single class from all the remaining classes, and the final output is the class corresponding to the binary classifier with the highest output value. The second is the "one-against-one" approach: this technique trains N(N-1)/2 binary classifiers, each of which separates a pair of classes, and the final output is decided by voting or by a decision tree.
The main idea of SVM is a nonlinear mapping of the input space to a high-dimensional feature space; given two linearly separable classes, it designs the classifier that leaves the maximum margin between the two classes in the feature space. SVM displays good performance and has been applied extensively to pattern classification and handwriting recognition.
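The two multi-class schemes described above can be sketched with scikit-learn as follows. The feature dimension, kernel choice, and random placeholder data are assumptions standing in for the LBP features of the normalized training faces.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.multiclass import OneVsRestClassifier, OneVsOneClassifier

# X: feature vectors of normalized face images, y: subject labels.
# Placeholder data; in the pipeline above, X would be LBP features of the
# blur/illumination/pose-normalized training faces.
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 59))           # 60 faces, 59-dimensional features
y = rng.integers(0, 5, size=60)         # 5 subjects

base = SVC(kernel='rbf', C=10.0, gamma='scale')
ovr = OneVsRestClassifier(base).fit(X, y)   # N binary classifiers
ovo = OneVsOneClassifier(base).fit(X, y)    # N(N-1)/2 binary classifiers

probe = rng.normal(size=(1, 59))
print("one-against-the-rest prediction:", ovr.predict(probe))
print("one-against-one prediction:     ", ovo.predict(probe))
```

Note that scikit-learn's SVC already applies a one-against-one scheme internally for multi-class problems; the explicit wrappers are shown only to mirror the two approaches described above.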

IV. RELATED WORK

Facial deblur inference using subspace analysis for recognition of blurred faces, by M. Nishiyama, A. Hadid, H. Takeshima, J. Shotton, T. Kozakaya, and O. Yamaguchi [1]: the FADEIN algorithm maps blurred images to a feature space for learning statistical models. Simply vectorizing the images does not generalize well for this feature space, since the vectorized images are not clearly separated, which yields poor performance in experiments. Instead, the authors design a new "magnitude-based feature space" in the frequency domain, in which the variation of facial appearance caused by blur is larger than the variation of facial appearance between individuals; faces blurred by the same PSF are similar in this new feature space. In the statistical models, each set is represented as a low-dimensional linear subspace using PCA. Query images are compared to each subspace during testing, and the most similar subspace gives an accurately inferred PSF that is used to deblur the query image. The resulting image can then be fed to a standard face recognition algorithm. Extensive experiments on real and artificially blurred face images show accurate PSF inference and significant face recognition performance improvements in comparison to existing methods.
Recognition of blurred faces using local phase quantization, by Timo Ahonen, Esa Rahtu, Ville Ojansivu, and Janne Heikkilä [2]: different problems in face image analysis, such as face detection, face recognition, and facial expression recognition, have received much attention in computer vision research. These problems are interesting both from the viewpoint of basic research aimed at efficient descriptors for facial images and from that of applications such as surveillance and human-computer interaction. A key issue in face analysis is finding efficient descriptors for face appearance. Given the low interpersonal variation in face images, ideal descriptors should be very discriminative. At the same time, they should be robust to perturbations and changes such as illumination and pose variations, aging of the subjects, etc. Despite the extensive research effort towards face descriptors robust to the aforementioned disturbances, the problems caused by blur, often present in real-world face images, have been mostly overlooked. Blur may be present in face images due to motion of the subject or the camera during exposure, the camera not being in focus, or low quality of the imaging device, such as an analog web camera, as in the "uncontrolled still" images of the Face Recognition Grand Challenge dataset.

V. PROPOSED SYSTEM

In the proposed approach, blur and illumination with pose variation are handled together. First, the blur portion alone is considered; it is resolved with the direct recognition of blurred faces (DRBF) algorithm. It is then checked with the illumination-correction algorithm. Basically, a blurred image is modeled as a sharp image convolved with a blur kernel, and no specific characteristics are assumed for the particular blur. Faces under different poses are then recognized by normalizing them with an affine transformation: an input face image is normalized to frontal view using the iris locations, with affine transformation parameters aligning the input pose image to the frontal view. After completing this pose normalization, the resulting image undergoes illumination normalization, performed using the SQI algorithm. Finally, a support vector machine classifier is adopted to uniquely identify facial characteristics by classifying the face features in the training and testing sets.
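For the pose-normalization step, a rough sketch using OpenCV is given below; it maps the detected iris centres onto canonical frontal positions with a similarity transform (a special case of an affine transform). The canonical eye positions and output size are illustrative assumptions, and the iris coordinates are assumed to come from an external detector.

```python
import numpy as np
import cv2

def align_to_frontal(img, left_eye, right_eye, out_size=(128, 128),
                     target_left=(38, 48), target_right=(90, 48)):
    """Warp a face so that the detected iris centres land on canonical
    frontal positions, approximating the affine pose normalization."""
    left_eye, right_eye = np.float64(left_eye), np.float64(right_eye)
    tl, tr = np.float64(target_left), np.float64(target_right)
    # Rotation + scale that maps the source eye vector onto the target eye vector.
    src_vec, dst_vec = right_eye - left_eye, tr - tl
    scale = np.linalg.norm(dst_vec) / np.linalg.norm(src_vec)
    angle = np.arctan2(dst_vec[1], dst_vec[0]) - np.arctan2(src_vec[1], src_vec[0])
    c, s = scale * np.cos(angle), scale * np.sin(angle)
    R = np.array([[c, -s], [s, c]])
    t = tl - R @ left_eye                    # translation pinning the left eye
    M = np.hstack([R, t[:, None]])           # 2x3 affine matrix
    return cv2.warpAffine(img, M, out_size)  # out_size is (width, height)
```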

VI. CONCLUSION

This work has proposed a robust algorithm for recognizing faces in distant imagery. Prior to this work, the effectiveness of the blur-insensitive operator had been shown experimentally only on texture images without blur or with artificial blur.
Here the applicability of the operator to the very challenging task of face recognition was analyzed, and it was shown to reach higher recognition rates than the widely used local binary pattern operator. Motivated by the problem of remote face recognition, this work addressed the recognition of blurred and poorly illuminated faces. It was shown that the set of all images obtained by blurring a given image is a convex set given by the convex hull of shifted versions of the image. Based on this set-theoretic characterization, a blur-robust face recognition algorithm, DRBF, was proposed; this algorithm can easily incorporate prior knowledge about the type of blur as constraints. Using the low-dimensional linear subspace model for illumination, it was then shown that the set of all images obtained from a given image by blurring it and changing its illumination conditions is a bi-convex set. Again, based on this set-theoretic characterization, a blur- and illumination-robust algorithm, IRBF, was proposed.

References

  1. W. Zhao, R. Chellappa, P. J. Phillips, and A. Rosenfeld, “Face recognition: A literature survey,” ACM Comput. Surv., vol. 35, no. 4, pp. 399–458, Dec. 2003.
  2. J. Ni and R. Chellappa, “Evaluation of state-of-the-art algorithms for remote face recognition,” in Proc. IEEE 17th Int. Conf. Image Process., Sep. 2010, pp. 1581–1584.
  3. M. Nishiyama, A. Hadid, H. Takeshima, J. Shotton, T. Kozakaya, and O. Yamaguchi, “Facial deblur inference using subspace analysis for recognition of blurred faces,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 33, no. 4, pp. 838–845, Apr. 2011.
  4. D. Kundur and D. Hatzinakos, “Blind image deconvolution revisited.”
  5. A. Levin, Y. Weiss, F. Durand, and W. T. Freeman, “Understanding blind deconvolution algorithms,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 33, no. 12, pp. 2354–2367, Dec. 2011.
  6. T. Ojala, M. Pietikäinen, and T. Mäenpää, “Multiresolution gray-scale and rotation invariant texture classification with local binary patterns,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 24, no. 7, pp. 971–987, Jul. 2002.
  7. R. Basri and D. W. Jacobs, “Lambertian reflectance and linear subspaces,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 25, no. 2, pp. 218–233, Feb. 2003.
  8. R. Ramamoorthi and P. Hanrahan, “A signal-processing framework for reflection,” ACM Trans. Graph., vol. 23, no. 4, pp. 1004–1042, 2004.
  9. K.-C. Lee, J. Ho, and D. J. Kriegman, “Acquiring linear subspaces for face recognition under variable lighting,” IEEE Trans. Pattern Anal Mach. Intell., vol. 27, no. 5, pp. 684–698, May 2005.
  10. H. Hu and G. De Haan, “Adaptive image restoration based on local robust blur estimation,” in Proc. Int. Conf. Adv. Concep. Intell. Vis. Syst., 2007, pp. 461–472.