
Wavelet Based Face Recognition for Low Quality Images

M. Karthika1, K. Shanmugapriya2, Dr. S. Valarmathy3, M. Arunkumar4
  1. PG Scholar, Department of ECE, Bannari Amman Institute of Technology, Sathyamangalam, Tamilnadu, India
  2. PG Scholar, Department of ECE, Bannari Amman Institute of Technology, Sathyamangalam, Tamilnadu, India
  3. Professor and Head, Department of ECE, Bannari Amman Institute of Technology, Sathyamangalam, Tamilnadu, India
  4. Assistant Professor, Department of ECE, Bannari Amman Institute of Technology, Sathyamangalam, Tamilnadu, India

Abstract

The appearance of a face image is severely affected by illumination conditions, which hinder automatic face recognition. Face recognition (FR) under varying illumination is challenging, and extracting illumination invariant features is an effective approach to this problem. In this work, a multi-resolution feature extraction algorithm for face recognition is proposed based on the two-dimensional discrete wavelet transform (2D-DWT), which efficiently exploits the local spatial variations in a face image. Wavelet coefficients corresponding to each local region residing inside the horizontal bands are selected as features. For the selection of the dominant coefficients, a threshold criterion is proposed, which drastically reduces the feature dimension. The contribution of this paper is threefold: 1) an objective measure of the illumination quality of a given face image is used to decide whether the image should be preprocessed to normalize its illumination; 2) the global quality-based normalization scheme is extended to a regional quality-based approach to adaptive illumination normalization; 3) the illumination quality measure is used as a means to adaptively select the weighting parameters of the fused wavelet-based multistream face recognition scheme.

Keywords

Face Recognition, Illumination, Wavelet, Quality-Based, Normalization

INTRODUCTION

Automatic face recognition has widespread applications in security, authentication, surveillance, and criminal identification. Conventional ID-card and password-based identification methods, although very popular, are no longer as reliable as before because of advanced forgery and password-hacking techniques. As an alternative, biometrics, defined as intrinsic physical or behavioral traits of human beings, are being used for identity and access management. Among physiological biometrics, the face is gaining popularity because of its non-intrusiveness and high degree of security. Moreover, unlike iris or fingerprint recognition, face recognition does not require high-precision equipment or user cooperation during image acquisition, which makes it even more suitable for video surveillance.
The objective of this paper is to develop a wavelet-based face recognition scheme that, instead of the entire face image, considers only some highly informative local zones of the image for dominant feature extraction. Discrete wavelet transforms (DWTs), which are multiresolution image analysis tools that decompose an image into low and high frequencies at different scales, have been successfully used in a variety of face recognition schemes as a dimension reduction technique and/or as a tool to extract a multiresolution feature representation of a given face image. The multiresolution property of the DWT enables one to efficiently compute a small-sized feature representation, which is particularly desirable for face recognition on constrained devices such as mobile phones. Here, we propose a quality-based adaptive approach to face recognition, in which the illumination quality of an image, measured in terms of luminance distortion in comparison to a known reference image, is used as the basis for adapting the application of global and regional illumination normalization procedures. This method enhances the contrast as well as the edges of face images simultaneously, in the frequency domain using the wavelet transform, to facilitate face recognition tasks. To improve recognition accuracy under varying lighting conditions, a score fusion process is applied.

The paper is organized as follows. Section II presents a review of approaches to face recognition in the presence of varying lighting conditions using the wavelet transform. The illumination quality measure used in this study and the proposed adaptive approach to face recognition are presented in Section III. Sections IV and V deal with adaptive normalization and the fusion process, respectively. Section VI evaluates the suitability of the illumination quality measure for the proposed adaptive face recognition scheme. Recognition experiments are presented and discussed in Section VII. Conclusions and future work are presented in Section VIII.
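As a rough illustration of the multiresolution feature extraction discussed above, the following Python sketch uses the PyWavelets library to decompose a face image into wavelet subbands that can later serve as separate feature streams. The choice of wavelet ("haar") and the decomposition level are assumptions made for this example only; they are not prescribed at this point of the paper.

    import numpy as np
    import pywt  # PyWavelets

    def dwt_feature_streams(face_img, wavelet="haar", level=2):
        # Multilevel 2D-DWT of a grayscale face image; the coarsest-level
        # subbands are returned as separate feature streams.
        coeffs = pywt.wavedec2(np.asarray(face_img, dtype=np.float64),
                               wavelet, level=level)
        ll = coeffs[0]          # approximation (LL) subband at the coarsest scale
        ch, cv, cd = coeffs[1]  # horizontal, vertical and diagonal detail subbands
        # Mapping pywt's (cH, cV, cD) onto the usual LH/HL/HH naming is an
        # assumption of this sketch.
        return {"LL": ll.ravel(), "LH": ch.ravel(),
                "HL": cv.ravel(), "HH": cd.ravel()}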

WAVELET TRANSFORM IN FACE RECOGNITION

It is intuitive that images of a particular person captured under different lighting conditions may vary significantly, which can affect face recognition accuracy. In order to overcome the effect of lighting variation, the proposed method performs illumination adjustment prior to feature extraction. Given two images of a single person with different intensity distributions due to variation in illumination conditions, our objective is to obtain similar feature vectors for these two images irrespective of the illumination conditions. Since, in the proposed method, feature extraction is performed in the DWT domain, it is of interest to analyze the effect of illumination variation on DWT-based feature extraction. Different decomposition levels and/or wavelet filters yield different face feature vectors, giving rise to different face recognition schemes and providing opportunities for multistream identification. In the multistream approach, a face image is represented by multiple feature vectors at a given scale (e.g., the LL and LH subbands). Two images are compared by first calculating a distance score for each subband representation, followed by a score fusion that can be performed by calculating a weighted average of the scores. The fused score is then used to classify the identity of the individual.
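A minimal sketch of the multistream comparison described above is given below, assuming a Euclidean distance per subband and a fixed set of fusion weights (e.g., {"LL": 0.6, "LH": 0.4}); neither the distance metric nor the weight values are fixed by the text of this section.

    import numpy as np

    def fused_distance(probe_streams, gallery_streams, weights):
        # Weighted average of per-subband distances; a lower fused score
        # means a closer match.
        score = 0.0
        for name, w in weights.items():
            score += w * np.linalg.norm(probe_streams[name] - gallery_streams[name])
        return score

    def identify(probe_streams, gallery, weights):
        # `gallery` maps an identity label to the feature streams of the
        # corresponding enrolled image.
        return min(gallery, key=lambda pid: fused_distance(probe_streams,
                                                           gallery[pid], weights))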

IMAGE-QUALITY-BASED ADAPTIVE FACE RECOGNITION

This work investigates the use of an image quality measure as the basis for an adaptive approach to face recognition in the presence of varying illumination. Naturally, the illumination quality of a given face image is defined in terms of its luminance distortion in comparison to a known reference image. The universal image quality index (Q) incorporates the necessary ingredients that fit our needs. Q aims at providing meaningful comparisons across different types of image distortions by modeling any image distortion as a combination of the following three factors: 1) loss of correlation; 2) luminance distortion; and 3) contrast distortion. Here, the luminance distortion factor of Q is used to measure the global or regional illumination quality of images. This will be called the luminance quality (LQ) index.

A. Universal Quality Index

Let x = {x_i | i = 1, 2, ..., N} and y = {y_i | i = 1, 2, ..., N} be the reference and the test images, respectively. The universal quality index is defined as

Q = \frac{4\,\sigma_{xy}\,\bar{x}\,\bar{y}}{\left(\sigma_x^2 + \sigma_y^2\right)\left(\bar{x}^2 + \bar{y}^2\right)}

where

\bar{x} = \frac{1}{N}\sum_{i=1}^{N} x_i, \qquad \bar{y} = \frac{1}{N}\sum_{i=1}^{N} y_i,

\sigma_x^2 = \frac{1}{N-1}\sum_{i=1}^{N}\left(x_i - \bar{x}\right)^2, \qquad \sigma_y^2 = \frac{1}{N-1}\sum_{i=1}^{N}\left(y_i - \bar{y}\right)^2,

\sigma_{xy} = \frac{1}{N-1}\sum_{i=1}^{N}\left(x_i - \bar{x}\right)\left(y_i - \bar{y}\right).
Statistical features are measured locally to accommodate the space-variant nature of image quality and are then combined into a single quality measure for the entire image. A local quality index Q_j is calculated by sliding a window of size B × B pixel by pixel from the top-left corner until the window reaches the bottom-right corner of the image. For a total of M steps, the overall quality index is given by

Q = \frac{1}{M}\sum_{j=1}^{M} Q_j.
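The following Python sketch is a literal, unoptimized implementation of the sliding-window computation above; only numpy is assumed, and a practical implementation would typically replace the explicit loops with box filtering.

    import numpy as np

    def universal_quality_index(x, y, B=8):
        # Q averaged over all B x B sliding windows (M windows in total).
        x = np.asarray(x, dtype=np.float64)
        y = np.asarray(y, dtype=np.float64)
        H, W = x.shape
        q_vals = []
        for r in range(H - B + 1):
            for c in range(W - B + 1):
                xb = x[r:r + B, c:c + B].ravel()
                yb = y[r:r + B, c:c + B].ravel()
                mx, my = xb.mean(), yb.mean()
                vx, vy = xb.var(ddof=1), yb.var(ddof=1)
                cxy = ((xb - mx) * (yb - my)).sum() / (xb.size - 1)
                denom = (vx + vy) * (mx ** 2 + my ** 2)
                q_vals.append(4.0 * cxy * mx * my / denom if denom > 0 else 1.0)
        return float(np.mean(q_vals))  # Q = (1/M) * sum_j Q_j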

B. Global LQ (GLQ) and Region LQ (RLQ) Indexes

The universal quality index Q can be written as a product of three components, i.e.,
Q = \frac{\sigma_{xy}}{\sigma_x\,\sigma_y} \cdot \frac{2\,\bar{x}\,\bar{y}}{\bar{x}^2 + \bar{y}^2} \cdot \frac{2\,\sigma_x\,\sigma_y}{\sigma_x^2 + \sigma_y^2}

where the three factors measure, respectively, the loss of correlation, the luminance distortion, and the contrast distortion; the second (luminance) factor is the LQ index used in this work.
With a value range of [0, 1], LQ measures how close the mean luminance is between x and y; LQ equals 1 if and only if the mean luminances are equal, i.e., \bar{x} = \bar{y}. The window size used in this paper is the default 8 × 8 pixels. GLQ is calculated in the same way as a single overall Q value, but using the luminance factor only. RLQ represents the LQ of a region of the image resulting from a 2 × 2 partitioning of the image; it is calculated by partitioning the local quality index map into four regions.
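A sketch of how the two indexes could be computed follows; the quadrant averaging over the map of local luminance-quality values mirrors the 2 × 2 partitioning described above, and only numpy is assumed.

    import numpy as np

    def lq_map(reference, test, B=8):
        # Local luminance-quality values LQ_j = 2*mx*my / (mx^2 + my^2)
        # computed over B x B sliding windows (the luminance factor of Q).
        x = np.asarray(reference, dtype=np.float64)
        y = np.asarray(test, dtype=np.float64)
        H, W = x.shape
        out = np.ones((H - B + 1, W - B + 1))
        for r in range(H - B + 1):
            for c in range(W - B + 1):
                mx = x[r:r + B, c:c + B].mean()
                my = y[r:r + B, c:c + B].mean()
                d = mx ** 2 + my ** 2
                if d > 0:
                    out[r, c] = 2.0 * mx * my / d
        return out

    def glq(reference, test, B=8):
        # Global LQ: mean of the local LQ map.
        return float(lq_map(reference, test, B).mean())

    def rlq(reference, test, B=8):
        # Region LQ: mean of each quadrant of a 2 x 2 partition of the map.
        m = lq_map(reference, test, B)
        h, w = m.shape[0] // 2, m.shape[1] // 2
        return [float(m[:h, :w].mean()), float(m[:h, w:].mean()),
                float(m[h:, :w].mean()), float(m[h:, w:].mean())]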

IMAGE-QUALITY-BASED ADAPTIVE NORMALIZATION

The proposed image-quality-based adaptive normalization works by first calculating the GLQ of a given image and then normalizing it only if its GLQ is less than a predefined threshold. Because images tend to exhibit regional variation in quality as a result of the direction of the light source, the global quality-based adaptive normalization is extended by introducing a region quality-based adaptive approach: a region of an image is normalized only if its RLQ score is lower than a predefined threshold. The commonly used histogram equalization (HE) is adopted here for illumination normalization. Hence, the two proposed approaches to adaptive normalization are referred to as global quality-based HE (GQbHE) and regional quality-based HE (RQbHE). The threshold can be determined empirically, depending on the objectives of the application under consideration.
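A minimal sketch of GQbHE and RQbHE is given below, reusing the glq and rlq helpers from the previous sketch. The plain histogram equalization routine assumes 8-bit grayscale images, and the threshold value (0.98) is an illustrative assumption only, since the paper determines the threshold empirically.

    import numpy as np

    def hist_equalize(img):
        # Plain histogram equalization for an 8-bit grayscale image.
        img = np.asarray(img, dtype=np.uint8)
        hist = np.bincount(img.ravel(), minlength=256)
        cdf = hist.cumsum().astype(np.float64)
        cdf_min = cdf[hist > 0][0]
        if cdf[-1] == cdf_min:      # constant image: nothing to equalize
            return img.copy()
        lut = np.round(255.0 * (cdf - cdf_min) / (cdf[-1] - cdf_min)).astype(np.uint8)
        return lut[img]

    def gqbhe(img, reference, threshold=0.98):
        # Global quality-based HE: equalize only when GLQ is below the threshold.
        return hist_equalize(img) if glq(reference, img) < threshold else img

    def rqbhe(img, reference, threshold=0.98):
        # Regional quality-based HE: equalize each image quadrant whose RLQ is low.
        out = np.asarray(img, dtype=np.uint8).copy()
        h, w = out.shape[0] // 2, out.shape[1] // 2
        quads = [(slice(0, h), slice(0, w)), (slice(0, h), slice(w, None)),
                 (slice(h, None), slice(0, w)), (slice(h, None), slice(w, None))]
        for score, (rs, cs) in zip(rlq(reference, img), quads):
            if score < threshold:
                out[rs, cs] = hist_equalize(out[rs, cs])
        return out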

IMAGE-QUALITY-BASED ADAPTIVE FUSION

The idea is to select the fusion weight parameters to adaptively suit the condition of the probe image. The quality-based fusion (QbF) works by first calculating the LQ of the input image; if its LQ score is higher than a predefined fusion threshold, the approximation subband is given a higher weight than the detail subbands during score fusion. If the LQ score is less than the threshold, the approximation subband is given a very low weight.
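A sketch of this weight-selection rule, again reusing the glq helper, is shown below. The fusion threshold and the specific weight values are illustrative assumptions; the returned weights would feed a score-fusion step such as the one sketched in Section II.

    def quality_based_weights(probe, reference, fusion_threshold=0.95,
                              high=0.7, low=0.1):
        # Select subband fusion weights from the probe's LQ score.
        if glq(reference, probe) >= fusion_threshold:
            # Well-lit probe: trust the approximation (LL) stream the most.
            return {"LL": high, "LH": (1.0 - high) / 2, "HL": (1.0 - high) / 2}
        # Poorly lit probe: give the approximation stream a very low weight.
        return {"LL": low, "LH": (1.0 - low) / 2, "HL": (1.0 - low) / 2}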

EVALUATION OF LQ INDEX

A. Evaluation Data

The Extended Yale Face Database B consists of 38 subjects, each imaged under 64 illumination conditions in frontal pose, giving a total of 2414 images. These images can be divided into five illumination subsets according to the angle θ of the light source with respect to the optical axis of the camera. The 168 × 192 pixel cropped images in the database are re-sampled to a fixed size of 128 × 128 pixels for the experiments.
The ORL database consists of 40 subjects, each with ten face images captured against a dark homogeneous background. Images of some subjects were captured at different times. Variations in pose, facial expressions, and facial details are captured in this collection of face images. The original 92 × 112 pixel images are re-sampled to 128 × 128 for the experiments.
The calculation of the LQ index for a given face image relies on the use of a reference image, preferably one that is independent of the subject and of the gallery face images. The reference image used in the evaluation of the LQ index, as well as for the face recognition experiments, is the average face image of the 38 individual faces, each captured in frontal pose and under direct illumination. The same 38 images are commonly used as gallery images for face recognition experiments on Extended Yale B.
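As a short sketch (assuming the gallery images are already cropped and re-sampled to 128 × 128), the reference image described above could be built as the pixel-wise average of the gallery faces:

    import numpy as np

    def average_reference_face(gallery_images):
        # Pixel-wise mean of the 38 frontal, directly illuminated gallery images.
        stack = np.stack([np.asarray(im, dtype=np.float64) for im in gallery_images])
        return stack.mean(axis=0)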

EXPERIMENTAL RESULTS


CONCLUSION AND FUTURE WORK

This paper presented the first part of a project to develop image-quality-based adaptive approaches to face recognition. We investigated the challenges of face recognition in the presence of extreme variations in lighting conditions. The luminance component of the well-known universal quality index is used to associate with an image a quantitative quality value that measures its luminance distortion in comparison to a predefined reference image. This measure is called the LQ index, and it was used to develop global and region quality-based adaptive illumination normalization procedures. Using the well-known Extended Yale Database B, we demonstrated the effectiveness of the proposed image-quality-based illumination normalization schemes in face recognition.
Finally, the previously developed face recognition scheme had no objective means of selecting its fusion parameters and performed differently for face images captured under different lighting conditions, which led to the development of a new adaptive approach to face recognition. The illumination-quality-based adaptive fusion approach works by adapting the weights given to each subband according to the LQ values, and again this led to significantly improved identification accuracy rates. Our future work will investigate other aspects of face image quality, such as facial expression, pose, and occlusion.

References

  1. H. Sellahewa and S. A. Jassim, “Illumination and expression invariant face recognition: Toward sample quality-based adaptive fusion,” in Proc. 2nd IEEE Int. Conf. Biometrics: Theory, Appl. Syst., pp. 1–6, Sep. 2008.
  2. P. Grother and E. Tabassi, “Performance of biometric quality measures,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 29, no. 4, pp. 531–543, Apr. 2007.
  3. S. A. Jassim and H. Sellahewa, “Multi-stream face recognition on dedicated mobile devices for crime-fighting,” in Proc. SPIE Opt. Photon. Counterterrorism Crime Fighting II, vol. 6402, p. 64020P, Sep. 2006.
  4. H. Sellahewa, “Wavelet-based automatic face recognition for constrained devices,” Ph.D. dissertation, Univ. Buckingham, Buckingham, U.K., 2006.
  5. K.-C. Lee, J. Ho, and D. J. Kriegman, “Acquiring linear subspaces for face recognition under variable lighting,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 27, no. 5, pp. 684–698, May 2005.
  6. H. K. Ekenel and B. Sankur, “Multiresolution face recognition,” Image Vis. Comput., vol. 23, no. 5, pp. 469–477, May 2005.
  7. H. Sellahewa and S. Jassim, “Wavelet-based face verification for constrained platforms,” in Proc. SPIE Biometric Technol. Human Identification II, vol. 5779, pp. 173–183, Mar. 2005.
  8. K. Kryszczuk, J. Richiardi, P. Prodanov, and A. Drygajlo, “Error handling in multimodal biometric systems using reliability measures,” in Proc. 13th EUSIPCO, Sep. 2005.
  9. T. Okabe and Y. Sato, “Object recognition based on photometric alignment using RANSAC,” in Proc. IEEE Conf. Comput. Vis. Pattern Recog., vol. 1, pp. 221–228, Jun. 2003.
  10. J.-T. Chien and C.-C. Wu, “Discriminant wavelet faces and nearest feature classifiers for face recognition,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 24, no. 12, pp. 1644–1649, Dec. 2002.