Keywords

3D Face Recognition, 3D Registration, Biometric Curvature Descriptor
        
INTRODUCTION
        
Image processing is a method of converting an image into digital form and performing operations on it in order to obtain an enhanced image or to extract useful information from it. It is a type of signal processing in which the input is an image, such as a video frame or photograph, and the output may be an image or characteristics associated with that image. An image processing system typically treats images as two-dimensional signals and applies established signal-processing methods to them. Image processing is among the most rapidly growing technologies today, with applications in many aspects of business, and it forms a core research area within the engineering and computer science disciplines.
        
The purposes of image processing can be divided into five groups:
        
• Visualization – observe objects that are not visible.

• Image sharpening and restoration – create a better image.

• Image retrieval – seek the image of interest.

• Measurement of pattern – measure various objects in an image.

• Image recognition – distinguish the objects in an image.
        
Biometrics (or biometric authentication) refers to the identification of humans by their characteristics or traits. Biometrics is used in computer science as a form of identification and access control, and also to identify individuals in groups that are under surveillance. Biometric identifiers are the distinctive, measurable characteristics used to label and describe individuals, and are often categorized as physiological versus behavioral. Physiological characteristics are related to the shape of the body; examples include, but are not limited to, fingerprints, the face, DNA, palm prints, hand geometry, the iris, the retina, and odour/scent. Behavioral characteristics are related to a person's patterns of behavior, including, but not limited to, typing rhythm, gait, and voice.
        
Face recognition using 3-D scans of the face has recently been proposed as an alternative or complementary solution to conventional 2-D face recognition approaches that work on still images or videos. Face representations based on 3-D data are expected to be much more robust to pose changes and illumination variations than 2-D images, thus allowing accurate face recognition even in real-world applications with unconstrained acquisition. Encouraged by these premises, many 3-D face recognition approaches have been proposed and evaluated in the last few years. In a conventional face recognition experiment, it is assumed that both the probe and gallery scans are acquired cooperatively in a controlled environment so as to precisely capture and represent the whole face. Many existing methods follow this assumption, focusing on face recognition in the presence of expression variations and reporting very high accuracy on benchmark databases such as the Face Recognition Grand Challenge (FRGC). In contrast, solutions enabling face recognition in uncooperative scenarios are now attracting increasing interest. In such cases, probe scans are acquired in unconstrained conditions that may lead to missing parts, or to occlusions due to hair, glasses, scarves, hand gestures, etc. These difficulties are further sharpened by the recent advent of 4-D scanners (3-D plus time) capable of acquiring temporal sequences of 3-D scans: the dynamics of facial movements captured by these devices can be useful for many applications, but they also increase the acquisition noise and the variability in subjects' pose. In summary, despite the research and applicative importance that partial face matching solutions are gaining, only a few works have explicitly addressed the problem of 3-D face recognition in the case where some parts of the facial scans are missing.
        
A facial recognition system is a computer application for automatically identifying or verifying a person from a digital image or a video frame. One way to do this is by comparing selected facial features from the image against a facial database. Such systems are typically used in security applications and can be compared to other biometrics such as fingerprint or iris recognition systems. Three-dimensional (3D) face recognition is a modality of facial recognition in which the three-dimensional geometry of the human face is used. It has been shown that 3D face recognition methods can achieve significantly higher accuracy than their 2D counterparts, rivaling fingerprint recognition.
        
RELATED WORK
        
Developments in 3-D sensor technologies have increased interest in 3-D face recognition. With 3-D face data, it is possible to obtain results competitive with other modalities such as iris and high-resolution 2-D facial images. Below, we briefly survey previously proposed face recognition approaches, both 2-D and 3-D.
        
2-D TECHNIQUES
        
Although variations caused by pose and expression have attracted increased research effort, the problem of handling occlusions has not been discussed as frequently. In 2-D face recognition studies, there have been only a few approaches considering occlusion variations. In most of these studies, the aim is occlusion handling for recognition, and the registration problem is not considered: experimental results are usually reported on databases where the faces are assumed to be accurately registered prior to recognition. Some studies are based on subspace analysis methods, where the aim is either occlusion-robust projection or missing-data compensation.
        
One approach considers occlusions caused only by eyeglasses and proposes a method to compensate for the missing data. Initially, the glasses region is extracted using color and edge information. Eigenfaces generated offline from a set of non-occluded images are then used, together with the extracted glasses region, for missing-data compensation.
        
EXISTING SYSTEM
        
NORMALIZATION
        
The feature extraction module finds the possible facial features within the occluded regions. Once the artificial occlusion is generated, the next step, normalization, is carried out. Wavelet normalization is used for this purpose; the method is efficient under varying illumination and pose conditions. It first decomposes the image into its low-frequency and high-frequency components. In the two-band multi-resolution wavelet transform, a signal can be expressed by wavelet and scaling basis functions at different scales, in a hierarchical manner.
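To make the decomposition step concrete, here is a minimal sketch using the open-source PyWavelets package (the paper does not name a library, and the gain factor used to damp the illumination-carrying low-frequency band is an illustrative assumption):

    import numpy as np
    import pywt

    def wavelet_normalize(image: np.ndarray, wavelet: str = "db2", gain: float = 0.8) -> np.ndarray:
        """Split an image into low/high-frequency wavelet bands, attenuate the
        low-frequency (illumination-sensitive) band, and reconstruct."""
        cA, (cH, cV, cD) = pywt.dwt2(image.astype(float), wavelet)
        cA *= gain  # damp the approximation band; hypothetical normalization choice
        return pywt.idwt2((cA, (cH, cV, cD)), wavelet)

    # Usage: normalized = wavelet_normalize(face_image)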
        
FEATURE EXTRACTION
        
The facial features are extracted from the normalized facial image using the dual-tree complex wavelet transform, which provides a direct multiresolution decomposition of a given image. The method works well for upright frontal images, which have already been obtained by the wavelet normalization step described above. The desirable characteristics of the DT-DWT(S), such as spatial locality, orientation selectivity, and excellent noise-cleaning performance, provide a framework that renders the extraction of facial features almost invariant to such disturbances. The norm of the complex directional wavelet subband coefficients is used to create a test statistic for enhancing the facial feature edge points. The Rayleigh distribution of the derived statistic matches very closely the true coefficient distribution in the six directional subbands. The use of the complex wavelet transform helps to detect more facial feature edge points due to its improved directionality, and it eliminates the effects of non-uniform illumination very effectively. By combining the edge information obtained using the DT-DWT(S) with the non-skin areas obtained from skin color statistics, the facial features can be extracted.
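As an illustration of extracting the six directional subband norms, the following hedged sketch uses the open-source dtcwt package; the exact DT-DWT(S) variant and thresholds used in the paper may differ:

    import numpy as np
    import dtcwt

    def directional_magnitudes(image: np.ndarray, nlevels: int = 3) -> list:
        """Return, per decomposition level, the norm (magnitude) of the complex
        coefficients in each of the six orientation subbands."""
        pyramid = dtcwt.Transform2d().forward(image.astype(float), nlevels=nlevels)
        # pyramid.highpasses is a tuple of complex arrays shaped (H, W, 6)
        return [np.abs(hp) for hp in pyramid.highpasses]

Edge-like facial feature points can then be enhanced by thresholding these magnitudes, e.g. mags[0].max(axis=-1) > tau for some threshold tau.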
        
FACE RESTORATION
        
The facial feature extraction provides the necessary information for the subsequent face restoration and recognition process. The key idea in restoration is to use the available information provided by the feature extraction. For this, a preliminary mask is computed by calculating the Distance From Feature Space (DFFS) and thresholding the resulting residual vector e.
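A minimal numpy sketch of the DFFS idea as described, with illustrative names (the subspace U, mean face, and threshold tau are assumptions, not the paper's exact parameters):

    import numpy as np

    def dffs_mask(x: np.ndarray, mean: np.ndarray, U: np.ndarray, tau: float) -> np.ndarray:
        """x: flattened face; mean: average face; U: (d, k) orthonormal eigenface basis.
        Returns a boolean preliminary mask from thresholding the residual e."""
        centered = x - mean
        recon = U @ (U.T @ centered)   # projection onto the feature space
        e = (centered - recon) ** 2    # per-pixel squared residual (DFFS terms)
        return e > tau                 # True where the pixel is likely occluded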
        
FACE RECOGNITION
        
The method used for face recognition is the Average Regional Model (ARM). The aim of the method is to find regional correspondences between any two faces. It consists of the following steps: i) coarse and dense ARM-based registration, ii) region-based matching, and iii) classifier fusion. Global coarse registration is carried out to roughly align a given 3D face image to the average face model (AFM). ARMs are constructed on the AFM by determining the semantic regions manually; the whole facial model is divided into four parts: eye-forehead, nose, cheeks, and mouth-chin regions. Dense registration is carried out by aligning local regions with the ARMs using the ICP algorithm: each region of the test face is registered to its corresponding average regional model separately, and the registered regions are then regularly resampled. Therefore, after local dense registration, the facial components are automatically determined over the given facial surface.
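To illustrate the region-based matching and classifier-fusion steps, here is a hedged sketch; the sum-rule fusion over the four regions is an assumption (product or min rules are equally plausible):

    import numpy as np

    def fuse_and_classify(region_dists: np.ndarray, gallery_ids: list):
        """region_dists: (n_regions, n_gallery) distances between the probe's
        registered regions and the corresponding gallery regions."""
        fused = region_dists.sum(axis=0)           # sum-rule classifier fusion
        return gallery_ids[int(np.argmin(fused))]  # best-matching identity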
        
3D FEATURE EXTRACTION
        
The range images provided by the FRGC consist of consistently registered texture-range data pairs: each pixel in the texture image is associated with its 3D point in the range data, making it straightforward to determine the 3D coordinates associated with any point in the 2D image. 3D features can therefore be computed along segments connecting points automatically determined in the texture image.
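Because of this consistent registration, the lookup is a one-line operation, sketched below under the assumption that the range data is stored as an (H, W, 3) array of X, Y, Z values per texture pixel:

    import numpy as np

    def pixel_to_3d(range_data: np.ndarray, u: int, v: int) -> np.ndarray:
        """Return the 3D coordinates associated with texture pixel (u, v)."""
        return range_data[v, u]  # direct texture-to-range lookup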
        
PROPOSED SYSTEM
        
This work introduces a new technique called masked projection for subspace analysis with incomplete data. The pre-processing module includes registration and occlusion removal steps. For alignment, an adaptive registration module is utilized, which registers the occluded surfaces; by adaptively selecting the model, it is possible to discard the effect of occluding surfaces on registration.
        
The occlusions are detected on the registered surfaces by thresholding point distances to an average face model. The training module works offline to learn the projection matrices for the different regions from a training set of non-occluded faces. The classification module uses the occlusion mask of the probe image to compute the masked projection and projects the probe image onto the adaptive subspace. Identification is handled in the subspace by a 1-nearest-neighbor (1-NN) classifier. The proposed system is evaluated on two main 3-D face databases that contain realistic occlusions: (1) the Bosphorus and (2) the UMB-DB databases.
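A minimal sketch of the occlusion-detection step, assuming index-wise point correspondence after registration and an illustrative distance threshold:

    import numpy as np

    def occlusion_mask(face_pts: np.ndarray, avg_model_pts: np.ndarray, tau: float = 4.0) -> np.ndarray:
        """face_pts, avg_model_pts: (N, 3) corresponding points after registration.
        Points farther than tau from the average face model are flagged occluded."""
        d = np.linalg.norm(face_pts - avg_model_pts, axis=1)
        return d > tau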
        
MODEL SELECTION
        
In this module we use the Iterative Closest Point (ICP) algorithm, one of the most widely used methods for rigid registration of 3D surfaces. ICP is employed to minimize the difference between two clouds of points. It is often used to reconstruct 2D or 3D surfaces from different scans, to localize robots and achieve optimal path planning (especially when wheel odometry is unreliable due to slippery terrain), to co-register bone models, etc. We employ a model-based registration approach which can cope with occlusion variations.
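As a baseline illustration of the ICP step, the following sketch uses the open-source Open3D library; the adaptive, occlusion-aware model selection described above would sit on top of this basic alignment:

    import numpy as np
    import open3d as o3d

    def icp_align(source_pts: np.ndarray, target_pts: np.ndarray, threshold: float = 5.0) -> np.ndarray:
        """Rigidly align source_pts (N, 3) to target_pts (M, 3); returns a 4x4 transform."""
        src = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(source_pts))
        tgt = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(target_pts))
        result = o3d.pipelines.registration.registration_icp(
            src, tgt, threshold, np.eye(4),
            o3d.pipelines.registration.TransformationEstimationPointToPoint())
        return result.transformation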
        
MASKED PROJECTION
        
In this module we remove the extra pixels that are not present in the gallery image; these pixels are known as occlusion pixels. We analyze the subspace points to obtain the nodal points, identify the non-occluded points, and perform the occlusion removal process.
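The core of the masked projection can be sketched as a least-squares projection restricted to the visible pixels; the names below are illustrative rather than the paper's exact formulation:

    import numpy as np

    def masked_projection(x: np.ndarray, mean: np.ndarray, U: np.ndarray, mask: np.ndarray) -> np.ndarray:
        """x: probe vector; U: (d, k) subspace basis; mask: True for non-occluded pixels.
        Estimates subspace coefficients from the visible pixels only."""
        Um = U[mask]  # keep only the rows corresponding to visible pixels
        coeffs, *_ = np.linalg.lstsq(Um, (x - mean)[mask], rcond=None)
        return coeffs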
        
LAPLACIAN ALGORITHM
        
The Laplacian algorithm is used to calculate the nodal points. Each nodal point carries a weighted value, and the weighted values of the probe image's nodal points are matched against those of the gallery images. Classification is then performed to check the incomplete data and provide the complete face image: the classification task analyzes all nodal points that match the test image and finally produces the correct person's face image.
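A hedged sketch of this weighted nodal-point matching with a 1-nearest-neighbor decision (the weighting scheme shown is an assumption):

    import numpy as np

    def match_identity(probe: np.ndarray, gallery: np.ndarray, weights: np.ndarray, ids: list):
        """probe: (k,) nodal values; gallery: (n, k) per-subject nodal values;
        weights: (k,) per-node weights. Returns the 1-NN identity."""
        d = np.sqrt((((gallery - probe) ** 2) * weights).sum(axis=1))
        return ids[int(np.argmin(d))]  # nearest gallery subject wins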
        
Facial recognition systems are built on computer programs that analyze images of human faces for the purpose of identifying them. The programs take a facial image; measure characteristics such as the distance between the eyes, the length of the nose, and the angle of the jaw; and create a unique file called a "template." Using templates, the software then compares that image with another image and produces a score that measures how similar the images are to each other. Typical sources of images for use in facial recognition include video camera signals and pre-existing photos such as those in driver's license databases.
        
Facial recognition systems are computer-based security systems that are able to automatically detect and identify human faces. These systems depend on a recognition algorithm, such as eigenfaces or the hidden Markov model. The first step for a facial recognition system is to detect a human face and extract it from the rest of the scene. Next, the system measures nodal points on the face, such as the distance between the eyes, the shape of the cheekbones, and other distinguishable features.
        
These nodal points are then compared to the nodal points computed from a database of pictures in order to find a match. Obviously, such a system is limited by the angle of the face captured and the lighting conditions present. New technologies are currently in development to create three-dimensional models of a person's face based on a digital photograph in order to create more nodal points for comparison. However, such technology is inherently susceptible to error, given that the computer is extrapolating a three-dimensional model from a two-dimensional photograph.
        
Laplacian smoothing is an algorithm for smoothing a polygonal mesh. For each vertex in the mesh, a new position is chosen based on local information (such as the positions of its neighbors) and the vertex is moved there. In the case that the mesh is topologically a rectangular grid (that is, each internal vertex is connected to four neighbors), this operation produces the Laplacian of the mesh. More formally, the smoothing operation may be described per vertex as:
        
\bar{x}_i = \frac{1}{N} \sum_{j \in N(i)} x_j

where N(i) is the set of vertices adjacent to node i, N is the number of those neighbors, and \bar{x}_i is the new position for node i.
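A minimal sketch of this per-vertex update (one smoothing pass moves every vertex to the average of its neighbors' positions):

    import numpy as np

    def laplacian_smooth(vertices: np.ndarray, neighbors: list, iterations: int = 1) -> np.ndarray:
        """vertices: (V, 3) positions; neighbors[i]: list of indices adjacent to vertex i."""
        v = vertices.copy()
        for _ in range(iterations):
            v = np.array([v[nbrs].mean(axis=0) if len(nbrs) else v[i]
                          for i, nbrs in enumerate(neighbors)])
        return v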
        
With its neighborhood-preserving character, the Laplacianfaces approach seems able to capture the intrinsic face manifold structure to a larger extent.
        
The figure shows an example in which face images with various poses and expressions of a person are mapped into a two-dimensional subspace. The dataset contains 1965 face images taken from sequential frames of a short video. The size of each image is 20×28 pixels, with 256 gray levels per pixel; thus, each face image is represented by a point in the 560-dimensional ambient space. However, these images are believed to come from a submanifold with few degrees of freedom. We leave out 10 samples for testing, and the remaining 1955 samples are used to learn the Laplacianfaces. As can be seen, the face images are mapped into a two-dimensional space with continuous changes in pose and expression.
        
Representative face images are shown in different parts of the space. The face images are divided into two parts: the left part includes the face images with open mouths, and the right part includes the face images with closed mouths. This is because, in trying to preserve local structure in the embedding, the Laplacianfaces implicitly emphasize the natural clusters in the data. Specifically, neighboring points in image space are mapped nearer in the face space, and faraway points in image space are mapped farther apart. The bottom images correspond to points along the right path (linked by a solid line), illustrating one particular mode of variability in pose.
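The two-dimensional embedding described here can be approximated with scikit-learn's SpectralEmbedding (Laplacian eigenmaps); note this is a nonlinear stand-in, whereas Laplacianfaces learns a linear projection:

    import numpy as np
    from sklearn.manifold import SpectralEmbedding

    def embed_faces(images: np.ndarray, n_neighbors: int = 10) -> np.ndarray:
        """images: (n_samples, 560) flattened 20x28 face images.
        Returns (n_samples, 2) manifold coordinates."""
        return SpectralEmbedding(n_components=2,
                                 n_neighbors=n_neighbors).fit_transform(images)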
        
CONCLUSIONS
        
3D face recognition has matured to match the performance of 2D face recognition. When used together with 2D, it makes the face a very strong biometric: face as a modality is widely acceptable to the general public, and face recognition technology is able to meet the accuracy demands of a wide range of applications. While the accuracy of algorithms has met requirements in controlled tests, 3D face recognition systems have yet to be fully validated under real application scenarios; for certain scenarios, such as surveillance and access control, systems are being tested in the field. The algorithms in these scenarios will need to be improved to perform robustly under various occlusions with masked projection. The proposed system is able to work with good performance under substantial occlusions, expressions, and small pose variations. When we examine the failures, we see that if occlusions are so large that the nose area is totally invisible, the initial alignment becomes impossible; similarly, if the face is rotated by more than 30 degrees, it becomes difficult to accomplish the initial alignment.
        
Figures at a glance

Figure 1, Figure 2, Figure 3, Figure 4
        
References
        
A. F. Abate, M. Nappi, D. Riccio, and G. Sabatino, "2D and 3D face recognition: A survey," Pattern Recognit. Lett., vol. 28, no. 14, pp. 1885–1906, 2007.
A. F. Abate, S. Ricciardi, and G. Sabatino, "3D face recognition in an ambient intelligence environment scenario," in Face Recognition, K. Delac and M. Grgic, Eds. Vienna, Austria: I-Tech, 2007, pp. 1–14.
N. Alyuz, B. Gokberk, and L. Akarun, "A 3D face recognition system for expression and occlusion invariance," in Proc. Int. Conf. Biometrics: Theory, Applications and Systems (BTAS), 2008, pp. 1–7.
N. Alyuz, B. Gokberk, and L. Akarun, "Regional registration for expression resistant 3-D face recognition," IEEE Trans. Inf. Forensics Security, vol. 5, no. 3, pp. 425–440, Sep. 2010.
N. Alyuz, B. Gokberk, and L. Akarun, "Adaptive model based 3D face registration for occlusion invariance," in Proc. Eur. Conf. Computer Vision Workshops, Benchmarking Facial Image Analysis Technologies (BeFIT), Florence, Italy, 2012.
N. Alyuz, B. Gokberk, L. Spreeuwers, R. Veldhuis, and L. Akarun, "Robust 3D face recognition in the presence of realistic occlusions," in Proc. Int. Conf. Biometrics (ICB), 2012, pp. 111–118.
P. Belhumeur, J. Hespanha, and D. Kriegman, "Eigenfaces vs. Fisherfaces: Recognition using class specific linear projection," IEEE Trans. Pattern Anal. Mach. Intell., vol. 19, no. 7, pp. 711–720, Jul. 1997.
P. J. Besl and H. D. McKay, "A method for registration of 3-D shapes," IEEE Trans. Pattern Anal. Mach. Intell., vol. 14, no. 2, pp. 239–256, Feb. 1992.
K. W. Bowyer, K. Chang, and P. Flynn, "A survey of approaches and challenges in 3D and multi-modal face recognition," Comput. Vis. Image Understand., vol. 101, no. 1, pp. 1–15, 2006.
K. Chang, W. Bowyer, and P. Flynn, "Multiple nose region matching for 3D face recognition under varying facial expression," IEEE Trans. Pattern Anal. Mach. Intell., vol. 28, no. 10, pp. 1695–1700, Oct. 2006.