ISSN: 2319-8753 (Online), 2347-6710 (Print)


Face Detection Approaches: A Survey

Mitul Modi, Fedrik Macwan
PG Scholars, Dept. of Electrical, The M S University, Baroda, Gujarat, India

Published in the International Journal of Innovative Research in Science, Engineering and Technology


Nowadays the face plays a major role in social interaction, conveying a person's identity and feelings. Humans have a marvellous ability to identify different faces, far beyond that of machines, so face detection plays a major role in face recognition, facial expression recognition, head-pose estimation, human-computer interaction, and related tasks. Face detection is a computer technology that determines the location and size of a human face in an arbitrary (digital) image. This paper presents a comprehensive and critical survey of the algorithms through which face detection is possible.


Keywords: digital image processing, face detection, localizing faces.


Early efforts in face detection date back to the beginning of the 1970s, when simple heuristic and anthropometric techniques [1] were used. These methods relied on restrictive assumptions such as a frontal face, a fixed or plain background, or passport-photograph scenarios; if any of these conditions changed, faces in the image were not detected. At the beginning of the 1990s [2], work on face recognition and video coding systems increased the need for face detection, and new techniques were proposed. More robust segmentation schemes have since been presented, particularly those using motion, color, and generalized information. The use of statistics and neural networks has also enabled faces to be detected in cluttered scenes at different distances from the camera.
The concept of face detection can be implemented in various ways, but two main steps are generally used. The first step is to localize the face, that is, to highlight those parts of an image where a face may be present. The second step is to verify whether the highlighted parts actually contain a face [3]. The concept may seem very simple, but its implementation runs into several difficulties [4, 5]: scale, rotation, pose, expression, the presence or absence of structural components, occlusion, illumination variation, and imaging conditions.


Face detection is a computer technology that determines the location and size of a human face in an arbitrary (digital) image. The facial features are detected, and any other objects such as trees, buildings, and bodies are ignored. Face detection can be regarded as a 'specific' case of object-class detection, where the task is to find the locations and sizes of all objects in an image that belong to a given class. It can also be regarded as a more 'general' case of face localization, in which the task is to find the locations and sizes of a known number of faces (usually one). There are basically two types of approaches to detecting the facial part of a given image: feature-based and image-based. Feature-based approaches try to extract features of the image and match them against knowledge of facial features, while image-based approaches try to find the best match between training and test images.
A. Feature base Approach
1) Active Shape Model
Active shape models focus on complex non-rigid features such as the actual physical and higher-level appearance of features [6]. That is, Active Shape Models (ASMs) aim to automatically locate landmark points that define the shape of any statistically modelled object in an image; for faces, these landmarks correspond to features such as the eyes, lips, nose, mouth, and eyebrows. The training stage of an ASM involves building a statistical facial model from a training set containing images with manually annotated landmarks. ASMs can be classified into three groups: snakes, PDMs, and deformable templates.
1.1) Snakes:
The first type uses a generic active contour called a snake, first introduced by Kass et al. in 1987 [7]. Snakes are used to identify head boundaries [8,9,10,11,12]. To achieve this, a snake is first initialized in the proximity of a head boundary; it then locks onto nearby edges and subsequently assumes the shape of the head. The evolution of a snake is achieved by minimizing an energy function, Esnake (by analogy with physical systems), denoted as
Esnake = Einternal + Eexternal, where Einternal and Eexternal are the internal and external energy functions.
Internal energy is the part that depends on the intrinsic properties of the snake and defines its natural evolution, typically shrinking or expanding. The external energy counteracts the internal energy and enables the contour to deviate from this natural evolution and eventually assume the shape of nearby features, such as the head boundary, at a state of equilibrium.
There are two main considerations in forming snakes: the selection of energy terms and the energy minimization. Elastic energy [8, 9, 11, 12] is commonly used as the internal energy; it varies with the distance between control points on the snake, giving the contour an elastic-band characteristic that causes it to shrink or expand. The external energy, on the other hand, relies on image features. Energy minimization is done by optimization techniques such as steepest gradient descent, which requires heavy computation. Huang and Chen [9] and Lam and Yan [13] both employ fast iteration methods based on greedy algorithms. Snakes have some demerits: the contour often becomes trapped on false image features, and snakes are not suitable for extracting non-convex features.
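The greedy iteration idea can be sketched as follows. This is a minimal illustrative implementation, not the cited authors' exact algorithms: each control point of a closed contour greedily moves to whichever neighbouring pixel minimises an elastic internal term plus an image (external) term. The energy surface here is synthetic, chosen so the snake contracts toward a target centre.

```python
import numpy as np

def greedy_snake_step(points, energy, alpha=1.0):
    """One greedy iteration: each control point moves to the 8-neighbour (or
    stays put) that minimises elastic internal energy plus image energy."""
    pts = points.copy()
    n = len(pts)
    # mean spacing between consecutive control points (closed contour)
    diffs = np.diff(np.vstack([pts, pts[:1]]), axis=0)
    mean_d = np.linalg.norm(diffs, axis=1).mean()
    for i in range(n):
        prev_pt = pts[(i - 1) % n]
        best, best_e = pts[i].copy(), np.inf
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                y, x = pts[i][0] + dy, pts[i][1] + dx
                if not (0 <= y < energy.shape[0] and 0 <= x < energy.shape[1]):
                    continue
                cand = np.array([y, x], dtype=float)
                # internal term: keep control points evenly spaced (elasticity)
                e_int = (np.linalg.norm(cand - prev_pt) - mean_d) ** 2
                # external term: low values of the energy image attract the snake
                e_ext = energy[int(y), int(x)]
                e = alpha * e_int + e_ext
                if e < best_e:
                    best_e, best = e, cand
        pts[i] = best
    return pts

# toy external energy: distance to a target centre, so the snake contracts
yy, xx = np.mgrid[0:41, 0:41]
energy = np.hypot(yy - 20, xx - 20)
angles = np.linspace(0, 2 * np.pi, 16, endpoint=False)
snake = np.stack([20 + 15 * np.sin(angles), 20 + 15 * np.cos(angles)], axis=1)
for _ in range(10):
    snake = greedy_snake_step(snake, energy)
```

A real snake would use the negative gradient magnitude of the image as the external energy so the contour locks onto edges; the distance surface above just makes the contraction easy to see.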
1.2) Deformable Templates:
Deformable templates were introduced by Yuille et al. [14] to take into account the a priori knowledge of facial features and to improve on the performance of snakes. Locating a facial feature boundary is not an easy task, because the local evidence of facial edges is difficult to organize into a sensible global entity using generic contours, and the low brightness contrast around some features makes edge detection problematic. Yuille et al. [14] took the concept of snakes a step further by incorporating global information about the eye to improve the reliability of the extraction process. Deformation is guided by local valleys, edges, peaks, and brightness [15]. Beyond the face boundary, extraction of salient features (eyes, nose, mouth, and eyebrows) remains a great challenge of face recognition.
E = Ev + Ee + Ep + Ei + Einternal, where Ev, Ee, Ep, and Ei are the external energies due to valleys, edges, peaks, and image brightness, and Einternal is the internal energy.
1.3) PDM (Point Distribution Model)
Independently of computerized image analysis, and before ASMs were developed, researchers developed statistical models of shape [30]. The idea is that once shapes are represented as vectors, standard statistical methods can be applied to them just like any other multivariate object. These models learn allowable constellations of shape points from training examples and use principal components to build what is called a Point Distribution Model. They have been used in diverse ways, for example for categorizing Iron Age brooches [18].
Ideal Point Distribution Models can only deform in ways that are characteristic of the object. Cootes and his colleagues were seeking models which do exactly that, so that if a beard, say, covers the chin, the shape model can "override the image" to approximate the position of the chin under the beard. It was therefore natural (but perhaps only in retrospect) to adopt Point Distribution Models. This synthesis of ideas from image processing and statistical shape modelling led to the Active Shape Model. The first parametric statistical shape model for image analysis based on principal components of inter-landmark distances was presented by Cootes and Taylor in [19]. Building on this approach, Cootes, Taylor, and their colleagues released a series of papers that culminated in what we call the classical Active Shape Model [20-24].
2) Low Level Analysis
These methods are based on low-level visual features such as color, intensity, edges, and motion.
2.1) Skin Color Base
Color is a vital feature of human faces. Using skin color as a feature for tracking a face has several advantages: color processing is much faster than processing other facial features, and under certain lighting conditions color is orientation invariant. This property makes motion estimation much easier, because only a translation model is needed [25]. Tracking human faces using color as a feature also has problems: the color representation of a face obtained by a camera is influenced by many factors (ambient light, object movement, etc.).
Three main face detection algorithms are available, based on the RGB, YCbCr, and HSI color space models. The implementation of these algorithms involves three main steps:
(1) Classify the skin region in the color space,
(2) Apply threshold to mask the skin region and
(3) Draw bounding box to extract the face image.
Crowley and Coutaz [26] suggested one of the simplest algorithms for detecting skin pixels. The perceived human color varies as a function of the relative direction to the illumination. The pixels of a skin region can be detected using a normalized color histogram, normalized for changes in intensity by dividing by luminance. An [R, G, B] vector is thus converted into an [r, g] vector of normalized color, which provides a fast means of skin detection. This algorithm fails when other skin regions such as legs or arms are present.
Chai and Ngan [27] suggested a skin color classification algorithm in the YCbCr color space. Research found that pixels belonging to a skin region have similar Cb and Cr values. Thresholds are therefore chosen as [Cr1, Cr2] and [Cb1, Cb2], and a pixel is classified as skin tone if its [Cr, Cb] values fall within these thresholds. The skin color distribution then gives the face portion of the color image. This algorithm also has the constraint that the face should be the only skin region in the image. Kjeldsen and Kender defined a color predicate in HSV color space to separate skin regions from the background [28]. Skin color classification in HSI color space is the same as in YCbCr, but here the responsible values are hue (H) and saturation (S). Similarly, thresholds are chosen as [H1, S1] and [H2, S2], and a pixel is classified as skin tone if its [H, S] values fall within these thresholds; this distribution gives the localized face image. Like the two algorithms above, this algorithm has the same constraint.
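A minimal sketch of the three steps, classify, mask, bounding box, in the YCbCr colour space. The threshold ranges Cb in [77, 127] and Cr in [133, 173] are commonly quoted illustrative values, not universal constants; real systems tune them.

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """Convert an H x W x 3 uint8 RGB image to YCbCr (ITU-R BT.601 weights)."""
    rgb = rgb.astype(np.float64)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return np.stack([y, cb, cr], axis=-1)

def skin_mask(rgb, cb_range=(77, 127), cr_range=(133, 173)):
    """Steps (1) and (2): classify skin-tone pixels and return a boolean mask."""
    ycbcr = rgb_to_ycbcr(rgb)
    cb, cr = ycbcr[..., 1], ycbcr[..., 2]
    return ((cb_range[0] <= cb) & (cb <= cb_range[1]) &
            (cr_range[0] <= cr) & (cr <= cr_range[1]))

def bounding_box(mask):
    """Step (3): bounding box (top, left, bottom, right) of the skin region."""
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None
    return ys.min(), xs.min(), ys.max(), xs.max()
```

The same structure carries over to the HSI variant by swapping the colour conversion and thresholding on [H, S] instead of [Cr, Cb].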
2.2) Motion Base
When a video sequence is available, motion information can be used to locate moving objects. Moving silhouettes such as the face and body parts can be extracted by simply thresholding accumulated frame differences [29]. Besides face regions, facial features can also be located by frame differences [30, 31].
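Thresholding accumulated frame differences can be sketched in a few lines; the threshold value below is arbitrary and would be tuned in practice.

```python
import numpy as np

def motion_mask(frames, threshold=25):
    """Accumulate absolute differences between consecutive gray-scale frames
    and threshold the result to expose moving silhouettes."""
    frames = np.asarray(frames, dtype=np.float64)
    acc = np.abs(np.diff(frames, axis=0)).sum(axis=0)
    return acc > threshold
```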
2.3) Gray Scale Base
Gray information within a face can also be treated as an important feature. Facial features such as eyebrows, pupils, and lips generally appear darker than their surrounding facial regions. Various recent feature extraction algorithms [32-34] search for local gray minima within segmented facial regions. In these algorithms, the input images are first enhanced by contrast stretching and gray-scale morphological routines to improve the quality of local dark patches and thereby make detection easier; the extraction of dark patches is then achieved by low-level gray-scale thresholding. Yang and Huang [35] presented a different approach based on the gray-scale behaviour of faces in pyramid (mosaic) images. This system uses a hierarchical face location procedure consisting of three levels: the higher two levels are based on mosaic images at different resolutions, and in the lowest level an edge detection method is applied. Moreover, this algorithm gives a fine response against complex backgrounds where the size of the face is unknown.
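A simplified sketch of the contrast-stretch-then-threshold idea for dark-patch extraction (the gray-scale morphological enhancement step described above is omitted here; the percentile limits and threshold are illustrative):

```python
import numpy as np

def dark_patches(gray, stretch_lo=2, stretch_hi=98, threshold=0.2):
    """Contrast-stretch a gray image to [0, 1] using percentile limits, then
    keep the locally dark pixels (candidate eyebrows, pupils, lips)."""
    lo, hi = np.percentile(gray, [stretch_lo, stretch_hi])
    stretched = np.clip((gray - lo) / max(hi - lo, 1e-9), 0.0, 1.0)
    return stretched < threshold
```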
2.4) Edge Base
Face detection based on edges was introduced by Sakai et al. [36]. This work analysed line drawings of faces from photographs, aiming to locate facial features. Later, Craw et al. [37] proposed a hierarchical framework based on Sakai et al.'s work to trace a human head outline, and remarkable work has since been carried out by many researchers in this area. The method suggested by Anila and Devarajan [38] is very simple and fast. They proposed a framework consisting of three steps: first, the images are enhanced by applying a median filter for noise removal and histogram equalization for contrast adjustment; second, the edge image is constructed from the enhanced image by applying the Sobel operator; then a novel edge tracking algorithm is applied to extract sub-windows from the enhanced image based on edges. Finally they used a Back-Propagation Neural Network (BPN) algorithm to classify each sub-window as either face or non-face.
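The edge-image construction step (the Sobel operator) can be sketched as follows. A real implementation would use a vectorised convolution; the explicit loop keeps the arithmetic visible.

```python
import numpy as np

def sobel_edges(gray):
    """Gradient-magnitude edge image via the Sobel operator (valid region only,
    so the output is 2 pixels smaller in each dimension)."""
    gray = gray.astype(np.float64)
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])  # horizontal gradient
    ky = kx.T                                            # vertical gradient
    h, w = gray.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = gray[i:i + 3, j:j + 3]
            gx[i, j] = (patch * kx).sum()
            gy[i, j] = (patch * ky).sum()
    return np.hypot(gx, gy)
```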
3) Feature Analysis
These algorithms aim to find structural features that exist even when the pose, viewpoint, or lighting conditions vary, and then use these to locate faces. These methods are designed mainly for face localization.
3.1) Feature Searching
3.1.1) Viola Jones Method
Paul Viola and Michael Jones [38] presented an approach for object detection that minimizes computation time while achieving high detection accuracy: a fast and robust method for face detection, 15 times quicker than any technique at the time of release, with 95% accuracy at around 17 fps. The technique relies on simple Haar-like features that are evaluated quickly through a new image representation. Based on the concept of an "Integral Image", it generates a large set of features and uses the boosting algorithm AdaBoost to reduce the overcomplete set, and the introduction of a degenerate tree of boosted classifiers provides robust and fast inference. The detector is applied in a scanning fashion on gray-scale images, and both the scanned window and the features it evaluates can be scaled.
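The "Integral Image" that makes Haar-like features cheap can be sketched as follows: once the cumulative sums are precomputed, any rectangle sum, and therefore any two-rectangle feature, costs a constant number of array lookups regardless of rectangle size.

```python
import numpy as np

def integral_image(img):
    """ii[y, x] = sum of img[0:y, 0:x]; padded with a zero row/column so a
    rectangle sum needs exactly four lookups."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
    return ii

def rect_sum(ii, y, x, h, w):
    """Sum of the h x w rectangle with top-left corner (y, x), in O(1)."""
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

def haar_two_rect_vertical(ii, y, x, h, w):
    """Two-rectangle Haar-like feature: left half-sum minus right half-sum."""
    return rect_sum(ii, y, x, h, w // 2) - rect_sum(ii, y, x + w // 2, h, w // 2)
```

The full Viola-Jones detector evaluates thousands of such features per window and chains the AdaBoost-selected ones into a cascade; this sketch only shows the constant-time feature evaluation that makes it tractable.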
3.1.2) Gabor Feature Method
Sharif et al. [39] proposed an Elastic Bunch Graph Map (EBGM) algorithm that implements face detection using Gabor filters. The proposed system applies 40 different Gabor filters to an image, yielding 40 filtered images with different angles and orientations. Next, the maximum-intensity points in each filtered image are calculated and marked as fiducial points. The system reduces these points according to the distance between them, then calculates the distances between the reduced points using the distance formula. Finally, the distances are compared with the database; if a match occurs, the faces in the image are detected. The equation of the Gabor filter [40] is shown below:
g(x, y; λ, θ, ψ, σ, γ) = exp(−(x′² + γ²y′²) / (2σ²)) cos(2πx′/λ + ψ), where x′ = x cos θ + y sin θ and y′ = −x sin θ + y cos θ; λ is the wavelength of the sinusoidal factor, θ gives the orientation, ψ is the phase offset, σ is the standard deviation of the Gaussian envelope, and γ is the spatial aspect ratio.
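A sketch of a 40-filter Gabor bank, 5 wavelengths times 8 orientations to match the count in the text; the specific wavelength, sigma, and aspect-ratio values are illustrative choices, not those of the cited system.

```python
import numpy as np

def gabor_kernel(size, wavelength, theta, sigma=4.0, gamma=0.5, psi=0.0):
    """Real part of a Gabor filter: a Gaussian envelope multiplied by a
    cosine carrier oriented at angle theta (radians)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    x_t = x * np.cos(theta) + y * np.sin(theta)    # rotated coordinates
    y_t = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(x_t**2 + gamma**2 * y_t**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * x_t / wavelength + psi)
    return envelope * carrier

# a bank of 40 filters: 5 wavelengths x 8 orientations
bank = [gabor_kernel(21, wl, th)
        for wl in (4, 6, 8, 10, 12)
        for th in np.linspace(0, np.pi, 8, endpoint=False)]
```

Convolving an image with each kernel in the bank produces the 40 filtered images from which the fiducial points are taken.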
3.2) Constellation Method
All the methods discussed so far can track faces, but locating faces of various poses against complex backgrounds remains truly difficult. To reduce this difficulty, investigators group facial features into face-like constellations using more robust modelling approaches such as statistical analysis. Various types of face constellations have been proposed by Burl et al. [41], who established the use of statistical shape theory on features detected with a multiscale Gaussian derivative filter. Huang et al. [42] also apply a Gaussian filter for pre-processing in a framework based on image feature analysis.
B. Image Base Approach
1) Neural Network
Neural networks have gained much attention in many pattern recognition problems, such as OCR, object recognition, and autonomous robot driving. Since face detection can be treated as a two-class pattern recognition problem, various neural network algorithms have been proposed. The advantage of using neural networks for face detection is the feasibility of training a system to capture the complex class-conditional density of face patterns. One demerit, however, is that the network architecture has to be extensively tuned (number of layers, number of nodes, learning rates, etc.) to get exceptional performance. An early hierarchical neural network was proposed by Agui et al. [43]. The first stage has two parallel subnetworks whose inputs are filtered intensity values from the original image; the inputs to the second-stage network consist of the outputs of the subnetworks together with extracted feature values, and the output of the second stage indicates the presence of a face in the input region. Propp and Samal developed one of the earliest neural networks for face detection [44]. Their network consists of four layers, with 1,024 input units, 256 units in the first hidden layer, eight units in the second hidden layer, and two output units.
Feraud and Bernier presented a detection method using autoassociative neural networks [45], [46], [47]. The idea is based on [48], which shows that an autoassociative network with five layers is able to perform a nonlinear principal component analysis. One autoassociative network is used to detect frontal-view faces, and another is used to detect faces turned up to 60 degrees to the left or right of the frontal view. Later, Lin et al. presented a face detection system using a probabilistic decision-based neural network (PDBNN) [49]. The architecture of a PDBNN is similar to a radial basis function (RBF) network, with modified learning rules and a probabilistic interpretation.
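The two-class face/non-face formulation can be illustrated with a tiny one-hidden-layer network trained by gradient descent. The patches here are synthetic toys (a "face" just carries dark eye and mouth bands), and the architecture is not that of any of the cited systems; the point is only the two-class training loop.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_patches(face, n=100, size=5):
    """Synthetic 5x5 windows standing in for scanned sub-windows:
    'faces' have darker eye and mouth rows, non-faces are uniform noise."""
    x = rng.uniform(0.4, 0.6, (n, size, size))
    if face:
        x[:, 1, :] -= 0.3   # dark eye band
        x[:, 3, :] -= 0.3   # dark mouth band
    return x.reshape(n, -1)

X = np.vstack([make_patches(True), make_patches(False)])
y = np.concatenate([np.ones(100), np.zeros(100)])   # 1 = face, 0 = non-face

# one hidden layer, sigmoid output, plain gradient descent on cross-entropy
W1 = rng.normal(0, 0.1, (25, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.1, (8, 1));  b2 = np.zeros(1)
lr = 1.0
for _ in range(1000):
    h = np.tanh(X @ W1 + b1)                   # hidden activations
    p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))   # face probability, shape (n, 1)
    d = (p.ravel() - y) / len(y)               # dLoss/dlogit
    dW2 = h.T @ d[:, None]; db2 = d.sum()
    dh = (d[:, None] @ W2.T) * (1.0 - h ** 2)  # backprop through tanh
    dW1 = X.T @ dh; db1 = dh.sum(axis=0)
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

accuracy = ((p.ravel() > 0.5).astype(float) == y).mean()
```

Real systems such as [44] scan this kind of classifier over every window of the image pyramid; training data and tuning, as the text notes, dominate the engineering effort.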
2) Linear Sub Space Method
2.1) Eigen faces Method
An early example of employing eigenvectors in face recognition was given by Kohonen [50], in which a simple neural network was demonstrated to perform face recognition for aligned and normalized face images. Kirby and Sirovich suggested that images of faces can be linearly encoded using a modest number of basis images [51]; the idea was arguably proposed first by Pearson in 1901 [52] and then by Hotelling in 1933 [53]. Given a collection of n-by-m-pixel training images represented as vectors of size m x n, basis vectors spanning an optimal subspace are determined such that the mean square error between the projections of the training images onto this subspace and the original images is minimized. They call the set of optimal basis vectors Eigenpictures, since these are simply the eigenvectors of the covariance matrix computed from the vectorized face images in the training set. Experiments with a set of 100 images showed that a face image of 91 x 50 pixels can be effectively encoded using only 50 Eigenpictures while retaining a reasonable likeness (i.e., capturing 95 percent of the variance).
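The Eigenpictures construction can be sketched directly with an SVD of the mean-centred data matrix; random arrays stand in for real face images here.

```python
import numpy as np

def eigenpictures(images, k):
    """Top-k eigenvectors ('Eigenpictures') of the covariance matrix of the
    vectorised images, computed via SVD of the mean-centred data matrix."""
    X = images.reshape(len(images), -1).astype(np.float64)
    mean = X.mean(axis=0)
    # rows of Vt are the eigenvectors of the covariance of the centred data
    _, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:k]

def project(image, mean, basis):
    """Encode an image as k coefficients in the Eigenpicture subspace."""
    return basis @ (image.ravel().astype(np.float64) - mean)

def reconstruct(coeffs, mean, basis):
    """Approximate the image back from its k coefficients."""
    return mean + coeffs @ basis
```

The mean-square-error optimality mentioned in the text shows up here as the fact that truncating to the top k singular vectors gives the best rank-k reconstruction of the training set.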
3) Statistical Approach
3.1) Support Vector Machine (SVM)
SVMs were first introduced for face detection by Osuna et al. [54]. SVMs are a new paradigm for training polynomial function, neural network, or radial basis function (RBF) classifiers. They work on an induction principle called structural risk minimization, which aims to minimize an upper bound on the expected generalization error. An SVM classifier is a linear classifier in which the separating hyperplane is chosen to minimize the expected classification error on unseen test patterns. In [54], Osuna et al. developed an efficient method to train an SVM for large-scale problems and applied it to face detection. On two test sets of 10,000,000 test patterns of 19 x 19 pixels, their system had slightly lower error rates and ran approximately 30 times faster than the system of Sung and Poggio [55]. SVMs have also been used to detect faces and pedestrians in the wavelet domain.
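The structural-risk idea, a wide margin obtained by penalising ||w||² alongside the hinge loss, can be sketched with a linear SVM trained by subgradient descent. Osuna et al.'s actual trainer solves a quadratic program; this is only an illustration of the objective.

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, lr=0.1, epochs=200):
    """Linear SVM via subgradient descent on the regularised hinge loss.
    Labels y must be in {-1, +1}; the subgradient is averaged over the
    current margin violators."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        margins = y * (X @ w + b)
        mask = margins < 1          # points violating the margin contribute
        if mask.any():
            gw = lam * w - (y[mask, None] * X[mask]).mean(axis=0)
            gb = -y[mask].mean()
        else:
            gw, gb = lam * w, 0.0   # only the regulariser remains
        w -= lr * gw
        b -= lr * gb
    return w, b
```

Predictions are then sign(X @ w + b); with a kernel instead of the raw dot product, the same objective yields the polynomial and RBF variants the text mentions.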


Face detection technology is useful and necessary in a wide range of applications, such as:
o Biometric identification
o Video Conferencing
o Human – Computer Interaction
o Access control Systems


Face detection is currently a very active research area and the technology has come a long way. The last couple of years have shown great advances in algorithms dealing with complex environments such as low-quality gray-scale images and cluttered backgrounds. Some of the best algorithms are still too computationally expensive for real-time processing, but this is likely to change with coming improvements in computer hardware. This paper has presented the numerous feature-based and image-based techniques available to detect human faces. All methods have their own merits and demerits. Moreover, by using image-based methods such as neural networks, SVMs, and PCA, the uncertainty of the features in feature-based approaches can largely be resolved.


[1] T. Sakai, M. Nagao, and T. Kanade, Computer analysis and classification of photographs of human faces, in Proc. First USA—Japan Computer Conference, 1972, p. 2.7.

[2] R. Chellappa, C. L.Wilson, and S. Sirohey. Human and machine recognition of faces: A survey, Proc. IEEE 83, 5, 1995.

[3] Rein-Lien Hsu, Mohamed Abdel-Mottaleb, and Anil K. Jain, "Face Detection in Color Images," IEEE, pp. 1046-1048, 2001.

[4] Ming-Hsuan Yang, David J. Kriegman, and Narendra Ahuja, "Detecting Faces in Images: A Survey," IEEE Trans. PAMI, vol. 24, no. 1, pp. 1-25, Jan 2002.

[5] Pang WaiTian, "Remote Monitoring System," BEHE, Jan 2008.

[6] J. C. Gower, "Generalized Procrustes Analysis," Psychometrika, vol. 40, no. 1, pp. 33-51, March 1975.

[7] M. Kass, A. Witkin, and D. Terzopoulos, Snakes: active contour models, in Proc. of 1st Int Conf. onComputer Vision, London, 1987.

[8] K. M. Lam and H. Yan, "Locating and Extracting the Eye in Human Face Images," 1995.

[9] S. R. Gunn and M. S. Nixon, A dual active contour for head boundary extraction, in IEEE Colloquium on Image Processing for Biometric Measurement, London, Apr. 1994, pp. 6/1.

[10] C. L. Huang and C. W. Chen, Human facial feature extraction for face interpretation and recognition, Pattern Recog. 25, 1992, 1435–1444.

[11] H. Wu, T. Yokoyama, D. Pramadihanto, and M. Yachida, Face and facial feature extraction from colour image, in IEEE Proc. of 2nd Int. Conf. on Automatic Face and Gesture Recognition, Vermont, Oct. 1996, pp. 345–349.

[12] T. Yokoyama, Y. Yagi, and M. Yachida, Facial contour extraction model, in IEEE Proc. of 3rd Int. Conf. on Automatic Face and Gesture Recognition, 1998.

[13] K. M. Lam and H. Yan, Fast greedy algorithm for locating head boundaries, Electron. Lett. 30, 1994, 21–22.

[14] A. L. Yuille, P. W. Hallinan, and D. S. Cohen, Feature extraction from faces using deformable templates, Int. J. Comput. Vision 8, 1992, 99–111.

[15] B. Scassellati, "Eye Finding via Face Detection for a Foveated, Active Vision System," Proc. American Association for Artificial Intelligence (AAAI), 1998.

[16] Ming-Hsuan Yang, David J. Kriegman, NarendraAhuja, Detecting Faces in Images: A Survey., IEEETrans. PAMI, vol. 24, pp. 1-25, Jan 2002.

[17] I. L. Dryden and Kanti V. Mardia. Statistical Shape Analysis. Wiley, 1998.

[18] Christopher G. Small, The Statistical Theory of Shape. Springer, 1996.

[19] T. F. Cootes, D. H. Cooper, C. J. Taylor, and J. Graham, A Trainable Method of Parametric Shape Description, 2nd British Machine Vision Conference, pages 54-61. Springer-Verlag, 1991.

[20] T. F. Cootes and C. J. Taylor, Active Shape Models, 3rd British Machine Vision Conference, pages 266-275. Springer-Verlag, 1992.

[21] T. F. Cootes, C. J. Taylor, D. Cooper, and J. Graham, Training Models of Shape from Sets of Examples, 3rd British Machine Vision Conference, pages 9-18. Springer-Verlag, 1992.

[22] T. F. Cootes and C. J. Taylor, Active Shape Model Search using Local Grey-Level Models: A Quantitative Evaluation, 4th British Machine Vision Conference, pages 639-648. BMVA Press, 1993.

[23] T. F. Cootes, C. J. Taylor, and A. Lanitis, Active Shape Models: Evaluation of a Multi-resolution Method for Improving Image Search, 5th British Machine Vision Conference, pages 327-336, York, 1994.

[24] T. F. Cootes, C. J. Taylor, A. Lanitis, D. H. Cooper, and J. Graham, Building and Using Flexible Models Incorporating Grey-level Information, 4th International Conference on Computer Vision, pages 355-365. IEEE Computer Society Press, 1993.

[25] Sanjay Kr. Singh, D. S. Chauhan, MayankVatsa, Richa Singh. A Robust Skin Color Based Face Detection Algorithm, Tamkang Journal of Science and Engineering, Vol. 6, No. 4, pp. 227-234 (2003)

[26] Crowley, J. L. and Coutaz, J., "Vision for Man Machine Interaction," Robotics and Autonomous Systems, Vol. 19, pp. 347-358 (1997).

[27] Chai, D. and Ngan, K. N., "Face Segmentation Using Skin-Color Map in Videophone Applications," IEEE Transactions on Circuits and Systems for Video Technology, Vol. 9, pp. 551-564 (1999).

[28] Kjeldsen, R. and Kender, J., "Finding Skin in Color Images," Proceedings of the Second International Conference on Automatic Face and Gesture Recognition, pp. 312-317 (1996).

[29] M. J. T. Reinders, P. J. L. van Beek, B. Sankur, and J. C. A. van der Lubbe, "Facial Feature Localization and Adaptation of a Generic Face Model for Model-Based Coding," 1994.

[30] J. L. Crowley and F. Berard, "Multi-modal Tracking of Faces for Video Communications".

[31] B. K. Low and M. K. Ibrahim, "A Fast and Accurate Algorithm for Facial Feature Segmentation," 1997.

[32] P. J. L. Van Beek, M. J. T. Reinders, B. Sankur, and J. C. A. Van Der Lubbe, Semantic segmentation of videophone image sequences, in Proc. of SPIE Int. Conf. on Visual Communications and Image Processing, 1992, pp. 1182–1193.

[33] H. P. Graf, E. Cosatto, D. Gibson, E. Petajan, and M. Kocheisen, Multi-modal system for locating heads and faces, in IEEE Proc. of 2nd Int. Conf. on Automatic Face and Gesture Recognition, Vermont, Oct. 1996, pp. 277–282.

[34] K. M. Lam and H. Yan, Facial feature location and extraction for computerised human face recognition, in Int. Symposium on Information Theory and Its Applications, Sydney, Australia, Nov. 1994.

[35] T. Sakai, M. Nagao, and T. Kanade, Computer analysis and classification of photographs of human faces,in Proc. First USA—Japan Computer Conference, 1972, p. 2.7.

[36] I. Craw, H. Ellis, and J. R. Lishman, Automatic extraction of face-features, Pattern Recog. Lett., Feb. 1987, 183–187.

[37] S. Anila and N. Devarajan, Simple and Fast Face Detection System Based on Edges, International Journal of Universal Computer Sciences, Vol. 1, Iss. 2, 2010, pp. 54-58.

[38] Paul Viola, Michael Jones, Robust Real-Time Face Detection, International Journal of Computer Vision 57(2), 137–154, 2004

[39] Muhammad Sharif, Adeel Khalid, Mudassar Raza, and Sajjad Mohsin, Face Recognition using Gabor Filters, Journal of Applied Computer Science & Mathematics, no. 11 (5)/2011, Suceava, pp. 53-57.

[40] Jie Chen, Shiguang Shan, Peng Yang, Shengye Yan, Xilin Chen, and Wen Gao, Novel Face Detection Method Based on Gabor Features, S. Z. Li et al. (Eds.): Sinobiometrics 2004, LNCS 3338, pp. 90–99, 2004.

[41] M. C. Burl, T. K. Leung, and P. Perona, Face localization via shape statistics, in Int. Workshop on Automatic Face and Gesture Recognition, Zurich, Switzerland, June 1995.

[42] W. Huang, Q. Sun, C. P. Lam, and J. K. Wu, A robust approach to face and eyes detection from images with cluttered background, in Proc. of International Conference on Pattern Recognition, 1998.

[43] T. Agui, Y. Kokubo, H. Nagashashi, and T. Nagao, "Extraction of Face Recognition from Monochromatic Photographs Using Neural Networks," Proc. Second Int'l Conf. Automation, Robotics, and Computer Vision, vol. 1, pp. 18.8.1-18.8.5, 1992.

[44] M. Propp and A. Samal, "Artificial Neural Network Architectures for Human Face Detection," Intelligent Eng. Systems through Artificial Neural Networks, vol. 2, 1992.

[45] R. Féraud, H. Wechsler, P. J. Phillips, V. Bruce, F. Fogelman-Soulie, and T. S. Huang, "PCA, Neural Networks and Estimation for Face Detection," Face Recognition: From Theory to Applications, vol. 163, pp. 424-432, 1998.

[46] R. Féraud and O. Bernier, "Ensemble and Modular Approaches for Face Detection: A Comparison," Advances in Neural Information Processing Systems 10, M. I. Jordan, M. J. Kearns, and S. A. Solla, eds., pp. 472-478, MIT Press, 1998.

[47] R. Féraud, O. J. Bernier, J.-E. Villet, and M. Collobert, "A Fast and Accurate Face Detector Based on Neural Networks," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 22, no. 1, pp. 42-53, Jan. 2001.

[48] M. A. Kramer, "Nonlinear Principal Component Analysis Using Autoassociative Neural Networks," Am. Inst. Chemical Eng. J., vol. 37, no. 2, pp. 233-243, 1991.

[49] S.-H. Lin, S.-Y. Kung, and L.-J. Lin, "Face Recognition/Detection by Probabilistic Decision-Based Neural Network," IEEE Trans. Neural Networks, vol. 8, no. 1, pp. 114-132, 1997.

[50] T. Kohonen, Self-Organization and Associative Memory. Springer, 1989.

[51] M. Kirby and L. Sirovich, "Application of the Karhunen-Loève Procedure for the Characterization of Human Faces," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 12, no. 1, pp. 103-108, Jan. 1990.

[52] K. Pearson, "On Lines and Planes of Closest Fit to Systems of Points in Space," Philosophical Magazine, vol. 2, pp. 559-572, 1901.

[53] H. Hotelling, "Analysis of a Complex of Statistical Variables into Principal Components," J. Educational Psychology, vol. 24, pp. 417-441, 498-520, 1933.

[54] E. Osuna, R. Freund, and F. Girosi, "Training Support Vector Machines: An Application to Face Detection," Proc. IEEE Conf. Computer Vision and Pattern Recognition, pp. 130-136, 1997.

[55] K.-K. Sung and T. Poggio, "Example-Based Learning for View-Based Human Face Detection," Technical Report AI Memo 1521, Massachusetts Inst. of Technology AI Lab, 1994.

[56] Srinivasulu Asadi and Ch. D. V. Subba Rao, "A Comparative Study of Face Recognition with Principal Component Analysis and Cross-Correlation Technique," International Journal of Computer Applications (0975-8887), Volume 10, No. 8, November 2010.

[57] M. Turk and A. Pentland, "Eigenfaces for Recognition," J. Cognitive Neuroscience, vol. 3, no. 1, pp. 71-86, 1991.