ISSN: Online 2320-9801, Print 2320-9798


Periocular and Iris Feature Encoding - A Survey

Vibha S Rao1, P Ramesh Naidu2
  1. Student, Department of CSE, Sri Venkateshwara College of Engineering, Bangalore, India
  2. Assistant Professor, Department of CSE, Sri Venkateshwara College of Engineering, Bangalore, India

International Journal of Innovative Research in Computer and Communication Engineering

Abstract

With the fast development of video hardware and software in recent years, intelligent video systems have been widely used in industry, transportation, security, and other fields. At the same time, many biometric technologies, comprising automated methods for uniquely recognizing people based on their physical or behavioural traits such as face, fingerprint, palm print, finger-knuckle print, and gait, are also based on video or image analysis. Iris recognition is emerging as one of the important biometrics-based identification methods. Several crucial factors favour iris biometrics: rich and unique textures, non-invasiveness, stability of the iris pattern throughout a person’s lifetime, public acceptance, and the availability of user-friendly capture devices. These factors have attracted researchers to this evolving field over the past decade. Iris recognition consists of iris localization, normalization, encoding, and comparison. In this paper, the segmentation of periocular features and the encoding stage of iris recognition are analysed.

Keywords

Iris features, Periocular features, Segmentation, GeoKey encoding

INTRODUCTION

In imaging science, image processing is any form of signal processing for which the input is an image, such as a photograph or a video frame; the output may be either an image or a set of characteristics or parameters related to the image. The acquisition of images is referred to as imaging.
Steps in an image processing technique:
1. Image acquisition is the first process. Generally this stage involves pre-processing such as scaling.
2. Image enhancement: the idea is to bring out detail that is obscured, or simply to highlight certain features of interest in an image.
3. Image restoration is an area that deals with improving the appearance of an image. Unlike enhancement, which is subjective, image restoration is objective. Image restoration techniques tend to be based on mathematical or probabilistic models of image degradation.
4. Color image processing.
5. Wavelets are the foundation for representing images at various degrees of resolution.
6. Compression deals with techniques for reducing the storage required to save an image or the bandwidth required to transmit it.
7. Morphological processing deals with tools for extracting image components that are useful in the representation and description of shape.
8. Segmentation procedures partition an image into its constituent parts or objects.
9. Representation and description almost always follow the output of the segmentation stage, which is usually raw pixel data constituting the boundary of a region. Representation first decides whether the data should be represented as a boundary or as a complete region. Choosing a representation is only part of the solution for transforming raw data into a form suitable for subsequent computer processing; a method must also be specified for describing the data so that features of interest are highlighted.
Description, or feature selection, deals with extracting attributes that yield some quantitative information of interest or are basic for differentiating one class of objects from another.
10. Object recognition: the encoding of the captured image takes place, and the encoded image is then compared with the stored template for recognition. A minimal sketch of such a pipeline follows this list.
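As an illustration of these stages, the following minimal sketch chains a few of them together with OpenCV (acquisition, enhancement, segmentation, and representation/description). The input file name "eye.png", the target size, and the choice of Otsu thresholding are placeholders for illustration only, not part of any method surveyed in this paper.

import cv2

# 1. Acquisition (here simply read from disk) and pre-processing (scaling).
img = cv2.imread("eye.png", cv2.IMREAD_GRAYSCALE)
img = cv2.resize(img, (320, 240))

# 2. Enhancement: histogram equalization brings out obscured detail.
enhanced = cv2.equalizeHist(img)

# 8. Segmentation: Otsu thresholding partitions the image into dark
#    (pupil-like) and bright regions.
_, mask = cv2.threshold(enhanced, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

# 9. Representation and description: region boundaries as contours,
#    described here by their area and bounding box.
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for c in contours:
    print(cv2.contourArea(c), cv2.boundingRect(c))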
Identity management refers to the challenge of providing authorized users with secure and easy access to information and services across a variety of networked systems. A reliable identity management system is a critical component in several applications that render their services only to legitimate users. Examples of such applications include physical access control to a secure facility, e-commerce, access to computer networks and welfare distribution. The primary task in an identity management system is the determination of an individual’s identity. Traditional methods of establishing a person’s identity include knowledge-based (e.g., passwords) and token-based (e.g., ID cards) mechanisms. These surrogate representations of the identity can easily be lost, shared or stolen. Therefore, they are not sufficient for identity verification in the modern day world. Biometrics offers a natural and reliable solution to the problem of identity determination by recognizing individuals based on their physiological and/or behavioural characteristics that are inherent to the person.
Vein matching, also called vascular technology, is a technique based on analysing the patterns of blood vessels visible at the surface of the skin, but these patterns generally do not provide enough data points for critical verification decisions. Face recognition does not work well under poor lighting, with sunglasses, long hair, or other objects partially covering the subject’s face, or with low-resolution images. Another serious disadvantage is that many systems are less effective when facial expressions vary.
One limitation of DNA matching is related to misconceptions about what a DNA match really means. Matching DNA from a crime scene to DNA taken from a suspect is not an absolute guarantee of the suspect's guilt.
Iris recognition is a biometric technology based on a physiological characteristic of the human body. Compared with recognition based on the fingerprint, palm print, face, voice, and so on, the iris has advantages such as uniqueness, stability, a high recognition rate, and non-invasiveness. Iris patterns have now been tested in many field and laboratory trials, producing no false matches in several million comparison tests. These characteristics make the iris very attractive for use as a biometric for identifying individuals.
The iris is a thin circular diaphragm, which lies between the cornea and the lens of the human eye. The iris is perforated close to its centre by a circular aperture known as the pupil. The function of the iris is to control the amount of light entering through the pupil, and this is done by the sphincter and the dilator muscles, which adjust the size of the pupil. The average diameter of the iris is 12 mm, and the pupil size can vary from 10% to 80% of the iris diameter.
The human iris begins to form in the third month of gestation and its structure is complete by the eighth month, even though the colour and pigmentation continue to develop throughout the first year after birth. After that, the structure of the iris remains stable throughout a person’s life, except in cases of direct physical damage or changes caused by eye surgery. This makes an iris pattern as unique as a fingerprint; a further advantage is that the iris is an internal organ and is less susceptible to damage over a person’s lifetime. Fig. 1.1 shows the iris anatomy, which is directly relevant to the iris recognition methodologies discussed here. The key visible features, as annotated in Fig. 1.1, are briefly described below.
Medial canthus: The angle between the upper and lower eyelids near the centre of the face.
Sclera: The white region of an eye image.
Pupil: The darkest part of an eye image.
Pupillary Area: The inner part of the iris whose edges form the contour of the pupil.
Ciliary Area: The iris region extending from the pupillary area to the ciliary body. The dilator muscles that open the pupil reside in this zone.
Stroma Fibers: The pigmented fibrovascular tissue that constitutes most of the visible iris region.
The iris consists of a number of layers; the lowest is the epithelium layer, which contains dense pigmentation cells. The stromal layer lies above the epithelium layer and contains blood vessels, pigment cells, and the two iris muscles. The density of stromal pigmentation determines the colour of the iris. The externally visible surface of the multi-layered iris contains two zones, which often differ in colour: an outer ciliary zone and an inner pupillary zone, divided by the collarette, which appears as a zigzag pattern.
Image processing techniques can be employed to extract the unique iris pattern from a digitised image of the eye and encode it into a biometric template, which can be stored in a database. This biometric template contains an objective mathematical representation of the unique information stored in the iris and allows comparisons to be made between templates. When a subject wishes to be identified by an iris recognition system, their eye is first photographed and a template is created for their iris region. This template is then compared with the other templates stored in the database until either a matching template is found and the subject is identified, or no match is found and the subject remains unidentified. The flowchart of an iris recognition system is illustrated in Fig. 1.2.
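A minimal sketch of this identify-by-comparison step is given below, assuming the templates are binary iris codes held in a Python dictionary keyed by subject identity. The fractional Hamming distance and the 0.32 acceptance threshold are illustrative conventions from the iris recognition literature, not details of a specific system.

import numpy as np

def hamming_distance(code_a, code_b, mask_a=None, mask_b=None):
    # Fraction of disagreeing bits between two binary iris codes,
    # optionally restricted to bits that are valid in both templates.
    valid = np.ones_like(code_a, dtype=bool)
    if mask_a is not None and mask_b is not None:
        valid = mask_a & mask_b
    disagreements = np.logical_xor(code_a, code_b) & valid
    return disagreements.sum() / valid.sum()

def identify(probe_code, database, threshold=0.32):
    # Compare the probe template with every enrolled template; return the
    # best-matching identity, or None if the subject remains unidentified.
    best_id, best_dist = None, 1.0
    for subject_id, enrolled_code in database.items():
        d = hamming_distance(probe_code, enrolled_code)
        if d < best_dist:
            best_id, best_dist = subject_id, d
    return best_id if best_dist <= threshold else None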
In this paper we propose a method that exploits periocular features along with iris features to increase recognition accuracy. The paper is organised as follows: Section II surveys how periocular features can be exploited at the segmentation stage, as well as the iris encoding strategies used for exploiting iris features.

LITERATURE SURVEY

A survey gives an overview of a field and is thus distinguished from a study that consists of a microscopic examination of a narrow area; it is a map rather than a detailed plan. A survey must be planned before a start is made. The literature survey gives the preliminary information related to the working area of the project and helps in understanding the background of the topic.
There have been some promising efforts to acquire iris images using visible illumination, to overcome the limitations of current iris recognition systems based on NIR acquisition, and to develop less cooperative iris recognition systems for high-security and surveillance applications. The use of visible wavelength (VW) imaging can address the shortcomings of NIR-based acquisition, especially when distant acquisition of iris images is required. Advanced imaging technologies, for example high-resolution CMOS/CCD cameras, are now available to conveniently acquire high-resolution images at distances beyond 3 meters under visible illumination and to locate iris regions suitable for recognition. Conventional iris recognition systems operate in stop-and-stare mode, which requires significant cooperation from the users. The use of visible imaging can relax this requirement and enable iris recognition in less cooperative environments using images acquired at greater distances.
A. GeoKey encoding
The iris encoding and matching algorithm in [1] develops an encoding and matching strategy that uses only iris features to provide accurate recognition for iris images acquired at a greater distance and under less constrained imaging environments. This iris feature encoding scheme uses geometry information known as a GeoKey, a coordinate pair treated as a unique key assigned to each subject enrolled in the system, to encode iris texture details from the localized iris region pixels.
Scaling and rotation changes for the localized iris region are applied to the unique key, i.e., the GeoKey, rather than to the image pixels. This provides the added advantage of efficient and fast comparison operations on the local image patches. The method uses the Hamming distance for efficient similarity computation. The binarized encoding of such local iris features still has a shortcoming: it does not simultaneously exploit iris features and periocular features to provide more accurate personal identification.
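The GeoKey transform itself is detailed in [1]. As a point of comparison, the sketch below shows a common conventional alternative: compensating for in-plane eye rotation by circularly shifting the binary code along its angular axis and keeping the minimum fractional Hamming distance. The two-dimensional code layout (rows as radial samples, columns as angular samples) is an assumption of this sketch, not a detail of [1].

import numpy as np

def shifted_hamming_distance(code_a, code_b, max_shift=8):
    # Minimum normalized Hamming distance over circular column shifts:
    # shifting columns corresponds to rotating the eye in the image plane.
    best = 1.0
    for s in range(-max_shift, max_shift + 1):
        shifted = np.roll(code_b, s, axis=1)        # rotate the angular axis
        d = np.logical_xor(code_a, shifted).mean()  # fraction of differing bits
        best = min(best, d)
    return best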
The following part of the paper reviews previous work on incorporating periocular features at the automatic segmentation stage.
B. Periocular feature segmentation
Many researchers have studied iris recognition techniques in unconstrained environments, where the probability of acquiring non-ideal iris images is very high due to off-angle views, noise, blurring, and occlusion by eyelashes, eyelids, glasses, and hair. Although there have been many iris segmentation methods, most focus primarily on accurate detection in iris images captured in a closely controlled environment. Non-ideal iris images are degraded and thus make segmentation challenging.
The segmentation method described in [2] is divided into two parts: detecting noise-free iris regions and parameterizing the iris shape. The first part is further subdivided into two processes: detecting the sclera and then making the required adjustments to exploit the sclera and the occluded region. The method makes use of two trained neural network classifiers that exploit local features to classify image pixels into sclera/non-sclera and iris/non-iris categories. A strong dependency is established because the trained classifiers operate in a cascade, first classifying the sclera and then feeding the classified sclera pixels into the next classifier for iris pixel classification. This dependency is a disadvantage, as errors are propagated from the first classifier to the subsequent one, affecting segmentation accuracy. Furthermore, this approach does not provide a completely automated framework to accommodate the situation when full face images are presented.
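The cascade described above can be roughly sketched as follows. The per-pixel feature vectors and labels are random placeholders (the real method in [2] uses local image features), and the coupling of the two classifiers is simplified to appending the stage-one decision as an extra feature.

import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X_train = rng.random((200, 6))        # per-pixel local features (placeholder)
y_sclera = rng.integers(0, 2, 200)    # sclera / non-sclera labels (placeholder)
y_iris = rng.integers(0, 2, 200)      # iris / non-iris labels (placeholder)
X_test = rng.random((50, 6))

# Stage 1: sclera / non-sclera pixel classifier.
sclera_clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=500)
sclera_clf.fit(X_train, y_sclera)
sclera_pred = sclera_clf.predict(X_test)

# Stage 2: iris / non-iris classifier; the sclera decision is appended as an
# extra feature, which couples the stages and lets stage-1 errors propagate.
iris_clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=500)
iris_clf.fit(np.column_stack([X_train, y_sclera]), y_iris)
iris_pred = iris_clf.predict(np.column_stack([X_test, sclera_pred]))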
To address the problem of noisy artifacts, a novel iris segmentation algorithm was proposed in [3]. After reflection removal, a coarse iris localization scheme based on eight-neighbour-connection clustering is performed to cluster the iris image into different parts. Mis-localization on non-iris regions is reduced by identifying and excluding non-iris regions so that only the iris region is retained, followed by an integro-differential constellation to enhance the global convergence ability for further processing. In the next step, a curvature model and a prediction model are used to learn periocular features such as eyelids and eyelashes.
The constellation model is an iterative process that places multiple integro-differential operators at the currently evaluated pixel in order to find the local minimum score. The process is iterated until it converges or a predefined maximum number of iterations is reached. A few limitations can be observed in this method. Firstly, the segmentation model may not effectively segment real-world images, as it relies on the conventional segmentation approach. Secondly, the performance of the segmentation might be affected if the initial clustering parameters are not chosen carefully. Thirdly, the constellation model may converge to a non-optimal iris centre.
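For reference, a simplified single-operator version of the integro-differential idea is sketched below. It omits the Gaussian smoothing and the iterative multi-operator constellation of [3], and simply looks for the radius with the sharpest change in mean circular intensity around a candidate centre.

import numpy as np

def circular_mean(img, cx, cy, r, n=64):
    # Mean intensity sampled on a circle of radius r centred at (cx, cy).
    angles = np.linspace(0, 2 * np.pi, n, endpoint=False)
    xs = np.clip((cx + r * np.cos(angles)).astype(int), 0, img.shape[1] - 1)
    ys = np.clip((cy + r * np.sin(angles)).astype(int), 0, img.shape[0] - 1)
    return img[ys, xs].mean()

def integro_differential(img, cx, cy, r_min, r_max):
    # Radius at which the circular mean intensity changes most sharply,
    # i.e. the strongest circular edge around the candidate centre (cx, cy).
    means = np.array([circular_mean(img, cx, cy, r) for r in range(r_min, r_max)])
    diffs = np.abs(np.diff(means))        # |d/dr| of the contour integral
    best = int(np.argmax(diffs))
    return r_min + best + 1, diffs[best]  # best radius and its edge strength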
A unified framework for automatic iris segmentation was proposed in [4]. It makes use of Zernike moments, polynomials that are orthogonal over the unit disk, to exploit higher-order pixel information in a local region, and then uses an NN/SVM classifier for pixel-based classification, thereby overcoming the problems of the conventional segmentation used in [3]. The main drawback of this approach is that Zernike features are computed for every single pixel, incurring a heavy computational cost and making it unsuitable for time-sensitive applications.
Another solution for incorporating periocular features at the segmentation stage was proposed in [5], where the approach is broadly divided into two parts: segmentation and recognition.
In the segmentation part, the input image is first pre-processed for noise attenuation and image quality enhancement. AdaBoost-based eye detection is first used to compensate for the iris detection errors caused by the two circular edge detection operations, followed by a retinex algorithm that addresses illumination variation and provides high dynamic range compression to enhance image quality. The pre-processed image is then segmented using a random walker algorithm, followed by a sequence of post-processing operations to further refine the coarsely segmented result. The output of the AdaBoost detector in the initial pre-processing phase is referred to as the global periocular region: the entire eye region considered for recognition, obtained without performing segmentation and normalization. The second phase produces the local periocular region, in which a localized region is extracted and normalized with respect to the segmented iris information. Texture analysis is then performed on both the extracted global and local periocular regions.
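A rough sketch of this flow, using OpenCV's AdaBoost-based Haar cascade eye detector and scikit-image's random walker, is given below. The retinex enhancement step is omitted, the input file name is hypothetical, and the seed placement for the random walker is a crude heuristic rather than the scheme used in [5].

import cv2
import numpy as np
from skimage.segmentation import random_walker

img = cv2.imread("face.png", cv2.IMREAD_GRAYSCALE)   # hypothetical input image

# AdaBoost-based (Haar cascade) eye detection -> the global periocular region;
# this sketch assumes at least one eye is found.
eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")
x, y, w, h = eye_cascade.detectMultiScale(img, 1.1, 5)[0]
eye = img[y:y + h, x:x + w]

# Coarse segmentation with a random walker: seed a small central block as
# foreground and the image border as background, then propagate the labels.
labels = np.zeros_like(eye, dtype=np.int32)
labels[h // 2 - 2:h // 2 + 2, w // 2 - 2:w // 2 + 2] = 1         # foreground seeds
labels[0, :] = labels[-1, :] = labels[:, 0] = labels[:, -1] = 2  # background seeds
coarse = random_walker(eye.astype(float), labels, beta=130)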
The output of the coarse segmentation is the input for post-processing of the image. A sample iris segmentation result is shown in Fig. 2.1.
This phase mainly consists of centre estimation, iris and pupil localization, boundary refinement, eyelid localization and ES detection, and periocular normalization and segmentation. A Canny edge detector, a circular boundary model, and an adaptive eyelid location approach are used for post-processing the image. The results of the localization approach are shown in Fig. 2.2.
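The Canny edge detection and circular-model fitting can be illustrated as follows with OpenCV's Canny operator and circular Hough transform. The parameter values are placeholders, not the tuned settings of [5], and the adaptive eyelid localization is not shown.

import cv2
import numpy as np

eye = cv2.imread("eye.png", cv2.IMREAD_GRAYSCALE)    # hypothetical input image
blurred = cv2.medianBlur(eye, 5)

# Canny edge map, useful for inspecting candidate iris/eyelid boundaries.
edges = cv2.Canny(blurred, 50, 150)

# Circular model: detect pupil/iris boundaries as circles with a Hough transform.
circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=2, minDist=30,
                           param1=150, param2=40, minRadius=10, maxRadius=80)
if circles is not None:
    for cx, cy, r in np.round(circles[0]).astype(int):
        print("candidate circular boundary:", cx, cy, r)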
In the recognition part, both the local and global periocular features are extracted using DSIFT and LMF and are classified against a trained texton dictionary, with the number of occurrences of each texton forming a k-bin histogram. The chi-square distance is used to compute the matching score between templates. Min-max normalization is applied to the matching scores of both the periocular and the iris features, and a weighted sum rule is used to combine the normalized scores.
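The matching and fusion arithmetic can be sketched as below. The chi-square distance and min-max normalization follow their standard definitions, while the 0.4/0.6 weighting is an arbitrary illustration rather than the weights used in [5].

import numpy as np

def chi_square_distance(h1, h2, eps=1e-10):
    # Chi-square distance between two k-bin (texton) histograms.
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

def min_max_normalize(scores):
    # Map a set of raw matching scores into the range [0, 1].
    scores = np.asarray(scores, dtype=float)
    return (scores - scores.min()) / (scores.max() - scores.min())

def fuse(periocular_score, iris_score, w_periocular=0.4):
    # Weighted sum fusion of already-normalized scores; the 0.4/0.6 split
    # is an arbitrary illustration, not the weighting used in [5].
    return w_periocular * periocular_score + (1 - w_periocular) * iris_score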

CONCLUSION

From this survey, we can get an idea of how to simultaneously exploit both periocular and iris features, by using the GeoKey encoding approach proposed in [1] for encoding the iris features and the segmentation approach proposed in [5] for exploiting the periocular features, so as to provide an improved and efficient iris encoding.

ACKNOWLEDGEMENT

We gratefully thank our college, SVCE, for providing all the necessary help and for grooming us into Masters of Technology. I express my sincere gratitude to Dr. C. Prabhakar Reddy, Principal, SVCE, and Dr. Suresha, HOD, Dept. of CSE, SVCE Bangalore, for providing the required facilities and giving me an opportunity to work on this topic. I also extend my sincere thanks to my guide, P. Ramesh Naidu, Asst. Professor, Dept. of CSE, SVCE Bangalore, for his support and for guiding me in this work.
 

Figures at a glance

Figure 1, Figure 2, Figure 3, Figure 4
 

References