

Person Identification Based on Recognition of Palmprint Using Orientation Features

Basanti B. Sawant1, M. Talib2, Sagar S. Jondhale3, Pradeep M. Patil4
  1. Research Scholar, North Maharashtra University, Jalgaon, Maharashtra, India
  2. Assistant Professor, UICT, North Maharashtra University, Jalgaon, Maharashtra, India
  3. Director, Samarth Samaj, Dombivli, Maharashtra, India
  4. Director, RMD Sinhgad Technical Institutes Campus, Pune, Maharashtra, India


Abstract

In this paper, orientation features of palmprint images are used for person identification. The performance of the proposed algorithm has been evaluated on the standard PolyU database available online. The region of interest (ROI) is extracted from each palmprint image, and the variance feature vector of the orientation field is computed and treated as the feature vector (template) of that palmprint for the person; it is stored in the database during enrolment. When a query palmprint image is presented, its ROI and the variance feature vector of the orientation field are computed and matched against the templates in the database using the Euclidean distance to find the best match. The algorithm performs well over a range of thresholds on the PolyU database.

Keywords

palmprints, orientation field, region of interest, Euclidean distance.

I. INTRODUCTION

Biometrics provides the identity of a user based on the physiological or behavioural characteristics of the person. Every biometric technology has its own merits and limitations; thus, no single system can be considered the best for all applications [1]. One of the well-known biometric systems with very high accuracy is the iris-based system [2]. However, iris acquisition hardware is very expensive, has a high failure-to-enrol rate, and requires a high degree of cooperation from the user. Fingerprint-based systems are the most widely used in the world because of their simplicity, low cost and good accuracy, although small amounts of dirt or grease on the finger may degrade their performance. Hand-geometry-based systems suffer from high cost and low accuracy. Ear-based recognition has the problem of the ear being partially or fully occluded by hair or a cap [3]. Face-based recognition is low cost, requiring only a camera mounted in a suitable position such as the entrance of a physical access control area; however, face-based systems are less acceptable than fingerprint-based systems [4].
The palmprint is the region between the wrist and the fingers; it contains features such as principal lines, wrinkles, datum points, delta points, ridges, minutiae points, singular points and texture patterns that can serve as biometric characteristics. Compared to other biometric systems, a palmprint-based identification system has many advantages: 1) Features of the human hand are relatively stable and unique. 2) It needs very little cooperation from users for data acquisition. 3) Data collection is non-intrusive. 4) Low-cost devices are sufficient to acquire good-quality data. 5) The system uses low-resolution images but provides high accuracy. 6) Compared to the fingerprint, a palmprint provides a larger surface area, so more features can be extracted. 7) Because a lower-resolution imaging sensor is used to acquire the palmprint, computation is much faster at the pre-processing and feature extraction stages. 8) Systems based on hand features are found to be among the most acceptable. 9) The palmprint also serves as a reliable human identifier because the print patterns are not found to be duplicated even in monozygotic twins [5].
Palmprint-based systems make use of structural features, statistical features and combinations of both. The structural features of a palmprint include principal lines, wrinkles, datum points, minutiae points, ridges and crease points. C. Han et al. [6] used Sobel and morphological operations to extract line-like features from palmprints. N. Duta et al. [7] used isolated points along the principal lines as the features. A system based on the ridges of the palmprint, eliminating creases, has been proposed by J. Funada et al. [8]. D. Zhang et al. [9] used the end points of the principal lines, referred to as datum points; these datum points, used as features, are found to be location and direction invariant. J. Chen et al. [10] proposed a palmprint-based system that uses crease points. X. Wu et al. [11] considered directional line energy features, characterised with the help of crease points, for identification of palmprints. Like a fingerprint, each palmprint also contains ridges and minutiae which can be used for matching palmprint images [12]. Statistical approaches to palmprint recognition include Principal Component Analysis [13], Linear Discriminant Analysis [14], Independent Component Analysis [15], Fourier transforms [16], Gabor filters [17], fusion code [18], competitive code [19], ordinal code [20] and wavelets [21]. Fusion of palmprint features with other traits such as fingerprint [22], palm veins [23], hand geometry [24], face [25] and iris [26] to improve the accuracy of the system has been successfully attempted by various researchers.

II. PREPROCESSING OF THE PALMPRINTS

In order to make the proposed algorithm rotation and translation invariant, it is necessary to obtain the ROI from the captured palmprint image prior to feature extraction. The adopted procedure for extraction of the ROI is similar to the procedure described for the standard PolyU database that is available online. The five major steps of palmprint image pre-processing to extract the ROI are as follows:
Step 1: Convolve the captured palmprint image with a low-pass filter and convert the convolved image into a binary image using a threshold value. This transformation can be represented as,

$$B(x,y)=\begin{cases}1, & \text{if } O(x,y)*L(x,y)\ge T_p\\ 0, & \text{otherwise}\end{cases}\qquad(1)$$

where B(x,y) and O(x,y) are the binary image and the original image, respectively; L(x,y) is a low-pass filter, such as a Gaussian; "*" denotes the convolution operator; and T_p is the threshold. (A minimal code sketch of Steps 1 and 3 is given after Step 5.)
Step 2: Extract the boundaries of the two holes between the fingers, (Fx_ij, Fy_ij) (i = 1, 2), using a boundary-tracking algorithm. The start points, (Sx_i, Sy_i), and end points, (Ex_i, Ey_i), of the holes are marked in the process.
Step 3: Compute the centre of gravity, (Cx_i, Cy_i), of each hole with the following equations:

$$Cx_i=\frac{1}{M(i)}\sum_{j=1}^{M(i)}Fx_{ij},\qquad Cy_i=\frac{1}{M(i)}\sum_{j=1}^{M(i)}Fy_{ij}\qquad(2)$$

where M(i) is the number of boundary points of hole i. Then construct a line that passes through (Cx_i, Cy_i) and the midpoint of (Sx_i, Sy_i) and (Ex_i, Ey_i). The line equation is defined as,

$$\frac{y-My_i}{x-Mx_i}=\frac{Cy_i-My_i}{Cx_i-Mx_i}\qquad(3)$$

where (Mx_i, My_i) is the midpoint of (Sx_i, Sy_i) and (Ex_i, Ey_i).
Based on these lines, two key points, (k1, k2), can easily be detected.
Step 4: Line up k1 and k2 to get the Y-axis of the palmprint coordinate system and make a line through their midpoint which is perpendicular to the Y-axis, to determine the origin of the coordinate system. This coordinate system can align different palmprint images.
Step 5: Extract a sub-image of fixed size, located at a fixed part of the palmprint with respect to this coordinate system, and use it as the ROI for feature extraction.
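As an illustration of Steps 1 and 3, a minimal Python sketch is given below. It assumes a Gaussian low-pass filter for L(x, y); the function names, the smoothing width and the value of the threshold T_p are illustrative and not taken from the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def binarize_palm(original, sigma=2.0, threshold=100.0):
    """Step 1: low-pass filter the captured image and threshold it (equation (1)).

    `sigma` and `threshold` (T_p) are illustrative values, not those used in the paper.
    """
    smoothed = gaussian_filter(original.astype(float), sigma=sigma)  # O(x, y) * L(x, y)
    return (smoothed >= threshold).astype(np.uint8)                  # B(x, y)

def hole_centroid(boundary_points):
    """Step 3: centre of gravity (Cx_i, Cy_i) of one finger hole (equation (2)).

    `boundary_points` is an (M(i), 2) array of (Fx_ij, Fy_ij) coordinates
    produced by the boundary-tracking algorithm of Step 2.
    """
    return boundary_points.mean(axis=0)
```

The boundary tracking of Step 2 and the coordinate-system construction of Steps 4 and 5 would supply the inputs to, and consume the outputs of, these helpers.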

III. FEATURE EXTRACTION OF THE PALMPRINTS USING ORIENTATION FIELD

The orientation field of the palmprint image defines the local orientation of the ridges contained in the palmprint. The steps for calculating the orientation at pixel (i, j) are as follows:
Step 1: Consider a block of size W × W centred at pixel (i, j) in the normalised palmprint image.
Step 2: For each pixel in the block, compute the gradients δ_x(i, j) and δ_y(i, j), which are the gradient magnitudes in the x and y directions, respectively. The horizontal Sobel operator used to compute δ_x(i, j) is defined as,

$$S_x=\begin{bmatrix}-1 & 0 & 1\\ -2 & 0 & 2\\ -1 & 0 & 1\end{bmatrix}\qquad(4)$$
The vertical Sobel operator used to compute δ_y(i, j) is defined as,

$$S_y=\begin{bmatrix}-1 & -2 & -1\\ 0 & 0 & 0\\ 1 & 2 & 1\end{bmatrix}\qquad(5)$$
Step 3: The local orientation at pixel (i, j) is estimated using,

$$V_x(i,j)=\sum_{u=i-\frac{W}{2}}^{i+\frac{W}{2}}\;\sum_{v=j-\frac{W}{2}}^{j+\frac{W}{2}}2\,\delta_x(u,v)\,\delta_y(u,v),\quad V_y(i,j)=\sum_{u=i-\frac{W}{2}}^{i+\frac{W}{2}}\;\sum_{v=j-\frac{W}{2}}^{j+\frac{W}{2}}\bigl(\delta_x^2(u,v)-\delta_y^2(u,v)\bigr),\quad \theta(i,j)=\frac{1}{2}\tan^{-1}\!\left(\frac{V_x(i,j)}{V_y(i,j)}\right)\qquad(6)$$

where θ(i, j) is the least squares estimate of the local orientation of the block centred at pixel (i, j).
Step 4: Smooth the orientation field in a local neighbourhood using a Gaussian filter. The orientation image is first converted into a continuous vector field, which is defined as:
$$\Phi_x(i,j)=\cos\bigl(2\theta(i,j)\bigr),\qquad \Phi_y(i,j)=\sin\bigl(2\theta(i,j)\bigr)\qquad(7)$$

where Φ_x and Φ_y are the x and y components of the vector field, respectively. After the vector field has been computed, Gaussian smoothing is performed as follows:
$$\Phi'_x(i,j)=\sum_{u=-\frac{w_\Phi}{2}}^{\frac{w_\Phi}{2}}\;\sum_{v=-\frac{w_\Phi}{2}}^{\frac{w_\Phi}{2}}G(u,v)\,\Phi_x(i-u,\,j-v),\qquad \Phi'_y(i,j)=\sum_{u=-\frac{w_\Phi}{2}}^{\frac{w_\Phi}{2}}\;\sum_{v=-\frac{w_\Phi}{2}}^{\frac{w_\Phi}{2}}G(u,v)\,\Phi_y(i-u,\,j-v)\qquad(8)$$

where G is a Gaussian low-pass filter of size w_Φ × w_Φ.
The final smoothed orientation field O at pixel (i, j) is defined as:

$$O(i,j)=\frac{1}{2}\tan^{-1}\!\left(\frac{\Phi'_y(i,j)}{\Phi'_x(i,j)}\right)\qquad(9)$$
The variance feature vector of the orientation field is computed and treated as the template (feature map) of the palmprint:

$$V_k=\frac{1}{n}\sum_{i=1}^{n}\bigl(O(i,k)-\mu_k\bigr)^2,\qquad k=1,2,\ldots,m\qquad(10)$$

where μ_k is the mean of the kth column of O(i, j), and m and n are the number of columns and rows of O, respectively.
The feature vector is given by

$$F=[V_1,\;V_2,\;\ldots,\;V_m]\qquad(11)$$
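A minimal Python sketch of the feature extraction pipeline of equations (4)-(11) is given below. The block size W, the smoothing parameter and the function names are illustrative assumptions, and SciPy's gaussian_filter stands in for the w_Φ × w_Φ Gaussian kernel G described above.

```python
import numpy as np
from scipy.ndimage import convolve, gaussian_filter

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)  # equation (4)
SOBEL_Y = SOBEL_X.T                                                     # equation (5)

def orientation_field(roi, block=16, smooth_sigma=1.0):
    """Blockwise least-squares orientation estimate, equations (4)-(9).

    `block` (W) and `smooth_sigma` are illustrative parameter choices.
    """
    dx = convolve(roi.astype(float), SOBEL_X)   # delta_x
    dy = convolve(roi.astype(float), SOBEL_Y)   # delta_y
    rows, cols = roi.shape[0] // block, roi.shape[1] // block
    theta = np.zeros((rows, cols))
    for bi in range(rows):
        for bj in range(cols):
            gx = dx[bi * block:(bi + 1) * block, bj * block:(bj + 1) * block]
            gy = dy[bi * block:(bi + 1) * block, bj * block:(bj + 1) * block]
            vx = np.sum(2.0 * gx * gy)
            vy = np.sum(gx ** 2 - gy ** 2)
            theta[bi, bj] = 0.5 * np.arctan2(vx, vy)           # equation (6)
    phi_x = gaussian_filter(np.cos(2 * theta), smooth_sigma)   # equations (7)-(8)
    phi_y = gaussian_filter(np.sin(2 * theta), smooth_sigma)
    return 0.5 * np.arctan2(phi_y, phi_x)                      # equation (9)

def variance_feature(O):
    """Per-column variance of the smoothed orientation field, equations (10)-(11)."""
    return np.var(O, axis=0)    # V_k for each column k; F = [V_1, ..., V_m]
```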

IV. PALMPRINT IMAGE MATCHING

The variance feature vector of the orientation field of the query palmprint image is computed using the same steps as described earlier, equations (4)-(11). Matching the variance feature vector of the orientation field of the query image against a template from the stored database is carried out using the L2 norm (Euclidean distance),

$$D(u,v)=\lVert u-v\rVert_2=\sqrt{\sum_{k=1}^{m}\bigl(u_k-v_k\bigr)^2}\qquad(12)$$

where u and v are the feature vectors of the query and template palmprint images, respectively.
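A minimal sketch of the matching stage based on equation (12) follows. The identify helper and the dictionary of enrolled templates are illustrative; the paper itself only specifies the Euclidean-distance comparison against stored templates.

```python
import numpy as np

def euclidean_distance(u, v):
    """Equation (12): L2 distance between query and template feature vectors."""
    return np.linalg.norm(np.asarray(u) - np.asarray(v))

def identify(query_vec, templates, threshold):
    """Return the identity of the closest enrolled template, or None when the
    smallest distance exceeds the decision threshold.

    `templates` maps an identity label to its stored variance feature vector.
    """
    best_id, best_d = None, np.inf
    for identity, tmpl in templates.items():
        d = euclidean_distance(query_vec, tmpl)
        if d < best_d:
            best_id, best_d = identity, d
    return best_id if best_d <= threshold else None
```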

V. RESULTS AND DISCUSSIONS

We have used the PolyU database from The Hong Kong Polytechnic University (PolyU), which consists of 7752 grayscale images from 193 users corresponding to 386 different palms. Around 17 images per palm were collected in two sessions. The images were acquired with a CCD-based device at a spatial resolution of 75 dots per inch and 256 gray levels, with pegs used to constrain the placement of the hand. A sample PolyU hand image, the extracted palmprint and its orientation field are shown in Figure 1. For experimentation, the database is divided into a training set and a testing set: six images per palm (selected randomly) are used for training and the remaining images are used for testing.
[Figure 1: Sample PolyU hand image, the extracted palmprint ROI and the computed orientation field.]
The performance of the algorithm has been measured in terms of the false acceptance rate (FAR) and false rejection rate (FRR) at various thresholds. The FAR and FRR are computed as follows: let N be the number of subjects, with 17 palmprints each, so the total number of palmprint images in the database is T = 17 × N. A single template per subject has been considered for experimentation. The total number of trials carried out for evaluating true claims and impostor claims is N × (T − 6), of which the true claims number N × 11 and the impostor claims are (total trials − true claims). Using these counts,
FRR = (true claims rejected/total true claims) × 100%,
FAR = (impostor claims accepted/total impostor claims) × 100%, and
GAR = (100 − FRR)%.
The algorithm has been tested over every possible combination, and the FAR and FRR computed at different thresholds can be plotted as in Figure 2.
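Assuming the genuine (true-claim) and impostor distance scores from equation (12) have been collected, the FRR, FAR and GAR at one decision threshold can be computed as sketched below; the function and variable names are illustrative.

```python
import numpy as np

def error_rates(genuine_distances, impostor_distances, threshold):
    """FRR, FAR and GAR (in percent) at a single decision threshold.

    Distances come from equation (12); a claim is accepted when the
    distance is at or below the threshold.
    """
    genuine = np.asarray(genuine_distances)
    impostor = np.asarray(impostor_distances)
    frr = 100.0 * np.mean(genuine > threshold)     # true claims rejected
    far = 100.0 * np.mean(impostor <= threshold)   # impostor claims accepted
    gar = 100.0 - frr                              # GAR = 100 - FRR
    return frr, far, gar
```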

References