Survey of Biometric Recognition System for Iris
T. Rakesh, M. G. Khogare

International Journal of Emerging Technology and Advanced Engineering (ISSN 2250-2459, Volume 2, Issue 6, June 2012)

PG Department, College of Engineering, Ambajogai

Abstract- Biometric recognition refers to the automatic recognition of individuals based on feature vectors derived from their physiological and/or behavioral characteristics. Iris recognition, a relatively new biometric technology, is proving to be one of the most reliable biometric traits for personal identification, since iris patterns have stable, invariant and distinctive features. Applications of such systems include computer system security, secure electronic banking, mobile phones and credit cards. In this paper, we give a brief overview of different feature extraction methods for iris recognition systems.

This paper is organized in four sections. The first section gives a brief introduction to iris recognition and the four main modules of an iris recognition system. The second section gives information about iris image datasets. The third section describes different methods used for extracting iris features. The fourth section gives the overall conclusion on the methods studied in this paper.

Keywords: Biometric, Gabor filtering, wavelet transform, cumulative sum, zero crossing, feature vector.

I. INTRODUCTION

Biometrics deals with automated methods of recognizing a person based on physiological characteristics such as the face, fingerprints, hand geometry, iris, retina and veins. Biometric authentication based on iris patterns is suitable for high-security systems. The iris is the annular ring between the pupil and the sclera of the eye. Its structure is fixed from about one year of age and remains constant over time, so it exhibits long-term stability and infrequent re-enrolment requirements. Variations in grey-level intensity values distinguish two individuals; differences exist between identical twins and even between the left and right eyes of the same person. As the technology is iris-pattern-dependent, not sight-dependent, it can be used by blind people. The iris is highly protected, acquisition is non-invasive, and the trait is ideal for applications requiring management of large user groups, such as voter ID management. Iris recognition can potentially prevent unauthorized access to ATMs, cellular phones, desktop PCs, workstations, buildings and computer networks. The accuracy of iris recognition systems has proven to be much higher than that of other biometric systems such as fingerprint, handprint and voiceprint [1].



The iris provides one of the most stable biometric signals for identification, with a distinctive texture that is formed before the age of one and remains constant throughout life unless the eye is injured. Compared with other biometric features such as the face, fingerprint and voice, iris patterns are more stable and reliable because of the following advantages:

• The iris begins to form in the third month of gestation and the structures creating its pattern are invariant.
• The forming of the iris depends on the initial environment of the embryo, so iris texture patterns do not correlate with genetic determination.
• The left and right irises of a given person are different from each other.
• The inner structures of the iris are protected from the environment by the aqueous humor and the cornea.
• Iris recognition is non-intrusive.
• It is almost impossible to modify the iris surgically without risk.

The idea of iris identification traces back to a Paris prison in the eighteenth century, where police discriminated criminals by inspecting the color of their irises. Daugman was the first to develop the fundamental algorithms that now form the basis of all current commercial iris recognition systems, after he was commissioned by Flom and Safir to conduct intensive and extensive research on implementing automated iris recognition [2]. In 1987, Flom and Safir obtained a patent on the then-unimplemented concept of an automated iris biometrics system. A report was published by Johnston in 1992 without any experimental results.


Iris-based security systems capture the iris patterns of individuals and match these patterns against records in available databases. Even though significant progress has been made in iris recognition, handling noisy and degraded iris images requires further investigation. Iris recognition algorithms need to be developed and tested in diverse environments and configurations. Research issues include iris localization, nonlinear normalization, occlusion, segmentation, liveness detection and large-scale identification. It is necessary to achieve the lowest false rejection rate and the fastest composite time for template creation and matching [1].

A typical iris recognition system involves four main modules.

• The first module, image acquisition, deals with capturing a sequence of iris images from the subject using cameras and sensors. Image acquisition involves illumination, positioning and the physical capture system. Occlusion, lighting and the number of pixels on the iris are factors that affect image quality. Many iris recognition systems require strict cooperation of the user during image acquisition. Ketchantang proposed a method in which the entire sequence of images is acquired during enrolment and the best feasible images are selected, to increase flexibility [1].

• The second module, preprocessing, involves steps such as iris liveness detection, pupil and iris boundary detection, eyelid detection and removal, and normalization. Iris liveness detection differentiates a live subject from a photograph, a video playback, a glass eye or other artifacts, since biometric features can be forged and used illegally. Several methods, such as the Hough transform, the integrodifferential operator and gradient-based edge detection, are used to localize the iris and the pupil in the eye image (a minimal localization sketch is given after Figure I). The contours of the upper and lower eyelids are fitted using parabolic arcs for eyelid detection and removal. It is essential to map the extracted iris region to a normalized form. Iris localization methods are based on spring forces, morphological operators, gradients, probability and moments. The iris localization method developed by Zhaofeng He is based on a spring-force-driven iteration scheme using Hooke's law; the composition of the forces from all points determines the centre and radius of the pupil and iris. Morphological operators were applied by Mira and Mayer to obtain the iris boundaries: the inner boundary is detected by applying thresholding, image opening and closing operators, and the outer boundary by applying thresholding, closing and opening operators [1].

• The third module, feature extraction, identifies the most prominent features for classification. Some of the features are the x-y coordinates, radius, shape and size of the pupil, intensity values, the orientation of the pupil ellipse and the ratio between the average intensities of the two pupils. The features are encoded into a format suitable for recognition [1].

• The fourth module, recognition, achieves its result by comparing the features with stored patterns. Inter-class and intra-class variability are used as metrics for the pattern classification problem [1].

Figure I shows the general block diagram of an iris recognition system.
Figure I: Iris Recognition System [1]
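As an illustration of the boundary localization step in the preprocessing module above, the following minimal sketch applies the circular Hough transform, one of the localization techniques named in the text, using OpenCV. The file name "eye.png" and all parameter values are illustrative assumptions rather than settings from the surveyed papers.

import cv2
import numpy as np

# Load a greyscale eye image (assumed to exist) and suppress noise first.
img = cv2.imread("eye.png", cv2.IMREAD_GRAYSCALE)
blurred = cv2.medianBlur(img, 5)

# Detect candidate circular boundaries; the strongest small circle can be
# taken as the pupil boundary and a larger concentric one as the iris boundary.
circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1, minDist=20,
                           param1=100, param2=30, minRadius=20, maxRadius=120)
if circles is not None:
    x, y, r = np.round(circles[0, 0]).astype(int)
    print("candidate boundary: centre=(%d, %d), radius=%d" % (x, y, r))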

II. IRIS IMAGE DATASETS

The accuracy of an iris recognition system depends on the quality of the iris images; noisy and low-quality images degrade the performance of the system. The UBIRIS database is a publicly available database consisting of images with noise, captured with and without cooperation from the subjects. The UBIRIS database has two versions, with images collected in two distinct sessions corresponding to the enrolment and recognition stages. The second-version images were captured with more realistic noise factors under non-constrained conditions such as at-a-distance, on-the-move and visible-wavelength acquisition. CASIA iris image database images are captured in two sessions.


CASIA-IrisV3 contains a total of 22,051 iris images from more than 700 subjects, and it also includes a twins' iris image dataset. The ND 2004-2005 database is a superset of the Iris Challenge Evaluation (ICE) dataset and uses an Iridian iris imaging system to capture the images. The system provides voice feedback to guide the user to the correct position. The images are acquired in groups of three, called a shot. For each shot, the system automatically selects the best image of the three and reports quality metrics and segmentation results for that image. For each person, the left eye and right eye are enrolled separately [3].

III. IRIS RECOGNITION METHODS

3.1 Approach Using Independent Component Analysis:
The iris recognition system developed by Ya-Ping Huang adopts Independent Component Analysis (ICA) to extract iris texture features. Image acquisition is performed at different illumination and noise levels. Iris localization is performed using the integrodifferential operator and parabolic curve fitting. From the inner to the outer boundary of the iris, a fixed number of concentric circles n with m samples on each circle is obtained; this is represented as an n x m matrix for a specific iris image, which is invariant to rotation and size. The independent components, determined from the feature coefficients, are uncorrelated; the feature coefficients are non-Gaussian and mutually independent, and kurtosis is used as the measure of non-Gaussianity. The independent components are estimated and encoded. The centre of each class, determined by a competitive learning mechanism, is stored as the iris code for a person. An average Euclidean distance classifier is used to recognize iris patterns [1].
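The ICA-based pipeline just described can be sketched with a generic ICA implementation. This is only an illustrative outline, assuming scikit-learn's FastICA as a stand-in for the estimation procedure used by Huang; the random matrix stands in for the sampled iris signals, and the simple nearest-centre matcher mirrors the average Euclidean distance classifier mentioned above.

import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
training_vectors = rng.random((100, 64))      # stand-in for sampled iris signals

# Estimate independent components and encode each sample by its coefficients.
ica = FastICA(n_components=16, random_state=0)
codes = ica.fit_transform(training_vectors)

def match(code, class_centres):
    # Return the index of the nearest class centre (Euclidean distance).
    distances = np.linalg.norm(class_centres - code, axis=1)
    return int(np.argmin(distances)), float(distances.min())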

3.2 Multi-Channel Gabor Filtering and 2-D Wavelet Transform:
The iris recognition system developed by Yong Zhu adopts multi-channel Gabor filtering and the 2-D wavelet transform to extract iris texture features. Image acquisition is performed at different illumination and noise levels. Iris localization is performed on the inner boundary between the pupil and the iris by means of thresholding, and on the outer boundary by maximizing changes of the perimeter-normalized sum of grey levels. The multi-channel Gabor filtering technique builds on the finding that the processing of pictorial information in the human visual cortex involves a set of parallel and quasi-independent mechanisms, or cortical channels, which can be modeled by band-pass filters. Each cortical channel is modeled by a pair of Gabor filters h_e(x, y; f, θ) and h_o(x, y; f, θ). The two Gabor filters are of opposite symmetry and are given by

h_e(x, y) = g(x, y) · cos[2πf(x cos θ + y sin θ)]   ... 1(a)
h_o(x, y) = g(x, y) · sin[2πf(x cos θ + y sin θ)]   ... 1(b)

The 2-D wavelet transform is a good scale-analysis tool and has been used for texture discrimination. A 2-D wavelet transform can be treated as two separate 1-D wavelet transforms. After applying the wavelet transform to the original image, a set of sub-images is obtained at different resolution levels. Iris identification is performed using a weighted Euclidean distance (WED) classifier [5].
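The even- and odd-symmetric filter pair of Eqs. 1(a) and 1(b) can be generated directly from the formulas. The sketch below assumes a Gaussian envelope g(x, y) with standard deviation sigma; the filter size, frequency f and orientation θ are illustrative choices only.

import numpy as np

def gabor_pair(size=31, f=0.1, theta=0.0, sigma=4.0):
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    g = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))       # Gaussian envelope g(x, y)
    arg = 2.0 * np.pi * f * (x * np.cos(theta) + y * np.sin(theta))
    return g * np.cos(arg), g * np.sin(arg)              # Eq. 1(a) and Eq. 1(b)

# Filtering the normalized iris image with such pairs at several (f, theta)
# channels produces the multi-channel texture features described above.
h_e, h_o = gabor_pair(theta=np.pi / 4)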

3.3 Zero-Crossing Representation Method:
The method developed by Boles represents the features of the iris at different resolution levels based on zero crossings of the wavelet transform. The algorithm is translation, rotation and scale invariant. The input images are processed to obtain a set of 1-D signals, and a zero-crossing representation of each signal is computed based on its dyadic wavelet transform; the wavelet function is the first derivative of a cubic spline. The centre and diameter of the iris are calculated from the edge-detected image. Virtual circles are constructed from the centre and stored as circular buffers. The information extracted from any of the virtual circles is normalized to have the same number of data points, and a zero-crossing representation is generated. The representation is periodic and independent of the starting point on the iris virtual circles. These representations are stored in the database as iris signatures. The dissimilarity between images of the same eye was smaller than that between images of different eyes. The advantage of this approach is that the amount of computation is reduced, since the number of zero crossings is less than the number of data points. The drawback is that it requires the compared representations to have the same number of zero crossings at each resolution level [1].
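To make the zero-crossing idea concrete, the sketch below computes zero crossings of a 1-D iris signature at dyadic scales. Note the assumptions: a derivative-of-Gaussian filter is used here as a stand-in for the dyadic wavelet (first derivative of a cubic spline) used by Boles, and the sinusoidal test signal merely stands in for grey values sampled along a virtual circle.

import numpy as np

def zero_crossing_representation(signal, scales=(1, 2, 4, 8)):
    n = len(signal)
    representation = []
    for s in scales:
        t = np.arange(-4 * s, 4 * s + 1)
        filt = -t * np.exp(-t**2 / (2.0 * s * s))        # derivative of Gaussian
        detail = np.convolve(signal, filt, mode="same")  # one dyadic level
        zc = np.where(np.diff(np.sign(detail)) != 0)[0]  # zero-crossing positions
        representation.append(zc / float(n))             # normalized positions
    return representation

circle_signal = np.sin(np.linspace(0, 8 * np.pi, 256))   # stand-in iris signature
signature = zero_crossing_representation(circle_signal)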

3.4 Iris Recognition Using Cumulative-Sum-Based Change Analysis:
The method by Jong Gook Ko is based on cumulative-sum-based change analysis. Iris segmentation uses Daugman's method, and the segmented image is normalized to a 64 × 300 pixel area. A cumulative-sum-based change analysis is then used to extract features from the iris image.
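The normalization to a fixed 64 × 300 area can be pictured as sampling the iris ring on a polar grid. The following is a minimal sketch under simplifying assumptions: the pupil and iris boundaries are treated as concentric circles with a known centre and radii, sampling is nearest-neighbour with no eyelid masking, and the input image and parameters are illustrative.

import numpy as np

def normalize_iris(eye, cx, cy, r_pupil, r_iris, out_rows=64, out_cols=300):
    thetas = np.linspace(0.0, 2.0 * np.pi, out_cols, endpoint=False)
    radii = np.linspace(r_pupil, r_iris, out_rows)
    rr, tt = np.meshgrid(radii, thetas, indexing="ij")
    xs = np.clip((cx + rr * np.cos(tt)).astype(int), 0, eye.shape[1] - 1)
    ys = np.clip((cy + rr * np.sin(tt)).astype(int), 0, eye.shape[0] - 1)
    return eye[ys, xs]                                   # 64 x 300 normalized iris

eye = np.random.default_rng(3).random((280, 320))        # stand-in eye image
norm_iris = normalize_iris(eye, cx=160, cy=140, r_pupil=30, r_iris=100)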

The cumulative-sum algorithm is given below.

Step 1. Divide the normalized iris image into basic cell regions for calculating cumulative sums. One cell region is 3 (rows) × 10 (columns) pixels in size, and the average grey value of a cell is used as its representative value in the calculation.
Step 2. Group the basic cell regions horizontally and vertically. Five basic cell regions form one group, because experimental results show that much better performance is achieved when a group consists of five cells.
Step 3. Calculate the cumulative sums over each group.
Step 4. Generate the iris feature codes.

Cumulative sums are calculated as follows. Suppose that X1, X2, ..., X5 are the five representative values of the cell regions within the first group, located at the top-left corner.
• Calculate the average: X̄ = (X1 + X2 + ... + X5) / 5.
• Start the cumulative sum from zero: S0 = 0.
• Calculate the other cumulative sums by adding the difference between the current value and the average to the previous sum: Si = Si−1 + (Xi − X̄) for i = 1, 2, ..., 5.
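The cumulative sums of Steps 1-4 reduce to a few NumPy operations. This sketch assumes a 64 × 300 normalized iris image, the 3 × 10 cell size and the five-cell group of the description; generation of the final codes from the change points is omitted.

import numpy as np

def cell_means(norm_iris, cell_rows=3, cell_cols=10):
    h, w = norm_iris.shape
    cells = norm_iris[:h - h % cell_rows, :w - w % cell_cols]
    cells = cells.reshape(h // cell_rows, cell_rows, w // cell_cols, cell_cols)
    return cells.mean(axis=(1, 3))            # representative grey value per cell

def group_cumulative_sums(group_values):      # the five cell means of one group
    avg = group_values.mean()                 # X bar
    return np.concatenate(([0.0], np.cumsum(group_values - avg)))  # S0 ... S5

norm_iris = np.random.default_rng(1).random((64, 300))
means = cell_means(norm_iris)
print(group_cumulative_sums(means[0, :5]))    # first (top-left) horizontal group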

Cumulative sums are calculated only by addition and subtraction, so the cumulative-sum-based feature extraction method creates a lower processing burden than other methods. After the calculation, iris codes are generated for each cell, and matching of iris codes is performed using the Hamming distance [6].

3.5 Iris Recognition through Improvement of Feature Vector and Classifier:
This method was proposed by Shinyoung Lim. It makes the feature vector compact and provides efficient mechanisms for a competitive learning classifier, namely weight vector initialization and winner selection. Image acquisition is performed with a CCD camera to acquire clearer images and to minimize the effect of reflected light; the captured image size is 320 × 240. To localize the iris, an edge detection method is used to determine the inner boundary, and the bisection method is applied to determine its centre; the inner and outer boundaries are then found using virtual circles. The image is transformed into a polar coordinate system. In the feature extraction process, the Gabor wavelet transform and the wavelet transform, which are widely used for extracting features, were evaluated; from this evaluation, the authors found that the Haar wavelet transform performs better than the Gabor transform. The Haar wavelet transform was then used to optimize the dimension of the feature vectors in order to reduce processing time and space: with only 87 bits, they could represent an iris pattern without any negative influence on system performance. Finally, they improved the accuracy of the classifier, a competitive learning neural network, by proposing an initialization method for the weight vectors and a new winner selection method designed for iris recognition. With these methods, the iris recognition performance increases to 98.4% [7].
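A compact Haar-wavelet feature vector of the kind described for this method can be sketched as follows. Assumptions: PyWavelets is available, the stand-in image replaces a real normalized iris, and the thresholding into a short binary code is a simplification of the 87-bit representation reported in [7], not the authors' exact encoding.

import numpy as np
import pywt

norm_iris = np.random.default_rng(2).random((64, 256))   # stand-in normalized iris

# Multi-level 2-D Haar decomposition; keep only the coarsest approximation band.
coeffs = pywt.wavedec2(norm_iris, "haar", level=4)
approx = coeffs[0]

# Quantize the low-frequency band into a short binary feature vector.
feature_bits = (approx.flatten() > approx.mean()).astype(np.uint8)
print(len(feature_bits), "bits")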

The methods discussed above are summarized in Table I.

Table I: Iris Recognition Methodologies

Group              | Size of database | Results
Ya-Ping Huang [1]  | Real images      | 81.3% for blurred images, 93.8% for variant illumination, 62.5% for noise-interference images
Yong Zhu [5]       | Real images      | 93.8% for set A images, 92.5% for set B images
Boles [1]          | Real images      | EER: 8.13%
Jong Gook Ko [6]   | 820 images       | Recognition rate: 99.24%
Shinyoung Lim [7]  | 6000 images      | Recognition rate: 98.4%

IV. CONCLUSION

The purpose of iris recognition, a biometric technology for personal identification and verification, is to recognize a person from his or her iris pattern. Other methods, such as wavelet packet analysis and the scale-invariant feature transform, are also used to extract iris features. In this paper, an attempt has been made to present an overview of different iris recognition methods. The study of these techniques provides a basis for the development of new techniques in this area as future work.

REFERENCES

[1] S. V. Sheela and P. A. Vijaya, "Iris Recognition Methods - Survey," International Journal of Computer Applications, vol. 3, no. 5, 2010, pp. 19-25.
[2] Seestina and Jules-R, "Improving Iris-based Personal Identification using Maximum Rectangular Region Detection," International Conference on Digital Image Processing, IEEE, 2009.

[3] Upasana Tiwari, Deepali Kelkar, et al., "Study of Iris Recognition Methods," IJCTEE, vol. 2, pp. 76-81.
[4] Ya-Ping Huang, Si-Wei Luo and En-Yi Chen, "An Efficient Iris Recognition System," Proc. International Conference on Machine Learning and Cybernetics, Beijing, 4-5 November 2002.
[5] Y. Zhu, et al., "Biometric Personal Identification Based on Iris Patterns," Proc. International Conference on Pattern Recognition, vol. 2, Sept. 2000, pp. 801-804.
[6] J.-G. Ko, et al., "A Novel and Efficient Feature Extraction Method for Iris Recognition," ETRI Journal, vol. 29, no. 3, June 2007, pp. 399-401.
[7] S. Lim, et al., "Efficient Iris Recognition through Improvement of Feature Vector and Classifier," ETRI Journal, vol. 23, no. 2, June 2001.
[8] W. W. Boles and B. Boashash, "A Human Identification Technique Using Images of the Iris and Wavelet Transform," IEEE Transactions on Signal Processing, vol. 46, no. 4, 1998, pp. 1185-1188.
[9] Y. Wang and J. Han, "Iris Recognition Using Independent Component Analysis," Proc. International Conference on Machine Learning and Cybernetics, 2005, pp. 18-21.
[10] C. Tisse, et al., "Person Identification Technique Using Human Iris Recognition," Proceedings of Vision Interface, May 2002, pp. 294-299.
