

A Novel Method for Banknote Recognition System

R. Bhavani1, A. Karthikeyan2
  1. Department of Computer Science and Engineering, Annamalai University, Chidambaram, India
  2. Department of Computer Science and Engineering, Annamalai University, Chidambaram, India

Abstract

The purpose of a currency recognition system is to categorize currencies accurately. In this work, banknotes are recognized using a novel feature extraction technique, Speeded Up Robust Features (SURF), which combines an interest point detector and a descriptor and extracts local image features. The extracted SURF features are both scale and rotation invariant, which makes the method robust against various image transformations. Interest points are detected in the test and template images, followed by SURF feature extraction. The distances between the SURF descriptors of corresponding matched interest points are then calculated, and their average is used to decide the category of the banknote. The proposed system is evaluated on the collected dataset and achieves a high recognition rate.

Keywords

banknote recognition, computer vision, interest points, local image features, speeded up robust features (SURF)

I. INTRODUCTION

Due to the development of automated cash-handling machines, paper currency recognition has become one of the most important applications of pattern recognition [1], [2]. There are similar recognition systems, such as face recognition and fingerprint recognition; the underlying theory is similar, but the techniques and approaches differ.
Although a number of previous studies have addressed currency recognition [3], [4], they are all restricted to specific, standardized environments. For example, the whole banknote must be visible, without occlusion, wrinkles, and so on. Staff working in financial organizations have to distinguish different types of banknotes, which is not an easy job, and wrong recognition can cause serious problems, so an efficient system is needed to support their work. The aim of our system is to help people recognize Indian paper currency conveniently and efficiently while remaining robust against photometric and geometric variations. Most banknote recognition methods employ neural network techniques for classification [9], [10]. Lee et al. [7] proposed a method to extract features from specific parts of Euro banknotes representing the same colour. In order to recognize banknotes, they used two key properties: direction (front, rotated front, back, and rotated back) and face value (5, 10, 20, 50, 100, 200 and 500). They trained five neural networks for insert-direction detection and face-value classification. But, as mentioned earlier, all of the existing banknote recognition systems are restricted to limited environments.

II. OUTLINE OF THE WORK

The system is implemented in MATLAB and includes a user-friendly interface. The main steps in the system are image reading, pre-processing, feature extraction, classification and result display. There are seven denominations of Indian paper currency, and each note has a different size and colour.
[Figure 1: Block diagram of the proposed banknote recognition system]
This system is designed to reduce human effort and to avoid the purchase of expensive hardware. It extracts the features of the test image and matches them against the features stored in the training database (a MAT file); if the features match, it displays the banknote category. The system can be used in ATMs, automatic vending machines and bank money counters. Figure 1 shows the overall structure of the proposed banknote recognition system.
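As a rough illustration of this flow (not the authors' code), the sketch below uses MATLAB's Computer Vision Toolbox; preprocessNote, matchAgainstTemplates and templates.mat are placeholder names introduced here for illustration, with the pre-processing and matching steps detailed in Sections II-A and II-C.

    % Illustrative end-to-end flow; preprocessNote, matchAgainstTemplates and
    % templates.mat are placeholder names, not names used in the paper.
    I = imread('query_note.jpg');               % 1. read the test image
    I = preprocessNote(I);                      % 2. pre-processing (Section II-A)
    pts  = detectSURFFeatures(I);               % 3. interest point detection
    feat = extractFeatures(I, pts);             %    64-D SURF descriptors
    tpl  = load('templates.mat');               %    precomputed template features
    label = matchAgainstTemplates(feat, tpl);   % 4. classification (Section II-C)
    disp(['Recognised denomination: ' label]);  % 5. display the result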
The seven denominations of Indian currency are shown in figure 2. The extraction of sufficient, stable and distinctive features is critical to the accuracy and robustness of a banknote recognition algorithm.
Recent interest point detectors and descriptors in computer vision, such as the Scale Invariant Feature Transform (SIFT) [11] and Speeded Up Robust Features (SURF) [5], support the extraction of such stable and distinctive local image features. In this paper, the SURF interest point detector and descriptor is used to determine the banknote category.
[Figure 2: The seven denominations of Indian currency notes]

A. Image Database and Pre-processing

The system database is built from images collected from various websites, mainly the Reserve Bank of India website. The images are resized to 256x512 pixels and stored in JPEG format; JPEG (Joint Photographic Experts Group) is a standard for lossy compression of digital images.
The noise in the image is removed through filtering. SURF then covers both the localization of salient regions and the feature extraction: features are extracted from regions of interest obtained by selecting interest points across the image.
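The paper does not specify the filters used; the sketch below shows one plausible pre-processing chain (the median filter is our assumption, and preprocessNote is the placeholder helper referred to earlier).

    % One plausible implementation of the pre-processing step (the median
    % filter is our assumption; the paper only states that noise is removed).
    function I = preprocessNote(I)
        I = imresize(I, [256 512]);   % normalise to the 256x512 database size
        if size(I, 3) == 3
            I = rgb2gray(I);          % SURF operates on a single-channel image
        end
        I = medfilt2(I, [3 3]);       % median filtering to suppress noise
    end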

B. SURF Based Feature Extraction

In this approach, a key point (interest point) based method is used. Among the many key point detectors available, the most widely used is SIFT (Scale Invariant Feature Transform); the SURF detector is chosen here because it offers similar performance while being much faster. The Speeded Up Robust Features (SURF) detector and descriptor was introduced by Herbert Bay et al. [5]. It is a scale- and rotation-invariant feature detector and descriptor that has found many applications in computer vision, in particular object detection, owing to its repeatability and efficiency. The method essentially picks out distinctive points in an image and describes them, and it is the key point detector used throughout this work.
1) Integral Images: The SURF detector is based on a Hessian matrix approximation, which enables the use of integral images and greatly improves performance. An integral image allows the sum of values in any rectangular sub-region of an image to be computed quickly, so far fewer computational instructions are required for each region. The integral image, defined in equation (1), stores at each pixel (x, y) the sum of the pixel values above and to the left of (x, y).
$$I_{\Sigma}(x, y) = \sum_{i=0}^{x} \sum_{j=0}^{y} I(i, j) \qquad (1)$$

2) Hessian Matrix Based Interest Point Detection: Interest points are located where the determinant of the Hessian matrix, given in equation (2), attains a local maximum.

$$H(\mathbf{x}, \sigma) = \begin{bmatrix} L_{xx}(\mathbf{x}, \sigma) & L_{xy}(\mathbf{x}, \sigma) \\ L_{xy}(\mathbf{x}, \sigma) & L_{yy}(\mathbf{x}, \sigma) \end{bmatrix} \qquad (2)$$
In equation (2), Lxx(x, σ) denotes the convolution of the Gaussian second-order derivative with the image I at point x (and similarly for Lxy and Lyy). Gaussians are known to be optimal for scale-space analysis, which is one reason the SURF detector has scale-invariant properties. To improve the performance of the detector, the second-order derivatives are approximated by 9x9 box filters, as shown in the figure below.
[Figure: 9x9 box filter approximations of the Gaussian second-order partial derivatives]
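For completeness, with Dxx, Dyy and Dxy denoting the box-filter responses that stand in for the Gaussian second-order derivatives, Bay et al. [5] approximate the Hessian determinant as

$$\det(H_{\mathrm{approx}}) = D_{xx} D_{yy} - (0.9\, D_{xy})^{2}$$

where the factor 0.9 compensates for the error introduced by the box-filter approximation.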
Scale-space analysis is usually implemented using image pyramids: the image is repeatedly smoothed with a Gaussian function, producing a low-pass pyramid. In SURF this smoothing is carried out with integral images and box filters for computational efficiency, and the pyramid levels are obtained by increasing the box filter window size rather than shrinking the image. Once the Hessian matrix determinant has been approximated at each scale, non-maximum suppression is applied in a local neighbourhood to find the maxima. The maxima are then interpolated in both scale space and image space, which yields stable points; each stable point is taken as an interest point (key point).
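To make the role of equation (1) concrete, the following sketch (plain MATLAB, not taken from the paper) builds an integral image with cumulative sums and evaluates the sum over an arbitrary rectangle from only four array look-ups, which is what makes the box-filter responses cheap at every scale.

    % Integral image (equation (1)) and a constant-time rectangular sum.
    I = double(rgb2gray(imread('note.jpg')));  % grayscale copy of an RGB test image
    S = cumsum(cumsum(I, 1), 2);               % S(y, x) = sum of I over rows 1..y, cols 1..x
    S = padarray(S, [1 1], 0, 'pre');          % zero row/column so the formula needs no edge cases

    % Sum of the pixels inside the rectangle with corners (r1,c1)-(r2,c2), inclusive:
    r1 = 10; c1 = 20; r2 = 40; c2 = 60;        % example window (arbitrary values)
    boxSum = S(r2+1, c2+1) - S(r1, c2+1) - S(r2+1, c1) + S(r1, c1);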
3) SURF Key Point Description: Once a set of key points has been found in a given image, each point must be described uniquely so that it can later be matched. The main descriptive feature of each key point is its dominant orientation; taking this orientation into account is what makes the SURF descriptor rotation invariant. To find the orientation of a key point, the Haar wavelet responses in the x and y directions are computed at the point and in a neighbourhood around it, with integral images again used for fast computation of the Haar wavelets (an example of the Haar wavelets is shown below). The Haar wavelets are evaluated within a circular radius of 6s around each key point, where s is the scale of the key point. The responses are represented as vectors, with the horizontal response strength plotted along the horizontal axis and the vertical response strength along the vertical axis.
[Figure: Haar wavelet filters used to compute the responses in the x and y directions]
The dominant orientation of the key point is found by summing the horizontal and vertical responses over a sliding window of 60°; the largest resulting vector across all window positions gives the dominant orientation. The dominant orientation alone is not sufficient to describe a key point uniquely. For each key point, a square region of side 20s (s being the scale of the key point) is therefore constructed, oriented along the dominant direction, and divided into 4x4 sub-regions. In each sub-region the x and y Haar responses are again computed and summed, forming a 4-D description vector V per sub-region; concatenating the 16 sub-region vectors results in a descriptor of length 64 for each key point.
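At the toolbox level, the detection and description steps above are available directly; the sketch below (our illustration using MATLAB's Computer Vision Toolbox, not a from-scratch implementation) detects SURF key points and extracts the 64-dimensional descriptors just described.

    % SURF detection and 64-D description with the Computer Vision Toolbox.
    I   = rgb2gray(imread('note.jpg'));
    pts = detectSURFFeatures(I);                 % Hessian-based interest points
    [feat, validPts] = extractFeatures(I, pts);  % one 64-element descriptor per valid point
    size(feat)                                   % -> [numberOfPoints 64]
    strongest = validPts.selectStrongest(20);    % e.g. inspect only the 20 strongest points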

C. Recognizing Banknotes

Given a query image, SURF first detects the interest points and generates the corresponding descriptors. The precomputed SURF descriptors of the template images in each category are then matched against the descriptors extracted from the query image.
The number of matched points between the query image and the template images of each category is determined. The Euclidean distance between each pair of matched points in the template and the query image is then calculated and averaged. The template image with the shortest average distance to the query image is taken as the match, and its category is displayed as the recognition result.
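A minimal sketch of this decision rule is given below; queryFeat, templateFeatures (a cell array of precomputed descriptor matrices, one per category) and templateLabels are illustrative names, not variables from the paper.

    % Choose the category whose matched descriptors have the shortest average distance.
    bestDist  = inf;
    bestLabel = 'unknown';
    for k = 1:numel(templateFeatures)
        pairs = matchFeatures(queryFeat, templateFeatures{k});        % matched descriptor pairs
        if isempty(pairs), continue; end
        d = sqrt(sum((queryFeat(pairs(:, 1), :) - ...
                      templateFeatures{k}(pairs(:, 2), :)).^2, 2));   % Euclidean distances
        if mean(d) < bestDist                                         % shortest average distance wins
            bestDist  = mean(d);
            bestLabel = templateLabels{k};
        end
    end
    disp(['Recognised denomination: ' bestLabel]);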

III. EXPERIMENTAL RESULTS AND ANALYSIS

A. Dataset

The dataset used in our experiments is more challenging than those used in other banknote recognition papers. For example, the datasets in most papers were collected by scanning bills under restricted or standard conditions.
Our dataset, by contrast, generalizes the conditions under which banknote images are taken; it is therefore more challenging and closer to the real-world application environment.

B. Results

Each category of currency images covers partial occlusion, cluttered backgrounds, rotation, scaling changes and illumination changes, as well as wrinkling. In the recognition experiments on our testing dataset, the proposed algorithm achieves 96.42% true recognition accuracy over all seven categories. The experimental results demonstrate the effectiveness of the features extracted by SURF for banknote recognition. Some neural network based banknote recognition systems achieve recognition rates no higher than 95%.
Although our algorithm is evaluated on a more challenging dataset, it achieves much better recognition results and outperforms the existing banknote recognition algorithms.

IV. CONCLUSION

The experimental results have shown the effectiveness of SURF for banknote recognition. The scale- and rotation-invariant interest point detector and descriptor provided by SURF is robust to image rotation, scaling changes and illumination changes. The recognition system achieves 96.42% true recognition accuracy. The computational cost of recognition consists of extracting SURF features from the test image, matching them with the features of all template images, and displaying the recognition output.

References

  1. S. Chae, J. Kim and S. Pan, “A Study on the Korean Banknote Recognition Using RGB and UV Information”, Communication in Computer and Information Science, vol. 56, 477-484, 2009.
  2. H. Hassanpour and P. Farahabadi, “Using Hidden Markov Models for paper currency recognition”, Expert Systems with Applications, vol. 36, pp. 10105-10111, 2009.
  3. N. Jahangir and A. Chowdhury, “Bangladeshi Banknote Recognition by Neural Network with Axis Symmetrical Masks”, IEEE Conf. on Computer and Information Technology, 2007.
  4. T. Kosaka, S. Omatsu, and T. Fujinaka, “Bill classification by using the LVQ method”, Proc. IEEE Conf. on Systems, Man, and Cybernetics, vol. 3, 2001.
  5. H. Bay, T. Tuytelaars, and L. Van Gool, “SURF: Speeded Up Robust Features”, European Conference on Computer Vision, 2006.
  6. T. Kosaka and S. Omatu, “Bill Money Classification by Competitive Learning,” IEEE Midnight-Sun Workshop on Soft Computing Methods in Industrial Applications, 1999.
  7. J. Lee, S. Jeon and H. Kim, “Distinctive Point Extraction and Recognition Algorithm for Various Kinds of Euro Banknotes”, International Journal of Control, Automation, and Systems, vol. 2, 2004.
  8. F. Takeda and S. Omatu, “High Speed Paper Currency Recognition by Neural Networks”, IEEE Trans. on Neural Networks, vol. 6, pp. 73-77, 1995.
  9. F. Takeda, L. Sakoobunthu, H. Satou, “Thai Banknote Recognition Using Neural Network and Continues Learning by DSP Unit”, Knowledge-Based Intelligent Information and Engineering Systems, 2003.
  10. F. Takeda and T. Nishikage, “Multiple kinds of paper currency recognition using neural network and application for Euro currency”, IEEE-INNS-ENNS Joint Conf. on Neural Networks, 2000.
  11. D. Lowe, “Distinctive image features from scale-invariant keypoints”, International Journal of Computer Vision, 2004.