ISSN: 2319-8753 (Online), 2347-6710 (Print)
Valmik Gholap, Prof. V.S. Dhongde

International Journal of Innovative Research in Science, Engineering and Technology
Coins are used in our daily life, and coin identification has become a basic need: coins must be sorted and counted automatically. In this paper we implement a neural-network-based coin recognition system together with a rotation-invariant image subtraction method.
Keywords 
Neural Network, Coin Recognition, Image Subtraction 
INTRODUCTION 
We use coins in our daily routine almost everywhere: in banks, markets, and so on. The coin is an integral part of daily life, and today's world requires a more accurate and efficient automatic coin recognition system. This paper proposes coin recognition by a neural network [8] and by image subtraction with a rotation-invariance technique [9]. A coin can be tested using its size, shape, weight, and material. The image subtraction technique takes two images as input and produces a third image as output, whose pixel values are simply the pixel values of the first image minus the corresponding pixel values of the second image. Artificial neural networks, which are models emulating a biological neural network, are actively used to perform pattern recognition.
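As a minimal sketch of the image subtraction operation described above (the array values here are illustrative, not real coin data):

```python
import numpy as np

# Two tiny "grayscale images" as integer arrays; real coin images would
# come from a camera, but the subtraction rule is identical.
first = np.array([[120, 130], [140, 150]], dtype=np.int16)
second = np.array([[100, 100], [100, 100]], dtype=np.int16)

# Each output pixel is the first image's pixel minus the corresponding
# pixel of the second image.
subtracted = first - second
```

A signed integer dtype is used so that negative differences are preserved rather than wrapping around.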
In 1997 Minoru Fukumi et al. [1] presented a rotational invariant neural pattern recognition system for coin recognition. 
Paul Davidsson [2] in 1996 presented an approach for coin classification using learning characteristic decision trees by controlling the degree of generalization. 
P. Thumwarin et al. [3] at the ARC Seibersdorf research centre in 2006 developed a coin recognition system. This system was designed for fast classification of a large number of modern coins from 30 different countries. Coin classification was accomplished by correlating the edge image of the coin with a preselected subset of master coins and finding the master coin with the lowest distance. Preselection of master coins was based on three rotation-invariant features (edge angle distribution, edge distance distribution, and occurrences of different rotation-invariant patterns on circles centered at edge pixels), together with coin diameter and thickness.
In 2011 Vaibhav Gupta et al. [9] presented an approach based on an image subtraction technique for recognition of Indian coins. The system performs three checks (radius, coarse, and fine) on the input coin image. First, the radius of the input image is calculated. Based on the radius, a test image (rotated at a certain fixed angle) is selected from the database, and coarse image subtraction between the object and test images is performed. If the minimum of the resultant image is less than a specified threshold, fine image subtraction between the object and test images is carried out; otherwise a new test image is selected. Recognition then takes place based on the fine image subtraction.
Proposed System 
A) Rotation-Invariant Image Subtraction Technique 
The proposed image subtraction system consists of the following blocks: 
1) Image processing 
2) Radius Calculation 
3) Coarse subtraction and fine subtraction 
4) Comparison and coin identification 
a) Image Processing 
Image processing covers the image acquisition and image segmentation steps of coin recognition. A camera with good resolution is used to capture the images. The captured image is then segmented, i.e., the coin image is separated from the background.
The first step of segmentation is to convert the image to grayscale using equation (1): 
Gray = 0.333·r + 0.333·g + 0.333·b (1) 
After grayscale conversion, the image is binarized.
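A minimal sketch of the equal-weight grayscale conversion of equation (1) followed by binarization; the threshold value of 128 is an illustrative assumption, not specified by the paper:

```python
import numpy as np

def to_grayscale(rgb):
    """Equal-weight grayscale per equation (1): 0.333*r + 0.333*g + 0.333*b."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return 0.333 * r + 0.333 * g + 0.333 * b

def to_binary(gray, threshold=128):
    """Binary image: white (1) where the gray value exceeds the threshold."""
    return (gray > threshold).astype(np.uint8)

# Toy 2x2 RGB image: one white pixel, the rest black.
rgb = np.zeros((2, 2, 3), dtype=np.float64)
rgb[0, 0] = [255, 255, 255]
gray = to_grayscale(rgb)
binary = to_binary(gray)
```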
b) Radius Calculation: 
The diameter is found by taking the difference between the maximum and minimum positions of the white pixels in the binary image. Every Indian coin has a different radius.
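The radius calculation can be sketched as follows, measuring the spread of white pixels along the columns of the binary image (the function name and toy image are illustrative):

```python
import numpy as np

def coin_radius(binary):
    """Radius from a binary coin image.

    The diameter is the difference between the maximum and minimum
    column index holding a white pixel, as described above; the radius
    is half of that.
    """
    rows, cols = np.nonzero(binary)
    diameter = cols.max() - cols.min()
    return diameter / 2.0

# Toy binary image with white pixels spanning columns 1..5 -> diameter 4.
img = np.zeros((7, 7), dtype=np.uint8)
img[3, 1:6] = 1
```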
c) Coarse Subtraction 
The test image is rotated through one full revolution in steps of a fixed angular distance. Image subtraction is carried out between the rotated test image and the input object image: 
Subtracted(r, c) = Object(r, c) − Test(r, c) (2) 
d) Fine subtraction 
This is the same as coarse subtraction, with the only difference that the rotation step angle (1°) is small.
The output images from the coarse or fine subtraction are compared with the grayscale images; after finding the minimum, a threshold comparison is performed.
e) Threshold Comparison 
After obtaining the minimum, the sum of grayscale values is compared with a standard threshold value, and a prediction is made as to whether the coin matches.
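A minimal sketch of the rotate-subtract-threshold loop described above. For simplicity it rotates in 90° steps via `np.rot90` rather than the paper's finer fixed angular steps, and uses the sum of absolute differences as the match score; both choices are illustrative assumptions:

```python
import numpy as np

def match_score(obj, test):
    """Sum of absolute pixel differences after subtraction (equation 2)."""
    return int(np.abs(obj.astype(np.int16) - test.astype(np.int16)).sum())

def recognize(obj, test, threshold):
    """Coarse search over rotations of the test image.

    Returns the minimum score over all rotations and whether it falls
    below the threshold (i.e., whether the coin matches).
    """
    scores = [match_score(obj, np.rot90(test, k)) for k in range(4)]
    best = min(scores)
    return best, best < threshold

# Toy images: obj is test rotated by 180 degrees, so one rotation step
# produces an exact match (score 0).
test = np.array([[1, 2], [3, 4]], dtype=np.uint8)
obj = np.rot90(test, 2)
```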
B) Neural Network with MLCPNN Method 
An Artificial Neural Network (ANN) is an information processing paradigm inspired by the way biological nervous systems, such as the brain, process information. 
The neural network with the MLCPNN algorithm proceeds through the following steps, as shown in Fig. 2. 
a) Acquire RGB Image 
This is the first step of the coin recognition method. Both sides of the coin are scanned, and the images are passed on for grayscale conversion.
b) Convert RGB coin to Grayscale Image 
First, a 24-bit image is captured from the camera, and that input RGB image is converted into an 8-bit grayscale image.
c) Remove shadow of coin from Image 
In this step, the shadow of the coin is removed from the grayscale image. Since all coins have a circular boundary, the Hough Transform for circle detection [6] is used to remove the shadow. For this, the edge of the coin is first detected using Sobel edge detection [6].
d) Crop The Image 
After shadow removal the image is cropped so that only the coin remains in the image. The cropped coin image is then trimmed to a uniform dimension of 100 × 100 or 50 × 50.
e) Generate the pattern Averaged Image 
The 100×100 or 50×50 trimmed coin images become the input for the trained neural network. To reduce computation and complexity in the neural network, these images are further reduced to 20×20 or 10×10 by segmenting the image into 5×5-pixel segments and taking the average of the pixel values within each segment.
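The pattern averaging step above can be sketched with a reshape-and-mean over non-overlapping 5×5 blocks (the function name and toy 10×10 input are illustrative):

```python
import numpy as np

def pattern_average(img, block=5):
    """Reduce an image by averaging non-overlapping block x block segments,
    e.g. 100x100 -> 20x20 with 5x5 segments, as described above."""
    h, w = img.shape
    return img.reshape(h // block, block, w // block, block).mean(axis=(1, 3))

# Toy 10x10 image reduced to 2x2 by averaging 5x5 segments.
img = np.arange(100, dtype=np.float64).reshape(10, 10)
small = pattern_average(img, block=5)
```

This assumes the image dimensions are exact multiples of the block size, which holds for the 100×100 and 50×50 images described above.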
f) Generate feature vector 
The feature vector from the pattern-averaging step is then passed as input to the trained neural network, which classifies the coin into the appropriate class and generates the corresponding output. MATLAB provides a Neural Network Toolbox with which a neural network for pattern recognition can easily be created.
g) MLCPNN Algorithm 
The MLCPNN (Multi-Level Counter Propagation Neural Network) [8] has interconnections among the units in the cluster layer. In the MLCPNN, after competition only one unit in that layer is active and sends a signal to the output layer. The MLCPNN has one input layer, one output layer, and one hidden layer, but training is performed in two phases. The MLCPNN can also be used in interpolation mode, in which more than one Kohonen unit has a nonzero activation; interpolation mode increases accuracy and reduces computing time. A further advantage is that it produces correct output even for partial input, and the MLCPNN trains rapidly. The parameters used are given below: 
X — input training vector 
Y — target output vector 
Zj — activation of cluster unit j 
Wij — weight from input layer X to cluster layer Z 
Wjk — weight from cluster layer Z to output layer Y 
KL — learning rate during Kohonen learning 
GL — learning rate during Grossberg learning 
The winning unit is selected either by the dot product method or by the Euclidean distance method. Here, the winner unit is identified by computing Euclidean distances; the unit with the smallest distance is selected as the winner. The winner unit is calculated during both the first and second phases of training. In the first phase, the Kohonen learning rule is used for weight updating; in the second phase, the Grossberg learning rule is used.
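The Euclidean winner selection described above can be sketched as follows (the weight values and input are illustrative):

```python
import numpy as np

def winning_unit(x, weights):
    """Return the index J of the cluster unit whose weight vector is
    closest to the input x by Euclidean distance."""
    distances = np.linalg.norm(weights - x, axis=1)
    return int(np.argmin(distances))

# Three cluster units with 2-dimensional weight vectors.
weights = np.array([[0.0, 0.0], [1.0, 1.0], [5.0, 5.0]])
x = np.array([0.9, 1.2])
```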
Algorithm for MLCPNN 
The implementation procedure for the MLCPNN is as follows: 
Step 0 : Initialize weights (obtained from training) 
Step 1 : Present input vector X 
Step 2 : Find unit J closest to vector X 
Step 3 : Set the activations of output units: Yk = Wjk 
The activation of the cluster unit is 1 for the winning unit J and 0 for all other units.
The image is scanned and broken into subimages, which are then translated into a binary format. The binary data is fed into the trained MLCPNN, and the output of the neural network is saved to a file. Samples of various denominations, including 1, 2, 3, 5, 10, 20, 25, 50, 100 and 200 paise coins, are fed into the system. Large volumes of coins were fed to the system for testing, and the system yields very good results, which are displayed in the tables.
MLCPNN Training Algorithm 
The MLCPNN training algorithm has the following two phases. 
Phase I : Finding Winning Cluster 
Step 0 : Initialize weights and learning rates 
Step 1 : While the stopping condition for phase I is false, perform steps 2 to 7 
Step 2 : For each training input X, perform steps 3 to 5 
Step 3 : Initialize input layer X 
Step 4 : Find winning cluster unit 
Step 5 : Update weights on the winning cluster unit: Wij(new) = Wij(old) + KL·(xi − Wij(old)), where i, j = 1 to n (2) 
Step 6 : Reduce learning rate KL 
Step 7 : Test the stopping condition for phase I training 
Phase II : Adjusting Weights 
Step 8 : While the stopping condition for phase II training is false, perform steps 9 to 15 
Step 9 : For each training input pair X and Y, perform steps 10 to 13 
Step 10 : Initialize input layer X and output layer Y 
Step 11 : Compute the winner cluster unit 
Step 12 : Update weights in unit Zj: Wij(new) = Wij(old) + KL·(xi − Wij(old)); where i, j = 1 to n (3) 
Step 13 : Update weights from the cluster unit to the output unit: Wjk(new) = Wjk(old) + GL·(yk − Wjk(old)); where j, k = 1 to m (4) 
Step 14 : Reduce learning rate GL 
Step 15 : Test the stopping condition for phase II training 
The Euclidean distance for cluster unit j is Dj = √(Σi (xi − Wij)²). Among the values of Dj, the smallest is chosen, and the corresponding unit J is the winning unit. 
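The two-phase training procedure above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the cluster count, learning rates, decay factor of 0.9, epoch count, and toy data are all assumptions, and the stopping condition is simplified to a fixed number of epochs:

```python
import numpy as np

def train_mlcpnn(X, Y, n_clusters, kl=0.5, gl=0.5, epochs=10, seed=0):
    """Two-phase counter-propagation training sketch.

    Phase I (Kohonen): move the winning cluster's input weights toward x.
    Phase II (Grossberg): move the winner's output weights toward the
    target y, while continuing to refine the input weights (Step 12).
    """
    rng = np.random.default_rng(seed)
    w_in = rng.random((n_clusters, X.shape[1]))   # input -> cluster weights
    w_out = np.zeros((n_clusters, Y.shape[1]))    # cluster -> output weights

    # Phase I: finding the winning cluster (Steps 1-7).
    for _ in range(epochs):
        for x in X:
            j = int(np.argmin(np.linalg.norm(w_in - x, axis=1)))  # Step 4
            w_in[j] += kl * (x - w_in[j])                         # Step 5
        kl *= 0.9                                                 # Step 6

    # Phase II: adjusting weights (Steps 8-15).
    for _ in range(epochs):
        for x, y in zip(X, Y):
            j = int(np.argmin(np.linalg.norm(w_in - x, axis=1)))  # Step 11
            w_in[j] += kl * (x - w_in[j])                         # Step 12
            w_out[j] += gl * (y - w_out[j])                       # Step 13
        gl *= 0.9                                                 # Step 14

    return w_in, w_out

# Two well-separated inputs mapping to distinct one-hot targets.
X = np.array([[0.0, 0.0], [1.0, 1.0]])
Y = np.array([[1.0, 0.0], [0.0, 1.0]])
w_in, w_out = train_mlcpnn(X, Y, n_clusters=2)
```

After training, each input should win a distinct cluster unit whose output weights approximate its one-hot target.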
References 
[1] M. Fukumi, S. Omatu, "Rotation Invariant Neural Pattern Recognition System Estimating a Rotation Angle", IEEE Trans. Neural Networks, Vol. 8, No. 3, pp. 568-581, May 1997. 
[2] P. Davidsson, "Coin Classification Using a Novel Technique for Learning Characteristic Decision Trees by Controlling the Degree of Generalization", Ninth International Conference on Industrial and Engineering Applications of Artificial Intelligence & Expert Systems (IEA/AIE-96), Gordon and Breach Science Publishers, pp. 403-412, 1996. 
[3] P. Thumwarin, S. Malila, P. Janthawong, W. Pibulwej, T. Matsuura, "A Robust Coin Recognition with Rotation Invariance", International Conference on Communications, Circuits and Systems Proceedings, 2006. 
[4] M. Fukumi, Y. Mitsukura, N. Akamatsu, "Design and Evaluation of Neural Networks for Coin Recognition by Using GA and SA", IEEE, pp. 178-183, 2000. 
[5] R. Bremananth, B. Balaji, M. Sankari, A. Chitra, "A New Approach to Coin Recognition Using Neural Pattern Analysis", Annual IEEE Indicon, pp. 366-370, 2005. 
[6] C. M. Velu, P. Vivekanandan, "Indian Coin Recognition System of Image Segmentation by Heuristic Approach and Hough Transform (HT)", International Journal of Open Problems in Computational Mathematics, Vol. 1, pp. 254-271, 2008. 
[7] E. Ashbridge, D. I. Perrett, M. W. Oram, T. Jellema, "Effect of Image Orientation and Size on Object Recognition: Responses of Single Units in the Macaque Monkey Temporal Cortex", Cognitive Neuropsychology, Vol. 16: 1/2/3, pp. 13-34, 2000. 
[8] C. M. Velu, P. Vivekanandan, K. R. Kashwan, "Indian Coin Recognition and Sum Counting System of Image Data Mining Using Artificial Neural Network", International Journal of Advanced Science and Technology, Vol. 31, June 2011. 
[9] Vaibhav Gupta, Rachit Puri, Monir Verma, "Prompt Indian Coin Recognition with Rotation Invariance Using Image Subtraction Technique", ISBN 9781424491902, 2011. 