Keywords

Human-Robot Interaction (HRI), Embedded Systems, Computer Vision, Hand Gesture, Feature Extraction, Classification.
        
I. INTRODUCTION
        
Gestures are a powerful means of communication among humans. In fact, gesturing is so deeply rooted in our communication that people often continue gesturing while speaking on the telephone. Hand gestures provide a separate, complementary modality to speech for expressing one's ideas. The information associated with hand gestures in a conversation includes degree, discourse structure, and spatial and temporal structure. A natural interaction between humans and computing devices can therefore be achieved by using hand gestures for communication between them. The key problem in gesture interaction is how to make hand gestures understood by the computer. Human-Robot Interaction (HRI), the study of interactions between people and robots, can be considered one of the most important Computer Vision domains. It has applications in a variety of fields such as search and rescue, military operations, mine and bomb detection, scientific exploration, law enforcement, entertainment and hospital care.
        
II. RELATED WORK
        
A survey of existing work in this field is found in [1, 2, 3, 4], each approach having its own strengths and weaknesses. Hidden Markov models [4] and neural network [1] methods have been used for gesture recognition, with reported accuracies of about 70-75%.
        
In our present work, we design and develop a mobile robot system. As shown in Fig 1, a web camera connected to the Central Processing System captures the object (gesture) image. The Central Processing System processes these images to generate a command and sends it over a wireless medium so that the robot moves according to the gesture.
        
III. SYSTEM DESCRIPTION
        
The overall system is divided into five different phases. Fig 2 shows the basic block diagram of all phases.
        
1. Robot Design: The main aim is to design a robust, low-cost robot with high reliability and minimal response time that can serve for Human-Robot Interaction. This phase mainly deals with the hardware portion of our system, where we have designed a robot using electronic and electrical hardware such as an AVR ATmega32 microcontroller to program the robot, a motor controller, actuators, vision sensors, low-power XBee ZB RF transceiver modules to send the command generated by the central system to the robot, a rechargeable battery, wheels, an aluminum chassis, etc. [5, 6, 8].
        
2. Image Input Phase: A camera is used to capture an image of the object (gesture).
        
3. Gesture Identification Phase: The output of the 2nd phase, i.e. an image, is taken as input to this phase. This phase concentrates on image processing, where gestures are recognized and processed by the central processing system.
        
4. Robot Control Phase (Control Command Generation): Once gestures are recognized by the central processing system, commands are generated and transmitted through ZigBee to the robot system.
        
5. Robot Mobility Phase: The embedded controller decodes the command received from the central computer and drives the robot in one of the five movements given below. According to the received command, the motors are driven and the robot moves in one of the following directions (the present work is confined to the 5 directions shown in Fig 3):
        
a. Forward move, b. Backward move, c. Left turn, d. Right turn, e. Stop
        
IV. PROPOSED METHOD
        
An overview of the proposed approach for Gesture Identification:
        
A human gesture captured by the camera attached to the central processing system is taken as input to the Gesture Identification System (GIS). This image will naturally be a color image. The GIS follows the image-processing procedure described below.
        
1. Preprocessing: To improve the performance of gesture identification (GI), the following image preprocessing steps are used (a minimal sketch is given after this list):

a) Removal of noise.

b) Detection of the Region of Interest (ROI).

c) Clipping of the ROI portion of the gesture image (GI), shown in Fig 4(b).
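As a rough illustration of this preprocessing (not the authors' exact implementation), the sketch below assumes OpenCV, uses median filtering for noise removal, and takes the bounding box of the largest contour as the ROI; the function name preprocess_gesture and the largest-blob heuristic are our own assumptions.

import cv2

def preprocess_gesture(color_image):
    # a) Remove salt-and-pepper noise with a median filter
    denoised = cv2.medianBlur(color_image, 5)
    gray = cv2.cvtColor(denoised, cv2.COLOR_BGR2GRAY)
    # Rough foreground mask used only to locate the hand region
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    # b) Assume the hand is the largest connected blob; its bounding box is the ROI
    hand = max(contours, key=cv2.contourArea)
    x, y, w, h = cv2.boundingRect(hand)
    # c) Clip the ROI portion of the gesture image
    roi = denoised[y:y + h, x:x + w]
    return roi, (x, y), (x + w, y + h)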
        
2. Binarization: By adopting an optimal threshold scheme, the GI is binarized to obtain the Binary Gesture Image (BGI).

Algorithm Binarization. Input: Color image. Output: Segmented black-and-white image.
        
1. Convert the color image into a gray-scale image.

2. Choose an adaptive threshold value.

3. Apply the threshold value to the gray-scale image.

4. Output the resulting black-and-white image.
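The following is a minimal, illustrative implementation of the binarization algorithm above; it assumes OpenCV and uses Otsu's method as one possible adaptive threshold scheme (the paper does not specify which scheme is used).

import cv2

def binarize(gesture_image):
    # 1. Convert the color image into a gray-scale image
    gray = cv2.cvtColor(gesture_image, cv2.COLOR_BGR2GRAY)
    # 2-3. Choose an adaptive (Otsu) threshold and apply it to the gray-scale image
    _, bgi = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # 4. Return the resulting black-and-white (binary) gesture image
    return bgi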
        
3. Histogram: The BGI is a two-dimensional array of zeros and ones. The number of ones in each row of the array indicates the frequency of gesture pixels in the ROI. This collection of frequencies is the distribution of gesture pixels over the rows, referred to as the row histogram. Similarly, one obtains the column histogram. Using these two histograms, two different views of the hand gesture are obtained [7], as shown in Fig 5(a) and (b), the resulting column-wise and row-wise histograms of the forward move.
        
Algorithm Histogram. Input: Binary image. Output: Row-wise and column-wise histograms.
        
1. Get the size of the input image (number of rows and columns).

2. For each input image i:

3. For each column j:

4. Count the number of black/white pixels.

5. For each row k:

6. Count the number of black/white pixels.

7. Using the results of steps 4 and 6, draw the histograms.
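A minimal sketch of this histogram computation, assuming the BGI is available as a NumPy array with non-zero foreground (gesture) pixels; the function name gesture_histograms is illustrative.

import numpy as np

def gesture_histograms(bgi):
    # Ones wherever a gesture (foreground) pixel is present
    binary = (bgi > 0).astype(np.uint8)
    # Steps 5-6: number of gesture pixels in each row (row-wise histogram)
    row_hist = binary.sum(axis=1)
    # Steps 3-4: number of gesture pixels in each column (column-wise histogram)
    col_hist = binary.sum(axis=0)
    return row_hist, col_hist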
        
4. Feature extraction: We propose a new method to recognize/classify the gesture. The histograms derived above are assumed to carry the information regarding the gesture, so the characteristics of the histograms are expected to capture the inherent characteristics of the gesture. The popular statistical characteristics of a histogram, namely the mean, variance, skewness and kurtosis, are therefore considered here as feature values for the given gesture. The feature set for a given gesture thus contains the ROI parameters, that is (X0, Y0) and (X1, Y1), the two corners of the ROI rectangle, together with the mean, variance, skewness and kurtosis of the row histogram as well as of the column histogram, 8 values in all. Hence the feature vector has 12 numerical values (2 + 2 + 8), constituting a 12-dimensional real feature vector. The gesture signals are move forward, backward, left turn, right turn or stop, and are realized using these statistical features. We have used MATLAB to compute these statistics.
        
Let $f_x$ be the frequency at $x$ $(= 1, 2, \ldots, k)$ of a random variable $X$, and let

$N = \sum_{x=1}^{k} f_x$   Eq (2)

Mean: $\mu = \frac{1}{N}\sum_{x=1}^{k} x\, f_x$   Eq (3)

Variance: $\sigma^2 = \frac{1}{N}\sum_{x=1}^{k} f_x\,(x-\mu)^2$   Eq (4)

$\mu_3 = \frac{1}{N}\sum_{x=1}^{k} f_x\,(x-\mu)^3$   Eq (5)

$\mu_4 = \frac{1}{N}\sum_{x=1}^{k} f_x\,(x-\mu)^4$   Eq (6)

Skewness: $\gamma_1 = \mu_3 / \sigma^3$   Eq (7)

Kurtosis: $\gamma_2 = \mu_4 / \sigma^4$   Eq (8)

Here $\sigma = \sqrt{\sigma^2}$ is the standard deviation, and $\mu_3$, $\mu_4$ are the third and fourth central moments.
        
Considering x as the row index and the corresponding number of black pixels as fx, we calculate the above four statistical measures; similarly, taking the column index as x and the corresponding number of black pixels as fx, we calculate the four statistical measures again, and consider them together as the feature set.
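The sketch below is one possible implementation of Eqs. (2)-(8) and of the assembly of the 12-dimensional feature vector; the authors used MATLAB, so the function names and layout here are our own assumptions.

import numpy as np

def histogram_moments(freq):
    # freq[x-1] = f_x, the number of gesture pixels at row (or column) index x
    freq = np.asarray(freq, dtype=float)
    x = np.arange(1, len(freq) + 1)
    n = freq.sum()                                   # Eq (2)
    mean = (x * freq).sum() / n                      # Eq (3)
    var = (freq * (x - mean) ** 2).sum() / n         # Eq (4)
    m3 = (freq * (x - mean) ** 3).sum() / n          # Eq (5)
    m4 = (freq * (x - mean) ** 4).sum() / n          # Eq (6)
    skew = m3 / var ** 1.5                           # Eq (7): mu_3 / sigma^3
    kurt = m4 / var ** 2                             # Eq (8): mu_4 / sigma^4
    return mean, var, skew, kurt

def feature_vector(corner0, corner1, row_hist, col_hist):
    # (X0, Y0), (X1, Y1) plus 4 statistics per histogram = 2 + 2 + 8 = 12 values
    return [*corner0, *corner1,
            *histogram_moments(row_hist),
            *histogram_moments(col_hist)]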
        
5. Gesture classification: The hand gestures are highly influenced by the human providing them, e.g. use of the right or left hand, front or back of the hand, straight or curved posture, skewed orientation, etc. This makes identification of the gesture direction highly complex. In addition, the influence of illumination and the shadow of the hand postures makes gesture classification further complex.
        
The intent of the classification process is to categorize all gestures into their corresponding directions. Here we have five different gestures to classify. Based on the features calculated above, e.g. mean, variance, skewness, kurtosis and cropped image positions, we categorize the gestures (which signal forward, backward, left, right or stop motion of the robot).
        
Experiments are conducted by collecting gesture signals from several students (a mix of male and female students, as shown in Table 1), giving them total freedom in making their gestures. Each image is tagged with its corresponding direction. This tagged data has been used for building a rule-based system for gestures, adopting a supervised learning philosophy.
        
To classify the data we used the Weka data-mining classification tool. Taking samples of many gestures and computing the above feature data, we divided the data set into a training set and a test set. Based on the training data set, we developed a classification model to predict the gesture direction. The tool provides many classification algorithm options such as SimpleCART, ID3 and Neural Networks, along with tenfold cross-validation. Across many experiments on the data set, the CART classification model gave good accuracy (89-90%) and low error, hence we adopted CART.
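The paper uses Weka's SimpleCART; the sketch below only illustrates the same train/test idea in Python, with scikit-learn's CART-style DecisionTreeClassifier as a stand-in and an assumed 70/30 split.

from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

def train_gesture_classifier(features, labels):
    # features: one 12-value vector per sample; labels: gesture codes 1-5
    X_train, X_test, y_train, y_test = train_test_split(
        features, labels, test_size=0.3, random_state=0)
    model = DecisionTreeClassifier()   # CART-style decision tree
    model.fit(X_train, y_train)
    print("test accuracy:", accuracy_score(y_test, model.predict(X_test)))
    return model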
        
V. SIMULATION RESULTS
        
Table 1 shows the classified computed values of the mean using Eq (3), variance Eq (4), skewness Eq (7) and kurtosis Eq (8) of different hand gestures, which were collected in our lab and processed to control the robot.
        
Where X is used for the column-wise histogram, Y for the row-wise histogram, s for skewness, k for kurtosis, min. and max. X, Y for the cropped image positions, and G for the gesture code (1 for backward move, 2 for forward, 3 for right, 4 for left, 5 for stop).
        
Robot Control Phase
        
Gestures to commands mapping
        
Obtaining the gesture classification from the image processing unit, we need to map the motion direction to a command code and generate a motion command for the robot. The command code can be move forward, move backward, take a right turn, take a left turn, or stop the robot. The generated command is used to control the motor rotational speed. For a left turn or a right turn, the motors rotate in opposite directions. Fig 6 shows the flowchart of gesture-to-command mapping, where command codes are generated according to the classified gesture and sent to the robot.
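A minimal sketch of such a mapping on the central processing system, treating the XBee module as a plain serial link via pyserial; the one-byte command codes and the port name are assumptions, not the authors' protocol.

import serial  # pyserial

# Assumed one-byte command codes for the five movements
COMMANDS = {1: b'B',   # backward move
            2: b'F',   # forward move
            3: b'R',   # right turn
            4: b'L',   # left turn
            5: b'S'}   # stop

def send_command(gesture_code, port="/dev/ttyUSB0", baud=9600):
    # The XBee module is driven as an ordinary serial port
    with serial.Serial(port, baud, timeout=1) as xbee:
        xbee.write(COMMANDS[gesture_code])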
        
Mapping of received command to robot movement
        
At the robot system end, the received command code is taken as input and decoded by the control program. The control program is written into the robot's microcontroller flash memory. The control program generates a sequence of actions according to the decoded instruction. These generated sequences of actions are executed at the actuator and motor level to realize the desired motion.
        
Experiment / Demonstration
        
The robot we have developed is shown in Fig 7.
        
Mobility at Robot End
        
A. Receive the command code.
        
B. Fetch the command corresponding to the command code.
        
C. The command is decoded and control signals are generated to move the robot according to the received command code.
        
Experiments conducted in the lab are shown in Fig 8.
        
VI. CONCLUSION AND FUTURE WORK
        
In this paper we have successfully assembled all of the above-mentioned hardware to build the robot system. Images are used as the input for hand gesture detection. We have come up with a new approach to recognize and classify the gestures using the mean, variance, skewness and kurtosis values of the binary image histograms. Classification and recognition are used as an interface for sending the command code to the robot. The above method gives about 90% accuracy in hand gesture direction detection.
        
The scope of the present work is limited to hand gesture detection; future work is to track moving objects.
        
Tables at a glance

Table 1

Figures at a glance

Figure 1, Figure 2, Figure 3, Figure 4

Figure 5, Figure 6, Figure 7, Figure 8
        
References

1. Corradini, A.; Gross, H.-M., "Camera-based gesture recognition for robot control," Proceedings of the IEEE-INNS-ENNS International Joint Conference on Neural Networks (IJCNN 2000), vol. 4, pp. 133-138, 2000. doi: 10.1109/IJCNN.2000.860762. URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=860762&isnumber=18666
2. Pavlos Stathis, Ergina Kavallieratou, Nikos Papamarkos, "An Evaluation Technique for Binarization Algorithms," Journal of Universal Computer Science, vol. 14, no. 18 (2008), pp. 3011-3030.
3. Yilmaz, A., Javed, O., and Shah, M., "Object tracking: A survey," ACM Computing Surveys, vol. 38, no. 4, Article 13 (Dec. 2006), 45 pages. DOI: 10.1145/1177352.1177355. http://doi.acm.org/10.1145/1177352.1177355
4. G. Rigoll, A. Kosmala, and S. Eickeler, "High Performance Real-Time Gesture Recognition Using Hidden Markov Models," in Proc. Gesture and Sign Language in Human-Computer Interaction, International Gesture Workshop, pp. 69-80, Bielefeld, Germany, Sept. 1997.
5. ATmega32 Datasheet.
6. http://www.digi.com/support/productdetl.jsp?pid=3352&osvid=0&s=316&tp=4
7. http://en.wikipedia.org/wiki/Feature_extraction
8. How to Make a Robot, http://www.robotshop.com