ISSN: Online 2278-8875, Print 2320-3765


A Novel Method for Recognizing Sound of the Silent through Gestures - Sign Language Interpreter

S.Madhumitha, Neeraja S. Sathya, Pravya Pinto, Riyan John Stephan
Student, Dept. of ECE, Mar Baselios College of Engineering and Technology, Thiruvananthapuram, Kerala, India

International Journal of Advanced Research in Electrical, Electronics and Instrumentation Engineering


The rapid advancement of communication technology has shrunk the world to a global village, yet a part of society remains unable to express ideas verbally. It is high time to break the barriers faced by vocally challenged people. The “Sign Language Interpreter” is an electronic device that translates sign language into speech by means of gesture mapping, bridging the communication gap between the vocally challenged community and the rest of society and facilitating efficient communication. The system recognizes hand gestures with the help of specially designed gloves fitted with flex sensors along the length of each finger and an accelerometer. The recognized hand gestures are converted into speech by software so that others can understand the signer’s expressions. In this gesture recognition system, a single gesture is translated into a complete message, rather than a couple of gestures forming a word as in existing systems. This makes communication much simpler and more effective, rendering a human interpreter dispensable. The system can be customized by users according to their needs, as the messages are drawn from a database developed after predictive analysis.


Keywords: Flex sensor, Accelerometer, Microcontroller, Zigbee module, Gesture mapping


Communication is the activity of conveying information through the exchange of thoughts and messages, by means such as speech, visuals, signals, writing, or behaviour. It is the meaningful exchange of information between two or more living creatures. Mankind has always strived to break the language barriers around the world, progressing from human interpreters to advanced dictionaries to structured courses. Artificial translators can instantly tell us what a person speaking a foreign language is saying, with the help of a synthetic voice and some clever coding. The human race will keep pushing the boundaries of what is possible within the technology field, and as a result we will eventually manage to break down language barriers completely. But one language has been especially hard to include in this overall mission, and that is sign language.
Sign language is the language used by deaf and mute people. It is a communication skill that uses gestures instead of sound to convey meaning, simultaneously combining hand shapes, orientations and movements of the hands, arms or body with facial expressions to fluidly express a speaker’s thoughts. In order to facilitate communication between deaf and hearing people, sign language interpreters are often used. Such work involves considerable effort on the part of the interpreter, since sign languages are distinct natural languages with their own syntax, different from any spoken language. Signs are used to communicate words and sentences to an audience. Sign languages are not mime: signs are conventional, often arbitrary, and do not necessarily have a visual relationship to their referent, much as most spoken language is not onomatopoeic. While iconicity is more systematic and widespread in sign languages than in spoken ones, the difference is not categorical. The visual modality allows the human preference for close connections between form and meaning, present but suppressed in spoken languages, to be more fully expressed. This does not mean that sign languages are a visual rendition of a spoken language: they have complex grammars of their own and can be used to discuss any topic, from the simple and concrete to the lofty and abstract.
‘Sign Language Interpreter’ is a recognition system for the vocally disabled. We have turned to a glove-based technique, as it is more practical for gesture recognition; it involves the use of a specially designed sensor glove that produces a signal corresponding to the hand sign. This is possible with the help of flex sensors, which change resistance depending on the amount of bend applied to them. The digital glove can also provide signs for individual letters, to spell out words that do not have a corresponding sign in the sign language. Sensor glove technology has been used in a variety of application areas that demand accurate tracking and interpretation of sign language. As the performance of the glove is not affected by light, electric or magnetic fields, or other disturbances, the data generated is accurate. The microcontroller analyses the signal and transmits it via zigbee to a speech synthesis unit, where software translates it into speech. As the system uses low-cost, easily available sensors and ICs, it is highly cost-effective. This electronic device translates sign language into speech, bridging the communication gap between the mute community and the general public.


There has always been a separation between the mute community and the rest of the world, but with the advent of technology there have been efforts to bridge this gap, and many researchers are working in the field of gesture mapping and recognition. Reference [1] describes a system, ‘Glove-Talk II’, which translates hand gestures to speech through an adaptive interface: hand gestures are mapped continuously to ten control parameters of a speech synthesizer. With Glove-Talk II, the subject speaks slowly but with far more natural-sounding pitch variations than a text-to-speech synthesizer. Reference [2] examines the possibility of recognizing sign language gestures using sensor gloves, in a project called ‘Talking Hands’ that uses artificial neural networks to recognize the values coming from the sensor glove. Reference [3] proposes a hierarchical gesture recognition framework based on the combined use of multivariate Gaussian distributions, bigrams and a set of rules for model and feature-set selection, derived from a detailed analysis of misclassified gestures in the confusion matrix. Notable recent work in gesture recognition, with particular emphasis on hand gestures and facial expressions, is described in reference [4]. Reference [5] describes the various categories of gesture recognition. Reference [6] introduces a novel method to recognize and estimate the scale of time-varying human gestures. Hidden Markov models (HMMs) are used for gesture recognition in reference [7]. Reference [8] describes an augmented reality tool for vision-based hand gesture recognition in a camera-projector system, using modified Fourier descriptors for the classification of static hand gestures. References [9] and [10] discuss gesture recognition for human-robot symbiosis and human-robot interaction. Reference [11] describes a methodology using a neighbourhood-search algorithm for tuning system parameters for gesture recognition.


The main technologies incorporated in the project are gesture mapping using bend-sensitive resistors, also called flex sensors, and point-to-point wireless transmission using the zigbee protocol. Our implementation consists of a hand glove fitted with flex sensors and an accelerometer, which together enable gesture detection. The detected gesture is coded into a specific digital value by the microcontroller. These digital values are transmitted wirelessly via a zigbee module, received by another zigbee module, and decoded using MATLAB to produce spoken words.
1. Gesture Mapping Using Flex Sensors
Gesture mapping, here, refers to the process of detecting a particular movement and generating a corresponding sentence selected from a predesigned database; the resulting words convey what the user intends to communicate. Detecting the motion requires a sensor or motion detector. The gesture mapping system also needs a decoding unit, such as a microcontroller, to analyse the information received from the sensors and accelerometer, which is analog in nature, and to convert that data into digital values for ease of transmission.
Flex sensors are sensors whose resistance changes with the amount of bend applied to them: the greater the bend, the greater the resistance. They usually take the form of a thin strip, 1"-5" long, varying in resistance from approximately 10 to 50 kΩ, and are often used in gloves to sense finger movement. The resistance changes when the metal pads are on the outside of the bend. The membrane construction is both resilient and reasonably durable; the sensor can be used within a temperature range of -35 °C to +80 °C, with an operational life rating of over one million movements if secured properly. Flex sensors appear in gaming gloves, auto controls, fitness products, measuring devices, assistive technology, musical instruments, joysticks, and more. Electrically, flex sensors are analog resistors and work as variable analog voltage dividers. Inside the sensor are carbon resistive elements within a thin flexible substrate; more carbon means less resistance. When the substrate is bent, the sensor produces a resistance output relative to the bend radius. With a typical flex sensor, a bend of 0 degrees gives about 10 kΩ, while a bend of 90 degrees gives 30-40 kΩ; some bend sensors list resistances of 30-250 kΩ. Flex sensors can be used in a variety of configurations: as a voltage divider, an adjustable buffer, a variable deflection threshold switch, or a resistance-to-voltage converter. The sensors must be fixed tightly and properly to the glove to reduce unwanted bending or noise, since they detect even small variations in bend angle.
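As a rough sketch of this voltage-divider behaviour, the snippet below computes the output voltage the microcontroller would see for the typical resistance figures quoted above. The 10 kΩ fixed resistor and 5 V supply are illustrative assumptions, not values stated in the paper:

```python
# Hypothetical sketch: a flex sensor in a voltage divider.
# Vout = Vcc * R_fixed / (R_fixed + R_flex), so Vout falls as the bend
# (and hence R_flex) increases.

def divider_voltage(r_flex_ohms, r_fixed_ohms=10_000, vcc=5.0):
    """Divider output voltage for a given flex-sensor resistance."""
    return vcc * r_fixed_ohms / (r_fixed_ohms + r_flex_ohms)

flat = divider_voltage(10_000)   # finger straight (~10 kΩ) → 2.5 V
bent = divider_voltage(35_000)   # ~90° bend (30-40 kΩ) → ~1.11 V
```

The microcontroller's ADC then digitizes this voltage; the falling voltage with increasing bend is what makes each finger position distinguishable.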
The accelerometer functions as a tilt sensor. Its output signals are analog voltages proportional to acceleration. It is a small, thin, low-power, complete 3-axis accelerometer with signal-conditioned voltage outputs, measuring acceleration over a minimum full-scale range of ±3 g. It can measure the static acceleration of gravity in tilt-sensing applications, as well as dynamic acceleration resulting from motion, shock, or vibration. The user selects the bandwidth of the accelerometer using the CX, CY and CZ capacitors at the XOUT, YOUT and ZOUT pins; bandwidths can be chosen to suit the application, from 0.5 Hz to 1600 Hz for the X and Y axes and from 0.5 Hz to 550 Hz for the Z axis.
The accelerometer used in our project is the MMA7361L, a low-power, low-profile capacitive micro-machined accelerometer featuring signal conditioning, a 1-pole low-pass filter, temperature compensation, self test, 0g-Detect (which detects linear freefall) and g-Select (which allows selection between two sensitivities). Zero-g offset and sensitivity are factory set and require no external devices. The MMA7361L includes a sleep mode that makes it ideal for handheld battery-powered electronics. The output from the sensors and accelerometer is then passed to the microcontroller, which converts the corresponding analog values to digital values.
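The tilt-sensing role of the accelerometer can be illustrated with a short sketch. Under the simplifying assumption that only gravity acts on the sensor, the tilt angle of one axis follows from the arcsine of that axis's reading in g; this is a generic model for illustration, not the project's actual firmware:

```python
import math

def tilt_deg(a_axis_g):
    """Tilt angle (degrees) of one accelerometer axis from horizontal,
    assuming the only acceleration present is gravity (static case)."""
    a = max(-1.0, min(1.0, a_axis_g))   # clamp noise outside ±1 g
    return math.degrees(math.asin(a))

tilt_deg(0.0)   # axis horizontal → 0°
tilt_deg(1.0)   # axis aligned with gravity → 90°
```

In the glove, the x- and y-axis tilt readings supplement the three flex-sensor channels in distinguishing gestures.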
2. Point-to-Point Wireless Transmission
Zigbee is a technology for data transfer in wireless networks: a low-cost, low-power, wireless mesh networking standard. The low cost allows the technology to be widely deployed in wireless control and monitoring applications; low power usage allows longer life with smaller batteries; and mesh networking provides high reliability and more extensive range. Zigbee operates in the industrial, scientific and medical (ISM) radio bands: 868 MHz in Europe, 915 MHz in the USA and Australia, and 2.4 GHz in most jurisdictions worldwide. Data transmission rates vary from 20 kbit/s in the 868 MHz band to 250 kbit/s in the 2.4 GHz band. The zigbee network layer natively supports star and tree networks as well as generic mesh networks.
Zigbee builds upon the physical layer and media access control defined in IEEE standard 802.15.4 (2003 version) for low-rate WPANs. Zigbee is not intended to support power-line networking but to interface with it, at least for smart metering and smart appliance purposes. Because zigbee nodes can go from sleep to active mode in 30 ms or less, latency can be low and devices responsive, particularly compared with Bluetooth wake-up delays, which are typically around three seconds. Because zigbee nodes can sleep most of the time, average power consumption is low, resulting in long battery life. A zigbee module can act as both transmitter and receiver. Zigbee devices can be used with almost any microcontroller board and are easily synchronized, providing efficient transmission over a range of roughly 50 to 100 m in a point-to-point setup.
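Zigbee modules of this kind are commonly driven as a transparent serial link. The paper does not specify a packet format, so the framing below is purely hypothetical: five quantized sensor readings wrapped with a start byte and a simple checksum, as one might do to keep the point-to-point link robust:

```python
# Hypothetical serial framing for the gesture link (not from the paper):
# [START][5 payload bytes][checksum], checksum = sum(payload) mod 256.
START = 0xAA

def frame(values):
    """Pack five quantized sensor readings into a framed packet."""
    payload = bytes(values)
    checksum = sum(payload) % 256
    return bytes([START]) + payload + bytes([checksum])

def unframe(packet):
    """Validate and strip the framing; return the payload list or None."""
    if len(packet) != 7 or packet[0] != START:
        return None
    payload, checksum = packet[1:6], packet[6]
    return list(payload) if sum(payload) % 256 == checksum else None

unframe(frame([2, 0, 1, 1, 2]))  # → [2, 0, 1, 1, 2]
```

A checksum like this lets the receiver silently drop corrupted packets instead of playing the wrong message.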


This system consists of two sections: the transmitter section and the receiver section.
1. Transmitter Section
The flex sensors mounted on three fingers of a glove change their resistance upon bending. As the fingers bend, these sensors produce a voltage in proportion to the amount of bending. For the identification of a specific gesture, we take into account the analog signals from the three fingers and the signals produced by the accelerometer in the x and y directions; the accelerometer functions as a tilt sensor. As mentioned above, this project makes use of a series of flex sensors to determine variations in finger position. The combination of analog voltages from the three fingers and the accelerometer is sent to the microcontroller. However, the output from this voltage-divider series is very low, which entails the use of a current booster circuit: an LM324 IC is used for this purpose, amplifying the current while keeping the voltage constant. The microcontroller, the heart of the system, is the platform for all decision-making in the system; it converts the analog values to corresponding digital values that allow effective recognition of the hand gesture shown. The zigbee transmitter module transmits the converted values to the receiver section, making the system more convenient and user-friendly through wireless transmission. The module used is the Tarang zigbee module.
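One plausible way to turn the boosted analog readings into the digital combination described above is to quantize each of the five channels (three flex sensors plus the accelerometer's x and y axes) into a small number of levels. The thresholds below are illustrative assumptions, not values from the paper:

```python
def quantize(adc_count, low=340, high=680):
    """Map a 10-bit ADC count (0-1023) to one of three levels:
    0 = straight/level, 1 = partial bend/tilt, 2 = full bend/tilt.
    Thresholds are hypothetical, for illustration only."""
    if adc_count < low:
        return 0
    return 1 if adc_count < high else 2

# Three flex channels followed by the accelerometer's x and y axes:
readings = [120, 500, 900, 300, 710]
code = [quantize(r) for r in readings]   # → [0, 1, 2, 0, 2]
```

The resulting 5-element code is what the zigbee link carries to the receiver.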
2. Receiver Section
The receiver section consists of a zigbee receiver module plugged into a laptop. Wave files of spoken sentences are stored in a MATLAB database. The combination sent by the zigbee transmitter module is received by the zigbee receiver module and checked for a match against the values stored in MATLAB. Once the sensor data matches an entry in the database, the corresponding prerecorded speech is played out through the speaker.
Thus the output is an audio signal, dependent on the ADC count. Each hand gesture represents a message, chosen such that users can modify it to suit their requirements. Every bend of a sensor (finger) produces a unique ADC count, so each distinct hand sign yields a distinct ADC count. A MATLAB program based on these ADC counts decides which audio signal should be fetched and played. Using this concept, more hand signs can be added as the user requires, and accuracy can be improved with small changes to the ADC count thresholds. In all, a maximum of 243 hand gestures can be recognized using this prototype.
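The receiver-side match can be sketched as a simple table lookup. The gesture codes and wave-file names below are hypothetical examples, and the final line shows where the figure of 243 gestures comes from (five channels with three levels each):

```python
# Hypothetical gesture table: received 5-level codes → prerecorded wave files.
GESTURES = {
    (0, 0, 0, 0, 0): "hello.wav",
    (2, 2, 2, 0, 0): "i_need_water.wav",
    (0, 1, 2, 0, 2): "thank_you.wav",
}

def match(code):
    """Return the wave file for a received 5-tuple, or None if unrecognized."""
    return GESTURES.get(tuple(code))

match([0, 1, 2, 0, 2])   # → "thank_you.wav"

# Five channels, three levels each:
assert 3 ** 5 == 243     # the prototype's maximum gesture count
```

In the actual system this lookup and playback are performed in MATLAB; the Python version above only illustrates the logic.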
In the system flow shown in Fig.3, the analog values from the flex sensors and accelerometer corresponding to a particular hand gesture are first read and sent to the microcontroller; because the output from the voltage-divider series is very low, a current booster circuit is used. The microcontroller converts the analog values into digital values, which are transmitted via the zigbee transmitter. At the receiver, the received combination is checked for a match against the predetermined values stored in the MATLAB database; if matched, the corresponding message is played out. The hardware unit developed is shown in Fig.4.


This system goes a long way toward bridging the communication gap between the mute community and the rest of the world. It can be used at public places such as airports, railway stations, and the counters of banks and hotels, where communication is essential. The device may even be helpful in fire-fighting operations, and a mute person can deliver a lecture using it. It can also be applied in areas such as computer gaming, and a hearing person can use it to learn sign language. The system also offers several advantages: low cost, compactness, portability, flexibility to users, and low power consumption.


Sign language is a tool for communication between the mute community and others, but it is difficult for the mute community to communicate with people who do not understand sign language. This prototype was designed to automatically recognize sign language, helping hearing people communicate more effectively with speech-impaired people. The system recognizes hand signs using sensor gloves, and the recognized gestures are converted into speech that anyone can understand. The project thus aims to narrow the communication gap between the mute community and the rest of the world.

Figures at a glance

Figure 1 Figure 2 Figure 3 Figure 4