Speaking System for Mute Peoples

Author(s):  
Jyoti Nigde ◽  
Omkar Parit ◽  
Sagar Parab ◽  
Dr. S. D. Shribahadurkar ◽  
Prof. D. P. Potdar

This interactive device is a microcontroller-based system designed to reduce the communication gap between mute and hearing people. The system can be configured to work as a “smart device”. In this paper, an ATmega328 microcontroller, a voice module, an LCD display, and flex sensors are used. The device consists of a glove and a microcontroller-based system. The data glove detects hand motion, and the microcontroller-based system interprets those movements into human voice output. The data glove is fitted with four flex sensors. This system is helpful for mute people: the wearer's hand movements are converted into a speech signal by means of the data glove worn on the hand.
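The glove-to-speech idea above can be sketched as a lookup from flex-sensor readings to stored phrases. This is a minimal illustrative sketch, not the authors' firmware: the ADC threshold, the bent/straight patterns, and the phrases are all assumptions.

```python
# Hypothetical sketch: mapping four flex-sensor readings (10-bit ADC counts)
# to stored phrases, as the ATmega328 firmware might do.
BEND_THRESHOLD = 512  # assumed mid-scale value separating straight/bent

PHRASES = {
    (1, 0, 0, 0): "I need water",
    (0, 1, 0, 0): "I need help",
    (1, 1, 0, 0): "Thank you",
    (1, 1, 1, 1): "Hello",
}

def classify(readings):
    """Convert raw flex readings (0-1023) into a bent/straight pattern."""
    pattern = tuple(1 if r > BEND_THRESHOLD else 0 for r in readings)
    return PHRASES.get(pattern, "unknown gesture")

print(classify([900, 100, 80, 120]))   # -> "I need water"
print(classify([900, 850, 800, 900]))  # -> "Hello"
```

In the real device, the matched phrase index would be sent to the voice module and echoed on the LCD rather than printed.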

2015 ◽  
Vol 2015 ◽  
pp. 1-9
Author(s):  
Pei-Jarn Chen ◽  
Yi-Chun Du

This paper proposes a portable system for hand motion identification (HMI) using features from a data glove with bend sensors and multichannel surface electromyography (SEMG). SEMG provides information about muscle activities indirectly for HMI. However, it is difficult to discriminate finger motions such as extension of the thumb and little finger using SEMG alone, so a data glove with five bend sensors is included in the proposed system to detect finger motions. Independent component analysis (ICA) and grey relational analysis (GRA) are used for data reduction and as the core of the identification, respectively. Six features are extracted from each SEMG channel, and three features are computed from the five bend sensors in the data glove. To test the feasibility of the system, this study quantitatively compares the classification accuracies of twenty hand motions collected from 10 subjects. Compared to the performance of a back-propagation neural network and of the GRA method alone, the proposed method provides equivalent accuracy (>85%) with three training sets and faster processing time (20 ms). The results also demonstrate that ICA can effectively reduce the size of the input features used with GRA and, in turn, reduce the processing time, at the small cost of a slightly lower identification rate.
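The GRA classification step described above can be illustrated with a small sketch: a test feature vector is matched to the class template with the highest grey relational grade. The per-candidate normalisation, the templates, and the sample values are simplifications and placeholders, not the paper's actual features.

```python
# Illustrative grey relational analysis (GRA) classifier sketch.
def grey_relational_grade(reference, candidate, rho=0.5):
    """Grey relational grade between a reference and a candidate sequence."""
    deltas = [abs(r - c) for r, c in zip(reference, candidate)]
    d_min, d_max = min(deltas), max(deltas)
    coeffs = [(d_min + rho * d_max) / (d + rho * d_max) for d in deltas]
    return sum(coeffs) / len(coeffs)

def classify(sample, templates):
    """Return the label of the template with the highest relational grade."""
    return max(templates, key=lambda lbl: grey_relational_grade(sample, templates[lbl]))

templates = {  # made-up normalised bend-sensor templates
    "fist":  [0.9, 0.8, 0.85, 0.9, 0.8],
    "open":  [0.1, 0.15, 0.1, 0.05, 0.1],
    "point": [0.1, 0.9, 0.85, 0.9, 0.9],
}
print(classify([0.12, 0.88, 0.8, 0.92, 0.85], templates))  # -> "point"
```

In the paper's full pipeline, the inputs would first pass through ICA-based feature reduction before this matching step.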


2018 ◽  
Vol 7 (3.34) ◽  
pp. 172
Author(s):  
R Premkumar ◽  
Jayanarayani K ◽  
Lavanya J ◽  
Nirubaa A.G ◽  
Thirupurasundari K

The project is about a smart way of collecting and disposing of garbage. A system is designed in which the lid opens and closes automatically when the sensor senses hand motion. The level of the waste is measured using a sonar sensor, and the smell from the waste is detected using a gas sensor. To counter the smell, a sprayer placed inside the dustbin is activated when the signal is sent. A motor attached to the lid compresses the garbage for further dumping. Once the bin is full, the lid closes automatically and a message is sent through GSM. The status of the bin is displayed as a message outside the bin on an LCD. The problems of overflowing garbage and odour are avoided, leading to a more hygienic environment.
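The control logic above can be sketched as one polling cycle mapping sensor inputs to actuator states. The threshold values and the sensor/actuator names are stand-ins, not the authors' actual firmware.

```python
# Hypothetical per-cycle control sketch of the smart dustbin described above.
FULL_DISTANCE_CM = 5    # assumed sonar distance meaning "bin full"
SMELL_THRESHOLD = 400   # assumed gas-sensor level that triggers the sprayer

def step(hand_detected, sonar_cm, gas_level):
    """Decide actuator states for one polling cycle."""
    bin_full = sonar_cm <= FULL_DISTANCE_CM
    return {
        "lid_open": hand_detected and not bin_full,  # lid stays shut once full
        "spray": gas_level > SMELL_THRESHOLD,        # neutralise odour
        "send_sms": bin_full,                        # notify via GSM
        "lcd_text": "BIN FULL" if bin_full else "OK",
    }

print(step(hand_detected=True, sonar_cm=30, gas_level=120))
```

A real implementation would run this in a loop and debounce the sonar and gas readings before acting on them.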


Sensors ◽  
2015 ◽  
Vol 15 (12) ◽  
pp. 31644-31671 ◽  
Author(s):  
Ewout Arkenbout ◽  
Joost de Winter ◽  
Paul Breedveld

2015 ◽  
Vol 713-715 ◽  
pp. 1847-1850
Author(s):  
Zhong Zhu Huang ◽  
Zhi Quan Feng ◽  
Na Na He ◽  
Xue Wen Yang

Gestures move at different speeds. To reflect the user's varying speed, this paper presents a gesture speed estimation method. Firstly, we use a data glove and a camera to establish the relation between the variation of the gesture contour and that of the gesture speed. Secondly, we build the gesture speed estimation model in stages. Finally, we obtain the real-time speed of hand motion through this model and complete the interactive task. The main innovation of this paper is revealing the relation between gesture contour and speed, laying the foundation for further capturing the user's interaction intention. Experimental results indicate that the time cost of our method decreased by 31% compared with freehand tracking based on behavioural models, and that the 3D interactive system based on our model provides a good user experience.
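The core idea above, relating contour variation to speed, can be reduced to a toy sketch: larger frame-to-frame changes in a contour measurement imply faster hand motion. The linear scaling coefficient and the area values are illustrative, not the paper's fitted model.

```python
# Toy sketch: estimate per-frame hand speed from changes in contour area.
def contour_speed(areas, fps=30, k=0.01):
    """Map frame-to-frame contour-area changes to a speed estimate."""
    return [k * abs(a1 - a0) * fps for a0, a1 in zip(areas, areas[1:])]

areas = [1000, 1040, 1120, 1125]  # contour areas over four camera frames
print(contour_speed(areas))       # larger jumps -> higher estimated speed
```

The paper builds this mapping in stages and calibrates it against data-glove measurements; this sketch only shows the shape of the input-output relation.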


Our country, India, has a very large population, and almost 9 million people in the world are deaf, mute, or both. The most valuable gifts given to humans are the abilities to see, hear, speak, and respond as the situation demands. Communication is one of the most important media through which a person can share feelings or convey information to others. The key elements of communication are the abilities to listen and to speak, but many among us, the deaf and mute, are not gifted with these abilities. Nowadays much research is directed at solving the difficulties these members of our society face, because it is very hard for them to communicate with hearing people. It is very difficult for mute (deaf and dumb) people to convey their message to others, and since most people are not trained to understand sign language, communication between the two groups becomes very complex. In an emergency, or even in everyday travel, passing a message becomes very hard for a mute person. Because of this disability, a person with hearing and speaking impairments may be reluctant to compete alongside others. Communication for mute people is visual, not acoustic, so hand motion plays a very important part in it. Data transmission between deaf-mute and hearing people is always a challenging job: deaf people use sign language or gestures to express what they are trying to say, but these are hard for hearing people to understand. Access to various communication technologies therefore plays an essential role for these people, and developing a small and compact gadget for them is a difficult task.
Deaf-mute people find it difficult to communicate with hearing people and hence tend to stay apart in society. They use sign languages for communication, yet they face many difficulties because hearing people are not always able to understand sign language, so there is a persistent hurdle in communication between the two groups. Much research has been done to find a simple and easy way for mute people to communicate with hearing people and express themselves to the rest of the world. Many improvements have been made in sign-language technology, but most are based on American Sign Language. Our work is designed to aid the deaf-mute by developing and designing a smart communication module that translates sign language into text and speech, helping them lead a much better life. This paper presents a static gesture recognition algorithm that will be designed and implemented in the smart communication system to bridge the communication gap between deaf-mute and hearing people. Once fully implemented, the algorithm can also be used to capture and analyse people's emotions in areas where high security is desired.


Author(s):  
Amal S. Eldesoky ◽  
Kamel Eid ◽  
Aboubakr M. Abdullah

Deaf impairment is among the most substantial health problems worldwide and can lead to various personal, economic, and social crises. Therefore, it is critical to develop an efficient way to facilitate communication between deaf-mute and hearing people. Herein, we have designed a new digitally computerised mobile smart system as an efficient communication tool between deaf and hearing Arabic speakers. It is based on two main steps. The first is creating a digital output for the hand gestures using glove flex sensors equipped with a three-axis accelerometer and controlled by a microcontroller; the digital results are compared against a word-based database, since Arabs use expressions rather than the alphabet in their communication. The second step is the translation of the outputs of the first stage into written text and voice. The newly developed system also allows deaf Arabic users to translate the words of hearing people into gestures using a speech recognition system, with an impressive accuracy of over 90%, without the need for a webcam, coloured gloves, or an online translator. The presented system can be used on any Android or Windows device.
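The two-step flow above can be sketched as a lookup of a digitised glove pattern in a word-based database, followed by an output stage. The patterns, transliterated words, and the dominant-axis encoding are invented placeholders, not the authors' database.

```python
# Minimal sketch of the described pipeline: digitised gesture -> word -> text.
WORD_DATABASE = {
    # (flex pattern, dominant accelerometer axis) -> Arabic word (transliterated)
    ((1, 1, 0, 0, 0), "x"): "marhaban",   # hello
    ((0, 0, 0, 0, 1), "y"): "shukran",    # thank you
}

def glove_to_word(flex_pattern, accel_axis):
    """First stage: match the digitised glove state against the word database."""
    return WORD_DATABASE.get((tuple(flex_pattern), accel_axis), "<no match>")

def emit(word):
    """Second stage: produce the written text; a TTS engine would speak it."""
    return f"TEXT: {word}"

print(emit(glove_to_word([1, 1, 0, 0, 0], "x")))  # -> TEXT: marhaban
```

Matching whole words rather than letters reflects the paper's point that Arabic signers communicate in expressions, not spelled-out alphabet.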


Author(s):  
Gayathri. R ◽  
K. Sheela Sobana Rani ◽  
R. Lavanya

Silent speakers face many problems when it comes to communicating their thoughts and views. Furthermore, only a few people know their sign language. They tend to feel awkward taking part in activities with other individuals, and they require sign-language interpreters for their interactions. As a solution, to give them a better way to get their message across, a “Smart Finger Gesture Recognition System for Silent Speakers” has been proposed. Instead of using full sign language, gesture recognition is performed from finger movements. The system consists of a data glove, flex sensors, and a Raspberry Pi. The flex sensors are fitted on the data glove and are used to recognise finger gestures. An ADC module converts the analogue values into digital form. After signal conversion, the values are given to a Raspberry Pi 3, which converts the signals into audio output as well as text format using a software tool. The proposed framework reduces the communication barrier between mute individuals and hearing individuals: the recognised finger gestures are converted into speech and text so that hearing people can easily communicate with mute people.
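The recognition step described above can be sketched as nearest-template matching on the digitised flex values. The gesture templates and ADC counts are invented for illustration; the actual system would then hand the matched label to a text/speech tool on the Raspberry Pi.

```python
# Rough sketch of the signal path: ADC readings -> nearest gesture template.
GESTURES = {  # hypothetical stored templates of digitised flex readings
    "yes":  [200, 210, 800, 820, 790],
    "no":   [810, 790, 200, 210, 215],
    "help": [800, 810, 805, 795, 820],
}

def nearest_gesture(adc_values):
    """Match one sampled frame to the closest stored template (squared error)."""
    def distance(template):
        return sum((a - t) ** 2 for a, t in zip(adc_values, template))
    return min(GESTURES, key=lambda name: distance(GESTURES[name]))

reading = [205, 205, 795, 815, 800]   # one digitised frame from the ADC
print(nearest_gesture(reading))       # -> "yes"
```

Nearest-template matching is one simple way to tolerate sensor noise; the paper does not specify its exact matching rule, so this is an assumption.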


Sensors ◽  
2019 ◽  
Vol 19 (21) ◽  
pp. 4680 ◽  
Author(s):  
Linjun Jiang ◽  
Hailun Xia ◽  
Caili Guo

Tracking detailed hand motion is a fundamental research topic in the area of human-computer interaction (HCI) and has been widely studied for decades. Existing solutions with single-modality inputs either require tedious calibration, are expensive, or lack sufficient robustness and accuracy due to occlusions. In this study, we present a real-time system that reconstructs the exact hand motion by iteratively fitting a triangular mesh model to the absolute measurement of the hand from a depth camera, under the robust restriction of a simple data glove. We redefine and simplify the function of the data glove to alleviate its limitations, i.e., tedious calibration, cumbersome equipment, and hampered movement, and to keep our system lightweight. For accurate hand tracking, we introduce a new set of degrees of freedom (DoFs), a shape adjustment term for personalizing the triangular mesh model, and an adaptive collision term to prevent self-intersection. For efficiency, we extract a strong pose-space prior from the data glove to narrow the pose search space. We also present a simplified approach for computing tracking correspondences without loss of accuracy, to reduce computation cost. Quantitative experiments show that our system achieves accuracy comparable to or higher than the state of the art, with about a 40% improvement in robustness. Besides, our system runs independently of the Graphics Processing Unit (GPU) and reaches 40 frames per second (FPS) at about 25% Central Processing Unit (CPU) usage.
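The fitting idea above, a depth data term regularised by a glove prior, can be shown on a toy one-dimensional example: a single joint angle is optimised to match a depth observation while staying near the glove's coarse reading. The energy weights, learning rate, and "sensor" values are illustrative only, not the paper's actual objective.

```python
# Toy 1-DoF illustration: gradient descent on a depth term plus a glove prior.
def fit_angle(depth_obs, glove_prior, w_prior=0.2, steps=200, lr=0.05):
    theta = glove_prior                       # start from the glove estimate
    for _ in range(steps):
        # E = (theta - depth_obs)^2 + w_prior * (theta - glove_prior)^2
        grad = 2 * (theta - depth_obs) + 2 * w_prior * (theta - glove_prior)
        theta -= lr * grad
    return theta

# Depth says 40 deg, glove says 30 deg: the estimate lands between the two,
# pulled mostly toward the (more accurate) depth measurement.
print(round(fit_angle(40.0, 30.0), 2))
```

The real system fits a full triangular mesh with many DoFs, shape and collision terms; this sketch only shows how a glove prior both initialises and constrains the search.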


2018 ◽  
Vol 39 (4) ◽  
pp. 532-540 ◽  
Author(s):  
Bor-Shing Lin ◽  
I-Jung Lee ◽  
Pei-Ying Chiang ◽  
Shih-Yuan Huang ◽  
Chih-Wei Peng
