SIGN LANGUAGE TO VOICE RECOGNITION SYSTEM USING RASPBERRY Pi

2020 ◽ Vol 8 (2) ◽ pp. 14
Author(s): J. MANIKANDAN ◽ M. THANKAM ◽ K. P. AISHWARYA ◽ S. RADHA ◽ ...
Author(s): Gayathri. R ◽ K. Sheela Sobana Rani ◽ R. Lavanya

Silent speakers face many problems when communicating their thoughts and views. Furthermore, only a few people understand their sign language, so they tend to feel awkward taking part in activities with the general population and require sign-language interpreters for their conversations. To address this problem, a "Smart Finger Gesture Recognition System for Silent Speakers" is proposed to give them a better way to convey their message. Instead of full sign language, gestures are recognized from finger movements. The system consists of a data glove, flex sensors, and a Raspberry Pi. The flex sensors are fitted on the data glove and used to sense the finger gestures; an ADC module then converts the analog sensor values into digital form. After conversion, the values are passed to a Raspberry Pi 3, which translates the signals into audio output as well as text using a software tool. The proposed framework reduces the communication barrier between speech-impaired people and others: the recognized finger gestures are rendered as speech and text so that hearing people can easily communicate with speech-impaired people.
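The glove pipeline described in this abstract (flex sensors → ADC → gesture lookup → text/speech) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the gesture table, phrases, and ADC threshold are assumptions, and the real device would feed the matched phrase to a text-to-speech engine.

```python
# Hedged sketch of the glove pipeline: digitized flex-sensor readings are
# matched against calibrated bend patterns, and the matched gesture is
# rendered as text (and, on the real device, spoken aloud).

GESTURES = {
    # (thumb, index, middle, ring, little) -> phrase; a bent finger reads 1.
    (1, 0, 0, 0, 0): "hello",
    (1, 1, 0, 0, 0): "thank you",
    (1, 1, 1, 1, 1): "help me",
}

BEND_THRESHOLD = 512  # assumed midpoint of a 10-bit ADC range


def classify(adc_values):
    """Map five raw ADC readings to a phrase, or None if unrecognized."""
    key = tuple(1 if v > BEND_THRESHOLD else 0 for v in adc_values)
    return GESTURES.get(key)


if __name__ == "__main__":
    reading = [900, 870, 100, 120, 90]   # thumb and index bent
    print(classify(reading))             # -> thank you
```

In practice the threshold would be calibrated per finger, since flex sensors drift with wear and temperature.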


Electronics ◽ 2022 ◽ Vol 11 (1) ◽ pp. 168
Author(s): Mohsen Bakouri ◽ Mohammed Alsehaimi ◽ Husham Farouk Ismail ◽ Khaled Alshareef ◽ Ali Ganoun ◽ ...

Many wheelchair users depend on others to control the movement of their wheelchairs, which significantly affects their independence and quality of life. Smart wheelchairs offer a degree of self-dependence and the freedom to drive one's own vehicle. In this work, we designed and implemented a low-cost software and hardware method to steer a robotic wheelchair. We also developed our own Android mobile app based on the Flutter framework. A convolutional neural network (CNN)-based network-in-network (NIN) structure integrated with a voice recognition model was developed and configured to build the mobile app. The system was implemented over an offline Wi-Fi hotspot connecting the software and hardware components. Five voice commands (yes, no, left, right, and stop) guided and controlled the wheelchair through the Raspberry Pi and DC motor drives. The overall system was evaluated on an English speech corpus of isolated words, trained and validated with native Arabic speakers, to assess the performance of the Android application. The maneuverability of indoor and outdoor navigation was also evaluated in terms of accuracy. The results indicated an accuracy of approximately 87.2% in predicting the five voice commands. Additionally, in the real-time performance test, the root-mean-square deviation (RMSD) values between the planned and actual nodes for indoor and outdoor maneuvering were 1.721 × 10⁻⁵ and 1.743 × 10⁻⁵, respectively.
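The command-dispatch step this abstract describes (five recognized keywords driving two DC motors via the Raspberry Pi) can be sketched as a simple lookup. The duty-cycle values and the left/right split are illustrative assumptions, not taken from the paper, and the real system would pass them to a motor driver over GPIO/PWM.

```python
# Illustrative sketch: a recognized voice command is translated into
# left/right motor duty cycles that a Raspberry Pi motor driver would apply.

COMMAND_TO_MOTORS = {
    # command: (left motor duty %, right motor duty %); negative = reverse
    "yes":   (60, 60),    # move forward
    "no":    (-60, -60),  # reverse
    "left":  (20, 60),    # curve left
    "right": (60, 20),    # curve right
    "stop":  (0, 0),
}


def dispatch(command):
    """Return the motor duty pair for a recognized command; default to stop."""
    return COMMAND_TO_MOTORS.get(command, (0, 0))


if __name__ == "__main__":
    print(dispatch("left"))      # -> (20, 60)
    print(dispatch("unknown"))   # unrecognized commands fail safe: (0, 0)
```

Defaulting unrecognized input to a full stop is the safety-relevant design choice here: a misheard command should never keep the chair moving.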


2019 ◽ Vol 7 (2) ◽ pp. 43
Author(s): MALHOTRA POOJA ◽ K. MANIAR CHIRAG ◽ V. SANKPAL NIKHIL ◽ R. THAKKAR HARDIK ◽ ...

2020 ◽ Vol 14
Author(s): Vasu Mehra ◽ Dhiraj Pandey ◽ Aayush Rastogi ◽ Aditya Singh ◽ Harsh Preet Singh

Background: People with hearing and speaking disabilities have only a few ways of communicating with others; one of these is sign language. Objective: A sign language recognition system therefore becomes essential for deaf and mute people. The recognition system acts as a translator between a disabled and an able person, eliminating hindrances in the exchange of ideas. Most existing systems are poorly designed, with limited support for users' day-to-day needs. Methods: The proposed system, embedded with gesture recognition capability, extracts signs from a video sequence and displays them on screen. In addition, speech-to-text and text-to-speech components are included to further assist affected users. To get the best out of the human-computer relationship, the proposed solution combines several cutting-edge technologies with machine-learning-based sign recognition models trained using the TensorFlow and Keras libraries. Result: The proposed architecture works better than gesture recognition techniques such as background elimination and conversion to HSV because of the sharply defined image supplied to the model for classification. Testing indicates a reliable recognition system with high accuracy that covers most of the features a deaf or mute person needs in day-to-day tasks. Conclusion: Current technological advances call for reliable solutions that help deaf and mute people adjust to everyday life. Instead of focusing on a standalone technology, several are combined in this work. The proposed sign recognition system is based on feature extraction and classification; the trained model identifies the different gestures.
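The "sharply defined image" this abstract credits for its accuracy comes from frame preprocessing before classification. A minimal NumPy-only sketch of such a step is below; the grayscale-plus-normalization scheme and the frame shape are illustrative assumptions, while the actual classifier in the paper is a TensorFlow/Keras model.

```python
# Hedged sketch of a frame-preprocessing step: each RGB video frame is
# grayscaled and contrast-stretched to [0, 1] so the downstream classifier
# receives a consistent, sharply defined input.
import numpy as np


def preprocess_frame(frame):
    """RGB uint8 frame (H, W, 3) -> float32 (H, W) stretched to [0, 1]."""
    gray = frame.astype(np.float32).mean(axis=2)  # simple luminance proxy
    gray -= gray.min()                            # shift minimum to zero
    peak = gray.max()
    return gray / peak if peak > 0 else gray      # guard against flat frames
```

The stretched array would then be resized and batched as input to the CNN; normalizing per frame makes the model less sensitive to lighting changes between video sequences.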


Author(s): Basavaraj N Hiremath ◽ Malini M Patil

A voice recognition system recognizes signals through feature extraction and identification of related parameters; the whole process is referred to as voice analytics. This paper analyzes and synthesizes the phonetics of voice using a computer program called PRAAT. The work also covers voice segmentation labelling, analysis of unique voice cues, and the physics of voice, and the process is further applied to recognize sarcasm. The unique features identified in this work are intensity, pitch, and formants, relating to read, spoken, interactive, and declarative sentences, analyzed using principal component analysis.
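PRAAT's pitch tracker is far more sophisticated, but the core idea behind one of the features named above (pitch) can be sketched with a plain autocorrelation estimate: the lag of the strongest self-similarity in a voiced frame gives the fundamental period. The sample rate and search band below are illustrative assumptions, not PRAAT's defaults.

```python
# Minimal autocorrelation pitch estimator (illustrative; PRAAT uses a
# windowed, interpolated variant of this idea).
import numpy as np


def estimate_pitch(signal, sample_rate, fmin=75.0, fmax=500.0):
    """Estimate fundamental frequency (Hz) by autocorrelation peak picking."""
    signal = signal - signal.mean()
    corr = np.correlate(signal, signal, mode="full")[len(signal) - 1:]
    lo = int(sample_rate / fmax)              # smallest lag to consider
    hi = int(sample_rate / fmin)              # largest lag to consider
    best_lag = lo + int(np.argmax(corr[lo:hi]))
    return sample_rate / best_lag


if __name__ == "__main__":
    sr = 16000
    t = np.arange(sr) / sr
    tone = np.sin(2 * np.pi * 220.0 * t)      # a 220 Hz test tone
    print(estimate_pitch(tone, sr))           # a value near 220
```

Restricting the lag search to a plausible speech band (75-500 Hz here) is what keeps the estimator from locking onto the trivial lag-zero peak or onto octave errors.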


2020 ◽ Vol 67 (1) ◽ pp. 133-141
Author(s): Dmitriy O. Khort ◽ Aleksei I. Kutyrev ◽ Igor G. Smirnov ◽ Rostislav A. Filippov ◽ Roman V. Vershinin

Technological capabilities of agricultural units cannot be used optimally without extensive automation of production processes and advanced computer control systems. (Research purpose) To develop an algorithm for recognizing the location coordinates and ripeness of garden strawberries in different lighting conditions, and to describe the technological process of harvesting them in field conditions using a robotic actuator mounted on a self-propelled platform. (Materials and methods) The authors developed a self-propelled platform with an automatic actuator for harvesting garden strawberries, which includes an actuator with six degrees of freedom, a coaxial gripper, MG966R servos, a PCA9685 controller, a Logitech HD C270 computer vision camera, a single-board Raspberry Pi 3 Model B+ computer, VL53L0X laser sensors, an SZBK07 300 W voltage regulator, and a Hubsan X4 Pro H109S Li-polymer battery. (Results and discussion) Using Python 3.7.2, the authors developed a control algorithm for the automatic actuator, including operations to determine the X and Y coordinates of berries and their degree of maturity, and to calculate the distance to the berries. The effectiveness of detecting berries, their area, and their boundaries with the camera and the OpenCV library reaches 94.6 percent at an illumination of 300 lux. With the robotic platform speed increased to 1.5 kilometers per hour, the average area of the recognized berries, compared to their real area, decreased by 9 percent to 95.1 square centimeters at an illumination of 300 lux, by 17.8 percent to 88 square centimeters at 200 lux, and by 36.4 percent to 76 square centimeters at 100 lux.
(Conclusions) The authors provided a rationale for the technological process and developed an algorithm for harvesting garden strawberries using a robotic actuator mounted on a self-propelled platform. It was shown that lighting conditions have a significant impact on determining the area, boundaries, and ripeness of berries with a computer vision camera.
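The berry-detection step whose area measurements the illumination experiments above compare can be sketched as a color-mask pass. This NumPy-only sketch stands in for the paper's OpenCV pipeline: pixels whose red channel clearly dominates green and blue are counted as "ripe berry", and the mask's pixel count is the detected area. The dominance ratio and minimum-red threshold are assumptions for illustration.

```python
# Hedged sketch of ripe-berry area estimation by red-dominance masking.
import numpy as np


def ripe_berry_area(rgb, dominance=1.4, min_red=80):
    """Return the pixel count where red clearly dominates green and blue."""
    r = rgb[..., 0].astype(np.float32)
    g = rgb[..., 1].astype(np.float32)
    b = rgb[..., 2].astype(np.float32)
    mask = (r > min_red) & (r > dominance * g) & (r > dominance * b)
    return int(mask.sum())


if __name__ == "__main__":
    img = np.zeros((10, 10, 3), dtype=np.uint8)
    img[2:5, 2:5] = [200, 40, 40]        # a 3x3 synthetic "berry"
    print(ripe_berry_area(img))          # -> 9
```

A fixed threshold like this is exactly what degrades under low illumination: as light drops, red values fall toward `min_red` and berry pixels fall out of the mask, which is consistent with the shrinking recognized areas reported above.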


IEEE Access ◽ 2021 ◽ Vol 9 ◽ pp. 59612-59627
Author(s): Mohamed A. Bencherif ◽ Mohammed Algabri ◽ Mohamed A. Mekhtiche ◽ Mohammed Faisal ◽ Mansour Alsulaiman ◽ ...
