Indonesian Sign Language Interpreter Device Based on ATMega328 Microcontroller for Bali Deaf Community Denpasar

2021 ◽  
Vol 1 (2) ◽  
pp. 88-101
Author(s):  
I Wayan Sukadana ◽  
I Nengah Agus Mulia Adnyana ◽  
Erwani Merry Sartika

This study aims to design and build a Sign Language Interpreter Device with voice output, in the form of an ATMega328 microcontroller-based voice speaker module. The design focuses on translating 16 predetermined words of Indonesian Sign Language as used in Denpasar City, sensed with flex sensors and a gyro sensor read by the ATMega328 microcontroller and programmed with the Arduino IDE. The device is built around the ATMega328 with four analog flex sensors, a gyro sensor, a buzzer, an 8-ohm speaker, a 4 GB SD card for storing voice recordings, and a 7.4 V Li-Po battery. It is intended for hearing-impaired adults who can read and understand sign language. Voice output is produced by an MP3 player module integrated into the device. The flex sensor readings range from 998 to 1005 ADC counts (analog-to-digital converter) with the hand open and from 1006 to 1018 ADC counts with the hand closed, while the gyro pitch (Y axis) readings range from -10° to 76° and the gyro roll (X axis) readings range from -100° to 90°.
Keywords: ATMega328 microcontroller; Buzzer; Flex Sensor; Gyro Sensor
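To make the translation step concrete, the sketch below illustrates the kind of threshold logic such a device could use: the four flex-sensor readings are mapped to open/closed finger flags using the reported ADC ranges, and a match against a stored gesture table selects the word to play back. This is only an illustrative Python model under assumed names (the gesture table, `finger_states`, `classify_gesture` are hypothetical); the actual firmware is an Arduino C/C++ sketch running on the ATMega328, and the 16-word table is not published in the abstract.

```python
# Illustrative sketch of the gesture-classification logic described above.
# The real device runs Arduino firmware on the ATMega328; the threshold below
# follows the ADC ranges reported in the abstract, while the word table and
# the playback comment are hypothetical placeholders.

FLEX_CLOSED_THRESHOLD = 1006   # ADC counts: >= 1006 treated as a bent (closed) finger

def finger_states(flex_adc_values):
    """Map four flex-sensor ADC readings to open (0) / closed (1) flags."""
    return tuple(1 if v >= FLEX_CLOSED_THRESHOLD else 0 for v in flex_adc_values)

def classify_gesture(flex_adc_values, pitch_deg, roll_deg, gesture_table):
    """Return the word whose stored finger pattern and gyro windows match."""
    fingers = finger_states(flex_adc_values)
    for word, (pattern, pitch_range, roll_range) in gesture_table.items():
        if (fingers == pattern
                and pitch_range[0] <= pitch_deg <= pitch_range[1]
                and roll_range[0] <= roll_deg <= roll_range[1]):
            return word
    return None

# Hypothetical excerpt of the 16-word table: finger pattern, pitch window, roll window.
GESTURES = {
    "tolong": ((1, 1, 0, 0), (-10, 20), (-20, 20)),
    "makan":  ((1, 1, 1, 1), (30, 76), (-20, 20)),
}

if __name__ == "__main__":
    word = classify_gesture([1010, 1012, 999, 1001],
                            pitch_deg=10, roll_deg=5, gesture_table=GESTURES)
    print(word)   # on the device, a match would trigger the corresponding MP3 clip
```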

Author(s):  
Rachaell Nihalaani

Abstract: Sign language is invaluable to hearing- and speech-impaired people and is their only way of communicating among themselves. However, its reach is limited because most other people cannot interpret it. Sign language is communicated via hand gestures and visual cues, and is therefore used by hearing- and speech-impaired people to communicate with one another. These languages have alphabets and grammar of their own, which cannot be understood by people who do not know the specific symbols and rules. It has therefore become essential to interpret, understand and communicate via sign language in order to overcome the barriers of speech and communication, and this can be tackled with the help of machine learning. The model presented here is a sign language interpreter that is trained on a dataset of images and interprets sign language alphabets and sentences with 90.9% accuracy. This paper uses the ASL (American Sign Language) alphabet and a CNN as the classification algorithm, and ends with a summary of the model's viability and its usefulness for the interpretation of sign language. Keywords: Sign Language, Machine Learning, Interpretation Model, Convolutional Neural Networks, American Sign Language
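The abstract does not give the network architecture, so the following is only a minimal sketch of how a CNN alphabet classifier of this kind could be set up with TensorFlow/Keras; the 64x64 grayscale input size, layer sizes, epoch count, and the `asl_alphabet` directory layout are assumptions, not the authors' configuration.

```python
# Minimal sketch of a CNN alphabet classifier in the spirit of the model above.
# Input size, layer sizes, and dataset path are assumed, not taken from the paper.
import tensorflow as tf

NUM_CLASSES = 26          # ASL alphabet A-Z

def build_model():
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(64, 64, 1)),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dropout(0.3),
        tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
    ])

if __name__ == "__main__":
    # Hypothetical directory of per-letter image folders: asl_alphabet/A, asl_alphabet/B, ...
    train_ds = tf.keras.utils.image_dataset_from_directory(
        "asl_alphabet", image_size=(64, 64), color_mode="grayscale", batch_size=32)
    model = build_model()
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(train_ds, epochs=5)
```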


Author(s):  
Syar Meeze Mohd Rashid ◽  
Norlidah Alias ◽  
Zawawi Ismail

This article discusses issues and challenges faced by special education teachers in using Bahasa Isyarat Malaysia (BIM) to teach the deaf the basics of fardhu ain (PAFA). The first challenge is the shortage of Islamic terminology in sign language, which creates a communication barrier between teachers and deaf students when religious matters are discussed. The second is that the fardhu ain teachers themselves are not well-versed in sign language. The third is that the PAFA curriculum used for the deaf community is the same one used by other, typical Muslim learners, and is therefore unsuitable for deaf students.


2013 ◽  
Vol 61 (3) ◽  
pp. 691-696 ◽  
Author(s):  
R. Suszynski ◽  
K. Wawryn

Abstract: A rapid prototyping method for designing mixed-signal systems is presented in this paper. The method is based on a field programmable analog array (FPAA) used to configure and reconfigure mixed-signal systems, with a serial algorithmic analog-to-digital converter as an example. Three converter architectures were selected and implemented in an FPAA device. To verify and illustrate converter operation and the prototyping capabilities, the implemented converters were excited by a sinusoidal signal; the analog sinusoidal excitations, the digital responses and the sinusoidal waveforms after reconstruction are presented.
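As a rough illustration of the converter principle named above, the behavioural model below simulates a serial (cyclic/algorithmic) A/D conversion of a sinusoidal excitation and its reconstruction. It is a plain Python sketch of the algorithm, not the FPAA implementation from the paper, and the 8-bit resolution and reference voltage are assumed values.

```python
# Behavioural sketch of a serial (cyclic/algorithmic) A/D conversion and
# reconstruction, driven by a sinusoid as in the prototyping experiment.
import math

VREF = 1.0
N_BITS = 8

def cyclic_adc(sample, n_bits=N_BITS, vref=VREF):
    """Convert one bipolar sample in [-vref, vref] to a bit list, MSB first."""
    bits = []
    residue = sample
    for _ in range(n_bits):
        bit = 1 if residue >= 0 else 0
        bits.append(bit)
        # double the residue and subtract +/-vref depending on the bit decision
        residue = 2.0 * residue - (2 * bit - 1) * vref
    return bits

def reconstruct(bits, vref=VREF):
    """Rebuild the analog value from the bit stream (offset-binary weighting)."""
    return sum((2 * b - 1) * vref / 2 ** (i + 1) for i, b in enumerate(bits))

if __name__ == "__main__":
    for k in range(8):
        x = 0.9 * math.sin(2 * math.pi * k / 8)
        bits = cyclic_adc(x)
        print(f"x={x:+.3f}  bits={bits}  x_hat={reconstruct(bits):+.3f}")
```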


Author(s):  
Sukhendra Singh ◽  
G. N. Rathna ◽  
Vivek Singhal

Introduction: Sign language is the only way for speech-impaired people to communicate, but it is not widely known by hearing people, which creates a communication barrier. In this paper, we present a solution that captures hand gestures with a Kinect camera and classifies each gesture into its correct symbol. Method: We used a Kinect camera rather than an ordinary web camera because an ordinary camera does not capture the 3D orientation or depth of the scene, whereas the Kinect captures depth images, which makes classification more accurate. Result: The Kinect produces distinct images for the hand gestures '2' and 'V', and similarly for '1' and 'I', whereas a normal web camera cannot distinguish between them. We used Indian Sign Language hand gestures; our dataset contained 46,339 RGB images and 46,339 depth images, with 80% of the images used for training and the remaining 20% for testing. In total, 36 hand gestures were considered: 26 for the alphabets A-Z and 10 for the digits 0-9. Conclusion: Along with the real-time implementation, we compare the performance of various machine learning models and find that a CNN on depth images gives the most accurate performance. All results were obtained on a PYNQ-Z2 board.
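The abstract compares several machine learning models on the depth images; the sketch below only illustrates how such an 80/20 split and a baseline comparison could be wired up with scikit-learn. The `load_depth_dataset` function, image size, and the chosen baselines (k-NN, SVM) are placeholders, and the authors' CNN, which they report as the most accurate, is not reproduced here.

```python
# Hedged sketch of an 80/20 split and baseline-model comparison on depth
# images; the loading function returns stand-in data, not the 46,339-image set.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

def load_depth_dataset():
    """Placeholder: return flattened 64x64 depth images and 36-class labels."""
    rng = np.random.default_rng(0)
    X = rng.random((1000, 64 * 64))          # stand-in for the depth images
    y = rng.integers(0, 36, size=1000)       # 26 letters + 10 digits
    return X, y

if __name__ == "__main__":
    X, y = load_depth_dataset()
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=42, stratify=y)
    for name, model in [("k-NN", KNeighborsClassifier(n_neighbors=5)),
                        ("SVM", SVC(kernel="rbf"))]:
        model.fit(X_train, y_train)
        print(name, "accuracy:", model.score(X_test, y_test))
```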


2021 ◽  
Vol 11 (8) ◽  
pp. 3439
Author(s):  
Debashis Das Chakladar ◽  
Pradeep Kumar ◽  
Shubham Mandal ◽  
Partha Pratim Roy ◽  
Masakazu Iwamura ◽  
...  

Sign language is a visual language for communication used by hearing-impaired people with the help of hand and finger movements. Indian Sign Language (ISL) is a well-developed and standard way of communication for hearing-impaired people living in India. However, other people who use spoken language always face difficulty while communicating with a hearing-impaired person due to lack of sign language knowledge. In this study, we have developed a 3D avatar-based sign language learning system that converts the input speech/text into corresponding sign movements for ISL. The system consists of three modules. Initially, the input speech is converted into an English sentence. Then, that English sentence is converted into the corresponding ISL sentence using the Natural Language Processing (NLP) technique. Finally, the motion of the 3D avatar is defined based on the ISL sentence. The translation module achieves a 10.50 SER (Sign Error Rate) score.
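The abstract reports a 10.50 SER without defining the metric in detail; assuming SER is computed analogously to Word Error Rate over ISL gloss sequences, a minimal sketch would look like the following. The gloss examples and the `sign_error_rate` helper are illustrative, not the authors' code.

```python
# Sketch of a Sign Error Rate (SER) computation, assumed to mirror Word Error
# Rate: edit distance between predicted and reference gloss sequences,
# normalised by the reference length.
def edit_distance(ref, hyp):
    """Levenshtein distance between two token sequences."""
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution
    return d[len(ref)][len(hyp)]

def sign_error_rate(reference_glosses, predicted_glosses):
    """Edit distance normalised by the reference length, in percent."""
    return 100.0 * edit_distance(reference_glosses, predicted_glosses) / len(reference_glosses)

if __name__ == "__main__":
    ref = ["YOU", "NAME", "WHAT"]   # invented ISL gloss order, for illustration only
    hyp = ["YOU", "WHAT", "NAME"]
    print(f"SER = {sign_error_rate(ref, hyp):.1f}%")
```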


2012 ◽  
Vol 15 (2) ◽  
pp. 185-211 ◽  
Author(s):  
Susanne Mohr

The article analyses cross-modal language contact between signed and spoken languages with special reference to the Irish Deaf community. This is exemplified by an examination of the phenomenon of mouthings in Irish Sign Language including its origins, dynamics, forms and functions. Initially, the setup of language contact with respect to Deaf communities and the sociolinguistics of the Irish Deaf community are discussed, and in the main part the article analyses elicited data in the form of personal stories by twelve native signers from the Republic of Ireland. The major aim of the investigation is to determine whether mouthings are yet fully integrated into ISL and if so, whether this integration has ultimately caused language change. Finally, it is asked whether traditional sociolinguistic frameworks of language contact can actually tackle issues of cross-modal language contact occurring between signed and spoken languages.


