Deep Learning Techniques for Spanish Sign Language Interpretation

2021, Vol 2021, pp. 1-10
Author(s):  
Ester Martinez-Martin ◽  
Francisco Morillas-Espejo

Around 5% of the world's population suffers from hearing impairment. One of its main barriers is communication with others, which can lead to social exclusion and frustration. To overcome this issue, this paper presents a system to interpret the Spanish sign language alphabet, which makes communication possible in cases where it is necessary to sign proper nouns such as names, streets, or trademarks. For this, we first generated an image dataset of the 30 signed letters composing the Spanish alphabet. Then, given that there are both static and in-motion letters, two different kinds of neural networks were tested and compared: convolutional neural networks (CNNs) and recurrent neural networks (RNNs). A comparative analysis of the experimental results highlights the importance of the spatial dimension relative to the temporal dimension in sign interpretation: CNNs obtain much better accuracy, reaching a maximum of 96.42%.
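As a minimal sketch of the better-performing CNN branch described above, consider the following Keras model. The input resolution, layer widths, and training settings are illustrative assumptions, not the paper's exact architecture.

```python
# Hypothetical CNN classifier for the 30 signed letters of the
# Spanish alphabet; all hyperparameters here are assumptions.
import tensorflow as tf
from tensorflow.keras import layers

NUM_LETTERS = 30          # letters in the Spanish sign alphabet dataset
IMG_SHAPE = (64, 64, 3)   # assumed input resolution

model = tf.keras.Sequential([
    layers.Conv2D(32, 3, activation="relu", input_shape=IMG_SHAPE),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(NUM_LETTERS, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_images, train_labels, epochs=10,
#           validation_data=(val_images, val_labels))
```

For the in-motion letters, the paper's RNN branch would instead consume a sequence of frames per sign; only the static-image CNN is sketched here.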

Author(s):  
Pablo Díaz-Moreno ◽  
Juan José Carrasco ◽  
Emilio Soria-Olivas ◽  
José M. Martínez-Martínez ◽  
Pablo Escandell-Montero ◽  
...  

Neural networks (NNs) are among the most widely used machine learning techniques across many areas of knowledge. This has led to the emergence of a large number of neural network courses around the world, often in fields where users of the technique have little programming experience. Current software that implements these models, such as Matlab®, has a number of important limitations in the teaching field. In some cases, implementing an MLP requires thorough knowledge of the software and of the instructions that train and validate these systems. In other cases, the architecture of the model is fixed, and the software does not allow an automatic sweep of the parameters that determine the network's architecture. This chapter presents a Matlab®-based teaching tool for use in courses on neural models that overcomes some of the above-mentioned limitations.
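Although the chapter's tool is built on Matlab®, the kind of automatic architecture sweep it provides can be sketched in Python with scikit-learn; the dataset and grid values below are arbitrary assumptions for illustration.

```python
# Sweep candidate MLP architectures by cross-validation and report
# the best one; a stand-in for the tool's automatic parameter sweep.
from sklearn.datasets import load_digits
from sklearn.model_selection import GridSearchCV
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)

grid = GridSearchCV(
    MLPClassifier(max_iter=500),
    param_grid={"hidden_layer_sizes": [(10,), (25,), (50,), (25, 25)]},
    cv=5,
)
grid.fit(X, y)
print("best architecture:", grid.best_params_)
print("cross-validated accuracy:", grid.best_score_)
```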


Author(s):  
Franc Solina ◽  
Slavko Krapez ◽  
Ales Jaklic ◽  
Vito Komac

Deaf people, as a marginalized community, may have severe problems communicating with hearing people. They often struggle even with tasks that are simple for hearing people, such as understanding written language. However, deaf people are very skilled in using a sign language, which is their native language. A sign language is a set of signs or hand gestures. A gesture in a sign language equals a word in a written language; similarly, a sentence in a written language equals a sequence of gestures in a sign language. In the distant past, deaf people were discriminated against and believed to be incapable of learning and thinking independently. Only after the year 1500 were the first attempts made to educate deaf children. An important breakthrough was the realization that hearing is not a prerequisite for understanding ideas. One of the most important early educators of the deaf, and the first promoter of sign language, was Charles Michel De L’Epée (1712-1789) in France. He founded the first public school for deaf people, and his teachings about sign language quickly spread all over the world. Like spoken languages, different sign languages and dialects evolved around the world. According to the National Association of the Deaf, American Sign Language (ASL) is the third most frequently used language in the United States, after English and Spanish. ASL has more than 4,400 distinct signs. The Slovenian sign language (SSL), which is used in Slovenia and also serves as the case-study sign language in this chapter, contains approximately 4,000 different gestures for common words. Signs require one or both hands for signing. Facial expressions that accompany signing are also important, since they can modify the basic meaning of a hand gesture. To communicate proper nouns and obscure words, sign languages employ finger spelling. Since the majority of signing is with full words, signed conversation can proceed at the same pace as spoken conversation.


Sign language is a visual language that uses body postures and facial expressions. It is generally used by hearing-impaired people as a means of communication. According to the World Health Organization (WHO), around 466 million people (5% of the world's population) live with hearing and speech impairment. Hearing people generally do not understand sign language, and hence there is a communication gap between hearing-impaired people and others. Different phonemic scripts, such as the HamNoSys notation, have been developed to describe sign language using symbols. With developments in the field of artificial intelligence, we are now able to overcome the limitations of communication between people using different languages. A sign language translating system converts sign to text or speech, whereas a sign language generating system converts speech or text to sign language. Sign language generating systems were developed so that hearing people can use them to display signs to hearing-impaired people. This survey consists of a comparative study of the approaches and techniques used to generate sign language. We discuss the general architecture and applications of sign language generating systems.
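The general architecture such surveys describe can be summarized as a three-stage pipeline: text to gloss sequence, gloss sequence to a notation such as HamNoSys, and notation to a rendered sign. The Python sketch below is purely schematic; every function and the tiny lexicon are hypothetical placeholders, not any real system's API.

```python
# Schematic text-to-sign pipeline; all names and data are hypothetical.

def text_to_gloss(sentence: str) -> list[str]:
    # Real systems lemmatize and reorder words into sign-language
    # gloss order, typically with a trained translation model.
    return sentence.upper().split()

def gloss_to_notation(glosses: list[str]) -> list[str]:
    # Real systems look up each gloss in a HamNoSys lexicon and
    # fall back to finger spelling for unknown words.
    lexicon = {"HELLO": "<hamnosys:hello>", "WORLD": "<hamnosys:world>"}
    return [lexicon.get(g, f"<fingerspell:{g}>") for g in glosses]

def render_signs(notation: list[str]) -> None:
    # Final stage: drive a signing avatar or retrieve sign videos.
    for symbol in notation:
        print("render", symbol)

render_signs(gloss_to_notation(text_to_gloss("hello world")))
```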


Author(s):  
Prof. Namrata Ghuse

Sign language recognition through technology has been a neglected idea, even though an enormous community could profit from it. More than 3% of the world's population cannot speak or hear properly. Hand-gesture-based communication lets speech- and hearing-impaired people communicate with each other and with the rest of the world's population. However, most hearing people are not familiar with sign-language-based communication, which creates a gap between impaired people and others. Previous versions of this project involved the concepts of image generation and emoji symbols, but those frameworks were neither affordable nor portable for the impaired person. The main purpose of the project has always been to interpret Indian Sign Language and American Sign Language standards, convert gestures into voice and text, and let an impaired person interact with another person from a remote location. The smart glove is built with a gyroscope, flex sensors, an ESP32 microcontroller/micro:bit, an accelerometer, a 25-LED matrix actuator/output, and a vibrator.
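As a rough illustration of how such a glove could work, the MicroPython-style sketch below (targeting an ESP32) reads flex sensors through the ADC and matches the finger-bend pattern against stored gesture templates. The pin numbers, threshold, and template table are invented for illustration and are not the project's actual values.

```python
# MicroPython (ESP32) sketch of the glove's sensing loop; pins,
# threshold, and gesture templates are assumptions, not real values.
from machine import ADC, Pin
import time

flex_pins = [34, 35, 32, 33, 36]       # one assumed ADC pin per finger
sensors = []
for p in flex_pins:
    adc = ADC(Pin(p))
    adc.atten(ADC.ATTN_11DB)           # use the full 0-3.3 V input range
    sensors.append(adc)

BEND_THRESHOLD = 2000                  # raw ADC value counted as "bent"
GESTURES = {                           # per-finger bent flags to letter
    (1, 0, 0, 0, 0): "A",
    (0, 1, 1, 0, 0): "V",
}

while True:
    fingers = tuple(int(s.read() > BEND_THRESHOLD) for s in sensors)
    letter = GESTURES.get(fingers)
    if letter:
        print("signed letter:", letter)  # would feed the text/voice output
    time.sleep_ms(100)
```

A real implementation would additionally fuse the gyroscope and accelerometer readings to capture in-motion gestures rather than static finger poses alone.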


Author(s):  
Rachaell Nihalaani

Abstract: Sign language is invaluable to hearing- and speech-impaired people and is their only way of communicating among themselves. However, its reach is limited, because most other people have no knowledge of sign language interpretation. Sign language is communicated via hand gestures and visual modes and is therefore used by hearing- and speech-impaired people to intercommunicate. These languages have alphabets and grammar of their own, which cannot be understood by people who have no knowledge of the specific symbols and rules. Thus, it has become essential for everyone to be able to interpret, understand, and communicate via sign language to overcome and alleviate the barriers of speech and communication. This can be tackled with the help of machine learning. Our model is a sign language interpreter that uses a dataset of images and interprets the sign language alphabet and sentences with 90.9% accuracy. For this paper, we used the ASL (American Sign Language) alphabet and a CNN. The paper ends with a summary of the model's viability and its usefulness for the interpretation of sign language.
Keywords: Sign Language, Machine Learning, Interpretation Model, Convolutional Neural Networks, American Sign Language
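As a hedged sketch of the interpretation step whose accuracy the paper reports, the snippet below runs a trained CNN over test images and maps each predicted class index back to an ASL letter. The model file name, the label list, and the preloaded test_images/test_labels arrays are assumptions for illustration.

```python
# Hypothetical inference step for a trained ASL-alphabet CNN;
# assumes test_images and test_labels are already loaded as arrays.
import string
import numpy as np
import tensorflow as tf

LETTERS = list(string.ascii_uppercase)            # assumed 26-class labels

model = tf.keras.models.load_model("asl_cnn.h5")  # hypothetical saved model
probs = model.predict(test_images)                # test_images: (N, H, W, C)
predicted = [LETTERS[i] for i in np.argmax(probs, axis=1)]

accuracy = np.mean(np.array(predicted) == np.array(test_labels))
print("test accuracy: {:.1%}".format(accuracy))   # paper reports 90.9%
```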



2021, Vol 12
Author(s):  
Anna Ciaunica ◽  
Andreas Roepstorff ◽  
Aikaterini Katerina Fotopoulou ◽  
Bruna Petreca

In his paper “Whatever next? Predictive brains, situated agents, and the future of cognitive science,” Andy Clark seminally proposed that the brain's job is to predict whatever information is coming “next” on the basis of prior inputs and experiences. Perception fundamentally subserves survival and self-preservation in biological agents, such as humans. Survival however crucially depends on rapid and accurate information processing of what is happening in the here and now. Hence, the term “next” in Clark's seminal formulation must include not only the temporal dimension (i.e., what is perceived now) but also the spatial dimension (i.e., what is perceived here or next-to-my-body). In this paper, we propose to focus on perceptual experiences that happen “next,” i.e., close-to-my-body. This is because perceptual processing of proximal sensory inputs has a key impact on the organism's survival. Specifically, we focus on tactile experiences mediated by the skin and what we will call the “extended skin” or “second skin,” that is, the immediate objects/materials that closely envelop our skin, namely, clothes. We propose that the skin and tactile experiences are not a mere border separating the self and world. Rather, they simultaneously and inherently distinguish and connect the bodily self to its environment. Hence, these proximal and pervasive tactile experiences can be viewed as a “transparent bridge” intrinsically relating and facilitating exchanges between the self and the physical and social world. We conclude with potential implications of this observation for the case of Depersonalization Disorder, a condition that makes people feel estranged and detached from their self, body, and the world.



