Routes to short-term memory indexing: Lessons from deaf native users of American Sign Language

2012 ◽  
Vol 29 (1-2) ◽  
pp. 85-103 ◽  
Author(s):  
Elizabeth A. Hirshorn ◽  
Nina M. Fernandez ◽  
Daphne Bavelier

1981 ◽  
Vol 9 (2) ◽  
pp. 121-131 ◽  
Author(s):  
Howard Poizner ◽  
Don Newkirk ◽  
Ursula Bellugi ◽  
Edward S. Klima

2004 ◽  
Vol 7 (9) ◽  
pp. 997-1002 ◽  
Author(s):  
Mrim Boutla ◽  
Ted Supalla ◽  
Elissa L Newport ◽  
Daphne Bavelier

Author(s):  
Dhanashree Shyam Bendarkar ◽  
Pratiksha Appasaheb Somase ◽  
Preety Kalyansingh Rebari ◽  
Renuka Ramkrishna Paturkar ◽  
Arjumand Masood Khan

Individuals with hearing impairment use sign language to exchange their thoughts, generally communicating among themselves through hand movements. However, these movements are of limited use when communicating with people who cannot understand them, so a mechanism is needed to translate between the two groups. Interaction would be easier if infrastructure existed to convert signs directly into text and voice messages. In recent years, many such sign language recognition systems have been developed, but most handle either static gesture recognition or dynamic gesture recognition alone. Because sentences are formed from combinations of static and dynamic gestures, it would be simpler for hearing-impaired individuals if automated systems could detect both together. We propose a design and architecture for American Sign Language (ASL) recognition with convolutional neural networks (CNNs). This paper uses a pretrained VGG-16 architecture for static gesture recognition; for dynamic gesture recognition, spatio-temporal features are learned by a deep architecture that combines a bidirectional convolutional long short-term memory network (ConvLSTM) and a 3D convolutional neural network (3DCNN), which together extract 2D spatio-temporal features.
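The dynamic-gesture branch described above (a 3DCNN feeding a bidirectional ConvLSTM) can be sketched in Keras as follows. This is a minimal illustration, not the paper's model: the filter counts, clip length, input resolution, and number of gesture classes are all placeholder values chosen to keep the example small.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

# Placeholder dimensions: 16-frame clips of 64x64 grayscale hand crops
frames, h, w, ch = 16, 64, 64, 1
n_classes = 10  # hypothetical number of dynamic gestures

inputs = layers.Input(shape=(frames, h, w, ch))
# 3D convolution extracts short-range spatio-temporal features
x = layers.Conv3D(8, kernel_size=3, padding="same", activation="relu")(inputs)
x = layers.MaxPooling3D(pool_size=(1, 2, 2))(x)
# Bidirectional ConvLSTM models longer-range temporal structure
# while preserving the 2D spatial layout of each frame
x = layers.Bidirectional(layers.ConvLSTM2D(8, kernel_size=3, padding="same"))(x)
x = layers.GlobalAveragePooling2D()(x)
outputs = layers.Dense(n_classes, activation="softmax")(x)

model = models.Model(inputs, outputs)
clip = np.random.rand(1, frames, h, w, ch).astype("float32")
probs = model.predict(clip, verbose=0)
print(probs.shape)  # (1, 10)
```

The static branch would be a separate VGG-16 classifier on single frames; only the temporal branch is shown here because it is the architecturally distinctive part.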


2018 ◽  
Vol 24 (2) ◽  
pp. 999-1004 ◽  
Author(s):  
Erdefi Rakun ◽  
Aniati M Arymurthy ◽  
Lim Y Stefanus ◽  
Alfan F Wicaksono ◽  
I. Wayan W Wisesa

2008 ◽  
Vol 20 (12) ◽  
pp. 2198-2210 ◽  
Author(s):  
Judy Pa ◽  
Stephen M. Wilson ◽  
Herbert Pickell ◽  
Ursula Bellugi ◽  
Gregory Hickok

Despite decades of research, there is still disagreement regarding the nature of the information that is maintained in linguistic short-term memory (STM). Some authors argue for abstract phonological codes, whereas others argue for more general sensory traces. We assess these possibilities by investigating linguistic STM in two distinct sensory–motor modalities, spoken and signed language. Hearing bilingual participants (native in English and American Sign Language) performed equivalent STM tasks in both languages during functional magnetic resonance imaging. Distinct, sensory-specific activations were seen during the maintenance phase of the task for spoken versus signed language. These regions have been previously shown to respond to nonlinguistic sensory stimulation, suggesting that linguistic STM tasks recruit sensory-specific networks. However, maintenance-phase activations common to the two languages were also observed, implying some form of common process. We conclude that linguistic STM involves sensory-dependent neural networks, but suggest that sensory-independent neural networks may also exist.


Electronics ◽  
2021 ◽  
Vol 10 (9) ◽  
pp. 1035
Author(s):  
Miguel Rivera-Acosta ◽  
Juan Manuel Ruiz-Varela ◽  
Susana Ortega-Cisneros ◽  
Jorge Rivera ◽  
Ramón Parra-Michel ◽  
...  

In this paper, we present a novel approach to one of the main challenges in hand gesture recognition from static images: compensating for the accuracy lost when trained models are applied to completely unseen data. The model consists of two main data-processing stages. First, a deep neural network (DNN) performs handshape segmentation and classification; multiple architectures and input image sizes were tested and compared to derive the best model in terms of accuracy and processing time. For the experiments presented in this work, the DNN models were trained with 24,000 images of 24 signs from the American Sign Language alphabet and fine-tuned with 5,200 images of 26 generated signs. The system was tested in real time with a community of 10 persons, yielding a mean average precision of 81.74% and a processing rate of 61.35 frames per second. As a second data-processing stage, a bidirectional long short-term memory (LSTM) neural network was implemented and analyzed to add spelling correction capability to the system; it scored a training accuracy of 98.07% with a dictionary of 370 words, thus increasing robustness on completely unseen data, as shown in our experiments.
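The second stage above, a bidirectional LSTM that maps a noisy fingerspelled letter sequence to a corrected one, can be sketched as a character-level sequence-labeling model. This is a hedged illustration only: the vocabulary size, embedding and hidden dimensions, and padding length are placeholder assumptions, not values from the paper.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

# Placeholder setup: 26 letters plus a padding symbol, words padded to 12 chars
vocab, max_len = 27, 12

inputs = layers.Input(shape=(max_len,))
x = layers.Embedding(vocab, 16, mask_zero=True)(inputs)
# Bidirectional LSTM reads the fingerspelled sequence in both directions,
# so each position sees both preceding and following letters
x = layers.Bidirectional(layers.LSTM(32, return_sequences=True))(x)
# Per-position softmax proposes a corrected letter at every slot
outputs = layers.TimeDistributed(layers.Dense(vocab, activation="softmax"))(x)

model = models.Model(inputs, outputs)
noisy = np.random.randint(1, vocab, size=(1, max_len))
corrected = model.predict(noisy, verbose=0)
print(corrected.shape)  # (1, 12, 27)
```

In practice such a model would be trained on (corrupted word, dictionary word) pairs so that recognition errors from the first stage are repaired toward the 370-word dictionary.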

