American Sign
Recently Published Documents


TOTAL DOCUMENTS: 1171 (five years: 250)

H-INDEX: 47 (five years: 4)

Cognition ◽ 2022 ◽ Vol. 220 ◽ Article 104979
Author(s): Gabriela Meade ◽ Brittany Lee ◽ Natasja Massa ◽ Phillip J. Holcomb ◽ Katherine J. Midgley ◽ ...

2021 ◽ pp. 1-12
Author(s): William Matchin ◽ Deniz İlkbaşaran ◽ Marla Hatrak ◽ Austin Roth ◽ Agnes Villwock ◽ ...

Abstract: Areas within the left-lateralized neural network for language are sensitive to syntactic complexity in spoken and written language. Previous research has shown that these areas are also active for sign language, but whether they respond specifically to syntactic complexity in sign language, independent of lexical processing, has yet to be established. To investigate this question, we used fMRI to image deaf native signers' comprehension of 180 sign strings in American Sign Language (ASL) with a picture-probe recognition task. The ASL strings were all six signs in length but varied across three levels of syntactic complexity: sign lists, two-word sentences, and complex sentences. Syntactic complexity significantly affected comprehension and memory, both behaviorally and neurally: it improved accuracy and response times on the picture-probe recognition task and elicited a left-lateralized activation pattern in anterior and posterior superior temporal sulcus (aSTS and pSTS). Minimal or absent syntactic structure reduced picture-probe recognition and elicited activation in bilateral pSTS and occipital-temporal cortex. These results provide evidence from a sign language, ASL, that the combinatorial processing of aSTS and pSTS is supramodal in nature. The results further suggest that the neurolinguistic processing of ASL is characterized by overlapping yet separable neural systems for syntactic and lexical processing.


Author(s): Mohit Panwar ◽ Rohit Pandey ◽ Rohan Singla ◽ Kavita Saxena

Every day we encounter people who are deaf or hard of hearing, and few technologies exist to help them interact with others. Sign language is used by deaf and hard-of-hearing people to exchange information within their own community and with other people. Computer recognition of sign language spans the pipeline from sign-gesture acquisition through text/speech generation. Sign gestures can be classified as static or dynamic; static gesture recognition is simpler than dynamic gesture recognition, but both kinds of recognition system matter to the human community. This survey describes the steps of an American Sign Language (ASL) recognition system. Image classification and machine learning can be used to help computers recognize sign language, which can then be interpreted for other people. Earlier glove-based methods required the person to wear a hardware glove while hand movements were captured, which is uncomfortable for practical use; here we use a vision-based method instead. Convolutional neural networks and a mobile SSD (single-shot detector) model are employed in this paper to recognize sign language gestures. Preprocessing was performed on the images, which then served as the cleaned input, and TensorFlow was used to train on them. The resulting system serves as a tool for sign language detection. Keywords: ASL recognition system, convolutional neural network (CNN), classification, real time, TensorFlow
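The pipeline this abstract describes (preprocess images, train a CNN classifier in TensorFlow) can be sketched in a few lines. The sketch below is a hypothetical minimal version, not the authors' code: the dataset path "asl_data", the image size, and the class count are illustrative assumptions.

import tensorflow as tf

NUM_CLASSES = 26          # assumption: one class per static ASL letter
IMG_SIZE = (64, 64)

# Assumed directory layout: asl_data/<letter>/<image>.png
train_ds = tf.keras.utils.image_dataset_from_directory(
    "asl_data", image_size=IMG_SIZE, batch_size=32)

# Small CNN: normalize pixels, then stacked conv/pool blocks and a softmax head.
model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255, input_shape=(*IMG_SIZE, 3)),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=10)

A static-letter classifier like this covers only the static-gesture case; the dynamic gestures the abstract mentions would additionally need a detector (such as the SSD model) plus temporal modeling.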


Author(s): Mohd Arifullah ◽ Fais Khan ◽ Yash Handa

A real-time sign language translator is a crucial milestone in facilitating communication between the deaf community and the general public. We introduce the development and use of an American Sign Language (ASL) fingerspelling translator based on a convolutional neural network. We use the pre-trained GoogLeNet architecture, trained on the ILSVRC2012 database, along with the Surrey University and Massey University ASL datasets, to apply transfer learning to this task. We have developed a robust model that consistently classifies the letters a-e for first-time users, and another that classifies a broader set of letters in most cases. Given the limitations of the datasets and the encouraging results obtained, we are confident that with further research and further data we can produce a fully generalized translator for all ASL characters. Keywords: Sign Language, Image Recognition, American Sign Language, Gesture Signals, CNN
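The transfer-learning setup this abstract describes (an ILSVRC2012-pretrained Inception backbone with a new ASL letter head) can be sketched as follows. This is a hypothetical sketch, not the authors' code: Keras does not ship the original GoogLeNet, so the later InceptionV3 (also ImageNet-pretrained) stands in for it, and the dataset path and class count are illustrative assumptions.

import tensorflow as tf

NUM_CLASSES = 24  # assumption: static fingerspelled letters (j and z involve motion)

# ImageNet-pretrained Inception backbone, frozen so only the new head trains.
base = tf.keras.applications.InceptionV3(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),  # new ASL letter head
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Assumed directory layout: asl_letters/<letter>/<image>.jpg
train_ds = tf.keras.utils.image_dataset_from_directory(
    "asl_letters", image_size=(224, 224), batch_size=32)
# InceptionV3 expects inputs scaled to [-1, 1].
train_ds = train_ds.map(
    lambda x, y: (tf.keras.applications.inception_v3.preprocess_input(x), y))
model.fit(train_ds, epochs=5)

Freezing the backbone and retraining only the classification head is what lets the small ASL datasets mentioned above suffice; unfreezing the top few Inception blocks for fine-tuning is a common follow-up once the head has converged.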

