American Sign Language
Recently Published Documents

TOTAL DOCUMENTS: 1331 (FIVE YEARS: 333)
H-INDEX: 49 (FIVE YEARS: 4)

Cognition ◽  
2022 ◽  
Vol 220 ◽  
pp. 104979
Author(s):  
Gabriela Meade ◽  
Brittany Lee ◽  
Natasja Massa ◽  
Phillip J. Holcomb ◽  
Katherine J. Midgley ◽  
...  

2022 ◽  
pp. 258-279
Author(s):  
Zanthia Yvette Smith

Few research-based family studies have focused specifically on African American hearing parents' perceptions of home literacy strategies. This study was conducted with a small group of African American hearing families with deaf/hard of hearing (DHH) children, taking into account each family's individual literacy needs, African American culture and language, emergent literacy research, American Sign Language (ASL), and parent-child book reading strategies. The purpose of this study was to document parental perception of the literacy process while establishing opportunities for parents to practice under the guidance of mentors and within the home environment. Recordings documented parental progress and parents' comments about the reading process, and field notes were generated from the mentors' discussions with parents. This exploratory case study identified changes in parental perception of communication and literacy development during a nine-week intervention and recorded parents' reactions to those support strategies.


2021 ◽  
pp. 1-12
Author(s):  
William Matchin ◽  
Deniz İlkbaşaran ◽  
Marla Hatrak ◽  
Austin Roth ◽  
Agnes Villwock ◽  
...  

Areas within the left-lateralized neural network for language are sensitive to syntactic complexity in spoken and written language. Previous research has shown that these areas are active for sign language as well, but whether they respond specifically to syntactic complexity in sign language, independent of lexical processing, has yet to be established. To investigate this question, we used fMRI to image deaf native signers' comprehension of 180 sign strings in American Sign Language (ASL) with a picture-probe recognition task. The ASL strings were all six signs in length but varied across three levels of syntactic complexity: sign lists, two-word sentences, and complex sentences. Syntactic complexity significantly affected comprehension and memory, both behaviorally and neurally, improving accuracy and response time on the picture-probe recognition task and eliciting a left-lateralized activation pattern in anterior and posterior superior temporal sulcus (aSTS and pSTS). Minimal or absent syntactic structure reduced picture-probe recognition and elicited activation in bilateral pSTS and occipital-temporal cortex. These results provide evidence from a sign language, ASL, that the combinatorial processing of aSTS and pSTS is supramodal in nature. The results further suggest that the neurolinguistic processing of ASL is characterized by overlapping yet separable neural systems for syntactic and lexical processing.


Author(s):  
Mohit Panwar ◽  
Rohit Pandey ◽  
Rohan Singla ◽  
Kavita Saxena

Every day we encounter people who are deaf or hard of hearing, and few technologies exist to help them communicate, so they face difficulty interacting with others. Sign language is used by deaf and hard of hearing people to exchange information within their own community and with hearing people. Computer recognition of sign language spans the pipeline from sign gesture acquisition to text/speech generation. Sign gestures can be classified as static or dynamic; static gesture recognition is simpler than dynamic gesture recognition, but both types of recognition system matter to the community. This survey describes the steps of an American Sign Language (ASL) recognition system. Image classification and machine learning can be used to help computers recognize sign language, which can then be interpreted by other people. Earlier glove-based methods required the signer to wear a hardware glove while hand movements were captured, which is uncomfortable for practical use. Here we use a vision-based method instead. Convolutional neural networks and a MobileNet SSD (single-shot detector) model are employed in this paper to recognize sign language gestures. The images were preprocessed to serve as cleaned input, and TensorFlow was used for training. The result is a system that serves as a tool for sign language detection. Keywords: ASL recognition system, convolutional neural network (CNN), classification, real time, TensorFlow
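To make the vision-based pipeline described above concrete, here is a minimal sketch of a static ASL letter classifier in TensorFlow/Keras. The input size (64x64 grayscale) and the 26 letter classes are assumptions for illustration; the paper does not specify its exact architecture or data format.

```python
# Minimal sketch of a static ASL letter classifier in TensorFlow/Keras.
# The 64x64 grayscale input and 26 output classes are assumptions;
# the abstract does not specify the dataset format.
import tensorflow as tf

def build_asl_cnn(input_shape=(64, 64, 1), num_classes=26):
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=input_shape),
        tf.keras.layers.Rescaling(1.0 / 255),            # preprocessing: scale pixels to [0, 1]
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dropout(0.5),                    # regularization for small gesture datasets
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])

model = build_asl_cnn()
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_images, train_labels, epochs=10)  # given a labeled gesture dataset
```

In a full system of the kind the survey covers, a detector such as an SSD would first localize the hand in each frame, and crops like these would then be fed to the classifier.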


Author(s):  
Mohd Arifullah ◽  
Fais Khan ◽  
Yash Handa

A real-time sign language translator is an important milestone in facilitating communication between the deaf community and the general public. We introduce the development and use of an American Sign Language (ASL) fingerspelling translator based on a convolutional neural network. We use the pre-trained GoogLeNet architecture, trained on the ILSVRC2012 dataset, together with the Surrey University and Massey University ASL datasets, to apply transfer learning to this task. We developed a robust model that consistently classifies the letters a-e for first-time users, and another that correctly classifies letters in a majority of cases. Given the limitations of the datasets and the encouraging results obtained, we are confident that with further research and more data we can produce a fully generalized translator for all ASL characters. Keywords: Sign Language, Image Recognition, American Sign Language, Expression Signals, CNN
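A minimal sketch of the transfer-learning setup the abstract describes: a network pre-trained on ILSVRC2012 (ImageNet) is reused as a frozen feature extractor, and only a new classification head is trained on ASL fingerspelling images. The paper used GoogLeNet; Keras does not ship GoogLeNet (Inception v1), so this sketch substitutes its successor, InceptionV3, and the 26-letter output head is an assumption.

```python
# Transfer-learning sketch: ImageNet-pretrained backbone, new ASL head.
# InceptionV3 stands in for GoogLeNet (not available in Keras);
# NUM_LETTERS = 26 is an assumption (one class per fingerspelled letter).
import tensorflow as tf

NUM_LETTERS = 26

base = tf.keras.applications.InceptionV3(
    weights="imagenet",        # ILSVRC2012 pre-training
    include_top=False,         # drop the 1000-class ImageNet head
    input_shape=(299, 299, 3),
)
base.trainable = False         # freeze pre-trained features; train only the new head

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(NUM_LETTERS, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(asl_train_ds, epochs=5)  # given a tf.data pipeline of fingerspelling images
```

Freezing the backbone is what lets a small ASL dataset suffice: the generic visual features come from ImageNet, and only the final layer is fit to the fingerspelling classes.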


2021 ◽  
Author(s):  
Judith Borghouts ◽  
Martha Neary ◽  
Kristina Palomares ◽  
Cinthia De Leon ◽  
Stephen M Schueller ◽  
...  

BACKGROUND: Mental health concerns are a significant issue in the Deaf and Hard of Hearing Community, but community members can face several unique challenges in accessing appropriate resources.

OBJECTIVE: This study investigated the mental health needs of the Deaf and Hard of Hearing Community and how mental health digital therapeutics, such as apps, may be able to support these needs.

METHODS: Ten members of the Deaf and Hard of Hearing Community participated in a focus group and survey to provide their views. Participants were members of the Center on Deafness Inland Empire team, which comprises people with lived experience as members of and advocates for the Deaf and Hard of Hearing Community.

RESULTS: Findings identified a spectrum of needs for digital therapeutics, including support for both American Sign Language and English, increased mental health education to reduce stigma, direct communication with a Deaf worker, and apps that are accessible to a range of community members in terms of culture, required resources, and location.

CONCLUSIONS: These findings can inform the development of digital mental health interventions and outreach strategies that are appropriate for the Deaf and Hard of Hearing Community.


2021 ◽  
Author(s):  
Lorna C Quandt ◽  
Athena Willis ◽  
Carly Leannah

Signed language users communicate in a wide array of sub-optimal environments, such as in dim lighting or from a distance. While fingerspelling is a common and essential part of signed languages, the perception of fingerspelling in varying visual environments is not well understood. Signed languages such as American Sign Language (ASL) rely on visuospatial information that combines hand and bodily movements, facial expressions, and fingerspelling. Linguistic information in ASL is conveyed with movement and spatial patterning, which lends itself well to using dynamic Point Light Display (PLD) stimuli to represent sign language movements. We created PLD videos of fingerspelled location names. The location names were either Real (e.g., KUWAIT) or Pseudo names (e.g., CLARTAND), and the PLDs showed either a High or a Low number of markers. In an online study, Deaf and Hearing ASL users (total N = 283) watched 27 PLD stimulus videos that varied by Realness and Number of Markers, and we calculated accuracy and confidence scores in response to each video. We predicted that when signers see ASL fingerspelled letter strings in a suboptimal visual environment, ASL language experience would be positively correlated with accuracy and self-rated confidence, and that Real location names would be understood better than Pseudo names. Our findings show that participants were more accurate and confident in response to Real place names than Pseudo names and to stimuli with a High rather than a Low number of markers. We also found a significant interaction between Age and Realness: as people age, they make better use of real-world knowledge to inform their fingerspelling perception. Finally, we examined accuracy and confidence in fingerspelling perception in sub-groups of people who had learned ASL before the age of four. Studying the relationship between language experience and PLD fingerspelling perception allows us to explore how hearing status, ASL fluency levels, and age of language acquisition affect the core ability of understanding fingerspelling.
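To make the reported analysis concrete, here is a hypothetical sketch of how the Age x Realness interaction on trial-level accuracy could be tested with a logistic regression in Python. The data file and all column names (accuracy, age, realness, markers) are assumptions; the study does not publish its analysis code, and the authors' actual models may differ.

```python
# Hypothetical sketch of the Age x Realness interaction test described above,
# using a logistic regression on trial-level accuracy (0/1).
# The CSV file and column names are assumptions, not the study's materials.
import pandas as pd
import statsmodels.formula.api as smf

trials = pd.read_csv("pld_fingerspelling_trials.csv")  # hypothetical data file

# Accuracy predicted by Realness (Real vs. Pseudo), Age, their interaction,
# and Number of Markers (High vs. Low), mirroring the effects in the abstract.
model = smf.logit(
    "accuracy ~ C(realness) * age + C(markers)",
    data=trials,
).fit()
print(model.summary())  # the C(realness):age term carries the reported interaction
```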

