Referring strategies in American Sign Language and English (with co-speech gesture): The role of modality in referring to non-nameable objects

2018 ◽  
Vol 39 (5) ◽  
pp. 961-987 ◽  
Author(s):  
ZED SEVCIKOVA SEHYR ◽  
BRENDA NICODEMUS ◽  
JENNIFER PETRICH ◽  
KAREN EMMOREY

Abstract American Sign Language (ASL) and English differ in the linguistic resources available to express visual–spatial information. In a referential communication task, we examined the effect of language modality on the creation and mutual acceptance of reference to non-nameable figures. In both languages, description times decreased over iterations, and references to the figures’ geometric properties (“shape-based reference”) declined over time in favor of expressions describing the figures’ resemblance to nameable objects (“analogy-based reference”). ASL signers maintained a preference for shape-based reference until the final (sixth) round, while English speakers transitioned toward analogy-based reference by Round 3. Analogy-based references were more time-efficient (associated with shorter round description times). Round completion times were longer for ASL than for English, possibly due to the gaze demands of the task and/or to more shape-based descriptions. Signers’ referring expressions remained unaffected by figure complexity, while speakers preferred analogy-based expressions for complex figures and shape-based expressions for simple figures. Like speech, co-speech gestures decreased over iterations. Gestures primarily accompanied shape-based references, but listeners rarely looked at these gestures, suggesting that they were recruited to aid the speaker rather than the addressee. Overall, different linguistic resources (classifier constructions vs. geometric vocabulary) imposed distinct demands on referring strategies in ASL and English.
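To make the round-by-round measures concrete, here is a minimal sketch of how coded trials from a referential communication task might be summarized. The record layout, coding labels, and numbers are hypothetical illustrations, not the study's actual data or pipeline.

```python
# Hypothetical sketch: per-round summary of coded referring expressions.
# The record format ("shape" vs. "analogy", times in seconds) is an
# illustrative assumption, not the study's actual coding scheme.
from collections import defaultdict

trials = [
    # (round, reference_type, description_time_seconds)
    (1, "shape", 14.2), (1, "analogy", 9.8),
    (2, "shape", 11.5), (2, "analogy", 8.1),
    (3, "analogy", 6.9), (3, "analogy", 7.4),
]

by_round = defaultdict(list)
for rnd, ref_type, seconds in trials:
    by_round[rnd].append((ref_type, seconds))

for rnd in sorted(by_round):
    records = by_round[rnd]
    mean_time = sum(s for _, s in records) / len(records)
    analogy_share = sum(r == "analogy" for r, _ in records) / len(records)
    print(f"Round {rnd}: mean time {mean_time:.1f}s, "
          f"analogy share {analogy_share:.0%}")
```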

2021 ◽  
pp. 095679762199155
Author(s):  
Amanda R. Brown ◽  
Wim Pouw ◽  
Diane Brentari ◽  
Susan Goldin-Meadow

When we use our hands to estimate the length of a stick in the Müller-Lyer illusion, we are highly susceptible to the illusion. But when we prepare to act on sticks under the same conditions, we are significantly less susceptible. Here, we asked whether people are susceptible to illusion when they use their hands not to act on objects but to describe them in spontaneous co-speech gestures or conventional sign languages of the deaf. Thirty-two English speakers and 13 American Sign Language signers used their hands to act on, estimate the length of, and describe sticks eliciting the Müller-Lyer illusion. For both gesture and sign, the magnitude of illusion in the description task was smaller than the magnitude of illusion in the estimation task and not different from the magnitude of illusion in the action task. The mechanisms responsible for producing gesture in speech and sign thus appear to operate not on percepts involved in estimation but on percepts derived from the way we act on objects.
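One common way to quantify susceptibility in such paradigms is the difference between responses to fins-out and fins-in configurations of equal-length sticks. The sketch below uses that operationalization with invented aperture values; it is not the authors' analysis.

```python
# Illustrative only: illusion magnitude as the mean difference between
# grip apertures (mm) for fins-out vs. fins-in sticks of equal length.
def illusion_magnitude(fins_out_mm, fins_in_mm):
    """Larger positive values indicate greater susceptibility."""
    diffs = [out - inn for out, inn in zip(fins_out_mm, fins_in_mm)]
    return sum(diffs) / len(diffs)

# Hypothetical apertures for two of the three tasks:
print(illusion_magnitude([62, 65, 63], [55, 54, 56]))  # estimation: ~8.3
print(illusion_magnitude([58, 59, 57], [56, 57, 56]))  # action: ~1.7
```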


2015 ◽  
Vol 19 (2) ◽  
pp. 128-148 ◽  
Author(s):  
Joshua Williams ◽  
Isabelle Darcy ◽  
Sharlene Newman

Abstract Little is known about how acquiring another language modality affects second language (L2) working memory (WM) capacity. Differential indexing within the WM system based on language modality may explain differences in performance on WM tasks in sign and spoken language. We investigated the effect of language modality (sign versus spoken) on L2 WM capacity. Results indicated reduced L2 WM span relative to first language span for both L2 learners of Spanish and L2 learners of American Sign Language (ASL). Importantly, ASL learners had lower L2 WM spans than Spanish learners. Additionally, ASL learners increased their L2 WM spans as a function of proficiency, whereas Spanish learners did not. This pattern of results demonstrated that acquiring another language modality disadvantages ASL learners. We posited that this disadvantage arises from an inability to correctly and efficiently allocate linguistic information to the visuospatial sketchpad, owing to an L1-related indexing bias.


2020 ◽  
pp. 1-31
Author(s):  
IRIS BERENT ◽  
OUTI BAT-EL ◽  
DIANE BRENTARI ◽  
QATHERINE ANDAN ◽  
VERED VAKNIN-NUSBAUM

Does knowledge of language transfer spontaneously across language modalities? For example, do English speakers who have no command of a sign language spontaneously project grammatical constraints from English to linguistic signs? Here, we address this question by examining the constraints on doubling. We first demonstrate that doubling (e.g. panana; generally: ABB) is amenable to two conflicting parses (identity vs. reduplication), depending on the level of analysis (phonology vs. morphology). We next show that speakers with no command of a sign language spontaneously project these two parses to novel ABB signs in American Sign Language. Moreover, the chosen parse (for signs) is constrained by the morphology of the spoken language: Hebrew speakers can project the morphological parse when doubling indicates diminution, but English speakers only do so when doubling indicates plurality, in line with the distinct morphological properties of their spoken languages. These observations suggest that doubling in speech and signs is constrained by a common set of linguistic principles that are algebraic, amodal and abstract.
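For readers unfamiliar with the ABB notation, the pattern can be stated mechanically: a form whose last two syllables are identical and differ from the one before them. A small sketch follows, assuming a syllabified string representation; that representation is our simplification, since the study tested signs, not text.

```python
# Minimal sketch, assuming a syllabified transcription of a form.
# The string representation is a simplification for illustration.
def is_abb_doubling(syllables):
    """True for the ABB pattern: the final two syllables are identical
    and differ from the syllable before them (e.g. pa-na-na)."""
    if len(syllables) < 3:
        return False
    a, b1, b2 = syllables[-3:]
    return b1 == b2 and a != b1

print(is_abb_doubling(["pa", "na", "na"]))  # True  ("panana")
print(is_abb_doubling(["pa", "na", "ta"]))  # False
```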


2013 ◽  
Vol 25 (4) ◽  
pp. 517-533 ◽  
Author(s):  
Karen Emmorey ◽  
Stephen McCullough ◽  
Sonya Mehta ◽  
Laura L. B. Ponto ◽  
Thomas J. Grabowski

Biological differences between signed and spoken languages may be most evident in the expression of spatial information. PET was used to investigate the neural substrates supporting the production of spatial language in American Sign Language as expressed by classifier constructions, in which handshape indicates object type and the location/motion of the hand iconically depicts the location/motion of a referent object. Deaf native signers performed a picture description task in which they overtly named objects or produced classifier constructions that varied in location, motion, or object type. In contrast to the expression of location and motion, the production of both lexical signs and object type classifier morphemes engaged left inferior frontal cortex and left inferior temporal cortex, supporting the hypothesis that unlike the location and motion components of a classifier construction, classifier handshapes are categorical morphemes that are retrieved via left hemisphere language regions. In addition, lexical signs engaged the anterior temporal lobes to a greater extent than classifier constructions, which we suggest reflects increased semantic processing required to name individual objects compared with simply indicating the type of object. Both location and motion classifier constructions engaged bilateral superior parietal cortex, with some evidence that the expression of static locations differentially engaged the left intraparietal sulcus. We argue that bilateral parietal activation reflects the biological underpinnings of sign language. To express spatial information, signers must transform visual–spatial representations into a body-centered reference frame and reach toward target locations within signing space.


1996 ◽  
Vol 6 (1) ◽  
pp. 65-86 ◽  
Author(s):  
Marina L. McIntire ◽  
Judy Reilly

Abstract In this study, we compared storytelling of a pictured narrative, Frog, Where Are You?, by 6 Deaf and 6 hearing mothers in American Sign Language (ASL) and in English, respectively. How do these mothers construct their stories, that is, how do they mark episodes? And how do English speakers' strategies differ from ASL users' strategies? We found that stories in ASL contained more explicit markers to signal both local and global relations of the narrative. Because of modality and grammatical differences between English and ASL, Deaf mothers seemed to have more strategies available to use. Although the overall pattern of use throughout the story was similar, Deaf mothers appeared to be more "dramatic" in their storytelling than were hearing mothers. Both groups of parents used a variety of markers to call their children's attention to the theme of the story.


2020 ◽  
Vol 25 (4) ◽  
pp. 447-456 ◽  
Author(s):  
Kristen Secora ◽  
Karen Emmorey

Abstract As spatial languages, sign languages rely on spatial cognitive processes that are not involved for spoken languages. Interlocutors have different visual perspectives of the signer’s hands, requiring a mental transformation for successful communication about spatial scenes. It is unknown whether visual-spatial perspective-taking (VSPT) or mental rotation (MR) abilities support signers’ comprehension of perspective-dependent American Sign Language (ASL) structures. A total of 33 deaf adult ASL signers completed tasks examining nonlinguistic VSPT ability, MR ability, general ASL proficiency (the ASL Sentence Reproduction Task [ASL-SRT]), and an ASL comprehension test involving perspective-dependent classifier constructions (the ASL Spatial Perspective Comprehension Test [ASPCT]). Scores on the linguistic (ASPCT) and VSPT tasks positively correlated with each other, and both correlated with MR ability; however, VSPT abilities predicted linguistic perspective-taking better than did MR ability. ASL-SRT scores correlated with ASPCT accuracy (as both require ASL proficiency) but not with VSPT scores. Therefore, the ability to comprehend perspective-dependent ASL classifier constructions relates to ASL proficiency and to nonlinguistic VSPT and MR abilities.


1991 ◽  
Vol 34 (6) ◽  
pp. 1346-1361
Author(s):  
Paula M. Brown ◽  
Susan D. Fischer ◽  
Wynne Janis

This study provides a cross-linguistic replication, using American Sign Language (ASL), of the Brown and Dell (1987) finding that when relaying an action involving an instrument, English speakers are more likely to explicitly mention the instrument if it is atypically, rather than typically, used to accomplish that action. Subjects were 20 hearing-impaired users of English and 20 hearing-impaired users of ASL. Each subject read and retold, in either English or ASL, 20 short stories. Analyses of the stories revealed production decision differences between ASL and English, but no differences related to hearing status. In ASL, there is more explicitness, and importance seems to play a more pivotal role in instrument specification. The results are related to differences in the typology of English and ASL and are discussed with regard to second-language learning and translation.


Author(s):  
Greg Evans

Linguistic theory has traditionally defined language in terms of speech and has, as a result, labelled sign languages as non-linguistic systems. Recent advances in sign language linguistic research, however, indicate that modern linguistic theory must include sign language research and theory. This paper examines the historical bias linguistic theory has maintained towards sign languages and refutes the classification of sign languages as contrived artificial systems by surveying current linguistic research into American Sign Language. The growing body of American Sign Language research demonstrates that a signed language can have all the structural levels of spoken language despite its visual-spatial mode. This research also indicates that signed languages are an important source of linguistic data that can help further develop a cognitive linguistic theory.


2012 ◽  
Vol 3 ◽  
pp. 19
Author(s):  
Kate Mesh

This study compares the performance of two groups on an American Sign Language (ASL) perception task. Twenty-two L1 signers of ASL and twelve sign-naive English speakers watched a filmed lecture in ASL and pressed a response pad to identify "natural breaks" in the signing. Responses from each subject group were grouped into agreement clusters: time slices of up to 2 seconds in which a substantial percentage of participants identified a boundary. Comparison of the response patterns of signers and non-signers revealed a one-way implication between signer agreement clusters and non-signer agreement clusters: where signers agreed about the location of a boundary, non-signers did as well, but non-signer agreement about a boundary did not predict that signers would identify the same boundary.
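As a rough sketch of how such clusters might be computed from raw button-press times, consider the following; the 50% threshold and greedy windowing are our assumptions, not the study's published procedure.

```python
# Hedged sketch: form agreement clusters by sliding a 2-second window
# over pooled response times and keeping windows in which at least a
# threshold share of participants responded. Threshold and windowing
# details are illustrative assumptions.
def agreement_clusters(responses_by_subject, window=2.0, min_share=0.5):
    """responses_by_subject: dict mapping subject id -> list of press
    times (seconds). Returns non-overlapping (start, end) spans that
    meet the agreement threshold."""
    n_subjects = len(responses_by_subject)
    events = sorted(
        (t, subj) for subj, times in responses_by_subject.items()
        for t in times
    )
    clusters = []
    for i, (start, _) in enumerate(events):
        # Distinct subjects responding within `window` of this press.
        in_window = {s for t, s in events[i:] if t - start <= window}
        if len(in_window) / n_subjects >= min_share:
            span = (start, start + window)
            if not clusters or span[0] > clusters[-1][1]:
                clusters.append(span)
    return clusters

presses = {"s1": [4.1, 20.3], "s2": [4.6, 33.0], "s3": [5.2]}
print(agreement_clusters(presses))  # one cluster near 4-6 s
```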


2021 ◽  
pp. 003335492110267
Author(s):  
Tyler G. James ◽  
Michael M. McKee ◽  
Meagan K. Sullivan ◽  
Glenna Ashton ◽  
Stephen J. Hardy ◽  
...  

Objectives: Deaf American Sign Language (ASL) users comprise a linguistic and cultural minority group that is understudied and underserved in health education and health care research. We examined differences in health risk behaviors, concerns, and access to health care among Deaf ASL users and hearing English speakers living in Florida.
Methods: We applied community-engaged research methods to develop and administer the first linguistically accessible and contextually tailored community health needs assessment to Deaf ASL users living in Florida. Deaf ASL users (n = 92) were recruited during a 3-month period in summer 2018 and compared with a subset of data on hearing English speakers from the 2018 Florida Behavioral Risk Factor Surveillance System (n = 12,589). We explored prevalence and adjusted odds of health behaviors, including substance use and health care use.
Results: Mental health was the top health concern among Deaf participants; 15.5% of participants screened as likely having a depressive disorder. Deaf people were 1.8 times more likely than hearing people to engage in binge drinking during the past month. In addition, 37.2% of participants reported being denied an interpreter in a medical facility in the past 12 months.
Conclusion: This study highlights the need to work with Deaf ASL users to develop context-specific health education and health promotion activities tailored to their linguistic and cultural needs, and to ensure that they receive accessible health care and health education.
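For illustration, an unadjusted odds ratio from a 2×2 table can be computed as below. The counts are invented, and the study's reported odds were adjusted estimates from survey data rather than this raw calculation.

```python
# Illustrative sketch only: unadjusted odds ratio from a 2x2 table.
# Counts are invented for the example; the paper reports adjusted odds.
def odds_ratio(group_yes, group_no, ref_yes, ref_no):
    """OR = (a/b) / (c/d): odds in the group over odds in the reference."""
    return (group_yes / group_no) / (ref_yes / ref_no)

# Hypothetical past-month binge-drinking counts, Deaf vs. hearing:
print(round(odds_ratio(23, 69, 1900, 10689), 2))  # ~1.88
```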

