Implicit causality biases and thematic roles in American Sign Language

Author(s):  
Anne Therese Frederiksen ◽  
Rachel I. Mayberry

Abstract
Implicit causality (IC) biases, the tendency of certain verbs to elicit re-mention of either the first-mentioned noun phrase (NP1) or the second-mentioned noun phrase (NP2) from the previous clause, are important in psycholinguistic research. Understanding IC verbs and the source of their biases in signed as well as spoken languages helps elucidate whether these phenomena are language general or specific to the spoken modality. As the first of its kind, this study investigates IC biases in American Sign Language (ASL) and provides IC bias norms for over 200 verbs, facilitating future psycholinguistic studies of ASL and comparisons of spoken versus signed languages. We investigated whether native ASL signers continued sentences with IC verbs (e.g., ASL equivalents of ‘Lisa annoys Maya because…’) by mentioning NP1 (i.e., Lisa) or NP2 (i.e., Maya). We found a tendency towards more NP2-biased verbs. Previous work has found that a verb’s thematic roles predict bias direction: stimulus-experiencer verbs (e.g., ‘annoy’), where the first argument is the stimulus (causing annoyance) and the second argument is the experiencer (experiencing annoyance), elicit more NP1 continuations. Verbs with experiencer-stimulus thematic roles (e.g., ‘love’) elicit more NP2 continuations. We probed whether the trend towards more NP2-biased verbs was related to an existing claim that stimulus-experiencer verbs do not exist in sign languages. We found that stimulus-experiencer structure, while permitted, is infrequent, impacting the IC bias distribution in ASL. Nevertheless, thematic roles predict IC bias in ASL, suggesting that the thematic role-IC bias relationship is stable across languages as well as modalities.

Gesture ◽  
2013 ◽  
Vol 13 (1) ◽  
pp. 1-27 ◽  
Author(s):  
Rachel Sutton-Spence ◽  
Donna Jo Napoli

Sign Language poetry is especially valued for its presentation of strong visual images. Here, we explore the highly visual signs that British Sign Language and American Sign Language poets create as part of the ‘classifier system’ of their languages. Signed languages, as they create visually-motivated messages, utilise categoricity (more traditionally considered ‘language’) and analogy (more traditionally considered extra-linguistic and the domain of ‘gesture’). Classifiers in sign languages arguably show both these characteristics (Oviedo, 2004). In our discussion of sign language poetry, we see that poets take elements that are widely understood to be highly visual, closely representing their referents, and make them even more highly visual — so going beyond categorisation and into new areas of analogue.


2020 ◽  
Vol 40 (5-6) ◽  
pp. 585-591
Author(s):  
Lynn Hou ◽  
Jill P. Morford

The visual-manual modality of sign languages renders them a unique test case for language acquisition and processing theories. In this commentary the authors describe evidence from signed languages, and ask whether it is consistent with Ambridge’s proposal. The evidence includes recent research on collocations in American Sign Language that reveals collocational frequency effects and patterns that do not constitute syntactic constituents. While these collocations appear to resist fully abstract schematization, further consideration is warranted of how speakers create exemplars, how they link exemplar clouds based on tokens, and how much abstraction is involved in their creation.


2021 ◽  
pp. 095679762199155
Author(s):  
Amanda R. Brown ◽  
Wim Pouw ◽  
Diane Brentari ◽  
Susan Goldin-Meadow

When we use our hands to estimate the length of a stick in the Müller-Lyer illusion, we are highly susceptible to the illusion. But when we prepare to act on sticks under the same conditions, we are significantly less susceptible. Here, we asked whether people are susceptible to illusion when they use their hands not to act on objects but to describe them in spontaneous co-speech gestures or conventional sign languages of the deaf. Thirty-two English speakers and 13 American Sign Language signers used their hands to act on, estimate the length of, and describe sticks eliciting the Müller-Lyer illusion. For both gesture and sign, the magnitude of illusion in the description task was smaller than the magnitude of illusion in the estimation task and not different from the magnitude of illusion in the action task. The mechanisms responsible for producing gesture in speech and sign thus appear to operate not on percepts involved in estimation but on percepts derived from the way we act on objects.


2021 ◽  
Author(s):  
Kathryn Woodcock ◽  
Steven L. Fischer

"This Guide is intended for working interpreters, interpreting students and educators, and those who employ or purchase the services of interpreters. Occupational health education is essential for professionals in training, to avoid early attrition from practice. "Sign language interpreting" is considered to include interpretation between American Sign Language (ASL) and English, other spoken languages and corresponding sign languages, and between sign languages (e.g., Deaf Interpreters). Some of the occupational health issues may also apply equally to Communication Access Realtime Translation (CART) reporters, oral interpreters, and intervenors. The reader is encouraged to make as much use as possible of the information provided here". -- Introduction.


Gesture ◽  
2001 ◽  
Vol 1 (1) ◽  
pp. 51-72 ◽  
Author(s):  
Evelyn McClave

This paper presents evidence of non-manual gestures in American Sign Language (ASL). The types of gestures identified are identical to non-manual, spontaneous gestures used by hearing non-signers, which suggests that the gestures co-occurring with ASL signs are borrowings from hearing culture. A comparison of direct quotes in ASL with spontaneous movements of hearing non-signers suggests a history of borrowing and eventual grammaticization in ASL of features previously thought to be unique to signed languages. The electronic edition of this article includes audio-visual data.


1977 ◽  
Vol 6 (3) ◽  
pp. 379-388 ◽  
Author(s):  
James Woodward ◽  
Susan Desantis

Abstract
This paper examines Negative Incorporation in various lects of two historically related sign languages, French Sign Language and American Sign Language. Negative Incorporation not only offers interesting insights into the structure of French and American Sign Language, but also into the descriptive and explanatory power of variation theory. By viewing Negative Incorporation in a dynamic framework, we are able to describe the variable usage of Negative Incorporation as a phonological process in French Sign Language and as a grammatical process in American Sign Language, to argue for possible early creolization in American Sign Language, to show the historical continuum between French Sign Language and American Sign Language despite heavy restructuring, and to demonstrate the influences of social variables on language variation and change, especially illustrating the progressive role of women in sign language change and the conservative forces in French Sign Language as compared with American Sign Language. (Sociolinguistics, sign language, creolization, linguistic changes.)


2009 ◽  
Vol 21 (2) ◽  
pp. 193-231 ◽  
Author(s):  
Adam Schembri ◽  
David McKee ◽  
Rachel McKee ◽  
Sara Pivac ◽  
Trevor Johnston ◽  
...  

Abstract
In this study, we consider variation in a class of signs in Australian and New Zealand Sign Languages that includes the signs think, name, and clever. In their citation form, these signs are specified for a place of articulation at or near the signer's forehead or above, but are sometimes produced at lower locations. An analysis of 2667 tokens collected from 205 deaf signers in five sites across Australia and of 2096 tokens collected from 138 deaf signers from three regions in New Zealand indicates that location variation in these signs reflects both linguistic and social factors, as also reported for American Sign Language (Lucas, Bayley, & Valli, 2001). Despite similarities, however, we find that some of the particular factors at work, and the kinds of influence they have, appear to differ in these three signed languages. Moreover, our results suggest that lexical frequency may also play a role.


A Gesture Vocalizer is a small-scale or large-scale system that provides a way for people with speech impairments to communicate easily. This paper defines a technique, the Finger Gesture Vocalizer, which uses sensors attached to gloves above the fingers of the person who wants to communicate. The sensors are arranged on the gloves so that they capture the movements of the fingers; based on changes in the sensors' resistance, the system identifies what the person wants to say. The message is displayed on an LCD and is also converted to audio using the APR33A3 audio processing unit. Standard sign languages, such as American Sign Language, can be employed while wearing these gloves.
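The sensor-to-message pipeline described above can be sketched in a few lines. This is a minimal, hypothetical illustration, not the paper's implementation: the ADC threshold, the five-finger bend patterns, and the gesture table are all assumptions chosen for the example, and the actual mapping from resistance to message would depend on the glove hardware and calibration.

```python
# Hypothetical sketch of the Finger Gesture Vocalizer's logic:
# raw flex-sensor readings -> bent/straight finger pattern -> message.
# Threshold and gesture table are illustrative assumptions only.

BEND_THRESHOLD = 512  # assumed 10-bit ADC; readings above this count as "bent"

# Each pattern is five booleans (thumb..pinky): True = finger bent.
GESTURE_TABLE = {
    (True, True, True, True, True): "Hello",
    (False, True, True, True, True): "Yes",
    (True, False, False, False, False): "No",
    (False, False, True, True, True): "Thank you",
}

def read_fingers(adc_values):
    """Convert five raw ADC readings into a bent/straight pattern."""
    return tuple(v > BEND_THRESHOLD for v in adc_values)

def vocalize(adc_values):
    """Look up the message for the current finger pattern ("" if unknown).

    In the real device this string would be shown on the LCD and
    passed to the APR33A3 unit for audio playback.
    """
    pattern = read_fingers(adc_values)
    return GESTURE_TABLE.get(pattern, "")

# Example: all five sensors read high (all fingers bent)
print(vocalize([700, 650, 800, 720, 690]))  # prints "Hello"
```

The table-lookup design mirrors the abstract's description: recognition reduces to classifying each finger as bent or straight from its resistance reading, then matching the resulting pattern against a fixed vocabulary.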


2020 ◽  
Vol 23 (1-2) ◽  
pp. 17-37
Author(s):  
Lily Kwok ◽  
Stephanie Berk ◽  
Diane Lillo-Martin

Abstract Sign languages are frequently described as having three verb classes. One, ‘agreeing’ verbs, indicates the person/number of its subject and object by modification of the beginning and ending locations of the verb. The second, ‘spatial’ verbs, makes a similar appearing modification of verb movement to represent the source and goal locations of the theme of a verb of motion. The third class, ‘plain’ verbs, is characterized as having neither of these types of modulations. A number of researchers have proposed accounts that collapse all of these types, or the person-agreeing and spatial verbs. Here we present evidence from late learners of American Sign Language and from the emergence of new sign languages that person agreement and locative agreement have a different status in these conditions, and we claim their analysis should be kept distinct, at least in certain ways.


Target ◽  
1995 ◽  
Vol 7 (1) ◽  
pp. 135-149 ◽  
Author(s):  
William P. Isham

Abstract Research using interpreters who work with signed languages can aid us in understanding the cognitive processes of interpretation in general. Using American Sign Language (ASL) as an example, the nature of signed languages is outlined first. Then the difference between signed languages and manual codes for spoken languages is delineated, and it is argued that these two manners of communicating through the visual channel offer a unique research opportunity. Finally, an example from recent research is used to demonstrate how comparisons between spoken-language interpreters and signed-language interpreters can be used to test hypotheses regarding interpretation.

