Person vs. locative agreement

2020 ◽  
Vol 23 (1-2) ◽  
pp. 17-37
Author(s):  
Lily Kwok ◽  
Stephanie Berk ◽  
Diane Lillo-Martin

Sign languages are frequently described as having three verb classes. One, ‘agreeing’ verbs, indicates the person/number of its subject and object by modification of the beginning and ending locations of the verb. The second, ‘spatial’ verbs, makes a similar-appearing modification of verb movement to represent the source and goal locations of the theme of a verb of motion. The third class, ‘plain’ verbs, is characterized as having neither of these types of modulations. A number of researchers have proposed accounts that collapse all of these types, or the person-agreeing and spatial verbs. Here we present evidence from late learners of American Sign Language and from the emergence of new sign languages that person agreement and locative agreement have a different status in these conditions, and we claim their analysis should be kept distinct, at least in certain ways.

2021 ◽  
pp. 095679762199155
Author(s):  
Amanda R. Brown ◽  
Wim Pouw ◽  
Diane Brentari ◽  
Susan Goldin-Meadow

When we use our hands to estimate the length of a stick in the Müller-Lyer illusion, we are highly susceptible to the illusion. But when we prepare to act on sticks under the same conditions, we are significantly less susceptible. Here, we asked whether people are susceptible to illusion when they use their hands not to act on objects but to describe them in spontaneous co-speech gestures or conventional sign languages of the deaf. Thirty-two English speakers and 13 American Sign Language signers used their hands to act on, estimate the length of, and describe sticks eliciting the Müller-Lyer illusion. For both gesture and sign, the magnitude of illusion in the description task was smaller than the magnitude of illusion in the estimation task and not different from the magnitude of illusion in the action task. The mechanisms responsible for producing gesture in speech and sign thus appear to operate not on percepts involved in estimation but on percepts derived from the way we act on objects.


2021 ◽  
Author(s):  
Kathryn Woodcock ◽  
Steven L. Fischer

"This Guide is intended for working interpreters, interpreting students and educators, and those who employ or purchase the services of interpreters. Occupational health education is essential for professionals in training, to avoid early attrition from practice. 'Sign language interpreting' is considered to include interpretation between American Sign Language (ASL) and English, other spoken languages and corresponding sign languages, and between sign languages (e.g., Deaf Interpreters). Some of the occupational health issues may also apply equally to Communication Access Realtime Translation (CART) reporters, oral interpreters, and intervenors. The reader is encouraged to make as much use as possible of the information provided here." -- Introduction


Gesture ◽  
2013 ◽  
Vol 13 (1) ◽  
pp. 1-27 ◽  
Author(s):  
Rachel Sutton-Spence ◽  
Donna Jo Napoli

Sign Language poetry is especially valued for its presentation of strong visual images. Here, we explore the highly visual signs that British Sign Language and American Sign Language poets create as part of the ‘classifier system’ of their languages. Signed languages, as they create visually-motivated messages, utilise categoricity (more traditionally considered ‘language’) and analogy (more traditionally considered extra-linguistic and the domain of ‘gesture’). Classifiers in sign languages arguably show both these characteristics (Oviedo, 2004). In our discussion of sign language poetry, we see that poets take elements that are widely understood to be highly visual, closely representing their referents, and make them even more highly visual — so going beyond categorisation and into new areas of analogue.


1977 ◽  
Vol 6 (3) ◽  
pp. 379-388 ◽  
Author(s):  
James Woodward ◽  
Susan Desantis

This paper examines Negative Incorporation in various lects of two historically related sign languages, French Sign Language and American Sign Language. Negative Incorporation not only offers interesting insights into the structure of French and American Sign Language, but also into the descriptive and explanatory power of variation theory. By viewing Negative Incorporation in a dynamic framework, we are able to describe the variable usage of Negative Incorporation as a phonological process in French Sign Language and as a grammatical process in American Sign Language, to argue for possible early creolization in American Sign Language, to show the historical continuum between French Sign Language and American Sign Language despite heavy restructuring, and to demonstrate the influences of social variables on language variation and change, especially illustrating the progressive role of women in sign language change and the conservative forces in French Sign Language as compared with American Sign Language. (Sociolinguistics, sign language, creolization, linguistic changes.)


2009 ◽  
Vol 21 (2) ◽  
pp. 193-231 ◽  
Author(s):  
Adam Schembri ◽  
David McKee ◽  
Rachel McKee ◽  
Sara Pivac ◽  
Trevor Johnston ◽  
...  

In this study, we consider variation in a class of signs in Australian and New Zealand Sign Languages that includes the signs think, name, and clever. In their citation form, these signs are specified for a place of articulation at or near the signer's forehead or above, but are sometimes produced at lower locations. An analysis of 2667 tokens collected from 205 deaf signers in five sites across Australia and of 2096 tokens collected from 138 deaf signers from three regions in New Zealand indicates that location variation in these signs reflects both linguistic and social factors, as also reported for American Sign Language (Lucas, Bayley, & Valli, 2001). Despite similarities, however, we find that some of the particular factors at work, and the kinds of influence they have, appear to differ in these three signed languages. Moreover, our results suggest that lexical frequency may also play a role.


A Gesture Vocalizer is a small- or large-scale system that provides a way for people who are unable to speak to communicate easily. This paper describes a technique, the Finger Gesture Vocalizer, in which sensors are attached to a glove above the fingers of the person who wants to communicate. The sensors are arranged on the glove so that they capture the movements of the fingers; from the change in the sensors' resistance, the system identifies what the person wants to say. The message is displayed on an LCD and is also converted to audio using the APR33A3 audio processing unit. Standard sign languages, such as American Sign Language, can be employed while wearing these gloves.
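The classification step this abstract describes, mapping per-finger resistance changes to a message, can be sketched as follows. The threshold value, the gesture table, and the function names here are illustrative assumptions, not details taken from the paper:

```python
# Sketch of the finger-gesture classification step: each flex sensor's
# resistance (ohms) increases as the corresponding finger bends.
# BENT_THRESHOLD and GESTURE_TABLE are hypothetical values for illustration.

BENT_THRESHOLD = 25000  # assumed resistance above which a finger counts as bent

# Hypothetical lookup: per-finger bent/straight pattern -> message for LCD/audio
GESTURE_TABLE = {
    (True, True, True, True, True): "HELLO",
    (False, False, False, False, False): "STOP",
    (True, False, False, False, True): "THANK YOU",
}

def classify(resistances):
    """Map five flex-sensor resistance readings to a display message."""
    states = tuple(r > BENT_THRESHOLD for r in resistances)
    return GESTURE_TABLE.get(states, "UNKNOWN")
```

In the described system, the returned string would be written to the LCD and passed to the APR33A3 unit for audio playback; here it is simply returned for inspection.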


Author(s):  
Anne Therese Frederiksen ◽  
Rachel I. Mayberry

Implicit causality (IC) biases, the tendency of certain verbs to elicit re-mention of either the first-mentioned noun phrase (NP1) or the second-mentioned noun phrase (NP2) from the previous clause, are important in psycholinguistic research. Understanding IC verbs and the source of their biases in signed as well as spoken languages helps elucidate whether these phenomena are language general or specific to the spoken modality. As the first of its kind, this study investigates IC biases in American Sign Language (ASL) and provides IC bias norms for over 200 verbs, facilitating future psycholinguistic studies of ASL and comparisons of spoken versus signed languages. We investigated whether native ASL signers continued sentences with IC verbs (e.g., ASL equivalents of ‘Lisa annoys Maya because…’) by mentioning NP1 (i.e., Lisa) or NP2 (i.e., Maya). We found a tendency towards more NP2-biased verbs. Previous work has found that a verb’s thematic roles predict bias direction: stimulus-experiencer verbs (e.g., ‘annoy’), where the first argument is the stimulus (causing annoyance) and the second argument is the experiencer (experiencing annoyance), elicit more NP1 continuations. Verbs with experiencer-stimulus thematic roles (e.g., ‘love’) elicit more NP2 continuations. We probed whether the trend towards more NP2-biased verbs was related to an existing claim that stimulus-experiencer verbs do not exist in sign languages. We found that stimulus-experiencer structure, while permitted, is infrequent, impacting the IC bias distribution in ASL. Nevertheless, thematic roles predict IC bias in ASL, suggesting that the thematic role-IC bias relationship is stable across languages as well as modalities.


2014 ◽  
Vol 17 (1) ◽  
pp. 82-101 ◽  
Author(s):  
Jesse Stewart

In spoken languages, disfluent speech, narrative effects, discourse information, and phrase position may influence the lengthening of segments beyond their typical duration. In sign languages, however, the primary use of the visual-gestural modality results in articulatory differences not expressed in spoken languages. This paper looks at sign lengthening in American Sign Language (ASL). Comparing two retellings of the Pear Story narrative from five signers, three primary lengthening mechanisms were identified: elongation, repetition, and deceleration. These mechanisms allow signers to incorporate lengthening into signs which may benefit from decelerated language production due to high information load or complex articulatory processes. Using a mixed effects model, significant differences in duration were found between (i) non-conventionalized forms vs. lexical signs, (ii) signs produced during role shift vs. non-role shift, (iii) signs in phrase-final/initial vs. phrase-medial position, (iv) new vs. given information, and (v) (non-disordered) disfluent signing vs. non-disfluent signing. These results provide insights into duration effects caused by information load and articulatory processes in ASL.


2009 ◽  
Vol 11 (2) ◽  
pp. 139-183 ◽  
Author(s):  
Hsin-Hsien Lee

Handshape change is the change of handshape from one configuration to another. It is a unique and pervasive pattern attested in sign languages. The issue here is how to represent this change of handshape phonologically. Previous studies on handshape change were mostly done on American Sign Language (ASL) and two different ways of representing handshape change have been proposed. Some models (e.g. Liddell 1990; Uyechi 1996) propose that all surface handshapes are represented underlyingly, whereas others (e.g. Brentari 1998; Corina 1993; Sandler 1989) suggest representing only one handshape because the other handshape is predictable. Handshape change attested in monomorphemic signs in Taiwan Sign Language (TSL) will be described and analyzed in this paper. I argue that the TSL data support the latter view. In addition, a restrictive model is preferred if it can describe the data adequately and at the same time does not over-generate.
