Real space blends in spoken language

Gesture ◽  
2004 ◽  
Vol 4 (1) ◽  
pp. 75-89 ◽  
Author(s):  
David MacGregor

In analyzing the use of space in American Sign Language (ASL), Liddell (2003) argues convincingly that no account of ASL can be complete without a discussion of how linguistic signs and non-linguistic gestures and gradient phenomena work together to create meaning. This represents a departure from the assumptions of much of linguistic theory, which has attempted to describe purely linguistic phenomena as part of an autonomous system. It also raises the question of whether these phenomena are peculiar to ASL and other sign languages, or if they also apply to spoken language. In this paper, I show how Liddell’s approach can be applied to English data to provide a fuller explanation of how speakers create meaning. Specifically, I analyze Jack Lemmon’s use of space, gesture, and voice in a scene from the movie “Mister Roberts”.

Author(s):  
Greg Evans

Linguistic theory has traditionally defined language in terms of speech and has, as a result, labelled sign languages as non-linguistic systems. Recent advances in sign language linguistic research, however, indicate that modern linguistic theory must include sign language research and theory. This paper examines the historical bias linguistic theory has maintained towards sign languages and refutes the classification of sign languages as contrived artificial systems by surveying current linguistic research into American Sign Language. The growing body of American Sign Language research demonstrates that a signed language can have all the structural levels of spoken language despite its visual-spatial mode. This research also indicates that signed languages are an important source of linguistic data that can help further develop a cognitive linguistic theory.


2021 ◽  
pp. 095679762199155
Author(s):  
Amanda R. Brown ◽  
Wim Pouw ◽  
Diane Brentari ◽  
Susan Goldin-Meadow

When we use our hands to estimate the length of a stick in the Müller-Lyer illusion, we are highly susceptible to the illusion. But when we prepare to act on sticks under the same conditions, we are significantly less susceptible. Here, we asked whether people are susceptible to the illusion when they use their hands not to act on objects but to describe them in spontaneous co-speech gestures or conventional sign languages of the deaf. Thirty-two English speakers and 13 American Sign Language signers used their hands to act on, estimate the length of, and describe sticks eliciting the Müller-Lyer illusion. For both gesture and sign, the magnitude of illusion in the description task was smaller than the magnitude of illusion in the estimation task and not different from the magnitude of illusion in the action task. The mechanisms responsible for producing gesture in speech and sign thus appear to operate not on percepts involved in estimation but on percepts derived from the way we act on objects.
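The comparison rests on a simple measure of illusion magnitude: the difference between hand responses to fins-out and fins-in configurations of physically identical sticks. Below is a minimal sketch of that computation, with invented grip measurements rather than the study's data:

```python
# Hypothetical illusion-magnitude computation for the Müller-Lyer paradigm.
# All measurements are illustrative stand-ins, not data from Brown et al.

def illusion_magnitude(fins_out, fins_in):
    """Mean response to fins-out sticks minus mean response to fins-in
    sticks of the same physical length; larger values = more illusion."""
    mean = lambda xs: sum(xs) / len(xs)
    return mean(fins_out) - mean(fins_in)

# Hand openings (mm) for identical 100 mm sticks under three tasks.
tasks = {
    "estimation":  ([112, 110, 115], [91, 93, 90]),
    "action":      ([103, 101, 104], [99, 98, 100]),
    "description": ([104, 102, 103], [98, 97, 99]),
}

for task, (fins_out, fins_in) in tasks.items():
    print(f"{task:>11}: {illusion_magnitude(fins_out, fins_in):+.1f} mm")
```

On this measure, the reported pattern is that the description value sits near the action value and well below the estimation value.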


2021 ◽  
Author(s):  
Kathryn Woodcock ◽  
Steven L. Fischer

<div>"This Guide is intended for working interpreters, interpreting students and educators, and those who employ or purchase the services of interpreters. Occupational health education is essential for professionals in training, to avoid early attrition from practice. "Sign language interpreting" is considered to include interpretation between American Sign Language (ASL) and English, other spoken languages and corresponding sign languages, and between sign languages (e.g., Deaf Interpreters). Some of the occupational health issues may also apply equally to Communication Access Realtime Translation (CART) reporters, oral interpreters, and intervenors. The reader is encouraged to make as much use as possible of the information provided here". -- Introduction.</div><div><br></div>


Gesture ◽  
2013 ◽  
Vol 13 (1) ◽  
pp. 1-27 ◽  
Author(s):  
Rachel Sutton-Spence ◽  
Donna Jo Napoli

Sign Language poetry is especially valued for its presentation of strong visual images. Here, we explore the highly visual signs that British Sign Language and American Sign Language poets create as part of the ‘classifier system’ of their languages. Signed languages, as they create visually-motivated messages, utilise categoricity (more traditionally considered ‘language’) and analogy (more traditionally considered extra-linguistic and the domain of ‘gesture’). Classifiers in sign languages arguably show both these characteristics (Oviedo, 2004). In our discussion of sign language poetry, we see that poets take elements that are widely understood to be highly visual, closely representing their referents, and make them even more highly visual — so going beyond categorisation and into new areas of analogue.


1977 ◽  
Vol 6 (3) ◽  
pp. 379-388 ◽  
Author(s):  
James Woodward ◽  
Susan Desantis

This paper examines Negative Incorporation in various lects of two historically related sign languages, French Sign Language and American Sign Language. Negative Incorporation not only offers interesting insights into the structure of French and American Sign Language, but also into the descriptive and explanatory power of variation theory. By viewing Negative Incorporation in a dynamic framework, we are able to describe the variable usage of Negative Incorporation as a phonological process in French Sign Language and as a grammatical process in American Sign Language, to argue for possible early creolization in American Sign Language, to show the historical continuum between French Sign Language and American Sign Language despite heavy restructuring, and to demonstrate the influences of social variables on language variation and change, especially illustrating the progressive role of women in sign language change and the conservative forces in French Sign Language as compared with American Sign Language. (Sociolinguistics, sign language, creolization, linguistic changes.)


2015 ◽  
Vol 19 (2) ◽  
pp. 128-148 ◽  
Author(s):  
Joshua Williams ◽  
Isabelle Darcy ◽  
Sharlene Newman

Little is known about the effect of acquiring another language modality on second language (L2) working memory (WM) capacity. Differential indexing within the WM system based on language modality may explain differences in performance on WM tasks in sign and spoken language. We investigated the effect of language modality (sign versus spoken) on L2 WM capacity. Results indicated reduced L2 WM span relative to first language span for both L2 learners of Spanish and American Sign Language (ASL). Importantly, ASL learners had lower L2 WM spans than Spanish learners. Additionally, ASL learners increased their L2 WM spans as a function of proficiency, whereas Spanish learners did not. This pattern of results demonstrated that acquiring another language modality disadvantages ASL learners. We posited that this disadvantage arises out of an inability to correctly and efficiently allocate linguistic information to the visuospatial sketchpad due to L1-related indexing bias.
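A minimal sketch of the group comparison the abstract describes, with invented span and proficiency numbers (the real study used standard span tasks; every value and variable name here is illustrative):

```python
# Hypothetical sketch: L1 vs. L2 working-memory spans and the
# span-proficiency relationship. All numbers are invented for illustration.
from statistics import mean

def pearson_r(xs, ys):
    """Pearson correlation, computed directly from the definitions."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Per-learner (L1 span, L2 span, proficiency score) for each group.
asl_learners     = [(6.0, 3.5, 40), (5.5, 4.0, 55), (6.5, 5.0, 70), (6.0, 5.5, 85)]
spanish_learners = [(6.0, 5.0, 45), (5.5, 5.0, 60), (6.5, 5.5, 75), (6.0, 5.0, 90)]

for name, group in [("ASL", asl_learners), ("Spanish", spanish_learners)]:
    l1, l2, prof = zip(*group)
    print(f"{name}: mean L1 span {mean(l1):.1f}, mean L2 span {mean(l2):.1f}, "
          f"r(L2 span, proficiency) = {pearson_r(l2, prof):+.2f}")
```

With numbers shaped like these, the ASL group shows both the lower L2 span and the stronger span-proficiency correlation that the abstract reports.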


2009 ◽  
Vol 21 (2) ◽  
pp. 193-231 ◽  
Author(s):  
Adam Schembri ◽  
David McKee ◽  
Rachel McKee ◽  
Sara Pivac ◽  
Trevor Johnston ◽  
...  

In this study, we consider variation in a class of signs in Australian and New Zealand Sign Languages that includes the signs THINK, NAME, and CLEVER. In their citation form, these signs are specified for a place of articulation at or near the signer's forehead or above, but are sometimes produced at lower locations. An analysis of 2667 tokens collected from 205 deaf signers in five sites across Australia and of 2096 tokens collected from 138 deaf signers from three regions in New Zealand indicates that location variation in these signs reflects both linguistic and social factors, as also reported for American Sign Language (Lucas, Bayley, & Valli, 2001). Despite similarities, however, we find that some of the particular factors at work, and the kinds of influence they have, appear to differ in these three signed languages. Moreover, our results suggest that lexical frequency may also play a role.
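Analyses of this kind model each token's realisation (citation location vs. lowered) against linguistic and social predictors simultaneously. Variationist studies like this one have traditionally used Varbrul-style multivariate analysis; the sketch below shows the equivalent logistic regression in general-purpose tools, with an invented token table and coding scheme:

```python
# Hypothetical sketch of a variationist analysis: does a forehead-located
# sign surface at a lowered location? Data and codings are invented, not
# taken from the Australian or New Zealand corpora.
import pandas as pd
import statsmodels.formula.api as smf

tokens = pd.DataFrame({
    # 1 = produced below the citation-form location, 0 = at citation form
    "lowered":     [1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1],
    # linguistic factor: grammatical class of the sign (invented coding)
    "grammatical": ["verb", "noun", "verb", "noun", "noun", "verb",
                    "verb", "noun", "verb", "noun", "noun", "verb"],
    # social factor: signer age (invented)
    "age":         [24, 61, 38, 45, 52, 30, 27, 66, 55, 33, 47, 29],
})

# Raw lowering rates by grammatical class ...
print(tokens.groupby("grammatical")["lowered"].mean())

# ... and a logistic regression combining the linguistic and social factors.
model = smf.logit("lowered ~ C(grammatical) + age", data=tokens).fit(disp=False)
print(model.params)
```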


A gesture vocalizer is a small- or large-scale system that provides a way for people with speech impairments to communicate easily. This paper defines a technique, the Finger Gesture Vocalizer, in which sensors are attached to a glove above the fingers of the person who wants to communicate. The sensors are arranged on the glove so that they capture the movements of the fingers; from the change in the sensors' resistance, the system identifies what the person wants to say. The message is displayed on an LCD and is also converted to audio using the APR33A3 audio-processing unit. Standard sign languages, such as American Sign Language, can be employed while wearing these gloves.
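A minimal host-side sketch of the core mapping described above, assuming bend-sensitive sensors whose resistance rises when a finger flexes; the threshold, gesture table, and read_sensors stub are all hypothetical, and the real device would run on a microcontroller driving the LCD and the APR33A3:

```python
# Hypothetical sketch of the sensor-to-message mapping described above.
# Real hardware would read ADC channels on a microcontroller; here a stub
# stands in for the five per-finger resistance readings.

BENT_THRESHOLD = 30_000  # ohms; above this a finger counts as bent (assumed)

# Each gesture is a pattern of bent (1) / straight (0) fingers:
# (thumb, index, middle, ring, pinky) -> message to display and speak.
GESTURES = {
    (0, 1, 1, 1, 1): "Hello",
    (1, 0, 0, 1, 1): "Thank you",
    (1, 1, 0, 0, 0): "I need help",
}

def read_sensors():
    """Stub for the glove's five resistance readings (ohms)."""
    return (12_000, 41_000, 39_500, 45_200, 38_700)

def classify(readings):
    """Threshold each reading into bent/straight, then look up the pattern."""
    pattern = tuple(int(r > BENT_THRESHOLD) for r in readings)
    return GESTURES.get(pattern, "<unrecognized gesture>")

if __name__ == "__main__":
    message = classify(read_sensors())
    print(message)  # in the real device: show on LCD, trigger APR33A3 audio
```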


2017 ◽  
Vol 2 ◽  
pp. 14 ◽  
Author(s):  
Marjorie Herbert ◽  
Acrisio Pires

The audiologically deaf members of the American Deaf community display bilingual competence in American Sign Language (ASL) and English, although their language acquisition trajectories often involve delayed exposure to one or both languages. There is a great deal of variation in production among these signers, ranging from very ASL-typical productions to productions that seem to display heavy English influence. The latter, mixed productions, coined "Contact Signing" by Lucas & Valli (1992), could represent a type of codeswitching, referred to as 'code-blending' in sign language-spoken language contexts (e.g. Baker & Van den Bogaerde 2008), in which bilinguals invoke knowledge of their two grammars in concert; alternatively, they could be more like a mixed language, in which a third grammar, distinct from both ASL and English, constrains them. We argue, based on the analysis of our corpus of naturalistic data collected in an all-deaf sociolinguistic environment, that Contact Signing provides evidence for code-blending, given the distribution of English- vs. ASL-based language properties in the production data from the participants in our study.


Author(s):  
Anne Therese Frederiksen ◽  
Rachel I. Mayberry

Implicit causality (IC) biases, the tendency of certain verbs to elicit re-mention of either the first-mentioned noun phrase (NP1) or the second-mentioned noun phrase (NP2) from the previous clause, are important in psycholinguistic research. Understanding IC verbs and the source of their biases in signed as well as spoken languages helps elucidate whether these phenomena are language general or specific to the spoken modality. As the first of its kind, this study investigates IC biases in American Sign Language (ASL) and provides IC bias norms for over 200 verbs, facilitating future psycholinguistic studies of ASL and comparisons of spoken versus signed languages. We investigated whether native ASL signers continued sentences with IC verbs (e.g., ASL equivalents of ‘Lisa annoys Maya because…’) by mentioning NP1 (i.e., Lisa) or NP2 (i.e., Maya). We found a tendency towards more NP2-biased verbs. Previous work has found that a verb’s thematic roles predict bias direction: stimulus-experiencer verbs (e.g., ‘annoy’), where the first argument is the stimulus (causing annoyance) and the second argument is the experiencer (experiencing annoyance), elicit more NP1 continuations. Verbs with experiencer-stimulus thematic roles (e.g., ‘love’) elicit more NP2 continuations. We probed whether the trend towards more NP2-biased verbs was related to an existing claim that stimulus-experiencer verbs do not exist in sign languages. We found that stimulus-experiencer structure, while permitted, is infrequent, impacting the IC bias distribution in ASL. Nevertheless, thematic roles predict IC bias in ASL, suggesting that the thematic role-IC bias relationship is stable across languages as well as modalities.
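The norming logic behind such a study can be stated compactly: for each verb, count how often participants' continuations re-mention NP1 versus NP2, and score the bias as a proportion. A minimal sketch with invented counts (the published norms cover over 200 ASL verbs):

```python
# Hypothetical IC-bias norming: proportion of NP2 continuations per verb.
# Counts are invented for illustration; verb glosses follow the uppercase
# convention for sign glosses.

continuation_counts = {
    # verb: (NP1 continuations, NP2 continuations)
    "ANNOY": (34, 14),   # stimulus-experiencer: expected NP1 bias
    "LOVE":  (9, 39),    # experiencer-stimulus: expected NP2 bias
    "HELP":  (22, 26),
}

for verb, (np1, np2) in continuation_counts.items():
    p_np2 = np2 / (np1 + np2)
    direction = "NP2" if p_np2 > 0.5 else "NP1"
    print(f"{verb:<6} NP2 proportion = {p_np2:.2f} ({direction}-biased)")
```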

