Pragmatic and Linguistic Constraints on Message Formulation

1991 ◽  
Vol 34 (6) ◽  
pp. 1346-1361
Author(s):  
Paula M. Brown ◽  
Susan D. Fischer ◽  
Wynne Janis

This study provides a cross-linguistic replication, using American Sign Language (ASL), of the Brown and Dell (1987) finding that when relaying an action involving an instrument, English speakers are more likely to explicitly mention the instrument if it is atypically, rather than typically, used to accomplish that action. Subjects were 20 hearing-impaired users of English and 20 hearing-impaired users of ASL. Each subject read and retold, in either English or ASL, 20 short stories. Analyses of the stories revealed production decision differences between ASL and English, but no differences related to hearing status. In ASL, there is more explicitness, and importance seems to play a more pivotal role in instrument specification. The results are related to differences in the typology of English and ASL and are discussed with regard to second-language learning and translation.

2021 ◽  
pp. 095679762199155
Author(s):  
Amanda R. Brown ◽  
Wim Pouw ◽  
Diane Brentari ◽  
Susan Goldin-Meadow

When we use our hands to estimate the length of a stick in the Müller-Lyer illusion, we are highly susceptible to the illusion. But when we prepare to act on sticks under the same conditions, we are significantly less susceptible. Here, we asked whether people are susceptible to the illusion when they use their hands not to act on objects but to describe them in spontaneous co-speech gestures or conventional sign languages of the deaf. Thirty-two English speakers and 13 American Sign Language signers used their hands to act on, estimate the length of, and describe sticks eliciting the Müller-Lyer illusion. For both gesture and sign, the magnitude of illusion in the description task was smaller than the magnitude of illusion in the estimation task and not different from the magnitude of illusion in the action task. The mechanisms responsible for producing gesture in speech and sign thus appear to operate not on percepts involved in estimation but on percepts derived from the way we act on objects.


2012 ◽  
Vol 15 (2) ◽  
pp. 402-412 ◽  
Author(s):  
DIANE BRENTARI ◽  
MARIE A. NADOLSKE ◽  
GEORGE WOLFORD

In this paper the prosodic structure of American Sign Language (ASL) narratives is analyzed in deaf native signers (L1-D), hearing native signers (L1-H), and highly proficient hearing second language signers (L2-H). The results of this study show that the prosodic patterns used by these groups are associated both with their ASL language experience (L1 or L2) and with their hearing status (deaf or hearing), suggesting that experience using co-speech gesture (i.e. gesturing while speaking) may have some effect on the prosodic cues used by hearing signers, similar to the effects of the prosodic structure of an L1 on an L2.


The growth of technology has driven development in many fields, and one such field is assistive technology for people with hearing and speech impairments. The barrier between hearing people and people with hearing and speech disabilities can be reduced by using current technology to build an environment in which the two groups communicate easily with one another. ASL Interpreter aims to facilitate communication between hearing and speech-impaired individuals and others. This project focuses on developing software that converts American Sign Language to communicative English and vice versa, which is accomplished through image processing: a set of operations performed on an image to obtain an enhanced image or to extract useful information from it. Image processing in this project is carried out in MATLAB (MathWorks). The software captures a live image of the hand gesture through a webcam; the gesture is made to stand out by being distinctly colored against a black background. The contrasted hand gesture is stored in the database as a binary equivalent of each pixel's location, and the interpreter links that binary value to its corresponding translation stored in the database, which is integrated into the main image-processing interface. The Image Processing Toolbox, an inbuilt toolkit provided by MATLAB, is used in development: histogram equivalents of the reference images are stored in the database, and each extracted image is converted to a histogram with the imhist() function and compared against them. The concluding phase of the project, translation of speech to sign language, is designed by matching each letter to its equivalent hand gesture in the database and displaying the result as images. This project aims to ease the process of learning sign language and to help hearing-impaired people converse without difficulty.
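The histogram-matching step described in this abstract can be illustrated with a brief sketch. The version below uses Python and OpenCV rather than MATLAB's Image Processing Toolbox and imhist(); the threshold value, bin count, and the names gesture_histogram, classify_gesture, and reference_db are illustrative assumptions, not details taken from the project.

```python
# Minimal sketch of the histogram-comparison idea described above, in Python/OpenCV.
# Threshold, bin count, and database layout are assumptions for illustration only.
import cv2
import numpy as np

def gesture_histogram(gray_image, bins=64):
    """Threshold the hand against the dark background and return its normalized histogram."""
    _, binary = cv2.threshold(gray_image, 60, 255, cv2.THRESH_BINARY)
    hist = cv2.calcHist([gray_image], [0], binary, [bins], [0, 256])
    return cv2.normalize(hist, hist).flatten()

def classify_gesture(frame, reference_db):
    """Compare the live frame's histogram with each stored reference and return the best label."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    query = gesture_histogram(gray)
    scores = {label: cv2.compareHist(query.astype(np.float32),
                                     ref.astype(np.float32),
                                     cv2.HISTCMP_CORREL)
              for label, ref in reference_db.items()}
    return max(scores, key=scores.get)

if __name__ == "__main__":
    reference_db = {}  # e.g. {"A": hist_A, "B": hist_B, ...}, built offline from labelled images
    cap = cv2.VideoCapture(0)   # webcam capture, as in the described system
    ok, frame = cap.read()
    if ok and reference_db:
        print(classify_gesture(frame, reference_db))
    cap.release()
```

Correlation is only one of the histogram-distance measures OpenCV offers; the original system performs an analogous comparison between the live gesture's histogram and the histograms stored in its database.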


2020 ◽  
pp. 1-31
Author(s):  
IRIS BERENT ◽  
OUTI BAT-EL ◽  
DIANE BRENTARI ◽  
QATHERINE ANDAN ◽  
VERED VAKNIN-NUSBAUM

Does knowledge of language transfer spontaneously across language modalities? For example, do English speakers, who have had no command of a sign language, spontaneously project grammatical constraints from English to linguistic signs? Here, we address this question by examining the constraints on doubling. We first demonstrate that doubling (e.g. panana; generally: ABB) is amenable to two conflicting parses (identity vs. reduplication), depending on the level of analysis (phonology vs. morphology). We next show that speakers with no command of a sign language spontaneously project these two parses to novel ABB signs in American Sign Language. Moreover, the chosen parse (for signs) is constrained by the morphology of spoken language. Hebrew speakers can project the morphological parse when doubling indicates diminution, but English speakers only do so when doubling indicates plurality, in line with the distinct morphological properties of their spoken languages. These observations suggest that doubling in speech and signs is constrained by a common set of linguistic principles that are algebraic, amodal and abstract.


2018 ◽  
Vol 39 (5) ◽  
pp. 961-987 ◽  
Author(s):  
ZED SEVCIKOVA SEHYR ◽  
BRENDA NICODEMUS ◽  
JENNIFER PETRICH ◽  
KAREN EMMOREY

American Sign Language (ASL) and English differ in linguistic resources available to express visual–spatial information. In a referential communication task, we examined the effect of language modality on the creation and mutual acceptance of reference to non-nameable figures. In both languages, description times reduced over iterations and references to the figures’ geometric properties (“shape-based reference”) declined over time in favor of expressions describing the figures’ resemblance to nameable objects (“analogy-based reference”). ASL signers maintained a preference for shape-based reference until the final (sixth) round, while English speakers transitioned toward analogy-based reference by Round 3. Analogy-based references were more time efficient (associated with shorter round description times). Round completion times were longer for ASL than for English, possibly due to gaze demands of the task and/or to more shape-based descriptions. Signers’ referring expressions remained unaffected by figure complexity while speakers preferred analogy-based expressions for complex figures and shape-based expressions for simple figures. Like speech, co-speech gestures decreased over iterations. Gestures primarily accompanied shape-based references, but listeners rarely looked at these gestures, suggesting that they were recruited to aid the speaker rather than the addressee. Overall, different linguistic resources (classifier constructions vs. geometric vocabulary) imposed distinct demands on referring strategies in ASL and English.


1996 ◽  
Vol 6 (1) ◽  
pp. 65-86 ◽  
Author(s):  
Marina L. McIntire ◽  
Judy Reilly

In this study, we compared storytelling of a pictured narrative, Frog, Where Are You?, by 6 Deaf and 6 hearing mothers in American Sign Language (ASL) and in English, respectively. How do these mothers construct their stories, that is, how do they mark episodes? And how do English speakers' strategies differ from ASL users' strategies? We found that stories in ASL contained more explicit markers to signal both local and global relations of the narrative. Because of modality and grammatical differences between English and ASL, Deaf mothers seemed to have more strategies available to use. Although the overall pattern of use throughout the story was similar, Deaf mothers appeared to be more "dramatic" in their storytelling than were hearing mothers. Both groups of parents used a variety of markers to call their children's attention to the theme of the story.


2021 ◽  
Author(s):  
Bhavadharshini M ◽  
Josephine Racheal J ◽  
Kamali M ◽  
Sankar S

Sign language is a system of hand gestures that provides a means for individuals with hearing or speech impairments to communicate with others. To communicate with a hearing-impaired person, however, the other party must also have some knowledge of sign language, so it is often necessary to confirm that the message conveyed by the hearing-impaired person has been understood. The implemented system proposes real-time American Sign Language recognition with a Convolutional Neural Network (CNN), supported by the You Only Look Once (YOLO) algorithm. The algorithm first performs data acquisition, then pre-processes the gestures, and the results are used to trace hand movement with a combinational algorithm.
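As a rough illustration of how such a YOLO-based detector might be wired to a webcam for real-time recognition, the sketch below uses the ultralytics package as one possible implementation; the weights file asl_yolo.pt, the confidence threshold, and the display loop are assumptions for illustration and are not taken from the paper.

```python
# Minimal sketch of real-time sign detection with a YOLO-style model.
# "asl_yolo.pt" stands in for a hypothetical custom-trained ASL gesture model.
import cv2
from ultralytics import YOLO

model = YOLO("asl_yolo.pt")       # weights assumed to be trained on labelled hand-gesture images
cap = cv2.VideoCapture(0)         # data acquisition from the webcam

while True:
    ok, frame = cap.read()
    if not ok:
        break
    results = model.predict(frame, conf=0.5, verbose=False)  # detect and classify gestures
    for box in results[0].boxes:
        label = model.names[int(box.cls)]            # e.g. the recognised letter or sign
        x1, y1, x2, y2 = map(int, box.xyxy[0])       # traced hand location
        cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
        cv2.putText(frame, label, (x1, y1 - 10),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 255, 0), 2)
    cv2.imshow("ASL recognition", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```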

