From action to spoken and signed language through gesture

Author(s):  
Virginia Volterra ◽  
Olga Capirci ◽  
Pasquale Rinaldi ◽  
Laura Sparaci
2019 ◽  
Vol 5 (1) ◽  
pp. 583-600
Author(s):  
Lindsay Ferrara ◽  
Torill Ringsø

Abstract
Previous studies on perspective in spatial signed language descriptions suggest a basic dichotomy between a route and a survey perspective: the signer is conceptualized either as a mobile agent within a life-sized scene or as an external observer, in a fixed position, of a scaled-down scene. We challenge this dichotomy by investigating the particular couplings of vantage point position and mobility engaged during various types of spatial language produced across eight naturalistic conversations in Norwegian Sign Language. Spatial language was annotated for the purpose of the segment, the size of the environment described, the signs produced, and the location and mobility of vantage points. Analysis revealed that survey and route perspectives, as characterized in the literature, do not adequately account for the range of vantage point combinations observed in conversation (e.g., external but mobile vantage points). There is also preliminary evidence that the purpose of the spatial language and the size of the environment described may play a role in how signers engage vantage points. Finally, the study underscores the importance of investigating spatial language within naturalistic conversational contexts.


2012 ◽  
Vol 41 (1) ◽  
pp. 29-71 ◽  
Author(s):  
Terra Edwards

Abstract
This article is concerned with how social actors establish relations between language, the body, and the physical and social environment. The empirical focus is a series of interactions between Deaf-Blind people and tactile signed language interpreters in Seattle, Washington. Many members of the Seattle Deaf-Blind community were born deaf and, due to a genetic condition, lose their vision slowly over the course of many years. Drawing on recent work in language and practice theory, I argue that these relations are established by Deaf-Blind people through processes of integration, whereby linguistic, embodied, and social elements of a fading visual order are made continuous with corresponding elements in an emerging tactile order. In doing so, I contribute to current attempts in linguistic anthropology to model the means by which embodied, linguistic, and social phenomena crystallize in relational patterns to yield worlds that take on the appearance of concreteness and naturalness. (Classifiers, Deaf-Blind, integration, interpretation, language and embodiment, practice, rhythm, Tactile American Sign Language, tactility)


Gesture ◽  
2013 ◽  
Vol 13 (3) ◽  
pp. 354-376 ◽  
Author(s):  
Dea Hunsicker ◽  
Susan Goldin-Meadow

All established languages, spoken or signed, make a distinction between nouns and verbs. Even a young sign language emerging within a family of deaf individuals has been found to mark the noun-verb distinction, and to use handshape type to do so. Here we ask whether handshape type is used to mark the noun-verb distinction in a gesture system invented by a deaf child who does not have access to a usable model of either spoken or signed language. The child produces homesigns that have linguistic structure, but receives from his hearing parents co-speech gestures that are structured differently from his own gestures. Thus, unlike users of established and emerging languages, the homesigner is a producer of his system but does not receive it from others. Nevertheless, we found that the child used handshape type to mark the distinction between nouns and verbs at the early stages of development. The noun-verb distinction is thus so fundamental to language that it can arise in a homesign system not shared with others. We also found that the child abandoned handshape type as a device for distinguishing nouns from verbs at just the moment when he developed a combinatorial system of handshape and motion components that marked the distinction. The way the noun-verb distinction is marked thus depends on the full array of linguistic devices available within the system.


2021 ◽  
Author(s):  
Lorna C Quandt ◽  
Athena Willis ◽  
Carly Leannah

Signed language users communicate in a wide array of sub-optimal environments, such as in dim lighting or from a distance. While fingerspelling is a common and essential part of signed languages, the perception of fingerspelling in varying visual environments is not well understood. Signed languages such as American Sign Language (ASL) rely on visuospatial information that combines hand and body movements, facial expressions, and fingerspelling. Linguistic information in ASL is conveyed through movement and spatial patterning, which lends itself well to using dynamic Point Light Display (PLD) stimuli to represent sign language movements. We created PLD videos of fingerspelled location names, which were either Real (e.g., KUWAIT) or Pseudo names (e.g., CLARTAND), and the PLDs showed either a High or a Low number of markers. In an online study, Deaf and Hearing ASL users (total N = 283) watched 27 PLD stimulus videos that varied by Realness and Number of Markers, and we calculated accuracy and confidence scores in response to each video. We predicted that when signers see ASL fingerspelled letter strings in a suboptimal visual environment, ASL language experience would be positively correlated with accuracy and self-rated confidence scores. We also predicted that Real location names would be understood better than Pseudo names. Our findings show that participants were more accurate and confident in response to Real place names than to Pseudo names, and to stimuli with High rather than Low numbers of markers. We also found a significant interaction between Age and Realness, indicating that as people age, they become better able to draw on outside-world knowledge to support fingerspelling perception. Finally, we examined accuracy and confidence in fingerspelling perception among sub-groups of participants who had learned ASL before the age of four.
Studying the relationship between language experience and PLD fingerspelling perception allows us to explore how hearing status, ASL fluency, and age of language acquisition affect the core ability to understand fingerspelling.

