Attitudes toward signing human avatars vary depending on hearing status, age of signed language exposure, and avatar type

2021 ◽  
Author(s):  
Lorna C Quandt ◽  
Athena Willis ◽  
Melody Schwenk ◽  
Kaitlyn Weeks ◽  
Ruthie Ferster

The use of virtual humans (i.e., avatars) holds the potential for interactive, automated interaction in domains such as remote communication, customer service, or public announcements. For signed language users, signing avatars could potentially provide accessible content by sharing information in the signer's preferred or native language. As development of signing avatars has gained traction in recent years, many different methods of creating signing avatars have been developed, and the resulting avatars vary widely in their appearance, the naturalness of their movements, and their facial expressions, all of which may potentially impact users' acceptance of the avatars. We designed a study to test the effects of these intrinsic properties of different signing avatars, while also examining the extent to which people's own language experiences change their responses to signing avatars. We created video stimuli showing individual signs produced by 1) a live human signer (Human), 2) an avatar made using computer-synthesized animation (CS Avatar), and 3) an avatar made using high-fidelity motion capture (Mocap Avatar). We surveyed 191 American Sign Language users, including Deaf (N = 83), Hard-of-Hearing (N = 34), and Hearing (N = 67) groups. Participants rated the three signers on multiple dimensions, which were then combined to form ratings of Attitudes, Impressions, Comprehension, and Naturalness. Analyses demonstrated that the Mocap Avatar was rated significantly more positively than the CS Avatar on all primary variables. Correlations revealed that signers who acquire sign language later in life are more accepting of, and more likely to have positive impressions of, signing avatars. Finally, those who learned ASL earlier were more likely to give lower, more negative ratings to the CS Avatar, but this association was not seen for the Mocap Avatar or the Human signer.
Together, these findings suggest that movement quality and appearance significantly impact users' ratings of signing avatars, and show that signed language users with earlier age of ASL exposure are the most sensitive to movement quality issues seen in computer-generated avatars. We suggest that future efforts to develop signing avatars prioritize retaining the fluid movement qualities that are integral to signed languages.

2021 ◽  
Author(s):  
Lorna C Quandt ◽  
Athena Willis ◽  
Carly Leannah

Signed language users communicate in a wide array of sub-optimal environments, such as in dim lighting or from a distance. While fingerspelling is a common and essential part of signed languages, the perception of fingerspelling in varying visual environments is not well understood. Signed languages such as American Sign Language (ASL) rely on visuospatial information that combines hand and bodily movements, facial expressions, and fingerspelling. Linguistic information in ASL is conveyed with movement and spatial patterning, which lends itself well to using dynamic Point Light Display (PLD) stimuli to represent sign language movements. We created PLD videos of fingerspelled location names. The location names were either Real (e.g., KUWAIT) or Pseudo-names (e.g., CLARTAND), and the PLDs showed either a High or a Low number of markers. In an online study, Deaf and Hearing ASL users (total N = 283) watched 27 PLD stimulus videos that varied by Realness and Number of Markers. We calculated accuracy and confidence scores in response to each video. We predicted that when signers see ASL fingerspelled letter strings in a suboptimal visual environment, language experience in ASL will be positively correlated with accuracy and self-rated confidence scores. We also predicted that Real location names would be understood better than Pseudo names. Our findings show that participants were more accurate and confident in response to Real place names than Pseudo names and for stimuli with High rather than Low markers. We also discovered a significant interaction between Age and Realness, which shows that as people age, they can better use outside world knowledge to inform their fingerspelling success. Finally, we examined the accuracy and confidence in fingerspelling perception in sub-groups of people who had learned ASL before the age of four. 
Studying the relationship between language experience and PLD fingerspelling perception allows us to explore how hearing status, ASL fluency levels, and age of language acquisition affect the core abilities of understanding fingerspelling.


Languages ◽  
2019 ◽  
Vol 4 (4) ◽  
pp. 90
Author(s):  
Kim B. Kurz ◽  
Kellie Mullaney ◽  
Corrine Occhino

Constructed action is a cover term used in signed language linguistics to describe multi-functional constructions which encode perspective-taking and viewpoint. Within constructed action, viewpoint constructions serve to create discourse coherence by allowing signers to share perspectives and psychological states. Character, observer, and blended viewpoint constructions have been well documented in signed language literature in Deaf signers. However, little is known about hearing second language learners’ use of constructed action or about the acquisition and use of viewpoint constructions. We investigate the acquisition of viewpoint constructions in 11 college students acquiring American Sign Language (ASL) as a second language in a second modality (M2L2). Participants viewed video clips from the cartoon Canary Row and were asked to “retell the story as if you were telling it to a deaf friend”. We analyzed the signed narratives for time spent in character, observer, and blended viewpoints. Our results show that despite predictions of an overall increase in use of all types of viewpoint constructions, students varied in their time spent in observer and character viewpoints, while blended viewpoint was rarely observed. We frame our preliminary findings within the context of M2L2 learning, briefly discussing how gestural strategies used in multimodal speech-gesture constructions may influence learning trajectories.


Author(s):  
David Quinto-Pozos ◽  
Robert Adam

Language contact of various kinds is the norm in Deaf communities throughout the world, and this allows for exploration of the role of the different kinds of modality (be it spoken, signed or written, or a combination of these) and the channel of communication in language contact. Drawing its evidence largely from instances of American Sign Language (ASL) this chapter addresses and illustrates several of these themes: sign-speech contact, sign-writing contact, and sign-sign contact, examining instances of borrowing and bilingualism between some of these modalities, and compares these to contact between hearing users of spoken languages, specifically in this case American English.


Gesture ◽  
2004 ◽  
Vol 4 (1) ◽  
pp. 43-73 ◽  
Author(s):  
Sherman Wilcox

In this paper I explore the role of gesture in the development of signed languages. Using data from American Sign Language, Catalan Sign Language, French Sign Language, and Italian Sign Language, as well as historical sources describing gesture in the Mediterranean region, I demonstrate that gesture enters the linguistic system via two distinct routes. In one, gesture serves as a source of lexical and grammatical morphemes in signed languages. In the second, elements become directly incorporated into signed language morphology, bypassing the lexical stage. Finally, I propose a unifying framework for understanding the gesture-language interface in signed and spoken languages.


2012 ◽  
Vol 15 (2) ◽  
pp. 402-412 ◽  
Author(s):  
DIANE BRENTARI ◽  
MARIE A. NADOLSKE ◽  
GEORGE WOLFORD

In this paper the prosodic structure of American Sign Language (ASL) narratives is analyzed in deaf native signers (L1-D), hearing native signers (L1-H), and highly proficient hearing second language signers (L2-H). The results of this study show that the prosodic patterns used by these groups are associated both with their ASL language experience (L1 or L2) and with their hearing status (deaf or hearing), suggesting that experience using co-speech gesture (i.e. gesturing while speaking) may have some effect on the prosodic cues used by hearing signers, similar to the effects of the prosodic structure of an L1 on an L2.


Author(s):  
Jon Henner ◽  
Robert Hoffmeister ◽  
Jeanne Reis

Limited choices exist for assessing the signed language development of deaf and hard of hearing children. Over the past 30 years, the American Sign Language Assessment Instrument (ASLAI) has been one of the top choices for norm-referenced assessment of deaf and hard of hearing children who use American Sign Language. Signed language assessments can also be used to evaluate the effects of a phenomenon known as language deprivation, which tends to affect deaf children. They can also measure the effects of impoverished and idiosyncratic nonstandard signs and grammar used by educators of the deaf and professionals who serve the Deaf community. This chapter discusses what was learned while developing the ASLAI and provides guidelines for educators and researchers of the deaf who seek to develop their own signed language assessments.


Target ◽  
1995 ◽  
Vol 7 (1) ◽  
pp. 135-149 ◽  
Author(s):  
William P. Isham

Abstract Research using interpreters who work with signed languages can aid us in understanding the cognitive processes of interpretation in general. Using American Sign Language (ASL) as an example, the nature of signed languages is outlined first. Then the difference between signed languages and manual codes for spoken languages is delineated, and it is argued that these two manners of communicating through the visual channel offer a unique research opportunity. Finally, an example from recent research is used to demonstrate how comparisons between spoken-language interpreters and signed-language interpreters can be used to test hypotheses regarding interpretation.


Author(s):  
Joseph Hill

This chapter describes how ideologies about signed languages have come about, and what policies and attitudes have resulted. Language ideologies have governed the formal recognition of signed language at local, national, and international levels, such as that of the United Nations. The chapter discusses three major areas in the study of attitudes toward signed languages: attitudes versus structural reality; the social factors and educational policies that have contributed to language attitudes; and the impact of language attitudes on identity and educational policy. Even in the United States, American Sign Language does not get recognition as a language in every region, and the attempt to suppress sign language is still operative. This is a worldwide issue for many countries with histories of opposition to signed languages that parallel the history of the United States.


1999 ◽  
Vol 26 (2) ◽  
pp. 321-338 ◽  
Author(s):  
E. DAYLENE RICHMOND-WELTY ◽  
PATRICIA SIPLE

Signed languages make unique demands on gaze during communication. Bilingual children acquiring both a spoken and a signed language must learn to differentiate gaze use for their two languages. Gaze during utterances was examined for a set of bilingual-bimodal twins acquiring spoken English and American Sign Language (ASL) and a set of monolingual twins acquiring ASL when the twins were aged 2;0, 3;0 and 4;0. The bilingual-bimodal twins differentiated their languages by age 3;0. Like the monolingual ASL twins, the bilingual-bimodal twins established mutual gaze at the beginning of their ASL utterances and either maintained gaze to the end or alternated gaze to include a terminal look. In contrast, like children acquiring spoken English monolingually, the bilingual-bimodal twins established mutual gaze infrequently for their spoken English utterances. When they did establish mutual gaze, it occurred later in their spoken utterances and they tended to look away before the end.


2001 ◽  
Vol 28 (1-2) ◽  
pp. 143-186 ◽  
Author(s):  
Susan Lloyd Mcburney

Summary The first modern linguistic analysis of a signed language was published in 1960 – William Clarence Stokoe's (1919–2000) Sign Language Structure. Although the initial impact of Stokoe's monograph on linguistics and education was minimal, his work formed a solid base for what was to become a new field of research: American Sign Language (ASL) Linguistics. Together with the work of those who followed (in particular Ursula Bellugi and colleagues), Stokoe's ground-breaking work on the structure of ASL has led to an acceptance of signed languages as autonomous linguistic systems that exhibit the complex structure characteristic of all human languages.
