A theory-driven model of handshape similarity

Phonology ◽  
2017 ◽  
Vol 34 (2) ◽  
pp. 221-241 ◽  
Author(s):  
Jonathan Keane ◽  
Zed Sevcikova Sehyr ◽  
Karen Emmorey ◽  
Diane Brentari

Following the Articulatory Model of Handshape (Keane 2014), which mathematically defines handshapes on the basis of joint angles, we propose two methods for calculating phonetic similarity: a contour difference method, which assesses the amount of change between handshapes within a fingerspelled word, and a positional similarity method, which compares similarity between pairs of letters in the same position across two fingerspelled words. Both methods are validated with psycholinguistic evidence based on similarity ratings by deaf signers. The results indicate that the positional similarity method more reliably predicts native signers' intuition judgements about handshape similarity. This new similarity metric fills a gap that has existed since effectively the beginning of sign-language linguistics: the lack of a theory-driven similarity metric.
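The two metrics described above can be sketched in a few lines once handshapes are represented as joint-angle vectors. The following is a minimal illustration, not the authors' implementation; the letter-to-angle table is hypothetical (real values would come from the Articulatory Model of Handshape), and simple Euclidean distance stands in for whatever distance function the model actually specifies.

```python
import math

# Hypothetical joint-angle vectors (degrees) for a few fingerspelled letters.
# A real application would use the full angle specification from the model.
HANDSHAPES = {
    "A": [90.0, 90.0, 90.0, 90.0, 10.0],
    "B": [0.0, 0.0, 0.0, 0.0, 80.0],
    "C": [45.0, 45.0, 45.0, 45.0, 45.0],
}

def distance(h1, h2):
    """Euclidean distance between two joint-angle vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(h1, h2)))

def contour_difference(word):
    """Total articulatory change between successive letters within one word."""
    shapes = [HANDSHAPES[ch] for ch in word]
    return sum(distance(a, b) for a, b in zip(shapes, shapes[1:]))

def positional_similarity(word1, word2):
    """Mean distance between letters in the same position across two words
    (assumes equal-length words for simplicity)."""
    pairs = list(zip(word1, word2))
    return sum(distance(HANDSHAPES[a], HANDSHAPES[b]) for a, b in pairs) / len(pairs)
```

Under this sketch, identical words score a positional distance of zero, and a word's contour difference grows with each handshape change it contains.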

2019 ◽  
Vol 22 (1) ◽  
pp. 83-111
Author(s):  
Ella Wehrmeyer

Abstract Concerted attention in sign language linguistics has focused on finding ways to document signs. To date, most notation systems have relied on a complex plethora of symbols and remain under-specific, to the extent that visual images are still the most widely accepted way of recording primary data. This paper presents a novel phonetic notation of handshape as a step towards deriving an International Phonetic Alphabet for sign languages, based on digit shape (configuration) and position in terms of reference coordinates, aiming at both readability and precision. It is sufficiently hybrid to allow for both accurate measurements and estimates of digit positions, thereby affording a way of representing handshapes suitable for lexicography, the study of phonetic variation and avatar programming. Originally tailored to describe handshapes in South African Sign Language, it can also notate gestures. After discussing transcription methods and hand physiology, digit configurations are defined in terms of joint angles. Variations in configuration positions are then specified in terms of Cartesian reference coordinates.


2016 ◽  
Vol 32 (3) ◽  
pp. 347-366 ◽  
Author(s):  
Joshua Williams ◽  
Sharlene Newman

In the present study we aimed to investigate phonological substitution errors made by hearing second-modality, second-language (M2L2) learners of American Sign Language (ASL) during a sentence translation task. Learners saw sentences in ASL that were signed by either a native signer or a M2L2 learner. Learners were asked simply to translate the sentence from ASL to English. Learners' responses were analysed for lexical translation errors that were caused by phonological parameter substitutions. Unlike previous related studies, tracking phonological substitution errors during sentence translation allows for the characterization of uncontrolled and naturalistic perception errors. Results indicated that learners made mostly movement errors, followed by handshape and location errors. Learners made more movement errors for sentences signed by the M2L2 learner relative to those by the native signer. Additionally, high proficiency learners made more handshape errors than low proficiency learners. Taken together, this pattern of results suggests that late M2L2 learners are poor at perceiving the movement parameter, and that M2L2 production variability in the movement parameter negatively affects perception.


Author(s):  
Ricardo Etxepare ◽  
Aritz Irurtzun

Several Upper Palaeolithic archaeological sites from the Gravettian period display hand stencils with missing fingers. On the basis of the stencils that Leroi-Gourhan identified in the cave of Gargas (France) in the late 1960s, we explore the hypothesis that those stencils represent hand signs with deliberate folding of fingers, intentionally projected as a negative figure onto the wall. Through a study of the biomechanics of handshapes, we analyse the articulatory effort required for producing the handshapes under the stencils in the Gargas cave, and show that only handshapes that are articulable in the air can be found among the existing stencils. In other words, handshape configurations that would have required using the cave wall as a support for the fingers are not attested. We argue that the stencils correspond to the type of handshape that one ordinarily finds in sign language phonology. More concretely, we claim that they correspond to signs of an ‘alternate’ or ‘non-primary’ sign language, like those still employed by a number of bimodal (speaking and signing) human groups in hunter–gatherer populations, like the Australian first nations or the Plains Indians. In those groups, signing is used for hunting and for a rich array of ritual purposes, including mourning and traditional story-telling. We discuss further evidence, based on typological generalizations about the phonology of non-primary sign languages and comparative ethnographic work, that points to such a parallelism. This evidence includes the fact that for some of those groups, stencil and petroglyph art has independently been linked to their sign language expressions. This article is part of the theme issue ‘Reconstructing prehistoric languages’.


Author(s):  
Marta Donazzan ◽  
Luciana Sanchez-Mendes

ABSTRACT This paper investigates the meaning of reduplication in Brazilian Sign Language (Libras), analyzing the contribution of each of its forms: repetition (rep) and alternation (alt). In order to determine their roles, we carried out data collection with a native signer during elicitation sessions, following the methodology for semantic elicitation (Matthewson, 2004). We also collected a spontaneous datum from another signer. We show that rep is related to aspectual distribution, while alt is associated with two pieces of information: participant-related distribution and aspectual distribution. We propose a formal analysis of each form, as well as of the way they interact compositionally.


2018 ◽  
Vol 2018 (69) ◽  
pp. 97-128
Author(s):  
Hanna Jaeger ◽  
Anita Junghanns

Abstract Deaf sign language users oftentimes claim to be able to recognise straight away whether their interlocutors are native signers. To date it is unclear, however, what exactly such judgement calls might be based on. The aim of the research presented was to explore whether specific articulatory features are associated with signers who have (allegedly) acquired German Sign Language (Deutsche Gebärdensprache, DGS) as their first language. The study is based on the analysis of qualitative and quantitative data. Qualitative data were generated in ten focus group settings. Each group was made up of three participants and one facilitator. Deaf participants' meta-linguistic claims concerning linguistic features of 'native signing' (i.e. what native signing looks like) were qualitatively analysed using grounded theory methods. Quantitative data were generated via a language assessment experiment designed around stimulus material extracted from DGS corpus data. Participants were asked to judge whether or not individual clips extracted from a DGS corpus had been produced by a native signer. Against the backdrop of the findings identified in the focus group data, the stimulus material was subsequently linguistically analysed in order to identify specific linguistic features that might account for some clips being judged as 'produced by a native signer' as opposed to others that were claimed to have been 'articulated by a non-native signer'. By juxtaposing meta-linguistic perspectives, the results of a language perception experiment and the linguistic analysis of the stimulus material, the study brings to the fore specific crystallisation points of linguistic and social features indexing linguistic authenticity.
The findings break new ground in that they suggest that the face as articulator in general, and micro-prosodic features expressed in the movement of eyes, eyebrows and mouth in particular, play a significant role in the perception of others as (non-)native signers.


2002 ◽  
Vol 5 (2) ◽  
pp. 105-130 ◽  
Author(s):  
Diane Brentari ◽  
Laurinda Crossley

The analysis in this paper deals with the prosodic cues that were present in a one-hour lecture by a native signer of American Sign Language (ASL). Special attention is paid to the interaction of the dominant hand (H1) and the nondominant hand (H2), as well as to facial expressions articulated on the lower face. In our corpus, we found that H1 and H2 interact in several prosodic contexts; we analyze four of them here: Single Prosodic Word, Multiple Prosodic Words in an Intermediate Phrase, Parenthetical, and Forward-Referencing. Our main finding is that, while the spread of the nondominant hand (H2-Spread) is an important redundant cue to prosodic structure, the primary cue is on the lower face. Our findings also confirmed positional cues and domain effects of H2-Spread in Prosodic Words and Phonological Phrases that were previously found in Israeli Sign Language.


2012 ◽  
Vol 15 (1) ◽  
pp. 39-72 ◽  
Author(s):  
Petra Eccarius ◽  
Rebecca Bour ◽  
Robert A. Scheidt

In sign language research, we understand little about articulatory factors involved in shaping phonemic boundaries or the amount (and articulatory nature) of acceptable phonetic variation between handshapes. To date, there exists no comprehensive analysis of handshape based on the quantitative measurement of joint angles during sign production. The purpose of our work is to develop a methodology for collecting and visualizing quantitative handshape data in an attempt to better understand how handshapes are produced at a phonetic level. In this pursuit, we seek to quantify the flexion and abduction angles of the finger joints using a commercial data glove (CyberGlove; Immersion Inc.). We present calibration procedures used to convert raw glove signals into joint angles. We then implement those procedures and evaluate their ability to accurately predict joint angle. Finally, we provide examples of how our recording techniques might inform current research questions.
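The calibration step described above, converting raw glove signals into joint angles, is often approached with a per-sensor linear mapping fitted from two reference postures. The sketch below illustrates that general idea only; it is not the authors' CyberGlove procedure, and the sensor readings shown are invented for the example. It assumes the sensor responds approximately linearly between a fully extended and a fully flexed posture.

```python
def two_point_calibration(raw_flat, raw_bent, angle_flat=0.0, angle_bent=90.0):
    """Return a function mapping a raw sensor value to a joint angle (degrees),
    assuming a linear sensor response between two known reference postures."""
    scale = (angle_bent - angle_flat) / (raw_bent - raw_flat)
    return lambda raw: angle_flat + scale * (raw - raw_flat)

# Hypothetical example: one flexion sensor reads 120 with the finger fully
# extended (0 degrees) and 220 when fully flexed (90 degrees).
to_angle = two_point_calibration(120, 220)
```

A reading halfway between the two reference values then maps to 45 degrees. Real calibration procedures typically add more reference postures per joint to correct for nonlinearity and cross-coupling between sensors.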


2015 ◽  
Vol 16 (1) ◽  
pp. 86-116
Author(s):  
Russell S. Rosen ◽  
Meredith Turtletaub ◽  
Mary DeLouise ◽  
Sarah Drake

Author(s):  
Hui Qu

In order for blind people to learn aerobics more conveniently, we combined Kinect skeletal tracking technology with aerobics-assisted training to design a Kinect-based aerobics-assisted training system. Using the Kinect somatosensory camera, we improve the feature extraction method and recognition algorithm for sign language and implement a sign language recognition system. Sign language is translated through the recognition system and expressed in understandable terms, providing a sound way of learning. The experimental results show that the system can automatically collect and recognize aerobics movements. By comparing them with the standard movements in the database, the system evaluates the trainer's posture in terms of joint coordinates and joint angles, and then provides movement contrast graphics and corresponding advice. The system can therefore effectively help the blind to learn aerobics.
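Evaluating posture "in terms of joint coordinates and joint angles", as described above, reduces to computing the angle at a joint from three tracked 3-D points. The snippet below is a generic sketch of that computation, not the system described in the abstract; the shoulder/elbow/wrist coordinates are invented for illustration.

```python
import math

def joint_angle(a, b, c):
    """Angle (degrees) at joint b formed by 3-D points a-b-c,
    e.g. shoulder-elbow-wrist from Kinect skeletal tracking."""
    v1 = [a[i] - b[i] for i in range(3)]
    v2 = [c[i] - b[i] for i in range(3)]
    dot = sum(x * y for x, y in zip(v1, v2))
    n1 = math.sqrt(sum(x * x for x in v1))
    n2 = math.sqrt(sum(x * x for x in v2))
    return math.degrees(math.acos(dot / (n1 * n2)))

# Hypothetical tracked points: a right angle at the elbow.
elbow_angle = joint_angle((1.0, 0.0, 0.0), (0.0, 0.0, 0.0), (0.0, 1.0, 0.0))
```

Comparing such angles against the corresponding angles of a stored reference movement gives a simple per-joint deviation score on which feedback can be based.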

