Libras and Articulatory Phonology

2018 ◽  
Vol 3 (1) ◽  
pp. 103-124
Author(s):  
Adelaide H. P. Silva ◽  
André Nogueira Xavier

This paper proposes a new approach to the phonological representation of Brazilian Sign Language (Libras). We depart from the observation that traditional analyses have overlooked features of signed languages which have no (exact) correspondence in spoken languages. Moreover, traditional approaches impose spoken language theoretical constructs on signed language analyses and, in doing so, disregard the possibility that signed languages follow different principles and that analytical categories developed for spoken languages may be inaccurate for signed languages. We therefore argue that an approach grounded in a general theory of movement can account for signed language phonology more accurately. Following Articulatory Phonology, we propose the analytical primes for a motor-oriented phonological approach to Libras, i.e., we determine the articulatory gestures that constitute lexical items in a signed language. In addition, we propose a representation of the sign BEETLE-CAR as a gestural score and explain how its gestures are coordinated with respect to one another. As discussed, this approach allows us to explain attested cases of variation in our data more satisfactorily.

Target ◽  
1995 ◽  
Vol 7 (1) ◽  
pp. 135-149 ◽  
Author(s):  
William P. Isham

Abstract Research using interpreters who work with signed languages can aid us in understanding the cognitive processes of interpretation in general. Using American Sign Language (ASL) as an example, the nature of signed languages is outlined first. Then the difference between signed languages and manual codes for spoken languages is delineated, and it is argued that these two manners of communicating through the visual channel offer a unique research opportunity. Finally, an example from recent research is used to demonstrate how comparisons between spoken-language interpreters and signed-language interpreters can be used to test hypotheses regarding interpretation.


2021 ◽  
Vol 3 (1) ◽  
pp. 169-181
Author(s):  
André Nogueira Xavier

Signs, the lexical items of signed languages, can be articulatorily characterized as one- or two-handed (Klima and Bellugi, 1979). It has been observed in the signed language literature that some one-handed signs can undergo doubling of the manual articulator to express meaning intensification (Johnston and Schembri, 1999). This work reports the results of an experiment designed and carried out (1) to elicit intensified forms of some signs of Brazilian Sign Language (Libras) and (2) to check the extent to which doubling the number of hands in signs typically produced with only one hand is employed as a resource for expressing the intensification of their meaning. The analysis of the data obtained revealed that subjects were consistent in changing their facial and body expressions, as well as aspects of their hands' movement, when producing the intensified forms of a sign. However, the same did not seem to hold true for the doubling of the number of hands in one-handed signs for the same purpose. Out of 12 deaf subjects, users of Libras, only 6 produced a few one-handed signs with two hands when intensifying their meaning, and mostly not for the same signs.


2018 ◽  
Vol 22 (2) ◽  
pp. 185-231 ◽  
Author(s):  
Trevor Johnston

Abstract Signed languages have been classified typologically as being manual dominant or non-manual dominant for negation. In the former, negation is conveyed primarily by manual lexical signs, whereas in the latter it is conveyed primarily by non-manual signs. In support of this typology, the site and spread of headshaking in negated clauses was also described as linguistically constrained. Headshaking was thus said to be a formal part of negation in signed languages, and therefore linguistic rather than gestural. This paper aims to establish the role of headshaking in Auslan negation with reference to this typology. In this corpus-based study, I show that Auslan users almost always negate clauses using a manual negative sign. Although headshakes are found in just over half of these manually negated clauses, the position and spreading behaviour of headshakes do not appear to be linguistically constrained. I also show that signers use headshakes as the sole negating element in a clause extremely rarely. I conclude that headshaking in Auslan appears similar to headshaking in the ambient face-to-face spoken language, English. I explore the implications of these findings for the proposed typology of negation in signed languages, in terms of both the type of data used to support it and the assumptions about the relationship between gesture and signed languages that underlie it.


2021 ◽  
Author(s):  
Lorna C Quandt ◽  
Athena Willis ◽  
Carly Leannah

Signed language users communicate in a wide array of sub-optimal environments, such as in dim lighting or from a distance. While fingerspelling is a common and essential part of signed languages, the perception of fingerspelling in varying visual environments is not well understood. Signed languages such as American Sign Language (ASL) rely on visuospatial information that combines hand and bodily movements, facial expressions, and fingerspelling. Linguistic information in ASL is conveyed with movement and spatial patterning, which lends itself well to using dynamic Point Light Display (PLD) stimuli to represent sign language movements. We created PLD videos of fingerspelled location names. The location names were either Real (e.g., KUWAIT) or Pseudo-names (e.g., CLARTAND), and the PLDs showed either a High or a Low number of markers. In an online study, Deaf and Hearing ASL users (total N = 283) watched 27 PLD stimulus videos that varied by Realness and Number of Markers. We calculated accuracy and confidence scores in response to each video. We predicted that when signers see ASL fingerspelled letter strings in a suboptimal visual environment, language experience in ASL will be positively correlated with accuracy and self-rated confidence scores. We also predicted that Real location names would be understood better than Pseudo names. Our findings show that participants were more accurate and confident in response to Real place names than Pseudo names and for stimuli with High rather than Low markers. We also discovered a significant interaction between Age and Realness, which shows that as people age, they can better use outside world knowledge to inform their fingerspelling success. Finally, we examined the accuracy and confidence in fingerspelling perception in sub-groups of people who had learned ASL before the age of four. 
Studying the relationship between language experience and PLD fingerspelling perception allows us to explore how hearing status, ASL fluency, and age of language acquisition affect the core ability to understand fingerspelling.


2001 ◽  
Vol 4 (1-2) ◽  
pp. 29-45 ◽  
Author(s):  
Elena Antinoro Pizzuto ◽  
Paola Pietrandrea

This paper focuses on some of the major methodological and theoretical problems raised by the fact that there are currently no appropriate notation tools for analyzing and describing signed language texts. We propose to approach these problems taking into account the fact that all signed languages are at present languages without a written tradition. We describe and discuss examples of the gloss-based notation that is currently most widely used in the analysis of signed texts. We briefly consider the somewhat paradoxical problem posed by the difficulty of applying the notation developed for individual signs to signs connected in texts, and the more general problem of clearly identifying and characterizing the constituent units of signed texts. We then compare the use of glosses in signed and spoken language research, and we examine the major pitfalls we see in the use of glosses as a primary means to explore and describe the structure of signed languages. On this basis, we try to specify as explicitly as possible what can or cannot be learned about the structure of signed languages using a gloss-based notation, and to provide some indications for future work that may aim to overcome the limitations of this notation.


Gesture ◽  
2004 ◽  
Vol 4 (1) ◽  
pp. 43-73 ◽  
Author(s):  
Sherman Wilcox

In this paper I explore the role of gesture in the development of signed languages. Using data from American Sign Language, Catalan Sign Language, French Sign Language, and Italian Sign Language, as well as historical sources describing gesture in the Mediterranean region, I demonstrate that gesture enters the linguistic system via two distinct routes. In one, gesture serves as a source of lexical and grammatical morphemes in signed languages. In the second, gestural elements become directly incorporated into signed language morphology, bypassing the lexical stage. Finally, I propose a unifying framework for understanding the gesture-language interface in signed and spoken languages.


2020 ◽  
Vol 6 (1) ◽  
pp. 13-52
Author(s):  
Lauren W. Reed

Abstract Most bilingualism and translanguaging studies focus on spoken language; less is known about how people use two or more ways of signing. Here, I take steps towards redressing this imbalance, presenting a case study of signed language in Port Moresby, Papua New Guinea. The study's methodology is participant observation and analysis of conversational recordings between deaf signers. The Port Moresby deaf community uses two ways of signing: SIGN LANGUAGE and CULTURE. SIGN LANGUAGE is around 30 years old, and its lexicon is drawn largely from Australasian Signed English. In contrast, CULTURE – which is as old as each individual user – is characterised by signs of local origin, abundant depiction, and considerable individual variation. Despite SIGN LANGUAGE's young age, its users have innovated a metalinguistic sign (SWITCH) to describe switching between ways of communicating. To conclude, I discuss how the Port Moresby situation challenges both the bilingualism and translanguaging approaches.


Author(s):  
Joseph Hill

This chapter describes how ideologies about signed languages have come about, and what policies and attitudes have resulted. Language ideologies have governed the formal recognition of signed language at local, national, and international levels, such as that of the United Nations. The chapter discusses three major areas in the study of attitudes toward signed languages: attitudes versus structural reality; the social factors and educational policies that have contributed to language attitudes; and the impact of language attitudes on identity and educational policy. Even in the United States, American Sign Language does not receive recognition as a language in every region, and the attempt to suppress sign language is still operative. This is a worldwide issue for many countries with histories of opposition to signed languages that parallel the history of the United States.


2016 ◽  
Vol 27 (1) ◽  
pp. 35-65 ◽  
Author(s):  
Anna-Lena Nilsson

Abstract The present study describes how Swedish Sign Language (SSL) interpreters systematically use signing space and movements of their hands, arms, and body to simultaneously layer iconic expressions of metaphors for differences and for time, in ways not previously described. This is analyzed as the interpreters embodying metaphors, and each of the conceptual metaphors they embody seems to be expressed in a distinct manner not noted before in accounts of the structure of signed languages. Data consist of recordings of Swedish-SSL interpreting by native SSL signers. Rendering spoken Swedish into SSL, these interpreters produce complex sequences making abundant use of the fact that signed language allows several types of information to be expressed simultaneously. With little processing time, they produce iconic expressions, frequently using several underlying conceptual metaphors to simultaneously layer information. The interpreters place individual signs in relation to time lines in order to express metaphorical content related to time, and use movements of their bodies to express comparisons and contrasts. In all of the analyzed sequences, the interpreters express the metaphor DIFFERENCE-BETWEEN-IS-DISTANCE-BETWEEN. In addition, they layer metaphors for difference and time simultaneously, in some instances also expressing the orientational metaphor pair MORE-IS-UP and LESS-IS-DOWN at the same time.


1999 ◽  
Vol 26 (2) ◽  
pp. 321-338 ◽  
Author(s):  
E. DAYLENE RICHMOND-WELTY ◽  
PATRICIA SIPLE

Signed languages make unique demands on gaze during communication. Bilingual children acquiring both a spoken and a signed language must learn to differentiate gaze use for their two languages. Gaze during utterances was examined for a set of bilingual-bimodal twins acquiring spoken English and American Sign Language (ASL) and a set of monolingual twins acquiring ASL when the twins were aged 2;0, 3;0 and 4;0. The bilingual-bimodal twins differentiated their languages by age 3;0. Like the monolingual ASL twins, the bilingual-bimodal twins established mutual gaze at the beginning of their ASL utterances and either maintained gaze to the end or alternated gaze to include a terminal look. In contrast, like children acquiring spoken English monolingually, the bilingual-bimodal twins established mutual gaze infrequently for their spoken English utterances. When they did establish mutual gaze, it occurred later in their spoken utterances and they tended to look away before the end.
