Age of sign language acquisition has lifelong effect on syntactic preferences in sign language users

2020 ◽  
pp. 016502542095819
Author(s):  
Julia Krebs ◽  
Dietmar Roehm ◽  
Ronnie B. Wilbur ◽  
Evie A. Malaia

Acquisition of natural language has been shown to fundamentally impact both one’s ability to use the first language and the ability to learn subsequent languages later in life. Sign languages offer a unique perspective on this issue because Deaf signers receive access to signed input at varying ages. Most acquire sign language in (early) childhood, but some learn it later, a situation that is drastically different from that of spoken language acquisition. To investigate the effect of age of sign language acquisition and its potential interplay with chronological age in signers, we examined grammatical acceptability ratings and reaction time measures in a group of Deaf signers (age range = 28–58 years) with early (0–3 years) or later (4–7 years) acquisition of sign language in childhood. Behavioral responses to grammatical word order variations (subject–object–verb [SOV] vs. object–subject–verb [OSV]) were examined in (1) simple sentences, (2) topicalized sentences, and (3) sentences involving manual classifier constructions, which are uniquely characteristic of sign languages. Overall, older participants responded more slowly. Age of acquisition had subtle effects on acceptability ratings, with the direction of the effect depending on the specific linguistic structure.

2021 ◽  
Vol 8 (3) ◽  
pp. 110-132
Author(s):  
Khunaw Sulaiman Pirot ◽  
Wrya Izaddin Ali

This paper, entitled ‘The Common Misconceptions about Sign Language’, addresses the most widespread misconceptions about sign language and its relation to spoken language. Sign language, primarily used by deaf people, is a fully developed human language that does not use sound for communication; it is a visual-gestural system that uses the hands, body, and facial gestures. One common misconception is that all sign languages around the world are the same. Such assumptions cause problems. Accordingly, three questions are raised: first, is sign language universal? Second, is sign language based on spoken language? Third, was sign language invented by hearing people? The aim of the paper is to reach a deeper understanding of sign language and to demonstrate the similarities and differences between the two modalities: sign language and spoken language. The paper is based on several hypotheses, among them that sign languages are pantomimes and gestures, and that the process of language acquisition in sign language by deaf people differs from the acquisition of spoken language by hearing people. To answer the questions raised, a qualitative approach is adopted: data on the subject are collected from books and articles and then analyzed to achieve the aim of the study. One of the conclusions is that sign language is not universal. It is recommended that further work be carried out comparing either American Sign Language (ASL) or British Sign Language (BSL) with zmânî âmâžaî kurdî (ZAK, Kurdish Sign Language) at all linguistic levels.


2013 ◽  
Vol 16 (1) ◽  
pp. 75-90 ◽  
Author(s):  
Patrícia do Carmo ◽  
Ana Mineiro ◽  
Joana Castelo Branco ◽  
Ronice Müller de Quadros ◽  
Alexandre Castro-Caldas

Sign languages were only acknowledged as true languages in the second half of the 20th century. Studies of their ontogenesis are recent and mostly take comparative approaches to spoken language and sign language acquisition. Studies of sign language acquisition show that, of the manual phonological parameters, handshape is the one acquired last. This study reports the findings of a first pilot study on the acquisition of Portuguese Sign Language (Língua Gestual Portuguesa, LGP), following a Deaf child from 10 to 24 months of age, and it confirms the pattern previously described for other sign languages. We discuss possible reasons why handshape is harder to acquire, relating to neuromotor development and perceptual issues, and we suggest that auditory deprivation might delay the acquisition of fine motor skills.


2010 ◽  
Vol 13 (2) ◽  
pp. 183-199 ◽  
Author(s):  
Evie Malaia ◽  
Ronnie B. Wilbur

Early acquisition of a natural language, signed or spoken, has been shown to fundamentally impact both one’s ability to use the first language and the ability to learn subsequent languages later in life (Mayberry 2007, 2009). This review summarizes a number of recent neuroimaging studies in order to detail the neural bases of sign language acquisition. The logic of the review is to present research reports that contribute to the bigger picture: people who acquire a natural language, spoken or signed, in the normal way possess specialized linguistic abilities and brain functions that are missing or deficient in people whose exposure to natural language is delayed or absent. Comparing the function of each brain region with regard to the processing of spoken and sign languages, we attempt to clarify the role each region plays in language processing in general and to outline the challenges and remaining questions in understanding language processing in the brain.


1999 ◽  
Vol 2 (2) ◽  
pp. 187-215 ◽  
Author(s):  
Wendy Sandler

In natural communication, the medium through which language is transmitted plays an important and systematic role. Sentences are broken up rhythmically into chunks; certain elements receive special stress; and, in spoken language, intonational tunes are superimposed onto these chunks in particular ways — all resulting in an intricate system of prosody. Investigations of prosody in Israeli Sign Language demonstrate that sign languages have prosodic systems comparable to those of spoken languages, although the phonetic medium is completely different. Evidence for the prosodic word and for the phonological phrase in ISL is examined here within the context of the relationship between the medium and the message. New evidence is offered to support the claim that facial expression in sign languages corresponds to intonation in spoken languages, and the term “superarticulation” is coined to describe this system in sign languages. Interesting formal differences between the intonational tunes of spoken language and the “superarticulatory arrays” of sign language are shown to offer a new perspective on the relation between the phonetic basis of language, its phonological organization, and its communicative content.


2018 ◽  
Vol 44 (3-4) ◽  
pp. 123-208 ◽  
Author(s):  
Philippe Schlenker

While it is now accepted that sign languages should inform and constrain theories of ‘Universal Grammar’, their role in ‘Universal Semantics’ has been under-studied. We argue that they have a crucial role to play in the foundations of semantics, for two reasons. First, in some cases sign languages provide overt evidence on crucial aspects of the Logical Form of sentences, ones that are only inferred indirectly in spoken language. For instance, sign language ‘loci’ are positions in signing space that can arguably realize logical variables, and the fact that they are overt makes it possible to revisit foundational debates about the syntactic reality of variables, about mechanisms of temporal and modal anaphora, and about the existence of dynamic binding. Another example pertains to mechanisms of ‘context shift’, which were postulated on the basis of indirect evidence in spoken language, but which are arguably overt in sign language. Second, along one dimension sign languages are strictly more expressive than spoken languages because iconic phenomena can be found at their logical core. This applies to loci themselves, which may simultaneously function as logical variables and as schematic pictures of what they denote (context shift comes with some iconic requirements as well). As a result, the semantic system of spoken languages can in some respects be seen as a simplified version of the richer semantics found in sign languages. Two conclusions could be drawn from this observation. One is that the full extent of Universal Semantics can only be studied in sign languages. An alternative possibility is that spoken languages have comparable expressive mechanisms, but only when co-speech gestures are taken into account (as recently argued by Goldin-Meadow and Brentari). Either way, sign languages have a crucial role to play in investigations of the foundations of semantics.


2002 ◽  
Vol 29 (2) ◽  
pp. 449-488 ◽  
Author(s):  
JIM MILLER

Ravid & Tolchinsky are to be applauded for proposing literacy as a central topic in first language acquisition. A synthesis of research in spoken language, in literacy and literacy practices, and in the lines of enquiry represented in MacWhinney (1999) has interesting consequences for theories of first language acquisition, not least the nativist ones. This response focuses on constructions, but a brief list of controversial points in R&T's paper will be useful.


2019 ◽  
Vol 39 (4) ◽  
pp. 367-395 ◽  
Author(s):  
Matthew L. Hall ◽  
Wyatte C. Hall ◽  
Naomi K. Caselli

Deaf and Hard of Hearing (DHH) children need to master at least one language (spoken or signed) to reach their full potential. Providing access to a natural sign language supports this goal. Despite evidence that natural sign languages are beneficial to DHH children, many researchers and practitioners advise families to focus exclusively on spoken language. We critique the Pediatrics article ‘Early Sign Language Exposure and Cochlear Implants’ (Geers et al., 2017) as an example of research that makes unsupported claims against the inclusion of natural sign languages. We refute the claims (1) that sign language has harmful effects and (2) that listening and spoken language are necessary for the optimal development of deaf children. While practical challenges to providing a sign language-rich environment remain (and are discussed), research evidence suggests that such challenges are worth tackling, given that natural sign languages provide a host of benefits for DHH children, especially in the prevention and reduction of language deprivation.


2015 ◽  
Vol 1 (2) ◽  
pp. 190-219 ◽  
Author(s):  
Felix Sze ◽  
Silva Isma ◽  
Adhika Irlang Suwiryo ◽  
Laura Lesmana Wijaya ◽  
Adhi Kusumo Bharato ◽  
...  

The distinction between languages and dialects has remained a controversial issue in the literature. When such a distinction is made, it often has far-reaching implications for top-down language promotion and preservation policies, which tend to favor only those varieties that are labelled as ‘languages’. From a socio-political point of view, this issue is of critical importance for the survival of most sign language varieties in the world. Against this background, this paper discusses how the notions of ‘dialect’ and ‘language’ have been applied in classifying sign languages in the past few decades. In particular, the paper reports on two recent studies which provide linguistic evidence that the signing varieties used by Deaf signers in Jakarta and Yogyakarta in Indonesia should be regarded as distinct sign languages rather than mutually intelligible dialects of Indonesian Sign Language. The evidence comes from significant differences in the lexicon, in the preferred word order for encoding transitive events, and in the use of mouth actions. Our results suggest that signing varieties within a country can differ significantly from each other, calling for more concerted efforts to document and recognize these differences if the linguistic needs of the signing communities are to be met.


2007 ◽  
Vol 10 (1) ◽  
pp. 23-54 ◽  
Author(s):  
Myriam Vermeerbergen ◽  
Mieke Van Herreweghe ◽  
Philemon Akach ◽  
Emily Matabane

This paper reports on a comparison of word order issues, and more specifically the order of the verb and its arguments, in two unrelated sign languages: South African Sign Language (SASL) and Flemish Sign Language (VGT). The study comprises the first part of a larger project in which a number of grammatical mechanisms and structures are compared across the two sign languages, using a corpus consisting of comparable VGT and SASL data of various kinds. The overall goal of the project is to contribute to a further understanding of the degree of similarity across unrelated sign languages. At the same time, the individual studies also constitute a further exploration of the grammars of the two languages involved. In this paper the focus is on the analysis of isolated declarative sentences elicited by means of pictures. The results reveal some interesting similarities across all signers but also indicate that there are important differences between the two languages, especially with regard to constituent order.

