The categorical role of structurally iconic signs

2017 ◽  
Vol 40 ◽  
Author(s):  
Brent Strickland ◽  
Valentina Aristodemo ◽  
Jeremy Kuhn ◽  
Carlo Geraci

Abstract Goldin-Meadow & Brentari (G-M&B) argue that, for sign language users, gesture – in contrast to linguistic sign – is iconic, highly variable, and similar to spoken language co-speech gesture. We discuss two examples (telicity and absolute gradable adjectives) that challenge the use of these criteria for distinguishing sign from gesture.

2014 ◽  
Vol 22 (2) ◽  
pp. 244-263 ◽  
Author(s):  
Nicolas Fay ◽  
Mark Ellison ◽  
Simon Garrod

This paper explores the role of iconicity in spoken language and other human communication systems. First, we concentrate on graphical and gestural communication and show how semantically motivated iconic signs play an important role in creating such communication systems from scratch. We then consider how iconic signs tend to become simplified and symbolic as the communication system matures and argue that this process is driven by repeated interactive use of the signs. Next, we consider evidence for iconicity at the level of the system in graphical communication, and finally we draw comparisons between iconicity in graphical and gestural communication systems and in spoken language.


Gesture ◽  
2017 ◽  
Vol 16 (3) ◽  
pp. 371-395 ◽  
Author(s):  
Lindsay Ferrara ◽  
Rolf Piene Halvorsen

Abstract There is growing momentum towards a theory of languaging that acknowledges the diverse semiotic repertoires people use with each other. This paper contributes to this goal by providing further evidence from signed language discourse. In particular, we examine iconic signs from Norwegian Sign Language, which can be interpreted as both “regular” lexical signs and token depictions. This dual potential is manipulated by signers in context. We analyze these signs as descriptions and depictions, two different modes of representation. Then we compare these signs to some of the description and depiction that occurs in spoken language discourse. In this way we aim to present some of the advantages of using description and depiction in analyses of communication and interaction. By doing this, we also forge links between the languaging of speakers and the languaging of signers.


1998 ◽  
Vol 21 (4) ◽  
pp. 531-532 ◽  
Author(s):  
Bencie Woll ◽  
Jechil S. Sieratzki

This commentary supports MacNeilage's dismissal of an evolutionary development from sign language to spoken language but presents evidence of a feature in sign language (echo phonology) that links iconic signs to abstract vocal syllables. These data provide insight into a possible mechanism by which iconic manual gestures accompanied by vocalisation could have provided a route for the evolution of spoken language, with its characteristically arbitrary form–meaning relationship.


1984 ◽  
Vol 49 (3) ◽  
pp. 287-292 ◽  
Author(s):  
Michael D. Orlansky ◽  
John D. Bonvillian

A longitudinal study of sign language acquisition was conducted with 13 very young children (median age 10 months at outset of study) of deaf parents. The children's sign language lexicons were examined for their percentages of iconic signs at two early stages of vocabulary development. Iconic signs are those that clearly resemble the action, object, or characteristic they represent. Analysis of the subjects' vocabularies revealed that iconic signs comprised 30.8% of the first 10 signs they acquired. At age 18 months, the proportion of iconic signs was found to be 33.7%. The finding that a majority of signs in the subjects' early vocabularies were not iconic suggests that the role of iconicity in young children's acquisition of signs may have been overrated by some investigators, and that other formational features may be of greater importance in influencing young children's ability to acquire signs.


2019 ◽  
Vol 24 (4) ◽  
pp. 435-447
Author(s):  
Corina Goodwin ◽  
Diane Lillo-Martin

Abstract Sign language use in the (re)habilitation of children with cochlear implants (CIs) remains a controversial issue. Concerns that signing impedes spoken language development are based on research comparing children exposed to spoken and signed language (bilinguals) with children exposed only to speech (monolinguals), although abundant research demonstrates that bilinguals and monolinguals differ in language development. We control for bilingualism effects by comparing bimodal bilingual (signing-speaking) children with CIs (BB-CI) to those with typical hearing (BB-TH). Each child had at least one Deaf parent and was exposed to ASL from birth. The BB-THs were exposed to English from birth by hearing family members, while the BB-CIs began English exposure after cochlear implantation at around 22 months of age. Elicited speech samples were analyzed for accuracy of English grammatical morpheme production. Although there was a trend toward lower overall accuracy in the BB-CIs, this seemed driven by increased omission of the plural -s, suggesting an exaggerated role of perceptual salience in this group. Errors of commission were rare in both groups. Because both groups were bimodal bilinguals, trends toward group differences were likely caused by delayed exposure to spoken language or hearing through a CI, rather than by sign language exposure.


2018 ◽  
Vol 22 (2) ◽  
pp. 185-231 ◽  
Author(s):  
Trevor Johnston

Abstract Signed languages have been classified typologically as manual dominant or non-manual dominant for negation. In the former, negation is conveyed primarily by manual lexical signs, whereas in the latter it is conveyed primarily by non-manual signs. In support of this typology, the site and spread of headshaking in negated clauses was also described as linguistically constrained. Headshaking was thus said to be a formal part of negation in signed languages, and therefore linguistic rather than gestural. This paper aims to establish the role of headshaking in negation in Auslan with reference to this typology. In this corpus-based study, I show that Auslan users almost always negate clauses using a manual negative sign. Although headshakes are found in just over half of these manually negated clauses, the position and spreading behaviour of headshakes do not appear to be linguistically constrained. I also show that signers only extremely rarely use headshakes as the sole negating element in a clause. I conclude that headshaking in Auslan appears similar to headshaking in the ambient face-to-face spoken language, English. I explore the implications of these findings for the proposed typology of negation in signed languages, in terms of the type of data that were used to support it and the assumptions about the relationship between gesture and signed languages that underlie it.


2017 ◽  
Vol 2 (12) ◽  
pp. 81-88
Author(s):  
Sandy K. Bowen ◽  
Silvia M. Correa-Torres

America's population is more diverse than ever before. The prevalence of students who are culturally and/or linguistically diverse (CLD) has been steadily increasing over the past decade. The changes in America's demographics require teachers who provide services to students with deafblindness to have an increased awareness of different cultures and diversity in today's classrooms, particularly regarding communication choices. Children who are deafblind may use spoken language with appropriate amplification, sign language or modified sign language, and/or some form of augmentative and alternative communication (AAC).


2004 ◽  
Author(s):  
Conor T. McLennan ◽  
Paul A. Luce ◽  
Robert La Vigne

Cortex ◽  
2021 ◽  
Vol 135 ◽  
pp. 240-254
Author(s):  
A. Banaszkiewicz ◽  
Ł. Bola ◽  
J. Matuszewski ◽  
M. Szczepanik ◽  
B. Kossowski ◽  
...  

1999 ◽  
Vol 2 (2) ◽  
pp. 187-215 ◽  
Author(s):  
Wendy Sandler

In natural communication, the medium through which language is transmitted plays an important and systematic role. Sentences are broken up rhythmically into chunks; certain elements receive special stress; and, in spoken language, intonational tunes are superimposed onto these chunks in particular ways, all resulting in an intricate system of prosody. Investigations of prosody in Israeli Sign Language demonstrate that sign languages have prosodic systems comparable to those of spoken languages, although the phonetic medium is completely different. Evidence for the prosodic word and for the phonological phrase in ISL is examined here within the context of the relationship between the medium and the message. New evidence is offered to support the claim that facial expression in sign languages corresponds to intonation in spoken languages, and the term "superarticulation" is coined to describe this system in sign languages. Interesting formal differences between the intonational tunes of spoken language and the "superarticulatory arrays" of sign language are shown to offer a new perspective on the relation between the phonetic basis of language, its phonological organization, and its communicative content.

