Intensifier actions in Israeli Sign Language (ISL) discourse

Gesture ◽  
2016 ◽  
Vol 15 (2) ◽  
pp. 192-223 ◽  
Author(s):  
Orit Fuks

The study describes certain structural modifications applied to the citation forms of ISL during signing for intensification purposes. In Signed Languages, citation forms are considered relatively immune to modification. Nine signers signed several scenarios describing some intense quality. The signers used conventional adverbs existing in ISL for intensification purposes. Yet they also applied idiosyncratic modifications to the formational components of adjectives simultaneously with form realization. These optional modifications enriched the messages conveyed by the conventional forms alone. They show that signers can incorporate gradient modes of expression directly into the production of lexical items to communicate more diverse and explicit messages in context. Using a comparative semiotic approach allowed us to describe the synergetic cooperation, manifested at the stage of utterance construction, between formational elements that were better suited to convey gradient and analog meanings in context and those that were less suited and thus not modified.

2000 ◽  
Vol 3 (1) ◽  
pp. 3-58 ◽  
Author(s):  
Theodore B. Fernald ◽  
Donna Jo Napoli

American Sign Language shares with spoken languages derivational and inflectional morphological processes, including compounding, reduplication, incorporation, and, arguably, templates. Like spoken languages, ASL also has an extensive nonderivational, noninflectional morphology involving phonological alternation although this is typically more limited. Additionally, ASL frequently associates meaning with individual phonological parameters. This association is atypical of spoken languages. We account for these phenomena by positing “ion-morphs,” which are phonologically incomplete lexical items that bond with other compatible ion-morphs. These ion-morphs draw lexical items into “families” of related signs. In contrast, ASL makes little, if any, use of concatenative affixation, a morphological mechanism common among spoken languages. We propose that this difference is the result of the comparative slowness of movement of the manual articulators as compared to the speech articulators, as well as the perceptual robustness of the manual articulators to the visual system. The slowness of the manual articulators disfavors concatenative affixation. The perceptual robustness of the manual articulators allows ASL to exploit morphological potential that spoken language can use only at considerable cost.


2021 ◽  
Vol 3 (1) ◽  
pp. 169-181
Author(s):  
André Nogueira Xavier

Signs, the lexical items of signed languages, can be articulatorily characterized as one- or two-handed (Klima and Bellugi, 1979). It has been observed in the signed language literature that some one-handed signs can undergo doubling of the manual articulator to express meaning intensification (Johnston and Schembri, 1999). This work reports the results of an experiment designed and carried out (1) to elicit intensified forms of some signs of Brazilian Sign Language (Libras) and (2) to check the extent to which the doubling of the number of hands in signs typically produced with only one hand is employed as a resource for expressing the intensification of their meaning. The analysis of the data obtained revealed that subjects were consistent in changing their facial and body expressions, as well as aspects of their hands’ movement, when producing the intensified forms of a sign. However, the same did not seem to hold true for the doubling of the number of hands in one-handed signs for the same purpose. Out of 12 deaf subjects, users of Libras, only 6 produced a few one-handed signs with two hands when intensifying their meaning, and mostly not for the same signs.


2016 ◽  
Vol 28 (1) ◽  
pp. 20-40 ◽  
Author(s):  
Velia Cardin ◽  
Eleni Orfanidou ◽  
Lena Kästner ◽  
Jerker Rönnberg ◽  
Bencie Woll ◽  
...  

The study of signed languages allows the dissociation of sensorimotor and cognitive neural components of the language signal. Here we investigated the neurocognitive processes underlying the monitoring of two phonological parameters of sign languages: handshape and location. Our goal was to determine if brain regions processing sensorimotor characteristics of different phonological parameters of sign languages were also involved in phonological processing, with their activity being modulated by the linguistic content of manual actions. We conducted an fMRI experiment using manual actions varying in phonological structure and semantics: (1) signs of a familiar sign language (British Sign Language), (2) signs of an unfamiliar sign language (Swedish Sign Language), and (3) invented nonsigns that violate the phonological rules of British Sign Language and Swedish Sign Language or consist of nonoccurring combinations of phonological parameters. Three groups of participants were tested: deaf native signers, deaf nonsigners, and hearing nonsigners. Results show that the linguistic processing of different phonological parameters of sign language is independent of the sensorimotor characteristics of the language signal. Handshape and location were processed by different perceptual and task-related brain networks but recruited the same language areas. The semantic content of the stimuli did not influence this process, but phonological structure did, with nonsigns being associated with longer RTs and stronger activations in an action observation network in all participants and in the supramarginal gyrus exclusively in deaf signers. These results suggest higher processing demands for stimuli that contravene the phonological rules of a signed language, independently of previous knowledge of signed languages. We suggest that the phonological characteristics of a language may arise as a consequence of more efficient neural processing for its perception and production.


2021 ◽  
Author(s):  
Lorna C Quandt ◽  
Athena Willis ◽  
Carly Leannah

Signed language users communicate in a wide array of sub-optimal environments, such as in dim lighting or from a distance. While fingerspelling is a common and essential part of signed languages, the perception of fingerspelling in varying visual environments is not well understood. Signed languages such as American Sign Language (ASL) rely on visuospatial information that combines hand and bodily movements, facial expressions, and fingerspelling. Linguistic information in ASL is conveyed with movement and spatial patterning, which lends itself well to using dynamic Point Light Display (PLD) stimuli to represent sign language movements. We created PLD videos of fingerspelled location names. The location names were either Real (e.g., KUWAIT) or Pseudo-names (e.g., CLARTAND), and the PLDs showed either a High or a Low number of markers. In an online study, Deaf and Hearing ASL users (total N = 283) watched 27 PLD stimulus videos that varied by Realness and Number of Markers. We calculated accuracy and confidence scores in response to each video. We predicted that when signers see ASL fingerspelled letter strings in a suboptimal visual environment, language experience in ASL will be positively correlated with accuracy and self-rated confidence scores. We also predicted that Real location names would be understood better than Pseudo names. Our findings show that participants were more accurate and confident in response to Real place names than Pseudo names and for stimuli with High rather than Low markers. We also discovered a significant interaction between Age and Realness, which shows that as people age, they can better use outside world knowledge to inform their fingerspelling success. Finally, we examined the accuracy and confidence in fingerspelling perception in sub-groups of people who had learned ASL before the age of four. 
Studying the relationship between language experience and PLD fingerspelling perception allows us to explore how hearing status, ASL fluency levels, and age of language acquisition affect the core abilities of understanding fingerspelling.


Gesture ◽  
2013 ◽  
Vol 13 (1) ◽  
pp. 1-27 ◽  
Author(s):  
Rachel Sutton-Spence ◽  
Donna Jo Napoli

Sign Language poetry is especially valued for its presentation of strong visual images. Here, we explore the highly visual signs that British Sign Language and American Sign Language poets create as part of the ‘classifier system’ of their languages. Signed languages, as they create visually-motivated messages, utilise categoricity (more traditionally considered ‘language’) and analogy (more traditionally considered extra-linguistic and the domain of ‘gesture’). Classifiers in sign languages arguably show both these characteristics (Oviedo, 2004). In our discussion of sign language poetry, we see that poets take elements that are widely understood to be highly visual, closely representing their referents, and make them even more highly visual — so going beyond categorisation and into new areas of analogue.


Gesture ◽  
2001 ◽  
Vol 1 (1) ◽  
pp. 51-72 ◽  
Author(s):  
Evelyn McClave

This paper presents evidence of non-manual gestures in American Sign Language (ASL). The types of gestures identified are identical to non-manual, spontaneous gestures used by hearing non-signers, which suggests that the gestures co-occurring with ASL signs are borrowings from hearing culture. A comparison of direct quotes in ASL with spontaneous movements of hearing non-signers suggests a history of borrowing and eventual grammaticization in ASL of features previously thought to be unique to signed languages. The electronic edition of this article includes audio-visual data.


Multilingua ◽  
2016 ◽  
Vol 35 (3) ◽  
Author(s):  
Elizabeth S. Parks

In this paper, I use a holographic metaphor to explain the identification of overlapping sign language communities in Panama. By visualizing Panama’s complex signing communities as emitting community “hotspots” through social drama on multiple stages, I employ ethnographic methods to explore overlapping contours of Panama’s sign language communities in both time and space, similar to what a hologram accomplishes. Based on rapid appraisal of Panama’s signed languages through 2 weeks of participant observation, interviews, and lexical comparisons, and contextualization of this data in a broad 5-year project that included fieldwork in 15 countries in Latin America and the Caribbean, I propose recognition of overlapping Chiriquí and Panamanian Signing Communities using distinct signed languages: Lengua de Señas Panameñas and Lengua de Señas de Chiriquí.


Gesture ◽  
2004 ◽  
Vol 4 (1) ◽  
pp. 43-73 ◽  
Author(s):  
Sherman Wilcox

In this paper I explore the role of gesture in the development of signed languages. Using data from American Sign Language, Catalan Sign Language, French Sign Language, and Italian Sign Language, as well as historical sources describing gesture in the Mediterranean region, I demonstrate that gesture enters the linguistic system via two distinct routes. In one, gesture serves as a source of lexical and grammatical morphemes in signed languages. In the second, elements become directly incorporated into signed language morphology, bypassing the lexical stage. Finally, I propose a unifying framework for understanding the gesture-language interface in signed and spoken languages.


Author(s):  
Anne Therese Frederiksen ◽  
Rachel I. Mayberry

Implicit causality (IC) biases, the tendency of certain verbs to elicit re-mention of either the first-mentioned noun phrase (NP1) or the second-mentioned noun phrase (NP2) from the previous clause, are important in psycholinguistic research. Understanding IC verbs and the source of their biases in signed as well as spoken languages helps elucidate whether these phenomena are language general or specific to the spoken modality. As the first of its kind, this study investigates IC biases in American Sign Language (ASL) and provides IC bias norms for over 200 verbs, facilitating future psycholinguistic studies of ASL and comparisons of spoken versus signed languages. We investigated whether native ASL signers continued sentences with IC verbs (e.g., ASL equivalents of ‘Lisa annoys Maya because…’) by mentioning NP1 (i.e., Lisa) or NP2 (i.e., Maya). We found a tendency towards more NP2-biased verbs. Previous work has found that a verb’s thematic roles predict bias direction: stimulus-experiencer verbs (e.g., ‘annoy’), where the first argument is the stimulus (causing annoyance) and the second argument is the experiencer (experiencing annoyance), elicit more NP1 continuations. Verbs with experiencer-stimulus thematic roles (e.g., ‘love’) elicit more NP2 continuations. We probed whether the trend towards more NP2-biased verbs was related to an existing claim that stimulus-experiencer verbs do not exist in sign languages. We found that stimulus-experiencer structure, while permitted, is infrequent, impacting the IC bias distribution in ASL. Nevertheless, thematic roles predict IC bias in ASL, suggesting that the thematic role-IC bias relationship is stable across languages as well as modalities.


2001 ◽  
Vol 4 (1-2) ◽  
pp. 145-169 ◽  
Author(s):  
Trevor Johnston

The form and content of the lexical database of Auslan (Australian Sign Language) is described and explained. The type of database utilized and its precise structure (relational or flat, the type and number of fields, the design of the data entry interface, etc.) is first described. This is followed by a detailed description of the types of information registered in the database: phonological, definitional, bilingual (English-based glossing), grammatical, and semantic. The non-gloss based representations of each sign record (graphic, video, and transcription) that are used in the lexical database are then discussed. Finally, the compatibility of the Auslan lexical database with other lexical databases is examined. The paper concludes with a discussion of the possibility of building an extensive “universal” database of signs that could centralize lexical information from scores of signed languages and facilitate cross-linguistic investigations of lexis and phonology.
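The kinds of fields the abstract enumerates (phonological, definitional, bilingual glossing, grammatical, semantic, plus non-gloss representations) can be illustrated with a minimal relational sketch. The table and column names below are purely illustrative assumptions, not the actual Auslan database layout:

```python
import sqlite3

# Hypothetical sign-record schema along the lines the abstract describes:
# phonological, definitional, bilingual (English-based gloss), grammatical,
# and semantic fields, plus non-gloss representations (video, transcription).
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE sign (
        id            INTEGER PRIMARY KEY,
        gloss         TEXT NOT NULL,  -- bilingual: English-based ID gloss
        handshape     TEXT,           -- phonological: handshape
        location      TEXT,           -- phonological: place of articulation
        movement      TEXT,           -- phonological: movement
        definition    TEXT,           -- definitional field
        word_class    TEXT,           -- grammatical information
        semantic_tags TEXT,           -- semantic domain labels
        video_path    TEXT,           -- non-gloss: video representation
        transcription TEXT            -- non-gloss: notation-based record
    )
""")
conn.execute(
    "INSERT INTO sign (gloss, handshape, location, word_class)"
    " VALUES (?, ?, ?, ?)",
    ("HOUSE", "flat", "neutral space", "noun"),
)
row = conn.execute("SELECT gloss, word_class FROM sign").fetchone()
print(row)  # ('HOUSE', 'noun')
```

A flat, single-table layout like this is the simplest reading of the abstract; a fully relational design would split phonological parameters and senses into linked tables, which is one of the design questions the paper discusses.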

