Two Types of Nonconcatenative Morphology in Signed Languages

Author(s):  
Gaurav Mathur ◽  
Christian Rathmann


2016 ◽
Vol 28 (1) ◽  
pp. 20-40 ◽  
Author(s):  
Velia Cardin ◽  
Eleni Orfanidou ◽  
Lena Kästner ◽  
Jerker Rönnberg ◽  
Bencie Woll ◽  
...  

The study of signed languages allows the dissociation of sensorimotor and cognitive neural components of the language signal. Here we investigated the neurocognitive processes underlying the monitoring of two phonological parameters of sign languages: handshape and location. Our goal was to determine whether brain regions processing the sensorimotor characteristics of different phonological parameters of sign languages are also involved in phonological processing, with their activity being modulated by the linguistic content of manual actions. We conducted an fMRI experiment using manual actions varying in phonological structure and semantics: (1) signs of a familiar sign language (British Sign Language), (2) signs of an unfamiliar sign language (Swedish Sign Language), and (3) invented nonsigns that violate the phonological rules of British Sign Language and Swedish Sign Language or consist of nonoccurring combinations of phonological parameters. Three groups of participants were tested: deaf native signers, deaf nonsigners, and hearing nonsigners. Results show that the linguistic processing of different phonological parameters of sign language is independent of the sensorimotor characteristics of the language signal. Handshape and location were processed by different perceptual and task-related brain networks but recruited the same language areas. The semantic content of the stimuli did not influence this process, but phonological structure did: nonsigns were associated with longer reaction times (RTs) and stronger activations in an action observation network in all participants, and in the supramarginal gyrus exclusively in deaf signers. These results suggest higher processing demands for stimuli that contravene the phonological rules of a signed language, independently of previous knowledge of signed languages. We suggest that the phonological characteristics of a language may arise as a consequence of more efficient neural processing for its perception and production.
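
A minimal sketch of the kind of first-level contrast this design implies (nonsigns stronger than the two sign conditions), using nilearn. The file names, TR, smoothing, and condition labels are assumptions for illustration, not the authors' actual pipeline.

```python
import numpy as np
import pandas as pd
from nilearn.glm.first_level import FirstLevelModel

# Hypothetical events table: one row per trial, onset/duration in seconds,
# trial_type in {"bsl_sign", "ssl_sign", "nonsign"}.
events = pd.read_csv("sub-01_events.tsv", sep="\t")

model = FirstLevelModel(t_r=2.0, hrf_model="glover", smoothing_fwhm=6)
model = model.fit("sub-01_bold.nii.gz", events=events)

# Contrast: nonsigns versus the average of the two sign conditions.
design = model.design_matrices_[0]
contrast = np.zeros(design.shape[1])
contrast[design.columns.get_loc("nonsign")] = 1.0
contrast[design.columns.get_loc("bsl_sign")] = -0.5
contrast[design.columns.get_loc("ssl_sign")] = -0.5

z_map = model.compute_contrast(contrast, output_type="z_score")
z_map.to_filename("nonsign_gt_sign_zmap.nii.gz")
```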


2021 ◽  
Vol 6 (1) ◽  
Author(s):  
Mark Aronoff ◽  
Jonathan Rawski ◽  
Wendy Sandler ◽  
Iris Berent

Spoken and signed languages differ because of the affordances of the human body and the limits of each medium. But can the commonalities between the two be identified and compared to reveal abstract language universals?


2011 ◽  
Vol 14 (2) ◽  
pp. 213-247 ◽  
Author(s):  
Rachel McKee ◽  
Sophia Wallingford

This study investigates the frequency and functions of a ubiquitous form in conversational NZSL discourse, glossed as palm-up. Dictionaries show that it is a polysemous vocabulary item in NZSL, although many of its uses in discourse are not accounted for in the lexicon. Analysis of discourse data from 20 signers shows it to be the second most frequently occurring item and to exhibit phonological variation. We identify and discuss four (non-exclusive) functions of palm-up in these data: cohesive, modal, interactive, and manual frame for unpredictable mouthings (code-mixing). Correspondences in form, linguistic context, and meaning are found between uses of palm-up in NZSL, similar forms in other signed languages, and co-speech palm gestures. The study affirms previous descriptions of this form as having properties of both gesture and sign, and suggests that it also has features of a discourse marker.
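
An illustrative sketch of the token-frequency step behind a claim like "second most frequently occurring item": counting glosses across transcribed conversations and ranking PALM-UP among them. The tab-separated export format and file name are assumptions, not the study's actual corpus tooling.

```python
from collections import Counter

# Assumed export: one annotation per line, "signer<TAB>gloss".
glosses = []
with open("nzsl_conversations.tsv", encoding="utf-8") as f:
    for line in f:
        signer, gloss = line.rstrip("\n").split("\t")[:2]
        glosses.append(gloss)

# Rank all glosses by raw frequency; PALM-UP's rank is read off the list.
freq = Counter(glosses)
for rank, (gloss, n) in enumerate(freq.most_common(10), start=1):
    print(f"{rank:2d}. {gloss:<15} {n}")
```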


2019 ◽  
Author(s):  
Chloé Stoll ◽  
Matthew William Geoffrey Dye

While a substantial body of work has suggested that deafness brings about an increased allocation of visual attention to the periphery, there has been much less work on how using a signed language may also influence this attentional allocation. Signed languages are visual-gestural: they are produced with the body and perceived via the human visual system. Signers fixate on the face of interlocutors and do not look directly at the hands moving in the inferior visual field. It is therefore reasonable to predict that signed languages require a redistribution of covert visual attention to the inferior visual field. Here we report a prospective and statistically powered assessment of the spatial distribution of attention to inferior and superior visual fields in signers, both deaf and hearing, in a visual search task. Using a Bayesian hierarchical drift diffusion model, we estimated decision-making parameters for the superior and inferior visual field in deaf signers, hearing signers, and hearing non-signers. Results indicated a greater attentional redistribution toward the inferior visual field in adult signers (both deaf and hearing) than in hearing sign-naïve adults. The effect was smaller for hearing signers than for deaf signers, suggesting a role either for extent of exposure or for greater plasticity of the visual system in the deaf. The data support a process by which the demands of linguistic processing can influence the human attentional system.
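
A minimal sketch of fitting a hierarchical drift diffusion model of the kind described above, using the hddm package. The file name, column names, and condition labels are assumptions for illustration; the parameterization (drift rate varying by field and group) mirrors the logic of the analysis, not its exact specification.

```python
import hddm

# Assumed long-format trial data with columns: subj_idx, rt (seconds),
# response (1 = correct, 0 = error), field ("inferior"/"superior"),
# group ("deaf_signer"/"hearing_signer"/"hearing_nonsigner").
data = hddm.load_csv("visual_search_trials.csv")

# Let drift rate (v) vary by visual field and group; an attentional
# redistribution should appear as field-by-group differences in v.
model = hddm.HDDM(data, depends_on={"v": ["field", "group"]})
model.find_starting_values()
model.sample(5000, burn=1000)
model.print_stats()
```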


2018 ◽  
Vol 21 (5) ◽  
pp. 915-916 ◽  
Author(s):  
ROBERT DEKEYSER

For several decades now, research on the acquisition of ASL and other signed languages has contributed to our understanding of language acquisition, and of age effects in particular. A strong decline in learning capacity with age has been shown in numerous studies with ASL as L1, and the age range for this critical period phenomenon appears to be very similar to what has been observed in even more studies of L2 (for both spoken and signed languages). Mayberry and Kluender argue, however, that the two phenomena are quite different, to such an extent that the concept of a critical period is not applicable to L2. Their two main arguments are that L2 learners are less affected by late acquisition than L1 learners, and that some L2 studies have not shown the kind of discontinuity in the age-proficiency function that is predicted by the concept of a critical period. As space is very limited, I will confine my comments to these two issues.
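
A toy illustration (not the commentary's analysis) of what a "discontinuity in the age-proficiency function" means statistically: fit a piecewise-linear model with a free breakpoint and compare the slopes on either side. The data here are simulated and all variable names are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

def piecewise(age, b0, slope1, slope2, breakpoint):
    # Linear decline before the breakpoint, a different slope after it.
    return np.where(
        age <= breakpoint,
        b0 + slope1 * age,
        b0 + slope1 * breakpoint + slope2 * (age - breakpoint),
    )

rng = np.random.default_rng(0)
age = rng.uniform(1, 30, 200)                       # age of acquisition
prof = piecewise(age, 100, -2.5, -0.3, 15) + rng.normal(0, 5, 200)

params, _ = curve_fit(piecewise, age, prof, p0=[100, -2, -0.5, 12])
print("estimated breakpoint:", params[3])
# A sharp slope change at the breakpoint is the discontinuity a critical
# period predicts; a single smooth decline is the pattern Mayberry and
# Kluender cite from some L2 studies.
```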


2021 ◽  
Author(s):  
Lorna C Quandt ◽  
Athena Willis ◽  
Carly Leannah

Signed language users communicate in a wide array of suboptimal environments, such as in dim lighting or from a distance. While fingerspelling is a common and essential part of signed languages, the perception of fingerspelling in varying visual environments is not well understood. Signed languages such as American Sign Language (ASL) rely on visuospatial information that combines hand and bodily movements, facial expressions, and fingerspelling. Linguistic information in ASL is conveyed with movement and spatial patterning, which lends itself well to using dynamic point light display (PLD) stimuli to represent sign language movements. We created PLD videos of fingerspelled location names. The location names were either Real (e.g., KUWAIT) or Pseudo names (e.g., CLARTAND), and the PLDs showed either a High or a Low number of markers. In an online study, Deaf and Hearing ASL users (total N = 283) watched 27 PLD stimulus videos that varied by Realness and Number of Markers. We calculated accuracy and confidence scores in response to each video. We predicted that when signers see ASL fingerspelled letter strings in a suboptimal visual environment, language experience in ASL will be positively correlated with accuracy and self-rated confidence scores. We also predicted that Real location names would be understood better than Pseudo names. Our findings show that participants were more accurate and confident in response to Real place names than to Pseudo names, and for stimuli with a High rather than a Low number of markers. We also found a significant interaction between Age and Realness, showing that as people age, they make better use of world knowledge to support fingerspelling comprehension. Finally, we examined accuracy and confidence in fingerspelling perception in subgroups of people who had learned ASL before the age of four. Studying the relationship between language experience and PLD fingerspelling perception allows us to explore how hearing status, ASL fluency, and age of language acquisition affect the core ability of understanding fingerspelling.
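
A minimal sketch of testing the Age-by-Realness interaction on trial-level accuracy, using a logistic regression in statsmodels. The CSV layout and column names are assumptions for illustration; the study's actual modeling approach may differ.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Assumed columns: accuracy (0/1), age (years),
# realness ("Real"/"Pseudo"), markers ("High"/"Low").
trials = pd.read_csv("pld_fingerspelling_trials.csv")

fit = smf.logit(
    "accuracy ~ age * C(realness) + C(markers)", data=trials
).fit()
print(fit.summary())
# A reliable age:realness coefficient would indicate that the advantage
# for Real place names changes with age, consistent with older
# participants drawing on world knowledge to support perception.
```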

