A Positive Relationship Between Sign Language Comprehension and Mental Rotation Abilities

2020 · Vol 26 (1) · pp. 1-12
Author(s): Emily Kubicek, Lorna C. Quandt

Abstract Past work investigating spatial cognition suggests better mental rotation abilities for those who are fluent in a signed language. However, no prior work has assessed whether fluency is needed to achieve this performance benefit or what it may look like at the neurobiological level. We conducted an electroencephalography experiment and assessed accuracy on a classic mental rotation task given to deaf fluent signers, hearing fluent signers, hearing non-fluent signers, and hearing non-signers. Two of the main findings are as follows: (1) sign language comprehension and mental rotation abilities are positively correlated, and (2) behavioral performance differences between signers and non-signers are not clearly reflected in the brain activity typically associated with mental rotation. In addition, we propose that the robust impact sign language appears to have on mental rotation abilities strongly suggests that “sign language use” should be added to future measures of spatial experiences.

2019 · Vol 24 (4) · pp. 435-447
Author(s): Corina Goodwin, Diane Lillo-Martin

Abstract Sign language use in the (re)habilitation of children with cochlear implants (CIs) remains a controversial issue. Concerns that signing impedes spoken language development are based on research comparing children exposed to spoken and signed language (bilinguals) to children exposed only to speech (monolinguals), although abundant research demonstrates that bilinguals and monolinguals differ in language development. We control for bilingualism effects by comparing bimodal bilingual (signing-speaking) children with CIs (BB-CI) to those with typical hearing (BB-TH). Each child had at least one Deaf parent and was exposed to ASL from birth. The BB-THs were exposed to English from birth by hearing family members, while the BB-CIs began English exposure after cochlear implantation, around 22 months of age. Elicited speech samples were analyzed for accuracy of English grammatical morpheme production. Although there was a trend toward lower overall accuracy in the BB-CIs, this seemed driven by increased omission of the plural -s, suggesting an exaggerated role of perceptual salience in this group. Errors of commission were rare in both groups. Because both groups were bimodal bilinguals, trends toward group differences were likely caused by delayed exposure to spoken language or hearing through a CI, rather than by sign language exposure.


2019 · Vol 5 (1) · pp. 583-600
Author(s): Lindsay Ferrara, Torill Ringsø

Abstract Previous studies of perspective in spatial signed language descriptions suggest a basic dichotomy between a route and a survey perspective: the signer is conceptualized either as a mobile agent within a life-sized scene or as an external observer, in a fixed position, of a scaled-down scene. We challenge this dichotomy by investigating the particular couplings of vantage point position and mobility engaged during various types of spatial language produced across eight naturalistic conversations in Norwegian Sign Language. Spatial language was annotated for the purpose of the segment, the size of the environment described, the signs produced, and the location and mobility of vantage points. Analysis revealed that survey and route perspectives, as characterized in the literature, do not adequately account for the range of vantage point combinations observed in conversations (e.g., external, but mobile, vantage points). There is also some preliminary evidence that the purpose of the spatial language and the size of the environments described may play a role in how signers engage vantage points. Finally, the study underscores the importance of investigating spatial language within naturalistic conversational contexts.


2011 · Vol 23 (6) · pp. 1395-1404
Author(s): Ruth Seurinck, Floris P. de Lange, Erik Achten, Guy Vingerhoets

A growing number of studies show that visual mental imagery recruits the same brain areas as visual perception. Although the necessity of hV5/MT+ for motion perception has been demonstrated by means of transcranial magnetic stimulation (TMS), its relevance for motion imagery remains unclear. We induced a direction-selective adaptation in hV5/MT+ by means of a motion aftereffect (MAE) while subjects performed a mental rotation task that elicits imagined motion. We concurrently measured behavioral performance and neural activity with fMRI, enabling us to directly assess the effect of a perturbation of hV5/MT+ on other cortical areas involved in the mental rotation task. Activity in hV5/MT+ increased as more mental rotation was required, and the perturbation of hV5/MT+ affected behavioral performance as well as the neural activity in this area. Moreover, several regions in posterior parietal cortex were also affected by this perturbation. Our results show that hV5/MT+ is required for imagined visual motion and interacts with parietal cortex during this cognitive process.


Gesture · 2013 · Vol 13 (3) · pp. 354-376
Author(s): Dea Hunsicker, Susan Goldin-Meadow

All established languages, spoken or signed, make a distinction between nouns and verbs. Even a young sign language emerging within a family of deaf individuals has been found to mark the noun-verb distinction, and to use handshape type to do so. Here we ask whether handshape type is used to mark the noun-verb distinction in a gesture system invented by a deaf child who does not have access to a usable model of either spoken or signed language. The child produces homesigns that have linguistic structure, but receives from his hearing parents co-speech gestures that are structured differently from his own gestures. Thus, unlike users of established and emerging languages, the homesigner is a producer of his system but does not receive it from others. Nevertheless, we found that the child used handshape type to mark the distinction between nouns and verbs at the early stages of development. The noun-verb distinction is thus so fundamental to language that it can arise in a homesign system not shared with others. We also found that the child abandoned handshape type as a device for distinguishing nouns from verbs at just the moment when he developed a combinatorial system of handshape and motion components that marked the distinction. The way the noun-verb distinction is marked thus depends on the full array of linguistic devices available within the system.


PeerJ · 2018 · Vol 6 · e5395
Author(s): Jose L. Pardo-Vazquez, Carlos Acuña

Previous work has shown that neurons in the ventral premotor cortex (PMv) represent several elements of perceptual decisions. One of the most striking findings was that, after the outcome of the choice is known, PMv neurons encode all the information necessary for evaluating the decision process. These results prompted us to suggest that this cortical area could be involved in shaping future behavior. In this work, we characterized neuronal activity and behavioral performance as a function of the outcome of the previous trial. We found that the outcome of the immediately preceding trial (n−1) significantly changes, in the current trial (n), both single-cell activity and behavioral performance. The outcome of trial n−2, however, affects neither behavior nor neuronal activity. Moreover, the outcome of difficult trials had a greater impact on performance and recruited more PMv neurons than the outcome of easy trials. These results strongly support our suggestion that PMv neurons evaluate the decision process and use this information to modify future behavior.
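The trial-history analysis described above can be sketched on simulated data: split current-trial performance by the outcome of trial n−1 (or n−2) and compare. This is a minimal illustration of the conditioning logic only; the function name, probabilities, and trial counts are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np

def accuracy_by_history(correct, lag):
    """P(correct on trial n), split by the outcome of trial n - lag."""
    correct = np.asarray(correct, dtype=bool)
    prev, curr = correct[:-lag], correct[lag:]
    # (accuracy after a correct trial, accuracy after an error)
    return curr[prev].mean(), curr[~prev].mean()

# Simulate outcomes where only the immediately preceding trial matters
rng = np.random.default_rng(0)
correct = [True]
for _ in range(4999):
    p = 0.80 if correct[-1] else 0.65  # performance drops after an error
    correct.append(rng.random() < p)

after_ok, after_err = accuracy_by_history(correct, lag=1)  # clear n-1 effect
print(after_ok > after_err)
```

Applying the same function with `lag=2` to real data would test the study's null result for trial n−2.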


2019
Author(s): Lin Wang, Edward Wlotko, Edward Alexander, Lotte Schoot, Minjae Kim, ...

Abstract It has been proposed that people can generate probabilistic predictions at multiple levels of representation during language comprehension. We used magnetoencephalography (MEG) and electroencephalography (EEG), in combination with Representational Similarity Analysis (RSA), to seek neural evidence for the prediction of animacy features. In two studies, MEG and EEG activity was measured as human participants (both sexes) read three-sentence scenarios. Verbs in the final sentences constrained for either animate or inanimate semantic features of upcoming nouns, and the broader discourse context constrained for either a specific noun or for multiple nouns belonging to the same animacy category. We quantified the similarity between spatial patterns of brain activity following the verbs until just before the presentation of the nouns. The MEG and EEG datasets revealed converging evidence that the similarity between spatial patterns of neural activity was greater following animate-constraining verbs than following inanimate-constraining verbs. This effect could not be explained by lexical-semantic processing of the verbs themselves. We therefore suggest that it reflected the inherent difference in the semantic similarity structure of the predicted animate and inanimate nouns. Moreover, the effect was present regardless of whether a specific word could be predicted, providing strong evidence for the prediction of coarse-grained semantic features that goes beyond the prediction of individual words.

Significance statement: Language inputs unfold very quickly during real-time communication. By predicting ahead, we can give our brains a “head start,” so that language comprehension is faster and more efficient. While most contexts do not constrain strongly for a specific word, they do allow us to predict some upcoming information. For example, following the context “they cautioned the…”, we can predict that the next word will be animate rather than inanimate (we can caution a person, but not an object). Here we used EEG and MEG techniques to show that the brain is able to use these contextual constraints to predict the animacy of upcoming words during sentence comprehension, and that these predictions are associated with specific spatial patterns of neural activity.
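The core of the similarity measure described above is correlating spatial patterns of activity across trials and comparing the average within-condition similarity between conditions. A toy sketch on simulated sensor data follows; the sensor count, trial counts, and noise level are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np

def spatial_similarity(patterns):
    """Mean pairwise Pearson correlation between trials' spatial patterns.

    patterns: (n_trials, n_sensors) array of activity at one time point.
    """
    corr = np.corrcoef(patterns)          # (n_trials, n_trials) trial-by-trial matrix
    iu = np.triu_indices_from(corr, k=1)  # unique trial pairs, excluding the diagonal
    return corr[iu].mean()

rng = np.random.default_rng(0)
template = rng.standard_normal(64)  # a shared spatial pattern across animate trials

# Animate-constraining trials: noisy copies of one shared pattern -> high similarity
animate = template + 0.5 * rng.standard_normal((20, 64))
# Inanimate-constraining trials: unrelated patterns -> low similarity
inanimate = rng.standard_normal((20, 64))

print(spatial_similarity(animate) > spatial_similarity(inanimate))
```

The simulation mirrors the paper's interpretation: if predicted animate nouns share more semantic structure, the neural patterns they evoke should correlate more strongly with each other than those for inanimate predictions.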


2019 · Vol 5 (2) · pp. 95-120
Author(s): Jemina Napier, Rosemary Oram, Alys Young, Robert Skinner

Abstract Deaf people’s lives are predicated to some extent on working with sign language interpreters. The self is translated on a regular basis, and this is a long-term state of being. Identity becomes known and performed through the translated self in many interactions, especially at work. (Hearing) others’ experience of deaf people, largely formed indirectly through the use of sign language interpreters, is rarely understood as intercultural or examined from a sociocultural linguistic perspective. This study positions itself at the crossroads of translation studies, sociolinguistics and deaf studies to discuss findings from a scoping study that sought, for the first time, to explore whether the experience of being ‘known’ through translation is a pertinent issue for deaf signers. Through interviews with three deaf signers, we examine how they draw upon their linguistic repertoires and adopt bimodal translanguaging strategies in their work to assert or maintain their professional identity, including bypassing their representation through interpreters. We refer to this group as ‘Deaf Contextual Speakers’ (DCS). The DCS revealed the tensions they experienced as deaf signers in reinforcing, contravening or perpetuating language ideologies, with respect to the assumptions hearing people make about them as deaf people; their language use in differing contexts; the status of sign language; and the perceptions of other deaf signers about their translanguaging choices. This preliminary discussion of the DCS’ engagement with translation, translanguaging and professional identity(ies) contributes to theoretical discussions of translanguaging by examining how this group of deaf people draw upon their multilingual and multimodal repertoires and the contingent and situational influences on these choices, and extends our understanding of the relationship between language use, power, identity, translation and representation.


Author(s): Edit H. Kontra, Kata Csizér

Abstract The aim of this study is to point out the relationship between foreign language learning motivation and sign language use among hearing impaired Hungarians. In the article we concentrate on two main issues: first, to what extent hearing impaired people are motivated to learn foreign languages in a European context; second, to what extent sign language use in the classroom as well as outside school shapes their level of motivation. The participants in our research were 331 Deaf and hard of hearing people from all over Hungary. The instrument of data collection was a standardized questionnaire. Our results support the notion that sign language use helps foreign language learning. Based on the findings, we can conclude that there is indeed no justification for further neglecting the needs of Deaf and hard of hearing people as foreign language learners and that their claim for equal opportunities in language learning is substantiated.


2021
Author(s): Lorna C. Quandt, Athena Willis, Carly Leannah

Signed language users communicate in a wide array of sub-optimal environments, such as in dim lighting or from a distance. While fingerspelling is a common and essential part of signed languages, the perception of fingerspelling in varying visual environments is not well understood. Signed languages such as American Sign Language (ASL) rely on visuospatial information that combines hand and bodily movements, facial expressions, and fingerspelling. Linguistic information in ASL is conveyed with movement and spatial patterning, which lends itself well to using dynamic Point Light Display (PLD) stimuli to represent sign language movements. We created PLD videos of fingerspelled location names. The location names were either Real (e.g., KUWAIT) or Pseudo-names (e.g., CLARTAND), and the PLDs showed either a High or a Low number of markers. In an online study, Deaf and Hearing ASL users (total N = 283) watched 27 PLD stimulus videos that varied by Realness and Number of Markers. We calculated accuracy and confidence scores in response to each video. We predicted that when signers see ASL fingerspelled letter strings in a suboptimal visual environment, language experience in ASL will be positively correlated with accuracy and self-rated confidence scores. We also predicted that Real location names would be understood better than Pseudo names. Our findings show that participants were more accurate and confident in response to Real place names than Pseudo names and for stimuli with High rather than Low markers. We also discovered a significant interaction between Age and Realness, which shows that as people age, they can better use outside world knowledge to inform their fingerspelling success. Finally, we examined the accuracy and confidence in fingerspelling perception in sub-groups of people who had learned ASL before the age of four. 
Studying the relationship between language experience and PLD fingerspelling perception allows us to explore how hearing status, ASL fluency, and age of language acquisition affect the core ability of understanding fingerspelling.

