Language Lateralization in a Bimanual Language

2003 ◽  
Vol 15 (5) ◽  
pp. 718-730 ◽  
Author(s):  
David P. Corina ◽  
Lucila San Jose-Robertson ◽  
Andre Guillemin ◽  
Julia High ◽  
Allen R. Braun

Unlike spoken languages, sign languages of the deaf make use of two primary articulators, the right and left hands, to produce signs. This situation has no obvious parallel in spoken languages, in which speech articulation is carried out by symmetrical unitary midline vocal structures. This arrangement affords a unique opportunity to examine the robustness of linguistic systems that underlie language production in the face of contrasting articulatory demands and to chart the differential effects of handedness for highly skilled movements. Positron emission tomography (PET) was used to examine brain activation in 16 deaf users of American Sign Language (ASL) while subjects generated verb signs independently with their dominant right and nondominant left hands (compared to the repetition of noun signs). Nearly identical patterns of left inferior frontal and right cerebellar activity were observed. This pattern of activation during signing is consistent with patterns that have been reported for spoken languages, including evidence for specializations of inferior frontal regions related to lexical–semantic processing, search and retrieval, and phonological encoding. These results indicate that lexical–semantic processing in production relies upon left-hemisphere regions regardless of the modality in which a language is realized, and that this left-hemisphere activation is stable even in the face of conflicting articulatory demands. In addition, these data provide evidence for the role of the right posterolateral cerebellum in linguistic–cognitive processing and for a left ventral fusiform contribution to sign language processing.

1998 ◽  
Vol 172 (2) ◽  
pp. 142-146 ◽  
Author(s):  
Matthias Weisbrod ◽  
Sabine Maier ◽  
Sabine Harig ◽  
Ulrike Himmelsbach ◽  
Manfred Spitzer

Background: In schizophrenia, disturbances in the development of physiological hemispheric asymmetry are assumed to play a pathogenetic role. The most striking difference between the hemispheres is in language processing: the left hemisphere is superior in the use of syntactic or semantic information, whereas the right hemisphere uses contextual information more effectively. Method: Using psycholinguistic experimental techniques, semantic associations were examined separately for each hemisphere in 38 control subjects, 24 non-thought-disordered and 16 thought-disordered people with schizophrenia. Results: Direct semantic priming did not differ between the hemispheres in any of the groups. Only thought-disordered people showed significant indirect semantic priming in the left hemisphere. Conclusions: The results support (a) a prominent role of the right hemisphere for remote associations; (b) enhanced spreading of semantic associations in thought-disordered subjects; and (c) disorganisation of the functional asymmetry of semantic processing in thought-disordered subjects.
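As a rough illustration of how a semantic priming effect of the kind measured here is typically quantified (reaction-time facilitation for related over unrelated prime–target pairs), the following Python sketch uses entirely hypothetical reaction times; it is not the authors' analysis code.

```python
import numpy as np

def priming_effect(rt_related: np.ndarray, rt_unrelated: np.ndarray) -> float:
    """Semantic priming effect in ms: mean RT to targets after unrelated
    primes minus mean RT after related primes. Positive = facilitation."""
    return float(np.mean(rt_unrelated) - np.mean(rt_related))

# Hypothetical lexical-decision RTs (ms) for targets presented to the
# right visual field, i.e. projected initially to the left hemisphere:
effect = priming_effect(np.array([512, 498, 530]), np.array([561, 547, 580]))
print(f"direct priming: {effect:.0f} ms")  # ~49 ms facilitation
```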


2009 ◽  
Vol 30 (1) ◽  
pp. 187-203 ◽  
Author(s):  
KAREN EMMOREY ◽  
NELLY GERTSBERG ◽  
FRANCO KORPICS ◽  
CHARLES E. WRIGHT

Speakers monitor their speech output by listening to their own voice. However, signers do not look directly at their hands and cannot see their own face. We investigated the importance of a visual perceptual loop for sign language monitoring by examining whether changes in visual input alter sign production. Deaf signers produced American Sign Language (ASL) signs within a carrier phrase under five conditions: blindfolded, wearing tunnel-vision goggles, normal (citation) signing, shouting, and informal signing. Three-dimensional movement trajectories were obtained using an Optotrak Certus system. Informally produced signs were shorter, with less vertical movement. Shouted signs were displaced forward and to the right and were produced within a larger volume of signing space, with greater velocity, greater distance traveled, and a longer duration. Tunnel vision caused signers to produce less movement within the vertical dimension of signing space, but blindfolded and citation signing did not differ significantly on any measure except duration. Thus, signers do not “sign louder” when they cannot see themselves, but they do alter their sign production when vision is restricted. We hypothesize that visual feedback serves primarily to fine-tune the size of signing space rather than as input to a comprehension-based monitor.
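As a hedged sketch of how kinematic measures like these might be computed from motion-capture output, the following Python function summarizes one marker's trajectory; the array layout, units, and sampling rate are assumptions for illustration, not details from the paper.

```python
import numpy as np

def movement_metrics(xyz: np.ndarray, fs: float = 100.0) -> dict:
    """Kinematic summary for one marker trajectory.

    xyz: (n_samples, 3) array of positions in mm (z assumed vertical);
    fs: sampling rate in Hz. Both conventions are illustrative assumptions.
    """
    step = np.diff(xyz, axis=0)           # per-sample displacement vectors
    dist = np.linalg.norm(step, axis=1)   # per-sample path length (mm)
    duration_s = (len(xyz) - 1) / fs
    return {
        "duration_s": duration_s,
        "distance_mm": float(dist.sum()),              # total path length
        "peak_velocity_mm_s": float(dist.max() * fs),  # max instantaneous speed
        "mean_velocity_mm_s": float(dist.sum() / duration_s),
        "vertical_range_mm": float(xyz[:, 2].max() - xyz[:, 2].min()),
    }
```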


Neurology ◽  
2020 ◽  
Vol 95 (21) ◽  
pp. e2880-e2889
Author(s):  
Jennifer Shum ◽  
Lora Fanda ◽  
Patricia Dugan ◽  
Werner K. Doyle ◽  
Orrin Devinsky ◽  
...  

Objective: The combined spatiotemporal dynamics underlying sign language production remain largely unknown. To investigate these dynamics compared to speech production, we used intracranial electrocorticography during a battery of language tasks. Methods: We report a unique case of direct cortical surface recordings obtained from a neurosurgical patient with intact hearing who is bilingual in English and American Sign Language. We designed a battery of cognitive tasks to capture multiple modalities of language processing and production. Results: We identified 2 spatially distinct cortical networks: ventral for speech and dorsal for sign production. Sign production recruited perirolandic, parietal, and posterior temporal regions, while speech production recruited frontal, perisylvian, and perirolandic regions. Electrical cortical stimulation confirmed this spatial segregation, identifying mouth areas for speech production and limb areas for sign production. The temporal dynamics revealed superior parietal cortex activity immediately before sign production, suggesting its role in planning and producing sign language. Conclusions: Our findings reveal a distinct network for sign language and detail the temporal propagation supporting sign production.


2011 ◽  
Author(s):  
M. Leonard ◽  
N. Ferjan Ramirez ◽  
C. Torres ◽  
M. Hatrak ◽  
R. Mayberry ◽  
...  

2006 ◽  
Vol 18 (11) ◽  
pp. 1789-1798 ◽  
Author(s):  
Angela Bartolo ◽  
Francesca Benuzzi ◽  
Luca Nocetti ◽  
Patrizia Baraldi ◽  
Paolo Nichelli

Humor is a uniquely human ability. Suls [A two-stage model for the appreciation of jokes and cartoons. In P. E. Goldstein & J. H. McGhee (Eds.), The psychology of humour: Theoretical perspectives and empirical issues. New York: Academic Press, 1972, pp. 81–100] proposed a two-stage model of humor: detection and resolution of incongruity. Incongruity is generated when a prediction is not confirmed in the final part of a story. To comprehend humor, it is necessary to revisit the story, transforming an incongruous situation into a funny, congruous one. Patient and neuroimaging studies carried out to date have led to divergent outcomes. In particular, patient studies found that patients with right-hemisphere lesions have difficulties in humor comprehension, whereas neuroimaging studies suggested a major involvement of the left hemisphere in both humor detection and comprehension. To prevent activation of the left hemisphere due to language processing, we devised a nonverbal task comprising cartoon pairs. Our findings demonstrate activation of both the left and the right hemispheres when comparing funny versus nonfunny cartoons. In particular, we found activation of the right inferior frontal gyrus (BA 47), the left superior temporal gyrus (BA 38), the left middle temporal gyrus (BA 21), and the left cerebellum. These areas were also activated in a nonverbal task exploring attribution of intention [Brunet, E., Sarfati, Y., Hardy-Bayle, M. C., & Decety, J. A PET investigation of the attribution of intentions with a nonverbal task. Neuroimage, 11, 157–166, 2000]. We hypothesize that the resolution of incongruity might occur through a process of intention attribution. We also asked subjects to rate the funniness of each cartoon pair. A parametric analysis showed that the left amygdala was activated in relation to subjective amusement. We hypothesize that the amygdala plays a key role in giving humor an emotional dimension.


2021 ◽  
Author(s):  
R. D. Rusiru Sewwantha ◽  
T. N. D. S. Ginige

Sign language is the use of various gestures and symbols for communication. It is mainly used by people whose speech or hearing impairments make spoken communication difficult. Because most speakers of spoken languages have no knowledge of sign language, a communication gap arises between sign language users and spoken-language speakers. Sign language also differs from country to country: while American Sign Language is the most commonly used, Sri Lanka uses Sri Lankan (Sinhala) sign language. In this research, the authors propose a mobile solution that uses a region-based convolutional neural network for object detection to reduce this communication gap, identifying Sinhala sign language and interpreting it into Sinhala text with natural language processing (NLP). Using the trained model, the system identifies and interprets still gesture signs in real time.
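The abstract gives no implementation details; as a minimal sketch of the kind of pipeline it describes (a region-based detector whose per-frame label is mapped to text), here is an illustrative example built on torchvision's Faster R-CNN. The sign classes, checkpoint path, and label-to-text mapping are all hypothetical, not taken from the paper.

```python
# Illustrative sketch only: a region-based CNN detector applied to one
# video frame, returning the best-scoring sign label.
import torch
import torchvision

SIGN_LABELS = {1: "ayubowan", 2: "sthuthi"}  # hypothetical Sinhala sign classes

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(
    num_classes=len(SIGN_LABELS) + 1  # +1 for the background class
)
model.load_state_dict(torch.load("sinhala_signs.pt"))  # hypothetical weights
model.eval()

def detect_sign(frame: torch.Tensor, threshold: float = 0.8):
    """frame: (3, H, W) float tensor in [0, 1]. Returns a sign label or None."""
    with torch.no_grad():
        out = model([frame])[0]  # dict with 'boxes', 'labels', 'scores'
    keep = out["scores"] >= threshold
    if not keep.any():
        return None
    best = int(out["scores"][keep].argmax())
    return SIGN_LABELS[int(out["labels"][keep][best])]
```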


2001 ◽  
Vol 13 (6) ◽  
pp. 829-843 ◽  
Author(s):  
A. L. Roskies ◽  
J. A. Fiez ◽  
D. A. Balota ◽  
M. E. Raichle ◽  
S. E. Petersen

To distinguish areas involved in the processing of word meaning (semantics) from other regions involved in lexical processing more generally, subjects were scanned with positron emission tomography (PET) while performing lexical tasks, three of which required varying degrees of semantic analysis and one that required phonological analysis. Three closely apposed regions in the left inferior frontal cortex and one in the right cerebellum were significantly active above baseline in the semantic tasks, but not in the nonsemantic task. The activity in two of the frontal regions was modulated by the difficulty of the semantic judgment. Other regions, including some in the left temporal cortex and the cerebellum, were active across all four language tasks. Thus, in addition to a number of regions known to be active during language processing, regions in the left inferior frontal cortex were specifically recruited during semantic processing in a task-dependent manner. A region in the right cerebellum may be functionally related to those in the left inferior frontal cortex. Discussion focuses on the implications of these results for current views regarding neural substrates of semantic processing.


Author(s):  
François Grosjean

The author discovered American Sign Language (ASL) and the world of the deaf whilst in the United States. He helped set up a research program in the psycholinguistics of ASL and describes a few studies he did. He also edited, with Harlan Lane, a special issue of Langages on sign language, for French colleagues. The author then worked on the bilingualism and biculturalism of the deaf, and authored a text on the right of the deaf child to become bilingual. It has been translated into 30 different languages and is known the world over.


2007 ◽  
Vol 46 (02) ◽  
pp. 247-250 ◽  
Author(s):  
H. Takahashi ◽  
N. Yahata ◽  
M. Matsuura ◽  
K. Asai ◽  
Y. Okubo ◽  
...  

Summary. Objectives: In our previous functional magnetic resonance imaging (fMRI) study, we determined that there was distinct left-hemispheric dominance for lexical-semantic processing without the influence of human voice perception in right-handed healthy subjects. However, the degree of right-handedness in the right-handed subjects ranged from 52 to 100 according to the Edinburgh Handedness Inventory (EHI) score. In the present study, we aimed to clarify the correlation between the degree of right-handedness and language dominance in the fronto-temporo-parietal cortices by examining cerebral activation for lexical-semantic processing. Methods: Twenty-seven right-handed healthy subjects were scanned by fMRI while listening to sentences (SEN), reverse sentences (rSEN), and identifiable non-vocal sounds (SND). Fronto-temporo-parietal activation was observed in the left hemisphere under the SEN - rSEN contrast, which captures lexical-semantic processing without the influence of human voice perception. The laterality index was calculated as LI = (L - R)/(L + R) × 100, where L and R denote left- and right-hemisphere activation. Results: The laterality index in the fronto-temporo-parietal cortices did not correlate with the degree of right-handedness in EHI score. Conclusions: The present study indicated that EHI scores ranging from 52 to 100 had no effect on the degree of left-hemispheric dominance for lexical-semantic processing in right-handed healthy subjects.
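The laterality index formula above is simple enough to verify directly; this brief Python sketch computes it from left- and right-hemisphere activation measures (the example voxel counts are hypothetical).

```python
def laterality_index(left: float, right: float) -> float:
    """LI = (L - R) / (L + R) * 100: +100 is fully left-lateralized,
    -100 fully right-lateralized, 0 symmetric."""
    if left + right == 0:
        raise ValueError("left + right must be nonzero")
    return (left - right) / (left + right) * 100

# Hypothetical example: 340 activated voxels on the left, 60 on the right.
print(laterality_index(340, 60))  # 70.0 -> strongly left-lateralized
```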


2014 ◽  
Vol 26 (3) ◽  
pp. 1015-1026 ◽  
Author(s):  
Naja Ferjan Ramirez ◽  
Matthew K. Leonard ◽  
Tristan S. Davenport ◽  
Christina Torres ◽  
Eric Halgren ◽  
...  
