Neural Correlates of Phonetic Adaptation as Induced by Lexical and Audiovisual Context

2020, Vol 32 (11), pp. 2145-2158
Author(s): Shruti Ullas, Lars Hausfeld, Anne Cutler, Frank Eisner, Elia Formisano

When speech perception is difficult, one way listeners adjust is by reconfiguring phoneme category boundaries, drawing on contextual information. Both lexical knowledge and lipreading cues are used in this way, but it remains unknown whether these two differing forms of perceptual learning are similar at a neural level. This study compared phoneme boundary adjustments driven by lexical or audiovisual cues, using ultra-high-field 7-T fMRI. During imaging, participants heard exposure stimuli and test stimuli. Exposure stimuli for lexical retuning were audio recordings of words, and those for audiovisual recalibration were audio–video recordings of lip movements during utterances of pseudowords. Test stimuli were ambiguous phonetic strings presented without context, and listeners reported what phoneme they heard. Reports reflected phoneme biases in preceding exposure blocks (e.g., more reported /p/ after /p/-biased exposure). Analysis of corresponding brain responses indicated that both forms of cue use were associated with a network of activity across the temporal cortex, plus parietal, insula, and motor areas. Audiovisual recalibration also elicited significant occipital cortex activity despite the lack of visual stimuli. Activity levels in several ROIs also covaried with strength of audiovisual recalibration, with greater activity accompanying larger recalibration shifts. Similar activation patterns appeared for lexical retuning, but here, no significant ROIs were identified. Audiovisual and lexical forms of perceptual learning thus induce largely similar brain response patterns. However, audiovisual recalibration involves additional visual cortex contributions, suggesting that previously acquired visual information (on lip movements) is retrieved and deployed to disambiguate auditory perception.
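As a rough illustration of how the behavioral strength of such retuning/recalibration could be quantified, the sketch below expresses a phoneme-boundary shift as the difference in /p/-report rates after /p/-biased versus /t/-biased exposure. The function name, variable names, and toy data are illustrative assumptions, not the paper's own measure.

```python
# Minimal sketch (not from the paper): quantify a phoneme-boundary shift as the
# difference in /p/-report rates after /p/-biased vs. /t/-biased exposure blocks.
import numpy as np

def recalibration_shift(reports_after_p_bias, reports_after_t_bias):
    """Each input is a sequence of 1 (/p/ reported) and 0 (/t/ reported)
    for ambiguous test stimuli following one exposure condition."""
    return np.mean(reports_after_p_bias) - np.mean(reports_after_t_bias)

# Toy example: /p/ reported 70% of the time after /p/-biased exposure and
# 40% of the time after /t/-biased exposure -> shift of 0.30.
shift = recalibration_shift([1, 1, 0, 1, 1, 0, 1, 1, 0, 1],
                            [0, 1, 0, 0, 1, 0, 1, 0, 1, 0])
print(f"recalibration shift: {shift:.2f}")
```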

2018
Author(s): Theo Marins, Maite Russo, Erika Rodrigues, Jorge Moll, Daniel Felix, ...

Abstract: Evidence of cross-modal plasticity in blind individuals has been reported over the past decades, showing that non-visual information is carried and processed by classical “visual” brain structures. This feature of the blind brain makes it a pivotal model for exploring the limits and mechanisms of brain plasticity. However, despite recent efforts, the structural underpinnings that could explain cross-modal plasticity in congenitally blind individuals remain unclear. Using advanced neuroimaging techniques, we mapped thalamocortical connectivity and assessed cortical thickness and white-matter integrity in congenitally blind individuals and sighted controls, to test the hypothesis that an aberrant pattern of thalamocortical connectivity can pave the way for cross-modal plasticity. We described a direct occipital takeover by temporal projections from the thalamus, which would carry non-visual (e.g., auditory) information to the visual cortex in congenitally blind individuals. In addition, the amount of thalamo-occipital connectivity correlated with the cortical thickness of primary visual cortex (V1), supporting a probably common (or related) reorganization phenomenon. Our results suggest aberrant thalamocortical connectivity as one possible mechanism of cross-modal plasticity in blind individuals, with a potential impact on the cortical thickness of V1.

Significance Statement: Congenitally blind individuals often develop greater abilities in the spared sensory modalities, such as increased acuity in auditory discrimination and voice recognition, compared with sighted controls. These functional gains have been shown to rely on ‘visual’ cortical areas of the blind brain, characterizing the phenomenon of cross-modal plasticity. However, its anatomical underpinnings in humans have been pursued unsuccessfully for decades. Recent advances in non-invasive neuroimaging techniques allowed us to test the hypothesis of abnormal thalamocortical connectivity in congenitally blind individuals. Our results showed an expansion of the thalamic connections to the temporal cortex over those that project to the occipital cortex, which may explain the cross-talk between the visual and auditory systems in congenitally blind individuals.


2021
Author(s): Lin Hua, Fei Gao, Chantat Leong, Zhen Yuan

Previous research on perceptual grouping has focused primarily on the dynamics of a single grouping principle, in light of Gestalt psychology; comparatively little attention has been paid to the dissociation between two or more grouping principles. To address this issue, the current study investigated how, when, and where the processing of two grouping principles (proximity and similarity) is established in the human brain, using a dimotif lattice paradigm in which the strength of one grouping principle was adjusted and the modulated strength of the other was measured, yielding six visual stimuli. The psychophysical results showed that the similarity grouping effect was enhanced as the proximity effect was reduced when the proximity and similarity cues were presented simultaneously. Meanwhile, time-resolved multivariate pattern analysis (MVPA) of electroencephalography (EEG) responses could decode which of the six stimuli, involving both principles, was presented in each trial. The onsets of the dissociation between the two grouping principles fell within three time windows: the earliest, reflecting proximity-defined local arrangement of visual elements, in the middle occipital cortex; a middle stage of feature selection modulating low-level visual cortex, in the inferior occipital cortex and fusiform cortex; and higher-level cognitive integration supporting decisions about grouping preference, in parietal areas. In addition, brain responses were highly correlated with behavioral grouping. The results therefore provide direct evidence for a link between the perceptual space of human grouping decisions and the neural space of these brain response patterns.
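For readers unfamiliar with time-resolved MVPA, the sketch below shows the general idea: decode a six-way stimulus label from multichannel EEG patterns separately at each time point. It uses simulated data and a generic scikit-learn classifier, and is not the authors' pipeline.

```python
# Illustrative sketch only (simulated data, generic classifier): time-resolved MVPA
# decoding which of six stimuli was shown, from EEG patterns at each time point.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_trials, n_channels, n_times = 300, 64, 100
X = rng.standard_normal((n_trials, n_channels, n_times))  # trials x channels x time
y = rng.integers(0, 6, n_trials)                          # six stimulus conditions

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
accuracy = np.array([
    cross_val_score(clf, X[:, :, t], y, cv=5).mean()      # decode at each time point
    for t in range(n_times)
])
print("peak decoding accuracy:", accuracy.max())  # chance level is 1/6 for six classes
```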


Author(s): Edward H. Silson, Iris I. A. Groen, Chris I. Baker

Abstract: Human visual cortex is organised broadly according to two major principles: retinotopy (the spatial mapping of the retina in cortex) and category-selectivity (preferential responses to specific categories of stimuli). Historically, these principles were considered anatomically separate, with retinotopy restricted to the occipital cortex and category-selectivity emerging in the lateral-occipital and ventral-temporal cortex. However, recent studies show that category-selective regions exhibit systematic retinotopic biases, for example exhibiting stronger activation for stimuli presented in the contra- compared to the ipsilateral visual field. It is unclear, however, whether responses within category-selective regions are more strongly driven by retinotopic location or by category preference, and if there are systematic differences between category-selective regions in the relative strengths of these preferences. Here, we directly compare contralateral and category preferences by measuring fMRI responses to scene and face stimuli presented in the left or right visual field and computing two bias indices: a contralateral bias (response to the contralateral minus ipsilateral visual field) and a face/scene bias (preferred response to scenes compared to faces, or vice versa). We compare these biases within and between scene- and face-selective regions and across the lateral and ventral surfaces of the visual cortex more broadly. We find an interaction between surface and bias: lateral surface regions show a stronger contralateral than face/scene bias, whilst ventral surface regions show the opposite. These effects are robust across and within subjects, and appear to reflect large-scale, smoothly varying gradients. Together, these findings support distinct functional roles for the lateral and ventral visual cortex in terms of the relative importance of the spatial location of stimuli during visual information processing.


2021
Author(s): Edward H Silson, Iris Isabelle Anna Groen, Chris I Baker

Human visual cortex is organised broadly according to two major principles: retinotopy (the spatial mapping of the retina in cortex) and category-selectivity (preferential responses to specific categories of stimuli). Historically, these principles were considered anatomically separate, with retinotopy restricted to the occipital cortex and category-selectivity emerging in lateral-occipital and ventral-temporal cortex. Contrary to this assumption, recent studies show that category-selective regions exhibit systematic retinotopic biases. It is unclear, however, whether responses within these regions are more strongly driven by retinotopic location or by category preference, and if there are systematic differences between category-selective regions in the relative strengths of these preferences. Here, we directly compare spatial and category preferences by measuring fMRI responses to scene and face stimuli presented in the left or right visual field and computing two bias indices: a spatial bias (response to the contralateral minus ipsilateral visual field) and a category bias (response to the preferred minus non-preferred category). We compare these biases within and between scene- and face-selective regions across the lateral and ventral surfaces of visual cortex. We find an interaction between surface and bias: lateral regions show a stronger spatial than category bias, whilst ventral regions show the opposite. These effects are robust across and within subjects, and reflect large-scale, smoothly varying gradients across both surfaces. Together, these findings support distinct functional roles for lateral and ventral category-selective regions in visual information processing in terms of the relative importance of spatial information.
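The two bias indices are simple difference scores. The sketch below computes them for a hypothetical scene-selective ROI; all response values, names, and the averaging scheme are illustrative assumptions, not data or code from the study.

```python
# Minimal sketch, assuming per-ROI mean responses (arbitrary units) to each
# stimulus/visual-field combination; values and the ROI are hypothetical.
def spatial_bias(contra, ipsi):
    """Contralateral minus ipsilateral response, averaged over categories."""
    return contra - ipsi

def category_bias(preferred, non_preferred):
    """Preferred minus non-preferred category response, averaged over fields."""
    return preferred - non_preferred

# Hypothetical scene-selective ROI: responses for (contra, ipsi) x (scene, face)
contra_scene, ipsi_scene = 1.8, 1.1
contra_face, ipsi_face = 0.9, 0.6

sb = spatial_bias((contra_scene + contra_face) / 2, (ipsi_scene + ipsi_face) / 2)
cb = category_bias((contra_scene + ipsi_scene) / 2, (contra_face + ipsi_face) / 2)
print(f"spatial bias = {sb:.2f}, category bias = {cb:.2f}")
```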


2020
Author(s): M Babo-Rebelo, A Puce, D Bullock, L Hugueville, F Pestilli, ...

Abstract: Occipito-temporal regions within the face network process perceptual and socio-emotional information, but the dynamics and interactions between different nodes within this network remain unknown. Here, we analyzed intracerebral EEG from 11 epileptic patients viewing a stimulus sequence beginning with a neutral face with direct gaze. The gaze could avert or remain direct, while the emotion changed to fearful or happy. N200 field potential peak latencies indicated that face processing begins in inferior occipital cortex and proceeds anteroventrally to fusiform and inferior temporal cortices, in parallel. The superior temporal sulcus responded preferentially to gaze changes, with augmented field potential amplitudes for averted versus direct gaze, and large effect sizes relative to other regions of the network. An overlap analysis of posterior white matter tractography endpoints (from 1066 healthy brains) relative to active intracerebral electrodes from the 11 patients showed likely involvement of both dorsal and ventral posterior white matter pathways. The inferior occipital and temporal sulci likely broadcast their information: the former dorsally to the intraparietal sulcus, and the latter between fusiform and superior temporal cortex. Overall, our data call for inclusion of inferior temporal cortex in face processing models, and anchor the superior temporal cortex in dynamic gaze processing.


2014, Vol 27 (3-4), pp. 247-262
Author(s): Emiliano Ricciardi, Leonardo Tozzi, Andrea Leo, Pietro Pietrini

Cross-modal responses in occipital areas appear to be essential for sensory processing in visually deprived subjects. However, it remains unclear whether this functional recruitment depends on the sensory channel conveying the information. In order to characterize brain areas showing task-independent but sensory-specific cross-modal responses in blind individuals, we pooled distinct functional brain studies into a single meta-analysis based only on the modality conveying the experimental stimuli (auditory or tactile). Our approach revealed a specific functional cortical segregation according to the sensory modality conveying the non-visual information, irrespective of the cognitive features of the tasks. In particular, dorsal and posterior subregions of the occipital and superior parietal cortex showed a higher cross-modal recruitment across tactile tasks in blind as compared to sighted individuals. On the other hand, auditory stimuli activated more medial and ventral clusters within early visual areas, as well as the lingual and inferior temporal cortex. These findings suggest a modality-specific functional modification of cross-modal responses within different portions of the occipital cortex of blind individuals. Cross-modal recruitment can thus be specifically influenced by the intrinsic features of the sensory information.


2019, Vol 30 (3), pp. 1103-1116
Author(s): Kiki van der Heijden, Elia Formisano, Giancarlo Valente, Minye Zhan, Ron Kupers, ...

Abstract Auditory spatial tasks induce functional activation in the occipital—visual—cortex of early blind humans. Less is known about the effects of blindness on auditory spatial processing in the temporal—auditory—cortex. Here, we investigated spatial (azimuth) processing in congenitally and early blind humans with a phase-encoding functional magnetic resonance imaging (fMRI) paradigm. Our results show that functional activation in response to sounds in general—independent of sound location—was stronger in the occipital cortex but reduced in the medial temporal cortex of blind participants in comparison with sighted participants. Additionally, activation patterns for binaural spatial processing were different for sighted and blind participants in planum temporale. Finally, fMRI responses in the auditory cortex of blind individuals carried less information on sound azimuth position than those in sighted individuals, as assessed with a 2-channel, opponent coding model for the cortical representation of sound azimuth. These results indicate that early visual deprivation results in reorganization of binaural spatial processing in the auditory cortex and that blind individuals may rely on alternative mechanisms for processing azimuth position.
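As context for this analysis, the sketch below illustrates a generic two-channel opponent code for azimuth, in which two broadly tuned channels prefer the left and right hemifields and position is read out from their activity difference. The sigmoidal tuning function and its parameters are assumptions for illustration, not the model fitted in the study.

```python
# Schematic sketch of a generic two-channel opponent code for sound azimuth.
import numpy as np

def channel_response(azimuth_deg, preferred_side, slope=0.05):
    """Sigmoidal tuning: activity rises toward the preferred hemifield."""
    sign = 1.0 if preferred_side == "right" else -1.0
    return 1.0 / (1.0 + np.exp(-slope * sign * azimuth_deg))

azimuths = np.linspace(-90, 90, 7)             # degrees, negative = left
left = channel_response(azimuths, "left")
right = channel_response(azimuths, "right")
opponent_signal = right - left                 # monotonic in azimuth -> decodable position
for az, sig in zip(azimuths, opponent_signal):
    print(f"azimuth {az:+6.1f} deg -> opponent signal {sig:+.2f}")
```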


2021, Vol 12 (1)
Author(s): Alexandre Boutet, Radhika Madhavan, Gavin J. B. Elias, Suresh E. Joel, Robert Gramer, ...

Abstract: Commonly used for Parkinson’s disease (PD), deep brain stimulation (DBS) produces marked clinical benefits when optimized. However, assessing the large number of possible stimulation settings (i.e., programming) requires numerous clinic visits. Here, we examine whether functional magnetic resonance imaging (fMRI) can be used to predict optimal stimulation settings for individual patients. We analyze 3 T fMRI data prospectively acquired as part of an observational trial in 67 PD patients using optimal and non-optimal stimulation settings. Clinically optimal stimulation produces a characteristic fMRI brain response pattern marked by preferential engagement of the motor circuit. Then, we build a machine learning model predicting optimal vs. non-optimal settings using the fMRI patterns of 39 PD patients with a priori clinically optimized DBS (88% accuracy). The model predicts optimal stimulation settings in unseen datasets: a priori clinically optimized and stimulation-naïve PD patients. We propose that fMRI brain responses to DBS stimulation in PD patients could represent an objective biomarker of clinical response. Upon further validation with additional studies, these findings may open the door to functional imaging-assisted DBS programming.
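Conceptually, the prediction step amounts to training a binary classifier on labeled fMRI response patterns and applying it to held-out patients. The sketch below illustrates this with simulated features and a generic logistic-regression model; the feature set, labels, and classifier are assumptions, not the study's actual pipeline.

```python
# Conceptual sketch (simulated data): classify fMRI response patterns as
# optimal vs. non-optimal stimulation, then evaluate on held-out patterns.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)
n_patients, n_features = 39, 200                        # e.g., parcel-wise response amplitudes
X = rng.standard_normal((n_patients * 2, n_features))   # one optimal + one non-optimal scan each
y = np.tile([1, 0], n_patients)                         # 1 = optimal, 0 = non-optimal
X[y == 1, :20] += 1.0                                   # inject a separable "motor circuit" signal

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
```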


2021, Vol 11 (8), pp. 960
Author(s): Mina Kheirkhah, Philipp Baumbach, Lutz Leistritz, Otto W. Witte, Martin Walter, ...

Studies investigating human brain response to emotional stimuli—particularly high-arousing versus neutral stimuli—have obtained inconsistent results. The present study was the first to combine magnetoencephalography (MEG) with the bootstrapping method to examine the whole brain and identify the cortical regions involved in this differential response. Seventeen healthy participants (11 females, aged 19 to 33 years; mean age, 26.9 years) were presented with high-arousing emotional (pleasant and unpleasant) and neutral pictures, and their brain responses were measured using MEG. When random resampling bootstrapping was performed for each participant, the greatest differences between high-arousing emotional and neutral stimuli during M300 (270–320 ms) were found to occur in the right temporo-parietal region. This finding was observed in response to both pleasant and unpleasant stimuli. The results, which may be more robust than previous studies because of bootstrapping and examination of the whole brain, reinforce the essential role of the right hemisphere in emotion processing.
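The bootstrapping step can be pictured as repeatedly resampling trials with replacement and recomputing the condition difference. The sketch below does this for simulated M300-window amplitudes; the values, trial counts, and window handling are assumptions, not the study's data or code.

```python
# Hedged illustration (simulated values): bootstrap the difference in mean
# M300-window amplitude between high-arousing and neutral trials, with a 95% CI.
import numpy as np

rng = np.random.default_rng(7)
emotional = rng.normal(1.2, 0.5, 80)   # per-trial amplitudes, arbitrary units
neutral = rng.normal(1.0, 0.5, 80)

n_boot = 5000
diffs = np.empty(n_boot)
for i in range(n_boot):
    e = rng.choice(emotional, size=emotional.size, replace=True)  # resample with replacement
    n = rng.choice(neutral, size=neutral.size, replace=True)
    diffs[i] = e.mean() - n.mean()

lo, hi = np.percentile(diffs, [2.5, 97.5])
print(f"bootstrap 95% CI for emotional - neutral: [{lo:.3f}, {hi:.3f}]")
```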

