Auditory Speech: Recently Published Documents

Total documents: 187 (five years: 41)
H-index: 30 (five years: 3)

Author(s): Oksana I. Shevchenko, Dina V. Rusanova, Oleg L. Lakhman

Introduction. The authors note insufficient knowledge of the pathophysiological mechanisms and of the cumulative role of cerebral functioning disorders in the formation of sensorineural deficit in vibration disease (VD). The study aims to identify changes in indicators of neurofunctional activity in patients with VD caused by the combined effects of local and general vibration. Materials and methods. The study involved 42 patients with VD (group I) and 35 healthy men (comparison group). The researchers used electroneuromyography, neuroenergy mapping, and neuropsychological testing. Results. Compared with the comparison group, group I showed increased latencies of N9, N10, N11, N13, N25, and N30 and a longer N10-N13 inter-peak interval (p=0.002; 0.0001; 0.0002; 0.0001; 0.0023; 0.005; 0.01, respectively), as well as increased local levels of constant potential (LCP) in the frontal, central, right parietal, occipital, and right temporal regions of the brain (p=0.037; 0.0007; 0.0005; 0.01; 0.0004; 0.014; 0.029; 0.028; 0.001, respectively). Cognitive impairments in patients with VD correspond to a mildly expressed disorder of analytical-synthetic and conceptual thinking, short-term (auditory-speech), visual-imagery, and long-term memory, dynamic praxis, joint coordination, and impressive and expressive speech. The observed association of the LCP in the left frontal lead with the latency of peak N30 and the duration of N13-N20, and with the level of analytical-synthetic thinking (r=0.51, p=0.004; r=0.50, p=0.005, respectively), indicates the pathogenetic significance, in the disturbance of neurofunctional activity, of reduced cortical activation on arrival of sensory input from the brain stem to the cortex and of increased energy exchange in the frontal part of the left hemisphere. Conclusions. A sign of impaired neurofunctional activity in VD from the combined effects of local and general vibration is a decrease in the postsynaptic activity of neurons and in the speed of signal conduction along afferent pathways at the level of the cervical spinal cord, impaired dynamic praxis and short-term (auditory-speech) memory, and increased energy metabolism in the right temporal and left frontal regions of the brain.


2021
Author(s): Patrick S. Malone, Silvio P. Eberhardt, Edward T. Auer, Richard Klein, Lynne E. Bernstein, ...

The goal of sensory substitution is to convey the information transduced by one sensory system through a novel sensory modality. One example is vibrotactile (VT) speech, in which acoustic speech is transformed into vibrotactile patterns. Despite a nearly century-long history of studying vibrotactile speech, there has been no study of the neural bases of VT speech learning. Here, we trained hearing adult participants to recognize VT speech syllables. Using fMRI, we showed that both somatosensory (left post-central gyrus) and auditory (right temporal lobe) regions acquire selectivity for VT speech stimuli following training. The right planum temporale in particular was selective for both VT and auditory speech. EEG source-estimated activity revealed temporal dynamics consistent with direct, low-latency engagement of the right temporal lobe following activation of the left post-central gyrus. Our results suggest that VT speech learning achieves integration with the auditory speech system by piggybacking onto corresponding auditory speech representations.
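The abstract does not describe the acoustic-to-vibrotactile transform itself. A common family of such transforms is envelope vocoding, in which the amplitude envelopes of band-limited speech drive the vibration amplitudes of individual tactors. The sketch below illustrates that general idea only; the band edges, carrier frequency, and update rate are illustrative assumptions, not the study's actual parameters.

```python
import numpy as np
from scipy.signal import butter, sosfilt, hilbert

def speech_to_vibrotactile(x, fs, bands=((100, 400), (400, 1000), (1000, 3000)),
                           tactor_fs=250.0, carrier_hz=60.0):
    """Map speech band envelopes onto vibrotactile carrier amplitudes.

    Hypothetical envelope-vocoder scheme: each frequency band of the
    speech signal is assigned to one tactor, whose fixed-frequency
    vibration is amplitude-modulated by that band's envelope.
    """
    channels = []
    for lo, hi in bands:
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfilt(sos, x)
        env = np.abs(hilbert(band))          # amplitude envelope of the band
        # Downsample the envelope to the tactor update rate.
        idx = np.round(np.arange(0, len(env), fs / tactor_fs)).astype(int)
        idx = idx[idx < len(env)]
        channels.append(env[idx])
    n = min(len(c) for c in channels)
    env_mat = np.stack([c[:n] for c in channels])   # (n_tactors, n_frames)
    # Each tactor vibrates at a fixed carrier, modulated by its envelope.
    t = np.arange(n) / tactor_fs
    drive = env_mat * np.sin(2 * np.pi * carrier_hz * t)
    return env_mat, drive
```

A 200 Hz tone fed through this pipeline would concentrate its energy in the first tactor channel, which is what lets listeners (or here, feelers) recover spectral information from spatial patterns of vibration.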


2021
Author(s): Daniel Senkowski, James K. Moran

Abstract
Objectives: People with schizophrenia (SZ) show deficits in auditory and audiovisual speech recognition. It is possible that these deficits are related to aberrant early sensory processing, combined with an impaired ability to use visual cues to improve speech recognition. In this electroencephalography study, we tested this by having SZ and healthy controls (HC) identify unisensory auditory and bisensory audiovisual syllables at different auditory noise levels.
Methods: SZ (N = 24) and HC (N = 21) identified one of three syllables (/da/, /ga/, /ta/) at three noise levels (no, low, high). Half the trials were unisensory auditory; the other half provided additional visual input of moving lips. Task-evoked mediofrontal N1 and P2 brain potentials time-locked to the onset of the auditory syllables were derived and related to behavioral performance.
Results: Compared with HC, SZ showed speech recognition deficits for unisensory and bisensory stimuli. These deficits were found primarily in the no-noise condition. Paralleling these observations, reduced N1 amplitudes to unisensory and bisensory stimuli in SZ were found in the no-noise condition. In HC, N1 amplitudes were positively related to speech recognition performance, whereas no such relationship was found in SZ. Moreover, no group differences in multisensory speech recognition benefits or N1 suppression effects for bisensory stimuli were observed.
Conclusion: Our study shows that reduced N1 amplitudes relate to auditory and audiovisual speech processing deficits in SZ. The findings that the amplitude effects were confined to salient speech stimuli, together with the attenuated relationship with behavioral performance compared with HC, indicate diminished decoding of auditory speech signals in SZ. Our study also revealed intact multisensory benefits in SZ, which indicates that the observed auditory and audiovisual speech recognition deficits were primarily related to aberrant auditory speech processing.
Highlights: Speech processing deficits in schizophrenia related to reduced N1 amplitudes; audiovisual suppression effect on N1 preserved in schizophrenia; schizophrenia showed weakened P2 components specifically in audiovisual processing.
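The N1/P2 derivation described in the Methods (epoching to syllable onset, baseline correction, averaging, peak measurement) follows a standard ERP recipe, which can be sketched as below. The epoch bounds and component windows are conventional textbook values, not the study's actual parameters.

```python
import numpy as np

def erp_components(eeg, fs, onsets, tmin=-0.1, tmax=0.4,
                   n1_win=(0.08, 0.12), p2_win=(0.16, 0.24)):
    """Average single-channel epochs around stimulus onsets and measure N1/P2.

    Minimal sketch of ERP derivation: cut epochs, baseline-correct on the
    pre-stimulus interval, average, then pick the extreme value in each
    conventional component window.
    """
    pre, post = int(-tmin * fs), int(tmax * fs)
    epochs = []
    for onset in onsets:                     # onsets in seconds
        s = int(onset * fs)
        if s - pre < 0 or s + post > len(eeg):
            continue                         # skip epochs that run off the recording
        ep = eeg[s - pre:s + post].astype(float)
        ep -= ep[:pre].mean()                # baseline-correct on pre-stimulus samples
        epochs.append(ep)
    erp = np.mean(epochs, axis=0)            # task-evoked average
    t = np.arange(-pre, post) / fs

    def peak(win, sign):
        seg = erp[(t >= win[0]) & (t <= win[1])]
        return seg.min() if sign < 0 else seg.max()

    n1 = peak(n1_win, -1)                    # N1: negative deflection near 100 ms
    p2 = peak(p2_win, +1)                    # P2: positive deflection near 200 ms
    return erp, t, n1, p2
```

The group comparisons in the study then amount to comparing these per-subject N1 and P2 amplitudes across SZ and HC, and correlating them with recognition performance.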


2021, Vol 23 (2), pp. 39-44
Author(s): Olga A. Toporkova, Mikhail V. Aleksandrov, Malik M. Tastanbekov

The effect of structural epilepsy on the frequency of intraoperative convulsive seizures during mapping of functionally significant areas of the cerebral cortex was assessed in resections of intracerebral neoplasms. The work is based on an analysis of intraoperative neurophysiological studies at the Polenov Neurosurgical Institute. For the period 2019-2020, 87 intraoperative mappings of eloquent cortex were carried out during resections of intracerebral neoplasms: 79 mappings of the motor cortex and 16 mappings of auditory-speech areas during awake surgery. When mapping the motor zones of the cortex, the seizure frequency was 5.1%; when mapping the auditory-speech zones during awake surgery, it was 18.75%. Dividing the cases of intraoperative convulsive seizures into two groups, seizures arising during motor mapping and seizures associated with mapping of auditory-speech zones, reflects differences in the factors that affect the excitability of the cerebral cortex. In motor mapping, stimulation occurs under general anesthesia, unlike in awake surgery. The effective intensity of stimulation in auditory-speech mapping is higher than in motor mapping, although formally the current used in motor mapping is significantly higher than in mapping the auditory-speech zones. In general, when intraoperative convulsive seizures developed, the current intensity of cortical stimulation did not exceed the average values required to stimulate functionally significant cortical zones. The presence of epileptic syndrome in patients with intracerebral tumors cannot be considered a predictor of intraoperative seizure development, either when performing motor mapping under general anesthesia or during awake surgery for mapping of motor or auditory-speech zones.


2021, Vol 15
Author(s): Junxian Wang, Jing Chen, Xiaodong Yang, Lei Liu, Chao Wu, ...

In a “cocktail party” environment, listeners can utilize prior knowledge of the content and voice of the target speech [i.e., auditory speech priming (ASP)] and perceived spatial separation to improve recognition of the target speech among masking speech. Previous studies suggest that these two unmasking cues are not processed independently. However, it is unclear whether the unmasking effects of these two cues are supported by common neural bases. In the current study, we aimed first to confirm that ASP and perceived spatial separation interactively contribute to the improvement of speech recognition in a multitalker condition, and then to investigate whether there are overlapping brain substrates underlying both unmasking effects, by introducing the two unmasking cues in a unified paradigm and using functional magnetic resonance imaging. The results showed that neural activations reflecting the unmasking effects of ASP and perceived separation partly overlapped in the following brain areas: the pars triangularis (TriIFG) and pars orbitalis of the left inferior frontal gyrus, the left inferior parietal lobule, the left supramarginal gyrus, and the bilateral putamen, all of which are involved in sensorimotor integration and speech production. Activations of the left TriIFG were correlated with the behavioral improvements caused by ASP and perceived separation. Moreover, ASP and perceived separation also enhanced the functional connectivity between the left IFG and brain areas related to the suppression of distracting speech signals: the anterior cingulate cortex and the left middle frontal gyrus, respectively. These findings therefore suggest that the motor representation of speech is important for the unmasking effects of both ASP and perceived separation, and they highlight the critical role of the left IFG in these unmasking effects in “cocktail party” environments.


2021, Vol 118 (20), pp. e2025043118
Author(s): Dawoon Choi, Ghislaine Dehaene-Lambertz, Marcela Peña, Janet F. Werker

While there is increasing acceptance that even young infants detect correspondences between heard and seen speech, the common view is that oral-motor movements related to speech production cannot influence speech perception until infants begin to babble or speak. We investigated the extent of multimodal speech influences on auditory speech perception in prebabbling infants, who have limited speech-like oral-motor repertoires. We used event-related potentials (ERPs) to examine how sensorimotor influences from the infant’s own articulatory movements impact auditory speech perception in 3-mo-old infants. In experiment 1, there were ERP discriminative responses to phonetic category changes across two phonetic contrasts (bilabial–dental /ba/-/ɗa/; dental–retroflex /ɗa/-/ɖa/) in a mismatch paradigm, indicating that infants auditorily discriminated both contrasts. In experiment 2, inhibiting infants’ own tongue-tip movements had a disruptive influence on the early ERP discriminative response to the /ɗa/-/ɖa/ contrast only. The same articulatory inhibition thus had contrasting effects on perception of the /ba/-/ɗa/ contrast, which requires different articulators (the lips vs. the tongue) during production, and the /ɗa/-/ɖa/ contrast, for which both phones use tongue-tip movement as the place of articulation. This articulatory distinction between the two contrasts plausibly accounts for the distinct influence of tongue-tip suppression on the neural responses to phonetic category change in definitively prebabbling 3-mo-old infants. The specificity of the relation between oral-motor inhibition and phonetic speech discrimination suggests a surprisingly early mapping between auditory and motor speech representations, already present in prebabbling infants.


2021, Vol 11 (1), pp. 49
Author(s): Kaylah Lalonde, Lynne A. Werner

The natural environments in which infants and children learn speech and language are noisy and multimodal. Adults rely on the multimodal nature of speech to compensate for noisy environments during speech communication. Multiple mechanisms underlie mature audiovisual benefit to speech perception, including reduced uncertainty as to when auditory speech will occur, use of correlations between the amplitude envelope of auditory and visual signals in fluent speech, and use of visual phonetic knowledge for lexical access. This paper reviews evidence regarding infants’ and children’s use of temporal and phonetic mechanisms in audiovisual speech perception benefit. The ability to use temporal cues for audiovisual speech perception benefit emerges in infancy. Although infants are sensitive to the correspondence between auditory and visual phonetic cues, the ability to use this correspondence for audiovisual benefit may not emerge until age four. A more cohesive account of the development of audiovisual speech perception may follow from a more thorough understanding of the development of sensitivity to and use of various temporal and phonetic cues.

