The interplay of top-down focal attention and the cortical tracking of speech

2019 ◽  
Author(s):  
D Lesenfants ◽  
T Francart

Abstract
Many active neuroimaging paradigms rely on the assumption that the participant sustains attention to a task. However, in practice, there will be momentary distractions, potentially influencing the results. We investigated the effect of focal attention, objectively quantified using a measure of brain signal entropy, on cortical tracking of the speech envelope. The latter is a measure of neural processing of naturalistic speech. We let participants listen to 44 minutes of natural speech, while their electroencephalogram was recorded, and quantified both entropy and cortical envelope tracking. Focal attention affected the later brain responses to speech, between 100 and 300 ms latency. By only taking into account periods with higher attention, the measured cortical speech tracking improved by 47%. This illustrates the impact of the participant's active engagement on modeling of the brain response to speech and the importance of accounting for it. Our results suggest a cortico-cortical loop that initiates during the early stages of auditory processing, then propagates through the parieto-occipital and frontal areas, and finally impacts the later-latency auditory processes in a top-down fashion. The proposed framework could be transposed to other active electrophysiological paradigms (visual, somatosensory, etc.) and help to control the impact of participants' engagement on the results.
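The attention-gating idea in this abstract can be sketched numerically. The snippet below is a minimal illustration, not the authors' pipeline: it assumes spectral entropy as the attention marker and uses synthetic data in place of real EEG and reconstructed envelopes, scoring envelope tracking as a Pearson correlation and comparing all segments against the lower-entropy half.

```python
import numpy as np

def spectral_entropy(segment):
    """Shannon entropy of the normalized power spectrum of one EEG segment."""
    psd = np.abs(np.fft.rfft(segment)) ** 2
    p = psd / psd.sum()
    return -np.sum(p * np.log2(p + 1e-12))

def tracking_score(reconstructed, envelope):
    """Pearson correlation between reconstructed and acoustic envelopes."""
    return np.corrcoef(reconstructed, envelope)[0, 1]

# Synthetic stand-ins for per-segment reconstructed envelopes and acoustics
rng = np.random.default_rng(0)
n_seg, seg_len = 40, 256
eeg = rng.standard_normal((n_seg, seg_len))
envelope = rng.standard_normal((n_seg, seg_len))

# Score each segment; keep only the half with the lowest entropy
# (taken here as a proxy for higher focal attention).
entropies = np.array([spectral_entropy(s) for s in eeg])
attentive = entropies <= np.median(entropies)

scores_all = [tracking_score(e, v) for e, v in zip(eeg, envelope)]
scores_att = [tracking_score(e, v) for e, v in zip(eeg[attentive], envelope[attentive])]
print(f"all segments: {np.mean(scores_all):.3f}, attentive only: {np.mean(scores_att):.3f}")
```

With real data, restricting the correlation to attentive periods is what produced the reported 47% improvement; with the random data above, the two means differ only by chance.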

Author(s):  
Saransh Jain ◽  
Suma Raju

Fatigue is a common yet poorly understood topic. The psychological, physiological, social, emotional, and cognitive wellbeing of a person may be affected by fatigue. Despite a century of research into the effect of fatigue on human systems, there is no concrete explanation as to how fatigue affects the perception of speech. Fatigue impairs auditory cognition, and the reduced cognitive abilities further increase mental and physical fatigue. Since cognition is markedly affected in individuals experiencing mental fatigue, its consequences are widespread. According to the top-down approach of auditory processing, there is a direct link between cognition and speech perception. Thus, in the present chapter, the influence of fatigue on perception is reviewed. It is noted that the impact of fatigue on cognition and quality of life is different for children and adults. Training in music, meditation, and exposure to more than one language are some of the measures that help to reduce the effect of fatigue and improve cognitive abilities in both children and adults.


2020 ◽  
Vol 6 (30) ◽  
pp. eaba7830
Author(s):  
Laurianne Cabrera ◽  
Judit Gervain

Speech perception is constrained by auditory processing. Although at birth infants have an immature auditory system and limited language experience, they show remarkable speech perception skills. To assess neonates’ ability to process the complex acoustic cues of speech, we combined near-infrared spectroscopy (NIRS) and electroencephalography (EEG) to measure brain responses to syllables differing in consonants. The syllables were presented in three conditions preserving (i) original temporal modulations of speech [both amplitude modulation (AM) and frequency modulation (FM)], (ii) both fast and slow AM, but not FM, or (iii) only the slowest AM (<8 Hz). EEG responses indicate that neonates can encode consonants in all conditions, even without the fast temporal modulations, similarly to adults. Yet, the fast and slow AM activate different neural areas, as shown by NIRS. Thus, the immature human brain is already able to decompose the acoustic components of speech, laying the foundations of language learning.
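Condition (iii) above, keeping only the slowest amplitude modulations, can be approximated with standard envelope extraction and low-pass filtering. The sketch below is vocoder-style processing on a toy signal, not the authors' actual stimulus pipeline; the sampling rate, carrier, and modulation frequencies are all illustrative.

```python
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

fs = 16000  # sampling rate (Hz); illustrative value
t = np.arange(0, 1.0, 1 / fs)
# Toy "syllable": a 1 kHz carrier with slow (4 Hz) and fast (40 Hz) AM
signal = (1 + 0.5 * np.sin(2 * np.pi * 4 * t)
            + 0.5 * np.sin(2 * np.pi * 40 * t)) * np.sin(2 * np.pi * 1000 * t)

# Amplitude envelope via the Hilbert transform
envelope = np.abs(hilbert(signal))

# Condition (iii): keep only the slowest AM (<8 Hz), discarding fast AM and FM
b, a = butter(4, 8 / (fs / 2), btype="low")
slow_am = filtfilt(b, a, envelope)
```

After filtering, the 40 Hz modulation is strongly attenuated while the 4 Hz modulation survives, mirroring the degraded-speech conditions used in the study.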


Author(s):  
M. S. Chafi ◽  
G. Karami ◽  
M. Ziejewski

In this paper, an integrated numerical approach is introduced to determine the human brain responses when the head is exposed to blast explosions. The procedure is based on a 3D non-linear finite element method (FEM) that simultaneously simulates the explosive detonation, shock wave propagation, and blast-brain interaction of the confronted human head. Because no experimental data on blast-head interactions have been reported, several important checkpoints must be passed before the brain responses resulting from the blast modeling can be trusted: (a) a validated human head FEM subjected to impact loading; (b) a validated free-air blast propagation model; and (c) verified blast wave-solid interactions. The simulations presented in this paper satisfy these requirements. The head model employed here has been validated against impact loadings: Chafi et al. [1] examined the head model against the intracranial pressure and brain strains under different impact loadings from the cadaveric experimental tests of Hardy et al. [2]. In another report, Chafi et al. [3] examined air-blast and blast-object simulations using Arbitrary Lagrangian Eulerian (ALE) multi-material and Fluid-Solid Interaction (FSI) formulations. The predicted blast propagation matched the experimental data very well, showing that this computational solid-fluid algorithm can accurately predict blast wave propagation in the medium and the response of a structure to blast loading. Various aspects of blast wave propagation in air, as well as when barriers such as solid walls are encountered, have been studied. With the head model included, different scenarios are assumed to capture an appropriate picture of the brain response at a constant stand-off distance of nearly 80 cm (2.62 feet) from the explosion core.
The brain response to blasts of different severities, produced by different amounts of the explosive TNT (0.0838, 0.205, and 0.5 lb), is examined. The accuracy of the modeling can provide information for designing protective equipment for the human head in hostile environments.
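As a back-of-envelope complement to the FEM results, the Hopkinson-Cranz scaled distance Z = R / W^(1/3), a standard similarity measure in blast analysis, relates stand-off distance and charge mass. The short sketch below is not part of the paper's method; it simply computes Z for the three TNT charges at the ~80 cm stand-off, showing how blast severity grows as the charge increases at a fixed distance (smaller Z means a more severe exposure).

```python
# Hopkinson-Cranz scaled distance Z = R / W**(1/3); R in meters, W in kg TNT.
LB_TO_KG = 0.453592
standoff_m = 0.80  # ~80 cm stand-off used in the simulations

for tnt_lb in (0.0838, 0.205, 0.5):
    w_kg = tnt_lb * LB_TO_KG
    z = standoff_m / w_kg ** (1 / 3)
    print(f"{tnt_lb:>6} lb TNT -> Z = {z:.2f} m/kg^(1/3)")
```

The three charges span roughly Z = 2.4 down to Z = 1.3 m/kg^(1/3), i.e., the largest charge is the most severe scenario at the same stand-off.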


2007 ◽  
Vol 363 (1493) ◽  
pp. 1023-1035 ◽  
Author(s):  
Roy D Patterson ◽  
Ingrid S Johnsrude

In this paper, we describe domain-general auditory processes that we believe are prerequisite to the linguistic analysis of speech. We discuss biological evidence for these processes and how they might relate to processes that are specific to human speech and language. We begin with a brief review of (i) the anatomy of the auditory system and (ii) the essential properties of speech sounds. Section 4 describes the general auditory mechanisms that we believe are applied to all communication sounds, and how functional neuroimaging is being used to map the brain networks associated with domain-general auditory processing. Section 5 discusses recent neuroimaging studies that explore where such general processes give way to those that are specific to human speech and language.


Author(s):  
Betina Korka ◽  
Andreas Widmann ◽  
Florian Waszak ◽  
Álvaro Darriba ◽  
Erich Schröger

Abstract
According to the ideomotor theory, action may serve to produce desired sensory outcomes. Perception has been widely described in terms of sensory predictions arising due to top-down input from higher-order cortical areas. Here, we demonstrate that the action intention results in reliable top-down predictions that modulate the auditory brain responses. We bring together several lines of research, including sensory attenuation, active oddball, and action-related omission studies: Together, the results suggest that the intention-based predictions modulate several steps in the sound processing hierarchy, from preattentive to evaluation-related processes, even when controlling for additional prediction sources (i.e., sound regularity). We propose an integrative theoretical framework, the extended auditory event representation system (AERS), a model compatible with the ideomotor theory, the theory of event coding, and predictive coding. Initially introduced to describe regularity-based auditory predictions, we argue that the extended AERS explains the effects of action intention on auditory processing while additionally allowing study of the differences and commonalities between intention- and regularity-based predictions. We thus believe that this framework could guide future research on action and perception.


2012 ◽  
Vol 25 (0) ◽  
pp. 184-185
Author(s):  
Sonja Schall ◽  
Stefan J. Kiebel ◽  
Burkhard Maess ◽  
Katharina von Kriegstein

There is compelling evidence that low-level sensory areas are sensitive to more than one modality. For example, auditory cortices respond to visual-only stimuli (Calvert et al., 1997; Meyer et al., 2010; Pekkola et al., 2005) and, conversely, visual sensory areas respond to sound sources even in auditory-only conditions (Poirier et al., 2005; von Kriegstein et al., 2008; von Kriegstein and Giraud, 2006). Currently, it is unknown what makes the brain activate modality-specific sensory areas solely in response to input of a different modality. One reason may be that such activations are instrumental for early sensory processing of the input modality, a hypothesis that is contrary to current textbook knowledge. Here we test this hypothesis by harnessing a temporally highly resolved method, i.e., magnetoencephalography (MEG), to identify the temporal response profile of visual regions in response to auditory-only voice recognition. Participants briefly learned a set of voices audio-visually, i.e., together with a talking face in an ecologically valid situation, as in daily life. Once subjects were able to recognize these now-familiar voices, we measured their brain responses using MEG. The results revealed two key mechanisms that characterize the sensory processing of familiar speakers' voices: (i) activation in the visual face-sensitive fusiform gyrus at very early auditory processing stages, i.e., only 100 ms after auditory onset, and (ii) a temporal facilitation of auditory processing (M200) that was directly associated with improved recognition performance. These findings suggest that visual areas are instrumental already during very early auditory-only processing stages and indicate that the brain uses visual mechanisms to optimize sensory processing and recognition of auditory stimuli.


2018 ◽  
Author(s):  
Eline Verschueren ◽  
Jonas Vanthornhout ◽  
Tom Francart

Abstract
Objectives: Recently, an objective measure of speech intelligibility, based on brain responses derived from the electroencephalogram (EEG), was developed using isolated Matrix sentences as a stimulus. We investigated whether this objective measure of speech intelligibility can also be used with natural speech as a stimulus, as this would be beneficial for clinical applications.
Design: We recorded the EEG in 19 normal-hearing participants while they listened to two types of stimuli: Matrix sentences and a natural story. Each stimulus was presented at different levels of speech intelligibility by adding speech-weighted noise. Speech intelligibility was assessed in two ways for both stimuli: (1) behaviorally and (2) objectively, by reconstructing the speech envelope from the EEG using a linear decoder and correlating it with the acoustic envelope. We also calculated temporal response functions (TRFs) to investigate the temporal characteristics of the brain responses in the EEG channels covering different brain areas.
Results: For both stimulus types, the correlation between the speech envelope and the reconstructed envelope increased with increasing speech intelligibility. In addition, correlations were higher for the natural story than for the Matrix sentences. Similar to the linear decoder analysis, TRF amplitudes increased with increasing speech intelligibility for both stimuli. Remarkably, although speech intelligibility remained unchanged between the no-noise and +2.5 dB SNR conditions, neural speech processing was affected by the addition of this small amount of noise: TRF amplitudes across the entire scalp decreased between 0 and 150 ms, while amplitudes between 150 and 200 ms increased in the presence of noise. TRF latency changes as a function of speech intelligibility appeared to be stimulus specific: the latency of the prominent negative peak in the early responses (50-300 ms) increased with increasing speech intelligibility for the Matrix sentences, but remained unchanged for the natural story.
Conclusions: These results show (1) the feasibility of natural speech as a stimulus for the objective measure of speech intelligibility, (2) that neural tracking of speech is enhanced using a natural story compared to Matrix sentences, and (3) that noise and the stimulus type can change the temporal characteristics of the brain responses. These results might reflect the integration of incoming acoustic features and top-down information, suggesting that the choice of the stimulus has to be considered based on the intended purpose of the measurement.
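The backward-model analysis described in this abstract, reconstructing the speech envelope from EEG with a linear decoder and correlating it with the acoustic envelope, can be sketched as follows. This is a simplified illustration on synthetic data, assuming a ridge-regression decoder over time-lagged EEG channels (in the spirit of mTRF-style toolboxes), not the authors' exact implementation; the lag count, regularization, and toy leakage model are arbitrary choices.

```python
import numpy as np

def lag_matrix(eeg, max_lag):
    """Stack copies of the EEG shifted forward by 0..max_lag samples:
    the neural response follows the stimulus, so envelope(t) is decoded
    from EEG at times t..t+max_lag."""
    n, ch = eeg.shape
    X = np.zeros((n, ch * (max_lag + 1)))
    for lag in range(max_lag + 1):
        X[:n - lag, lag * ch:(lag + 1) * ch] = eeg[lag:]
    return X

def train_decoder(eeg, envelope, max_lag=16, alpha=1.0):
    """Ridge-regression backward model mapping lagged EEG to the envelope."""
    X = lag_matrix(eeg, max_lag)
    return np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ envelope)

def envelope_correlation(eeg, envelope, weights, max_lag=16):
    """The objective measure: correlation of reconstructed vs. acoustic envelope."""
    reconstructed = lag_matrix(eeg, max_lag) @ weights
    return np.corrcoef(reconstructed, envelope)[0, 1]

# Toy data: the envelope leaks into one EEG channel with a small delay plus noise
rng = np.random.default_rng(1)
n, ch, delay = 2000, 8, 5
envelope = rng.standard_normal(n)
eeg = rng.standard_normal((n, ch)) * 0.5
eeg[delay:, 0] += envelope[:n - delay]

w = train_decoder(eeg, envelope)
r = envelope_correlation(eeg, envelope, w)
print(f"reconstruction correlation r = {r:.2f}")
```

In the study, this correlation increases with speech intelligibility; in practice the decoder would be trained and evaluated on separate data (cross-validation) rather than in-sample as in this toy example.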


Author(s):  
Abdollah Moossavi ◽  
Nasrin Gohari

Background and Aim: Researchers in the fields of psychoacoustics and electrophysiology are mostly focused on demonstrating the better and different neurophysiological performance of musicians. The present study explores the impact of music upon the auditory system and the non-auditory system, as well as the improvement of language and cognitive skills following listening to music or receiving music training. Recent Findings: Studies indicate the impact of music upon auditory processing from the cochlea to the secondary auditory cortex and other parts of the brain. Besides, the impact of music on speech perception and other cognitive processing is demonstrated. Some papers point to bottom-up and others to top-down processing, which is explained in detail. Conclusion: Listening to music and receiving music training, in the long run, create plasticity from the cochlea to the auditory cortex. Since the auditory pathway for musical sounds overlaps functionally with that for speech, music also helps improve speech perception. Both perceptual and cognitive functions are involved in this process. Music engages a large area of the brain, so it can be used as a supplement in rehabilitation programs and helps the improvement of speech and language skills.


2019 ◽  
Author(s):  
Laurianne Cabrera ◽  
Judit Gervain

Speech perception is constrained by auditory processing. Although at birth infants have an immature auditory system and limited language experience, they show remarkable speech perception skills. To assess neonates' ability to process the complex acoustic cues of speech, we combined near-infrared spectroscopy (NIRS) and electroencephalography (EEG) to measure brain responses to syllables differing in consonants. The syllables were presented in three conditions preserving (i) original temporal modulations of speech (both amplitude and frequency modulations, AM/FM), (ii) both fast and slow AM, but not FM, or (iii) only the slowest AM (<8 Hz). EEG responses indicate that neonates are able to encode consonants in all conditions, even without the fast temporal modulations, similarly to adults. Yet, the fast and slow AM activate different neural areas, as shown by NIRS. Thus, the immature human brain is already able to decompose the acoustic components of speech, laying the foundations of language learning.


Author(s):  
Anil K. Seth

Consciousness is perhaps the most familiar aspect of our existence, yet we still do not know its biological basis. This chapter outlines a biomimetic approach to consciousness science, identifying three principles linking properties of conscious experience to potential biological mechanisms. First, conscious experiences generate large quantities of information in virtue of being simultaneously integrated and differentiated. Second, the brain continuously generates predictions about the world and self, which account for the specific content of conscious scenes. Third, the conscious self depends on active inference of self-related signals at multiple levels. Research following these principles helps move from establishing correlations between brain responses and consciousness towards explanations which account for phenomenological properties, addressing what can be called the "real problem" of consciousness. The picture that emerges is one in which consciousness, mind, and life are tightly bound together, with implications for any possible future "conscious machines."

