The auditory input during sleep and wakefulness

1964 ◽  
Vol 280 (1) ◽  
pp. 89-91 ◽  
Author(s):  
W. Baust ◽  
G. Berlucchi ◽  
G. Moruzzi
2011 ◽  
Vol 105 (4) ◽  
pp. 1558-1573 ◽  
Author(s):  
Yu-Ting Mao ◽  
Tian-Miao Hua ◽  
Sarah L. Pallas

Sensory neocortex is capable of considerable plasticity after sensory deprivation or damage to input pathways, especially early in development. Although plasticity can often be restorative, sometimes novel, ectopic inputs invade the affected cortical area. Invading inputs from other sensory modalities may compromise the original function or even take over, imposing a new function and preventing recovery. Using ferrets whose retinal axons were rerouted into auditory thalamus at birth, we were able to examine the effect of varying the degree of ectopic, cross-modal input on reorganization of developing auditory cortex. In particular, we assayed whether the invading visual inputs and the existing auditory inputs competed for or shared postsynaptic targets and whether the convergence of input modalities would induce multisensory processing. We demonstrate that although the cross-modal inputs create new visual neurons in auditory cortex, some auditory processing remains. The degree of damage to auditory input to the medial geniculate nucleus was directly related to the proportion of visual neurons in auditory cortex, suggesting that the visual and residual auditory inputs compete for cortical territory. Visual neurons were not segregated from auditory neurons but shared target space even on individual target cells, substantially increasing the proportion of multisensory neurons. Thus spatial convergence of visual and auditory input modalities may be sufficient to expand multisensory representations. Together these findings argue that early, patterned visual activity does not drive segregation of visual and auditory afferents and suggest that auditory function might be compromised by converging visual inputs. These results indicate possible ways in which multisensory cortical areas may form during development and evolution. They also suggest that rehabilitative strategies designed to promote recovery of function after sensory deprivation or damage need to take into account that sensory cortex may become substantially more multisensory after alteration of its input during development.
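A reader curious how the quantification described above might look in practice could compute it roughly as follows. This is a minimal, hypothetical sketch (made-up units, thresholds, and damage index; not the authors' analysis code) that classifies recorded units as visual, auditory, or multisensory and correlates the per-animal proportion of visual neurons with an assumed index of damage to the auditory input.

```python
# Hypothetical sketch (not the authors' code): classify recorded units as
# visual, auditory, or multisensory and relate the proportion of purely
# visual neurons to an assumed per-animal index of auditory-input damage.
import numpy as np
from scipy import stats

# Each unit: (animal_id, responds_to_visual, responds_to_auditory) -- made-up data
units = [
    ("F1", True, False), ("F1", True, True), ("F1", False, True),
    ("F2", True, False), ("F2", True, False), ("F2", True, True),
    ("F3", False, True), ("F3", True, True), ("F3", False, True),
]
# Assumed damage index per animal (fraction of auditory input lost), made up
damage = {"F1": 0.4, "F2": 0.8, "F3": 0.1}

def classify(visual, auditory):
    """Label a unit by which modalities drive it."""
    if visual and auditory:
        return "multisensory"
    return "visual" if visual else "auditory"

animals = sorted(damage)
prop_visual = []
for a in animals:
    labels = [classify(v, aud) for aid, v, aud in units if aid == a]
    prop_visual.append(labels.count("visual") / len(labels))

# Correlate the damage index with the proportion of purely visual neurons
r, p = stats.pearsonr([damage[a] for a in animals], prop_visual)
print(f"r = {r:.2f}, p = {p:.3f}")
```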


BMC Biology ◽  
2021 ◽  
Vol 19 (1) ◽  
Author(s):  
Moritz Herbert Albrecht Köhler ◽  
Gianpaolo Demarchi ◽  
Nathan Weisz

Background: A long-standing debate concerns where in the processing hierarchy of the central nervous system (CNS) selective attention takes effect. In the auditory system, cochlear processes can be influenced via direct and mediated (by the inferior colliculus) projections from the auditory cortex to the superior olivary complex (SOC). Studies illustrating attentional modulations of cochlear responses have so far been limited to sound-evoked responses. The aim of the present study was to investigate intermodal (audiovisual) selective attention in humans simultaneously at the cortical and cochlear level during a stimulus-free cue-target interval.
Results: We found that cochlear activity in the silent cue-target intervals was modulated by a theta-rhythmic pattern (~6 Hz). While this pattern was present independently of attentional focus, cochlear theta activity was clearly enhanced when attending to the upcoming auditory input. At the cortical level, the classical posterior alpha and beta power enhancements were found during auditory selective attention. Interestingly, participants with a stronger release of inhibition in auditory brain regions showed a stronger attentional modulation of cochlear theta activity.
Conclusions: These results hint at a putative theta-rhythmic sampling of auditory input at the cochlear level. Furthermore, they point to an interindividually variable engagement of efferent pathways in an attentional context, linked to processes within and beyond auditory cortical regions.
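For illustration only, a theta-band (~6 Hz) modulation of a cochlear recording during a silent interval could be estimated along the lines sketched below. The sampling rate, band edges, and synthetic data are assumptions and do not reflect the authors' actual pipeline.

```python
# Minimal sketch (assumed parameters, synthetic data): estimate theta-band
# (~6 Hz) amplitude of a cochlear signal recorded during a silent interval.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 1000.0                      # assumed sampling rate in Hz
t = np.arange(0, 2.0, 1 / fs)    # a 2 s silent cue-target interval
# Synthetic stand-in for a cochlear recording: 6 Hz modulation plus noise
signal = 0.5 * np.sin(2 * np.pi * 6 * t) + np.random.randn(t.size)

def band_amplitude(x, fs, low=4.0, high=8.0, order=4):
    """Band-pass the signal and return the mean Hilbert envelope amplitude."""
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, x)
    return np.abs(hilbert(filtered)).mean()

theta_attend_auditory = band_amplitude(signal, fs)   # e.g., attend-auditory trials
print(f"theta-band amplitude: {theta_attend_auditory:.3f}")
# In practice this measure would be compared between attend-auditory and
# attend-visual trial averages across participants.
```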


2019 ◽  
Author(s):  
Raquel Serrano ◽  
Ana Pellicer-Sánchez

Combining reading with auditory input has been shown to be an effective way of supporting reading fluency and reading comprehension in a second language. Previous research has also shown that reading comprehension can be further supported by pictorial information. However, the studies conducted so far have mainly included adults or adolescents and have been based on post-reading tests that, although informative, do not reveal how learners' processing of the different sources of input in multimodal texts changes when auditory input is present, or what effect such differences might have on comprehension. The present study used eye-tracking to examine how young learners process the pictorial and textual information in a graded reader under reading-only and reading-while-listening conditions. Results showed that readers spent more time processing the text in the reading-only condition, whereas more time was spent processing the images in the reading-while-listening mode. Nevertheless, comprehension scores were similar for the readers in the two conditions. Additionally, our results suggested a significant negative relationship between the amount of time learners spent processing the text and their comprehension scores in both modes.
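As a concrete illustration of the dwell-time comparison, the sketch below summarizes total fixation time on text versus image interest areas per condition from a fixation report. The column names and data are hypothetical, not taken from the study.

```python
# Hypothetical sketch: summarise total fixation duration on text vs. image
# interest areas for reading-only (RO) and reading-while-listening (RWL) groups.
import pandas as pd

# Made-up fixation report: one row per fixation
fixations = pd.DataFrame({
    "participant":   ["p1", "p1", "p1", "p2", "p2", "p2"],
    "condition":     ["RO", "RO", "RO", "RWL", "RWL", "RWL"],
    "interest_area": ["text", "text", "image", "text", "image", "image"],
    "duration_ms":   [220, 180, 90, 200, 150, 170],
})

# Total time per participant on each interest area, then condition means
totals = (fixations
          .groupby(["condition", "participant", "interest_area"])["duration_ms"]
          .sum()
          .groupby(["condition", "interest_area"])
          .mean())
print(totals)
```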


1995 ◽  
Vol 47 (2) ◽  
pp. 105-133 ◽  
Author(s):  
N.B. Reese ◽  
E. Garcia-Rill ◽  
R.D. Skinner

2015 ◽  
Vol 37 (4) ◽  
pp. 757-780 ◽  
Author(s):  
JEONG-IM HAN ◽  
TAE-HWAN CHOI

This study examined the role of orthography in the production and storage of spoken words. Korean speakers learned novel Korean words containing potential variants of /h/, including [ɦ] and ø. All participants received the same auditory stimuli but differed in their exposure to spelling: one group was presented with the letter for ø (<ㅇ>), a second group with the letter for [ɦ] (<ㅎ>), and a third group received auditory input only. In picture-naming tasks, the participants presented with <ㅇ> produced fewer words with [ɦ] than those presented with <ㅎ>. In a spelling recall task, the participants who were not exposed to spelling displayed various types of spellings for the variants, but after exposure to spelling they began to produce the spellings as provided. These results suggest that orthographic information influences the production of words via an offline restructuring of the phonological representation.
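To illustrate the group comparison in the picture-naming task, the sketch below computes the proportion of [ɦ] productions per exposure condition from made-up response counts (not the study's data) and runs a simple chi-square test.

```python
# Hypothetical sketch: proportion of [ɦ] productions per exposure group in a
# picture-naming task, with a chi-square test across groups (made-up counts).
from scipy.stats import chi2_contingency

# Rows = exposure groups, columns = [produced [ɦ], produced ø]
counts = {
    "letter_for_null (<ㅇ>)": [12, 48],
    "letter_for_h (<ㅎ>)":    [35, 25],
    "auditory_only":          [22, 38],
}

for group, (h_voiced, null) in counts.items():
    total = h_voiced + null
    print(f"{group}: {h_voiced / total:.2f} proportion [ɦ]")

chi2, p, dof, _ = chi2_contingency(list(counts.values()))
print(f"chi-square = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
```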


2021 ◽  
Vol 15 ◽  
Author(s):  
Fabian Kiepe ◽  
Nils Kraus ◽  
Guido Hesselmann

Self-generated auditory input is perceived less loudly than the same sounds generated externally. This phenomenon, called sensory attenuation (SA), has been studied for decades and is often explained by motor-based forward models. Recent developments in SA research, however, challenge these models. We review the current state of knowledge regarding the theoretical significance of SA and its role in human behavior and functioning. Focusing on behavioral and electrophysiological results in the auditory domain, we provide an overview of the characteristics and limitations of existing SA paradigms and highlight the problem of isolating SA from other predictive mechanisms. Finally, we explore different hypotheses that attempt to explain the heterogeneous empirical findings, and discuss the impact of the predictive coding framework on this research area.


2021 ◽  
Author(s):  
Ana Pellicer-Sánchez ◽  
Anna Siyanova

The field of vocabulary research is witnessing a growing interest in the use of eye-tracking to investigate topics that have traditionally been examined with offline measures, providing new insights into the processing and learning of vocabulary. During an eye-tracking experiment, participants' eye movements are recorded while they attend to written or auditory input, resulting in a rich record of online processing behaviour. Because of its many benefits, eye-tracking is becoming a major research technique in vocabulary research. However, before this emerging trend of eye-tracking-based vocabulary research continues to proliferate, it is important to step back and reflect on what current studies have shown about the processing and learning of vocabulary, and on how the technique can be used in future research. To this end, the present paper provides a comprehensive overview of current eye-tracking research findings, covering the processing and learning of both single words and formulaic sequences. Current research gaps and potential avenues for future research are also discussed.
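For readers new to the online measures referred to here, the sketch below shows how common word-level eye-tracking measures (first fixation duration, gaze duration, total reading time) can be derived from an ordered list of fixations. The data format is an assumption rather than the output of any particular eye-tracker.

```python
# Hypothetical sketch: compute standard word-level eye-tracking measures
# (first fixation duration, gaze duration, total reading time) from fixations.
import pandas as pd

# Made-up fixation sequence, in order of occurrence
fixations = pd.DataFrame({
    "word":        ["target", "target", "other", "target"],
    "duration_ms": [210, 160, 300, 140],
})

on_target = fixations["word"] == "target"
first_fixation = fixations.loc[on_target, "duration_ms"].iloc[0]

# Gaze duration: sum of fixations on the word before it is first left
first_pass = []
for _, row in fixations.iterrows():
    if row["word"] == "target":
        first_pass.append(row["duration_ms"])
    elif first_pass:
        break
gaze_duration = sum(first_pass)

total_reading_time = fixations.loc[on_target, "duration_ms"].sum()
print(first_fixation, gaze_duration, total_reading_time)  # 210 370 510
```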


2005 ◽  
Vol 116 (1) ◽  
pp. 142-150 ◽  
Author(s):  
Sara Määttä ◽  
Pia Saavalainen ◽  
Eila Herrgård ◽  
Ari Pääkkönen ◽  
Laila Luoma ◽  
...  
