Mapping the contents of consciousness during musical imagery

2020 ◽  
Author(s):  
Mor Regev ◽  
Andrea R. Halpern ◽  
Adrian M. Owen ◽  
Aniruddh D. Patel ◽  
Robert J. Zatorre

Humans can internally represent auditory information without an external stimulus. When imagining music, how similar are unfolding neural representations to those during the original perceived experience? Participants memorized six one-minute-long musical pieces with high accuracy. Functional MRI data were collected during: 1) silent imagery of melodies to the beat of a visual metronome; 2) the same task while tapping to the beat; and 3) passive listening. During imagery, inter-subject comparison showed that melody-specific temporal response patterns were reinstated in right associative auditory cortices. When tapping accompanied imagery, the melody-specific neural patterns extended to associative cortices bilaterally. These results indicate that the specific contents of conscious experience are encoded similarly during imagery and perception in the dynamic activity of auditory cortices. Furthermore, rhythmic motion can enhance the reinstatement of neural patterns associated with the experience of complex sounds, in keeping with models of motor-to-sensory influences in auditory processing.
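The inter-subject comparison described here is commonly implemented as a leave-one-out inter-subject correlation (ISC): each subject's regional time course is correlated with the average time course of all other subjects. A minimal sketch, assuming one time course per subject for a single region; the function names and data are illustrative, not the study's actual code:

```python
# Leave-one-out inter-subject correlation (ISC) sketch.
# All names and data here are illustrative placeholders.

def pearson(x, y):
    """Pearson correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

def leave_one_out_isc(timecourses):
    """timecourses: list of per-subject time courses (equal length).
    Returns one ISC value per subject: correlation of that subject's
    time course with the mean of all other subjects' time courses."""
    iscs = []
    for i, tc in enumerate(timecourses):
        others = [t for j, t in enumerate(timecourses) if j != i]
        mean_others = [sum(vals) / len(vals) for vals in zip(*others)]
        iscs.append(pearson(tc, mean_others))
    return iscs
```

High ISC during imagery, computed against other subjects' perception or imagery runs, is what licenses the claim that melody-specific temporal patterns were "reinstated" in auditory cortex.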

2004 ◽  
Vol 91 (1) ◽  
pp. 136-151 ◽  
Author(s):  
Sarah M. N. Woolley ◽  
John H. Casseday

The avian nucleus mesencephalicus lateralis, pars dorsalis (MLd) is the auditory midbrain nucleus in which multiple parallel inputs from the lower brainstem converge and through which most auditory information passes to reach the forebrain. Auditory processing in the MLd has not been investigated in songbirds. We studied the tuning properties of single MLd neurons in adult male zebra finches. Pure tones were used to examine tonotopy, temporal response patterns, frequency coding, intensity coding, spike latencies, and duration tuning. Most neurons had no spontaneous activity. The tonotopy of MLd is like that of other birds and mammals; characteristic frequencies (CFs) increase in a dorsal-to-ventral direction. Four major response patterns were found: 1) onset (49% of cells); 2) primary-like (20%); 3) sustained (19%); and 4) primary-like with notch (12%). CFs ranged between 0.9 and 6.1 kHz, matching the zebra finch hearing range and the power spectrum of song. Tuning curves were generally V-shaped, but complex curves, with multiple peaks or noncontiguous excitatory regions, were observed in 22% of cells. Rate-level functions indicated that 51% of nononset cells showed monotonic relationships between spike rate and sound level. Other cells showed low saturation or nonmonotonic responses. Spike latencies, measured at CF, ranged from 4 to 40 ms and generally decreased with increasing sound pressure level (SPL), although paradoxical latency shifts were observed in 16% of units. For onset cells, changes in SPL produced smaller latency changes than for cells showing other response types. Results suggest that auditory midbrain neurons may be particularly suited for processing temporally complex signals with a high degree of precision.


2019 ◽  
pp. 261-269
Author(s):  
Priya Santhanam ◽  
Anna Meehan ◽  
William W. Orrison ◽  
Steffanie H. Wilson ◽  
...  

Auditory processing disorders are common following mild traumatic brain injury (mTBI), but the neurocircuitry involved is not well understood. The present study used functional MRI to examine auditory cortex activation patterns during a passive listening task in a normative population and in mTBI patients with and without clinical central auditory processing deficits (APD) as defined by the SCAN-3:A clinical battery. Patients with mTBI had overall patterns of lower auditory cortex activation during the listening tasks as compared to normative controls. A significant lateralization pattern (pairwise t-test; p<0.05) was observed in normative controls and in those with mTBI and APD during single-side stimulation. Additionally, baseline connectivity between left and right auditory cortices was lower in mTBI patients than in controls (p=0.01) and significantly reduced in the mTBI-with-APD group (p=0.008). A correlation was also observed between bilateral task-related activation and the competing-words subscore of the SCAN-3:A. These findings suggest that the passive listening task is well suited to probe auditory function in military personnel with an mTBI diagnosis. Further, the study supports the use of multiple approaches for detecting and assessing central auditory deficits to improve monitoring of short- and long-term outcomes.
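The lateralization comparison described above is a standard paired t-test: per-subject left- vs right-hemisphere activation values are differenced and the mean difference is tested against zero. A minimal sketch of the statistic, with made-up data; in practice a library routine such as SciPy's paired t-test would supply the p-value as well:

```python
import math

# Paired t statistic for per-subject left vs right activation values.
# Data and variable names are illustrative, not the study's measurements.

def paired_t(left, right):
    """Return the paired t statistic (n-1 degrees of freedom) for two
    equal-length samples of per-subject measurements."""
    diffs = [l - r for l, r in zip(left, right)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance
    return mean / math.sqrt(var / n)
```

The t statistic is then compared against the t distribution with n-1 degrees of freedom to obtain the reported p<0.05 threshold.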


1996 ◽  
Vol 8 (1) ◽  
pp. 29-46 ◽  
Author(s):  
Robert J. Zatorre ◽  
Andrea R. Halpern ◽  
David W. Perry ◽  
Ernst Meyer ◽  
Alan C. Evans

Neuropsychological studies have suggested that imagery processes may be mediated by neuronal mechanisms similar to those used in perception. To test this hypothesis, and to explore the neural basis for song imagery, 12 normal subjects were scanned using the water bolus method to measure cerebral blood flow (CBF) during the performance of three tasks. In the control condition subjects saw pairs of words on each trial and judged which word was longer. In the perceptual condition subjects also viewed pairs of words, this time drawn from a familiar song; simultaneously they heard the corresponding song, and their task was to judge the change in pitch of the two cued words within the song. In the imagery condition, subjects performed precisely the same judgment as in the perceptual condition, but with no auditory input. Thus, to perform the imagery task correctly an internal auditory representation must be accessed. Paired-image subtraction of the resulting pattern of CBF, together with matched MRI for anatomical localization, revealed that both perceptual and imagery tasks produced similar patterns of CBF changes, as compared to the control condition, in keeping with the hypothesis. More specifically, both perceiving and imagining songs are associated with bilateral neuronal activity in the secondary auditory cortices, suggesting that processes within these regions underlie the phenomenological impression of imagined sounds. Other CBF foci elicited in both tasks include areas in the left and right frontal lobes and in the left parietal lobe, as well as the supplementary motor area. This latter region implicates covert vocalization as one component of musical imagery. Direct comparison of imagery and perceptual tasks revealed CBF increases in the inferior frontal polar cortex and right thalamus. We speculate that this network of regions may be specifically associated with retrieval and/or generation of auditory information from memory.


1988 ◽  
Vol 33 (12) ◽  
pp. 1103-1103
Author(s):  
No authorship indicated

Author(s):  
Laura Hurley

The inferior colliculus (IC) receives prominent projections from centralized neuromodulatory systems. These systems include extra-auditory clusters of cholinergic, dopaminergic, noradrenergic, and serotonergic neurons. Although these modulatory sites are not explicitly part of the auditory system, they receive projections from primary auditory regions and are responsive to acoustic stimuli. This bidirectional influence suggests the existence of auditory-modulatory feedback loops. A characteristic of neuromodulatory centers is that they integrate inputs from anatomically widespread and functionally diverse sets of brain regions. This connectivity gives neuromodulatory systems the potential to import information into the auditory system on situational variables that accompany acoustic stimuli, such as context, internal state, or experience. Once released, neuromodulators functionally reconfigure auditory circuitry through a variety of receptors expressed by auditory neurons. In addition to shaping ascending auditory information, neuromodulation within the IC influences behaviors that arise subcortically, such as prepulse inhibition of the startle response. Neuromodulatory systems therefore provide a route for integrative behavioral information to access auditory processing from its earliest levels.


2021 ◽  
pp. 174702182199003
Author(s):  
Andy J Kim ◽  
David S Lee ◽  
Brian A Anderson

Previously reward-associated stimuli have consistently been shown to involuntarily capture attention in the visual domain. Although previously reward-associated but currently task-irrelevant sounds have also been shown to interfere with visual processing, it remains unclear whether such stimuli can interfere with the processing of task-relevant auditory information. To address this question, we modified a dichotic listening task to measure interference from task-irrelevant but previously reward-associated sounds. In a training phase, participants were simultaneously presented with a spoken letter and number in different auditory streams and learned to associate the correct identification of each of three letters with high, low, and no monetary reward, respectively. In a subsequent test phase, participants were again presented with the same auditory stimuli but were instead instructed to report the number while ignoring spoken letters. In both the training and test phases, response time measures demonstrated that attention was biased in favour of the auditory stimulus associated with high value. Our findings demonstrate that attention can be biased towards learned reward cues in the auditory domain, interfering with goal-directed auditory processing.


1992 ◽  
Vol 336 (1278) ◽  
pp. 295-306 ◽  

The past 30 years have seen a remarkable development in our understanding of how the auditory system - particularly the peripheral system - processes complex sounds. Perhaps the most significant advance has been our understanding of the mechanisms underlying auditory frequency selectivity and their importance for normal and impaired auditory processing. Physiologically vulnerable cochlear filtering can account for many aspects of our normal and impaired psychophysical frequency selectivity, with important consequences for the perception of complex sounds. For normal hearing, remarkable mechanisms in the organ of Corti, involving enhancement of mechanical tuning (in mammals probably by feedback of electromechanically generated energy from the hair cells), produce exquisite tuning, reflected in the tuning properties of cochlear nerve fibres. Recent comparisons of physiological (cochlear nerve) and psychophysical frequency selectivity in the same species indicate that the ear's overall frequency selectivity can be accounted for by this cochlear filtering, at least in bandwidth terms. Because this cochlear filtering is physiologically vulnerable, it deteriorates in deleterious conditions of the cochlea - hypoxia, disease, drugs, noise overexposure, mechanical disturbance - and is reflected in impaired psychophysical frequency selectivity. This is a fundamental feature of sensorineural hearing loss of cochlear origin, and is of diagnostic value. This cochlear filtering, particularly as reflected in the temporal patterns of cochlear fibres to complex sounds, is remarkably robust over a wide range of stimulus levels. Furthermore, cochlear filtering properties are a prime determinant of the 'place' and 'time' coding of frequency at the cochlear nerve level, both of which appear to be involved in pitch perception. The problem of how the place and time coding of complex sounds is effected over the ear's remarkably wide dynamic range is briefly addressed.
In the auditory brainstem, particularly the dorsal cochlear nucleus, inhibitory mechanisms are responsible for enhancing the spectral and temporal contrasts in complex sounds. These mechanisms are now being dissected neuropharmacologically. At the cortical level, mechanisms are evident that are capable of abstracting biologically relevant features of complex sounds. Fundamental studies of how the auditory system encodes and processes complex sounds are vital to promising recent applications in the diagnosis and rehabilitation of the hearing impaired.


Author(s):  
Wessam Mostafa Essawy

Background: Amblyaudia is a weakness in the listener's binaural processing of auditory information. Subjects with amblyaudia also demonstrate binaural integration deficits and may display similar patterns in their evoked responses in terms of latency and amplitude. The purpose of this study was to identify the presence of amblyaudia in a population of young children and to measure mismatch negativity (MMN), P300, and cortical auditory evoked potentials (CAEPs) in those individuals.

Methods: Subjects were divided into two groups: a control group of 20 normal-hearing subjects with normal developmental milestones and normal speech development, and a study group (GII) of 50 subjects with central auditory processing disorders (CAPDs) diagnosed by central auditory screening tests.

Results: Using dichotic tests, including the dichotic digits test (DDT) and the competing sentence test (CST), cases were classified as normal, dichotic dysaudia, amblyaudia, or amblyaudia plus (40%, 14%, 38%, and 8%, respectively). Using event-related potentials, we found that P300 and MMN are more specific in detecting neurocognitive dysfunction related to the allocation of attentional resources and immediate memory in these cases.

Conclusions: Amblyaudia is present in cases of central auditory processing disorders (CAPDs), and event-related potentials are an objective tool for diagnosis, prognosis, and follow-up after rehabilitation.


2011 ◽  
Vol 105 (1) ◽  
pp. 188-199 ◽  
Author(s):  
Naoya Itatani ◽  
Georg M. Klump

It has been suggested that successively presented sounds that are perceived as separate auditory streams are represented by separate populations of neurons. Mostly, spectral separation in different peripheral filters has been identified as the cue for segregation. However, stream segregation based on temporal cues is also possible without spectral separation. Here we presented sequences of ABA- triplet stimuli providing only temporal cues to neurons in the European starling auditory forebrain. A and B sounds (125 ms duration) were harmonic complexes (fundamentals 100, 200, or 400 Hz; center frequency and bandwidth chosen to fit the neurons' tuning characteristics) with identical amplitude spectra but different phase relations between components (cosine, alternating, or random phase), presented at different rates. Differences in both rate responses and temporal response patterns of the neurons when stimulated with harmonic complexes with different phase relations provide first evidence for a mechanism allowing a separate neural representation of such stimuli. Recording sites responding at frequencies >1 kHz showed enhanced rate and temporal differences compared with those responding at lower frequencies. These results demonstrate a neural correlate of streaming by temporal cues due to the variation of phase that shows striking parallels to observations in previous psychophysical studies.
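The key stimulus manipulation is that cosine-, alternating-, and random-phase harmonic complexes share the same amplitude spectrum but differ in temporal fine structure. A minimal sketch of how such a complex could be synthesized; the sample rate, duration, and harmonic count are placeholder values, not the study's exact parameters:

```python
import math
import random

# Illustrative synthesis of a harmonic complex with a chosen phase
# relation. Parameters are placeholders for the kind of stimulus
# described (identical amplitude spectra, different component phases).

def harmonic_complex(f0=100.0, n_harmonics=10, dur=0.125, sr=16000,
                     phase="cosine", seed=0):
    rng = random.Random(seed)
    if phase == "cosine":
        # all components in cosine phase
        phases = [0.0] * n_harmonics
    elif phase == "alternating":
        # odd harmonics in cosine phase, even harmonics shifted by pi/2
        phases = [0.0 if k % 2 == 0 else math.pi / 2
                  for k in range(n_harmonics)]
    elif phase == "random":
        # each component gets an independent random starting phase
        phases = [rng.uniform(0.0, 2.0 * math.pi)
                  for _ in range(n_harmonics)]
    else:
        raise ValueError("unknown phase condition")
    n = int(dur * sr)
    # sum equal-amplitude harmonics of f0; amplitude spectrum is
    # identical across phase conditions, only the waveform changes
    return [sum(math.cos(2.0 * math.pi * (k + 1) * f0 * t / sr + phases[k])
                for k in range(n_harmonics))
            for t in range(n)]
```

Because only the phases differ, peripheral filters see the same long-term spectrum across conditions, so any neural or perceptual segregation must rest on the differing temporal envelopes within each filter channel.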

