Modulation of Auditory Motion Processing by Visual Motion

2014 ◽  
Vol 28 (2) ◽  
pp. 82-100 ◽  
Author(s):  
Stephan Getzmann ◽  
Jörg Lewald

Neurophysiological findings have suggested that auditory and visual motion information is integrated at an early stage of auditory cortical processing, starting already in primary auditory cortex. Here, the effect of visual motion on the processing of auditory motion was investigated by employing electrotomography in combination with free-field sound motion. A delayed-motion paradigm was used, in which the onset of motion was delayed relative to the onset of an initially stationary stimulus. The results indicated that activity related to the motion-onset response, a neurophysiological correlate of auditory motion processing, interacts with the processing of visual motion at quite early stages of auditory analysis, in terms of both the timing and the location of cortical processing. A modulation of auditory motion processing by concurrent visual motion was found as early as about 170 ms after motion onset (cN1 component) in the regions of primary auditory cortex and posterior superior temporal gyrus: incongruent visual motion enhanced the auditory motion-onset response in auditory regions ipsilateral to the sound motion stimulus, thus reducing the pattern of contralaterality observed with unimodal auditory stimuli. No modulation was found in parietal cortex, nor around 250 ms after motion onset (cP2 component) in any auditory region of interest. These findings may reflect the integration of auditory and visual motion information in low-level areas of the auditory cortical system at relatively early points in time.

2003 ◽  
Vol 14 (4) ◽  
pp. 357-361 ◽  
Author(s):  
Jean Vroomen ◽  
Beatrice de Gelder

In this study, we show that the contingent auditory motion aftereffect is strongly influenced by visual motion information. During an induction phase, participants listened to rightward-moving sounds with falling pitch alternated with leftward-moving sounds with rising pitch (or vice versa). Auditory aftereffects (i.e., a shift in the psychometric function for unimodal auditory motion perception) were larger when a visual stimulus moved in the same direction as the sound than when no visual stimulus was presented. When the visual stimulus moved in the opposite direction, aftereffects were reversed and thus became contingent upon visual motion. When visual motion was combined with a stationary sound, no aftereffect was observed. These findings indicate that there are strong perceptual links between the visual and auditory motion-processing systems.
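The aftereffect here is quantified as a shift of the psychometric function for auditory motion direction judgments. A minimal sketch of that measure, assuming a logistic psychometric function and purely hypothetical post-adaptation response proportions (none of the numbers below come from the study, and the fitting method is a simple grid search, not the authors' procedure):

```python
import math

def psychometric(x, mu, sigma):
    # Logistic psychometric function: probability of a "rightward"
    # response for stimulus displacement x (positive = rightward motion).
    return 1.0 / (1.0 + math.exp(-(x - mu) / sigma))

def fit_mu(stimuli, p_right, sigma=1.0):
    # Grid search for the point of subjective equality (PSE): the
    # displacement judged as stationary. An aftereffect shows up as
    # a shift of mu away from zero.
    candidates = [i / 100.0 for i in range(-300, 301)]
    def sse(mu):
        return sum((psychometric(x, mu, sigma) - p) ** 2
                   for x, p in zip(stimuli, p_right))
    return min(candidates, key=sse)

# Hypothetical post-adaptation data: after adapting to rightward sound
# motion, physically stationary sounds tend to be heard as moving left,
# so the fitted curve shifts rightward (mu > 0).
stimuli = [-2.0, -1.0, 0.0, 1.0, 2.0]
p_right = [0.05, 0.15, 0.30, 0.70, 0.95]
pse = fit_mu(stimuli, p_right)
print(round(pse, 2))
```

Comparing the PSE fitted before and after induction gives the size of the aftereffect; a reversed aftereffect, as reported for opposite-direction visual motion, would correspond to the shift changing sign.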


2020 ◽  
Author(s):  
A. Gurtubay-Antolin ◽  
C. Battal ◽  
C. Maffei ◽  
M. Rezk ◽  
S. Mattioni ◽  
...  

ABSTRACT
In humans, the occipital middle-temporal region (hMT+/V5) specializes in the processing of visual motion, while the Planum Temporale (hPT) specializes in auditory motion processing. It has been hypothesized that these regions might communicate directly to achieve fast and optimal exchange of multisensory motion information. In this study, we investigated for the first time in humans the existence of direct white matter connections between visual and auditory motion-selective regions, using a combined functional- and diffusion-MRI approach. We found reliable evidence supporting the existence of direct white matter connections between individually and functionally defined hMT+/V5 and hPT. We show that projections between hMT+/V5 and hPT do not overlap with large white matter bundles such as the Inferior Longitudinal Fasciculus (ILF) or the Inferior Frontal Occipital Fasciculus (IFOF). Moreover, we did not find evidence for the existence of reciprocal projections between the fusiform face area and hPT, supporting the functional specificity of the hMT+/V5 – hPT connections. Finally, evidence supporting the existence of hMT+/V5 – hPT connections was corroborated in a large sample of participants (n=114) from the Human Connectome Project. Altogether, this study provides the first evidence supporting the existence of direct occipito-temporal projections between hMT+/V5 and hPT, which may support the exchange of motion information between functionally specialized auditory and visual regions, and which we propose to name the middle (or motion) occipito-temporal track (MOTT).


2020 ◽  
Author(s):  
Stefania Benetti ◽  
Joshua Zonca ◽  
Ambra Ferrari ◽  
Mohamed Rezk ◽  
Giuseppe Rabini ◽  
...  

Abstract
In early deaf individuals, the auditory-deprived temporal brain regions become engaged in visual processing. In our study we further tested the hypothesis that intrinsic functional specialization guides the expression of cross-modal responses in the deprived auditory cortex. We used functional MRI to characterize the brain response to horizontal, radial, and stochastic visual motion in early deaf and hearing individuals matched for the use of oral or sign language. Visual motion elicited an enhanced response in the ‘deaf’ mid-lateral planum temporale, a region selective for auditory motion, as demonstrated by a separate auditory motion localizer in hearing people. Moreover, multivariate pattern analysis revealed that this reorganized temporal region showed enhanced decoding of motion categories in the deaf group, while the visual motion-selective region hMT+/V5 showed reduced decoding when compared to hearing people. Dynamic Causal Modelling revealed that the ‘deaf’ motion-selective temporal region shows a specific increase in its functional interactions with hMT+/V5 and is now part of a large-scale visual motion-selective network. In addition, we observed preferential responses to radial, compared to horizontal, visual motion in the ‘deaf’ right superior temporal cortex, a region that also shows a preferential response to approaching/receding sounds in the hearing brain. Overall, our results suggest that the early experience of auditory deprivation interacts with intrinsic constraints and triggers a large-scale reallocation of computational load between auditory and visual brain regions that typically support the multisensory processing of motion information.
Highlights
- Auditory motion-sensitive regions respond to visual motion in the deaf
- Reorganized auditory cortex can discriminate between visual motion trajectories
- Part of the deaf auditory cortex shows preference for in-depth visual motion
- Deafness might lead to computational reallocation between auditory/visual regions


2000 ◽  
Vol 84 (3) ◽  
pp. 1453-1463 ◽  
Author(s):  
Jos J. Eggermont

Responses of single- and multi-units in primary auditory cortex were recorded for gap-in-noise stimuli with different durations of the leading noise burst. Both firing-rate and inter-spike-interval representations were evaluated. The minimum detectable gap decreased in exponential fashion with the duration of the leading burst, reaching an asymptote for durations of 100 ms and longer. Despite the fact that the leading and trailing noise bursts had the same frequency content, the dependence on leading-burst duration was correlated with psychophysical estimates of across-frequency-channel (different frequency content of leading and trailing bursts) gap thresholds in humans. The duration of the leading burst plus that of the gap was represented in the all-order inter-spike-interval histograms of cortical neurons. The recovery functions for cortical neurons could be modeled on the basis of fast synaptic depression and the after-hyperpolarization produced by the onset response to the leading noise burst. This suggests that the minimum gap representation in the firing pattern of neurons in primary auditory cortex, and minimum gap detection in behavioral tasks, are largely determined by properties intrinsic to those cells, or potentially to subcortical cells.
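The exponential dependence of the minimum detectable gap on leading-burst duration described above can be sketched as a simple first-order recovery curve; the parameter values below (starting gap, asymptote, time constant) are illustrative assumptions, not values fitted to the study's data:

```python
import math

def min_detectable_gap(lead_ms, g0=40.0, g_inf=10.0, tau=30.0):
    # Minimum detectable gap (ms) as an exponential function of the
    # duration of the leading noise burst (ms): starts near g0 for very
    # short leading bursts and asymptotes at g_inf for long ones,
    # consistent with recovery from onset-driven synaptic depression
    # and after-hyperpolarization. All parameters are hypothetical.
    return g_inf + (g0 - g_inf) * math.exp(-lead_ms / tau)

# Gap thresholds shrink as the leading burst lengthens:
for lead in (5, 20, 100, 500):
    print(lead, round(min_detectable_gap(lead), 1))
```

With these placeholder parameters, the curve is essentially flat past a leading-burst duration of about 100 ms, mirroring the asymptotic behavior the abstract reports.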


2018 ◽  
Author(s):  
Ceren Battal ◽  
Mohamed Rezk ◽  
Stefania Mattioni ◽  
Jyothirmayi Vadlamudi ◽  
Olivier Collignon

ABSTRACT
The ability to compute the location and direction of sounds is a crucial perceptual skill for interacting efficiently with dynamic environments. How the human brain implements spatial hearing is, however, poorly understood. In our study, we used fMRI to characterize the brain activity of male and female humans listening to sounds moving leftward, rightward, upward, and downward, as well as to static sounds. Whole-brain univariate results contrasting moving and static sounds varying in their location revealed a robust functional preference for auditory motion in the bilateral human Planum Temporale (hPT). Using an independently localized hPT, we show that this region contains information about auditory motion directions and, to a lesser extent, sound source locations. Moreover, hPT showed an axis-of-motion organization reminiscent of the functional organization of the middle-temporal cortex (hMT+/V5) for vision. Importantly, whereas motion direction and location rely on partially shared pattern geometries in hPT, as demonstrated by successful cross-condition decoding, the responses elicited by static and moving sounds were nevertheless significantly distinct. Altogether, our results demonstrate that hPT codes for both auditory motion and location, but that the underlying neural computation linked to motion processing is more reliable and partially distinct from the one supporting sound source location.
SIGNIFICANCE STATEMENT
In comparison to what we know about visual motion, little is known about how the brain implements spatial hearing. Our study reveals that motion directions and sound source locations can be reliably decoded in the human Planum Temporale (hPT) and that they rely on partially shared pattern geometries. Our study therefore sheds important new light on how computing the location or direction of sounds is implemented in the human auditory cortex, by showing that those two computations rely on partially shared neural codes. Furthermore, our results show that the neural representation of moving sounds in hPT follows a “preferred axis of motion” organization, reminiscent of the coding mechanisms typically observed in the occipital hMT+/V5 region for computing visual motion.


1999 ◽  
Vol 81 (5) ◽  
pp. 2570-2581 ◽  
Author(s):  
Jos J. Eggermont

Neural correlates of gap detection in three auditory cortical fields in the cat. Minimum detectable gaps in noise in humans are independent of the position of the gap, whereas in cat primary auditory cortex (AI) they are position dependent. The position dependence in other cortical areas is not known, and examining it may resolve this contrast. This study presents minimum detectable gap-in-noise values for which single-unit (SU) and multiunit (MU) recordings and local field potentials (LFPs) show an onset response to the noise after the gap. The gap, which varied in duration between 5 and 70 ms, was preceded by a noise burst of either 5 ms (early gap) or 500 ms (late gap) duration. In 10 cats, simultaneous recordings were made with one electrode each in AI, the anterior auditory field (AAF), and secondary auditory cortex (AII). In nine additional cats, two electrodes were inserted in AI and one in AAF. Minimum detectable gaps based on SU, MU, or LFP data in each cortical area were the same. In addition, very similar minimum early-gap values were found in all three areas (means, 36.1–41.7 ms). The minimum late-gap values were also similar in AI and AII (means, 11.1 and 11.7 ms), whereas AAF showed significantly larger minimum late-gap durations (mean, 21.5 ms). For intensities >35 dB SPL, distributions of minimum early-gap durations in AAF and AII had modal values at ∼45 ms. In AI, the distribution was more uniform. Distributions for minimum late-gap duration were skewed toward low values (mode at 5 ms), but high values (up to 60 ms) were found infrequently as well. A small fraction of units showed a response after the gap only for early-gap durations <20 ms. In AI and AII, the mean minimum early- and late-gap durations decreased significantly with increasing characteristic frequency (CF) of the neuron, whereas the lower boundary for the minimum early gap was CF independent.
The findings suggest that human within-perceptual-channel gap detection, which shows no dependence of the minimum detectable gap on the duration of the leading noise burst, is likely based on the lower envelope of the distribution of neural minimum gap values of units in AI and AAF. In contrast, across-perceptual-channel gap detection, which shows a decreasing minimum detectable gap with increasing duration of the leading noise burst, is likely based on the comparison of on-responses from populations of neurons that converge on units in AII.


2001 ◽  
Vol 86 (5) ◽  
pp. 2616-2620 ◽  
Author(s):  
Xiaoqin Wang ◽  
Siddhartha C. Kadia

A number of studies in various species have demonstrated that natural vocalizations generally produce stronger neural responses than do their time-reversed versions. The majority of neurons in the primary auditory cortex (A1) of marmoset monkeys respond more strongly to natural marmoset vocalizations than to time-reversed vocalizations. However, it was unclear whether such differences in neural responses were simply due to the difference between the acoustic structures of natural and time-reversed vocalizations, or whether they also resulted from a difference in the behavioral relevance of the two types of stimuli. To address this issue, we compared neural responses to natural and time-reversed marmoset twitter calls in A1 of cats with those obtained from A1 of marmosets using identical stimuli. The preference for natural marmoset twitter calls demonstrated in marmoset A1 was absent in cat A1. While both cortices responded approximately equally to time-reversed twitter calls, marmoset A1 responded much more strongly to natural twitter calls than did cat A1. This differential representation of marmoset vocalizations in the two cortices suggests that experience-dependent and possibly species-specific mechanisms are involved in cortical processing of communication sounds.


1999 ◽  
Vol 81 (5) ◽  
pp. 2075-2087 ◽  
Author(s):  
Daryl E. Doan ◽  
James C. Saunders

Sensitivity to simulated directional sound motion in the rat primary auditory cortex. This paper examines neuron responses in rat primary auditory cortex (AI) during sound stimulation of the two ears designed to simulate sound motion in the horizontal plane. The simulated sound motion was synthesized from mathematical equations that generated dynamic changes in interaural phase, intensity, and Doppler shifts at the two ears. The simulated sounds were based on moving sources in the right frontal horizontal quadrant. Stimuli consisted of three circumferential segments between 0 and 30°, 30 and 60°, and 60 and 90° and four radial segments at 0, 30, 60, and 90°. The constant velocity portion of each segment was 0.84 m long. The circumferential segments and center of the radial segments were calculated to simulate a distance of 2 m from the head. Each segment had two trajectories that simulated motion in both directions, and each trajectory was presented at two velocities. Young adult rats were anesthetized, the left primary auditory cortex was exposed, and microelectrode recordings were obtained from sound responsive cells in AI. All testing took place at a tonal frequency that most closely approximated the best frequency of the unit at a level 20 dB above the tuning curve threshold. The results were presented on polar plots that emphasized the two directions of simulated motion for each segment rather than the location of sound in space. The trajectory exhibiting a “maximum motion response” could be identified from these plots. “Neuron discharge profiles” within these trajectories were used to demonstrate neuron activity for the two motion directions. Cells were identified that clearly responded to simulated uni- or multidirectional sound motion (39%), that were sensitive to sound location only (19%), or that were sound driven but insensitive to our location or sound motion stimuli (42%). 
The results demonstrated the capacity of neurons in rat auditory cortex to selectively process dynamic stimulus conditions representing simulated motion on the horizontal plane. Our data further show that some cells were responsive to location along the horizontal plane but not sensitive to motion. Cells sensitive to motion, however, also responded best to the moving sound at a particular location within the trajectory. It would seem that the mechanisms underlying sensitivity to sound location as well as direction of motion converge on the same cell.
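The dynamic interaural cues mentioned above (interaural phase/time, intensity, and Doppler shift) can be sketched with textbook spherical-head approximations; the formulas and constants below are generic illustrations, not the actual synthesis equations used in the study:

```python
import math

SPEED_OF_SOUND = 343.0   # m/s
HEAD_RADIUS = 0.09       # m, spherical-head approximation

def interaural_cues(azimuth_deg):
    # Woodworth-style spherical-head approximations for a far source:
    # interaural time difference (s) and a crude sinusoidal interaural
    # level difference (dB). Both are illustrative, not the study's model.
    az = math.radians(azimuth_deg)
    itd = (HEAD_RADIUS / SPEED_OF_SOUND) * (az + math.sin(az))
    ild = 10.0 * math.sin(az)
    return itd, ild

def doppler_factor(radial_velocity):
    # Frequency-scaling factor for a source moving with the given radial
    # velocity (m/s, positive = receding) relative to a static listener.
    return SPEED_OF_SOUND / (SPEED_OF_SOUND + radial_velocity)

# Sample the cues along a circumferential segment (0 deg to 30 deg):
for az in (0, 10, 20, 30):
    itd, ild = interaural_cues(az)
    print(az, round(itd * 1e6), round(ild, 1))  # azimuth, ITD (us), ILD (dB)
```

Sweeping the azimuth over time yields the dynamic cue trajectories; note that motion along a circumferential segment keeps the distance (here 2 m) constant, so the radial velocity, and hence the Doppler shift, arises mainly on the radial segments.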


2003 ◽  
Vol 90 (4) ◽  
pp. 2387-2401 ◽  
Author(s):  
Arnaud J. Noreña ◽  
Masahiko Tomita ◽  
Jos J. Eggermont

Here we present the changes in cortical activity occurring within a few hours after a 1-h exposure to a 120-dB SPL pure tone (5 or 6 kHz). The changes in the primary auditory cortex of 16 ketamine-anesthetized cats were assessed by recording, with two 8-microelectrode arrays, from the same multiunit clusters before and after the trauma. The exposure resulted in a peripheral threshold increase that stabilized after a few hours at an average of 40 dB in the frequency range of 6–32 kHz, as measured by the auditory brain stem response. The trauma induced a shift in characteristic frequency toward lower frequencies, an emergence of new responses, a broadening of the tuning curves, and an increase in the maximum of driven discharges. In addition, the onset response after the trauma was of shorter duration than before the trauma. The results suggest the involvement of both a decrease and an increase in inhibition. They are discussed in terms of changes in central inhibition and their implications for tonotopic-map plasticity.


2009 ◽  
Vol 198 (2-3) ◽  
pp. 391-402 ◽  
Author(s):  
Mikhail Zvyagintsev ◽  
Andrey R. Nikolaev ◽  
Heike Thönnessen ◽  
Olga Sachs ◽  
Jürgen Dammers ◽  
...  
