Choice-related activity and neural encoding in primary auditory cortex and lateral belt during feature selective attention

Author(s):  
Jennifer Leigh Mohn ◽  
Joshua D Downer ◽  
Kevin N. O'Connor ◽  
Jeffrey Scott Johnson ◽  
Mitchell L Sutter

Selective attention is necessary to sift through, form a coherent percept of, and make behavioral decisions about the vast amount of information present in most sensory environments. How and where selective attention is employed in cortex, and how this perceptual information then informs the relevant behavioral decisions, are still not well understood. Studies probing selective attention and decision making in visual cortex have been enlightening as to how sensory attention might work in that modality; whether similar mechanisms are employed in auditory attention is not yet clear. We therefore trained rhesus macaques on a feature-selective attention task in which they switched between reporting changes in a temporal (amplitude modulation, AM) and a spectral (carrier bandwidth) feature of a broadband noise stimulus. We investigated how the encoding of these features by single neurons in primary (A1) and secondary (middle lateral belt, ML) auditory cortex was affected by the different attention conditions. Neurons in A1 and ML showed mixed selectivity for the sound and task features. We found no difference in AM encoding between the attention conditions, but choice-related activity in both A1 and ML neurons shifted between attention conditions. This finding suggests that choice-related activity in auditory cortex does not simply reflect motor preparation or action, and it supports a link between choice-related activity and the perceptual decision process.
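Choice-related activity of the kind reported here is commonly quantified as a choice probability: the area under an ROC curve comparing a neuron's spike-count distributions on trials grouped by the animal's report for a nominally identical stimulus. The abstract does not state the exact analysis, so the Python sketch below only illustrates that standard approach; the function and variable names are hypothetical.

```python
import numpy as np

def choice_probability(counts_choice_a, counts_choice_b, n_perm=2000, seed=None):
    """ROC area separating spike counts on trials grouped by the animal's
    choice (e.g., 'change' vs. 'no change' reports for the same stimulus).
    0.5 indicates no choice-related information."""
    rng = np.random.default_rng(seed)
    a = np.asarray(counts_choice_a, dtype=float)
    b = np.asarray(counts_choice_b, dtype=float)

    def roc_area(x, y):
        # Area under the ROC curve via the Mann-Whitney U relation:
        # P(x > y) + 0.5 * P(x == y) over all trial pairs.
        diff = x[:, None] - y[None, :]
        return np.mean(diff > 0) + 0.5 * np.mean(diff == 0)

    cp = roc_area(a, b)

    # Permutation test: shuffle choice labels to build a null distribution.
    pooled = np.concatenate([a, b])
    null = np.empty(n_perm)
    for i in range(n_perm):
        perm = rng.permutation(pooled)
        null[i] = roc_area(perm[:len(a)], perm[len(a):])
    p_value = np.mean(np.abs(null - 0.5) >= np.abs(cp - 0.5))
    return cp, p_value

# Hypothetical spike counts sorted by the monkey's report on identical trials.
cp, p = choice_probability([12, 15, 9, 14, 11], [8, 10, 7, 9, 6])
```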

2020 ◽  
Author(s):  
Jennifer L. Mohn ◽  
Joshua D. Downer ◽  
Kevin N. O’Connor ◽  
Jeffrey S. Johnson ◽  
Mitchell L. Sutter

New & Noteworthy: We recorded from primary and secondary auditory cortex while monkeys performed a non-spatial feature attention task. Both areas exhibited rate-based choice-related activity. The manifestation of choice-related activity was attention-dependent, suggesting that choice-related activity in auditory cortex does not simply reflect arousal or motor influences but relates to the specific perceptual choice. The lack of temporal-based choice activity is consistent with growing evidence that subcortical, but not cortical, single neurons inform decisions through temporal envelope following.
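The New & Noteworthy note contrasts rate-based choice signals with the absence of temporal (envelope-following) choice activity. Phase locking to an AM envelope is conventionally summarized by vector strength; the sketch below shows that generic computation under assumed parameters (a hypothetical 15 Hz modulation and simulated spike times), not the authors' analysis pipeline.

```python
import numpy as np

def vector_strength(spike_times_s, mod_freq_hz):
    """Vector strength of spike phase locking to a sinusoidal AM envelope.
    1.0 = every spike at the same envelope phase; 0.0 = no phase locking."""
    phases = 2.0 * np.pi * mod_freq_hz * np.asarray(spike_times_s)
    return np.abs(np.mean(np.exp(1j * phases)))

# Simulated spike train during a 0.8-s, 15 Hz AM stimulus (hypothetical).
rng = np.random.default_rng(0)
spikes = np.sort(rng.uniform(0.0, 0.8, size=40))   # spike times in seconds
vs = vector_strength(spikes, mod_freq_hz=15.0)
rayleigh = 2 * len(spikes) * vs**2   # Rayleigh statistic, often used to test significance
```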


2020 ◽  
Vol 30 (11) ◽  
pp. 5792-5805 ◽  
Author(s):  
Shiri Makov ◽  
Elana Zion Golumbic

Dynamic attending theory suggests that predicting the timing of upcoming sounds can assist in focusing attention toward them. However, whether similar predictive processes are also applied to background noises and assist in guiding attention “away” from potential distractors remains an open question. Here we address this question by manipulating the temporal predictability of distractor sounds in a dichotic-listening selective attention task. We tested the influence of distractors’ temporal predictability on performance and on the neural encoding of sounds by comparing the effects of Rhythmic versus Nonrhythmic distractors. Using magnetoencephalography, we found that the neural responses to both attended and distractor sounds were indeed affected by distractors’ rhythmicity. Baseline activity preceding the onset of Rhythmic distractor sounds was enhanced relative to Nonrhythmic distractor sounds, and the sensory response to them was suppressed. Moreover, detection of nonmasked targets improved when distractors were Rhythmic, an effect accompanied by stronger lateralization of the neural responses to attended sounds to the contralateral auditory cortex. These combined behavioral and neural results suggest not only that temporal predictions are formed for task-irrelevant sounds, but also that these predictions bear functional significance for promoting selective attention and reducing distractibility.


2017 ◽  
Vol 117 (3) ◽  
pp. 966-986 ◽  
Author(s):  
Deepa L. Ramamurthy ◽  
Gregg H. Recanzone

The mammalian auditory cortex is necessary for spectral and spatial processing of acoustic stimuli. Most physiological studies of single neurons in the auditory cortex have focused on the onset and sustained portions of evoked responses, but there have been far fewer studies on the relationship between onset and offset responses. In the current study, we compared spectral and spatial tuning of onset and offset responses of neurons in primary auditory cortex (A1) and the caudolateral (CL) belt area of awake macaque monkeys. Several different metrics were used to determine the relationship between onset and offset response profiles in both the frequency and space domains. In the frequency domain, a substantial proportion of neurons in A1 and CL displayed highly dissimilar best stimuli for onset- and offset-evoked responses, although even for these neurons there was usually a large overlap in the range of frequencies that elicited onset and offset responses, and distributions of tuning overlap metrics were mostly unimodal. In the spatial domain, the vast majority of neurons displayed very similar best locations for onset- and offset-evoked responses, along with unimodal distributions of all tuning overlap metrics considered. Finally, for both spectral and spatial tuning, a slightly larger fraction of neurons in A1 than in CL displayed nonoverlapping onset and offset response profiles, which supports hierarchical differences in the processing of sounds in the two areas. However, these differences are small compared with the differences in proportions of simple cells (low overlap) and complex cells (high overlap) in primary and secondary visual areas.

NEW & NOTEWORTHY: In the current study, we examine the relationship between the tuning of neural responses evoked by the onset and offset of acoustic stimuli in the primary auditory cortex, as well as a higher-order auditory area, the caudolateral belt field, in awake rhesus macaques. In these areas, the relationship between onset and offset response profiles in the frequency and space domains formed a continuum, ranging from highly overlapping to highly nonoverlapping.
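The tuning overlap metrics referred to above are not defined in this summary; one simple illustrative possibility is a Jaccard-style overlap between the sets of stimuli (frequencies or locations) that drive onset and offset responses above a criterion fraction of their respective maxima. The Python sketch below implements that hypothetical metric, not the authors' actual measures.

```python
import numpy as np

def tuning_overlap(onset_rates, offset_rates, criterion=0.5):
    """Jaccard-style overlap of onset and offset tuning: the fraction of
    tested stimuli driving both responses above `criterion` * their maxima,
    relative to stimuli driving either. 1.0 = identical tuning, 0.0 = disjoint.
    (One illustrative metric only; not the measures used in the study.)"""
    on = np.asarray(onset_rates, dtype=float)
    off = np.asarray(offset_rates, dtype=float)
    on_sel = on >= criterion * on.max()
    off_sel = off >= criterion * off.max()
    union = np.logical_or(on_sel, off_sel).sum()
    shared = np.logical_and(on_sel, off_sel).sum()
    return shared / union if union else 0.0

# Hypothetical onset/offset firing rates across one row of test frequencies.
overlap = tuning_overlap([2, 10, 25, 18, 5, 1], [1, 3, 8, 20, 22, 6])
```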


2003 ◽  
Vol 89 (6) ◽  
pp. 2889-2903 ◽  
Author(s):  
G. Christopher Stecker ◽  
Brian J. Mickey ◽  
Ewan A. Macpherson ◽  
John C. Middlebrooks

We compared the spatial tuning properties of neurons in two fields [primary auditory cortex (A1) and posterior auditory field (PAF)] of cat auditory cortex. Broadband noise bursts of 80-ms duration were presented from loudspeakers throughout 360° in the horizontal plane (azimuth) or 260° in the vertical median plane (elevation). Sound levels varied from 20 to 40 dB above units' thresholds. We recorded neural spike activity simultaneously from 16 sites in field PAF and/or A1 of α-chloralose-anesthetized cats. We assessed spatial sensitivity by examining the dependence of spike count and response latency on stimulus location. In addition, we used an artificial neural network (ANN) to assess the information about stimulus location carried by spike patterns of single units and of ensembles of 2–32 units. The results indicate increased spatial sensitivity, more uniform distributions of preferred locations, and greater tolerance to changes in stimulus intensity among PAF units relative to A1 units. Compared to A1 units, PAF units responded at significantly longer latencies, and latencies varied more strongly with stimulus location. ANN analysis revealed significantly greater information transmission by spike patterns of PAF than A1 units, primarily reflecting the information transmitted by latency variation in PAF. Finally, information rates grew more rapidly with the number of units included in neural ensembles for PAF than A1. The latter finding suggests more accurate population coding of space in PAF, made possible by a more diverse population of neural response types.
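The ANN analysis described above can be illustrated with a minimal decoding sketch: a small feed-forward classifier is trained on spike patterns (here, spike counts in latency bins), and transmitted information is estimated in bits from the cross-validated confusion matrix. The code below is a simplified, assumed stand-in using scikit-learn; the original network architecture, input representation, and validation scheme are not specified in this summary and likely differ.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import confusion_matrix

def location_information_bits(spike_patterns, azimuth_labels, hidden=20):
    """Classify sound-source azimuth from binned spike patterns with a small
    feed-forward network, then estimate mutual (transmitted) information in
    bits from the cross-validated confusion matrix."""
    clf = MLPClassifier(hidden_layer_sizes=(hidden,), max_iter=2000, random_state=0)
    pred = cross_val_predict(clf, spike_patterns, azimuth_labels, cv=5)
    cm = confusion_matrix(azimuth_labels, pred).astype(float)
    p = cm / cm.sum()                        # joint P(actual, decoded)
    px = p.sum(axis=1, keepdims=True)        # marginal P(actual)
    py = p.sum(axis=0, keepdims=True)        # marginal P(decoded)
    nz = p > 0
    return float(np.sum(p[nz] * np.log2(p[nz] / (px @ py)[nz])))

# Hypothetical data: 120 trials x 16 latency bins, 12 azimuth classes,
# with spike rate scaling weakly with azimuth index.
rng = np.random.default_rng(1)
labels = np.repeat(np.arange(12), 10)
X = rng.poisson(3.0 + 0.4 * labels[:, None], size=(120, 16)).astype(float)
bits = location_information_bits(X, labels)
```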


2012 ◽  
Vol 107 (12) ◽  
pp. 3458-3467 ◽  
Author(s):  
Iris Steinmann ◽  
Alexander Gutschalk

Human functional MRI (fMRI) and magnetoencephalography (MEG) studies indicate a pitch-specific area in lateral Heschl's gyrus. Single-cell recordings in monkeys suggest that sustained-firing, pitch-specific neurons are located lateral to primary auditory cortex. We reevaluated whether pitch-strength contrasts reveal sustained pitch-specific responses in human auditory cortex. Sustained BOLD activity in auditory cortex was found for iterated rippled noise (vs. noise or silence) but not for regular click trains (vs. jittered click trains or silence). In contrast, iterated rippled noise and click trains produced similar pitch responses in MEG. A subsequent time-frequency analysis of the MEG data suggested that the dissociation of cortical BOLD activity between iterated rippled noise and click trains is related to theta-band activity. It appears that both sustained BOLD and theta activity are associated with slow, non-pitch-specific stimulus fluctuations. BOLD activity in the inferior colliculus was sustained for both stimulus types and varied neither with pitch strength nor with the presence of slow stimulus fluctuations. These results suggest that BOLD activity in auditory cortex is much more sensitive to slow stimulus fluctuations than to constant pitch, compromising the accessibility of the latter. In contrast, pitch-related activity in MEG can easily be separated from theta-band activity related to slow stimulus fluctuations.
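Iterated rippled noise, used above to manipulate pitch strength, is conventionally generated by a delay-and-add procedure: a delayed copy of the signal is repeatedly added back to itself, producing a pitch near 1/delay whose salience grows with the number of iterations. The sketch below shows this generic procedure with assumed parameters, not the stimulus code used in the study.

```python
import numpy as np

def iterated_rippled_noise(duration_s, delay_s, n_iter, fs=44100, gain=1.0, seed=None):
    """Generate iterated rippled noise by delay-and-add: on each iteration a
    copy of the current signal, delayed by `delay_s`, is added back in. The
    result has a pitch near 1/delay_s; pitch strength grows with `n_iter`."""
    rng = np.random.default_rng(seed)
    n = int(round(duration_s * fs))
    d = int(round(delay_s * fs))
    x = rng.standard_normal(n + n_iter * d)
    for _ in range(n_iter):
        x[d:] += gain * x[:-d]        # delay-and-add (in place)
    x = x[n_iter * d:]                # trim the onset build-up
    return x / np.max(np.abs(x))      # normalize to +/- 1

# Example: 500-ms IRN with a 4-ms delay (~250 Hz pitch), 16 iterations.
irn = iterated_rippled_noise(0.5, 0.004, 16, seed=0)
```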


2014 ◽  
Vol 2014 ◽  
pp. 1-11 ◽  
Author(s):  
Iiro P. Jääskeläinen ◽  
Jyrki Ahveninen

The ability to concentrate on relevant sounds in the acoustic environment is crucial for everyday function and communication. Converging lines of evidence suggest that transient functional changes in auditory-cortex neurons, “short-term plasticity”, might explain this fundamental function. Under conditions of strongly focused attention, enhanced processing of attended sounds can take place at very early latencies (~50 ms from sound onset) in primary auditory cortex, and possibly at even earlier latencies in subcortical structures. More robust selective-attention short-term plasticity is manifested as modulation of responses peaking at ~100 ms from sound onset in functionally specialized nonprimary auditory-cortical areas, by way of stimulus-specific reshaping of neuronal receptive fields that supports filtering of selectively attended sound features from task-irrelevant ones. Such effects have been shown to emerge within seconds of shifting the attentional focus. There are also findings suggesting that the reshaping of neuronal receptive fields is even stronger at longer auditory-cortex response latencies (~300 ms from sound onset). These longer-latency short-term plasticity effects seem to build up more gradually, within tens of seconds after shifting the focus of attention. Importantly, some of the auditory-cortical short-term plasticity effects observed during selective attention predict enhancements in behaviorally measured sound discrimination performance.


1998 ◽  
Vol 7 (2) ◽  
pp. 99-109 ◽  
Author(s):  
Naohito Fujiwara ◽  
Takashi Nagamine ◽  
Makoto Imai ◽  
Tomohiro Tanaka ◽  
Hiroshi Shibasaki

2008 ◽  
Vol 99 (4) ◽  
pp. 1628-1642 ◽  
Author(s):  
Shveta Malhotra ◽  
G. Christopher Stecker ◽  
John C. Middlebrooks ◽  
Stephen G. Lomber

We examined the contributions of primary auditory cortex (A1) and the dorsal zone of auditory cortex (DZ) to sound localization behavior during separate and combined unilateral and bilateral deactivation. From a central visual fixation point, cats learned to make an orienting response (head movement and approach) to a 100-ms broadband noise burst emitted from a central speaker or one of 12 peripheral sites (located in front of the animal, from left 90° to right 90°, at 15° intervals) along the horizontal plane. Following training, each cat was implanted with separate cryoloops over A1 and DZ bilaterally. Unilateral deactivation of A1 or DZ or simultaneous unilateral deactivation of A1 and DZ (A1/DZ) resulted in spatial localization deficits confined to the contralateral hemifield, whereas sound localization to positions in the ipsilateral hemifield remained unaffected. Simultaneous bilateral deactivation of both A1 and DZ resulted in sound localization performance dropping from near-perfect to chance (7.7% correct) across the entire field. Errors made during bilateral deactivation of A1/DZ tended to be confined to the same hemifield as the target. However, unlike the profound sound localization deficit that occurred when A1 and DZ were deactivated together, deactivation of either A1 or DZ alone produced partial and field-specific deficits. For A1, bilateral deactivation resulted in higher error rates (performance dropping to ∼45%) but relatively small errors (mostly within 30° of the target). In contrast, bilateral deactivation of DZ produced somewhat fewer errors (performance dropping to only ∼60% correct), but the errors tended to be larger, often into the incorrect hemifield. Therefore, individual deactivation of either A1 or DZ produced specific and unique sound localization deficits. The results of the present study reveal that DZ plays a role in sound localization. Along with previous anatomical and physiological data, these behavioral data support the view that A1 and DZ are distinct cortical areas. Finally, the findings that deactivation of either A1 or DZ alone produces partial sound localization deficits, whereas deactivation of either posterior auditory field (PAF) or anterior ectosylvian sulcus (AES) produces profound sound localization deficits, suggest that PAF and AES make more significant contributions to sound localization than either A1 or DZ.
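For reference, the chance level of 7.7% quoted above follows from the task geometry: a response chosen at random among the 13 tested locations (the central speaker plus 12 peripheral sites) would be correct on 1/13 ≈ 0.077, i.e., 7.7% of trials.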


1995 ◽  
Vol 74 (3) ◽  
pp. 961-980 ◽  
Author(s):  
J. C. Clarey ◽  
P. Barone ◽  
W. A. Irons ◽  
F. K. Samson ◽  
T. J. Imig

1. A comparison of the azimuth tuning of single neurons to broadband noise and to best frequency (BF) tone bursts was made in primary auditory cortex (AI: n = 173) and the medial geniculate body (MGB: n = 52) of barbiturate-anesthetized cats. Observations were largely restricted to cells located within the tonotopically organized divisions of the MGB (i.e., the ventral nucleus and the lateral division of the posterior nuclear group) and the middle layers of AI. All cells studied had BFs ≥ 4 kHz.

2. The responses of each cell to sounds presented from seven frontal azimuthal locations (-90° to +90° in 30° steps; 0° elevation) and at five sound pressure levels (SPLs: 0-80 dB or 5-85 dB in 20-dB steps) provided an azimuth-level data set. Responses were averaged over SPL to obtain an azimuth function, and a number of features of this function were used to describe azimuth tuning to noise and to tone stimulation. Azimuth function modulation was used to assess azimuth sensitivity, and cells were categorized as sensitive or insensitive depending on whether modulation was ≥75% or <75% of maximum, respectively. The majority (88%) of cells in the sample were azimuth sensitive to noise stimulation, and statistical analyses were restricted to these cells, which are presumably best suited to encode sound source azimuth. Azimuth selectivity was assessed by a preferred azimuth range (PAR) over which azimuth function values exceeded 75% (PAR75) or 50% of maximum response. Cells were categorized according to the location and extent of their noise PARs. Unbounded cells had laterally located PARs that extended to the lateral pole (±90°); bounded cells had PARs that were contained entirely within the frontal hemifield, and a subset of these had PARs centered on the midline (±15°). A final group of cells exhibited multipeaked azimuth functions to noise stimulation.

3. Azimuth functions to noise were generally more selective and/or more sensitive than those to tones. Statistical analyses showed that these differences were significant for cells in each azimuth function category, and for the thalamic and cortical samples. With the exception of multipeaked cells, responsiveness to noise was significantly lower than that to tones in all categories, and for the thalamic and cortical samples. (ABSTRACT TRUNCATED AT 400 WORDS)
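The azimuth metrics defined in paragraph 2 can be made concrete with a short sketch: responses are averaged across sound level to form an azimuth function, modulation depth determines azimuth sensitivity (75% criterion), and the preferred azimuth range (PAR) is the set of azimuths at which the function exceeds 75% (PAR75) or 50% of its maximum. The code below is one illustrative reading of those definitions, not the authors' analysis code.

```python
import numpy as np

AZIMUTHS = np.arange(-90, 91, 30)     # the 7 frontal azimuths tested, in degrees

def azimuth_metrics(spike_counts, sens_criterion=0.75, par_criterion=0.75):
    """spike_counts: (n_levels, 7) responses for each SPL x azimuth.
    Returns the level-averaged azimuth function, its modulation depth,
    whether the cell counts as azimuth sensitive (modulation >= 75% of
    maximum), and the PAR75 azimuths (function above 75% of maximum)."""
    func = np.asarray(spike_counts, dtype=float).mean(axis=0)   # average over SPL
    modulation = (func.max() - func.min()) / func.max()
    sensitive = modulation >= sens_criterion
    par = AZIMUTHS[func >= par_criterion * func.max()]
    return func, modulation, sensitive, par

# Hypothetical azimuth-level data set: 5 SPLs x 7 azimuths.
rates = np.array([[2,  4, 10, 22, 30, 26, 12],
                  [3,  5, 12, 25, 33, 28, 14],
                  [2,  6, 14, 24, 35, 30, 15],
                  [4,  7, 13, 26, 32, 27, 13],
                  [3,  5, 11, 23, 31, 25, 12]])
func, modulation, sensitive, par75 = azimuth_metrics(rates)
```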

