Resetting of Auditory and Visual Segregation Occurs After Transient Stimuli of the Same Modality

2021 ◽  
Vol 12 ◽  
Author(s):  
Nathan C. Higgins ◽  
Ambar G. Monjaras ◽  
Breanne D. Yerkes ◽  
David F. Little ◽  
Jessica E. Nave-Blodgett ◽  
...  

In the presence of a continually changing sensory environment, maintaining stable but flexible awareness is paramount and requires continual organization of information. Determining which stimulus features belong together and which are separate is therefore one of the primary tasks of the sensory systems. It is unknown whether a global or a sensory-specific mechanism regulates the final perceptual outcome of this streaming process. To test the extent of modality independence in perceptual control, an auditory streaming experiment and a visual moving-plaid experiment were performed. Both were designed to evoke alternating perception of an integrated or segregated percept. In both experiments, transient auditory and visual distractor stimuli were presented in separate blocks, such that the distractors did not overlap in frequency or space with the streaming or plaid stimuli, respectively, thus preventing peripheral interference. When a distractor was presented in the opposite modality from the bistable stimulus (visual distractors during auditory streaming or auditory distractors during visual streaming), the probability of percept switching was not significantly different from when no distractor was presented. Conversely, significant differences in switch probability were observed following within-modality distractors, but only when the pre-distractor percept was segregated. Because the distractor-induced resetting was modality-specific, the results suggest that conscious perception is at least partially controlled by modality-specific processing. The fact that the distractors had no peripheral overlap with the bistable stimuli indicates that the perceptual reset arises from interference at a locus where stimuli of different frequencies and spatial locations are integrated.
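As a rough illustration of the comparison described above, the sketch below tabulates switch probability by distractor condition and pre-distractor percept from hypothetical trial-level data. The column names and data layout are assumptions for illustration, not the authors' analysis pipeline.

```python
# Illustrative sketch (not the authors' analysis code): probability of a
# percept switch after each distractor type, split by pre-distractor percept.
import pandas as pd

# One row per distractor event (or silent baseline interval); values are made up.
# 'condition'   -> 'within-modality', 'cross-modality', or 'none'
# 'pre_percept' -> percept reported just before the event
# 'switched'    -> True if the reported percept changed afterwards
trials = pd.DataFrame({
    "condition":   ["within-modality", "within-modality", "cross-modality", "none"],
    "pre_percept": ["segregated", "integrated", "segregated", "segregated"],
    "switched":    [True, False, False, False],
})

# Switch probability per condition and pre-distractor percept,
# mirroring the comparison described in the abstract.
switch_prob = trials.groupby(["condition", "pre_percept"])["switched"].mean()
print(switch_prob)
```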


2020 ◽  
Author(s):  
Deon T. Benton ◽  
David H. Rakison

The ability to reason about causal events in the world is fundamental to cognition. Despite the importance of this ability, little is known about how adults represent causal events, what structure or form those representations take, and what mechanism underpins such representations. We report four experiments with adults that examine the perceptual basis on which adults represent four-object launching sequences (Experiments 1 and 2), whether adults' representations reflect sensitivity to the causal, perceptual, or causal and perceptual relations among the objects that comprise such sequences (Experiment 3), and whether such representations extend beyond spatiotemporal contiguity to include other low-level stimulus features such as an object’s shape and color (Experiment 4). Based on the results of the four experiments, we argue that a domain-general associative mechanism, rather than a modular, domain-specific mechanism, subserves adults’ representations of four-object launching sequences.


2019 ◽  
Vol 5 (7) ◽  
pp. eaaw4358 ◽  
Author(s):  
Philip A. Kragel ◽  
Marianne C. Reddan ◽  
Kevin S. LaBar ◽  
Tor D. Wager

Theorists have suggested that emotions are canonical responses to situations ancestrally linked to survival. If so, then emotions may be afforded by features of the sensory environment. However, few computational models describe how combinations of stimulus features evoke different emotions. Here, we develop a convolutional neural network that accurately decodes images into 11 distinct emotion categories. We validate the model using more than 25,000 images and movies and show that image content is sufficient to predict the category and valence of human emotion ratings. In two functional magnetic resonance imaging studies, we demonstrate that patterns of human visual cortex activity encode emotion category–related model output and can decode multiple categories of emotional experience. These results suggest that rich, category-specific visual features can be reliably mapped to distinct emotions, and they are coded in distributed representations within the human visual system.
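For readers unfamiliar with this class of model, the sketch below shows a minimal convolutional classifier that maps an RGB image to scores over 11 emotion categories. The layer sizes, input resolution, and absence of training code are illustrative assumptions; this is not the published network.

```python
# A minimal sketch (not the published model) of a CNN mapping an image to
# scores over 11 emotion categories. Architecture details are assumptions.
import torch
import torch.nn as nn

class EmotionCNN(nn.Module):
    def __init__(self, n_categories: int = 11):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),          # global average pooling
        )
        self.classifier = nn.Linear(128, n_categories)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.features(x).flatten(1)       # (batch, 128) feature vector
        return self.classifier(h)             # unnormalized category scores

model = EmotionCNN()
scores = model(torch.randn(1, 3, 224, 224))   # one dummy 224x224 image
probs = scores.softmax(dim=1)                 # probabilities over 11 categories
```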


2013 ◽  
Vol 5 (Supplement 2) ◽  
pp. 73-100 ◽  
Author(s):  
Susan L. Denham ◽  
Kinga Gyimesi ◽  
Gábor Stefanics ◽  
István Winkler

2019 ◽  
Author(s):  
NC Higgins ◽  
DF Little ◽  
BD Yerkes ◽  
KM Nave ◽  
A Kuruvilla-Mathew ◽  
...  

Understanding the neural underpinning of conscious perception remains one of the primary challenges of cognitive neuroscience. Theories based mostly on studies of the visual system differ according to whether the neural activity giving rise to conscious perception occurs in modality-specific sensory cortex or in associative areas, such as the frontal and parietal cortices. Here, we search for modality-specific conscious processing in the auditory cortex using a bistable stream segregation paradigm that presents a constant stimulus without the confounding influence of physical changes to sound properties. ABA_ triplets (i.e., alternating low, A, and high, B, tones, and _ gap) with a 700 ms silent response period after every third triplet were presented repeatedly, and human participants reported nearly equivalent proportions of 1- and 2-stream percepts. The pattern of behavioral responses was consistent with previous studies of visual and auditory bistable perception. The intermittent response paradigm has the benefit of evoking spontaneous perceptual switches that can be attributed to a well-defined stimulus event, enabling precise identification of the timing of perception-related neural events with event-related potentials (ERPs). Significantly more negative ERPs were observed for 2-streams compared to 1-stream, and for switches compared to non-switches during the sustained potential (500-1000 ms post-stimulus onset). Further analyses revealed that the negativity associated with switching was independent of switch direction, suggesting that spontaneous changes in perception have a unique neural signature separate from the observation that 2-streams has more negative ERPs than 1-stream. Source analysis of the sustained potential showed activity associated with these differences originating in anterior superior temporal gyrus, indicating involvement of the ventral auditory pathway that is important for processing auditory objects.

Significance Statement: When presented with ambiguous stimuli, the auditory system takes the available information and attempts to construct a useful percept. When multiple percepts are possible from the same stimuli, however, perception fluctuates back and forth between alternating percepts in a bistable manner. Here, we examine spontaneous switches in perception using a bistable auditory streaming paradigm with a novel intermittent stimulus paradigm, and measure sustained electrical activity in anterior portions of auditory cortex using event-related potentials. Analyses revealed enhanced sustained cortical activity when perceiving 2-streams compared to 1-stream, and when a switch occurred regardless of switch direction. These results indicate that neural responses in auditory cortex reflect both the content of perception and neural dynamics related to switches in perception.
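The intermittent ABA_ paradigm described above can be illustrated with a short synthesis sketch. The 700 ms silent response period comes from the abstract; the tone frequencies, durations, and sampling rate below are assumed values for illustration only.

```python
# Sketch of an intermittent ABA_ streaming stimulus: three ABA_ triplets
# followed by a 700 ms silent response period, repeated. Only the 700 ms
# silence is taken from the text; other parameters are assumptions.
import numpy as np

fs = 44100                       # sampling rate (Hz), assumed
tone_dur, gap_dur = 0.05, 0.05   # 50 ms tones and gaps, assumed
f_A, f_B = 400.0, 600.0          # low (A) and high (B) tone frequencies, assumed

def tone(freq, dur):
    t = np.arange(int(fs * dur)) / fs
    return np.sin(2 * np.pi * freq * t)

def silence(dur):
    return np.zeros(int(fs * dur))

# One ABA_ triplet: A, B, A tones separated by gaps, then a gap-length "_".
triplet = np.concatenate([tone(f_A, tone_dur), silence(gap_dur),
                          tone(f_B, tone_dur), silence(gap_dur),
                          tone(f_A, tone_dur), silence(gap_dur)])

# Three triplets, then the 700 ms silent response period, repeated per trial.
block = np.concatenate([np.tile(triplet, 3), silence(0.7)])
stimulus = np.tile(block, 10)    # ten presentations
```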


2016 ◽  
Vol 28 (8) ◽  
pp. 1090-1097 ◽  
Author(s):  
Jason Samaha ◽  
Thomas C. Sprague ◽  
Bradley R. Postle

Many aspects of perception and cognition are supported by activity in neural populations that are tuned to different stimulus features (e.g., orientation, spatial location, color). Goal-directed behavior, such as sustained attention, requires a mechanism for the selective prioritization of contextually appropriate representations. A candidate mechanism of sustained spatial attention is neural activity in the alpha band (8–13 Hz), whose power in the human EEG covaries with the focus of covert attention. Here, we applied an inverted encoding model to assess whether spatially selective neural responses could be recovered from the topography of alpha-band oscillations during spatial attention. Participants were cued to covertly attend to one of six spatial locations arranged concentrically around fixation while EEG was recorded. A linear classifier applied to EEG data during sustained attention demonstrated successful classification of the attended location from the topography of alpha power, although not from other frequency bands. We next sought to reconstruct the focus of spatial attention over time by applying inverted encoding models to the topography of alpha power and phase. Alpha power, but not phase, allowed for robust reconstructions of the specific attended location beginning around 450 msec postcue, an onset earlier than previous reports. These results demonstrate that posterior alpha-band oscillations can be used to track activity in feature-selective neural populations with high temporal precision during the deployment of covert spatial attention.
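A bare-bones version of the inverted-encoding step might look like the sketch below: six location-tuned channels are modeled with raised-cosine basis functions, electrode weights are estimated by least squares on training trials, and the model is inverted to reconstruct channel responses from held-out alpha-power topographies. The data are random stand-ins and the basis shape is an assumption, not the study's exact implementation.

```python
# Minimal inverted encoding model sketch for alpha-power topographies.
import numpy as np

rng = np.random.default_rng(1)
n_trials, n_electrodes, n_channels = 300, 64, 6
locations = rng.integers(n_channels, size=n_trials)        # cued location on each trial

# Basis set: a raised-cosine tuning curve centered on each of the six locations.
angles = np.arange(n_channels) * (2 * np.pi / n_channels)
def channel_responses(loc):
    return np.cos((angles - angles[loc]) / 2.0) ** 4

C = np.stack([channel_responses(l) for l in locations])    # trials x channels
true_W = rng.standard_normal((n_channels, n_electrodes))
B = C @ true_W + 0.5 * rng.standard_normal((n_trials, n_electrodes))  # stand-in alpha power

train, test = slice(0, 200), slice(200, None)
W = np.linalg.pinv(C[train]) @ B[train]                     # channels x electrodes, fit on training trials
C_hat = B[test] @ np.linalg.pinv(W)                         # reconstructed channel responses on test trials
print(C_hat[0].round(2))                                    # spatial profile recovered for one test trial
```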


2021 ◽  
Author(s):  
Karli M Nave ◽  
Erin Hannon ◽  
Joel S. Snyder

Synchronization of movement to music is a seemingly universal human capacity that depends on sustained beat perception. Previous research shows that the frequency of the beat can be observed in the neural activity of the listener. However, the extent to which these neural responses reflect concurrent, conscious perception of musical beat versus stimulus-driven activity is a matter of debate. We investigated whether this kind of periodic brain activity, measured using electroencephalography (EEG), reflects perception of beat, by holding the stimulus constant while manipulating the listener’s perception. Listeners with minimal music training heard a musical excerpt that strongly supported one of two beat patterns (context), followed by a rhythm consistent with either beat pattern (ambiguous phase). During the final phase, listeners indicated whether or not a superimposed drum matched the perceived beat (probe phase). Participants were more likely to indicate that the probe matched the music when that probe matched the original context, suggesting an ability to maintain the beat percept through the ambiguous phase. Likewise, we observed that the spectral amplitude during the ambiguous phase was higher at frequencies corresponding to the beat of the preceding context, and the EEG amplitude at the beat-related frequency predicted performance on the beat induction task on a single-trial basis. Together, these findings provide evidence that auditory cortical activity reflects conscious perception of musical beat and not just stimulus features or effortful attention.
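The frequency-domain readout implied by the abstract can be sketched as follows: compute the amplitude spectrum of the EEG recorded during the ambiguous phase and compare amplitude at the two candidate beat frequencies. The sampling rate, epoch length, and beat rates below are assumed values, and the data are a random stand-in.

```python
# Sketch of reading out spectral amplitude at candidate beat frequencies.
import numpy as np

fs = 250.0                                 # sampling rate (Hz), assumed
epoch = np.random.randn(int(fs * 12))      # 12 s of one-channel EEG (stand-in data)

amps = np.abs(np.fft.rfft(epoch)) / len(epoch)
freqs = np.fft.rfftfreq(len(epoch), d=1.0 / fs)

def amplitude_at(f_target):
    """Spectral amplitude at the FFT bin closest to f_target (Hz)."""
    return amps[np.argmin(np.abs(freqs - f_target))]

beat_a, beat_b = 1.25, 1.875               # the two candidate beat rates (assumed values)
print(amplitude_at(beat_a), amplitude_at(beat_b))
```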


2012 ◽  
Vol 367 (1591) ◽  
pp. 1001-1012 ◽  
Author(s):  
István Winkler ◽  
Susan Denham ◽  
Robert Mill ◽  
Tamás M. Bőhm ◽  
Alexandra Bendixen

Auditory stream segregation involves linking temporally separate acoustic events into one or more coherent sequences. For any non-trivial sequence of sounds, many alternative descriptions can be formed, only one or very few of which emerge in awareness at any time. Evidence from studies showing bi-/multistability in auditory streaming suggests that some, perhaps many, of the alternative descriptions are represented in the brain in parallel and that they continuously vie for conscious perception. Here, based on a predictive coding view, we consider the nature of these sound representations and how they compete with each other. Predictive processing helps to maintain perceptual stability by signalling the continuation of previously established patterns as well as the emergence of new sound sources. It also provides a measure of how well each of the competing representations describes the current acoustic scene. This account of auditory stream segregation has been tested on perceptual data obtained in the auditory streaming paradigm.
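As a loose, toy-level illustration of the competition the authors describe (and not their predictive-coding model), the sketch below lets two alternative descriptions of the same input inhibit each other while the currently dominant one slowly adapts; noise plus adaptation yields spontaneous alternation between the two.

```python
# Toy competition between two alternative descriptions of the same sequence.
# All dynamics and parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
steps, dt = 5000, 0.01
act = np.array([1.0, 0.9])          # activation of the two competing descriptions
adapt = np.zeros(2)                 # slow adaptation of each description
dominant = np.zeros(steps, dtype=int)

for t in range(steps):
    drive = 1.0 - adapt - 0.8 * act[::-1]                        # input minus adaptation and mutual inhibition
    act = act + dt * (drive - act) + 0.05 * np.sqrt(dt) * rng.standard_normal(2)
    act = np.clip(act, 0.0, None)
    winner = int(np.argmax(act))
    adapt = adapt + dt * (0.2 * np.eye(2)[winner] - 0.05 * adapt)  # winner adapts, both slowly recover
    dominant[t] = winner

print("perceptual switches:", int(np.sum(np.diff(dominant) != 0)))
```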


2018 ◽  
Author(s):  
Philip A. Kragel ◽  
Marianne Reddan ◽  
Kevin S. LaBar ◽  
Tor D. Wager

Theorists have suggested that emotions are canonical responses to situations ancestrally linked to survival. If so, then emotions may be afforded by features of the sensory environment. However, few computationally explicit models describe how combinations of stimulus features evoke different emotions. Here we develop a convolutional neural network that accurately decodes images into 11 distinct emotion categories. We validate the model using over 25,000 images and movies and show that image content is sufficient to predict the category and valence of human emotion ratings. In two fMRI studies, we demonstrate that patterns of human visual cortex activity encode emotion category-related model output and can decode multiple categories of emotional experience. These results suggest that rich, category-specific emotion representations are embedded within the human visual system.

