Resetting of Auditory and Visual Segregation Occurs After Transient Stimuli of the Same Modality

2021 ◽  
Vol 12 ◽  
Author(s):  
Nathan C. Higgins ◽  
Ambar G. Monjaras ◽  
Breanne D. Yerkes ◽  
David F. Little ◽  
Jessica E. Nave-Blodgett ◽  
...  

In the presence of a continually changing sensory environment, maintaining stable but flexible awareness is paramount and requires continual organization of information. Determining which stimulus features belong together and which are separate is therefore one of the primary tasks of the sensory systems. It is unknown whether a global or a sensory-specific mechanism regulates the final perceptual outcome of this streaming process. To test the extent of modality independence in perceptual control, an auditory streaming experiment and a visual moving-plaid experiment were performed, both designed to evoke alternating perception of an integrated or a segregated percept. In both experiments, transient auditory and visual distractor stimuli were presented in separate blocks, such that the distractors did not overlap in frequency or space with the streaming or plaid stimuli, respectively, thus preventing peripheral interference. When a distractor was presented in the opposite modality from the bistable stimulus (visual distractors during auditory streaming or auditory distractors during visual streaming), the probability of percept switching was not significantly different from when no distractor was presented. Conversely, significant differences in switch probability were observed following within-modality distractors, but only when the pre-distractor percept was segregated. Given the modality specificity of the distractor-induced resetting, the results suggest that conscious perception is at least partially controlled by modality-specific processing. The fact that the distractors did not overlap peripherally with the bistable stimuli indicates that the perceptual reset arises from interference at a locus where stimuli of different frequencies and spatial locations are integrated.
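As a rough illustration of the switch-probability measure described in this abstract, the sketch below estimates how often the reported percept changes within a fixed window after each distractor onset. It is a minimal sketch, not the authors' analysis code; the variable names in the usage comment (`reports`, `t`, `onsets_within`, `onsets_cross`) are hypothetical.

```python
import numpy as np

def switch_probability(percepts, times, distractor_onsets, window=2.0):
    """Fraction of distractors followed by a percept switch.

    percepts          : array of percept labels (e.g., 0 = integrated,
                        1 = segregated), one per report sample
    times             : sample times in seconds, same length as percepts
    distractor_onsets : distractor onset times in seconds
    window            : post-distractor interval in which a switch counts
    """
    percepts = np.asarray(percepts)
    times = np.asarray(times)
    switches = 0
    for onset in np.asarray(distractor_onsets):
        pre = percepts[times < onset]
        post = percepts[(times >= onset) & (times < onset + window)]
        # Count a switch if any post-distractor report differs from the
        # last pre-distractor report.
        if pre.size and post.size and np.any(post != pre[-1]):
            switches += 1
    return switches / len(distractor_onsets)

# Hypothetical usage: compare within-modality vs. cross-modality blocks.
# p_within = switch_probability(reports, t, onsets_within)
# p_cross  = switch_probability(reports, t, onsets_cross)
```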


2020 ◽  
Author(s):  
Deon T. Benton ◽  
David H. Rakison

The ability to reason about causal events in the world is fundamental to cognition. Despite the importance of this ability, little is known about how adults represent causal events, what structure or form those representations take, and what mechanism underpins such representations. We report four experiments with adults that examine the perceptual basis on which adults represent four-object launching sequences (Experiments 1 and 2), whether adults' representations reflect sensitivity to the causal, perceptual, or causal and perceptual relations among the objects that comprise such sequences (Experiment 3), and whether such representations extend beyond spatiotemporal contiguity to include other low-level stimulus features such as an object's shape and color (Experiment 4). Based on the results of these four experiments, we argue that a domain-general associative mechanism, rather than a modular, domain-specific mechanism, subserves adults' representations of four-object launching sequences.


2019 ◽  
Vol 5 (7) ◽  
pp. eaaw4358 ◽  
Author(s):  
Philip A. Kragel ◽  
Marianne C. Reddan ◽  
Kevin S. LaBar ◽  
Tor D. Wager

Theorists have suggested that emotions are canonical responses to situations ancestrally linked to survival. If so, then emotions may be afforded by features of the sensory environment. However, few computational models describe how combinations of stimulus features evoke different emotions. Here, we develop a convolutional neural network that accurately decodes images into 11 distinct emotion categories. We validate the model using more than 25,000 images and movies and show that image content is sufficient to predict the category and valence of human emotion ratings. In two functional magnetic resonance imaging studies, we demonstrate that patterns of human visual cortex activity encode emotion category–related model output and can decode multiple categories of emotional experience. These results suggest that rich, category-specific visual features can be reliably mapped to distinct emotions, and they are coded in distributed representations within the human visual system.
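For a concrete picture of the image-to-emotion decoding step described in this abstract, the sketch below builds a generic convolutional classifier with 11 output categories by replacing the final layer of a pretrained torchvision backbone. It is an illustrative stand-in under that assumption, not the authors' published model or weights.

```python
import torch
import torch.nn as nn
from torchvision import models

# Illustrative sketch only: a generic image-to-emotion classifier with 11
# output categories, not the authors' published architecture or weights.
NUM_EMOTION_CATEGORIES = 11  # category count taken from the abstract

backbone = models.alexnet(weights=models.AlexNet_Weights.DEFAULT)
backbone.classifier[6] = nn.Linear(4096, NUM_EMOTION_CATEGORIES)
backbone.eval()

def predict_emotion(image_batch: torch.Tensor) -> torch.Tensor:
    """Return softmax probabilities over emotion categories for a batch of
    preprocessed images shaped (N, 3, 224, 224)."""
    with torch.no_grad():
        return torch.softmax(backbone(image_batch), dim=1)
```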


2013 ◽  
Vol 5 (Supplement 2) ◽  
pp. 73-100 ◽  
Author(s):  
Susan L. Denham ◽  
Kinga Gyimesi ◽  
Gábor Stefanics ◽  
István Winkler

2016 ◽  
Vol 28 (8) ◽  
pp. 1090-1097 ◽  
Author(s):  
Jason Samaha ◽  
Thomas C. Sprague ◽  
Bradley R. Postle

Many aspects of perception and cognition are supported by activity in neural populations that are tuned to different stimulus features (e.g., orientation, spatial location, color). Goal-directed behavior, such as sustained attention, requires a mechanism for the selective prioritization of contextually appropriate representations. A candidate mechanism of sustained spatial attention is neural activity in the alpha band (8–13 Hz), whose power in the human EEG covaries with the focus of covert attention. Here, we applied an inverted encoding model to assess whether spatially selective neural responses could be recovered from the topography of alpha-band oscillations during spatial attention. Participants were cued to covertly attend to one of six spatial locations arranged concentrically around fixation while EEG was recorded. A linear classifier applied to EEG data during sustained attention demonstrated successful classification of the attended location from the topography of alpha power, although not from other frequency bands. We next sought to reconstruct the focus of spatial attention over time by applying inverted encoding models to the topography of alpha power and phase. Alpha power, but not phase, allowed for robust reconstructions of the specific attended location beginning around 450 msec postcue, an onset earlier than in previous reports. These results demonstrate that posterior alpha-band oscillations can be used to track activity in feature-selective neural populations with high temporal precision during the deployment of covert spatial attention.
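The inverted encoding model mentioned in this abstract is commonly described as a two-stage procedure: estimate electrode weights from predicted spatial-channel responses on training data, then invert those weights on held-out data to reconstruct channel responses. The sketch below implements that generic formulation, not the authors' actual pipeline; the dimensions (64 electrodes, 6 spatial channels) and random data are hypothetical placeholders.

```python
import numpy as np

def train_iem(train_data, design):
    """Estimate electrode weights W (electrodes x channels) from training
    data B1 (electrodes x trials) and a design matrix C1 (channels x trials)
    of predicted spatial-channel responses: W = B1 C1' (C1 C1')^-1."""
    return train_data @ design.T @ np.linalg.inv(design @ design.T)

def invert_iem(weights, test_data):
    """Reconstruct channel responses C2 from test data B2:
    C2 = (W' W)^-1 W' B2."""
    return np.linalg.inv(weights.T @ weights) @ weights.T @ test_data

# Hypothetical dimensions: 64 electrodes, 6 spatial channels (one per cued
# location), 300 training trials, 100 test trials.
rng = np.random.default_rng(0)
B1 = rng.normal(size=(64, 300))   # alpha power per electrode, training set
C1 = rng.random(size=(6, 300))    # predicted channel responses (basis set)
B2 = rng.normal(size=(64, 100))   # alpha power per electrode, test set

W = train_iem(B1, C1)
C2 = invert_iem(W, B2)            # reconstructed channel responses, 6 x 100
```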


2018 ◽  
Author(s):  
Philip A. Kragel ◽  
Marianne Reddan ◽  
Kevin S. LaBar ◽  
Tor D. Wager

Theorists have suggested that emotions are canonical responses to situations ancestrally linked to survival. If so, then emotions may be afforded by features of the sensory environment. However, few computationally explicit models describe how combinations of stimulus features evoke different emotions. Here we develop a convolutional neural network that accurately decodes images into 11 distinct emotion categories. We validate the model using over 25,000 images and movies and show that image content is sufficient to predict the category and valence of human emotion ratings. In two fMRI studies, we demonstrate that patterns of human visual cortex activity encode emotion category-related model output and can decode multiple categories of emotional experience. These results suggest that rich, category-specific emotion representations are embedded within the human visual system.


2008 ◽  
Vol 24 (4) ◽  
pp. 218-225 ◽  
Author(s):  
Bertram Gawronski ◽  
Roland Deutsch ◽  
Etienne P. LeBel ◽  
Kurt R. Peters

Over the last decade, implicit measures of mental associations (e.g., Implicit Association Test, sequential priming) have become increasingly popular in many areas of psychological research. Even though successful applications provide preliminary support for the validity of these measures, their underlying mechanisms are still controversial. The present article addresses the role of a particular mechanism that is hypothesized to mediate the influence of activated associations on task performance in many implicit measures: response interference (RI). Based on a review of relevant evidence, we argue that RI effects in implicit measures depend on participants’ attention to association-relevant stimulus features, which in turn can influence the reliability and the construct validity of these measures. Drawing on a moderated-mediation model (MMM) of task performance in RI paradigms, we provide several suggestions on how to address these problems in research using implicit measures.


Author(s):  
Sarah Schäfer ◽  
Dirk Wentura ◽  
Christian Frings

Recently, Sui, He, and Humphreys (2012) introduced a new paradigm to measure perceptual self-prioritization processes. It seems that arbitrarily tagging shapes to self-relevant words (I, my, me, and so on) leads to speeded verification times for matching self-relevant word-shape pairings (e.g., me – triangle) as compared to non-self-relevant word-shape pairings (e.g., stranger – circle). To analyze the level at which self-prioritization takes place, we examined whether the self-prioritization effect is due to a tagging of the self-relevant label and the particular associated shape, or due to a tagging of the self with an abstract concept. In two experiments, participants showed standard self-prioritization effects with varying stimulus features or different exemplars of a particular stimulus category, suggesting that self-prioritization also works at a conceptual level.


Author(s):  
Kevin Dent

In two experiments, participants retained a single color or a set of four spatial locations in memory. During a 5-s retention interval, participants viewed either flickering dynamic visual noise or a static matrix pattern. In Experiment 1, memory was assessed using a recognition procedure in which participants indicated whether a particular test stimulus matched the memorized stimulus. In Experiment 2, participants attempted either to reproduce the locations or to pick the color from a whole range of possibilities. Both experiments revealed effects of dynamic visual noise (DVN) on memory for colors but not for locations. The implications of the results for theories of working memory and the methodological prospects of DVN as an experimental tool are discussed.

