visual masking
Recently Published Documents

TOTAL DOCUMENTS: 336 (five years: 21)
H-INDEX: 41 (five years: 1)

2021
Author(s): Daniel Jenkins

Multisensory integration describes the cognitive processes by which information from various perceptual domains is combined to create coherent percepts. For consciously aware perception, multisensory integration can be inferred when information in one perceptual domain influences subjective experience in another. Yet the relationship between integration and awareness is not well understood. One current question is whether multisensory integration can occur in the absence of perceptual awareness. Because there is no subjective experience to probe for unconscious perception, researchers have had to develop novel tasks to infer integration indirectly. For instance, Palmer and Ramsey (2012) presented auditory recordings of spoken syllables alongside videos of faces speaking either the same or different syllables, while masking the videos to prevent visual awareness. The conjunction of matching voices and faces predicted the location of a subsequent Gabor grating (target) on each trial. Participants indicated the location/orientation of the target more accurately when it appeared in the cued location (valid on 80% of trials), and the authors thus inferred that auditory and visual speech events were integrated in the absence of visual awareness. In this thesis, I investigated whether these findings generalise to the integration of auditory and visual expressions of emotion. In Experiment 1, I presented spatially informative cues in which congruent facial and vocal emotional expressions predicted the target location, with and without visual masking. I found no evidence of spatial cueing in either awareness condition. To investigate the lack of spatial cueing, in Experiment 2 I repeated the task with aware participants only, and had half of those participants explicitly report the emotional prosody. A significant spatial-cueing effect was found only when participants reported emotional prosody, suggesting that audiovisual congruence can cue spatial attention during aware perception. It remains unclear whether audiovisual congruence can cue spatial attention without awareness, and whether such effects genuinely imply multisensory integration.
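For readers unfamiliar with how such spatial-cueing effects are quantified, a minimal sketch follows: accuracy when the target appears at the location cued by the congruent audiovisual pair versus elsewhere, with an 80%-valid cue as in the paradigm described above. The variable names and accuracy values are illustrative assumptions, not code or data from the thesis.

```python
# Minimal sketch (not the thesis code) of a spatial-cueing analysis:
# compare accuracy at the cued versus uncued location.
import numpy as np

rng = np.random.default_rng(0)
n_trials = 400
cued = rng.random(n_trials) < 0.8            # cue is valid on 80% of trials

# Hypothetical per-condition accuracies; a benefit at the cued location
# would indicate that the audiovisual pairing drew spatial attention.
p_correct = np.where(cued, 0.75, 0.70)
correct = rng.random(n_trials) < p_correct

cueing_effect = correct[cued].mean() - correct[~cued].mean()
print(f"accuracy (cued):   {correct[cued].mean():.3f}")
print(f"accuracy (uncued): {correct[~cued].mean():.3f}")
print(f"spatial-cueing effect: {cueing_effect:+.3f}")
```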




2021
Author(s): Samuel D Gale, Chelsea Strawder, Corbett Bennett, Stefan Mihalas, Christof Koch, ...

Visual masking is used extensively to infer the timescale of conscious perception in humans, yet the underlying circuit mechanisms are not understood. We describe a robust backward masking paradigm in mice, in which the location of a briefly flashed grating is effectively masked within a 50 ms window after stimulus onset. Optogenetic silencing of visual cortex likewise reduces performance in this window, but response rates and accuracy do not match masking, demonstrating that cortical silencing and masking are distinct phenomena. Spiking responses recorded in primary visual cortex (V1) are consistent with masked behavior when quantified over long, but not short, time windows, indicating that masking involves further downstream processing. Accuracy and performance can be quantitatively recapitulated by a dual accumulator model constrained by V1 activity. The model and the animal's performance for the earliest decisions imply that the initial spike or two arriving from the periphery trigger a correct response, but that subsequent V1 spikes, evoked by the mask, degrade performance for later decisions. To test the necessity of visual cortex for backward masking, we optogenetically silenced mask-evoked cortical activity, which fully restored discrimination of the target location. Together, these results demonstrate that mice, like humans, are susceptible to backward visual masking and that visual cortex causally contributes to this process.
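The dual accumulator account lends itself to a toy simulation: two accumulators race to a bound, the target drives only the correct one, and mask-evoked activity arriving after the stimulus-onset asynchrony (SOA) drives both, diluting evidence that arrives late. The rates, bound, and noise below are illustrative assumptions, not the fitted, V1-constrained parameters of the paper.

```python
# Toy race model in the spirit of the dual accumulator description above.
import numpy as np

rng = np.random.default_rng(1)

def run_trial(soa_ms, dt=1.0, bound=5.0, target_rate=0.08,
              mask_rate=0.08, noise=0.3, t_max=200.0):
    """Race between two accumulators: the target drives only the correct
    one; after the mask arrives (at the SOA), mask-evoked activity drives
    both equally, degrading later decisions."""
    acc = np.zeros(2)                        # [correct side, wrong side]
    for t in np.arange(0.0, t_max, dt):
        drive = np.array([target_rate, 0.0])
        if t >= soa_ms:                      # mask-evoked V1 activity
            drive = drive + mask_rate
        acc += drive * dt + noise * np.sqrt(dt) * rng.standard_normal(2)
        acc = np.maximum(acc, 0.0)           # activity cannot go negative
        if acc.max() >= bound:
            return acc.argmax() == 0         # did the correct side win?
    return rng.random() < 0.5                # guess if no bound is reached

for soa in (17, 33, 50, 100):
    accuracy = np.mean([run_trial(soa) for _ in range(500)])
    print(f"SOA {soa:3d} ms -> accuracy {accuracy:.2f}")
```

Shorter SOAs give the correct accumulator less of a head start before mask-evoked drive floods both sides, so accuracy falls, mirroring masking at short target-mask intervals.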


2021
Author(s): Bernt Skottun

When two stimuli, s_1 and s_2, with different phase spectra are added, the Fourier amplitudes of the sum (s_1 + s_2) are smaller than the amplitudes of s_1 plus the amplitudes of s_2. Thus, the full amplitudes of the two stimuli cannot be contained in the amplitudes of the combined stimulus; that is, the amplitudes of s_1 and s_2 are effectively reduced when the stimuli are added. When the amplitudes of a stimulus have been reduced, it can no longer be held to be the same stimulus. Consequently, it is possible for two stimuli to have the same appearance but different amplitudes, and they therefore cannot be held to be the same stimulus. This, it would seem, has implications for a number of areas, among them visual masking, visual crowding, and the adding of noise to stimuli.
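The amplitude claim is an instance of the triangle inequality applied frequency by frequency; a compact statement in standard Fourier notation (the symbols A_k and phi_k are mine, not the paper's):

```latex
% At each spatial frequency f the two stimuli add as complex numbers,
% with amplitudes A_k(f) and phases \phi_k(f):
\[
\bigl|\,A_1(f)\,e^{i\phi_1(f)} + A_2(f)\,e^{i\phi_2(f)}\,\bigr|
\;\le\; A_1(f) + A_2(f),
\]
% with equality only when \phi_1(f) = \phi_2(f). Whenever the phase
% spectra differ, the amplitude of the sum at f is strictly smaller
% than the sum of the component amplitudes.
```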


2021
Author(s): Christopher Whyte, Jakob Hohwy, Ryan Smith

Cognitive theories of consciousness, such as global workspace theory and higher-order theories, posit that frontoparietal circuits play a crucial role in conscious access. However, recent studies using no-report paradigms have posed a challenge to cognitive theories by demonstrating conscious accessibility in the apparent absence of prefrontal cortex (PFC) activation. To address this challenge, this paper presents a computational model of conscious access, based upon active inference, that treats working memory gating as a cognitive action. We simulate a visual masking task and show that late P3b-like event-related potentials (ERPs), and increased PFC activity, are induced by the working memory demands of report. When reporting demands are removed, these late ERPs vanish and PFC activity is reduced. These results therefore reproduce, and potentially explain, results from no-report paradigms. However, even without reporting demands, our model shows that simulated PFC activity on visible stimulus trials still crosses the threshold for reportability – maintaining the link between PFC and conscious access. Therefore, our simulations show that evidence provided by no-report paradigms does not necessarily contradict cognitive theories of consciousness.
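The qualitative pattern described above can be illustrated with a deliberately simple toy simulation. This is not the authors' active inference model; the thresholds, time courses, and the "PFC activity" proxy are assumptions made purely for illustration.

```python
# Toy sketch: conscious access is independent of report demands, but a
# late, P3b-like wave of frontal activity appears only when a report is
# required (all parameters are illustrative assumptions).
import numpy as np

rng = np.random.default_rng(2)

def simulate_trial(visible, report_demanded, n_steps=100, threshold=1.0):
    """Accumulate sensory evidence; crossing the threshold stands in for
    conscious access. A late working-memory gating stage (engaged only
    when a report is demanded) adds a second wave of simulated frontal
    activity."""
    drift = 0.03 if visible else 0.005       # masked trials: weak evidence
    evidence = np.cumsum(drift + 0.02 * rng.standard_normal(n_steps))
    accessed = evidence.max() >= threshold   # access does not depend on
                                             # whether a report is required
    pfc = np.maximum(evidence, 0.0)          # toy "PFC activity" trace
    if accessed and report_demanded:
        pfc[60:] += 0.8                      # late gating / maintenance
    return accessed, pfc

for visible in (True, False):
    for report in (True, False):
        accessed, pfc = simulate_trial(visible, report)
        print(f"visible={visible!s:5}  report={report!s:5}  "
              f"accessed={accessed!s:5}  late activity={pfc[60:].mean():.2f}")
```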


2021
Author(s): Qi Dai, Lichang Yao, Ikue Hattori, Qiong Wu, Jiajia Yang, ...

2021
Author(s): Spencer Chen, Giacomo Benvenuti, Yuzhi Chen, Satwant Kumar, Charu Ramakrishnan, ...

Can direct stimulation of primate V1 substitute for a visual stimulus and mimic its perceptual effect? To address this question, we developed an optical-genetic toolkit to “read” neural population responses using widefield calcium imaging, while simultaneously using optogenetics to “write” neural responses into V1 of behaving macaques. We focused on the phenomenon of visual masking, where detection of a dim target is significantly reduced by a co-localized medium-brightness pedestal. Using our toolkit, we tested whether V1 optogenetic stimulation can recapitulate the perceptual masking effect of a visual pedestal. We find that, similar to a visual pedestal, low-power optostimulation can significantly reduce visual detection sensitivity, that a sublinear interaction between visual and optogenetic evoked V1 responses could account for this perceptual effect, and that these neural and behavioral effects are spatially selective. Our toolkit and results open the door for further exploration of perceptual substitutions by direct stimulation of sensory cortex.
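A small numerical sketch shows how a sublinear neural interaction can yield perceptual masking: with a compressive response function, the response increment evoked by a fixed dim target shrinks as pedestal drive grows, whether that pedestal is visual or optogenetic. The functional form and parameters are standard modelling assumptions, not values from the paper.

```python
# Toy illustration: sublinear (saturating) responses make the same target
# evoke a smaller increment on top of a stronger pedestal.
def v1_response(drive, r_max=1.0, c50=0.3):
    """Purely compressive response function: the same input increment
    produces a smaller output increment at higher drive."""
    return r_max * drive / (drive + c50)

target = 0.10                                 # dim target drive
for pedestal in (0.0, 0.2, 0.4):              # visual OR optogenetic pedestal
    delta_r = v1_response(pedestal + target) - v1_response(pedestal)
    print(f"pedestal drive {pedestal:.1f} -> target-evoked increment "
          f"{delta_r:.3f}")

# The increment evoked by the same target shrinks as pedestal drive grows;
# if detection sensitivity tracks this increment, the pedestal masks the
# target regardless of whether it is visual or optogenetic.
```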


2020
Vol 20 (11), pp. 450
Author(s): Tomoya Nakamura, Ikuya Murakami
