Awareness and Integration: Understanding the Challenges of Inferring Multisensory Integration Outside of Awareness

2021
Author(s): Daniel Jenkins
Multisensory integration describes the cognitive processes by which information from various perceptual domains is combined to create coherent percepts. For consciously aware perception, multisensory integration can be inferred when information in one perceptual domain influences subjective experience in another. Yet the relationship between integration and awareness is not well understood. One current question is whether multisensory integration can occur in the absence of perceptual awareness. Because there is no subjective experience of unconscious perception, researchers have had to develop novel tasks to infer integration indirectly. For instance, Palmer and Ramsey (2012) presented auditory recordings of spoken syllables alongside videos of faces speaking either the same or different syllables, while masking the videos to prevent visual awareness. The conjunction of matching voices and faces predicted the location of a subsequent Gabor grating (target) on each trial. Participants indicated the location/orientation of the target more accurately when it appeared in the cued location (valid on 80% of trials); the authors therefore inferred that auditory and visual speech events had been integrated in the absence of visual awareness. In this thesis, I investigated whether these findings generalise to the integration of auditory and visual expressions of emotion. In Experiment 1, I presented spatially informative cues in which congruent facial and vocal emotional expressions predicted the target location, with and without visual masking. I found no evidence of spatial cueing in either awareness condition. To investigate the lack of spatial cueing, in Experiment 2 I repeated the task with aware participants only and had half of them explicitly report the emotional prosody. A significant spatial-cueing effect was found only when participants reported the emotional prosody, suggesting that audiovisual congruence can cue spatial attention during aware perception. It remains unclear whether audiovisual congruence can cue spatial attention without awareness, and whether such effects genuinely imply multisensory integration.
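To make the cueing logic concrete, here is a minimal sketch, in Python, of how a spatial-cueing effect is computed from trial data; the trial count, validity rate, and accuracy levels are illustrative assumptions, not the thesis's actual data or analysis.

```python
import numpy as np

# Hypothetical trial data: congruent audiovisual cues indicate the upcoming
# target location on 80% of trials ("valid" trials), as in the 80%-validity
# design described above. Accuracy levels are invented for illustration.
rng = np.random.default_rng(1)
n_trials = 400
valid = rng.random(n_trials) < 0.8                  # cue validity per trial
accuracy = np.where(valid,
                    rng.random(n_trials) < 0.75,    # assumed valid-trial accuracy
                    rng.random(n_trials) < 0.65)    # assumed invalid-trial accuracy

# The spatial-cueing effect is the valid-minus-invalid performance difference;
# a reliably positive effect implies the cue drew attention to the target location.
cueing_effect = accuracy[valid].mean() - accuracy[~valid].mean()
print(f"spatial-cueing effect: {cueing_effect:.3f}")
```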


2018
Vol 30 (8), pp. 1119–1129
Author(s): Cooper A. Smout, Jason B. Mattingley
Recent evidence suggests that voluntary spatial attention can affect neural processing of visual stimuli that do not enter conscious awareness (i.e., invisible stimuli), supporting the notion that attention and awareness are dissociable processes [Wyart, V., Dehaene, S., & Tallon-Baudry, C. Early dissociation between neural signatures of endogenous spatial attention and perceptual awareness during visual masking. Frontiers in Human Neuroscience, 6, 1–14, 2012; Watanabe, M., Cheng, K., Murayama, Y., Ueno, K., Asamizuya, T., Tanaka, K., et al. Attention but not awareness modulates the BOLD signal in the human V1 during binocular suppression. Science, 334, 829–831, 2011]. To date, however, no study has demonstrated that these effects reflect enhancement of the neural representation of invisible stimuli per se, as opposed to other neural processes not specifically tied to the stimulus in question. In addition, it remains unclear whether spatial attention can modulate neural representations of invisible stimuli in direct competition with highly salient and visible stimuli. Here we developed a novel EEG frequency-tagging paradigm to obtain a continuous readout of human brain activity associated with visible and invisible signals embedded in dynamic noise. Participants (n = 23) detected occasional contrast changes in one of two flickering image streams on either side of fixation. Each image stream contained a visible or invisible signal embedded in every second noise image, the visibility of which was titrated and checked using a two-interval forced-choice detection task. Steady-state visual evoked potentials were computed from EEG data at the signal and noise frequencies of interest. Cluster-based permutation analyses revealed significant neural responses to both visible and invisible signals across posterior scalp electrodes. Control analyses revealed that these responses did not reflect a subharmonic response to noise stimuli. In line with previous findings, spatial attention increased the neural representation of visible signals. Crucially, spatial attention also increased the neural representation of invisible signals. As such, the present results replicate and extend previous studies by demonstrating that attention can modulate the neural representation of invisible signals that are in direct competition with highly salient masking stimuli.
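As a rough illustration of the frequency-tagging readout (the sampling rate, epoch length, and 7.5 Hz tag below are placeholder values, not the study's actual parameters), the steady-state response at a tagged frequency can be extracted from the amplitude spectrum of an electrode's time course:

```python
import numpy as np

def ssvep_amplitude(eeg, fs, freq):
    """Single-sided amplitude spectrum of one channel at a tagged frequency.

    eeg:  1-D array, one electrode's time course (arbitrary units)
    fs:   sampling rate (Hz)
    freq: tagging frequency of interest (Hz)
    """
    spectrum = 2 * np.abs(np.fft.rfft(eeg)) / len(eeg)   # single-sided amplitudes
    freqs = np.fft.rfftfreq(len(eeg), d=1 / fs)
    return spectrum[np.argmin(np.abs(freqs - freq))]      # bin nearest the tag

# Synthetic example: a weak 7.5 Hz "signal" tag buried in broadband noise.
fs, dur = 500, 20                                   # 500 Hz sampling, 20 s epoch
t = np.arange(0, dur, 1 / fs)
rng = np.random.default_rng(2)
eeg = 1.0 * np.sin(2 * np.pi * 7.5 * t) + 5.0 * rng.standard_normal(len(t))
print(ssvep_amplitude(eeg, fs, 7.5))                # ~1.0: the tag emerges from noise
```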


2006
Vol 18 (2), pp. 258–266
Author(s): R. Weidner, N. J. Shah, G. R. Fink

Four-dot masking is a new form of visual masking that does not involve local contour interactions or spatial superimposition of the target stimulus and the mask (as, e.g., in pattern or metacontrast masking). Rather, the effective masking mechanism is based on object substitution. Object substitution masking occurs when low-level visual representations are altered before target identification, achieved through iterative interaction with high-level visual processing stages, has been completed. Interestingly, object substitution interacts with attentional processes: strong masking effects are observed when attentional orienting toward the target location is delayed. In contrast, no masking occurs when attention can be rapidly shifted to and engaged at the target location. We investigated the neural basis of object substitution masking by studying the interaction of spatial attention and masking processes using functional magnetic resonance imaging. Behavioral data indicated a two-way interaction between the factors Spatial Attention (valid vs. invalid cueing) and Masking (four-dot vs. pattern masking). As expected, spatial attention improved performance more strongly during object substitution masking. Functional correlates of this interaction were found in the primary visual cortex, higher visual areas, and the left intraparietal sulcus. A region-of-interest analysis in these areas revealed that the largest blood oxygenation level-dependent signal changes occurred during effective four-dot masking. In contrast, the weakest signal changes in these areas were observed when target visibility was highest. The data suggest that these areas constitute an object substitution network dedicated to the generation and testing of perceptual hypotheses, as described by the object substitution theory of masking of Di Lollo et al. [Competition for consciousness among visual events: The psychophysics of reentrant visual processes. Journal of Experimental Psychology: General, 129, 481–507, 2000].
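A toy numerical illustration of the reported two-way interaction (the accuracy values are invented, not the study's data): the attentional benefit, valid minus invalid, is larger under four-dot masking than under pattern masking, and the interaction term is the difference of those differences.

```python
import numpy as np

# Hypothetical mean accuracies for the 2 x 2 design:
# rows = Masking (four-dot, pattern); cols = Spatial Attention (valid, invalid).
acc = np.array([[0.85, 0.60],    # four-dot: large benefit of valid cueing
                [0.80, 0.75]])   # pattern: small benefit

benefit = acc[:, 0] - acc[:, 1]          # attentional benefit per masking type
interaction = benefit[0] - benefit[1]    # difference of differences
print(benefit, interaction)              # [0.25 0.05] 0.2 -> interaction present
```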


2021
Vol 49 (9), pp. 1–6
Author(s): Jiyou Gu, Huiqin Dong

Using a spatial-cueing paradigm in which trait words were set as visual cues and gender words as auditory targets, we examined whether cross-modal spatial attention is influenced by gender stereotypes. Results of an experiment conducted with 24 participants indicate that they responded more readily to targets in the valid-cue condition (i.e., when the cue appeared at the same position as the target), regardless of the modality of cues and targets, which is consistent with the cross-modal attention effect found in previous studies. Participants tended to focus on targets that were stereotype-consistent with cues only when the cues were valid, which shows that stereotype-consistent information facilitated visual–auditory cross-modal spatial attention. These results suggest that cognitive schemas, such as gender stereotypes, affect cross-modal spatial attention.


2018
Vol 119 (5), pp. 1981–1992
Author(s): Laura Mikula, Valérie Gaveau, Laure Pisella, Aarlenne Z. Khan, Gunnar Blohm

When reaching to an object, information about the target location as well as the initial hand position is required to program the motor plan for the arm. The initial hand position can be determined by proprioceptive information as well as visual information, if available. Bayes-optimal integration posits that we utilize all available information, with greater weighting on the sense that is more reliable, thus generally weighting visual information more than the usually less reliable proprioceptive information. The criterion by which information is weighted has not been explicitly investigated; it has been assumed that the weights are based on task- and effector-dependent sensory reliability, requiring an explicit neuronal representation of variability. However, the weights could also be determined implicitly through learned modality-specific integration weights rather than effector-dependent reliability. Whereas the former hypothesis predicts different proprioceptive weights for the left and right hands, e.g., due to different reliabilities of dominant vs. nondominant hand proprioception, the latter predicts the same integration weights for both hands. We found that the proprioceptive weights for the left and right hands were extremely consistent regardless of differences in sensory variability for the two hands, as measured in two separate complementary tasks. Thus we propose that proprioceptive weights during reaching are learned across both hands, with a high interindividual range but independent of each hand's specific proprioceptive variability.

NEW & NOTEWORTHY How visual and proprioceptive information about the hand are integrated to plan a reaching movement is still debated. The goal of this study was to clarify how the weights assigned to vision and proprioception during multisensory integration are determined. We found evidence that the integration weights are modality specific rather than based on the sensory reliabilities of the effectors.
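The reliability-weighted scheme at issue can be stated compactly: each cue is weighted by its inverse variance, so for visual and proprioceptive position estimates x_vis and x_prop the combined estimate is x_hat = w * x_vis + (1 - w) * x_prop, with w = (1/sigma_vis^2) / (1/sigma_vis^2 + 1/sigma_prop^2). A minimal sketch, with made-up numbers standing in for measured reliabilities:

```python
def fuse(x_vis, sigma_vis, x_prop, sigma_prop):
    """Minimum-variance (Bayes-optimal) fusion of vision and proprioception.

    Each cue is weighted by its inverse variance, so the more reliable
    cue dominates the combined estimate of initial hand position.
    """
    w_vis = (1 / sigma_vis**2) / (1 / sigma_vis**2 + 1 / sigma_prop**2)
    x_hat = w_vis * x_vis + (1 - w_vis) * x_prop
    sigma_hat = (1 / sigma_vis**2 + 1 / sigma_prop**2) ** -0.5
    return x_hat, sigma_hat, w_vis

# Made-up example: vision (sigma = 0.5 cm) is twice as reliable as
# proprioception (sigma = 1.0 cm), so vision gets weight 0.8.
print(fuse(x_vis=10.0, sigma_vis=0.5, x_prop=11.0, sigma_prop=1.0))
# (10.2, ~0.447, 0.8): the fused estimate is more precise than either cue alone
```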


1996
Vol 49 (2), pp. 490–518
Author(s): Anthony J. Lambert, Alexander L. Sumich

Three experiments tested whether spatial attention can be influenced by a predictive relation between incidental information and the location of target events. Subjects performed a simple dot-detection task; 600 msec prior to each target, a word was presented briefly 5° to the left or right of fixation. There was a predictive relationship between the semantic category (living or non-living) of the words and target location. However, subjects were instructed to ignore the words, and a post-experiment questionnaire confirmed that they remained unaware of the word–target relationship. In all three experiments, given some practice on the task, response times were faster when targets appeared at likely (p = 0.8), compared to unlikely (p = 0.2), locations in relation to lateral word category. Experiments 2 and 3 confirmed that this effect was driven by semantic encoding of the irrelevant words and not by repetition of individual stimuli. Theoretical implications of this finding are discussed.


PLoS ONE
2019
Vol 14 (3), e0212998
Author(s): Jiaqing Chen, Jagjot Kaur, Hana Abbas, Ming Wu, Wenyi Luo, ...

Perception
2016
Vol 46 (1), pp. 6–17
Author(s): N. Van der Stoep, S. Van der Stigchel, T. C. W. Nijboer, C. Spence

Multisensory integration (MSI) and exogenous spatial attention can both speed up responses to perceptual events. Recently, it has been shown that audiovisual integration at exogenously attended locations is reduced relative to unattended locations. This effect was observed at short cue–target intervals (200–250 ms). At longer intervals, however, the initial benefits of exogenous shifts of spatial attention at the cued location are often replaced by response time (RT) costs (also known as inhibition of return, IOR). Given these opposing cueing effects at shorter versus longer intervals, we decided to investigate whether MSI would also be affected by IOR. Uninformative exogenous visual spatial cues were presented between 350 and 450 ms prior to the onset of auditory, visual, and audiovisual targets. As expected, IOR was observed for visual targets (invalid-cue RT < valid-cue RT). For auditory and audiovisual targets, neither IOR nor any spatial cueing effects were observed. The amount of relative multisensory response enhancement and race model inequality violation was larger for uncued as compared with cued locations, indicating that IOR reduces MSI. The results are discussed in the context of changes in unisensory signal strength at cued as compared with uncued locations.
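The race model inequality mentioned above (Miller's bound) states that, if redundant-target speedups arise merely from a race between independent unisensory processes, then P(RT ≤ t | AV) ≤ P(RT ≤ t | A) + P(RT ≤ t | V) for every t; violations are taken as evidence of genuine integration. A minimal sketch of the test, with simulated RTs standing in for real data:

```python
import numpy as np

def race_violation(rt_a, rt_v, rt_av, t_grid):
    """Miller's race model inequality test.

    Returns P(RT <= t | AV) minus the race bound
    min(P(RT <= t | A) + P(RT <= t | V), 1) at each grid point;
    positive values indicate a violation (evidence of integration).
    """
    cdf = lambda rts: np.mean(rts[:, None] <= t_grid, axis=0)  # empirical CDFs
    bound = np.minimum(cdf(rt_a) + cdf(rt_v), 1.0)
    return cdf(rt_av) - bound

# Simulated RTs (ms) for auditory, visual, and audiovisual targets.
rng = np.random.default_rng(3)
rt_a = rng.normal(320, 40, 200)
rt_v = rng.normal(340, 40, 200)
rt_av = rng.normal(280, 35, 200)   # faster than either unisensory condition
t_grid = np.arange(150, 501, 10)
print(race_violation(rt_a, rt_v, rt_av, t_grid).max())  # > 0 -> violation
```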

