Rapid Audiovisual Temporal Recalibration Generalises Across Spatial Location

2019 ◽  
Vol 32 (3) ◽  
pp. 215-234 ◽  
Author(s):  
Angela Ju ◽  
Emily Orchard-Mills ◽  
Erik van der Burg ◽  
David Alais

Abstract Recent exposure to asynchronous multisensory signals has been shown to shift perceived timing between the sensory modalities, a phenomenon known as ‘temporal recalibration’. Recently, Van der Burg et al. (2013, J Neurosci, 33, pp. 14633–14637) reported results showing that recalibration to asynchronous audiovisual events can happen extremely rapidly. In an extended series of variously asynchronous trials, simultaneity judgements were analysed based on the modality order in the preceding trial and showed that shifts in the point of subjective synchrony occurred almost instantaneously, shifting from one trial to the next. Here we replicate the finding that shifts in perceived timing occur following exposure to a single, asynchronous audiovisual stimulus, and by manipulating the spatial location of the audiovisual events we demonstrate that recalibration occurs even when the adapting stimulus is presented in a different location. Timing shifts were also observed when the adapting audiovisual pair was defined only by temporal proximity, with the auditory component presented over headphones rather than being collocated with the visual stimulus. Combined with previous findings showing that timing shifts are independent of stimulus features such as colour and pitch, our finding that recalibration is not spatially specific provides strong evidence for a rapid recalibration process that depends solely on recent temporal information, regardless of feature or location. These rapid and automatic shifts in perceived synchrony may allow our sensory systems to flexibly adjust to variations in the timing of neural signals arising from delayed environmental transmission and differing neural latencies for processing vision and audition.
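To make the trial-sorting analysis concrete, the sketch below (an illustrative reconstruction in Python, not the authors' code) splits simultaneity judgements by the modality order of the preceding trial and fits a Gaussian to each subset to estimate the point of subjective simultaneity (PSS); the gap between the two estimates indexes rapid recalibration. The Gaussian model, variable names, and starting values are assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(soa, amp, pss, sigma):
    # Proportion of 'simultaneous' responses, peaking at the PSS.
    return amp * np.exp(-0.5 * ((soa - pss) / sigma) ** 2)

def pss_by_previous_trial(soas, judged_simultaneous):
    """soas: audiovisual SOA per trial in ms (negative = audio first).
    judged_simultaneous: 1 if the pair was judged simultaneous, else 0."""
    prev_audio_first = soas[:-1] < 0               # modality order on trial n-1
    cur_soa, cur_resp = soas[1:], judged_simultaneous[1:]
    estimates = {}
    for label, mask in (("after audio-first", prev_audio_first),
                        ("after vision-first", ~prev_audio_first)):
        levels = np.unique(cur_soa[mask])          # tested SOA levels
        props = np.array([cur_resp[mask][cur_soa[mask] == s].mean()
                          for s in levels])        # P('simultaneous') per level
        (_, pss, _), _ = curve_fit(gaussian, levels, props,
                                   p0=[1.0, 0.0, 100.0])
        estimates[label] = pss
    return estimates   # difference between conditions = rapid recalibration
```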

2020 ◽  
Vol 123 (6) ◽  
pp. 2406-2425
Author(s):  
Tyler R. Sizemore ◽  
Laura M. Hurley ◽  
Andrew M. Dacks

The serotonergic system has been widely studied across animal taxa and different functional networks. This modulatory system is therefore well positioned to compare the consequences of neuromodulation for sensory processing across species and modalities at multiple levels of sensory organization. Serotonergic neurons that innervate sensory networks often bidirectionally exchange information with these networks but also receive input representative of motor events or motivational state. This convergence of information supports serotonin’s capacity for contextualizing sensory information according to the animal’s physiological state and external events. At the level of sensory circuitry, serotonin can have variable effects due to differential projections across specific sensory subregions, as well as differential serotonin receptor type expression within those subregions. Functionally, this infrastructure may gate or filter sensory inputs to emphasize specific stimulus features or select among different streams of information. The near-ubiquitous presence of serotonin and other neuromodulators within sensory regions, coupled with their strong effects on stimulus representation, suggests that these signaling pathways should be considered integral components of sensory systems.


2017 ◽  
Vol 372 (1714) ◽  
pp. 20160099 ◽  
Author(s):  
Hirohito M. Kondo ◽  
Anouk M. van Loon ◽  
Jun-Ichiro Kawahara ◽  
Brian C. J. Moore

We perceive the world as stable and composed of discrete objects even though auditory and visual inputs are often ambiguous owing to spatial and temporal occluders and changes in the conditions of observation. This raises important questions regarding where and how ‘scene analysis’ is performed in the brain. Recent advances from both auditory and visual research suggest that the brain does not simply process the incoming scene properties. Rather, top-down processes such as attention, expectations and prior knowledge facilitate scene perception. Thus, scene analysis is linked not only with the extraction of stimulus features and formation and selection of perceptual objects, but also with selective attention, perceptual binding and awareness. This special issue covers novel advances in scene-analysis research obtained using a combination of psychophysics, computational modelling, neuroimaging and neurophysiology, and presents new empirical and theoretical approaches. For integrative understanding of scene analysis beyond and across sensory modalities, we provide a collection of 15 articles that enable comparison and integration of recent findings in auditory and visual scene analysis. This article is part of the themed issue ‘Auditory and visual scene analysis’.


2015 ◽  
Vol 282 (1804) ◽  
pp. 20143083 ◽  
Author(s):  
Erik Van der Burg ◽  
Patrick T. Goodbourn

The brain is adaptive. The speed of propagation through air, and of low-level sensory processing, differs markedly between auditory and visual stimuli; yet the brain can adapt to compensate for the resulting cross-modal delays. Studies investigating temporal recalibration to audiovisual speech have used prolonged adaptation procedures, suggesting that adaptation is sluggish. Here, we show that adaptation to asynchronous audiovisual speech occurs rapidly. Participants viewed a brief clip of an actor pronouncing a single syllable. The voice was either advanced or delayed relative to the corresponding lip movements, and participants were asked to make a synchrony judgement. Although we did not use an explicit adaptation procedure, we demonstrate rapid recalibration based on a single audiovisual event. We find that the point of subjective simultaneity on each trial is highly contingent upon the modality order of the preceding trial. We find compelling evidence that rapid recalibration generalizes across different stimuli and different actors. Finally, we demonstrate that rapid recalibration occurs even when auditory and visual events clearly belong to different actors. These results suggest that rapid temporal recalibration to audiovisual speech is primarily mediated by basic temporal factors, rather than higher-order factors such as perceived simultaneity and source identity.
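A complementary, model-free way to test the lag-1 contingency described above is a logistic regression with the previous trial's modality order as a history regressor. The sketch below is a hedged illustration under assumed variable names, not the authors' pipeline; a non-zero weight on the history term indicates rapid recalibration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def rapid_recalibration_effect(soas, judged_synchronous):
    """soas: SOA per trial in ms (negative = voice leads lip movements).
    judged_synchronous: 0/1 synchrony judgements, one per trial."""
    prev_voice_first = (soas[:-1] < 0).astype(float)   # lag-1 modality order
    X = np.column_stack([soas[1:], prev_voice_first])  # current SOA + history term
    y = judged_synchronous[1:]
    model = LogisticRegression().fit(X, y)
    # Weight on the history regressor: how strongly the previous trial's
    # modality order shifts the current synchrony judgement.
    return model.coef_[0][1]
```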


2020 ◽  
Vol 30 (5) ◽  
pp. 2823-2833 ◽  
Author(s):  
Elisa C Dias ◽  
Abraham C Van Voorhis ◽  
Filipe Braga ◽  
Julianne Todd ◽  
Javier Lopez-Calderon ◽  
...  

Abstract During normal visual behavior, individuals scan the environment through a series of saccades and fixations. At each fixation, the phase of ongoing rhythmic neural oscillations is reset, thereby increasing the efficiency of subsequent visual processing. This phase-reset is reflected in the generation of a fixation-related potential (FRP). Here, we evaluate the integrity of theta phase-reset/FRP generation during a Guided Visual Search task in schizophrenia. Subjects performed serial and parallel versions of the task. An initial study (15 healthy controls (HC)/15 schizophrenia patients (SCZ)) investigated behavioral performance parametrically across stimulus features and set-sizes. A subsequent study (25 HC/25 SCZ) evaluated the integrity of search-related FRP generation relative to search performance and evaluated visual span size as an index of parafoveal processing. Search times were significantly increased for patients versus controls across all conditions. Furthermore, significant deficits were observed in fixation-related theta phase-reset across conditions, which fully predicted reduced visual span and impaired search performance and correlated with impaired visual components of neurocognitive processing. By contrast, overall search strategy was similar between groups. Deficits in theta phase-reset mechanisms are increasingly documented across sensory modalities in schizophrenia. Here, we demonstrate that deficits in fixation-related theta phase-reset during naturalistic visual processing underlie impaired efficiency of early visual function in schizophrenia.
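Theta phase-reset is conventionally indexed by inter-trial phase coherence (ITC) around fixation onset: if each fixation resets ongoing theta, phases align across trials and ITC approaches 1. The sketch below is a minimal illustration under assumed filter settings and array shapes, not the study's pipeline.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def theta_itc(epochs, fs, band=(4.0, 8.0)):
    """epochs: (n_trials, n_samples) EEG, time-locked to fixation onset.
    fs: sampling rate in Hz. Returns ITC at each time sample."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    theta = filtfilt(b, a, epochs, axis=1)       # theta-band signal per trial
    phase = np.angle(hilbert(theta, axis=1))     # instantaneous phase
    # Length of the mean unit phase vector across trials
    # (0 = random phases, 1 = perfectly aligned, i.e. a full phase reset).
    return np.abs(np.mean(np.exp(1j * phase), axis=0))
```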


2016 ◽  
Vol 28 (8) ◽  
pp. 1090-1097 ◽  
Author(s):  
Jason Samaha ◽  
Thomas C. Sprague ◽  
Bradley R. Postle

Many aspects of perception and cognition are supported by activity in neural populations that are tuned to different stimulus features (e.g., orientation, spatial location, color). Goal-directed behavior, such as sustained attention, requires a mechanism for the selective prioritization of contextually appropriate representations. A candidate mechanism of sustained spatial attention is neural activity in the alpha band (8–13 Hz), whose power in the human EEG covaries with the focus of covert attention. Here, we applied an inverted encoding model to assess whether spatially selective neural responses could be recovered from the topography of alpha-band oscillations during spatial attention. Participants were cued to covertly attend to one of six spatial locations arranged concentrically around fixation while EEG was recorded. A linear classifier applied to EEG data during sustained attention demonstrated successful classification of the attended location from the topography of alpha power, although not from other frequency bands. We next sought to reconstruct the focus of spatial attention over time by applying inverted encoding models to the topography of alpha power and phase. Alpha power, but not phase, allowed for robust reconstructions of the specific attended location beginning around 450 msec postcue, an onset earlier than previous reports. These results demonstrate that posterior alpha-band oscillations can be used to track activity in feature-selective neural populations with high temporal precision during the deployment of covert spatial attention.
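The inverted-encoding-model logic can be sketched compactly, assuming Brouwer-and-Heeger-style half-rectified cosine channels for the six cued locations; this is an illustrative reconstruction under those assumptions, not the authors' pipeline.

```python
import numpy as np

def make_basis(stim_angles, channel_centers, exponent=5):
    # Half-rectified cosine tuning curves over location (angles in radians).
    d = stim_angles[:, None] - channel_centers[None, :]
    return np.maximum(0.0, np.cos(d)) ** exponent        # (n_trials, n_channels)

def train_iem(alpha_topo, C):
    """alpha_topo: (n_trials, n_electrodes) alpha power; C: channel matrix.
    Solve alpha_topo = C @ W for the electrode weights W by least squares."""
    W, *_ = np.linalg.lstsq(C, alpha_topo, rcond=None)   # (n_channels, n_electrodes)
    return W

def invert_iem(alpha_topo_test, W):
    # Reconstruct channel responses from held-out topographies:
    # C_test = B_test @ W^T (W W^T)^-1, the right pseudoinverse of W.
    return alpha_topo_test @ np.linalg.pinv(W)

# e.g. six location channels spaced concentrically around fixation:
centers = np.linspace(0, 2 * np.pi, 6, endpoint=False)
```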


2019 ◽  
Author(s):  
Kirsten Ziman ◽  
Madeline R. Lee ◽  
Alejandro R. Martinez ◽  
Ethan D. Adner ◽  
Jeremy R. Manning

Our ongoing subjective experiences, and our memories of those experiences, are shaped by our prior experiences, goals, and situational understanding. These factors shape how we allocate our attentional resources over different aspects of our ongoing experiences. These attentional shifts may happen overtly (e.g., when we change where we are looking) or covertly (i.e., without any explicit physical manifestation). Additionally, we may attend to what is happening at a specific spatial location (e.g., because we think something important is happening there) or we may attend to particular features irrespective of their locations (e.g., when we search for a friend's face in a crowd). We ran two covert attention experiments that differed in how long participants were asked to maintain attentional focus on the features or locations they were attending to. Later, the participants performed a recognition memory task for attended, unattended, and novel stimuli. Participants were able to shift their covert attentional focus to new locations more rapidly than to new stimulus features, and the effects of location-based attention on memory were longer-lasting than the effects of feature-based attention.
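Recognition memory for attended, unattended, and novel items is typically scored with signal detection theory: hits are 'old' responses to previously seen items, false alarms are 'old' responses to novel lures, and d' separates memory strength from response bias. A small illustrative sketch (names and counts assumed, not the study's analysis code):

```python
import numpy as np
from scipy.stats import norm

def dprime(hits, misses, false_alarms, correct_rejections):
    # Log-linear correction avoids infinite z-scores at rates of 0 or 1.
    h = (hits + 0.5) / (hits + misses + 1)
    f = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return norm.ppf(h) - norm.ppf(f)

# e.g. memory for attended vs unattended items against the same lures:
# dprime(42, 8, 12, 38) vs dprime(30, 20, 12, 38)
```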


2020 ◽  
Author(s):  
Or Yizhar ◽  
Galit Buchs ◽  
Benedetta Heimler ◽  
Doron Friedman ◽  
Amir Amedi

Abstract Perceiving the spatial location and physical dimensions of objects that we touch is crucial for goal-directed actions. To achieve this, our brain transforms skin-based coordinates into an external reference frame by integrating visual and proprioceptive cues, a process known as tactile remapping. In the current study, we examine the role of proprioception in the remapping process when information from the more dominant visual modality is withheld. We developed a new visual-to-touch sensory substitution device and asked participants to perform a spatial localization task in three different arm postures, including posture switches between blocks of trials. We observed that, in the absence of visual information, novel proprioceptive inputs can be overridden after switching postures. This behavior demonstrates effective top-down modulation of proprioception and points to the unequal contribution of different sensory modalities to tactile remapping.
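One simple behavioural contrast implied by this design is localization accuracy immediately after a posture switch versus later in the same block; a persistent gap would suggest the new proprioceptive input is being overridden. The sketch below is a hypothetical illustration with assumed field names, not the authors' analysis.

```python
import numpy as np

def accuracy_by_switch(correct, trials_since_switch, n_early=10):
    """correct: 0/1 localization accuracy per trial.
    trials_since_switch: trial index since the last posture switch."""
    correct = np.asarray(correct)
    idx = np.asarray(trials_since_switch)
    early = correct[idx < n_early].mean()    # trials just after the switch
    late = correct[idx >= n_early].mean()    # trials once the posture is familiar
    return early, late
```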


2019 ◽  
Author(s):  
Marc M. Himmelberg ◽  
Federico G. Segala ◽  
Ryan T. Maloney ◽  
Julie M. Harris ◽  
Alex R. Wade

Abstract Two stereoscopic cues that underlie the perception of motion-in-depth (MID) are changes in retinal disparity over time (CD) and interocular velocity differences (IOVD). These cues have independent spatiotemporal sensitivity profiles, depend upon different low-level stimulus properties, and are potentially processed along separate cortical pathways. Here, we ask whether these MID cues code for different motion directions: do they give rise to discriminable patterns of neural signals, and is there evidence for their convergence onto a single ‘motion-in-depth’ pathway? To answer this, we use a decoding algorithm to test whether, and when, patterns of electroencephalogram (EEG) signals measured from across the full scalp, generated in response to CD- and IOVD-isolating stimuli moving towards or away in depth, can be distinguished. We find that both MID cue type and 3D-motion direction can be decoded at different points in the EEG timecourse and that direction decoding cannot be accounted for by static disparity information. Remarkably, we find evidence for late processing convergence: IOVD motion direction can be decoded relatively late in the timecourse based on a decoder trained on CD stimuli, and vice versa. We conclude that early CD and IOVD direction decoding performance is dependent upon fundamentally different low-level stimulus features, but that later stages of decoding performance may be driven by a central, shared pathway that is agnostic to these features. Overall, these data are the first to show that neural responses to CD and IOVD cues that move towards and away in depth can be decoded from EEG signals, and that different aspects of MID cues contribute to decoding performance at different points along the EEG timecourse.
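The cross-cue generalization test described above can be sketched as time-resolved decoding: train a classifier on CD trials at each time point and test it on IOVD trials (and vice versa), so that above-chance accuracy late in the timecourse points to a shared direction code. Array shapes, names, and the choice of classifier are assumptions for illustration, not the authors' pipeline.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def cross_decode(X_train, y_train, X_test, y_test):
    """X_*: (n_trials, n_electrodes, n_times) EEG epochs.
    y_*: 3D-motion direction labels (towards / away).
    Returns cross-decoding accuracy at each time point."""
    n_times = X_train.shape[2]
    acc = np.empty(n_times)
    for t in range(n_times):
        clf = LinearDiscriminantAnalysis().fit(X_train[:, :, t], y_train)
        acc[t] = clf.score(X_test[:, :, t], y_test)
    return acc

# train on CD, test on IOVD (then swap) to probe late convergence:
# acc_cd_to_iovd = cross_decode(X_cd, y_cd, X_iovd, y_iovd)
```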


2019 ◽  
Author(s):  
Francisco Cervantes Constantino ◽  
Santiago Garat ◽  
Eliana Nicolaisen-Sobesky ◽  
Valentina Paz ◽  
Eduardo Martínez-Montes ◽  
...  

Abstract Electing whether to cooperate with someone else is well typified in the iterated prisoner’s dilemma (iPD) game, although the neural processes that unfold after its distinct outcomes have been only partly described. Recent theoretical models emphasize the ubiquity of intuitive cooperation, raising questions on the neural timelines involved. We studied the outcome stage of an iPD with electroencephalography (EEG) methods. Results showed that neural signals that are modulated by the iPD outcomes can also be indicative of future choice, in an outcome-dependent manner: (i) after zero-gain ‘sucker’s payoffs’ (unreciprocated cooperation), a participant’s decision thereafter may be indicated by changes to the feedback-related negativity (FRN); (ii) after one-sided non-cooperation (participant gain), by the P3; (iii) after mutual cooperation, by late frontal delta-band modulations. Critically, faster choices to reciprocate cooperation were predicted, on a single-trial basis, by P3 and frontal delta modulations at the immediately preceding trial. Delta band signaling is considered in relation to homeostatic regulation processing in the literature. The findings relate feedback to decisional processes in the iPD, providing a first neural account of the brief timelines implied in heuristic modes of cooperation.
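The single-trial linking analysis can be sketched as a lag-1 regression: the mean amplitude in a P3 window on trial n is used to predict the response time on trial n+1. The sketch below is a hedged illustration with assumed variable names and window, not the authors' pipeline.

```python
import numpy as np
from scipy.stats import linregress

def erp_predicts_next_rt(epochs, times, rts, window=(0.3, 0.5)):
    """epochs: (n_trials, n_samples) outcome-locked EEG at a centro-parietal site.
    times: sample times in seconds; rts: choice response time per trial."""
    sel = (times >= window[0]) & (times <= window[1])
    p3 = epochs[:, sel].mean(axis=1)      # single-trial P3 amplitude
    # Regress trial n+1 RT on trial n amplitude (lag-1 prediction).
    return linregress(p3[:-1], rts[1:])
```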


2016 ◽  
Author(s):  
Peter Kok ◽  
Lieke L.F. van Lieshout ◽  
Floris P. de Lange

Abstract During natural perception, we often form expectations about upcoming input. These expectations are usually multifaceted – we expect a particular object at a particular location. However, expectations about spatial location and stimulus features have mostly been studied in isolation, and it is unclear whether feature-based expectation can be spatially specific. Interestingly, feature-based attention automatically spreads to unattended locations. It is still an open question whether the neural mechanisms underlying feature-based expectation differ from those underlying feature-based attention. Therefore, establishing whether the effects of feature-based expectation are spatially specific may inform this debate. Here, we investigated this by inducing expectations of a specific stimulus feature at a specific location, and probing the effects on sensory processing across the visual field using fMRI. We found an enhanced sensory response for unexpected stimuli, which was elicited only when there was a violation of expectation at the specific location where participants formed a stimulus expectation. The neural consequences of this expectation violation, however, spread to cortical locations processing the stimulus in the opposite hemifield. This suggests that an expectation violation at one location in the visual world can lead to a spatially non-specific gain increase across the visual field.
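The key contrast here is an unexpected-minus-expected response computed separately for voxels representing the stimulus in the trained hemifield and in the opposite hemifield; a comparable surprise response in both would indicate a spatially non-specific gain increase. A minimal sketch under assumed array names (not the study's analysis code):

```python
import numpy as np

def surprise_response(betas, expected_mask, roi_masks):
    """betas: (n_trials, n_voxels) single-trial response estimates.
    expected_mask: True where the stimulus matched the expectation.
    roi_masks: dict mapping ROI name -> boolean voxel mask."""
    out = {}
    for name, vox in roi_masks.items():
        unexpected = betas[~expected_mask][:, vox].mean()
        expected = betas[expected_mask][:, vox].mean()
        out[name] = unexpected - expected   # expectation-violation effect per ROI
    return out
```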

