Auditory and visual distractors disrupt multisensory temporal acuity in the crossmodal temporal order judgment task

PLoS ONE ◽  
2017 ◽  
Vol 12 (7) ◽  
pp. e0179564 ◽  
Author(s):  
Cassandra L. Dean ◽  
Brady A. Eggleston ◽  
Kyla David Gibney ◽  
Enimielen Aligbe ◽  
Marissa Blackwell ◽  
...  
2020 ◽  
Author(s):  
Vincent van de Ven ◽  
Moritz Jaeckels ◽  
Peter De Weerd

We tend to mentally segment a series of events according to perceptual contextual changes, such that items from a shared context are more strongly associated in memory than items from different contexts. It is also known that temporal context provides a scaffold for structuring experiences in memory, but its role in event segmentation has not been investigated. We adapted a previous paradigm, originally used to investigate event segmentation with visual contexts, to study the effects of changes in temporal context on event segmentation in associative memory. We presented lists of items in which the inter-stimulus interval (ISI) varied across lists between 0.5 and 4 s in 0.5 s steps. After each set of six lists, participants judged which of two test items was shown first (temporal order judgment), with the items drawn either from the same list or from consecutive lists. Further, participants judged from memory whether the ISI associated with an item lasted longer than a standard interval (2.25 s) that had not been shown previously. Results showed faster responses for temporal order judgments when items were drawn from the same context than when they were drawn from different contexts. Further, we found that participants were well able to provide duration judgments based on recalled ISIs. Finally, we found that temporal acuity, as estimated from the parameters of psychometric curves fitted to the recalled durations, correlated inversely with within-list temporal order judgment performance. These findings show that changes in temporal context support event segmentation in associative memory.
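The duration-judgment analysis described above rests on fitting a psychometric function to the proportion of "longer than standard" responses across ISIs. The following is a minimal illustrative sketch, not the authors' analysis code: the response proportions are hypothetical, and a cumulative Gaussian is assumed as the psychometric form, with the point of subjective equality (PSE) and discrimination threshold as free parameters.

```python
# Illustrative sketch (hypothetical data, assumed cumulative-Gaussian form):
# fit a psychometric function to "ISI judged longer than 2.25 s" proportions.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

isis = np.arange(0.5, 4.01, 0.5)  # presented ISIs in seconds (0.5 s steps)
# hypothetical proportion of "longer than standard" responses per ISI
p_longer = np.array([0.05, 0.10, 0.25, 0.45, 0.70, 0.85, 0.92, 0.97])

def psychometric(x, pse, sigma):
    # pse: point of subjective equality (ISI judged longer 50% of the time)
    # sigma: spread of the fitted curve; smaller sigma = finer temporal acuity
    return norm.cdf(x, loc=pse, scale=sigma)

(pse, sigma), _ = curve_fit(psychometric, isis, p_longer, p0=[2.25, 1.0])
```

Here, a smaller fitted `sigma` corresponds to higher temporal acuity; it is this kind of curve-fit parameter that the abstract reports as inversely correlated with temporal order performance.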


2021 ◽  
Author(s):  
Ramya Mudumba ◽  
Narayanan Srinivasan

The nature of spatiotemporal interactions in visual perception under modulations of attention is still not well understood. Transient shifts of attention have been shown to induce a trade-off between spatial and temporal acuity at the cued location. Attention can also vary in scope, and the evidence on whether changes in scope produce coupling or trade-offs in spatiotemporal resolution has been equivocal. We predicted that scaling, or changing the scope of, attention would instead result in a spatiotemporal trade-off, based on the complementary spatial- and temporal-frequency properties of the magnocellular and parvocellular channels. We manipulated the scope of attention by asking participants to perform a global or local target detection task with hierarchical stimuli. In addition, participants performed a temporal order judgment task with two discs presented alongside the hierarchical stimuli. We found higher temporal sensitivity with a broad scope of attention (global processing) than with a narrow scope (local processing). The results provide evidence for a spatiotemporal processing trade-off when attention is scaled spatially. This result casts doubt on a general coupling or resource-metaphor explanation that holds irrespective of the spatial or temporal nature of the tasks. The results indicate the need to investigate more carefully the spatial and temporal properties of attention and their effect on spatiotemporal processing at different scales.


Author(s):  
Jan Tünnermann ◽  
Ingrid Scharlau

Humans are incapable of judging the temporal order of visual events at brief temporal separations with perfect accuracy. Their performance, which is of much interest in visual cognition and attention research, can be measured with the temporal-order judgment (TOJ) task, which typically produces S-shaped psychometric functions. Occasionally, researchers have reported plateaus within these functions, and some theories predict such deviations from the basic S shape. However, the centers of the psychometric functions reflect the weakest performance at the most difficult presentations and therefore fluctuate strongly, leaving the existence and exact shape of plateaus unclear. This study set out to investigate whether plateaus disappear when data accuracy is enhanced, or whether we are "stuck on a plateau", or rather with it. For this purpose, highly accurate data were assessed with a model-based analysis. The existence of plateaus is confidently confirmed, and two plausible mechanisms derived from very different models are presented. Neither model, however, performs well in the presence of a strong attention manipulation, and model comparison remains inconclusive as to which model describes the data best. Nevertheless, the present study provides the most accurate visual TOJ data and the most explicit models of TOJ plateaus so far.
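The "plateau" at issue is a flat central segment within the otherwise S-shaped TOJ psychometric function. One simple way to visualize such a shape, purely as an illustration and not as either of the authors' models, is to average two shifted logistic flanks; the shift and slope parameters below are arbitrary choices, not fitted values.

```python
# Illustrative sketch (not the authors' models): an S-shaped TOJ
# psychometric function with a central plateau, built here as the
# average of two shifted logistic functions.
import numpy as np

def logistic(x):
    return 1.0 / (1.0 + np.exp(-x))

def toj_with_plateau(soa, shift=40.0, slope=10.0):
    # soa: stimulus onset asynchrony in ms (positive = probe shown first)
    # shift sets the plateau half-width, slope the steepness of each flank
    return 0.5 * (logistic((soa + shift) / slope) +
                  logistic((soa - shift) / slope))

soas = np.linspace(-120, 120, 25)
p_probe_first = toj_with_plateau(soas)  # S-shape, flat near soa = 0
```

With `shift` much larger than `slope`, the curve rises steeply on both flanks but stays near 0.5 around zero SOA, which is the kind of deviation from a plain sigmoid that the abstract discusses.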


2021 ◽  
Author(s):  
Jacques Pesnot Lerousseau ◽  
Cesare Parise ◽  
Marc O. Ernst ◽  
Virginie van Wassenhove

Neural mechanisms that arbitrate between integrating and segregating multisensory information are essential for complex scene analysis and for resolving the multisensory correspondence problem. However, these mechanisms and their dynamics remain largely unknown, partly because classical models of multisensory integration are static. Here, we used the Multisensory Correlation Detector, a model that provides good explanatory power for human behavior while incorporating dynamic computations. Participants judged whether sequences of auditory and visual signals originated from the same source (causal inference) or whether one modality was leading the other (temporal order) while being recorded with magnetoencephalography. To test the match between the Multisensory Correlation Detector dynamics and the magnetoencephalographic recordings, we developed a novel dynamic encoding-model approach to electrophysiological activity that relies on temporal response functions. First, we confirmed that the Multisensory Correlation Detector explains causal inference and temporal order patterns well. Second, we found strong fits of brain activity to the two outputs of the Multisensory Correlation Detector in temporo-parietal cortices, a region with known multisensory integrative properties. Finally, we report an asymmetry in the goodness of fit: fits were more reliable during the causal inference task than during the temporal order judgment task. Overall, our results suggest the plausible existence of multisensory correlation detectors in the human brain, which would explain why and how causal inference is strongly driven by the temporal correlation of multisensory signals.
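The Multisensory Correlation Detector belongs to the family of Reichardt-style correlation detectors: each modality's signal is temporally filtered, and opposed subunits multiply a slow-filtered copy of one modality with a fast-filtered copy of the other. The sketch below is loosely inspired by that model class; the first-order filters, time constants, and combination rules are illustrative assumptions, not the published model's parameters.

```python
# Illustrative Reichardt-style correlation detector for audio-visual
# sequences (time constants and readout rules are assumptions, not the
# published Multisensory Correlation Detector parameters).
import numpy as np

def lowpass(signal, tau, dt=0.001):
    # simple first-order low-pass filter (exponential smoothing)
    out = np.zeros(len(signal))
    alpha = dt / (tau + dt)
    for t in range(1, len(signal)):
        out[t] = out[t - 1] + alpha * (signal[t] - out[t - 1])
    return out

def mcd_outputs(audio, video, tau_fast=0.05, tau_slow=0.15):
    a_fast, a_slow = lowpass(audio, tau_fast), lowpass(audio, tau_slow)
    v_fast, v_slow = lowpass(video, tau_fast), lowpass(video, tau_slow)
    u1 = a_slow * v_fast             # subunit favoring audio-leading input
    u2 = v_slow * a_fast             # subunit favoring video-leading input
    correlation = np.mean(u1 * u2)   # high when the signals co-vary
    lag = np.mean(u1 - u2)           # sign indicates which modality leads
    return correlation, lag

# hypothetical demo: a brief audio pulse leading a video pulse by 60 ms
n = 1000
audio, video = np.zeros(n), np.zeros(n)
audio[200:220] = 1.0
video[260:280] = 1.0
correlation, lag = mcd_outputs(audio, video)  # lag > 0: audio leads
```

The two readouts mirror the model's two behavioral outputs in the abstract: a correlation signal supporting causal inference ("same source?") and a signed lag signal supporting temporal order judgments ("which modality led?").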

