The Efficacy of Single-Trial Multisensory Memories

2013 ◽  
Vol 26 (5) ◽  
pp. 483-502 ◽  
Author(s):  
Antonia Thelen ◽  
Micah M. Murray

This review article summarizes evidence that multisensory experiences at one point in time have long-lasting effects on subsequent unisensory visual and auditory object recognition. The efficacy of single-trial exposure to task-irrelevant multisensory events lies in its ability to modulate memory performance and brain activity elicited by the unisensory components of these events when they are presented later in time. Object recognition (either visual or auditory) is enhanced if the initial multisensory experience was semantically congruent, and can be impaired if the multisensory pairing was either semantically incongruent or entailed meaningless information in the task-irrelevant modality, when compared to objects encountered exclusively in a unisensory context. Processes active during encoding cannot straightforwardly explain these effects: performance on all initial presentations was indistinguishable, despite leading to opposing effects with stimulus repetitions. Brain responses to unisensory stimulus repetitions differ during early processing stages (∼100 ms post-stimulus onset) according to whether or not the stimuli had initially been paired in a multisensory context. Moreover, the network exhibiting differential responses varies according to whether memory performance is enhanced or impaired. The collective findings we review indicate that multisensory associations formed via single-trial learning exert influences on later unisensory processing to promote distinct object representations that manifest as differentiable brain networks whose activity is correlated with memory performance. These influences occur incidentally, despite many intervening stimuli, and are distinguishable from the encoding/learning processes active during the formation of the multisensory associations. The consequences of multisensory interactions thus persist over time to impact memory retrieval and object discrimination.

2019 ◽  
Author(s):  
Mattson Ogg ◽  
Thomas A. Carlson ◽  
L. Robert Slevc

Human listeners are bombarded by acoustic information that the brain rapidly organizes into coherent percepts of objects and events in the environment, which aids speech and music perception. The efficiency of auditory object recognition belies the critical constraint that acoustic stimuli necessarily require time to unfold. Using magnetoencephalography (MEG), we studied the time course of the neural processes that transform dynamic acoustic information into auditory object representations. Participants listened to a diverse set of 36 tokens comprising everyday sounds from a typical human environment. Multivariate pattern analysis was used to decode the sound tokens from the MEG recordings. We show that sound tokens can be decoded from brain activity beginning 90 milliseconds after stimulus onset with peak decoding performance occurring at 155 milliseconds post-stimulus onset. Decoding performance was primarily driven by differences between category representations (e.g., environmental vs. instrument sounds), although within-category decoding was better than chance. Representational similarity analysis revealed that these emerging neural representations were related to harmonic and spectrotemporal differences among the stimuli, which correspond to canonical acoustic features processed by the auditory pathway. Our findings begin to link the processing of physical sound properties with the perception of auditory objects and events in cortex.


2020 ◽  
Vol 32 (1) ◽  
pp. 111-123 ◽  
Author(s):  
Mattson Ogg ◽  
Thomas A. Carlson ◽  
L. Robert Slevc

Human listeners are bombarded by acoustic information that the brain rapidly organizes into coherent percepts of objects and events in the environment, which aids speech and music perception. The efficiency of auditory object recognition belies the critical constraint that acoustic stimuli necessarily require time to unfold. Using magnetoencephalography, we studied the time course of the neural processes that transform dynamic acoustic information into auditory object representations. Participants listened to a diverse set of 36 tokens comprising everyday sounds from a typical human environment. Multivariate pattern analysis was used to decode the sound tokens from the magnetoencephalographic recordings. We show that sound tokens can be decoded from brain activity beginning 90 msec after stimulus onset with peak decoding performance occurring at 155 msec poststimulus onset. Decoding performance was primarily driven by differences between category representations (e.g., environmental vs. instrument sounds), although within-category decoding was better than chance. Representational similarity analysis revealed that these emerging neural representations were related to harmonic and spectrotemporal differences among the stimuli, which correspond to canonical acoustic features processed by the auditory pathway. Our findings begin to link the processing of physical sound properties with the perception of auditory objects and events in cortex.
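The time-resolved decoding approach described above can be sketched in a few lines: train a classifier on the sensor pattern at each time point separately and track when accuracy rises above chance. The following is a minimal illustration on simulated data; the sensor count, two-category structure, and classifier choice (linear discriminant analysis) are assumptions made for the sketch, not the authors' pipeline.

```python
# Minimal time-resolved MVPA sketch on simulated "MEG" data.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_sensors, n_times = 72, 20, 50  # trials x sensors x time points
labels = np.repeat(np.arange(2), n_trials // 2)  # two sound categories

# Simulated recordings: a category-dependent signal appears after "onset".
data = rng.normal(size=(n_trials, n_sensors, n_times))
data[labels == 1, :, 25:] += 1.0  # category effect from time index 25 onward

# Decode the category separately at each time point.
accuracy = np.empty(n_times)
for t in range(n_times):
    accuracy[t] = cross_val_score(
        LinearDiscriminantAnalysis(), data[:, :, t], labels, cv=5
    ).mean()

early = accuracy[:20].mean()   # before the simulated effect: near chance
late = accuracy[30:].mean()    # after the simulated effect: above chance
print(round(early, 2), round(late, 2))
```

On real recordings the same loop runs over epoched sensor data, and the time point at which `accuracy` first exceeds chance gives the decoding-onset latency reported in studies like the one above.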


2021 ◽  
Author(s):  
Miles Wischnewski ◽  
Marius V. Peelen

Objects can be recognized based on their intrinsic features, including shape, color, and texture. In daily life, however, such features are often not clearly visible, for example when objects appear in the periphery, in clutter, or at a distance. Interestingly, object recognition can still be highly accurate under these conditions when objects are seen within their typical scene context. What are the neural mechanisms of context-based object recognition? According to parallel processing accounts, context-based object recognition is supported by the parallel processing of object and scene information in separate pathways. Output of these pathways is then combined in downstream regions, leading to contextual benefits in object recognition. Alternatively, according to feedback accounts, context-based object recognition is supported by feedback from scene-selective to object-selective regions. Here, in three pre-registered transcranial magnetic stimulation (TMS) experiments, we tested a key prediction of the feedback hypothesis: that scene-selective cortex causally and selectively supports context-based object recognition before object-selective cortex does. Early visual cortex (EVC), object-selective lateral occipital cortex (LOC), and scene-selective occipital place area (OPA) were stimulated at three time points relative to stimulus onset while participants categorized degraded objects in scenes and intact objects in isolation, in different trials. Results confirmed our predictions: relative to isolated object recognition, context-based object recognition was selectively and causally supported by OPA at 160-200 ms after onset, followed by LOC at 260-300 ms after onset. These results indicate that context-based expectations facilitate object recognition by disambiguating object representations in visual cortex.


2017 ◽  
Author(s):  
Radoslaw M. Cichy ◽  
Nikolaus Kriegeskorte ◽  
Kamila M. Jozwik ◽  
Jasper J.F. van den Bosch ◽  
Ian Charest

Vision involves complex neuronal dynamics that link the sensory stream to behaviour. To capture the richness and complexity of the visual world and the behaviour it entails, we used an ecologically valid task with a rich set of real-world object images. We investigated how human brain activity, resolved in space with functional MRI and in time with magnetoencephalography, links the sensory stream to behavioural responses. We found that behaviour-related brain activity emerged rapidly in the ventral visual pathway within 200 ms of stimulus onset. The link between stimuli, brain activity, and behaviour could not be accounted for by either category membership or visual features (as provided by an artificial deep neural network model). Our results identify behaviourally relevant brain activity during object vision, and suggest that object representations guiding behaviour are complex and cannot be explained by either visual features or semantic categories alone. Our findings support the view that visual representations in the ventral visual stream need to be understood in terms of their relevance to behaviour, and highlight the importance of complex behavioural assessment for human brain mapping.
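The comparison of brain activity with candidate models in work like the above is typically done with representational similarity analysis: build a representational dissimilarity matrix (RDM) from response patterns and rank-correlate it with model RDMs. Below is a small sketch on simulated data; the "matched" and "unrelated" models are placeholders for the study's category and deep-network feature models, not reconstructions of them.

```python
# Minimal RSA sketch: correlate a "brain" RDM with two candidate model RDMs.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n_images, n_features = 12, 30

# Simulated response patterns, one pattern per image.
patterns = rng.normal(size=(n_images, n_features))
brain_rdm = pdist(patterns, metric="correlation")  # condensed pairwise RDM

# Candidate models: one built from noisy copies of the same patterns
# (should correlate with the brain RDM), one unrelated (should not).
matched_model = pdist(patterns + 0.3 * rng.normal(size=patterns.shape),
                      metric="correlation")
unrelated_model = pdist(rng.normal(size=(n_images, n_features)),
                        metric="correlation")

r_matched = spearmanr(brain_rdm, matched_model)[0]
r_unrelated = spearmanr(brain_rdm, unrelated_model)[0]
print(r_matched > r_unrelated)
```

The same logic applies whether the candidate RDM comes from behavioural similarity judgements, category labels, or deep-network features: the model whose RDM rank-correlates best with the brain RDM best explains the measured representation.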


eLife ◽  
2021 ◽  
Vol 10 ◽  
Author(s):  
Miles Wischnewski ◽  
Marius V Peelen

Objects can be recognized based on their intrinsic features, including shape, color, and texture. In daily life, however, such features are often not clearly visible, for example when objects appear in the periphery, in clutter, or at a distance. Interestingly, object recognition can still be highly accurate under these conditions when objects are seen within their typical scene context. What are the neural mechanisms of context-based object recognition? According to parallel processing accounts, context-based object recognition is supported by the parallel processing of object and scene information in separate pathways. Output of these pathways is then combined in downstream regions, leading to contextual benefits in object recognition. Alternatively, according to feedback accounts, context-based object recognition is supported by (direct or indirect) feedback from scene-selective to object-selective regions. Here, in three pre-registered transcranial magnetic stimulation (TMS) experiments, we tested a key prediction of the feedback hypothesis: that scene-selective cortex causally and selectively supports context-based object recognition before object-selective cortex does. Early visual cortex (EVC), object-selective lateral occipital cortex (LOC), and scene-selective occipital place area (OPA) were stimulated at three time points relative to stimulus onset while participants categorized degraded objects in scenes and intact objects in isolation, in different trials. Results confirmed our predictions: relative to isolated object recognition, context-based object recognition was selectively and causally supported by OPA at 160–200 ms after onset, followed by LOC at 260–300 ms after onset. These results indicate that context-based expectations facilitate object recognition by disambiguating object representations in the visual cortex.


2012 ◽  
Vol 25 (0) ◽  
pp. 180
Author(s):  
Antonia Thelen ◽  
Céline Cappe ◽  
Micah M. Murray

Multisensory experiences influence subsequent memory performance and brain responses. Studies have thus far concentrated on semantically congruent pairings, leaving unresolved the influence of stimulus pairing and memory sub-types. Here, we paired images with unique, meaningless sounds during a continuous recognition task to determine whether purely episodic, single-trial multisensory experiences can incidentally impact subsequent visual object discrimination. Psychophysics and electrical neuroimaging analyses of visual evoked potentials (VEPs) compared responses to repeated images that either had or had not been paired with a meaningless sound during initial encounters. Recognition accuracy was significantly impaired for images initially presented as multisensory pairs, and this could not be explained in terms of differential attention or transfer of effects from encoding to retrieval. VEP modulations occurred at 100–130 and 270–310 ms and stemmed from topographic differences indicative of network configuration changes within the brain. Distributed source estimations localized the earlier effect to regions of the right posterior superior temporal gyrus (STG) and the later effect to regions of the middle temporal gyrus (MTG). Responses in these regions were stronger for images previously encountered as multisensory pairs. Only the later effect correlated with performance, such that greater MTG activity in response to repeated visual stimuli was linked with greater performance decrements. The present findings suggest that the brain networks involved in this discrimination may critically depend on whether multisensory events facilitate or impair later visual memory performance. More generally, the data support models whereby effects of multisensory interactions persist to incidentally affect subsequent behavior as well as visual processing during its initial stages.


2021 ◽  
Vol 35 (1) ◽  
pp. 15-22
Author(s):  
Kohei Fuseda ◽  
Jun’ichi Katayama

Abstract. Interest is a positive emotion related to attention. The event-related brain potential (ERP) probe technique is a useful method for evaluating the level of interest in dynamic stimuli. However, even in the irrelevant-probe technique, the probe is presented as a physical stimulus and draws on the observer's attentional resources, although no overt response is required. The probe might therefore become a problematic distractor, preventing deep immersion of participants. The heartbeat-evoked brain potential (HEP) is a brain response time-locked to a cardiac event, so no probe is required to obtain HEP data. We therefore investigated whether the HEP can be used to evaluate the level of interest. Twenty-four participants (12 males and 12 females) watched interesting and uninteresting videos (7 min each), featuring attractive and unattractive individuals of the opposite sex, respectively. We applied two techniques to both the interesting and the uninteresting videos: the ERP probe technique and the HEP technique. In the former, somatosensory stimuli were presented as task-irrelevant probes while participants watched the videos: frequent (80%) and infrequent (20%) stimuli were presented at each wrist in random order. In the latter, participants watched the videos without any probe. The P2 amplitude in response to the somatosensory probe was smaller, and the positive-wave amplitudes of the HEP were larger, while watching the videos of attractive individuals than while watching the videos of unattractive ones. These results indicate that the HEP technique is a useful method for evaluating the level of interest without an external probe stimulus.
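The core of the HEP technique, averaging brain activity time-locked to heartbeats rather than to an external probe, can be sketched as follows on simulated signals. The sampling rate, peak-detection settings, and epoch window below are illustrative assumptions, not the authors' recording parameters.

```python
# Minimal HEP sketch: detect R-peaks in a simulated ECG, then average
# EEG epochs time-locked to those peaks (no external probe needed).
import numpy as np
from scipy.signal import find_peaks

fs = 250  # sampling rate in Hz (assumed)
t = np.arange(0, 60 * fs) / fs  # one minute of data
rng = np.random.default_rng(2)

# Simulated ECG: sharp R-peaks roughly once per second.
ecg = np.zeros_like(t)
r_times = np.arange(1, 59, 1.0)  # beat times in seconds
ecg[(r_times * fs).astype(int)] = 1.0

# Simulated EEG: noise plus a small deflection ~300 ms after each beat.
eeg = 0.5 * rng.normal(size=t.size)
for rt in r_times:
    idx = int((rt + 0.3) * fs)
    eeg[idx:idx + 25] += 1.0

# Detect R-peaks and average EEG epochs locked to them (0-600 ms window).
peaks, _ = find_peaks(ecg, height=0.5)
epoch_len = int(0.6 * fs)
epochs = np.stack([eeg[p:p + epoch_len] for p in peaks])
hep = epochs.mean(axis=0)

# Latency of the averaged response's peak, in seconds after the heartbeat.
print(int(np.argmax(hep)) / fs)
```

Averaging across heartbeats cancels activity unrelated to the cardiac cycle, so the heartbeat-locked deflection emerges from the noise, which is what makes the technique usable without any attention-stealing probe stimulus.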


2015 ◽  
Vol 29 (4) ◽  
pp. 135-146 ◽  
Author(s):  
Miroslaw Wyczesany ◽  
Szczepan J. Grzybowski ◽  
Jan Kaiser

Abstract. In this study, the neural basis of emotional reactivity was investigated. Reactivity was operationalized as the impact of emotional pictures on the self-reported ongoing affective state, and was used to divide the subjects into high- and low-responder groups. Independent sources of brain activity were identified, localized with the DIPFIT method, and clustered across subjects to analyse the visual evoked potentials to affective pictures. Four of the identified clusters revealed effects of reactivity. The earliest two started about 120 ms after stimulus onset and were located in the occipital lobe and the right temporoparietal junction. Another two, with a latency of 200 ms, were found in the orbitofrontal and the right dorsolateral cortices. Additionally, differences in pre-stimulus alpha level over the visual cortex were observed between the groups. The attentional modulation of perceptual processes is proposed as an early source of emotional reactivity, forming an automatic mechanism of affective control. The role of top-down processes in affective appraisal and, finally, in the experience of ongoing emotional states is also discussed.


2014 ◽  
Vol 28 (3) ◽  
pp. 148-161 ◽  
Author(s):  
David Friedman ◽  
Ray Johnson

A cardinal feature of aging is a decline in episodic memory (EM). Nevertheless, there is evidence that some older adults may be able to “compensate” for failures in recollection-based processing by recruiting brain regions and cognitive processes not normally recruited by the young. We review the evidence suggesting that age-related declines in EM performance and recollection-related brain activity (the left-parietal EM effect; LPEM) are due to altered processing at encoding. We describe results from our laboratory on differences in encoding- and retrieval-related activity between young and older adults. We then show that, relative to the young, older adults' brain activity at encoding is reduced in a 400–1,400-ms interval over a brain region believed to be crucial for successful semantic elaboration (left inferior prefrontal cortex, LIPFC; Johnson, Nessler, & Friedman, 2013; Nessler, Friedman, Johnson, & Bersick, 2007; Nessler, Johnson, Bersick, & Friedman, 2006). This reduced brain activity is associated with diminished subsequent recognition-memory performance and a diminished LPEM at retrieval. We provide causal support for this premise by demonstrating that disrupting encoding-related processes during this 400–1,400-ms interval in young adults produces the hallmarks of an age-related EM deficit: normal semantic retrieval at encoding, reduced subsequent episodic recognition accuracy, reduced free recall, and a reduced LPEM. Finally, we show that the reduced LPEM in young adults is associated with “additional” brain activity over brain areas similar to those activated when older adults show deficient retrieval.
Hence, rather than supporting the compensation hypothesis, these data are more consistent with the scaffolding hypothesis, in which the recruitment of additional cognitive processes is an adaptive response across the life span in the face of momentary increases in task demand due to poorly encoded episodic memories.


2006 ◽  
Vol 11 (4) ◽  
pp. 304-311 ◽  
Author(s):  
Lars-Göran Nilsson

This paper presents four domains of markers that have been found to predict later cognitive impairment and neurodegenerative disease. These four domains are (1) data patterns of memory performance, (2) cardiovascular factors, (3) genetic markers, and (4) brain activity. The critical features of each domain are illustrated with data from the longitudinal Betula Study on memory, aging, and health ( Nilsson et al., 1997 ; Nilsson et al., 2004 ). To date, early signs in these domains have been examined one at a time, and each has been found to be associated with later cognitive impairment and neurodegenerative disease. However, each marker accounts for only a very small part of the total variance, implying that single markers should not be used as predictors of cognitive decline or neurodegenerative disease. It is discussed whether modeling and simulation should be used as tools to combine markers at different levels to increase the amount of explained variance.

