The representational dynamics of visual objects in rapid serial visual processing streams

2018
Author(s): Tijl Grootswagers, Amanda K. Robinson, Thomas A. Carlson

Abstract
In our daily lives, we are bombarded with a stream of rapidly changing visual input. Humans have the remarkable capacity to detect and identify objects in fast-changing scenes. Yet, when studying brain representations, stimuli are generally presented in isolation. Here, we studied the dynamics of human vision using a combination of fast stimulus presentation rates, electroencephalography and multivariate decoding analyses. Using a presentation rate of 5 images per second, we obtained the representational structure of a large number of stimuli, and showed the emerging abstract categorical organisation of this structure. Furthermore, we could separate the temporal dynamics of perceptual processing from higher-level target selection effects. In a second experiment, we used the same paradigm at 20 Hz to show that shorter image presentation limits the categorical abstraction of object representations. Our results show that applying multivariate pattern analysis to every image in rapid serial visual processing streams has unprecedented potential for studying the temporal dynamics of the structure of representations in the human visual system.
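The per-time-point decoding described above can be sketched as follows. This is a minimal illustration on simulated EEG epochs, not the authors' pipeline; the array sizes, classifier choice, and injected signal window are all arbitrary stand-ins.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_channels, n_times = 200, 32, 50
X = rng.normal(size=(n_trials, n_channels, n_times))  # trials x channels x time
y = rng.integers(0, 2, size=n_trials)                 # two stimulus categories
X[y == 1, :, 20:35] += 0.5                            # class signal in one time window

def decode_timecourse(X, y, cv=5):
    """Cross-validated decoding accuracy at every time point."""
    return np.array([
        cross_val_score(LinearDiscriminantAnalysis(), X[:, :, t], y, cv=cv).mean()
        for t in range(X.shape[-1])
    ])

acc = decode_timecourse(X, y)  # rises above chance only inside the signal window
```

Repeating this for every image in the stream yields the stimulus-by-stimulus representational structure the abstract refers to.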

2019
Author(s): Sophia M. Shatek, Tijl Grootswagers, Amanda K. Robinson, Thomas A. Carlson

Abstract
Mental imagery is the ability to generate images in the mind in the absence of sensory input. Both perceptual visual processing and internally generated imagery engage large, overlapping networks of brain regions. However, it is unclear whether they are characterized by similar temporal dynamics. Recent magnetoencephalography work has shown that object category information was decodable from brain activity during mental imagery, but the timing was delayed relative to perception. The current study builds on these findings, using electroencephalography to investigate the dynamics of mental imagery. Sixteen participants viewed two images of the Sydney Harbour Bridge and two images of Santa Claus. On each trial, they viewed a sequence of the four images and were asked to imagine one of them, which was cued retroactively by its temporal location in the sequence. Time-resolved multivariate pattern analysis was used to decode the viewed and imagined stimuli. Our results indicate that the dynamics of imagery processes are more variable across, and within, participants compared to perception of physical stimuli. Although category and exemplar information was decodable for viewed stimuli, there were no informative patterns of activity during mental imagery. The current findings suggest stimulus complexity, task design and individual differences may influence the ability to successfully decode imagined images. We discuss the implications of these results for our understanding of the neural processes underlying mental imagery.


Vision, 2019, Vol 3 (4), pp. 53
Author(s): Sophia M. Shatek, Tijl Grootswagers, Amanda K. Robinson, Thomas A. Carlson

Mental imagery is the ability to generate images in the mind in the absence of sensory input. Both perceptual visual processing and internally generated imagery engage large, overlapping networks of brain regions. However, it is unclear whether they are characterized by similar temporal dynamics. Recent magnetoencephalography work has shown that object category information was decodable from brain activity during mental imagery, but the timing was delayed relative to perception. The current study builds on these findings, using electroencephalography to investigate the dynamics of mental imagery. Sixteen participants viewed two images of the Sydney Harbour Bridge and two images of Santa Claus. On each trial, they viewed a sequence of the four images and were asked to imagine one of them, which was cued retroactively by its temporal location in the sequence. Time-resolved multivariate pattern analysis was used to decode the viewed and imagined stimuli. Although category and exemplar information was decodable for viewed stimuli, there were no informative patterns of activity during mental imagery. The current findings suggest stimulus complexity, task design and individual differences may influence the ability to successfully decode imagined images. We discuss the implications of these results in the context of prior findings of mental imagery.


2017
Author(s): Kandan Ramakrishnan, Iris I.A. Groen, Arnold W.M. Smeulders, H. Steven Scholte, Sennay Ghebreab

Abstract
Convolutional neural networks (CNNs) have recently emerged as promising models of human vision based on their ability to predict hemodynamic brain responses to visual stimuli measured with functional magnetic resonance imaging (fMRI). However, the degree to which CNNs can predict the temporal dynamics of visual object recognition reflected in neural measures with millisecond precision is less understood. Additionally, while deeper CNNs with more layers perform better on automated object recognition, it is unclear whether this also results in better correspondence to brain responses. Here, we examined 1) to what extent CNN layers predict visual evoked responses in the human brain over time and 2) whether deeper CNNs better model brain responses. Specifically, we tested how well CNN architectures with 7 (CNN-7) and 15 (CNN-15) layers predicted electroencephalography (EEG) responses to several thousand natural images. Our results show that both CNN architectures correspond to EEG responses in a hierarchical spatio-temporal manner, with lower layers explaining responses early in time at electrodes overlying early visual cortex, and higher layers explaining responses later in time at electrodes overlying lateral-occipital cortex. While the variance in neural responses explained by individual layers did not differ between CNN-7 and CNN-15, combining the representations across layers resulted in improved performance of CNN-15 compared to CNN-7, but only from 150 ms after stimulus onset. This suggests that CNN representations reflect both early (feed-forward) and late (feedback) stages of visual processing. Overall, our results show that the depth of a CNN indeed plays a role in explaining time-resolved EEG responses.
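The layer-to-EEG mapping evaluated above follows the general logic of a cross-validated encoding model: regress each layer's activations onto the neural response and compare held-out explained variance. A minimal sketch with random stand-in features (not real CNN activations; the ridge penalty and array sizes are arbitrary assumptions):

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_images, n_feat = 300, 40
layer_low = rng.normal(size=(n_images, n_feat))   # stand-in "early layer" features
layer_high = rng.normal(size=(n_images, n_feat))  # stand-in "deep layer" features

# Simulated EEG response at one electrode/time point, driven by the low layer
eeg_early = layer_low @ rng.normal(size=n_feat) + rng.normal(size=n_images)

def explained_variance(features, response):
    """Mean cross-validated R^2 of a ridge encoding model."""
    return cross_val_score(Ridge(alpha=1.0), features, response,
                           cv=5, scoring="r2").mean()

r2_low = explained_variance(layer_low, eeg_early)    # high: features carry the signal
r2_high = explained_variance(layer_high, eeg_early)  # near zero: unrelated features
```

Running this for every layer at every time point and electrode yields the hierarchical spatio-temporal map the study describes.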


2016
Author(s): Radoslaw Martin Cichy, Dimitrios Pantazis

Abstract
Multivariate pattern analysis of magnetoencephalography (MEG) and electroencephalography (EEG) data can reveal the rapid neural dynamics underlying cognition. However, MEG and EEG have systematic differences in how they sample neural activity. This raises the question of the degree to which such measurement differences consistently bias the results of multivariate analyses applied to MEG and EEG activation patterns. To investigate, we conducted a concurrent MEG/EEG study while participants viewed images of everyday objects. We applied multivariate classification analyses to the MEG and EEG data, and compared the resulting time courses to each other, and to fMRI data for an independent evaluation in space. We found that both MEG and EEG revealed the millisecond spatio-temporal dynamics of visual processing with largely equivalent results. Beyond yielding convergent results, we found that MEG and EEG also captured partly unique aspects of visual representations. Those unique components emerged earlier in time for MEG than for EEG. Identifying the sources of those unique components with fMRI, we found the locus for both MEG and EEG in high-level visual cortex, and in addition for MEG in early visual cortex. Together, our results show that multivariate analyses of MEG and EEG data offer a convergent and complementary view on neural processing, and motivate the wider adoption of these methods in both MEG and EEG research.
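One simple way to quantify the "earlier for MEG than EEG" comparison is the onset latency of each decoding time course, i.e. the first time accuracy stays above a threshold. A sketch with simulated accuracy curves (the sigmoid shapes, threshold, and latencies are invented for illustration, not taken from the study):

```python
import numpy as np

rng = np.random.default_rng(2)
times = np.arange(-100, 400, 10)  # ms relative to stimulus onset

# Simulated decoding accuracy curves: same shape, "MEG" rising 40 ms earlier
meg = 0.5 + 0.2 / (1 + np.exp(-(times - 80) / 15)) + rng.normal(0, 0.01, times.size)
eeg = 0.5 + 0.2 / (1 + np.exp(-(times - 120) / 15)) + rng.normal(0, 0.01, times.size)

def onset_latency(acc, times, threshold=0.55, n_consecutive=3):
    """First time at which accuracy exceeds threshold for n consecutive samples."""
    above = acc > threshold
    for i in range(len(above) - n_consecutive + 1):
        if above[i:i + n_consecutive].all():
            return times[i]
    return None

lat_meg = onset_latency(meg, times)
lat_eeg = onset_latency(eeg, times)  # later than lat_meg
```

Requiring several consecutive supra-threshold samples guards against noise spuriously crossing the threshold at a single time point.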


2018
Author(s): Hamid Karimi-Rouzbahani

Abstract
Invariant object recognition, the ability to recognize objects precisely and rapidly in the presence of variations, has been a central question in human vision research. The general consensus is that the ventral and dorsal visual streams are the major processing pathways that undertake category and variation encoding in entangled layers. This overlooks mounting evidence supporting a role for peri-frontal areas in category encoding. These recent studies, however, have left open several aspects of visual processing in peri-frontal areas, including whether these areas contribute only during active tasks, and whether they interact with peri-occipital areas or process information independently and differently. To address these questions, a passive EEG paradigm was designed in which subjects viewed a set of variation-controlled object images. Using multivariate pattern analysis, noticeable category and variation information were observed in occipital, parietal, temporal and prefrontal areas, supporting their contribution to visual processing. Using task-specificity indices, phase analyses and Granger causality analyses, three distinct stages of processing were identified, revealing transfer of information between peri-frontal and peri-occipital areas and suggesting parallel and interactive processing of visual information. A brain-plausible computational model supported the possibility of parallel processing mechanisms in peri-occipital and peri-frontal areas. These findings, while supporting previous results on the role of prefrontal areas in object recognition, extend their contribution from active recognition, in which peri-frontal to peri-occipital feedback mechanisms are activated, to the general case of object and variation processing, which is an integral part of visual processing and plays a role even during passive viewing.
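The Granger-causality logic used to assess information transfer between areas can be sketched as follows: a source signal Granger-causes a target if adding the source's past reduces the residual variance of an autoregressive model of the target. The signals here are simulated, with `occ` driving `fro` at a one-sample lag, and the lag-1 model is a deliberate simplification of the analyses the study actually ran.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 2000
occ = rng.normal(size=n)          # simulated peri-occipital signal
fro = np.zeros(n)                 # simulated peri-frontal signal, driven by occ
for t in range(1, n):
    fro[t] = 0.3 * fro[t - 1] + 0.8 * occ[t - 1] + 0.1 * rng.normal()

def granger_variance_ratio(source, target, lag=1):
    """Residual variance of an AR model of `target` without vs. with the
    past of `source`; ratios well above 1 suggest Granger causality."""
    y = target[lag:]
    restricted = np.column_stack([target[:-lag], np.ones(len(y))])
    full = np.column_stack([target[:-lag], source[:-lag], np.ones(len(y))])
    def mse(X):
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        return np.mean((y - X @ beta) ** 2)
    return mse(restricted) / mse(full)

ratio_occ_to_fro = granger_variance_ratio(occ, fro)  # large: occ drives fro
ratio_fro_to_occ = granger_variance_ratio(fro, occ)  # ~1: no influence back
```

The asymmetry of the two ratios is what distinguishes directed information transfer from mere correlation between the two areas.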


PLoS Biology, 2020, Vol 18 (12), pp. e3000987
Author(s): Clare L. Kinnear, Elsa Hansen, Valerie J. Morley, Kevin C. Tracy, Meghan Forstchen, ...

The antimicrobial resistance crisis has persisted despite broad attempts at intervention. It has been proposed that an important driver of resistance is selection imposed on bacterial populations that are not the intended target of antimicrobial therapy. To date, however, there has been limited quantitative measurement of the mean and variance of resistance following antibiotic exposure. Here we focus on the important nosocomial pathogen Enterococcus faecium in a hospital system where resistance to daptomycin is evolving despite standard interventions. We hypothesized that the intravenous use of daptomycin generates off-target selection for resistance in transmissible gastrointestinal (carriage) populations of E. faecium. We performed a cohort study in which the daptomycin resistance of E. faecium isolated from rectal swabs from daptomycin-exposed patients was compared to that of a control group of patients exposed to linezolid, a drug with similar indications. In the daptomycin-exposed group, daptomycin resistance of E. faecium from the off-target population was on average 50% higher than resistance in the control group (n = 428 clones from 22 patients). There was also greater phenotypic diversity in daptomycin resistance within daptomycin-exposed patients. In patients from whom multiple samples over time were available, wide variability in temporal dynamics was observed, from long-term maintenance of resistance to rapid return to sensitivity after daptomycin treatment stopped. Sequencing of isolates from a subset of patients supports the argument that selection occurs within patients. Our results demonstrate that off-target gastrointestinal populations respond rapidly to intravenous antibiotic exposure. Focusing on off-target evolutionary dynamics may offer novel avenues to slow the spread of antibiotic resistance.


2019
Author(s): David A. Tovar, Micah M. Murray, Mark T. Wallace

Abstract
Objects are the fundamental building blocks of how we create a representation of the external world. One major distinction amongst objects is between those that are animate versus inanimate. Many objects are specified by more than a single sense, yet the nature by which multisensory objects are represented by the brain remains poorly understood. Using representational similarity analysis of human EEG signals, we show enhanced encoding of audiovisual objects when compared to their corresponding visual and auditory objects. Surprisingly, we discovered that the often-found processing advantages for animate objects were not evident in a multisensory context, owing to greater neural enhancement of inanimate objects, the more weakly encoded objects under unisensory conditions. Further analysis showed that the selective enhancement of inanimate audiovisual objects corresponded with an increase in shared representations across brain areas, suggesting that the neural enhancement was mediated by multisensory integration. Moreover, a distance-to-bound analysis provided critical links between the neural findings and behavior. Improvements in neural decoding at the individual exemplar level for audiovisual inanimate objects predicted reaction time differences between multisensory and unisensory presentations during a go/no-go animate categorization task. Interestingly, links between neural activity and behavioral measures were most prominent 100 to 200 ms and 350 to 500 ms after stimulus presentation, corresponding to time periods associated with sensory evidence accumulation and decision-making, respectively. Collectively, these findings provide key insights into a fundamental process the brain uses to maximize the information it captures across sensory systems to perform object recognition.
Significance Statement
Our world is filled with an ever-changing milieu of sensory information that we are able to seamlessly transform into meaningful perceptual experience.
We accomplish this feat by combining different features from our senses to construct objects. However, despite the fact that our senses do not work in isolation but rather in concert with each other, little is known about how the brain combines the senses together to form object representations. Here, we used EEG and machine learning to study how the brain processes auditory, visual, and audiovisual objects. Surprisingly, we found that non-living objects, the objects which were more difficult to process with one sense alone, benefited the most from engaging multiple senses.
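The representational similarity analysis underlying these results compares a neural representational dissimilarity matrix (RDM) with a model RDM coding the animate/inanimate distinction. A minimal sketch with simulated response patterns (the array sizes and the strength of the shared animate component are arbitrary assumptions):

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(4)
n_objects, n_features = 20, 64
animate = np.repeat([1, 0], n_objects // 2)           # first half animate

# Simulated response patterns; animate objects share an extra common component
patterns = rng.normal(size=(n_objects, n_features))
patterns[animate == 1] += 0.7 * rng.normal(size=n_features)

neural_rdm = pdist(patterns, metric="correlation")       # condensed neural RDM
model_rdm = pdist(animate[:, None], metric="cityblock")  # 1 between categories, 0 within

rho, _ = spearmanr(neural_rdm, model_rdm)  # positive: category structure is present
```

Computing `rho` at each time point, separately for auditory, visual, and audiovisual patterns, gives the kind of condition-wise encoding comparison reported in the study.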


2019, Vol 46 (6), pp. 843-878
Author(s): Xena Welch, Stevo Pavićević, Thomas Keil, Tomi Laamanen

Despite long-standing research interest in the pre-deal phase of mergers and acquisitions, many important questions remain unanswered. We review and synthesize the extensive but rather fragmented research on this topic in the fields of management, finance, accounting, and economics. We organize our review around six themes: deal initiation, target selection, bidding and negotiation, valuation and financing, announcement, and closure. These represent the main categories of activities performed during the pre-deal phase. Our review shows that most of the existing research relies on a rather high-level, simplified, and static conception of the pre-deal phase. On the basis of our review, we put forward a research agenda that calls for a more granular examination of individual activities and decisions, a more comprehensive analysis of the interplay among the different actors involved in the pre-deal phase, a better understanding of the role of temporal dynamics, and an extension of the theoretical base from variance-based to process-based theorizing.


2020, Vol 32 (1), pp. 50-64
Author(s): Christelle Larzabal, Nadège Bacon-Macé, Sophie Muratot, Simon J. Thorpe

Unlike familiarity, recollection involves the ability to mentally reconstruct previous events, which results in a strong sense of reliving. According to the reinstatement hypothesis, this specific feature emerges from the reactivation of cortical patterns involved during information exposure. Over time, the retrieval of specific details becomes more difficult, and memories become increasingly supported by familiarity judgments. The multiple trace theory (MTT) explains the gradual loss of episodic details by a transformation of the memory representation, a view that is not shared by the standard consolidation model. In this study, we tested the MTT in light of the reinstatement hypothesis. The temporal dynamics of mental imagery from long-term memory were investigated and tracked over the passage of time. Participant EEG activity was recorded during the recall of short audiovisual clips that had been watched 3 weeks, 1 day, or a few hours beforehand. The recall of the audiovisual clips was assessed using a Remember/Know/New procedure, and snapshots of the clips were used as recall cues. The decoding matrices obtained from the multivariate pattern analyses revealed sustained patterns at long latencies (>500 msec post-stimulus onset) that faded over the retention intervals and emerged from the same neural processes. Overall, our data provide further evidence for the MTT and give new insights into the exploration of our “mind's eye.”
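Decoding matrices of the kind reported here are typically built by training a classifier at one time point and testing it at every other (temporal generalization). A minimal sketch on simulated epochs with a sustained late signal; it illustrates the method in general, not the authors' code, and all sizes are arbitrary.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(5)
n_trials, n_channels, n_times = 200, 16, 30
X = rng.normal(size=(n_trials, n_channels, n_times))  # trials x channels x time
y = rng.integers(0, 2, size=n_trials)
X[y == 1, :, 15:] += 0.6                              # sustained signal from time 15 on

half = n_trials // 2
train, test = np.arange(half), np.arange(half, n_trials)

# gen[t_train, t_test]: accuracy of a classifier trained at one time point and
# tested at another; a square of high values marks a sustained, stable pattern
gen = np.zeros((n_times, n_times))
for t_train in range(n_times):
    clf = LinearDiscriminantAnalysis().fit(X[train, :, t_train], y[train])
    for t_test in range(n_times):
        gen[t_train, t_test] = clf.score(X[test, :, t_test], y[test])
```

Off-diagonal generalization (training and testing at different late time points) is what distinguishes a sustained pattern from a sequence of transient ones.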

