Human EEG recordings for 1,854 concepts presented in rapid serial visual presentation streams

2022 ◽  
Vol 9 (1) ◽  
Author(s):  
Tijl Grootswagers ◽  
Ivy Zhou ◽  
Amanda K. Robinson ◽  
Martin N. Hebart ◽  
Thomas A. Carlson

Abstract: The neural basis of object recognition and semantic knowledge has been extensively studied, but the high dimensionality of object space makes it challenging to develop overarching theories on how the brain organises object knowledge. To help understand how the brain allows us to recognise, categorise, and represent objects and object categories, there is a growing interest in using large-scale image databases for neuroimaging experiments. In the current paper, we present THINGS-EEG, a dataset containing human electroencephalography responses from 50 subjects to 1,854 object concepts and 22,248 images in the THINGS stimulus set, a manually curated and high-quality image database that was specifically designed for studying human vision. The THINGS-EEG dataset provides neuroimaging recordings to a systematic collection of objects and concepts and can therefore support a wide array of research to understand visual object processing in the human brain.

2021 ◽  
Author(s):  
Tijl Grootswagers ◽  
Ivy Zhou ◽  
Amanda K Robinson ◽  
Martin N Hebart ◽  
Thomas A Carlson

The neural basis of object recognition and semantic knowledge has been the focus of a large body of research, but given the high dimensionality of object space, it is challenging to develop an overarching theory of how the brain organises object knowledge. To help understand how the brain allows us to recognise, categorise, and represent objects and object categories, there is a growing interest in using large-scale image databases for neuroimaging experiments. Traditional image databases are based on manually selected object concepts and often a single image per concept. In contrast, 'big data' stimulus sets typically consist of images that can vary significantly in quality and may be biased in content. To address this issue, recent work developed THINGS: a large stimulus set of 1,854 object concepts and 26,107 associated images. In the current paper, we present THINGS-EEG, a dataset containing human electroencephalography responses from 50 subjects to all concepts and 22,248 images in the THINGS stimulus set. The THINGS-EEG dataset provides neuroimaging recordings to a systematic collection of objects and concepts and can therefore support a wide array of research to understand visual object processing in the human brain.
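As an illustration of the kind of analysis such a dataset supports, here is a minimal time-resolved decoding sketch using MNE-Python and scikit-learn. The file name, event coding, and decoder choice are assumptions made for the sketch, not the authors' pipeline.

```python
# Minimal time-resolved decoding sketch for THINGS-EEG-style epoched data.
# The epochs file name and the use of event codes as concept labels are
# hypothetical; this is a generic decoding loop, not the authors' pipeline.
import numpy as np
import mne
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

epochs = mne.read_epochs("sub-01_task-rsvp-epo.fif")  # hypothetical path
X = epochs.get_data()    # shape: (n_trials, n_channels, n_times)
y = epochs.events[:, 2]  # assume event codes index object concepts

# Cross-validated decoding accuracy at every time point.
scores = np.array([
    cross_val_score(LinearDiscriminantAnalysis(), X[:, :, t], y, cv=5).mean()
    for t in range(X.shape[2])
])
print("peak decoding accuracy:", scores.max())
```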


2018 ◽  
Vol 116 (1) ◽  
pp. 96-105 ◽  
Author(s):  
Lichao Chen ◽  
Sudhir Singh ◽  
Thomas Kailath ◽  
Vwani Roychowdhury

Despite significant recent progress, machine vision systems lag considerably behind their biological counterparts in performance, scalability, and robustness. A distinctive hallmark of the brain is its ability to automatically discover and model objects, at multiscale resolutions, from repeated exposures to unlabeled contextual data and then to be able to robustly detect the learned objects under various nonideal circumstances, such as partial occlusion and different view angles. Replication of such capabilities in a machine would require three key ingredients: (i) access to large-scale perceptual data of the kind that humans experience, (ii) flexible representations of objects, and (iii) an efficient unsupervised learning algorithm. The Internet fortunately provides unprecedented access to vast amounts of visual data. This paper leverages the availability of such data to develop a scalable framework for unsupervised learning of object prototypes—brain-inspired flexible, scale, and shift invariant representations of deformable objects (e.g., humans, motorcycles, cars, airplanes) composed of parts, their different configurations and views, and their spatial relationships. Computationally, the object prototypes are represented as geometric associative networks using probabilistic constructs such as Markov random fields. We apply our framework to various datasets and show that our approach is computationally scalable and can construct accurate and operational part-aware object models far more efficiently than much of the recent computer vision literature. We also present efficient algorithms for detection and localization in new scenes of objects and their partial views.
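To make the "geometric associative network" idea concrete, the toy sketch below represents an object prototype as a graph whose nodes are parts and whose edges carry Gaussian potentials over relative part positions. The part names, offsets, and scoring function are illustrative assumptions, not the paper's implementation.

```python
# Toy part-based object prototype: nodes are parts, edges encode expected
# relative displacement with a Gaussian compatibility (a crude stand-in for
# the paper's Markov-random-field potentials). All values are invented.
import numpy as np
import networkx as nx

prototype = nx.Graph(name="motorcycle")
prototype.add_edge("front_wheel", "frame", offset=np.array([0.4, 0.0]), sigma=0.10)
prototype.add_edge("rear_wheel", "frame", offset=np.array([-0.4, 0.0]), sigma=0.10)
prototype.add_edge("handlebar", "frame", offset=np.array([0.3, 0.3]), sigma=0.15)

def pair_score(observed, expected, sigma):
    """Gaussian compatibility of an observed part displacement."""
    return float(np.exp(-np.sum((observed - expected) ** 2) / (2 * sigma ** 2)))

# Score one detected part pair against the prototype's expectation.
edge = prototype.edges["front_wheel", "frame"]
print(pair_score(np.array([0.38, 0.02]), edge["offset"], edge["sigma"]))
```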


2017 ◽  
Author(s):  
Cameron Parro ◽  
Matthew L Dixon ◽  
Kalina Christoff

Abstract: Cognitive control mechanisms support the deliberate regulation of thought and behavior based on current goals. Recent work suggests that motivational incentives improve cognitive control, and has begun to elucidate the brain regions that may support this effect. Here, we conducted a quantitative meta-analysis of neuroimaging studies of motivated cognitive control using activation likelihood estimation (ALE) and Neurosynth in order to delineate the brain regions that are consistently activated across studies. The analysis included functional neuroimaging studies that investigated changes in brain activation during cognitive control tasks when reward incentives were present versus absent. The ALE analysis revealed consistent recruitment in regions associated with the frontoparietal control network including the inferior frontal sulcus (IFS) and intraparietal sulcus (IPS), as well as consistent recruitment in regions associated with the salience network including the anterior insula and anterior mid-cingulate cortex (aMCC). A large-scale exploratory meta-analysis using Neurosynth replicated the ALE results, and also identified the caudate nucleus, nucleus accumbens, medial thalamus, inferior frontal junction/premotor cortex (IFJ/PMC), and hippocampus. Finally, we conducted separate ALE analyses to compare recruitment during cue and target periods, which tap into proactive engagement of rule-outcome associations, and the mobilization of appropriate viscero-motor states to execute a response, respectively. We found that largely distinct sets of brain regions are recruited during cue and target periods. Altogether, these findings suggest that flexible interactions between frontoparietal, salience, and dopaminergic midbrain-striatal networks may allow control demands to be precisely tailored based on expected value.
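For readers unfamiliar with ALE, the core computation can be sketched in a few lines: each study's activation foci are smoothed into a modeled-activation map, and the per-study maps are combined as a probabilistic union. The grid size, kernel width, and foci below are invented; real analyses use dedicated tools such as GingerALE or NiMARE.

```python
# Schematic ALE computation on a toy grid. Coordinates, grid size, and the
# fixed kernel width are invented; real ALE scales the kernel by sample size
# and assesses significance against a null distribution.
import numpy as np
from scipy.ndimage import gaussian_filter

grid = (30, 30, 30)
studies_foci = [
    [(10, 12, 15), (20, 18, 14)],  # study 1 foci (voxel coordinates)
    [(11, 13, 15)],                # study 2 foci
]

ma_maps = []
for foci in studies_foci:
    ma = np.zeros(grid)
    for x, y, z in foci:
        ma[x, y, z] = 1.0
    ma = gaussian_filter(ma, sigma=2.0)
    ma_maps.append(ma / ma.max())  # modeled-activation probabilities in [0, 1]

# ALE value: probability that at least one study activates each voxel.
ale = 1.0 - np.prod([1.0 - m for m in ma_maps], axis=0)
print("peak ALE value:", float(ale.max()))
```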


2012 ◽  
Vol 24 (1) ◽  
pp. 133-147 ◽  
Author(s):  
Carin Whitney ◽  
Marie Kirk ◽  
Jamie O'Sullivan ◽  
Matthew A. Lambon Ralph ◽  
Elizabeth Jefferies

To understand the meanings of words and objects, we need to have knowledge about these items themselves plus executive mechanisms that compute and manipulate semantic information in a task-appropriate way. The neural basis for semantic control remains controversial. Neuroimaging studies have focused on the role of the left inferior frontal gyrus (LIFG), whereas neuropsychological research suggests that damage to a widely distributed network elicits impairments of semantic control. There is also debate about the relationship between semantic and executive control more widely. We used TMS in healthy human volunteers to create “virtual lesions” in structures typically damaged in patients with semantic control deficits: LIFG, left posterior middle temporal gyrus (pMTG), and intraparietal sulcus (IPS). The influence of TMS on tasks varying in semantic and nonsemantic control demands was examined for each region within this hypothesized network to gain insights into (i) their functional specialization (i.e., involvement in semantic representation, controlled retrieval, or selection) and (ii) their domain dependence (i.e., semantic or cognitive control). The results revealed that LIFG and pMTG jointly support both the controlled retrieval and selection of semantic knowledge. IPS specifically participates in semantic selection and responds to manipulations of nonsemantic control demands. These observations are consistent with a large-scale semantic control network, as predicted by lesion data, that draws on semantic-specific (LIFG and pMTG) and domain-independent executive components (IPS).


2021 ◽  
Author(s):  
Irene Caprara ◽  
Peter Janssen

Abstract: To perform tasks like grasping, the brain has to process visual object information so that the grip aperture can be adjusted before touching the object. Previous studies have demonstrated that the posterior subsector of the Anterior Intraparietal area (pAIP) is connected to area 45B, and its anterior counterpart (aAIP) to F5a. However, the role of area 45B and F5a in visually-guided grasping is poorly understood. Here, we investigated the role of areas 45B, F5a and F5p in object processing during visually-guided grasping in two monkeys. If the presentation of an object activates a motor command related to the preshaping of the hand, as in F5p, such neurons should prefer objects presented within reachable distance. Conversely, neurons encoding a purely visual representation of an object, possibly in area 45B and F5a, should be less affected by viewing distance. Contrary to our expectations, we found that most neurons in area 45B were object- and viewing-distance-selective (mostly Near-preferring). Area F5a showed much weaker object selectivity compared to 45B, with a similar preference for objects presented at the Near position. Finally, F5p neurons were less object-selective and frequently Far-preferring. In sum, area 45B, but not F5p, prefers objects presented in peripersonal space.


2020 ◽  
Author(s):  
Matthew Perich ◽  
Kanaka Rajan

The neural control of behavior is distributed across many functionally and anatomically distinct brain regions even in small nervous systems. While classical neuroscience models treated these regions as a set of hierarchically isolated nodes, the brain comprises a recurrently interconnected network in which each region is intimately modulated by many others. Uncovering these interactions is now possible through experimental techniques that access large neural populations from many brain regions simultaneously. Harnessing these large-scale datasets, however, requires new theoretical approaches. Here, we review recent work to understand brain-wide interactions using multi-region "network of networks" models and discuss how they can guide future experiments. We also emphasize the importance of multi-region recordings, and posit that studying individual components in isolation will be insufficient to understand the neural basis of behavior.
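As a minimal illustration of the multi-region "network of networks" idea, the sketch below couples two small rate RNNs through sparse inter-region weights: within-region coupling is dense and strong, between-region coupling sparse and weak. All sizes and scalings are arbitrary choices for this toy model, not a specific model from the review.

```python
# Toy "network of networks": two rate RNNs (regions A and B) with dense
# within-region and sparse, weaker between-region connectivity. Parameters
# are arbitrary illustrations.
import numpy as np

rng = np.random.default_rng(0)
n = 50  # units per region
J_aa = rng.normal(0, 1.5 / np.sqrt(n), (n, n))  # within region A
J_bb = rng.normal(0, 1.5 / np.sqrt(n), (n, n))  # within region B
J_ab = rng.normal(0, 0.2 / np.sqrt(n), (n, n)) * (rng.random((n, n)) < 0.1)  # B -> A
J_ba = rng.normal(0, 0.2 / np.sqrt(n), (n, n)) * (rng.random((n, n)) < 0.1)  # A -> B

x_a = rng.normal(0, 0.1, n)
x_b = rng.normal(0, 0.1, n)
dt, tau = 0.01, 0.1
for _ in range(1000):  # Euler integration of the coupled rate dynamics
    r_a, r_b = np.tanh(x_a), np.tanh(x_b)
    x_a = x_a + dt / tau * (-x_a + J_aa @ r_a + J_ab @ r_b)
    x_b = x_b + dt / tau * (-x_b + J_bb @ r_b + J_ba @ r_a)
print("final rates (region A, first 5 units):", np.tanh(x_a)[:5])
```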


2016 ◽  
Author(s):  
Waitsang Keung ◽  
Daniel Osherson ◽  
Jonathan D. Cohen

Abstract: The neural representation of an object can change depending on its context. For instance, a horse may be more similar to a bear than to a dog in terms of size, but more similar to a dog in terms of domesticity. We used behavioral measures of similarity together with representational similarity analysis and functional connectivity of fMRI data in humans to reveal how the neural representation of semantic knowledge can change to match the current goal demand. Here we present evidence that objects similar to each other in a given context are also represented more similarly in the brain and that these similarity relationships are modulated by context-specific activations in frontal areas.

Significance statement: The judgment of similarity between two objects can differ in different contexts. Here we report a study that tested the hypothesis that brain areas associated with task context and cognitive control modulate semantic representations of objects in a task-specific way. We first demonstrate that task instructions impact how objects are represented in the brain. We then show that the expression of these representations is correlated with activity in regions of frontal cortex widely thought to represent context, attention and control. In addition, we introduce spatial variance as a novel index of representational expression and attentional modulation. This promises to lay the groundwork for more exacting studies of the neural basis of semantics, as well as the dynamics of attentional modulation.
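The core RSA comparison this abstract relies on can be written in a few lines, assuming stand-in response patterns and behavioral dissimilarities: build a neural representational dissimilarity matrix (RDM) from ROI patterns and rank-correlate it with a behavioral RDM.

```python
# Minimal RSA sketch with synthetic stand-ins for ROI patterns and behavioral
# dissimilarity judgments; the rank correlation of the two RDMs is the basic
# brain-behavior comparison used in representational similarity analysis.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n_objects, n_voxels = 12, 200
patterns = rng.normal(size=(n_objects, n_voxels))    # stand-in ROI patterns

neural_rdm = pdist(patterns, metric="correlation")   # condensed neural RDM
behav_rdm = rng.random(neural_rdm.shape)             # stand-in behavioral RDM

rho, p = spearmanr(neural_rdm, behav_rdm)
print(f"brain-behavior RDM correlation: rho={rho:.3f}, p={p:.3f}")
```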


NeuroImage ◽  
2011 ◽  
Vol 55 (1) ◽  
pp. 304-311 ◽  
Author(s):  
Carmen Schmid ◽  
Christian Büchel ◽  
Michael Rose

2019 ◽  
Author(s):  
Lionel Barnett ◽  
Suresh D. Muthukumaraswamy ◽  
Robin L. Carhart-Harris ◽  
Anil K. Seth

Abstract: Neuroimaging studies of the psychedelic state offer a unique window onto the neural basis of conscious perception and selfhood. Despite well-understood pharmacological mechanisms of action, the large-scale changes in neural dynamics induced by psychedelic compounds remain poorly understood. Using source-localised, steady-state MEG recordings, we describe changes in functional connectivity following the controlled administration of LSD, psilocybin and low-dose ketamine, as well as, for comparison, the (non-psychedelic) anticonvulsant drug tiagabine. We compare both undirected and directed measures of functional connectivity between placebo and drug conditions. We observe a general decrease in directed functional connectivity for all three psychedelics, as measured by Granger causality, throughout the brain. These data support the view that the psychedelic state involves a breakdown in patterns of functional organisation or information flow in the brain. In the case of LSD, the decrease in directed functional connectivity is coupled with an increase in undirected functional connectivity, which we measure using correlation and coherence. This surprising opposite movement of directed and undirected measures is of more general interest for functional connectivity analyses, which we interpret using analytical modelling. Overall, our results uncover the neural dynamics of information flow in the psychedelic state, and highlight the importance of comparing multiple measures of functional connectivity when analysing time-resolved neuroimaging data.
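The abstract's distinction between undirected and directed functional connectivity can be illustrated on synthetic signals: coherence as an undirected measure, and a Granger-causality F-test as a directed one. The signals and parameters below are invented, and this is not the authors' source-localised MEG pipeline.

```python
# Undirected vs directed connectivity on synthetic signals: y is a lagged,
# noisy copy of x, so x and y are coherent AND x Granger-causes y. All data
# and parameters are invented for illustration.
import numpy as np
from scipy.signal import coherence
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(2)
n = 2000
x = rng.normal(size=n)
y = np.roll(x, 3) + 0.5 * rng.normal(size=n)  # y lags x by 3 samples

f, Cxy = coherence(x, y, fs=250.0)            # undirected: spectral coherence
print("mean coherence:", Cxy.mean())

# Directed: does the past of x improve prediction of y beyond y's own past?
# grangercausalitytests tests whether column 2 Granger-causes column 1.
res = grangercausalitytests(np.column_stack([y, x]), maxlag=5, verbose=False)
print("Granger F-test p-value at lag 5:", res[5][0]["ssr_ftest"][1])
```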


2001 ◽  
Vol 13 (6) ◽  
pp. 793-799 ◽  
Author(s):  
Moshe Bar

The nature of visual object representation in the brain is the subject of a prolonged debate. One set of theories asserts that objects are represented by their structural description and the representation is "object-centered." Theories from the other side of the debate suggest that humans store multiple "snapshots" for each object, depicting it as seen under various conditions, and the representation is therefore "viewer-centered." The principal tool that has been used to support and criticize each of these hypotheses is subjects' performance in recognizing objects under novel viewing conditions. For example, if subjects take more time in recognizing an object from an unfamiliar viewpoint, it is common to claim that the representation of that object is viewpoint-dependent and therefore viewer-centered. It is suggested here, however, that performance cost in recognition of objects under novel conditions may be misleading when studying the nature of object representation. Specifically, it is argued that viewpoint-dependent performance is not necessarily an indication of viewer-centered representation. An account of the neural basis of perceptual priming is first provided. In light of this account, it is conceivable that viewpoint dependency reflects the utilization of neural paths with different levels of sensitivity en route to the same representation, rather than the existence of viewpoint-specific representations. New experimental paradigms are required to study the validity of the viewer-centered approach.

