Distillation of Regional Activity Reveals Hidden Content of Neural Information in Visual Processing

2021 ◽  
Vol 15 ◽  
Author(s):  
Trung Quang Pham ◽  
Shota Nishiyama ◽  
Norihiro Sadato ◽  
Junichi Chikazoe

Multivoxel pattern analysis (MVPA) has become a standard tool for decoding mental states from brain activity patterns. Recent studies have demonstrated that MVPA can also be applied to decode the activity patterns of one region from those of other regions. Applying a similar region-to-region decoding technique, we examined whether the information represented in each visual area can be explained by that represented in the other visual areas. We first predicted the activity patterns of an area on the visual pathway from the others, then subtracted the predicted patterns from the originals. Subsequently, visual features were derived from these residuals. During a visual perception task, eliminating the top-down signals enhanced the simple visual features represented in the early visual cortices. By contrast, eliminating the bottom-up signals enhanced the complex visual features represented in the higher visual cortices. The direction of these modulation effects varied across visual perception/imagery tasks, indicating that the information flow across the visual cortices is dynamically altered to reflect the contents of visual processing. These results demonstrate that the distillation approach is a useful tool for estimating the hidden content of information conveyed across brain regions.
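The core of the distillation procedure described above (predict one region's activity from another's, subtract, and analyze the residual) can be sketched with plain ridge regression. The following is a minimal numpy illustration on synthetic data, not the authors' pipeline; all names, sizes, and the noise model are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: trials x voxels for a "source" region (e.g. higher visual areas)
# and a "target" region (e.g. V1). Sizes are illustrative only.
n_trials, n_src, n_tgt = 200, 50, 30
source = rng.standard_normal((n_trials, n_src))
top_down = source @ rng.standard_normal((n_src, n_tgt)) * 0.5
target = top_down + rng.standard_normal((n_trials, n_tgt))  # inherited + local signal

def ridge_fit(X, Y, alpha=1.0):
    """Closed-form ridge regression: W = (X'X + aI)^-1 X'Y."""
    return np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ Y)

W = ridge_fit(source, target)
predicted = source @ W          # component explainable from the other region
residual = target - predicted   # "distilled" activity unique to the target region

# The residual should be far less correlated with the source-driven component.
r_before = np.corrcoef(target.ravel(), top_down.ravel())[0, 1]
r_after = np.corrcoef(residual.ravel(), top_down.ravel())[0, 1]
print(f"corr with top-down component: before={r_before:.2f}, after={r_after:.2f}")
```

In the actual study the residuals would then be fed to a feature decoder; here the point is only that subtraction removes the between-region shared component.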

2022 ◽  
Author(s):  
Jun Kai Ho ◽  
Tomoyasu Horikawa ◽  
Kei Majima ◽  
Yukiyasu Kamitani

The sensory cortex is characterized by general organizational principles such as topography and hierarchy. However, measured brain activity given identical input exhibits substantially different patterns across individuals. While anatomical and functional alignment methods have been proposed in functional magnetic resonance imaging (fMRI) studies, it remains unclear whether and how hierarchical and fine-grained representations can be converted between individuals while preserving the encoded perceptual contents. In this study, we evaluated machine learning models called neural code converters, which predict one individual's brain activity pattern (target) from another's (source) given the same stimulus, via the decoding of hierarchical visual features and the reconstruction of perceived images. The training data for the converters consisted of fMRI data obtained while identical sets of natural images were presented to pairs of individuals. Converters were trained using all visual cortical voxels from V1 through the ventral object areas, without explicit labels of the visual areas. We decoded the converted brain activity patterns into hierarchical visual features of a deep neural network (DNN) using decoders pre-trained on the target brain, and then reconstructed images from the decoded features. Without explicit information about the visual cortical hierarchy, the converters automatically learned the correspondence between visual areas of the same level. DNN feature decoding at each layer showed higher decoding accuracies from the corresponding levels of visual areas, indicating that hierarchical representations were preserved after conversion. The viewed images were faithfully reconstructed, with recognizable silhouettes of objects, even with relatively small amounts of converter training data. The conversion also allows data to be pooled across multiple individuals, yielding stably high reconstruction accuracy compared to conversion between single pairs of individuals. These results demonstrate that the conversion learns hierarchical correspondence and preserves fine-grained representations of visual features, enabling visual image reconstruction using decoders trained on other individuals.
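The converter-plus-decoder chain can be sketched as two linear regressions composed at test time: source voxels are mapped to target voxels, then passed through a decoder trained on the target brain. This is a minimal numpy illustration under the simplifying assumption that both brains encode a shared feature space linearly; the data, dimensions, and ridge penalty are all invented.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: two subjects view the same stimuli; each stimulus has a
# latent DNN-like feature vector, and each brain maps features to voxels linearly.
n_train, n_test, n_feat = 150, 20, 10
n_src_vox, n_tgt_vox = 40, 35
feats_train = rng.standard_normal((n_train, n_feat))
feats_test = rng.standard_normal((n_test, n_feat))

A_src = rng.standard_normal((n_feat, n_src_vox))   # source subject's "encoding"
A_tgt = rng.standard_normal((n_feat, n_tgt_vox))   # target subject's "encoding"
noise = 0.1
src_train = feats_train @ A_src + noise * rng.standard_normal((n_train, n_src_vox))
tgt_train = feats_train @ A_tgt + noise * rng.standard_normal((n_train, n_tgt_vox))
src_test = feats_test @ A_src + noise * rng.standard_normal((n_test, n_src_vox))

def ridge(X, Y, alpha=1.0):
    return np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ Y)

# 1) Converter: source voxels -> target voxels (trained on shared stimuli).
W_conv = ridge(src_train, tgt_train)
# 2) Decoder pre-trained on the *target* brain: target voxels -> features.
W_dec = ridge(tgt_train, feats_train)
# 3) Decode the source subject's held-out activity through the conversion.
feats_hat = (src_test @ W_conv) @ W_dec

r = np.corrcoef(feats_hat.ravel(), feats_test.ravel())[0, 1]
print(f"feature decoding accuracy after conversion: r = {r:.2f}")
```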


2019 ◽  
Author(s):  
Sirui Liu ◽  
Qing Yu ◽  
Peter U. Tse ◽  
Patrick Cavanagh

Summary
When perception differs from the physical stimulus, as it does for visual illusions and binocular rivalry, the opportunity arises to localize where perception emerges in the visual processing hierarchy. Representations prior to that stage differ from the eventual conscious percept even though they provide input to it. Here we investigate where and how a remarkable misperception of position emerges in the brain. This “double-drift” illusion causes a dramatic mismatch between retinal and perceived location, producing a perceived path that can differ from its physical path by 45° or more [1]. The deviations in the perceived trajectory can accumulate over at least a second [1], whereas other motion-induced position shifts accumulate over only 80 to 100 ms before saturating [2]. Using fMRI and multivariate pattern analysis, we find that the illusory path does not share activity patterns with a matched physical path in any early visual areas. In contrast, a whole-brain searchlight analysis reveals a shared representation in more anterior regions of the brain. These higher-order areas would have the longer time constants required to accumulate the small moment-to-moment position offsets that presumably originate in early visual cortices, and then transform these sensory inputs into a final conscious percept. The dissociation between perception and the activity in early sensory cortex suggests that perceived position does not emerge in what is traditionally regarded as the visual system but emerges instead at a much higher level.


2017 ◽  
Vol 24 (3) ◽  
pp. 277-293 ◽  
Author(s):  
Selen Atasoy ◽  
Gustavo Deco ◽  
Morten L. Kringelbach ◽  
Joel Pearson

A fundamental characteristic of spontaneous brain activity is coherent oscillations covering a wide range of frequencies. Interestingly, these temporal oscillations are highly correlated among spatially distributed cortical areas, forming structured correlation patterns known as the resting-state networks, although the brain is never truly at “rest.” Here, we introduce the concept of harmonic brain modes—fundamental building blocks of complex spatiotemporal patterns of neural activity. We define these elementary harmonic brain modes as harmonic modes of structural connectivity; that is, connectome harmonics, yielding fully synchronous neural activity patterns with different frequency oscillations emerging on and constrained by the particular structure of the brain. Hence, this definition implicitly links the hitherto poorly understood dimensions of space and time in brain dynamics and their underlying anatomy. Further, we show how harmonic brain modes can explain the relationship between neurophysiological, temporal, and network-level changes in the brain across different mental states (wakefulness, sleep, anesthesia, the psychedelic state). Notably, when neural activity is decoded as activation of connectome harmonics, its spatial and temporal characteristics naturally emerge from the interplay between excitation and inhibition, and this critical relation fits the spatial, temporal, and neurophysiological changes associated with different mental states. Thus, the introduced framework of harmonic brain modes not only establishes a relation between the spatial structure of correlation patterns and temporal oscillations (linking space and time in brain dynamics), but also enables a new dimension of tools for understanding fundamental principles underlying brain dynamics in different states of consciousness.
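Connectome harmonics, as defined above, are the eigenmodes of the graph Laplacian of the structural connectivity matrix. The following is a minimal numpy sketch on a toy ring "connectome" (a stand-in for a tractography-derived matrix, invented for illustration), showing that the lowest mode is spatially constant and higher modes oscillate at increasing spatial frequency.

```python
import numpy as np

# Toy structural connectome: a symmetric weighted adjacency matrix over a
# ring of "brain regions" (a stand-in for a tractography-derived connectome).
n = 60
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0   # nearest neighbours
    A[i, (i + 2) % n] = A[(i + 2) % n, i] = 0.5   # weaker longer-range links

# Graph Laplacian L = D - A; its eigenvectors are the connectome harmonics.
L = np.diag(A.sum(axis=1)) - A
eigvals, eigvecs = np.linalg.eigh(L)

# Columns of eigvecs, ordered by increasing eigenvalue, are the harmonics.
# The first (eigenvalue ~0) is spatially constant; later ones oscillate at
# increasing spatial frequency over the connectome.
harmonics = eigvecs
print("lowest eigenvalues:", np.round(eigvals[:4], 3))
```

In the full framework these modes serve as a basis in which empirical activity patterns are expanded, analogous to a Fourier decomposition on the connectome.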


2021 ◽  
Author(s):  
Yingying Huang ◽  
Frank Pollick ◽  
Ming Liu ◽  
Delong Zhang

Abstract
Visual mental imagery and visual perception have been shown to share a hierarchical topological visual structure of neural representation. Meanwhile, many studies have reported a dissociation between the neural substrates of mental imagery and perception in both function and structure. However, we have limited knowledge about how the visual hierarchical cortex is involved in internally generated mental imagery and in perception with visual input. Here we used a dataset from previous fMRI research (Horikawa & Kamitani, 2017), which included a visual perception and an imagery experiment with human participants. We trained two types of voxel-wise encoding models, based on Gabor features and on activity patterns of high-level visual areas, to predict activity in the early visual cortex (EVC, i.e., V1, V2, V3) during perception, and then evaluated the performance of these models during mental imagery. Our results showed that during both perception and imagery, activity in the EVC could be independently predicted by the Gabor features and by the activity of high-level visual areas via the encoding models, suggesting that perception and imagery share neural representation in the EVC. We further found a Gabor-specific and a non-Gabor-specific neural response pattern to stimuli in the EVC, both of which were shared by perception and imagery. These findings provide insight into the mechanisms by which visual perception and imagery share representation in the EVC.


2015 ◽  
Vol 27 (7) ◽  
pp. 1376-1387 ◽  
Author(s):  
Jessica Bulthé ◽  
Bert De Smedt ◽  
Hans P. Op de Beeck

In numerical cognition, there is a well-known but contested hypothesis that proposes an abstract representation of numerical magnitude in human intraparietal sulcus (IPS). On the other hand, researchers of object cognition have suggested another hypothesis for brain activity in IPS during the processing of number, namely that this activity simply correlates with the number of visual objects or units that are perceived. We contrasted these two accounts by analyzing multivoxel activity patterns elicited by dot patterns and Arabic digits of different magnitudes while participants were explicitly processing the represented numerical magnitude. The activity pattern elicited by the digit “8” was more similar to the activity pattern elicited by one dot (with which the digit shares the number of visual units but not the magnitude) than to the activity pattern elicited by eight dots, with which the digit shares the represented abstract numerical magnitude. A multivoxel pattern classifier trained to differentiate one dot from eight dots classified all Arabic digits in the one-dot category, irrespective of the numerical magnitude symbolized by the digit. These results were consistently obtained for different digits in IPS, its subregions, and many other brain regions. As predicted from object cognition theories, the number of presented visual units forms the link between the parietal activation elicited by symbolic and nonsymbolic numbers. The current study is difficult to reconcile with the hypothesis that parietal activation elicited by numbers reflects a format-independent representation of number.
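The cross-classification logic (train on one dot vs. eight dots, then test on digits) can be illustrated with a nearest-centroid classifier on synthetic patterns in which activity tracks the number of visual units, as the object-cognition account predicts. Everything below (voxel counts, noise level, the "unit axis") is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative voxel patterns where activity scales with the number of visual
# units on screen, not the symbolised magnitude (the object-cognition account).
n_vox, n_rep = 30, 40
unit_axis = rng.standard_normal(n_vox)   # direction coding "how many items"

def pattern(n_units, reps):
    return n_units * unit_axis + 0.8 * rng.standard_normal((reps, n_vox))

one_dot = pattern(1, n_rep)
eight_dots = pattern(8, n_rep)
digit_8 = pattern(1, n_rep)   # the digit "8" is itself a single visual unit

# Nearest-centroid classifier trained to separate one dot from eight dots.
c1, c8 = one_dot.mean(0), eight_dots.mean(0)
def classify(x):
    return "one-dot" if np.linalg.norm(x - c1) < np.linalg.norm(x - c8) else "eight-dots"

labels = [classify(x) for x in digit_8]
frac_one = labels.count("one-dot") / len(labels)
print(f"digit '8' classified as one-dot on {frac_one:.0%} of trials")
```

Under this generative assumption the digit falls into the one-dot category despite sharing its magnitude with eight dots, mirroring the reported result.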


2014 ◽  
Vol 369 (1641) ◽  
pp. 20130534 ◽  
Author(s):  
Theofanis I. Panagiotaropoulos ◽  
Vishal Kapoor ◽  
Nikos K. Logothetis

The combination of electrophysiological recordings with ambiguous visual stimulation has made possible the detection of neurons that represent the content of subjective visual perception and perceptual suppression in multiple cortical and subcortical brain regions. These neuronal populations, commonly referred to as the neural correlates of consciousness, are more likely to be found in the temporal and prefrontal cortices as well as the pulvinar, indicating that the content of perceptual awareness is represented with higher fidelity in higher-order association areas of the cortical and thalamic hierarchy, reflecting the outcome of competitive interactions between conflicting sensory information resolved in earlier stages. However, despite the significant insights into conscious perception gained through monitoring the activity of single neurons and small, local populations, the immense functional complexity of the brain arising from correlations in the activity of its constituent parts suggests that local, microscopic activity can only partially reveal the mechanisms involved in perceptual awareness. Rather, the dynamics of functional connectivity patterns at the mesoscopic and macroscopic levels could be critical for conscious perception. Understanding these emergent spatio-temporal patterns could be informative not only for the stability of subjective perception but also for spontaneous perceptual transitions, which have been suggested to depend either on the dynamics of antagonistic ensembles or on global intrinsic activity fluctuations that may act upon explicit neural representations of sensory stimuli and induce perceptual reorganization. Here, we review the most recent results from local activity recordings and discuss the potential role of effective, correlated interactions during perceptual awareness.


2018 ◽  
Author(s):  
Natalia I. Córdova ◽  
Nicholas B. Turk-Browne ◽  
Mariam Aly

Abstract
Hippocampal episodic memory is fundamentally relational, consisting of links between events and the spatial and temporal contexts in which they occurred. Such relations are also important over much shorter time periods, during online visual perception. For example, how do we assess the relative spatial positions of objects, their temporal order, or the relationship between their features? Here, we investigate the role of the hippocampus in such online relational processing by manipulating visual attention to different kinds of relations in a dynamic display. While undergoing high-resolution fMRI, participants viewed two images in rapid succession on each trial and performed one of three relational tasks, judging the images’ relative spatial positions, temporal onsets, or sizes. As a control, they sometimes also judged whether one image was tilted, irrespective of the other; this served as a baseline item task with no demands on relational processing. All hippocampal regions of interest (CA1, CA2/3/DG, subiculum) showed reliable deactivation when participants attended to relational vs. item information. Attention to temporal relations was associated with more robust deactivation than the other conditions. One possible interpretation of such deactivation is that it reflects hippocampal disengagement. If true, there should be reduced information content and noisier, less reliable patterns of activity in the hippocampus for the temporal vs. other tasks. Instead, analyses of multivariate activity patterns revealed more stable hippocampal representations in the temporal task. Additional analyses showed that this increased pattern similarity was not simply a reflection of the lower univariate activity. Thus, the hippocampus differentiates between relational and item processing even during online visual perception, and its representations of temporal relations in particular are robust and stable. Together, these findings suggest that the relational computations of the hippocampus, known to be important for memory, extend beyond this purpose, enabling the rapid online extraction of relational information in visual perception.


Author(s):  
Maria Tsantani ◽  
Nikolaus Kriegeskorte ◽  
Katherine Storrs ◽  
Adrian Lloyd Williams ◽  
Carolyn McGettigan ◽  
...  

Abstract
Faces of different people elicit distinct functional MRI (fMRI) patterns in several face-selective brain regions. Here we used representational similarity analysis to investigate what type of identity-distinguishing information is encoded in three face-selective regions: fusiform face area (FFA), occipital face area (OFA), and posterior superior temporal sulcus (pSTS). We used fMRI to measure brain activity patterns elicited by naturalistic videos of famous face identities, and compared their representational distances in each region with models of the differences between identities. Models included low-level to high-level image-computable properties and complex human-rated properties. We found that the FFA representation reflected perceived face similarity, social traits, and gender, and was well accounted for by the OpenFace model (a deep neural network trained to cluster faces by identity). The OFA encoded low-level image-based properties (pixel-wise and Gabor-jet dissimilarities). Our results suggest that, although FFA and OFA can both discriminate between identities, the FFA representation is further removed from the image, encoding higher-level perceptual and social face information.
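Representational similarity analysis compares a brain region and a candidate model by correlating their representational dissimilarity matrices (RDMs). The following is a minimal numpy sketch with synthetic data standing in for the ROI patterns and a model embedding; the identity count, dimensions, and noise level are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical data: voxel patterns for 8 "face identities" in one ROI, plus a
# candidate model (e.g. ratings or DNN embeddings) of the same identities.
n_ids, n_vox, n_dims = 8, 50, 16
model_emb = rng.standard_normal((n_ids, n_dims))
# Let the ROI partially reflect the model's representational geometry.
roi = model_emb @ rng.standard_normal((n_dims, n_vox)) \
      + 2.0 * rng.standard_normal((n_ids, n_vox))

def rdm(X):
    """Representational dissimilarity matrix: 1 - Pearson r between patterns."""
    return 1.0 - np.corrcoef(X)

def upper(M):
    """Off-diagonal upper triangle, the conventional RDM comparison vector."""
    return M[np.triu_indices_from(M, k=1)]

def rank(v):
    """Ranks of v, so that Pearson on ranks gives a Spearman-style correlation."""
    return np.argsort(np.argsort(v)).astype(float)

r = np.corrcoef(rank(upper(rdm(roi))), rank(upper(rdm(model_emb))))[0, 1]
print(f"RDM correlation (ROI vs model): {r:.2f}")
```

In practice each model (pixel-wise, Gabor-jet, human ratings, OpenFace) would contribute its own RDM, all compared against the same ROI RDM.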


2021 ◽  
Author(s):  
Mathilde Salagnon ◽  
Sandrine Cremona ◽  
Marc Joliot ◽  
Francesco d'Errico ◽  
Emmanuel Mellet

It has been suggested that engraved abstract patterns dating from the Middle and Lower Palaeolithic served as means of representation and communication. Identifying the brain regions involved in visual processing of these engravings can provide insights into their function. In this study, brain activity was measured during perception of the earliest known Palaeolithic engraved patterns and compared to perception of natural patterns mimicking human-made engravings. Participants were asked to categorise marks as being intentionally made by humans or due to natural processes (e.g. erosion, root etching). To simulate the putative familiarity of our ancestors with the marks, the responses of expert archaeologists and control participants were compared, allowing characterisation of the effect of previous knowledge on both behaviour and brain activity in the perception of the marks. Besides a set of regions common to both groups and involved in visual analysis and decision-making, the experts exhibited greater activity in the inferior part of the lateral occipital cortex, the ventral occipitotemporal cortex, and medial thalamic regions. These results are consistent with those reported in visual expertise studies, and confirm the importance of the integrative visual areas in the perception of the earliest abstract engravings. The attribution of a natural rather than human origin to the marks elicited greater activity in the salience network in both groups, reflecting the uncertainty and ambiguity in the perception of, and decision-making for, natural patterns. The activation of the salience network might also be related to the process at work in the attribution of an intention to the marks. The primary visual area was not specifically involved in the visual processing of engravings, arguing against a central role for this area in the emergence of engraving production.


2019 ◽  
Author(s):  
Sophia M. Shatek ◽  
Tijl Grootswagers ◽  
Amanda K. Robinson ◽  
Thomas A. Carlson

Abstract
Mental imagery is the ability to generate images in the mind in the absence of sensory input. Both perceptual visual processing and internally generated imagery engage large, overlapping networks of brain regions. However, it is unclear whether they are characterized by similar temporal dynamics. Recent magnetoencephalography work has shown that object category information was decodable from brain activity during mental imagery, but the timing was delayed relative to perception. The current study builds on these findings, using electroencephalography to investigate the dynamics of mental imagery. Sixteen participants viewed two images of the Sydney Harbour Bridge and two images of Santa Claus. On each trial, they viewed a sequence of the four images and were asked to imagine one of them, which was cued retroactively by its temporal location in the sequence. Time-resolved multivariate pattern analysis was used to decode the viewed and imagined stimuli. Our results indicate that the dynamics of imagery processes are more variable across, and within, participants compared to perception of physical stimuli. Although category and exemplar information was decodable for viewed stimuli, there were no informative patterns of activity during mental imagery. The current findings suggest stimulus complexity, task design and individual differences may influence the ability to successfully decode imagined images. We discuss the implications of these results for our understanding of the neural processes underlying mental imagery.
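Time-resolved multivariate pattern analysis fits and tests a classifier independently at each time point, yielding an accuracy time course. Below is a minimal numpy sketch using leave-one-out nearest-centroid decoding on toy two-category data with signal confined to an assumed post-stimulus window; nothing here reproduces the study's EEG data or classifier.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy EEG-like data: trials x channels x time, two stimulus categories whose
# difference appears only in a post-stimulus window (an assumed ground truth).
n_trials, n_chan, n_time = 80, 32, 50
signal_win = slice(20, 35)
y = np.repeat([0, 1], n_trials // 2)
X = rng.standard_normal((n_trials, n_chan, n_time))
effect = rng.standard_normal(n_chan)
X[y == 1, :, signal_win] += effect[:, None]  # category difference in the window

def decode_timecourse(X, y):
    """Leave-one-out nearest-centroid decoding at each time point."""
    acc = np.zeros(X.shape[2])
    for t in range(X.shape[2]):
        correct = 0
        for i in range(len(y)):
            mask = np.arange(len(y)) != i  # hold out trial i
            c0 = X[mask & (y == 0), :, t].mean(0)
            c1 = X[mask & (y == 1), :, t].mean(0)
            pred = int(np.linalg.norm(X[i, :, t] - c1) < np.linalg.norm(X[i, :, t] - c0))
            correct += pred == y[i]
        acc[t] = correct / len(y)
    return acc

acc = decode_timecourse(X, y)
print(f"mean accuracy in signal window: {acc[signal_win].mean():.2f}, "
      f"pre-stimulus baseline: {acc[:20].mean():.2f}")
```

Accuracy hovers at chance outside the signal window and rises within it; the study applies the same logic to compare the timing of perception versus imagery.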

