Attention modulates neural representation to render reconstructions according to subjective appearance

2022 · Vol 5 (1)
Author(s): Tomoyasu Horikawa, Yukiyasu Kamitani

Abstract: Stimulus images can be reconstructed from visual cortical activity. However, our perception of stimuli is shaped by both stimulus-induced and top-down processes, and it is unclear whether and how reconstructions reflect top-down aspects of perception. Here, we investigate the effect of attention on reconstructions using fMRI activity measured while subjects attend to one of two superimposed images. A state-of-the-art method is used for image reconstruction, in which brain activity is translated (decoded) into deep neural network (DNN) features of hierarchical layers and then into an image. Reconstructions resemble the attended rather than the unattended images. They can be modeled by superimposed images with biased contrasts, comparable to the appearance during attention. Attentional modulations are found in a broad range of hierarchical visual representations and mirror the brain–DNN correspondence. Our results demonstrate that top-down attention counters stimulus-induced responses, modulating neural representations to render reconstructions in accordance with subjective appearance.
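A minimal sketch of the two-stage logic behind this kind of reconstruction, using synthetic data and a fixed linear projection standing in for the pretrained DNN. All shapes, the toy feature extractor, and the closed-form image step are illustrative assumptions; the actual method optimises images through a deep network by gradient descent.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_train, n_voxels, n_pixels, n_feats = 200, 500, 64, 100

W_dnn = rng.standard_normal((n_pixels, n_feats))  # stand-in for a DNN layer

def features(images):
    return images @ W_dnn

# Synthetic training data: images and fMRI responses linearly related.
train_imgs = rng.standard_normal((n_train, n_pixels))
mixing = rng.standard_normal((n_pixels, n_voxels))
train_fmri = train_imgs @ mixing + 0.1 * rng.standard_normal((n_train, n_voxels))

# Stage 1: decode DNN features from brain activity.
decoder = Ridge(alpha=10.0).fit(train_fmri, features(train_imgs))

# Stage 2: find an image whose features match the decoded features.
test_img = rng.standard_normal(n_pixels)
test_fmri = test_img @ mixing
decoded = decoder.predict(test_fmri[None, :])[0]
# With a linear "DNN" this feature-matching step has a closed form;
# a real DNN requires iterative gradient optimisation instead.
recon, *_ = np.linalg.lstsq(W_dnn.T, decoded, rcond=None)
print("reconstruction correlation:", np.corrcoef(recon, test_img)[0, 1])
```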

2020
Author(s): Tomoyasu Horikawa, Yukiyasu Kamitani

Summary: Visual image reconstruction from brain activity produces images whose features are consistent with the neural representations in the visual cortex given arbitrary visual instances [1–3], presumably reflecting the person’s visual experience. Previous reconstruction studies have been concerned either with how faithfully stimulus images can be reconstructed or with whether mentally imagined contents can be reconstructed in the absence of external stimuli. However, many lines of vision research have demonstrated that even stimulus perception is shaped both by stimulus-induced processes and by top-down processes. In particular, attention (or the lack of it) is known to profoundly affect visual experience [4–8] and brain activity [9–21]. Here, to investigate how top-down attention impacts the neural representation of visual images and their reconstructions, we use a state-of-the-art method (deep image reconstruction [3]) to reconstruct visual images from fMRI activity measured while subjects attend to one of two images superimposed with equally weighted contrasts. Deep image reconstruction exploits the hierarchical correspondence between the brain and a deep neural network (DNN) to translate (decode) brain activity into DNN features of multiple layers, and then creates images that are consistent with the decoded DNN features [3, 22, 23]. Using the deep image reconstruction model trained on fMRI responses to single natural images, we decode brain activity during the attention trials. Behavioral evaluations show that the reconstructions resemble the attended rather than the unattended images. The reconstructions can be modeled by superimposed images with contrasts biased toward the attended one, comparable to the appearance of the stimuli under attention measured in a separate session. Attentional modulations are found in a broad range of hierarchical visual representations and mirror the brain–DNN correspondence. Our results demonstrate that top-down attention counters stimulus-induced responses and modulates neural representations to render reconstructions in accordance with subjective appearance. The reconstructions appear to reflect the content of visual experience and volitional control, opening new possibilities for brain-based communication and creation.
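The superposition model described above can be illustrated with a short sketch: treat a reconstruction as a contrast-weighted mixture of the two stimulus images and fit the bias weight w. The image vectors, noise level, and the 0.7/0.3 mixture below are invented for illustration.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)
attended = rng.standard_normal(4096)    # flattened stimulus images
unattended = rng.standard_normal(4096)
# Pretend the reconstruction mixes them with a 0.7/0.3 contrast bias.
recon = 0.7 * attended + 0.3 * unattended + 0.2 * rng.standard_normal(4096)

def neg_corr(w):
    # Weighted superposition model; maximise its match to the reconstruction.
    model = w * attended + (1.0 - w) * unattended
    return -np.corrcoef(model, recon)[0, 1]

fit = minimize_scalar(neg_corr, bounds=(0.0, 1.0), method="bounded")
print(f"fitted contrast weight w = {fit.x:.2f}")  # ~0.7 -> bias toward attended
```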


2021
Author(s): Rohan Saha, Jennifer Campbell, Janet F. Werker, Alona Fyshe

Infants begin developing rudimentary language skills and can understand simple words well before their first birthday. This development has been demonstrated primarily with Event-Related Potential (ERP) techniques that find evidence of word comprehension in the infant brain. While these works validate the presence of semantic representations of words (word meaning) in infants, they tell us little about the mental processes involved in the manifestation of these semantic representations or about their content. To this end, we use a decoding approach, employing machine learning techniques on electroencephalography (EEG) data to predict the semantic representations of words found in the brain activity of infants. We perform multiple analyses to explore word semantic representations in two groups of infants (9-month-old and 12-month-old). Our analyses show significantly above-chance decodability of overall word semantics, word animacy, and word phonetics. Participants in both age groups show signs of word comprehension immediately after word onset, marked by our model's significantly above-chance word prediction accuracy. We also observe strong neural representations of word phonetics in the brain data for both age groups, some likely correlated with word decoding accuracy and others not. Lastly, we find that the neural representations of word semantics are similar in both infant age groups. Our results on word semantics, phonetics, and animacy decodability give us insight into the evolution of the neural representation of word meaning in infants.
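A minimal sketch of this kind of decoding analysis, assuming ridge regression from EEG features to word embedding vectors and the common 2-vs-2 evaluation. The trial counts, feature dimensions, and synthetic data below are illustrative, not the authors' actual pipeline.

```python
import numpy as np
from itertools import combinations
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold

rng = np.random.default_rng(2)
n_trials, n_eeg_feats, n_sem_dims = 120, 1560, 50  # e.g. 26 channels x 60 time bins
semantics = rng.standard_normal((n_trials, n_sem_dims))  # word vectors per trial
eeg = semantics @ rng.standard_normal((n_sem_dims, n_eeg_feats)) \
      + 2.0 * rng.standard_normal((n_trials, n_eeg_feats))

def two_vs_two(pred, true):
    # Is the correct pairing of predicted to true vectors more similar
    # (higher correlation) than the swapped pairing? Chance = 0.5.
    hits = total = 0
    for i, j in combinations(range(len(true)), 2):
        same = np.corrcoef(pred[i], true[i])[0, 1] + np.corrcoef(pred[j], true[j])[0, 1]
        swap = np.corrcoef(pred[i], true[j])[0, 1] + np.corrcoef(pred[j], true[i])[0, 1]
        hits += same > swap
        total += 1
    return hits / total

scores = []
for tr, te in KFold(5, shuffle=True, random_state=0).split(eeg):
    model = Ridge(alpha=100.0).fit(eeg[tr], semantics[tr])
    scores.append(two_vs_two(model.predict(eeg[te]), semantics[te]))
print(f"2-vs-2 accuracy: {np.mean(scores):.2f} (chance = 0.50)")
```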


2020 · Vol 32 (18) · pp. 15249-15262
Author(s): Sid Ghoshal, Stephen Roberts

Abstract: Much of modern practice in financial forecasting relies on technicals, an umbrella term for several heuristics applying visual pattern recognition to price charts. Despite its ubiquity in financial media, the reliability of its signals remains a contentious and highly subjective form of ‘domain knowledge’. We investigate the predictive value of patterns in financial time series, applying machine learning and signal processing techniques to 22 years of US equity data. By reframing technical analysis as a poorly specified, arbitrarily preset feature-extractive layer in a deep neural network, we show that better convolutional filters can be learned directly from the data, and we provide visual representations of the features being identified. We find that an ensemble of shallow, thresholded convolutional neural networks optimised over different resolutions achieves state-of-the-art performance in this domain, outperforming technical methods while retaining some of their interpretability.
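The reframing of a technical rule as a convolutional filter can be illustrated with a toy sketch: a moving-average-crossover "pattern" is one fixed kernel slid over the return series, while a learned kernel is fit from data. The simulated AR(1) returns and in-sample scoring below are illustrative simplifications, not the paper's CNN ensemble.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
# Simulated returns with weak autocorrelation, so there is something to learn.
n = 5000
noise = 0.01 * rng.standard_normal(n)
returns = np.empty(n)
returns[0] = noise[0]
for t in range(1, n):
    returns[t] = 0.2 * returns[t - 1] + noise[t]

win = 20  # look-back window = width of the "filter"
X = np.lib.stride_tricks.sliding_window_view(returns[:-1], win)
y = (returns[win:] > 0).astype(int)  # next-step direction

# Hand-crafted "technical" filter: short-minus-long moving average crossover.
tech = np.r_[np.zeros(win - 5), np.full(5, 1 / 5)] - 1 / win
tech_acc = (((X @ tech) > 0).astype(int) == y).mean()

# Learned filter: the logistic-regression weights act as one conv kernel,
# since each row of X is one placement of the sliding window.
# (In-sample scores only; a proper comparison needs held-out data.)
clf = LogisticRegression(max_iter=1000).fit(X, y)
print(f"hand-crafted filter accuracy: {tech_acc:.3f}")
print(f"learned filter accuracy:      {clf.score(X, y):.3f}")
```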


2013 · Vol 347-350 · pp. 2516-2520
Author(s): Jian Hua Jiang, Xu Yu, Zhi Xing Huang

Over the last decade, functional magnetic resonance imaging (fMRI) has become a primary tool for predicting brain activity. Research focus has shifted from pictures to words, with relatively successful results. In this paper, several representative machine learning methods are introduced, most of which pair fMRI data with word features. The semantic features (properties or factors) that underpin the neural representation of words show a certain commonality across people. These methods are applied for prediction or classification.
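As a concrete illustration of this family of methods, the sketch below fits a linear model that predicts fMRI voxel activity from a word's semantic feature vector and evaluates it with a leave-two-out matching test (in the spirit of Mitchell et al., 2008). All data and dimensions are synthetic assumptions.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(4)
n_words, n_feats, n_voxels = 60, 25, 300
word_feats = rng.standard_normal((n_words, n_feats))  # semantic features per word
basis = rng.standard_normal((n_feats, n_voxels))      # feature-to-voxel mapping
fmri = word_feats @ basis + rng.standard_normal((n_words, n_voxels))

# Leave-two-out: train on 58 words, then check that the two held-out
# words match their own predicted activity better than each other's.
test, train = [0, 1], np.arange(2, n_words)
model = Ridge(alpha=1.0).fit(word_feats[train], fmri[train])
pred = model.predict(word_feats[test])

dist = lambda a, b: 1 - np.corrcoef(a, b)[0, 1]
correct = dist(pred[0], fmri[0]) + dist(pred[1], fmri[1])
swapped = dist(pred[0], fmri[1]) + dist(pred[1], fmri[0])
print("correct pairing wins:", correct < swapped)  # chance level = 50%
```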


2020
Author(s): David Badre, Apoorva Bhandari, Haley Keglovits, Atsushi Kikumoto

Cognitive control allows us to think and behave flexibly based on our context and goals. At the heart of theories of cognitive control is a control representation that enables the same input to produce different outputs contingent on contextual factors. In this review, we focus on an important property of the control representation’s neural code: its representational dimensionality. Dimensionality of a neural representation balances a basic separability/generalizability trade-off in neural computation. We will discuss the implications of this trade-off for cognitive control. We will then briefly review current neuroscience findings regarding the dimensionality of control representations in the brain, particularly the prefrontal cortex. We conclude by highlighting open questions and crucial directions for future research.
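One widely used way to quantify the dimensionality of a neural representation is the participation ratio of the covariance eigenvalues; the sketch below computes it for synthetic data with a planted three-dimensional latent structure. This is a generic measure chosen for illustration, not necessarily the one the review adopts.

```python
import numpy as np

rng = np.random.default_rng(5)
n_conditions, n_neurons = 40, 200
# Low-dimensional latent structure embedded in many neurons, plus noise.
latents = rng.standard_normal((n_conditions, 3))
responses = latents @ rng.standard_normal((3, n_neurons)) \
            + 0.5 * rng.standard_normal((n_conditions, n_neurons))

# Participation ratio: (sum of eigenvalues)^2 / sum of squared eigenvalues.
centered = responses - responses.mean(axis=0)
eigvals = np.linalg.eigvalsh(centered.T @ centered / (n_conditions - 1))
pr = eigvals.sum() ** 2 / (eigvals ** 2).sum()
print(f"participation ratio: {pr:.1f} effective dimensions")
```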


2021
Author(s): Ze Fu, Xiaosha Wang, Xiaoying Wang, Huichao Yang, Jiahuan Wang, ...

A critical way for humans to acquire, represent, and communicate information is through language, yet the computational mechanisms through which language contributes to our word-meaning representations are poorly understood. We compared three major types of word-relation measures derived from a large language corpus (simple co-occurrence, graph-space relations, and neural-network vector-embedding relations) in terms of their association with words’ brain activity patterns, measured in two functional magnetic resonance imaging (fMRI) experiments. Word relations derived from a graph-space representation, but not from neural-network vector embeddings, had unique explanatory power for the neural activity patterns in brain regions that have been shown to be particularly sensitive to language processes, including the anterior temporal lobe (capturing graph-common-neighbors), inferior frontal gyrus, and posterior middle/inferior temporal gyrus (capturing graph-shortest-path). These results were robust across different window sizes and graph sizes and were relatively specific to language inputs. These findings highlight the role of cumulative language input in organizing the neural representation of word meaning and provide a mathematical model of how different brain regions capture different types of language-derived information.
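The graph-space quantities named above (common neighbors and shortest path) can be computed directly from a word co-occurrence graph. A minimal sketch with an invented four-sentence corpus, using networkx:

```python
import itertools
import networkx as nx

# Toy corpus; real analyses use a large corpus and a co-occurrence window.
corpus = [["dog", "chases", "cat"], ["cat", "drinks", "milk"],
          ["dog", "drinks", "water"], ["milk", "and", "water"]]

G = nx.Graph()
for sentence in corpus:  # window = whole sentence, for simplicity
    G.add_edges_from(itertools.combinations(sentence, 2))

for w1, w2 in [("dog", "cat"), ("dog", "milk")]:
    common = len(list(nx.common_neighbors(G, w1, w2)))
    path = nx.shortest_path_length(G, w1, w2)
    print(f"{w1}-{w2}: common neighbors = {common}, shortest path = {path}")
```

In the study, pairwise matrices of such graph measures are then compared against the similarity structure of fMRI activity patterns.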


2017
Author(s): J. Brendan Ritchie, David Michael Kaplan, Colin Klein

Abstract: Since its introduction, multivariate pattern analysis (MVPA), or “neural decoding”, has transformed the field of cognitive neuroscience. Underlying its influence is a crucial inference, which we call the Decoder’s Dictum: if information can be decoded from patterns of neural activity, then this provides strong evidence about what information those patterns represent. Although the Dictum is a widely held and well-motivated principle in decoding research, it has received scant philosophical attention. We critically evaluate the Dictum, arguing that it is false: decodability is a poor guide for revealing the content of neural representations. However, we also suggest how the Dictum can be improved on, in order to better justify inferences about neural representation using MVPA.
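For readers unfamiliar with the practice at issue, the sketch below shows the sort of result the Dictum interprets: a linear classifier separating two conditions from multivoxel patterns, whose above-chance accuracy is the "decodability" in question. The data, classifier, and effect size are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(6)
n_trials, n_voxels = 100, 50
labels = np.repeat([0, 1], n_trials // 2)
patterns = rng.standard_normal((n_trials, n_voxels))
patterns[labels == 1, :5] += 0.8  # weak condition signal in 5 voxels

acc = cross_val_score(LogisticRegression(max_iter=1000),
                      patterns, labels, cv=5).mean()
print(f"decoding accuracy: {acc:.2f} (chance = 0.50)")
```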


2019
Author(s): Mohamed Abdelhack, Yukiyasu Kamitani

Abstract: Visual recognition involves integrating visual information with other sensory information and prior knowledge. Consistent with Bayesian inference, under conditions of unreliable visual input the brain relies on the prior as a source of information to achieve the inference process. This drives a top-down process that improves the neural representation of the visual input. However, the extent to which non-stimulus-driven top-down information affects processing in the ventral stream is still unclear. We conducted a perceptual decision-making task using blurred images while measuring brain activity with functional magnetic resonance imaging. We then transformed the brain activity into deep neural network features to distinguish bottom-up from top-down signals. We found that top-down information unrelated to the stimulus had a minimal effect on lower-level visual processes. The neural representations of degraded stimuli that were misrecognized were still correlated with the correct object category in the lower levels of processing. In contrast, activity in the higher cognitive areas was more strongly correlated with the recognition reported by the subjects. This discrepancy between processing at the lower and higher levels indicates the existence of a stimulus-independent top-down signal flowing back down the hierarchy. These findings suggest that integration of bottom-up and top-down information takes the form of competing evidence in higher visual areas between prior-driven top-down and stimulus-driven bottom-up signals. These findings could provide important insight into the different modes of integration of neural signals in the visual cortex that contribute to the visual inference process.
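A minimal sketch of the layer-wise logic: correlate feature vectors decoded from brain activity with features of the presented stimulus versus the reported percept, layer by layer. The layer names, feature vectors, and the built-in drift toward the percept are synthetic assumptions chosen so the sketch runs and shows the qualitative pattern the study reports.

```python
import numpy as np

rng = np.random.default_rng(7)
layers = ["conv1", "conv3", "conv5", "fc7"]
stim_feat = {l: rng.standard_normal(256) for l in layers}  # presented category
seen_feat = {l: rng.standard_normal(256) for l in layers}  # reported category

for i, l in enumerate(layers):
    # Toy decoded features: early layers track the stimulus, late layers
    # drift toward what the subject reports seeing.
    w = i / (len(layers) - 1)
    decoded = (1 - w) * stim_feat[l] + w * seen_feat[l] \
              + 0.5 * rng.standard_normal(256)
    r_stim = np.corrcoef(decoded, stim_feat[l])[0, 1]
    r_seen = np.corrcoef(decoded, seen_feat[l])[0, 1]
    print(f"{l}: r(stimulus) = {r_stim:.2f}, r(percept) = {r_seen:.2f}")
```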


2017
Author(s): Cooper A. Smout, Jason B. Mattingley

Abstract: Recent evidence suggests that voluntary spatial attention can affect neural processing of visual stimuli that do not enter conscious awareness (i.e. invisible stimuli), supporting the notion that attention and awareness are dissociable processes (Watanabe et al., 2011; Wyart, Dehaene, & Tallon-Baudry, 2012). To date, however, no study has demonstrated that these effects reflect enhancement of the neural representation of invisible stimuli per se, as opposed to other neural processes not specifically tied to the stimulus in question. In addition, it remains unclear whether spatial attention can modulate neural representations of invisible stimuli in direct competition with highly salient and visible stimuli. Here we developed a novel electroencephalography (EEG) frequency-tagging paradigm to obtain a continuous readout of human brain activity associated with visible and invisible signals embedded in dynamic noise. Participants (N = 23) detected occasional contrast changes in one of two flickering image streams on either side of fixation. Each image stream contained a visible or invisible signal embedded in every second noise image, the visibility of which was titrated and checked using a two-interval forced-choice detection task. Steady-state visual-evoked potentials (SSVEPs) were computed from EEG data at the signal and noise frequencies of interest. Cluster-based permutation analyses revealed significant neural responses to both visible and invisible signals across posterior scalp electrodes. Control analyses revealed that these responses did not reflect a subharmonic response to noise stimuli. In line with previous findings, spatial attention increased the neural representation of visible signals. Crucially, spatial attention also increased the neural representation of invisible signals. As such, the present results replicate and extend previous studies by demonstrating that attention can modulate the neural representation of invisible signals that are in direct competition with highly salient masking stimuli.
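A minimal sketch of frequency tagging: simulate an EEG trace containing oscillations at two tagging frequencies and read out SSVEP amplitude at each from the Fourier spectrum. The sampling rate, tag frequencies, and amplitudes below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(8)
fs, dur = 256, 20                 # sampling rate (Hz), duration (s)
t = np.arange(fs * dur) / fs
f_signal, f_noise = 7.5, 12.0     # tagging frequencies (Hz)
eeg = (0.8 * np.sin(2 * np.pi * f_signal * t)
       + 1.2 * np.sin(2 * np.pi * f_noise * t)
       + 2.0 * rng.standard_normal(t.size))

# SSVEP readout: amplitude spectrum at each tagged frequency bin.
spectrum = np.abs(np.fft.rfft(eeg)) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)
for name, f in [("signal stream", f_signal), ("noise stream", f_noise)]:
    amp = spectrum[np.argmin(np.abs(freqs - f))]
    print(f"SSVEP amplitude at {f} Hz ({name}): {amp:.3f}")
```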


2021
Author(s): Sheena Waters, Elise Kanber, Nadine Lavan, Michel Belyk, Daniel Carey, ...

Humans have a remarkable capacity to finely control the muscles of the larynx, via distinct patterns of cortical topography and innervation that may underpin our sophisticated vocal capabilities compared with non-human primates. Here, we investigated the behavioural and neural correlates of laryngeal control, and their relationship to vocal expertise, using an imitation task that required adjustments of larynx musculature during speech. Highly trained human singers and non-singer control participants modulated voice pitch and vocal tract length (VTL) to mimic auditory speech targets, while undergoing real-time anatomical scans of the vocal tract and functional scans of brain activity. Multivariate analyses of speech acoustics, larynx movements and brain activation data were used to quantify vocal modulation behaviour, and to search for neural representations of the two modulated vocal parameters during the preparation and execution of speech. We found that singers showed more accurate task-relevant modulations of speech pitch and VTL (i.e. larynx height, as measured with vocal tract MRI) during speech imitation; this was accompanied by stronger representation of VTL within a region of right dorsal somatosensory cortex. Our findings suggest a common neural basis for enhanced vocal control in speech and song.
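One simple way to quantify imitation accuracy, sketched below, is the correlation between target and produced parameter values across trials (VTL would be treated the same way as pitch). The group labels, target ranges, and noise levels are synthetic assumptions, not the study's data or analysis.

```python
import numpy as np

rng = np.random.default_rng(9)
n_trials = 40
target_pitch = rng.uniform(100, 300, n_trials)  # target fundamental frequency (Hz)

# Simulated groups: singers track the targets more tightly than controls.
for group, noise_sd in [("singers", 10.0), ("controls", 30.0)]:
    produced = target_pitch + noise_sd * rng.standard_normal(n_trials)
    r = np.corrcoef(target_pitch, produced)[0, 1]
    print(f"{group}: pitch imitation accuracy r = {r:.2f}")
```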

