Insights into Early Word Comprehension: Tracking the Neural Representations of Word Semantics in Infants

2021 · Author(s): Rohan Saha, Jennifer Campbell, Janet F. Werker, Alona Fyshe

Infants begin developing rudimentary language skills and can understand simple words well before their first birthday. This development has been demonstrated primarily with Event-Related Potential (ERP) techniques, which find evidence of word comprehension in the infant brain. While such work validates the presence of semantic representations of words (word meaning) in infants, it does not tell us about the mental processes involved in the manifestation of these semantic representations or about their content. To this end, we use a decoding approach, applying machine learning techniques to electroencephalography (EEG) data to predict the semantic representations of words from the brain activity of infants. We perform multiple analyses to explore word semantic representations in two groups of infants (9-month-olds and 12-month-olds). Our analyses show significantly above-chance decodability of overall word semantics, word animacy, and word phonetics. Participants in both age groups show signs of word comprehension immediately after word onset, marked by our model's significantly above-chance word prediction accuracy. We also observe strong neural representations of word phonetics in the brain data for both age groups, some of which are likely correlated with word decoding accuracy and others not. Lastly, we find that the neural representations of word semantics are similar across the two age groups. Our results on the decodability of word semantics, phonetics, and animacy give insights into the development of the neural representation of word meaning in infants.
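
To make the decoding approach concrete, here is a minimal sketch (not the authors' exact pipeline) that trains a ridge decoder from EEG features to word vectors and scores it with a 2-vs-2 test; all data, shapes, and hyperparameters below are placeholder assumptions.

```python
# Minimal sketch of word-semantics decoding from EEG. Shapes, data, and the
# 2-vs-2 evaluation are placeholder assumptions, not the authors' pipeline.
import numpy as np
from sklearn.linear_model import Ridge

def two_vs_two_accuracy(Y_true, Y_pred):
    """Fraction of word pairs whose predictions match the correct assignment
    (lower summed distance) better than the swapped assignment."""
    n, correct, total = len(Y_true), 0, 0
    for i in range(n):
        for j in range(i + 1, n):
            right = (np.linalg.norm(Y_pred[i] - Y_true[i])
                     + np.linalg.norm(Y_pred[j] - Y_true[j]))
            wrong = (np.linalg.norm(Y_pred[i] - Y_true[j])
                     + np.linalg.norm(Y_pred[j] - Y_true[i]))
            correct += right < wrong
            total += 1
    return correct / total

rng = np.random.default_rng(0)
X = rng.standard_normal((60, 128))  # placeholder EEG features (trials x channels*times)
Y = rng.standard_normal((60, 50))   # placeholder word vectors (trials x dimensions)

# Leave-one-trial-out prediction with a ridge decoder
preds = np.zeros_like(Y)
for i in range(len(X)):
    mask = np.arange(len(X)) != i
    preds[i] = Ridge(alpha=1.0).fit(X[mask], Y[mask]).predict(X[i:i + 1])[0]

print(f"2-vs-2 accuracy: {two_vs_two_accuracy(Y, preds):.3f} (chance is about 0.5)")
```

In practice, "significantly above chance" would be established against a permutation-based null distribution rather than against the nominal 0.5.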

2021 · Author(s): Ze Fu, Xiaosha Wang, Xiaoying Wang, Huichao Yang, Jiahuan Wang, ...

A critical way for humans to acquire, represent, and communicate information is through language, yet the underlying computational mechanisms through which language contributes to our word meaning representations are poorly understood. We compared three major types of word-relation measures computed from a large language corpus (simple co-occurrence, graph-space relations, and neural-network vector-embedding relations) in terms of their association with words’ brain activity patterns, measured in two functional magnetic resonance imaging (fMRI) experiments. Word relations derived from a graph-space representation, and not from neural-network vector embeddings, had unique explanatory power for the neural activity patterns in brain regions shown to be particularly sensitive to language processes, including the anterior temporal lobe (capturing graph-common-neighbors), inferior frontal gyrus, and posterior middle/inferior temporal gyrus (capturing graph-shortest-path). These results were robust across different window sizes and graph sizes and were relatively specific to language inputs. These findings highlight the role of cumulative language inputs in organizing the neural representations of word meaning and provide a mathematical model of how different brain regions capture different types of language-derived information.
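
For intuition, the toy sketch below computes the two graph-space relations named above (common neighbors and shortest path) on a tiny co-occurrence graph, using the third-party networkx library; the corpus, windowing, and weighting are illustrative assumptions, not the authors' setup.

```python
# Toy illustration of graph-space word relations on a co-occurrence graph.
from collections import Counter
from itertools import combinations
import networkx as nx

corpus = [
    "the cat chased the mouse",
    "the dog chased the cat",
    "the mouse ate the cheese",
]

# Count co-occurrences within each sentence (a stand-in for a sliding window)
pair_counts = Counter()
for sentence in corpus:
    words = set(sentence.split())
    for w1, w2 in combinations(sorted(words), 2):
        pair_counts[(w1, w2)] += 1

# Build an undirected co-occurrence graph
G = nx.Graph()
for (w1, w2), count in pair_counts.items():
    G.add_edge(w1, w2, weight=count)

# Graph-common-neighbors: overlap between the two words' neighborhoods
common = len(set(G.neighbors("cat")) & set(G.neighbors("mouse")))
# Graph-shortest-path: minimum number of edges connecting the pair
path = nx.shortest_path_length(G, "dog", "cheese")
print(f"common neighbors(cat, mouse) = {common}, shortest path(dog, cheese) = {path}")
```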


2020 · Author(s): Tomoyasu Horikawa, Yukiyasu Kamitani

Summary: Visual image reconstruction from brain activity produces images whose features are consistent with the neural representations in the visual cortex given arbitrary visual instances [1–3], presumably reflecting the person’s visual experience. Previous reconstruction studies have been concerned either with how faithfully stimulus images are reconstructed or with whether mentally imagined contents can be reconstructed in the absence of external stimuli. However, many lines of vision research have demonstrated that even stimulus perception is shaped by both stimulus-induced and top-down processes. In particular, attention (or the lack of it) is known to profoundly affect visual experience [4–8] and brain activity [9–21]. Here, to investigate how top-down attention impacts the neural representation of visual images and their reconstructions, we use a state-of-the-art method (deep image reconstruction [3]) to reconstruct visual images from fMRI activity measured while subjects attend to one of two images superimposed with equally weighted contrasts. Deep image reconstruction exploits the hierarchical correspondence between the brain and a deep neural network (DNN) to translate (decode) brain activity into DNN features of multiple layers, and then creates images that are consistent with the decoded DNN features [3, 22, 23]. Using the deep image reconstruction model trained on fMRI responses to single natural images, we decode brain activity during the attention trials. Behavioral evaluations show that the reconstructions resemble the attended rather than the unattended images. The reconstructions can be modeled by superimposed images with contrasts biased towards the attended one, comparable to the appearance of the stimuli under attention measured in a separate session. Attentional modulations are found in a broad range of hierarchical visual representations and mirror the brain–DNN correspondence. Our results demonstrate that top-down attention counters stimulus-induced responses and modulates neural representations to render reconstructions in accordance with subjective appearance. The reconstructions appear to reflect the content of visual experience and volitional control, opening a new possibility for brain-based communication and creation.
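
As a schematic of the decoding step only (the image-optimization stage is omitted), the sketch below fits per-layer linear models from fMRI patterns to DNN features; the layer names, shapes, and data are invented for illustration.

```python
# Highly simplified sketch of the decoding step in deep image reconstruction:
# linear models map fMRI patterns to DNN features, layer by layer. Real
# pipelines then optimize an image so its DNN features match the decoded ones.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
n_train, n_voxels = 200, 500
fmri_train = rng.standard_normal((n_train, n_voxels))
fmri_attend = rng.standard_normal((1, n_voxels))  # one attention trial

layer_dims = {"conv1": 64, "conv3": 128, "fc7": 4096}  # hypothetical layers
decoded = {}
for layer, dim in layer_dims.items():
    dnn_train = rng.standard_normal((n_train, dim))  # placeholder DNN features
    decoder = Ridge(alpha=100.0).fit(fmri_train, dnn_train)
    decoded[layer] = decoder.predict(fmri_attend)

# `decoded` would then drive image optimization so the generated image's
# DNN features match the decoded features across layers.
print({layer: feats.shape for layer, feats in decoded.items()})
```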


2022 · Vol 5 (1) · Author(s): Tomoyasu Horikawa, Yukiyasu Kamitani

Abstract: Stimulus images can be reconstructed from visual cortical activity. However, our perception of stimuli is shaped by both stimulus-induced and top-down processes, and it is unclear whether and how reconstructions reflect top-down aspects of perception. Here, we investigate the effect of attention on reconstructions using fMRI activity measured while subjects attend to one of two superimposed images. A state-of-the-art method is used for image reconstruction, in which brain activity is translated (decoded) into deep neural network (DNN) features of hierarchical layers and then into an image. Reconstructions resemble the attended rather than unattended images. They can be modeled by superimposed images with biased contrasts, comparable to the appearance during attention. Attentional modulations are found in a broad range of hierarchical visual representations and mirror the brain–DNN correspondence. Our results demonstrate that top-down attention counters stimulus-induced responses, modulating neural representations to render reconstructions in accordance with subjective appearance.


2021 · Author(s): Mo Shahdloo, Emin Çelik, Burcu A. Urgen, Jack L. Gallant, Tolga Çukur

Object and action perception in cluttered dynamic natural scenes relies on efficient allocation of limited brain resources to prioritize the attended targets over distractors. It has been suggested that during visual search for objects, the distributed semantic representation of hundreds of object categories is warped to expand the representation of targets. Yet, little is known about whether and where in the brain visual search for action categories modulates semantic representations. To address this fundamental question, we studied human brain activity recorded via functional magnetic resonance imaging while subjects viewed natural movies and searched for either communication or locomotion actions. We find that attention directed to action categories elicits tuning shifts that warp semantic representations broadly across neocortex, and that these shifts interact with the intrinsic selectivity of cortical voxels for target actions. These results suggest that attention facilitates task performance during social interactions by dynamically shifting semantic selectivity towards target actions, and that tuning shifts are a general feature of conceptual representations in the brain.
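
One way such tuning shifts could be quantified, offered purely as a hedged sketch rather than the authors' method, is to fit a voxelwise semantic encoding model under each search task and measure how tuning weights move along the attended-category direction; everything below is synthetic.

```python
# Schematic of quantifying attentional tuning shifts: fit a voxelwise
# semantic encoding model per search task, then compare tuning projections
# onto the attended-category direction. Feature space and data are invented.
import numpy as np
from sklearn.linear_model import RidgeCV

rng = np.random.default_rng(2)
n_time, n_feats, n_vox = 300, 40, 10
stim = rng.standard_normal((n_time, n_feats))  # semantic features of the movie
target = rng.standard_normal(n_feats)          # direction of the searched category
target /= np.linalg.norm(target)

def fit_tuning(bold):
    """Voxelwise tuning vectors from a ridge encoding model."""
    model = RidgeCV(alphas=np.logspace(0, 4, 9)).fit(stim, bold)
    return model.coef_  # (n_vox, n_feats)

bold_a = rng.standard_normal((n_time, n_vox))  # placeholder: search task A
bold_b = rng.standard_normal((n_time, n_vox))  # placeholder: search task B
tuning_a, tuning_b = fit_tuning(bold_a), fit_tuning(bold_b)

# Tuning shift: change in each voxel's projection onto the target direction
shift = (tuning_b - tuning_a) @ target
print("per-voxel shift toward target:", np.round(shift, 2))
```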


2013 · Vol 347-350 · pp. 2516-2520 · Author(s): Jian Hua Jiang, Xu Yu, Zhi Xing Huang

Over the last decade, functional magnetic resonance imaging (fMRI) has become a primary tool for predicting brain activity. Research focus has shifted from pictures to words, with relatively successful results. In this paper, several typical machine learning methods are introduced, most of which use fMRI data in combination with word features. The semantic features (properties or factors) that support the neural representation of words show a certain commonality across people. These methods are applied for prediction or classification.
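
A compact example of the kind of model these methods build on (in the spirit of Mitchell et al.'s feature-based prediction of word-evoked activation) is sketched below with synthetic data; the feature set and solver are illustrative assumptions.

```python
# Sketch of feature-based prediction of fMRI activity for words: each voxel's
# response to a word is modeled as a weighted sum of the word's semantic
# features. All data here are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(3)
n_words, n_feats, n_vox = 50, 25, 100
features = rng.standard_normal((n_words, n_feats))  # word semantic features
images = rng.standard_normal((n_words, n_vox))      # fMRI pattern per word

# Learn per-voxel weights with ordinary least squares
weights, *_ = np.linalg.lstsq(features, images, rcond=None)

# Predict the fMRI image of a held-out word from its features alone
new_word = rng.standard_normal(n_feats)
predicted_image = new_word @ weights
print(predicted_image.shape)  # (n_vox,)
```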


2020 · Author(s): David Badre, Apoorva Bhandari, Haley Keglovits, Atsushi Kikumoto

Cognitive control allows us to think and behave flexibly based on our context and goals. At the heart of theories of cognitive control is a control representation that enables the same input to produce different outputs contingent on contextual factors. In this review, we focus on an important property of the control representation’s neural code: its representational dimensionality. Dimensionality of a neural representation balances a basic separability/generalizability trade-off in neural computation. We will discuss the implications of this trade-off for cognitive control. We will then briefly review current neuroscience findings regarding the dimensionality of control representations in the brain, particularly the prefrontal cortex. We conclude by highlighting open questions and crucial directions for future research.
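
As one concrete illustration of representational dimensionality (an example measure, not necessarily the one used in the studies reviewed), the participation ratio summarizes how many eigenvalues of the activity covariance meaningfully contribute to a neural code:

```python
# Participation ratio as an illustrative measure of the effective
# dimensionality of a neural representation.
import numpy as np

def participation_ratio(activity):
    """activity: (n_trials, n_neurons). Returns PR = (sum of covariance
    eigenvalues)^2 / (sum of squared eigenvalues)."""
    cov = np.cov(activity, rowvar=False)
    eig = np.linalg.eigvalsh(cov)
    return eig.sum() ** 2 / (eig ** 2).sum()

rng = np.random.default_rng(4)
low_d = rng.standard_normal((500, 2)) @ rng.standard_normal((2, 50))  # rank-2 code
high_d = rng.standard_normal((500, 50))                               # full-rank code
print(f"low-D PR ~ {participation_ratio(low_d):.1f}, "
      f"high-D PR ~ {participation_ratio(high_d):.1f}")
```

A low-dimensional code generalizes across conditions but limits separability; a high-dimensional code supports flexible readouts at the cost of generalization, which is the trade-off the review discusses.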


2003 · Vol 20 (4) · pp. 357-382 · Author(s): Laura Bischoff Renninger, Roni I. Granot, Emanuel Donchin

Our primary goal has been to elucidate a model of pitch memory by examining the brain activity of musicians with and without absolute pitch during listening tasks. Subjects, screened for both absolute and relative pitch abilities, were presented with two auditory tasks and one visual task that served as a control. In the first auditory task (pitch memory task), subjects were asked to differentiate between diatonic and nondiatonic tones within a tonal framework. In the second auditory task (contour task), subjects were presented with the same pitch sequences but instead asked to differentiate between tones moving upward or downward. For the visual control task, subjects were presented again with the same pitch sequences and asked to determine whether each pitch was diatonic or nondiatonic, only this time the note names appeared visually on the computer screen. Our findings strongly suggest that there are various levels of absolute pitch ability. Some absolute pitch subjects have, in addition to this skill, strong relative pitch abilities, and these differences are reflected quite consistently by the behavior of the P300 component of the event-related potential. Our research also strengthens the idea that the memory system for pitch and interval distances is distinct from the memory system for contour (W. J. Dowling, 1978). Our results are discussed within the context of the current absolute pitch literature.
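
For readers unfamiliar with the measure, the sketch below computes a P300 mean amplitude from epoched EEG in a conventional 300-500 ms window; the epoch array, sampling rate, and channel are hypothetical.

```python
# Illustrative P300 mean-amplitude computation from epoched EEG.
# Epochs, sampling rate, and channel selection are placeholder assumptions.
import numpy as np

rng = np.random.default_rng(5)
sfreq = 250                                    # Hz
epochs = rng.standard_normal((40, 1, sfreq))   # (trials, channels, samples), 0-1 s post-stimulus

# Average across trials, then take the mean voltage in a 300-500 ms window
erp = epochs.mean(axis=0)[0]                   # (samples,) at a single (e.g., Pz-like) channel
window = slice(int(0.3 * sfreq), int(0.5 * sfreq))
p300_amplitude = erp[window].mean()
print(f"P300 mean amplitude: {p300_amplitude:.3f} (arbitrary units)")
```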


2017 · Author(s): J. Brendan Ritchie, David Michael Kaplan, Colin Klein

Abstract: Since its introduction, multivariate pattern analysis (MVPA), or “neural decoding”, has transformed the field of cognitive neuroscience. Underlying its influence is a crucial inference, which we call the Decoder’s Dictum: if information can be decoded from patterns of neural activity, then this provides strong evidence about what information those patterns represent. Although the Dictum is a widely held and well-motivated principle in decoding research, it has received scant philosophical attention. We critically evaluate the Dictum, arguing that it is false: decodability is a poor guide for revealing the content of neural representations. However, we also suggest how the Dictum can be improved on, in order to better justify inferences about neural representation using MVPA.
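
A toy simulation can make the worry concrete: below, neural activity encodes only a confound that happens to track the stimulus category, yet the category is decoded well above chance, showing that decodability alone does not fix what the activity represents. The simulation is entirely synthetic and illustrative.

```python
# Toy illustration of the Decoder's Dictum problem: activity driven only by
# a confound ("arousal") correlated with the category is still decodable
# for that category. Purely synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(6)
n = 400
category = rng.integers(0, 2, n)                   # the variable we decode
arousal = category * 1.0 + rng.normal(0, 0.3, n)   # confound tracks category

# Neural activity is driven by arousal only, never by category directly
activity = np.outer(arousal, rng.standard_normal(20)) + rng.normal(0, 0.5, (n, 20))

acc = cross_val_score(LogisticRegression(max_iter=1000), activity, category, cv=5).mean()
print(f"category decoding accuracy: {acc:.2f}  # high, despite no direct category code")
```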


2017 · Author(s): Cooper A. Smout, Jason B. Mattingley

Abstract: Recent evidence suggests that voluntary spatial attention can affect neural processing of visual stimuli that do not enter conscious awareness (i.e. invisible stimuli), supporting the notion that attention and awareness are dissociable processes (Watanabe et al., 2011; Wyart, Dehaene, & Tallon-Baudry, 2012). To date, however, no study has demonstrated that these effects reflect enhancement of the neural representation of invisible stimuli per se, as opposed to other neural processes not specifically tied to the stimulus in question. In addition, it remains unclear whether spatial attention can modulate neural representations of invisible stimuli in direct competition with highly salient and visible stimuli. Here we developed a novel electroencephalography (EEG) frequency-tagging paradigm to obtain a continuous readout of human brain activity associated with visible and invisible signals embedded in dynamic noise. Participants (N = 23) detected occasional contrast changes in one of two flickering image streams on either side of fixation. Each image stream contained a visible or invisible signal embedded in every second noise image, the visibility of which was titrated and checked using a two-interval forced-choice detection task. Steady-state visual-evoked potentials (SSVEPs) were computed from EEG data at the signal and noise frequencies of interest. Cluster-based permutation analyses revealed significant neural responses to both visible and invisible signals across posterior scalp electrodes. Control analyses revealed that these responses did not reflect a subharmonic response to noise stimuli. In line with previous findings, spatial attention increased the neural representation of visible signals. Crucially, spatial attention also increased the neural representation of invisible signals. As such, the present results replicate and extend previous studies by demonstrating that attention can modulate the neural representation of invisible signals that are in direct competition with highly salient masking stimuli.
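
The core of a frequency-tagging analysis can be sketched in a few lines: estimate the spectral amplitude at the tagged frequency and compare it with the surrounding noise floor. The tag frequency and recording parameters below are invented for illustration.

```python
# Bare-bones SSVEP frequency-tagging analysis on a synthetic signal:
# a tagged oscillation buried in noise, recovered from the amplitude spectrum.
import numpy as np

rng = np.random.default_rng(7)
sfreq, duration, tag_freq = 256, 10, 7.5        # Hz, s, Hz
t = np.arange(0, duration, 1 / sfreq)

# Synthetic posterior-electrode signal: tagged oscillation plus noise
eeg = 0.5 * np.sin(2 * np.pi * tag_freq * t) + rng.normal(0, 1, t.size)

spectrum = np.abs(np.fft.rfft(eeg)) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / sfreq)
amp_at_tag = spectrum[np.argmin(np.abs(freqs - tag_freq))]
noise_floor = np.median(spectrum)
print(f"SSVEP amplitude at {tag_freq} Hz: {amp_at_tag:.3f} (noise floor ~ {noise_floor:.3f})")
```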


2021 · Author(s): Sheena Waters, Elise Kanber, Nadine Lavan, Michel Belyk, Daniel Carey, ...

Humans have a remarkable capacity to finely control the muscles of the larynx, via distinct patterns of cortical topography and innervation that may underpin our sophisticated vocal capabilities compared with non-human primates. Here, we investigated the behavioural and neural correlates of laryngeal control, and their relationship to vocal expertise, using an imitation task that required adjustments of larynx musculature during speech. Highly trained human singers and non-singer control participants modulated voice pitch and vocal tract length (VTL) to mimic auditory speech targets, while undergoing real-time anatomical scans of the vocal tract and functional scans of brain activity. Multivariate analyses of speech acoustics, larynx movements and brain activation data were used to quantify vocal modulation behaviour, and to search for neural representations of the two modulated vocal parameters during the preparation and execution of speech. We found that singers showed more accurate task-relevant modulations of speech pitch and VTL (i.e. larynx height, as measured with vocal tract MRI) during speech imitation; this was accompanied by stronger representation of VTL within a region of right dorsal somatosensory cortex. Our findings suggest a common neural basis for enhanced vocal control in speech and song.
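
A schematic of how one might search for a neural representation of a modulated vocal parameter, offered as an assumption-laden sketch rather than the authors' pipeline: cross-validated decoding of VTL from region-of-interest voxel patterns.

```python
# Sketch of cross-validated decoding of vocal tract length (VTL) from ROI
# voxel patterns. Data, ROI, and model choices are placeholder assumptions.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import KFold

rng = np.random.default_rng(8)
n_trials, n_vox = 120, 80
patterns = rng.standard_normal((n_trials, n_vox))  # somatosensory ROI patterns
vtl = rng.standard_normal(n_trials)                # per-trial VTL modulation

scores = []
for train, test in KFold(n_splits=5, shuffle=True, random_state=0).split(patterns):
    model = RidgeCV(alphas=np.logspace(-1, 3, 9)).fit(patterns[train], vtl[train])
    pred = model.predict(patterns[test])
    scores.append(np.corrcoef(pred, vtl[test])[0, 1])

# With random data this hovers near zero; real analyses would test the
# observed score against a permutation null.
print(f"mean decoding correlation: {np.mean(scores):.2f}")
```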

