Neural alignment predicts learning outcomes in students taking an introduction to computer science course

2021 ◽  
Vol 12 (1) ◽  
Author(s):  
Meir Meshulam ◽  
Liat Hasenfratz ◽  
Hanna Hillman ◽  
Yun-Fei Liu ◽  
Mai Nguyen ◽  
...  

Abstract Despite major advances in measuring human brain activity during and after educational experiences, it is unclear how learners internalize new content, especially in real-life and online settings. In this work, we introduce a neural approach to predicting and assessing learning outcomes in a real-life setting. Our approach hinges on the idea that successful learning involves forming the right set of neural representations, which are captured in canonical activity patterns shared across individuals. Specifically, we hypothesized that learning is mirrored in neural alignment: the degree to which an individual learner’s neural representations match those of experts, as well as those of other learners. We tested this hypothesis in a longitudinal functional MRI study that regularly scanned college students enrolled in an introduction to computer science course. We additionally scanned graduate student experts in computer science. We show that alignment among students successfully predicts overall performance in a final exam. Furthermore, within individual students, we find better learning outcomes for concepts that evoke better alignment with experts and with other students, revealing neural patterns associated with specific learned concepts in individuals.
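At its core, the neural alignment measure described above is a correlation between an individual learner's multivoxel pattern and a reference pattern averaged over experts or classmates. The Python sketch below illustrates that core with synthetic data; the array sizes, the group-mean reference, and all variable names are illustrative assumptions rather than the authors' pipeline.

```python
import numpy as np

def neural_alignment(student_pattern, reference_patterns):
    # Correlate one learner's voxel pattern with the average pattern of a
    # reference group (e.g., experts or fellow students).
    # student_pattern: (n_voxels,); reference_patterns: (n_subjects, n_voxels)
    reference_mean = reference_patterns.mean(axis=0)
    return np.corrcoef(student_pattern, reference_mean)[0, 1]

# Toy data standing in for concept-evoked fMRI patterns.
rng = np.random.default_rng(0)
n_voxels = 500
experts = rng.normal(size=(10, n_voxels))                    # 10 hypothetical experts
student = experts.mean(axis=0) + rng.normal(size=n_voxels)   # a well-aligned student
print(f"student-to-expert alignment: {neural_alignment(student, experts):.2f}")
```

In the study itself, alignment is computed per concept and related to exam performance; the point here is only the pattern-correlation core.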

2020 ◽  
Author(s):  
Meir Meshulam ◽  
Liat Hasenfratz ◽  
Hanna Hillman ◽  
Yun-Fei Liu ◽  
Mai Nguyen ◽  
...  

Abstract How do students understand and remember new information? Despite major advances in measuring human brain activity during and after educational experiences, it is unclear how learners internalize new content, especially in real-life and online settings. In this work, we introduce a neural measure for predicting and assessing learning outcomes. Our approach hinges on the idea that successful learning involves forming the “right” set of neural representations, which are captured in “canonical” activity patterns shared across individuals. Specifically, we hypothesized that understanding is mirrored in “neural alignment”: the degree to which an individual learner’s neural representations match those of experts, as well as those of other learners. We tested this hypothesis in a longitudinal functional MRI study that regularly scanned college students enrolled in an introduction to computer science course. We additionally scanned graduate student “experts” in computer science. We found that alignment among students successfully predicted overall performance in a final exam. Furthermore, within individual students, concepts that evoked better alignment with the experts and with their fellow students were better understood, revealing neural patterns associated with understanding specific concepts. These results provide support for a novel neural measure of concept understanding that can be used to assess and predict learning outcomes in real-life contexts.


2019 ◽  
Author(s):  
S. A. Herff ◽  
C. Herff ◽  
A. J. Milne ◽  
G. D. Johnson ◽  
J. J. Shih ◽  
...  

Abstract Rhythmic auditory stimuli are known to elicit matching activity patterns in neural populations. Furthermore, recent research has established the particular importance of high-gamma brain activity in auditory processing by showing its involvement in auditory phrase segmentation and envelope-tracking. Here, we use electrocorticographic (ECoG) recordings from eight human listeners to test whether periodicities in high-gamma activity track the periodicities in the envelope of musical rhythms during rhythm perception and imagination. Rhythm imagination was elicited by instructing participants to imagine that the rhythm continued during pauses lasting several repetitions. To identify electrodes whose periodicities in high-gamma activity track the periodicities in the musical rhythms, we computed the correlation between the autocorrelations (ACC) of the musical rhythms and of the neural signals. A condition in which participants listened to white noise was used to establish a baseline. High-gamma autocorrelations in auditory areas in the superior temporal gyrus and in frontal areas of both hemispheres significantly matched the autocorrelation of the musical rhythms. Overall, numerous significant electrodes were observed on the right hemisphere. Of particular interest is a large cluster of electrodes in the right prefrontal cortex that is active during both rhythm perception and imagination. This indicates conscious processing of the rhythms’ structure as opposed to mere auditory phenomena. The ACC approach clearly highlights that high-gamma activity measured from cortical electrodes tracks both attended and imagined rhythms.
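The ACC approach can be sketched in a few lines: compute the autocorrelation of the rhythm envelope and of the high-gamma power trace, then correlate the two autocorrelation functions. The snippet below uses synthetic signals; the sampling rate, lag range, and a sign-thresholded sinusoid standing in for the rhythm envelope are illustrative assumptions, not the authors' stimuli or preprocessing.

```python
import numpy as np

def autocorrelation(x, max_lag):
    # Normalized autocorrelation of a 1-D signal for lags 1..max_lag.
    x = x - x.mean()
    denom = np.dot(x, x)
    return np.array([np.dot(x[:-lag], x[lag:]) / denom for lag in range(1, max_lag + 1)])

def acc_score(rhythm_envelope, high_gamma_power, max_lag):
    # Correlate the autocorrelation of the stimulus envelope with the
    # autocorrelation of the high-gamma power trace.
    return np.corrcoef(autocorrelation(rhythm_envelope, max_lag),
                       autocorrelation(high_gamma_power, max_lag))[0, 1]

# Toy example: a 2 Hz rhythmic envelope and a noisy "neural" trace with the same period.
fs = 100                                              # assumed sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
envelope = (np.sin(2 * np.pi * 2 * t) > 0).astype(float)
neural = envelope + np.random.default_rng(1).normal(scale=0.5, size=t.size)
print(f"ACC: {acc_score(envelope, neural, max_lag=2 * fs):.2f}")
```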


2016 ◽  
Author(s):  
Janice Chen ◽  
Yuan Chang Leong ◽  
Kenneth A Norman ◽  
Uri Hasson

Our daily lives revolve around sharing experiences and memories with others. When different people recount the same events, how similar are their underlying neural representations? In this study, participants viewed a fifty-minute audio-visual movie, then verbally described the events while undergoing functional MRI. These descriptions were completely unguided and highly detailed, lasting for up to forty minutes. As each person spoke, event-specific spatial patterns were reinstated (movie-vs.-recall correlation) in default network, medial temporal, and high-level visual areas; moreover, individual event patterns were highly discriminable and similar between people during recollection (recall-vs.-recall similarity), suggesting the existence of spatially organized memory representations. In posterior medial cortex, medial prefrontal cortex, and angular gyrus, activity patterns during recall were more similar between people than to patterns elicited by the movie, indicating systematic reshaping of percept into memory across individuals. These results reveal striking similarity in how neural activity underlying real-life memories is organized and transformed in the brains of different people as they speak spontaneously about past events.
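The movie-vs.-recall and recall-vs.-recall comparisons reduce to an event-by-event correlation matrix between spatial patterns. The sketch below shows that structure with synthetic patterns; event counts, voxel counts, and the noise model are assumptions for illustration only.

```python
import numpy as np

def reinstatement_matrix(movie_patterns, recall_patterns):
    # Event-by-event correlation between movie-viewing and recall patterns.
    # Both inputs: (n_events, n_voxels). Diagonal = matching-event reinstatement.
    n_events = movie_patterns.shape[0]
    corr = np.empty((n_events, n_events))
    for i in range(n_events):
        for j in range(n_events):
            corr[i, j] = np.corrcoef(movie_patterns[i], recall_patterns[j])[0, 1]
    return corr

rng = np.random.default_rng(2)
movie = rng.normal(size=(5, 300))                       # 5 events x 300 voxels
recall = movie + rng.normal(scale=1.0, size=(5, 300))   # noisy reinstatement at recall
R = reinstatement_matrix(movie, recall)
print("matching events:   ", R.diagonal().mean().round(2))
print("mismatching events:", R[~np.eye(5, dtype=bool)].mean().round(2))
```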


2010 ◽  
Vol 103 (1) ◽  
pp. 360-370 ◽  
Author(s):  
Vincenzo Maffei ◽  
Emiliano Macaluso ◽  
Iole Indovina ◽  
Guy Orban ◽  
Francesco Lacquaniti

Neural substrates for processing constant speed visual motion have been extensively studied. Less is known about the brain activity patterns when the target speed changes continuously, for instance under the influence of gravity. Using functional MRI (fMRI), here we compared brain responses to accelerating/decelerating targets with the responses to constant speed targets. The target could move along the vertical under gravity (1 g), under reversed gravity (−1 g), or at constant speed (0 g). In the first experiment, subjects observed targets moving in smooth motion and responded to a GO signal delivered at a random time after target arrival. As expected, we found that the timing of the motor responses did not depend significantly on the specific motion law. Therefore brain activity in the contrast between different motion laws was not related to motor timing responses. Average BOLD signals were significantly greater for 1 g targets than either 0 g or −1 g targets in a distributed network including bilateral insulae, left lingual gyrus, and brain stem. Moreover, in these regions, the mean activity decreased monotonically from 1 g to 0 g and to −1 g. In the second experiment, subjects intercepted 1 g, 0 g, and −1 g targets either in smooth motion (RM) or in long-range apparent motion (LAM). We found that the sites in the right insula and left lingual gyrus, which were selectively engaged by 1 g targets in the first experiment, were also significantly more active during 1 g trials than during −1 g trials both in RM and LAM. The activity in 0 g trials was again intermediate between that in 1 g trials and that in −1 g trials. Therefore in these regions the global activity modulation with the law of vertical motion appears to hold for both RM and LAM. Instead, a region in the inferior parietal lobule showed a preference for visual gravitational motion only in LAM but not RM.
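One simple way to formalize the reported monotonic ordering of BOLD responses (1 g > 0 g > −1 g) is a linear contrast across gravity levels tested across subjects, as in the hedged sketch below; all values and the specific test are illustrative, not the authors' actual GLM analysis.

```python
import numpy as np
from scipy import stats

# Hypothetical per-subject mean BOLD amplitudes (arbitrary units) for the three
# motion laws; values are simulated only to show the shape of the analysis.
rng = np.random.default_rng(3)
n_subjects = 16
bold = {
    "1g":  rng.normal(1.0, 0.3, n_subjects),
    "0g":  rng.normal(0.7, 0.3, n_subjects),
    "-1g": rng.normal(0.4, 0.3, n_subjects),
}

# A linear contrast (+1, 0, -1) over (1g, 0g, -1g), tested against zero across
# subjects, is one simple test for a monotonic decrease from 1g to -1g.
contrast = bold["1g"] - bold["-1g"]
t, p = stats.ttest_1samp(contrast, popmean=0.0)
print({k: round(v.mean(), 2) for k, v in bold.items()}, f"t={t:.2f}, p={p:.3g}")
```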


2021 ◽  
Author(s):  
Ze Fu ◽  
Xiaosha Wang ◽  
Xiaoying Wang ◽  
Huichao Yang ◽  
Jiahuan Wang ◽  
...  

A critical way for humans to acquire, represent and communicate information is through language, yet the underlying computational mechanisms through which language contributes to our word-meaning representations are poorly understood. We compared three major types of word-computation mechanisms derived from a large language corpus (simple co-occurrence, graph-space relations and neural-network vector-embedding relations) in terms of their association with words’ brain activity patterns, measured in two functional magnetic resonance imaging (fMRI) experiments. Word relations derived from a graph-space representation, but not from neural-network vector embeddings, had unique explanatory power for the neural activity patterns in brain regions that have been shown to be particularly sensitive to language processes, including the anterior temporal lobe (capturing graph-common-neighbors), inferior frontal gyrus, and posterior middle/inferior temporal gyrus (capturing graph-shortest-path). These results were robust across different window sizes and graph sizes and were relatively specific to language inputs. These findings highlight the role of cumulative language input in organizing the neural representations of word meaning and provide a mathematical model of how different brain regions capture different types of language-derived information.
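As a rough illustration of how corpus-derived word relations can be compared with neural similarity structure, the sketch below builds a toy co-occurrence graph, derives a common-neighbors measure, and rank-correlates it with a hypothetical word-by-word neural similarity matrix; the corpus, window definition, and all data are stand-ins, not the study's corpus or fMRI measures.

```python
import numpy as np
from itertools import combinations
from scipy.stats import spearmanr

# Toy corpus standing in for the large text corpus used in the study.
corpus = ["dog chases cat", "cat chases mouse", "dog eats food",
          "cat eats food", "mouse eats cheese"]
words = sorted({w for sentence in corpus for w in sentence.split()})
index = {w: i for i, w in enumerate(words)}
n = len(words)

# Simple sentence-window co-occurrence counts.
cooc = np.zeros((n, n))
for sentence in corpus:
    for w1, w2 in combinations(sentence.split(), 2):
        cooc[index[w1], index[w2]] += 1
        cooc[index[w2], index[w1]] += 1

# Graph-space measure: number of common neighbors in the co-occurrence graph.
adjacency = (cooc > 0).astype(float)
common_neighbors = adjacency @ adjacency

# Hypothetical word-by-word neural similarity matrix (e.g., from correlating
# fMRI patterns evoked by each word); symmetrized random data here.
rng = np.random.default_rng(4)
neural_sim = rng.normal(size=(n, n))
neural_sim = (neural_sim + neural_sim.T) / 2

# Compare the two similarity structures over their upper triangles with a rank
# correlation, as in representational similarity analysis.
iu = np.triu_indices(n, k=1)
rho, p = spearmanr(common_neighbors[iu], neural_sim[iu])
print(f"graph-common-neighbors vs. neural similarity: rho={rho:.2f}, p={p:.2f}")
```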


2019 ◽  
Author(s):  
Lau M. Andersen ◽  
Christoph Pfeiffer ◽  
Silvia Ruffieux ◽  
Bushra Riaz ◽  
Dag Winkler ◽  
...  

Abstract Magnetoencephalography (MEG) has a unique capacity to resolve the spatio-temporal development of brain activity from non-invasive measurements. Conventional MEG, however, relies on sensors that sample from a distance (20-40 mm) to the head due to thermal insulation requirements (the MEG sensors function at 4 K inside a helmet). A gain in signal strength and spatial resolution may be achieved if sensors are moved closer to the head. Here, we report a study comparing measurements from a seven-channel on-scalp SQUID MEG system to those from a conventional (in-helmet) SQUID MEG system.
We compared spatio-temporal resolution between on-scalp and conventional MEG by comparing discrimination accuracy for neural activity patterns evoked by stimulating five different phalanges of the right hand. Because of the proximity and sensor-density differences between on-scalp and conventional MEG, we hypothesized that on-scalp MEG would allow a higher-resolved assessment of these activity patterns, and therefore also better classification performance in discriminating between neural activations from the different phalanges.
We observed that on-scalp MEG provided better classification performance during an early post-stimulus period (15-30 ms). This period corresponds to the electroencephalographic (EEG) response components N16 and P23, which was an unexpected observation as these components are usually not observed in conventional MEG. It indicates that on-scalp MEG opens up a richer registration of the cortical signal, with sensitivity to what are potentially sources in the thalamo-cortical radiation and to quasi-radial sources.
We had originally expected that on-scalp MEG would provide better classification accuracy than conventional MEG around the P60m component. This component indeed allowed the best classification performance for both MEG systems (60-75%, chance 50%). However, we did not find that on-scalp MEG classified better than conventional MEG at this latency. We believe this may be due to the limited sensor coverage in the recording, in combination with our strategy for positioning the on-scalp MEG sensors. We discuss how sensor density and coverage, as well as between-phalange source-field dissimilarities, may influence our hypothesis testing, which we believe will be useful for future benchmarking measurements.
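The phalange-discrimination analysis is, at heart, cross-validated pairwise decoding of sensor-space patterns within a time window. The sketch below shows that scheme with simulated data and a generic linear classifier; the classifier choice, trial counts, and effect size are assumptions rather than the study's actual decoding pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Hypothetical single-trial sensor patterns averaged over a post-stimulus window
# (e.g., 15-30 ms) for two stimulated phalanges; 7 channels as in the on-scalp array.
rng = np.random.default_rng(5)
n_trials, n_sensors = 80, 7
phalange_a = rng.normal(loc=0.0, size=(n_trials, n_sensors))
phalange_b = rng.normal(loc=0.3, size=(n_trials, n_sensors))
X = np.vstack([phalange_a, phalange_b])
y = np.array([0] * n_trials + [1] * n_trials)

# Cross-validated pairwise decoding; chance level is 50%.
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
print(f"mean decoding accuracy: {scores.mean():.2f}")
```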


2021 ◽  
Author(s):  
Gabrielle Toupin ◽  
Mohamed S. Benlamine ◽  
Claude Frasson

Amusement can help modulate psychological disorders and cognitive functions. Unfortunately, algorithms classifying emotions still combine multiple positive emotions into a single emotion, namely joy, making it hard to use amusement in a real-life setting. Here we train a Long Short-Term Memory (LSTM) network on electroencephalography (EEG) data to predict amusement on a categorical scale. Participants (n=10) watched and rated 120 videos with various funniness levels while their brain activity was recorded with an Emotiv headset. Participants’ ratings were divided into four bins of amusement (low, medium, high & very high) based on the percentile of each participant’s rankings. Nested cross-validation was used to validate the models. We first left out one video from each participant for the final model’s validation, and a leave-one-group-out technique was used to test the model on an unseen participant during the training phase. The nested cross-validation was tested on sixteen different videos. We created an LSTM model with five hidden layers, a batch size of 256, an input layer of 14 x 128 (number of electrodes x 1 sec of recording), and four output nodes representing the different levels of amusement. The best model obtained during the training phase was tested on the unseen video. While accuracy varied slightly across the validation videos (mean=57.3%, std=13.7%), our best model obtained an accuracy of 82.4%. This high accuracy supports the use of brain activity to predict amusement. Moreover, the validation process we designed indicates that models using this technique are transferable across participants and videos.
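A hedged sketch of the kind of model described above follows: a stacked LSTM over 1-s, 14-channel EEG windows with four output classes, trained with a batch size of 256. Layer widths, optimizer, and the 128 Hz sampling assumption are illustrative choices, not the authors' exact architecture or training setup.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

# 1-s EEG windows: 14 Emotiv channels at an assumed 128 Hz, mapped to four
# amusement levels. Layer widths and training settings are illustrative.
n_channels, n_samples, n_classes = 14, 128, 4

model = tf.keras.Sequential([
    layers.Input(shape=(n_samples, n_channels)),      # time steps x electrodes
    layers.LSTM(64, return_sequences=True),
    layers.LSTM(64, return_sequences=True),
    layers.LSTM(64, return_sequences=True),
    layers.LSTM(64, return_sequences=True),
    layers.LSTM(64),                                   # five recurrent hidden layers
    layers.Dense(n_classes, activation="softmax"),     # low / medium / high / very high
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Dummy data standing in for epoched EEG and binned amusement ratings.
rng = np.random.default_rng(6)
X = rng.normal(size=(512, n_samples, n_channels)).astype("float32")
y = rng.integers(0, n_classes, size=512)
model.fit(X, y, batch_size=256, epochs=1, verbose=0)
```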


2019 ◽  
Author(s):  
Alina Leminen ◽  
Maxime Verwoert ◽  
Mona Moisala ◽  
Viljami Salmela ◽  
Patrik Wikman ◽  
...  

Abstract In real-life noisy situations, we can selectively attend to conversations in the presence of irrelevant voices, but the neurocognitive mechanisms underlying such natural listening situations remain largely unexplored. Previous research has shown distributed activity in the mid superior temporal gyrus (STG) and sulcus (STS) while listening to speech and human voices, in the posterior STS and fusiform gyrus when combining auditory, visual and linguistic information, as well as in left-hemisphere temporal and frontal cortical areas during comprehension. In the present functional magnetic resonance imaging (fMRI) study, we investigated how selective attention modulates neural responses to naturalistic audiovisual dialogues. Our healthy adult participants (N = 15) selectively attended to video-taped dialogues between a man and a woman in the presence of irrelevant continuous speech in the background. We modulated the auditory quality of the dialogues with noise vocoding and their visual quality by masking speech-related facial movements. Both increased auditory quality and increased visual quality were associated with bilateral activity enhancements in the STG/STS. In addition, decreased audiovisual stimulus quality elicited enhanced fronto-parietal activity, presumably reflecting increased attentional demands. Finally, attention to the dialogues, relative to a control task in which a fixation cross was attended and the dialogue ignored, yielded enhanced activity in the left planum polare, angular gyrus, the right temporal pole, as well as in the orbitofrontal/ventromedial prefrontal cortex and posterior cingulate gyrus. Our findings suggest that naturalistic conversations effectively engage participants and reveal brain networks related to social perception, in addition to speech and semantic processing networks.


2020 ◽  
Vol 15 (5) ◽  
pp. 523-536 ◽  
Author(s):  
Wei Liu ◽  
Nancy Peeters ◽  
Guillén Fernández ◽  
Nils Kohn

Abstract Inhibitory control is crucial for regulating emotions and may also enable memory control. However, evidence for their shared neurobiological correlates is limited. Here, we report meta-analyses of neuroimaging studies on emotion regulation and on memory control, and link their neural commonalities to transcriptional commonalities using the Allen Human Brain Atlas (AHBA). Based on 95 functional magnetic resonance imaging studies, we reveal a role of the right inferior parietal lobule, embedded in a frontal–parietal–insular network, during both emotion regulation and memory control; this network is similarly recruited during response inhibition. These co-activation patterns also overlap with the networks associated with ‘inhibition’, ‘cognitive control’ and ‘working memory’ in the Neurosynth database. Using the AHBA, we demonstrate that emotion regulation- and memory control-related brain activity patterns are associated with the transcriptional profiles of a specific set of ‘inhibition-related’ genes. Gene ontology enrichment analysis of these ‘inhibition-related’ genes reveals associations with neuronal transmission and with risk for major psychiatric disorders as well as seizures and alcohol dependence. In summary, this study identified a neural network and a set of genes associated with inhibitory control across emotion regulation and memory control. These findings facilitate our understanding of the neurobiological correlates of inhibitory control and may contribute to the development of brain stimulation and pharmacological interventions.
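The activation-to-transcription association can be pictured as a spatial correlation between a region-wise activation map and the region-wise expression profile of a gene set, as in the sketch below; the parcellation, gene-set summary, and all numbers are hypothetical and do not reproduce the AHBA workflow used in the paper.

```python
import numpy as np
from scipy.stats import pearsonr

# Region-wise activation values and region-wise gene expression live on the same
# set of brain parcels, so their association can be summarized as a spatial
# correlation. Parcel count, gene count, and all values here are hypothetical.
rng = np.random.default_rng(7)
n_parcels, n_genes = 100, 20
activation_map = rng.normal(size=n_parcels)          # meta-analytic statistic per parcel
expression = rng.normal(size=(n_parcels, n_genes))   # AHBA-style expression, parcels x genes

# Summarize the candidate "inhibition-related" gene set by its mean expression per
# parcel, then correlate that spatial profile with the activation map.
gene_set_profile = expression.mean(axis=1)
r, p = pearsonr(activation_map, gene_set_profile)
print(f"spatial correlation: r={r:.2f}, p={p:.2f}")
```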

