Learning hierarchical sequence representations across human cortex and hippocampus

2019 ◽  
Author(s):  
Simon Henin ◽  
Nicholas B. Turk-Browne ◽  
Daniel Friedman ◽  
Anli Liu ◽  
Patricia Dugan ◽  
...  

Sensory input arrives in continuous sequences that humans experience as units, e.g., words and events. The brain’s ability to discover extrinsic regularities is called statistical learning. Structure can be represented at multiple levels, including transitional probabilities, ordinal position, and identity of units. To investigate sequence encoding in cortex and hippocampus, we recorded from intracranial electrodes in human subjects as they were exposed to auditory and visual sequences containing temporal regularities. We find neural tracking of regularities within minutes, with characteristic profiles across brain areas. Early processing tracked lower-level features (e.g., syllables) and learned units (e.g., words), while later processing tracked only learned units. Learning rapidly shaped neural representations, with a gradient of complexity from early brain areas encoding transitional probability, to associative regions and hippocampus encoding ordinal position and identity of units. These findings indicate the existence of multiple, parallel computational systems for sequence learning across hierarchically organized cortico-hippocampal circuits.

2021 ◽  
Vol 7 (8) ◽  
pp. eabc4530
Author(s):  
Simon Henin ◽  
Nicholas B. Turk-Browne ◽  
Daniel Friedman ◽  
Anli Liu ◽  
Patricia Dugan ◽  
...  

Sensory input arrives in continuous sequences that humans experience as segmented units, e.g., words and events. The brain’s ability to discover regularities is called statistical learning. Structure can be represented at multiple levels, including transitional probabilities, ordinal position, and identity of units. To investigate sequence encoding in cortex and hippocampus, we recorded from intracranial electrodes in human subjects as they were exposed to auditory and visual sequences containing temporal regularities. We find neural tracking of regularities within minutes, with characteristic profiles across brain areas. Early processing tracked lower-level features (e.g., syllables) and learned units (e.g., words), while later processing tracked only learned units. Learning rapidly shaped neural representations, with a gradient of complexity from early brain areas encoding transitional probability, to associative regions and hippocampus encoding ordinal position and identity of units. These findings indicate the existence of multiple, parallel computational systems for sequence learning across hierarchically organized cortico-hippocampal circuits.
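
As an aside on the statistical-learning paradigm described in this abstract, below is a minimal sketch of how transitional probabilities over a syllable stream can be estimated and used to posit word boundaries. The toy syllable "words", stream length, and 0.5 threshold are illustrative assumptions, not the stimuli or analysis used in the study.

```python
# Illustrative sketch only: estimate transitional probabilities over a syllable
# stream and mark candidate word boundaries where the probability drops.
# The "words", stream length, and threshold are made-up assumptions.
import random
from collections import Counter

random.seed(0)
words = ["tu-pi-ro", "go-la-bu", "bi-da-ku"]          # hypothetical tri-syllabic words
stream = [s for _ in range(100) for s in random.choice(words).split("-")]

# P(next syllable | current syllable) from bigram counts
pair_counts = Counter(zip(stream, stream[1:]))
first_counts = Counter(stream[:-1])
tp = {pair: n / first_counts[pair[0]] for pair, n in pair_counts.items()}

# Within-word transitions have TP ~ 1.0, between-word transitions ~ 0.33,
# so a dip in TP marks a plausible word boundary.
boundaries = [i + 1 for i, pair in enumerate(zip(stream, stream[1:])) if tp[pair] < 0.5]
print(stream[:9])
print(boundaries[:5])   # indices where a new "word" likely begins
```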


1984 ◽  
Vol 59 (3) ◽  
pp. 731-738 ◽  
Author(s):  
E. A. Sersen ◽  
J. Majkowski ◽  
J. Clausen ◽  
G. M. Heaney

Brainstem auditory evoked responses (BAERs) from 16 subjects during 3 sessions varied in the latency or amplitude of some components depending upon the level of arousal as indicated by EEG patterns. There was a general tendency for activation to produce the fastest responses with the largest amplitudes and for drowsiness to produce the slowest responses with the smallest amplitudes. The latency of P2 was significantly prolonged during drowsiness relative to relaxation or activation. For right-ear stimulation, P5 latency was longest during drowsiness and shortest during activation, while for left-ear stimulation the shortest latency occurred during relaxation. The amplitudes of Wave II and Wave VII were significantly smaller during drowsiness than during activation. Although the differences were below the level of clinical significance, the data indicate a modification in the characteristics of brainstem transmission as a function of concurrent activity in other brain areas.


2020 ◽  
Vol 38 (4) ◽  
pp. 635-671
Author(s):  
Carlos León ◽  
Pablo Gervás ◽  
Pablo Delatorre ◽  
Alan Tapscott

Evaluating the extent to which computer-produced stories are structured like human-invented narratives can be an important component of assessing the quality of a story plot. In this paper, we report on an empirical experiment in which human subjects invented short plots in a constrained scenario. The stories were annotated according to features commonly found in existing automatic story generators. The annotation was designed to measure the proportion and relations of story components that should be used in automatic computational systems to match human behaviour. Results suggest that there are relatively common patterns that can be used as input data for identifying similarity to human-invented stories in automatic storytelling systems. The patterns found are in line with narratological models, and the results provide numerical quantification and layout of story components. The proposed method of story analysis is tested over two additional sources, the ROCStories corpus and stories generated by automated storytellers, to illustrate the valuable insights that may be derived from them.
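
To make the kind of corpus summary described above concrete, here is a small hedged sketch: hypothetical per-story annotations (the component labels are placeholders, not the paper's annotation scheme) are tallied into proportions of story components against which an automatic storyteller's output could be compared.

```python
# Hedged sketch with made-up labels, not the paper's annotation scheme:
# tally annotated story components into proportions for comparison with
# the output of an automatic story generator.
from collections import Counter

annotated_stories = [                         # hypothetical annotations
    ["setting", "goal", "obstacle", "resolution"],
    ["setting", "goal", "resolution"],
    ["setting", "obstacle", "obstacle", "resolution"],
]

counts = Counter(label for story in annotated_stories for label in story)
total = sum(counts.values())
proportions = {label: round(n / total, 2) for label, n in counts.items()}
print(proportions)    # relative frequency of each story component in the corpus
```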


2020 ◽  
Vol 71 (1) ◽  
pp. 25-48 ◽  
Author(s):  
Rebecca M. Todd ◽  
Vladimir Miskovic ◽  
Junichi Chikazoe ◽  
Adam K. Anderson

Recent advances in our understanding of information states in the human brain have opened a new window into the brain's representation of emotion. While emotion was once thought to constitute a separate domain from cognition, current evidence suggests that all events are filtered through the lens of whether they are good or bad for us. Focusing on new methods of decoding information states from brain activation, we review growing evidence that emotion is represented at multiple levels of our sensory systems and infuses perception, attention, learning, and memory. We provide evidence that the primary function of emotional representations is to produce unified emotion, perception, and thought (e.g., “That is a good thing”) rather than discrete and isolated psychological events (e.g., “That is a thing. I feel good”). The emergent view suggests ways in which emotion operates as a fundamental feature of cognition, by design ensuring that emotional outcomes are the central object of perception, thought, and action.


2012 ◽  
Vol 107 (8) ◽  
pp. 2033-2041 ◽  
Author(s):  
Yadong Wang ◽  
Nai Ding ◽  
Nayef Ahmar ◽  
Juanjuan Xiang ◽  
David Poeppel ◽  
...  

Slow acoustic modulations below 20 Hz, of varying bandwidths, are dominant components of speech and many other natural sounds. The dynamic neural representations of these modulations are difficult to study through noninvasive neural-recording methods, however, because of the omnipresent background of slow neural oscillations throughout the brain. We recorded the auditory steady-state responses (aSSR) to slow amplitude modulations (AM) from 14 human subjects using magnetoencephalography. The responses to five AM rates (1.5, 3.5, 7.5, 15.5, and 31.5 Hz) and four types of carrier (pure tone and 1/3-, 2-, and 5-octave pink noise) were investigated. The phase-locked aSSR was detected reliably in all conditions. The response power generally decreases with increasing modulation rate, and the response latency is between 100 and 150 ms for all but the highest rates. Response properties depend only weakly on the bandwidth. Analysis of the complex-valued aSSR magnetic fields in the Fourier domain reveals several neural sources with different response phases. These neural sources of the aSSR, when approximated by a single equivalent current dipole (ECD), are distinct from and medial to the ECD location of the N1m response. These results demonstrate that the globally synchronized activity in the human auditory cortex is phase locked to slow temporal modulations below 30 Hz, and the neural sensitivity decreases with an increasing AM rate, with relative insensitivity to bandwidth.
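
As a rough illustration of how a phase-locked steady-state response can be quantified at the modulation rate, the sketch below projects a simulated signal onto a complex exponential at the AM frequency; the sampling rate, noise level, and phase-to-latency conversion are assumptions for illustration, not the study's MEG pipeline.

```python
# Illustrative sketch (assumed parameters, not the study's MEG pipeline):
# quantify a phase-locked steady-state response as the complex Fourier
# coefficient at the amplitude-modulation (AM) rate.
import numpy as np

fs = 1000.0                                   # sampling rate in Hz (assumed)
am_rate = 3.5                                 # one of the AM rates used (Hz)
t = np.arange(0, 10, 1 / fs)                  # 10 s of simulated data

rng = np.random.default_rng(0)
# Toy "neural" signal: a component locked to the AM rate plus noise
signal = 0.5 * np.cos(2 * np.pi * am_rate * t - 0.9) + rng.normal(0, 1, t.size)

# Complex coefficient at the AM rate: magnitude = response strength,
# phase can be converted into an apparent latency.
coeff = 2 * np.mean(signal * np.exp(-2j * np.pi * am_rate * t))
latency_ms = ((-np.angle(coeff)) % (2 * np.pi)) / (2 * np.pi * am_rate) * 1000
print(round(abs(coeff), 2), round(latency_ms, 1))   # ~0.5, ~41 ms
```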


2019 ◽  
Author(s):  
Dimitris A. Pinotsis ◽  
Markus Siegel ◽  
Earl K. Miller

Many recent advances in artificial intelligence (AI) are rooted in visual neuroscience. However, ideas from more complicated paradigms like decision-making are less used. Although automated decision-making systems are ubiquitous (driverless cars, pilot support systems, medical diagnosis algorithms, etc.), achieving human-level performance in decision-making tasks is still a challenge. At the same time, these tasks that are hard for AI are easy for humans. Thus, understanding human brain dynamics during decision-making tasks and modeling them using deep neural networks could improve AI performance. Here we modelled some of the complex neural interactions during a sensorimotor decision-making task. We investigated how brain dynamics flexibly represented and distinguished between sensory processing and categorization in two sensory domains: motion direction and color. We used two different approaches for understanding neural representations. We compared brain responses to 1) the geometry of a sensory or category domain (domain selectivity) and 2) predictions from deep neural networks (computation selectivity). Both approaches gave us similar results, confirming the validity of our analyses. Using the first approach, we found that neural representations changed depending on context. We then trained deep recurrent neural networks to perform the same tasks as the animals. Using the second approach, we found that computations in different brain areas also changed flexibly depending on context. Color computations appeared to rely more on sensory processing, while motion computations relied more on abstract categories. Overall, our results shed light on the biological basis of categorization and on differences in selectivity and computations across brain areas. They also suggest a way to study sensory and categorical representations in the brain: compare brain responses to both a behavioral model and a deep neural network and test whether they give similar results.
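
One standard way to compare brain responses against both a task-geometry model and a deep network, in the spirit of the two approaches described above, is representational similarity: build pairwise dissimilarity matrices over conditions and rank-correlate them. The sketch below uses random placeholder arrays; it is not the authors' analysis code.

```python
# Hedged sketch with random placeholder data (not the study's analysis):
# compare the neural representational geometry with (1) a task/domain model
# and (2) deep-network activations via representational similarity.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n_conditions, n_channels, n_units = 12, 64, 256
neural = rng.normal(size=(n_conditions, n_channels))       # e.g., condition-averaged responses
domain_geometry = rng.normal(size=(n_conditions, 4))       # e.g., coordinates in a category space
dnn_layer = rng.normal(size=(n_conditions, n_units))       # activations from a trained network

def rdm(responses):
    """Pairwise dissimilarities between condition patterns."""
    return pdist(responses, metric="euclidean")

# Higher rank correlation = better account of the neural geometry
for name, model in [("domain geometry", domain_geometry), ("deep network", dnn_layer)]:
    rho, _ = spearmanr(rdm(neural), rdm(model))
    print(name, round(rho, 3))
```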


2021 ◽  
Author(s):  
Changfu Pei ◽  
Yuan Qiu ◽  
Fali Li ◽  
Xunan Huang ◽  
Yajing Si ◽  
...  

Human linguistic units are hierarchical, and our brain responds differently when processing linguistic units during sentence comprehension, especially when the modality of the received signal differs (auditory, visual, or audio-visual). However, it is unclear how the brain processes and integrates language information at different linguistic units (words, phrases, and sentences) provided simultaneously in audio and visual modalities. To address this issue, we presented participants with sequences of short Chinese sentences through auditory, visual, or combined audio-visual modalities while electroencephalographic responses were recorded. With a frequency-tagging approach, we analyzed the neural representations of basic linguistic units (i.e., characters/monosyllabic words) and higher-level linguistic structures (i.e., phrases and sentences) across the three modalities separately. We found that audio-visual integration occurs at all linguistic units, and the brain areas involved in the integration varied across linguistic levels. In particular, the integration of sentences activated a local left prefrontal area. We therefore used continuous theta-burst stimulation (cTBS) to verify that the left prefrontal cortex plays a vital role in the audio-visual integration of sentence information. Our findings suggest an advantage of bimodal language comprehension at hierarchical stages of language-related information processing and provide evidence for a causal role of left prefrontal regions in processing audio-visual sentence information.
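
The frequency-tagging logic can be illustrated with a toy calculation: if characters are presented at a fixed rate, phrases and sentences built from them recur at lower, predictable rates, and tracking of each linguistic level shows up as a spectral peak at its rate. The rates, trial length, and signal amplitudes below are assumptions for illustration, not the study's parameters.

```python
# Illustrative sketch of frequency tagging (assumed rates, not the study's
# parameters): characters at 4 Hz, two-character phrases at 2 Hz, and
# four-character sentences at 1 Hz appear as peaks in the response spectrum.
import numpy as np

fs, dur = 250.0, 40.0                         # sampling rate (Hz), trial length (s)
t = np.arange(0, dur, 1 / fs)
rng = np.random.default_rng(2)

# Toy EEG: components at the character, phrase, and sentence rates plus noise
eeg = (1.0 * np.sin(2 * np.pi * 4 * t)
       + 0.6 * np.sin(2 * np.pi * 2 * t)
       + 0.4 * np.sin(2 * np.pi * 1 * t)
       + rng.normal(0, 2, t.size))

spectrum = np.abs(np.fft.rfft(eeg)) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)
for f_tag in (1, 2, 4):                       # sentence, phrase, character rates
    idx = np.argmin(np.abs(freqs - f_tag))
    print(f"{f_tag} Hz amplitude:", round(float(spectrum[idx]), 3))
```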


2020 ◽  
Vol 30 (11) ◽  
pp. 5988-6003 ◽  
Author(s):  
Vinitha Rangarajan ◽  
Corentin Jacques ◽  
Robert T Knight ◽  
Kevin S Weiner ◽  
Kalanit Grill-Spector

Repeated stimulus presentations commonly produce decreased neural responses—a phenomenon known as repetition suppression (RS) or adaptation—in ventral temporal cortex (VTC) of humans and nonhuman primates. However, the temporal features of RS in human VTC are not well understood. To fill this gap in knowledge, we utilized the precise spatial localization and high temporal resolution of electrocorticography (ECoG) from nine human subjects implanted with intracranial electrodes in the VTC. The subjects viewed nonrepeated and repeated images of faces with long-lagged intervals and many intervening stimuli between repeats. We report three main findings: 1) robust RS occurs in VTC for activity in high-frequency broadband (HFB), but not lower-frequency bands; 2) RS of the HFB signal is associated with lower peak magnitude (PM), lower total responses, and earlier peak responses; and 3) RS effects occur early within initial stages of stimulus processing and persist for the entire stimulus duration. We discuss these findings in the context of early and late components of visual perception, as well as theoretical models of repetition suppression.
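
A minimal sketch of how repetition suppression in the HFB signal can be summarized, using simulated single-trial time courses rather than the actual ECoG data: average the trials per condition, then compare peak magnitude and peak latency for novel versus repeated images.

```python
# Hedged sketch with simulated data (not the ECoG recordings): summarize
# repetition suppression as the change in peak magnitude and peak latency
# of the high-frequency broadband (HFB) response for repeated vs. novel images.
import numpy as np

fs = 500.0                                    # assumed sampling rate (Hz)
t = np.arange(-0.2, 1.0, 1 / fs)              # peri-stimulus time (s)
rng = np.random.default_rng(3)

def hfb_trial(peak, latency):
    """Toy HFB time course: a Gaussian bump plus noise."""
    return peak * np.exp(-((t - latency) ** 2) / (2 * 0.05 ** 2)) + rng.normal(0, 0.1, t.size)

novel = np.stack([hfb_trial(1.0, 0.30) for _ in range(40)])
repeated = np.stack([hfb_trial(0.7, 0.25) for _ in range(40)])   # suppressed, earlier peak

for label, trials in [("novel", novel), ("repeated", repeated)]:
    mean_tc = trials.mean(axis=0)
    print(label, "peak =", round(mean_tc.max(), 2),
          "latency (s) =", round(t[mean_tc.argmax()], 3))
```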


2010 ◽  
Vol 1 (1) ◽  
pp. 7-15 ◽  
Author(s):  
Denis Noble

This article uses an integrative systems-biological view of the relationship between genotypes and phenotypes to clarify some conceptual problems in biological debates about causality. The differential (gene-centric) view is incomplete in a sense analogous to using differentiation without integration in mathematics. Differences in genotype are frequently not reflected in significant differences in phenotype because they are buffered by networks of molecular interactions capable of substituting an alternative pathway to achieve a given phenotypic characteristic when one pathway is removed. Those networks integrate the influences of many genes on each phenotype, so that the effect of a modification in DNA depends on the context in which it occurs. Mathematical modelling of these interactions can help us understand the mechanisms of buffering and the contextual dependence of phenotypic outcomes, and so represent correctly and quantitatively the relations between genomes and phenotypes. By incorporating all the causal factors in generating a phenotype, this approach also highlights the role of non-DNA forms of inheritance and of interactions at multiple levels.
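
As a caricature of the buffering argument (an illustrative assumption, not a model from the article), the toy function below routes two parallel pathways into one saturating phenotypic output, so removing one pathway changes the phenotype only modestly.

```python
# Toy caricature of genetic buffering (illustrative assumption, not a model
# from the article): two parallel pathways feed one saturating output, so
# knocking out either pathway barely shifts the phenotype.
def phenotype(pathway_a=1.0, pathway_b=1.0):
    total_flux = pathway_a + pathway_b
    return total_flux / (0.1 + total_flux)    # saturating input-output relation

print(round(phenotype(), 2))                  # "wild type": ~0.95
print(round(phenotype(pathway_a=0.0), 2))     # one pathway removed: ~0.91
```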


2020 ◽  
Author(s):  
Ying Fan ◽  
Qiming Han ◽  
Simeng Guo ◽  
Huan Luo

When retaining a sequence of auditory tones in working memory (WM), two forms of information, frequency (content) and ordinal position (structure), have to be maintained in the brain. Here, we employed a time-resolved multivariate decoding analysis on content and structure information separately to examine their neural representations in human auditory WM. We demonstrate that content and structure are stored in a dissociated manner and show distinct characteristics. First, each tone is associated with two separate codes in parallel, characterizing its frequency and ordinal position, respectively. Second, during retention, a structural retrocue reactivates structure but not content, whereas a following white noise triggers content but not structure. Third, the structure representation remains unchanged whereas the content representation undergoes a transformation as memory progresses. Finally, content reactivations during retention correlate with WM behavior. Overall, our results support a factorized content-structure representation in auditory WM, which might support efficient memory formation and storage by generalizing stable structure to new auditory inputs.
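
A hedged sketch of a time-resolved decoding analysis in the spirit of the one described above: at each time point, a cross-validated classifier is trained to decode either tone frequency (content) or ordinal position (structure) from multichannel activity. The data here are random placeholders, so accuracies sit near chance.

```python
# Hedged sketch with placeholder data (not the study's recordings or labels):
# time-resolved decoding of content (tone frequency) and structure (ordinal
# position) from simulated multichannel epochs.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
n_trials, n_channels, n_times = 120, 32, 50
X = rng.normal(size=(n_trials, n_channels, n_times))
freq_labels = rng.integers(0, 3, n_trials)        # content: which tone frequency
pos_labels = rng.integers(0, 3, n_trials)         # structure: ordinal position

def decode_over_time(X, y):
    """Cross-validated decoding accuracy at every time point."""
    clf = LogisticRegression(max_iter=1000)
    return np.array([cross_val_score(clf, X[:, :, t_i], y, cv=5).mean()
                     for t_i in range(X.shape[-1])])

content_acc = decode_over_time(X, freq_labels)
structure_acc = decode_over_time(X, pos_labels)
print(content_acc.mean(), structure_acc.mean())   # ~chance (1/3) for random data
```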

