Dissociated neural representations of content and ordinal structure in auditory sequence memory

2020
Author(s): Ying Fan, Qiming Han, Simeng Guo, Huan Luo

Abstract: When retaining a sequence of auditory tones in working memory (WM), two forms of information – frequency (content) and ordinal position (structure) – must be maintained in the brain. Here, we employed a time-resolved multivariate decoding analysis on content and structure information separately to examine their neural representations in human auditory WM. We demonstrate that content and structure are stored in a dissociated manner and show distinct characteristics. First, each tone is associated with two separate codes in parallel, characterizing its frequency and its ordinal position. Second, during retention, a structural retrocue reactivates structure but not content, whereas subsequent white noise triggers content but not structure. Third, the structure representation remains stable, whereas the content representation is transformed over the course of memory. Finally, content reactivations during retention correlate with WM behavior. Overall, our results support a factorized content-structure representation in auditory WM, which may enable efficient memory formation and storage by generalizing stable structure to new auditory inputs.
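The core of a time-resolved decoding analysis is simply training a classifier independently at every timepoint and tracing when a label becomes decodable. The sketch below is an illustrative stand-in, not the authors' actual MEG/EEG pipeline: the simulated data, channel counts, and nearest-centroid classifier are all hypothetical.

```python
import random
import statistics

random.seed(0)

N_TRIALS, N_CHANNELS, N_TIMES = 80, 8, 20
LABELS = [0, 1, 2, 3]  # e.g. four tone frequencies (hypothetical)

def simulate_trial(label):
    """One trial: channels x timepoints, with a label-specific pattern mid-trial."""
    trial = [[random.gauss(0, 1) for _ in range(N_TIMES)] for _ in range(N_CHANNELS)]
    for t in range(5, 15):  # inject label information only in this window
        trial[label % N_CHANNELS][t] += 2.0
        trial[(label + 1) % N_CHANNELS][t] += 1.0
    return trial

trials = [(simulate_trial(lab), lab) for lab in LABELS * (N_TRIALS // len(LABELS))]
random.shuffle(trials)
train, test = trials[:60], trials[60:]

def decode_at(t):
    """Nearest-centroid decoding accuracy at a single timepoint."""
    centroids = {}
    for lab in LABELS:
        rows = [[ch[t] for ch in x] for x, y in train if y == lab]
        centroids[lab] = [statistics.mean(col) for col in zip(*rows)]
    hits = 0
    for x, y in test:
        feat = [ch[t] for ch in x]
        pred = min(LABELS, key=lambda lab: sum((a - b) ** 2
                                               for a, b in zip(feat, centroids[lab])))
        hits += pred == y
    return hits / len(test)

timecourse = [decode_at(t) for t in range(N_TIMES)]
# accuracy should exceed the 25% chance level only inside the signal window
```

Running the same loop with two different label sets over the same trials (frequency labels vs. ordinal-position labels) is, in essence, how two codes can be read out "in parallel" from one recording.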

2019
Vol 10 (1)
Author(s): Ce Mo, Junshi Lu, Bichan Wu, Jianrong Jia, Huan Luo, et al.

Abstract: When a feature is attended, all locations containing this feature are enhanced throughout the visual field. However, how the brain concurrently attends to multiple features remains unknown and cannot be easily deduced from classical attention theories. Here, we recorded human magnetoencephalography signals while subjects concurrently attended to two spatially overlapping orientations. A time-resolved multivariate inverted encoding model was employed to track the ongoing temporal courses of the neural representations of the attended orientations. We show that the two orientation representations alternate with each other and undergo a theta-band (~4 Hz) rhythmic fluctuation over time. Similar temporal profiles are also revealed in orientation discrimination performance. Computational modeling suggests a tuning competition between the two neuronal populations that are selectively tuned to one of the attended orientations. Taken together, our findings reveal, for the first time, a rhythm-based, time-multiplexed neural machinery underlying concurrent multi-feature attention.
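A theta-band fluctuation of this kind can be checked with a plain discrete Fourier transform of the decoding time course. The sampling rate, duration, and simulated anti-phase time courses below are hypothetical placeholders for the model outputs described above, not the paper's data.

```python
import cmath
import math

FS, DUR = 50.0, 2.0  # sampling rate (Hz) and duration (s) of the time course
N = int(FS * DUR)

# two decoding time courses alternating in anti-phase at ~4 Hz (simulated)
course_a = [0.5 + 0.2 * math.sin(2 * math.pi * 4.0 * t / FS) for t in range(N)]
course_b = [0.5 - 0.2 * math.sin(2 * math.pi * 4.0 * t / FS) for t in range(N)]

def dft_power(x):
    """Naive DFT power spectrum; fine for a 100-sample sketch."""
    n = len(x)
    mean = sum(x) / n
    x = [v - mean for v in x]  # remove the DC offset before transforming
    return [abs(sum(v * cmath.exp(-2j * math.pi * k * i / n)
                    for i, v in enumerate(x))) ** 2
            for k in range(n // 2)]

power = dft_power([a - b for a, b in zip(course_a, course_b)])
freqs = [k * FS / N for k in range(N // 2)]
peak_hz = freqs[max(range(len(power)), key=power.__getitem__)]
# the spectrum of the difference time course should peak near 4 Hz
```

Taking the difference of the two time courses before transforming emphasizes the anti-phase alternation; in-phase (common) fluctuations cancel out.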


2018
Author(s): Andrea E. Martin

Hierarchical structure and compositionality imbue human language with unparalleled expressive power and set it apart from other perception-action systems. However, neither formal nor neurobiological models account for how these defining computational properties might arise in a physiological system. I attempt to reconcile hierarchy and compositionality with principles of cell assembly computation in neuroscience; the result is an emerging theory of how the brain could convert distributed perceptual representations into hierarchical structures across multiple timescales while representing interpretable incremental stages of (de)compositional meaning. In the model's architecture - a multidimensional coordinate system based on neurophysiological models of sensory processing - a manifold of neural trajectories encodes sensory, motor, and abstract linguistic states. Gain modulation, including inhibition, tunes the path through the manifold in accordance with behavior, and is how latent structure is inferred. As a consequence, predictive information about upcoming sensory input during production and comprehension is available without a separate operation. The proposed processing mechanism is synthesized from current models of neural entrainment to speech, concepts from systems neuroscience and category theory, and a symbolic-connectionist computational model that uses time and rhythm to structure information. I build on evidence from cognitive neuroscience and computational modeling that suggests a formal and mechanistic alignment between structure building and neural oscillations, and I move towards unifying basic insights from linguistics and psycholinguistics with the currency of neural computation.


2020
Author(s): Matthias Loidolt, Lucas Rudelt, Viola Priesemann

Abstract: How does spontaneous activity during development prepare cortico-cortical connections for sensory input? Here we analyse the development of sequence memory, an intrinsic feature of recurrent networks that supports temporal perception. We use a recurrent neural network model with homeostatic and spike-timing-dependent plasticity (STDP), which has previously been shown to learn specific sequences from structured input. We show that development even under unstructured input increases unspecific sequence memory. Moreover, networks "pre-shaped" by such unstructured input subsequently learn specific sequences faster. The key structural substrate is the emergence of strong and directed synapses, driven by STDP and synaptic competition; these construct self-amplifying preferential paths of activity that can quickly encode new input sequences. Our results suggest that memory traces are not printed on a tabula rasa but instead harness building blocks already present in the brain.
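The emergence of strong, directed synapses can be illustrated with the standard pairwise exponential STDP rule, in which pre-before-post spike pairs potentiate a synapse and post-before-pre pairs depress it. The amplitudes and time constant below are generic textbook values, not the parameters of the authors' model.

```python
import math

A_PLUS, A_MINUS = 0.01, 0.012  # potentiation / depression amplitudes (generic)
TAU = 20.0                     # STDP time constant in ms (generic)

def stdp_dw(t_pre, t_post):
    """Pairwise STDP: pre-before-post potentiates, post-before-pre depresses."""
    dt = t_post - t_pre
    if dt > 0:
        return A_PLUS * math.exp(-dt / TAU)
    return -A_MINUS * math.exp(dt / TAU)

# a synapse repeatedly driven pre -> post strengthens toward its upper bound,
# which is how strong, directed connections (preferential paths) emerge
w = 0.5
for _ in range(100):
    w = min(1.0, max(0.0, w + stdp_dw(t_pre=0.0, t_post=5.0)))
```

The asymmetry of the rule is what makes the resulting connectivity directed: the reverse spike order at the same synapse yields a negative weight change.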


2020
Author(s): Yaelan Jung, Dirk B. Walther

Abstract: Natural scenes deliver rich sensory information about the world. Decades of research have shown that the scene-selective network in the visual cortex represents various aspects of scenes. It is, however, unknown how such complex scene information is processed beyond the visual cortex, for example in the prefrontal cortex. It is also unknown how task context impacts scene perception by modulating which scene content is represented in the brain. In this study, we investigate these questions using scene images from four natural scene categories that also depict two global scene properties: temperature (warm or cold) and sound level (noisy or quiet). A group of healthy human subjects of both sexes viewed the scene images during fMRI under two task conditions: temperature judgment and sound-level judgment. We analyzed how the different scene attributes (scene category, temperature, and sound level) are represented across the brain under these task conditions. Our findings show that global scene properties are represented in the brain, especially in the prefrontal cortex, only when they are task-relevant. Scene categories, by contrast, are represented in both the parahippocampal place area and the prefrontal cortex regardless of task context. These findings suggest that the prefrontal cortex represents scene content selectively, according to task demands, but that this task selectivity depends on the type of scene content: task modulates neural representations of global scene properties but not of scene categories.


2021
pp. 1-17
Author(s): Avital Sternin, Lucy M. McGarry, Adrian M. Owen, Jessica A. Grahn

Abstract: We investigated how familiarity alters music and language processing in the brain. We used fMRI to measure brain responses before and after participants were familiarized with novel music and language stimuli. To manipulate the presence of language and music in the stimuli, there were four conditions: (1) whole music (music and words together), (2) instrumental music (no words), (3) a cappella music (sung words, no instruments), and (4) spoken words. To manipulate participants' familiarity with the stimuli, we used novel stimuli and a familiarization paradigm designed to mimic "natural" exposure while controlling for autobiographical memory confounds. Participants completed two fMRI scans separated by a stimulus training period. Behaviorally, participants learned the stimuli over the training period. However, there were no significant neural differences between the familiar and unfamiliar stimuli in either univariate or multivariate analyses. There were differences in neural activity in frontal and temporal regions based on the presence of language in the stimuli, and these differences replicated across the two scanning sessions. These results indicate that the way we engage with music is important for creating a memory of that music, and that these aspects of engagement, over and above familiarity itself, may be responsible for the robust nature of musical memory in neurodegenerative disorders such as Alzheimer's disease.


Author(s): Mohammad Saleh Nambakhsh, M. Shiva

Exchange of databases between hospitals requires efficient and reliable transmission and storage techniques to cut the cost of health care. This exchange involves large amounts of vital patient information, such as biosignals and medical images. Interleaving one form of data, such as a 1-D signal, within digital images can combine the advantages of data security with efficient memory utilization (Norris, Englehart & Lovely, 2001), but nothing prevents a user from manipulating or copying the decrypted data for illegal uses. Embedding patients' vital information inside their scan images also helps physicians make a better diagnosis of a disease. To address these issues, watermarking algorithms have been proposed as a complement to encryption, providing tools to track the retransmission and manipulation of multimedia content (Barni, Podilchuk, Bartolini & Delp, 2001; Vallabha, 2003). A watermarking system is based on the imperceptible insertion of a watermark (a signal) into an image. This technique is adapted here for interleaving graphical ECG signals within medical images, reducing storage and transmission overheads and supporting computer-aided diagnostic systems. In this chapter, we present a new wavelet-based watermarking method combined with the EZW coder. The principle is to replace significant wavelet coefficients of the ECG signal with the corresponding significant wavelet coefficients of the host image, which is much larger than the mark signal. The chapter also presents a brief introduction to watermarking and to the EZW coder, which serves as the platform for our watermarking algorithm.
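As a loose illustration of coefficient-replacement watermarking, here is a deliberately simplified 1-D Haar sketch: significant detail coefficients of a host signal are overwritten with the mark's coefficients, while the host's approximation band is left untouched. The chapter's actual method operates on 2-D images through the EZW coder; the threshold and signals below are invented for illustration only.

```python
def haar_1d(x):
    """One level of the 1-D Haar transform: (approximation, detail) bands."""
    approx = [(a + b) / 2 for a, b in zip(x[::2], x[1::2])]
    detail = [(a - b) / 2 for a, b in zip(x[::2], x[1::2])]
    return approx, detail

def ihaar_1d(approx, detail):
    """Inverse of haar_1d."""
    out = []
    for a, d in zip(approx, detail):
        out += [a + d, a - d]
    return out

THRESH = 0.5  # 'significance' threshold for a detail coefficient (illustrative)

def embed(host, mark):
    """Overwrite the host's significant detail coefficients with the mark's."""
    h_approx, h_detail = haar_1d(host)
    m_detail = haar_1d(mark + [0.0] * (len(host) - len(mark)))[1]
    wm_detail = [m if abs(h) >= THRESH else h
                 for h, m in zip(h_detail, m_detail)]
    return ihaar_1d(h_approx, wm_detail)

host = [float(i % 4) for i in range(16)]  # stand-in for image data
mark = [0.25, -0.25, 0.5, -0.5]           # stand-in for an ECG snippet
watermarked = embed(host, mark)
# the approximation band (coarse content) of the host survives embedding
```

Because only detail coefficients are touched, the coarse (low-frequency) content of the host is preserved exactly, which is what keeps the insertion imperceptible in this toy setting.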


2019
Vol 121 (5)
pp. 1718-1734
Author(s): Kevin C. Chen, Yi Zhou, Hui-Hui Zhao

Two macroscopic parameters describe the interstitial diffusion of substances in the extracellular space (ECS) of the brain: the ECS volume fraction α and the diffusion tortuosity λ. Past methods based on sampling the extracellular concentration of a membrane-impermeable ion tracer, such as tetramethylammonium (TMA+), can characterize either the dynamic α(t) alone or the constant α and λ in the resting state, but never the dynamic α(t) and λ(t) simultaneously during short-lived brain events. In this work, we propose a sinusoidal TMA+ method to provide time-resolved quantification of α(t) and λ(t) in acute brain events. This method iontophoretically injects TMA+ into the brain ECS in a sinusoidal time pattern, samples the resulting TMA+ diffusion waveform at a distance, and analyzes the transient modulations of the amplitude and phase lag of the sampled TMA+ waveform to infer α(t) and λ(t). Applicability of the sinusoidal method was verified through computer simulations of the sinusoidal TMA+ diffusion waveform in cortical spreading depression. Parameter sensitivity analysis identified the sinusoidal frequency and the interelectrode distance as two key operating parameters. Compared with other TMA+-based methods, the sinusoidal method can more accurately capture the dynamic α(t) and λ(t) in acute brain events and is equally applicable to other pathological episodes such as epilepsy, transient ischemic attack, and brain injury. Future improvement of the method should focus on high-fidelity extraction of the waveform amplitude and phase angle. NEW & NOTEWORTHY An iontophoretic sinusoidal method of tetramethylammonium is described to capture the dynamic brain extracellular space volume fraction α and diffusion tortuosity λ. The sinusoidal frequency and interelectrode distance are two key operating parameters affecting the method's accuracy in capturing α(t) and λ(t). High-fidelity extraction of the waveform amplitude and phase lag is critical to successful sinusoidal analyses.
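The amplitude and phase-lag extraction flagged above as critical can be sketched with quadrature (lock-in) demodulation: multiply the sampled waveform by reference cosine and sine at the drive frequency and average over whole cycles. The sampling rate, drive frequency, and simulated waveform below are illustrative values, not the paper's experimental parameters.

```python
import math

FS, F0 = 200.0, 1.0  # sampling rate and drive frequency in Hz (illustrative)
N = int(2 * FS)      # record spanning two full drive cycles

def lock_in(samples, f0, fs):
    """Recover amplitude and phase lag of a sinusoid by quadrature demodulation."""
    n = len(samples)
    i_sum = sum(s * math.cos(2 * math.pi * f0 * k / fs)
                for k, s in enumerate(samples))
    q_sum = sum(s * math.sin(2 * math.pi * f0 * k / fs)
                for k, s in enumerate(samples))
    i, q = 2 * i_sum / n, 2 * q_sum / n   # in-phase and quadrature components
    return math.hypot(i, q), math.atan2(q, i)

# simulated concentration waveform: amplitude 0.8, phase lag 0.6 rad
true_amp, true_lag = 0.8, 0.6
wave = [true_amp * math.cos(2 * math.pi * F0 * k / FS - true_lag)
        for k in range(N)]
amp, lag = lock_in(wave, F0, FS)
```

Averaging over an integer number of cycles makes the cross terms vanish, so the estimate is exact for a clean sinusoid; tracking α(t) and λ(t) would amount to repeating this over a sliding window and watching the recovered amplitude and lag modulate in time.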

