One Step Back, Two Steps Forward: Interference and Learning in Recurrent Neural Networks

2019 ◽  
Vol 31 (10) ◽  
pp. 1985-2003 ◽  
Author(s):  
Chen Beer ◽  
Omri Barak

Artificial neural networks, trained to perform cognitive tasks, have recently been used as models for neural recordings from animals performing these tasks. While some progress has been made in performing such comparisons, the evolution of network dynamics throughout learning remains unexplored. This is paralleled by an experimental focus on recording from trained animals, with few studies following neural activity throughout training. In this work, we address this gap in the realm of artificial networks by analyzing networks that are trained to perform memory and pattern generation tasks. The functional aspect of these tasks corresponds to dynamical objects in the fully trained network—a line attractor or a set of limit cycles for the two respective tasks. We use these dynamical objects as anchors to study the effect of learning on their emergence. We find that the sequential nature of learning—one trial at a time—has major consequences for the learning trajectory and its final outcome. Specifically, we show that least mean squares (LMS), a simple gradient descent suggested as a biologically plausible version of the FORCE algorithm, is constantly obstructed by forgetting, which is manifested as the destruction of dynamical objects from previous trials. The degree of interference is determined by the correlation between different trials. We show which specific ingredients of FORCE avoid this phenomenon. Overall, this difference results in convergence that is orders of magnitude slower for LMS. Learning implies accumulating information across multiple trials to form the overall concept of the task. Our results show that interference between trials can greatly affect learning in a learning-rule-dependent manner. These insights can help design experimental protocols that minimize such interference, and possibly infer underlying learning rules by observing behavior and neural activity throughout learning.
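The contrast the abstract draws between LMS and FORCE comes down to how each rule scales its weight update. A minimal sketch of that difference, on a toy readout problem (random firing rates, a sinusoidal target, and all dimensions and learning rates chosen only for illustration, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 50, 200
r = rng.standard_normal((T, N))                # hypothetical firing rates, one row per step
target = np.sin(np.linspace(0, 4 * np.pi, T))  # toy target output

# LMS: plain stochastic gradient descent on the readout weights.
# Each update moves along the current rate vector, so correlated
# trials keep overwriting (interfering with) each other.
w_lms = np.zeros(N)
eta = 0.005
for t in range(T):
    err = w_lms @ r[t] - target[t]
    w_lms -= eta * err * r[t]

# FORCE-style recursive least squares: the running inverse correlation
# matrix P decorrelates updates across trials -- the ingredient that
# suppresses interference between correlated inputs.
w_force = np.zeros(N)
P = np.eye(N)                     # approximates (R^T R + I)^{-1} online
for t in range(T):
    k = P @ r[t]
    c = 1.0 / (1.0 + r[t] @ k)
    P -= c * np.outer(k, k)
    err = w_force @ r[t] - target[t]
    w_force -= c * err * k        # step is rescaled along correlated directions

mse_lms = np.mean((r @ w_lms - target) ** 2)
mse_force = np.mean((r @ w_force - target) ** 2)
```

In a single pass over the data, the RLS variant typically reaches a far lower training error than LMS, consistent with the orders-of-magnitude convergence gap the abstract reports.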

2010 ◽  
Vol 365 (1551) ◽  
pp. 2347-2362 ◽  
Author(s):  
Dominique M. Durand ◽  
Eun-Hyoung Park ◽  
Alicia L. Jensen

Conventional neural networks are characterized by many neurons coupled together through synapses. The activity, synchronization, plasticity and excitability of the network are then controlled by its synaptic connectivity. Neurons are surrounded by an extracellular space in which fluctuations in specific ionic concentrations can modulate neuronal excitability. Elevated extracellular concentrations of potassium ([K⁺]ₒ) can generate neuronal hyperexcitability. Yet, after many years of research, it is still unknown whether an elevation of potassium is the cause or the result of the generation, propagation and synchronization of epileptiform activity. An elevation of potassium in neural tissue can be characterized by dispersion (global elevation of potassium) and lateral diffusion (local spatial gradients). Both experimental and computational studies have shown that lateral diffusion is involved in the generation and the propagation of neural activity in diffusively coupled networks. Diffusion-based coupling by potassium can therefore play an important role in neural networks, and it is reviewed here in four sections. Section 2 shows that potassium diffusion is responsible for the synchronization of activity across a mechanical cut in the tissue. A computer model of diffusive coupling shows that potassium diffusion can mediate communication between cells and generate abnormal and/or periodic activity in small (§3) and large (§4) networks of cells. Finally, in §5, a study of the role of extracellular potassium in the propagation of axonal signals shows that elevated potassium concentration can block the propagation of neural activity in axonal pathways. Taken together, these results indicate that potassium accumulation and diffusion can interfere with normal activity and generate abnormal activity in neural networks.
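The lateral-diffusion mechanism the review describes can be sketched with a one-dimensional diffusion equation for [K⁺]ₒ along a strip of tissue. This is only an illustrative finite-difference toy, not any of the review's models; the diffusion coefficient is order-of-magnitude and the grid, time step, and concentrations are made up:

```python
import numpy as np

# Explicit finite-difference integration of d[K]/dt = D * d2[K]/dx2.
D = 2.5e-6      # cm^2/s, rough order of magnitude for K+ in tissue
dx = 1e-3       # cm between grid points
dt = 0.05       # s, chosen so D*dt/dx^2 = 0.125 < 0.5 (explicit-scheme stability)
n = 100
K = np.full(n, 3.0)   # baseline extracellular [K+] in mM
K[45:55] = 12.0       # local elevation, e.g. at a focus of epileptiform activity

coeff = D * dt / dx**2
for _ in range(2000):          # integrate 100 s of diffusion
    lap = np.roll(K, 1) - 2 * K + np.roll(K, -1)
    lap[0] = lap[-1] = 0.0     # end points pinned at baseline
    K += coeff * lap

# The elevation spreads laterally with no synaptic connectivity at all,
# which is the diffusion-based coupling the review attributes to
# synchronization across a mechanical cut.
```

After the loop, sites well outside the initial bump (e.g. index 30) sit above baseline while the peak has decayed, so a purely diffusive pathway can carry the perturbation across a region with no synapses.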


2019 ◽  
Author(s):  
Michael E. Rule ◽  
Adrianna R. Loback ◽  
Dhruva V. Raman ◽  
Laura Driscoll ◽  
Christopher D. Harvey ◽  
...  

Over days and weeks, neural activity representing an animal's position and movement in sensorimotor cortex has been found to continually reconfigure or 'drift' during repeated trials of learned tasks, with no obvious change in behavior. This challenges classical theories, which assume stable engrams underlie stable behavior. However, it is not known whether this drift occurs systematically, allowing downstream circuits to extract consistent information. We show that drift is systematically constrained far above chance, facilitating a linear weighted readout of behavioural variables. However, a significant component of drift continually degrades a fixed readout, implying that drift is not confined to a null coding space. We calculate the amount of plasticity required to compensate for drift independently of any learning rule, and find that this is within physiologically achievable bounds. We demonstrate that a simple, biologically plausible local learning rule can achieve these bounds, accurately decoding behavior over many days.
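The core contrast here, between a fixed readout degraded by drift and a local rule that tracks it, can be sketched in a few lines. This is a hypothetical toy, not the authors' analysis: the population size, drift rate, noise level, and delta-rule learning rate are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
N, days, trials = 40, 20, 100

# Hypothetical drifting code: the population vector encoding a scalar
# behavioural variable rotates a little each "day".
enc = rng.standard_normal(N)
enc /= np.linalg.norm(enc)

w_fixed = enc.copy()      # readout frozen on day 0
w_plastic = enc.copy()    # readout updated by a local delta rule
eta = 0.02

err_fixed, err_plastic = [], []
for day in range(days):
    enc = enc + 0.1 * rng.standard_normal(N)   # representational drift
    enc /= np.linalg.norm(enc)
    for _ in range(trials):
        x = rng.standard_normal()                    # latent behavioural variable
        r = x * enc + 0.1 * rng.standard_normal(N)   # noisy population activity
        err_fixed.append((w_fixed @ r - x) ** 2)
        e = w_plastic @ r - x
        w_plastic -= eta * e * r   # local: uses only the rates and the error
        err_plastic.append(e ** 2)

late = slice(-5 * trials, None)   # last 5 days
mse_fixed = float(np.mean(np.array(err_fixed)[late]))
mse_plastic = float(np.mean(np.array(err_plastic)[late]))
```

By the final days the frozen readout has decorrelated from the drifted code, while the continually updated readout keeps decoding accurately, mirroring the abstract's claim that modest ongoing plasticity suffices to compensate for drift.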


Author(s):  
KW Scangos ◽  
AN Khambhati ◽  
PM Daly ◽  
LW Owen ◽  
JR Manning ◽  
...  

Quantitative biological substrates of depression remain elusive. We carried out this study to determine whether application of a novel computational approach to high spatiotemporal resolution direct neural recordings may unlock the functional organization and coordinated activity patterns of depression networks. We identified two subnetworks conserved across the majority of individuals studied. The first was characterized by left temporal lobe hypoconnectivity and pathological beta activity. The second was characterized by a hypoactive, but hyperconnected left frontal cortex. These findings identify distributed circuit activity associated with depression, link neural activity with functional connectivity profiles, and inform strategies for personalized targeted intervention.


2020 ◽  
Author(s):  
Victoria Ankel ◽  
Stella Pantopoulou ◽  
Matthew Weathered ◽  
Darius Lisowski ◽  
Anthonie Cilliers ◽  
...  

Author(s):  
Xiaohui Zou ◽  
Yejing Rong ◽  
Xiaojuan Guo ◽  
Wenzhe Hou ◽  
Bingyu Yan ◽  
...  

Fibre is the viral protein that mediates the attachment and infection of adenovirus to the host cell. Fowl adenovirus 4 (FAdV-4) possesses two different fibre trimers on each penton capsomere, and the roles of the separate fibres remain elusive. Here, we investigated the function of FAdV-4 fibres using reverse genetics approaches. Adenoviral plasmids carrying fiber1 or fiber2 mutant genes were constructed and used to transfect chicken LMH cells. Fiber1-mutated recombinant virus could not be rescued. This defective phenotype was complemented when a fiber1-bearing helper plasmid was included for co-transfection. Infection of LMH cells by fibre-intact FAdV-4 (FAdV4-GFP) could be blocked with purified fiber1 knob protein in a dose-dependent manner, while purified fiber2 knob had no such function. In contrast, fiber2-mutated FAdV-4, FAdV4XF2-GFP, was successfully rescued. One-step growth curves showed that the proliferative capacity of FAdV4XF2-GFP was 10 times lower than that of the control FAdV4-GFP. FAdV4XF2-GFP also caused fewer deaths of infected chicken embryos than FAdV4-GFP did, which resulted from poorer virus replication in vivo. These data illustrate that fiber1 mediates virus adsorption and is essential for FAdV-4, while fiber2 is dispensable, although it contributes significantly to virulence.

