Networks with lateral connectivity. III. Plasticity and reorganization of somatosensory cortex

1996 ◽  
Vol 75 (1) ◽  
pp. 217-232 ◽  
Author(s):  
J. Xing ◽  
G. L. Gerstein

1. Mechanisms underlying cortical reorganizations were studied using a three-layered neural network model with neuronal groups already formed in the cortical layer. 2. Dynamic changes induced in cortex by behavioral training or intracortical microstimulation (ICMS) were simulated. Both manipulations resulted in reassembly of neuronal groups and formation of stimulus-dependent assemblies. Receptive fields of neurons and cortical representation of inputs also changed. Many neurons that had been weakly responsive or silent became active. 3. Several types of learning models were examined in simulating behavioral training, ICMS-induced dynamic changes, deafferentation, or cortical lesion. Each learning model best reproduced the experimental data from a different manipulation, suggesting that more than one plasticity mechanism may be able to induce dynamic changes in cortex. 4. After skin or cortical stimulation ceased, as spontaneous activity continued, the stimulus-dependent assemblies gradually reverted to structure-dependent neuronal groups. However, the relationships among individual neurons and the identities of many neurons did not return to their original states. Thus a different set of neurons would be recruited by the same training stimulus sequence on its next presentation. 5. We also reproduced several typical long-term reorganizations caused by pathological manipulations such as cortical lesions, input loss, and digit fusion. 6. In summary, with Hebbian plasticity rules on lateral connections, the network model is capable of reproducing most characteristics of experiments on cortical reorganization. We propose that an important mechanism underlying cortical plastic changes is the formation of temporary assemblies in response to strongly synchronized, localized input. Such stimulus-dependent assemblies can be dissolved by spontaneous activity after removal of the stimuli.
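The core dynamic described above, Hebbian strengthening of lateral connections under synchronized localized input followed by dissolution of the stimulus-dependent assembly under spontaneous activity, can be sketched in a few lines. This is a minimal illustrative model, not the authors' three-layered network; the weight matrix, learning rate, relaxation rate, and normalization scheme are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 20                                   # neurons in a small cortical patch
W = rng.uniform(0.1, 0.2, (N, N))        # lateral excitatory weights
np.fill_diagonal(W, 0.0)
W_baseline = W.copy()                    # structure-dependent resting state

ETA = 0.05                               # Hebbian learning rate (assumed)
DECAY = 0.02                             # relaxation rate under spontaneous activity

def hebbian_step(W, rates):
    """Strengthen weights between co-active neurons (Hebbian rule)."""
    W = W + ETA * np.outer(rates, rates)
    np.fill_diagonal(W, 0.0)
    # normalization: keep each neuron's total incoming weight constant
    W = W * (W_baseline.sum(axis=1, keepdims=True) / W.sum(axis=1, keepdims=True))
    return W

def spontaneous_step(W):
    """Relax toward the structural baseline, dissolving stimulus assemblies."""
    return W + DECAY * (W_baseline - W)

# synchronized, localized input repeatedly drives neurons 0-4 together
stim = np.zeros(N)
stim[:5] = 1.0
for _ in range(50):
    W = hebbian_step(W, stim)
within_trained = W[:5, :5].sum()         # coupling inside the stimulated group

# spontaneous activity after the stimulus is removed
for _ in range(500):
    W = spontaneous_step(W)
```

After training, coupling within the co-stimulated group exceeds its structural baseline; after sufficient spontaneous activity, the weights relax back toward the baseline configuration, mirroring point 4 of the abstract.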

1996 ◽  
Vol 75 (1) ◽  
pp. 200-216 ◽  
Author(s):  
J. Xing ◽  
G. L. Gerstein

1. Using the three-layered network model defined in the previous paper, we studied the basic features of neurons in the cortical layer while the synaptic strengths of lateral excitatory connections were made modifiable by a Hebbian learning rule and a normalization process. 2. We found that neurons in the cortical layer formed groups through their lateral excitatory connections after the network was trained with sequential random dot stimulation. Neurons within a group connected tightly; neurons in different groups connected weakly. 3. The effects of model parameters and input parameters on the formation of neuronal groups were investigated. Results showed that the average size and rough shapes of groups were mainly determined by the spatial distribution of lateral connections within the cortical layer, irrespective of input parameters and training methods. Thus groups are structure dependent. 4. Lateral inhibition is the single key factor in the network that affects the grouping of neurons: without an appropriate amount of distant inhibition, group formation does not occur, whereas group formation is very robust to all other parameters we tested. On the other hand, group locations are easily disturbed by inputs or changes of parameters, suggesting that such neuronal groups are dynamically maintained. 5. With the development of neuronal groups, neurons can be divided into two response types. TN-I neurons respond weakly to inputs and have small receptive fields or do not respond at all (silent); TN-II neurons, approximately 30-40% of the population, respond strongly to inputs and have large receptive fields. The two types of neurons also differ with respect to response threshold and temporal firing patterns. After groups formed, receptive fields of TN-II neurons within the same group clustered spatially with high overlap, whereas receptive fields of TN-I neurons with detectable responses shifted systematically with the neuron's spatial location. 6. The two types of neurons are homogeneously distributed across the cortical layer, and the population of each type produces a full representation of the input layer with weak or strong responses, respectively. 7. We conclude that neurons in the cortical network naturally assemble into functional groups. Such groups are dynamic and amenable to change by input stimuli. A fraction of neurons (30-40%) within the same group share a similar receptive field and respond strongly together to stimuli, so that the network has a more robust response to inputs. On the other hand, a large portion (60-70%) of neurons become weakly responsive or silent; these neurons are available for other (unknown) functional purposes.
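The grouping criterion above, tight connectivity within a group and weak connectivity between groups, can be made concrete by thresholding a lateral weight matrix and taking connected components. This is an illustrative analysis sketch, not the paper's method; the toy weight matrix and threshold value are assumptions.

```python
import numpy as np

def find_groups(W, threshold):
    """Partition neurons into groups: two neurons belong to the same group
    if they are linked by a chain of lateral weights above threshold
    (union-find on the thresholded connectivity graph)."""
    n = W.shape[0]
    parent = list(range(n))

    def root(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path compression
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if max(W[i, j], W[j, i]) > threshold:
                parent[root(i)] = root(j)

    groups = {}
    for i in range(n):
        groups.setdefault(root(i), []).append(i)
    return sorted(groups.values(), key=min)

# toy weight matrix: two tightly coupled groups with weak cross-links
W = np.full((6, 6), 0.05)
W[:3, :3] = 0.8
W[3:, 3:] = 0.8
np.fill_diagonal(W, 0.0)

groups = find_groups(W, threshold=0.5)
# → [[0, 1, 2], [3, 4, 5]]
```

Neurons 0-2 and 3-5 are recovered as separate groups because their mutual weights (0.8) exceed the threshold while the cross-group weights (0.05) do not.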


2014 ◽  
Author(s):  
Christoph Hartmann ◽  
Andreea Lazar ◽  
Jochen Triesch

Abstract Trial-to-trial variability and spontaneous activity of cortical recordings have been suggested to reflect intrinsic noise. This view is currently challenged by mounting evidence for structure in these phenomena: trial-to-trial variability decreases following stimulus onset and can be predicted by previous spontaneous activity. This spontaneous activity is similar in magnitude and structure to evoked activity and can predict decisions. All of the neuronal properties described above can be accounted for, at an abstract computational level, by the sampling hypothesis, according to which response variability reflects stimulus uncertainty. However, a mechanistic explanation at the level of neural circuit dynamics is still missing.

In this study, we demonstrate that all of these phenomena can be accounted for by a noise-free self-organizing recurrent neural network model (SORN). It combines spike-timing-dependent plasticity (STDP) and homeostatic mechanisms in a deterministic network of excitatory and inhibitory McCulloch-Pitts neurons. The network self-organizes in response to spatio-temporally varying input sequences.

We find that the key properties of neural variability mentioned above develop in this model as the network learns to perform sampling-like inference. Importantly, the model shows high trial-to-trial variability although it is fully deterministic. This suggests that the trial-to-trial variability in neural recordings may not reflect intrinsic noise. Rather, it may reflect a deterministic approximation of sampling-like learning and inference. The simplicity of the model suggests that these correlates of the sampling theory are canonical properties of recurrent networks that learn with a combination of STDP and homeostatic plasticity mechanisms.

Author Summary Neural recordings seem very noisy. If the exact same stimulus is shown to an animal multiple times, the neural response will vary. In fact, the activity of a single neuron shows many features of a stochastic process. Furthermore, in the absence of a sensory stimulus, cortical spontaneous activity has a magnitude comparable to the activity observed during stimulus presentation. These findings have led to a widespread belief that neural activity is indeed very noisy. However, recent evidence indicates that individual neurons can operate very reliably and that the spontaneous activity in the brain is highly structured, suggesting that much of the noise may in fact be signal. One hypothesis regarding this putative signal is that it reflects a form of probabilistic inference through sampling. Here we show that the key features of neural variability can be accounted for in a completely deterministic network model through self-organization. As the network learns a model of its sensory inputs, the deterministic dynamics give rise to sampling-like inference. Our findings show that the notorious variability in neural recordings need not be seen as evidence for a noisy brain. Instead, it may reflect sampling-like inference emerging from a self-organized learning process.


2019 ◽  
Vol 15 (5) ◽  
pp. e1006618 ◽  
Author(s):  
Monzilur Rahman ◽  
Ben D. B. Willmore ◽  
Andrew J. King ◽  
Nicol S. Harper

2011 ◽  
Vol 106 (2) ◽  
pp. 986-998 ◽  
Author(s):  
Julie Le Cam ◽  
Luc Estebanez ◽  
Vincent Jacob ◽  
Daniel E. Shulz

The tactile sensations mediated by the whisker-trigeminal system allow rodents to efficiently detect and discriminate objects. These capabilities rely strongly on the temporal and spatial structure of whisker deflections. Both subthreshold and spiking receptive fields in the barrel cortex encompass a large number of vibrissae, and it seems likely that the functional properties of these multiwhisker receptive fields reflect the multiple-whisker interactions encountered by the animal during exploration of its environment. The aim of this study was to examine the dependence of the spatial structure of cortical receptive fields on stimulus parameters. Using a newly developed 24-whisker stimulation matrix, we applied a forward correlation analysis of spiking activity to randomized whisker deflections (sparse noise) to characterize the receptive fields that result from caudal and rostral directions of whisker deflection. We observed that the functionally determined principal whisker, the whisker eliciting the strongest response with the shortest latency, differed according to the direction of whisker deflection. Thus, for a given neuron, maximal responses to opposite directions of whisker deflection could be spatially separated. This spatial separation resulted in a displacement of the center of mass between the rostral and caudal subfields and was accompanied by differences between response latencies in rostral and caudal directions of whisker deflection. Such direction-dependent receptive field organization was observed in every cortical layer. We conclude that the spatial structure of receptive fields in the barrel cortex is not an intrinsic property of the neuron but depends on the properties of sensory input.
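The forward-correlation step described above can be sketched as follows: under a sparse-noise protocol, average the spiking response per whisker separately for the two deflection directions, then take the principal whisker and the subfield center of mass for each direction. The neuron model, grid layout, and all numbers here are hypothetical; this illustrates only the analysis, not the recorded data.

```python
import numpy as np

rng = np.random.default_rng(2)

N_WHISKERS = 24                               # 24-whisker stimulation matrix
GRID = np.array([(r, c) for r in range(4) for c in range(6)], dtype=float)
N_TRIALS = 20000

# sparse-noise protocol: one random whisker, one random direction per trial
whisker = rng.integers(0, N_WHISKERS, N_TRIALS)
direction = rng.integers(0, 2, N_TRIALS)      # 0 = caudal, 1 = rostral

# hypothetical neuron: caudal subfield centered on whisker 8,
# rostral subfield centered on the neighboring whisker 9
def p_spike(w, d):
    center = 8 if d == 0 else 9
    dist = np.linalg.norm(GRID[w] - GRID[center])
    return 0.6 * np.exp(-dist ** 2 / 2.0)

spikes = rng.random(N_TRIALS) < np.array(
    [p_spike(w, d) for w, d in zip(whisker, direction)])

# forward correlation: mean response per whisker, separately per direction
rf = np.zeros((2, N_WHISKERS))
for d in range(2):
    for w in range(N_WHISKERS):
        mask = (whisker == w) & (direction == d)
        rf[d, w] = spikes[mask].mean()

# principal whisker (strongest response) for each deflection direction
principal = rf.argmax(axis=1)
# center of mass of each directional subfield on the whisker grid
com = (rf[:, :, None] * GRID).sum(axis=1) / rf.sum(axis=1, keepdims=True)
```

For this synthetic neuron the caudal and rostral principal whiskers differ, so the two directional subfields have displaced centers of mass, which is the signature reported in the study.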


2021 ◽  
pp. 1-29
Author(s):  
Justin D. Theiss ◽  
Joel D. Bowen ◽  
Michael A. Silver

Abstract Any visual system, biological or artificial, must make a trade-off between the number of units used to represent the visual environment and the spatial resolution of the sampling array. Humans and some other animals are able to allocate attention to spatial locations to reconfigure the sampling array of receptive fields (RFs), thereby enhancing the spatial resolution of representations without changing the overall number of sampling units. Here, we examine how representations of visual features in a fully convolutional neural network interact and interfere with each other in an eccentricity-dependent RF pooling array and how these interactions are influenced by dynamic changes in spatial resolution across the array. We study these feature interactions within the framework of visual crowding, a well-characterized perceptual phenomenon in which target objects in the visual periphery that are easily identified in isolation are much more difficult to identify when flanked by similar nearby objects. By separately simulating effects of spatial attention on RF size and on the density of the pooling array, we demonstrate that the increase in RF density due to attention is more beneficial than changes in RF size for enhancing target classification for crowded stimuli. Furthermore, by varying target and flanker spacing, as well as the spatial extent of attention, we find that feature redundancy across RFs has more influence on target classification than the fidelity of the feature representations themselves. Based on these findings, we propose a candidate mechanism by which spatial attention relieves visual crowding through enhanced feature redundancy that is mostly due to increased RF density.
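The two attentional manipulations compared in the study, changing RF size versus changing RF density, can be sketched on a one-dimensional eccentricity-dependent pooling array. This sketch only constructs the arrays; it does not attempt to reproduce the classification results, and the linear size-eccentricity scaling and all parameter values are assumptions.

```python
import numpy as np

def pooling_array(ecc_max=60.0, size0=0.5, scale=0.1):
    """Eccentricity-dependent sampling array: RF size, and hence spacing,
    grows linearly with eccentricity, so the periphery is pooled coarsely."""
    centers, e = [], 1.0
    while e < ecc_max:
        centers.append(e)
        e += 2 * (size0 + scale * e)      # spacing proportional to RF size
    centers = np.array(centers)
    return centers, size0 + scale * centers

def attend(centers, focus, extent, density_gain, size_gain,
           size0=0.5, scale=0.1):
    """Attention at `focus`: within `extent`, rescale the local RF density
    (resample more centers) and the local RF size."""
    near = np.abs(centers - focus) < extent
    new_c = np.linspace(focus - extent, focus + extent,
                        int(near.sum() * density_gain))
    centers = np.sort(np.concatenate([centers[~near], new_c]))
    sigmas = size0 + scale * centers      # same eccentricity scaling
    sigmas[np.abs(centers - focus) < extent] *= size_gain
    return centers, sigmas

centers, sigmas = pooling_array()
att_centers, att_sigmas = attend(centers, focus=30.0, extent=10.0,
                                 density_gain=2.0, size_gain=0.5)
```

Here attention at 30 deg doubles the local RF count and halves the local RF size at once; the study manipulates these two factors separately to show that the density increase contributes more to relieving crowding than the size change.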


2014 ◽  
Vol 602-605 ◽  
pp. 3213-3217
Author(s):  
Sheng Han Zhou ◽  
Wen Bing Chang

The paper aims to develop an updated reconfigurable model of network systems based on finite state automata. First, the paper reviews the concept of reconfigurable network systems and describes their robustness, evolution, and basic survivability attributes; the system's robust, evolution, and survival behaviors are then described with a hierarchical model. Second, the study builds a quantitative reconfigurability metric, illustrated with a network-topology reconfiguration measurement example. Finally, the experimental results show that the proposed quantitative reconfigurability indicator identifies an efficiently reconfigurable network topology, one that can effectively adapt to dynamic changes in the environment.
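The finite-state-automaton framing can be illustrated with a small transition table. The states and events below are hypothetical labels chosen to echo the robust, evolution, and survival behaviors named in the abstract, not the paper's actual automaton.

```python
# Minimal finite state automaton sketch for a reconfigurable network system.
# State and event names are illustrative assumptions.
TRANSITIONS = {
    ("normal", "perturbation"): "robust",     # absorb small disturbances
    ("robust", "recovered"): "normal",
    ("robust", "overload"): "survival",       # degraded but still alive
    ("survival", "reconfigure"): "evolution", # change the topology
    ("evolution", "stabilized"): "normal",
    ("survival", "collapse"): "failed",
}

def run(state, events):
    """Drive the automaton; unknown events leave the state unchanged."""
    for e in events:
        state = TRANSITIONS.get((state, e), state)
    return state
```

For example, `run("normal", ["perturbation", "overload", "reconfigure", "stabilized"])` walks through the robust, survival, and evolution behaviors and returns to the normal state, whereas a collapse event in the survival state ends in the failed state.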

