Stimulus configuration, long-term potentiation, and the hippocampus

1997 ◽  
Vol 20 (4) ◽  
pp. 629-631 ◽  
Author(s):  
Nestor A. Schmajuk

Shors & Matzel propose that hippocampal LTP increases the effective salience of discrete external stimuli and thereby facilitates the induction of memories at distant places. In line with this suggestion, a neural network model of associative learning and hippocampal function assumes that LTP increases hippocampal error signals to the cortex, thereby facilitating stimulus configuration in association cortex. Computer simulations show that, under these assumptions, the model correctly describes the effects of LTP induction and blockade on classical discriminations and place learning.

2021 ◽  
Vol 5 (1) ◽  
Author(s):  
Mingxue Ma ◽  
Yao Ni ◽  
Zirong Chi ◽  
Wanqing Meng ◽  
Haiyang Yu ◽  
...  

Abstract: The ability to emulate multiplexed neurochemical transmission is an important step toward mimicking complex brain activities. Glutamate and dopamine are neurotransmitters that regulate thinking and impulse signals, independently or synergistically. However, emulating such simultaneous neurotransmission remains challenging. Here we report the design and fabrication of a synaptic transistor that emulates the multiplexed neurochemical transmission of glutamate and dopamine. The device can perform glutamate-induced long-term potentiation, dopamine-induced short-term potentiation, or co-release-induced depression under particular stimulus patterns. More importantly, a balanced ternary system that uses our ambipolar synaptic device can backtrack input ‘true’, ‘false’ and ‘unknown’ logic signals; this process is more similar to the information processing in human brains than a traditional binary neural network. This work provides new insight for neuromorphic systems, establishing new principles for reproducing the complexity of a mammalian central nervous system from simple basic units.
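The ‘true’/‘false’/‘unknown’ system described above corresponds to a three-valued (Kleene-style) logic rather than the binary logic of conventional networks. As a purely illustrative sketch of how such ternary signals compose (this is standard three-valued logic, not the paper's device physics), encoding false as −1, unknown as 0, and true as +1 makes conjunction, disjunction, and negation simple arithmetic:

```python
# Strong Kleene three-valued logic on balanced-ternary values:
# -1 = false, 0 = unknown, +1 = true.
def t_and(a: int, b: int) -> int:
    # Conjunction is the minimum: false dominates, unknown beats true.
    return min(a, b)

def t_or(a: int, b: int) -> int:
    # Disjunction is the maximum: true dominates, unknown beats false.
    return max(a, b)

def t_not(a: int) -> int:
    # Negation flips the sign; 'unknown' stays unknown.
    return -a

# true AND unknown -> unknown; false OR unknown -> unknown; NOT unknown -> unknown
print(t_and(1, 0), t_or(-1, 0), t_not(0))  # 0 0 0
```

The balanced encoding is what makes negation a sign flip, which is one reason balanced ternary is attractive for ambipolar devices whose response is naturally signed.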


2013 ◽  
Vol 110 (11) ◽  
pp. 2511-2519 ◽  
Author(s):  
Meyer B. Jackson

Nervous systems are thought to encode information as patterns of electrical activity distributed sparsely through networks of neurons. These networks then process information by transforming one pattern of electrical activity into another. To store information as a pattern, a neural network must strengthen synapses between designated neurons so that activation of a subset of these neurons, corresponding to some features of an object, can spread to activate the larger group representing the complete object. This operation of pattern completion endows a neural network with autoassociative memory. Pattern completion by neural networks has been modeled extensively with computers and invoked in behavioral studies, but experiments have yet to demonstrate pattern completion in an intact neural circuit. In the present study, imaging with voltage-sensitive dye in the CA3 region of a hippocampal slice revealed different spatial patterns of activity elicited by electrical stimulation of different sites. Stimulation of the two sites individually evoked “partial” patterns, while simultaneous stimulation of both sites evoked a “complete” pattern. A complete pattern was then stored by applying theta burst stimulation to both sites simultaneously to induce long-term potentiation (LTP) of synapses between CA3 pyramidal cells. Subsequent stimulation of only one site then activated an extended pattern. Quantitative comparisons between response maps showed that the post-LTP single-site patterns more closely resembled the complete dual-site pattern. Thus, LTP induction enabled the CA3 region to complete a dual-site pattern upon stimulation of only one site. This experiment demonstrated that LTP induction can store information in the CA3 region of the hippocampus for subsequent retrieval.
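The pattern-completion operation described above is the defining behavior of Hopfield-style autoassociative memories. The following minimal sketch (an abstract model, not the CA3 circuit or the paper's method) stores one binary pattern with a Hebbian outer-product rule, analogous to LTP strengthening synapses between co-active neurons, and then recovers the full pattern from a half-corrupted cue:

```python
import numpy as np

# 64 units in +/-1 states; a deterministic alternating "complete" pattern.
pattern = np.tile([1.0, -1.0], 32)

# Hebbian storage: strengthen connections between co-active units
# (outer product), with no self-connections.
W = np.outer(pattern, pattern)
np.fill_diagonal(W, 0)

# Partial cue: the second half of the pattern is overwritten (corrupted).
cue = pattern.copy()
cue[32:] = 1.0

# Recall: one synchronous threshold update completes the pattern,
# because the intact half drives every unit toward its stored state.
recalled = np.sign(W @ cue)

print(np.array_equal(recalled, pattern))  # True
```

With a single stored pattern, one update suffices; with many stored patterns, recall is iterated until the state stops changing, and capacity limits (interference between patterns) become the interesting constraint.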


2010 ◽  
Vol 298 (6) ◽  
pp. R1588-R1596 ◽  
Author(s):  
Eunyoung Kim ◽  
Lawrence M. Grover ◽  
Don Bertolotti ◽  
Todd L. Green

Sleep is required for, and sleep loss impairs, normal hippocampal synaptic N-methyl-d-aspartate (NMDA) glutamate receptor function and expression, hippocampal NMDA receptor-dependent synaptic plasticity, and hippocampal-dependent memory function. Although sleep is essential, the signals linking sleep to hippocampal function are not known. One potential signal is growth hormone. Growth hormone is released during sleep, and its release is suppressed during sleep deprivation. If growth hormone links sleep to hippocampal function, then restoration of growth hormone during sleep deprivation should prevent adverse consequences of sleep loss. To test this hypothesis, we examined rat hippocampus for spontaneous excitatory synaptic currents in CA1 pyramidal neurons, long-term potentiation in area CA1, and NMDA receptor subunit proteins in synaptic membranes. Three days of sleep deprivation caused a significant reduction in NMDA receptor-mediated synaptic currents compared with control treatments. When rats were injected with growth hormone once per day during sleep deprivation, the loss of NMDA receptor-mediated synaptic currents was prevented. Growth hormone injections also prevented the impairment of long-term potentiation that normally follows sleep deprivation. In addition, sleep deprivation led to a selective loss of NMDA receptor 2B (NR2B) from hippocampal synaptic membranes, but normal NR2B expression was restored by growth hormone injection. Our results identify growth hormone as a critical mediator linking sleep to normal synaptic function of the hippocampus.


2002 ◽  
Vol 14 (9) ◽  
pp. 2245-2268 ◽  
Author(s):  
Stephen José Hanson ◽  
Michiro Negishi

A simple associationist neural network learns to factor abstract rules (i.e., grammars) from sequences of arbitrary input symbols by inventing abstract representations that accommodate unseen symbol sets as well as unseen but similar grammars. The neural network is shown to have the ability to transfer grammatical knowledge to both new symbol vocabularies and new grammars. Analysis of the state space shows that the network learns generalized abstract structures of the input and is not simply memorizing the input strings. These representations are context sensitive, hierarchical, and based on the state variable of the finite-state machines that the neural network has learned. Generalization to new symbol sets or grammars arises from the spatial nature of the internal representations used by the network, allowing new symbol sets to be encoded close to symbol sets that have already been learned in the hidden-unit space of the network. The results run counter to arguments that learning algorithms based on weight adaptation after each exemplar presentation (such as the long-term potentiation found in the mammalian nervous system) cannot in principle extract symbolic knowledge from positive examples, as prescribed by prevailing human linguistic theory and evolutionary psychology.
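The finite-state machines whose state variable the network is said to internalize can be made concrete with a toy grammar. The example below is a hypothetical two-state grammar (not the one used in the study): strings over {'a', 'b'} are grammatical only if every 'a' is immediately followed by 'b'. Judging grammaticality is just running the transition table:

```python
# Hypothetical two-state finite-state grammar over {'a', 'b'}:
# every 'a' must be immediately followed by 'b'.
# Missing entries (e.g. 'a' while already in 'after_a') are ungrammatical.
TRANSITIONS = {
    ('start', 'a'): 'after_a',
    ('start', 'b'): 'start',
    ('after_a', 'b'): 'start',
}

def grammatical(string: str) -> bool:
    state = 'start'
    for symbol in string:
        state = TRANSITIONS.get((state, symbol))
        if state is None:          # no legal transition: reject
            return False
    return state == 'start'        # accept only with no dangling 'a'

print(grammatical("abbab"))  # True
print(grammatical("aab"))    # False
```

Because the rule is defined over abstract states rather than particular symbols, relabeling 'a' and 'b' to any new vocabulary leaves the grammar intact, which is the kind of transfer the abstract describes.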

