A Neural Model of Schemas and Memory Consolidation
Abstract

The ability to behave differently according to the situation is essential for survival in a dynamic environment. This requires past experiences to be encoded and retrieved alongside the contextual schemas in which they occurred. The complementary learning systems theory suggests that these schemas are acquired through gradual learning via the neocortex and rapid learning via the hippocampus. However, it has also been shown that new information matching a preexisting schema can bypass the gradual learning process and be acquired rapidly, suggesting that the separation of memories into schemas is useful for flexible learning. While there are theories of the role of schemas in memory consolidation, we lack a full understanding of the mechanisms underlying this function. For this reason, we created a biologically plausible neural network model of schema consolidation that incorporates several brain areas and their interactions. The model uses a rate-coded multilayer neural network with contrastive Hebbian learning to learn context-specific tasks. Our model suggests that the medial prefrontal cortex supports context-dependent behaviors by learning representations of schemas. Additionally, sparse random connections in the model from the ventral hippocampus to the hidden layers of the network gate the activity of neurons according to their involvement in the current schema, thus separating the representations of new and prior schemas. Contrastive Hebbian learning may function similarly to oscillations in the hippocampus, alternating between clamping and unclamping the output layer of the network to drive learning. Lastly, the model shows the vital role of neuromodulation: a neuromodulatory area detects how certain it is that new information is consistent with prior schemas and modulates the speed of memory encoding accordingly.
Along with the insights that this model brings to the neurobiology of memory, it further provides a basis for creating context-dependent memories while preventing catastrophic forgetting in artificial neural networks.
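To make the learning rule concrete, the two-phase scheme mentioned in the abstract can be sketched in a few lines. This is a minimal, generic illustration of contrastive Hebbian learning on a single hidden layer, not the paper's actual model: all names, layer sizes, and parameters here are illustrative assumptions. The network settles once with the output free ("minus" phase) and once with the output clamped to the target ("plus" phase), and weights change in proportion to the difference in coactivity between the two phases.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def settle(x, W1, W2, y_clamp=None, steps=30):
    """Relax hidden and output activities; optionally clamp the output.

    The hidden layer receives both bottom-up input (x @ W1) and
    top-down feedback from the output (y @ W2.T), as in recurrent
    CHL networks.
    """
    h = np.zeros(W1.shape[1])
    y = np.zeros(W2.shape[1]) if y_clamp is None else y_clamp
    for _ in range(steps):
        h = sigmoid(x @ W1 + y @ W2.T)
        if y_clamp is None:
            y = sigmoid(h @ W2)  # free ("minus") phase: output evolves
    return h, y

def chl_step(x, target, W1, W2, lr=0.2):
    """One contrastive Hebbian update from a minus and a plus phase."""
    h_m, y_m = settle(x, W1, W2)                  # minus: output free
    h_p, y_p = settle(x, W1, W2, y_clamp=target)  # plus: output clamped
    # Hebbian difference rule: strengthen clamped-phase coactivity,
    # weaken free-phase coactivity.
    W1 += lr * (np.outer(x, h_p) - np.outer(x, h_m))
    W2 += lr * (np.outer(h_p, y_p) - np.outer(h_m, y_m))

# Tiny demonstration on a single input/target pair (sizes are arbitrary).
W1 = rng.normal(0.0, 0.1, (4, 8))
W2 = rng.normal(0.0, 0.1, (8, 2))
x = np.array([1.0, 0.0, 1.0, 0.0])
target = np.array([1.0, 0.0])
for _ in range(300):
    chl_step(x, target, W1, W2)
_, y_free = settle(x, W1, W2)  # free-phase output after training
```

After training, the free-phase output should move toward the clamped target, which mirrors the abstract's description of alternately clamping and unclamping the output layer to drive learning; the plus phase plays the role the paper attributes to hippocampal oscillations.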