The formation and use of hierarchical cognitive maps in the brain: A neural network model

2020, Vol. 31 (1-4), pp. 37-141
Author(s): Henry O C Jordan, Daniel M Navarro, Simon M Stringer
1989, Vol. 1 (4), pp. 317-326
Author(s): Sabrina J. Goodman, Richard A. Andersen

Microstimulation of many saccadic centers in the brain produces eye movements that are not consistent with either a strictly retinal or strictly head-centered coordinate coding of eye movements. Rather, stimulation produces some features of both types of coordinate coding. Recently we demonstrated a neural network model that was trained to localize the position of visual stimuli in head-centered coordinates at the output using inputs of eye and retinal position similar to those converging on area 7a of the posterior parietal cortex of monkeys (Zipser & Andersen 1988; Andersen & Zipser 1988). Here we show that microstimulation of this trained network, achieved by fully activating single units in the middle layer, produces “saccades” that are very much like the saccades produced by stimulating the brain. The activity of the middle-layer units can be considered to code the desired location of the eyes in head-centered coordinates; however, stimulation of these units does not produce the saccades predicted by a classical head-centered coordinate coding because the location in space appears to be coded in a distributed fashion among a population of units rather than explicitly at the level of single cells.
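As a rough illustration of the setup described above (a minimal sketch, not the authors' original implementation; the one-dimensional geometry, network size, and training procedure are assumptions), the code below trains a small feedforward network to map retinal position plus eye position onto a head-centered location, then "microstimulates" the trained network by clamping a single middle-layer unit to full activation and reading out the shift in the head-centered output:

import numpy as np

rng = np.random.default_rng(0)

# Toy one-dimensional version: retinal position r and eye position e as inputs,
# head-centered stimulus location h = r + e as the training target.
n = 2000
r = rng.uniform(-1, 1, size=(n, 1))
e = rng.uniform(-1, 1, size=(n, 1))
X = np.hstack([r, e])
Y = r + e

H = 16                                       # middle-layer units
W1 = rng.normal(0, 0.5, (2, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 0.5, (H, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(3000):                        # plain gradient descent on squared error
    hid = sigmoid(X @ W1 + b1)
    pred = hid @ W2 + b2
    err = pred - Y
    gW2 = hid.T @ err / n;  gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * hid * (1 - hid)
    gW1 = X.T @ dh / n;     gb1 = dh.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

# "Microstimulation": on top of the activity evoked by a test stimulus, clamp one
# middle-layer unit to its maximum and read out the change in the network's
# head-centered output, i.e. the implied "saccade".
x_test = np.array([[0.2, -0.1]])
hid = sigmoid(x_test @ W1 + b1)
baseline = hid @ W2 + b2
stimmed = hid.copy()
stimmed[0, 3] = 1.0                          # fully activate middle-layer unit 3
print("implied saccade:", ((stimmed @ W2 + b2) - baseline).item())

Because the desired eye position is carried by the whole middle-layer population rather than by any single unit, clamping one unit shifts the output only partway toward that unit's preferred location, which is the distributed-coding point made in the abstract.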


2019
Author(s): Zhewei Zhang, Huzi Cheng, Tianming Yang

Abstract
The brain makes flexible and adaptive responses in a complicated and ever-changing environment to ensure the organism’s survival. To achieve this, the brain needs to choose appropriate actions flexibly in response to sensory inputs. Moreover, the brain has to understand how its actions affect future sensory inputs and what reward outcomes should be expected, and it must adapt its behavior based on the actual outcomes. A modeling approach that takes into account the combined contingencies between sensory inputs, actions, and reward outcomes may be the key to understanding the underlying neural computation. Here, we train a recurrent neural network model based on sequence learning to predict future events from past event sequences that combine sensory, action, and reward events. We use four exemplary tasks that have been used in previous animal and human experiments to study different aspects of decision making and learning. We first show that the model reproduces the animals’ choice and reaction time patterns in a probabilistic reasoning task, and that its units’ activities mimic the classical ramping pattern of parietal neurons that reflects the evidence accumulation process during decision making. We further demonstrate that the model carries out Bayesian inference and may support metacognition, such as confidence, in additional tasks. Finally, we show how the network model achieves adaptive behavior with an approach distinct from reinforcement learning. Our work pieces together many experimental findings in decision making and reinforcement learning and provides a unified framework for the flexible and adaptive behavior of the brain.
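A minimal sketch of this kind of sequence-learning setup (not the authors’ code; the event vocabulary, toy task generator, and hyperparameters below are assumptions): a recurrent network receives each trial as a sequence of sensory, action, and reward tokens and is trained to predict the next event at every step.

import torch
import torch.nn as nn

# Hypothetical event vocabulary combining sensory, action, and reward events.
VOCAB = ["fixation", "left_cue", "right_cue", "choose_left", "choose_right",
         "reward", "no_reward"]

class EventRNN(nn.Module):
    def __init__(self, vocab_size, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, 16)
        self.rnn = nn.GRU(16, hidden, batch_first=True)
        self.readout = nn.Linear(hidden, vocab_size)

    def forward(self, tokens):
        h, _ = self.rnn(self.embed(tokens))
        return self.readout(h)               # logits for the next event at each step

def toy_trial(batch=32, gen=torch.Generator().manual_seed(0)):
    # Hypothetical task: the cue predicts the rewarded choice 80% of the time.
    cue = torch.randint(1, 3, (batch,), generator=gen)          # left_cue / right_cue
    choice = cue + 2                                            # matching action token
    rewarded = (torch.rand(batch, generator=gen) < 0.8)
    outcome = torch.where(rewarded, torch.tensor(5), torch.tensor(6))
    fix = torch.zeros(batch, dtype=torch.long)
    return torch.stack([fix, cue, choice, outcome], dim=1)      # (batch, 4) event tokens

model = EventRNN(len(VOCAB))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(200):
    seq = toy_trial()
    logits = model(seq[:, :-1])              # predict each next event from the past
    loss = loss_fn(logits.reshape(-1, len(VOCAB)), seq[:, 1:].reshape(-1))
    opt.zero_grad(); loss.backward(); opt.step()

After training, the network’s prediction of the outcome token given the cue and choice implicitly encodes the task contingencies, which is the sense in which prediction over combined event sequences can substitute for an explicit reinforcement-learning update.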


2019
Author(s): Brian Maniscalco, Brian Odegaard, Piercesare Grimaldi, Seong Hah Cho, Michele A. Basso, ...

Abstract
Current dominant views hold that perceptual confidence reflects the probability that a decision is correct. Although these views have enjoyed some empirical support, recent behavioral results indicate that confidence and the probability of being correct can be dissociated. An alternative hypothesis suggests that confidence instead reflects the magnitude of evidence in favor of a decision while being relatively insensitive to the evidence opposing the decision. We considered how this alternative hypothesis might be biologically instantiated by developing a simple leaky competing accumulator neural network model incorporating a known property of sensory neurons: tuned normalization. The key idea of the model is that each accumulator neuron’s normalization ‘tuning’ dictates its contribution to perceptual decisions versus confidence judgments. We demonstrate that this biologically plausible model can account for several counterintuitive findings reported in the literature, where confidence and decision accuracy were shown to dissociate, and that the differential contribution a neuron makes to decisions versus confidence judgments based on its normalization tuning is vital to capturing some of these effects. One critical prediction of the model is that systematic variability in normalization tuning exists not only in sensory cortices but also in the decision-making circuitry. We tested and validated this prediction in the macaque superior colliculus (SC), a region implicated in decision-making. The confirmation of this novel prediction provides direct support for our model. These findings suggest that the brain has developed and implements this alternative, heuristic theory of perceptual confidence computation by capitalizing on the diversity of neural resources available.

Significance
The dominant view of perceptual confidence proposes that confidence optimally reflects the probability that a decision is correct. But recent empirical evidence suggests that perceptual confidence exhibits a suboptimal ‘confirmation bias’, just as in human decision-making in general. We tested how this ‘bias’ might be neurally implemented by building a biologically plausible neural network model, and showed that the ‘bias’ emerges when each neuron’s degree of divisive normalization dictates how it drives decisions versus confidence judgments. We confirmed the model’s biological substrate using electrophysiological recordings in monkeys. These results challenge the dominant model, suggesting that the brain instead capitalizes on the diversity of available machinery (i.e., neuronal resources) to implement heuristic, rather than optimal, strategies to compute subjective confidence.
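A minimal sketch of a leaky competing accumulator with tuned normalization, in the spirit of the model described above (the dynamics, parameter values, and the exact way decision and confidence readouts are weighted by each unit’s tuning are assumptions, not the authors’ implementation):

import numpy as np

rng = np.random.default_rng(1)

def simulate_trial(ev_left, ev_right, n_units=64, dt=0.01, t_max=1.0,
                   leak=2.0, inhibition=1.5, noise=0.4):
    # Two pools of leaky competing accumulators (left- and right-preferring).
    # Each unit has a normalization "tuning" alpha in [0, 1]: alpha near 0 means
    # it is strongly normalized by the opposing evidence; alpha near 1 means it
    # is driven almost exclusively by its preferred evidence.
    alpha = rng.uniform(0.0, 1.0, n_units)
    x_left = np.zeros(n_units)
    x_right = np.zeros(n_units)
    for _ in range(int(t_max / dt)):
        # Divisively normalized drive; opposing evidence enters the
        # normalization pool in proportion to (1 - alpha).
        drive_left = ev_left / (1.0 + ev_left + (1.0 - alpha) * ev_right)
        drive_right = ev_right / (1.0 + ev_right + (1.0 - alpha) * ev_left)
        x_left += dt * (drive_left - leak * x_left - inhibition * x_right.mean()) \
                  + noise * np.sqrt(dt) * rng.standard_normal(n_units)
        x_right += dt * (drive_right - leak * x_right - inhibition * x_left.mean()) \
                   + noise * np.sqrt(dt) * rng.standard_normal(n_units)
        np.clip(x_left, 0, None, out=x_left)
        np.clip(x_right, 0, None, out=x_right)
    # Assumed readouts: strongly normalized (weakly tuned) units dominate the
    # decision; strongly tuned units in the chosen pool dominate confidence.
    decision_w, confidence_w = 1.0 - alpha, alpha
    choose_left = np.sum(decision_w * x_left) > np.sum(decision_w * x_right)
    chosen = x_left if choose_left else x_right
    confidence = np.sum(confidence_w * chosen) / confidence_w.sum()
    return ("left" if choose_left else "right"), confidence

# Raising the evidence for both options leaves the balance of evidence (and
# hence accuracy) roughly unchanged but tends to raise confidence, because the
# confidence readout tracks mainly the evidence for the chosen option.
print(simulate_trial(ev_left=1.0, ev_right=0.6))
print(simulate_trial(ev_left=2.0, ev_right=1.6))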


2018
Author(s): Zhongqiao Lin, Chechang Nie, Yuanfeng Zhang, Yang Chen, Tianming Yang

Abstract
Value-based decision making is a process in which humans or animals maximize their gain by selecting appropriate options and performing the corresponding actions to acquire them. Whether the evaluation of options in the brain can be independent of their action contingency has been hotly debated. To address this question, we trained rhesus monkeys to make decisions by integrating evidence and studied whether the integration occurred in the stimulus or the action domain in the brain. After the monkeys learned the task, we recorded from both the orbitofrontal cortex (OFC) and the dorsolateral prefrontal cortex (DLPFC). We found that OFC neurons encoded the value associated with each single piece of evidence in the stimulus domain. Importantly, the representation of value in the OFC was transient, and the information was not integrated across time for decisions. The integration of evidence was observed only in the DLPFC and only in the action domain. We further used a neural network model to show how the stimulus-to-action transition of value information may be computed in the DLPFC. Our results indicate that decision making in the brain is computed in the action domain without an intermediate stimulus-based decision stage.
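A toy sketch of the stimulus-to-action transition suggested above (the shapes, their evidence weights, and the routing scheme are illustrative assumptions, not the authors’ network): the value of each stimulus appears only transiently in a stimulus-domain signal, while a fixed stimulus-to-action mapping routes that value into action-domain accumulators that integrate across the trial.

import numpy as np

rng = np.random.default_rng(2)

# Hypothetical evidence set: each shape carries a signed weight favouring the
# left (positive) or right (negative) saccade target.
shape_weight = np.array([+0.9, +0.5, -0.5, -0.9])

n_epochs = 6
action_value = np.zeros(2)      # DLPFC-like: integrated value for left / right saccade
stim_value_trace = []           # OFC-like: value of the current stimulus only, not integrated

for t in range(n_epochs):
    shape = rng.integers(0, 4)              # which piece of evidence is shown
    w = shape_weight[shape]
    stim_value_trace.append(w)              # transient stimulus-domain value signal
    action_value += np.array([w, -w])       # routed into the action domain and accumulated

choice = "left" if action_value[0] > action_value[1] else "right"
print("stimulus-domain values (transient):", stim_value_trace)
print("action-domain values (integrated):", action_value, "->", choice)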

