Models of sociocultural evolution generally study the population dynamics of cultural traits given known biases in social learning. Cognitive agency, understood as the dynamics underlying a specific agent’s adoption of a given trait, is essentially irrelevant in this framework. This article argues that although implementing and instrumenting agency in computational models is fundamentally challenging, it is ultimately possible and would help us overcome major limitations in our understanding of sociocultural dynamics. Indeed, the behaviour of humans is not causally generated by a set of predefined behavioural laws, but by the situated activity of their cognitive architecture. Idealised models of biased transmission certainly help us understand specific features of population dynamics. However, they distract us from the deep entanglement of the cognitive and ecological processes underlying sociocultural evolution, and erase their embodied, subjective nature. In line with the earlier “Thinking Through Other Minds” account of sociocultural evolution, this article highlights how the Active Inference framework can help us implement and instrument computational models that address these limitations. Such models would not only help ground our understanding of sociocultural evolution in the underlying cognitive dynamics, but also help solve (or reframe) open questions in the study of ritual, the relation between cultural transmission and innovation, and the scales of cultural evolution.
This work considers a class of canonical neural networks comprising rate coding models, wherein neural activity and plasticity minimise a common cost function—and plasticity is modulated with a certain delay. We show that such neural networks implicitly perform active inference and learning to minimise the risk associated with future outcomes. Mathematical analyses demonstrate that this biological optimisation can be cast as maximisation of model evidence, or equivalently minimisation of variational free energy, under the well-known form of a partially observed Markov decision process model. This equivalence indicates that the delayed modulation of Hebbian plasticity—accompanied by adaptation of firing thresholds—is a sufficient neuronal substrate to attain Bayes optimal inference and control. We corroborated this proposition using numerical analyses of maze tasks. This theory offers a universal characterisation of canonical neural networks in terms of Bayesian belief updating and provides insight into the neuronal mechanisms underlying planning and adaptive behavioural control.
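For reference, the quantities invoked above have standard definitions (written here in generic notation, which need not match the paper's own conventions). The variational free energy over beliefs Q(s) about hidden states s, and the expected free energy whose risk term scores future outcomes, take the forms:

```latex
% Variational free energy: an upper bound on negative log evidence -ln P(o)
F = \mathbb{E}_{Q(s)}\!\left[\ln Q(s) - \ln P(o, s)\right]
  = \underbrace{D_{\mathrm{KL}}\!\left[Q(s)\,\|\,P(s \mid o)\right]}_{\ge 0}
    \;-\; \underbrace{\ln P(o)}_{\text{log evidence}}

% Expected free energy of a policy \pi, evaluated over future outcomes o_\tau
G(\pi) = \underbrace{D_{\mathrm{KL}}\!\left[Q(o_\tau \mid \pi)\,\|\,P(o_\tau)\right]}_{\text{risk}}
       \;+\; \underbrace{\mathbb{E}_{Q(s_\tau \mid \pi)}\!\left[\mathrm{H}\!\left[P(o_\tau \mid s_\tau)\right]\right]}_{\text{ambiguity}}
```

Minimising F with respect to Q maximises model evidence P(o); minimising G selects policies that avoid risky (prior-violating) and ambiguous outcomes, which is the sense in which the networks above "minimise the risk associated with future outcomes".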
The spread of ideas is a fundamental concern of today’s news ecology. Understanding the dynamics of the spread of information and its co-option by interested parties is of critical importance. Research on this topic has shown that individuals tend to cluster in echo-chambers and are driven by confirmation bias. In this paper, we leverage the active inference framework to provide an in silico model of confirmation bias and its effect on echo-chamber formation. We build a model based on active inference, where agents tend to sample information in order to justify their own view of reality, which eventually leads them to have a high degree of certainty about their own beliefs. We show that, once agents have reached a certain level of certainty about their beliefs, it becomes very difficult to get them to change their views. This system of self-confirming beliefs is upheld and reinforced by the evolving relationship between agents' beliefs and their observations, which over time will continue to provide evidence for their ingrained ideas about the world. The epistemic communities that are consolidated by these shared beliefs, in turn, tend to produce perceptions of reality that reinforce those shared beliefs. We provide an active inference account of this community formation mechanism. We postulate that agents are driven by the epistemic value that they obtain from sampling or observing the behaviors of other agents. Inspired by digital social networks like Twitter, we build a generative model in which agents generate observable social claims or posts (e.g. `tweets') while reading the socially-observable claims of other agents, which lend support to one of two mutually-exclusive abstract topics. Agents can choose which other agent they pay attention to at each timestep, and crucially who they attend to and what they choose to read influences their beliefs about the world. 
Agents also assess their local network’s perspective, influencing which kinds of posts they expect to see other agents making. The model was built and simulated using the freely-available Python package pymdp. The proposed active inference model can reproduce the formation of echo-chambers over social networks, and gives us insight into the cognitive processes that lead to this phenomenon.
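The self-confirming dynamic described above can be illustrated with a deliberately minimal toy model (a sketch for intuition only, not the pymdp model used in the paper): an agent holds a belief p that topic A rather than topic B is true, attends to whichever source matches its current lean, and updates p by Bayes' rule on the post it reads. The `reliability` parameter is a hypothetical stand-in for how strongly each source's posts favour its own topic.

```python
def step(p, reliability=0.8):
    """One cycle of biased sampling followed by Bayesian belief updating.

    p is the agent's belief that topic A is true. The agent attends to the
    source matching its current lean, and that source posts in favour of
    its own topic; the post is then treated as evidence via Bayes' rule.
    """
    post_supports_a = p >= 0.5                      # confirmation-biased attention
    like_a = reliability if post_supports_a else 1 - reliability
    like_b = 1 - like_a
    # Posterior belief in topic A given the post just read
    return p * like_a / (p * like_a + (1 - p) * like_b)

p = 0.6                 # slight initial lean towards topic A
history = [p]
for _ in range(20):     # twenty rounds of reading and updating
    p = step(p)
    history.append(p)
```

Starting from even a slight lean, the belief rises monotonically towards certainty; once there, a single contrary post can barely move it, mirroring the entrenchment seen in the simulated echo-chambers.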
The gap between the Markov blanket and ontological boundaries arises from the former’s inability to capture the dynamic process through which biological and cognitive agents actively generate their own boundaries with the environment. Active inference in the FEP framework presupposes the existence of a Markov blanket, but it is not itself the process that actively generates one.
This article considers the evolution of brain architectures for predictive processing. We argue that brain mechanisms for predictive perception and action are not late evolutionary additions of advanced creatures like us. Rather, they emerged gradually from simpler predictive loops (e.g. autonomic and motor reflexes) that were a legacy from our earlier evolutionary ancestors—and were key to solving their fundamental problems of adaptive regulation. We characterize simpler-to-more-complex brains formally, in terms of generative models that include predictive loops of increasing hierarchical breadth and depth. These may start from a simple homeostatic motif and be elaborated during evolution in four main ways: the expansion of predictive control into an allostatic loop; its duplication to form multiple sensorimotor loops that expand an animal's behavioural repertoire; and the gradual endowment of generative models with hierarchical depth (to deal with aspects of the world that unfold at different spatial scales) and temporal depth (to select plans in a future-oriented manner). In turn, these elaborations underwrite the solution to biological regulation problems faced by increasingly sophisticated animals. Our proposal aligns neuroscientific theorising—about predictive processing—with evolutionary and comparative data on brain architectures in different animal species.
This article is part of the theme issue ‘Systems neuroscience through the lens of evolutionary theory’.
The field of motor control has long focused on the achievement of external goals through action (e.g., reaching and grasping objects). However, recent studies in conditions of multisensory conflict, such as when a subject experiences the rubber hand illusion or embodies an avatar in virtual reality, reveal the presence of unconscious movements that are not goal-directed, but rather aim at resolving multisensory conflicts; for example, by aligning the position of a person’s arm with that of an embodied avatar. This second, conflict-resolution imperative of movement control did not emerge in classical studies of motor adaptation and online corrections, which did not allow movements to reduce the conflicts; and has been largely ignored so far in formal theories. Here, we propose a model of movement control grounded in the theory of active inference that integrates intentional and conflict-resolution imperatives. We present three simulations showing that the active inference model is able to characterize movements guided by the intention to achieve an external goal, by the necessity to resolve multisensory conflict, or both. Furthermore, our simulations reveal a fundamental difference between the (active) inference underlying intentional and conflict-resolution imperatives, respectively, by showing that it is driven by two different (model and sensory) kinds of prediction errors. Finally, our simulations show that when movement is only guided by conflict-resolution, the model incorrectly infers that its velocity is zero, as if it were not moving. This result suggests a novel speculative explanation for the fact that people are unaware of their subtle compensatory movements to avoid multisensory conflict. Furthermore, it can potentially help shed light on deficits of motor awareness that arise in psychopathological conditions.
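The two kinds of prediction error can be made concrete with a hypothetical one-dimensional sketch (far simpler than the simulations in the paper, and only an illustration of the general scheme): a belief `mu` about arm position is updated by a sensory prediction error and a model (goal-prior) prediction error, while action moves the true arm so that sensation comes to match the prediction.

```python
def simulate(goal, x0=0.0, lr=0.1, steps=300):
    """Toy 1-D active inference controller (an illustrative assumption,
    not the model proposed in the paper)."""
    x = x0    # true arm position (hidden from the agent)
    mu = x0   # agent's belief about its arm position
    for _ in range(steps):
        o = x                                   # noiseless proprioceptive observation
        eps_sensory = o - mu                    # sensory prediction error
        eps_model = goal - mu                   # model prediction error (goal prior)
        mu += lr * (eps_sensory + eps_model)    # perception: gradient step on the belief
        x += lr * (mu - x)                      # action: move the arm towards the prediction
    return x, mu

# Intentional movement: both prediction errors jointly drive the arm to the goal
x, mu = simulate(goal=1.0)
```

Zeroing `eps_model` reduces the scheme to pure conflict resolution, where movement is driven solely by the sensory error, the distinction between the two imperatives that the simulations above explore in full.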
The Air Force research programs envision developing AI technologies that will ensure battlespace dominance through radical increases in the speed of battlespace understanding and decision-making. In the last half century, advances in AI have been concentrated in the area of machine learning. Recent experimental findings and insights in systems neuroscience, the biophysics of cognition, and other disciplines provide converging results that set the stage for technologies of machine understanding and machine-augmented Situational Understanding. This paper will review some of the key ideas and results in the literature, and outline new suggestions. We define situational understanding and the distinctions between understanding and awareness, consider examples of how understanding—or lack of it—manifests in performance, and review hypotheses concerning the underlying neuronal mechanisms. Suggestions for further R&D are motivated by these hypotheses and are centered on the notions of Active Inference and Virtual Associative Networks.
We advance a novel computational model of the acquisition of a hierarchical action repertoire and its use for observation, understanding and motor control. The model is grounded in a principled framework to understand brain and cognition: active inference. We exemplify the functioning of the model by presenting four simulations of a tennis learner who observes a teacher performing tennis shots and forms hierarchical representations of the observed actions - including both actions that are already in her repertoire and novel actions - and finally imitates them. Our simulations show that the agent’s oculomotor activity implements an active information sampling strategy that permits inferring the kinematic aspects of the observed movement, which lie at the lowest level of the action hierarchy. In turn, this low-level kinematic inference supports higher-level inferences about deeper aspects of the observed actions, such as their proximal goals and intentions. Finally, the inferred action representations can steer imitative motor responses, but interfere with the execution of different actions. Taken together, our simulations show that the same hierarchical active inference model provides a unified account of action observation, understanding, learning and imitation. Finally, our model provides a computational rationale to explain the neurobiological underpinnings of visuomotor cognition, including the multiple routes for action understanding in the dorsal and ventral streams and mirror mechanisms.
Grammar acquisition by non-native learners (L2) is typically less successful and may produce fundamentally different grammatical systems from those of native speakers (L1). The neural representation of grammatical processing between L1 and L2 speakers remains controversial. We hypothesized that working memory is the primary source of L1/L2 differences, and operationalized working memory as active inference within the predictive coding account, which models grammatical processes as higher-level neuronal representations of cortical hierarchies, generating predictions (forward model) of lower-level representations. A functional MRI study was conducted with L1 Japanese speakers and highly proficient Japanese learners requiring oral production of grammatically correct Japanese particles. Selecting proper particles requires forward model-dependent active inference as their functions are highly context-dependent. As a control, participants read out a visually designated mora indicated by underlining. Particle selection by L1/L2 groups commonly activated the bilateral inferior frontal gyrus/insula, pre-supplementary motor area, left caudate, middle temporal gyrus, and right cerebellum, which constituted the core linguistic production system. In contrast, the left inferior frontal sulcus, known as the neural substrate of verbal working memory, showed more prominent activation in L2 than in L1. Thus, the active inference process causes L1/L2 differences even in highly proficient L2 learners.