Artificial Life
Latest Publications


TOTAL DOCUMENTS: 760 (five years: 82)
H-INDEX: 54 (five years: 5)
Published by MIT Press
ISSN: 1530-9185, 1064-5462

2021, pp. 1-36
Author(s): Oskar Elek, Joseph N. Burchett, J. Xavier Prochaska, Angus G. Forbes

Abstract We present Monte Carlo Physarum Machine (MCPM): a computational model suitable for reconstructing continuous transport networks from sparse 2D and 3D data. MCPM is a probabilistic generalization of Jones's (2010) agent-based model for simulating the growth of Physarum polycephalum (slime mold). We compare MCPM to Jones's work on theoretical grounds, and describe a task-specific variant designed for reconstructing the large-scale distribution of gas and dark matter in the Universe known as the cosmic web. To analyze the new model, we first explore MCPM's self-patterning behavior, showing a wide range of continuous network-like morphologies—called polyphorms—that the model produces from geometrically intuitive parameters. Applying MCPM to both simulated and observational cosmological data sets, we then evaluate its ability to produce consistent 3D density maps of the cosmic web. Finally, we examine other possible tasks where MCPM could be useful, along with several examples of fitting to domain-specific data as proofs of concept.
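The agent layer underlying both Jones's model and MCPM can be sketched in a few lines. The following is a minimal, illustrative 2D version of the sense-rotate-move-deposit loop, with the deterministic "turn toward the strongest probe" rule replaced by a probabilistic, softmax-style turn choice in the spirit of MCPM's stochastic generalization; all parameter names and values here are assumptions for illustration, not the paper's actual implementation.

```python
import math
import random

def agent_step(pos, heading, trail, sense_angle=0.4, sense_dist=3.0,
               step=1.0, rng=random.random):
    """One sense-rotate-move-deposit update for a Physarum-style agent.

    Jones's original model turns deterministically toward the strongest
    sensed trail; here the turn is *sampled* with probability proportional
    to an exponential weighting of the sensed trail (illustrative stand-in
    for MCPM's probabilistic formulation).
    """
    x, y = pos
    # Sense the trail field at three probes: left, ahead, right.
    angles = (heading - sense_angle, heading, heading + sense_angle)
    weights = []
    for a in angles:
        sx = int(x + sense_dist * math.cos(a)) % len(trail)
        sy = int(y + sense_dist * math.sin(a)) % len(trail[0])
        weights.append(math.exp(trail[sx][sy]))  # softmax-style weighting
    # Sample the new heading proportionally to the sensed weights.
    r, total = rng() * sum(weights), 0.0
    for a, w in zip(angles, weights):
        total += w
        if r <= total:
            heading = a
            break
    # Move forward (periodic boundary) and deposit trail.
    x = (x + step * math.cos(heading)) % len(trail)
    y = (y + step * math.sin(heading)) % len(trail[0])
    trail[int(x)][int(y)] += 1.0
    return (x, y), heading
```

In a full simulation, many such agents run in parallel over a shared trail grid that is periodically diffused and decayed; the emergent trail density is what MCPM reads out as a reconstructed transport network.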


2021, pp. 1-36
Author(s): Julian Francis Miller

Abstract Artificial neural networks (ANNs) were originally inspired by the brain; however, very few models use evolution and development, both of which are fundamental to the construction of the brain. We describe a simple neural model, called IMPROBED, in which two neural programs construct an artificial brain that can simultaneously solve multiple computational problems. One program represents the neuron soma and the other the dendrite. The soma program decides whether neurons move, change, die, or replicate. The dendrite program decides whether dendrites extend, change, die, or replicate. Since developmental programs build networks that change over time, it is necessary to define new problem classes that are suitable to evaluate such approaches. We show that the pair of evolved programs can build a single network from which multiple conventional ANNs can be extracted, each of which can solve a different computational problem. Our approach is quite general and it could be applied to a much wider variety of problems.
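IMPROBED's soma and dendrite programs are evolved, not hand-written; purely to show the shape of the developmental loop the abstract describes, here is a toy stand-in in which a fixed soma "program" maps a neuron's local state to one of the listed actions (move is omitted, and the state variables, thresholds, and rules are invented for illustration).

```python
def soma_program(health, n_dendrites):
    """Toy stand-in for an evolved soma program: maps a neuron's local
    state to one developmental action. Rules are invented for
    illustration; in IMPROBED this mapping is an evolved program."""
    if health < 0.2:
        return "die"
    if health > 0.8 and n_dendrites >= 2:
        return "replicate"
    return "change"

def develop(neurons, steps=5):
    """Apply the soma program to every neuron for a few developmental
    steps; dendrite-program handling is elided for brevity.
    Each neuron is a (health, n_dendrites) pair."""
    for _ in range(steps):
        next_gen = []
        for health, dendrites in neurons:
            action = soma_program(health, dendrites)
            if action == "die":
                continue
            if action == "replicate":
                next_gen.append((health / 2, dendrites))  # parent splits
                next_gen.append((health / 2, dendrites))  # into two children
            else:  # "change": drift toward a healthier state
                next_gen.append((min(1.0, health + 0.1), dendrites))
        neurons = next_gen
    return neurons
```

The key point the sketch preserves is that the network is *built* by repeatedly running small local programs, and conventional ANNs are only extracted from the grown structure afterwards.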


2021, pp. 1-31
Author(s): Marcus Krellner, The Anh Han

Abstract Indirect reciprocity is an important mechanism for promoting cooperation among self-interested agents. Simplified, it means “you help me; therefore somebody else will help you” (in contrast to direct reciprocity: “you help me; therefore I will help you”). Indirect reciprocity can be achieved via reputation and norms. Strategies relying on these principles, such as the so-called leading eight, can maintain high levels of cooperation and remain stable against invasion, even in the presence of errors. However, this is only the case if the reputation of an agent is modeled as a shared public opinion. If agents have private opinions and hence can disagree about whether somebody is good or bad, even rare errors can cause cooperation to break apart. We show that most strategies can overcome the private assessment problem by applying pleasing. A pleasing agent acts in accordance with others' expectations of its behavior (i.e., pleasing them) instead of being guided by its own private assessment. As such, a pleasing agent can achieve a better reputation than previously considered strategies when there is disagreement in the population. Pleasing is effective even if the opinions of only a few other individuals are considered and even when it bears additional costs. Finally, through a more exhaustive analysis of the parameter space than previous studies, we show that some of the leading eight still function under private assessment, i.e., cooperation rates stay well above an objective baseline. Yet pleasing strategies supersede formerly described ones and enhance cooperation.
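The core contrast the abstract draws can be made concrete with a small sketch: a standard discriminator under private assessment acts on its *own* view of the recipient, whereas a pleasing agent acts on what its observers think. This is a deliberately simplified reading of the mechanism (majority vote over a handful of observers), not the paper's exact model.

```python
from collections import Counter

def standard_action(my_view_of_recipient):
    """A discriminator under private assessment: cooperate only if *I*
    privately consider the recipient good."""
    return "cooperate" if my_view_of_recipient == "good" else "defect"

def pleasing_action(observer_views_of_recipient):
    """A pleasing agent acts on the majority opinion of its observers
    about the recipient, not on its own private assessment (a simplified
    illustration of the mechanism; ties default to 'good')."""
    tally = Counter(observer_views_of_recipient)
    majority = "good" if tally["good"] >= tally["bad"] else "bad"
    return "cooperate" if majority == "good" else "defect"
```

Under disagreement, the standard agent can defect against a recipient most of the population considers good, damaging its own reputation with those observers; the pleasing agent avoids exactly that failure mode.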


2021, pp. 1-26
Author(s): Barbora Hudcová, Tomáš Mikolov

Abstract In order to develop systems capable of artificial evolution, we need to identify which systems can produce complex behavior. We present a novel classification method applicable to any class of deterministic discrete space and time dynamical systems. The method is based on classifying the asymptotic behavior of the average computation time in a given system before entering a loop. We were able to identify a critical region of behavior that corresponds to a phase transition from ordered behavior to chaos across various classes of dynamical systems. To show that our approach can be applied to many different computational systems, we demonstrate the results of classifying cellular automata, Turing machines, and random Boolean networks. Further, we use this method to classify 2D cellular automata to automatically find those with interesting, complex dynamics. We believe that our work can be used to design systems in which complex structures emerge. Also, it can be used to compare various versions of existing attempts to model open-ended evolution (Channon, 2006; Ofria & Wilke, 2004; Ray, 1991).
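The quantity at the heart of this classification, the time a deterministic finite system runs before entering a loop, is easy to measure directly. Below is a minimal sketch: a generic transient-length routine plus an elementary cellular automaton stepper to apply it to; the classification itself (how the *average* transient scales with system size across random initial conditions) is left as a usage note, and details of the paper's exact procedure are not reproduced here.

```python
def transient_length(step, state):
    """Iterate a deterministic map until a state repeats; return the
    number of steps taken before the trajectory enters its loop."""
    seen = {}
    t = 0
    while state not in seen:
        seen[state] = t
        state = step(state)
        t += 1
    return seen[state]  # first visit time of the repeated state

def eca_step(rule):
    """Build one synchronous update of an elementary cellular automaton
    (Wolfram rule numbering) on a periodic lattice of 0/1 cells."""
    table = [(rule >> i) & 1 for i in range(8)]
    def step(cells):
        n = len(cells)
        return tuple(
            table[4 * cells[(i - 1) % n] + 2 * cells[i] + cells[(i + 1) % n]]
            for i in range(n)
        )
    return step
```

To classify a rule in the paper's spirit, one would average `transient_length` over many random initial conditions at increasing lattice sizes and inspect the asymptotic growth: bounded for ordered rules, roughly exponential for chaotic ones, with the critical regime in between.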


2021, pp. 1-21
Author(s): Chloe M. Barnes, Abida Ghouri, Peter R. Lewis

Abstract Understanding how evolutionary agents behave in complex environments is a challenging problem. Agents can be faced with complex fitness landscapes derived from multi-stage tasks, interaction with others, and limited environmental feedback. Agents that evolve to overcome these can sometimes access greater fitness, as a result of factors such as cooperation and tool use. However, it is often difficult to explain why evolutionary agents behave in certain ways, and what specific elements of the environment or task may influence the ability of evolution to find goal-achieving behaviours; even seemingly simple environments or tasks may contain features that affect agent evolution in unexpected ways. We explore principled simplification of evolutionary agent-based models as a possible route to aiding their explainability. Using the River Crossing Task (RCT) as a case study, we draw on analysis in the Minimal River Crossing (RC-) Task testbed, which was designed to simplify the original task while keeping its key features. Using this method, we present new analysis concerning when agents evolve to successfully complete the RCT. We demonstrate that the RC- environment can be used to understand the effect that a cost to movement has on agent evolution, and that these findings can be generalised back to the original RCT. Then, we present new insight into the use of principled simplification in understanding evolutionary agents. We find evidence that behaviour dependent on features that survive simplification, such as problem structure, is amenable to prediction, while predicting behaviour dependent on features that are typically reduced in simplification, such as scale, can be invalid.


2021, pp. 1-21
Author(s): Thomas Helmuth, Lee Spector

Abstract In genetic programming, an evolutionary method for producing computer programs that solve specified computational problems, parent selection is ordinarily based on aggregate measures of performance across an entire training set. Lexicase selection, by contrast, selects on the basis of performance on random sequences of training cases; this has been shown to enhance problem-solving power in many circumstances. Lexicase selection can also be seen as better reflecting biological evolution, by modeling sequences of challenges that organisms face over their lifetimes. Recent work has demonstrated that the advantages of lexicase selection can be amplified by down-sampling, meaning that only a random subsample of the training cases is used each generation. This can be seen as modeling the fact that individual organisms encounter only subsets of the possible environments and that environments change over time. Here we provide the most extensive benchmarking of down-sampled lexicase selection to date, showing that its benefits hold up to increased scrutiny. The reasons that down-sampling helps, however, are not yet fully understood. Hypotheses include that down-sampling allows for more generations to be processed with the same budget of program evaluations; that the variation of training data across generations acts as a changing environment, encouraging adaptation; or that it reduces overfitting, leading to more general solutions. We systematically evaluate these hypotheses, finding evidence against all three, and instead draw the conclusion that down-sampled lexicase selection's main benefit stems from the fact that it allows the evolutionary process to examine more individuals within the same computational budget, even though each individual is examined less completely.
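Down-sampled lexicase selection is compact enough to sketch directly. The following follows the standard published formulation of lexicase selection (filter the candidate pool on training cases in random order, keeping only the elite on each), with a separate per-generation down-sampling step; the error matrix layout and parameter names are choices made for this sketch.

```python
import random

def lexicase_select(population, errors, case_ids, rng=random):
    """Select one parent by lexicase selection.

    `errors[i][c]` is individual i's error on training case c (lower is
    better). Cases are shuffled, then the candidate pool is filtered to
    the best performers on each case in turn."""
    candidates = list(range(len(population)))
    cases = list(case_ids)
    rng.shuffle(cases)
    for c in cases:
        best = min(errors[i][c] for i in candidates)
        candidates = [i for i in candidates if errors[i][c] == best]
        if len(candidates) == 1:
            break
    return population[rng.choice(candidates)]

def down_sample(all_cases, fraction, rng=random):
    """Pick the random subsample of training cases used this generation
    (the 'down-sampling' the abstract refers to)."""
    k = max(1, int(len(all_cases) * fraction))
    return rng.sample(all_cases, k)
```

Each generation, `down_sample` is called once and every parent selection uses only the sampled cases; since each individual is then evaluated on a fraction of the cases, the saved evaluations can fund proportionally more generations or a larger population, which is the benefit the paper's analysis ultimately points to.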


2021, pp. 1-25
Author(s): Tran Nguyen Minh-Thai, Sandhya Samarasinghe, Michael Levin

Abstract Many biological organisms regenerate structure and function after damage. Despite the long history of research on molecular mechanisms, many questions remain about algorithms by which cells can cooperate towards the same invariant morphogenetic outcomes. Therefore, conceptual frameworks are needed not only for motivating hypotheses for advancing the understanding of regeneration processes in living organisms, but also for regenerative medicine and synthetic biology. Inspired by planarian regeneration, this study offers a novel generic conceptual framework that hypothesizes mechanisms and algorithms by which cell collectives may internally represent an anatomical target morphology towards which they build after damage. Further, the framework contributes a novel nature-inspired computing method for self-repair in engineering and robotics. Our framework, based on past in vivo and in silico studies on planaria, hypothesizes efficient novel mechanisms and algorithms to achieve complete and accurate regeneration of a simple in silico flatworm-like organism from any damage, much like the body-wide immortality of planaria, with minimal information and algorithmic complexity. This framework, which extends our previous circular tissue repair model, integrates two levels of organization: tissue and organism. In Level 1, three individual in silico tissues (head, body, and tail, each with a large number of tissue cells and a single stem cell at the centre) repair themselves through efficient local communications. Here, the contribution extends our circular tissue model to other shapes and invests them with tissue-wide immortality through an information field holding the minimum body plan. In Level 2, individual tissues combine to form a simple organism. Specifically, the three stem cells form a network that coordinates organism-wide regeneration with the help of Level 1. Here we contribute novel concepts for collective decision-making by stem cells for stem cell regeneration and large-scale recovery. Both levels (tissue cells and stem cells) represent networks that perform simple neural computations and form a feedback control system. With simple and limited cellular computations, our framework minimises computation and algorithmic complexity to achieve complete recovery. We report results from computer simulations of the framework to demonstrate its robustness in recovering the organism after any injury. This comprehensive hypothetical framework, which significantly extends existing biological regeneration models, offers a new way to conceptualise the information-processing aspects of regeneration, which may also help design living and non-living self-repairing agents.


2021, pp. 1-16
Author(s): Toby Howison, Josie Hughes, Fumiya Iida

Abstract Behavioral diversity seen in biological systems is, at the most basic level, driven by interactions between physical materials and their environment. In this context we are interested in falling paper systems, specifically the V-shaped falling paper (VSFP) system that exhibits a set of discrete falling behaviors across the morphological parameter space. Our previous work has investigated how morphology influences dominant falling behaviors in the VSFP system. In this article we build on this analysis to investigate the nature of behavioral transitions in the same system. First, we investigate stochastic behavior transitions. We demonstrate how morphology influences the likelihood of different transitions, with certain morphologies leading to a wide range of possible paths through the behavior-space. Second, we investigate deterministic transitions. To investigate behaviors over longer time periods than available in falling experiments we introduce a new experimental platform. We demonstrate how we can induce behavior transitions by modulating the energy input to the system. Certain behavior transitions are found to be irreversible, exhibiting a form of hysteresis, while others are fully reversible. Certain morphologies are shown to behave like simplistic sequential logic circuits, indicating that the system has a form of memory encoded into the morphology–environment interactions. Investigating the limits of how morphology–environment interactions induce non-trivial behaviors is a key step for the design of embodied artificial life-forms.
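The claim that certain morphologies act like "simplistic sequential logic circuits" amounts to saying the system's behaviour depends on its history, not just its current input. As a loose analogy only, a Schmitt-trigger-style latch captures the shape of such hysteresis: the behaviour switches up at one energy threshold and reverts only at a much lower one. The behaviour names, thresholds, and dynamics here are invented for illustration and are not fitted to the paper's experiments.

```python
def make_behaviour_latch(up_threshold=0.7, down_threshold=0.3):
    """Toy hysteresis latch: a 1-bit 'memory' whose current behaviour
    depends on the history of energy input, analogous to the
    irreversible/reversible behaviour transitions described in the
    falling-paper experiments (thresholds and mode names invented)."""
    state = {"mode": "tumbling"}
    def update(energy):
        if state["mode"] == "tumbling" and energy > up_threshold:
            state["mode"] = "gliding"    # switch up at high energy input
        elif state["mode"] == "gliding" and energy < down_threshold:
            state["mode"] = "tumbling"   # reverts only at much lower energy
        return state["mode"]
    return update
```

Because the two thresholds differ, an intermediate energy level yields different behaviours depending on how the system got there, which is precisely the memory-in-morphology effect the abstract describes.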

