Epilogue

Author(s):  
Eric Bonabeau ◽  
Marco Dorigo ◽  
Guy Theraulaz

After seven chapters of swarm-based approaches, where do we stand? First of all, it is clear that social insects and, more generally, natural systems can bring much insight into the design of algorithms and artificial problem-solving systems. In particular, artificial swarm-intelligent systems are expected to exhibit the features that may have made social insects so successful in the biosphere: flexibility, robustness, decentralized control, and self-organization. The examples described throughout this book illustrate these features, either explicitly or implicitly. The swarm-based approach therefore looks promising in the face of a world that is becoming ever more complex, dynamic, and overloaded with information. Some issues remain, however, in applying swarm intelligence to problem solving.

1. First, it would be very useful to define methodologies to "program" a swarm or multiagent system so that it performs a given task. There is a similarity here with the problem of training neural networks [167]: how can one tune interaction weights so that the network performs a given task, such as classification or recognition? The fact that (potentially mobile) agents in a swarm can take actions asynchronously and at any spatial location generally makes the problem extremely hard. To solve this "inverse" problem and find the appropriate individual algorithm that generates the desired collective pattern, one can either systematically explore the behaviors of billions of different swarms, or search this huge space of possible swarms with some kind of cost function, assuming a reasonable continuity of the mapping from individual algorithms to collective productions. The latter solution can be based, for example, on artificial evolutionary techniques such as genetic algorithms [152, 171], provided that individual behavior is adequately coded and a cost function can be defined.

2. Second, and perhaps even more fundamental than the issue of programming the system, is that of defining it: How complex should individual agents be? Should they all be identical? Should they have the ability to learn? Should they be able to make logical inferences? Should they be purely reactive? How local should their knowledge of the environment be?
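The evolutionary search described in point 1 can be sketched in a few lines. The encoding (a flat vector of behavioral parameters), the cost function, and all numerical settings below are illustrative assumptions, not a method from the book; in a real application, evaluating the cost would mean simulating the swarm and scoring the emergent collective pattern.

```python
# A minimal genetic-algorithm sketch for searching the space of individual
# behaviors, assuming a reasonably continuous behavior-to-pattern mapping.
import random

random.seed(0)  # reproducible illustration

def collective_cost(genome):
    # Placeholder cost: in practice, run a swarm whose agents follow the
    # rules encoded by `genome` and measure how far the emergent pattern
    # is from the desired one. Here we just score distance to a target.
    target = [0.5] * len(genome)
    return sum((g - t) ** 2 for g, t in zip(genome, target))

def evolve(pop_size=30, genome_len=8, generations=50, mut_rate=0.1):
    pop = [[random.random() for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=collective_cost)
        parents = pop[: pop_size // 2]              # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, genome_len)   # one-point crossover
            child = a[:cut] + b[cut:]
            child = [g + random.gauss(0, 0.05)      # Gaussian mutation
                     if random.random() < mut_rate else g
                     for g in child]
            children.append(child)
        pop = parents + children
    return min(pop, key=collective_cost)

best = evolve()
```

The same loop applies unchanged whatever the genome encodes, which is exactly why a well-defined cost function and behavioral coding are the two prerequisites the text names.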

2021 ◽  
Author(s):  
M.I. Shimelevich ◽  
I.E. Obornev ◽  
E.A. Obornev ◽  
E.A. Rodionov

2017 ◽  
Vol 26 (3) ◽  
pp. 433-437
Author(s):  
Mark Dougherty

Forgetting is an oft-forgotten art. Many artificial intelligence (AI) systems deliver good performance when first implemented; however, as the contextual environment changes, they become out of date and their performance degrades. Learning new knowledge is part of the solution, but forgetting outdated facts and information is a vital part of the process of renewal. However, forgetting proves to be a surprisingly difficult concept to either understand or implement. Much of AI is based on analogies with natural systems, and although all of us have plenty of experiences with having forgotten something, as yet we have only an incomplete picture of how this process occurs in the brain. A recent judgment by the European Court concerns the “right to be forgotten” by web index services such as Google. This has made debate and research into the concept of forgetting very urgent. Given the rapid growth in requests for pages to be forgotten, it is clear that the process will have to be automated and that intelligent systems of forgetting are required in order to meet this challenge.


2006 ◽  
Vol 6 ◽  
pp. 992-997 ◽  
Author(s):  
Alison M. Kerr

More than 20 years of clinical and research experience with affected people in the British Isles has provided insight into particular challenges for therapists, educators, or parents wishing to facilitate learning and to support the development of skills in people with Rett syndrome. This paper considers the challenges in two groups: those due to constraints imposed by the disabilities associated with the disorder and those stemming from the opportunities, often masked by the disorder, allowing the development of skills that depend on less-affected areas of the brain. Because the disorder interferes with the synaptic links between neurones, the functions of the brain that are most dependent on complex neural networks are the most profoundly affected. These functions include speech, memory, learning, generation of ideas, and the planning of fine movements, especially those of the hands. In contrast, spontaneous emotional and hormonal responses appear relatively intact. Whereas failure to appreciate the physical limitations of the disease leads to frustration for therapist and client alike, a clear understanding of the better-preserved areas of competence offers avenues for real progress in learning, the building of satisfying relationships, and the achievement of quality of life.


2002 ◽  
Vol 12 (01) ◽  
pp. 31-43 ◽  
Author(s):  
GARY YEN ◽  
HAIMING LU

In this paper, we propose a genetic algorithm based design procedure for a multi-layer feed-forward neural network. A hierarchical genetic algorithm is used to evolve both the neural network's topology and weighting parameters. Compared with traditional genetic algorithm based designs for neural networks, the hierarchical approach addresses several deficiencies, including a feasibility check highlighted in the literature. A multi-objective cost function is used herein to optimize the performance and topology of the evolved neural network simultaneously. In the prediction of the Mackey–Glass chaotic time series, the networks designed by the proposed approach prove to be competitive with, or even superior to, traditional learning algorithms for multi-layer perceptron networks and radial-basis function networks. Based upon the chosen cost function, a linear weight combination decision-making approach has been applied to derive an approximated Pareto-optimal solution set. Designing a set of neural networks can therefore be viewed as solving a two-objective optimization problem.
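The two central ideas of this abstract can be illustrated compactly: a hierarchical genome in which control genes gate hidden units while parametric genes hold the weights, and a linear weighted combination of the two objectives (prediction error and network size). Everything below is a hypothetical sketch on a toy regression task, not the authors' code; a random search stands in for the full genetic algorithm.

```python
# Sketch: hierarchical genome = (control bits, (input weights, output weights)).
# Cost = w_err * error + w_size * active_units, the linear scalarization of
# the two objectives mentioned in the abstract. All settings are assumptions.
import math
import random

random.seed(1)
MAX_HIDDEN = 6  # upper bound on hidden-layer size

def forward(genome, x):
    control, (w_in, w_out) = genome
    active = [i for i, bit in enumerate(control) if bit]  # gated units
    h = [math.tanh(w_in[i] * x) for i in active]
    return sum(w_out[i] * hj for i, hj in zip(active, h))

def cost(genome, w_err=1.0, w_size=0.01):
    xs = [i / 10 for i in range(-10, 11)]
    err = sum((forward(genome, x) - math.sin(x)) ** 2 for x in xs) / len(xs)
    return w_err * err + w_size * sum(genome[0])  # weighted two objectives

def random_genome():
    control = [random.random() < 0.5 for _ in range(MAX_HIDDEN)]
    w_in = [random.uniform(-2, 2) for _ in range(MAX_HIDDEN)]
    w_out = [random.uniform(-2, 2) for _ in range(MAX_HIDDEN)]
    return (control, (w_in, w_out))

# Random search as a stand-in for the hierarchical GA's evolutionary loop.
best = min((random_genome() for _ in range(500)), key=cost)
```

Sweeping the weights `w_err` and `w_size` and re-running the search traces out the approximated Pareto front between accuracy and topology size.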


Entropy ◽  
2020 ◽  
Vol 22 (12) ◽  
pp. 1365
Author(s):  
Bogdan Muşat ◽  
Răzvan Andonie

Convolutional neural networks utilize a hierarchy of neural network layers. The statistical aspects of information concentration in successive layers can bring insight into the feature abstraction process. We analyze the saliency maps of these layers from the perspective of semiotics, the study of signs and sign-using behavior. In computational semiotics, this aggregation operation (known as superization) is accompanied by a decrease of spatial entropy: signs are aggregated into supersigns. Using spatial entropy, we compute the information content of the saliency maps and study the superization processes which take place between successive layers of the network. In our experiments, we visualize the superization process and show how the obtained knowledge can be used to explain the neural decision model. In addition, we attempt to optimize the architecture of the neural model employing a semiotic greedy technique. To the best of our knowledge, this is the first application of computational semiotics in the analysis and interpretation of deep neural networks.
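The entropy measurement underlying this abstract can be sketched simply: treat a saliency map as a probability distribution over pixels and compute its Shannon entropy. Superization then shows up as a drop in this value between successive layers. The function below is one plain Shannon-entropy variant on a toy map; the paper's precise spatial-entropy definition may differ.

```python
# Sketch: Shannon entropy of a normalized saliency map. A diffuse map has
# high entropy; saliency concentrated into a few "supersigns" has low entropy.
import math

def spatial_entropy(saliency):
    total = sum(sum(row) for row in saliency)
    probs = [v / total for row in saliency for v in row if v > 0]
    return -sum(p * math.log2(p) for p in probs)

# Diffuse saliency over a 4x4 map: maximal entropy, log2(16) = 4 bits.
uniform = [[1.0] * 4 for _ in range(4)]
# All saliency mass aggregated into a single location: zero entropy.
peaked = [[1.0 if (r, c) == (0, 0) else 0.0 for c in range(4)]
          for r in range(4)]
```

Comparing this value across successive layers' saliency maps is the kind of layer-by-layer measurement the abstract describes.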


2021 ◽  
Vol 2 (4) ◽  
pp. 1-8
Author(s):  
Lingjie Fan ◽  
Ang Chen ◽  
Tongyu Li ◽  
Jiao Chu ◽  
...  
