Emergent Intelligence and Distributed Stochastic Optimization

2022 ◽  
Vol 1215 (1) ◽  
pp. 012001
Author(s):  
O.N. Granichin ◽  
O.A. Granichina ◽  
V.A. Erofeeva ◽  
A.V. Leonova ◽  
A.A. Senov

Abstract Emergent intelligence is a property of a system of elements that is not inherent in any element individually. Such behavior arises from local communications, helps the system adapt to emerging uncertainties and achieve a global goal, and is observed in the natural world. A simplified example of emergent intelligence from the natural world is given. The paper considers reproducing this natural behavior with simple technical devices that are resource-limited and cheap to build, using multi-agent approaches, and examines distributed algorithms based on local communications, which are more robust to noise.
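
The abstract does not spell out the algorithm, but the idea of distributed computation over local communications can be illustrated with a minimal consensus-style sketch: each agent repeatedly averages noisy observations of its neighbours' values with a decaying step size, so a global quantity emerges from purely local exchanges. The topology, step-size schedule, and noise level below are illustrative assumptions, not the authors' method.

```python
# Minimal sketch (not the authors' algorithm) of distributed averaging with
# local communications only: a global quantity emerges from local exchanges.
import random

def distributed_average(values, neighbours, steps=200, noise=0.01):
    """values: one initial measurement per agent.
    neighbours: dict mapping each agent to the agents it can talk to locally."""
    x = list(values)
    for t in range(1, steps + 1):
        gamma = 1.0 / t                       # decaying step size damps link noise
        new_x = x[:]
        for i, nbrs in neighbours.items():
            correction = 0.0
            for j in nbrs:
                observed = x[j] + random.gauss(0.0, noise)   # noisy local link
                correction += observed - x[i]
            new_x[i] = x[i] + gamma * correction / len(nbrs)
        x = new_x
    return x

# Example: four agents on a ring converge towards the global mean (about 4.5).
ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
print(distributed_average([1.0, 5.0, 9.0, 3.0], ring))
```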

Author(s):  
Stuart P. Wilson

Self-organization describes a dynamic in a system whereby local interactions between individuals collectively yield global order, i.e. spatial patterns unobservable in their entirety to the individuals. By this working definition, self-organization is intimately related to chaos, i.e. global order in the dynamics of deterministic systems that are locally unpredictable. A useful distinction is that a small perturbation to a chaotic system causes a large deviation in its trajectory, i.e. the butterfly effect, whereas self-organizing patterns are robust to noise and perturbation. For many, self-organization is as important to the understanding of biological processes as natural selection. For some, self-organization explains the origin of the complex forms that compete for survival in the natural world. This chapter outlines some fundamental ideas from the study of simulated self-organizing systems, before suggesting how self-organizing principles could be applied through biohybrid societies to establish new theories of living systems.
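
As a toy illustration of the chapter's central idea (not taken from the chapter itself), the sketch below applies a purely local majority rule on a ring of cells: coherent global domains emerge from a random start, even though no cell can observe the pattern as a whole.

```python
# Toy self-organization sketch: each cell updates from its own state and its two
# neighbours only, yet contiguous global domains form from a random start.
import random

def step(cells):
    n = len(cells)
    return [1 if cells[(i - 1) % n] + cells[i] + cells[(i + 1) % n] >= 2 else 0
            for i in range(n)]

random.seed(1)
cells = [random.randint(0, 1) for _ in range(60)]
for _ in range(30):
    cells = step(cells)
print("".join("#" if c else "." for c in cells))   # blocks of # and . (global order)
```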


2011 ◽  
Vol 20 (02) ◽  
pp. 271-295 ◽  
Author(s):  
VÍCTOR SÁNCHEZ-ANGUIX ◽  
SOLEDAD VALERO ◽  
ANA GARCÍA-FORNES

An agent-based Virtual Organization is a complex entity where dynamic collections of agents agree to share resources in order to accomplish a global goal or offer a complex service. An important factor in the performance of a Virtual Organization is how its agents are distributed across the computational resources: the final distribution should provide good load balancing for the organization. In this article, a genetic algorithm is applied to calculate a proper distribution of agents across hosts in an agent-based Virtual Organization. Additionally, an abstract multi-agent system architecture that provides the infrastructure for Virtual Organization distribution is introduced. The genetic solution employs an elitist crossover operator in which one of the children inherits the most promising genetic material from the parents with higher probability. To validate the proposal, the designed genetic algorithm has been successfully compared against several heuristics in different scenarios.
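
The operator details are not given in the abstract, so the following is a simplified sketch under stated assumptions: a chromosome assigns each agent to a host, fitness rewards balanced host loads, and the "elitist" crossover is interpreted as biasing each gene towards the fitter parent. The agent loads and host count are hypothetical.

```python
# Simplified GA sketch (not the authors' exact operator) for assigning agents to
# hosts so that host loads stay balanced.
import random

AGENT_LOAD = [4, 2, 7, 1, 3, 5, 2, 6]     # hypothetical load of each agent
N_HOSTS = 3

def fitness(chrom):
    loads = [0.0] * N_HOSTS
    for agent, host in enumerate(chrom):
        loads[host] += AGENT_LOAD[agent]
    return -(max(loads) - min(loads))      # better balance -> higher fitness

def elitist_crossover(p1, p2, bias=0.75):
    # The child takes each gene from the fitter parent with higher probability.
    best, other = (p1, p2) if fitness(p1) >= fitness(p2) else (p2, p1)
    return [b if random.random() < bias else o for b, o in zip(best, other)]

def mutate(chrom, rate=0.1):
    return [random.randrange(N_HOSTS) if random.random() < rate else g for g in chrom]

pop = [[random.randrange(N_HOSTS) for _ in AGENT_LOAD] for _ in range(30)]
for _ in range(100):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]                     # keep the fittest distributions
    pop = parents + [mutate(elitist_crossover(*random.sample(parents, 2)))
                     for _ in range(20)]
best = max(pop, key=fitness)
print("assignment:", best, "balance score:", fitness(best))
```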


Author(s):  
GAO-WEI YAN ◽  
ZHAN-JU HAO

This paper introduces a novel numerical stochastic optimization algorithm inspired by the behavior of clouds in the natural world, designated the atmosphere clouds model optimization (ACMO) algorithm. It attempts to simulate, in a simple way, the generation, movement and spreading behavior of clouds. The ACMO algorithm has been tested on a set of benchmark functions in comparison with two other evolutionary algorithms: the particle swarm optimization (PSO) algorithm and the genetic algorithm (GA). The results demonstrate that the proposed algorithm has certain advantages in solving multimodal functions, while the PSO algorithm achieves better convergence accuracy. In conclusion, the ACMO algorithm is an effective method for solving optimization problems.
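
The abstract gives only the metaphor, so the sketch below is a loose interpretation of the generation, move, and spread behaviors rather than the published ACMO procedure: clouds are generated at random positions, rain sample "droplets" around their centres, and drift towards the best droplet while their spread contracts.

```python
# Rough, heavily simplified cloud-metaphor sketch; an interpretation for
# illustration, not the published ACMO algorithm.
import random

def sphere(x):                              # benchmark function to minimise
    return sum(v * v for v in x)

def cloud_search(f, dim=5, n_clouds=6, n_drops=20, iters=200, bound=5.0):
    # Generation: clouds appear at random positions with an initial spread.
    clouds = [([random.uniform(-bound, bound) for _ in range(dim)], bound / 2)
              for _ in range(n_clouds)]
    best_x, best_f = None, float("inf")
    for _ in range(iters):
        new_clouds = []
        for centre, spread in clouds:
            # Spread: each cloud rains droplets (samples) around its centre.
            drops = [[c + random.gauss(0.0, spread) for c in centre]
                     for _ in range(n_drops)]
            drops.append(centre)
            best_drop = min(drops, key=f)
            if f(best_drop) < best_f:
                best_x, best_f = best_drop, f(best_drop)
            # Move: the cloud drifts to its best droplet and condenses slightly.
            new_clouds.append((best_drop, spread * 0.95))
        clouds = new_clouds
    return best_x, best_f

print(cloud_search(sphere))
```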


2018 ◽  
Vol 21 (62) ◽  
pp. 53
Author(s):  
Anastasios Alexiadis ◽  
Ioannis Refanidis ◽  
Ilias Sakellariou

Automated meeting scheduling is the task of reaching an agreement on a time slot for a new meeting, taking into account the participants' preferences over various aspects of the problem. Such a negotiation is commonly performed in a non-automated manner: by inspecting their schedules, the users decide whether they can reschedule existing individual activities and, in some cases, already scheduled meetings in order to accommodate the new meeting request in a particular time slot. In this work, we take advantage of SelfPlanner, an automated system that employs greedy stochastic optimization algorithms to schedule individual activities under a rich model of preferences and constraints, and we extend that work to accommodate meetings. For each new meeting request, participants decide whether they can accommodate the meeting in a particular time slot by employing SelfPlanner's underlying algorithms to automatically reschedule existing individual activities. Time slots are prioritized in terms of the number of users that need to reschedule existing activities. An agreement is reached as soon as all agents can schedule the meeting at a particular time slot without any of them experiencing an overall utility loss, that is, also taking into account the utility gain from the meeting. This dynamic multi-agent meeting scheduling approach has been tested on a variety of test problems with very promising results.
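
SelfPlanner's internal algorithms are not described here, but the agreement rule itself can be sketched as follows, assuming each participant can report whether a slot forces rescheduling and what utility change rescheduling would cost; the user names, slots, and utility values are hypothetical.

```python
# Schematic sketch (not SelfPlanner's implementation) of the agreement rule:
# try slots with the fewest required reschedules first, accept the first slot
# where nobody suffers an overall utility loss once the meeting's gain counts.
def choose_slot(slots, participants, meeting_gain):
    """participants: dict user -> callable(slot) returning
    (needs_reschedule: bool, utility_delta_of_rescheduling: float)."""
    def n_reschedules(slot):
        return sum(participants[u](slot)[0] for u in participants)

    for slot in sorted(slots, key=n_reschedules):        # fewest disruptions first
        deltas = [participants[u](slot)[1] for u in participants]
        if all(d + meeting_gain >= 0 for d in deltas):   # nobody loses overall
            return slot
    return None                                          # no agreement reached

# Hypothetical example: two users, three candidate slots.
users = {
    "alice": lambda s: (s == "mon9", -2.0 if s == "mon9" else 0.0),
    "bob":   lambda s: (s == "tue14", -5.0 if s == "tue14" else 0.0),
}
print(choose_slot(["mon9", "tue14", "wed11"], users, meeting_gain=3.0))
```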


Author(s):  
Kaushik Das Sharma

Multi-agent optimization, or population-based search, techniques have become increasingly popular compared to their single-agent counterparts. Single-agent gradient-based search algorithms are prone to becoming trapped in local optima and also incur a higher computational cost. Multi-Agent Stochastic Optimization (MASO) algorithms are powerful enough to overcome most of these drawbacks. This chapter presents a comparison of five MASO algorithms, namely the genetic algorithm, particle swarm optimization, differential evolution, the harmony search algorithm, and the gravitational search algorithm. These MASO algorithms are utilized here to design the state feedback regulator for a Twin Rotor MIMO System (TRMS). The TRMS is a multi-modal process, and the design of its state feedback regulator is quite difficult using the conventional methods available. MASO algorithms are typically suitable for such complex process optimizations. The performances of the different MASO algorithms are presented and discussed in light of designing the state regulator for the TRMS.
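
The chapter's TRMS model and cost function are not reproduced in the abstract, so the sketch below only illustrates the general recipe: a bare-bones particle swarm searches for state-feedback gains that minimise a simulated quadratic cost on a hypothetical two-state plant standing in for the TRMS.

```python
# Minimal sketch of MASO-based regulator tuning: a bare-bones PSO searches for
# feedback gains K minimising a simulated quadratic cost on a stand-in plant.
import random

A = [[0.0, 1.0], [-2.0, -0.5]]            # hypothetical linear plant x' = Ax + Bu
B = [0.0, 1.0]

def cost(K, dt=0.01, T=500):
    x = [1.0, 0.0]                         # initial disturbance
    J = 0.0
    for _ in range(T):
        u = -(K[0] * x[0] + K[1] * x[1])   # state feedback u = -Kx
        dx = [A[0][0] * x[0] + A[0][1] * x[1] + B[0] * u,
              A[1][0] * x[0] + A[1][1] * x[1] + B[1] * u]
        x = [x[0] + dt * dx[0], x[1] + dt * dx[1]]
        J += dt * (x[0] ** 2 + x[1] ** 2 + 0.1 * u ** 2)
    return J

# Bare-bones particle swarm over the two gains.
swarm = [{"x": [random.uniform(0, 10), random.uniform(0, 10)], "v": [0.0, 0.0]}
         for _ in range(15)]
for p in swarm:
    p["best"] = p["x"][:]
gbest = min((p["best"] for p in swarm), key=cost)[:]
for _ in range(50):
    for p in swarm:
        for d in range(2):
            p["v"][d] = (0.7 * p["v"][d]
                         + 1.5 * random.random() * (p["best"][d] - p["x"][d])
                         + 1.5 * random.random() * (gbest[d] - p["x"][d]))
            p["x"][d] += p["v"][d]
        if cost(p["x"]) < cost(p["best"]):
            p["best"] = p["x"][:]
        if cost(p["best"]) < cost(gbest):
            gbest = p["best"][:]
print("gains:", gbest, "cost:", cost(gbest))
```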


Water ◽  
2020 ◽  
Vol 12 (10) ◽  
pp. 2688 ◽  
Author(s):  
Milad Hooshyar ◽  
S. Jamshid Mousavi ◽  
Masoud Mahootchi ◽  
Kumaraswamy Ponnambalam

Stochastic dynamic programming (SDP) is a widely used method for reservoir operations optimization under uncertainty but suffers from the dual curses of dimensionality and modeling. Reinforcement learning (RL), a simulation-based stochastic optimization approach, can nullify the curse of modeling that arises from the need to calculate a very large transition probability matrix. RL mitigates the curse of dimensionality but cannot eliminate it completely, as it remains computationally intensive in complex multi-reservoir systems. This paper presents a multi-agent RL approach combined with an aggregation/decomposition (AD-RL) method for reducing the curse of dimensionality in multi-reservoir operation optimization problems. In this model, each reservoir is individually managed by a specific operator (agent) while cooperating with the other agents systematically on finding a near-optimal operating policy for the whole system. Each agent makes a decision (release) based on its current state and the feedback it receives from the states of all upstream and downstream reservoirs. The method, along with an efficient artificial neural network-based robust procedure for tuning the Q-learning parameters, has been applied to a real-world five-reservoir problem, the Parambikulam–Aliyar Project (PAP) in India. We demonstrate that the proposed AD-RL approach helps to derive operating policies that are better than or comparable with the policies obtained by other stochastic optimization methods, with less computational burden.
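
The paper's aggregation/decomposition scheme and reward structure are not detailed in the abstract; the toy sketch below only conveys the coordination idea, with one Q-learning agent per reservoir whose state combines its own discretised storage with a summary of a neighbouring reservoir's storage. The dynamics, rewards, and discretisation are illustrative assumptions.

```python
# Toy sketch (not the paper's AD-RL model): one Q-learning agent per reservoir,
# each conditioning on its own storage and a summary of its neighbour's storage.
import random
from collections import defaultdict

LEVELS, RELEASES = 5, 3                     # discretised storage levels and actions

class ReservoirAgent:
    def __init__(self, alpha=0.1, gamma=0.95, eps=0.1):
        self.Q = defaultdict(float)
        self.alpha, self.gamma, self.eps = alpha, gamma, eps

    def act(self, state):
        if random.random() < self.eps:
            return random.randrange(RELEASES)
        return max(range(RELEASES), key=lambda a: self.Q[(state, a)])

    def update(self, s, a, r, s_next):
        target = r + self.gamma * max(self.Q[(s_next, b)] for b in range(RELEASES))
        self.Q[(s, a)] += self.alpha * (target - self.Q[(s, a)])

# Two reservoirs in series: the upstream release becomes the downstream inflow.
up, down = ReservoirAgent(), ReservoirAgent()
s_up, s_down = 3, 2
for _ in range(5000):
    a_up = up.act((s_up, s_down))           # state includes downstream summary
    a_down = down.act((s_down, s_up))       # state includes upstream summary
    inflow = random.randint(0, 2)
    s_up_next = max(0, min(LEVELS - 1, s_up + inflow - a_up))
    s_down_next = max(0, min(LEVELS - 1, s_down + a_up - a_down))
    reward = -abs(s_up_next - 2) - abs(s_down_next - 2)   # keep storages near target
    up.update((s_up, s_down), a_up, reward, (s_up_next, s_down_next))
    down.update((s_down, s_up), a_down, reward, (s_down_next, s_up_next))
    s_up, s_down = s_up_next, s_down_next
print("learned up-reservoir Q-table entries:", len(up.Q))
```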

