Multirobot Coordination: Recently Published Documents

TOTAL DOCUMENTS: 29 (five years: 5)
H-INDEX: 12 (five years: 2)

2021 ◽ pp. 1-19 ◽ Author(s): Anna Mannucci, Lucia Pallottino, Federico Pecora

2020 ◽ Vol 36 (4) ◽ pp. 1189-1206 ◽ Author(s): Yunus Emre Sahin, Petter Nilsson, Necmiye Ozay

Author(s): Floriano De Rango, Nunzia Palmieri, Mauro Tropea

2019 ◽ Vol 2019 ◽ pp. 1-16 ◽ Author(s): P. Paniagua-Contro, E. G. Hernandez-Martinez, O. González-Medina, J. González-Sierra, J. J. Flores-Godoy, ...

This paper extends leader-follower behaviours to a combined set of kinematic models of omnidirectional and differential-drive wheeled mobile robots. The control strategies are based on decentralized measurements of distance and heading angles. By combining the kinematic models, these strategies produce both standard and new mechanical behaviours related to rigid-body or n-trailer approaches. The analysis is given for pairs of robots and extended to multiple robots with a directed tree-shaped communication topology. By combining these behaviours, it is possible to form platoons of robots, as obtained from cluster-space or virtual-structure approaches, but now defined by local measurements and inter-robot communication. Numerical simulations and real-time experiments show the performance of the approach and its applicability to multirobot tasks.
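The core idea of a decentralized leader-follower law based on distance and heading-angle measurements can be sketched as follows. This is a generic illustrative controller for a differential-drive (unicycle) follower, not the paper's actual control law; the gains `k_v`, `k_w` and the desired distance `d_des` are assumptions chosen for illustration.

```python
import numpy as np

def follower_control(leader_xy, follower_pose, d_des=1.0, k_v=1.0, k_w=2.0):
    """Hypothetical distance/heading leader-follower law for a unicycle.

    leader_xy     -- (x, y) of the leader, as measured by the follower
    follower_pose -- (x, y, theta) of the follower
    Returns (v, w): forward and angular velocity commands.
    """
    x, y, theta = follower_pose
    dx, dy = leader_xy[0] - x, leader_xy[1] - y
    d = np.hypot(dx, dy)                              # measured distance to leader
    alpha = np.arctan2(dy, dx) - theta                # heading angle to leader
    alpha = np.arctan2(np.sin(alpha), np.cos(alpha))  # wrap to [-pi, pi]
    v = k_v * (d - d_des) * np.cos(alpha)             # drive the distance error to zero
    w = k_w * alpha                                   # steer toward the leader
    return v, w
```

Chaining such pairwise links along a directed tree of leader-follower relations is what produces the platoon-like behaviours the abstract describes, with each follower acting only on its own local measurements.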


AI Magazine ◽ 2014 ◽ Vol 35 (4) ◽ pp. 61-74 ◽ Author(s): Logan Yliniemi, Adrian K. Agogino, Kagan Tumer

Teams of artificially intelligent planetary rovers have tremendous potential for space exploration, allowing for reduced cost, increased flexibility, and increased reliability. However, having multiple autonomous devices acting simultaneously leads to a problem of coordination: to achieve the best results, they should work together. This is not a simple task. Due to the large distances and harsh environments, a rover must be able to perform a wide variety of tasks with a wide variety of potential teammates in uncertain and unsafe environments. Directly coding all the rules needed to handle this coordination and uncertainty reliably is problematic. Instead, this article examines tackling the problem through coordinated reinforcement learning: rather than being programmed what to do, the rovers iteratively learn through trial and error to take actions that lead to high overall system return. To enable coordination while still letting each agent learn and act independently, we employ state-of-the-art reward shaping techniques. The article uses visualization techniques to break down complex performance indicators into an accessible form, and identifies key future research directions.
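One well-known reward-shaping technique for this kind of multiagent credit assignment is the difference reward, D_i = G(z) - G(z with agent i removed): each agent is credited only with its own contribution to the global return G. The sketch below uses a toy coverage objective (each point of interest is scored by its closest rover) as a stand-in for the rover domain; the objective and the "null position" counterfactual are illustrative assumptions, not the article's actual setup.

```python
import numpy as np

def global_return(positions, pois):
    """Toy global objective G: each POI is scored by its closest rover,
    with score 1 / (1 + distance), so closer coverage is better."""
    return sum(
        1.0 / (1.0 + min(np.hypot(p[0] - r[0], p[1] - r[1]) for r in positions))
        for p in pois
    )

def difference_reward(i, positions, pois, null_pos=(1e6, 1e6)):
    """D_i = G(actual team) - G(team with rover i counterfactually removed,
    here modeled by moving it to a far-away 'null' position)."""
    counterfactual = list(positions)
    counterfactual[i] = null_pos
    return global_return(positions, pois) - global_return(counterfactual, pois)
```

Because the counterfactual subtracts out the effect of all other rovers, each learner receives a low-noise signal about its own contribution while still being aligned with the shared objective, which is what lets the agents learn independently yet coordinate.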

