Applications of DEC-MDPs in Multi-Robot Systems

Robotics ◽  
2013 ◽  
pp. 143-165
Author(s):  
Aurélie Beynier ◽  
Abdel-Illah Mouaddib

Optimizing the operation of multi-robot systems that must act cooperatively in large and complex environments has become an important focus of research. This issue is motivated by many applications in which a set of cooperative robots must decide, in a decentralized way, how to execute a large set of tasks in partially observable and uncertain environments. Such decision problems arise when developing exploration rovers, teams of patrolling robots, rescue-robot colonies, mine-clearance robots, and so on. In this chapter, we introduce the problems raised by the decentralized control of multi-robot systems. We first describe some application domains and review the main characteristics of the decision problems the robots must deal with. We then review existing approaches to decentralized multiagent control in stochastic environments. We present Decentralized Markov Decision Processes (DEC-MDPs) and discuss their applicability to real-world multi-robot applications. Finally, we introduce OC-DEC-MDPs and 2V-DEC-MDPs, which have been developed to increase the applicability of DEC-MDPs.
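For readers unfamiliar with the formalism, a DEC-MDP is usually written as a tuple of joint states, per-agent action sets, a transition function, and a reward function. The sketch below is a minimal, hypothetical Python encoding of that tuple with a centralized one-step Bellman backup as a baseline; the class and function names are illustrative assumptions, not the chapter's implementation (decentralized solvers must approximate this backup without sharing the full joint state).

```python
from dataclasses import dataclass
from typing import Callable, Dict, Sequence, Tuple


@dataclass
class DecMDP:
    """Minimal sketch of a DEC-MDP tuple <S, A_1..A_n, P, R> (illustrative only)."""
    states: Sequence[str]                                      # joint states S
    joint_actions: Sequence[Tuple[str, ...]]                   # A = A_1 x ... x A_n
    transition: Callable[[str, Tuple[str, ...], str], float]   # P(s' | s, a)
    reward: Callable[[str, Tuple[str, ...]], float]            # R(s, a)


def bellman_backup(m: DecMDP, V: Dict[str, float], s: str, gamma: float = 0.95) -> float:
    """Centralized one-step Bellman backup over joint actions.

    This is the quantity decentralized agents can only approximate,
    since no single robot observes the full joint state s.
    """
    return max(
        m.reward(s, a) + gamma * sum(m.transition(s, a, sp) * V[sp] for sp in m.states)
        for a in m.joint_actions
    )
```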



2018 ◽  
Vol 62 (9) ◽  
pp. 1284-1300 ◽  
Author(s):  
Khalil Mohamed ◽  
Ayman El Shenawy ◽  
Hany Harb

Abstract Exploring the environment with multi-robot systems is a fundamental process that most automated applications depend on. This paper presents a hybrid decentralized task-assignment approach based on Partially Observable Semi-Markov Decision Processes, called HDec-POSMDPs, a general model for multi-robot coordination and exploration problems in which robots make their own decisions from local data with limited communication within the robot team. The paper compares a variety of multi-robot exploration algorithms that depend on different parameters. Collectively, five metrics are considered: maximizing the total exploration percentage, minimizing overall mission time, reducing the number of hops in the networked robot team, reducing the energy consumed by each robot, and minimizing the number of turns in the path from the start pose cells to the target cells. A team of identical mobile robots performs the coordination and exploration process in an unknown cell-based environment, and the performance of the task depends on the coordination strategy among the robots in the team. The proposed approach is implemented, tested, and evaluated in the MRESim simulator, and its performance is compared with different coordinated exploration strategies for different environments and different team sizes. The experimental results demonstrate good performance of the proposed approach compared to four existing approaches.
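When comparing exploration strategies against the five metrics above, a common device is to scalarize them into a single weighted score. The snippet below is a hypothetical illustration of such a comparison; the field names, units, and weights are assumptions for the sketch, not values taken from the paper's HDec-POSMDP implementation.

```python
from dataclasses import dataclass


@dataclass
class ExplorationRun:
    exploration_pct: float   # percentage of the cell-based map explored (maximize)
    mission_time: float      # overall mission time, e.g. seconds (minimize)
    hops: int                # communication hops in the networked robot team (minimize)
    energy: float            # energy consumed per robot, e.g. joules (minimize)
    turns: int               # turns in the path from start pose cells to targets (minimize)


def score(run: ExplorationRun, w=(1.0, 0.01, 0.1, 0.001, 0.05)) -> float:
    """Hypothetical scalarization: reward coverage, penalize the four cost metrics.

    Weights are illustrative only; any real evaluation would tune or report them.
    """
    return (w[0] * run.exploration_pct
            - w[1] * run.mission_time
            - w[2] * run.hops
            - w[3] * run.energy
            - w[4] * run.turns)
```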


2017 ◽  
Vol 36 (2) ◽  
pp. 231-258 ◽  
Author(s):  
Shayegan Omidshafiei ◽  
Ali-Akbar Agha-Mohammadi ◽  
Christopher Amato ◽  
Shih-Yuan Liu ◽  
Jonathan P How ◽  
...  

This work focuses on solving general multi-robot planning problems in continuous spaces with partial observability given a high-level domain description. Decentralized Partially Observable Markov Decision Processes (Dec-POMDPs) are general models for multi-robot coordination problems. However, representing and solving Dec-POMDPs is often intractable for large problems. This work extends the Dec-POMDP model to the Decentralized Partially Observable Semi-Markov Decision Process (Dec-POSMDP) to take advantage of the high-level representations that are natural for multi-robot problems and to facilitate scalable solutions to large discrete and continuous problems. The Dec-POSMDP formulation uses task macro-actions created from lower-level local actions that allow for asynchronous decision-making by the robots, which is crucial in multi-robot domains. This transformation from Dec-POMDPs to Dec-POSMDPs with a finite set of automatically generated macro-actions allows the use of efficient discrete-space search algorithms to solve them. The paper presents algorithms for solving Dec-POSMDPs, which are more scalable than previous methods since they can incorporate closed-loop belief-space macro-actions in planning. These macro-actions are automatically constructed to produce robust solutions. The proposed algorithms are then evaluated on a complex multi-robot package delivery problem under uncertainty, showing that our approach can naturally represent realistic problems and provide high-quality solutions for large-scale problems.
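In this formulation, each macro-action behaves like an option: a closed-loop local policy over the robot's belief that runs until a termination condition fires, which is what permits asynchronous execution across robots. The sketch below is an illustrative Python rendering of that structure under stated assumptions; the class, field, and function names are hypothetical and do not come from the paper's code.

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class MacroAction:
    """Option-style macro-action: a closed-loop local policy with a termination
    condition over the robot's local belief (illustrative structure only)."""
    name: str
    policy: Callable[[object], object]     # maps local belief -> low-level action
    terminates: Callable[[object], bool]   # termination condition on local belief


def run_macro_action(ma: MacroAction, belief, step, max_steps: int = 1000):
    """Execute one macro-action asynchronously on a single robot.

    `step` is a caller-supplied function that applies a low-level action and
    returns the updated local belief; execution stops when the macro-action's
    termination condition fires or the step budget runs out.
    """
    for _ in range(max_steps):
        if ma.terminates(belief):
            break
        action = ma.policy(belief)
        belief = step(belief, action)
    return belief
```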

