Pursuit Evasion
Recently Published Documents


TOTAL DOCUMENTS: 732 (last five years: 133)

H-INDEX: 34 (last five years: 4)

2022
Author(s): Venkata Ramana Makkapati, Jack Ridderhof, Panagiotis Tsiotras

2021, Vol 7 (2), pp. 94
Author(s): Bahrom T. Samatov, Ulmasjon B. Soyibboev

In this paper, we study Isaacs' well-known "Life line" game in the case where the players' motions are governed by acceleration vectors, that is, with inertia, in Euclidean space. To solve this problem, we investigate the dynamics of the evader's attainability domain by finding conditions under which the pursuit-evasion problem is solvable in favor of the pursuer or the evader. The pursuit problem is solved by a parallel pursuit strategy. For the evasion problem, we propose a strategy for the evader and show that evasion is possible from the given initial positions of the players. This work develops and continues the studies of Isaacs, Petrosjan, Pshenichnii, Azamov, and others carried out for the case of player motion without inertia.
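The parallel pursuit ("Π-strategy") idea mentioned in the abstract can be illustrated for the classical non-inertial (simple-motion) case: the pursuer reacts to the evader's velocity so that the pursuer-evader line stays parallel to its initial direction while the distance strictly shrinks. The sketch below is a minimal simulation under that simplifying assumption; the paper itself treats the harder inertial dynamics, which this code does not capture, and all names and parameter values here are illustrative.

```python
import numpy as np

def pi_strategy(x, y, v, alpha):
    """Parallel-pursuit (Pi-strategy) response for simple motion: given
    pursuer position x, evader position y, the evader's current velocity v
    (with |v| <= beta < alpha), and the pursuer's speed bound alpha, return
    a pursuer velocity u with |u| = alpha such that the relative vector
    x - y shrinks along its own, fixed direction."""
    xi = (x - y) / np.linalg.norm(x - y)   # unit vector from evader to pursuer
    lam = v @ xi + np.sqrt((v @ xi) ** 2 + alpha ** 2 - v @ v)
    return v - lam * xi                    # then d/dt (x - y) = -lam * xi

# Toy simulation: the evader moves in arbitrary directions, yet the
# pursuer-evader line stays parallel to its initial direction and the
# distance shrinks by at least (alpha - beta) * dt per step.
alpha, beta, dt = 1.0, 0.6, 1e-3           # speed bounds and time step (illustrative)
x, y = np.array([4.0, 3.0]), np.array([0.0, 0.0])
xi0 = (x - y) / np.linalg.norm(x - y)      # initial line-of-sight direction
rng = np.random.default_rng(0)
for _ in range(20_000):
    theta = rng.uniform(0.0, 2.0 * np.pi)  # evader picks any direction
    v = beta * np.array([np.cos(theta), np.sin(theta)])
    u = pi_strategy(x, y, v, alpha)
    x, y = x + dt * u, y + dt * v
    if np.linalg.norm(x - y) < 2e-2:       # capture radius
        break
```

After the loop, `(x - y)` is still a positive multiple of `xi0`, which is exactly the "parallel" property that gives the strategy its name.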


Nonlinearity, 2021, Vol 35 (1), pp. 608-657
Author(s): Mario Fuest

Abstract: Systems of the type
$u_t = \nabla \cdot (D_1(u) \nabla u - S_1(u) \nabla v) + f_1(u, v)$, $v_t = \nabla \cdot (D_2(v) \nabla v + S_2(v) \nabla u) + f_2(u, v)$ (⋆)
can be used to model pursuit-evasion relationships between predators and prey. Apart from the local kinetics given by $f_1$ and $f_2$, the key components in this system are the taxis terms $-\nabla \cdot (S_1(u) \nabla v)$ and $+\nabla \cdot (S_2(v) \nabla u)$; that is, the species are not only assumed to move around randomly in space but are also able to partially direct their movement depending on the nearby presence of the other species. In the present article, we construct global weak solutions of (⋆) for certain prototypical nonlinear functions $D_i$, $S_i$ and $f_i$, $i \in \{1, 2\}$. To that end, we first make use of a fourth-order regularisation to obtain global solutions to approximate systems and then rely on an entropy-like identity associated with (⋆) to derive various a priori estimates.
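To make the cross-taxis structure of (⋆) concrete, here is a purely illustrative 1D explicit finite-difference sketch, not the paper's construction: it assumes $D_1 = D_2 = 1$, $S_1(u) = u$, $S_2(v) = v$, zero kinetics $f_1 = f_2 = 0$ (simplifications chosen for this sketch, not necessarily the paper's prototypical functions), and no-flux boundaries, so that the taxis terms drive $u$ up the gradient of $v$ (pursuit) and $v$ down the gradient of $u$ (evasion) while total mass is conserved.

```python
import numpy as np

# 1D explicit finite differences for
#   u_t = (u_x - u v_x)_x,   v_t = (v_x + v u_x)_x
# on [0, 1] with zero-flux boundaries (illustrative parameters only).
n = 100
xs = np.linspace(0.0, 1.0, n)
dx = xs[1] - xs[0]
dt, steps = 0.2 * dx * dx, 2000            # explicit scheme: dt << dx^2
u = 1.0 + 0.5 * np.cos(np.pi * xs)         # predators, biased to the left
v = 1.0 - 0.5 * np.cos(np.pi * xs)         # prey, biased to the right

def flux_divergence(grad_term, taxis_term):
    """Divergence of a face-centred flux, with zero flux at both walls."""
    flux = grad_term + taxis_term          # flux at the n - 1 interior faces
    flux = np.concatenate(([0.0], flux, [0.0]))
    return np.diff(flux) / dx

for _ in range(steps):
    du, dv = np.diff(u) / dx, np.diff(v) / dx       # face-centred gradients
    um, vm = 0.5 * (u[1:] + u[:-1]), 0.5 * (v[1:] + v[:-1])
    u = u + dt * flux_divergence(du, -um * dv)      # pursuit: u climbs grad v
    v = v + dt * flux_divergence(dv, +vm * du)      # evasion: v descends grad u
```

Because the discrete divergence telescopes and the wall fluxes are zero, the means of `u` and `v` stay at their initial value 1, mirroring the mass conservation of (⋆) without kinetics.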


2021
Author(s): Jhanani Selvakumar, Efstathios Bakolas

2021, pp. 4065-4074
Author(s): Weixiang Shi, Chunyan Wang, Jianan Wang, Jiayuan Shan

Entropy, 2021, Vol 23 (11), pp. 1433
Author(s): Kaifang Wan, Dingwei Wu, Yiwei Zhai, Bo Li, Xiaoguang Gao, ...

A pursuit–evasion game is a classical maneuver-confrontation problem in the multi-agent systems (MASs) domain. This paper develops an online decision technique based on deep reinforcement learning (DRL) to address environment sensing and decision-making in pursuit–evasion games. A control-oriented framework built on the DRL-based multi-agent deep deterministic policy gradient (MADDPG) algorithm implements multi-agent cooperative decision-making, avoiding the tedious state-variable specification required by traditionally complicated modeling processes. To address the discrepancy between a model and a real scenario, this paper introduces adversarial disturbances and proposes a novel adversarial attack trick together with an adversarial learning MADDPG (A2-MADDPG) algorithm. Applying the adversarial attack trick to the agents themselves models real-world uncertainty and thereby strengthens robust training. During training, adversarial learning is incorporated into the algorithm to preprocess the actions of multiple agents, enabling them to respond properly to uncertain dynamic changes in MASs. Experimental results verify that the proposed approach provides superior performance and effectiveness for both pursuers and evaders, each of which learns the corresponding confrontational strategy during training.
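The core idea of training against adversarially perturbed actions can be sketched in a few lines. This is not the paper's A2-MADDPG (which learns the attack jointly with gradient-based actor-critic updates); it is a zeroth-order simplification in which an adversary searches a small ball around the policy's action for the perturbation that most degrades a critic value. The `critic` function and all parameters here are hypothetical stand-ins, purely so the sketch is runnable.

```python
import numpy as np

def critic(state, action):
    """Hypothetical stand-in for a learned centralized critic Q(s, a);
    a smooth synthetic function is used so the sketch is self-contained."""
    return -np.sum((action - np.tanh(state)) ** 2)

def adversarial_action(state, action, eps=0.1, n_samples=64, rng=None):
    """Zeroth-order adversarial attack on an action: sample bounded
    perturbations in an eps-ball and return the worst-case action,
    i.e. the one that minimizes the critic value."""
    rng = rng or np.random.default_rng(0)
    best, best_q = action, critic(state, action)
    for _ in range(n_samples):
        delta = rng.uniform(-eps, eps, size=action.shape)
        q = critic(state, action + delta)
        if q < best_q:                     # the adversary minimizes Q
            best, best_q = action + delta, q
    return best

state = np.array([0.5, -1.0])
action = np.tanh(state)                    # the (hypothetical) policy's action
worst = adversarial_action(state, action)  # worst-case action within the ball
```

Training the agents on `worst` instead of `action` is the robustness mechanism the abstract describes: the policy must perform well even under bounded, adversarially chosen deviations from its own actions.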

