Vision-based estimation of ground moving target by multiple unmanned aerial vehicles

Author(s):  
Mingfeng Zhang ◽  
H. H. T. Liu
2021 ◽  
pp. 1-10
Author(s):  
Camilla Tabasso ◽  
Calvin Kielas-Jensen ◽  
Venanzio Cichella ◽  
Satyanarayana Manyam ◽  
David W. Casbeer ◽  
...  

Author(s):  
Shaoming He ◽  
Jiang Wang ◽  
Defu Lin

This paper investigates robust guidance law design for multiple unmanned aerial vehicles tasked with achieving a desired formation pattern for standoff tracking of an unknown ground moving target. The proposed guidance law consists of two parts: relative range regulation and space angle control. For the first task, a novel control law based on adaptive sliding mode control regulates the error between the actual and desired standoff range to zero asymptotically. Because the sign function used in traditional sliding mode control is discontinuous and induces high-frequency chattering in the control channel, the proposed controller adopts a continuous saturation function to eliminate chattering. Beyond continuity, the proposed controller theoretically guarantees asymptotic convergence to the origin, in contrast to the traditional boundary layer technique, which ensures only bounded motion around the sliding manifold. Asymptotic stability requires only that the lumped uncertainty be bounded; its upper bound may be unknown, thanks to the designed adaptive methodology. For space angle control, a new multiple leader–follower information architecture is introduced, from which an acceleration command is derived for each unmanned aerial vehicle to space the vehicles around the loiter circle defined by the first controller. Simulation results under different conditions clearly demonstrate the advantages of the proposed formulation.
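The chattering trade-off described in the abstract, a discontinuous sign function versus a continuous saturation function in the reaching law, can be illustrated with a minimal first-order sketch. The gains, boundary-layer width, and disturbance below are hypothetical, and the sketch shows only the standard saturation substitution, not the paper's adaptive law:

```python
import numpy as np

def sign_term(s, k):
    """Classical sliding mode reaching term: discontinuous, chatters."""
    return -k * np.sign(s)

def sat_term(s, k, phi):
    """Continuous saturation replacement: linear inside a boundary
    layer of half-width phi, saturated to +/- k outside it."""
    return -k * np.clip(s / phi, -1.0, 1.0)

def simulate(term, s0=1.0, k=2.0, dt=1e-3, steps=5000, **kw):
    """Euler-integrate s_dot = u + d with a bounded lumped disturbance d."""
    s, hist = s0, []
    for i in range(steps):
        d = 0.5 * np.sin(50.0 * i * dt)   # bounded uncertainty, |d| <= 0.5 < k
        s += (term(s, k, **kw) + d) * dt
        hist.append(s)
    return np.array(hist)

s_sign = simulate(sign_term)              # reaches the manifold, then chatters
s_sat = simulate(sat_term, phi=0.05)      # smooth, stays inside the layer

def flips(x):
    """Count sign changes of the increments in the steady-state tail,
    a crude measure of high-frequency chattering."""
    dx = np.diff(x[-1000:])
    return int(np.sum(np.sign(dx[1:]) != np.sign(dx[:-1])))
```

Comparing `flips(s_sign)` with `flips(s_sat)` makes the chattering visible: the sign-based law switches direction nearly every step near the origin, while the saturated law tracks the disturbance smoothly. Note that this plain boundary-layer substitution only bounds the motion around the manifold, which is exactly the limitation the paper's adaptive formulation is stated to overcome.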


Robotics ◽  
2021 ◽  
Vol 10 (1) ◽  
pp. 12
Author(s):  
Yixiang Lim ◽  
Nichakorn Pongsarkornsathien ◽  
Alessandro Gardi ◽  
Roberto Sabatini ◽  
Trevor Kistan ◽  
...  

Advances in unmanned aircraft systems (UAS) have paved the way for progressively higher levels of intelligence and autonomy, supporting new modes of operation such as the one-to-many (OTM) concept, in which a single human operator is responsible for monitoring and coordinating the tasks of multiple unmanned aerial vehicles (UAVs). This paper presents the development and evaluation of cognitive human-machine interfaces and interactions (CHMI2) supporting adaptive automation in OTM applications. A CHMI2 system comprises a network of neurophysiological sensors and machine-learning-based models for inferring user cognitive states, as well as an adaptation engine containing a set of transition logics for control/display functions and discrete autonomy levels. Models of the user’s cognitive states are trained on past performance and neurophysiological data during an offline calibration phase and subsequently used in the online adaptation phase for real-time inference of these states. To investigate adaptive automation in OTM applications, a bushfire detection scenario was developed in which a single human operator tasks multiple UAV platforms to search for and localize bushfires over a wide area. We present the architecture and design of the UAS simulation environment, together with various human-machine interface (HMI) formats and functions, developed to evaluate the feasibility of the CHMI2 system through human-in-the-loop (HITL) experiments. The CHMI2 module was subsequently integrated into the simulation environment, providing the sensing, inference, and adaptation capabilities needed to realise adaptive automation. HITL experiments verified the CHMI2 module’s functionalities in the offline calibration and online adaptation phases. In particular, results from the online adaptation phase showed that the system supported real-time inference and human-machine interface and interaction (HMI2) adaptation. However, the accuracy of the inferred workload varied across participants (with a root mean squared error (RMSE) ranging from 0.2 to 0.6), partly due to the reduced number of neurophysiological features available as real-time inputs and partly due to the limited training stages in the offline calibration phase. To improve system performance, future work will investigate alternative machine learning techniques, additional neurophysiological input features, and a more extensive training stage.
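The adaptation engine described above maps inferred cognitive states to discrete autonomy levels via transition logics. A minimal threshold-based sketch of one such logic is shown below; the level names, thresholds, and workload trace are all hypothetical, since the abstract does not specify the paper's actual transition rules:

```python
# Hypothetical discrete autonomy levels, ordered from least to most automated.
LEVELS = ("manual", "assisted", "supervisory")

def adapt(level, workload, raise_at=0.7, lower_at=0.3):
    """One step of a threshold-based transition logic: raise the autonomy
    level when inferred workload (normalized to 0..1) is high, lower it
    when workload is low, and hold otherwise. The gap between the two
    thresholds acts as a dead band that prevents oscillation."""
    if workload >= raise_at and level < len(LEVELS) - 1:
        return level + 1
    if workload <= lower_at and level > 0:
        return level - 1
    return level

# Hypothetical stream of inferred workload values from the online phase:
level, trace = 0, []
for w in [0.4, 0.75, 0.8, 0.5, 0.2, 0.25]:
    level = adapt(level, w)
    trace.append(LEVELS[level])
```

Here the level steps up as workload climbs past 0.7, holds through the dead band, and steps back down as workload falls below 0.3, illustrating how discrete autonomy transitions can be driven by a real-time workload estimate.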

