Mobile Manipulation–based Deployment of Micro Aerial Robot Scouts through Constricted Aperture-like Ingress Points

Author(s):  
Prateek Arora ◽  
Christos Papachristos

Author(s):  
Zain Anwar Ali ◽  
Dao Bo Wang ◽  
Muhammad Aamir

Research on the tri-rotor aerial robot is motivated by its efficiency over other UAVs in terms of stability, power, and size requirements. A controller is required to achieve six degrees of freedom (6-DOF); for this purpose, we propose an RST controller to operate our tri-copter model. The MIMO model of a tri-copter aerial robot is a challenging problem in control engineering. Nine output states of the control dynamics are treated individually, and dynamic controllers are designed to stabilize the UAV's parameters. The resulting control algorithm is capable of stabilizing the UAV so that it can perform numerous operations autonomously. The estimation and simulation are implemented in MATLAB/Simulink to verify the results, and real flight test results are presented to demonstrate the success of the proposed control structure.
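The abstract names an RST controller but gives no equations. As a loose illustration of the structure only, here is a minimal sketch of a discrete polynomial RST law R(q⁻¹)u(k) = T(q⁻¹)r(k) − S(q⁻¹)y(k); the coefficients are illustrative placeholders, not the authors' tuned values for the tri-copter.

```python
import numpy as np

class RSTController:
    """Minimal discrete-time polynomial RST controller:
    R(q^-1) u(k) = T(q^-1) r(k) - S(q^-1) y(k).
    R, S, T are coefficient arrays [r0, r1, ...] etc.; in practice they
    would come from pole placement on an identified axis model."""

    def __init__(self, R, S, T):
        self.R, self.S, self.T = map(np.asarray, (R, S, T))
        self.u_hist = np.zeros(len(self.R) - 1)  # u(k-1), u(k-2), ...
        self.y_hist = np.zeros(len(self.S))      # y(k), y(k-1), ...
        self.r_hist = np.zeros(len(self.T))      # r(k), r(k-1), ...

    def step(self, ref, y):
        # Push the newest reference and measurement.
        self.r_hist = np.roll(self.r_hist, 1); self.r_hist[0] = ref
        self.y_hist = np.roll(self.y_hist, 1); self.y_hist[0] = y
        # Solve R(q^-1) u(k) = T r - S y for the current control u(k).
        u = (self.T @ self.r_hist - self.S @ self.y_hist
             - self.R[1:] @ self.u_hist) / self.R[0]
        self.u_hist = np.roll(self.u_hist, 1)
        if self.u_hist.size:
            self.u_hist[0] = u
        return u

# Example on one axis (coefficients are illustrative, not from the paper).
ctrl = RSTController(R=[1.0, -1.0], S=[0.8, -0.6], T=[0.2])
u = ctrl.step(ref=1.0, y=0.0)
```

One such loop per controlled output would match the abstract's "treated individually" description of the nine output states.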


2013 ◽  
Vol 2013 (0) ◽  
pp. _1A2-F08_1-_1A2-F08_4
Author(s):  
Yi YANG ◽  
Daisuke IWAKURA ◽  
Yuze SONG ◽  
Kenzo NONAMI

2021 ◽  
Vol 6 (2) ◽  
pp. 1367-1374
Author(s):  
Moju Zhao ◽  
Tomoki Anzai ◽  
Kei Okada ◽  
Koji Kawasaki ◽  
Masayuki Inaba

Author(s):  
Maximo A. Roa ◽  
Mehmet R. Dogar ◽  
Jordi Pages ◽  
Carlos Vivas ◽  
Antonio Morales ◽  
...  

2021 ◽  
Author(s):  
Srivatsan Krishnan ◽  
Behzad Boroujerdian ◽  
William Fu ◽  
Aleksandra Faust ◽  
Vijay Janapa Reddi

We introduce Air Learning, an open-source simulator and gym environment for deep reinforcement learning research on resource-constrained aerial robots. Equipped with domain randomization, Air Learning exposes a UAV agent to a diverse set of challenging scenarios. We seed the toolset with point-to-point obstacle-avoidance tasks in three different environments, along with Deep Q Network (DQN) and Proximal Policy Optimization (PPO) trainers. Air Learning assesses the policies' performance under various quality-of-flight (QoF) metrics, such as energy consumed, endurance, and average trajectory length, on resource-constrained embedded platforms like a Raspberry Pi. We find that trajectories on an embedded Raspberry Pi differ vastly from those predicted on a high-end desktop system, resulting in up to 40% longer trajectories in one of the environments. To understand the source of such discrepancies, we use Air Learning to artificially degrade high-end desktop performance to mimic what happens on a low-end embedded system. We then propose a mitigation technique that uses hardware-in-the-loop to determine the latency distribution of running the policy on the target platform (the onboard compute of the aerial robot). A latency randomly sampled from this distribution is then added as an artificial delay within the training loop. Training the policy with artificial delays allows us to minimize the hardware gap (the discrepancy in the flight-time metric is reduced from 37.73% to 0.5%). Thus, Air Learning with hardware-in-the-loop characterizes those differences and exposes how the choice of onboard compute affects the aerial robot's performance. We also conduct reliability studies to assess the effect of sensor failures on the learned policies. Taken together, Air Learning enables a broad class of deep RL research on UAVs. The source code is available at: https://github.com/harvard-edge/AirLearning.
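The mitigation the abstract describes, sampling a measured policy latency and injecting it as an artificial delay inside the training loop, can be sketched as follows. The environment/agent interfaces (`env.step`, `agent.act`, `agent.observe`) and the latency numbers are hypothetical stand-ins, not Air Learning's actual API or data.

```python
import random
import time

# Hypothetical latency samples (seconds), as would be measured
# hardware-in-the-loop on the target onboard computer.
latency_samples = [0.020, 0.025, 0.031, 0.044, 0.052]

def train_step(env, agent, obs):
    """One environment interaction with an artificial policy-latency
    delay, mimicking the hardware-gap mitigation: the simulated world
    keeps evolving while the (slow) onboard compute would still be
    deciding on its action."""
    action = agent.act(obs)
    # Inject a delay drawn from the measured latency distribution.
    time.sleep(random.choice(latency_samples))
    next_obs, reward, done, info = env.step(action)
    agent.observe(obs, action, reward, next_obs, done)
    return next_obs, done
```

A wall-clock sleep only makes sense when the simulator runs in real time; in a stepped simulator the same idea would be expressed as a number of simulation ticks instead.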


Mechatronics ◽  
2021 ◽  
Vol 74 ◽  
pp. 102483
Author(s):  
Radu Ionut Popescu ◽  
Maxime Raison ◽  
George Marian Popescu ◽  
David Saussié ◽  
Sofiane Achiche

Electronics ◽  
2021 ◽  
Vol 10 (7) ◽  
pp. 831
Author(s):  
Izzat Al-Darraji ◽  
Dimitrios Piromalis ◽  
Ayad A. Kakei ◽  
Fazal Qudus Khan ◽  
Milos Stojmenovic ◽  
...  

Aerial Robot Arms (ARAs) enable aerial drones to interact with and influence objects in various environments. Traditional ARA controllers require a high-precision model to avoid severe control chattering. Furthermore, in practical aerial-manipulation applications, the payloads that ARAs handle vary with the nature of the task. High uncertainties due to modeling errors and unknown payloads degrade the stability of ARAs. To address this stability issue, a new adaptive robust controller based on a Radial Basis Function (RBF) neural network is proposed, following a three-tier approach. First, a detailed new model of the ARA is derived using the Lagrange–d'Alembert principle. Second, an adaptive robust controller based on sliding mode is designed to handle uncertainties, including modeling errors. Last, a higher-stability controller based on the RBF neural network is implemented alongside the adaptive robust controller to stabilize the ARA against modeling errors and unknown-payload issues. The novelty of the proposed design is that it accounts for high nonlinearities, coupled control loops, large modeling errors, and disturbances due to payloads and environmental conditions. The model was evaluated by simulating a case study that includes the two proposed controllers and ARA trajectory tracking. The simulation results demonstrate the validity and merit of the presented control algorithm.
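The abstract describes a sliding-mode controller augmented by an RBF network that adapts online to the lumped uncertainty (modeling error plus unknown payload). A single-joint sketch of that general technique, not the paper's derived model, gains, or adaptation law, might look like this; all centers and gains are illustrative.

```python
import numpy as np

centers = np.linspace(-1.0, 1.0, 7)   # RBF centers over the error range
width = 0.5                            # shared RBF width
W = np.zeros_like(centers)             # adaptive output weights
lam, K, gamma = 5.0, 2.0, 10.0         # sliding slope, switching gain, adaptation rate

def rbf(x):
    # Gaussian basis functions evaluated at scalar input x.
    return np.exp(-((x - centers) ** 2) / (2 * width ** 2))

def control(e, e_dot, dt):
    """Sliding-mode control for one joint, with an online RBF estimate
    of the lumped uncertainty subtracted from the control signal."""
    global W
    s = e_dot + lam * e                # sliding variable
    phi = rbf(e)
    d_hat = W @ phi                    # RBF estimate of the uncertainty
    # Smooth tanh() in place of sign() to reduce control chattering.
    u = -K * np.tanh(s / 0.1) - d_hat
    W += gamma * phi * s * dt          # gradient-style weight adaptation
    return u
```

The chattering the abstract mentions is exactly what the smoothed switching term and the learned feedforward estimate are meant to suppress, since less of the uncertainty has to be covered by the discontinuous gain K.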

