A Visual Navigation Strategy for Express Carrying Robot Based on Improved Faster R-CNN

Author(s):  
Hongtao Zhang ◽  
Xingxing Tian
2021 ◽  
Vol 55 (4) ◽  
pp. 24-32
Author(s):  
Nare Karapetyan ◽  
James V. Johnson ◽  
Ioannis Rekleitis

Abstract This work proposes vision-only navigation strategies for an autonomous underwater robot. The approach is a step towards solving the coverage path planning problem in a 3-D environment for surveying underwater structures. Given the challenging conditions of the underwater domain, obtaining accurate state estimates reliably is very difficult, so it is a great challenge to extend known path planning or coverage techniques developed for aerial or ground robots. In this work, we investigate a navigation strategy that uses only vision to assist in covering a complex underwater structure. We propose a navigation strategy akin to what a human diver would execute when circumnavigating a region of interest, in particular when collecting data from a shipwreck. The focus of this article is a step towards enabling the autonomous operation of lightweight robots near underwater wrecks in order to collect data for creating photo-realistic maps and volumetric 3-D models while avoiding collisions. The proposed method uses convolutional neural networks to learn control commands based on the visual input. We have demonstrated the feasibility of a vision-only system that learns specific navigation strategies, reaching 80% accuracy in predicting control command changes. Experimental results and a detailed overview of the proposed method are discussed.
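The diver-like circumnavigation idea above, keeping the structure of interest in view and steering from the camera image alone, can be sketched as a discrete command selector downstream of the vision network. The class labels, yaw magnitudes, and command format below are illustrative assumptions, not the paper's actual outputs:

```python
# Hypothetical sketch: map a CNN's image classification (where the wreck
# appears in the frame) to a discrete control command, mimicking a diver
# who keeps the structure on one side while circling it. Labels and yaw
# rates are assumed for illustration, not taken from the paper.

YAW_RATE = 0.3  # rad/s, assumed magnitude of a turn command

def select_command(predicted_class: str) -> dict:
    """Choose a control command from the vision classifier's output."""
    if predicted_class == "wreck_left":
        # structure drifting left in the frame: yaw toward it
        return {"yaw": -YAW_RATE, "forward": 0.5}
    if predicted_class == "wreck_right":
        return {"yaw": +YAW_RATE, "forward": 0.5}
    if predicted_class == "wreck_center":
        # structure centered: hold heading, keep moving
        return {"yaw": 0.0, "forward": 1.0}
    # structure lost from view: stop and rotate to reacquire it
    return {"yaw": +YAW_RATE, "forward": 0.0}
```

In this sketch the CNN only has to predict one of a few frame-position classes, which matches the abstract's framing of the task as predicting control command changes rather than regressing continuous controls.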


2014 ◽  
Author(s):  
Chi Ngo ◽  
Nora Newcombe ◽  
Ingrid Olson ◽  
Steven Weisberg

ROBOT ◽  
2011 ◽  
Vol 33 (4) ◽  
pp. 490-501 ◽  
Author(s):  
Xinde LI ◽  
Xuejian WU ◽  
Bo ZHU ◽  
Xianzhong DAI

2021 ◽  
Vol 4 (1) ◽  
Author(s):  
Abdallah Daddi-Moussa-Ider ◽  
Hartmut Löwen ◽  
Benno Liebchen

Abstract Compared to the well-explored problem of how to steer a macroscopic agent, such as an airplane or a moon lander, to optimally reach a target, optimal navigation strategies for microswimmers experiencing hydrodynamic interactions with walls and obstacles are far less understood. Here, we systematically explore this problem and show that the characteristic microswimmer flow field crucially influences the navigation strategy required to reach a target in the fastest way. The resulting optimal trajectories can have remarkable and non-intuitive shapes, which differ qualitatively from those of dry active particles or motile macroagents. Our results provide insights into the role of hydrodynamics and fluctuations in optimal navigation at the microscale, and suggest that microorganisms might have survival advantages when strategically controlling their distance to remote walls.


Author(s):  
Zhenhuan Rao ◽  
Yuechen Wu ◽  
Zifei Yang ◽  
Wei Zhang ◽  
Shijian Lu ◽  
...  

2020 ◽  
Vol 2 (1) ◽  
pp. 90-105
Author(s):  
Jimmy Y. Zhong

Abstract Focusing on the 12 allocentric/survey-based strategy items of the Navigation Strategy Questionnaire (Zhong & Kozhevnikov, 2016), the current study applied item response theory-based analysis to determine whether a bidimensional model could better describe the latent structure of the survey-based strategy. Results from item and model fit diagnostics, categorical response curves, and item information curves showed that the item with the lowest rotated component loading (.27) [SURVEY12] could be considered for exclusion in future studies, and that a bidimensional model with three preference-related items constituting a content factor offered a better representation of the latent structure than a unidimensional model per se. Mean scores from these three items also correlated significantly with a pointing-to-landmarks task, to the same relative magnitude as the mean scores from all items and from all items excluding SURVEY12. These findings give early evidence that the three preference-related items could constitute a subscale for deriving quick estimates of large-scale allocentric spatial processing in healthy adults in both experimental and clinical settings. Potential cognitive and brain mechanisms are discussed, followed by calls for future studies to gather further evidence confirming the predictive validity of the full scale and subscales, along with the design of new items focusing on environmental familiarity.


2021 ◽  
Author(s):  
Srivatsan Krishnan ◽  
Behzad Boroujerdian ◽  
William Fu ◽  
Aleksandra Faust ◽  
Vijay Janapa Reddi

Abstract We introduce Air Learning, an open-source simulator and gym environment for deep reinforcement learning research on resource-constrained aerial robots. Equipped with domain randomization, Air Learning exposes a UAV agent to a diverse set of challenging scenarios. We seed the toolset with point-to-point obstacle avoidance tasks in three different environments, along with Deep Q Network (DQN) and Proximal Policy Optimization (PPO) trainers. Air Learning assesses the policies’ performance under various quality-of-flight (QoF) metrics, such as energy consumed, endurance, and average trajectory length, on resource-constrained embedded platforms like a Raspberry Pi. We find that the trajectories on an embedded Raspberry Pi are vastly different from those predicted on a high-end desktop system, resulting in up to 40% longer trajectories in one of the environments. To understand the source of such discrepancies, we use Air Learning to artificially degrade high-end desktop performance to mimic what happens on a low-end embedded system. We then propose a mitigation technique that uses hardware-in-the-loop to determine the latency distribution of running the policy on the target platform (the onboard compute of the aerial robot). A randomly sampled latency from this distribution is then added as an artificial delay within the training loop. Training the policy with artificial delays allows us to minimize the hardware gap (the discrepancy in the flight time metric is reduced from 37.73% to 0.5%). Thus, Air Learning with hardware-in-the-loop characterizes these differences and exposes how the choice of onboard compute affects the aerial robot’s performance. We also conduct reliability studies to assess the effect of sensor failures on the learned policies. All put together, Air Learning enables a broad class of deep RL research on UAVs. The source code is available at: https://github.com/harvard-edge/AirLearning.
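The mitigation described above, sampling a latency from the target platform's measured distribution and injecting it as a delay in the training loop, might look roughly like the sketch below. The latency values, the `ToyEnv` stand-in, and the method names are assumptions for illustration, not Air Learning's API:

```python
import random

# Hypothetical latency samples (seconds) gathered via hardware-in-the-loop,
# e.g. by timing policy inference on a Raspberry Pi.
MEASURED_LATENCIES = [0.02, 0.03, 0.05, 0.08, 0.12]

class ToyEnv:
    """Minimal stand-in environment: tracks simulated time only."""
    def __init__(self):
        self.t = 0.0
    def advance(self, dt):
        # The world keeps evolving while the onboard compute is busy.
        self.t += dt
    def step(self, action):
        self.t += 0.1  # assumed nominal control period
        return self.t

def delayed_step(env, action, rng=random):
    """Step the environment with an artificial action delay drawn from
    the measured latency distribution, so the policy trains against the
    same timing it will face on the embedded platform."""
    delay = rng.choice(MEASURED_LATENCIES)
    env.advance(delay)
    return env.step(action)
```

Training with `delayed_step` instead of a plain `env.step` is the core of the idea: the policy never sees the zero-latency world that only exists on the high-end desktop, which is how the abstract's flight-time gap shrinks from 37.73% to 0.5%.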


2020 ◽  
Vol 12 ◽  
pp. 175682932092452
Author(s):  
Liang Lu ◽  
Alexander Yunda ◽  
Adrian Carrio ◽  
Pascual Campoy

This paper presents a novel collision-free navigation system for unmanned aerial vehicles based on point clouds that outperforms baseline methods, enabling high-speed flight in cluttered environments such as forests or indoor industrial plants. The algorithm takes point cloud information from physical sensors (e.g. lidar or a depth camera) and converts it to an occupancy map using Voxblox, which a rapidly-exploring random tree then uses to generate a finite set of path candidates. A modified Covariant Hamiltonian Optimization for Motion Planning (CHOMP) objective function is used to select and update the best candidate. Finally, the best candidate trajectory is generated and sent to a model predictive control (MPC) controller. The proposed navigation strategy is evaluated in four different simulation environments; the results show that the proposed method has a higher success rate and a shorter goal-reaching distance than the baseline method.
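The selection step in the pipeline above, scoring a finite set of RRT-generated candidates with a smoothness-plus-obstacle objective and keeping the best one, can be sketched as follows. The cost weights, clearance radius, and point-list path representation are illustrative stand-ins for the authors' modified CHOMP objective, not their exact formulation:

```python
import math

def path_cost(path, obstacles, w_smooth=1.0, w_obs=20.0, clearance=1.0):
    """Illustrative CHOMP-style cost: penalize jagged paths and proximity
    to obstacles. path and obstacles are lists of (x, y) points; the
    weights and clearance are assumed values, not the paper's."""
    # Smoothness term: sum of squared segment lengths.
    smooth = sum(
        math.dist(path[i], path[i + 1]) ** 2 for i in range(len(path) - 1)
    )
    # Obstacle term: quadratic penalty inside the clearance radius.
    obs = 0.0
    for p in path:
        for o in obstacles:
            d = math.dist(p, o)
            if d < clearance:
                obs += (clearance - d) ** 2
    return w_smooth * smooth + w_obs * obs

def best_candidate(candidates, obstacles):
    """Pick the lowest-cost path among the RRT-generated candidates."""
    return min(candidates, key=lambda p: path_cost(p, obstacles))
```

The design point this illustrates is that the tree only has to propose geometrically feasible candidates; the objective function arbitrates between them, trading path smoothness against obstacle clearance before the winner is handed to the MPC controller.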

