Visual Navigation
Recently Published Documents


TOTAL DOCUMENTS: 790 (FIVE YEARS: 254)
H-INDEX: 34 (FIVE YEARS: 7)

2022 ◽  
Vol 27 (1) ◽  
pp. 1-20
Author(s):  
Jingyu He ◽  
Yao Xiao ◽  
Corina Bogdan ◽  
Shahin Nazarian ◽  
Paul Bogdan

Unmanned Aerial Vehicles (UAVs) have rapidly become popular for monitoring, delivery, and actuation in many application domains such as environmental management, disaster mitigation, homeland security, energy, transportation, and manufacturing. However, UAV perception and navigation intelligence (PNI) designs are still in their infancy and demand fundamental performance and energy optimizations to be eligible for mass adoption. In this article, we present a generalizable three-stage optimization framework for PNI systems that (i) abstracts the high-level programs representing the perception, mining, processing, and decision making of UAVs into complex weighted networks tracking the interdependencies between universal low-level intermediate representations; (ii) exploits a differential geometry approach to schedule and map the discovered PNI tasks onto an underlying manycore architecture. To manage the complexity of optimally parallelizing the perception and decision modules of UAVs, the proposed design methodology relies on an Ollivier-Ricci curvature-based load-balancing strategy that detects the parallel communities of the PNI applications for maximum parallel execution while minimizing inter-core communication; and (iii) relies on an energy-aware mapping scheme to minimize energy dissipation when assigning the communities onto tile-based networks-on-chip. We validate this approach on various drone PNI designs, including a flight controller, path planning, and visual navigation. The experimental results confirm that the proposed framework achieves a 23% flight time reduction and up to 34% energy savings for the flight controller application. In addition, the optimization on a 16-core platform improves the on-time visit rate of the path planning algorithm by 14% while reducing the run time of ConvNet visual navigation by 81%.
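To make the curvature-based community detection concrete, the sketch below computes Ollivier-Ricci curvature on a small task graph and treats negatively curved edges as inter-community bridges. This is a minimal sketch assuming a NetworkX/SciPy stack; the lazy random-walk parameter `alpha`, the zero-curvature bridge threshold, and the stand-in graph are illustrative choices, not the authors' implementation.

```python
# Minimal sketch: Ollivier-Ricci curvature on a task graph (NetworkX + SciPy).
# alpha, the bridge threshold, and the stand-in graph are illustrative only.
import networkx as nx
import numpy as np
from scipy.optimize import linprog

def ollivier_ricci(G, dists, x, y, alpha=0.5):
    """kappa(x, y) = 1 - W1(mu_x, mu_y) / d(x, y), with mu_v the lazy
    random-walk measure that keeps mass alpha at v."""
    def measure(v):
        nbrs = list(G.neighbors(v))
        m = {v: alpha}
        m.update({n: (1 - alpha) / len(nbrs) for n in nbrs})
        return m
    mx, my = measure(x), measure(y)
    src, dst = list(mx), list(my)
    n_s, n_d = len(src), len(dst)
    # Transport LP: minimise sum_ij d(u_i, v_j) * t_ij subject to marginals.
    cost = np.array([dists[u][v] for u in src for v in dst], dtype=float)
    A_eq, b_eq = [], []
    for i in range(n_s):                      # row sums = source masses
        row = np.zeros((n_s, n_d)); row[i, :] = 1
        A_eq.append(row.ravel()); b_eq.append(mx[src[i]])
    for j in range(n_d):                      # column sums = target masses
        col = np.zeros((n_s, n_d)); col[:, j] = 1
        A_eq.append(col.ravel()); b_eq.append(my[dst[j]])
    w1 = linprog(cost, A_eq=np.array(A_eq), b_eq=np.array(b_eq)).fun
    return 1.0 - w1 / dists[x][y]

G = nx.karate_club_graph()                    # stand-in for a PNI task graph
dists = dict(nx.all_pairs_shortest_path_length(G))
# Negatively curved edges behave like bridges; removing them leaves the
# densely connected "parallel communities" to be mapped onto separate cores.
bridges = [e for e in G.edges if ollivier_ricci(G, dists, *e) < 0]
H = G.copy(); H.remove_edges_from(bridges)
print(len(list(nx.connected_components(H))), "parallel communities")
```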


2021 ◽  
pp. 111-118
Author(s):  
XiaoDan Ren ◽  
Haichao Wang ◽  
Xin Shi

Aiming at the field management of plum groves in Inner Mongolia, China, and taking the densely planted plum groves of Bikeqi Town, Hohhot, as the research object, this paper proposes a visual navigation path detection algorithm for plum groves. By processing video images of the groves and comparing the RGB and HSV color space models, the HSV model was selected to separate plants from the background in the V channel. Homomorphic filtering was used to highlight the region of interest in the image, Otsu's method was selected to segment the image, the intersections of plum trunks with the ground were extracted as feature points, and the least-squares method was used to fit the navigation path. The route accuracy was verified through a comparative analysis of detection rates under different conditions over one day. The experimental results show that, for densely planted plum groves, the average path detection accuracy of the algorithm is 70% under front light and 73.3% under weak light. The detection accuracy and real-time performance meet the requirements of plum grove field management, and the navigation baseline can be generated accurately, providing a preliminary basis for machine-vision navigation in plum grove field management.
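The described pipeline maps cleanly onto standard OpenCV operations. Below is a minimal sketch, assuming an OpenCV/NumPy stack; the input file name, the Gaussian sigma for the homomorphic filter, and the per-column trunk-ground feature extraction are illustrative assumptions rather than the paper's exact parameters.

```python
# Minimal sketch of the pipeline (OpenCV + NumPy); the input file, Gaussian
# sigma, and the per-column trunk-ground feature extraction are assumptions.
import cv2
import numpy as np

img = cv2.imread("plum_grove_frame.jpg")            # hypothetical video frame
v = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)[:, :, 2]   # V channel: plant vs background

# Homomorphic filtering: suppress low-frequency illumination in log space.
log_v = np.log1p(v.astype(np.float32))
illum = cv2.GaussianBlur(log_v, (0, 0), sigmaX=31)
high = cv2.normalize(log_v - illum, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

# Otsu's method segments the highlighted region of interest.
_, mask = cv2.threshold(high, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Feature points: the lowest foreground pixel per column approximates the
# intersection of trunk and ground; least squares fits the navigation path.
xs, ys = [], []
for col in range(mask.shape[1]):
    rows = np.flatnonzero(mask[:, col])
    if rows.size:
        xs.append(col)
        ys.append(rows.max())
k, b = np.polyfit(xs, ys, 1)                        # navigation line y = k*x + b
print(f"fitted navigation path: y = {k:.3f}*x + {b:.3f}")
```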


2021 ◽  
Vol 9 (12) ◽  
pp. 1432
Author(s):  
Zhizun Xu ◽  
Maryam Haroutunian ◽  
Alan J. Murphy ◽  
Jeff Neasham ◽  
Rose Norman

Underwater navigation presents crucial challenges because of the rapid attenuation of electromagnetic waves. Conventional underwater navigation methods rely on acoustic equipment such as ultra-short-baseline localisation systems and Doppler velocity logs; however, these suffer from low refresh rates, low bandwidth, environmental disturbance, and high cost. In this paper, a novel underwater visual navigation method based on multiple ArUco markers is investigated. Unlike other underwater navigation approaches based on artificial markers, a noise model for single-marker pose estimation and an optimisation algorithm over multiple markers are developed to increase the precision of the method. Experimental tests were conducted in a towing tank. The results show that the proposed method is able to localise the underwater vehicle accurately.
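As a rough illustration of multi-marker pose estimation, the sketch below detects ArUco markers, solves one PnP pose per marker, and fuses the resulting translations. It assumes OpenCV 4.7+ with the ArUco module; the camera intrinsics, marker size, and the equal-weight averaging are placeholders standing in for the paper's noise model and optimisation algorithm.

```python
# Minimal sketch of multi-marker pose estimation (OpenCV >= 4.7 ArUco API).
# Intrinsics, marker size, and equal-weight fusion are placeholders standing
# in for the paper's noise model and optimisation.
import cv2
import numpy as np

K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])   # placeholder intrinsics
dist = np.zeros(5)
s = 0.10 / 2                       # half-side of a placeholder 10 cm marker
obj = np.array([[-s, s, 0], [s, s, 0], [s, -s, 0], [-s, -s, 0]], np.float32)

detector = cv2.aruco.ArucoDetector(
    cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50))
frame = cv2.imread("underwater_frame.png")                    # hypothetical frame
corners, ids, _ = detector.detectMarkers(frame)

# One PnP pose per detected marker, then fuse the marker-to-camera
# translations; the paper instead weights each estimate by its noise model.
t_est = []
for c in corners:
    ok, rvec, tvec = cv2.solvePnP(obj, c.reshape(4, 2), K, dist)
    if ok:
        t_est.append(tvec.ravel())
if t_est:
    print("fused translation (m):", np.mean(t_est, axis=0))
```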


eLife ◽  
2021 ◽  
Vol 10 ◽  
Author(s):  
Xuelong Sun ◽  
Shigang Yue ◽  
Michael Mangan

The central complex of the insect midbrain is thought to coordinate insect guidance strategies. Computational models can account for specific behaviours, but their applicability across sensory and task domains remains untested. Here we assess the capacity of our previous model (Sun et al., 2020) of visual navigation to generalise to olfactory navigation and its coordination with other guidance strategies in flies and ants. We show that fundamental to this capacity is the use of a biologically plausible neural copy-and-shift mechanism that ensures sensory information is presented in a format compatible with the insect steering circuit regardless of its source. Moreover, the same mechanism is shown to allow the transfer of cues from unstable/egocentric to stable/geocentric frames of reference, providing a first account of the mechanism by which foraging insects robustly recover from environmental disturbances. We propose that these circuits can be flexibly repurposed by different insect navigators to address their unique ecological needs.
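The copy-and-shift idea can be made concrete with a small numerical toy: a heading is encoded as a sinusoidal activity bump over ring columns, and a copy of the bump is rotated by a phase offset before being handed to a steering comparison. This sketch assumes the 8-column sinusoidal encoding common to central-complex models; the offset source and the Fourier-based shift are illustrative stand-ins, not the authors' circuit.

```python
# Toy sketch of copy-and-shift over a ring of heading neurons; the 8-column
# sinusoidal code is standard in central-complex models, the rest is a stand-in.
import numpy as np

N = 8                                    # columns in the ring
prefs = np.linspace(0, 2 * np.pi, N, endpoint=False)

def encode(theta):
    """Sinusoidal population code for a direction theta."""
    return np.cos(prefs - theta)

def copy_and_shift(activity, offset_rad):
    """Copy the bump and rotate it by a phase offset (a circular shift in the
    Fourier domain preserves sub-column resolution)."""
    spectrum = np.fft.rfft(activity)
    k = np.arange(spectrum.size)
    return np.fft.irfft(spectrum * np.exp(-1j * k * offset_rad), n=N)

current = encode(np.deg2rad(40))                     # compass heading bump
desired = copy_and_shift(current, np.deg2rad(-30))   # shift toward the cue
# A sinusoid-based steering circuit can read the angular error directly:
err = -np.angle(np.fft.rfft(desired)[1] / np.fft.rfft(current)[1])
print(f"signed steering error (desired - current): {np.degrees(err):.1f} deg")
```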


2021 ◽  
Vol 2021 ◽  
pp. 1-12
Author(s):  
Tianfang Xue ◽  
Haibin Yu

As deep reinforcement learning methods have made great progress in visual navigation, metalearning-based algorithms are gaining attention because they greatly improve the adaptability of moving agents. Under the metatraining mechanism, an initial model is typically trained as a metalearner on existing navigation tasks and then performs well in new scenes after relatively few adaptation trials. However, if a metalearner is overtrained on the former tasks, it may hardly generalize to navigating unfamiliar environments, since the initial model becomes biased towards the former ambient configuration. In order to train an impartial navigation model and enhance its generalization capability, we propose an Unbiased Model-Agnostic Metalearning (UMAML) algorithm for target-driven visual navigation. Inspired by entropy-based methods that maximize the uncertainty over output labels in classification tasks, we adopt inequality measures from economics as a concise metric for the loss deviation across unfamiliar tasks. By minimizing the inequality of task losses, an unbiased navigation model that does not overperform on particular scene types can be learnt under the Model-Agnostic Metalearning mechanism. The exploring agent follows a more balanced update rule and can gather navigation experience from the training environments. Several experiments have been conducted, and the results demonstrate that our approach outperforms other state-of-the-art metalearning navigation methods in generalization ability.
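The core idea, penalising unevenness across task losses, fits in a few lines. The sketch below uses the Theil index as the inequality measure; the paper draws on economics-style inequality measures but its exact choice, weighting `lam`, and the navigation model itself are not specified here, so treat this as an assumption-laden illustration of the meta-objective only.

```python
# Sketch of an inequality-regularised meta-objective (PyTorch); the Theil
# index and lam are illustrative assumptions, not the paper's exact setup.
import torch

def theil_index(losses, eps=1e-8):
    """Theil index T = mean((l/mu) * log(l/mu)); zero when all losses equal."""
    mu = losses.mean()
    r = losses / (mu + eps)
    return (r * torch.log(r + eps)).mean()

def unbiased_meta_loss(task_losses, lam=1.0):
    """Mean task loss plus an inequality penalty, discouraging a metalearner
    that over-fits some scene types at the expense of others."""
    losses = torch.stack(task_losses)
    return losses.mean() + lam * theil_index(losses)

# Usage: losses from, say, four navigation tasks after inner-loop adaptation.
task_losses = [torch.tensor(0.9), torch.tensor(0.4),
               torch.tensor(0.5), torch.tensor(1.6)]
print(unbiased_meta_loss(task_losses))  # larger lam => more balanced updates
```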


Mathematics ◽  
2021 ◽  
Vol 9 (23) ◽  
pp. 3048
Author(s):  
Boyu Kuang ◽  
Mariusz Wisniewski ◽  
Zeeshan A. Rana ◽  
Yifan Zhao

Visual navigation is an essential part of planetary rover autonomy. Rock segmentation has emerged as an important interdisciplinary topic spanning image processing, robotics, and mathematical modeling, and it is challenging for rover autonomy because of high computational consumption, real-time requirements, and annotation difficulty. This research proposes a rock segmentation framework and a rock segmentation network (NI-U-Net++) to aid the visual navigation of rovers. The framework consists of two stages: a pre-training process and a transfer-training process. The pre-training process applies a synthesis algorithm to generate synthetic images, then uses the generated images to pre-train NI-U-Net++. The synthesis algorithm increases the size of the image dataset and provides pixel-level masks, both of which are challenges for machine learning tasks. The pre-training process achieves state-of-the-art results compared with related studies, with an accuracy, intersection over union (IoU), Dice score, and root mean squared error (RMSE) of 99.41%, 0.8991, 0.9459, and 0.0775, respectively. The transfer-training process fine-tunes the pre-trained NI-U-Net++ on real-life images and achieves an accuracy, IoU, Dice score, and RMSE of 99.58%, 0.7476, 0.8556, and 0.0557, respectively. Finally, the transfer-trained NI-U-Net++ is integrated into a planetary rover navigation vision system and achieves a real-time performance of 32.57 frames per second (an inference time of 0.0307 s per frame). The framework requires manual annotation of only about 8% (183 images) of the 2250 images in the navigation vision, making it a labor-saving solution for rock segmentation tasks. The proposed rock segmentation framework and NI-U-Net++ improve on the performance of state-of-the-art models, and the synthesis algorithm improves the process of creating valid data for the rock segmentation challenge. All source code, datasets, and trained models of this research are openly available in Cranfield Online Research Data (CORD).
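For readers reproducing the evaluation, the four reported mask metrics are straightforward to compute. Below is a minimal sketch assuming binary NumPy masks; the random stand-in masks are placeholders, and NI-U-Net++ itself is not reproduced here.

```python
# Minimal sketch of the reported mask metrics (accuracy, IoU, Dice, RMSE) for
# binary segmentation masks; the stand-in masks are placeholders.
import numpy as np

def mask_metrics(pred, gt, eps=1e-8):
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    acc = (pred == gt).mean()
    iou = inter / (union + eps)
    dice = 2 * inter / (pred.sum() + gt.sum() + eps)
    rmse = np.sqrt(((pred.astype(float) - gt.astype(float)) ** 2).mean())
    return acc, iou, dice, rmse

rng = np.random.default_rng(0)
gt = rng.random((128, 128)) > 0.7             # stand-in rock mask
pred = gt ^ (rng.random((128, 128)) > 0.97)   # prediction with a little noise
print(mask_metrics(pred, gt))
```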


Sensors ◽  
2021 ◽  
Vol 21 (22) ◽  
pp. 7603
Author(s):  
Yonhon Ng ◽  
Hongdong Li ◽  
Jonghyuk Kim

This paper presents a novel dense optical-flow algorithm to solve the monocular simultaneous localisation and mapping (SLAM) problem for ground or aerial robots. Dense optical flow can effectively provide the ego-motion of the vehicle while enabling collision avoidance with potential obstacles. Existing research has not fully utilised the uncertainty of the optical flow; at most, an isotropic Gaussian density model has been used. We estimate the full uncertainty of the optical flow and propose a new eight-point algorithm based on the statistical Mahalanobis distance. Combined with pose-graph optimisation, the proposed method demonstrates enhanced robustness and accuracy on the public autonomous driving dataset (KITTI) and an aerial monocular dataset.
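The shift from an isotropic error to a Mahalanobis distance can be sketched as a covariance-weighted eight-point step: each epipolar constraint is scaled by the inverse standard deviation of its residual, propagated from the flow covariance. The code below is a minimal sketch under those assumptions, with one reweighting pass and synthetic data; it illustrates the idea, not the paper's full pipeline.

```python
# Sketch of a covariance-weighted (Mahalanobis) eight-point step; one
# reweighting pass and synthetic data, illustrating the idea only.
import numpy as np

def weighted_eight_point(x1, x2, covs):
    """x1, x2: Nx2 corresponding points; covs: Nx2x2 flow covariances for x2."""
    N = x1.shape[0]
    A = np.stack([np.kron(np.append(x2[i], 1), np.append(x1[i], 1))
                  for i in range(N)])          # epipolar constraints a_i . f = 0
    # Initial unweighted solve: f = smallest right singular vector of A.
    F = np.linalg.svd(A)[2][-1].reshape(3, 3)
    # Reweight each constraint by the variance of the residual r = x2h^T F x1h,
    # propagated through the Jacobian with respect to the noisy endpoint x2.
    w = np.empty(N)
    for i in range(N):
        J = (F @ np.append(x1[i], 1))[:2]      # dr/dx2
        w[i] = 1.0 / np.sqrt(J @ covs[i] @ J + 1e-12)
    F = np.linalg.svd(w[:, None] * A)[2][-1].reshape(3, 3)
    U, S, Vt = np.linalg.svd(F)                # enforce rank 2 as usual
    return U @ np.diag([S[0], S[1], 0]) @ Vt

rng = np.random.default_rng(0)
x1 = rng.uniform(-1, 1, (20, 2))
x2 = x1 + np.array([0.1, 0.05]) + rng.normal(0, 0.01, (20, 2))  # synthetic flow
covs = np.tile(np.diag([1e-4, 4e-4]), (20, 1, 1))               # anisotropic noise
print("rank-2 fundamental matrix estimate:\n", weighted_eight_point(x1, x2, covs))
```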


Plant Methods ◽  
2021 ◽  
Vol 17 (1) ◽  
Author(s):  
Yukun Yang ◽  
Jing Nie ◽  
Za Kan ◽  
Shuo Yang ◽  
Hangxing Zhao ◽  
...  

Abstract
Background: At present, residual film pollution in cotton fields is a serious problem. The commonly used recovery method is a manually driven recycling machine, which is labor-intensive and time-consuming. Developing a visual navigation system for residual film recovery would help improve work efficiency; its key technology is cotton stubble detection, since reliable stubble detection ensures the stability and reliability of the visual navigation system.
Methods: First, three types of texture features (GLCM, GLRLM, and LBP) are extracted from three types of inter-row images: stubbles, residual films, and broken leaves. Three classifiers (Random Forest, Back Propagation Neural Network, and Support Vector Machine) are then built to classify the sample images. Finally, the possibility of improving classification accuracy using texture features extracted from wavelet decomposition coefficients is discussed.
Results: The experiments show that the GLCM texture features of the original image perform best under the Back Propagation Neural Network classifier. Among the different wavelet bases, the texture features of the vertical coefficients of the coif3 wavelet decomposition, combined with the texture features of the original image, give the best classification: compared with the original image texture features alone, classification accuracy increases by 3.8%, sensitivity by 4.8%, and specificity by 1.2%.
Conclusions: The algorithm completes stubble detection at different locations, in different periods, and under abnormal driving conditions, showing that the wavelet-coefficient texture features combined with the original-image texture features form a useful fusion feature for stubble detection and can provide a reference for stubble detection in other crops.
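A minimal sketch of this feature pipeline is shown below, assuming scikit-image, PyWavelets, and scikit-learn; the random stand-in patches and labels are placeholders, and GLRLM, which lacks a standard scikit-image implementation, is omitted.

```python
# Minimal sketch of the fused texture-feature pipeline; stand-in patches and
# labels are placeholders, and GLRLM is omitted (no standard skimage version).
import numpy as np
import pywt
from skimage.feature import graycomatrix, graycoprops, local_binary_pattern
from sklearn.ensemble import RandomForestClassifier

def glcm_features(gray):
    """Contrast/homogeneity/energy/correlation from a co-occurrence matrix."""
    P = graycomatrix(gray, distances=[1], angles=[0, np.pi / 2],
                     levels=256, symmetric=True, normed=True)
    props = ("contrast", "homogeneity", "energy", "correlation")
    return np.hstack([graycoprops(P, p).ravel() for p in props])

def lbp_features(gray):
    """Normalised histogram of uniform local binary patterns."""
    lbp = local_binary_pattern(gray, P=8, R=1, method="uniform")
    return np.histogram(lbp, bins=10, range=(0, 10), density=True)[0]

def fused_features(gray):
    """Original-image texture plus texture of the coif3 vertical detail
    coefficients, the combination reported to classify best."""
    _, (_, cV, _) = pywt.dwt2(gray.astype(float), "coif3")
    cV8 = np.uint8(255 * (cV - cV.min()) / (np.ptp(cV) + 1e-8))
    return np.hstack([glcm_features(gray), lbp_features(gray), glcm_features(cV8)])

rng = np.random.default_rng(0)
patches = rng.integers(0, 256, size=(30, 64, 64), dtype=np.uint8)  # stand-ins
labels = rng.integers(0, 3, size=30)   # 0: stubble, 1: residual film, 2: leaf
X = np.stack([fused_features(p) for p in patches])
clf = RandomForestClassifier(n_estimators=200).fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```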


2021 ◽  
Author(s):  
Zuojun Fu ◽  
Yi Hou ◽  
Cangjian Liu ◽  
Yi Zhang ◽  
Shilin Zhou
