Evaluating a Visual Navigation System for a Digital Library

Author(s):  
A. Leouski ◽  
J. Allan

1989 ◽
Author(s):  
Juha Roning ◽  
Matti Pietikainen ◽  
Mikko Lindholm ◽  
Tapio Taipale

Author(s):  
Tomas Krajnik ◽  
Matias Nitsche ◽  
Sol Pedre ◽  
Libor Preucil ◽  
Marta E. Mejail

1989 ◽  
Vol 5 (4) ◽  
pp. iii
Author(s):  
Virge W McClure ◽  
Donald J Christian

2019 ◽  
Vol 4 (62) ◽  
Author(s):  
V. M. Sineglazov ◽  
V. S. Ischenko

Author(s):  
A. Volkova ◽  
P. W Gibbens

There is a growing demand for unmanned aerial systems as autonomous surveillance, exploration and remote sensing solutions. Among the key concerns for robust operation of these systems is the need to navigate the environment reliably without dependence on a global navigation satellite system (GNSS). This is of particular concern in Defence circles, but it is also a major safety issue for commercial operations. In these circumstances, the aircraft must navigate relying only on information from on-board passive sensors such as digital cameras. The autonomous feature-based visual system presented in this work offers a novel, integrated approach to the modelling and registration of visual features that responds to the specific needs of the navigation system. It detects visual features in Google Earth* imagery to build a feature database. The same algorithm then detects features in the on-board camera's video stream. On one level, this serves to localise the vehicle relative to the environment using Simultaneous Localisation and Mapping (SLAM). On a second level, it correlates the detected features with the database to localise the vehicle with respect to the inertial frame.

The performance of the presented visual navigation system was compared using satellite imagery from different years. Based on the comparison results, an analysis of the effects of seasonal, structural and qualitative changes in the imagery source on the performance of the navigation algorithm is presented.

* The algorithm is independent of the source of satellite imagery, and another provider can be used.
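As a rough illustration of the feature-registration step this abstract describes, the sketch below builds a descriptor database from georeferenced satellite tiles and matches on-board camera frames against it. It uses OpenCV ORB features; the function names, parameters and homography-based registration are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch of the two-level feature registration described above.
# Assumes OpenCV; names and parameters are illustrative, not from the paper.
import cv2
import numpy as np

orb = cv2.ORB_create(nfeatures=2000)

def build_reference_database(satellite_tiles):
    """Detect features in georeferenced satellite tiles and store their
    descriptors together with each tile's world-frame origin."""
    database = []
    for tile_image, tile_origin in satellite_tiles:
        keypoints, descriptors = orb.detectAndCompute(tile_image, None)
        if descriptors is not None:
            database.append((keypoints, descriptors, tile_origin))
    return database

def localise_against_database(frame, database, min_matches=20):
    """Match one on-board camera frame against the satellite feature
    database to register the vehicle in the inertial frame."""
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    kp_frame, desc_frame = orb.detectAndCompute(frame, None)
    if desc_frame is None:
        return None
    best = None
    for kp_tile, desc_tile, origin in database:
        matches = matcher.match(desc_frame, desc_tile)
        if len(matches) >= min_matches and (best is None or len(matches) > len(best[0])):
            best = (matches, kp_tile, origin)
    if best is None:
        return None  # fall back to SLAM-only (relative) localisation
    matches, kp_tile, origin = best
    src = np.float32([kp_frame[m.queryIdx].pt for m in matches])
    dst = np.float32([kp_tile[m.trainIdx].pt for m in matches])
    # Homography from the camera image to the georeferenced tile;
    # RANSAC rejects outlier correspondences.
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H, origin
```

When no tile yields enough matches, a system of this kind would keep navigating on the SLAM (relative) level alone, which mirrors the two-level structure the abstract describes.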


Plant Methods ◽  
2021 ◽  
Vol 17 (1) ◽  
Author(s):  
Yukun Yang ◽  
Jing Nie ◽  
Za Kan ◽  
Shuo Yang ◽  
Hangxing Zhao ◽  
...  

Abstract

Background: At present, residual film pollution in cotton fields is a serious problem. The commonly used recovery method is a manually driven recycling machine, which is laborious and time-consuming. Developing a visual navigation system for residual film recovery would help improve work efficiency. The key technology in such a visual navigation system is cotton stubble detection: reliable stubble detection ensures the stability and reliability of the visual navigation system.

Methods: First, three types of texture features (GLCM, GLRLM and LBP) are extracted from three types of images: stubble, residual film and broken leaves between rows. Three classifiers (Random Forest, Back Propagation Neural Network and Support Vector Machine) are then built to classify the sample images. Finally, the possibility of improving classification accuracy using texture features extracted from wavelet decomposition coefficients is discussed.

Results: The experiments show that the GLCM texture features of the original image perform best under the Back Propagation Neural Network classifier. Among the different wavelet bases, the texture features of the vertical coefficients of the coif3 wavelet decomposition, combined with the texture features of the original image, give the best classification effect. Compared with the original-image texture features alone, classification accuracy increases by 3.8%, sensitivity by 4.8% and specificity by 1.2%.

Conclusions: The algorithm can complete the task of stubble detection at different locations, in different periods and under abnormal driving conditions, which shows that wavelet-coefficient texture features combined with original-image texture features form a useful fused feature for detecting stubble and can provide a reference for stubble detection in other crops.
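The feature pipeline named in the Methods section can be sketched with common Python libraries. The example below extracts GLCM and LBP features (GLRLM is omitted here, as it has no widespread library implementation), fuses them with features from the vertical coefficients of a coif3 wavelet decomposition via PyWavelets, and trains an MLP as a stand-in for the Back Propagation Neural Network. All parameter choices are illustrative assumptions, not the paper's settings.

```python
# Minimal sketch of the fused texture-feature pipeline described above.
# Assumes scikit-image >= 0.19, PyWavelets and scikit-learn; parameters
# (distances, angles, LBP radius, network size) are illustrative.
import numpy as np
import pywt
from skimage.feature import graycomatrix, graycoprops, local_binary_pattern
from sklearn.neural_network import MLPClassifier

def glcm_features(gray):
    """Grey-level co-occurrence matrix statistics for one uint8 patch."""
    glcm = graycomatrix(gray, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

def lbp_features(gray, points=8, radius=1):
    """Histogram of uniform local binary patterns."""
    lbp = local_binary_pattern(gray, points, radius, method="uniform")
    hist, _ = np.histogram(lbp, bins=points + 2, range=(0, points + 2),
                           density=True)
    return hist

def fused_features(gray):
    """Original-image texture features fused with features from the
    vertical detail coefficients of a coif3 wavelet decomposition."""
    _, (_, cV, _) = pywt.dwt2(gray.astype(float), "coif3")
    # Rescale the vertical coefficients to 8-bit so GLCM applies.
    cV = (255 * (cV - cV.min()) / (np.ptp(cV) + 1e-9)).astype(np.uint8)
    return np.hstack([glcm_features(gray), lbp_features(gray),
                      glcm_features(cV), lbp_features(cV)])

def train_classifier(patches, labels):
    """patches: grayscale samples of stubble / residual film / broken
    leaves; labels: their class indices. MLP stands in for the BPNN."""
    X = np.vstack([fused_features(p) for p in patches])
    clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=2000)
    return clf.fit(X, labels)
```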


2021 ◽  
Vol 2021 ◽  
pp. 1-14
Author(s):  
Peichen Huang ◽  
Lixue Zhu ◽  
Zhigang Zhang ◽  
Chenyu Yang

A row-following system based on end-to-end learning was developed in this study for an agricultural robot in an apple orchard. Instead of dividing navigation into multiple traditional subtasks, the designed end-to-end learning method maps images from the camera directly to driving commands, which reduces the complexity of the navigation system. A sample-collection method for network training was also proposed, by which the robot can drive and collect data automatically without an operator or remote control; no hand-labelling of training samples is required. To improve network generalization, batch normalization, dropout, data augmentation and 10-fold cross-validation were adopted. In addition, internal representations of the network were analysed, and row-following tests were carried out. Test results showed that the visual navigation system based on end-to-end learning could guide the robot by adjusting its posture according to different scenarios, successfully passing through the tree rows.
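A network of the kind described here (camera images in, driving commands out, regularized with batch normalization and dropout) might look like the PyTorch sketch below. The architecture, input resolution and three-way command encoding are illustrative assumptions; the paper's exact network is not reproduced.

```python
# Hypothetical end-to-end row-following network in PyTorch; layer sizes
# and the left/straight/right command encoding are illustrative only.
import torch
import torch.nn as nn

class RowFollowNet(nn.Module):
    """Maps a camera image directly to a driving command."""
    def __init__(self, num_commands=3):  # e.g. left / straight / right
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2),
            nn.BatchNorm2d(16), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2),
            nn.BatchNorm2d(32), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2),
            nn.BatchNorm2d(64), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Dropout(0.5),  # dropout, as mentioned in the abstract
            nn.Linear(64, num_commands),
        )

    def forward(self, x):
        return self.head(self.features(x))

# One training step on automatically collected samples: each frame is
# labelled with the command the robot executed when it was captured.
model = RowFollowNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

images = torch.randn(8, 3, 120, 160)   # stand-in batch of camera frames
commands = torch.randint(0, 3, (8,))   # stand-in driving-command labels
loss = criterion(model(images), commands)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

Framing the output as a small discrete command set keeps the training labels recoverable from the robot's own logged actions, which is what makes the hands-off sample collection the abstract describes possible.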

