Adaptive Ultrasound-Based Tractor Localization for Semi-Autonomous Vineyard Operations

Agronomy ◽  
2021 ◽  
Vol 11 (2) ◽  
pp. 287
Author(s):  
Matteo Corno ◽  
Sara Furioli ◽  
Paolo Cesana ◽  
Sergio M. Savaresi

Autonomous driving is greatly impacting intensive and precision agriculture. In fact, some of the first commercial applications of autonomous driving were in the autonomous navigation of agricultural tractors in open fields. As the technology improves, the use of autonomous or semi-autonomous tractors in orchards and vineyards is becoming commercially viable. These scenarios pose greater challenges, as the vehicle needs to position itself with respect to a more cluttered environment. This paper presents an adaptive localization system for (semi-)autonomous navigation of agricultural tractors in vineyards based on ultrasonic automotive sensors. The system estimates the distance from the left vineyard row and the incidence angle. The paper shows that a single tuning of the localization algorithm does not provide robust performance in all vegetation scenarios. We solve this issue by implementing an Extended Kalman Filter (EKF) and by introducing an adaptive data selection stage that automatically adapts to the vegetation conditions and discards invalid measurements. An extensive experimental campaign validates the main features of the localization algorithm. In particular, we show that the Root Mean Square Error (RMSE) of the distance estimate is 16 cm, while the angular RMSE is 2.6 degrees.
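As a rough illustration of the EKF stage, the sketch below estimates a two-state vector (distance to the left row and incidence angle) from a single simulated ultrasonic range reading. The motion and measurement models are simplified assumptions for illustration, not the paper's actual models:

```python
import numpy as np

# Minimal EKF sketch for row-relative localization: state x = [d, theta],
# distance to the left row (m) and incidence angle (rad). Hypothetical
# kinematics: at forward speed v, d changes by -v*sin(theta)*dt.
def ekf_step(x, P, v, dt, z, R, Q):
    d, th = x
    # Predict: simple unicycle-style motion model (an assumption, not the paper's)
    x_pred = np.array([d - v * np.sin(th) * dt, th])
    F = np.array([[1.0, -v * np.cos(th) * dt],
                  [0.0, 1.0]])
    P_pred = F @ P @ F.T + Q
    # Update: assume the ultrasonic sensor measures the row distance directly
    H = np.array([[1.0, 0.0]])
    y = z - H @ x_pred                   # innovation
    S = H @ P_pred @ H.T + R             # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)  # Kalman gain
    x_new = x_pred + (K @ y).ravel()
    P_new = (np.eye(2) - K @ H) @ P_pred
    return x_new, P_new

x = np.array([1.0, 0.05])   # 1 m from the row, slight heading error
P = np.eye(2) * 0.1
Q = np.eye(2) * 1e-4
R = np.array([[0.02]])
x, P = ekf_step(x, P, v=1.0, dt=0.1, z=np.array([0.95]), R=R, Q=Q)
```

The adaptive data selection stage described in the abstract would act upstream of the update, gating which range readings are allowed to reach the filter.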

2021 ◽  
Vol 13 (8) ◽  
pp. 1487
Author(s):  
Peter Lanz ◽  
Armando Marino ◽  
Thomas Brinkhoff ◽  
Frank Köster ◽  
Matthias Möller

Countless people have lost their lives at Europe’s southern borders in recent years while attempting to cross to Europe in small rubber inflatables. This work examines satellite-based approaches for building future systems that can automatically detect those boats. We compare the performance of several automatic vessel detectors using real synthetic aperture radar (SAR) data from the X-band and C-band sensors on TerraSAR-X and Sentinel-1. The data were collected in an experimental campaign in which an empty boat lay on a lake’s surface, allowing us to analyse the influence of the main sensor parameters (incidence angle, polarization mode, spatial resolution) on the detectability of the inflatable. All detectors are implemented with a moving window and use local clutter statistics from the adjacent water surface. Among the tested detectors are well-known intensity-based (CA-CFAR), sublook-based (sublook correlation) and polarimetric (PWF, PMF, PNF, entropy, symmetry and iDPolRAD) approaches. Additionally, we introduce a new version of the volume-detecting iDPolRAD aimed at detecting surface anomalies, and compare two approaches to combining the volume and surface channels in one algorithm, producing two new high-performing detectors. The results are compared with receiver operating characteristic (ROC) curves, enabling us to compare detectors independently of threshold selection.
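A minimal cell-averaging CFAR of the kind mentioned above can be sketched as follows; the window sizes and threshold scale are illustrative values, not the paper's settings:

```python
import numpy as np

def ca_cfar(power, n_train=8, n_guard=2, scale=3.0):
    """Cell-averaging CFAR on a 1-D intensity profile: each cell is compared
    against scale * mean of the surrounding training cells, with guard cells
    excluded to keep target energy out of the clutter estimate.
    Returns a boolean detection mask."""
    n = len(power)
    detections = np.zeros(n, dtype=bool)
    half = n_train // 2 + n_guard
    for i in range(half, n - half):
        left = power[i - half : i - n_guard]
        right = power[i + n_guard + 1 : i + half + 1]
        clutter = np.mean(np.concatenate([left, right]))
        detections[i] = power[i] > scale * clutter
    return detections

# Flat clutter background (e.g. calm water) with one bright target cell
profile = np.ones(50)
profile[25] = 10.0
mask = ca_cfar(profile)
```

Sweeping `scale` and recording detection versus false-alarm rates is what produces the ROC curves used in the comparison.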


2020 ◽  
Vol 34 (07) ◽  
pp. 10901-10908 ◽  
Author(s):  
Abdullah Hamdi ◽  
Matthias Mueller ◽  
Bernard Ghanem

One major factor impeding more widespread adoption of deep neural networks (DNNs) is their lack of robustness, which is essential for safety-critical applications such as autonomous driving. This has motivated much recent work on adversarial attacks for DNNs, which mostly focus on pixel-level perturbations void of semantic meaning. In contrast, we present a general framework for adversarial attacks on trained agents, which covers semantic perturbations to the environment of the agent performing the task as well as pixel-level attacks. To do this, we re-frame the adversarial attack problem as learning a distribution of parameters that always fools the agent. In the semantic case, our proposed adversary (denoted as BBGAN) is trained to sample parameters that describe the environment with which the black-box agent interacts, such that the agent performs its dedicated task poorly in this environment. We apply BBGAN on three different tasks, primarily targeting aspects of autonomous navigation: object detection, self-driving, and autonomous UAV racing. On these tasks, BBGAN can generate failure cases that consistently fool a trained agent.
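The core idea, learning a distribution over environment parameters under which a black-box agent fails, can be illustrated with a much simpler stand-in for BBGAN: a cross-entropy-method search over a toy agent score. Both the agent and the search here are hypothetical illustrations, not the paper's method:

```python
import numpy as np

def agent_score(params):
    # Hypothetical black-box agent: performs well except near params ~ 2.0,
    # so low score = poor task performance = a strong semantic attack.
    return float(np.abs(params - 2.0).sum())

def cem_attack(dim=3, iters=30, pop=50, elite=10, seed=0):
    """Fit a Gaussian over environment parameters so that samples drawn
    from it make the agent perform poorly (cross-entropy method)."""
    rng = np.random.default_rng(seed)
    mu, sigma = np.zeros(dim), np.ones(dim) * 3.0
    for _ in range(iters):
        samples = rng.normal(mu, sigma, size=(pop, dim))
        scores = np.array([agent_score(s) for s in samples])
        best = samples[np.argsort(scores)[:elite]]  # lowest score = strongest attack
        mu, sigma = best.mean(axis=0), best.std(axis=0) + 1e-3
    return mu

adversarial_mu = cem_attack()  # concentrates near the agent's failure region
```

BBGAN replaces this simple Gaussian search with a trained generator, but the objective is the same: a distribution whose samples consistently fool the agent.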


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Chittaranjan Paital ◽  
Saroj Kumar ◽  
Manoj Kumar Muni ◽  
Dayal R. Parhi ◽  
Prasant Ranjan Dhal

Purpose
Smooth and autonomous navigation of a mobile robot in a cluttered environment is the main purpose of the proposed technique, which covers both localization and path planning of the mobile robot. These are important aspects of a mobile robot during autonomous navigation in any workspace. Navigation of mobile robots involves reaching the target from the start point while avoiding obstacles in a static or dynamic environment. Several techniques have already been proposed by researchers for the navigational problems of mobile robots, yet none confirms that the navigated path is optimal.

Design/methodology/approach
Therefore, a modified grey wolf optimization (GWO) controller is designed for autonomous navigation, one of the intelligent techniques for autonomous navigation of a wheeled mobile robot (WMR). GWO is a nature-inspired algorithm that mainly mimics the social hierarchy and hunting behavior of wolves in nature. It is modified to define the optimal positions and to obtain better control over the robot as it moves from source to target in a highly cluttered environment by negotiating obstacles. The controller is validated in the V-REP simulation software platform, coupled with real-time experiments in the laboratory using a Khepera-III robot.

Findings
During the experiments, it is observed that the proposed technique is efficient in motion control and path planning, as the robot reaches its target position without any collision during its movement. Further, the V-REP simulation results and the real-time experimental results are recorded and compared against each other, showing good agreement: the deviation between them is approximately 5%, which is an acceptable range for motion planning. Both path length and the time taken to reach the target are recorded and shown in the respective tables.

Originality/value
From the literature survey, it may be said that most approaches are implemented either as mathematical convergence studies or on a mobile robot without real-time experimental validation. Given the lack of clear evidence regarding the use of an MGWO (modified grey wolf optimization) controller for navigation of mobile robots in both environments, i.e., the simulation platform and the real-time experimental platform, this work serves as a guiding link for the use of similar approaches in other forms of robots.
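The canonical GWO position update that an MGWO controller builds on can be sketched on a toy goal-seeking objective as follows; the paper's modifications and the robot kinematics are not reproduced here:

```python
import numpy as np

# Minimal grey wolf optimization (GWO) sketch: wolves move toward the three
# best solutions (alpha, beta, delta), with a linearly decreasing factor `a`
# shifting the swarm from exploration to exploitation.
def gwo(objective, dim=2, wolves=20, iters=100, seed=1):
    rng = np.random.default_rng(seed)
    X = rng.uniform(-10, 10, size=(wolves, dim))
    for t in range(iters):
        fitness = np.array([objective(x) for x in X])
        alpha, beta, delta = X[np.argsort(fitness)[:3]]  # three best wolves
        a = 2.0 - 2.0 * t / iters                        # decreases 2 -> 0
        for i in range(wolves):
            X_new = np.zeros(dim)
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A = 2 * a * r1 - a
                C = 2 * r2
                D = np.abs(C * leader - X[i])
                X_new += leader - A * D                  # step toward each leader
            X[i] = X_new / 3.0
    fitness = np.array([objective(x) for x in X])
    return X[np.argmin(fitness)]

# Toy navigation objective: distance to a target waypoint at (3, -1)
best = gwo(lambda p: np.sum((p - np.array([3.0, -1.0])) ** 2))
```

In a navigation controller, the objective would additionally penalize proximity to obstacles, so the optimal "prey" position encodes a safe next waypoint.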


2020 ◽  
Vol 10 (18) ◽  
pp. 6152 ◽  
Author(s):  
Zhen Xu ◽  
Shuai Guo ◽  
Tao Song ◽  
Lingdong Zeng

To address the localization problem of mobile robots in construction scenes, a hybrid localization algorithm with adaptive weights is proposed, which effectively improves the robustness of mobile robot localization. Firstly, two indicators, localization accuracy and computational efficiency, are set to reflect the robustness of localization. Secondly, the construction scene is treated as an ongoing, continually changing scene, and robust localization of the mobile robot is achieved by combining measurements of artificial landmarks with matching based on generated features. Finally, the experimental results show that the localization accuracy reaches 8.22 mm and the maximum matching time is kept within 0.027 s. The hybrid localization algorithm based on adaptive weights achieves good robustness for tasks such as autonomous navigation and path planning in construction scenes.
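A common way to realize adaptive weights, shown here only as a sketch of the general idea rather than the paper's actual scheme, is inverse-variance weighting of the two position sources:

```python
import numpy as np

# Fuse two localization estimates (e.g. an artificial-landmark measurement
# and a feature-matching estimate), weighting each inversely to its variance.
def fuse(est_a, var_a, est_b, var_b):
    w_a = (1.0 / var_a) / (1.0 / var_a + 1.0 / var_b)
    w_b = 1.0 - w_a
    fused = w_a * est_a + w_b * est_b
    fused_var = 1.0 / (1.0 / var_a + 1.0 / var_b)  # always <= min(var_a, var_b)
    return fused, fused_var

# The more confident source (smaller variance) dominates the fused position
pos, var = fuse(np.array([1.00, 2.00]), 0.01,
                np.array([1.10, 2.05]), 0.04)
```

Letting the variances track each source's current reliability (e.g. landmark visibility, feature-match quality) is what makes the weights "adaptive" as the scene changes.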


Drones ◽  
2021 ◽  
Vol 5 (4) ◽  
pp. 107
Author(s):  
Xishuang Zhao ◽  
Jingzheng Chong ◽  
Xiaohan Qi ◽  
Zhihua Yang

Autonomous navigation of micro aerial vehicles in unknown environments requires not only exploring their time-varying surroundings but also ensuring complete flight safety at all times. Current research on estimating the potential exploration value tends to neglect safety issues, especially in cluttered environments with no prior knowledge. To address this issue, we propose a vision object-oriented autonomous navigation method for environment exploration, which develops a B-spline-based local trajectory re-planning algorithm by extracting spatial-structure information and selecting temporary target points. The proposed method is evaluated in a variety of cluttered environments, such as forests, building areas, and mines. The experimental results show that the proposed autonomous navigation system can effectively complete the global trajectory while always maintaining an appropriate safe distance from the obstacles in the environment.
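The B-spline representation underlying such local re-planning can be sketched with a uniform cubic segment evaluation; the control points here are illustrative waypoints, not values from the paper:

```python
import numpy as np

def cubic_bspline(ctrl, u):
    """Evaluate a uniform cubic B-spline. ctrl: (n, 2) control points
    (n >= 4); u in [0, n-3) selects the segment and local parameter.
    The curve is C2-continuous, which keeps re-planned trajectories smooth."""
    i = min(int(u), len(ctrl) - 4)   # segment index
    t = u - i                        # local parameter in [0, 1]
    basis = np.array([(1 - t) ** 3,
                      3 * t ** 3 - 6 * t ** 2 + 4,
                      -3 * t ** 3 + 3 * t ** 2 + 3 * t + 1,
                      t ** 3]) / 6.0
    return basis @ ctrl[i : i + 4]

# Four hypothetical waypoints skirting an obstacle
ctrl = np.array([[0.0, 0.0], [1.0, 2.0], [2.0, 2.0], [3.0, 0.0]])
mid = cubic_bspline(ctrl, 0.5)  # a point on the smooth local trajectory
```

Local re-planning then amounts to moving a few control points away from newly observed obstacles: only the nearby segments change, and the curve stays smooth.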


Electronics ◽  
2020 ◽  
Vol 9 (4) ◽  
pp. 560 ◽  
Author(s):  
Amira Mimouna ◽  
Ihsen Alouani ◽  
Anouar Ben Khalifa ◽  
Yassin El Hillali ◽  
Abdelmalik Taleb-Ahmed ◽  
...  

A reliable environment perception is a crucial task for autonomous driving, especially in dense traffic areas. Recent improvements and breakthroughs in scene understanding for intelligent transportation systems are mainly based on deep learning and the fusion of different modalities. In this context, we introduce OLIMP: A heterOgeneous Multimodal Dataset for Advanced EnvIronMent Perception. This is the first public, multimodal and synchronized dataset that includes UWB radar data, acoustic data, narrow-band radar data and images. OLIMP comprises 407 scenes and 47,354 synchronized frames, covering four categories: pedestrian, cyclist, car and tram. The dataset includes various challenges related to dense urban traffic, such as cluttered environments and varying weather conditions. To demonstrate the usefulness of the introduced dataset, we propose a fusion framework that combines the four modalities for multi-object detection. The obtained results are promising and motivate future research.


2020 ◽  
Vol 10 (14) ◽  
pp. 4924
Author(s):  
Donghoon Shin ◽  
Kang-moon Park ◽  
Manbok Park

This paper presents high definition (HD) map-based localization using advanced driver assistance system (ADAS) environment sensors for application to automated driving vehicles. A variety of autonomous driving technologies are being developed using expensive and high-performance sensors, but limitations exist due to several practical issues. For the application of autonomous driving cars in the near future, it is necessary to ensure autonomous driving performance by effectively utilizing sensors that are already installed for ADAS purposes. Additionally, the most common localization algorithm, which usually uses lane information only, becomes highly unstable in the absence of that information. Therefore, it is essential to ensure localization performance with other road features, such as guardrails, when there are no lane markings. In this study, we propose a localization algorithm that could be implemented in the near future using low-cost sensors and HD maps. The proposed localization algorithm consists of several stages: environment feature representation with low-cost sensors, digital map analysis and application, position correction based on map-matching, designated validation gates, and extended Kalman filter (EKF)-based localization filtering and fusion. Lane information is detected by monocular vision in front of the vehicle. Guardrails are perceived by radar by distinguishing low-speed object measurements and accumulating them over several steps to extract wall features. The lane and guardrail information is used to correct the host vehicle position via the iterative closest point (ICP) algorithm. The rigid transformation between the digital high definition map (HD map) and the environment features is calculated through ICP matching. Each map-matching-corrected vehicle position is selected and merged based on an EKF with double updating.
The proposed algorithm was verified through simulation based on actual driving log data.
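The rigid-alignment core of the ICP step, recovering the rotation and translation between matched environment features and HD-map features by the standard SVD (Kabsch) method, can be sketched as follows. Correspondence search and iteration, which full ICP adds around this step, are omitted:

```python
import numpy as np

def rigid_transform(src, dst):
    """Best-fit rotation R and translation t mapping matched 2-D points
    src -> dst in the least-squares sense (Kabsch/SVD method)."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)  # cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:             # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    return R, t

# Synthetic check: rotate hypothetical feature points by 10 deg and shift them
theta = np.deg2rad(10)
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
src = np.random.default_rng(0).random((20, 2))
dst = src @ R_true.T + np.array([0.5, -0.2])
R, t = rigid_transform(src, dst)
```

In the full pipeline, the recovered transform corrects the host vehicle pose, and the EKF then fuses the lane-based and guardrail-based corrections.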


Author(s):  
Ziyuan Zhong ◽  
Yuchi Tian ◽  
Baishakhi Ray

Deep Neural Networks (DNNs) are being deployed in a wide range of settings today, from safety-critical applications like autonomous driving to commercial applications involving image classification. However, recent research has shown that DNNs can be brittle to even slight variations of the input data. Therefore, rigorous testing of DNNs has gained widespread attention.

While DNN robustness under norm-bounded perturbations has received significant attention over the past few years, our knowledge is still limited when it comes to natural variants of the input images. These natural variants, e.g., a rotated or a rainy version of the original input, are especially concerning as they can occur naturally in the field without any active adversary and may lead to undesirable consequences. Thus, it is important to identify the inputs whose small variations may lead to erroneous DNN behaviors. The very few studies that have looked at DNN robustness under natural variants, however, focus on estimating the overall robustness of DNNs across all the test data rather than localizing such error-producing points. This work aims to bridge this gap.

To this end, we study the local per-input robustness properties of DNNs and leverage those properties to build a white-box (DeepRobust-W) and a black-box (DeepRobust-B) tool to automatically identify the non-robust points. Our evaluation of these methods on three DNN models spanning three widely used image classification datasets shows that they are effective in flagging points of poor robustness. In particular, DeepRobust-W and DeepRobust-B achieve F1 scores of up to 91.4% and 99.1%, respectively. We further show that DeepRobust-W can be applied to a regression problem in a domain beyond image classification. Our evaluation on three self-driving car models demonstrates that DeepRobust-W is effective in identifying points of poor robustness, with an F1 score of up to 78.9%.
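The underlying notion of a non-robust point, an input whose small natural variation flips the prediction, can be illustrated with a toy stand-in model; the classifier and the perturbation here are hypothetical, not the paper's DNNs or tools:

```python
# Flag inputs whose prediction flips under small natural variations.
# A 1-D threshold "model" and an additive shift stand in for a DNN and
# for natural variants such as rotation or rain.
def predict(x):
    return int(x > 0.0)

def is_robust(x, deltas=(-0.1, 0.1)):
    """Robust iff the prediction is unchanged for every small variation."""
    base = predict(x)
    return all(predict(x + d) == base for d in deltas)

points = [-1.0, -0.05, 0.05, 1.0]
flags = [is_robust(p) for p in points]  # inputs near the boundary are non-robust
```

The tools in the paper estimate this property per input for real DNNs, using white-box information (DeepRobust-W) or query access only (DeepRobust-B).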


2015 ◽  
Vol 27 (4) ◽  
pp. 401-409 ◽  
Author(s):  
Yusuke Fujino ◽  
Kentaro Kiuchi ◽  
Shogo Shimizu ◽  
Takayuki Yokota ◽  
...  

Constructed large-scale 3D map (figure)

The method we propose for constructing a large three-dimensional (3D) map uses an autonomous mobile robot whose navigation system enables the map to be constructed. Maps are vital to autonomous navigation, but constructing and updating them while ensuring their accuracy is challenging, because the navigation system itself usually requires accurate maps. We propose a navigation system that can explore areas not explored before. The proposed system mainly uses LIDARs for determining the robot's own position (localization), for recognizing the environment around the robot and creating local maps (environment recognition), and for avoiding moving objects (motion planning). We constructed a detailed 3D map automatically from autonomous driving data to improve navigation accuracy without increasing the operator's workload, and confirmed the feasibility of the proposed method through experiments.


Author(s):  
Sandra Boric ◽  
Edgar Schiebel ◽  
Christian Schlögl ◽  
Michaela Hildebrandt ◽  
Christina Hofer ◽  
...  

Autonomous driving has become an increasingly relevant issue for policymakers, industry, service providers, infrastructure companies, and science. This study shows how bibliometrics can be used to identify the major technological aspects of an emerging research field such as autonomous driving. We examine the most influential publications and identify research fronts of scientific activity up to 2017 based on a bibliometric literature analysis. Using the science mapping approach, publications in the research field of autonomous driving were retrieved from Web of Science and then structured using the bibliometric software BibTechMon by the AIT (Austrian Institute of Technology). At the time of our analysis, we identified four research fronts in the field of autonomous driving: (I) Autonomous Vehicles and Infrastructure, (II) Driver Assistance Systems, (III) Autonomous Mobile Robots, and (IV) IntraFace, i.e., automated facial image analysis. Researchers were working extensively on technologies that support navigation and the collection of data. Our analysis indicates that research was moving towards autonomous navigation and infrastructure in the urban environment. A noticeable number of publications focused on technologies for environment detection in automated vehicles. Still, research pointed to the technological challenges of making automated driving safe.

