Unsettled Topics in Obstacle Detection for Autonomous Agricultural Vehicles

2021 ◽  
Author(s):  
Stewart Moorehead ◽  

Agricultural vehicles often drive over the same terrain day after day or year after year. Yet they must still detect whether a movable object, such as another vehicle or an animal, happens to be in their path, or whether environmental conditions have caused muddy spots or washouts. Obstacle detection is one of the major missing pieces needed to remove humans from today's highly automated agricultural machines and to enable the autonomous vehicles of the future. Unsettled Topics in Obstacle Detection for Autonomous Agricultural Vehicles examines the challenges of environmental object detection and collision prevention, including air obscurants, holes and soft spots, prior maps, vehicle geometry, standards, and close contact with large objects.

2021 ◽  
Vol 1 (3) ◽  
pp. 672-685
Author(s):  
Shreya Lohar ◽  
Lei Zhu ◽  
Stanley Young ◽  
Peter Graf ◽  
Michael Blanton

This study reviews obstacle detection technologies in vegetation for autonomous vehicles or robots. Autonomous vehicles used in agriculture and as lawn mowers face many environmental obstacles that are difficult for the vehicle's sensors to recognize. This review provides guidance on choosing appropriate sensors to detect obstacles through vegetation, based on experiments carried out in different agricultural fields. The experimental setups in the literature consist of sensors placed in front of obstacles, including a thermal camera; a red, green, blue (RGB) camera; a 360° camera; light detection and ranging (LiDAR); and radar. These sensors were used either in combination or individually on agricultural vehicles to detect objects hidden inside the agricultural field. The thermal camera successfully detected hidden objects, such as barrels, human mannequins, and humans, as did LiDAR in one experiment. The RGB camera and stereo camera were less effective at detecting hidden objects than protruding ones. Radar detects hidden objects easily but lacks resolution. Hyperspectral sensing systems can identify and classify objects, but they require substantial storage. To obtain clearer and more robust data on objects hidden in vegetation and in extreme weather, further experiments combining active and passive sensors should be performed under various climatic conditions.
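As a toy illustration of the active/passive combination the review recommends, the sketch below flags warm regions in a thermal frame and confirms them against a co-registered range map. The thresholds, array layout, and synthetic data are assumptions made for this example only, not results from the reviewed experiments.

```python
# Illustrative sketch (not from the reviewed papers): flag warm objects in a thermal
# frame and cross-check them against a co-registered range map from LiDAR/radar.
# All thresholds and the frame layout are assumptions for demonstration only.
import numpy as np

def thermal_hotspots(thermal_frame: np.ndarray, temp_threshold: float = 30.0) -> np.ndarray:
    """Return a boolean mask of pixels warmer than the background threshold (deg C)."""
    return thermal_frame > temp_threshold

def confirm_with_range(hotspot_mask: np.ndarray, range_map: np.ndarray,
                       max_range_m: float = 15.0) -> bool:
    """Declare an obstacle only if a hotspot coincides with a nearby range return,
    reducing false alarms from sun-heated vegetation."""
    nearby = range_map < max_range_m
    return bool(np.any(hotspot_mask & nearby))

if __name__ == "__main__":
    # Synthetic 8x8 example: a warm patch 5 m ahead amid cooler vegetation.
    thermal = np.full((8, 8), 22.0)
    thermal[3:5, 3:5] = 36.0              # body-temperature object hidden in crops
    ranges = np.full((8, 8), 40.0)
    ranges[3:5, 3:5] = 5.0                # LiDAR/radar return at the same location
    print(confirm_with_range(thermal_hotspots(thermal), ranges))  # True
```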


Author(s):  
Stefano Feraco ◽  
Angelo Bonfitto ◽  
Nicola Amati ◽  
Andrea Tonoli

This paper presents a redundant multi-object detection method for autonomous driving, exploiting a combination of Light Detection and Ranging (LiDAR) and stereo-camera sensors to detect different obstacles. These sensors feed distinct perception pipelines within a custom hardware/software architecture deployed on a self-driving electric racing vehicle. The resulting local map, referenced to the vehicle position, enables the development of local trajectory planning algorithms. The LiDAR-based algorithm exploits point-cloud segmentation for ground filtering and obstacle detection. The stereo-camera-based perception pipeline is built on a Single Shot Detector using a deep neural network. The presented algorithm is experimentally validated on the instrumented vehicle during different driving maneuvers.
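For readers unfamiliar with the ground-filtering step mentioned above, the following is a minimal, generic sketch of RANSAC plane fitting on a synthetic LiDAR cloud. It is not the authors' implementation; the thresholds and the test data are assumptions.

```python
# Generic RANSAC ground-plane fit: keep points far from the fitted plane as
# obstacle candidates. Thresholds and the synthetic cloud are illustrative only.
import numpy as np

def fit_ground_plane(points: np.ndarray, n_iters: int = 200, dist_thresh: float = 0.15,
                     rng=None):
    """RANSAC plane fit. Returns (plane (a, b, c, d), inlier mask) for ax + by + cz + d = 0."""
    rng = rng or np.random.default_rng(0)
    best_inliers = np.zeros(len(points), dtype=bool)
    best_plane = None
    for _ in range(n_iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:
            continue                      # degenerate (collinear) sample, try again
        normal /= norm
        d = -normal.dot(sample[0])
        inliers = np.abs(points @ normal + d) < dist_thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_plane = inliers, (*normal, d)
    return best_plane, best_inliers

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    ground = np.column_stack([rng.uniform(-10, 10, 500), rng.uniform(-10, 10, 500),
                              rng.normal(0.0, 0.03, 500)])        # flat ground
    obstacle = rng.normal([4.0, 0.0, 0.8], 0.2, size=(50, 3))     # raised object
    cloud = np.vstack([ground, obstacle])
    plane, ground_mask = fit_ground_plane(cloud)
    print(f"ground points: {ground_mask.sum()}, obstacle candidates: {(~ground_mask).sum()}")
```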


2021 ◽  
Vol 23 (06) ◽  
pp. 1288-1293
Author(s):  
Dr. S. Rajkumar ◽  
Aklilu Teklemariam ◽  
Addisalem Mekonnen ◽  
...  

Autonomous vehicles (AVs) reduce human intervention by perceiving the vehicle’s location with respect to the environment. In this regard, using multiple sensors that cover different aspects of environment perception enables not only detection but also tracking and classification of objects, leading to high safety and reliability. We therefore propose deploying hybrid multi-sensors such as radar, LiDAR, and camera sensors. Because the data acquired by these sensors overlap across their wide viewing angles, a convolutional neural network and Kalman filter (KF) based data fusion framework was implemented with the goal of providing a robust object detection system that avoids collisions on roads. The complete system, tested over 1000 road scenarios for real-time environment perception, showed that our hardware and software configuration outperformed numerous conventional systems. Hence, this system could find application in object detection, tracking, and classification in real-time environments.
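As a rough illustration of the Kalman filter side of such a fusion framework (the CNN detector is outside the scope of this sketch), the following constant-velocity filter fuses position measurements coming from any of the sensors. The matrices, noise levels, and per-sensor variances are assumed values, not the configuration used in the study.

```python
# Hedged sketch: a 2D constant-velocity Kalman filter updated with position
# measurements from radar, LiDAR, or a camera detection centroid. All numeric
# values are illustrative assumptions.
import numpy as np

class ConstantVelocityKF:
    def __init__(self, dt: float = 0.1):
        self.x = np.zeros(4)                       # state: [px, py, vx, vy]
        self.P = np.eye(4) * 10.0                  # state covariance
        self.F = np.array([[1, 0, dt, 0],
                           [0, 1, 0, dt],
                           [0, 0, 1, 0],
                           [0, 0, 0, 1]], float)   # constant-velocity motion model
        self.H = np.array([[1, 0, 0, 0],
                           [0, 1, 0, 0]], float)   # only position is measured
        self.Q = np.eye(4) * 0.01                  # process noise

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q

    def update(self, z: np.ndarray, meas_var: float):
        """Fuse one position measurement; meas_var encodes per-sensor confidence."""
        R = np.eye(2) * meas_var
        S = self.H @ self.P @ self.H.T + R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (z - self.H @ self.x)
        self.P = (np.eye(4) - K @ self.H) @ self.P

if __name__ == "__main__":
    kf = ConstantVelocityKF(dt=0.1)
    for t in range(20):
        kf.predict()
        kf.update(np.array([1.0 * t * 0.1, 0.5 * t * 0.1]), meas_var=0.05)  # e.g. LiDAR
    print("estimated velocity:", kf.x[2:])
```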


2019 ◽  
Vol 8 (2S8) ◽  
pp. 1598-1601

Autonomous vehicles are the future of transport and are expected to become a fully fledged reality within a decade. All the major players in the automotive industry are pressing hard on the transition from conventional to autonomous vehicles. The state of Karnataka, for instance, had approximately 205,200 registered taxis from 2014 to 2015, more than Madhya Pradesh's 174,900 registered cabs. This presents a great deal of opportunity for autonomous cars and a need for the enabling technologies. Autonomous cars reduce the accident rate, enable stress-free parking, save time, reduce traffic congestion, and improve fuel economy. They are sophisticated enough to readily relate physical objects and behavioural elements, such as speed limits and driving rules, between the physical world and its map. Autonomous vehicles have advanced to the point of updating their own information via the cloud, benefitting the systems of all other cars on the network. Machine vision is the most crucial aspect that gives an autonomous vehicle knowledge of its surroundings. This paper deals with the different machine vision approaches that help the vehicle with lane and obstacle detection. Methods of obstacle detection such as Single Object Detection and Tracking (SODT) and Multiple Object Detection and Tracking (MODT) are compared and contrasted. Despite the enormous advantages, some challenges of autonomy still need to be addressed. The challenges the field will face, especially in relevance to India, along with suggestions for improvement, are also discussed.
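The abstract contrasts single- and multiple-object detection and tracking (SODT/MODT) without detailing them; below is a generic sketch of the data-association step that distinguishes MODT, matching current detections to existing tracks by intersection-over-union (IoU). It is an illustrative baseline, not one of the methods compared in the paper.

```python
# Generic greedy IoU association for multi-object tracking; boxes are (x0, y0, x1, y1).
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes."""
    ix0, iy0 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix1, iy1 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix1 - ix0) * max(0, iy1 - iy0)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def associate(tracks, detections, iou_threshold=0.3):
    """Greedily match each track to the best unmatched detection above the threshold."""
    matches, used = {}, set()
    for track_id, track_box in tracks.items():
        best, best_iou = None, iou_threshold
        for det_idx, det_box in enumerate(detections):
            score = iou(track_box, det_box)
            if det_idx not in used and score > best_iou:
                best, best_iou = det_idx, score
        if best is not None:
            matches[track_id] = best
            used.add(best)
    return matches

if __name__ == "__main__":
    tracks = {0: (10, 10, 50, 50), 1: (100, 20, 140, 60)}
    detections = [(12, 11, 52, 49), (98, 22, 139, 61)]
    print(associate(tracks, detections))   # {0: 0, 1: 1}
```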


Sensors ◽  
2021 ◽  
Vol 21 (6) ◽  
pp. 2140
Author(s):  
De Jong Yeong ◽  
Gustavo Velasco-Hernandez ◽  
John Barry ◽  
Joseph Walsh

With the significant advancement of sensor and communication technology and the reliable application of obstacle detection techniques and algorithms, automated driving is becoming a pivotal technology that can revolutionize the future of transportation and mobility. Sensors are fundamental to the perception of vehicle surroundings in an automated driving system, and the use and performance of multiple integrated sensors can directly determine the safety and feasibility of automated driving vehicles. Sensor calibration is the foundation block of any autonomous system and its constituent sensors and must be performed correctly before sensor fusion and obstacle detection processes may be implemented. This paper evaluates the capabilities and the technical performance of sensors which are commonly employed in autonomous vehicles, primarily focusing on a large selection of vision cameras, LiDAR sensors, and radar sensors and the various conditions in which such sensors may operate in practice. We present an overview of the three primary categories of sensor calibration and review existing open-source calibration packages for multi-sensor calibration and their compatibility with numerous commercial sensors. We also summarize the three main approaches to sensor fusion and review current state-of-the-art multi-sensor fusion techniques and algorithms for object detection in autonomous driving applications. The current paper, therefore, provides an end-to-end review of the hardware and software methods required for sensor fusion object detection. We conclude by highlighting some of the challenges in the sensor fusion field and propose possible future research directions for automated driving systems.
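As a concrete illustration of why the extrinsic calibration discussed above matters, the following hedged sketch projects LiDAR points into a camera image using an assumed rotation/translation and pinhole intrinsics, so that detections from both sensors can be associated. All matrices and numbers are placeholders, not values from any reviewed calibration package.

```python
# Minimal LiDAR-to-camera projection under assumed calibration parameters.
import numpy as np

K = np.array([[800.0, 0.0, 640.0],
              [0.0, 800.0, 360.0],
              [0.0, 0.0, 1.0]])               # assumed pinhole intrinsics
R = np.eye(3)                                 # assumed LiDAR-to-camera rotation
t = np.array([0.0, -0.2, 0.1])                # assumed LiDAR-to-camera translation (m)

def project_lidar_to_image(points_lidar: np.ndarray) -> np.ndarray:
    """Project Nx3 LiDAR points into pixel coordinates; drops points behind the camera."""
    cam = points_lidar @ R.T + t               # transform into the camera frame
    cam = cam[cam[:, 2] > 0.1]                 # keep points in front of the lens
    uvw = cam @ K.T
    return uvw[:, :2] / uvw[:, 2:3]            # perspective divide -> pixel coordinates

if __name__ == "__main__":
    pts = np.array([[0.0, 0.0, 10.0], [1.0, 0.5, 8.0]])   # toy points, camera-axis convention
    print(project_lidar_to_image(pts))
```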


2020 ◽  
Vol 14 (11) ◽  
pp. 1410-1417 ◽  
Author(s):  
Alfred Daniel ◽  
Karthik Subburathinam ◽  
Bala Anand Muthu ◽  
Newlin Rajkumar ◽  
Seifedine Kadry ◽  
...  

Author(s):  
Mhafuzul Islam ◽  
Mashrur Chowdhury ◽  
Hongda Li ◽  
Hongxin Hu

Vision-based navigation of autonomous vehicles primarily depends on deep neural network (DNN) based systems, in which the controller obtains input from sensors/detectors, such as cameras, and produces a vehicle control output, such as a steering wheel angle, to navigate the vehicle safely in a roadway traffic environment. Typically, these DNN-based systems in the autonomous vehicle are trained through supervised learning; however, recent studies show that a trained DNN-based system can be compromised by perturbation or adverse inputs. Similarly, such perturbations can be introduced into the DNN-based systems of autonomous vehicles by unexpected roadway hazards, such as debris or roadblocks. In this study, we first introduce a hazardous roadway environment that can compromise the DNN-based navigational system of an autonomous vehicle and cause it to produce an incorrect steering wheel angle, which could lead to crashes resulting in fatality or injury. Then, we develop a DNN-based autonomous vehicle driving system using object detection and semantic segmentation to mitigate the adverse effect of this type of hazard and help the autonomous vehicle navigate safely around such hazards. We find that our developed DNN-based autonomous vehicle driving system, including hazardous object detection and semantic segmentation, improves the navigational ability of an autonomous vehicle to avoid a potential hazard by 21% compared with the traditional DNN-based autonomous vehicle driving system.
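To make the detection-plus-segmentation idea concrete, here is a deliberately simplified, hypothetical sketch of how a drivable-area mask and hazard boxes could gate a steering decision. It is not the authors' DNN architecture; every function name, threshold, and mapping in it is an assumption for illustration.

```python
# Hypothetical post-processing: steer toward the widest image region that is both
# drivable (per a segmentation mask) and free of detected hazard boxes.
import numpy as np

def column_is_clear(drivable_mask: np.ndarray, hazard_boxes, col: int) -> bool:
    """A column is 'clear' if it is mostly drivable and no hazard box covers it."""
    if drivable_mask[:, col].mean() < 0.5:
        return False
    return all(not (x0 <= col <= x1) for (x0, y0, x1, y1) in hazard_boxes)

def choose_steering(drivable_mask: np.ndarray, hazard_boxes, n_bins: int = 5) -> float:
    """Return a steering value in [-1, 1] toward the clearest region of the frame."""
    h, w = drivable_mask.shape
    centers = np.linspace(w // (2 * n_bins), w - w // (2 * n_bins), n_bins).astype(int)
    scores = [sum(column_is_clear(drivable_mask, hazard_boxes, c + d) for d in (-2, 0, 2))
              for c in centers]
    best = int(np.argmax(scores))
    return (centers[best] - w / 2) / (w / 2)   # negative = steer left, positive = right

if __name__ == "__main__":
    mask = np.ones((60, 100))                  # toy frame: everything drivable...
    boxes = [(55, 20, 100, 60)]                # ...except a hazard on the right half
    print(choose_steering(mask, boxes))        # steers toward the clear left side
```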


Author(s):  
Xing Xu ◽  
Minglei Li ◽  
Feng Wang ◽  
Ju Xie ◽  
Xiaohan Wu ◽  
...  

A human-like trajectory could give the occupants of an autonomous vehicle a safe and comfortable feeling, especially in corners. This paper focuses on planning a human-like trajectory along a section of road on a test track using an optimal control method, aiming to reflect natural driving behaviour and the passengers' sense of naturalness and comfort, which could improve the acceptability of driverless vehicles in the future. A mass-point vehicle dynamic model is built in a curvilinear coordinate system, and an optimal trajectory is then generated with an optimal control method. The optimal control problem is formulated and solved using the MATLAB tool GPOPS-II. Trials are carried out on the test track, the test data are collected and processed, and trajectory data for different corners are obtained. Different TLC calculations are derived and applied to different track sections. The human drivers' trajectories and the optimal line are then compared, using the TLC methods, to assess their correlation. The results show that the optimal trajectory follows a similar trend to the human trajectories when driving through a corner, although it is not perfectly aligned with the tested trajectories; this conforms with people's driving intuition and could improve occupant comfort when cornering, and in turn the acceptability of AVs in the automotive market. The drivers tend to move gradually to the outside of the lane after passing the apex when cornering on a road with hard lines on both sides.
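In lane-keeping research, TLC usually denotes time to line crossing; under that reading, the sketch below computes it with the simplest constant-lateral-velocity assumption. The formulation, parameter names, and sample numbers are assumptions made for illustration; the paper's own TLC calculations may well differ.

```python
# Naive first-order TLC: time until the vehicle reaches the lane boundary it is
# drifting toward, assuming constant lateral velocity. Illustrative only.
def tlc_constant_lateral_velocity(lateral_offset_m: float, lane_half_width_m: float,
                                  lateral_velocity_mps: float) -> float:
    """Positive offset/velocity means toward the left boundary; returns seconds."""
    if lateral_velocity_mps == 0.0:
        return float("inf")                     # no drift, no crossing
    # Distance remaining to the boundary on the side the vehicle is drifting toward.
    if lateral_offset_m * lateral_velocity_mps >= 0:
        remaining = lane_half_width_m - abs(lateral_offset_m)
    else:
        remaining = lane_half_width_m + abs(lateral_offset_m)
    return max(remaining, 0.0) / abs(lateral_velocity_mps)

if __name__ == "__main__":
    # 0.3 m left of centre, drifting left at 0.2 m/s in a 3.5 m lane (half width 1.75 m).
    print(tlc_constant_lateral_velocity(0.3, 1.75, 0.2))   # 7.25 s to the left boundary
```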


2021 ◽  
Vol 13 (4) ◽  
pp. 1962
Author(s):  
Timo Liljamo ◽  
Heikki Liimatainen ◽  
Markus Pöllänen ◽  
Riku Viri

Car ownership is one of the key factors affecting travel behaviour and is thus also essential in terms of sustainable mobility. This study examines car ownership and how people’s willingness to own a car may change in the future, considering the effects of public transport, Mobility as a Service (MaaS), and automated vehicles (AVs). Results of two citizen surveys conducted with representative samples (AV survey: N = 2036; MaaS survey: N = 1176) of Finns aged 18–64 are presented. The results show that 39% of respondents would not want or need to own a car if public transport connections were good enough, 58% if the described mobility service were available, and 65% if all vehicles in traffic were automated. Hence, car ownership can decrease as a result of the implementation of AVs and MaaS and of higher public transport quality of service. Current mobility behaviour has a strong correlation with car ownership, as respondents who use public transport frequently feel less will or need to own a car than others. Generally, women and younger people feel less will or need to own a car, while factors such as educational level and residential location seem to have a relatively small effect.


2021 ◽  
Vol 7 (4) ◽  
pp. 61
Author(s):  
David Urban ◽  
Alice Caplier

As demanding vision-based tasks like object detection and monocular depth estimation make their way into real-time applications, and as more lightweight solutions for autonomous vehicle navigation systems emerge, obstacle detection and collision prediction remain two very challenging tasks for small embedded devices like drones. We propose a novel lightweight and time-efficient vision-based solution to predict Time-to-Collision from a monocular video camera embedded in a smart-glasses device, as a module of a navigation system for visually impaired pedestrians. It consists of two modules: a static data extractor, a convolutional neural network that predicts the obstacle position and distance, and a dynamic data extractor that stacks the obstacle data from multiple frames and predicts the Time-to-Collision with a simple fully connected neural network. This paper focuses on the Time-to-Collision network’s ability to adapt, with supervised learning, to new sceneries with different types of obstacles.
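For context on the quantity being predicted, the following is a naive Time-to-Collision baseline computed from per-frame obstacle distances under a constant-closing-speed assumption. The paper's learned dynamic extractor replaces exactly this kind of hand-crafted estimate; the function and sample numbers here are hypothetical.

```python
# Naive TTC estimate from the last two distance readings, assuming constant closing speed.
def naive_ttc(distances_m, frame_rate_hz: float) -> float:
    """Estimate Time-to-Collision (seconds) from a sequence of obstacle distances."""
    if len(distances_m) < 2:
        raise ValueError("need at least two frames")
    closing_speed = (distances_m[-2] - distances_m[-1]) * frame_rate_hz   # m/s toward obstacle
    if closing_speed <= 0:
        return float("inf")        # obstacle not approaching
    return distances_m[-1] / closing_speed

if __name__ == "__main__":
    # Obstacle measured at 6.0 m then 5.8 m across consecutive frames at 10 Hz.
    print(naive_ttc([6.0, 5.8], frame_rate_hz=10.0))   # 2.9 s until collision
```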

