Free Space Detection Algorithm Using Object Tracking for Autonomous Vehicles

Sensors ◽  
2021 ◽  
Vol 22 (1) ◽  
pp. 315
Author(s):  
Yeongwon Lee ◽  
Byungyong You

In this paper, we propose a new free space detection algorithm for autonomous vehicle driving. Previous free space detection algorithms often use only the location information of obstacles in every frame, without information on their speed. In this case, an inefficient path may be created because the behavior of the obstacles cannot be predicted. To compensate for this shortcoming, the proposed algorithm uses the speed information of the obstacles. Through object tracking, the dynamic behavior of obstacles around the vehicle is identified and predicted, and free space is detected based on this. Within the free space, areas in which driving is possible are distinguished from areas in which it is not, and a route is created according to the classification result. By comparing the path generated by the previous algorithm with the path generated by the proposed algorithm, we confirm that the proposed algorithm generates more efficient vehicle driving paths.
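As a rough illustration of the idea described above (not the authors' implementation), the sketch below predicts obstacle motion from tracked velocities over a short horizon and marks the affected cells as non-drivable; grid size, resolution, horizon, and the Obstacle fields are all assumptions.

```python
# Minimal sketch: project each tracked obstacle forward using its estimated
# velocity and mark the predicted cells as occupied; the rest is free space.
import numpy as np
from dataclasses import dataclass

@dataclass
class Obstacle:
    position: np.ndarray   # (x, y) in metres, ego frame
    velocity: np.ndarray   # (vx, vy) in m/s, estimated by the object tracker

def predicted_free_space(obstacles, grid_size=(100, 100), cell=0.5,
                         horizon=2.0, dt=0.2):
    """Return a boolean grid that is True where space is predicted to stay free."""
    occupied = np.zeros(grid_size, dtype=bool)
    for obs in obstacles:
        for t in np.arange(0.0, horizon + dt, dt):
            x, y = obs.position + obs.velocity * t
            i, j = int(x / cell), int(y / cell)
            if 0 <= i < grid_size[0] and 0 <= j < grid_size[1]:
                occupied[i, j] = True
    return ~occupied
```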

2020 ◽  
Vol 10 (7) ◽  
pp. 2372
Author(s):  
Byambaa Dorj ◽  
Sabir Hossain ◽  
Deok-Jin Lee

The purpose of the self-driving car is to minimize the number of casualties from traffic accidents. One cause of traffic accidents is improper vehicle speed, especially at road turns. If the road turn can be anticipated, such accidents can be avoided. This paper presents a cutting-edge curve lane detection algorithm based on the Kalman filter for the self-driving car. It uses parabola and circle equation models inside the Kalman filter to estimate the parameters of the curved lane. The proposed algorithm was tested with a self-driving vehicle. Experimental results show that the curve lane detection algorithm has a high success rate. The paper also presents simulation results of the autonomous vehicle, with steering and speed controlled using the results of the full curve lane detection algorithm.
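A minimal sketch of the parabola-model variant of this idea, assuming the lane is modelled as x = a*y^2 + b*y + c and that detected lane pixels serve as measurements; the noise values and interfaces are placeholders, not the paper's code.

```python
# Linear Kalman filter over the parabola coefficients [a, b, c]; each detected
# lane point (row y, column x_meas) is fused through the Jacobian H = [y^2, y, 1].
import numpy as np

class ParabolaLaneKF:
    def __init__(self):
        self.state = np.zeros(3)            # [a, b, c]
        self.P = np.eye(3)                  # state covariance
        self.Q = np.eye(3) * 1e-4           # process noise: parameters drift slowly
        self.r = 4.0                        # measurement noise (pixels^2)

    def predict(self):
        self.P += self.Q                    # constant-parameter motion model

    def update(self, y, x_meas):
        """Fuse one detected lane point into the coefficient estimate."""
        H = np.array([[y**2, y, 1.0]])      # maps [a, b, c] to the expected x
        S = H @ self.P @ H.T + self.r
        K = self.P @ H.T / S                # Kalman gain, shape (3, 1)
        self.state += (K * (x_meas - H @ self.state)).ravel()
        self.P = (np.eye(3) - K @ H) @ self.P
```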


Author(s):  
Liang Peng ◽  
Hong Wang ◽  
Jun Li

The safety of the intended functionality (SOTIF) has become one of the hottest topics in the field of autonomous driving. However, no testing and evaluation system for SOTIF performance has been proposed yet. Therefore, this paper proposes a framework based on the advanced You Only Look Once (YOLO) algorithm and the mean Average Precision (mAP) method to evaluate the object detection performance of the camera under SOTIF-related scenarios. First, a dataset is established, which contains road images with extreme weather and adverse lighting conditions. Second, the Monte Carlo dropout (MCD) method is used to analyze the uncertainty of the algorithm and draw the uncertainty region of the predicted bounding box. Then, the confidence of the algorithm is calibrated based on the uncertainty results so that the average confidence after calibration better reflects the real accuracy. The uncertainty results and the calibrated confidence are proposed for use in online risk identification. Finally, the confusion matrix is extended according to the several possible mistakes that the object detection algorithm may make, and the mAP is calculated as an index for offline evaluation and comparison. This paper offers suggestions for applying the MCD method to complex object detection algorithms and for relating the uncertainty of the algorithm to its confidence. The experimental results, verified on specific SOTIF scenarios, prove the feasibility and effectiveness of the proposed uncertainty acquisition approach for object detection algorithms, which provides a practical way to address perception-related SOTIF risk for autonomous vehicles.
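A hedged sketch of the Monte Carlo dropout step described above: dropout layers are kept stochastic at inference and the detector is run several times, so the spread of the predicted boxes approximates the model uncertainty. The `detector` interface (one box tensor per image) is an assumption for illustration.

```python
import torch

def enable_mc_dropout(model):
    """Put the model in eval mode but keep dropout layers stochastic."""
    model.eval()
    for m in model.modules():
        if isinstance(m, torch.nn.Dropout):
            m.train()

@torch.no_grad()
def mc_dropout_boxes(detector, image, n_samples=30):
    """Run the detector N times and return the mean box and per-coordinate spread."""
    enable_mc_dropout(detector)
    boxes = torch.stack([detector(image) for _ in range(n_samples)])  # (N, 4)
    return boxes.mean(dim=0), boxes.std(dim=0)
```

The per-coordinate standard deviation can then be drawn as the uncertainty region around the mean box, and large spreads flagged for online risk identification.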


2020 ◽  
Vol 17 (2) ◽  
pp. 172988142090257
Author(s):  
Dan Xiong ◽  
Huimin Lu ◽  
Qinghua Yu ◽  
Junhao Xiao ◽  
Wei Han ◽  
...  

High tracking frame rates have been achieved with traditional tracking methods, but these methods fail when the object template or model drifts, especially when the object disappears from the camera's field of view. To deal with this, combining tracking and detection has become increasingly popular for long-term tracking of unknown objects: the detector hardly drifts and can regain the object when it reappears. However, online machine learning and multiscale object detection require expensive computing resources and time, so combining tracking and detection sequentially, as in the Tracking-Learning-Detection (TLD) algorithm, is not ideal. Inspired by parallel tracking and mapping, this article proposes a framework of parallel tracking and detection for unknown object tracking. The object tracking algorithm is split into two separate tasks, tracking and detection, which are processed in two different threads. One thread handles tracking between consecutive frames at a high processing speed. The other thread runs online learning algorithms to construct a discriminative model for object detection. Using our proposed framework, high tracking frame rates and the ability to correct and recover a failed tracker can be combined effectively. Furthermore, our framework provides open interfaces to integrate state-of-the-art object tracking and detection algorithms. We carry out an evaluation of several popular tracking and detection algorithms using the proposed framework. The experimental results show that different tracking and detection algorithms can be integrated and compared effectively by our proposed framework, and that robust and fast long-term object tracking can be realized.
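The two-thread split can be sketched roughly as below; the `tracker`, `camera`, and `detector` interfaces are assumptions, not the article's open interfaces, and the queues simply illustrate how the slow detector corrects the fast tracker.

```python
import queue
import threading

frame_queue = queue.Queue(maxsize=1)     # latest frame offered to the detector
correction_queue = queue.Queue()         # detector results fed back to the tracker

def tracking_thread(tracker, camera):
    """High-rate loop: frame-to-frame tracking, applying detector corrections."""
    for frame in camera:
        tracker.track(frame)
        try:
            frame_queue.put_nowait(frame)              # drop the frame if the detector is busy
        except queue.Full:
            pass
        while not correction_queue.empty():
            tracker.reset(correction_queue.get())      # recover from drift or object loss

def detection_thread(detector):
    """Low-rate loop: online learning and multiscale detection on selected frames."""
    while True:
        frame = frame_queue.get()
        detection = detector.detect_and_learn(frame)
        if detection is not None:
            correction_queue.put(detection)

def run(tracker, camera, detector):
    threading.Thread(target=detection_thread, args=(detector,), daemon=True).start()
    tracking_thread(tracker, camera)     # run the fast loop in the main thread
```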


Author(s):  
Balasriram Kodi ◽  
Manimozhi M

In the field of autonomous vehicles, lane detection and control play an important role. In autonomous driving, the vehicle has to follow the path to avoid collisions. A deep learning technique is used to detect the curved path for autonomous vehicles. In this paper, a customized lane detection algorithm was implemented to detect the curvature of the lane. A ground-truth labeling toolbox for deep learning is used to detect the curved path for the autonomous vehicle. By mapping points frame to frame, 80-90% computational efficiency and accuracy are achieved in detecting the path.
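A small, assumed sketch of one generic way to turn labelled lane points into a curvature estimate per frame (not the paper's exact pipeline): fit a second-order polynomial x = f(y) and evaluate its radius of curvature near the vehicle.

```python
import numpy as np

def curvature_radius(lane_xs, lane_ys, y_eval=None):
    """Fit x = a*y^2 + b*y + c to lane points and return the radius of curvature."""
    a, b, _ = np.polyfit(lane_ys, lane_xs, 2)
    y = np.max(lane_ys) if y_eval is None else y_eval   # point closest to the vehicle
    return (1 + (2 * a * y + b) ** 2) ** 1.5 / abs(2 * a)
```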


Sensors ◽  
2021 ◽  
Vol 21 (22) ◽  
pp. 7769
Author(s):  
Wansik Choi ◽  
Jun Heo ◽  
Changsun Ahn

Road surface detection is important for the safe driving of autonomous vehicles. This is because knowledge of road surface conditions, in particular dry, wet, and snowy surfaces, should be considered in the driving control of autonomous vehicles. With the rise of deep learning technology, methods using deep neural networks (DNNs) have become widely used for developing road surface detection algorithms. To apply a DNN to road surface detection, the dataset should be large and well balanced for accurate and robust performance. However, most images of road surfaces obtained through the usual data collection processes are not well balanced. Most of the collected surface images tend to show dry surfaces because road surface conditions are highly correlated with weather conditions. This is a challenge in developing road surface detection algorithms. This paper proposes a method to balance the imbalanced dataset using CycleGAN to improve the performance of a road surface detection algorithm. CycleGAN is used to artificially generate images of wet and snow-covered roads. The road surface detection algorithm trained on the CycleGAN-augmented dataset achieved a better IoU than the one trained on the imbalanced basic dataset. This result shows that CycleGAN-generated images can be used as training data for road surface detection to improve the performance of a DNN, and this method can make the data acquisition process easier.
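A hedged sketch of the balancing step: a pre-trained CycleGAN generator (assumed here to exist as `G_dry2snow`, dry to snowy) translates surplus dry-road images into synthetic snowy ones until the minority class is filled out.

```python
import torch

@torch.no_grad()
def balance_with_cyclegan(G_dry2snow, dry_images, n_needed):
    """Return `n_needed` synthetic snowy-road images generated from dry-road tensors."""
    G_dry2snow.eval()
    synthetic = []
    for img in dry_images[:n_needed]:           # img: (3, H, W) tensor scaled to [-1, 1]
        fake = G_dry2snow(img.unsqueeze(0))     # translate dry -> snowy
        synthetic.append(fake.squeeze(0).cpu())
    return synthetic
```

The generated images are then appended to the training set so that dry, wet, and snowy classes appear in comparable proportions before training the detection DNN.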


Author(s):  
Mhafuzul Islam ◽  
Mashrur Chowdhury ◽  
Hongda Li ◽  
Hongxin Hu

Vision-based navigation of autonomous vehicles primarily depends on deep neural network (DNN) based systems in which the controller obtains input from sensors/detectors, such as cameras, and produces a vehicle control output, such as a steering wheel angle, to navigate the vehicle safely in a roadway traffic environment. Typically, these DNN-based systems in the autonomous vehicle are trained through supervised learning; however, recent studies show that a trained DNN-based system can be compromised by perturbations or adverse inputs. Similarly, such perturbations can be introduced into the DNN-based systems of autonomous vehicles by unexpected roadway hazards, such as debris or roadblocks. In this study, we first introduce a hazardous roadway environment that can compromise the DNN-based navigational system of an autonomous vehicle and produce an incorrect steering wheel angle, which could cause crashes resulting in fatality or injury. Then, we develop a DNN-based autonomous vehicle driving system using object detection and semantic segmentation to mitigate the adverse effect of this type of hazard, helping the autonomous vehicle navigate safely around such hazards. We find that our developed DNN-based autonomous vehicle driving system, including hazardous object detection and semantic segmentation, improves the navigational ability of an autonomous vehicle to avoid a potential hazard by 21% compared with the traditional DNN-based autonomous vehicle driving system.
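Purely as an illustration of how a hazard mask from detection/segmentation could gate an end-to-end steering output (all names, thresholds, and sign conventions below are assumptions, not the study's system):

```python
import numpy as np

def safe_steering(base_angle, hazard_mask):
    """hazard_mask: HxW boolean array, True where a roadway hazard is segmented."""
    h, w = hazard_mask.shape
    corridor = hazard_mask[h // 2:, w // 3: 2 * w // 3]   # region directly ahead
    if not corridor.any():
        return base_angle                                  # no hazard: keep the DNN output
    left = hazard_mask[h // 2:, : w // 2].sum()
    right = hazard_mask[h // 2:, w // 2:].sum()
    bias = 0.1                                             # placeholder steering offset (rad)
    return base_angle - bias if left > right else base_angle + bias
```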


Author(s):  
Xing Xu ◽  
Minglei Li ◽  
Feng Wang ◽  
Ju Xie ◽  
Xiaohan Wu ◽  
...  

A human-like trajectory could give a safe and comfortable feeling to the occupants of an autonomous vehicle, especially in corners. This paper focuses on planning a human-like trajectory along a section of road on a test track using an optimal control method that reflects natural driving behaviour and the passengers' sense of naturalness and comfort, which could improve the acceptability of driverless vehicles in the future. A point-mass vehicle dynamics model is built in the curvilinear coordinate system, and an optimal trajectory is then generated using an optimal control method. The optimal control problem is formulated and solved using the Matlab tool GPOPS-II. Trials are carried out on a test track, the test data are collected and processed, and the trajectory data in different corners are obtained. Different time-to-line-crossing (TLC) calculations are derived and applied to different track sections. After that, the human drivers' trajectories and the optimal line are compared using the TLC methods to assess their correlation. The results show that the optimal trajectory follows a similar trend to the human trajectories when driving through a corner, although it is not perfectly aligned with the tested trajectories; this conforms with people's driving intuition and could improve the occupants' comfort when driving in a corner, as well as the acceptability of AVs in the automotive market in the future. The driver tends to move gradually to the outside of the lane after passing the apex when driving through corners on roads with hard lines on both sides.
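Assuming TLC here refers to time-to-line-crossing, the simplest of its common formulations can be written as below; the paper derives several TLC variants, and this one-line version is shown only to make the comparison metric concrete.

```python
def tlc_lateral(distance_to_line_m, lateral_speed_mps):
    """Time-to-line-crossing: remaining lateral distance to the lane edge divided by
    the lateral speed toward that edge; infinite if not moving toward the line."""
    if lateral_speed_mps <= 0.0:
        return float("inf")
    return distance_to_line_m / lateral_speed_mps
```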


Sensors ◽  
2021 ◽  
Vol 21 (6) ◽  
pp. 2244
Author(s):  
S. M. Yang ◽  
Y. A. Lin

Safe path planning for obstacle avoidance in autonomous vehicles has been developed. Based on the Rapidly-exploring Random Trees (RRT) algorithm, an improved algorithm integrating path pruning, smoothing, and optimization with geometric collision detection is shown to improve planning efficiency. Path pruning, a prerequisite to path smoothing, removes the redundant points generated by the random trees for a new path without colliding with the obstacles. Path smoothing modifies the path so that it becomes continuously differentiable, with curvature implementable by the vehicle. Optimization selects a near-optimal path of the shortest distance among the feasible paths for motion efficiency. In the experimental verification, both a pure pursuit steering controller and a proportional-integral speed controller are applied to keep an autonomous vehicle tracking the path planned by the improved RRT algorithm. It is shown that the vehicle can track the path efficiently and reach the destination safely, with an average tracking control deviation of 5.2% of the vehicle width. The path planning is also applied to lane changes, and the average deviation from the lane during and after lane changes remains within 8.3% of the vehicle width.
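A minimal sketch of the pruning step (interfaces assumed, not the paper's code): walk along the RRT waypoints and drop any intermediate point whose removal still leaves a collision-free straight segment.

```python
def prune_path(path, segment_is_free):
    """path: list of (x, y) waypoints from the RRT;
    segment_is_free(p, q) -> True if the straight segment p-q avoids all obstacles."""
    pruned = [path[0]]
    anchor = 0
    for i in range(2, len(path)):
        if not segment_is_free(path[anchor], path[i]):
            pruned.append(path[i - 1])      # keep the last waypoint still reachable in a straight line
            anchor = i - 1
    pruned.append(path[-1])
    return pruned
```

The shortened waypoint list is then handed to the smoothing stage, which is where continuity and curvature constraints are enforced.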


Author(s):  
Samuel Humphries ◽  
Trevor Parker ◽  
Bryan Jonas ◽  
Bryan Adams ◽  
Nicholas J Clark

Quick identification of buildings and roads is critical for the execution of tactical US military operations in an urban environment. To this end, a gridded, referenced satellite image of an objective, often referred to as a gridded reference graphic or GRG, has become a standard product developed during intelligence preparation of the environment. At present, operational units identify key infrastructure by hand through the work of individual intelligence officers. Recent advances in convolutional neural networks, however, allow this process to be streamlined through the use of object detection algorithms. In this paper, we describe an object detection algorithm designed to quickly identify and label both buildings and road intersections present in an image. Our work leverages both the U-Net architecture and the SpaceNet data corpus to produce an algorithm that accurately identifies a large breadth of buildings and different types of roads. In addition to predicting buildings and roads, our model numerically labels each building by means of a contour-finding algorithm. Most importantly, the dual U-Net model is capable of predicting buildings and roads on a diverse set of test images and of using these predictions to produce clean GRGs.
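A hedged sketch of the building-numbering step: threshold the U-Net building mask, find contours with OpenCV, and place a sequential label at each building's centroid. Variable names and the threshold are assumptions.

```python
import cv2
import numpy as np

def label_buildings(building_prob, threshold=0.5):
    """building_prob: HxW array of per-pixel building probabilities from the U-Net."""
    mask = (building_prob > threshold).astype(np.uint8) * 255
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    labelled = cv2.cvtColor(mask, cv2.COLOR_GRAY2BGR)
    for idx, cnt in enumerate(contours, start=1):
        m = cv2.moments(cnt)
        if m["m00"] == 0:
            continue
        cx, cy = int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])
        cv2.putText(labelled, str(idx), (cx, cy),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 255), 1)
    return labelled, len(contours)
```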

