Real-Time Object Detection Algorithm of Autonomous Vehicles Based on Improved YOLOv5s

Author(s):  
Baoping Xiao ◽  
Jinghua Guo ◽  
Zhifei He
2021 ◽  
Vol 11 (13) ◽  
pp. 6016
Author(s):  
Jinsoo Kim ◽  
Jeongho Cho

For autonomous vehicles, it is critical to be aware of the driving environment to avoid collisions and drive safely. The recent evolution of convolutional neural networks has contributed significantly to accelerating the development of object detection techniques that enable autonomous vehicles to handle rapid changes in various driving environments. However, collisions in an autonomous driving environment can still occur due to undetected obstacles and various perception problems, particularly occlusion. Thus, we propose a robust object detection algorithm for environments in which objects are truncated or occluded, employing RGB images and light detection and ranging (LiDAR) bird’s eye view (BEV) representations. This structure combines independent detection results obtained in parallel through “you only look once” networks using an RGB image and a height map converted from the BEV representation of LiDAR’s point cloud data (PCD). The region proposal of an object is determined via non-maximum suppression, which suppresses the bounding boxes of adjacent regions. A performance evaluation of the proposed scheme was performed using the KITTI vision benchmark suite dataset. The results demonstrate that detection accuracy with integrated PCD BEV representations is superior to that obtained with an RGB camera alone. In addition, robustness is improved: detection accuracy is significantly enhanced even when the target objects are partially occluded from the front view, demonstrating that the proposed algorithm outperforms the conventional RGB-based model.
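The fusion step described above, in which bounding boxes from the RGB and BEV branches are merged via non-maximum suppression, can be sketched as follows (a minimal single-class greedy NMS; the `[x1, y1, x2, y2]` box format and the IoU threshold are illustrative assumptions, not details from the paper):

```python
import numpy as np

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy non-maximum suppression over [x1, y1, x2, y2] boxes."""
    order = np.argsort(scores)[::-1]  # indices, highest score first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        # intersection of the top box with all remaining boxes
        xx1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        yy1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        xx2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        yy2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.maximum(0, xx2 - xx1) * np.maximum(0, yy2 - yy1)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        areas = (boxes[order[1:], 2] - boxes[order[1:], 0]) * \
                (boxes[order[1:], 3] - boxes[order[1:], 1])
        iou = inter / (area_i + areas - inter)
        order = order[1:][iou <= iou_thresh]  # drop overlapping boxes
    return keep
```

In the proposed scheme, the candidate boxes from both detectors would simply be concatenated before this suppression step, so whichever branch scores an object higher wins the overlapping region.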


2019 ◽  
Vol 2019 ◽  
pp. 1-9 ◽  
Author(s):  
Hai Wang ◽  
Xinyu Lou ◽  
Yingfeng Cai ◽  
Yicheng Li ◽  
Long Chen

Vehicle detection is one of the most important environment perception tasks for autonomous vehicles. Traditional vision-based vehicle detection methods are not accurate enough, especially for small and occluded targets, while light detection and ranging- (lidar-) based methods are good at detecting obstacles but are time-consuming and have a low classification rate for different target types. To address these shortcomings and make full use of the depth information of lidar and the obstacle classification ability of vision, this work proposes a real-time vehicle detection algorithm which fuses vision and lidar point cloud information. Firstly, obstacles are detected by the grid projection method using the lidar point cloud. Then, the obstacles are mapped to the image to obtain several separated regions of interest (ROIs). After that, the ROIs are expanded based on a dynamic threshold and merged to generate the final ROI. Finally, a deep learning method named You Only Look Once (YOLO) is applied to the ROI to detect vehicles. Experimental results on the KITTI dataset demonstrate that the proposed algorithm has high detection accuracy and good real-time performance. Compared with detection based on YOLO alone, the mean average precision (mAP) is increased by 17%.
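The grid projection step in the first stage of this pipeline can be sketched as below: lidar points are binned into a ground-plane grid, and a cell is marked as an obstacle when it contains a point above a ground-height threshold. Cell size, ranges, and the threshold here are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

def occupancy_grid(points, cell=1.0, x_range=(0, 40), y_range=(-20, 20), h_min=0.3):
    """Project lidar points (N x 3: x forward, y left, z up) onto a
    ground-plane grid; a cell is an obstacle if any point is above h_min."""
    nx = int((x_range[1] - x_range[0]) / cell)
    ny = int((y_range[1] - y_range[0]) / cell)
    grid = np.zeros((nx, ny), dtype=bool)
    # keep points inside the grid extent and above the ground threshold
    m = ((points[:, 0] >= x_range[0]) & (points[:, 0] < x_range[1]) &
         (points[:, 1] >= y_range[0]) & (points[:, 1] < y_range[1]) &
         (points[:, 2] > h_min))
    ix = ((points[m, 0] - x_range[0]) / cell).astype(int)
    iy = ((points[m, 1] - y_range[0]) / cell).astype(int)
    grid[ix, iy] = True
    return grid
```

Connected clusters of occupied cells would then be projected into the image via the camera calibration to produce the separated ROIs the abstract describes.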


2020 ◽  
Vol 2020 ◽  
pp. 1-12
Author(s):  
Zuopeng Zhao ◽  
Zhongxin Zhang ◽  
Xinzheng Xu ◽  
Yi Xu ◽  
Hualin Yan ◽  
...  

It is necessary to improve the performance of object detection algorithms on resource-constrained embedded devices through lightweight design. To further improve recognition accuracy for small target objects, this paper integrates 5 × 5 depthwise separable convolution kernels on the basis of the MobileNetV2-SSDLite model, extracts features from two additional convolutional layers for detection, and designs a new lightweight object detection network, the Lightweight Microscopic Detection Network (LMS-DN). The network can be deployed on embedded devices such as the NVIDIA Jetson TX2. Experimental results show that LMS-DN requires fewer parameters and lower computational cost while achieving higher recognition accuracy and stronger anti-interference capability than other popular object detection models.
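The lightweight gain of depthwise separable convolutions comes from factorizing a standard convolution into a per-channel (depthwise) filter followed by a 1 × 1 (pointwise) channel mix. A quick parameter count, with illustrative channel sizes rather than the actual LMS-DN configuration, shows the scale of the reduction:

```python
def conv_params(k, c_in, c_out):
    """Parameters of a standard k x k convolution (biases ignored)."""
    return k * k * c_in * c_out

def sep_conv_params(k, c_in, c_out):
    """Depthwise k x k filter per input channel, then 1 x 1 pointwise."""
    return k * k * c_in + c_in * c_out

std = conv_params(5, 64, 128)      # 5*5*64*128 = 204800
sep = sep_conv_params(5, 64, 128)  # 5*5*64 + 64*128 = 9792
print(std, sep, round(std / sep, 1))
```

For a 5 × 5 kernel at these channel widths the factorized form uses roughly one twentieth of the parameters, which is why MobileNet-style backbones can afford the larger 5 × 5 receptive field that helps with small targets.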


2021 ◽  
Vol 7 (8) ◽  
pp. 145
Author(s):  
Antoine Mauri ◽  
Redouane Khemmar ◽  
Benoit Decoux ◽  
Madjid Haddad ◽  
Rémi Boutteau

For smart mobility, autonomous vehicles, and advanced driver-assistance systems (ADASs), perception of the environment is an important task in scene analysis and understanding. Better perception of the environment allows for enhanced decision making, which, in turn, enables very high-precision actions. To this end, we introduce in this work a new real-time deep learning approach for 3D multi-object detection for smart mobility, not only on roads but also on railways. To obtain the 3D bounding boxes of objects, we modified a proven real-time 2D detector, YOLOv3, to predict 3D object localization, object dimensions, and object orientation. Our method has been evaluated on KITTI’s road dataset as well as on our own hybrid virtual road/rail dataset acquired from the video game Grand Theft Auto (GTA) V. Evaluation on these two datasets shows good accuracy and, more importantly, that the method can be used under real-time conditions in both road and rail traffic environments. Through our experimental results, we also show the importance of accurate prediction of the regions of interest (RoIs) used in the estimation of 3D bounding box parameters.
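Once a detector of this kind predicts a 3D center, dimensions, and a heading angle, the full oriented box is recovered by rotating an axis-aligned corner template and translating it. The sketch below shows that standard reconstruction; the coordinate convention (z up, yaw about the vertical axis) is an assumption, not taken from the paper:

```python
import numpy as np

def box3d_corners(center, dims, yaw):
    """Eight corners of a 3D box from center (x, y, z), dims (l, w, h),
    and heading angle yaw about the vertical (z) axis."""
    l, w, h = dims
    # axis-aligned corner template centered on the origin
    x = np.array([ l,  l, -l, -l,  l,  l, -l, -l]) / 2
    y = np.array([ w, -w, -w,  w,  w, -w, -w,  w]) / 2
    z = np.array([-h, -h, -h, -h,  h,  h,  h,  h]) / 2
    # rotate in the ground plane, then translate to the box center
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])
    return (R @ np.vstack([x, y, z])).T + np.asarray(center)
```

Because the corners depend on all three predicted quantities, an error in any one (particularly the RoI-driven localization the abstract highlights) shifts the entire box, which is consistent with the authors' observation about RoI accuracy.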


2019 ◽  
Vol 77 ◽  
pp. 398-408 ◽  
Author(s):  
Shengyu Lu ◽  
Beizhan Wang ◽  
Hongji Wang ◽  
Lihao Chen ◽  
Ma Linjian ◽  
...  

Author(s):  
Garv Modwel ◽  
Anu Mehra ◽  
Nitin Rakesh ◽  
K K Mishra

Background: Object detection algorithms scan every frame of a video to detect the objects present, which is time-consuming. This becomes undesirable in real-time systems, which must act within a predefined time constraint. A quick response requires reliable object detection and recognition. Methods: To deal with the above problem, a hybrid method is implemented. This hybrid method combines three algorithms to reduce the per-frame scanning task. A Recursive Density Estimation (RDE) algorithm decides which frames need to be scanned. The You Only Look Once (YOLO) algorithm performs detection and recognition on the selected frames. Detected objects are then tracked in subsequent frames using the Speeded-Up Robust Features (SURF) algorithm. Results: Through the experimental study, we demonstrate that the hybrid algorithm is more efficient than comparable standalone algorithms. It achieves high accuracy and low latency, which is necessary for real-time processing. Conclusion: The hybrid algorithm detects objects with a minimum accuracy of 97 percent across all conducted experiments, with negligible time lag, making it well suited for real-time applications.
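The frame-selection idea can be illustrated with a common formulation of recursive density estimation, in which each frame's density relative to all frames seen so far is updated in constant time; a frame whose density drops (i.e., one that differs notably from recent frames) would trigger a full YOLO scan. The recursion below follows the standard RDE update; the trigger threshold and feature choice are assumptions, not the paper's exact design:

```python
import numpy as np

class RDE:
    """Recursive density estimator: density of each new sample relative
    to all samples seen so far, updated in O(1) per sample."""
    def __init__(self):
        self.k = 0
        self.mean = None  # running mean of samples
        self.sq = 0.0     # running mean of squared norms

    def update(self, x):
        x = np.asarray(x, dtype=float)
        self.k += 1
        if self.k == 1:
            self.mean = x.copy()
            self.sq = float(x @ x)
            return 1.0  # the first sample has density 1 by definition
        a = (self.k - 1) / self.k
        self.mean = a * self.mean + x / self.k
        self.sq = a * self.sq + float(x @ x) / self.k
        var = self.sq - float(self.mean @ self.mean)   # scatter term
        dist = float((x - self.mean) @ (x - self.mean))
        return 1.0 / (1.0 + dist + var)
```

In a pipeline like the one described, `x` would be a compact per-frame feature (for example, a downsampled intensity vector), and frames with density below a threshold would be passed to the detector while the rest are handled by SURF tracking alone.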

