Vehicle Detection and Ranging Using Two Different Focal Length Cameras

2020, Vol 2020, pp. 1-14
Author(s): Jun Liu, Rui Zhang

Vehicle detection is a crucial task for autonomous driving and demands both high accuracy and real-time speed. Because current deep learning object detection models are too large to deploy on a vehicle, this paper introduces a lightweight network that modifies the feature extraction layer of YOLOv3 and improves the remaining convolution structure; the resulting Lightweight YOLO reduces the number of network parameters to a quarter of the original. The license plate is then detected to recover the actual vehicle width, and the inter-vehicle distance is estimated from that width. To address the difficulty that a distant license plate appears too small to detect reliably, the paper proposes a detection and ranging fusion method based on two cameras with different focal lengths. Experimental results show that the average precision and recall of Lightweight YOLO trained on the self-built dataset are 4.43% and 3.54% lower than YOLOv3, respectively, but per-frame inference time drops by 49 ms. Road experiments in different scenes also show that fusing the long and short focal length cameras dramatically improves the accuracy and stability of ranging: the mean ranging error is below 4%, and stable ranging reaches 100 m. The proposed method achieves real-time vehicle detection and ranging on the on-board embedded platform Jetson Xavier, satisfying the requirements of autonomous driving environment perception.
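The ranging step rests on the pinhole camera model: since license plates have a standardized physical width, the apparent width in pixels yields distance by similar triangles. A minimal sketch, with illustrative values that are assumptions rather than the paper's calibration:

```python
def estimate_distance(focal_length_px: float,
                      plate_width_m: float,
                      plate_width_px: float) -> float:
    """Pinhole-camera range estimate from apparent license-plate width.

    By similar triangles: distance = f * W_real / w_pixels.
    """
    return focal_length_px * plate_width_m / plate_width_px

# Illustrative values (assumptions, not the paper's calibration):
# a camera with f = 1200 px and a 0.44 m plate observed at 22 px wide
# yields a range of about 24 m.
distance_m = estimate_distance(1200.0, 0.44, 22.0)
print(f"estimated distance: {distance_m:.1f} m")
```

A longer focal length enlarges the plate's pixel width at the same range, which is why the long-focal camera extends the usable ranging distance in the fusion scheme.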

2021, Vol 11 (8), pp. 3531
Author(s): Hesham M. Eraqi, Karim Soliman, Dalia Said, Omar R. Elezaby, Mohamed N. Moustafa, ...

Extensive research efforts have been devoted to identifying and improving roadway features that impact safety. Maintaining roadway safety features relies on costly manual operations of regular road surveying and data analysis. This paper introduces an automatic roadway safety features detection approach that harnesses artificial intelligence (AI) computer vision to make the process more efficient and less costly. Given a front-facing camera and a global positioning system (GPS) sensor, the proposed system automatically evaluates ten roadway safety features. The system is composed of an oriented (rotated) object detection model, which solves an orientation encoding discontinuity problem to improve detection accuracy, and a rule-based roadway safety evaluation module. To train and validate the proposed model, a fully annotated dataset for roadway safety features extraction was collected, covering 473 km of roads. The proposed method's baseline results are encouraging when compared to state-of-the-art models. Different oriented object detection strategies are presented and discussed, and the developed model improves mean average precision (mAP) by 16.9% compared with the literature. The average roadway safety feature prediction accuracy is 84.39%, ranging from 63.12% to 91.11% across features. The introduced model can pervasively enable or disable autonomous driving (AD) based on the safety features of the road, and can empower connected vehicles (CV) to send and receive estimated safety features, alerting drivers about black spots or relatively less-safe segments or roads.
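The abstract does not spell out how the orientation encoding discontinuity is resolved; one common remedy, sketched below as an assumption rather than the paper's exact scheme, is to regress a continuous (sin θ, cos θ) pair instead of the raw angle:

```python
import math

def encode_angle(theta: float) -> tuple[float, float]:
    """Encode box orientation as a continuous (sin, cos) regression target.

    Regressing theta directly has a jump where the angle wraps around,
    which destabilizes training; the (sin, cos) pair is continuous.
    For boxes with 180-degree symmetry one would encode 2*theta instead.
    """
    return math.sin(theta), math.cos(theta)

def decode_angle(s: float, c: float) -> float:
    """Recover the angle from a (possibly unnormalized) prediction."""
    return math.atan2(s, c)

# Near the wrap-around point, encodings of nearly identical boxes stay close:
a, b = encode_angle(math.pi - 0.01), encode_angle(-math.pi + 0.01)
print(a, b)  # nearly equal vectors, unlike the raw angles pi-0.01 and -pi+0.01
```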


Electronics, 2020, Vol 9 (3), pp. 451
Author(s): Limin Guan, Yi Chen, Guiping Wang, Xu Lei

Vehicle detection is essential for driverless systems. However, a single-sensor detection mode is no longer sufficient in complex and changing traffic environments. This paper therefore combines a camera and light detection and ranging (LiDAR) to build a vehicle-detection framework characterized by multi-adaptability, real-time capability, and robustness. First, a multi-adaptive high-precision depth-completion method is proposed to convert the sparse 2D depth map projected from the LiDAR into a dense depth map, so that the two sensors are aligned with each other at the data level. Then, the You Only Look Once Version 3 (YOLOv3) real-time object detection model detects vehicles in both the color image and the dense depth map. Finally, a decision-level fusion method based on bounding-box fusion and improved Dempster–Shafer (D–S) evidence theory merges the two detection results to obtain the final vehicle position and distance information, which improves both the detection accuracy and the robustness of the whole framework. The method was evaluated on the KITTI dataset and the Waymo Open Dataset, and the results show the effectiveness of the proposed depth-completion method and multi-sensor fusion strategy.
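For the evidence-theory step, here is a minimal sketch of Dempster's classic rule of combination over a two-hypothesis frame; the mass values are invented for illustration, and the paper's improved D–S rule will differ in how it handles conflict:

```python
def ds_combine(m1: dict, m2: dict) -> dict:
    """Dempster's rule of combination.

    Masses are dicts mapping frozenset hypotheses to belief mass;
    conflicting mass is discarded and the rest renormalized.
    """
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb
    norm = 1.0 - conflict
    return {k: v / norm for k, v in combined.items()}

V, N = frozenset({"vehicle"}), frozenset({"other"})
THETA = V | N  # full frame: mass assigned here expresses ignorance
camera = {V: 0.7, N: 0.1, THETA: 0.2}  # illustrative masses per sensor
lidar = {V: 0.6, N: 0.2, THETA: 0.2}
print(ds_combine(camera, lidar))  # fused belief: vehicle mass rises to 0.85
```

Combining the two sensors' evidence this way lets agreement reinforce a detection while disagreement is absorbed as discounted conflict, which is what gives the decision-level fusion its robustness.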


2020, Vol 8 (2), pp. 270-279
Author(s): Luke Munn

From self-driving cars to smart city sensors, billions of devices will be connected to networks in the next few years. These devices will collect vast amounts of data which need to be processed in real time, overwhelming centralized cloud architectures. To address this need, the industry seeks to process data closer to the source, driving a major shift from the cloud to the ‘edge.’ This article critically investigates the privacy implications of edge computing. It outlines the affordances introduced by the edge by drawing on two recently published scenarios: an automated license plate reader and an ethnic facial detection model. Based on these affordances, three key questions arise: what kind of data will be collected, how will this data be processed at the edge, and how will this data be ‘completed’ in the cloud? As a site of intermediation between user and cloud, the edge allows data to be extracted from individuals, acted on in real time, and then abstracted or sterilized, removing identifying information before being stored in conventional data centers. The article thus argues that edge affordances establish a fundamentally new ‘privacy condition’ while sidestepping the safeguards associated with the ‘privacy proper’ of personal data use. Responding effectively to these challenges will mean rethinking person-based approaches to privacy at both regulatory and citizen-led levels.


Sensors, 2019, Vol 19 (18), pp. 3958
Author(s): Seongkyun Han, Jisang Yoo, Soonchul Kwon

Vehicle detection is an important research area that underpins a wide variety of unmanned-aerial-vehicle (UAV) applications. In this paper, we propose a vehicle-detection method using a convolutional-neural-network (CNN)-based object detector. We design our method, DRFBNet300, with a Deeper Receptive Field Block (DRFB) module that enhances the expressiveness of feature maps to detect small objects in UAV imagery. We also propose the UAV-cars dataset, which captures the composition and angular distortion of vehicles in UAV imagery, to train DRFBNet300. Lastly, we propose a Split Image Processing (SIP) method to improve the accuracy of the detection model. DRFBNet300 achieves 21 mAP at 45 FPS under the MS COCO metric, the highest score among lightweight single-stage methods running in real time. In addition, DRFBNet300 trained on the UAV-cars dataset obtains the highest AP score at altitudes of 20–50 m, and the accuracy gain from applying the SIP method grows as altitude increases. DRFBNet300 trained on the UAV-cars dataset with the SIP method operates at 33 FPS, enabling real-time vehicle detection.
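SIP is described only at a high level here; the sketch below assumes the usual tile-and-merge pattern for small-object detection: split the high-resolution UAV frame into detector-sized tiles, detect per tile, and shift boxes back into frame coordinates. The `detector` callable and tile size are placeholders:

```python
import numpy as np

def detect_with_tiling(image: np.ndarray, detector, tile: int = 300):
    """Run a fixed-input-size detector over tiles of a large frame.

    `detector(patch)` is assumed to return a list of
    (x1, y1, x2, y2, score) boxes in patch coordinates.
    """
    h, w = image.shape[:2]
    boxes = []
    for y0 in range(0, h, tile):
        for x0 in range(0, w, tile):
            patch = image[y0:y0 + tile, x0:x0 + tile]
            for x1, y1, x2, y2, score in detector(patch):
                # Shift detections back into full-frame coordinates.
                boxes.append((x1 + x0, y1 + y0, x2 + x0, y2 + y0, score))
    return boxes  # a final NMS across tile seams would follow
```

Tiling keeps distant vehicles at a usable pixel size instead of shrinking the whole frame to the detector's input resolution, which is consistent with the reported accuracy gain growing with altitude.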


2022, Vol 2022, pp. 1-11
Author(s): Ying Zhuo, Lan Yan, Wenbo Zheng, Yutian Zhang, Chao Gou

Autonomous driving has become a prevalent research topic in recent years, attracting the attention of both academia and industry. As human drivers rely on visual information to discern road conditions and make driving decisions, autonomous driving calls for vision systems such as vehicle detection models. These vision models require a large amount of labeled data, while collecting and annotating real traffic data is time-consuming and costly. We therefore present a novel vehicle detection framework based on parallel vision that uses specially designed virtual data to help train the vehicle detection model. We also propose a method to construct large-scale artificial scenes and generate virtual data for vision-based autonomous driving schemes. Experimental results verify the effectiveness of the proposed framework, demonstrating that training on a combination of virtual and real data yields a better vehicle detection model than real data alone.
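A minimal PyTorch-style sketch of the mixed-data training the framework relies on; the dataset contents are placeholders standing in for rendered scenes and annotated real traffic images in a shared (image, target) format:

```python
from torch.utils.data import ConcatDataset, DataLoader, Dataset

class ListDataset(Dataset):
    """Minimal stand-in: virtual and real data share one sample format."""
    def __init__(self, samples):
        self.samples = samples  # list of (image_tensor, target_dict) pairs
    def __len__(self):
        return len(self.samples)
    def __getitem__(self, i):
        return self.samples[i]

# Placeholders for rendered artificial-scene samples and real traffic samples.
virtual_samples, real_samples = [], []
mixed = ConcatDataset([ListDataset(virtual_samples), ListDataset(real_samples)])
loader = DataLoader(mixed, batch_size=8, shuffle=True)
# Shuffling over the concatenated set mixes virtual and real samples within
# each batch; the paper reports this trains a stronger detector than real
# data alone.
```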


Sensors, 2020, Vol 20 (4), pp. 1121
Author(s): Xiaowei Lu, Yunfeng Ai, Bin Tian

Road boundary detection is an important part of perception for autonomous driving. Detecting the boundaries of unstructured roads is difficult because there are no curbs; on mine roads in particular, there are no clear boundaries separating the area inside the road boundary line from the area outside it. This paper proposes a real-time road boundary detection and tracking method using a 3D LiDAR sensor. Road boundary points are extracted from the elevated point clouds detected above the ground point cloud, according to spatial distance characteristics and angular features. Road tracking predicts and updates the boundary point information in real time to prevent false and missed detections. Experimental verification on mine road data shows the accuracy and robustness of the proposed algorithm.
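A simplified NumPy sketch of the extraction idea: keep points elevated above the estimated ground, then filter them by range and bearing relative to the vehicle. The thresholds are illustrative, not the paper's:

```python
import numpy as np

def boundary_candidates(points: np.ndarray,
                        ground_z: float,
                        min_height: float = 0.3,
                        max_range: float = 30.0) -> np.ndarray:
    """Filter an (N, 3) LiDAR cloud down to road-boundary candidates.

    Keeps points elevated above the estimated ground plane and within
    a plausible range and forward bearing of the vehicle (thresholds
    are illustrative assumptions).
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    elevated = (z - ground_z) > min_height       # above the ground plane
    in_range = np.hypot(x, y) < max_range        # near the vehicle
    bearing = np.arctan2(y, x)
    forward = np.abs(bearing) < np.deg2rad(60)   # forward-facing sector
    return points[elevated & in_range & forward]
```

A tracking stage would then fit curves to these candidates frame to frame, predicting where the boundary should be and rejecting candidates that jump implausibly.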


2019, Vol 1, pp. 1-2
Author(s): Márton Pál, Fanni Vörös, István Elek, Béla Kovács

Abstract. A self-driving car is a vehicle that is able to perceive its surroundings and navigate in them without human action. Radar sensors, lasers, computer vision and GPS technologies help it to drive autonomously (Figure 1). They interpret the sensed information to calculate routes and navigate between obstacles and traffic elements.

Sufficiently accurate navigation and information about the current position of the vehicle are indispensable for transport. A human driver fulfils these expectations: knowledge of traffic rules and signs makes it possible to navigate through even difficult situations. Self-driving systems substitute for humans by monitoring and evaluating the surrounding environment and its objects without the driver's background knowledge. This analysis process is vulnerable: sudden or unexpected situations may occur, but high-precision navigation and background GPS databases can complement sensor-detected data.

The assistance of global navigation has been used in cars for decades. Drivers can easily plan their routes and reach their destination using car GPS units. However, these devices do not provide accurate positioning: there may be a difference of several metres from the real location. Self-driving cars also use navigation to complement sensor data. Although there are already autonomous system tests on motorways and countryside roads, in densely built-up areas this technology faces complications due to accuracy problems. The dilution of precision (DOP) values can be extremely high in larger settlements, because tall buildings may hide the southern sky (from which satellite signals are received at our latitude).

With geodesic RTK (real-time kinematic) GPS systems we can achieve centimetre-level accuracy under ideal conditions. This high-precision position data is derived from satellite-based positioning systems: measurements of the phase of the signal's carrier wave are corrected in real time by a single reference station or an interpolated virtual station.

In this research we use RTK GPS technology to build a spatial database. These measurements can also be less precise in dense cities, but during fieldwork there is time to try to eliminate inaccuracy. We chose a sample area in the inner city of Budapest, Hungary, where we located all traffic signs, pedestrian crossings and other important elements. As self-driving cars need precise position data for these terrain objects, we aimed for a maximum error of a few decimetres.

We examined whether online map providers offer a feasible data structure and some base data. The implemented structure is similar to the OpenStreetMap database, in which some traffic lights at important crossings are already present. With this preliminary test database, we would like to filter out dangerous situations. If the camera of the car does not see a traffic sign because of a tree or a truck, information about it will be available from the database. If a pedestrian crossing is hardly visible and the sensor does not recognize it, the background GIS data will warn the car that there may be inattentive people on the road.

A test application has also been developed (Figure 2), into which our Postgres/PostGIS database records have been inserted. In the next phase of the project we will test our database in traffic: we plan to drive through the sample area and observe the GPS accuracy in the recognition of the located signs.

This research aims to achieve higher safety in the field of autonomous driving. With a refreshable cartographic GIS database in the memory of a self-driving car, there is a smaller chance of risking human life. However, maintenance demands a large amount of work, so we should concentrate only on the most important signs. The cars themselves may even be able to supervise the content of the database once a large number of them are on the road. The frequent production and analysis of point clouds is also an option for getting closer to safe automated traffic.
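A hedged Python sketch of the lookup such a database would serve: given the car's RTK fix, return all stored signs within a radius, using a flat-earth approximation that is adequate at city-block scale. The records and schema are illustrative, not the project's actual Postgres/PostGIS structure:

```python
import math

# Illustrative records: (sign_id, type, lat, lon); the project's actual
# PostGIS schema will differ.
SIGNS = [
    (1, "pedestrian_crossing", 47.4979, 19.0402),
    (2, "stop", 47.4981, 19.0410),
]

def signs_near(lat: float, lon: float, radius_m: float = 50.0):
    """Return signs within radius_m of a position.

    Uses an equirectangular approximation, adequate over a few hundred
    metres; PostGIS would do this server-side with a spatial index.
    """
    m_per_deg_lat = 111_320.0
    m_per_deg_lon = m_per_deg_lat * math.cos(math.radians(lat))
    hits = []
    for sid, kind, s_lat, s_lon in SIGNS:
        dy = (s_lat - lat) * m_per_deg_lat
        dx = (s_lon - lon) * m_per_deg_lon
        if math.hypot(dx, dy) <= radius_m:
            hits.append((sid, kind))
    return hits

print(signs_near(47.4980, 19.0405))  # both sample signs fall within 50 m
```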


2018, Vol 3 (4), pp. 3434-3440
Author(s): Yiming Zeng, Yu Hu, Shice Liu, Jing Ye, Yinhe Han, ...
