Environmental Perception in Autonomous Vehicles Using Edge Level Situational Awareness

Author(s):  
Nima Ghafoorianfar ◽  
Mehdi Roopaei
2021 ◽  
Vol 13 (6) ◽  
pp. 1064
Author(s):  
Zhangjing Wang ◽  
Xianhan Miao ◽  
Zhen Huang ◽  
Haoran Luo

The development of autonomous vehicles and unmanned aerial vehicles has made improving the environmental perception of automated equipment a current research focus. An unmanned platform senses its surroundings and then makes decisions based on that environmental information. The major challenge of environmental perception is to detect and classify objects precisely; it is therefore necessary to fuse heterogeneous data so that different sensors' complementary advantages can be exploited. In this paper, a robust object detection and classification algorithm based on the fusion of millimeter-wave (MMW) radar and camera data is proposed. The corresponding regions of interest (ROIs) are accurately calculated from the approximate target positions detected by the radar and cameras. A joint classification network extracts micro-Doppler features from the time-frequency spectrum and texture features from the images within the ROIs. A radar-camera fusion dataset is built using a fusion data acquisition platform; it covers intersections, highways, roads, and school playgrounds, both during the day and at night. A traditional radar signal algorithm, the Faster R-CNN model, and our proposed fusion network, called RCF-Faster R-CNN, are evaluated on this dataset. The experimental results indicate that the mAP (mean average precision) of our network is up to 89.42% higher than that of the traditional radar signal algorithm and up to 32.76% higher than that of Faster R-CNN, especially in environments with low light and strong electromagnetic clutter.
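
To make the radar-to-image ROI step concrete, below is a minimal Python sketch of one common way to map a radar detection into a camera region of interest via a pinhole projection. The calibration matrices, the flat-ground assumption, and the metric size prior are illustrative placeholders, not the calibration or geometry used in the paper.

```python
import numpy as np

# Hypothetical calibration: camera intrinsics K and radar-to-camera extrinsics (R, t).
K = np.array([[800.0, 0.0, 640.0],
              [0.0, 800.0, 360.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)                      # rotation, radar frame -> camera frame
t = np.array([0.0, 0.5, 0.0])      # translation, radar frame -> camera frame (metres)

def radar_to_roi(rng, azimuth, obj_size=(1.8, 1.6)):
    """Project a radar detection (range in m, azimuth in rad) to an image ROI.

    obj_size is an assumed (width, height) prior in metres for the target class.
    """
    # Radar detection in its own frame (flat-ground assumption: height = 0).
    p_radar = np.array([rng * np.sin(azimuth), 0.0, rng * np.cos(azimuth)])
    p_cam = R @ p_radar + t                      # transform into the camera frame
    u, v, w = K @ p_cam
    cx, cy = u / w, v / w                        # pixel centre of the target
    # Convert the metric size prior into pixels at this depth.
    half_w = 0.5 * obj_size[0] * K[0, 0] / p_cam[2]
    half_h = 0.5 * obj_size[1] * K[1, 1] / p_cam[2]
    return (cx - half_w, cy - half_h, cx + half_w, cy + half_h)

# A target 30 m ahead, 5 degrees off boresight.
print(radar_to_roi(30.0, np.deg2rad(5)))
```

In a full pipeline this ROI would crop the image patch fed to the joint classification network alongside the micro-Doppler spectrum.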


2021 ◽  
Vol 13 (16) ◽  
pp. 3234
Author(s):  
Jingwei Cao ◽  
Chuanxue Song ◽  
Shixin Song ◽  
Feng Xiao ◽  
Xu Zhang ◽  
...  

Object tracking is an essential aspect of environmental perception technology for autonomous vehicles. Existing object tracking algorithms perform well only in simple scenes; when scenes become complex, they show poor tracking performance and insufficient robustness and are prone to tracking drift and object loss. We therefore propose a robust object tracking algorithm for autonomous vehicles in complex scenes. Firstly, we study the SiamFC network and related algorithms and analyze the problems that object tracking must address. Secondly, we describe the construction of a double-template Siamese network model based on multi-feature fusion, using an improved MobileNet V2 as the feature-extraction backbone and introducing an attention mechanism and an online template update mechanism. Finally, we carried out experiments on public datasets and actual driving videos to fully test the tracking performance of the proposed algorithm on different objects in a variety of complex scenes. The results showed that, compared with other algorithms, the proposed algorithm achieved high tracking accuracy and speed, demonstrated stronger robustness and anti-interference ability, and could still track the object accurately in real time without introducing complex structures. This algorithm can be applied effectively in intelligent vehicle driving assistance, and it will help to promote the further development of computer vision technology in the field of environmental perception.
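
The double-template idea can be sketched in a few lines. The toy below abstracts the backbone away as precomputed feature vectors and fuses cosine-similarity scores from a fixed initial template and an online template refreshed by exponential moving average. The fusion weight and update rate are hypothetical values; this is an illustration of the template-update mechanism, not the authors' network.

```python
import numpy as np

def score(template, search):
    """Cosine-similarity response between a template feature and search features."""
    t = template / (np.linalg.norm(template) + 1e-8)
    s = search / (np.linalg.norm(search, axis=-1, keepdims=True) + 1e-8)
    return s @ t

class DoubleTemplateTracker:
    """Sketch of a double-template scheme: a fixed first-frame template plus
    an online template updated by exponential moving average (EMA)."""

    def __init__(self, init_feat, alpha=0.1, fuse=0.5):
        self.t0 = init_feat          # fixed template from the first frame
        self.t1 = init_feat.copy()   # online template, updated over time
        self.alpha = alpha           # EMA update rate (assumed value)
        self.fuse = fuse             # fusion weight between the two scores

    def track(self, candidates):
        """candidates: (N, D) features of N search-region proposals."""
        s = self.fuse * score(self.t0, candidates) + \
            (1 - self.fuse) * score(self.t1, candidates)
        best = int(np.argmax(s))
        # Drift control: nudge the online template towards the chosen candidate.
        self.t1 = (1 - self.alpha) * self.t1 + self.alpha * candidates[best]
        return best, s[best]

rng = np.random.default_rng(0)
tracker = DoubleTemplateTracker(rng.normal(size=128))
best, conf = tracker.track(rng.normal(size=(16, 128)))
print(best, round(float(conf), 3))
```

Keeping the first-frame template fixed anchors the tracker to the original appearance, while the EMA template adapts to gradual appearance change, which is the usual motivation for such a double-template design.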


Author(s):  
Hiroaki Hayashi ◽  
Naoki Oka ◽  
Mitsuhiro Kamezaki ◽  
Shigeki Sugano

Abstract In semi-autonomous vehicles (SAE Level 3), which require drivers to take over (TO) control in critical situations, the system needs to judge whether the driver has enough situational awareness (SA) for manual driving. We previously developed an SA estimation system that used only the driver's glance data. For a deeper understanding of the driver's SA, the system needs to evaluate the relevance between the driver's glances and the surrounding vehicles and obstacles. In this study, we therefore developed a new SA estimation model that considers driving-relevant objects and investigated the relationships between its parameters. We performed TO experiments in a driving simulator to observe driver behavior under different positions of surrounding vehicles, together with TO performance measures such as the smoothness of steering control. We adopted a support vector machine to classify the obtained dataset into safe and dangerous TOs, achieving 83% accuracy under leave-one-out cross-validation. We found that unscheduled TOs led to maneuver errors and that glance behavior differed among individuals.
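
The classification step maps directly onto standard tooling. The sketch below reproduces the evaluation protocol described here, an SVM scored with leave-one-out cross-validation, on synthetic stand-in features, since the paper's takeover dataset is not public; the feature layout, kernel choice, and regularization constant are assumptions.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for the takeover dataset: each row could hold glance-based
# SA features plus performance measures such as steering smoothness; labels
# mark safe (1) vs. dangerous (0) takeovers.
rng = np.random.default_rng(42)
X = rng.normal(size=(60, 5))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=60) > 0).astype(int)

# Scaling matters for RBF SVMs, hence the pipeline.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(clf, X, y, cv=LeaveOneOut())
print(f"leave-one-out accuracy: {scores.mean():.2f}")
```

Leave-one-out cross-validation is a natural choice for small driving-simulator datasets, since it uses every trial for both training and testing.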


2020 ◽  
Author(s):  
Huihui Pan ◽  
Weichao Sun ◽  
Qiming Sun ◽  
Huijun Gao

Abstract Environmental perception is one of the key technologies for realizing autonomous vehicles. Autonomous vehicles are often equipped with multiple sensors that form a multi-source environmental perception system. These sensors are very sensitive to light and background conditions, which introduces a variety of global and local fault signals that pose great safety risks to the autonomous driving system during long-term operation. In this paper, a real-time data fusion network with fault diagnosis and fault tolerance mechanisms is designed. By introducing prior features to make the backbone network lightweight, the features of the input data can be extracted accurately in real time. Using the temporal and spatial correlation between sensor data, sensor redundancy is exploited to diagnose the local and global confidence of sensor data in real time, eliminate faulty data, and ensure the accuracy and reliability of data fusion. Experiments show that the network achieves state-of-the-art results in speed and accuracy, and can accurately detect the location of the target even when some sensors are out of focus or malfunctioning.
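
A minimal sketch of the redundancy idea: each sensor reading is checked against a temporal prediction and the cross-sensor median before fusion, and readings failing either test are excluded. The tolerances and the scalar-measurement setup are assumptions for illustration; the paper's network operates on learned features rather than raw scalar readings.

```python
import numpy as np

def fuse_with_fault_check(readings, prev_fused, temp_tol=2.0, spat_tol=2.0):
    """Redundancy-based fault screening before fusion (illustrative only).

    readings: (S,) current target-position estimate from each of S sensors.
    prev_fused: last fused estimate, used here as a crude temporal predictor.
    A reading is kept only if it agrees with both the temporal prediction and
    the cross-sensor median within the given tolerances (assumed values).
    """
    readings = np.asarray(readings, dtype=float)
    spatial_ref = np.median(readings)            # spatial consistency reference
    temporal_ok = np.abs(readings - prev_fused) < temp_tol
    spatial_ok = np.abs(readings - spatial_ref) < spat_tol
    valid = temporal_ok & spatial_ok
    if not valid.any():                          # all flagged: fall back to prediction
        return prev_fused, valid
    return readings[valid].mean(), valid

# One sensor (index 2) returns a gross outlier, e.g. a blinded camera.
fused, mask = fuse_with_fault_check([10.1, 9.9, 42.0, 10.2], prev_fused=10.0)
print(fused, mask)
```

The faulty reading is masked out and the fused estimate stays near the consistent majority, which is the behavior the fault-tolerance mechanism is designed to guarantee.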


Author(s):  
Chaojun Lin ◽  
Ying Shi ◽  
Jian Zhang ◽  
Changjun Xie ◽  
Wei Chen ◽  
...  

Environmental perception of urban roads is a critical research goal in intelligent transportation technology and autonomous vehicles, and pedestrian localization is key to many of the relevant algorithms. Because anchor-free detectors are faster while region-based convolutional neural networks offer higher accuracy in object detection and classification, we propose an integrated convolutional network architecture that combines an anchor-free detector with a region-based convolutional neural network for the environmental perception task. The proposed network achieves higher precision and increases inference speed by up to 30%. To obtain more accurate region boundaries than a coarse bounding-box method, a semantic segmentation sub-network predicts an instance segmentation mask for each object, and more accurate segmentation results are obtained by using the Dice loss. Moreover, we present an assignment strategy based on a modified feature pyramid structure and show that it improves the mean average precision of pedestrian detection by 2% on average. Finally, we verify that pretraining the network is beneficial for small datasets. Overall, the results show that our model achieves higher precision than the approaches used for comparison.
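
The Dice loss mentioned here has a standard closed form, 1 - 2|P∩G| / (|P| + |G|), which directly rewards mask overlap; a minimal NumPy version follows. The smoothing constant is a common convention, not a value taken from the paper.

```python
import numpy as np

def dice_loss(pred, target, eps=1.0):
    """Soft Dice loss for a binary segmentation mask.

    pred: predicted foreground probabilities in [0, 1], any shape.
    target: binary ground-truth mask of the same shape.
    eps is a smoothing term that keeps the loss defined on empty masks.
    """
    pred, target = pred.ravel(), target.ravel()
    intersection = (pred * target).sum()
    dice = (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)
    return 1.0 - dice

# A perfect prediction gives a loss near 0; a disjoint one approaches 1.
mask = np.zeros((8, 8)); mask[2:6, 2:6] = 1.0
print(round(dice_loss(mask, mask), 4))          # ~0.0
print(round(dice_loss(1.0 - mask, mask), 4))    # ~0.98
```

Unlike per-pixel cross-entropy, the Dice loss is insensitive to the foreground/background imbalance typical of pedestrian masks, which is the usual reason it yields sharper region boundaries.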


2021 ◽  
Vol 2093 (1) ◽  
pp. 012032
Author(s):  
Peide Wang

Abstract With the improvement of vehicle automation, autonomous vehicles have become one of the main research hotspots. The key technologies of autonomous vehicles include perception, decision-making, and control. Among them, the environmental perception system, which converts information collected from the physical world into digital signals, is the basis of the hardware architecture of autonomous vehicles. At present, there are two major schools in the field of environmental perception: cameras, dominated by computer vision, and LiDAR. This paper analyzes and compares these two schools and concludes that multi-sensor fusion is the solution for future autonomous driving.

