Visual Computing-based Perception System for Small Autonomous Vehicles: Development on a Lighter Computing Platform

Author(s):  
Edgar Zhe Qian Koh ◽  
Abakar Yousif Abdalla ◽  
Hermawan Nugroho
Author(s):  
Keke Geng ◽  
Wei Zou ◽  
Guodong Yin ◽  
Yang Li ◽  
Zihao Zhou ◽  
...  

Environment perception is a basic and necessary technology for autonomous vehicles to ensure safe and reliable driving. Many studies have focused on ideal environments, while much less work has been done on the perception of low-observable targets, whose features may not be obvious in a complex environment. However, it is inevitable for autonomous vehicles to drive in conditions such as rain, snow and night-time, during which target features are not obvious and detection models trained on images with significant features fail to detect low-observable targets. This article mainly studies efficient and intelligent recognition algorithms for low-observable targets in complex environments, focuses on the development of an engineering method for dual-modal image (color–infrared) low-observable target recognition, and explores the applications of infrared and color imaging for an intelligent perception system in autonomous vehicles. A dual-modal deep neural network is established to fuse the color and infrared images and detect low-observable targets in dual-modal images. A manually labeled color–infrared image dataset of low-observable targets is built. The deep neural network is trained to optimize its internal parameters so that the system is capable of both pedestrian and vehicle recognition in complex environments. The experimental results indicate that the dual-modal deep neural network outperforms traditional methods on low-observable target detection and recognition in complex environments.
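The abstract does not specify where in the network the two modalities are fused. As a minimal, hypothetical sketch, input-level (early) fusion can be illustrated by stacking a registered 3-channel color image and 1-channel infrared image into a single 4-channel network input; the function name and shapes below are illustrative, not taken from the paper.

```python
import numpy as np

def fuse_color_infrared(color_img, ir_img):
    """Early fusion: stack a 3-channel color image and a registered
    1-channel infrared image into one 4-channel network input."""
    color = color_img.astype(np.float32) / 255.0             # H x W x 3
    ir = ir_img.astype(np.float32)[..., np.newaxis] / 255.0  # H x W x 1
    assert color.shape[:2] == ir.shape[:2], "modalities must be registered"
    return np.concatenate([color, ir], axis=-1)              # H x W x 4

# Toy example: a 4x4 pair of registered frames.
rgb = np.random.randint(0, 256, (4, 4, 3), dtype=np.uint8)
thermal = np.random.randint(0, 256, (4, 4), dtype=np.uint8)
fused = fuse_color_infrared(rgb, thermal)
print(fused.shape)  # (4, 4, 4)
```

A dual-modal network could equally fuse at the feature level (two convolutional branches concatenated deeper in the network); the early-fusion version is shown only because it is the simplest to state.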


2020 ◽  
Author(s):  
Huihui Pan ◽  
Weichao Sun ◽  
Qiming Sun ◽  
Huijun Gao

Abstract Environmental perception is one of the key technologies needed to realize autonomous vehicles. Autonomous vehicles are often equipped with multiple sensors to form a multi-source environmental perception system. These sensors are very sensitive to light or background conditions, which can introduce a variety of global and local fault signals that pose great safety risks to the autonomous driving system during long-term operation. In this paper, a real-time data fusion network with fault diagnosis and fault tolerance mechanisms is designed. By introducing prior features to make the backbone network lightweight, the features of the input data can be extracted accurately in real time. Through the temporal and spatial correlation between sensor data, sensor redundancy is utilized to diagnose the local and global confidence of sensor data in real time, eliminate faulty data, and ensure the accuracy and reliability of data fusion. Experiments show that the network achieves state-of-the-art results in speed and accuracy, and can accurately detect the location of the target when some sensors are out of focus or out of order.
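The fault-tolerance idea in the abstract — diagnose per-sensor confidence, discard faulty readings, fuse the rest — can be sketched without the network itself. The following toy function, with hypothetical names and thresholds not taken from the paper, fuses redundant position estimates by confidence-weighted averaging after rejecting low-confidence sensors.

```python
import numpy as np

def fault_tolerant_fuse(readings, confidences, min_conf=0.5):
    """Fuse redundant sensor estimates of the same target position:
    discard sensors diagnosed as low-confidence (faulty), then take a
    confidence-weighted average of the surviving estimates."""
    readings = np.asarray(readings, dtype=float)       # N x D estimates
    confidences = np.asarray(confidences, dtype=float)
    keep = confidences >= min_conf                     # fault rejection
    if not keep.any():
        raise ValueError("all sensors diagnosed as faulty")
    w = confidences[keep] / confidences[keep].sum()    # normalized weights
    return w @ readings[keep]

# Three sensors estimate a 2D target position; sensor 3 is out of focus
# and reports a wild value with low confidence, so it is excluded.
estimates = [[10.1, 5.0], [9.9, 5.2], [42.0, -3.0]]
conf = [0.9, 0.8, 0.1]
fused = fault_tolerant_fuse(estimates, conf)
print(fused)  # close to [10.0, 5.1]; the outlier does not contribute
```

The real system does this per region and per frame using temporal and spatial correlation between sensors; the sketch only shows the gating-then-weighting step.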


Electronics ◽  
2021 ◽  
Vol 10 (11) ◽  
pp. 1266
Author(s):  
Pedro J. Navarro ◽  
Leanne Miller ◽  
Francisca Rosique ◽  
Carlos Fernández-Isla ◽  
Alberto Gila-Navarro

The complex decision-making systems used for autonomous vehicles or advanced driver-assistance systems (ADAS) are being replaced by end-to-end (e2e) architectures based on deep neural networks (DNNs). DNNs can learn complex driving actions from datasets containing thousands of images and data obtained from the vehicle perception system. This work presents the classification, design and implementation of six e2e architectures capable of generating the driving actions of speed and steering wheel angle directly on the vehicle control elements. The work details the design stages and optimization process of the convolutional networks used to develop the six e2e architectures. In the metric analysis, the architectures were tested with different data sources from the vehicle, such as images, XYZ accelerations and XYZ angular speeds. The best results were obtained with a mixed-data e2e architecture that used front images from the vehicle and angular speeds to predict the speed and steering wheel angle with a mean error of 1.06%. An exhaustive optimization process of the convolutional blocks has demonstrated that it is possible to design lightweight e2e architectures with high performance that are more suitable for final implementation in autonomous driving.
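The abstract's mixed-data architecture combines an image stream with angular speeds before the regression head. A toy forward pass below illustrates that fusion point; the shapes, weights and the crude linear "encoder" are illustrative assumptions, not the paper's convolutional design.

```python
import numpy as np

rng = np.random.default_rng(0)

def mixed_e2e_forward(image, angular_speed, W_img, W_head):
    """Toy mixed-data e2e regressor: image features and XYZ angular
    speeds are concatenated, then a linear head predicts the two
    driving actions [speed, steering wheel angle]."""
    img_feat = np.maximum(image.ravel() @ W_img, 0.0)  # flat encoder + ReLU
    fused = np.concatenate([img_feat, angular_speed])  # mixed-data fusion
    return fused @ W_head                              # regression head

# Hypothetical shapes: 8x8 grayscale front image, 3 angular speeds,
# 16 image features, 2 outputs.
W_img = rng.normal(size=(64, 16)) * 0.1
W_head = rng.normal(size=(16 + 3, 2)) * 0.1
img = rng.random((8, 8))
gyro = np.array([0.01, -0.02, 0.15])
pred = mixed_e2e_forward(img, gyro, W_img, W_head)
print(pred.shape)  # (2,)
```

In the actual architectures the image branch is a stack of optimized convolutional blocks; the point of the sketch is only that the inertial signal enters after image feature extraction and before the action head.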


2021 ◽  
Vol 2093 (1) ◽  
pp. 012032
Author(s):  
Peide Wang

Abstract With the improvement of vehicle automation, autonomous vehicles have become one of the research hotspots. Key technologies of autonomous vehicles mainly include perception, decision-making, and control. Among them, the environmental perception system, which converts information collected from the physical world into digital signals, is the basis of the hardware architecture of autonomous vehicles. At present, there are two major schools in the field of environmental perception: cameras, dominated by computer vision, and LiDAR. This paper analyzes and compares the two major schools in the field of environmental perception and concludes that multi-sensor fusion is the solution for future autonomous driving.


Sensors ◽  
2021 ◽  
Vol 21 (21) ◽  
pp. 7267
Author(s):  
Luiz G. Galvao ◽  
Maysam Abbod ◽  
Tatiana Kalganova ◽  
Vasile Palade ◽  
Md Nazmul Huda

Autonomous Vehicles (AVs) have the potential to solve many traffic problems, such as accidents, congestion and pollution. However, there are still challenges to overcome; for instance, AVs need to accurately perceive their environment to safely navigate busy urban scenarios. The aim of this paper is to review recent articles on computer vision techniques that can be used to build an AV perception system. AV perception systems need to accurately detect non-static objects and predict their behaviour, as well as detect static objects and recognise the information they provide. This paper, in particular, focuses on the computer vision techniques used to detect pedestrians and vehicles. There have been many papers and reviews on pedestrian and vehicle detection so far; however, most of the past papers reviewed pedestrian or vehicle detection separately. This review aims to present an overview of AV systems in general, and then review and investigate several computer vision detection techniques for pedestrians and vehicles. The review concludes that both traditional and Deep Learning (DL) techniques have been used for pedestrian and vehicle detection; however, DL techniques have shown the best results. Although good detection results have been achieved for pedestrians and vehicles, the current algorithms still struggle to detect small, occluded, and truncated objects. In addition, there is limited research on how to improve detection performance in difficult light and weather conditions. Most of the algorithms have been tested on well-recognised datasets such as Caltech and KITTI; however, these datasets have their own limitations. Therefore, this paper recommends that future work be carried out on newer, more challenging datasets, such as PIE and BDD100K.


Sensors ◽  
2020 ◽  
Vol 20 (24) ◽  
pp. 7221
Author(s):  
Baifan Chen ◽  
Hong Chen ◽  
Dian Yuan ◽  
Lingli Yu

The object detection algorithm based on vehicle-mounted lidar is a key component of the perception system on autonomous vehicles. It can provide high-precision and highly robust obstacle information for the safe driving of autonomous vehicles. However, most algorithms operate on a large amount of point cloud data, which makes real-time detection difficult. To solve this problem, this paper proposes a fast 3D object detection method based on three main steps: First, the ground segmentation by discriminant image (GSDI) method is used to convert point cloud data into discriminant images for ground point segmentation, which avoids computing directly on the point cloud data and improves the efficiency of ground point segmentation. Second, an image detector is used to generate the region of interest of the three-dimensional object, which effectively narrows the search range. Finally, the dynamic distance threshold clustering (DDTC) method is designed to handle different densities of point cloud data, which improves the detection of long-distance objects and avoids the over-segmentation produced by traditional algorithms. Experiments have shown that this algorithm can meet the real-time requirements of autonomous driving while maintaining high accuracy.
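The core idea of DDTC — let the clustering distance threshold grow with range, since lidar returns get sparser far from the sensor — can be sketched as region-growing clustering with a range-dependent threshold. The `base` and `k` constants below are hypothetical, and the O(n²) neighbour search is for clarity only, not the paper's implementation.

```python
import numpy as np

def ddtc(points, base=0.3, k=0.02):
    """Dynamic distance threshold clustering (sketch): two points are
    linked if their spacing is below a threshold that grows with range
    from the lidar, so sparse far-away objects still form one cluster."""
    n = len(points)
    labels = np.full(n, -1)
    ranges = np.linalg.norm(points, axis=1)
    cluster = 0
    for i in range(n):
        if labels[i] != -1:
            continue
        labels[i] = cluster
        stack = [i]
        while stack:                       # region growing
            p = stack.pop()
            thr = base + k * ranges[p]     # range-dependent threshold
            d = np.linalg.norm(points - points[p], axis=1)
            for q in np.where((d < thr) & (labels == -1))[0]:
                labels[q] = cluster
                stack.append(q)
        cluster += 1
    return labels

# Two objects along x: one near and dense, one far and sparse.
# A fixed 0.3 m threshold would split the far object; DDTC does not.
pts = np.array([[1.0, 0.0], [1.2, 0.0], [1.4, 0.0],
                [30.0, 0.0], [30.8, 0.0]])
labels = ddtc(pts)
print(labels)  # [0 0 0 1 1]
```

At 30 m range the threshold is 0.3 + 0.02 × 30 = 0.9 m, so the 0.8 m spacing of the far object still links, while the near object's fixed spacing stays well under its own threshold.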

