Safe Deep Neural Network-Driven Autonomous Vehicles Using Software Safety Cages

Author(s):  
Sampo Kuutti ◽  
Richard Bowden ◽  
Harita Joshi ◽  
Robert de Temple ◽  
Saber Fallah
Author(s):  
Keke Geng ◽  
Wei Zou ◽  
Guodong Yin ◽  
Yang Li ◽  
Zihao Zhou ◽  
...  

Environment perception is a basic and necessary technology for autonomous vehicles to ensure safe and reliable driving. Many studies have focused on ideal environments, while much less work has addressed the perception of low-observable targets, whose features may not be obvious in a complex environment. However, autonomous vehicles inevitably drive in conditions such as rain, snow and night-time, in which target features are not obvious and detection models trained on images with significant features fail to detect low-observable targets. This article studies efficient and intelligent recognition algorithms for low-observable targets in complex environments, focuses on developing an engineering method for dual-modal (color–infrared) low-observable target recognition, and explores the applications of infrared imaging and color imaging for an intelligent perception system in autonomous vehicles. A dual-modal deep neural network is established to fuse the color and infrared images and detect low-observable targets in dual-modal images. A manually labeled color–infrared image dataset of low-observable targets is built, and the network is trained to optimize its internal parameters so that the system can recognize both pedestrians and vehicles in complex environments. The experimental results indicate that the dual-modal deep neural network performs better on low-observable target detection and recognition in complex environments than traditional methods.
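The abstract does not specify where in the network the color and infrared streams are fused. A common choice is feature-level fusion by channel concatenation; the sketch below illustrates that idea in numpy, with the feature-map sizes and fusion point as illustrative assumptions rather than the paper's actual architecture.

```python
# Minimal sketch of dual-modal (color + infrared) feature-level fusion by
# channel concatenation. Shapes are assumed for illustration only.
import numpy as np

def fuse_dual_modal(color_feat, ir_feat):
    """Concatenate per-modality feature maps along the channel axis.

    color_feat: (H, W, C1) features from the color branch
    ir_feat:    (H, W, C2) features from the infrared branch
    Returns a (H, W, C1 + C2) fused feature map.
    """
    assert color_feat.shape[:2] == ir_feat.shape[:2], "spatial sizes must match"
    return np.concatenate([color_feat, ir_feat], axis=-1)

color = np.random.rand(32, 32, 64)   # stand-in color-branch features
ir = np.random.rand(32, 32, 64)      # stand-in infrared-branch features
fused = fuse_dual_modal(color, ir)   # shape (32, 32, 128)
```

In a real detector, the fused map would feed the shared detection head; concatenation (rather than, say, element-wise addition) lets later layers learn per-modality weighting.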


2019 ◽  
Vol 9 (15) ◽  
pp. 3174 ◽  
Author(s):  
Zhou ◽  
Li ◽  
Shen

The in-vehicle controller area network (CAN) bus is one of the essential components of autonomous vehicles, and its safety will be one of the greatest challenges in the field of intelligent vehicles in the future. In this paper, we propose a novel system that uses a deep neural network (DNN) to detect anomalous CAN bus messages. We treat anomaly detection as a cross-domain modelling problem, in which three CAN bus data packets as a group are imported directly into the DNN architecture for parallel training with shared weights. The three data packets are then represented as three independent feature vectors, corresponding to three types of data sequences: anchor, positive and negative. The proposed DNN architecture is an embedded triplet loss network that optimizes the distances so that the distance between the anchor example and the positive example is smaller than the distance between the anchor example and the negative example, realizing the similarity calculation of samples originally used in face recognition. Compared to traditional anomaly detection methods, learning the parameters with shared weights improves detection efficiency and detection accuracy. The whole detection system is composed of a front-end and a back-end, corresponding to the deep network and the triplet loss network, respectively, and is trainable in an end-to-end fashion. Experimental results demonstrate that the proposed technology can respond in real time to anomalies and attacks on the CAN bus, and significantly improves the detection ratio. To the best of our knowledge, this is the first method used for anomaly detection on the in-vehicle CAN bus.


Author(s):  
Di Wang ◽  
Hong Bao ◽  
Feifei Zhang

This paper proposes an algorithm based on a deep learning network for identifying circular traffic lights (CTL-DNNet). The sample labeling process uses translation to increase the number of positive samples, and similarity is calculated to reduce the number of negative samples, thereby reducing overfitting. We use a dataset of approximately 370,000 samples, with approximately 20,000 positive samples and approximately 350,000 negative samples. The datasets are generated from images taken at the Beijing Garden Expo. To obtain a very robust method for the detection of traffic lights, we train and compare deep neural networks with different layers, different cost functions and different activation functions. Our algorithm is evaluated on autonomous vehicles under varying illumination and achieves high accuracy and robustness. The experimental results show that CTL-DNNet is effective at recognizing road traffic lights in the Beijing Garden Expo area.


Sensors ◽  
2021 ◽  
Vol 21 (7) ◽  
pp. 2536
Author(s):  
Jason Nataprawira ◽  
Yanlei Gu ◽  
Igor Goncharenko ◽  
Shunsuke Kamijo

Pedestrian fatalities and injuries most often occur in vehicle-pedestrian crashes. Engineers have tried to reduce the problem by developing pedestrian detection functions for Advanced Driver-Assistance Systems (ADAS) and autonomous vehicles. However, these systems are still not perfect. A remaining problem in pedestrian detection is reduced performance at nighttime, although pedestrian detection should work well regardless of lighting conditions. This study presents an evaluation of pedestrian detection performance under different lighting conditions, and then proposes to adopt multispectral images and a deep neural network to improve detection accuracy. In the evaluation, different image sources, including RGB, thermal and multispectral formats, are compared on the pedestrian detection task. In addition, the architecture of the deep neural network is optimized to achieve high accuracy and short processing time. The results imply that using multispectral images is the best solution for pedestrian detection under different lighting conditions. The proposed deep neural network achieves a 6.9% improvement in pedestrian detection accuracy compared to the baseline method. Moreover, the processing-time optimization indicates that it is possible to reduce processing time by 22.76% while sacrificing only 2% of detection accuracy.
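One common way to form the multispectral input the abstract mentions is "early fusion": stacking an RGB frame and a pixel-aligned thermal frame into a single four-channel tensor before the network sees it. The sketch below shows that layout in numpy; the four-channel format is one common convention, not necessarily this paper's exact pipeline.

```python
# Early fusion of RGB and thermal imagery into a 4-channel multispectral
# input, normalized to [0, 1]. Assumes the two frames are pixel-aligned.
import numpy as np

def to_multispectral(rgb, thermal):
    """rgb: (H, W, 3) uint8, thermal: (H, W) uint8 -> (H, W, 4) float32."""
    assert rgb.shape[:2] == thermal.shape, "frames must be pixel-aligned"
    rgb_f = rgb.astype(np.float32) / 255.0
    th_f = thermal.astype(np.float32)[..., None] / 255.0  # add channel axis
    return np.concatenate([rgb_f, th_f], axis=-1)
```

At night the RGB channels carry little signal while the thermal channel still highlights warm pedestrians, which is why the multispectral input outperforms either source alone.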


Author(s):  
Dhairya Shah

Abstract: Vehicle positioning and classification is a vital technology in intelligent transportation and self-driving cars. This paper describes experiments on the classification of vehicle images by artificial vision, using Keras and TensorFlow to construct a deep neural network model, together with Python modules and a machine learning algorithm. Image classification finds applications ranging from medical diagnostics to autonomous vehicles. The existing architectures are computationally expensive, complex, and less accurate. The outcomes are used to assess the best camera location for filming vehicular traffic to determine highway occupancy. An accurate, simple, and hardware-efficient architecture needs to be developed for image classification.
Keywords: Convolutional Neural Networks, Image Classification, deep neural network, Keras, TensorFlow, Python, machine learning, dataset
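The abstract names Keras and TensorFlow but not the model itself. A minimal convolutional classifier in Keras might look like the sketch below; the input size (64×64 RGB) and number of vehicle classes (5) are assumptions for illustration, not the paper's configuration.

```python
# Minimal Keras CNN image-classifier sketch. Input shape and class count
# are illustrative assumptions, not the paper's actual settings.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_classifier(num_classes=5, input_shape=(64, 64, 3)):
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(16, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```

Training would then be a single `model.fit(images, labels)` call on a labeled vehicle dataset; keeping the network this shallow is one way to address the hardware-efficiency concern the abstract raises.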

