Understanding the Perception of Road Segmentation and Traffic Light Detection using Machine Learning Techniques

2020 ◽  
Vol 9 (1) ◽  
pp. 2698-2704

Advanced driver assistance systems (ADAS) have seen tremendous growth over the past 10 years. In recent times, luxury cars as well as some newly emerging cars come equipped with ADAS applications. From 2014, the inclusion of the AEBS test in the European New Car Assessment Programme (Euro NCAP) helped the introduction of ADAS gain momentum in Europe [1]. Most OEMs and research institutes have already demonstrated self-driving cars [1]. The focus here is on road segmentation, where a LiDAR sensor captures an image of the surroundings and the vehicle must determine its path; this is achieved by running a convolutional neural network for semantic segmentation on an FPGA board in 16.9 ms [3]. Further, a traffic light detection model is developed using an NVIDIA Jetson and two FPGA boards, collectively named the 'Driving Brain', which acts as a supercomputer for such networks. High accuracy is obtained by feeding the captured traffic light images into a CNN classifier [5]. Overall, this paper gives a brief overview of the technical trends in autonomous driving, highlighting the algorithms used in advanced driver assistance systems for road segmentation and traffic light detection.
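As an illustration of the final step of semantic segmentation described above, the sketch below collapses per-class network scores into a per-pixel label map. This is a minimal, hypothetical example assuming the network outputs a logit per class at each pixel; it is not the FPGA implementation of [3].

```python
import numpy as np

def segment_road(logits: np.ndarray) -> np.ndarray:
    """Collapse per-class scores (H, W, C) into a per-pixel label map (H, W)."""
    return np.argmax(logits, axis=-1)

# Toy 2x2 "image" with 3 classes: 0 = background, 1 = road, 2 = traffic light.
logits = np.array([
    [[0.1, 0.8, 0.1], [0.7, 0.2, 0.1]],
    [[0.1, 0.1, 0.8], [0.2, 0.6, 0.2]],
])
labels = segment_road(logits)  # each pixel gets the class with the highest score
```

The label map is what downstream planning consumes: pixels labelled "road" define the drivable path.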

2021 ◽  
Vol 2021 ◽  
pp. 1-16
Author(s):  
Mohammed A. M. Elhassan ◽  
YuXuan Chen ◽  
Yunyi Chen ◽  
Chenxi Huang ◽  
Jane Yang ◽  
...  

In recent years, convolutional neural networks (CNNs) have been at the centre of the advances and progress of advanced driver assistance systems and autonomous driving. This paper presents a point-wise pyramid attention network, namely, PPANet, which employs an encoder-decoder approach for semantic segmentation. Specifically, the encoder adopts a novel squeeze nonbottleneck module as a base module to extract feature representations, where squeeze and expansion are utilized to obtain high segmentation accuracy. An upsampling module is designed to work as a decoder; its purpose is to recover the pixel-wise representations lost in the encoding part. The middle part consists of two components connected in parallel: a point-wise pyramid attention (PPA) module and an attention-like module. The PPA module is proposed to utilize contextual information effectively. Furthermore, we developed a combined loss function from dice loss and binary cross-entropy to improve accuracy and achieve faster training convergence on KITTI road segmentation. Training and testing experiments were conducted on the KITTI road segmentation and CamVid datasets, and the evaluation results show the effectiveness of the proposed method in road semantic segmentation.
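The combined loss mentioned in the abstract can be sketched as follows. This is a minimal NumPy illustration of a dice + binary cross-entropy loss; the equal weighting (`alpha = 0.5`) is an assumption, since the paper's exact weighting is not given here.

```python
import numpy as np

def dice_loss(pred, target, eps=1e-7):
    """Soft dice loss: 1 - 2|P∩T| / (|P| + |T|), on probabilities in [0, 1]."""
    inter = np.sum(pred * target)
    return 1.0 - (2.0 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)

def bce_loss(pred, target, eps=1e-7):
    """Binary cross-entropy, with clipping for numerical stability."""
    p = np.clip(pred, eps, 1.0 - eps)
    return float(np.mean(-(target * np.log(p) + (1 - target) * np.log(1 - p))))

def combined_loss(pred, target, alpha=0.5):
    """Weighted sum of dice and BCE (assumed equal weighting)."""
    return alpha * dice_loss(pred, target) + (1 - alpha) * bce_loss(pred, target)

pred = np.array([0.9, 0.8, 0.2, 0.1])    # predicted road probabilities
target = np.array([1.0, 1.0, 0.0, 0.0])  # ground-truth road mask
loss = combined_loss(pred, target)
```

Dice directly optimises region overlap (helpful for the class imbalance of thin road regions), while BCE gives smooth per-pixel gradients; combining them is a common way to get both benefits.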


Electronics ◽  
2021 ◽  
Vol 10 (19) ◽  
pp. 2405
Author(s):  
Heung-Gu Lee ◽  
Dong-Hyun Kang ◽  
Deok-Hwan Kim

Currently, existing vehicle-centric semi-autonomous driving modules do not consider the driver’s situation and emotions. In an autonomous driving environment, when switching to manual driving, a human–machine interface and advanced driver assistance systems (ADAS) are essential to assist vehicle driving. This study proposes a human–machine interface that considers the driver’s situation and emotions to enhance the ADAS. A 1D convolutional neural network model based on multimodal bio-signals is used and applied to control semi-autonomous vehicles. The possibility of semi-autonomous driving is confirmed by classifying four driving scenarios and controlling the speed of the vehicle. In the experiment, using a driving simulator and hardware-in-the-loop simulation equipment, we confirm that the response speed of the driving assistance system is 351.75 ms and that the system recognizes four scenarios and eight emotions from bio-signal data.
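The core operation of a 1D CNN over bio-signals can be sketched as below. This is a hypothetical, minimal NumPy illustration of one convolution + ReLU stage over two signal channels; the channel names and filter are invented for illustration and are not the paper's architecture.

```python
import numpy as np

def conv1d(signal, kernel):
    """Valid-mode 1D convolution (cross-correlation) over one signal channel."""
    n = len(signal) - len(kernel) + 1
    return np.array([np.dot(signal[i:i + len(kernel)], kernel) for i in range(n)])

def relu(x):
    """Standard rectified linear activation."""
    return np.maximum(x, 0.0)

# Toy multimodal input: two bio-signal channels (hypothetically ECG and EDA),
# filtered per channel and summed, as a single-filter 1D conv layer would do.
ecg = np.array([0.0, 1.0, 0.0, -1.0, 0.0, 1.0])
eda = np.array([0.1, 0.1, 0.2, 0.2, 0.1, 0.1])
kernel = np.array([0.25, 0.5, 0.25])  # simple smoothing filter
features = relu(conv1d(ecg, kernel) + conv1d(eda, kernel))
```

Stacking several such layers and ending with a small classifier head is how a 1D CNN maps raw multimodal bio-signals to scenario and emotion classes.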


2020 ◽  
Vol 77 ◽  
pp. 01002
Author(s):  
Tomohide Fukuchi ◽  
Mark Ogbodo Ikechukwu ◽  
Abderazek Ben Abdallah

Autonomous driving has recently become a research trend, but an efficient autonomous driving system is difficult to achieve due to safety concerns. Applying traffic light recognition to an autonomous driving system is one way to prevent accidents that occur as a result of traffic light violations. To realize a safe autonomous driving system, we propose in this work the design and optimization of a traffic light detection system based on a deep neural network. We designed a lightweight convolutional neural network with fewer than 10,000 parameters and implemented it in software, achieving 98.3% inference accuracy with a 2.5 fps response time. We also optimized the input image pixel values with normalization and optimized the convolution layer with a pipeline on an FPGA, with 5% resource consumption.
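To make the under-10,000-parameter budget concrete, the sketch below counts convolution parameters for a hypothetical layer stack and shows the pixel normalization step. The layer shapes are invented for illustration; the paper's actual architecture is not specified here.

```python
import numpy as np

def normalize(img):
    """Scale 8-bit pixel values into [0, 1] before inference."""
    return img.astype(np.float32) / 255.0

def conv_params(in_ch, out_ch, k):
    """Parameter count of one k x k convolution layer: weights plus biases."""
    return in_ch * out_ch * k * k + out_ch

# Hypothetical stack (in_ch, out_ch, kernel) staying under the 10,000-parameter budget.
layers = [(3, 8, 3), (8, 16, 3), (16, 8, 3), (8, 4, 1)]
total = sum(conv_params(i, o, k) for i, o, k in layers)  # 2588 parameters
```

Keeping channel counts small like this is what makes the network cheap enough to pipeline on an FPGA with only a few percent of its resources.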


Author(s):  
Vladimir Haltakov ◽  
Jakob Mayr ◽  
Christian Unger ◽  
Slobodan Ilic

2020 ◽  
Vol 13 (2) ◽  
pp. 265-274 ◽  
Author(s):  
Wael Farag

Background: Enabling fast and reliable lane-line detection and tracking for advanced driving assistance systems and self-driving cars. Methods: The proposed technique is mainly a pipeline of computer vision algorithms that augment each other and take in raw RGB images to produce the required lane-line segments that represent the boundary of the road for the car. The main emphasis of the proposed technique is on simplicity and fast computation, so that it can be embedded in the affordable CPUs employed by ADAS. Results: Each algorithm used is described in detail, implemented, and its performance evaluated using actual road images and videos captured by the front-mounted camera of the car. The whole pipeline's performance is also tested and evaluated on real videos. Conclusion: The evaluation of the proposed technique shows that it reliably detects and tracks road boundaries under various conditions.
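A classical lane-detection pipeline of this kind can be sketched in a few stages: grayscale conversion, edge extraction, and line fitting. The sketch below is a simplified stand-in (a gradient threshold instead of Canny, least-squares instead of a Hough transform), not the author's actual pipeline.

```python
import numpy as np

def grayscale(rgb):
    """Luminance conversion of an RGB image of shape (H, W, 3)."""
    return rgb @ np.array([0.299, 0.587, 0.114])

def edge_mask(gray, thresh=50.0):
    """Crude horizontal-gradient edge detector (stand-in for Canny)."""
    gx = np.abs(np.diff(gray, axis=1))
    return gx > thresh

def fit_lane_line(mask):
    """Least-squares fit x = m*y + b through edge pixels (stand-in for Hough)."""
    ys, xs = np.nonzero(mask)
    m, b = np.polyfit(ys, xs, 1)
    return m, b

# Synthetic test frame: a white diagonal lane marking on a dark road.
rgb = np.zeros((10, 10, 3))
rgb[np.arange(10), np.arange(10)] = 255.0
m, b = fit_lane_line(edge_mask(grayscale(rgb)))  # m near 1: a diagonal boundary
```

Each stage is cheap elementwise or linear-algebra work, which is why such pipelines fit comfortably on the modest CPUs targeted by ADAS hardware.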


Author(s):  
A. Leichter ◽  
M. Werner ◽  
M. Sester

Abstract. Feature extraction over a range of scales is crucial for successful classification of objects of different sizes in 3D point clouds with varying point density. 3D point clouds are highly relevant in application areas such as terrain modelling, building modelling and autonomous driving. A large amount of such data is available, and these data are subject to investigation in the context of different tasks such as segmentation, classification, and simultaneous localisation and mapping. In this paper, we introduce a novel multiscale approach to recover neighbourhoods in unstructured 3D point clouds. Unlike the typical strategy of defining one single scale for the whole dataset or using a single optimised scale for every point, we consider an interval of scales. In this initial work, our primary goal is to evaluate the information gain from using the multiscale neighbourhood definition for the calculation of shape features, which are used for point classification. We therefore show and discuss empirical results from applying classical classification models to multiscale features. The unstructured nature of 3D point clouds makes it necessary to recover neighbourhood information before meaningful features can be extracted. This paper proposes extracting geometrical features from neighbourhoods over a range of scales, i.e. neighbourhood radii. We investigate the utilisation of this large feature set in combination with feature aggregation/selection algorithms and classical machine learning techniques. We show that the all-scale approach outperforms single-scale approaches as well as an approach with an optimised per-point scale.
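The multiscale shape features described above can be sketched as follows. This is a minimal illustration using the common eigenvalue-based features (linearity, planarity, sphericity) over radius neighbourhoods; the authors' exact feature set and neighbourhood definition may differ.

```python
import numpy as np

def shape_features(points, center, radius):
    """Eigenvalue-based shape features of the neighbourhood of `center`
    within `radius`: (linearity, planarity, sphericity)."""
    d = np.linalg.norm(points - center, axis=1)
    nbrs = points[d <= radius]
    cov = np.cov(nbrs.T)                      # 3x3 covariance of the neighbourhood
    l3, l2, l1 = np.linalg.eigvalsh(cov)      # ascending, so l1 is the largest
    linearity = (l1 - l2) / l1
    planarity = (l2 - l3) / l1
    sphericity = l3 / l1
    return linearity, planarity, sphericity

def multiscale_features(points, center, radii):
    """Concatenate shape features over an interval of scales (radii)."""
    return np.concatenate([shape_features(points, center, r) for r in radii])
```

A classifier then sees one feature vector per point containing the features at every scale, letting it pick up both fine structure (small radii) and object-level shape (large radii), which is the information gain the paper evaluates.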

