Road Segmentation using Semantic Segmentation Networks for ADAS

In this paper, we propose a method that automatically segments the road area from input road images to support safe driving of autonomous vehicles. In the proposed method, a semantic segmentation network (SSN) is trained with deep learning and then used to segment the road area. The SSN is a SegNet network whose weights are initialized from the VGG-16 network. To shorten training time and simplify inference, the classes are reduced to two, road and non-road, in the trained SegNet network. To improve the accuracy of the segmentation result, the straight-line boundaries of the road region are detected with the Hough transform and combined with the SSN output to delineate the road region more precisely. The proposed method can support safe autonomous driving by classifying the road area during operation and feeding the result to a road-area departure warning system.
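A minimal sketch of the boundary-refinement idea, assuming OpenCV and a binary road mask already produced by a SegNet-style network; the thresholds and the fusion rule below are illustrative, not the authors' exact pipeline:

```python
import cv2
import numpy as np

def refine_road_mask(road_mask: np.ndarray) -> np.ndarray:
    """Refine a binary road mask (0/255, uint8) with straight-line boundaries."""
    # Edges of the segmented road region
    edges = cv2.Canny(road_mask, 50, 150)

    # Detect dominant straight-line segments (illustrative parameters)
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                            threshold=80, minLineLength=60, maxLineGap=20)

    refined = road_mask.copy()
    if lines is not None:
        boundary = np.zeros_like(road_mask)
        for x1, y1, x2, y2 in lines[:, 0]:
            cv2.line(boundary, (x1, y1), (x2, y2), 255, thickness=3)
        # Overlay the straightened boundary on the segmented road pixels
        refined = cv2.bitwise_or(refined, boundary)
    return refined

# Usage: refined = refine_road_mask(segnet_output_mask)
```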

Author(s): Mhafuzul Islam, Mashrur Chowdhury, Hongda Li, Hongxin Hu

Vision-based navigation of autonomous vehicles primarily depends on deep neural network (DNN)-based systems in which the controller obtains input from sensors/detectors, such as cameras, and produces a vehicle control output, such as a steering wheel angle, to navigate the vehicle safely in a roadway traffic environment. Typically, these DNN-based systems in the autonomous vehicle are trained through supervised learning; however, recent studies show that a trained DNN-based system can be compromised by perturbation or adverse inputs. Similarly, such perturbations can be introduced into the DNN-based systems of autonomous vehicles by unexpected roadway hazards, such as debris or roadblocks. In this study, we first introduce a hazardous roadway environment that can compromise the DNN-based navigational system of an autonomous vehicle and produce an incorrect steering wheel angle, which could cause crashes resulting in fatality or injury. Then, we develop a DNN-based autonomous vehicle driving system using object detection and semantic segmentation to mitigate the adverse effect of this type of hazard and help the autonomous vehicle navigate safely around it. We find that our DNN-based autonomous vehicle driving system, including hazardous object detection and semantic segmentation, improves the navigational ability of an autonomous vehicle to avoid a potential hazard by 21% compared with a traditional DNN-based autonomous vehicle driving system.
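A simplified sketch of this kind of mitigation logic, assuming hazard bounding boxes from an object detector and a drivable-area mask from a segmentation network are already available; the fusion rule, thresholds, and function names below are illustrative assumptions, not the authors' controller:

```python
import numpy as np

def adjust_steering(base_angle: float,
                    hazard_boxes: list,
                    drivable_mask: np.ndarray,
                    max_correction_deg: float = 10.0) -> float:
    """Nudge the steering angle away from hazards lying on the drivable area.

    hazard_boxes: (x1, y1, x2, y2) boxes from an object detector.
    drivable_mask: binary HxW mask (1 = drivable) from a segmentation network.
    """
    h, w = drivable_mask.shape
    correction = 0.0
    for x1, y1, x2, y2 in hazard_boxes:
        roi = drivable_mask[y1:y2, x1:x2]
        if roi.size == 0 or roi.mean() < 0.2:
            continue  # hazard is not on the drivable area
        cx = 0.5 * (x1 + x2)
        offset = (cx - w / 2) / (w / 2)            # -1 (left) .. +1 (right)
        correction -= offset * max_correction_deg  # steer toward the far side
    return float(np.clip(base_angle + correction, -45.0, 45.0))

# Usage (hypothetical values):
# angle = adjust_steering(2.0, [(300, 200, 380, 260)], mask)
```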


Electronics, 2021, Vol 10 (12), pp. 1402
Author(s): Taehee Lee, Yeohwan Yoon, Chanjun Chun, Seungki Ryu

Poor road-surface conditions pose a significant safety risk to vehicle operation, especially in the case of autonomous vehicles. Hence, maintenance of road surfaces will become even more important in the future. With the development of deep learning-based computer image processing, artificial intelligence models that evaluate road conditions are being actively researched. However, as the lighting conditions of the road surface vary with the weather, model performance may degrade for images whose brightness falls outside the range of the training images, even for the same road. In this study, a semantic segmentation model with an autoencoder structure was developed for detecting road-surface cracks, together with a CNN-based image preprocessing model that adjusts image brightness before the image is fed into the crack-detection model, ensuring more reliable detection. When the preprocessing model was applied, the road-crack segmentation model exhibited consistent performance even under varying brightness values.
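The role of the preprocessing stage can be illustrated with a simple stand-in: a gamma correction that pulls frames toward a common brightness before they reach the crack-segmentation network. The paper uses a learned CNN preprocessor; the function below is only a hedged sketch of where such a step sits in the pipeline:

```python
import cv2
import numpy as np

def normalize_brightness(image: np.ndarray, target_mean: float = 120.0) -> np.ndarray:
    """Adjust image brightness toward a target mean before crack segmentation."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    mean = float(np.clip(gray.mean(), 1.0, 254.0))
    # Choose gamma so the mean brightness maps approximately to target_mean
    gamma = np.log(target_mean / 255.0) / np.log(mean / 255.0)
    table = ((np.arange(256) / 255.0) ** gamma * 255.0).astype(np.uint8)
    return cv2.LUT(image, table)

# Usage: crack_mask = segmentation_model(normalize_brightness(frame))
```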


2020, Vol 2020, pp. 1-10
Author(s): Derek Hungness, Raj Bridgelall

The adoption of connected and autonomous vehicles (CAVs) is in its infancy. Therefore, very little is known about their potential impacts on traffic. Meanwhile, researchers and market analysts predict a wide range of possibilities about their potential benefits and the timing of their deployments. Planners traditionally use various types of travel demand models to forecast future traffic conditions. However, such models do not yet integrate any expected impacts from CAV deployments. Consequently, many long-range transportation plans do not yet account for their eventual deployment. To address some of these uncertainties, this work modified an existing model for Madison, Wisconsin. To compare outcomes, the authors used identical parameter changes and simulation scenarios for a model of Gainesville, Florida. Both models show that with increasing levels of CAV deployment, both the vehicle miles traveled and the average congestion speed will increase. However, there are some important exceptions due to differences in the road network layout, geospatial features, sociodemographic factors, land-use, and access to transit.


Sensors, 2019, Vol 19 (22), pp. 5044
Author(s): Gerd Christian Krizek, Rene Hausleitner, Laura Böhme, Cristina Olaverri-Monreal

Driver disregard for the minimum safety distance increases the probability of rear-end collisions. To contribute to active safety on the road, we propose in this work a low-cost Forward Collision Warning system that captures and processes images. Using cameras located in the rear section of a leading vehicle, the system aims to discourage tailgating by the vehicle driving behind. In this paper, we perform field tests to assess system performance, focusing on the distance calculated from the processed images and on the error margins on a straight line as well as in a curve. Based on the evaluation results, the current version of the Tailigator can be used at speeds up to 50 km/h without restrictions. The measurements showed similar characteristics on the straight line and in the curve. At close distances, between 3 and 5 m, the values deviated from the true value. At medium distances, around 10 to 15 m, the Tailigator achieved its best results. At distances greater than 20 m, the deviations increased steadily with distance. We contribute to the state of the art with an innovative low-cost system that identifies tailgating behavior and raises awareness, and that works independently of the rear vehicle's communication capabilities or equipment.
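The underlying geometric relation for estimating the gap from a single rear-facing camera can be sketched with the pinhole model; the reference width, focal length, and numbers below are illustrative assumptions and do not reproduce the Tailigator's actual image-processing pipeline:

```python
def monocular_distance(focal_length_px: float,
                       real_width_m: float,
                       pixel_width: float) -> float:
    """Pinhole-camera range estimate: distance = f * W_real / w_pixels.

    real_width_m could be, e.g., the known width of a licence plate
    or of the following vehicle.
    """
    if pixel_width <= 0:
        raise ValueError("pixel_width must be positive")
    return focal_length_px * real_width_m / pixel_width

# Example: a 0.52 m licence plate spanning 35 px with an 800 px focal length
# gives roughly 800 * 0.52 / 35 ≈ 11.9 m.
print(round(monocular_distance(800.0, 0.52, 35.0), 1))
```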


Author(s): Fanta Camara, Charles Fox

Understanding pedestrian proxemic utility and trust will help autonomous vehicles to plan and control interactions with pedestrians more safely and efficiently. When pedestrians cross the road in front of human-driven vehicles, the two agents use knowledge of each other's preferences to negotiate and to determine who will yield to the other. Autonomous vehicles will require similar understandings, but previous work has shown a need for them to be provided in the form of continuous proxemic utility functions, which are not available from previous proxemics studies based on Hall's discrete zones. To fill this gap, a new Bayesian method to infer continuous pedestrian proxemic utility functions is proposed, and related to a new definition of 'physical trust requirement' (PTR) for road-crossing scenarios. The method is validated on simulation data, and then its parameters are inferred empirically from two public datasets. Results show that pedestrian proxemic utility is best described by a hyperbolic function, and that trust by the pedestrian is required in a discrete 'trust zone' which emerges naturally from simple physics. The PTR concept is then shown to be capable of generating and explaining the empirically observed zone sizes of Hall's discrete theory of proxemics.
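A hedged sketch of the two ingredients named above. The hyperbolic utility shape and the reading of the physical trust requirement as a stopping-distance condition are assumptions for illustration, not the paper's exact parameterisation or definition:

```python
import numpy as np

def hyperbolic_utility(distance_m: np.ndarray, k: float = 1.0) -> np.ndarray:
    """Hyperbolic proxemic (dis)utility: cost grows like k/d as the gap closes."""
    return -k / np.maximum(distance_m, 1e-6)

def trust_required(gap_m: float, vehicle_speed_mps: float,
                   max_decel_mps2: float = 6.0) -> bool:
    """Illustrative 'physical trust requirement': the pedestrian must trust the
    vehicle whenever its stopping distance v^2 / (2a) exceeds the current gap."""
    stopping_distance = vehicle_speed_mps ** 2 / (2.0 * max_decel_mps2)
    return stopping_distance > gap_m

# Example: at 10 m/s with 6 m/s^2 braking, stopping distance ≈ 8.3 m,
# so trust is required once the gap falls below about 8.3 m.
print(trust_required(gap_m=7.0, vehicle_speed_mps=10.0))  # True
```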


Author(s): Naveen Kumar Bangalore Ramaiah, Subrata Kumar Kundu

Reliable detection of obstacles around an autonomous vehicle is essential to avoid potential collisions and ensure safe driving. However, the vast majority of existing systems focus on detecting large obstacles such as vehicles and pedestrians; detection of small obstacles such as road debris, which pose a serious potential threat, is often overlooked. In this article, a novel stereo vision-based road debris detection algorithm is proposed that detects debris on the road surface and accurately estimates its height. Moreover, a collision warning system that warns the driver of an imminent crash using the 3D information of detected debris has been studied. A novel feature-based classifier that combines strong and weak features has been developed for the proposed algorithm; it identifies debris among selected candidates and calculates its height. The 3D information of detected debris and the vehicle's speed are used in the collision warning system to warn the driver to safely maneuver the vehicle. The performance of the proposed algorithm has been evaluated by implementing it on a passenger vehicle. Experimental results confirm that the proposed algorithm can successfully detect debris of ≥5 cm height at up to a 22 m distance with an accuracy of 90%. Moreover, the debris detection algorithm runs at 20 Hz on a commercially available stereo camera, making it suitable for real-time applications in commercial vehicles.
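A minimal sketch of a warning rule that combines the debris 3D information with vehicle speed. The 5 cm height threshold mirrors the detection limit reported above, but the time-to-collision criterion is an illustrative assumption, not the paper's warning logic:

```python
def collision_warning(debris_distance_m: float,
                      debris_height_m: float,
                      vehicle_speed_mps: float,
                      min_height_m: float = 0.05,
                      warn_ttc_s: float = 2.0) -> bool:
    """Warn when detected debris is tall enough to matter and the
    time-to-collision at the current speed drops below a threshold."""
    if debris_height_m < min_height_m or vehicle_speed_mps <= 0:
        return False
    time_to_collision = debris_distance_m / vehicle_speed_mps
    return time_to_collision < warn_ttc_s

# Example: debris 6 cm high, 15 m ahead, vehicle at 10 m/s -> TTC 1.5 s -> warn.
print(collision_warning(15.0, 0.06, 10.0))  # True
```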


Author(s): Michal Hochman, Tal Oron-Gilad

This study explored pedestrians' understanding of Fully Autonomous Vehicle (FAV) intention and what influences their decision to cross. Twenty participants viewed fixed simulated urban road-crossing scenes with a FAV present on the road. The scenes differed in the FAV's external Human-Machine Interface (e-HMI) messages, namely background color, message type, and modality, as well as in the FAV's distance from the crossing place and its size. Eye-tracking data and objective measurements were collected. Results revealed that pedestrians looked at the e-HMI before making their decision; however, they did not always decide according to the e-HMI's color, instructions (in advice messages), or intention (in status messages). Moreover, when they acted according to the e-HMI proposition, they tended to hesitate before deciding under certain distance conditions. The findings suggest that pedestrians' decision to cross depends on a combination of the e-HMI implementation and the car's distance. Future work should explore the robustness of these findings in dynamic and more complex crossing environments.


2021, Vol 13 (3), pp. 495
Author(s): Zijian Zhu, Xu Li, Jianhua Xu, Jianhua Yuan, Ju Tao

The segmentation of unstructured roads, a key technology in self-driving, remains a challenging problem. At present, most unstructured road segmentation algorithms are based on cameras or project LiDAR (Light Detection and Ranging) data, which has considerable limitations: cameras fail at night, and projection loses one dimension of information. Therefore, this paper proposes a road-boundary-enhanced Point-Cylinder network, called BE-PCFCN, which uses Point-Cylinder modules to extract point-cloud features directly and integrates a road enhancement module to achieve accurate unstructured road segmentation. First, we use an improved RANSAC-Boundary algorithm to compute a rough road-boundary point set, which is trained as a submodule with the same parameters as the original point cloud. The whole network adopts an encoder-decoder structure with Point-Cylinder as the basic module, balancing data locality against algorithmic complexity. We then built an unstructured-road dataset for training and compared BE-PCFCN with existing LiDAR semantic segmentation algorithms. Finally, the experiments verified the robustness of BE-PCFCN: the road intersection-over-union (IoU) increased by 4% compared with the best existing algorithm, reaching 95.6%. Even on unstructured roads with extremely irregular shapes, BE-PCFCN currently achieves the best segmentation results.
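To illustrate the kind of rough-boundary preprocessing described above, here is a generic RANSAC ground-plane fit over a LiDAR point cloud; it is not the paper's improved RANSAC-Boundary algorithm, and all parameters are illustrative:

```python
import numpy as np

def ransac_ground_plane(points, n_iters=200, dist_thresh=0.15, seed=0):
    """Fit a rough ground plane to an (N, 3) LiDAR point cloud with RANSAC."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(n_iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:
            continue  # degenerate sample, try again
        normal /= norm
        dists = np.abs((points - p0) @ normal)  # point-to-plane distances
        inliers = dists < dist_thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return points[best_inliers]  # candidate road-surface points

# Usage: road_pts = ransac_ground_plane(lidar_xyz)  # lidar_xyz: (N, 3) array
```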


Sensors, 2021, Vol 21 (6), pp. 2032
Author(s): Sampo Kuutti, Richard Bowden, Saber Fallah

The use of neural networks and reinforcement learning has become increasingly popular in autonomous vehicle control. However, the opaqueness of the resulting control policies presents a significant barrier to deploying neural network-based control in autonomous vehicles. In this paper, we present a reinforcement learning-based approach to autonomous vehicle longitudinal control, in which rule-based safety cages provide enhanced safety for the vehicle as well as weak supervision to the reinforcement learning agent. By guiding the agent toward meaningful states and actions, this weak supervision improves convergence during training and enhances the safety of the final trained policy. The rule-based supervisory controller has the further advantage of being fully interpretable, thereby enabling traditional validation and verification approaches to ensure the safety of the vehicle. We compare models with and without safety cages, as well as models with optimal and constrained model parameters, and show that the weak supervision consistently improves the safety of exploration, the speed of convergence, and model performance. Additionally, we show that when the model parameters are constrained or sub-optimal, the safety cages enable the model to learn a safe driving policy even when it could not be trained to drive through reinforcement learning alone.
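A minimal sketch of a rule-based safety cage around a learned longitudinal controller. The headway threshold and the hard-braking override are illustrative assumptions and do not reproduce the paper's cage design:

```python
def safety_cage(accel_cmd: float, gap_m: float, ego_speed_mps: float,
                min_time_headway_s: float = 2.0,
                max_brake: float = -6.0, max_accel: float = 2.5) -> float:
    """Override or clamp the RL agent's acceleration command with simple rules."""
    headway = gap_m / max(ego_speed_mps, 0.1)
    if headway < min_time_headway_s:
        return max_brake  # hard override: brake when the headway is too short
    return float(min(max(accel_cmd, max_brake), max_accel))  # otherwise clamp

# Example: the agent requests +1.5 m/s^2 but the gap is 15 m at 20 m/s
# (0.75 s headway), so the cage commands braking instead.
print(safety_cage(1.5, gap_m=15.0, ego_speed_mps=20.0))  # -6.0
```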


Vehicles, 2021, Vol 3 (4), pp. 764-777
Author(s): Dario Niermann, Alexander Trende, Klas Ihme, Uwe Drewitz, Cornelia Hollander, ...

The rapid development of autonomous vehicle technology and the increasing number of (semi-)autonomous vehicles on the road lead to a growing demand for more sophisticated human-machine cooperation approaches that improve trust in and acceptance of these new systems. In this work, we investigate the discomfort felt by human passengers during autonomous driving and the automatic detection of this discomfort with several model approaches that combine different data sources. Based on a driving simulator study, we analyzed the discomfort reports of 50 participants for autonomous inner-city driving. We found that perceived discomfort depends on the driving scenario (with discomfort generally peaking in complex situations) and on the passenger (resulting in interindividual differences in the extent and duration of reported discomfort). Further, we describe three different model approaches for predicting passenger discomfort using data from the vehicle's sensors as well as physiological and behavioral data from the passenger. The models' precision varies greatly across the approaches, with the best approach reaching a precision of up to 80%. All of the presented approaches use combinations of linear models and are thus fast, transparent, and safe. Lastly, we analyzed these models using the SHAP method, which enables explaining the models' discomfort predictions. These explanations are used to infer the importance of the collected features and to create a scenario-based discomfort analysis. Our work demonstrates a novel approach to passenger state modelling with simple, safe, and transparent models and explainable model predictions, which can be used to adapt the vehicle's actions to the needs of the passenger.
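A hedged sketch of the transparent-model-plus-explanation idea. The features, data, and model here are entirely hypothetical stand-ins for the study's vehicle, physiological, and behavioral signals; for a linear model with roughly independent features, the SHAP value of feature j reduces to coef_j * (x_j - mean_j), which is what the snippet computes directly:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy stand-in data: columns could represent, e.g., deceleration, heart rate,
# and gaze variance (hypothetical feature names).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)  # simple, transparent linear model

# Per-sample, per-feature contributions to the discomfort prediction,
# relative to the dataset average (linear-model SHAP values).
contributions = model.coef_[0] * (X[:5] - X.mean(axis=0))
print(contributions.round(2))
```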

