Real-Time LIDAR-Based Urban Road and Sidewalk Detection for Autonomous Vehicles

Sensors, 2021, Vol 22 (1), pp. 194
Author(s): Ernő Horváth, Claudiu Pozna, Miklós Unger

Road and sidewalk detection in urban scenarios is a challenging task because of road imperfections and the high sensor data bandwidth. Traditional free-space and ground-filter algorithms are not sensitive enough to small height differences. Camera-based or sensor-fusion solutions are widely used to separate the drivable road from the sidewalk or pavement, but a LIDAR sensor alone contains all the information needed for feature extraction. This paper therefore focuses on LIDAR-based feature extraction and presents a real-time (20 Hz+) solution for road and sidewalk detection, which can also be used for local path planning. Sidewalk edge detection is performed by a combination of three algorithms working in parallel. To validate the results, the de facto standard benchmark dataset, KITTI, was used alongside our own measurements. The data and the source code to reproduce the results are shared publicly in our GitHub repository.
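The abstract gives no implementation details, but the key idea of a ground filter that is sensitive to small height differences can be sketched in a few lines. The sketch below is an illustrative assumption, not the authors' published code: the function name, the 5 cm threshold, and the single-ring input format are all hypothetical.

import numpy as np

def detect_curb_candidates(ring_xyz, height_jump=0.05, window=5):
    # Flag points along one LIDAR scan ring whose smoothed height changes
    # by more than `height_jump` metres over `window` samples -- a rough
    # proxy for a road/sidewalk edge (curb).
    z = ring_xyz[:, 2]
    kernel = np.ones(window) / window
    z_smooth = np.convolve(z, kernel, mode="same")  # suppress single noisy returns
    dz = np.zeros_like(z_smooth)
    dz[window:] = np.abs(z_smooth[window:] - z_smooth[:-window])
    return dz > height_jump

# Usage on synthetic data: a flat road with a 10 cm step at index 500.
ring = np.zeros((1000, 3))
ring[:, 0] = np.linspace(0.0, 30.0, 1000)  # x: distance along the scan
ring[500:, 2] = 0.10                       # z jumps by 10 cm (a curb)
print(np.where(detect_curb_candidates(ring))[0])  # indices around 500

Comparing smoothed heights over a short lag, rather than consecutive samples, keeps a genuine curb step detectable after noise suppression.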

Author(s): M. L. R. Lagahit, Y. H. Tseng

Abstract. The concept of Autonomous Vehicles (AVs), or self-driving cars, has become increasingly popular in recent years, and research and development of AVs has escalated around the world. One active research area is High-Definition (HD) maps: very detailed maps that provide the geometric and semantic information of the road, which helps the AV position itself within the lanes as well as map objects and markings on the road. This research focuses on the early stages of updating such HD maps. The methodology mainly consists of (1) running YOLOv3, a real-time object detection system, on a photo taken by a stereo camera to detect the object of interest, in this case a traffic cone, (2) applying stereo-photogrammetry to determine the 3D coordinates of the traffic cone, and (3) executing all of it simultaneously on a Python-based platform. Results show centimeter-level accuracy in the obtained distance and height of the detected traffic cone from the camera setup. In future work, the observed coordinates can be uploaded to a database and connected to an application for real-time data storage/management and interactive visualization.
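The triangulation step in (2) follows the standard rectified-stereo model, where depth is Z = f·B/d for focal length f, baseline B, and disparity d. Below is a minimal sketch in Python (the paper's stated platform); all camera parameters and pixel values are made-up examples, not the authors' calibration.

def stereo_point_from_disparity(u_left, v, disparity_px,
                                focal_px, baseline_m, cx, cy):
    # Triangulate a 3D point (camera frame) from a rectified stereo pair
    # using the pinhole stereo model:
    #   Z = f * B / d,  X = (u - cx) * Z / f,  Y = (v - cy) * Z / f
    Z = focal_px * baseline_m / disparity_px
    X = (u_left - cx) * Z / focal_px
    Y = (v - cy) * Z / focal_px
    return X, Y, Z

# Hypothetical numbers: 1000 px focal length, 12 cm baseline, cone centre
# detected at pixel (740, 380) with a 25 px disparity.
X, Y, Z = stereo_point_from_disparity(740, 380, 25.0, 1000.0, 0.12, 640, 360)
print(f"cone at X={X:.2f} m, Y={Y:.2f} m, Z={Z:.2f} m")  # Z = 4.80 m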


Author(s): Michal Hochman, Tal Oron-Gilad

This study explored pedestrians' understanding of Fully Autonomous Vehicle (FAV) intention and what influences their decision to cross. Twenty participants saw fixed simulated urban road-crossing scenes with an FAV present on the road. The scenes differed from one another in the FAV's external Human-Machine Interface (e-HMI) background color, message type, and modality, as well as in the FAV's distance from the crossing place and its size. Eye-tracking data and objective measurements were collected. Results revealed that pedestrians looked at the e-HMI before making their decision; however, they did not always decide according to the e-HMI's color, instructions (in advice messages), or intention (in status messages). Moreover, when they acted according to the e-HMI proposition, for certain distance conditions they tended to hesitate before deciding. Findings suggest that pedestrians' decision to cross depends on a combination of the e-HMI implementation and the car's distance. Future work should explore the robustness of these findings in dynamic and more complex crossing environments.


Sensors, 2021, Vol 21 (8), pp. 2814
Author(s): Tsige Tadesse Alemayoh, Jae Hoon Lee, Shingo Okamoto

For the effective application of thriving human-assistive technologies in healthcare services and human–robot collaborative tasks, computing devices must be aware of human movements. Developing a reliable real-time activity recognition method is therefore imperative for the continuous and smooth operation of such smart devices, and light, intelligent methods that use ubiquitous sensors are pivotal to achieving it. In this study, with the correlation of time-series data in mind, a new method of data structuring for deeper feature extraction is introduced. The activity data were collected using a smartphone with the help of an exclusively developed iOS application. Data from eight activities were shaped into single- and double-channel formats to extract deep temporal and spatial features of the signals. In addition to the time domain, the raw data were represented in the Fourier and wavelet domains. Among the several neural network models used to fit the deep-learning classification of the activities, a convolutional neural network with a double-channeled time-domain input performed well. The method was further evaluated on other public datasets, where better performance was obtained. The practicability of the trained model was finally tested on a computer and a smartphone in real time, where it demonstrated promising results.
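The double-channel structuring can be pictured as stacking synchronized inertial streams into one CNN-ready tensor. The sketch below illustrates the general idea, not the authors' exact layout; the window length, stride, and (channel, time, axis) ordering are assumptions.

import numpy as np

def make_double_channel(acc, gyro, win=128, step=64):
    # Slice synchronized accelerometer and gyroscope streams into
    # overlapping windows and stack them as two input channels,
    # giving shape (n_windows, 2, win, 3): one axis for time samples,
    # one for the x/y/z components.
    windows = []
    for start in range(0, len(acc) - win + 1, step):
        a = acc[start:start + win]          # (win, 3)
        g = gyro[start:start + win]         # (win, 3)
        windows.append(np.stack([a, g]))    # (2, win, 3)
    return np.asarray(windows)

# Usage: 10 s of fake 50 Hz data -> CNN-ready batches.
T = 500
batch = make_double_channel(np.random.randn(T, 3), np.random.randn(T, 3))
print(batch.shape)  # (6, 2, 128, 3)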


2020
Author(s): Huihui Pan, Weichao Sun, Qiming Sun, Huijun Gao

Abstract. Environmental perception is one of the key technologies needed to realize autonomous vehicles, which are often equipped with multiple sensors forming a multi-source environmental perception system. These sensors are very sensitive to light and background conditions, which introduce a variety of global and local fault signals that pose great safety risks to the autonomous driving system during long-term operation. In this paper, a real-time data fusion network with a fault diagnosis and fault tolerance mechanism is designed. By introducing prior features to make the backbone network lightweight, the features of the input data can be extracted accurately in real time. Using the temporal and spatial correlation between sensor data, the sensor redundancy is exploited to diagnose the local and global confidence of sensor data in real time, eliminate faulty data, and ensure the accuracy and reliability of data fusion. Experiments show that the network achieves state-of-the-art results in speed and accuracy and can accurately detect the location of the target when some sensors are out of focus or out of order.
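The confidence diagnosis exploits sensor redundancy. A toy version of that idea, rejecting a reading that disagrees with its redundant peers before fusing, can be written as follows; the median/MAD outlier test and all numbers are illustrative assumptions, not the network described in the paper.

import numpy as np

def fuse_with_fault_rejection(readings, tol=3.0):
    # Fuse redundant scalar sensor readings, discarding faulty ones.
    # A reading is flagged when its distance from the median exceeds
    # `tol` robust standard deviations (median absolute deviation).
    r = np.asarray(readings, dtype=float)
    med = np.median(r)
    mad = np.median(np.abs(r - med)) + 1e-9   # robust spread estimate
    faulty = np.abs(r - med) > tol * 1.4826 * mad
    return r[~faulty].mean(), faulty

# Usage: three agreeing sensors and one stuck at zero.
fused, faults = fuse_with_fault_rejection([10.1, 9.9, 10.0, 0.0])
print(fused, faults)  # ~10.0, [False False False  True]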


Sensors, 2018, Vol 18 (12), pp. 4274
Author(s): Qingquan Li, Jian Zhou, Bijun Li, Yuan Guo, Jinsheng Xiao

Vision-based lane-detection methods provide low-cost, dense information about roads for autonomous vehicles. In this paper, we propose a robust and efficient method that expands the application of these methods to cover low-speed environments. First, the reliable region near the vehicle is initialized and a series of rectangular detection regions is dynamically constructed along the road. Then, an improved symmetrical local threshold edge extraction is introduced to extract the edge points of the lane markings based on accurate marking-width limitations. To meet real-time requirements, a novel Bresenham line voting space is proposed to improve the process of line-segment detection. Combining straight lines, polylines, and curves, the proposed geometric fitting method can adapt to various road shapes. Finally, different state vectors and Kalman filter transfer matrices are used to track the key points of the linear and nonlinear parts of the lane. The proposed method was tested on a public database and on our autonomous platform. The experimental results show that the method is robust and efficient and can meet the real-time requirements of autonomous vehicles.
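A symmetrical local threshold asks whether a pixel is brighter than the pixels roughly one marking-width away on both sides, which fires on bright stripes of bounded width. Here is a one-row sketch of that idea; the threshold value, half-width, and function name are assumptions rather than the paper's exact formulation.

import numpy as np

def symmetrical_local_threshold(gray_row, half_width, tau=20):
    # Mark pixels brighter than BOTH neighbours `half_width` pixels to the
    # left and right by at least `tau` -- a 1-D version of the symmetrical
    # local threshold used for lane-marking edge points.
    row = gray_row.astype(np.int32)
    w = half_width
    out = np.zeros(row.shape, dtype=bool)
    center = row[w:-w]
    out[w:-w] = (center - row[:-2 * w] > tau) & (center - row[2 * w:] > tau)
    return out

# Usage: dark road (value 50) with a bright 6 px marking (value 200).
row = np.full(100, 50)
row[40:46] = 200
print(np.where(symmetrical_local_threshold(row, half_width=8))[0])  # 40..45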


2018, Vol 7 (2), pp. 489-506
Author(s): Manuel Bastuck, Tobias Baur, Andreas Schütze

Abstract. We present DAV3E, a MATLAB toolbox for feature extraction from, and evaluation of, cyclic sensor data. Such data arise in many real-world applications, like gas sensors in temperature-cycled operation or condition monitoring of hydraulic machines. DAV3E enables interactive shape-describing feature extraction from such datasets, which is lacking in current machine learning tools, with subsequent methods to build validated statistical models for the prediction of unknown data. It also provides more sophisticated methods like model hierarchies, exhaustive parameter search, and automatic data fusion, which can all be accessed in the same graphical user interface for a streamlined and efficient workflow, or via the command line for more advanced users. New features and visualization methods can be added with minimal MATLAB knowledge through the plug-in system. We describe the ideas and concepts implemented in the software, as well as the currently existing modules, and demonstrate its capabilities on one synthetic and two real datasets. An executable version of DAV3E can be found at http://www.lmt.uni-saarland.de/dave (last access: 14 September 2018). The source code is available on request.
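DAV3E itself is MATLAB, but the notion of a shape-describing feature from a cycle, for example the mean and slope of each cycle segment, is easy to illustrate. The Python sketch below is for consistency with the other examples in this collection; the segment count and feature choice are assumptions, not the toolbox's fixed feature set.

import numpy as np

def cycle_features(cycle, n_segments=10):
    # Compute simple shape-describing features from one sensor cycle:
    # the mean and the linear slope of each of `n_segments` equal parts.
    feats = []
    for seg in np.array_split(np.asarray(cycle, dtype=float), n_segments):
        t = np.arange(len(seg))
        slope = np.polyfit(t, seg, 1)[0]    # first-order fit: slope term
        feats.extend([seg.mean(), slope])
    return np.array(feats)                  # length 2 * n_segments

# Usage: a fake temperature-cycled gas-sensor response.
t = np.linspace(0, 2 * np.pi, 200)
print(cycle_features(np.sin(t)).shape)      # (20,)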


Sensors, 2019, Vol 19 (20), pp. 4357
Author(s): Babak Shahian Jahromi, Theja Tulabandhula, Sabri Cetin

Many sensor fusion frameworks have been proposed in the literature, using different combinations and configurations of sensors and fusion methods. Most focus on improving accuracy; however, the feasibility of implementing these frameworks in an autonomous vehicle is less explored. Some fusion architectures perform very well in lab conditions using powerful computational resources, but in real-world applications they cannot be implemented on an embedded edge computer due to their high cost and computational needs. We propose a new hybrid multi-sensor fusion pipeline configuration that performs environment perception for autonomous vehicles, including road segmentation, obstacle detection, and tracking. This fusion framework uses a proposed encoder-decoder-based Fully Convolutional Neural Network (FCNx) and a traditional Extended Kalman Filter (EKF) nonlinear state estimator, together with a configuration of camera, LiDAR, and radar sensors that is best suited to each fusion method. The goal of this hybrid framework is to provide a cost-effective, lightweight, modular, and robust (in case of a sensor failure) fusion system. The FCNx algorithm improves road-detection accuracy over benchmark models while maintaining the real-time efficiency required by an autonomous vehicle's embedded computer. Tested on over 3K road scenes, our fusion algorithm shows better performance in various environment scenarios compared to baseline benchmark networks. Moreover, the algorithm was implemented in a vehicle and tested using actual sensor data collected from it, performing real-time environment perception.
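The EKF half of the pipeline follows the textbook predict/update cycle. A compact sketch is given below; the constant-velocity model and noise values in the usage example are placeholders, not the paper's tuned tracker.

import numpy as np

def ekf_step(x, P, z, f, F, h, H, Q, R):
    # One predict/update cycle of an Extended Kalman Filter.
    # x, P : state estimate and covariance;  z : new measurement
    # f, h : process and measurement functions
    # F, H : their Jacobians evaluated at the current estimate
    # Q, R : process and measurement noise covariances
    x_pred = f(x)
    P_pred = F @ P @ F.T + Q
    y = z - h(x_pred)                        # innovation
    S = H @ P_pred @ H.T + R                 # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Usage: constant-velocity tracking of one obstacle coordinate.
dt = 0.1
F = np.array([[1, dt], [0, 1]])
H = np.array([[1.0, 0.0]])
x, P = np.zeros(2), np.eye(2)
x, P = ekf_step(x, P, np.array([1.2]),
                f=lambda s: F @ s, F=F,
                h=lambda s: H @ s, H=H,
                Q=0.01 * np.eye(2), R=np.array([[0.25]]))
print(x)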


Author(s): Zamani Md Sani, Hadhrami Abd Ghani, Rosli Besar, Azizul Azizan, Hafiza Abas

Road users make vital decisions to safely maneuver their vehicles based on the road markers, which need to be correctly classified. Road marker classification is especially important for autonomous car technology. This paper addresses the current problems of long processing times and relatively low average accuracy when classifying up to five types of road markers. Two novel real-time video processing methods are proposed that classify the road markers by extracting two formulated features, namely the contour number and the angle 𝜃. Initially, the camera position is calibrated to obtain the best Field of View (FOV) for identifying a customized Region of Interest (ROI). An adaptive smoothing algorithm is performed on the ROI before the contours of the road markers and the corresponding two features are determined. The achievable accuracy of the proposed methods in several non-urban road scenarios is approximately 96%, and the processing time per frame is significantly reduced compared with the existing approach as the video resolution increases.
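The two features can be extracted with standard OpenCV calls. Everything below (the Otsu thresholding, the minimum-area-rectangle angle, and the helper name) is a plausible reconstruction for illustration, not the authors' implementation.

import cv2
import numpy as np

def marker_features(roi_bgr):
    # Extract the two features named in the abstract from a road-marker
    # ROI: the number of contours and a dominant angle theta (degrees).
    gray = cv2.cvtColor(roi_bgr, cv2.COLOR_BGR2GRAY)
    blur = cv2.GaussianBlur(gray, (5, 5), 0)       # simple smoothing stand-in
    _, binary = cv2.threshold(blur, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)  # OpenCV >= 4
    theta = 0.0
    if contours:
        biggest = max(contours, key=cv2.contourArea)
        theta = cv2.minAreaRect(biggest)[2]        # fitted rectangle angle
    return len(contours), theta

# Usage: a white diagonal stripe on a dark ROI.
roi = np.zeros((100, 100, 3), np.uint8)
cv2.line(roi, (10, 90), (90, 10), (255, 255, 255), 8)
print(marker_features(roi))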


2020, Vol 11
Author(s): Michal Hochman, Yisrael Parmet, Tal Oron-Gilad

This study explored pedestrians' understanding of Fully Autonomous Vehicles' (FAVs) intention to stop and what influences pedestrians' decision to cross the road over time, i.e., learnability. Twenty participants saw fixed simulated urban road-crossing scenes with a single FAV on the road, as if they were pedestrians intending to cross. Scenes differed from one another in the FAV's distance from the crossing place, its physical size, and its external Human-Machine Interface (e-HMI) message: background color (red/green), message type (status/advice), and presentation modality (text/symbol). Eye-tracking data and decision measurements were collected. Results revealed that pedestrians tend to look at the e-HMI before making their decision; however, they did not necessarily decide according to the e-HMI's color or message type. Moreover, when they complied with the e-HMI proposition, they tended to hesitate before making the decision. Overall, a learning effect over time was observed in all conditions, regardless of e-HMI features and crossing context. Findings suggest that pedestrians' decision making depends on a combination of the e-HMI implementation and the car's distance. Moreover, since the learning curve exists in all conditions and has the same proportions, it is critical to design an interaction that encourages a high probability of compatible decisions from the first phase. To extend these findings, it is necessary to further examine dynamic situations.

