Marked and unmarked speed bump detection for autonomous vehicles using stereo vision

2021 ◽  
pp. 1-14
Author(s):  
Ana Luisa Ballinas-Hernández ◽  
Ivan Olmos-Pineda ◽  
José Arturo Olvera-López

A current challenge for autonomous vehicles is the detection of irregularities on road surfaces in order to prevent accidents; in particular, speed bump detection is an important task for safe and comfortable autonomous navigation. Some techniques achieve acceptable speed bump detection under optimal road surface conditions, especially when signs are well marked. However, in developing countries it is very common to find unmarked speed bumps, where existing techniques fail. In this paper, a methodology to detect both marked and unmarked speed bumps is proposed. For clearly painted speed bumps, the local binary patterns (LBP) technique is applied to extract features from an image dataset. For unmarked speed bump detection, stereo vision is applied: point clouds obtained by 3D reconstruction are converted to triangular meshes using Delaunay triangulation, and the features most relevant to speed bump elevations are selected and extracted from the surface meshes. The results make an important contribution and improve on some existing techniques, since the reconstruction of three-dimensional meshes provides relevant information for detecting speed bumps by their surface elevations even when they are not marked.
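The two pipeline ingredients named in the abstract (LBP codes for marked bumps, Delaunay meshes over the point cloud for unmarked ones) can be sketched minimally as follows. This is an illustrative reconstruction, not the authors' implementation; the toy patch, the five-point cloud, and the "max vertex height per triangle" elevation feature are all assumptions for demonstration.

```python
import numpy as np
from scipy.spatial import Delaunay

def lbp_code(patch):
    """8-neighbour local binary pattern code for the centre pixel of a 3x3 patch."""
    center = patch[1, 1]
    # neighbours in clockwise order starting at the top-left corner
    neighbours = patch[[0, 0, 0, 1, 2, 2, 2, 1], [0, 1, 2, 2, 2, 1, 0, 0]]
    bits = (neighbours >= center).astype(int)
    return int((bits * 2 ** np.arange(8)).sum())

patch = np.array([[5, 9, 1],
                  [3, 4, 7],
                  [2, 8, 6]])
code = lbp_code(patch)

# toy point cloud: four ground points and one raised point (a "bump" apex)
pts = np.array([[0.0, 0.0, 0.0],
                [1.0, 0.0, 0.0],
                [0.0, 1.0, 0.0],
                [1.0, 1.0, 0.0],
                [0.5, 0.5, 0.4]])
tri = Delaunay(pts[:, :2])                 # triangulate in the ground plane
elev = pts[tri.simplices, 2].max(axis=1)   # one elevation feature per triangle
```

Each triangle's elevation feature exposes the raised apex, which is the kind of surface cue the mesh-based detector exploits.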

Robotica ◽  
2018 ◽  
Vol 36 (8) ◽  
pp. 1225-1243 ◽  
Author(s):  
Jose-Pablo Sanchez-Rodriguez ◽  
Alejandro Aceves-Lopez

SUMMARY: This paper presents an overview of the most recent vision-based multi-rotor micro unmanned aerial vehicles (MUAVs) intended for autonomous navigation using a stereoscopic camera. Drone operation is difficult because pilots need expertise to fly them. Pilots have a limited field of view, and unfortunate situations, such as loss of line of sight or collisions with objects such as wires and branches, can happen. Autonomous navigation is an even more difficult challenge than remote-control navigation because the drones must make decisions on their own in real time and simultaneously build maps of their surroundings if none is available. Moreover, MUAVs are limited in terms of useful payload capability and energy consumption. Therefore, a drone must be equipped with small sensors and carry little weight. In addition, a drone requires a sufficiently powerful onboard computer so that it can understand its surroundings and navigate accordingly to achieve its goal safely. A stereoscopic camera is considered a suitable sensor because of its three-dimensional (3D) capabilities. Hence, a drone can perform vision-based navigation through object recognition and self-localise inside a map if one is available; otherwise, its autonomous navigation poses a simultaneous localisation and mapping problem.


2020 ◽  
Vol 10 (3) ◽  
pp. 1140 ◽  
Author(s):  
Jorge L. Martínez ◽  
Mariano Morán ◽  
Jesús Morales ◽  
Alfredo Robles ◽  
Manuel Sánchez

Autonomous navigation of ground vehicles in natural environments requires continuously looking for traversable terrain. This paper develops traversability classifiers for the three-dimensional (3D) point clouds acquired by the mobile robot Andabata on non-slippery solid ground. To this end, different supervised learning techniques from the Python library Scikit-learn are employed. Training and validation are performed with synthetic 3D laser scans that were labelled point by point automatically with the robotic simulator Gazebo. Good prediction results are obtained for most of the developed classifiers, which have also been tested successfully on real 3D laser scans acquired by Andabata in motion.
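A minimal sketch of this kind of supervised traversability classification with Scikit-learn is shown below. The two per-point features (local slope and roughness), the labelling rule, and the choice of a random forest are assumptions standing in for the paper's automatically labelled Gazebo scans; only the library is taken from the abstract.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
slope = rng.uniform(0.0, 45.0, n)    # assumed feature: local slope in degrees
rough = rng.uniform(0.0, 0.3, n)     # assumed feature: height std-dev in metres
X = np.column_stack([slope, rough])
# synthetic ground-truth rule standing in for simulator-generated labels
y = ((slope < 20.0) & (rough < 0.15)).astype(int)   # 1 = traversable

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)          # held-out accuracy
```

Any other Scikit-learn classifier (SVM, k-NN, gradient boosting) drops into the same fit/score interface, which is presumably why the paper could compare several of them.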


Micromachines ◽  
2020 ◽  
Vol 11 (5) ◽  
pp. 456 ◽  
Author(s):  
Dingkang Wang ◽  
Connor Watkins ◽  
Huikai Xie

In recent years, Light Detection and Ranging (LiDAR) has been drawing extensive attention both in academia and industry because of the increasing demand for autonomous vehicles. LiDAR is believed to be the crucial sensor for autonomous driving and flying, as it can provide high-density point clouds with accurate three-dimensional information. This review presents an extensive overview of Microelectromechanical Systems (MEMS) scanning mirrors specifically for applications in LiDAR systems. MEMS mirror-based laser scanners have unrivalled advantages in terms of size, speed and cost over other types of laser scanners, making them ideal for LiDAR in a wide range of applications. A figure of merit (FoM) is defined for MEMS mirrors in LiDAR scanners in terms of aperture size, field of view (FoV) and resonant frequency. Various MEMS mirrors based on different actuation mechanisms are compared using the FoM. Finally, a preliminary assessment of off-the-shelf MEMS scanned LiDAR systems is given.
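The abstract defines the FoM in terms of aperture size, FoV and resonant frequency without giving the formula; the simple product form below, and both sets of mirror numbers, are assumptions for illustration only.

```python
def mems_mirror_fom(aperture_mm, fov_deg, freq_khz):
    """Assumed product-form figure of merit: aperture x FoV x resonant frequency.
    Larger apertures collect more return light, wider FoV covers more scene,
    and higher resonant frequency permits faster scanning."""
    return aperture_mm * fov_deg * freq_khz

# hypothetical mirrors: a small fast one vs. a large slower one
fom_a = mems_mirror_fom(1.0, 20.0, 2.0)
fom_b = mems_mirror_fom(5.0, 30.0, 0.5)
```

Under this assumed metric the large slow mirror wins, illustrating the trade-off the FoM is meant to capture: actuation mechanisms that buy aperture usually cost speed.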


2020 ◽  
Vol 17 (1) ◽  
pp. 172988141989671 ◽  
Author(s):  
Luis R Ramírez-Hernández ◽  
Julio C Rodríguez-Quiñonez ◽  
Moises J Castro-Toscano ◽  
Daniel Hernández-Balbuena ◽  
Wendy Flores-Fuentes ◽  
...  

Computer vision systems have proven to be useful in applications of autonomous navigation, especially stereo vision systems for three-dimensional mapping of the environment. This article presents a novel camera calibration method to improve the accuracy of stereo vision systems for three-dimensional point localization. The proposed camera calibration method uses the least squares method to model the error caused by image digitalization and lens distortion. To obtain particular three-dimensional point coordinates, a stereo vision system uses the information of two images taken by two different cameras. The system locates the two-dimensional pixel coordinates of the three-dimensional point in both images and converts them into angles. With the obtained angles, the system finds the three-dimensional point coordinates through a triangulation process. The proposed camera calibration method is applied in the stereo vision system, and a comparative analysis between the real and calibrated three-dimensional data points is performed to validate the improvements. Moreover, the developed method is compared with three classical calibration methods to analyze its advantages in accuracy with respect to the tested methods.
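The angle-based triangulation step described here can be sketched in a 2D simplification: two cameras on a baseline each report the bearing angle to the target, and the rays are intersected. The planar geometry and the coordinate convention (cameras at the origin and at `(baseline, 0)`, angles measured from the baseline axis) are assumptions for illustration, not the paper's full 3D model.

```python
import math

def triangulate(baseline, theta_l, theta_r):
    """Intersect two bearing rays from cameras at (0, 0) and (baseline, 0).
    theta_l, theta_r: ray angles measured from the baseline (x) axis.
    Returns the (x, y) coordinates of the observed point."""
    tl, tr = math.tan(theta_l), math.tan(theta_r)
    # left ray:  y = tl * x;  right ray:  y = tr * (x - baseline)
    x = baseline * tr / (tr - tl)
    return x, tl * x

# target at (0.5, 1.0) seen from a 1 m baseline
x, y = triangulate(1.0, math.atan2(1.0, 0.5), math.atan2(1.0, -0.5))
```

The calibration method then corrects the measured angles before this intersection, since small angular errors grow into large position errors at range.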


Sensors ◽  
2019 ◽  
Vol 19 (20) ◽  
pp. 4423 ◽  
Author(s):  
Hu ◽  
Yang ◽  
Li

Environment perception is critical for feasible path planning and safe driving for autonomous vehicles. Perception devices, such as cameras, LiDAR (Light Detection and Ranging), IMUs (Inertial Measurement Units), etc., only provide raw sensing data with no identification of vital objects, which is insufficient for autonomous vehicles to perform safe and efficient self-driving operations. This study proposes an improved edge-oriented segmentation-based method to detect objects from the sensed three-dimensional (3D) point cloud. The method consists of three main steps: first, the bounding areas of objects are identified by edge detection and stixel estimation in corresponding two-dimensional (2D) images taken by a stereo camera. Second, 3D sparse point clouds of objects are reconstructed in the bounding areas. Finally, the dense point clouds of objects are segmented by matching the 3D sparse point clouds of objects with the whole scene point cloud. After comparison with existing segmentation methods, the experimental results demonstrate that the proposed edge-oriented segmentation method improves the precision of 3D point cloud segmentation, and that objects can be segmented accurately. Meanwhile, the visualization of output data in advanced driving assistance systems (ADAS) is greatly facilitated by the decreases in computational time and in the number of points in the object's point cloud.
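The first step, finding an object's bounding area from image edges, can be sketched with a plain finite-difference gradient in NumPy. This is a simplified stand-in for the paper's edge detection and stixel estimation: the gradient operator, the threshold, and the synthetic image are all assumptions.

```python
import numpy as np

def edge_bounding_box(img, thresh):
    """Gradient-magnitude edges, then the bounding box (row_min, row_max,
    col_min, col_max) of pixels whose gradient exceeds thresh."""
    gy, gx = np.gradient(img.astype(float))   # gradients along rows, cols
    mag = np.hypot(gx, gy)
    rows, cols = np.nonzero(mag > thresh)
    return rows.min(), rows.max(), cols.min(), cols.max()

img = np.zeros((10, 10))
img[3:7, 4:8] = 1.0               # a bright synthetic "object"
box = edge_bounding_box(img, 0.4)
```

In the full pipeline this 2D bounding area is what restricts the expensive 3D sparse reconstruction to candidate object regions instead of the whole scene.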


2010 ◽  
Vol 28 (1) ◽  
pp. 93-111 ◽  
Author(s):  
Luiz Naveda ◽  
Marc Leman

Spatiotemporal gestures in music and dance have been approached using both qualitative and quantitative research methods. Applying quantitative methods has offered new perspectives but imposed several constraints such as artificial metric systems, weak links with qualitative information, and incomplete accounts of variability. In this study, we tackle these problems using concepts from topology to analyze gestural relationships in space. The Topological Gesture Analysis (TGA) relies on the projection of musical cues onto gesture trajectories, which generates point clouds in a three-dimensional space. Point clouds can be interpreted as topologies equipped with musical qualities, which gives us an idea about the relationships between gesture, space, and music. Using this method, we investigate the relationships between musical meter, dance style, and expertise in two popular dances (samba and Charleston). The results show how musical meter is encoded in the dancer's space and how relevant information about styles and expertise can be revealed by means of simple topological relationships.
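The core TGA operation, projecting musical cues onto a gesture trajectory to form a point cloud, can be sketched as time-indexed sampling. The synthetic circular trajectory, the 100 Hz sampling rate, and the half-second cue spacing below are assumptions for illustration, not data from the study.

```python
import numpy as np

# hypothetical gesture trajectory sampled at 100 Hz over 4 s: (x, y, z) per frame
t = np.arange(0.0, 4.0, 0.01)
traj = np.column_stack([np.sin(2 * np.pi * t),
                        np.cos(2 * np.pi * t),
                        0.1 * t])

beats = np.arange(0.0, 4.0, 0.5)   # assumed metric cue times (e.g. beat onsets)
idx = np.searchsorted(t, beats)    # nearest trajectory frame at or after each cue
cloud = traj[idx]                  # one 3D point per musical cue
```

The resulting point cloud is what TGA then analyses topologically: recurring metric positions cluster in space, revealing how the dancer's body encodes the meter.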


2018 ◽  
Vol 66 (9) ◽  
pp. 745-751
Author(s):  
Lukas Schneider ◽  
Michael Hafner ◽  
Uwe Franke

Abstract: Autonomous vehicles as well as sophisticated driver assistance systems use stereo vision to perceive their environment in 3D. At least two million 3D points will be delivered by next-generation automotive stereo vision systems. In order to cope with this huge amount of data in real time, we developed a medium-level representation, named the Stixel world. This representation condenses the relevant scene information by three orders of magnitude. Since traffic scenes are dominated by planar horizontal and vertical surfaces, our representation approximates the three-dimensional scene by means of thin planar rectangles called Stixels. This survey paper summarizes the progress of the Stixel world. The evolution started with a rather simple representation based on a flat-world assumption. A major breakthrough was achieved by introducing deep learning, which allows rich semantic information to be incorporated. In its most recent form, the Stixel world encodes geometric, semantic and motion cues and is capable of handling even the steepest roads in San Francisco.
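The condensation idea can be made concrete with a minimal data structure: one thin vertical rectangle replaces the hundreds of raw 3D points in its image-column band. The field names and the example numbers below are illustrative assumptions, not the published Stixel format.

```python
from dataclasses import dataclass

@dataclass
class Stixel:
    """A thin vertical planar rectangle in one image-column band (illustrative)."""
    u: int            # column index of the band
    v_top: int        # top image row of the rectangle
    v_bottom: int     # bottom image row of the rectangle
    distance: float   # metres to the planar patch
    label: str        # semantic class from the deep-learning stage, e.g. "car"

# one Stixel standing in for all the 3D points in its column band
s = Stixel(u=120, v_top=210, v_bottom=380, distance=14.5, label="car")
```

Replacing millions of points with a few thousand such rectangles per frame is what yields the three-orders-of-magnitude reduction the abstract cites.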


2021 ◽  
Vol 10 (4) ◽  
pp. 234
Author(s):  
Jing Ding ◽  
Zhigang Yan ◽  
Xuchen We

To obtain effective indoor moving-target localization, a reliable and stable moving-target localization method based on binocular stereo vision is proposed in this paper. A moving-target recognition and extraction algorithm, which integrates displacement-pyramid Horn–Schunck (HS) optical flow, Delaunay triangulation and Otsu threshold segmentation, is presented to separate a moving target from a complex background, called the Otsu Delaunay HS (O-DHS) method. Additionally, a stereo matching algorithm based on deep matching and stereo vision is presented to obtain dense stereo matching point pairs, called stereo deep matching (S-DM). The stereo matching point pairs of the moving target were extracted from the moving-target area and the stereo deep matching point pairs; then the three-dimensional coordinates of the points in the moving-target area were reconstructed according to the principle of the binocular vision parallel structure. Finally, the moving target was located by the centroid method. The experimental results showed that this method can better resist image noise and repeated texture, can effectively detect and separate moving targets, and can match stereo image points in repeatedly textured areas more accurately and stably. This method can effectively improve the effectiveness, accuracy and robustness of three-dimensional moving-target localization.
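Of the three ingredients in O-DHS, the Otsu thresholding step is self-contained enough to sketch directly: it exhaustively picks the grey-level threshold that maximises the between-class variance. The implementation below and the synthetic bimodal image are illustrative; they are not the paper's code.

```python
import numpy as np

def otsu_threshold(gray):
    """Exhaustive Otsu's method on a uint8 image: return the threshold t
    maximising the between-class variance w0*w1*(mu0 - mu1)^2."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    best_t, best_var = 0, 0.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()   # class probabilities
        if w0 == 0.0 or w1 == 0.0:
            continue
        mu0 = (np.arange(t) * prob[:t]).sum() / w0          # class means
        mu1 = (np.arange(t, 256) * prob[t:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

# synthetic bimodal image: dark background (40) and bright foreground (200)
img = np.concatenate([np.full(100, 40), np.full(100, 200)]).astype(np.uint8)
t = otsu_threshold(img)
```

In the full O-DHS pipeline this threshold is applied to the optical-flow magnitude rather than raw intensity, separating moving pixels from the static background.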


Sensors ◽  
2021 ◽  
Vol 21 (1) ◽  
pp. 297
Author(s):  
Ali Marzoughi ◽  
Andrey V. Savkin

We study problems of intercepting single and multiple invasive intruders on the boundary of a planar region by employing a team of autonomous unmanned surface vehicles. First, the problem of intercepting a single intruder is studied, and then the proposed strategy is applied to intercepting multiple intruders on the region boundary. Based on the proposed decentralised motion control algorithm and decision-making strategy, each autonomous vehicle intercepts any intruder that tends to leave the region by detecting the most vulnerable point of the boundary. An efficient and simple rule-based control algorithm for navigating the autonomous vehicles on the boundary of the sea region is developed. The proposed algorithm is computationally simple and easily implementable in real-life intruder interception applications. In this paper, we obtain necessary and sufficient conditions for the existence of a real-time solution to the considered problem of intruder interception. The effectiveness of the proposed method is confirmed by computer simulations with both single and multiple intruders.

