Fast and accurate ground plane detection for the visually impaired from 3D organized point clouds

Author(s):  
Ramy Ashraf Zeineldin ◽  
Nawal Ahmed El-Fishawy
Author(s):  
T. Fiolka ◽  
F. Rouatbi ◽  
D. Bender

3D terrain models are an important instrument in areas like geology, agriculture and reconnaissance. Using an automated UAS with a line-based LiDAR, terrain models can be created quickly and easily, even for large areas. However, the resulting point cloud may contain holes and therefore be incomplete. This may happen due to occlusions, deviations from the planned flight route caused by wind, or simply changes in the ground height that alter the swath of the LiDAR system. This paper proposes a method to detect holes in 3D point clouds generated during the flight and to adjust the course in order to close them. First, a grid-based search for holes in the horizontal ground plane is performed. Then, a check for vertical holes, mainly created by building walls, is carried out. Due to occlusions and steep LiDAR angles, closing the vertical gaps may be difficult or even impossible. Therefore, the current approach deals with holes in the ground plane and only marks the vertical holes so that the operator can decide on further actions regarding them. The aim is to efficiently create point clouds that can be used for the generation of complete 3D terrain models.
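A minimal sketch of the grid-based search for holes in the horizontal ground plane described above, assuming the point cloud is available as an N×3 NumPy array; the cell size and point-count threshold are illustrative parameters, not values from the paper.

```python
import numpy as np

def find_ground_holes(points, cell_size=1.0, min_points=3):
    """Flag sparsely covered cells in the horizontal (x, y) footprint of a cloud.

    points: (N, 3) array of x, y, z coordinates.
    Returns the (x, y) centers of grid cells containing fewer than `min_points`
    returns, i.e. candidate holes in the ground coverage.
    """
    xy = points[:, :2]
    xy_min = xy.min(axis=0)
    # Map every point to its integer grid cell.
    idx = np.floor((xy - xy_min) / cell_size).astype(int)
    n_cells = idx.max(axis=0) + 1

    # Count points per cell.
    counts = np.zeros(tuple(n_cells), dtype=int)
    np.add.at(counts, (idx[:, 0], idx[:, 1]), 1)

    # Cells inside the bounding box with too few returns are candidate holes.
    holes = []
    for i in range(n_cells[0]):
        for j in range(n_cells[1]):
            if counts[i, j] < min_points:
                holes.append(tuple(xy_min + (np.array([i, j]) + 0.5) * cell_size))
    return holes

# Example: a flat ground patch with one square region of missing returns.
rng = np.random.default_rng(0)
pts = rng.uniform([0, 0, 0], [5, 5, 0.1], size=(2000, 3))
pts = pts[~((pts[:, 0] > 2) & (pts[:, 0] < 3) & (pts[:, 1] > 2) & (pts[:, 1] < 3))]
print(find_ground_holes(pts, cell_size=1.0))
```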


Sensors ◽  
2021 ◽  
Vol 21 (17) ◽  
pp. 5908
Author(s):  
Yiru Niu ◽  
Zhihua Xu ◽  
Ershuai Xu ◽  
Gongwei Li ◽  
Yuan Huo ◽  
...  

Social distancing protocols have been highly recommended by the World Health Organization (WHO) to curb the spread of COVID-19. However, one major challenge to enforcing social distancing in public areas is how to perceive people in three dimensions. This paper proposes an innovative pedestrian 3D localization method using monocular images combined with terrestrial point clouds. In the proposed approach, camera calibration is achieved based on the correspondences between 2D image points and 3D world points. The vertical coordinates of the ground plane where pedestrians stand are extracted from the point clouds. Then, using the assumption that the pedestrian is always perpendicular to the ground, the 3D coordinates of the pedestrian's feet and head are calculated iteratively using collinearity equations. This allows the three-dimensional localization and height determination of pedestrians using monocular cameras, which are widely distributed in many major cities. The performance of the proposed method was evaluated using two different datasets. Experimental results show that the pedestrian localization error of the proposed approach was less than one meter at distances of tens of meters, outperforming other localization techniques. The proposed approach uses simple and efficient calculations, obtains accurate locations, and can be used to implement social distancing rules. Moreover, since the proposed approach also generates accurate height values, exemptions to social distancing protocols, particularly the parent-child exemption, can be introduced in the framework.
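A minimal sketch of the core geometric step, assuming a calibrated pinhole camera (intrinsics K, world-to-camera rotation R, camera center C) and a ground-plane height z_ground taken from the point cloud; the iterative foot/head refinement described in the paper is omitted, and the camera setup in the example is purely illustrative.

```python
import numpy as np

def locate_foot_on_ground(u, v, K, R, C, z_ground):
    """Intersect the camera ray through pixel (u, v) with the plane z = z_ground.

    K: 3x3 intrinsic matrix, R: 3x3 world-to-camera rotation,
    C: camera center in world coordinates, z_ground: plane height from the cloud.
    Returns the 3D world coordinates of the pedestrian's foot point.
    """
    # Ray direction in world coordinates for this pixel (collinearity condition).
    pixel = np.array([u, v, 1.0])
    d = R.T @ np.linalg.inv(K) @ pixel
    # Solve C_z + t * d_z = z_ground for the scale t along the ray.
    t = (z_ground - C[2]) / d[2]
    return C + t * d

# Illustrative camera 1.5 m above a flat ground plane z = 0, looking straight down.
K = np.array([[1000.0, 0.0, 640.0],
              [0.0, 1000.0, 360.0],
              [0.0, 0.0, 1.0]])
R = np.diag([1.0, -1.0, -1.0])  # camera optical axis pointing at the ground
C = np.array([0.0, 0.0, 1.5])
print(locate_foot_on_ground(700, 500, K, R, C, z_ground=0.0))
```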


2012 ◽  
Vol 591-593 ◽  
pp. 1265-1268
Author(s):  
Zi Ming Xiong ◽  
Gang Wan ◽  
Xue Feng Cao

Recent progress in structure-from-motion (SfM) has led to robust techniques that can operate in extremely general conditions. However, a limitation of SfM is that the scene can only be recovered up to a similarity transformation. We address the problem of automatically aligning 3D point clouds from SfM reconstructions to orthographic images. We extract feature lines from the 3D point clouds and project them onto the ground plane to create 2D feature lines. This reduces the alignment problem to a 2D-line-to-2D-line matching problem, and a novel technique for automatic feature line matching is presented in this paper.
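A minimal sketch of the projection step and a simple greedy line-matching baseline, assuming the 3D feature lines are given as endpoint pairs and the ground plane is z = 0; the line extraction and the novel matching technique from the paper are not reproduced here.

```python
import numpy as np

def project_lines_to_ground(lines_3d):
    """Project 3D segments (pairs of endpoints) onto the z = 0 ground plane.

    lines_3d: (N, 2, 3) array of segment endpoints.
    Returns an (N, 2, 2) array keeping only the x, y coordinates.
    """
    return np.asarray(lines_3d)[:, :, :2]

def segment_angle(seg):
    """Orientation of a 2D segment in [0, pi)."""
    d = seg[1] - seg[0]
    return np.arctan2(d[1], d[0]) % np.pi

def match_lines(src, dst, angle_tol=np.radians(10)):
    """Greedy matching of 2D segments by orientation and midpoint distance."""
    matches = []
    for i, s in enumerate(src):
        best, best_d = None, np.inf
        for j, t in enumerate(dst):
            diff = abs(segment_angle(s) - segment_angle(t))
            if min(diff, np.pi - diff) > angle_tol:
                continue  # orientations too different to be the same line
            d = np.linalg.norm(s.mean(axis=0) - t.mean(axis=0))
            if d < best_d:
                best, best_d = j, d
        matches.append((i, best))
    return matches

# Example: two roof edges from an SfM cloud matched against map lines.
cloud_lines = np.array([[[0.0, 0.0, 5.0], [4.0, 0.0, 5.0]],
                        [[0.0, 0.0, 5.0], [0.0, 3.0, 5.0]]])
map_lines = np.array([[[0.1, 2.9], [0.1, -0.1]],
                      [[-0.1, 0.1], [4.1, 0.1]]])
print(match_lines(project_lines_to_ground(cloud_lines), map_lines))
```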


Author(s):  
Jian Wu ◽  
Qingxiong Yang

In this paper, we study the semantic segmentation of 3D LiDAR point cloud data in urban environments for autonomous driving, and propose a method that utilizes the surface information of the ground plane. In practice, the resolution of a LiDAR sensor installed in a self-driving vehicle is relatively low, and thus the acquired point cloud is quite sparse. While recent work on dense point cloud segmentation has achieved promising results, the performance is relatively low when directly applied to sparse point clouds. This paper focuses on the semantic segmentation of sparse point clouds obtained from a 32-channel LiDAR sensor with deep neural networks. The main contribution is the integration of ground information, which is used to group ground points that are far away from each other. Qualitative and quantitative experiments on two large-scale point cloud datasets show that the proposed method outperforms the current state-of-the-art.
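A minimal sketch of one common way to obtain the ground information used for grouping, via a RANSAC-style plane fit on the sparse cloud; the deep network and the exact grouping strategy from the paper are not shown, and the thresholds are illustrative.

```python
import numpy as np

def fit_ground_plane(points, n_iters=200, dist_thresh=0.2, rng=None):
    """RANSAC-style plane fit to a sparse LiDAR cloud; returns a ground-inlier mask.

    points: (N, 3) array. A point is a ground inlier if its distance to the
    best plane hypothesis is below `dist_thresh` (meters).
    """
    rng = np.random.default_rng(rng)
    best_mask = np.zeros(len(points), dtype=bool)
    for _ in range(n_iters):
        sample = points[rng.choice(len(points), size=3, replace=False)]
        # Plane normal from the three sampled points.
        n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(n)
        if norm < 1e-9:
            continue  # degenerate (collinear) sample
        n = n / norm
        dist = np.abs((points - sample[0]) @ n)
        mask = dist < dist_thresh
        if mask.sum() > best_mask.sum():
            best_mask = mask
    return best_mask

# Example: flat ground plus a few elevated "obstacle" points.
rng = np.random.default_rng(1)
ground = np.column_stack([rng.uniform(-20, 20, 500),
                          rng.uniform(-20, 20, 500),
                          rng.normal(0.0, 0.05, 500)])
obstacles = np.column_stack([rng.uniform(-20, 20, 50),
                             rng.uniform(-20, 20, 50),
                             rng.uniform(0.5, 2.0, 50)])
cloud = np.vstack([ground, obstacles])
is_ground = fit_ground_plane(cloud)
print(is_ground.sum(), "ground points of", len(cloud))
```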


Author(s):  
Stefano Feraco ◽  
Angelo Bonfitto ◽  
Nicola Amati ◽  
Andrea Tonoli

This paper presents a clustering technique for the detection of obstacles and lane boundaries on a road. The algorithm consists of two nested clustering stages: the first is based on hierarchical clustering, and the second on k-means clustering. The method exploits a preliminary ground-plane filtering algorithm, based on the semantic segmentation of point clouds, to process the raw LiDAR point cloud. The clustering algorithm estimates the position of the obstacles that define the race track. Once the race track is sensed, the lane boundaries are detected. The method is validated experimentally on a four-wheel-drive electric vehicle participating in Formula SAE events. The validation environment is structured with traffic cones to define the race track.
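A minimal sketch of the two nested clustering stages on non-ground points projected to the (x, y) plane, assuming a hierarchical pass with a distance cutoff followed by k-means inside each coarse group; the distance threshold, cluster counts, and ground-filtering step are illustrative, not the paper's parameters.

```python
import numpy as np
from scipy.cluster.hierarchy import fclusterdata
from scipy.cluster.vq import kmeans2

def cluster_obstacles(points_xy, coarse_dist=1.5, fine_k=2):
    """Two-stage clustering of non-ground points in the (x, y) plane.

    Stage 1: single-linkage hierarchical clustering with a distance cutoff to
    form coarse groups. Stage 2: k-means inside each coarse group to split it
    into up to `fine_k` sub-clusters (e.g. nearby cones). Returns centroids.
    """
    coarse_labels = fclusterdata(points_xy, t=coarse_dist,
                                 criterion='distance', method='single')
    centroids = []
    for label in np.unique(coarse_labels):
        group = points_xy[coarse_labels == label]
        k = min(fine_k, len(group))
        centers, _ = kmeans2(group, k, minit='points')
        centroids.extend(centers)
    return np.array(centroids)

# Example: two pairs of traffic cones on either side of a lane.
rng = np.random.default_rng(2)
cones = np.vstack([rng.normal([0, -2], 0.1, (20, 2)),
                   rng.normal([1, -2], 0.1, (20, 2)),
                   rng.normal([0,  2], 0.1, (20, 2)),
                   rng.normal([1,  2], 0.1, (20, 2))])
print(cluster_obstacles(cones))
```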

