Plane Segmentation in Organized Point Clouds using Flood Fill

Author(s):  
Arindam Roychoudhury ◽  
Marcell Missura ◽  
Maren Bennewitz
2019 ◽  
Vol 16 (6) ◽  
pp. 172988141988520
Author(s):  
Phuong Minh Chu ◽  
Seoungjae Cho ◽  
Kaisi Huang ◽  
Kyungeun Cho

In this article, an application for object segmentation and tracking for intelligent vehicles is presented. The proposed object segmentation and tracking method combines three stages in each frame. First, building on our previous research on fast ground segmentation, the approach segments three-dimensional point clouds into ground and non-ground points. Ground segmentation is important for clustering the individual objects in subsequent steps. In the second stage, objects are segmented from the non-ground points using a flood-fill algorithm. Finally, object tracking is performed to identify the same objects over time, based on likelihood probabilities calculated from the features of each object. Experimental results demonstrate that the proposed system achieves effective, real-time performance.
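As an illustration of the second stage, the following is a minimal sketch, not the authors' implementation, of flood-fill clustering of non-ground points in an organized point cloud. All names and parameters here are assumptions for this example: `cloud` is an H x W x 3 array of XYZ points, `ground_mask` marks ground pixels, and neighbouring pixels are merged into the same object when their 3D distance falls below `dist_thresh`.

```python
# Minimal sketch (assumed, not the authors' code): flood-fill clustering of
# non-ground points in an organized point cloud.
from collections import deque
import numpy as np

def flood_fill_segment(cloud, ground_mask, dist_thresh=0.3):
    h, w, _ = cloud.shape
    labels = np.full((h, w), -1, dtype=int)   # -1 = unassigned
    labels[ground_mask] = 0                   # 0 = ground
    next_label = 1
    for r in range(h):
        for c in range(w):
            if labels[r, c] != -1:
                continue
            # Start a new object cluster and flood-fill its 4-connected region.
            queue = deque([(r, c)])
            labels[r, c] = next_label
            while queue:
                y, x = queue.popleft()
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w and labels[ny, nx] == -1:
                        if np.linalg.norm(cloud[ny, nx] - cloud[y, x]) < dist_thresh:
                            labels[ny, nx] = next_label
                            queue.append((ny, nx))
            next_label += 1
    return labels  # per-pixel object IDs (0 = ground)
```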


Author(s):  
Jiayong Yu ◽  
Longchen Ma ◽  
Maoyi Tian ◽  
Xiushan Lu

The unmanned aerial vehicle (UAV)-mounted mobile LiDAR system (ULS) is widely used for geomatics owing to its efficient data acquisition and convenient operation. However, due to the limited carrying capacity of a UAV, the sensors integrated into the ULS must be small and lightweight, which decreases the density of the collected scanning points and thus affects the registration between image data and point cloud data. To address this issue, the authors propose a method for registering and fusing ULS sequence images and laser point clouds that converts the problem of registering point cloud data and image data into a problem of matching feature points between two images. First, a point cloud is selected to produce an intensity image. Subsequently, the corresponding feature points of the intensity image and the optical image are matched, and the exterior orientation parameters are solved using a collinearity equation based on image position and orientation. Finally, the sequence images are fused with the laser point cloud, based on the Global Navigation Satellite System (GNSS) time index of the optical image, to generate a true-color point cloud. The experimental results show that the proposed method achieves high registration accuracy and fusion speed, demonstrating its accuracy and effectiveness.
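The image-matching core of this registration step can be sketched as follows. This is an assumed illustration using ORB features in OpenCV, not the authors' pipeline; `intensity_img` and `optical_img` are hypothetical 8-bit grayscale inputs already produced from the point cloud and the camera, and the matched pairs would subsequently feed the collinearity-equation solution of the exterior orientation parameters.

```python
# Assumed sketch: matching feature points between a LiDAR-derived intensity
# image and an optical image with ORB descriptors (OpenCV).
import cv2

def match_intensity_to_optical(intensity_img, optical_img, max_matches=100):
    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(intensity_img, None)
    kp2, des2 = orb.detectAndCompute(optical_img, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    # Return corresponding pixel coordinates (intensity image, optical image).
    return [(kp1[m.queryIdx].pt, kp2[m.trainIdx].pt) for m in matches[:max_matches]]
```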


2019 ◽  
Vol 10 (2) ◽  
pp. 59-64
Author(s):  
D.J. Owen Hoetama ◽  
Farica Perdana Putri ◽  
P.M. Winarno

A maze game is an engaging way to pass the time. However, existing maze games typically use static levels: the maze layout stays the same every time the same level is played, so players quickly become bored with the unchanging complexity. A maze generator solves this static-level problem. This research uses the Fisher-Yates Shuffle algorithm and the Flood Fill algorithm to build a maze generator: Fisher-Yates Shuffle randomizes the wall positions, and Flood Fill ensures the generated maze remains solvable. The implementation produced 30 mazes, which were compared using the Hamming Distance algorithm, showing that every generated maze is different, with an average difference of 48% between mazes. Checking the generated mazes against the perfect-maze criterion yielded a rate of 83.33%. Index Terms— Fisher-Yates Shuffle, Flood Fill, Maze Generator, Hamming Distance
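A minimal sketch of how the two algorithms can work together is given below; it is an assumed illustration, not the paper's implementation. Fisher-Yates Shuffle picks a random order for candidate wall cells, a flood fill verifies that the exit stays reachable before each wall is kept, and a Hamming-style comparison measures how different two generated mazes are. Grid size, wall ratio, and function names are hypothetical.

```python
# Assumed sketch: Fisher-Yates Shuffle + Flood Fill maze generator.
import random

def fisher_yates(items):
    a = list(items)
    for i in range(len(a) - 1, 0, -1):      # classic Fisher-Yates shuffle
        j = random.randint(0, i)
        a[i], a[j] = a[j], a[i]
    return a

def reachable(grid, start, goal):
    # Flood fill (DFS) over open cells; True if goal can be reached from start.
    h, w = len(grid), len(grid[0])
    stack, seen = [start], {start}
    while stack:
        y, x = stack.pop()
        if (y, x) == goal:
            return True
        for ny, nx in ((y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)):
            if 0 <= ny < h and 0 <= nx < w and grid[ny][nx] == 0 and (ny, nx) not in seen:
                seen.add((ny, nx))
                stack.append((ny, nx))
    return False

def generate_maze(h, w, wall_ratio=0.4, start=(0, 0), goal=None):
    goal = goal or (h - 1, w - 1)
    grid = [[0] * w for _ in range(h)]                       # 0 = open, 1 = wall
    cells = [(y, x) for y in range(h) for x in range(w) if (y, x) not in (start, goal)]
    for y, x in fisher_yates(cells)[: int(wall_ratio * h * w)]:
        grid[y][x] = 1
        if not reachable(grid, start, goal):                 # keep the maze solvable
            grid[y][x] = 0
    return grid

def hamming_distance(maze_a, maze_b):
    # Fraction of cells that differ between two mazes of equal size.
    cells = [(a, b) for ra, rb in zip(maze_a, maze_b) for a, b in zip(ra, rb)]
    return sum(a != b for a, b in cells) / len(cells)
```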


2020 ◽  
Vol 28 (10) ◽  
pp. 2301-2310
Author(s):  
Chun-kang ZHANG ◽  
Hong-mei LI ◽  
Xia ZHANG

2018 ◽  
Author(s):  
Marissa J. Dudek ◽  
John Paul Ligush ◽  
Colin Hogg ◽  
Yonathan Admassu

2021 ◽  
Vol 13 (11) ◽  
pp. 2135
Author(s):  
Jesús Balado ◽  
Pedro Arias ◽  
Henrique Lorenzo ◽  
Adrián Meijide-Rodríguez

Mobile Laser Scanning (MLS) systems have proven their usefulness for the rapid and accurate acquisition of the urban environment. From the generated point clouds, street furniture can be extracted and classified without manual intervention. However, this acquisition and classification process is not error-free, mainly because of disturbances in the data. This paper analyses the effect of three disturbances (point density variation, ambient noise, and occlusions) on the classification of urban objects in point clouds. Synthetic disturbances are generated and added to point clouds acquired in real case studies. The point density reduction is generated by voxel-wise downsampling, the ambient noise is generated as random points within the bounding box of the object, and the occlusion is generated by eliminating the points contained in a sphere. Samples with disturbances are classified by a pre-trained Convolutional Neural Network (CNN). The results showed a different behaviour for each disturbance: the effect of density reduction depended on the object's shape and dimensions, that of ambient noise on the object's volume, and that of occlusions on their size and location. Finally, the CNN was re-trained with a percentage of synthetic samples with disturbances, which improved performance by 10–40% except for occlusions with a radius larger than 1 m.
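The three synthetic disturbances can be sketched in a few lines of NumPy; the following is an assumed illustration with hypothetical parameter values, not the paper's code. `points` is taken to be an N x 3 array of XYZ coordinates for a single object sample.

```python
# Assumed sketch of the three synthetic disturbances: voxel-wise density
# reduction, ambient noise inside the bounding box, and spherical occlusion.
import numpy as np

def voxel_downsample(points, voxel_size=0.05):
    # Keep one representative point per occupied voxel (density reduction).
    keys = np.floor(points / voxel_size).astype(int)
    _, idx = np.unique(keys, axis=0, return_index=True)
    return points[np.sort(idx)]

def add_ambient_noise(points, n_noise=200, rng=None):
    # Random points drawn uniformly inside the object's bounding box.
    rng = rng or np.random.default_rng()
    lo, hi = points.min(axis=0), points.max(axis=0)
    noise = rng.uniform(lo, hi, size=(n_noise, 3))
    return np.vstack([points, noise])

def occlude_sphere(points, center, radius=0.5):
    # Remove all points inside a sphere to simulate an occlusion.
    keep = np.linalg.norm(points - np.asarray(center), axis=1) > radius
    return points[keep]
```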


2021 ◽  
Vol 13 (5) ◽  
pp. 957
Author(s):  
Guglielmo Grechi ◽  
Matteo Fiorucci ◽  
Gian Marco Marmoni ◽  
Salvatore Martino

The study of strain effects in thermally-forced rock masses has attracted growing interest from engineering geology researchers in the last decade. In this framework, digital photogrammetry and infrared thermography have become two of the most widely exploited remote surveying techniques in engineering geology applications because they can provide useful information on the geomechanical and thermal conditions of these complex natural systems, in which the mechanical role of joints cannot be neglected. In this paper, a methodology is proposed for generating point clouds of rock masses prone to failure that combines the high geometric accuracy of RGB optical images with the thermal information derived from infrared thermography surveys. Multiple 3D thermal point clouds and a high-resolution RGB point cloud were generated separately and co-registered by acquiring thermograms at different times of the day and in different seasons, using commercial software for Structure from Motion and point cloud analysis. The temperature attributes of the thermal point clouds were merged with the reference high-resolution optical point cloud to obtain a composite 3D model storing accurate geometric information and multitemporal surface temperature distributions. The quality of the merged point clouds was evaluated by comparing temperature distributions derived from 2D thermograms and 3D thermal models, with a view to estimating their accuracy in describing surface thermal fields. Moreover, a preliminary attempt was made to test the feasibility of this approach for investigating the thermal behavior of complex natural systems, such as jointed rock masses, by analyzing the spatial distribution and temporal evolution of surface temperature ranges under different climatic conditions. The obtained results show that, despite the low resolution of the IR sensor, the geometric accuracy and the correspondence between 2D and 3D temperature measurements are high enough to consider 3D thermal point clouds suitable for describing surface temperature distributions and adequate for monitoring jointed rock masses.
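The merging of thermal attributes into the high-resolution optical cloud can be sketched as a nearest-neighbour attribute transfer; this is an assumed illustration, not the authors' commercial-software workflow. `rgb_xyz` and `thermal_xyz` are taken to be co-registered N x 3 and M x 3 coordinate arrays, `thermal_temp` the per-point temperatures, and `max_dist` a hypothetical rejection threshold.

```python
# Assumed sketch: each RGB point inherits the temperature of its nearest
# co-registered thermal point, provided the neighbour lies within max_dist.
import numpy as np
from scipy.spatial import cKDTree

def transfer_temperature(rgb_xyz, thermal_xyz, thermal_temp, max_dist=0.1):
    tree = cKDTree(thermal_xyz)
    dist, idx = tree.query(rgb_xyz, k=1)
    temps = np.asarray(thermal_temp, dtype=float)[idx]
    temps[dist > max_dist] = np.nan   # no reliable thermal neighbour
    return temps                      # one temperature value per RGB point
```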

