AUTOMATIC BUILDING EXTRACTION FROM LIDAR POINT CLOUD DATA IN THE FUSION OF ORTHOIMAGE

Author(s):  
B. Hujebri ◽  
M. Ebrahimikia ◽  
H. Enayati

Abstract. Three-dimensional building models are important in various applications such as disaster management and urban planning. In this paper, a method based on the fusion of LiDAR point cloud and aerial image data sources is proposed. The first step of the proposed method is to separate ground and non-ground points (the latter containing 3D objects such as buildings and trees) using cloth simulation filtering and then normalize the heights of the non-ground points. In this experiment, a threshold of 0.1 on the z component of the normal vector was applied to remove wall points, and a 2-meter height threshold was applied to remove off-terrain objects lower than the minimum building height. Vegetation and buildings can then be discriminated using spectral information from the orthoimage. After the vegetation points are eliminated, the mean shift algorithm is applied to the remaining points to detect buildings. The method performs well in dense urban areas with complex ground cover such as trees, shrubs, short walls, and vehicles.
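Under the stated thresholds, the post-filtering steps reduce to a few vectorized operations. The sketch below is a minimal illustration in Python, assuming ground filtering (e.g. CSF) and height normalization have already been applied and that per-point normals and an orthoimage-derived vegetation mask are available; function names, the bandwidth, and the mask are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of wall/low-object removal and mean shift clustering,
# assuming height-normalized non-ground points `xyz` (N x 3), unit
# normals `normals` (N x 3), and a boolean vegetation mask from the orthoimage.
import numpy as np
from sklearn.cluster import MeanShift

def extract_building_clusters(xyz, normals, is_vegetation,
                              nz_thresh=0.1, min_height=2.0, bandwidth=3.0):
    """Return a cluster label per candidate building point (-1 = discarded)."""
    labels = np.full(len(xyz), -1, dtype=int)

    # 1) Drop wall points: near-vertical surfaces have |n_z| below the threshold.
    keep = np.abs(normals[:, 2]) >= nz_thresh
    # 2) Drop off-terrain objects lower than the minimum building height.
    keep &= xyz[:, 2] >= min_height
    # 3) Drop vegetation flagged from the orthoimage spectral information.
    keep &= ~is_vegetation

    # 4) Mean shift on the planimetric coordinates groups the remaining
    #    roof points into individual buildings.
    if keep.any():
        labels[keep] = MeanShift(bandwidth=bandwidth).fit_predict(xyz[keep, :2])
    return labels
```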

2013 ◽  
Vol 760-762 ◽  
pp. 1556-1561
Author(s):  
Ting Wei Du ◽  
Bo Liu

Indoor scene understanding based on depth image data is a cutting-edge issue in the field of three-dimensional computer vision. Taking into account the layout characteristics of indoor scenes and the abundance of planar surfaces in them, this paper presents a depth image segmentation method based on Gaussian Mixture Model clustering. First, the Kinect depth image data are transformed into a point cloud in the form of discrete three-dimensional points, and the point cloud is denoised and down-sampled; second, the normal of every point in the cloud is calculated and the normals are clustered using a Gaussian Mixture Model; finally, the point cloud is segmented using the RANSAC algorithm. Experimental results show that the segmented regions have clear boundaries and above-average segmentation quality, laying a good foundation for object recognition.
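A minimal sketch of this pipeline in Python, assuming Open3D and scikit-learn; the voxel size, neighbourhood radius, number of mixture components, and RANSAC distance threshold below are illustrative assumptions, not the paper's values.

```python
# Sketch: denoise/down-sample, estimate normals, cluster normals with a GMM,
# then extract planes per orientation cluster with RANSAC.
import numpy as np
import open3d as o3d
from sklearn.mixture import GaussianMixture

def segment_planes(pcd, n_normal_clusters=6, dist_thresh=0.02):
    # Denoise and down-sample the raw cloud.
    pcd = pcd.voxel_down_sample(voxel_size=0.01)
    pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)

    # Estimate a normal for every point.
    pcd.estimate_normals(o3d.geometry.KDTreeSearchParamHybrid(radius=0.05, max_nn=30))
    normals = np.asarray(pcd.normals)

    # Cluster the normals: points on parallel planes (walls, floor, ceiling)
    # fall into the same orientation cluster.
    cluster = GaussianMixture(n_components=n_normal_clusters).fit_predict(normals)

    # Within each orientation cluster, RANSAC separates the individual planes.
    planes = []
    for c in range(n_normal_clusters):
        sub = pcd.select_by_index(np.flatnonzero(cluster == c).tolist())
        while len(sub.points) > 100:
            _, inliers = sub.segment_plane(distance_threshold=dist_thresh,
                                           ransac_n=3, num_iterations=1000)
            if len(inliers) < 50:
                break
            planes.append(sub.select_by_index(inliers))
            sub = sub.select_by_index(inliers, invert=True)
    return planes
```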


2020 ◽  
Vol 10 (22) ◽  
pp. 8073
Author(s):  
Min Woo Ryu ◽  
Sang Min Oh ◽  
Min Ju Kim ◽  
Hun Hee Cho ◽  
Chang Baek Son ◽  
...  

This study proposes a new method to generate a three-dimensional (3D) geometric representation of an indoor environment by refining and processing indoor point cloud data (PCD) captured by backpack laser scanners. The proposed algorithm comprises two parts: data refinement and data processing. In the refinement part, the input indoor PCD are roughly segmented by applying random sample consensus (RANSAC) to the raw data based on estimated normal vectors. Next, the 3D geometric representation is generated by calculating and separating tangent points on the segmented PCD. This study proposes a robust algorithm that utilizes the topological features of the indoor PCD created by a hierarchical data process. The algorithm minimizes the size and uncertainty of the raw PCD caused by the absence of a global navigation satellite system and by equipment errors. The results of this study show that the indoor environment can be converted into a 3D geometric representation by applying the proposed algorithm to the indoor PCD.
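The rough RANSAC segmentation in the refinement part can be illustrated with a short Open3D sketch; the plane count, distance threshold, and minimum-inlier cutoff below are assumptions for illustration, and the subsequent tangent-point separation and hierarchical processing are not reproduced here.

```python
# Sketch: iteratively peel the dominant planar patches (walls, floor, ceiling)
# off the backpack-scanner cloud with RANSAC.
import open3d as o3d

def rough_segment(pcd, max_planes=20, dist_thresh=0.03, min_inliers=500):
    """Return the extracted planar patches and the remaining (unsegmented) points."""
    patches, rest = [], pcd
    for _ in range(max_planes):
        if len(rest.points) < min_inliers:
            break
        _, inliers = rest.segment_plane(distance_threshold=dist_thresh,
                                        ransac_n=3, num_iterations=1000)
        if len(inliers) < min_inliers:
            break
        patches.append(rest.select_by_index(inliers))
        rest = rest.select_by_index(inliers, invert=True)
    return patches, rest
```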


2021 ◽  
Vol 13 (17) ◽  
pp. 3417
Author(s):  
Yibo He ◽  
Zhenqi Hu ◽  
Kan Wu ◽  
Rui Wang

Repairing holes in point clouds has become an important problem in research on 3D laser point cloud data, as it ensures the integrity and improves the precision of the data. However, for point cloud data with non-characteristic holes, the boundary data of the holes cannot be used for repair. Therefore, this paper introduces photogrammetry and analyzes which density of image point cloud data yields the highest precision. The 3D laser point cloud data are first formed into hole data with sharp features. The image data are processed into image point clouds at six different densities. Next, the barycentered Bursa model is used to fine-register the two types of data and to remove the overlapping regions. Then, cross-sections are used to evaluate the precision of the combined point cloud data and determine the optimal density. Three-dimensional models are constructed for these data and for the original point cloud data, and the surface area method and the deviation method are used to compare them. The experimental results show that the areas differ by less than 0.5%, and the standard deviation ranges from a minimum of 0.0015 m to a maximum of 0.0036 m.
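The barycentered Bursa model is a seven-parameter (scale, rotation, translation) similarity transformation. Below is a least-squares sketch of estimating it from matched point pairs, using the standard SVD-based (Kabsch/Umeyama) solution as a stand-in for the paper's adjustment; the arrays `src` and `dst` are assumed to be corresponding points from the laser cloud and the image point cloud.

```python
# Sketch: seven-parameter similarity registration after barycentering.
import numpy as np

def similarity_transform(src, dst):
    """Least-squares scale s, rotation R, translation t so that dst ≈ s*R*src + t."""
    # Barycentering: reduce both point sets to their centroids before solving.
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    A, B = src - mu_s, dst - mu_d

    # Rotation from the SVD of the cross-covariance matrix.
    U, S, Vt = np.linalg.svd(B.T @ A)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])  # reflection guard
    R = U @ D @ Vt

    s = (S * np.diag(D)).sum() / (A ** 2).sum()  # scale
    t = mu_d - s * R @ mu_s                      # translation
    return s, R, t
```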


Author(s):  
Jiayong Yu ◽  
Longchen Ma ◽  
Maoyi Tian ◽  
Xiushan Lu

The unmanned aerial vehicle (UAV)-mounted mobile LiDAR system (ULS) is widely used in geomatics owing to its efficient data acquisition and convenient operation. However, because of the limited carrying capacity of a UAV, the sensors integrated in the ULS must be small and lightweight, which results in a decrease in the density of the collected scanning points and affects registration between image data and point cloud data. To address this issue, the authors propose a method for registering and fusing ULS sequence images and laser point clouds, wherein the problem of registering point cloud data and image data is converted into one of matching feature points between two images. First, a point cloud is selected to produce an intensity image. Subsequently, the corresponding feature points of the intensity image and the optical image are matched, and the exterior orientation parameters are solved using the collinearity equation based on image position and orientation. Finally, the sequence images are fused with the laser point cloud, based on the Global Navigation Satellite System (GNSS) time index of the optical images, to generate a true-color point cloud. The experimental results show the high registration accuracy and fast fusion speed of the proposed method, demonstrating its accuracy and effectiveness.
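A minimal sketch of the matching and orientation step, with SIFT features and a RANSAC PnP solve in OpenCV standing in for the paper's detector and collinearity adjustment; `pix_to_xyz` (an H x W x 3 array giving the 3D laser point behind each intensity-image pixel) and `K` (the optical camera matrix) are assumed inputs.

```python
# Sketch: match intensity-image and optical-image features, then recover the
# exterior orientation from the resulting 3D-2D correspondences.
import cv2
import numpy as np

def estimate_exterior_orientation(intensity_img, optical_img, pix_to_xyz, K):
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(intensity_img, None)
    k2, d2 = sift.detectAndCompute(optical_img, None)

    # Ratio-test matching between the two feature sets.
    matches = cv2.BFMatcher().knnMatch(d1, d2, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]

    # Each match links a 3D laser point (via its intensity pixel) to an optical
    # pixel, so the exterior orientation follows from a PnP solution.
    obj, img = [], []
    for m in good:
        c, r = np.round(k1[m.queryIdx].pt).astype(int)
        obj.append(pix_to_xyz[r, c])
        img.append(k2[m.trainIdx].pt)
    _, rvec, tvec, _ = cv2.solvePnPRansac(np.float32(obj), np.float32(img), K, None)
    return rvec, tvec  # rotation (Rodrigues vector) and translation
```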


Sensors ◽  
2021 ◽  
Vol 21 (3) ◽  
pp. 884
Author(s):  
Chia-Ming Tsai ◽  
Yi-Horng Lai ◽  
Yung-Da Sun ◽  
Yu-Jen Chung ◽  
Jau-Woei Perng

Numerous sensors can obtain images or point cloud data on land; underwater, however, the rapid attenuation of electromagnetic signals and the lack of light restrict sensing functions. This study extends two- and three-dimensional detection technologies to underwater applications to detect abandoned tires. A three-dimensional acoustic sensor, the BV5000, is used to collect underwater point cloud data. Pre-processing steps are proposed to remove noise and the seabed from the raw data. The point clouds are then processed into two data types: a 2D image and a 3D point cloud. Deep learning methods of different dimensionality are used to train the models. In the two-dimensional method, the point cloud is transformed into a bird's-eye-view image, and the Faster R-CNN and YOLOv3 network architectures are used to detect tires. In the three-dimensional method, the point cloud associated with a tire is cut out from the raw data and used as training data, and the PointNet and PointConv network architectures are then used for tire classification. The results show that both approaches provide good accuracy.
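The 2D branch hinges on rasterizing the cleaned sonar cloud into a bird's-eye-view image that Faster R-CNN or YOLOv3 can consume. Below is a minimal sketch; the grid resolution, extent, and max-height encoding are illustrative assumptions.

```python
# Sketch: project the noise- and seabed-free point cloud into a top-down image.
import numpy as np

def to_birds_eye_view(xyz, cell=0.05, x_range=(-10, 10), y_range=(-10, 10)):
    """Rasterize points into a bird's-eye-view height image (uint8)."""
    w = int((x_range[1] - x_range[0]) / cell)
    h = int((y_range[1] - y_range[0]) / cell)
    cols = ((xyz[:, 0] - x_range[0]) / cell).astype(int)
    rows = ((xyz[:, 1] - y_range[0]) / cell).astype(int)
    ok = (cols >= 0) & (cols < w) & (rows >= 0) & (rows < h)

    # Encode the maximum height per cell so tires stand out from the seabed
    # (heights are assumed non-negative after seabed removal).
    bev = np.zeros((h, w), dtype=np.float32)
    np.maximum.at(bev, (rows[ok], cols[ok]), xyz[ok, 2])
    z_span = bev.max() - bev.min() or 1.0
    return ((bev - bev.min()) / z_span * 255).astype(np.uint8)
```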


Sensors ◽  
2020 ◽  
Vol 21 (1) ◽  
pp. 201
Author(s):  
Michael Bekele Maru ◽  
Donghwan Lee ◽  
Kassahun Demissie Tola ◽  
Seunghee Park

Modeling a structure in the virtual world using three-dimensional (3D) information enhances our understanding of how the structure reacts to any disturbance and aids in visualizing it. Generally, 3D point clouds are used for determining structural behavioral changes. Light detection and ranging (LiDAR) is one of the principal ways to generate a 3D point cloud dataset; 3D cameras are also commonly used to develop a point cloud containing many points on the external surface of an object. The main objective of this study was to compare the performance of two optical sensors, a depth camera (DC) and a terrestrial laser scanner (TLS), in estimating structural deflection. We also applied bilateral filtering, a technique commonly used in image processing, to the point cloud data to enhance their accuracy and increase the application prospects of these sensors in structural health monitoring. The results from these sensors were validated by comparing them with the output of a linear variable differential transformer sensor mounted on the beam during an indoor experiment. The results showed that the datasets obtained from both sensors were acceptable for nominal deflections of 3 mm and above because the error range was less than ±10%. However, the results obtained from the TLS were better than those obtained from the DC.
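Bilateral filtering weights each neighbour both by spatial distance and by similarity of the measured value, which smooths noise while preserving deflection edges. A minimal point-cloud adaptation is sketched below; the neighbourhood radius and the two sigmas are illustrative, not the values used in the study.

```python
# Sketch: bilateral filtering of the deflection axis (z) of a point cloud.
import numpy as np
from scipy.spatial import cKDTree

def bilateral_filter_z(xyz, radius=0.02, sigma_s=0.01, sigma_r=0.002):
    tree = cKDTree(xyz[:, :2])          # spatial neighbours in the beam plane
    z = xyz[:, 2]
    out = z.copy()
    for i, nbrs in enumerate(tree.query_ball_point(xyz[:, :2], r=radius)):
        d2 = np.sum((xyz[nbrs, :2] - xyz[i, :2]) ** 2, axis=1)
        w = np.exp(-d2 / (2 * sigma_s ** 2))                          # spatial weight
        w *= np.exp(-(z[nbrs] - z[i]) ** 2 / (2 * sigma_r ** 2))      # range weight
        out[i] = np.sum(w * z[nbrs]) / np.sum(w)
    return np.column_stack([xyz[:, :2], out])
```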


2013 ◽  
Vol 796 ◽  
pp. 513-518
Author(s):  
Rong Jin ◽  
Bing Fei Gu ◽  
Guo Lian Liu

In this paper, 110 female undergraduates at Soochow University were measured using both a 3D non-contact measurement system and manual measurement. The 3D point cloud data of the human body are taken as the research object using reverse-engineering software, and secondary development is carried out on the basis of the optimized point cloud data. In accordance with the definitions of the chest width point and other feature points, and within the operational limits of the three-dimensional point cloud data, the width, thickness, and length dimensions of the curve through the chest width point are measured. Body types are classified using the ratio between the thickness and width of the curve as the classification index. The generation rules of the chest curve are determined for each type using the linear regression method, so that a human arm model can be established automatically by the computer. Thereby, the individualized modeling of the female upper-body mannequin can be improved effectively.
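The classification-by-ratio and per-type regression steps can be illustrated briefly; the bin edges, feature layout, and use of scikit-learn below are assumptions for illustration only, not the paper's values.

```python
# Sketch: classify body type by the thickness/width ratio of the chest curve,
# then fit a per-type linear model for the curve generation rules.
import numpy as np
from sklearn.linear_model import LinearRegression

def classify_body_type(thickness, width, bins=(0.7, 0.8)):
    """Return a body-type index from the thickness-to-width ratio of the chest curve."""
    return int(np.digitize(thickness / width, bins))

def fit_chest_curve_model(features, curve_params):
    """Linear regression from body measurements to chest-curve generation parameters."""
    return LinearRegression().fit(features, curve_params)
```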


Author(s):  
Romina Dastoorian ◽  
Ahmad E. Elhabashy ◽  
Wenmeng Tian ◽  
Lee J. Wells ◽  
Jaime A. Camelio

With the latest advancements in three-dimensional (3D) measurement technologies, obtaining 3D point cloud data for inspection purposes in manufacturing is becoming more common. While 3D point cloud data allow for better inspection capabilities, their analysis is typically challenging. Especially with unstructured 3D point cloud data, which contain coordinates at random locations, the challenges increase with higher levels of noise and larger volumes of data. Hence, the objective of this paper is to extend the previously developed Adaptive Generalized Likelihood Ratio (AGLR) approach to handle unstructured 3D point cloud data used for automated surface defect inspection in manufacturing. More specifically, the AGLR approach was implemented in a practical case study to inspect twenty-seven samples, each with a unique fault. These faults were designed to cover an array of possibilities spanning three different sizes, three different magnitudes, and three different locations. The results show that the AGLR approach can differentiate between non-faulty surfaces and a varying range of faulty surfaces while pinpointing the fault location. This work also serves as a validation of the previously developed AGLR approach in a practical scenario.
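For orientation only, and not the authors' AGLR itself: a plain generalized likelihood ratio statistic for a local mean shift in the residuals between measured points and the nominal surface can be scanned over candidate windows as sketched below; the window list and threshold are assumed inputs.

```python
# Sketch: GLR test for a local mean shift in surface residuals.
import numpy as np

def glr_mean_shift(residuals, window_indices, sigma):
    """GLR statistic for 'this window deviates from the nominal surface'."""
    r = residuals[window_indices]
    return (r.sum() ** 2) / (sigma ** 2 * len(r))

def scan_windows(residuals, windows, sigma, threshold):
    """Flag every candidate window whose GLR statistic exceeds the threshold."""
    return [w for w in windows if glr_mean_shift(residuals, w, sigma) > threshold]
```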

