Technical Assessment of Historic Buildings on the Basis of Information Obtained from a Three-Dimensional Point Cloud / Ocena Stanu Technicznego Budynków Zabytkowych w Oparciu o Dane Uzyskane z Trójwymiarowej Chmury Punktów

2016 ◽  
Vol 20 (1) ◽  
pp. 71-78 ◽  
Author(s):  
Joanna A. Pawłowicz ◽  
Elżbieta Szafranko

Abstract
3D scanning is a modern, laser-based measurement method with broad applications. Its main advantage is the speed with which it acquires large amounts of data, which gives it a considerable edge over traditional measuring methods. Scanning is used in engineering and geodetic work, in the inventory of buildings and highly complex objects, and in studies of structural damage or deformation. A 3D scanner is a device that collects, with high accuracy, data about the shape and texture of the surveyed object and its surroundings in the form of a point cloud.

2018 ◽  
Vol 62 (4) ◽  
pp. 107-116
Author(s):  
Adrián Mezei ◽  
Tibor Kovács

Three-dimensional objects can be scanned by 3D laser scanners that use active triangulation. These scanners create three-dimensional point clouds of the scanned objects: the laser line is identified in images captured by the camera at given transformations, and the point cloud is calculated from these. The hardest challenge is to choose the transformations so that as much of the surface as possible is captured. A scan may have missing parts either because suboptimal transformations were used or because some parts of the object cannot be scanned at all. Based on the results of previous scans, a better transformation plan can be created for the next scan. In this paper, a method is proposed for moving a special 3D scanner into a position from which the scanned point is seen from an ideal angle. A real-time estimation of this transformation is described, so that it can be calculated for every point of a previous scan to set up the next, improved scan.
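The real-time estimation the authors describe is not reproduced in the abstract; the underlying geometric idea, namely placing the scanner on the surface normal of a scanned point at a fixed standoff so the point is viewed from the ideal angle, can be sketched in a few lines of NumPy (the function name and the standoff value are illustrative assumptions):

```python
import numpy as np

def ideal_scanner_pose(point, normal, standoff=0.5):
    """Place the scanner on the surface normal at `standoff` metres from
    the point, looking straight back at it (view direction = -normal)."""
    n = normal / np.linalg.norm(normal)
    position = point + standoff * n
    view_dir = -n  # the "ideal angle": viewing along the surface normal
    return position, view_dir

# A point on a horizontal surface whose normal points straight up.
pos, view = ideal_scanner_pose(np.array([0.0, 0.0, 0.0]),
                               np.array([0.0, 0.0, 2.0]))  # unnormalized input is fine
```

Running this pose computation for every point of a previous scan would yield the candidate viewpoints from which a next, improved scan could be planned.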


Sensors ◽  
2021 ◽  
Vol 21 (3) ◽  
pp. 884
Author(s):  
Chia-Ming Tsai ◽  
Yi-Horng Lai ◽  
Yung-Da Sun ◽  
Yu-Jen Chung ◽  
Jau-Woei Perng

Numerous sensors can obtain images or point cloud data on land; underwater, however, the rapid attenuation of electromagnetic signals and the lack of light restrict sensing. This study extends two- and three-dimensional detection technologies to underwater applications to detect abandoned tires. A three-dimensional acoustic sensor, the BV5000, is used to collect underwater point cloud data. Pre-processing steps are proposed to remove noise and the seabed from the raw data. The point clouds are then processed into two data types: a 2D image and a 3D point cloud. Deep learning methods of matching dimensionality are used to train the models. In the two-dimensional method, the point cloud is converted into a bird's-eye-view image, and the Faster R-CNN and YOLOv3 network architectures are used to detect tires. In the three-dimensional method, the point cloud associated with a tire is cut out from the raw data and used as training data, and the PointNet and PointConv network architectures are used for tire classification. The results show that both approaches provide good accuracy.
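The bird's-eye-view conversion in the two-dimensional branch amounts to rasterizing the XY footprint of the cloud; a generic occupancy-grid sketch, not the authors' exact rasterization (grid size and cell resolution are assumed values), could look like:

```python
import numpy as np

def point_cloud_to_bev(points, resolution=0.1, grid=64):
    """Project an (N, 3) point cloud onto the XY plane as a bird's-eye-view
    occupancy image, one cell per `resolution` metres, centred on the cloud."""
    img = np.zeros((grid, grid), dtype=np.uint8)
    xy = points[:, :2] - points[:, :2].mean(axis=0)     # centre the footprint
    idx = np.floor(xy / resolution).astype(int) + grid // 2
    keep = (idx >= 0).all(axis=1) & (idx < grid).all(axis=1)
    img[idx[keep, 1], idx[keep, 0]] = 255               # row = y, col = x
    return img

# A synthetic ring of points, roughly a tire seen from above.
theta = np.linspace(0, 2 * np.pi, 200)
ring = np.stack([np.cos(theta), np.sin(theta), np.zeros_like(theta)], axis=1)
bev = point_cloud_to_bev(ring, resolution=0.05)
```

The resulting image is what a 2D detector such as Faster R-CNN or YOLOv3 would consume.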


Sensors ◽  
2021 ◽  
Vol 21 (1) ◽  
pp. 201
Author(s):  
Michael Bekele Maru ◽  
Donghwan Lee ◽  
Kassahun Demissie Tola ◽  
Seunghee Park

Modeling a structure in the virtual world using three-dimensional (3D) information enhances our understanding, while also aiding in the visualization, of how a structure reacts to any disturbance. Generally, 3D point clouds are used for determining structural behavioral changes. Light detection and ranging (LiDAR) is one of the crucial ways by which a 3D point cloud dataset can be generated. Additionally, 3D cameras are commonly used to develop a point cloud containing many points on the external surface of an object. The main objective of this study was to compare the performance of two optical sensors, a depth camera (DC) and a terrestrial laser scanner (TLS), in estimating structural deflection. We also applied bilateral filtering techniques, commonly used in image processing, to the point cloud data to enhance their accuracy and increase the application prospects of these sensors in structural health monitoring. The results from these sensors were validated against the outputs of a linear variable differential transformer sensor mounted on the beam during an indoor experiment. The results showed that the datasets obtained from both sensors were acceptable for nominal deflections of 3 mm and above, because the error range was less than ±10%. However, the results obtained from the TLS were better than those obtained from the DC.
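A bilateral filter on point cloud heights weights each neighbour by spatial distance and by height difference, so noise is smoothed while sharp steps survive; a brute-force sketch with assumed kernel widths (not the paper's parameters) is:

```python
import numpy as np

def bilateral_filter_z(points, sigma_s=0.02, sigma_r=0.01):
    """Smooth the z-coordinate of an (N, 3) cloud with a bilateral filter:
    neighbours are weighted by XY distance (sigma_s) and by height
    difference (sigma_r), preserving genuine steps in the surface."""
    xy, z = points[:, :2], points[:, 2]
    d2 = np.sum((xy[:, None, :] - xy[None, :, :]) ** 2, axis=-1)     # (N, N)
    w = (np.exp(-d2 / (2 * sigma_s**2))
         * np.exp(-(z[:, None] - z[None, :]) ** 2 / (2 * sigma_r**2)))
    z_f = (w * z[None, :]).sum(axis=1) / w.sum(axis=1)
    return np.column_stack([xy, z_f])

# Noisy flat patch: filtering should shrink the height scatter.
rng = np.random.default_rng(0)
pts = np.column_stack([rng.uniform(0, 1, (500, 2)), rng.normal(0, 0.005, 500)])
smoothed = bilateral_filter_z(pts)
```

For the millions of points a TLS produces, the all-pairs weight matrix would be replaced by a k-d tree neighbourhood query, but the weighting scheme is the same.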


Author(s):  
Y. Hori ◽  
T. Ogawa

The implementation of laser scanning in the field of archaeology provides an entirely new dimension in research and surveying. It allows us to digitally recreate individual objects, or entire cities, using millions of three-dimensional points grouped together in what are referred to as "point clouds". The visualization of the point cloud data, which can be used in the final report by archaeologists and architects, is usually produced as a JPG or TIFF file. Beyond visualization, re-examining older data and newly surveying the construction of Roman buildings with remote-sensing technology yields precise and detailed measurements. This new information may lead to revised drawings of ancient buildings that had previously been adduced as evidence without any consideration of their degree of accuracy, and can ultimately support new research on ancient buildings. We used laser scanners in the field because of their speed, comprehensive coverage, accuracy and flexibility of data manipulation. We therefore skipped much of the post-processing and focused on images created from the metadata, aligned simply with a tool that extends an automatic feature-matching algorithm and rendered with a popular renderer.


2020 ◽  
Vol 2020 ◽  
pp. 1-13
Author(s):  
Ruizhen Gao ◽  
Xiaohui Li ◽  
Jingjun Zhang

With the emergence of new intelligent sensing technologies such as 3D scanners and stereo vision, high-quality point clouds have become convenient and inexpensive to acquire, and research on 3D object recognition based on point clouds has received widespread attention. Point clouds are an important type of geometric data structure. Because of their irregular format, many researchers convert the data into regular three-dimensional voxel grids or image collections; however, this can inflate the data volume unnecessarily and introduce problems of its own. In this paper, we consider the problem of recognizing objects in realistic scenes. We first use a Euclidean distance clustering method to segment the objects in a scene, and then use a deep learning network to extract features directly from the point cloud data and recognize the objects. The network achieves an accuracy of 98.8% on the training set and 89.7% on the test set. The experimental results show that the proposed network can accurately identify and classify point cloud objects in realistic scenes and maintains its accuracy when the number of points is small, demonstrating good robustness.
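Euclidean distance clustering of the kind used for the segmentation step can be sketched as a simple region-growing procedure; the tolerance value and the synthetic blobs below are illustrative, not the paper's settings:

```python
import numpy as np
from collections import deque

def euclidean_cluster(points, tol=0.5):
    """Greedy Euclidean clustering: a point closer than `tol` to any member
    of a cluster joins that cluster (classic region growing, brute force)."""
    labels = -np.ones(len(points), dtype=int)
    current = 0
    for seed in range(len(points)):
        if labels[seed] != -1:
            continue
        labels[seed] = current
        queue = deque([seed])
        while queue:
            i = queue.popleft()
            d = np.linalg.norm(points - points[i], axis=1)
            for j in np.where((d < tol) & (labels == -1))[0]:
                labels[j] = current
                queue.append(j)
        current += 1
    return labels

# Two well-separated blobs should come out as two clusters.
rng = np.random.default_rng(1)
a = rng.normal(0, 0.05, (50, 3))
b = rng.normal(0, 0.05, (50, 3)) + [5, 0, 0]
labels = euclidean_cluster(np.vstack([a, b]), tol=0.5)
```

Each resulting cluster would then be fed to the recognition network as one candidate object.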


2013 ◽  
Vol 760-762 ◽  
pp. 1556-1561
Author(s):  
Ting Wei Du ◽  
Bo Liu

Indoor scene understanding based on depth image data is a cutting-edge issue in the field of three-dimensional computer vision. Taking into account the layout characteristics of indoor scenes and the many planar features they contain, this paper presents a depth image segmentation method based on Gaussian Mixture Model clustering. First, the Kinect depth image data are transformed into a point cloud of discrete three-dimensional points, which is then denoised and down-sampled. Second, the normal of every point in the cloud is calculated, and the normals are clustered using a Gaussian Mixture Model. Finally, the point cloud is segmented using the RANSAC algorithm. Experimental results show that the extracted regions have clear boundaries and above-average segmentation quality, laying a good foundation for object recognition.
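The normal-clustering step can be illustrated with scikit-learn's Gaussian mixture model on synthetic normals from two planes; the noise level and component count are assumptions made for the sketch, not the paper's configuration:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Synthetic normals from two planes: a floor (z-up) and a wall (x-facing),
# with a little angular noise, re-normalized to unit length.
rng = np.random.default_rng(2)
floor_n = np.tile([0.0, 0.0, 1.0], (100, 1)) + rng.normal(0, 0.02, (100, 3))
wall_n = np.tile([1.0, 0.0, 0.0], (100, 1)) + rng.normal(0, 0.02, (100, 3))
normals = np.vstack([floor_n, wall_n])
normals /= np.linalg.norm(normals, axis=1, keepdims=True)

# A two-component Gaussian mixture over the normal vectors: each component
# gathers the points belonging to one dominant plane orientation.
gmm = GaussianMixture(n_components=2, random_state=0).fit(normals)
labels = gmm.predict(normals)
```

In the full pipeline, RANSAC would then fit individual planes within each orientation cluster.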


2019 ◽  
Vol 16 (1) ◽  
pp. 172988141983184 ◽  
Author(s):  
Brayan S Zapata-Impata ◽  
Pablo Gil ◽  
Jorge Pomares ◽  
Fernando Torres

Industrial and service robots deal with the complex task of grasping objects that have different shapes and are seen from diverse points of view. In order to perform grasps autonomously, the robot must calculate where to place its robotic hand to ensure that the grasp is stable. We propose a method to find the best pair of grasping points given a three-dimensional point cloud with a partial view of an unknown object. We use a set of straightforward geometric rules to explore the cloud and propose grasping points on the surface of the object. We then adapt the pair of contacts to the multi-fingered hand used in our experiments. We show that, over 500 grasps of different objects, our approach is fast, taking an average of 17.5 ms to propose contacts, while attaining a grasp success rate of 85.5%. Moreover, the method is sufficiently flexible and stable to work with objects in changing environments, such as those confronted by industrial or service robots.
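The geometric rules themselves are not spelled out in the abstract; a toy version of the antipodal idea (pick the pair of surface points whose connecting line is best aligned, in opposite directions, with the two surface normals) can be sketched as:

```python
import numpy as np

def best_grasp_pair(points, normals):
    """Score every pair of surface points for an antipodal grasp: the line
    joining them should be anti-parallel to both outward surface normals."""
    best, best_score = None, -np.inf
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            axis = points[j] - points[i]
            axis = axis / np.linalg.norm(axis)
            # Ideal antipodal grasp: normal_i ≈ -axis and normal_j ≈ +axis.
            score = np.dot(normals[i], -axis) + np.dot(normals[j], axis)
            if score > best_score:
                best_score, best = score, (i, j)
    return best

# Four side midpoints of a square with outward normals: the best pair is
# two opposite sides.
pts = np.array([[1, 0, 0], [-1, 0, 0], [0, 1, 0], [0, -1, 0]], float)
nrm = pts.copy()  # outward normals of this toy "object"
pair = best_grasp_pair(pts, nrm)
```

A real system would additionally filter pairs by hand aperture and rank candidates by surface curvature, which this sketch omits.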


2020 ◽  
Vol 12 (6) ◽  
pp. 942 ◽  
Author(s):  
Maria Rosaria De Blasiis ◽  
Alessandro Di Benedetto ◽  
Margherita Fiani

The surface conditions of road pavements, including the occurrence and severity of distresses present on the surface, are an important indicator of pavement performance. Periodic monitoring and condition assessment are essential requirements for the safety of vehicles moving on the road and the wellbeing of people. The traditional characterization of the different types of distress often involves complex activities that are sometimes inefficient and risky, as they interfere with road traffic. Mobile laser systems (MLS) are now widely used to acquire detailed information about the road surface in the form of a three-dimensional point cloud. Despite their increasing use, there are still no standards for the acquisition and processing of the collected data. The aim of our work was to develop a procedure for processing the data acquired by MLS, in order to identify the localized degradations that most affect safety. We studied the data flow and implemented several processing algorithms to identify and quantify a few types of distress, namely potholes and swells/shoves, starting from very dense point clouds. We implemented the data processing in four steps: (i) editing of the point cloud to extract only the points belonging to the road surface; (ii) determination of the road roughness as the deviation in height of every single point of the cloud with respect to the modeled road surface; (iii) segmentation of the distress; and (iv) computation of the main geometric parameters of the distress in order to classify it by severity level. The results obtained by the proposed methodology are promising: the procedures implemented made it possible to correctly segment and identify the types of distress to be analyzed, in accordance with the on-site inspections.
The tests carried out showed that the choice of the values of some parameters given as input to the software is not trivial: for some, the choice is based on considerations related to the nature of the data; for others, it derives from the distress to be segmented. Owing to the different possible configurations of the various distresses, it is better to choose these parameters according to the boundary conditions rather than to impose default values. The test involved a 100-m long urban road segment, the surface of which was measured with an MLS installed on a vehicle that traveled the road at 10 km/h.
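Step (ii), the height deviation with respect to a modeled road surface, can be illustrated with a least-squares plane fit and a depth threshold; the synthetic patch and the 1 cm tolerance below are assumptions for the sketch, not the paper's parameters:

```python
import numpy as np

def find_pothole_points(points, depth_tol=0.01):
    """Fit a least-squares plane z = ax + by + c to the road points and
    flag points lying more than `depth_tol` metres below it."""
    A = np.column_stack([points[:, 0], points[:, 1], np.ones(len(points))])
    coeffs, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    residual = points[:, 2] - A @ coeffs
    return residual < -depth_tol      # negative residual = below the surface

# Flat 1 m x 1 m patch with a 3 cm deep "pothole" in one corner.
rng = np.random.default_rng(3)
road = np.column_stack([rng.uniform(0, 1, (1000, 2)), rng.normal(0, 0.001, 1000)])
hole = (road[:, 0] < 0.1) & (road[:, 1] < 0.1)
road[hole, 2] -= 0.03
mask = find_pothole_points(road, depth_tol=0.01)
```

Segmentation (step iii) would then group the flagged points into connected regions, and step (iv) would measure each region's depth, area and volume for severity classification.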


2019 ◽  
Vol 8 (5) ◽  
pp. 213 ◽  
Author(s):  
Florent Poux ◽  
Roland Billen

Automation in point cloud data processing is central to knowledge discovery within decision-making systems. The definition of relevant features is often key for segmentation and classification, with automated workflows presenting the main challenges. In this paper, we propose a voxel-based feature engineering approach that better characterizes point clusters and provides strong support for supervised or unsupervised classification. We provide different feature generalization levels to permit interoperable frameworks. First, we recommend a shape-based feature set (SF1) that leverages only the raw X, Y, Z attributes of any point cloud. We then derive relationships and topology between voxel entities to obtain a three-dimensional (3D) structural connectivity feature set (SF2). Finally, we provide a knowledge-based decision tree to permit infrastructure-related classification. We study the SF1/SF2 synergy in a new semantic segmentation framework for constituting a higher-level semantic representation of point clouds as relevant clusters. Finally, we benchmark the approach against novel, best-performing deep-learning methods on the full S3DIS dataset. We highlight good performance, easy integration, and high F1 scores (>85%) for planar-dominant classes, comparable to state-of-the-art deep learning.
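A shape-based feature set in the spirit of SF1, computed from raw XYZ only, is commonly built from the eigenvalues of the local covariance matrix; this sketch uses the standard linearity/planarity/sphericity descriptors, which may differ from the authors' exact feature definitions:

```python
import numpy as np

def shape_features(points):
    """Eigenvalue-based shape descriptors for one voxel's points:
    linearity, planarity and sphericity from the sorted eigenvalues
    of the 3x3 covariance matrix (lambda1 >= lambda2 >= lambda3)."""
    cov = np.cov(points.T)
    l1, l2, l3 = np.sort(np.linalg.eigvalsh(cov))[::-1]
    return {"linearity": (l1 - l2) / l1,
            "planarity": (l2 - l3) / l1,
            "sphericity": l3 / l1}

# A thin planar patch should score high on planarity and low on the rest.
rng = np.random.default_rng(4)
patch = np.column_stack([rng.uniform(-1, 1, (500, 2)), rng.normal(0, 0.001, 500)])
f = shape_features(patch)
```

Computed per voxel, such descriptors feed naturally into a knowledge-based decision tree, since thresholds on planarity or linearity map directly to classes such as floors, walls and linear infrastructure.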


Photonics ◽  
2021 ◽  
Vol 8 (8) ◽  
pp. 330
Author(s):  
Changjiang Zhou ◽  
Hao Yu ◽  
Bo Yuan ◽  
Liqiang Wang ◽  
Qing Yang

Conventional binocular endoscope three-dimensional (3D) reconstruction suffers from low accuracy, a small field of view, and loss of scale information. To address these problems, targeting the specific scene of stomach organs, a method of 3D endoscopic image stitching based on feature points is proposed. Left and right images are acquired by moving the endoscope and converted into point clouds by binocular matching. They are then preprocessed to compensate for errors caused by scene characteristics such as uneven illumination and weak texture. Camera pose changes are estimated by detecting and matching feature points in adjacent left images. Finally, based on the calculated transformation matrix, point cloud registration is carried out with the iterative closest point (ICP) algorithm, and a dense 3D reconstruction of the whole gastric organ is realized. The results show a root mean square error of 2.07 mm and an endoscopic field of view expanded by a factor of 2.20, increasing the observation range. Compared with conventional methods, the approach not only preserves the organ's scale information but also yields a much denser scene, which makes it convenient for doctors to measure target areas, such as lesions, in 3D. These improvements will help improve the accuracy and efficiency of diagnosis.
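The ICP registration step can be illustrated with a minimal point-to-point implementation (nearest-neighbour matching plus the Kabsch SVD solution); this is a textbook sketch, not the paper's pipeline:

```python
import numpy as np

def icp_point_to_point(src, dst, iters=20):
    """Minimal point-to-point ICP: match each source point to its nearest
    destination point, then solve the optimal rigid transform via SVD."""
    R_total, t_total = np.eye(3), np.zeros(3)
    cur = src.copy()
    for _ in range(iters):
        # Nearest-neighbour correspondences (brute force for clarity).
        d = np.linalg.norm(cur[:, None, :] - dst[None, :, :], axis=-1)
        matched = dst[d.argmin(axis=1)]
        # Kabsch: optimal rotation between the centred point sets.
        mu_s, mu_d = cur.mean(0), matched.mean(0)
        H = (cur - mu_s).T @ (matched - mu_d)
        U, _, Vt = np.linalg.svd(H)
        S = np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ S @ U.T
        t = mu_d - R @ mu_s
        cur = cur @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total

# Recover a small known rotation + translation of the same cloud.
rng = np.random.default_rng(5)
cloud = rng.uniform(-1, 1, (50, 3))
angle = 0.05
Rz = np.array([[np.cos(angle), -np.sin(angle), 0],
               [np.sin(angle),  np.cos(angle), 0],
               [0, 0, 1]])
target = cloud @ Rz.T + np.array([0.03, -0.02, 0.01])
R_est, t_est = icp_point_to_point(cloud, target)
```

In the stitching pipeline, the feature-based camera pose estimate would serve as the initial transform, with ICP refining the alignment of overlapping point clouds.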

