Study on 3D Point Clouds Accuracy of Elongated Object Reconstruction in Close Range – Comparison of Different Software

Author(s):  
Grzegorz Gabara ◽  
Piotr Sawicki

Image-based point clouds generated from multiple, differently oriented photos enable 3D object reconstruction across a wide spectrum of close-range applications. The paper presents the results of testing the accuracy of image-based point clouds generated under disadvantageous conditions of digital photogrammetric data processing. The subject of the study was a long, narrow object: a horizontal, rectilinear section of railway track. A 16 MP Nikon D5100 DSLR camera equipped with a zoom lens (f = 18-55 mm) was used to acquire a block of terrestrial, convergent and very oblique photos at different scales, with full longitudinal overlap. The point clouds generated from the digital images, the automatic determination of the interior orientation parameters, the spatial orientation of the photos and the 3D distribution of discrete points were obtained with the successively tested software packages RealityCapture, PhotoScan, VisualSFM+SURE and iWitness+SURE. The dense point clouds of the test object generated with RealityCapture and PhotoScan were filtered in MeshLab, and the geometric parameters of the test object were determined with CloudCompare. Even under disadvantageous conditions of photogrammetric digital data processing, image-based dense point clouds allow the geometric parameters of a close-range elongated object to be determined with high accuracy (mXYZ < 1 mm).
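The abstract above rests on checking how straight and accurate the reconstructed track section is. As a minimal sketch of one such check (not the paper's actual procedure), the following fits a least-squares line to planimetric track points and reports the residual RMS as a straightness measure; the coordinates are hypothetical:

```python
import math

def line_fit_residual_rms(points):
    """Least-squares fit of y = a + b*x to planimetric track points and the
    RMS of the residuals, a simple straightness check for a rectilinear section."""
    n = len(points)
    sx = sum(x for x, _ in points); sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points); sxy = sum(x * y for x, y in points)
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a = (sy - b * sx) / n
    residuals = [y - (a + b * x) for x, y in points]
    return math.sqrt(sum(r * r for r in residuals) / n)

# hypothetical rail-head points (m) with sub-millimetre lateral noise
pts = [(0.0, 0.0000), (1.0, 0.0004), (2.0, -0.0003), (3.0, 0.0002)]
print(line_fit_residual_rms(pts) < 0.001)  # residual RMS below 1 mm → True
```

A sub-millimetre residual RMS is consistent with the mXYZ < 1 mm accuracy level the study reports.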

Author(s):  
Tee-Ann Teo ◽  
Peter Tian-Yuan Shih ◽  
Sz-Cheng Yu ◽  
Fuan Tsai

With the development of technology, UAS has become an advanced tool for supporting rapid mapping in disaster response. The aim of this study is to develop educational modules for UAS data processing in rapid 3D mapping. The designed modules focus on UAV data processing with available freeware or trial software for educational purposes. The key modules cover orientation modelling, 3D point cloud generation, image georeferencing and visualization. The orientation modelling module adopts VisualSFM to determine the projection matrix for each image station; in addition, approximate ground control points measured from OpenStreetMap are used for absolute orientation. The second module uses SURE and the orientation files from the previous module for 3D point cloud generation; ground point selection and digital terrain model generation are then achieved with LAStools. The third module stitches individually rectified images into a mosaic image using Microsoft ICE (Image Composite Editor). The last module visualizes and measures the generated dense point clouds in CloudCompare. These comprehensive UAS processing modules allow students to gain the skills needed to process and deliver UAS photogrammetric products for rapid 3D mapping, and to apply those products for analysis in practice.



2019 ◽  
Vol 93 (3) ◽  
pp. 411-429 ◽  
Author(s):  
Maria Immacolata Marzulli ◽  
Pasi Raumonen ◽  
Roberto Greco ◽  
Manuela Persia ◽  
Patrizia Tartarino

Abstract: Methods for the three-dimensional (3D) reconstruction of forest trees have been suggested for data from active and passive sensors. Laser scanner technologies have become popular in the last few years, despite their high costs. Thanks to improvements in photogrammetric algorithms (e.g. structure from motion, SfM), photographs have become a new low-cost source of 3D point clouds. In this study, we use images captured by a smartphone camera to calculate dense point clouds of a forest plot using SfM. Eighteen point clouds were produced by changing the densification parameters (Image scale, Point density, Minimum number of matches) in order to investigate their influence on the quality of the point clouds produced. In order to estimate diameter at breast height (d.b.h.) and stem volumes, we developed an automatic method that extracts the stems from the point cloud and then models them with cylinders. The results show that Image scale is the most influential parameter for identifying and extracting trees from the point clouds. Compared with field data, the best cylinder-modelling performance had an RMSE of 1.9 cm for d.b.h. and 0.094 m3 for volume. Thus, for forest management and planning purposes, it is possible to use our photogrammetric and modelling methods to measure d.b.h., stem volume and possibly other forest inventory metrics rapidly and without felling trees. The proposed methodology significantly reduces working time in the field, using ‘non-professional’ instruments and automating estimates of dendrometric parameters.
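The RMSE figures above come from comparing cylinder-model estimates against field measurements. A minimal sketch of that comparison, with hypothetical d.b.h. values rather than the study's data:

```python
import math

def rmse(estimates, references):
    """Root-mean-square error between model estimates and field measurements."""
    if len(estimates) != len(references):
        raise ValueError("length mismatch")
    return math.sqrt(sum((e - r) ** 2 for e, r in zip(estimates, references)) / len(estimates))

# hypothetical d.b.h. values in cm (not the study's data)
field_dbh = [32.1, 28.4, 41.0]
model_dbh = [33.0, 27.1, 42.5]
print(round(rmse(model_dbh, field_dbh), 2))  # → 1.26
```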


Sensors ◽  
2018 ◽  
Vol 18 (7) ◽  
pp. 2245 ◽  
Author(s):  
Karel Kuželka ◽  
Peter Surový

We evaluated two unmanned aerial systems (UASs), namely the DJI Phantom 4 Pro and DJI Mavic Pro, for 3D forest structure mapping of the forest stand interior with the use of close-range photogrammetry techniques. Assisted flights were performed within two research plots established in mature pure Norway spruce (Picea abies (L.) H. Karst.) and European beech (Fagus sylvatica L.) forest stands. Geotagged images were used to produce georeferenced 3D point clouds representing tree stem surfaces. With a flight height of 8 m above the ground, the stems were precisely modeled up to a height of 10 m, which represents a considerably larger portion of the stem when compared with terrestrial close-range photogrammetry. Accuracy of the point clouds was evaluated by comparing field-measured tree diameters at breast height (DBH) with diameter estimates derived from the point cloud using four different fitting methods, including the bounding circle, convex hull, least squares circle, and least squares ellipse methods. The accuracy of DBH estimation varied with the UAS model and the diameter fitting method utilized. With the Phantom 4 Pro and the least squares ellipse method to estimate diameter, the mean error of diameter estimates was −1.17 cm (−3.14%) and 0.27 cm (0.69%) for spruce and beech stands, respectively.
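One of the four diameter fitting methods mentioned, the least squares circle, can be sketched with the algebraic (Kasa) formulation, which reduces the fit to a linear system. The stem cross-section points below are hypothetical, not the study's data:

```python
import math

def solve3(A, b):
    """Gaussian elimination with partial pivoting for a 3x3 linear system."""
    M = [row[:] + [rhs] for row, rhs in zip(A, b)]
    for col in range(3):
        pivot = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, 3):
            f = M[r][col] / M[col][col]
            for k in range(col, 4):
                M[r][k] -= f * M[col][k]
    x = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):
        x[r] = (M[r][3] - sum(M[r][k] * x[k] for k in range(r + 1, 3))) / M[r][r]
    return x

def fit_circle(points):
    """Algebraic (Kasa) least-squares circle fit: minimize
    sum((x^2 + y^2) - (a*x + b*y + c))^2, which is linear in a, b, c;
    the centre is (a/2, b/2) and r^2 = c + cx^2 + cy^2."""
    Sxx = Sxy = Syy = Sx = Sy = Sxz = Syz = Sz = 0.0
    for x, y in points:
        z = x * x + y * y
        Sxx += x * x; Sxy += x * y; Syy += y * y
        Sx += x; Sy += y
        Sxz += x * z; Syz += y * z; Sz += z
    a, b, c = solve3([[Sxx, Sxy, Sx], [Sxy, Syy, Sy], [Sx, Sy, float(len(points))]],
                     [Sxz, Syz, Sz])
    cx, cy = a / 2.0, b / 2.0
    return cx, cy, math.sqrt(c + cx * cx + cy * cy)

# hypothetical stem cross-section points (m), true centre (1, 2), radius 0.15 m
pts = [(1.15, 2.0), (1.0, 2.15), (0.85, 2.0), (1.0, 1.85)]
cx, cy, r = fit_circle(pts)
print(round(2 * r * 100, 1))  # estimated diameter in cm → 30.0
```

The least squares ellipse method the paper found most accurate generalizes this idea with two semi-axes and an orientation angle.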


2019 ◽  
Vol 11 (16) ◽  
pp. 1940 ◽  
Author(s):  
Fausto Mistretta ◽  
Giannina Sanna ◽  
Flavio Stochino ◽  
Giuseppina Vacca

Dense point clouds acquired with Terrestrial Laser Scanners (TLS) have proved effective for structural deformation assessment. In the last decade, many researchers have defined methodologies and workflows for comparing point clouds, either with each other or with a known model, assessing the potential and limits of the technique. Currently, dense point clouds can also be obtained by Close-Range Photogrammetry (CRP) based on a Structure from Motion (SfM) algorithm. This work reports a comparison between the TLS technique and Close-Range Photogrammetry using the Structure from Motion algorithm. The analysis of two Reinforced Concrete (RC) beams tested under four-point bending is presented. In order to measure displacement distributions, point clouds at different beam loading states were acquired and compared. A description of the instrumentation and the experimental environment is given, along with a comprehensive report on the calculations and results obtained. Two kinds of point cloud comparison were investigated: mesh to mesh and modeling with geometric primitives. The comparison between the mesh-to-mesh (m2m) approach and the modeling (m) approach showed that the latter leads to significantly better results for both TLS and CRP. The results obtained with the TLS for both the m2m and m methodologies present Root Mean Square (RMS) levels below 1 mm, while the CRP method yields an RMS level of a few millimeters for m2m and of 1 mm for m.
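The RMS figures above summarize distances between point clouds acquired at different loading states. A brute-force sketch of a CloudCompare-style cloud-to-cloud RMS (the beam coordinates below are hypothetical, and real clouds would need a spatial index rather than an exhaustive search):

```python
import math

def cloud_to_cloud_rms(cloud_a, cloud_b):
    """RMS of nearest-neighbour distances from cloud_a to cloud_b
    (brute-force cloud-to-cloud distance, suitable only for small clouds)."""
    dists = [min(math.dist(p, q) for q in cloud_b) for p in cloud_a]
    return math.sqrt(sum(d * d for d in dists) / len(dists))

# hypothetical beam-surface points (mm) before and after a load step
before = [(0.0, 0.0, 0.0), (10.0, 0.0, 0.0), (20.0, 0.0, 0.0)]
after = [(0.0, 0.0, -0.5), (10.0, 0.0, -0.8), (20.0, 0.0, -0.5)]
print(round(cloud_to_cloud_rms(after, before), 3))  # → 0.616
```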


2019 ◽  
Vol 252 ◽  
pp. 03020 ◽  
Author(s):  
Emilia Bachtiak-Radka ◽  
Sara Dudzińska ◽  
Daniel Grochała ◽  
Stefan Berczyński

Digital processing of point clouds recorded on innovative surfaces could facilitate the operator's planning of the metrological process and give more freedom in the assessment of surface texture. The current state of knowledge about surface characteristics, the precision and quality of measurements, and especially the repeatability of measurements, not only in the laboratory environment but also in industry, poses a big challenge. The paper presents research on identifying the impact of the point cloud acquisition method, combined with digital data processing, on the measured surface texture. The main task was to carry out, according to a prepared experimental plan, a series of sample measurements using the AltiSurf A520 optical measuring system in the Laboratory of Surface Topography at the West Pomeranian University of Technology in Szczecin. The next task was to determine the significance of the impact of the digital data processing strategy (conditions and methods of filtration), which in practice largely determines the repeatability and reproducibility of the parameter values of the surface geometric structure.


Author(s):  
Y. Liang ◽  
Y. H. Sheng

To address the shortcomings of modeling building facades merely with point features from close-range images, this paper proposes a new method for modeling building facades under line feature constraints. First, camera parameters and a sparse spatial point cloud were recovered using SfM, and a dense 3D point cloud was generated with MVS. Second, line features were detected based on gradient direction, fitted considering their directions and lengths, then matched under multiple types of constraints and extracted from the multi-image sequence. Finally, the facade mesh of the building was triangulated from the point cloud and the line features. The experiment shows that this method can effectively reconstruct the geometric facade of buildings by combining the point and line features of a close-range image sequence, especially when restoring the contour information of building facades.


Author(s):  
A. Pérez Ramos ◽  
G. Robleda Prieto

An indoor Gothic apse provides a complex environment for virtualization using imaging techniques, due to its light conditions and architecture. Light entering through large windows, combined with the shape of the apse, makes it difficult to find proper conditions for photo capture for reconstruction purposes. Thus, image-based documentation techniques are usually replaced by scanning techniques inside churches. Nevertheless, the need to use Terrestrial Laser Scanning (TLS) for indoor virtualization means a significant increase in the final surveying cost. So, in most cases, scanning techniques are used to generate dense point clouds. However, many Terrestrial Laser Scanner (TLS) internal cameras are not able to provide colour images, or cannot reach the image quality that can be obtained with an external camera. Therefore, external high-quality images are often used to build high-resolution textures for these models. This paper aims to solve the problems posed by virtualizing indoor Gothic churches, making the task more affordable using exclusively image-based techniques. It reviews a previously proposed methodology using a DSLR camera with an 18-135 mm lens commonly used for close-range photogrammetry, and adds another using an HDR 360° camera with four lenses that makes the task easier and faster than the previous one. Fieldwork and office work are simplified. The proposed methodology provides photographs in good enough conditions for building point clouds and textured meshes. Furthermore, the same imaging resources can be used to generate more deliverables without extra time spent in the field, for instance immersive virtual tours. In order to verify the usefulness of the method, it was applied to the apse, considered one of the most complex elements of Gothic churches, and it could be extended to the whole building.


Sensors ◽  
2020 ◽  
Vol 20 (17) ◽  
pp. 4819 ◽  
Author(s):  
Jeremy Castagno ◽  
Ella Atkins

Flat surfaces captured by 3D point clouds are often used for localization, mapping, and modeling. Dense point cloud processing has high computation and memory costs making low-dimensional representations of flat surfaces such as polygons desirable. We present Polylidar3D, a non-convex polygon extraction algorithm which takes as input unorganized 3D point clouds (e.g., LiDAR data), organized point clouds (e.g., range images), or user-provided meshes. Non-convex polygons represent flat surfaces in an environment with interior cutouts representing obstacles or holes. The Polylidar3D front-end transforms input data into a half-edge triangular mesh. This representation provides a common level of abstraction for subsequent back-end processing. The Polylidar3D back-end is composed of four core algorithms: mesh smoothing, dominant plane normal estimation, planar segment extraction, and finally polygon extraction. Polylidar3D is shown to be quite fast, making use of CPU multi-threading and GPU acceleration when available. We demonstrate Polylidar3D’s versatility and speed with real-world datasets including aerial LiDAR point clouds for rooftop mapping, autonomous driving LiDAR point clouds for road surface detection, and RGBD cameras for indoor floor/wall detection. We also evaluate Polylidar3D on a challenging planar segmentation benchmark dataset. Results consistently show excellent speed and accuracy.
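Polylidar3D's back end includes dominant plane normal estimation and planar segment extraction from a triangular mesh. The sketch below is not Polylidar3D's implementation, only an illustration of the underlying idea of filtering mesh triangles by the angle between their normal and a given dominant plane normal:

```python
import math

def unit_normal(tri):
    """Unit normal of a triangle given as three 3D vertices."""
    (ax, ay, az), (bx, by, bz), (cx, cy, cz) = tri
    ux, uy, uz = bx - ax, by - ay, bz - az
    vx, vy, vz = cx - ax, cy - ay, cz - az
    nx, ny, nz = uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx
    m = math.sqrt(nx * nx + ny * ny + nz * nz)
    return (nx / m, ny / m, nz / m)

def planar_triangles(mesh, plane_normal, max_angle_deg=10.0):
    """Keep triangles whose normal lies within max_angle_deg of plane_normal,
    the filtering step that precedes segment and polygon extraction."""
    cos_min = math.cos(math.radians(max_angle_deg))
    kept = []
    for tri in mesh:
        nx, ny, nz = unit_normal(tri)
        dot = abs(nx * plane_normal[0] + ny * plane_normal[1] + nz * plane_normal[2])
        if dot >= cos_min:
            kept.append(tri)
    return kept

# one flat (roof-like) triangle and one steep (wall-like) triangle
flat = ((0, 0, 0), (1, 0, 0), (0, 1, 0))
steep = ((0, 0, 0), (1, 0, 0), (0, 0, 1))
print(len(planar_triangles([flat, steep], (0.0, 0.0, 1.0))))  # → 1
```

In the full algorithm the retained triangles would then be grouped into connected planar segments before polygons with holes are extracted.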


Author(s):  
Jianqing Wu ◽  
Hao Xu ◽  
Yuan Sun ◽  
Jianying Zheng ◽  
Rui Yue

High-resolution micro traffic data (HRMTD) on all roadway users is important for serving the connected-vehicle system in mixed traffic situations. The roadside LiDAR sensor provides a solution, delivering HRMTD from real-time 3D point clouds of the objects it scans. Background filtering is the preprocessing step for obtaining the HRMTD of different roadway users from roadside LiDAR data; it can significantly reduce data processing time and improve vehicle/pedestrian identification accuracy. An algorithm based on the spatial distribution of laser points is proposed in this paper, which filters both static and moving background efficiently. Various point-density thresholds are applied in this algorithm to exclude background at different distances from the roadside sensor. The case study shows that the algorithm can filter background LiDAR points in different situations (different road geometries, traffic demands, day/night time, speed limits). Vehicle and pedestrian shapes are well retained after background filtering. The low computational load guarantees that this method can be applied to real-time data processing such as vehicle monitoring and pedestrian tracking.
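One plausible reading of the density-threshold idea above, sketched under assumed parameters (the voxel size, threshold values, and frame data are all hypothetical, not the paper's): points that recur in the same voxel across many aggregated frames are treated as static background, and the density threshold is relaxed with distance because point spacing grows with range:

```python
import math

def filter_background(frames, cell=0.5, near_thresh=8.0, far_thresh=2.0, far_dist=30.0):
    """Sketch of density-based background filtering for roadside LiDAR:
    points are aggregated over frames into voxels, and a point is dropped
    as static background when its voxel's aggregate count reaches a
    threshold that decreases with distance from the sensor (at the origin)."""
    counts = {}
    for frame in frames:
        for x, y, z in frame:
            key = (int(x // cell), int(y // cell), int(z // cell))
            counts[key] = counts.get(key, 0) + 1

    def threshold(x, y):
        d = math.hypot(x, y)  # planimetric distance from the sensor
        return near_thresh + (far_thresh - near_thresh) * min(d / far_dist, 1.0)

    return [[p for p in frame
             if counts[(int(p[0] // cell), int(p[1] // cell), int(p[2] // cell))]
             < threshold(p[0], p[1])]
            for frame in frames]

# 10 hypothetical frames: one static background point plus one moving point
frames = [[(1.0, 1.0, 0.0), (5.0 + 0.1 * i, 5.0, 0.0)] for i in range(10)]
out = filter_background(frames)
print(sum(len(f) for f in out))  # background removed, moving point kept → 10
```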

