Automatic Detection of Objects in 3D Point Clouds Based on Exclusively Semantic Guided Processes

2019 ◽  
Vol 8 (10) ◽  
pp. 442
Author(s):  
Jean-Jacques Ponciano ◽  
Alain Trémeau ◽  
Frank Boochs

In the domain of computer vision, object recognition aims at detecting and classifying objects in data sets. Model-driven approaches are typically constrained by their focus on either a specific type of data, a context (indoor, outdoor) or a set of objects. Machine learning-based approaches are more flexible but also constrained, as they need annotated data sets to train the learning process. That leads to problems when such data are not available because of the specialized nature of the application field, as in archaeology, for example. To overcome such constraints, we present a fully semantic-guided approach. The role of semantics is to express all relevant knowledge about the representation of the objects inside the data sets and about the algorithms which address this representation. In addition, the approach contains a learning stage, since it adapts the processing according to the diversity of the objects and data characteristics. The semantics are expressed via an ontological model and standard web technologies such as SPARQL queries, providing great flexibility. The ontological model describes the objects, the data and the algorithms, and allows algorithms adapted to the data and objects to be selected and executed dynamically. Similarly, processing results are classified dynamically and used to enrich the ontological model via SPARQL CONSTRUCT queries. The semantics formulated through SPARQL also act as a bridge between the knowledge contained in the ontological model and the processing branch, which executes the algorithms. This makes it possible to adapt the sequence of algorithms to the current state of the processing chain and makes the solution robust and flexible. A comparison with other approaches on the same use case shows the efficiency and the improvement that this approach brings.
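The interplay between the ontological model and the SPARQL-driven processing branch described above can be illustrated with a small, purely hypothetical sketch in Python using rdflib. All ex: classes and properties below are invented for illustration and are not the authors' ontology.

```python
# A purely illustrative sketch (not the authors' ontology): an ontological
# model is queried with SPARQL to select a suitable algorithm, and a detection
# result is classified back into the model with a CONSTRUCT query.
from rdflib import Graph, Namespace, RDF

EX = Namespace("http://example.org/")

TTL = """
@prefix ex: <http://example.org/> .

ex:RANSACPlane   a ex:Algorithm ; ex:detects ex:PlanarObject ; ex:requires ex:PointCloud .
ex:RegionGrowing a ex:Algorithm ; ex:detects ex:SmoothSurface ; ex:requires ex:PointCloud .

ex:Wall  a ex:PlanarObject .
ex:Scan1 a ex:PointCloud .

ex:Segment42 ex:detectedBy ex:RANSACPlane ; ex:isPlanar true .
"""

g = Graph()
g.parse(data=TTL, format="turtle")

# Select algorithms whose declared capabilities match the target object
# (a wall) and the available data (a point cloud).
select_q = """
PREFIX ex: <http://example.org/>
SELECT ?alg WHERE {
  ?alg a ex:Algorithm ;
       ex:detects  ?objClass ;
       ex:requires ?dataClass .
  ex:Wall  a ?objClass .
  ex:Scan1 a ?dataClass .
}
"""
for row in g.query(select_q):
    print("candidate algorithm:", row.alg)

# Classify a processing result and enrich the ontological model with it.
construct_q = """
PREFIX ex: <http://example.org/>
CONSTRUCT { ?seg a ex:Wall . }
WHERE     { ?seg ex:detectedBy ex:RANSACPlane ; ex:isPlanar true . }
"""
for triple in g.query(construct_q):
    g.add(triple)

print("walls in the model:", list(g.subjects(RDF.type, EX.Wall)))
```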

Author(s):  
W. Nguatem ◽  
M. Drauschke ◽  
H. Mayer

In this paper, we present a fully automatic approach to localize the outlines of facade objects (windows and doors) in 3D point clouds of facades. We introduce an approach that searches for the main facade wall and locates the facade objects within a probabilistic framework. Our search routine is based on Monte Carlo simulation (MC simulation). Templates containing the control points of curves are used to approximate the possible shapes of windows and doors; these are interpolated using parametric B-spline curves. The templates are scored in a sliding-window fashion over the entire facade using a likelihood function in a probabilistic matching procedure. This produces many competing results, to which a two-layered model selection based on Bayes factors is applied. A major thrust of our work is the introduction of a 2D shape-space of shapes that are similar under affine transformation in this architectural scene. It transforms the initial parametric B-spline curves representing the object outlines into curves of affine similarity in a strongly reduced dimensionality, thus facilitating the generation of competing hypotheses within the search space. A further computational speed-up is achieved by clustering the search space into disjoint regions, which enables a parallel implementation. We obtain state-of-the-art results on self-acquired data sets. The robustness of our algorithm is evaluated on 3D point clouds from image matching and on LiDAR data of diverse quality.
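As a rough illustration of the template-scoring idea (not the authors' implementation), the sketch below interpolates a window-like set of control points with a parametric B-spline and scores random placements against a synthetic edge-evidence raster in a simple Monte Carlo loop; the shape-space reduction, Bayes-factor model selection and parallel clustering are omitted, and the likelihood and uniform proposal are simplifying assumptions.

```python
# Minimal Monte Carlo template-scoring sketch with B-spline outlines.
import numpy as np
from scipy.interpolate import splprep, splev

rng = np.random.default_rng(0)
H, W = 200, 300                       # facade raster size (cells)
edge_evidence = rng.random((H, W))    # stand-in for a point-density / edge map

# Control points of a window-like rectangular template (unit coordinates).
template = np.array([[0, 0], [1, 0], [1, 1.4], [0, 1.4], [0, 0]], float)

def spline_outline(ctrl, n=100):
    """Interpolate control points with a parametric B-spline curve."""
    tck, _ = splprep([ctrl[:, 0], ctrl[:, 1]], s=0, k=3)
    x, y = splev(np.linspace(0, 1, n), tck)
    return np.stack([x, y], axis=1)

def log_likelihood(outline):
    """Score an outline by the edge evidence it passes through."""
    cols = np.clip(outline[:, 0].astype(int), 0, W - 1)
    rows = np.clip(outline[:, 1].astype(int), 0, H - 1)
    return np.log(edge_evidence[rows, cols] + 1e-9).sum()

best = (-np.inf, None)
for _ in range(2000):                 # Monte Carlo simulation over placements
    scale = rng.uniform(15, 40)       # similarity parameters (sketch only)
    tx, ty = rng.uniform(0, W), rng.uniform(0, H)
    hypo = template * scale + [tx, ty]
    score = log_likelihood(spline_outline(hypo))
    if score > best[0]:
        best = (score, hypo)

print("best log-likelihood:", round(best[0], 2))
```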


Author(s):  
N. Tyagur ◽  
M. Hollaus

During the last ten years, mobile laser scanning (MLS) systems have become a very popular and efficient technology for capturing reality in 3D. A 3D laser scanner mounted on top of a moving vehicle (e.g. a car) allows high-precision capturing of the environment in a fast way. This technology is mostly used in cities for capturing roads and building facades to create 3D city models. In our work, we used an MLS system in the Moravian Karst, a protected nature reserve in the eastern part of the Czech Republic with steep rocky terrain covered by forests. For the 3D data collection, the Riegl VMX-450, mounted on a car, was used with integrated IMU/GNSS equipment, which provides low-noise, rich and very dense 3D point clouds.

The aim of this work is to create a digital terrain model (DTM) from several MLS data sets acquired in the neighbourhood of a road. The total lengths of the two covered areas are 3.9 km and 6.1 km, respectively, with an average width of 100 m. For the DTM generation, a fully automatic, robust, hierarchic approach was applied. The derivation of the DTM is based on combinations of hierarchical interpolation and robust filtering at different resolution levels. For the generation of the final DTMs, different interpolation algorithms are applied to the classified terrain points. The parameters used were determined by explorative analysis, and all MLS data sets were processed with one parameter set. As a result, a highly precise DTM was derived with a high spatial resolution of 0.25 m x 0.25 m. The quality of the DTMs was checked by geodetic measurements and by visual comparison with the raw point clouds. The high quality of the derived DTM makes it suitable for analysing terrain changes and morphological structures. Finally, the derived DTM was compared with the DTM of the Czech Republic (DMR 4G) with a resolution of 5 m x 5 m, which was created from airborne laser scanning data. The vertical accuracy of the derived DTMs is around 0.10 m.
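A minimal sketch of the final gridding step only, i.e. turning classified terrain points into a 0.25 m raster; the paper's hierarchical interpolation and robust filtering are replaced here by a simple robust low quantile per cell, purely for illustration.

```python
# Grid classified terrain points into a 0.25 m DTM raster (illustrative only;
# a robust low quantile per cell stands in for the ground estimate).
import numpy as np

def grid_dtm(xyz, cell=0.25, quantile=0.05):
    x, y, z = xyz[:, 0], xyz[:, 1], xyz[:, 2]
    ix = ((x - x.min()) / cell).astype(int)
    iy = ((y - y.min()) / cell).astype(int)
    nx, ny = ix.max() + 1, iy.max() + 1
    dtm = np.full((ny, nx), np.nan)
    # Group points by cell and take a low quantile to suppress off-terrain hits.
    order = np.lexsort((ix, iy))
    keys = iy[order] * nx + ix[order]
    starts = np.r_[0, np.flatnonzero(np.diff(keys)) + 1, keys.size]
    for s, e in zip(starts[:-1], starts[1:]):
        k = keys[s]
        dtm[k // nx, k % nx] = np.quantile(z[order][s:e], quantile)
    return dtm

# Example with synthetic points (replace with the classified MLS terrain points).
pts = np.random.default_rng(1).uniform([0, 0, 200], [10, 10, 205], (50000, 3))
print(grid_dtm(pts).shape)   # -> (40, 40) cells of 0.25 m x 0.25 m
```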


Author(s):  
T. Wakita ◽  
J. Susaki

In this study, we propose a method to accurately extract vegetation from terrestrial three-dimensional (3D) point clouds for estimating a landscape index in urban areas. Extracting vegetation in urban areas is challenging because the light returned by vegetation does not show patterns as clear as those of man-made objects, and because urban areas contain many other objects from which vegetation must be discriminated. The proposed method takes a multi-scale voxel approach to effectively extract different types of vegetation in complex urban areas. With two different voxel sizes, a process is repeated that calculates the eigenvalues of the local planar surface from a set of points, classifies voxels using the approximate curvature of the voxel of interest derived from these eigenvalues, and examines the connectivity of the valid voxels. We applied the proposed method to two data sets measured in a residential area in Kyoto, Japan. The validation results were acceptable, with F-measures of approximately 95% and 92%. It was also demonstrated that several types of vegetation were successfully extracted by the proposed method, whereas occluded vegetation was omitted. We conclude that the proposed method is suitable for extracting vegetation in urban areas from terrestrial light detection and ranging (LiDAR) data. In the future, the proposed method will be applied to mobile LiDAR data, and its performance on point clouds of lower density will be examined.
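The eigenvalue-based curvature cue at the heart of the voxel classification can be sketched as follows; the voxel size, minimum point count and threshold are illustrative assumptions, and only a single scale is shown instead of the paper's two voxel sizes.

```python
# Per-voxel "surface variation" lambda_min / (lambda_1 + lambda_2 + lambda_3)
# of the local covariance separates planar (man-made) voxels from scattered
# (vegetation-like) voxels. Thresholds and voxel sizes are illustrative.
import numpy as np

def voxel_curvature(xyz, voxel=0.5, min_pts=10):
    keys = np.floor(xyz / voxel).astype(int)
    _, inv = np.unique(keys, axis=0, return_inverse=True)
    curvature = {}
    for v in range(inv.max() + 1):
        pts = xyz[inv == v]
        if len(pts) < min_pts:
            continue
        cov = np.cov(pts.T)
        w = np.sort(np.linalg.eigvalsh(cov))      # ascending eigenvalues
        curvature[v] = w[0] / w.sum()             # ~0 for planes, larger for volumes
    return curvature

def classify(curv, thresh=0.05):
    # Voxels with high surface variation are vegetation candidates.
    return {v: ("vegetation" if c > thresh else "surface") for v, c in curv.items()}

rng = np.random.default_rng(2)
plane = np.c_[rng.uniform(0, 5, (3000, 2)), rng.normal(0, 0.01, 3000)]  # wall-like
bush  = rng.normal([8, 8, 1], 0.4, (3000, 3))                           # scattered
labels = classify(voxel_curvature(np.vstack([plane, bush])))
print(sum(l == "vegetation" for l in labels.values()), "vegetation voxels")
```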


Author(s):  
L. Markelin ◽  
E. Honkavaara ◽  
R. Näsi ◽  
N. Viljanen ◽  
T. Rosnell ◽  
...  

Novel miniaturized multi- and hyperspectral imaging sensors on board unmanned aerial vehicles have recently shown great potential in various environmental monitoring and measuring tasks, such as precision agriculture and forest management. These systems can be used to collect dense 3D point clouds and spectral information over small areas such as single forest stands or sample plots. Accurate radiometric processing and atmospheric correction are required when data sets from different dates and sensors, collected in varying illumination conditions, are combined. The performance of a novel radiometric block adjustment method, developed at the Finnish Geospatial Research Institute, is evaluated with a multitemporal hyperspectral data set of seedling stands collected during spring and summer 2016. Illumination conditions during the campaigns varied from bright to overcast. We use two different methods to produce homogeneous image mosaics and hyperspectral point clouds: image-wise relative correction, and image-wise relative correction with BRDF. The radiometric data sets are converted to reflectance using reference panels, and changes in the reflectance spectra are analysed. The tested methods improved image mosaic homogeneity by 5% to 25%. The results show that the evaluated method can produce consistent reflectance mosaics and consistent reflectance spectra shapes between different areas and dates.
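The panel-based conversion from digital numbers to reflectance mentioned above is essentially a per-band empirical line fit; the sketch below uses made-up panel values and leaves out the paper's radiometric block adjustment and BRDF correction entirely.

```python
# Empirical line sketch: fit a linear DN -> reflectance model per band from
# reference panels with known reflectance factors (all values are made up).
import numpy as np

panel_dn  = np.array([1200.0, 9800.0])   # mean DN over dark / bright panel
panel_ref = np.array([0.03, 0.50])       # nominal panel reflectance factors

gain, offset = np.polyfit(panel_dn, panel_ref, 1)

band_dn = np.array([[1500.0, 4000.0],    # toy image band
                    [7200.0,  300.0]])
reflectance = gain * band_dn + offset
print(np.round(reflectance, 3))
```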


2021 ◽  
Vol 13 (15) ◽  
pp. 2999
Author(s):  
Ramón Alberto Díaz-Varela ◽  
Eduardo González-Ferreiro

Society is increasingly aware of the important role of forests and other woodlands as cultural heritage and as providers of different ecosystem services, such as biomass provision, soil protection, hydrological regulation, biodiversity conservation and carbon sequestration, among others [...]


2021 ◽  
Vol 13 (10) ◽  
pp. 1882
Author(s):  
Yijie Wu ◽  
Jianga Shang ◽  
Fan Xue

Coarse registration of 3D point clouds plays an indispensable role in creating parametric, semantically rich, and realistic digital twin buildings (DTBs) in the practice of GIScience, manufacturing, robotics, architecture, engineering, and construction. However, existing methods are prominently challenged by (i) the high cost of data collection for numerous existing buildings and (ii) the computational complexity arising from self-similar layout patterns. This paper studies the registration of two low-cost data sets, i.e., colorful 3D point clouds captured by smartphones and 2D CAD drawings, to address the first challenge. We propose a novel method named 'Registration based on Architectural Reflection Detection' (RegARD), which turns the self-symmetries behind the second challenge from a barrier to coarse registration into a facilitator. First, RegARD detects the innate architectural reflection symmetries to constrain the rotations and reduce the degrees of freedom. Then, a nonlinear optimization formulation together with advanced optimization algorithms overcomes the second challenge. As a result, high-quality coarse registration and subsequent low-cost DTBs can be created, with semantic components and realistic appearances. Experiments showed that the proposed method outperformed existing methods considerably in both effectiveness and efficiency, i.e., 49.88% less error and 73.13% less time, on average. RegARD, as presented in this paper, first contributes to coarse registration theory and to the exploitation of symmetries and textures in 3D point clouds and 2D CAD drawings. For practitioners in industry, RegARD offers a new automatic solution for utilizing ubiquitous smartphone sensors to create massive numbers of low-cost DTBs.
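A toy sketch of the central idea, i.e. fixing the rotation from the (assumed already detected) reflection-axis orientations so that the nonlinear optimizer only searches translation and scale; the symmetry detection itself and RegARD's actual formulation are not reproduced, and the plain nearest-neighbour cost below is an assumption for illustration.

```python
# Symmetry-constrained 2D coarse registration sketch (not RegARD itself).
import numpy as np
from scipy.optimize import minimize
from scipy.spatial import cKDTree

def register(source_xy, target_xy, axis_src_deg, axis_tgt_deg):
    theta = np.deg2rad(axis_tgt_deg - axis_src_deg)   # rotation locked by symmetry axes
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    tree = cKDTree(target_xy)

    def cost(p):                                      # p = [tx, ty, scale]
        moved = p[2] * source_xy @ R.T + p[:2]
        d, _ = tree.query(moved)
        return np.mean(d)                             # mean nearest-neighbour distance

    # Initialize by aligning centroids under the fixed rotation.
    x0 = np.r_[target_xy.mean(0) - source_xy.mean(0) @ R.T, 1.0]
    res = minimize(cost, x0, method="Nelder-Mead")
    return np.rad2deg(theta), res.x

# Toy example: the target is the source rotated by 30 degrees and shifted.
rng = np.random.default_rng(3)
src = rng.uniform(0, 10, (500, 2))
ang = np.deg2rad(30)
R30 = np.array([[np.cos(ang), -np.sin(ang)], [np.sin(ang), np.cos(ang)]])
tgt = src @ R30.T + [4.0, -2.0]
print(register(src, tgt, axis_src_deg=0.0, axis_tgt_deg=30.0))
```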


2020 ◽  
Vol 15 (3) ◽  
pp. 15-25
Author(s):  
Richard Honti ◽  
Ján Erdélyi ◽  
Alojz Kopáčik

Nowadays, huge data sets can be collected in a relatively short time. Once these data sets have been captured, the next step is their processing. Automating the processing steps can increase efficiency, reduce the time needed for processing, and reduce the required user interaction. The paper gives a short review of the most reliable methods for sphere segmentation. An innovative algorithm for the automated detection of spheres and the estimation of their parameters from 3D point clouds is introduced. The proposed algorithm was tested on complex point clouds. In the last part of the paper, the implementation of the proposed algorithm in a standalone application is described.
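As background to the parameter-estimation step, the sketch below shows a standard algebraic least-squares sphere fit (centre and radius) from 3D points; it is not the authors' segmentation algorithm, which additionally has to find candidate spheres and validate inliers in complex scenes.

```python
# Algebraic least-squares sphere fit: |x|^2 = 2 c.x + (r^2 - |c|^2),
# solved as a linear system A p = b with p = [cx, cy, cz, r^2 - |c|^2].
import numpy as np

def fit_sphere(pts):
    A = np.c_[2 * pts, np.ones(len(pts))]
    b = (pts ** 2).sum(axis=1)
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    centre = p[:3]
    radius = np.sqrt(p[3] + centre @ centre)
    return centre, radius

# Synthetic test: noisy points on a sphere of radius 0.15 m centred at (1, 2, 3).
rng = np.random.default_rng(4)
d = rng.normal(size=(2000, 3))
d /= np.linalg.norm(d, axis=1, keepdims=True)
pts = [1.0, 2.0, 3.0] + 0.15 * d + rng.normal(0, 0.002, (2000, 3))
print(fit_sphere(np.asarray(pts)))
```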


Author(s):  
F. Thiel ◽  
S. Discher ◽  
R. Richter ◽  
J. Döllner

Emerging virtual reality (VR) technology allows immersively exploring digital 3D content on standard consumer hardware. Using in-situ or remote sensing technology, such content can be automatically derived from real-world sites. External-memory algorithms allow for the non-immersive exploration of the resulting 3D point clouds on a diverse set of devices with vastly different rendering capabilities. Applications for VR environments raise additional challenges for those algorithms, as they are highly sensitive to the visual artifacts typical of point cloud depictions (i.e., overdraw and underdraw) while simultaneously requiring higher frame rates (i.e., around 90 fps instead of 30–60 fps). We present a rendering system for the immersive exploration and inspection of massive 3D point clouds on state-of-the-art VR devices. Based on a multi-pass rendering pipeline, we combine point-based and image-based rendering techniques to simultaneously improve rendering performance and visual quality. A set of interaction and locomotion techniques allows users to inspect a 3D point cloud in detail, for example by measuring distances and areas or by scaling and rotating visualized data sets. All rendering, interaction and locomotion techniques can be selected and configured dynamically, allowing the rendering system to be adapted to different use cases. Tests on data sets with up to 2.6 billion points show the feasibility and scalability of our approach.
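One common building block of such out-of-core point-cloud renderers is a per-frame selection of LOD nodes under a point budget, prioritized by projected screen size; the sketch below illustrates only that generic idea with an invented node structure and is not the authors' multi-pass VR pipeline.

```python
# Greedy per-frame LOD-node selection under a point budget (illustrative only).
import heapq
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    points: int                     # number of points stored in this node
    radius: float                   # bounding-sphere radius (metres)
    distance: float                 # distance from the camera (metres)
    children: List["Node"] = field(default_factory=list)

    def screen_size(self) -> float: # crude projected-size priority
        return self.radius / max(self.distance, 1e-3)

def select_nodes(root: Node, budget: int) -> List[Node]:
    """Greedy refinement: always expand the visually largest node next."""
    heap = [(-root.screen_size(), id(root), root)]
    selected, used = [], 0
    while heap and used < budget:
        _, _, node = heapq.heappop(heap)
        selected.append(node)
        used += node.points
        for child in node.children:
            heapq.heappush(heap, (-child.screen_size(), id(child), child))
    return selected

# Toy hierarchy: a coarse root with two finer children at different distances.
root = Node(100_000, 50.0, 20.0,
            children=[Node(400_000, 25.0, 10.0), Node(400_000, 25.0, 60.0)])
visible = select_nodes(root, budget=600_000)
print(sum(n.points for n in visible), "points scheduled this frame")
```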


Author(s):  
J. Yan ◽  
S. Zlatanova ◽  
M. Aleksandrov ◽  
A. A. Diakite ◽  
C. Pettit

3D modelling of precincts and cities has advanced significantly in the last decades as we move towards the concept of the Digital Twin. Many 3D city models have been created, but a large portion of them neglect to represent terrain and buildings accurately. Very often the surface is either considered planar or is not represented at all. On the other hand, many digital terrain models (DTMs) have been created as 2.5D triangular irregular networks (TINs) or grids for different applications such as water management, line-of-sight or shadow computation, tourism, land planning, telecommunication, and military operations and communications. 3D city models need to represent both the 3D objects and the terrain in one consistent model, but many challenges remain. A critical issue when integrating 3D objects and terrain is the identification of the valid intersection between the 2.5D terrain and the 3D objects. Commonly, 3D objects may partially float over or sink into the terrain, the depth of the underground parts might not be known, or the accuracy of the data sets might differ. This paper discusses some of these issues and presents an approach for the consistent 3D reconstruction of LOD1 models on the basis of 3D point clouds, a DTM, and 2D footprints of buildings. Such models are largely used for urban planning, city analytics and environmental analysis. The proposed method can easily be extended to higher LODs or BIM models.
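A minimal sketch of the LOD1 idea, assuming a footprint polygon, a DTM sampling function and a point cloud: the block is extruded from a base elevation taken from the DTM to a roof elevation estimated from the points inside the footprint. The data structures are illustrative, and the paper's handling of terrain/object intersections is not reproduced.

```python
# LOD1 block-model sketch: extrude a footprint between a DTM base elevation
# and a roof elevation derived from the point cloud inside the footprint.
import numpy as np
from matplotlib.path import Path

def lod1_block(footprint_xy, dtm_sample, cloud_xyz, roof_quantile=0.9):
    poly = Path(footprint_xy)
    inside = poly.contains_points(cloud_xyz[:, :2])
    base = float(np.median([dtm_sample(x, y) for x, y in footprint_xy]))
    roof = float(np.quantile(cloud_xyz[inside, 2], roof_quantile))
    return {"footprint": footprint_xy, "base": base, "height": roof - base}

# Toy example: flat terrain at 100 m, roof points around 109 m.
rng = np.random.default_rng(5)
cloud = np.c_[rng.uniform(0, 10, (5000, 2)), rng.normal(109.0, 0.2, 5000)]
footprint = [(2, 2), (8, 2), (8, 6), (2, 6)]
print(lod1_block(footprint, lambda x, y: 100.0, cloud))
```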

