FAÇADE RECONSTRUCTION USING GEOMETRIC AND RADIOMETRIC POINT CLOUD INFORMATION

Author(s):  
P. Tutzauer ◽  
N. Haala

This paper aims at façade reconstruction for the subsequent enrichment of LOD2 building models. We use point clouds from dense image matching with imagery from both Mobile Mapping systems and oblique airborne cameras. The interpretation of façade structures is based on a geometric reconstruction. For this purpose, a pre-segmentation of the point cloud into façade points and non-façade points is necessary. We present an approach for point clouds with limited geometric accuracy, where a geometric segmentation might fail. Our contribution is a radiometric segmentation approach: via local point features based on clustering in hue space, the point cloud is segmented into façade points and non-façade points. This way, the initial geometric reconstruction step can be bypassed, and point clouds with limited accuracy can still serve as input for the façade reconstruction and modelling approach.
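A minimal sketch of the hue-space idea, assuming per-point RGB colours are available: the hue channel is clustered and the dominant cluster is taken as façade. The two-cluster setup and the "largest cluster = façade" rule are illustrative assumptions, not the authors' exact procedure.

```python
# Radiometric pre-segmentation of a coloured point cloud by clustering hues.
# Assumptions: colours in [0,1], two clusters, dominant cluster = facade.
import numpy as np
from matplotlib.colors import rgb_to_hsv
from sklearn.cluster import KMeans

def segment_by_hue(points_xyz, colors_rgb, n_clusters=2):
    """points_xyz: (N,3) coordinates, colors_rgb: (N,3) RGB in [0,1]."""
    hue = rgb_to_hsv(colors_rgb)[:, 0].reshape(-1, 1)    # keep only the hue channel
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(hue)
    facade_label = np.bincount(labels).argmax()          # assume dominant hue cluster = facade
    facade_mask = labels == facade_label
    return points_xyz[facade_mask], points_xyz[~facade_mask]
```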

Author(s):  
W. Ostrowski ◽  
M. Pilarska ◽  
J. Charyton ◽  
K. Bakuła

Creating 3D building models at large scale is becoming more popular and finds many applications. Nowadays, the broad term “3D building models” can be applied to several types of products: the well-known CityGML solid models (available at several Levels of Detail), which are mainly generated from Airborne Laser Scanning (ALS) data, as well as 3D mesh models that can be created from both nadir and oblique aerial images. City authorities and national mapping agencies are interested in obtaining such 3D building models. Apart from the completeness of the models, the accuracy aspect is also important. The final accuracy of a building model depends on various factors (accuracy of the source data, complexity of the roof shapes, etc.). In this paper, a methodology for the inspection of datasets containing 3D models is presented. The proposed approach checks every building in the dataset against ALS point clouds, testing both accuracy and level of detail. Analysing statistical parameters of normal heights for the reference point cloud and the tested planes, together with a segmentation of the point cloud, provides a tool that can indicate which buildings and which roof planes do not fulfill the requirements of model accuracy and detail correctness. The proposed method was tested on two datasets: a solid model and a mesh model.
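A minimal sketch of the per-plane accuracy test described here: signed point-to-plane distances between the ALS points assigned to a roof plane and that plane, summarised as an RMSE and flagged against a tolerance. The point-to-plane assignment and the 0.10 m tolerance are assumptions for illustration only.

```python
# Check one modelled roof plane against its reference ALS points.
# Assumptions: points already assigned to the plane; tolerance is illustrative.
import numpy as np

def check_roof_plane(als_points, plane_point, plane_normal, tolerance=0.10):
    """als_points: (N,3); plane defined by a point on it and a normal vector."""
    n = plane_normal / np.linalg.norm(plane_normal)
    d = (als_points - plane_point) @ n            # signed point-to-plane distances [m]
    rmse = float(np.sqrt(np.mean(d ** 2)))
    return rmse, rmse <= tolerance                # (accuracy measure, passes the check?)
```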


Author(s):  
A. Adam ◽  
L. Grammatikopoulos ◽  
G. Karras ◽  
E. Protopapadakis ◽  
K. Karantzalos

Abstract. 3D semantic segmentation is the joint task of partitioning a point cloud into semantically consistent 3D regions and assigning them to a semantic class/label. While traditional approaches for 3D semantic segmentation typically rely only on structural information of the objects (i.e. object geometry and shape), in recent years many techniques combining both visual and geometric features have emerged, taking advantage of the progress in SfM/MVS algorithms that reconstruct point clouds from multiple overlapping images. Our work describes a hybrid methodology for 3D semantic segmentation, relying on both 2D and 3D space and aiming at exploring whether image selection is critical for the accuracy of 3D semantic segmentation of point clouds. Experimental results are demonstrated on a freely available online dataset depicting city blocks around Paris. The experimental procedure not only validates that hybrid (geometric and visual) features can achieve a more accurate semantic segmentation, but also demonstrates the importance of selecting the most appropriate view for the 2D feature extraction.
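A minimal sketch of one plausible "most appropriate view" rule, assuming camera centres and per-point normals are known: for each 3D point, pick the camera that sees the point most head-on, i.e. whose point-to-camera direction is best aligned with the point normal. The scoring rule and function name are assumptions, not the paper's exact criterion.

```python
# Pick a 2D view per 3D point for visual feature extraction.
# Assumption: "best view" = camera most frontal to the local surface normal.
import numpy as np

def best_view_per_point(points, normals, cam_centers):
    """points, normals: (N,3) with unit normals; cam_centers: (M,3). Returns (N,) camera indices."""
    views = cam_centers[None, :, :] - points[:, None, :]        # (N,M,3) point -> camera rays
    views /= np.linalg.norm(views, axis=-1, keepdims=True)
    score = np.einsum('nmk,nk->nm', views, normals)             # cosine of angle to the normal
    return score.argmax(axis=1)                                 # index of the most frontal camera
```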


Author(s):  
B. Xiong ◽  
S. Oude Elberink ◽  
G. Vosselman

Multi-View Stereo (MVS) technology has improved significantly in the last decade, providing much denser and more accurate point clouds than before. Such point clouds have become valuable data for modelling LOD2 buildings. However, they are still not accurate enough to replace lidar point clouds. Their relatively high level of noise prevents the accurate interpretation of roof faces; e.g., a single planar roof face yields an uneven surface of points and is therefore segmented into many parts. The derived roof topology graphs are quite erroneous and cannot be used to model the buildings with current methods based on roof topology graphs. We propose a parameter-free algorithm to robustly and precisely derive roof structures and building models. The points connecting roof segments are searched and grouped as structure points and structure boundaries, which represent the roof corners and boundaries. Their geometries are computed from the plane equations of their attached roof segments. If the data are available, the algorithm guarantees complete building structures in noisy point clouds while achieving globally optimized models. Experiments show that, compared to roof-topology-graph-based methods, the novel algorithm achieves consistent quality for both lidar and photogrammetric point clouds. Moreover, the new method is fully automatic and is a good alternative to model-driven methods when processing time is important.
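A minimal sketch of the geometric step mentioned above: a structure point (roof corner) positioned from the plane equations n·x = d of its attached roof segments. A least-squares solve is used here so the same sketch also handles more than three attached planes; this is an illustrative formulation, not the authors' implementation.

```python
# Corner position from the plane equations of its attached roof segments.
# Assumption: planes given as unit normals and offsets with n . x = d.
import numpy as np

def structure_point(normals, offsets):
    """normals: (K,3) unit normals, offsets: (K,) offsets, K >= 3."""
    A = np.asarray(normals, dtype=float)
    b = np.asarray(offsets, dtype=float)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)   # least-squares intersection of the planes
    return x
```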


Author(s):  
M. Chizhova ◽  
A. Gurianov ◽  
M. Hess ◽  
T. Luhmann ◽  
A. Brunn ◽  
...  

For the interpretation of point clouds, the semantic definition of segments extracted from point clouds or images is a common problem. Usually, the semantics of geometrically pre-segmented point cloud elements are determined using probabilistic networks and scene databases. The proposed semantic segmentation method is based on the psychological human interpretation of geometric objects, especially on fundamental rules of primary comprehension. Starting from these rules, buildings can be classified quite well and simply by a human operator (e.g. an architect) into different building types and structural elements (dome, nave, transept, etc.), including particular building parts which are visually detected. The key part of the procedure is a novel method based on hashing, where point cloud projections are transformed into binary pixel representations. The segmentation approach, demonstrated on the example of classical Orthodox churches, is also suitable for other buildings and objects characterized by a particular typology in their construction (e.g. industrial objects in standardized environments with strict component design allowing clear semantic modelling).
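A minimal sketch of the projection-hashing idea, under assumptions of my own: a point-cloud projection (here the XZ "front view") is rasterised into a binary occupancy image and that bit pattern is hashed. The grid size, projection plane, and use of SHA-256 are illustrative choices, not the method described in the paper.

```python
# Rasterise a point-cloud projection into a binary image and hash its bits.
# Assumptions: XZ projection, 64x64 grid, SHA-256 as the hash function.
import hashlib
import numpy as np

def binary_projection_hash(points, grid=64):
    """points: (N,3). Returns (binary occupancy image, hex digest of its bits)."""
    xz = points[:, [0, 2]]
    xz = (xz - xz.min(axis=0)) / (np.ptp(xz, axis=0) + 1e-9)   # normalise to [0,1]
    idx = np.minimum((xz * grid).astype(int), grid - 1)
    img = np.zeros((grid, grid), dtype=np.uint8)
    img[idx[:, 1], idx[:, 0]] = 1                               # mark occupied pixels
    digest = hashlib.sha256(np.packbits(img).tobytes()).hexdigest()
    return img, digest
```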


Author(s):  
L. Moradi ◽  
M. Saadatseresht

Abstract. In this paper, a model for simultaneous registration and 3D modelling of Velodyne VLP 32e laser scanner point clouds based on least squares adjustment is developed. Most of the proposed methods for the registration of point clouds obtained by mobile mapping systems target navigation and visualization applications; they usually do not pay enough attention to geometric accuracy, error propagation, and weight analysis. In addition, these methods use point correspondence solutions which increase the computation time and decrease the accuracy. Therefore, the purpose of this paper is to develop a model based on least squares adjustment that focuses on the weights of the plane parameters created by a robust least squares fitting algorithm. It simultaneously creates a 3D environmental model and registers the point clouds. To do this, it utilizes both point cloud voxelization and differential plane techniques. The results illustrate the high capability of the proposed solution: with the optimum weight of the plane parameters set to 100, the average distance between two scans can drop below 10 mm. In addition, the best voxel size was 10 cm, which is twice the point cloud resolution.
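A minimal sketch of robust plane fitting inside one voxel, which is the kind of step that would supply plane parameters and weights to the adjustment: an iteratively reweighted PCA fit that down-weights off-plane points. The Huber-style weighting, threshold and iteration count are assumptions, not the authors' algorithm.

```python
# Robust plane fit for the points of one voxel (iteratively reweighted PCA).
# Assumptions: Huber-style weights with k = 1 cm, 5 reweighting iterations.
import numpy as np

def robust_plane_fit(points, iters=5, k=0.01):
    """points: (N,3) inside one voxel. Returns (unit normal n, offset d) with n . x = d."""
    w = np.ones(len(points))
    for _ in range(iters):
        c = np.average(points, axis=0, weights=w)
        cov = (w[:, None] * (points - c)).T @ (points - c)
        n = np.linalg.eigh(cov)[1][:, 0]                  # direction of smallest variance
        r = np.abs((points - c) @ n)                      # point-to-plane residuals
        w = np.where(r < k, 1.0, k / (r + 1e-12))         # down-weight off-plane points
    return n, float(n @ c)
```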


Sensors ◽  
2018 ◽  
Vol 18 (11) ◽  
pp. 3625 ◽  
Author(s):  
Dawei Li ◽  
Yan Cao ◽  
Xue-song Tang ◽  
Siyuan Yan ◽  
Xin Cai

Leaves account for the largest proportion of all organ areas for most kinds of plants and comprise the main part of the photosynthetically active material in a plant. Observation of individual leaves can help to recognize their growth status and measure complex phenotypic traits. Current image-based leaf segmentation methods have problems: they are highly restricted in the species they handle and are vulnerable to canopy occlusion. In this work, we propose an individual leaf segmentation approach for dense plant point clouds using facet over-segmentation and facet region growing. The approach can be divided into three steps: (1) point cloud pre-processing, (2) facet over-segmentation, and (3) facet region growing for individual leaf segmentation. The experimental results show that the proposed method is effective and efficient in segmenting individual leaves from 3D point clouds of greenhouse ornamentals such as Epipremnum aureum, Monstera deliciosa, and Calathea makoyana, and the average precision and recall are both above 90%. The results also reveal the wide applicability of the proposed methodology to point clouds scanned with different kinds of 3D imaging systems, such as stereo vision and Kinect v2. Moreover, our method is potentially applicable to a broad range of applications that aim at segmenting regular surfaces and objects from a point cloud.
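A minimal sketch of the facet region growing step, assuming the over-segmentation already produced facets with unit normals and an adjacency graph: adjacent facets are merged into the same leaf candidate while their normals stay nearly parallel. The adjacency representation and the 15-degree threshold are illustrative assumptions.

```python
# Grow leaf regions over facets by breadth-first merging of similar neighbours.
# Assumptions: adjacency given as facet -> set of neighbour facets; 15 deg threshold.
import numpy as np
from collections import deque

def grow_leaf_regions(facet_normals, adjacency, angle_thresh_deg=15.0):
    """facet_normals: (F,3) unit normals. Returns (F,) region labels."""
    cos_t = np.cos(np.radians(angle_thresh_deg))
    labels = -np.ones(len(facet_normals), dtype=int)
    region = 0
    for seed in range(len(facet_normals)):
        if labels[seed] != -1:
            continue
        labels[seed] = region
        queue = deque([seed])
        while queue:
            f = queue.popleft()
            for g in adjacency.get(f, ()):
                if labels[g] == -1 and facet_normals[f] @ facet_normals[g] > cos_t:
                    labels[g] = region        # nearly parallel neighbour: same leaf
                    queue.append(g)
        region += 1
    return labels
```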


Author(s):  
A. Kharroubi ◽  
R. Hajji ◽  
R. Billen ◽  
F. Poux

Abstract. With the increasing number of 3D applications using immersive technologies such as virtual, augmented and mixed reality, it is very interesting to create better ways to integrate unstructured 3D data such as point clouds as a source of data. Indeed, this can lead to an efficient workflow from 3D capture to 3D immersive environment creation, without the need to derive 3D models or run lengthy optimization pipelines. In this paper, the main focus is on the direct classification and integration of massive 3D point clouds in a virtual reality (VR) environment. The emphasis is put on leveraging open-source frameworks for an easy replication of the findings. First, we develop a semi-automatic segmentation approach to provide semantic descriptors (mainly classes) to groups of points. We then build an octree data structure leveraged through out-of-core algorithms to continuously load, in real time, only the points that are in the VR user's field of view. Then, we provide an open-source solution using Unity with a user interface for VR point cloud interaction and visualisation. Finally, we provide a full semantic VR data integration enhanced through developed shaders for future spatio-semantic queries. We tested our approach on several datasets, including a point cloud of 2.3 billion points representing the heritage site of the castle of Jehay (Belgium). The results underline the efficiency and performance of the solution for visualizing classified massive point clouds in virtual environments at more than 100 frames per second.
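A minimal sketch of the out-of-core selection idea, under assumptions of my own: each frame, keep only the octree nodes whose bounding spheres intersect the VR view frustum, up to a point budget, and prefer the largest nodes. The frustum-plane representation, the budget value and the priority rule are illustrative, not the paper's Unity implementation.

```python
# Per-frame selection of octree nodes to stream into the VR viewer.
# Assumptions: frustum as 6 planes (a,b,c,d) with inside meaning a*x+b*y+c*z+d >= 0.
import numpy as np

def nodes_to_load(node_centers, node_radii, node_point_counts, frustum_planes,
                  budget=5_000_000):
    """node_centers: (K,3), node_radii: (K,), node_point_counts: (K,), frustum_planes: (6,4)."""
    c = np.hstack([node_centers, np.ones((len(node_centers), 1))])
    dist = c @ frustum_planes.T                               # signed distance to each plane
    visible = np.all(dist >= -node_radii[:, None], axis=1)    # sphere-vs-frustum test
    order = np.argsort(-node_point_counts * visible)          # biggest visible nodes first
    keep, total = [], 0
    for i in order:
        if not visible[i] or total + node_point_counts[i] > budget:
            continue
        keep.append(i)
        total += node_point_counts[i]
    return keep
```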


2021 ◽  
Vol 13 (22) ◽  
pp. 4497
Author(s):  
Jianjun Zou ◽  
Zhenxin Zhang ◽  
Dong Chen ◽  
Qinghua Li ◽  
Lan Sun ◽  
...  

Point cloud registration is the foundation and a key step for many vital applications, such as digital cities, autonomous driving, passive positioning, and navigation. The differences between spatial objects and the structural complexity of object surfaces are the main challenges for the registration problem. In this paper, we propose a graph attention capsule model (named GACM) for the efficient registration of terrestrial laser scanning (TLS) point clouds in urban scenes, which fuses graph attention convolution and a three-dimensional (3D) capsule network to extract local point cloud features and obtain 3D feature descriptors. These descriptors can take into account differences in spatial structure and point density between objects and make the spatial features of ground objects more prominent. During training, we used both matched points and non-matched points to train the model. During registration testing, the points in the neighborhood of each keypoint were fed to the trained network to obtain feature descriptors; the rotation and translation matrix was then calculated after matching descriptors with a K-dimensional (KD) tree and applying the random sample consensus (RANSAC) algorithm. Experiments show that the proposed method achieves more efficient registration results and higher robustness than other state-of-the-art registration methods in the pairwise registration of point clouds.
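A minimal sketch of the final alignment stage described above: nearest-neighbour descriptor matching with a KD tree, followed by a RANSAC loop around a closed-form (SVD/Kabsch) rigid-transform estimate. The iteration count, inlier threshold and 3-point sampling are assumptions; the descriptors themselves would come from the trained GACM network.

```python
# Descriptor matching (KD tree) + RANSAC rigid-transform estimation.
# Assumptions: keypoints (N,3)/(M,3) with learned descriptors; metric thresholds illustrative.
import numpy as np
from scipy.spatial import cKDTree

def rigid_from_pairs(src, dst):
    """Closed-form (Kabsch) rotation R and translation t with dst ~= R @ src + t."""
    cs, cd = src.mean(0), dst.mean(0)
    U, _, Vt = np.linalg.svd((src - cs).T @ (dst - cd))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def ransac_register(src_kp, dst_kp, src_desc, dst_desc, iters=1000, thresh=0.1):
    idx = cKDTree(dst_desc).query(src_desc)[1]     # nearest neighbours in descriptor space
    dst_m = dst_kp[idx]
    best, best_inliers = (np.eye(3), np.zeros(3)), 0
    rng = np.random.default_rng(0)
    for _ in range(iters):
        s = rng.choice(len(src_kp), 3, replace=False)
        R, t = rigid_from_pairs(src_kp[s], dst_m[s])
        inliers = np.sum(np.linalg.norm(src_kp @ R.T + t - dst_m, axis=1) < thresh)
        if inliers > best_inliers:
            best, best_inliers = (R, t), inliers
    return best                                     # (R, t) with the most inliers
```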


Author(s):  
Beril Sirmacek ◽  
Yueqian Shen ◽  
Roderik Lindenbergh ◽  
Sisi Zlatanova ◽  
Abdoulaye Diakite

We present a comparison of point cloud generation and data quality for the Zebedee (Zeb1) and Leica C10 devices, used in the same building interior. Both sensor devices come with different practical and technical advantages. As could be expected, these advantages come with some drawbacks. Therefore, depending on the requirements of the project, it is important to have a vision of what to expect from different sensors. In this paper, we provide a detailed analysis of the point clouds of the same room interior acquired with the Zeb1 and Leica C10 sensors. First, it is visually assessed how different features appear in both the Zeb1 and Leica C10 point clouds. Next, a quantitative analysis is given by comparing local point density, local noise level and stability of local normals. Finally, a simple 3D room plan is extracted from both the Zeb1 and the Leica C10 point clouds, and the lengths of constructed line segments connecting corners of the room are compared. The results show that the Zeb1 is far superior in ease of data acquisition. No heavy handling, hardly any measurement planning and no point cloud registration is required from the operator. The resulting point cloud has a quality in the order of centimeters, which is fine for generating a 3D interior model of a building. Our results also clearly show that fine details of, for example, ornaments are invisible in the Zeb1 data. If point clouds with a quality in the order of millimeters are required, a high-end laser scanner like the Leica C10 is still needed, in combination with a more sophisticated, time-consuming and elaborate data acquisition and processing approach.
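A minimal sketch of two of the quantitative measures mentioned, computed the same way for both point clouds: local point density (points per unit volume within a radius) and local noise (RMS point-to-plane residual of a local PCA plane). The 5 cm neighbourhood radius is an illustrative assumption, not the paper's parameter.

```python
# Per-point local density and noise level for a point cloud.
# Assumptions: 5 cm spherical neighbourhoods, PCA plane as the local surface model.
import numpy as np
from scipy.spatial import cKDTree

def local_density_and_noise(points, radius=0.05):
    """points: (N,3) in metres. Returns (density [pts/m^3], noise RMS [m]) per point."""
    tree = cKDTree(points)
    density, noise = np.zeros(len(points)), np.zeros(len(points))
    for i, nbr_idx in enumerate(tree.query_ball_point(points, radius)):
        nbrs = points[nbr_idx]
        density[i] = len(nbrs) / (4.0 / 3.0 * np.pi * radius ** 3)
        if len(nbrs) >= 3:
            c = nbrs.mean(0)
            n = np.linalg.eigh((nbrs - c).T @ (nbrs - c))[1][:, 0]   # local plane normal
            noise[i] = np.sqrt(np.mean(((nbrs - c) @ n) ** 2))       # RMS off-plane residual
    return density, noise
```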


Author(s):  
E. Nocerino ◽  
F. Poiesi ◽  
A. Locher ◽  
Y. T. Tefera ◽  
F. Remondino ◽  
...  

The paper presents a collaborative image-based 3D reconstruction pipeline that performs image acquisition with a smartphone and geometric 3D reconstruction on a server during concurrent or disjoint acquisition sessions. Images are selected from the video feed of the smartphone’s camera based on their quality and novelty. The smartphone app provides on-the-fly reconstruction feedback to users co-involved in the acquisitions. The server runs an incremental SfM algorithm that processes the received images by seamlessly merging them into a single sparse point cloud using bundle adjustment. A dense image matching algorithm can be launched to derive denser point clouds. The reconstruction details, experiments and performance evaluation are presented and discussed.
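A minimal sketch of one plausible quality-and-novelty frame filter, under assumptions of my own: a frame is kept when it is sharp enough (variance of the Laplacian) and sufficiently different from the last kept frame (low grayscale correlation). Both thresholds and the similarity measure are illustrative, not the app's actual selection criteria.

```python
# Keep a video frame only if it is sharp and novel relative to the last kept frame.
# Assumptions: blur threshold 100, correlation-based novelty test on 64x64 thumbnails.
import cv2
import numpy as np

def select_frame(frame_bgr, last_kept_gray, blur_thresh=100.0, novelty_thresh=0.92):
    """Returns (keep?, updated last kept grayscale frame)."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()            # quality check
    if sharpness < blur_thresh:
        return False, last_kept_gray
    if last_kept_gray is not None:
        a = cv2.resize(gray, (64, 64)).astype(np.float32).ravel()
        b = cv2.resize(last_kept_gray, (64, 64)).astype(np.float32).ravel()
        similarity = float(np.corrcoef(a, b)[0, 1])              # novelty check
        if similarity > novelty_thresh:
            return False, last_kept_gray
    return True, gray
```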

