A Flexible Inference Machine for Global Alignment of Wall Openings

2020 ◽  
Vol 12 (12) ◽  
pp. 1968
Author(s):  
Jiaqiang Li ◽  
Biao Xiong ◽  
Rongjun Qin ◽  
Armin Gruen

Openings such as windows and doors are essential components of architectural wall surfaces. Reconstructing them robustly from unstructured 3D point clouds remains a challenge because of occlusions, noise and non-uniformly distributed points. Current research primarily focuses on improving the robustness of detection and pays little attention to geometric correctness. To improve reconstruction quality, assumptions about the opening layout are usually applied as rules to support the reconstruction algorithm. The commonly used assumptions, such as strict grid and symmetry patterns, however, are not suitable in many cases. In this paper, we propose a novel approach, named an inference machine, to identify and use flexible rules in wall opening modelling. Our method first detects and models openings through a data-driven method and then refines the opening boundaries by global and flexible rules. The key is to identify the global flexible rules from the detected openings, composed of various combinations of alignments. As our method is agnostic to the type of architectural layout, it can be applied to both interior wall surfaces and exterior building facades. We demonstrate the flexibility of our approach in both outdoor and indoor scenes with a variety of opening layouts. The qualitative and quantitative evaluation results indicate the potential of the approach to serve as a general method for opening detection and modelling. However, as a data-driven method, it still suffers in the presence of occlusions and non-planar wall surfaces.
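One simple form such an alignment rule can take is snapping near-collinear opening edges to a shared coordinate. The sketch below is illustrative only (the function name, tolerance, and 1-D clustering strategy are assumptions, not the paper's algorithm): boundary coordinates, e.g. all left edges of openings on one wall, that fall within a tolerance of each other are clustered and snapped to their cluster mean.

```python
def align_edges(edges, tol=0.05):
    """Cluster 1-D edge coordinates and snap each cluster to its mean.

    `tol` is a hypothetical alignment tolerance in metres.
    """
    clusters = []
    for x in sorted(edges):
        # Greedily extend the current cluster while gaps stay small.
        if clusters and x - clusters[-1][-1] <= tol:
            clusters[-1].append(x)
        else:
            clusters.append([x])
    snapped = {}
    for cluster in clusters:
        mean = sum(cluster) / len(cluster)
        for x in cluster:
            snapped[x] = mean
    return [snapped[x] for x in edges]
```

For example, left edges at 0.98, 1.00 and 1.02 m snap to a shared 1.00 m, while an edge at 2.50 m is left untouched; the paper's contribution is deciding globally and flexibly which such groupings to enforce.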

Author(s):  
Fouad Amer ◽  
Mani Golparvar-Fard

Complete and accurate 3D monitoring of indoor construction progress using visual data is challenging. It requires (a) capturing a large number of overlapping images, which is time-consuming and labor-intensive, and (b) processing them with Structure from Motion (SfM) algorithms, which can be computationally expensive. To address these inefficiencies, this paper proposes a hybrid SfM-SLAM 3D reconstruction algorithm along with a decentralized data collection workflow to map indoor construction work locations in 3D at any desired frequency. The hybrid 3D reconstruction method is composed of an SfM pipeline coupled with Multi-View Stereo (MVS) to generate 3D point clouds, and a Simultaneous Localization and Mapping (SLAM) algorithm to register the separately formed models together. Our SfM and SLAM pipelines are built on binary Oriented FAST and Rotated BRIEF (ORB) descriptors to tightly couple these two separate reconstruction workflows and enable fast computation. To demonstrate the data capture workflow and validate the proposed method, a case study was conducted on a real-world construction site. Compared to state-of-the-art methods, our preliminary results show a decrease in both registration error and processing time, demonstrating the potential of combining daily images captured by different trades with weekly walkthrough videos captured by a field engineer for complete 3D visual monitoring of indoor construction operations.
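What makes binary ORB descriptors a natural bridge between the two pipelines is that matching reduces to Hamming distance, which is cheap to compute. A minimal sketch (not the authors' implementation; descriptors are represented here as plain integers for clarity):

```python
def hamming(d1, d2):
    """Hamming distance between two binary descriptors packed as ints."""
    return bin(d1 ^ d2).count("1")

def match_descriptors(query, reference, max_dist=40):
    """Brute-force nearest-neighbour matching on Hamming distance.

    `max_dist` is a hypothetical rejection threshold for bad matches.
    """
    matches = []
    for i, q in enumerate(query):
        j, dist = min(((j, hamming(q, r)) for j, r in enumerate(reference)),
                      key=lambda t: t[1])
        if dist <= max_dist:
            matches.append((i, j, dist))
    return matches
```

In practice such matching is done with optimized bitwise routines (e.g. a Hamming-norm brute-force matcher), but the principle is the same: sharing one binary descriptor lets SfM keyframes and SLAM keyframes be matched against each other directly.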


2021 ◽  
Vol 8 ◽  
Author(s):  
Elena Prado ◽  
Javier Cristobo ◽  
Augusto Rodríguez-Basalo ◽  
Pilar Ríos ◽  
Cristina Rodríguez-Cabello ◽  
...  

We describe the first application of a non-invasive and novel approach to estimate the growth rate of Asconema setubalense (Porifera, Hexactinellida) through the use of 3D photogrammetric methodology. Structure-from-Motion (SfM) techniques were applied to videos acquired with the Politolana ROTV in the El Cachucho Marine Protected Area (MPA) (Cantabrian Sea) on three different dates (2014, 2017, and 2019) over six years. With these data, a multi-temporal study was conducted within the framework of MPA monitoring. A complete 3D reconstruction of the deep-sea floor was achieved with Pix4D Mapper Pro software for each date. Having 3D point clouds of the study area enabled a series of measurements that were impossible to obtain from 2D images. In 3D space, the sizes (height, diameter, cup-perimeter, and cup-surface area) of several A. setubalense specimens were measured each year. The annual growth rates recorded ranged from zero ("no growth") for a large specimen, to an average of 2.2 cm year–1 in cup-diameter and 2.5 cm year–1 in height for developing specimens. Von Bertalanffy growth parameters were estimated. Taking into account the size indicators used in this study and based on the von Bertalanffy growth model, this sponge reaches 95% of its maximum size at 98 years of age. During the MPA monitoring program, a high number of specimens disappeared. This raised suspicions of a phenomenon affecting the survival of this species in the area. This type of image-based methodology does not cause damage or alterations to benthic communities and should be employed in vulnerable ecosystem studies and MPA monitoring.
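The 98-year figure follows directly from the von Bertalanffy model L(t) = L∞(1 − e^(−K·t)): reaching 95% of L∞ means e^(−K·t) = 0.05, so t₉₅ = ln(20)/K. A short sketch of this arithmetic (the growth constant K below is back-derived from the 95%-at-98-years statement only; L∞ is an arbitrary illustrative value):

```python
import math

def von_bertalanffy(t, L_inf, K, t0=0.0):
    """Von Bertalanffy size at age t: L_inf * (1 - exp(-K * (t - t0)))."""
    return L_inf * (1.0 - math.exp(-K * (t - t0)))

# 95% of maximum size is reached when 1 - exp(-K*t) = 0.95,
# i.e. t95 = ln(20) / K. Solving with t95 = 98 years:
t95 = 98.0
K = math.log(20.0) / t95   # ~0.0306 per year (illustrative, derived value)
```

With this K, `von_bertalanffy(98.0, L_inf, K)` evaluates to exactly 0.95·L∞, matching the statement in the abstract.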


2021 ◽  
Vol 13 (16) ◽  
pp. 3140
Author(s):  
Liman Liu ◽  
Jinjin Yu ◽  
Longyu Tan ◽  
Wanjuan Su ◽  
Lin Zhao ◽  
...  

In order to deal with the problem that some existing semantic segmentation networks for 3D point clouds generally have poor performance on small objects, a Spatial Eight-Quadrant Kernel Convolution (SEQKC) algorithm is proposed to enhance the ability of the network to extract fine-grained features from 3D point clouds. As a result, the semantic segmentation accuracy of small objects in indoor scenes can be improved. To be specific, in the spherical space of the point cloud neighborhoods, a kernel point with attached weights is constructed in each octant, the distances between the kernel point and the points in its neighborhood are calculated, and the distances and the kernel points' weights are used together to weight the point cloud features in the neighborhood space. In this way, the relationships between points are modeled, so that the local fine-grained features of the point clouds can be extracted by the SEQKC. Based on the SEQKC, we design a downsampling module for point clouds and embed it into classical semantic segmentation networks (PointNet++, PointSIFT and PointConv) for semantic segmentation. Experimental results on the benchmark dataset ScanNet V2 show that SEQKC-based PointNet++, PointSIFT and PointConv outperform the original networks by about 1.35–2.12% in terms of mIoU, and they effectively improve the semantic segmentation performance of the networks for small objects in indoor scenes; e.g., the segmentation accuracy of the small object "picture" is improved from 0.70% with PointNet++ to 10.37% with SEQKC-PointNet++.
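The octant-wise weighting idea can be sketched as follows (a hypothetical simplification, not the published SEQKC code: the linear distance falloff and the single scalar weight per octant are assumptions made for brevity). Each neighbour is assigned to one of eight octants by the signs of its offset from the centre point, then weighted by that octant's kernel weight and its distance from the centre:

```python
import math

def octant_index(dx, dy, dz):
    """Octant 0-7 from the signs of the offset coordinates."""
    return (dx >= 0) * 4 + (dy >= 0) * 2 + (dz >= 0) * 1

def weight_neighbours(centre, neighbours, kernel_weights, radius=1.0):
    """Weight each neighbour by its octant's kernel weight and distance.

    `kernel_weights` holds one scalar per octant (8 values); in the
    real network these would be learned parameters.
    """
    weighted = []
    for p in neighbours:
        dx, dy, dz = (p[i] - centre[i] for i in range(3))
        dist = math.sqrt(dx * dx + dy * dy + dz * dz)
        # Linear falloff with distance inside the spherical neighbourhood.
        w = kernel_weights[octant_index(dx, dy, dz)] * max(0.0, 1.0 - dist / radius)
        weighted.append(w)
    return weighted
```

The point of partitioning by octant is that points on opposite sides of the centre get distinct weights, which is what lets the convolution capture fine-grained local geometry.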


2019 ◽  
Vol 39 (2-3) ◽  
pp. 339-355 ◽  
Author(s):  
Renaud Dubé ◽  
Andrei Cramariuc ◽  
Daniel Dugas ◽  
Hannes Sommer ◽  
Marcin Dymczyk ◽  
...  

Precisely estimating a robot’s pose in a prior, global map is a fundamental capability for mobile robotics, e.g., autonomous driving or exploration in disaster zones. This task, however, remains challenging in unstructured, dynamic environments, where local features are not discriminative enough and global scene descriptors only provide coarse information. We therefore present SegMap: a map representation solution for localization and mapping based on the extraction of segments in 3D point clouds. Working at the level of segments offers increased invariance to viewpoint and local structural changes, and facilitates real-time processing of large-scale 3D data. SegMap exploits a single compact data-driven descriptor for performing multiple tasks: global localization, 3D dense map reconstruction, and semantic information extraction. The performance of SegMap is evaluated in multiple urban driving and search-and-rescue experiments. We show that the learned SegMap descriptor has superior segment retrieval capabilities compared with state-of-the-art handcrafted descriptors. As a consequence, we achieve higher localization accuracy and a 6% increase in recall over state-of-the-art handcrafted descriptors. These segment-based localizations allow us to reduce the open-loop odometry drift by up to 50%. SegMap is available open-source, along with easy-to-run demonstrations.
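At retrieval time, global localization with a learned segment descriptor amounts to nearest-neighbour search in descriptor space: each query segment's descriptor is compared against the map's descriptor database and the closest candidates are passed on to geometric verification. A minimal illustrative sketch (plain Euclidean brute force; the real system uses an efficient approximate index):

```python
import math

def euclidean(a, b):
    """Euclidean distance between two descriptor vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def retrieve_segments(query_desc, map_descs, k=3):
    """Return indices of the k map segments closest to the query descriptor."""
    ranked = sorted(range(len(map_descs)),
                    key=lambda i: euclidean(query_desc, map_descs[i]))
    return ranked[:k]
```

A more discriminative descriptor, as learned by SegMap, directly improves this retrieval step, which is where the reported recall gain over handcrafted descriptors comes from.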


Author(s):  
L. S. Runceanu ◽  
N. Haala

Abstract. This work addresses the automatic reconstruction of objects useful for BIM, such as walls, floors and ceilings, from meshed and texture-mapped 3D point clouds of indoor scenes. For this reason, we focus on the semantic segmentation of 3D indoor meshes as the initial step for the automatic generation of BIM models. Our investigations are based on the benchmark dataset ScanNet, which aims at the interpretation of 3D indoor scenes and provides 3D meshed representations as collected from low-cost range cameras. In our opinion, such RGB-D data has great potential for the automated reconstruction of BIM objects.


Author(s):  
F. Poux ◽  
C. Mattes ◽  
L. Kobbelt

Abstract. Point cloud data of indoor scenes is primarily composed of planar-dominant elements. Automatic shape segmentation is thus valuable to avoid labour-intensive labelling. This paper provides a fully unsupervised region-growing segmentation approach for efficient clustering of massive 3D point clouds. Our contribution targets a low-level grouping beneficial to object-based classification. We argue that the use of relevant segments for object-based classification has the potential to perform better in terms of recognition accuracy and computing time, and lowers the manual labelling time needed. However, fully unsupervised approaches are rare due to a lack of proper generalisation of user-defined parameters. We propose a self-learning heuristic process to define optimal parameters, and we validate our method on a large and richly annotated dataset (S3DIS), yielding an 88.1% average F1-score for object-based classification. It permits automatic segmentation of indoor point clouds with no prior knowledge at commercially viable performance and is the foundation for efficient indoor 3D modelling in cluttered point clouds.
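The core region-growing loop can be sketched as below. This is a generic textbook version, not the paper's code: the normal-similarity threshold `angle_cos` is exactly the kind of user-defined parameter the paper's self-learning heuristic would tune automatically, and the neighbourhood lists are assumed precomputed (e.g. by a k-d tree radius search).

```python
def region_growing(points, normals, neighbours, angle_cos=0.95):
    """Group point indices into regions of similar surface normals.

    `neighbours[i]` lists indices adjacent to point i; `normals` are
    unit vectors. Returns a list of regions (lists of point indices).
    """
    unvisited = set(range(len(points)))
    regions = []
    while unvisited:
        seed = unvisited.pop()
        region, stack = [seed], [seed]
        while stack:
            i = stack.pop()
            for j in list(unvisited):
                if j in neighbours[i]:
                    # Grow only across points whose normals nearly agree.
                    dot = sum(a * b for a, b in zip(normals[i], normals[j]))
                    if dot >= angle_cos:
                        unvisited.remove(j)
                        region.append(j)
                        stack.append(j)
        regions.append(region)
    return regions
```

On planar-dominant indoor scenes this naturally yields one region per wall, floor or ceiling patch, which is precisely the low-level grouping the object-based classifier then consumes.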


2021 ◽  
Vol 13 (10) ◽  
pp. 1946
Author(s):  
Pingbo Hu ◽  
Yiming Miao ◽  
Miaole Hou

Three-dimensional (3D) building models are closely related to human activities in urban environments. Due to the variations in building styles and the complexity of roof structures, automatically reconstructing 3D buildings with semantics and topology information still faces big challenges. In this paper, we present an automated modeling approach that can semantically decompose and reconstruct complex building light detection and ranging (LiDAR) point clouds into simple parametric structures, where each generated structure is an unambiguous roof semantic unit without overlapping planar primitives. The proposed method starts by extracting roof planes using a multi-label energy minimization solution, followed by constructing a roof connection graph associated with proximity, similarity, and consistency attributes. Furthermore, a progressive decomposition and reconstruction algorithm is introduced to generate explicit semantic subparts and a hierarchical representation of an isolated building. The proposed approach is evaluated on two different datasets and compared with state-of-the-art reconstruction techniques. The experimental modeling results, including the assessment using the International Society for Photogrammetry and Remote Sensing (ISPRS) benchmark LiDAR datasets, demonstrate that the proposed modeling method can efficiently decompose complex building models into interpretable semantic structures.
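Of the pipeline's stages, the roof connection graph is the easiest to illustrate. The sketch below covers only the proximity attribute and is a hypothetical simplification (plane segments reduced to point sets, a made-up gap threshold): roof planes become graph nodes, and an edge connects two planes whose closest points are within a gap threshold.

```python
def build_roof_graph(planes, max_gap=0.2):
    """Build a proximity-based adjacency graph over roof plane segments.

    `planes` is a list of point sets (one per extracted roof plane);
    returns the edge set as pairs of plane indices.
    """
    def min_gap(a, b):
        # Smallest point-to-point distance between two plane segments.
        return min(sum((p[i] - q[i]) ** 2 for i in range(3)) ** 0.5
                   for p in a for q in b)
    edges = set()
    for i in range(len(planes)):
        for j in range(i + 1, len(planes)):
            if min_gap(planes[i], planes[j]) <= max_gap:
                edges.add((i, j))
    return edges
```

In the full method, each edge would additionally carry similarity and consistency attributes, and the progressive decomposition operates on this graph to peel off semantic subparts.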


Author(s):  
SU YAN ◽  
Lei Yu

Abstract Simultaneous Localization and Mapping (SLAM) is one of the key technologies used in sweepers, autonomous vehicles, virtual reality and other fields. This paper presents a dense RGB-D SLAM reconstruction algorithm based on a convolutional neural network with multi-layer image-invariant feature transformation. The main contribution of the system lies in the construction of a convolutional neural network based on multi-layer image-invariant features, which optimizes the extraction of ORB (Oriented FAST and Rotated BRIEF) feature points and the reconstruction quality. After feature point matching, pose estimation, loop detection and other steps, the 3D point clouds are finally spliced to construct a complete and smooth spatial model. The system improves accuracy and robustness in feature point processing and pose estimation. Comparative experiments show that the optimized algorithm saves 0.093 s compared to the ordinary extraction algorithm while maintaining a high accuracy rate. The reconstruction experiments show that the spatial models have clearer details and smoother connections with no fault layers compared to the original ones. The reconstruction results are generally better than those of other common algorithms, such as Kintinuous, ElasticFusion and ORB-SLAM2 dense reconstruction.
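The final splicing step mentioned above is a standard rigid-body registration: each frame's local point cloud is transformed by its estimated pose (rotation R, translation t) into the global frame and appended to the model. A minimal sketch (generic, not the paper's implementation; pure-Python lists stand in for real point cloud structures):

```python
def transform(points, R, t):
    """Apply the rigid transform p' = R p + t to each 3-D point."""
    return [tuple(sum(R[i][k] * p[k] for k in range(3)) + t[i]
                  for i in range(3))
            for p in points]

def splice(frames):
    """Merge frames into one global cloud.

    `frames` is a list of (points, R, t) with the pose estimated by
    the SLAM front end for each frame.
    """
    model = []
    for points, R, t in frames:
        model.extend(transform(points, R, t))
    return model
```

Errors in the estimated poses show up directly as the "fault layers" the abstract describes, which is why improving feature extraction and pose estimation upstream yields visibly smoother spliced models.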

