Automatic Reconstruction of Multi-Level Indoor Spaces from Point Cloud and Trajectory

Sensors ◽  
2021 ◽  
Vol 21 (10) ◽  
pp. 3493
Author(s):  
Gahyeon Lim ◽  
Nakju Doh

Remarkable progress has been made in recent years in modeling methods for indoor spaces, with a focus on the reconstruction of complex environments such as multi-room and multi-level buildings. Existing methods represent indoor structure models as a combination of several sub-spaces, constructed by room segmentation or horizontal slicing approaches that divide multi-room or multi-level building environments into several segments. In this study, we propose an automatic reconstruction method for multi-level indoor spaces with unified models, including inter-room and inter-floor connections, from a point cloud and trajectory. We construct structural points from the registered point cloud and extract piece-wise planar segments from the structural points. Then, a three-dimensional space decomposition is conducted and water-tight meshes are generated by energy minimization using a graph cut algorithm. The data term of the energy function is expressed as a difference in visibility between each decomposed space and the trajectory. The proposed method allows modeling of indoor spaces in complex environments, such as multi-room, room-less, and multi-level buildings. The performance of the proposed approach is evaluated on seven indoor space datasets.
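
As a rough illustration of the energy-minimization step described above, the sketch below labels decomposed cells as interior or exterior with a single graph cut using the PyMaxflow library; the cell adjacency structure, the visibility-derived unary costs, and the constant smoothness weight are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch: labeling decomposed cells with a graph cut, in the spirit of the
# energy minimization described above. Inputs (unary costs, adjacency) are assumed.
import maxflow
import numpy as np

def label_cells(unary_inside, unary_outside, adjacency, smoothness=1.0):
    """unary_*: (n_cells,) data costs; adjacency: list of (i, j) neighboring cell pairs."""
    n = len(unary_inside)
    g = maxflow.Graph[float]()
    nodes = g.add_nodes(n)
    for i in range(n):
        # Terminal edge capacities encode the data term (e.g. visibility agreement
        # between the cell and the trajectory).
        g.add_tedge(nodes[i], unary_outside[i], unary_inside[i])
    for i, j in adjacency:
        # Pairwise edges penalize label changes across shared facets (smoothness term).
        g.add_edge(nodes[i], nodes[j], smoothness, smoothness)
    g.maxflow()
    # Under this capacity assignment, segment 0 corresponds to "inside", 1 to "outside".
    return np.array([g.get_segment(nodes[i]) for i in range(n)])
```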

Author(s):  
B. Gaurier ◽  
Ph. Druault ◽  
M. Ikhennicheu ◽  
G. Germain

In major tidal energy sites such as Alderney Race, turbulence intensity is high and velocity fluctuations may have a significant impact on marine turbines. To better understand such phenomena, a three-bladed turbine model is positioned in the wake of a generic wall-mounted obstacle representative of in situ bathymetric variation. From two-dimensional Particle Image Velocimetry planes, the time-averaged velocity in the wake of the obstacle is reconstructed in three-dimensional space. The reconstruction method is based on Proper Orthogonal Decomposition and gives access to a representation of the mean flow field and the associated shear. The effect of the velocity gradient on the turbine blade root force is then observed for four turbine locations in the wake of the obstacle. The time-averaged blade root force decreases whereas its standard deviation increases as the distance to the obstacle increases. The angular distribution of this phase-averaged force is shown to be non-homogeneous, with variations of about 20% of its time-average during a turbine rotation cycle. Such force variations due to velocity shear will have significant consequences in terms of blade fatigue. This article is part of the theme issue ‘New insights on tidal dynamics and tidal energy harvesting in the Alderney Race’.
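
The snippet below is a minimal snapshot-POD sketch in NumPy, included only to illustrate the kind of modal decomposition underlying the reconstruction; the snapshot layout, mode count, and toy data are assumptions, and the authors' actual procedure for assembling the 2D planes into a 3D field is not reproduced here.

```python
# Minimal snapshot POD via SVD: each column of `snapshots` is one flattened PIV field.
import numpy as np

def pod_reconstruct(snapshots, n_modes):
    mean = snapshots.mean(axis=1, keepdims=True)           # time-averaged field
    fluctuations = snapshots - mean                         # remove the mean flow
    U, s, Vt = np.linalg.svd(fluctuations, full_matrices=False)
    # Keep the first n_modes spatial modes and their temporal coefficients.
    reconstruction = mean + U[:, :n_modes] @ np.diag(s[:n_modes]) @ Vt[:n_modes, :]
    return reconstruction, U[:, :n_modes], s

snaps = np.random.rand(2000, 150)            # 2000 grid points, 150 snapshots (toy data)
recon, modes, singular_values = pod_reconstruct(snaps, n_modes=5)
```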


2012 ◽  
Vol 26 (20) ◽  
pp. 1250120 ◽  
Author(s):  
FUZHONG NIAN ◽  
XINGYUAN WANG

Projective synchronization addresses the synchronization of systems that evolve in the same orientation; in practice, however, systems sharing the same orientation are the minority, and most evolve in different orientations. This paper investigates the latter case, proposes the concept of rotating synchronization, and verifies its necessity and feasibility through theoretical analysis and numerical simulations. Three conclusions are drawn. First, in three-dimensional space, two arbitrary nonlinear chaotic systems that evolve in different orientations can ultimately achieve synchronization. Second, projective synchronization is a special case of rotating synchronization, so the field of application of rotating synchronization is broader than that of the former. Third, the overall evolution can be reflected by the evolution of a single state variable, which exhibits self-similarity; this is the same basic idea as the phase space reconstruction method, indicating that the same result is reached from different approaches, so our method and the phase space reconstruction method corroborate each other.
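
Since the third conclusion appeals to the phase space reconstruction method, a minimal delay-embedding (Takens-style) sketch is given below for reference; the signal and embedding parameters are illustrative only and are not taken from the paper.

```python
# Reconstruct a three-dimensional phase space from a single scalar state variable
# using delay coordinates.
import numpy as np

def delay_embed(x, dim=3, tau=10):
    """Embed a scalar time series x into `dim`-dimensional delay coordinates."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

t = np.linspace(0, 100, 10000)
x = np.sin(t) + 0.5 * np.sin(2.2 * t)        # toy signal standing in for one state variable
embedded = delay_embed(x, dim=3, tau=25)      # points in the reconstructed 3D space
```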


2017 ◽  
Vol 42 (3) ◽  
pp. 219-237 ◽  
Author(s):  
Witold Czajewski ◽  
Krzysztof Kołomyjec

This paper describes the results of experiments on the detection and recognition of 3D objects in RGB-D images provided by the Microsoft Kinect sensor. While the studies focus on single-image use, sequences of frames are also considered and evaluated. Observed objects are categorized based on both geometrical and visual cues, but the emphasis is laid on the performance of the point cloud matching method. To this end, a rarely used approach is applied, consisting of independent VFH and CRH descriptor matching, followed by the ICP and HV algorithms from the Point Cloud Library. Successfully recognized objects are then subjected to a classical 2D analysis based on color histogram comparison, performed exclusively against objects in the same geometrical category. The proposed two-stage approach makes it possible to distinguish objects of similar geometry and different visual appearance, such as soda cans of various brands. By separating the geometry and color identification phases, the system is still able to categorize objects based on their geometry even if there is no color match. The recognized objects are then localized in three-dimensional space and autonomously grasped by a manipulator. To evaluate this approach, a dedicated validation set was created, and a selected scene from the Washington RGB-D Object Dataset was additionally used.
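
A hedged sketch of the second, color-based stage follows, using OpenCV histogram comparison in HSV space; the function names, bin counts, and similarity metric are assumptions rather than the exact parameters used in the paper.

```python
# Compare HSV color histograms of a recognized object crop against reference crops
# from the same geometric category.
import cv2
import numpy as np

def color_similarity(img_a, img_b, bins=(30, 32)):
    """Histogram correlation over the H and S channels of two BGR image crops."""
    hists = []
    for img in (img_a, img_b):
        hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
        h = cv2.calcHist([hsv], [0, 1], None, list(bins), [0, 180, 0, 256])
        cv2.normalize(h, h)
        hists.append(h)
    return cv2.compareHist(hists[0], hists[1], cv2.HISTCMP_CORREL)

# Usage idea: among candidates of the same geometric category, pick the best color match:
# best = max(reference_crops, key=lambda ref: color_similarity(detected_crop, ref))
```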


2014 ◽  
Vol 543-547 ◽  
pp. 2656-2659
Author(s):  
Bo Ren ◽  
Ji Xin Yang ◽  
Peng Wan ◽  
Xue Heng Tao ◽  
Xue Jun Wang ◽  
...  

To realize the reverse design of human body curves, this paper studies curve parameter conversion and reconstruction based on a non-contact measuring system. First, the point cloud data model is obtained with the non-contact measurement system, and the data are imported into the reverse engineering software Geomagic. Second, the point cloud data are processed by dividing the characteristic curves and surfaces of the human body and constructing fitted surfaces, which yields the three-dimensional reconstruction model of the human body point cloud data. Finally, the model is imported into the forward design software Solidworks by different methods and edited, completing the parameter conversion from Geomagic to the forward design software. The reconstruction method is of good value for the reverse design of molds.


2020 ◽  
Vol 2020 ◽  
pp. 1-17
Author(s):  
Yunchao Tang ◽  
Mingyou Chen ◽  
Yunfan Lin ◽  
Xueyu Huang ◽  
Kuangyu Huang ◽  
...  

A four-ocular vision system is proposed for the three-dimensional (3D) reconstruction of large-scale concrete-filled steel tube (CFST) specimens under complex testing conditions. These measurements are vitally important for evaluating the seismic performance and 3D deformation of large-scale specimens. A four-ocular vision system is constructed to sample the large-scale CFST; then point cloud acquisition, point cloud filtering, and point cloud stitching algorithms are applied to obtain a 3D point cloud of the specimen surface. A point cloud correction algorithm based on geometric features and a deep learning algorithm are utilized, respectively, to correct the coordinates of the stitched point cloud. This enhances vision measurement accuracy in complex environments and therefore yields a higher-accuracy 3D model for real-time monitoring of complex surfaces. The performance indicators of the two algorithms are evaluated on actual tasks. The cross-sectional diameters at specific heights in the reconstructed models are calculated and compared against laser rangefinder data to test the performance of the proposed algorithms. A visual tracking test on a CFST under cyclic loading shows that, after correction, the reconstructed output reflects the complex 3D surface well and meets the requirements for dynamic monitoring. The proposed methodology is applicable to complex environments featuring dynamic movement, mechanical vibration, and continuously changing features.
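
As an illustration of the diameter evaluation mentioned above, the sketch below slices a point cloud at a given height and fits a circle to the slice by a least-squares (Kåsa) fit; the slice thickness and the assumption of a roughly circular cross-section aligned with the z-axis are simplifications, not the authors' exact procedure.

```python
# Estimate the cross-sectional diameter of a roughly cylindrical specimen at height z
# by slicing the point cloud and fitting a circle to the slice in the x-y plane.
import numpy as np

def diameter_at_height(points, z, thickness=0.005):
    """points: (N, 3) array; z: slice height; thickness: slice half-width (same units)."""
    ring = points[np.abs(points[:, 2] - z) < thickness][:, :2]
    x, y = ring[:, 0], ring[:, 1]
    # Kasa fit: solve a*x + b*y + c = -(x^2 + y^2) in the least-squares sense.
    A = np.column_stack([x, y, np.ones_like(x)])
    rhs = -(x**2 + y**2)
    a, b, c = np.linalg.lstsq(A, rhs, rcond=None)[0]
    cx, cy = -a / 2, -b / 2
    radius = np.sqrt(cx**2 + cy**2 - c)
    return 2 * radius
```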


2020 ◽  
Vol 1453 ◽  
pp. 012023 ◽  
Author(s):  
Xiaokang Ren ◽  
Mai Zhang ◽  
Wenqiao Wang ◽  
Xuetao Mao ◽  
Jie Ren

2021 ◽  
Author(s):  
Simone Müller ◽  
Dieter Kranzlmüller

Based on the depth perception of individual stereo cameras, spatial structures can be derived as point clouds. The quality of such three-dimensional data is technically restricted by sensor limitations, recording latency, and insufficient object reconstruction caused by the surface representation. Additionally, external physical effects such as lighting conditions, material properties, and reflections can lead to deviations between real and virtual object perception. Such physical influences appear in rendered point clouds as geometrical imaging errors on surfaces and edges. We propose the simultaneous use of multiple, dynamically arranged cameras. The increased information density leads to more detail in the detection of the surroundings and the representation of objects. During a pre-processing phase the collected data are merged and prepared. Subsequently, a logical analysis part examines the captured images and allocates them to three-dimensional space. For this purpose, it is necessary to create a new metadata set consisting of image and localisation data. The post-processing reworks and matches the locally assigned images. As a result, the dynamically moving images become comparable, so that a more accurate point cloud can be generated. For evaluation and better comparability we decided to use synthetically generated data sets. Our approach builds the foundation for the dynamic, real-time generation of digital twins with the aid of real sensor data.
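
A minimal sketch of the merging idea is given below: per-camera clouds are brought into one world frame using each camera's pose (the localisation part of the metadata) and concatenated; the pose format and function names are assumptions, not the authors' pipeline.

```python
# Merge point clouds captured by multiple cameras by transforming each cloud with the
# corresponding camera-to-world pose and stacking the results.
import numpy as np

def merge_clouds(clouds, poses):
    """clouds: list of (N_i, 3) arrays in camera coordinates;
    poses: list of 4x4 camera-to-world homogeneous transforms."""
    merged = []
    for cloud, T in zip(clouds, poses):
        homogeneous = np.hstack([cloud, np.ones((cloud.shape[0], 1))])
        merged.append((homogeneous @ T.T)[:, :3])   # transform into the world frame
    return np.vstack(merged)
```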


Sensors ◽  
2021 ◽  
Vol 21 (14) ◽  
pp. 4628
Author(s):  
Xiaowen Teng ◽  
Guangsheng Zhou ◽  
Yuxuan Wu ◽  
Chenglong Huang ◽  
Wanjing Dong ◽  
...  

The three-dimensional reconstruction method using an RGB-D camera offers a good balance between hardware cost and point cloud quality. However, owing to limitations of its inherent structure and imaging principle, the acquired point cloud suffers from heavy noise and difficult registration. This paper proposes a 3D reconstruction method using Azure Kinect to solve these inherent problems. Color images, depth images, and near-infrared images of the target are captured from six perspectives by the Azure Kinect sensor against a black background. The binarization result of the 8-bit infrared image is multiplied with the RGB-D image alignment result provided by Microsoft, which removes ghosting and most of the background noise. A neighborhood extreme filtering method is proposed to filter out abrupt points in the depth image, removing floating noise points and most outlier noise before the point cloud is generated; a pass-through filter then eliminates the remaining outlier noise. An improved method based on the classic iterative closest point (ICP) algorithm is presented to merge the multi-view point clouds. By continuously reducing both the size of the down-sampling grid and the distance threshold between corresponding points, the point clouds of each view are registered in three successive passes until the complete color point cloud is obtained. Experiments on rapeseed plants show that the point cloud registration success rate is 92.5%, the point cloud accuracy is 0.789 mm, a complete scan takes 302 seconds, and color restoration is good. Compared with a laser scanner, the proposed method achieves comparable reconstruction accuracy and significantly faster reconstruction, with a much lower hardware cost when building an automatic scanning system. This research demonstrates a low-cost, high-precision 3D reconstruction technique with the potential to be widely used for non-destructive phenotype measurement of rapeseed and other crops.
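
The sketch below illustrates the coarse-to-fine ICP idea, shrinking both the down-sampling voxel size and the correspondence distance threshold over three passes; it uses Open3D for convenience, which the paper does not necessarily use, and the specific voxel sizes and thresholds are placeholders.

```python
# Coarse-to-fine ICP: repeat registration while tightening the voxel size and the
# correspondence distance threshold, carrying the estimated transform between passes.
import numpy as np
import open3d as o3d

def coarse_to_fine_icp(source, target,
                       voxel_sizes=(0.01, 0.005, 0.0025),
                       thresholds=(0.02, 0.01, 0.005)):
    transform = np.eye(4)
    for voxel, thresh in zip(voxel_sizes, thresholds):
        src = source.voxel_down_sample(voxel)
        tgt = target.voxel_down_sample(voxel)
        result = o3d.pipelines.registration.registration_icp(
            src, tgt, thresh, transform,
            o3d.pipelines.registration.TransformationEstimationPointToPoint())
        transform = result.transformation   # refined estimate used at the next, finer level
    return transform
```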


Author(s):  
Gege Zhang ◽  
Qinghua Ma ◽  
Licheng Jiao ◽  
Fang Liu ◽  
Qigong Sun

3D point cloud semantic segmentation has attracted wide attention owing to its extensive applications in autonomous driving, AR/VR, and robot sensing. However, in existing methods, each point in the segmentation result is predicted independently of the others. This property causes non-contiguity of label sets in three-dimensional space and produces many noisy label points, which hinders the improvement of segmentation accuracy. To address this problem, we first extend adversarial learning to this task and propose a novel framework, Attention Adversarial Networks (AttAN). With the high-order correlations in label sets learned through adversarial learning, the segmentation network can predict labels closer to the real ones and correct noisy results. Moreover, we design an additive attention block for the segmentation network, which automatically focuses on regions critical to the segmentation task by learning the correlation between multi-scale features. Adversarial learning, which explores the underlying relationships between labels in high-dimensional space, opens up a new way forward in 3D point cloud semantic segmentation. Experimental results on the ScanNet and S3DIS datasets show that this framework effectively improves segmentation quality and outperforms other state-of-the-art methods.
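
A generic additive-attention block over per-point features is sketched below in PyTorch; it only illustrates the mechanism named in the abstract, and the layer sizes, gating, and fusion scheme are assumptions rather than the AttAN design.

```python
# Additive attention over per-point features: score two feature scales jointly,
# derive per-point gating weights, and fuse the scales.
import torch
import torch.nn as nn

class AdditiveAttention(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.query = nn.Linear(channels, channels)
        self.key = nn.Linear(channels, channels)
        self.score = nn.Linear(channels, 1)

    def forward(self, coarse_feats, fine_feats):
        # coarse_feats, fine_feats: (batch, n_points, channels) multi-scale features
        energy = torch.tanh(self.query(coarse_feats) + self.key(fine_feats))
        weights = torch.sigmoid(self.score(energy))           # per-point gating weights
        return fine_feats * weights + coarse_feats             # attended fusion of scales

# Example: fused = AdditiveAttention(64)(torch.rand(2, 1024, 64), torch.rand(2, 1024, 64))
```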

