Application of 3D Information Expression Method of Ancient Buildings Based on Point Cloud Data and BIM

2021, Vol. 2037 (1), pp. 012127
Author(s): Li Wang

Sensors, 2021, Vol. 21 (1), pp. 317
Author(s): Mehrdad Eslami, Mohammad Saadatseresht

Cameras and laser scanners are complementary tools for 2D/3D information generation. Systematic and random errors cause misalignment between multi-sensor imagery and point cloud data. In this paper, a novel feature-based approach is proposed for fine registration of imagery and point clouds. Each tie point and its two neighboring pixels are matched in the overlapping images and intersected in object space to create a differential tie plane. The corresponding tie points are preprocessed and non-robust ones are removed. Initial coarse Exterior Orientation Parameters (EOPs), Interior Orientation Parameters (IOPs), and Additional Parameters (APs) are used to transform the tie plane points to object space. Then, the points of the point cloud nearest to the transformed tie plane points are estimated. These estimated points are used to calculate the Directional Vectors (DV) of the differential planes. As a constraint equation alongside the collinearity equation, each object-space tie point is forced to lie on the point cloud differential plane. Two different indoor and outdoor experimental datasets are used to assess the proposed approach. The results show errors of about 2.5 pixels at checkpoints, demonstrating the robustness and practicality of the proposed approach.
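
For illustration only, the nearest-point and directional-vector step described above can be sketched in Python; the function names, the KD-tree lookup, and the three-point plane construction are assumptions standing in for the authors' implementation, not their actual code.

# Hypothetical sketch: nearest-neighbour lookup of the transformed tie-plane points
# in the laser point cloud, followed by the directional vector (plane normal) of the
# differential plane and the point-on-plane constraint residual.
import numpy as np
from scipy.spatial import cKDTree

def differential_plane_dv(tie_plane_obj_pts: np.ndarray, point_cloud: np.ndarray):
    """tie_plane_obj_pts: (3, 3) tie point and its two neighbours in object space;
    point_cloud: (N, 3) laser points."""
    tree = cKDTree(point_cloud)                    # index the scan
    _, idx = tree.query(tie_plane_obj_pts, k=1)    # nearest scan point to each plane point
    p0, p1, p2 = point_cloud[idx]
    normal = np.cross(p1 - p0, p2 - p0)            # directional vector of the plane
    length = np.linalg.norm(normal)
    if length < 1e-12:
        raise ValueError("degenerate (collinear) plane points")
    return normal / length, p0                     # unit DV and a point on the plane

def plane_constraint_residual(tie_point_obj, dv, plane_point) -> float:
    """Signed distance of the object-space tie point from the differential plane;
    driven toward zero as a constraint alongside the collinearity equations."""
    return float(np.dot(dv, tie_point_obj - plane_point))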


Author(s): Z. Li, M. Hou, Y. Dong, J. Wang, Y. Ji, ...

Abstract. Tibetan Buddhist architecture embodies ancient Chinese architectural culture and religious culture. In the past, information about ancient buildings was retained through photos, tracings, and rubbings, which cannot fully document the authenticity of architectural heritage. To explore a digital retention method for the unique style of Han-Tibetan architecture, this research, based on the idea of reverse documentation, first collects point cloud data with the technical support of unmanned aerial vehicle (UAV) photogrammetry and a terrestrial laser scanner (TLS), and then uses a registration method to obtain the integral point cloud model of Baoguang Hall. This paper explores the possibility of extracting 2D and 3D information, such as architectural plans, facades, decorative components, and models of the temple architecture, by processing the point cloud data. Finally, this study demonstrates the feasibility of using digital technology for the preservation and protection of architectural heritage.
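
The registration step that merges the UAV photogrammetric cloud and the TLS scan into one integral model could, for example, be approximated with a standard ICP refinement; the Open3D calls, voxel sizes, and file names below are illustrative assumptions, not the authors' pipeline.

# Illustrative sketch (not the authors' pipeline): refining the alignment of a UAV
# photogrammetric point cloud onto a TLS scan with Open3D point-to-point ICP,
# then merging both clouds into one integral model.
import numpy as np
import open3d as o3d

# Hypothetical file names standing in for the Baoguang Hall datasets.
uav_cloud = o3d.io.read_point_cloud("uav_photogrammetry.ply")   # source
tls_cloud = o3d.io.read_point_cloud("tls_scan.ply")             # target

# Downsample so ICP stays tractable on dense heritage scans (5 cm voxels assumed).
uav_ds = uav_cloud.voxel_down_sample(0.05)
tls_ds = tls_cloud.voxel_down_sample(0.05)

init = np.eye(4)   # assumes a coarse alignment (targets, GNSS, or manual) is already applied
result = o3d.pipelines.registration.registration_icp(
    uav_ds, tls_ds, 0.2, init,
    o3d.pipelines.registration.TransformationEstimationPointToPoint())
print("fitness:", result.fitness, "inlier RMSE:", result.inlier_rmse)

# Apply the refined transform and concatenate the clouds into the integral model.
uav_cloud.transform(result.transformation)
merged = o3d.geometry.PointCloud()
merged.points = o3d.utility.Vector3dVector(
    np.vstack([np.asarray(uav_cloud.points), np.asarray(tls_cloud.points)]))
o3d.io.write_point_cloud("baoguang_hall_integral.ply", merged)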


2020, Vol. 12 (9), pp. 1452
Author(s): Ming Huang, Xueyu Wu, Xianglei Liu, Tianhang Meng, Peiyuan Zhu

The shift from two-dimensional symbols toward three-dimensional representation of underground cable wells is a developing trend, and three-dimensional (3D) point cloud data is widely used due to its high precision. In this study, we utilize the characteristics of 3D terrestrial lidar point cloud data to build a CSG-BRep 3D model of underground cable wells whose spatial topological relationships are fully considered. To simplify the modeling process, point cloud simplification is performed first; then the main axis of the point cloud is extracted via an oriented bounding box (OBB), and the orientation of the point cloud is corrected by quaternion rotation. Furthermore, the top point cloud is extracted with an adaptive method and projected for boundary extraction. Using the boundary information, we design the 3D cable well model. Finally, the cable well component model is generated by scanning the original point cloud. The experiments demonstrate that the algorithm is fast and the proposed model effectively displays the 3D information of actual cable wells, meeting current production demands.
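
A minimal sketch of the orientation-correction step, assuming a PCA-based oriented bounding box and a quaternion rotation onto the vertical axis; the helper name and the +Z target axis are illustrative choices, not taken from the paper.

# Hedged sketch of OBB main-axis extraction and quaternion orientation correction.
import numpy as np
from scipy.spatial.transform import Rotation

def correct_orientation(points: np.ndarray) -> np.ndarray:
    """points: (N, 3) cable-well point cloud; returns the rotated, centered cloud."""
    centered = points - points.mean(axis=0)
    # Principal axes of the oriented bounding box = right singular vectors of the cloud.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    main_axis = vt[0]                                  # direction of largest extent
    target = np.array([0.0, 0.0, 1.0])                 # align the main axis with +Z

    axis = np.cross(main_axis, target)
    sin_a = np.linalg.norm(axis)
    cos_a = float(np.dot(main_axis, target))
    if sin_a < 1e-12:                                  # already parallel or anti-parallel
        if cos_a > 0:
            return centered
        rot = Rotation.from_rotvec(np.pi * np.array([1.0, 0.0, 0.0]))  # 180-degree flip
    else:
        angle = np.arctan2(sin_a, cos_a)
        rot = Rotation.from_rotvec(axis / sin_a * angle)  # quaternion-backed rotation
    return rot.apply(centered)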


2021, Vol. 54 (9-10), pp. 1309-1318
Author(s): Xiangjun Liu, Wenfeng Zheng, Yuanyuan Mou, Yulin Li, Lirong Yin

Most 3D reconstruction requirements for microscopic scenes arise in industrial inspection, which demands real-time object reconstruction and rapid acquisition of surface information. However, this is difficult to achieve in microscopic scenarios: the microscope's depth of field is shallow, and the image is easily blurred when the object's surface is not in the focal plane. Under a video microscope, the images captured frame by frame are mostly defocused. In conventional 3D reconstruction, a single 2D image or a few 2D images are used for geometric-optical calculation, and an affine transformation is applied to obtain the 3D information of the object and complete the reconstruction. A defocused image, however, can only be fully restored from a complete single-view sequence of defocused images, and a single defocused image lacks the information required for the affine transformation. Therefore, restoring 3D information from a defocused image sequence is more difficult than in ordinary scenes, and real-time performance is harder to guarantee. In this paper, the surface reconstruction process based on point cloud data is studied. A Delaunay triangulation method based on plane projection, combined with a synthesis algorithm, is used to complete the surface fitting. Finally, a 3D reconstruction experiment on the collected image sequence is completed. The experimental results show that the reconstructed surface conforms to the surface contour information of the selected object.
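
A minimal sketch of Delaunay triangulation by plane projection, assuming a PCA-fitted projection plane and SciPy's triangulation; the function name and the single-valued-surface assumption are illustrative, not the paper's exact algorithm.

# Sketch: fit a projection plane to the point cloud with PCA, run a 2D Delaunay
# triangulation in that plane, and reuse the connectivity for the original 3D points.
import numpy as np
from scipy.spatial import Delaunay

def surface_from_point_cloud(points: np.ndarray):
    """points: (N, 3) reconstructed surface points; returns (vertices, triangles)."""
    centroid = points.mean(axis=0)
    centered = points - centroid
    # The first two principal directions span the best-fit projection plane.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    plane_basis = vt[:2].T                 # shape (3, 2)
    uv = centered @ plane_basis            # 2D coordinates within the plane
    tri = Delaunay(uv)                     # triangulate the planar projection
    return points, tri.simplices           # triangle indices lifted back to 3D

Reusing the planar connectivity for the 3D points is valid when the surface is approximately single-valued over the projection plane, which matches the near-planar view of an object surface under a microscope.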


Author(s): Jiayong Yu, Longchen Ma, Maoyi Tian, Xiushan Lu

The unmanned aerial vehicle (UAV)-mounted mobile LiDAR system (ULS) is widely used for geomatics owing to its efficient data acquisition and convenient operation. However, because of the limited carrying capacity of a UAV, the sensors integrated in the ULS must be small and lightweight, which reduces the density of the collected scanning points and thus affects registration between image data and point cloud data. To address this issue, the authors propose a method for registering and fusing ULS sequence images and laser point clouds, converting the problem of registering point cloud data to image data into one of matching feature points between two images. First, a point cloud is selected to produce an intensity image. Subsequently, the corresponding feature points of the intensity image and the optical image are matched, and the exterior orientation parameters are solved using the collinearity equation based on image position and orientation. Finally, the sequence images are fused with the laser point cloud, based on the Global Navigation Satellite System (GNSS) time index of the optical image, to generate a true-color point cloud. The experimental results show that the proposed method achieves high registration accuracy and fusion speed, demonstrating its accuracy and effectiveness.
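
The feature-matching step between the intensity image and the optical image could, for instance, be realized with ORB descriptors in OpenCV; the image file names, detector settings, and number of retained matches below are assumptions for illustration, not the authors' configuration.

# Hedged sketch: ORB keypoints matched between a LiDAR intensity image and an
# optical frame; the resulting correspondences would feed the collinearity
# equations used to solve the exterior orientation parameters.
import cv2

intensity_img = cv2.imread("lidar_intensity.png", cv2.IMREAD_GRAYSCALE)
optical_img = cv2.imread("uav_frame_000123.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=2000)
kp_int, des_int = orb.detectAndCompute(intensity_img, None)
kp_opt, des_opt = orb.detectAndCompute(optical_img, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des_int, des_opt), key=lambda m: m.distance)

# Corresponding image coordinates of the strongest matches.
pts_intensity = [kp_int[m.queryIdx].pt for m in matches[:200]]
pts_optical = [kp_opt[m.trainIdx].pt for m in matches[:200]]
print(f"{len(matches)} matches found, keeping the best {len(pts_intensity)}")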


Author(s): Keisuke YOSHIDA, Shiro MAENO, Syuhei OGAWA, Sadayuki ISEKI, Ryosuke AKOH
