Construction and application of factory digital twin model based on BIM and point cloud

2021 ◽  
Vol 293 ◽  
pp. 02031
Author(s):  
Guocheng Qin ◽  
Ling Wang ◽  
YiMei Hou ◽  
HaoRan Gui ◽  
YingHao Jian

The digital twin model of a factory is the basis for constructing a digital factory, yet the professional systems of a factory are complex. The traditional BIM model is not fully consistent with the actual positions of the corresponding components, so it is difficult to use it directly as the digital twin model. In response to this situation, relying on a factory project, a point cloud is used to eliminate the positional deviation between the BIM model and the factory during the construction phase, improving the efficiency, accuracy and reliability of model adjustment and optimization and realizing the conversion from the BIM model to the digital twin model. A novel algorithm is developed to quickly detect and evaluate the construction quality of the local structure of the factory, so that the initial deformation data of the structure can be entered into the corresponding model and fed back to the construction party for improvement. The results show that the digital twin model, which is highly consistent with the actual locations of the factory components, not only lays a solid foundation for the construction of a digital factory but also further deepens the integration and application of BIM and point clouds.
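The abstract does not specify how the positional deviation between BIM components and the scanned factory is removed. A common building block for this kind of alignment is a rigid least-squares fit (the Kabsch/Procrustes solution) over corresponding points; the sketch below is a minimal numpy illustration of that idea, not the authors' actual pipeline:

```python
import numpy as np

def rigid_align(src, dst):
    """Estimate rotation R and translation t mapping src onto dst
    (Kabsch/Procrustes least-squares fit over paired points)."""
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)            # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t

# toy check: recover a known shift of BIM component corner points
bim = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1.0]])
scan = bim + np.array([0.05, -0.02, 0.01])   # pure translation
R, t = rigid_align(bim, scan)
print(np.allclose(R, np.eye(3)), np.round(t, 3))
```

In a real BIM-to-scan workflow the correspondences would come from matched component features rather than being given directly.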

2015 ◽  
Vol 764-765 ◽  
pp. 1375-1379 ◽  
Author(s):  
Cheng Tiao Hsieh

This paper presents a simple approach that utilizes a Kinect-based scanner to create models ready for 3D printing or other digital manufacturing machines. The output of a Kinect-based scanner is a depth map, which usually needs complicated computational processing before it is ready for digital fabrication. The necessary processes include noise filtering, point cloud alignment and surface reconstruction, and each may require several functions and algorithms to accomplish its specific task. For instance, the Iterative Closest Point (ICP) algorithm is frequently used for 3D registration, and the bilateral filter is often used for noise point filtering. This paper attempts to develop a simple Kinect-based scanner and a dedicated modeling approach that avoids the above complicated processes. The developed scanner consists of an ASUS Xtion Pro and a rotation table. The scanner generates a set of organized point clouds, which can be aligned precisely by a simple transformation matrix instead of by ICP. The surface quality of raw point clouds captured by the Kinect is usually rough; for this drawback, the paper introduces a solution for obtaining a smooth surface model. In addition, these processes have been efficiently implemented with free open-source libraries: VTK, the Point Cloud Library and OpenNI.
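The key idea of replacing ICP with a simple transformation matrix follows from the rotation table: each scan differs from the first only by the known table angle. A minimal sketch of that alignment, assuming a vertical rotation axis through the table center (the paper's exact geometry may differ):

```python
import numpy as np

def table_to_world(points, angle_deg, center=(0.0, 0.0)):
    """Align one turntable scan by undoing the table rotation:
    rotate by -angle about the vertical (z) axis through the table
    center. The angle is known exactly, so no ICP is needed."""
    a = np.radians(-angle_deg)
    c, s = np.cos(a), np.sin(a)
    R = np.array([[c, -s, 0.0],
                  [s,  c, 0.0],
                  [0.0, 0.0, 1.0]])
    ctr = np.array([center[0], center[1], 0.0])
    return (points - ctr) @ R.T + ctr

# a point captured after a 90-degree table turn maps back to its
# 0-degree position
p = np.array([[0.0, 1.0, 0.5]])
print(np.round(table_to_world(p, 90.0), 6))
```

Merging all scans is then a loop applying this transform with each scan's recorded table angle.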


2020 ◽  
Vol 6 (9) ◽  
pp. 94
Author(s):  
Magda Alexandra Trujillo-Jiménez ◽  
Pablo Navarro ◽  
Bruno Pazos ◽  
Leonardo Morales ◽  
Virginia Ramallo ◽  
...  

Current point cloud extraction methods based on photogrammetry generate large amounts of spurious detections that hamper useful 3D mesh reconstruction or, even worse, the possibility of adequate measurements. Moreover, noise removal methods for point clouds are complex, slow and incapable of coping with semantic noise. In this work, we present body2vec, a model-based body segmentation tool that uses a specifically trained neural network architecture. Body2vec is capable of performing human body point cloud reconstruction from videos taken on hand-held devices (smartphones or tablets), achieving high-quality anthropometric measurements. The main contribution of the proposed workflow is a background removal step that avoids the spurious point generation usual in photogrammetric reconstruction. A group of 60 persons were recorded with a smartphone, and the corresponding point clouds were obtained automatically with standard photogrammetric methods. As a 3D silver standard we used the clean meshes obtained at the same time with LiDAR sensors, post-processed and noise-filtered by expert anthropological biologists. Finally, as gold standard we used anthropometric measurements of the waist and hip of the same people, taken by expert anthropometrists. Applying our method to the raw videos significantly enhanced the quality of the point cloud as compared with the LiDAR-based mesh, and of the anthropometric measurements as compared with the actual hip and waist perimeters measured by the anthropometrists. In both contexts, the resulting quality of body2vec is equivalent to that of the LiDAR reconstruction.
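As a hedged illustration of the background-removal idea (not the trained body2vec network itself): once a per-frame segmentation mask is available, reconstructed 3D points can be kept only if their source pixel is labeled foreground, so background points never enter the photogrammetric cloud.

```python
import numpy as np

def filter_by_mask(points, pixels, mask):
    """Keep only 3D points whose source pixel falls inside the
    person-segmentation mask; the mask itself would come from a
    segmentation network, synthetic here."""
    rows, cols = pixels[:, 1], pixels[:, 0]
    keep = mask[rows, cols] > 0
    return points[keep]

# 2x2 mask: only the top-left pixel is foreground
mask = np.array([[1, 0],
                 [0, 0]], dtype=np.uint8)
pts = np.array([[0.1, 0.2, 1.0], [0.3, 0.1, 1.2]])
pix = np.array([[0, 0], [1, 1]])   # (col, row) per point
print(filter_by_mask(pts, pix, mask))
```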


Author(s):  
H.-J. Przybilla ◽  
M. Lindstaedt ◽  
T. Kersten

Abstract. The quality of image-based point clouds generated from images of UAV aerial flights is subject to various influencing factors. In addition to the performance of the sensor used (a digital camera), the image data format (e.g. TIF or JPG) is another important quality parameter. At the UAV test field at the former Zollern colliery (Dortmund, Germany), set up by Bochum University of Applied Sciences, a medium-format camera from Phase One (IXU 1000) was used to capture UAV image data in RAW format. This investigation evaluates the influence of the image data format on point clouds generated by a dense image matching process. Furthermore, the effects of the different data filters that are part of the evaluation programs were considered. The processing was carried out with two software packages, from Agisoft and Pix4D, on the basis of the generated TIF and JPG data sets. The resulting point clouds are the basis of the investigation presented in this contribution. Point cloud comparisons with reference data from terrestrial laser scanning were performed on selected test areas representing object-typical surfaces (with varying surface structures). In addition to these area-based comparisons, selected linear objects (profiles) were evaluated between the different data sets. Furthermore, height deviations from the dense point clouds were determined using check points. Differences between the results generated by the two software packages could be detected; the reasons are the filtering settings used for the generation of the dense point clouds, and it can also be assumed that the point cloud generation algorithms implemented in the two packages differ. The slightly compressed JPG image data used for point cloud generation did not show any significant change in the quality of the examined point clouds compared to the uncompressed TIF data sets.
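The area-based comparisons against the TLS reference essentially compute nearest-neighbour cloud-to-cloud distances. A brute-force numpy sketch of that metric (illustrative only; dedicated tools use spatial indexing for large clouds):

```python
import numpy as np

def cloud_to_reference(cloud, reference):
    """For each point of the image-based cloud, distance to the
    nearest reference (TLS) point; brute force, adequate for small
    test areas only."""
    d = np.linalg.norm(cloud[:, None, :] - reference[None, :, :], axis=2)
    nearest = d.min(axis=1)
    return nearest.mean(), np.sqrt((nearest ** 2).mean())

ref = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0.0]])
cld = ref + np.array([0, 0, 0.01])   # 1 cm systematic height offset
mean_d, rmse = cloud_to_reference(cld, ref)
print(round(mean_d, 4), round(rmse, 4))
```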


Author(s):  
F. Alidoost ◽  
H. Arefi

Nowadays, Unmanned Aerial System (UAS)-based photogrammetry offers an affordable, fast and effective approach to real-time acquisition of high-resolution geospatial information and automatic 3D modelling of objects for numerous applications, such as topographic mapping, 3D city modelling, orthophoto generation, and cultural heritage preservation. In this paper, the capability of four different state-of-the-art software packages, namely 3DSurvey, Agisoft Photoscan, Pix4Dmapper Pro and SURE, is examined for generating a high-density point cloud as well as a Digital Surface Model (DSM) over a historical site. The main steps of this study are image acquisition, point cloud generation, and accuracy assessment. The overlapping images are first captured using a quadcopter and are then processed by the different software packages to generate point clouds and DSMs. In order to evaluate the accuracy and quality of the point clouds and DSMs, both visual and geometric assessments are carried out, and the comparison results are reported.
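A DSM can be derived from a dense point cloud by keeping the highest elevation per raster cell. A simplified numpy sketch of that gridding step (the packages above use more elaborate interpolation; the cell rule here is an assumption for illustration):

```python
import numpy as np

def dsm_from_points(xyz, cell=1.0):
    """Rasterize a point cloud into a Digital Surface Model by
    keeping the highest elevation per grid cell (NaN where empty)."""
    ij = np.floor(xyz[:, :2] / cell).astype(int)
    ij -= ij.min(axis=0)                      # shift to a 0-based grid
    ncols, nrows = ij.max(axis=0) + 1
    dsm = np.full((nrows, ncols), -np.inf)
    np.maximum.at(dsm, (ij[:, 1], ij[:, 0]), xyz[:, 2])
    dsm[np.isinf(dsm)] = np.nan               # empty cells -> NaN
    return dsm

pts = np.array([[0.2, 0.3, 5.0],
                [0.8, 0.4, 7.0],    # same cell as the first, higher
                [1.5, 0.5, 2.0]])
print(dsm_from_points(pts, cell=1.0))
```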


2021 ◽  
Author(s):  
Diogo Garcia ◽  
Andre Souto ◽  
Gustavo Sandri ◽  
Tomas Borges ◽  
Ricardo Queiroz

Geometry-based point cloud compression (G-PCC) has been rapidly evolving in the context of international standards. Despite the inherent scalability of the octree-based geometry description, current G-PCC attribute compression techniques prevent full scalability for compressed point clouds. In this paper, we present a solution that adds scalability to attributes compressed using the region-adaptive hierarchical transform (RAHT), enabling the reconstruction of the point cloud from only a portion of the original bitstream. Without the full geometry information, one cannot compute the weights on which RAHT relies to calculate its coefficients for further levels of detail. To overcome this problem, we propose a linear approximation relating the downsampled point cloud to the truncated inverse RAHT coefficients at the same level, with the linear relationship parameters sent as side information. After truncating the bitstream at a point corresponding to a given octree level, we can then recreate the attributes at that level. Tests were carried out, and the results attest to the good approximation quality of the proposed technique.
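The abstract only outlines the linear approximation. As a purely illustrative sketch (synthetic values, not the codec's real data or its exact parameterization), the side-information parameters of a per-level linear map could be fitted by least squares at the encoder and applied at the decoder:

```python
import numpy as np

# Encoder side: true downsampled attributes y vs. the attributes a
# decoder would reconstruct from RAHT coefficients truncated at the
# same octree level (x). Fit y ~ a*x + b and transmit (a, b) as side
# information. All values here are synthetic.
x = np.array([10.0, 20.0, 30.0, 40.0])   # truncated inverse-RAHT output
y = 1.5 * x + 2.0                        # downsampled attributes

A = np.column_stack([x, np.ones_like(x)])
(a, b), *_ = np.linalg.lstsq(A, y, rcond=None)
print(round(a, 3), round(b, 3))          # parameters sent as side info
```

The decoder, after truncating the bitstream, would apply `a * x + b` to its truncated reconstruction to approximate the attributes at that level.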


Author(s):  
Z. Hui ◽  
P. Cheng ◽  
L. Wang ◽  
Y. Xia ◽  
H. Hu ◽  
...  

Abstract. Denoising is a key pre-processing step for many airborne LiDAR point cloud applications. However, previous algorithms have a number of problems that affect the quality of point cloud post-processing, such as DTM generation. In this paper, a novel automated denoising algorithm based on empirical mode decomposition is proposed to remove outliers from airborne LiDAR point clouds. Compared with traditional point cloud denoising algorithms, the proposed method detects outliers from a signal-processing perspective. Firstly, the airborne LiDAR point clouds are decomposed into a series of intrinsic mode functions with the help of morphological operations, which significantly decreases the computational complexity. By applying Otsu's algorithm to these intrinsic mode functions, noise-dominant components can be detected and filtered. Finally, outliers are detected automatically by comparing observed elevations with reconstructed elevations. Three datasets located in three different cities in China were used to verify the validity and robustness of the proposed method. The experimental results demonstrate that the proposed method removes both high and low outliers effectively over various terrain features while preserving useful ground details.
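Otsu's algorithm, used above to separate noise-dominant components, selects the threshold that maximizes the between-class variance of a 1-D distribution. A compact numpy sketch of the threshold selection (applied here to synthetic values, not to actual intrinsic mode functions):

```python
import numpy as np

def otsu_threshold(values, bins=256):
    """Otsu's method: pick the threshold maximizing between-class
    variance of a 1-D sample distribution."""
    hist, edges = np.histogram(values, bins=bins)
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)                    # class-0 weight per split
    mu = np.cumsum(p * centers)          # cumulative mean
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * w0 - mu) ** 2 / (w0 * (1 - w0))
    sigma_b[~np.isfinite(sigma_b)] = 0   # empty-class splits
    return centers[np.argmax(sigma_b)]

# two well-separated clusters -> threshold separates them
vals = np.concatenate([np.full(50, 1.0), np.full(50, 9.0)])
t = otsu_threshold(vals)
print(1.0 < t < 9.0)
```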


Author(s):  
S. Hofmann ◽  
C. Brenner

Mobile mapping data is widely used in various applications, which makes it especially important for data users to obtain a statistically verified quality statement on the geometric accuracy of the acquired point clouds or their processed products. The accuracy of point clouds can be divided into absolute and relative quality, where the absolute quality describes the position of the point cloud in a world coordinate system such as WGS84 or UTM, whereas the relative accuracy describes the accuracy within the point cloud itself. Furthermore, the quality of processed products such as segmented features depends on the global accuracy of the point cloud but mainly on the quality of the processing steps. Several data sources with different characteristics and quality can be considered as potential reference data, such as cadastral maps, orthophotos, artificial control objects or terrestrial surveys using a total station. In this work, a test field in a selected residential area was acquired as reference data in a terrestrial survey using a total station. In order to reach high accuracy, the stationing of the total station was based on a newly established geodetic network with a local accuracy of better than 3 mm. The global position of the network was determined using a long-term GNSS survey, reaching an accuracy of 8 mm. Based on this geodetic network, a 3D test field with facades and street profiles was measured with a total station, each point with a two-dimensional position and an altitude. In addition, the surfaces of poles of street lights, traffic signs and trees were acquired using the scanning mode of the total station.

Comparing this reference data to the mobile mapping point clouds acquired in several measurement campaigns, a detailed quality statement on the accuracy of the point cloud data is made. Additionally, the advantages and disadvantages of the described reference data sources concerning availability, cost, accuracy and applicability are discussed.
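Assuming the local network error (better than 3 mm) and the global GNSS datum error (8 mm) are independent, the absolute uncertainty of a reference point follows by combining the two in quadrature. This is a simplifying assumption for illustration, not a statement from the paper:

```python
import math

# Rough absolute-accuracy budget for one reference point, combining
# the local network error and the global GNSS datum error in
# quadrature (assumed independent).
local_mm = 3.0    # local geodetic network accuracy
global_mm = 8.0   # GNSS-derived global position accuracy
combined = math.hypot(local_mm, global_mm)
print(round(combined, 2))   # about 8.54 mm
```

The budget is dominated by the GNSS term, which is consistent with the relative accuracy of the network being much tighter than its absolute position.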


Author(s):  
Mingshao Zhang ◽  
Zhou Zhang ◽  
Sven K. Esche ◽  
Constantin Chassapis

Since its introduction in 2010, Microsoft's Kinect input device for game consoles and computers has shown great potential in a large number of applications, including game development, research and education. Many of these implementations are still at the prototype stage and exhibit somewhat limited performance. These limitations are mainly caused by the quality of the point clouds generated by the Kinect, which suffer from limited range, high dependency on surface properties, shadowing, low depth accuracy, etc. One of the Kinect's most significant limitations is the low accuracy and high error associated with its point cloud, and the severity of these defects varies with a point's location in the Kinect's camera coordinate system. The available traditional algorithms for processing point clouds are based on the assumption that input point clouds are perfect and have the same characteristics throughout the entire cloud. In the first part of this paper, the properties of the Kinect's point cloud (including resolution, depth accuracy, noise level and error) and their dependency on pixel location are systematically studied. Second, the Kinect's calibration, by both hardware and software approaches, is explored, and methods for improving the quality of its output point clouds are identified. Then, modified algorithms adapted to the Kinect's unique properties are introduced. This approach allows the output point cloud properties to be judged in a quantifiable manner and traditional computer vision algorithms to be modified by adjusting their assumptions about the input cloud properties to the actual parameters of the Kinect. Finally, the modified algorithms are tested in a prototype application, which shows that the Kinect does have the potential for successful use in educational applications if the corresponding algorithms are designed properly.
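The depth-accuracy dependence on location can be illustrated with a simple triangulation model: a structured-light sensor like the Kinect derives depth from disparity, so the smallest resolvable depth step grows quadratically with range. The focal length, baseline and disparity step below are assumed round-number values for illustration, not calibrated Kinect constants:

```python
# Illustrative depth-quantization model for a disparity-based depth
# sensor. All three constants are assumptions, not calibration data.
F_PX = 580.0      # focal length in pixels (assumed)
BASE_M = 0.075    # camera-projector baseline in metres (assumed)
STEP_PX = 0.125   # disparity quantization step in pixels (assumed)

def depth_step_mm(z_m):
    """Smallest resolvable depth change at range z (metres), in mm.
    Grows with z squared: doubling the range quadruples the step."""
    return 1000.0 * z_m ** 2 * STEP_PX / (F_PX * BASE_M)

for z in (1.0, 2.0, 4.0):
    print(z, round(depth_step_mm(z), 1))
```

Under these assumed constants the step grows from a few millimetres at 1 m to several centimetres at 4 m, which matches the qualitative point that error severity varies strongly with a point's location.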


2018 ◽  
Vol 170 ◽  
pp. 03033 ◽  
Author(s):  
Elizaveta Fateeva ◽  
Vladimir Badenko ◽  
Alexandr Fedotov ◽  
Ivan Kochetkov

Historical Building Information Modelling (HBIM) is nowadays used as a means to collect, store and preserve information about historical buildings and structures. The information is often collected via laser scanning. The resulting point cloud is manipulated and transformed into a polygon mesh, a type of model that is very easy to work with. This paper looks at the problems associated with creating meshes from point clouds with various characteristics in the context of façade reconstruction. The study is based on a point cloud recorded via terrestrial laser scanning in downtown Bremen, Germany, that contains buildings completed in a number of different architectural styles, allowing multiple architectural features to be extracted. An analysis of mesh quality depending on point cloud density was carried out. Conclusions were drawn as to what rational solutions for effective surface extraction can be for each individual building in question, and recommendations on the preprocessing of point clouds were given.
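Studying mesh quality as a function of point cloud density requires clouds at several controlled densities, commonly produced by voxel-grid downsampling. A minimal numpy sketch of that preprocessing step (one centroid per voxel; an assumed technique for illustration, not necessarily the paper's exact method):

```python
import numpy as np

def voxel_downsample(points, voxel=0.05):
    """Reduce point density by replacing all points in each voxel
    with their centroid; varying `voxel` gives controlled densities."""
    keys = np.floor(points / voxel).astype(int)
    _, inv = np.unique(keys, axis=0, return_inverse=True)
    inv = inv.ravel()
    n = inv.max() + 1
    sums = np.zeros((n, points.shape[1]))
    np.add.at(sums, inv, points)               # per-voxel coordinate sums
    counts = np.bincount(inv, minlength=n)[:, None]
    return sums / counts

pts = np.array([[0.00, 0.00, 0.00],
                [0.01, 0.01, 0.00],   # same 5 cm voxel as the first
                [0.20, 0.00, 0.00]])
print(voxel_downsample(pts, voxel=0.05))
```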


Author(s):  
Robert Niederheiser ◽  
Martin Mokroš ◽  
Julia Lange ◽  
Helene Petschko ◽  
Günther Prasicek ◽  
...  

Terrestrial photogrammetry nowadays offers a reasonably cheap, intuitive and effective approach to 3D modelling. However, the important choice of which sensor and which software to use is not straightforward and needs consideration, as it affects the resulting 3D point cloud and its derivatives.

We compare five different sensors as well as four different state-of-the-art software packages for a single application, the modelling of a vegetated rock face. The five sensors represent different resolutions, sensor sizes and price segments. The software packages used are: (1) Agisoft PhotoScan Pro (1.16), (2) Pix4D (2.0.89), (3) a combination of Visual SFM (V0.5.22) and SURE (1.2.0.286), and (4) MicMac (1.0). We took photos of a vegetated rock face from identical positions with all sensors and then compared the results of the different software packages regarding the ease of the workflow, visual appeal, and similarity and quality of the point clouds.

While PhotoScan and Pix4D offer the most user-friendly workflows, they are also "black-box" programmes giving only little insight into their processing, and unsatisfying results can only be changed by modifying settings within a module. The combined workflow of Visual SFM, SURE and CloudCompare is just as simple but requires more user interaction. MicMac turned out to be the most challenging software, as it is less user-friendly; however, it offers the most possibilities to influence the processing workflow. The resulting point clouds of PhotoScan and MicMac are the most appealing.

