Hole Repairing Algorithm for 3D Point Cloud Model of Symmetrical Objects Grasped by the Manipulator

Sensors, 2021, Vol. 21 (22), pp. 7558
Author(s): Linyan Cui, Guolong Zhang, Jinshen Wang

In engineering applications where a manipulator grasps objects, occlusion by the mechanical arm and the limited imaging angle produce various holes in the reconstructed 3D point clouds of the objects. Acquiring a complete point cloud model of the grasped object plays a very important role in the subsequent task planning of the manipulator. This paper proposes a method to automatically detect and repair the holes in the 3D point cloud model of symmetrical objects grasped by the manipulator. Using an established virtual camera coordinate system together with boundary detection and hole classification, the closed boundaries of nested holes are detected and classified into two kinds, corresponding to the mechanical-claw holes caused by arm occlusion and the missing surface produced by the limited imaging angle. These two kinds of holes are then repaired based on surface reconstruction and object symmetry. Experiments on simulated and real point cloud models demonstrate that our approach outperforms other state-of-the-art 3D point cloud hole repair algorithms.
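
To make the symmetry-based repair idea concrete, the following minimal Python sketch mirrors a cloud across an assumed symmetry plane and keeps only the mirrored points that land in sparsely sampled (hole) regions. It is not the authors' implementation: the plane parameters, the `sparse_radius`, and the `min_neighbors` threshold are illustrative assumptions, and the paper's boundary detection and hole classification steps are omitted.

```python
# Minimal sketch of a symmetry-based fill step, assuming the symmetry plane
# (point p0, unit normal n) has already been estimated elsewhere.
import numpy as np
from scipy.spatial import cKDTree

def mirror_fill(points, p0, n, sparse_radius=0.01, min_neighbors=5):
    """Reflect points across the plane (p0, n) and keep mirrored points that
    fall in sparsely sampled regions of the original cloud (likely holes)."""
    n = n / np.linalg.norm(n)
    d = (points - p0) @ n                      # signed distance to the plane
    mirrored = points - 2.0 * d[:, None] * n   # reflection across the plane
    tree = cKDTree(points)
    counts = np.array([len(tree.query_ball_point(q, sparse_radius)) for q in mirrored])
    return mirrored[counts < min_neighbors]

# Usage: repaired = np.vstack([cloud, mirror_fill(cloud, plane_point, plane_normal)])
```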

Author(s): L. Zhang, P. van Oosterom, H. Liu

Abstract. Point clouds have become one of the most popular sources of data in geospatial fields due to their availability and flexibility. However, because of the large amount of data and the limited resources of mobile devices, the use of point clouds in mobile Augmented Reality applications is still quite limited. Many current mobile AR applications of point clouds lack fluent interaction with users. In this paper, a cLoD (continuous level-of-detail) method is introduced to considerably reduce the number of points to be rendered, together with an adaptive point-size rendering strategy, thus improving rendering performance and removing visual artifacts in mobile AR point cloud applications. Our method uses a cLoD model with an ideal distribution over LoDs, which removes unnecessary points without the sudden changes in density present in the commonly used discrete level-of-detail approaches. In addition, the camera position and orientation, and the distance from the camera to the point cloud model, are taken into consideration. With our method, good interactive visualization of point clouds can be realized in the mobile AR environment, with good visual quality and reasonable resource consumption.
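
The sketch below illustrates the continuous-LoD filtering idea in Python: each point receives a continuous "level" once, and a camera-distance-dependent threshold keeps a smoothly varying fraction of points while point size grows with distance. The level distribution, `base_level`, and `falloff` constants are assumptions for illustration, not the authors' implementation.

```python
# Hedged sketch of continuous-LoD point filtering with adaptive point size.
import numpy as np

def assign_clod_levels(points, max_level=12):
    # Continuous levels whose cumulative count grows roughly as 4**level,
    # mimicking an octree-like density (assumption made for this sketch).
    u = np.random.rand(len(points))
    return np.log(1.0 + u * (4.0 ** max_level - 1.0)) / np.log(4.0)

def visible_points(points, levels, cam_pos, base_level=10.0, falloff=2.0):
    # Farther points get a lower level threshold, so fewer of them are drawn;
    # point size grows with distance to keep screen coverage roughly even.
    dist = np.linalg.norm(points - cam_pos, axis=1)
    threshold = base_level - falloff * np.log2(1.0 + dist)
    keep = levels <= threshold
    point_size = np.clip(1.0 + 0.5 * np.log2(1.0 + dist[keep]), 1.0, 6.0)
    return points[keep], point_size
```

Because levels are continuous, moving the camera changes the kept subset gradually rather than in the discrete jumps typical of fixed-LoD schemes.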


Author(s): A. Murtiyos, P. Grussenmeyer, D. Suwardhi, W. A. Fadilah, H. A. Permana, ...

Abstract. 3D recording is an important procedure in the conservation of heritage sites. In the past decade, a myriad of 3D sensors has appeared on the market, each with different advantages and disadvantages. Most notably, laser scanning and photogrammetry have become some of the most used techniques in 3D recording. The integration of these different sensors is an interesting topic and the one discussed in this paper; integration here means combining two or more datasets with different characteristics to produce a 3D model with the best possible result. The discussion in this study covers the acquisition, processing, and analysis of the geometric quality of the 3D recording results, starting with the acquisition method and the registration and georeferencing process, up to the integration of the laser scanning and photogrammetry 3D point clouds. The final result of integrating the two point clouds is a 3D point cloud model that forms a single entity. Detailed parts of the object of interest draw both geometric and textural information from photogrammetry, while laser scanning provides a point cloud depicting an overall view of the building. The object used as our case study is Sari Temple, located in the Special Region of Yogyakarta, Indonesia.
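
A hedged sketch of the final integration step is shown below, using Open3D's ICP to refine the alignment of a photogrammetric cloud to a laser-scanning cloud before merging them into a single entity. The file names, voxel size, and correspondence threshold are placeholders, and the survey-based georeferencing used in the paper is only represented here by an identity initial transformation.

```python
# Sketch: fuse a photogrammetry cloud into a TLS cloud and merge them.
import numpy as np
import open3d as o3d

tls = o3d.io.read_point_cloud("laser_scan.ply")        # overall building geometry
photo = o3d.io.read_point_cloud("photogrammetry.ply")  # detailed, textured parts

# Coarse alignment is assumed to come from georeferencing with control points;
# here an identity matrix stands in for that initial guess.
init = np.eye(4)
result = o3d.pipelines.registration.registration_icp(
    photo, tls, 0.05, init,
    o3d.pipelines.registration.TransformationEstimationPointToPoint())

photo.transform(result.transformation)                 # bring photo cloud into TLS frame
merged = (tls + photo).voxel_down_sample(voxel_size=0.01)  # single deduplicated cloud
o3d.io.write_point_cloud("integrated_model.ply", merged)
```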


2021, Vol. 2021, pp. 1-8
Author(s): Jingli Wang, Huiyuan Zhang, Jingxiang Gao, Dong Xiao

With the further development of “smart mine” construction, the establishment of three-dimensional (3D) point cloud models of mines has become very common. However, truck operation causes the 3D point cloud model of the mining area to contain dust points, and the 3D point cloud model established by the Context Capture modeling software is a hollow structure. Previous point cloud denoising algorithms leave holes in the model. In view of these problems, this paper proposes a point cloud denoising method based on orthogonal total least squares fitting and a two-layer extreme learning machine improved by a genetic algorithm (GA-TELM). The steps are to separate dust points from ground points by orthogonal total least squares fitting and to use GA-TELM to repair the resulting holes. The advantages of the proposed method are as follows. First, it denoises without generating holes, which addresses the underlying engineering problem. Second, GA-TELM repairs holes more effectively than the other methods considered in this paper. Finally, the method is motivated by actual field conditions and can be applied to mining areas with the same problems. Experimental results demonstrate that it removes dust points in the flat area of the mine effectively and preserves the integrity of the model.
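
The dust/ground separation stage can be sketched as follows: an orthogonal (total) least-squares plane is fitted to the flat bench area, and points whose perpendicular distance to the plane exceeds a threshold are treated as suspended dust. This is a minimal illustration only; the threshold value is arbitrary and the GA-TELM hole-repair stage is not reproduced.

```python
# Sketch: orthogonal total least-squares plane fit and dust separation.
import numpy as np

def orthogonal_plane_fit(points):
    """Orthogonal (total) least-squares plane: minimizes perpendicular distances."""
    centroid = points.mean(axis=0)
    # The right singular vector with the smallest singular value is the plane normal.
    _, _, vt = np.linalg.svd(points - centroid, full_matrices=False)
    return centroid, vt[-1]

def split_dust(points, dust_threshold=0.3):
    centroid, normal = orthogonal_plane_fit(points)
    dist = np.abs((points - centroid) @ normal)   # orthogonal distance to the plane
    ground = points[dist <= dust_threshold]       # points on the bench surface
    dust = points[dist > dust_threshold]          # suspended dust above the surface
    return ground, dust
```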


Author(s): T. Shinohara, H. Xiu, M. Matsuoka

Abstract. This study introduces a novel image-to-3D-point-cloud translation method based on a conditional generative adversarial network that creates a large-scale 3D point cloud. It can generate, from aerial images, point clouds supervised by airborne LiDAR observations. The network is composed of an encoder that produces latent features of the input images, a generator that translates the latent features into fake point clouds, and a discriminator that classifies point clouds as real or fake. The encoder is a pre-trained ResNet; to overcome the difficulty of generating 3D point clouds of an outdoor scene, we use a FoldingNet with features from the ResNet. After a fixed number of iterations, our generator can produce fake point clouds that correspond to the input image. Experimental results show that our network can learn and generate point clouds using the data from the 2018 IEEE GRSS Data Fusion Contest.
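
The generator side of such a network can be sketched in PyTorch as a ResNet encoder followed by a FoldingNet-style decoder that "folds" a fixed 2D grid into a 3D point set. This is a compact, hedged approximation: the layer widths, grid resolution, and the choice of ResNet-18 are illustrative assumptions, pre-trained encoder weights are omitted, and the discriminator and adversarial training loop are not shown.

```python
# Sketch of a ResNet-encoder + folding-decoder generator (not the authors' exact model).
import torch
import torch.nn as nn
import torchvision.models as models

class FoldingGenerator(nn.Module):
    def __init__(self, latent_dim=512, grid_size=45):
        super().__init__()
        resnet = models.resnet18(weights=None)   # the paper assumes a pre-trained encoder
        self.encoder = nn.Sequential(*list(resnet.children())[:-1])  # drop the FC head
        lin = torch.linspace(-1.0, 1.0, grid_size)
        gx, gy = torch.meshgrid(lin, lin, indexing="ij")
        self.register_buffer("grid", torch.stack([gx.reshape(-1), gy.reshape(-1)], dim=1))
        self.fold1 = nn.Sequential(nn.Linear(latent_dim + 2, 256), nn.ReLU(), nn.Linear(256, 3))
        self.fold2 = nn.Sequential(nn.Linear(latent_dim + 3, 256), nn.ReLU(), nn.Linear(256, 3))

    def forward(self, image):                              # image: (B, 3, H, W)
        code = self.encoder(image).flatten(1)              # (B, 512) global image feature
        b, n = code.shape[0], self.grid.shape[0]
        code_rep = code.unsqueeze(1).expand(b, n, -1)      # repeat the code per grid point
        grid = self.grid.unsqueeze(0).expand(b, n, 2)
        pts = self.fold1(torch.cat([code_rep, grid], dim=2))   # first folding: grid -> 3D
        pts = self.fold2(torch.cat([code_rep, pts], dim=2))    # second folding refines shape
        return pts                                          # (B, grid_size**2, 3) fake cloud
```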


2019, Vol. 9 (23), pp. 5198
Author(s): Luca Baronti, Mark Alston, Nikos Mavrakis, Amir M. Ghalamzan E., Marco Castellani

In this study, the problem of fitting shape primitives to point cloud scenes was tackled as a parameter optimisation procedure and solved using the popular bees algorithm. Tested on three sets of clean and differently blurred point cloud models, the bees algorithm obtained performance comparable to that of the state-of-the-art random sample consensus (RANSAC) method, and superior to that of an evolutionary algorithm. Shape fitting times were compatible with real-time application. The main advantage of the bees algorithm over standard methods is that it does not rely on ad hoc assumptions about the nature of the point cloud model, such as the approximation tolerance required by RANSAC.
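
To show how primitive fitting becomes a parameter optimisation, the sketch below uses a basic bees algorithm to fit one sphere (centre and radius) to a point cloud, with the mean absolute distance to the sphere surface as the cost. The population sizes, neighbourhood width, shrinking schedule, and fitness function are illustrative choices, not the paper's settings.

```python
# Hedged sketch: bees-algorithm fit of a sphere primitive to a point cloud.
import numpy as np

def sphere_cost(params, points):
    centre, r = params[:3], params[3]
    return np.mean(np.abs(np.linalg.norm(points - centre, axis=1) - r))

def bees_fit_sphere(points, n_scouts=30, n_best=5, n_recruits=10, n_iter=100, ngh=0.1):
    lo = np.append(points.min(axis=0), 0.01)            # bounds on centre and radius
    hi = np.append(points.max(axis=0), np.ptp(points))
    sites = lo + np.random.rand(n_scouts, 4) * (hi - lo)  # random scout placement
    for _ in range(n_iter):
        costs = np.array([sphere_cost(s, points) for s in sites])
        best = sites[np.argsort(costs)[:n_best]]           # elite sites
        new_sites = []
        for site in best:
            # Recruited bees search the neighbourhood of each elite site.
            local = site + (np.random.rand(n_recruits, 4) - 0.5) * ngh * (hi - lo)
            local_costs = [sphere_cost(p, points) for p in local]
            new_sites.append(local[int(np.argmin(local_costs))])
        # Remaining bees keep scouting at random (global exploration).
        scouts = lo + np.random.rand(n_scouts - n_best, 4) * (hi - lo)
        sites = np.vstack([np.array(new_sites), scouts])
        ngh *= 0.98                                         # gradual neighbourhood shrinking
    costs = np.array([sphere_cost(s, points) for s in sites])
    return sites[int(np.argmin(costs))]                     # (cx, cy, cz, r)
```

Unlike RANSAC, nothing in this loop depends on an approximation tolerance; the cost function alone drives the search.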


2016, Vol. 4 (4), pp. 246-265
Author(s): Samantha Ruggles, Joseph Clark, Kevin W. Franke, Derek Wolfe, Brandon Reimschiissel, ...

Structure from motion (SfM) computer vision is a remote sensing method that is gaining popularity due to its simplicity and its ability to accurately characterize site geometry in three dimensions (3D). While many researchers have demonstrated the potential for SfM to be used with unmanned aerial vehicles (UAVs) to model various geologic features in 3D, such as landslides, little is understood concerning how the selection of the UAV platform affects the resolution and accuracy of the model. This study evaluates the resolution and accuracy of 3D point cloud models, developed from various small UAV platform and camera configurations, of a large landslide that occurred in 2013 near Page, Arizona. Terrestrial laser scans were performed at the landslide and used to establish a comparative baseline model. Results from the study indicate that point cloud resolution improved by more than 16% when using multi-rotor UAVs instead of fixed-wing UAVs. However, the accuracy of the points in the point cloud model appears to be independent of the UAV platform, depending principally on the selected camera and the image resolution. Additional practical guidance on flying various UAV platforms in challenging field conditions is provided for geologists and engineers.
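
For readers who want to reproduce such a comparison, the two quantities discussed above can be computed simply once the clouds are co-registered: "resolution" as the mean nearest-neighbour point spacing, and "accuracy" as the mean cloud-to-cloud distance from a UAV-derived cloud to the terrestrial-laser baseline. This sketch is a plain illustration under those assumptions, not the metric definitions used in the study.

```python
# Sketch: density and cloud-to-cloud error metrics for co-registered clouds.
import numpy as np
from scipy.spatial import cKDTree

def mean_point_spacing(cloud):
    d, _ = cKDTree(cloud).query(cloud, k=2)   # k=2: the first hit is the point itself
    return d[:, 1].mean()                     # smaller spacing means higher resolution

def cloud_to_cloud_error(uav_cloud, tls_baseline):
    d, _ = cKDTree(tls_baseline).query(uav_cloud, k=1)
    return d.mean()                           # mean distance to the TLS reference
```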


Buildings, 2019, Vol. 9 (3), pp. 70
Author(s): Hadi Mahami, Farnad Nasirzadeh, Ali Hosseininaveh Ahmadabadian, Saeid Nahavandi

This research presents a novel method for automated construction progress monitoring. Using the proposed method, an accurate and complete 3D point cloud is generated for automatic outdoor and indoor progress monitoring throughout the project duration. In this method, Structure-from-Motion (SFM) and Multi-View Stereo (MVS) algorithms, coupled with photogrammetric principles for coded-target detection, are exploited to generate as-built 3D point clouds. The coded targets are utilized to automatically resolve the scale and increase the accuracy of the point cloud generated using the SFM and MVS methods. Having generated the point cloud, a CAD model is derived from the as-built point cloud and compared with the as-planned model. Finally, the quantity of the performed work is determined in two real case study projects. The proposed method is compared to the SFM/Clustering Multi-View Stereo (CMVS)/Patch-based Multi-View Stereo (PMVS) algorithm, a common method for generating 3D point cloud models. The proposed photogrammetric Multi-View Stereo method achieves an accuracy of around 99 percent and generates less noise than the SFM/CMVS/PMVS algorithm. It is observed that the proposed method substantially improves the accuracy of the generated point cloud compared to the SFM/CMVS/PMVS algorithm. It is believed that the proposed method may provide a novel and robust tool for automated progress monitoring in construction projects.
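
The scale-resolution role of the coded targets can be illustrated briefly: an SFM/MVS reconstruction is only defined up to scale, so a known surveyed distance between two detected targets fixes the metric scale of the whole cloud. The sketch below is a simplified illustration of that step; the target coordinates and the known distance are placeholders, not values from the paper.

```python
# Sketch: resolving the metric scale of an SFM/MVS cloud from two coded targets.
import numpy as np

def resolve_scale(points, target_a_xyz, target_b_xyz, known_distance_m):
    """Scale the cloud so the two coded targets are the surveyed distance apart."""
    model_distance = np.linalg.norm(np.asarray(target_a_xyz) - np.asarray(target_b_xyz))
    scale = known_distance_m / model_distance
    return points * scale, scale

# Usage (hypothetical target positions recovered in the model frame):
# scaled_cloud, s = resolve_scale(cloud, target_12_xyz, target_27_xyz, known_distance_m=5.0)
```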


Author(s): Y. Ding, X. Zheng, H. Xiong, Y. Zhang

Abstract. With the rapid development of new indoor sensors and acquisition techniques, the number of indoor three-dimensional (3D) point cloud models has increased significantly. However, these massive “blind” point clouds struggle to satisfy the demands of many location-based indoor applications and GIS analyses. Robust semantic segmentation of 3D point clouds remains a challenge. In this paper, a segmentation with layout estimation network (SLENet)-based 2D–3D semantic transfer method is proposed for robust segmentation of image-based indoor 3D point clouds. Firstly, a SLENet is devised to simultaneously obtain the semantic labels and the indoor spatial layout estimation from 2D images. A pixel labeling pool is then constructed, incorporating a visual graphical model, to realize efficient 2D–3D semantic transfer for 3D point clouds, which avoids time-consuming pixel-wise label transfer and reprojection error. Finally, a 3D-contextual refinement, which exploits extra-image consistency with 3D constraints, is developed to suppress the labeling contradictions caused by multi-superpixel aggregation. The experiments were conducted on an open dataset (the NYUDv2 indoor dataset) and a local dataset. In comparison with state-of-the-art 2D semantic segmentation methods, SLENet learns features discriminative enough for inter-class segmentation while preserving clear boundaries for intra-class segmentation. Building on the strength of SLENet, the final 3D semantic segmentation, tested on the point cloud created from the local image dataset, reaches a total accuracy of 89.97%, with both object semantics and indoor structural information expressed.
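
The core 2D-to-3D transfer idea can be illustrated with a per-point pinhole projection: each 3D point is projected into the labeled image and takes the semantic label of the pixel it lands on. This is only a minimal sketch assuming known intrinsics K and camera pose (R, t); the paper's superpixel labeling pool and 3D-contextual refinement are not reproduced here.

```python
# Sketch: transfer 2D semantic labels to 3D points by camera projection.
import numpy as np

def transfer_labels(points, label_map, K, R, t, unlabeled=-1):
    """Assign each 3D point the semantic label of the pixel it projects onto."""
    cam = (R @ points.T + t.reshape(3, 1)).T          # world -> camera coordinates
    in_front = cam[:, 2] > 1e-6
    z = np.where(in_front, cam[:, 2], 1.0)            # avoid dividing by zero behind camera
    uvw = (K @ cam.T).T                               # project with the intrinsic matrix
    u = np.round(uvw[:, 0] / z).astype(int)
    v = np.round(uvw[:, 1] / z).astype(int)
    h, w = label_map.shape
    valid = in_front & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    labels = np.full(len(points), unlabeled, dtype=int)
    labels[valid] = label_map[v[valid], u[valid]]
    return labels
```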


Author(s): Mohammad Nahangi, Christopher Rausch, Carl Haas

Geometric and dimensional deviations often create challenges for component aggregation in the assembly of interchangeable components in modular construction. Although the components are designed to be interchangeable, once they are fabricated there are inevitable discrepancies between the designed and built states. Such discrepancies create problems for fitting interchangeable modular components. This paper presents a framework for optimally planning the assembly of interchangeable components based on their as-built state. A 3D point cloud model is captured, and the critical interfaces between modules are compared to the original state, integrated in the building information model (BIM), as 3D drawings. The optimization framework is implemented using two different approaches: (1) minimization of the total deviation to minimize rework, and (2) avoidance of rework by finding the best matching component for each investigated slot. Results show that the method can effectively reduce rework in modular construction through optimal assembly planning.
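
Once a deviation score is measured from the as-built scans for every component/slot pair, the first strategy (minimizing total deviation) reduces to a linear assignment problem. The sketch below illustrates that formulation with SciPy's Hungarian-algorithm solver; the deviation metric and the toy values are placeholders, not the paper's data.

```python
# Sketch: optimal component-to-slot assignment that minimizes total deviation.
import numpy as np
from scipy.optimize import linear_sum_assignment

def plan_assembly(deviation_matrix):
    """deviation_matrix[i, j]: misfit of as-built component i placed in slot j."""
    rows, cols = linear_sum_assignment(deviation_matrix)   # minimizes total deviation
    return list(zip(rows, cols)), deviation_matrix[rows, cols].sum()

# Usage with a toy 3-component example (illustrative millimetre misfits):
pairs, total = plan_assembly(np.array([[2.0, 9.5, 4.1],
                                       [7.3, 1.2, 6.8],
                                       [3.9, 5.0, 0.7]]))
```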

