Automatic pipe and elbow recognition from three-dimensional point cloud model of industrial plant piping system using convolutional neural network-based primitive classification

2020 ◽  
Vol 116 ◽  
pp. 103236
Author(s):  
Youngdoo Kim ◽  
Cong Hong Phong Nguyen ◽  
Young Choi
Author(s):  
C. Altuntas

<p><strong>Abstract.</strong> Image-based dense point cloud creation is an easy and low-cost method for the three-dimensional digitization of small- and large-scale objects and surfaces. It is an especially attractive method for cultural heritage documentation. In this method, the reprojection error on conjugate keypoints indicates the accuracy of the model and of the keypoint localisation. In addition, the sequential registration of images of large-scale historical buildings accumulates a large registration error. The accuracy of the model should therefore be improved with control points or loop-closure imaging. The registration of the point cloud model into the georeference system is performed using control points. In this study, the historical Sultan Selim Mosque, built in the sixteenth century by the Great Architect Sinan, was modelled via a photogrammetric dense point cloud. The reprojection error and the number of keypoints were evaluated for different base/length ratios. In addition, the georeferencing accuracy was evaluated with many configurations of control points, with and without loop-closure imaging.</p>
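The reprojection error discussed above is simply the image-space distance between an observed keypoint and the projection of its triangulated 3D point. A minimal numpy sketch of that general definition (pinhole camera, hypothetical helper names; not the authors' code):

```python
import numpy as np

def reproject(P, X):
    """Project homogeneous 3D points X (N, 4) with a 3x4 camera matrix P."""
    x = (P @ X.T).T              # (N, 3) homogeneous image points
    return x[:, :2] / x[:, 2:3]  # normalize to pixel coordinates

def reprojection_error(P, X, observed):
    """Mean Euclidean distance between observed keypoints and reprojections."""
    return np.linalg.norm(reproject(P, X) - observed, axis=1).mean()

# Identity-rotation camera at the origin with unit focal length.
P = np.hstack([np.eye(3), np.zeros((3, 1))])
X = np.array([[0.0, 0.0, 2.0, 1.0]])   # one 3D point in front of the camera
obs = np.array([[0.0, 0.0]])           # its observed image coordinates
print(reprojection_error(P, X, obs))   # 0.0 for a perfect observation
```

In bundle adjustment this quantity is minimized over all cameras and points at once; a large residual flags either a mislocalized keypoint or an inaccurate camera pose.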


2020 ◽  
Vol 57 (16) ◽  
pp. 161022
Author(s):  
任永梅 Ren Yongmei ◽  
杨杰 Yang Jie ◽  
郭志强 Guo Zhiqiang ◽  
陈奕蕾 Chen Yilei

Sensors ◽  
2019 ◽  
Vol 19 (15) ◽  
pp. 3345 ◽  
Author(s):  
Guoxiang Sun ◽  
Xiaochan Wang ◽  
Ye Sun ◽  
Yongqian Ding ◽  
Wei Lu

Nondestructive plant growth measurement is essential for researching plant growth and health. A nondestructive measurement system for retrieving plant information covers both morphological and physiological characteristics, but most systems use two independent subsystems for the two types. In this study, a highly integrated, multispectral, three-dimensional (3D) nondestructive measurement system for greenhouse tomato plants was designed around a Kinect sensor, an SOC710 hyperspectral imager, an electric rotary table, and other components. A heterogeneous-sensor image registration technique based on the Fourier transform was proposed to register the SOC710 multispectral reflectance into the Kinect depth image coordinate system. Furthermore, a multi-view RGB-D 3D reconstruction method based on pose estimation and self-calibration of the Kinect sensor was developed to reconstruct a multispectral 3D point cloud model of the tomato plant. In an experiment measuring plant canopy chlorophyll, soil and plant analyzer development (SPAD) measurement models of relative chlorophyll content were built on the multi-view multispectral 3D point cloud model and on a single-view point cloud model, and their performance was compared and analyzed. The results revealed that the measurement model established using characteristic variables from the multi-view point cloud model was superior to the one established using variables from the single-view point cloud model. The multispectral 3D reconstruction approach can therefore reconstruct a plant multispectral 3D point cloud model, improving on the traditional two-dimensional image-based SPAD measurement method and enabling precise, efficient, high-throughput measurement of plant chlorophyll.
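Fourier-transform-based image registration, as mentioned above, is commonly realized as phase correlation: a translation between two images is recovered from the peak of the inverse FFT of the normalized cross-power spectrum. A minimal numpy sketch of that general technique (not the authors' implementation; it assumes a pure integer translation):

```python
import numpy as np

def phase_correlation(a, b):
    """Estimate the integer (dy, dx) translation of a relative to b
    from the normalized cross-power spectrum."""
    Fa, Fb = np.fft.fft2(a), np.fft.fft2(b)
    cross = Fa * np.conj(Fb)
    cross /= np.abs(cross) + 1e-12          # keep phase only
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap shifts larger than half the image size to negative offsets.
    h, w = a.shape
    return (dy - h if dy > h // 2 else dy, dx - w if dx > w // 2 else dx)

rng = np.random.default_rng(0)
img = rng.random((64, 64))
shifted = np.roll(img, shift=(5, -3), axis=(0, 1))
print(phase_correlation(shifted, img))      # (5, -3)
```

Registering sensors with different modalities (here, hyperspectral vs. depth) typically adds a scale/rotation model on top of this translation estimate, but the phase-only correlation above is the core step.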


2012 ◽  
Vol 594-597 ◽  
pp. 2398-2401
Author(s):  
Dong Ling Ma ◽  
Jian Cui ◽  
Fei Cai

This paper presents a scheme for the fast construction of three-dimensional (3D) models from laser scanning data. In the approach, point clouds are first scanned from different positions, and the clouds from neighbouring scan stations are spliced automatically into a unified point cloud model; feature lines are then extracted from the point cloud, and the framework of the building is extracted to generate the 3D model. The experiment shows that a 3D visualization model can be generated quickly with 3D laser scanning technology, offering an application model and technical advantages that traditional surveying methods cannot provide.
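Automatic splicing of neighbouring scans is usually done with rigid registration, most commonly some variant of iterative closest point (ICP). A minimal numpy sketch of the general idea (brute-force nearest neighbours plus a Kabsch/SVD alignment step; not the paper's algorithm, and all names are illustrative):

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst (Kabsch)."""
    cs, cd = src.mean(0), dst.mean(0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(src, dst, iters=20):
    """Iterative closest point with brute-force nearest neighbours."""
    cur = src.copy()
    for _ in range(iters):
        d = np.linalg.norm(cur[:, None] - dst[None], axis=2)
        matched = dst[d.argmin(axis=1)]
        R, t = best_rigid_transform(cur, matched)
        cur = cur @ R.T + t
    return cur

# Toy check: align a scan against a rotated and translated copy of itself.
rng = np.random.default_rng(1)
scan_a = rng.random((50, 3))
theta = 0.1
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0, 0, 1]])
scan_b = scan_a @ Rz.T + np.array([0.05, -0.02, 0.01])
aligned = icp(scan_a, scan_b)
resid = np.linalg.norm(aligned[:, None] - scan_b[None], axis=2).min(axis=1).mean()
```

Production pipelines replace the brute-force matching with a k-d tree and often initialize from scanned targets or coarse feature matches before refining with ICP.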


2019 ◽  
Vol 15 (1) ◽  
pp. 155014771982604 ◽  
Author(s):  
Jing Liu ◽  
Yajie Yang ◽  
Douli Ma ◽  
Wenjuan He ◽  
Yinghui Wang

A new blind watermarking scheme for three-dimensional point-cloud models is proposed based on vertex curvature to achieve an appropriate trade-off between transparency and robustness. The root mean square curvature of the local neighbourhood of every vertex is first calculated for the three-dimensional point-cloud model; the vertices with larger root mean square curvature are then used to carry the watermark information, while the vertices with smaller root mean square curvature are exploited to establish the synchronization relation between watermark embedding and extraction. The three-dimensional point-cloud model is divided into ball rings, and the watermark information is inserted by modifying the radial radii of the vertices within the ball rings. The vertices that establish the synchronization relation do not carry watermark information; the synchronization relation is therefore not affected by the embedded watermark. Experimental results show that the proposed method outperforms other well-known three-dimensional point-cloud model watermarking methods in terms of imperceptibility and robustness, especially against geometric attacks.
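Modifying radial radii to carry watermark bits can be illustrated with quantization index modulation (QIM) on the vertex radii: each bit selects a quantization lattice, and vertices are nudged along their radial direction onto it. This is a generic sketch of the radial-modulation idea only, not the paper's curvature-guided, ball-ring scheme, and all function names are hypothetical:

```python
import numpy as np

def radial_radii(points, center):
    """Distance of each vertex from the model centre."""
    return np.linalg.norm(points - center, axis=1)

def embed_bit(points, center, bit, step=0.01):
    """Move vertices radially so each radius lands on the quantization
    lattice for `bit` (simple QIM: bit 0 -> multiples of step,
    bit 1 -> multiples of step offset by step/2)."""
    r = radial_radii(points, center)
    target = np.round(r / step) * step + (step / 2 if bit else 0.0)
    return center + (points - center) * (target / r)[:, None]

def extract_bit(points, center, step=0.01):
    """Blind extraction: decide which lattice the radii are closest to."""
    r = radial_radii(points, center)
    frac = np.mod(r, step) / step
    return int(np.mean(np.minimum(frac, 1.0 - frac)) > 0.25)

rng = np.random.default_rng(2)
cloud = rng.random((200, 3)) + 1.0   # offset keeps radii well away from zero
c = cloud.mean(axis=0)
marked = embed_bit(cloud, c, 1)
```

The step size controls the transparency/robustness trade-off the abstract describes: a larger step survives stronger noise but displaces vertices further.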


2016 ◽  
Vol 12 (12) ◽  
pp. 1688-1694 ◽  
Author(s):  
Ping Su ◽  
Wenbo Cao ◽  
Jianshe Ma ◽  
Bingchao Cheng ◽  
Xianting Liang ◽  
...  

2020 ◽  
Vol 15 ◽  
pp. 155892502092154
Author(s):  
Zhicai Yu ◽  
Yueqi Zhong ◽  
R Hugh Gong ◽  
Haoyang Xie

To fill the binary top-view image of a draped fabric into a comparable grayscale image with detailed shade information, the three-dimensional point cloud of the draped fabric was obtained with a self-built three-dimensional scanning device. The point cloud was encapsulated into a triangular mesh, and the binary and grayscale images of the draped fabric were rendered separately in virtual environments. A pix2pix convolutional neural network taking the binary image of the draped fabric as input and the grayscale image as output was constructed and trained, establishing the relationship between the two image types. The results show that the trained pix2pix network can fill unknown binary top-view images of draped fabric into grayscale images, with an average pixel cosine similarity between the filled results and the ground truth of up to 0.97.
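The reported pixel cosine similarity treats each image as a flattened vector and measures the cosine of the angle between prediction and ground truth. A minimal numpy sketch of that metric (assuming plain cosine similarity over raw pixel values, which is the usual reading):

```python
import numpy as np

def pixel_cosine_similarity(a, b):
    """Cosine of the angle between two images flattened to vectors."""
    a, b = a.ravel().astype(float), b.ravel().astype(float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

pred = np.array([[200, 100], [50, 25]])    # toy 2x2 "filled" grayscale image
truth = np.array([[190, 110], [55, 20]])   # toy ground truth
print(round(pixel_cosine_similarity(pred, truth), 4))   # 0.9977
```

Note that cosine similarity is insensitive to a global brightness scale, so it rewards correct shading structure rather than absolute intensity.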


Sensors ◽  
2018 ◽  
Vol 18 (11) ◽  
pp. 3681 ◽  
Author(s):  
Le Zhang ◽  
Jian Sun ◽  
Qiang Zheng

The recognition of three-dimensional (3D) lidar (light detection and ranging) point clouds remains a significant issue in point cloud processing. Traditional point cloud recognition employs 3D point clouds of the whole object. Lidar data, however, is a collection of two-and-a-half-dimensional (2.5D) point clouds, each obtained from a single view by scanning the object within a certain field angle. To deal with this, we first propose a novel representation that expresses a 3D point cloud as 2.5D point clouds from multiple views, and we generate the multi-view 2.5D point cloud data with the Point Cloud Library (PCL). We then design an effective recognition model based on a multi-view convolutional neural network. The model acts directly on the raw 2.5D point clouds from all views and learns a global feature descriptor by fusing the per-view features in a view fusion network. Experiments show that our approach achieves excellent recognition performance without any requirement for three-dimensional reconstruction or point cloud preprocessing. The approach thus effectively addresses the recognition problem for lidar point clouds and has considerable practical value.
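A single-view 2.5D point cloud is essentially a depth map: along each viewing ray only the nearest surface point survives, and everything behind it is occluded. A minimal numpy sketch of an orthographic z-buffer projection illustrating that idea (the paper uses PCL for view generation; this is not its code):

```python
import numpy as np

def to_depth_map(points, res=32, bounds=(-1.0, 1.0)):
    """Render points (N, 3) viewed along +z into a res x res depth map,
    keeping the nearest (smallest-z) point per pixel -- a 2.5D view."""
    lo, hi = bounds
    depth = np.full((res, res), np.inf)    # inf marks empty pixels
    ij = ((points[:, :2] - lo) / (hi - lo) * (res - 1)).round().astype(int)
    keep = ((ij >= 0) & (ij < res)).all(axis=1)
    for (i, j), z in zip(ij[keep], points[keep, 2]):
        depth[j, i] = min(depth[j, i], z)  # nearest surface wins
    return depth

pts = np.array([[0.0, 0.0, 0.5],
                [0.0, 0.0, 0.2],   # occludes the point behind it
                [0.9, 0.9, 0.7]])
d = to_depth_map(pts, res=4)
```

Rotating the cloud before projection yields the different views; a multi-view network then fuses features from these per-view 2.5D inputs into a global descriptor.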

