Microscopic 3D reconstruction based on point cloud data generated using defocused images

2021 ◽  
Vol 54 (9-10) ◽  
pp. 1309-1318
Author(s):  
Xiangjun Liu ◽  
Wenfeng Zheng ◽  
Yuanyuan Mou ◽  
Yulin Li ◽  
Lirong Yin

Most 3D reconstruction requirements for microscopic scenes arise in industrial inspection, which demands real-time object reconstruction and fast acquisition of surface information. This demand is difficult to meet in microscopic scenarios because the microscope's depth of field is shallow, and the image blurs easily whenever the object's surface lies outside the focal plane. Under a video microscope, the images captured frame by frame are therefore mostly defocused. In conventional 3D reconstruction, one or a few 2D images are used for geometric-optical calculation, and an affine transformation recovers the object's 3D information. A defocused image, however, lacks the information needed for this affine transformation: its complete information must be restored from a whole single-view defocus image sequence. Recovering 3D information from a defocus image sequence is thus harder than in ordinary scenes, and real-time performance is harder to guarantee. In this paper, the surface reconstruction process based on point-cloud data is studied. A Delaunay triangulation method based on plane projection, together with a synthesis algorithm, is used to complete the surface fitting. Finally, a 3D reconstruction experiment on the collected image sequence is carried out. The experimental results show that the reconstructed surface conforms to the surface contour of the selected object.
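The plane-projection Delaunay step described above can be sketched as follows (a minimal illustration using SciPy's qhull wrapper, not the authors' implementation; it assumes the cloud is a single-valued surface z = f(x, y) over the projection plane, as in a single-view microscope depth map):

```python
import numpy as np
from scipy.spatial import Delaunay

def triangulate_by_projection(points_3d):
    """Mesh a 3D point cloud by triangulating its projection onto the XY plane.

    Works when the surface is (approximately) single-valued over XY,
    so each projected 2D point corresponds to one surface point.
    """
    tri = Delaunay(points_3d[:, :2])   # triangulate the projected 2D points
    return tri.simplices               # (n_triangles, 3) index triples

# Example: four points over a convex quadrilateral footprint
pts = np.array([[0.0, 0.0, 0.10],
                [1.0, 0.0, 0.20],
                [0.0, 1.0, 0.30],
                [1.2, 1.1, 0.15]])
faces = triangulate_by_projection(pts)   # two triangles covering the quad
```

The returned triangles index into the original 3D points, so lifting the 2D triangulation back to 3D gives the fitted surface directly.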

2021 ◽  
Author(s):  
Chengxin Ju ◽  
Yuanyuan Zhao ◽  
Fengfeng Wu ◽  
Rui Li ◽  
Tianle Yang ◽  
...  

Abstract Background: Three-dimensional (3D) laser scanning technology can rapidly extract the surface geometric features of maize plants to achieve non-destructive monitoring of maize phenotypes. However, extracting phenotypic parameters of maize plants from laser point cloud data is challenging. Methods: In this paper, a rotational scanning method was used to collect potted maize point cloud data from different perspectives with a laser scanner. The maize point clouds were mesh-reconstructed and aligned using the greedy projection triangulation algorithm and the iterative closest point (ICP) algorithm, and the random sample consensus (RANSAC) algorithm was used to segment the stem and leaf point clouds of a single maize plant to obtain plant height and leaf parameters. Results: The R2 between predicted and measured plant height was above 0.95, and the R2 values of predicted leaf length, leaf width, and leaf area against the measured values were 0.938, 0.878, and 0.956, respectively. Conclusions: The 3D reconstruction of maize plants using the laser scanner performed well, and the phenotypic parameters obtained from the reconstructed 3D model had high accuracy. The results support the practical application of plant 3D reconstruction and provide guidance for plant parameter acquisition as well as theoretical methods for intelligent agriculture research.
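The alignment step at the heart of each ICP iteration has a well-known closed form; a minimal numpy sketch of that SVD (Kabsch) step, not the authors' code, looks like this:

```python
import numpy as np

def best_fit_transform(A, B):
    """Least-squares rigid transform (R, t) mapping point set A onto B.

    This is the closed-form step solved inside each ICP iteration,
    once correspondences between the two clouds have been chosen.
    """
    cA, cB = A.mean(axis=0), B.mean(axis=0)
    H = (A - cA).T @ (B - cB)          # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:           # guard against a reflection
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = cB - R @ cA
    return R, t

# Example: recover a known 30-degree rotation about z plus a translation
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.5, -1.0, 2.0])
A = np.random.default_rng(0).random((50, 3))
B = A @ R_true.T + t_true
R, t = best_fit_transform(A, B)
```

Full ICP wraps this step in a loop that re-estimates nearest-neighbor correspondences between the clouds after each transform update.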


2014 ◽  
Vol 988 ◽  
pp. 467-470
Author(s):  
Liang Liu ◽  
Shu Guang Dai

3D reconstruction, as the basis of many applications such as 3D printing, has become increasingly important for many enterprises and researchers. A very important step in 3D reconstruction is the joining together of point clouds. This paper introduces the structure of a system for obtaining three-dimensional point cloud data, and a method of using the system to acquire point cloud data and join the point clouds through rotation and translation of the coordinate system. Experiments show that this method achieves good results.
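Joining point clouds through rotation and translation of the coordinate system can be illustrated with a turntable-style sketch (a hypothetical helper, not the paper's system; it assumes the rotation axis is z and the angle between consecutive views is known):

```python
import numpy as np

def merge_turntable_scans(scans, angle_step_deg):
    """Merge per-view point clouds from a turntable-style setup.

    View i is rotated by -i * angle_step about the (assumed) z axis
    back into the first view's coordinate frame, then all views are
    concatenated into one cloud.
    """
    merged = []
    for i, pts in enumerate(scans):
        a = np.deg2rad(-i * angle_step_deg)
        R = np.array([[np.cos(a), -np.sin(a), 0.0],
                      [np.sin(a),  np.cos(a), 0.0],
                      [0.0, 0.0, 1.0]])
        merged.append(pts @ R.T)       # rotate rows into frame 0
    return np.vstack(merged)

# Example: one point seen in two views, table rotated +90 deg between them
scan0 = np.array([[1.0, 0.0, 0.0]])
scan1 = np.array([[0.0, 1.0, 0.0]])   # same point after the +90 deg turn
merged = merge_turntable_scans([scan0, scan1], angle_step_deg=90)
```

In practice the axis and angle come from the system's calibration, and a residual registration step (e.g. ICP) can refine the merged cloud.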


Electronics ◽  
2020 ◽  
Vol 9 (5) ◽  
pp. 836 ◽  
Author(s):  
Young-Hoon Jin ◽  
In-Tae Hwang ◽  
Won-Hyung Lee

Augmented reality (AR) is a useful visualization technology that displays information by adding virtual images to the real world. In AR systems that require three-dimensional information, point cloud data is easy to use after real-time acquisition; however, it is difficult to measure and visualize real-world objects in real time due to the large amount of data and the matching process. In this paper, we explored a method of estimating pipes from point cloud data and visualizing them in real time through augmented reality devices. In general, pipe estimation in a point cloud uses a Hough transform and is performed through preprocessing steps such as noise filtering, normal estimation, or segmentation. However, this has the disadvantage of slow execution due to the large amount of computation. Therefore, for real-time visualization on augmented reality devices, a fast cylinder matching method using random sample consensus (RANSAC) is required. In this paper, we proposed parallel processing, multiple frames, adjustable scale, and error correction for real-time visualization. The real-time visualization method obtains a depth image from the sensor and constructs a uniform point cloud using a voxel grid algorithm. The constructed data is then analyzed with the fast RANSAC-based cylinder matching method. With the spread of various AR devices, real-time visualization through augmented reality is expected to be used to identify problems, such as the sagging of pipes, through real-time measurements at plant sites.
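The voxel-grid step that produces the uniform point cloud can be sketched in a few lines of numpy (an illustrative centroid-per-voxel reduction, not the paper's implementation):

```python
import numpy as np

def voxel_downsample(points, voxel_size):
    """Uniform downsampling: replace all points in each voxel by their centroid."""
    keys = np.floor(points / voxel_size).astype(np.int64)   # voxel index per point
    _, inv = np.unique(keys, axis=0, return_inverse=True)   # group points by voxel
    inv = inv.ravel()
    n = inv.max() + 1
    sums = np.zeros((n, points.shape[1]))
    counts = np.zeros(n)
    np.add.at(sums, inv, points)       # accumulate per-voxel sums
    np.add.at(counts, inv, 1)          # and per-voxel point counts
    return sums / counts[:, None]      # centroid of each occupied voxel

# Example: three points, two of which share a voxel of size 1.0
pts = np.array([[0.1, 0.1, 0.1],
                [0.2, 0.2, 0.2],
                [1.1, 1.1, 1.1]])
uniform = voxel_downsample(pts, voxel_size=1.0)   # two points remain
```

Bounding the density this way keeps the per-frame cost of the subsequent RANSAC cylinder fit roughly constant, which is what makes real-time matching on an AR device plausible.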

