Sensors ◽  
2021 ◽  
Vol 21 (14) ◽  
pp. 4628
Author(s):  
Xiaowen Teng ◽  
Guangsheng Zhou ◽  
Yuxuan Wu ◽  
Chenglong Huang ◽  
Wanjing Dong ◽  
...  

Three-dimensional reconstruction with an RGB-D camera offers a good balance between hardware cost and point cloud quality. However, owing to limitations of the sensor's structure and imaging principle, the acquired point cloud suffers from heavy noise and is difficult to register. This paper proposes a 3D reconstruction method using Azure Kinect to solve these inherent problems. Color, depth, and near-infrared images of the target are captured from six perspectives with an Azure Kinect sensor against a black background. The binarized 8-bit infrared image is multiplied with the RGB-D alignment result provided by Microsoft, which removes ghosting and most of the background noise. A neighborhood extreme filtering method is proposed to remove abrupt points from the depth image, eliminating floating points and most outlier noise before the point cloud is generated; a pass-through filter then removes the remaining outliers. An improved method based on the classic iterative closest point (ICP) algorithm is presented to merge the multi-view point clouds. By progressively reducing both the down-sampling grid size and the distance threshold between corresponding points, the point clouds of the individual views are registered in three successive passes until a complete color point cloud is obtained. Extensive experiments on rapeseed plants show that the registration success rate is 92.5%, the point cloud accuracy is 0.789 mm, a complete scan takes 302 seconds, and color is restored well. Compared with a laser scanner, the proposed method achieves comparable reconstruction accuracy at a much higher reconstruction speed, and the hardware cost of building an automatic scanning system is far lower. This research demonstrates a low-cost, high-precision 3D reconstruction technique with the potential to be widely used for non-destructive phenotyping of rapeseed and other crops.
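The abstract does not spell out the exact rejection rule of the neighborhood extreme filter. The sketch below illustrates one plausible variant, flagging a depth pixel as abrupt when it deviates from the median of its local window by more than a threshold, followed by a simple pass-through filter on the generated points; the window size, jump threshold, and depth range are illustrative values, not the paper's.

```python
import numpy as np
from scipy.ndimage import median_filter

def filter_abrupt_depth(depth_mm, win=5, jump_mm=20):
    """Zero out depth pixels that jump abruptly relative to their local window
    (hypothetical criterion: deviation from the window median > jump_mm).
    depth_mm is a uint16 depth image in millimetres, 0 = invalid."""
    d = depth_mm.astype(np.float32)
    local_median = median_filter(d, size=win)
    abrupt = np.abs(d - local_median) > jump_mm
    out = depth_mm.copy()
    out[abrupt] = 0          # invalidated pixels generate no 3D point
    out[depth_mm == 0] = 0   # keep existing holes invalid
    return out

def pass_through(points_xyz, axis=2, lo_m=0.25, hi_m=1.0):
    """Keep only points whose coordinate along `axis` lies in [lo_m, hi_m]."""
    keep = (points_xyz[:, axis] >= lo_m) & (points_xyz[:, axis] <= hi_m)
    return points_xyz[keep]
```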


2021 ◽  
Vol 11 (17) ◽  
pp. 7961
Author(s):  
Ning Lv ◽  
Chengyu Wang ◽  
Yujing Qiao ◽  
Yongde Zhang

The 3D printing process lacks real-time inspection: it remains an open-loop manufacturing process with low molding accuracy. Based on machine-vision 3D reconstruction theory, a matching fusion method is proposed to meet the applicability requirements of 3D printing process inspection. The fast nearest neighbor (FNN) method is used to search for matching point pairs. The matching points of the FFT-SIFT algorithm, which is based on the fast Fourier transform, are superimposed with those of the AKAZE algorithm and fused to obtain denser feature-point matches and richer edge-feature information. Combining the incremental SFM algorithm with the global SFM algorithm, an integrated SFM sparse point cloud reconstruction method is developed. The dense point cloud is reconstructed by the PMVS algorithm, the point cloud model is meshed by Delaunay triangulation, and the accurate 3D reconstruction model is then obtained by texture mapping. Experimental results show that, compared with the classical SIFT algorithm, feature extraction speed increases by 25.0%, the number of feature matches increases by 72%, and the relative error of the 3D reconstruction is about 0.014%, which is close to the theoretical error.
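A minimal OpenCV sketch of the fusion idea, under stated assumptions: plain SIFT stands in for the paper's FFT-SIFT variant, and a brute-force k-NN matcher with Lowe's ratio test stands in for its FNN search; the two detectors are matched separately and their correspondences concatenated.

```python
import cv2
import numpy as np

def fused_matches(img1, img2, ratio=0.75):
    """Match two grayscale views with SIFT and AKAZE independently, then
    merge the correspondences into one denser set of point pairs."""
    pts1, pts2 = [], []
    for detector, norm in ((cv2.SIFT_create(), cv2.NORM_L2),       # float descriptors
                           (cv2.AKAZE_create(), cv2.NORM_HAMMING)):  # binary descriptors
        k1, d1 = detector.detectAndCompute(img1, None)
        k2, d2 = detector.detectAndCompute(img2, None)
        matcher = cv2.BFMatcher(norm)
        for m, n in matcher.knnMatch(d1, d2, k=2):
            if m.distance < ratio * n.distance:      # Lowe's ratio test
                pts1.append(k1[m.queryIdx].pt)
                pts2.append(k2[m.trainIdx].pt)
    return np.float32(pts1), np.float32(pts2)
```

The merged point pairs can then be fed to the SFM stage (e.g. essential-matrix estimation and triangulation).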


2021 ◽  
Vol 2021 ◽  
pp. 1-10
Author(s):  
Wei He

Three-dimensional reconstruction of outdoor landscapes is of great significance for the construction of digital cities. With the rapid development of big data and Internet of Things technology, traditional image-based 3D reconstruction methods, when used to recover the 3D information of objects in an image, produce point clouds with many redundant points and insufficient density. Building on an analysis of existing 3D reconstruction techniques and the characteristics of outdoor garden scenes, this paper describes methods for detecting and extracting the relevant feature points, matching features, and repairing the holes produced by point cloud meshing. By adopting a feature-point candidate strategy and adding a mesh subdivision step, an improved PMVS algorithm is proposed that addresses the problem of sparse point clouds in 3D reconstruction. Experimental results show that the proposed method not only effectively reconstructs outdoor garden scenes in 3D but also improves execution efficiency while preserving reconstruction quality.
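The hole-repair procedure itself is not detailed in the abstract. A common first step in any such repair is to locate the hole boundaries, which are the mesh edges referenced by exactly one triangle; a small illustrative sketch:

```python
import numpy as np

def hole_boundary_edges(faces):
    """Given an (N, 3) array of triangle vertex indices, return the edges
    used by exactly one triangle; chains of these edges bound the holes
    left by point cloud meshing."""
    edges = np.sort(np.vstack([faces[:, [0, 1]],
                               faces[:, [1, 2]],
                               faces[:, [2, 0]]]), axis=1)
    uniq, counts = np.unique(edges, axis=0, return_counts=True)
    return uniq[counts == 1]
```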


Author(s):  
Xiaowen Teng ◽  
Guangsheng Zhou ◽  
Yuxuan Wu ◽  
Chenglong Huang ◽  
Wanjing Dong ◽  
...  

3D reconstruction with an RGB-D camera offers a good balance among hardware cost, point cloud quality, and automation. However, owing to limitations of the sensor's structure and imaging principle, the acquired point cloud suffers from heavy noise and is difficult to register. This paper proposes a three-dimensional reconstruction method using Azure Kinect to solve these inherent problems. Color, depth, and near-infrared images of the target are captured from six perspectives with an Azure Kinect sensor. The binarized 8-bit infrared image is multiplied with the general RGB-D alignment result provided by Microsoft to remove ghost images and most of the background noise. To suppress floating-point and outlier noise in the point cloud, a neighborhood maximum filtering method is proposed to remove abrupt points from the depth map; floating points are thus eliminated before the point cloud is generated, and a pass-through filter then removes the remaining outlier noise. To address the shortcomings of the classic ICP algorithm, an improved method is proposed: by progressively reducing the down-sampling grid size and the distance threshold between corresponding points, the point clouds of the individual views are registered in three successive passes until a complete color point cloud is obtained. Extensive experiments on rapeseed plants show that the point cloud accuracy obtained by this method is 0.739 mm, a complete scan takes 338.4 seconds, and color is restored faithfully. Compared with a laser scanner, the proposed method achieves comparable reconstruction accuracy at a much higher reconstruction speed, while the hardware cost is much lower and the scanning system is easy to automate. This research demonstrates a low-cost, high-precision 3D reconstruction technique with the potential to be widely used for non-destructive measurement of crop phenotypes.
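A minimal sketch of the coarse-to-fine ICP idea using Open3D, assuming three passes whose voxel sizes and correspondence thresholds shrink each pass; the schedules are illustrative, as the abstract does not publish the actual values.

```python
import numpy as np
import open3d as o3d

def coarse_to_fine_icp(source, target,
                       voxel_sizes=(0.01, 0.005, 0.002),  # metres, illustrative
                       max_dists=(0.02, 0.01, 0.004)):    # metres, illustrative
    """Register `source` to `target` in three ICP passes, shrinking both the
    down-sampling voxel and the correspondence distance threshold each pass."""
    transform = np.identity(4)
    for voxel, dist in zip(voxel_sizes, max_dists):
        src = source.voxel_down_sample(voxel)
        tgt = target.voxel_down_sample(voxel)
        result = o3d.pipelines.registration.registration_icp(
            src, tgt, dist, transform,
            o3d.pipelines.registration.TransformationEstimationPointToPoint())
        transform = result.transformation   # refine the estimate for the next pass
    return transform
```

Applying the returned transform to each view's full-resolution cloud and concatenating the views yields the merged color point cloud.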


Author(s):  
J. Pan ◽  
L. Li ◽  
H. Yamaguchi ◽  
K. Hasegawa ◽  
F. I. Thufail ◽  
...  

Abstract. This paper proposes a fused 3D transparent visualization method aimed at see-through imaging of large-scale cultural heritage by combining photogrammetric point cloud data with 3D reconstructed models. The 3D models are efficiently reconstructed from single monocular photos using deep learning. It is demonstrated that the proposed method can be widely applied, particularly to incomplete cultural heritage sites. In this study, the proposed method is applied to a representative example, the Borobudur temple in Indonesia, which possesses the most complete collection of Buddhist reliefs. However, some of the Borobudur reliefs were hidden behind stone walls and have not been visible since reinforcement work carried out during the Dutch colonial period. Today, only gray-scale monocular photos of those hidden parts are displayed in the Borobudur Museum. In this paper, the visible parts of the temple are first digitized into point cloud data by photogrammetric scanning. For the hidden parts, a deep-learning-based 3D reconstruction method is proposed to recover point cloud data directly from the single monocular photos held in the museum. The proposed 3D reconstruction method achieves an average point cloud accuracy of 95%. With the point cloud data of both the visible and hidden parts, the proposed transparent visualization method, stochastic point-based rendering, is applied to achieve a fused 3D transparent visualization of the temple.
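Stochastic point-based rendering achieves transparency by rendering many randomly chosen point subsets opaquely and averaging the resulting images. The sketch below shows only the subset-generation step under a simplifying assumption: a fixed keep probability stands in for the method's actual opacity control, which relates point density to the number of repeats.

```python
import numpy as np

def point_ensembles(points, colors, n_ensembles=100, keep_ratio=0.05, seed=0):
    """Yield random point subsets ("ensembles"). Rendering each subset as
    opaque points and averaging the n_ensembles images approximates a
    transparent view; keep_ratio is an illustrative opacity stand-in."""
    rng = np.random.default_rng(seed)
    for _ in range(n_ensembles):
        mask = rng.random(len(points)) < keep_ratio
        yield points[mask], colors[mask]
```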


Sensors ◽  
2021 ◽  
Vol 21 (10) ◽  
pp. 3493
Author(s):  
Gahyeon Lim ◽  
Nakju Doh

Remarkable progress has been made in recent years in modeling methods for indoor spaces, with a focus on reconstructing complex environments such as multi-room and multi-level buildings. Existing methods represent indoor structure models as a combination of several sub-spaces constructed by room segmentation or horizontal slicing approaches that divide multi-room or multi-level building environments into segments. In this study, we propose an automatic method for reconstructing multi-level indoor spaces as unified models, including inter-room and inter-floor connections, from a point cloud and a trajectory. We construct structural points from the registered point cloud and extract piece-wise planar segments from them. Then, three-dimensional space decomposition is performed and watertight meshes are generated through energy minimization with a graph-cut algorithm. The data term of the energy function is expressed as the difference in visibility between each decomposed space and the trajectory. The proposed method allows modeling of indoor spaces in complex environments, such as multi-room, room-less, and multi-level buildings. Its performance is evaluated on seven indoor space datasets.
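The abstract does not specify how the piece-wise planar segments are obtained. A common way to extract them from a registered point cloud is iterative RANSAC plane fitting; a sketch using Open3D, with illustrative thresholds rather than the paper's parameters:

```python
import open3d as o3d

def extract_planar_segments(pcd, dist=0.02, min_points=500, max_planes=20):
    """Peel planar segments off a point cloud with repeated RANSAC plane fits.
    Returns (plane_model, segment) pairs and the unassigned remainder."""
    rest, planes = pcd, []
    for _ in range(max_planes):
        if len(rest.points) < min_points:
            break
        model, inliers = rest.segment_plane(distance_threshold=dist,
                                            ransac_n=3,
                                            num_iterations=1000)
        if len(inliers) < min_points:
            break
        planes.append((model, rest.select_by_index(inliers)))
        rest = rest.select_by_index(inliers, invert=True)  # remove fitted points
    return planes, rest
```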


2015 ◽  
Vol 75 (2) ◽  
Author(s):  
Ho Wei Yong ◽  
Abdullah Bade ◽  
Rajesh Kumar Muniandy

Over the past thirty years, a number of researchers have investigated 3D organ reconstruction from medical images, and a few 3D reconstruction software packages are available on the market. However, few studies have focused on 3D reconstruction of breast cancer tumours. Due to the complexity of the methods involved, most 3D breast tumour reconstructions have been based on MRI slice data, even though mammography is the current clinical practice for breast cancer screening. Therefore, this research investigates the creation of a method that can effectively reconstruct 3D breast cancer tumours from mammograms. Several steps are proposed, including data acquisition, volume reconstruction, and volume rendering. The expected output is a 3D breast tumour model generated from correctly registered mammograms. The main purpose of this research is to develop a 3D reconstruction method that can produce a good breast cancer model from mammograms at minimal computational cost.

