Feature-Based 3D Reconstruction Model for Close-Range Objects and Its Application to Human Finger

Author(s):  
Feng Liu ◽  
Linlin Shen ◽  
David Zhang
2011 ◽  
Vol 88-89 ◽  
pp. 755-758
Author(s):  
Bing Yan He ◽  
Jian Jun Cui

This paper investigates the general procedure and methods for 3D modeling of a mine terrain surface. The terrain is reconstructed as a Delaunay triangulated network, and on this basis a Bezier triangular surface approximation is adopted, which effectively addresses the problems of unsmooth surfaces and the large data volumes produced by 3D reconstruction.
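
As a rough illustration of the workflow described above (not the authors' implementation), the following Python sketch builds a Delaunay triangulated network from scattered terrain survey points using SciPy; the point coordinates and facet handling are hypothetical.

```python
# Minimal sketch: Delaunay triangulated network from scattered terrain points.
import numpy as np
from scipy.spatial import Delaunay

# Hypothetical survey points: columns are easting, northing, elevation.
rng = np.random.default_rng(42)
points = rng.random((500, 3)) * [1000.0, 1000.0, 50.0]

# Triangulate in the horizontal plane; elevations remain attached per vertex.
tri = Delaunay(points[:, :2])

# Each row of tri.simplices indexes the three vertices of one terrain facet,
# which could subsequently be replaced by a smoother Bezier triangular patch.
for simplex in tri.simplices[:5]:
    facet = points[simplex]           # 3 x 3 array: the triangle's corner points
    centroid = facet.mean(axis=0)     # e.g. a control point for patch fitting
    print(centroid)
```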


Author(s):  
Se-Won Park ◽  
Ra Gyoung Yoon ◽  
Hyunwoo Lee ◽  
Heon-Jin Lee ◽  
Yong-Do Choi ◽  
...  

In cone-beam computed tomography (CBCT), a minimum gray-value threshold for segmentation is set to convert the CBCT images into a 3D mesh reconstruction model. This study aimed to assess the accuracy of image registration of optical scans to 3D CBCT reconstructions created with different gray-value segmentation thresholds in partially edentulous jaw conditions. CBCT of a dentate jaw was reconstructed into 3D mesh models using three different gray-value thresholds (−500, 500, and 1500), and three partially edentulous models with different numbers of remaining teeth (4, 8, and 12) were made from each 3D reconstruction model. To merge the CBCT and optical scan data, optical scan images were registered to the respective 3D CBCT reconstructions using a point-based best-fit algorithm. Registration accuracy was assessed by measuring the positional deviation between the matched 3D images. The Kruskal–Wallis test and a post hoc Mann–Whitney U test with Bonferroni correction were used to compare the groups (α = 0.05), and the interaction between the experimental factors was evaluated with a two-way analysis of variance. The positional deviations were lowest with the threshold of 500, followed by 1500 and then −500. A significant interaction was found between the gray-value threshold and the number of remaining teeth on registration accuracy, with the largest deviation observed in the arch model with four teeth reconstructed at a threshold of −500. The gray-value threshold of CBCT segmentation therefore affects the accuracy of registering optical scans to the 3D CBCT reconstruction model; a gray value that properly visualizes the anatomical structures should be chosen, especially when few teeth remain in the dental arch.
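
A minimal sketch of the thresholding step described above, assuming scikit-image and a synthetic CBCT volume (the study's actual segmentation software is not specified here): marching cubes extracts the surface mesh at a chosen gray-value threshold, the quantity varied between −500, 500, and 1500 in the experiment.

```python
# Minimal sketch: gray-value thresholding of a CBCT volume to a surface mesh.
import numpy as np
from skimage import measure

# Hypothetical CBCT volume in gray values (in practice loaded from DICOM slices).
rng = np.random.default_rng(0)
volume = rng.normal(loc=0.0, scale=800.0, size=(64, 64, 64))

# Only voxels above `level` contribute to the reconstructed surface; this plays
# the role of the study's -500 / 500 / 1500 segmentation thresholds.
level = 500.0
verts, faces, normals, values = measure.marching_cubes(volume, level=level)

# `verts` and `faces` define the 3D mesh to which an optical scan would later
# be registered with a point-based best-fit algorithm.
print(verts.shape, faces.shape)
```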


2013 ◽  
Vol 718-720 ◽  
pp. 2184-2190
Author(s):  
Bao Quan ◽  
Jiang Nan

Tomographic particle image velocimetry (Tomo-PIV) is a recently developed technique for three-component, three-dimensional (3C-3D) velocity measurement based on the tomographic reconstruction of a 3D light-intensity field from multiple two-dimensional projections. This paper applies a simplification of the 3D tomographic reconstruction model that reduces the problem from a 3D volume with 2D images to a 2D slice with 1D lines, turning the 3D reconstruction into a 2D plane-reconstruction problem solved by optical tomography. The principles and details of the tomographic algorithm are discussed, and the ART and MART reconstruction algorithms are studied via a computer-simulated image reconstruction procedure. The three-dimensional volume particle field reconstructed with the MART algorithm on the simplified model achieves a high reconstruction quality of Q = 81.37%, demonstrating that the simplified MART reconstruction is feasible and can be applied to the reconstruction of 3D particle fields in a Tomo-PIV system.
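
The MART update mentioned above is multiplicative: each voxel intensity is scaled by the ratio of measured to projected intensity along each line of sight, raised to a relaxation-weighted power. A minimal, illustrative sketch (not the paper's code; the weight matrix and measurements are synthetic) is:

```python
# Minimal sketch: multiplicative ART (MART) for a simplified 2D-slice / 1D-line setup.
import numpy as np

def mart_pass(x, A, y, mu=1.0, eps=1e-12):
    """One MART sweep.
    x : current voxel intensities, shape (n_voxels,)
    A : weight matrix, shape (n_rays, n_voxels); A[i, j] is voxel j's contribution to ray i
    y : measured line-of-sight intensities, shape (n_rays,)
    """
    for i in range(A.shape[0]):
        proj = A[i] @ x                     # projected intensity along ray i
        ratio = y[i] / max(proj, eps)       # measured / projected
        x = x * ratio ** (mu * A[i])        # multiplicative, element-wise update
    return x

# Toy example with a known ground-truth field.
rng = np.random.default_rng(0)
A = rng.random((40, 100))
x_true = rng.random(100)
y = A @ x_true
x = np.ones(100)
for _ in range(20):
    x = mart_pass(x, A, y)
```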


Author(s):  
F.I. Apollonio ◽  
A. Ballabeni ◽  
M. Gaiani ◽  
F. Remondino

Every day new tools and algorithms for automated image processing and 3D reconstruction become available, making it possible to process large networks of unoriented and markerless images and to deliver sparse 3D point clouds in reasonable processing time. In this paper we evaluate feature-based methods used to automatically extract the tie points required by calibration and orientation procedures, in order to better understand their performance for 3D reconstruction purposes. The tests, based on an analysis of the SIFT algorithm and its most widely used variants, processed several datasets and analysed various parameters and outcomes (e.g. number of oriented cameras, average rays per 3D point, average intersection angle per 3D point, theoretical precision of the computed 3D object coordinates, etc.).
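
A minimal sketch of feature-based tie-point extraction of the kind evaluated here, using OpenCV's SIFT implementation and Lowe's ratio test (the image file names and the 0.75 ratio threshold are assumptions, not values from the paper):

```python
# Minimal sketch: SIFT tie-point extraction between two overlapping images.
import cv2

img1 = cv2.imread("view_a.jpg", cv2.IMREAD_GRAYSCALE)   # hypothetical file names
img2 = cv2.imread("view_b.jpg", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Brute-force matching with Lowe's ratio test to keep distinctive matches.
matcher = cv2.BFMatcher(cv2.NORM_L2)
good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
        if m.distance < 0.75 * n.distance]

# The surviving matches are candidate tie points for orientation and calibration.
print(f"{len(good)} tie-point candidates")
```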


Author(s):  
C. Stamatopoulos ◽  
C. S. Fraser

Automated close-range photogrammetric network orientation and camera calibration have traditionally been associated with the use of coded targets in the object space to allow for an initial relative orientation (RO) and subsequent spatial resection of the images. However, over the last decade, advances coming mainly from the computer vision (CV) community have allowed fully automated orientation via feature-based matching techniques. Such methodologies offer advantages for various types of applications, as well as for cases where the use of artificial targets might not be possible or preferable, for example when attempting calibration from low-level aerial imagery, as with UAVs, or when calibrating long focal length lenses where small image scales call for inconveniently large coded targets. While there are now a number of CV-based algorithms, with accompanying open-source software, for multi-image orientation within narrow-baseline networks, from a photogrammetric standpoint the results are typically disappointing because the metric integrity of the resulting models is generally poor, or even unknown. The objective addressed in this paper is target-free automatic multi-image orientation, maintaining metric integrity, within networks that incorporate wide-baseline imagery. The focus is both on the development of a methodology that overcomes the shortcomings present in current CV algorithms and on the photogrammetric priorities and requirements of current processing pipelines. The paper also reports on the application of the proposed methodology to automated target-free camera self-calibration and discusses the process via practical examples.
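
As a rough, illustrative counterpart to the feature-based orientation discussed above (this is not the authors' pipeline), the sketch below recovers the relative orientation of one image pair from point correspondences via the essential matrix with OpenCV, assuming a known interior orientation K; the synthetic geometry stands in for matched tie points from real imagery.

```python
# Minimal sketch: target-free relative orientation of an image pair from matches.
import cv2
import numpy as np

K = np.array([[2000.0, 0.0, 960.0],
              [0.0, 2000.0, 540.0],
              [0.0, 0.0, 1.0]])                    # hypothetical interior orientation

# Synthetic object points seen from two cameras related by a known motion.
rng = np.random.default_rng(1)
X = rng.uniform([-1.0, -1.0, 4.0], [1.0, 1.0, 8.0], size=(200, 3))
R_true, _ = cv2.Rodrigues(np.array([0.0, 0.2, 0.0]))   # small yaw rotation
t_true = np.array([[1.0], [0.0], [0.0]])                # unit baseline along x

def project(X, R, t):
    cam = R @ X.T + t                 # object points in camera coordinates
    uv = (K @ cam).T
    return uv[:, :2] / uv[:, 2:3]     # perspective division to pixel coordinates

pts1 = project(X, np.eye(3), np.zeros((3, 1)))
pts2 = project(X, R_true, t_true)

# Relative orientation: essential matrix with RANSAC, then pose decomposition.
E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)

# In a metric pipeline, R and t would seed a bundle adjustment with self-calibration.
print(np.allclose(R, R_true, atol=1e-2))
```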

