Instrument contact force estimation using endoscopic image sequence and 3D reconstruction model

Author(s):  
Wei-Che Lin ◽  
Kai-Tai Song
2012 ◽  
Vol 2012 ◽  
pp. 1-11 ◽  
Author(s):  
Alireza Behrad ◽  
Nadia Roodsarabi

One of the most important issues in human motion analysis is the tracking and 3D reconstruction of human motion, which relies on the positions of anatomical points. These points uniquely define the position and orientation of all anatomical segments. In this work, a new method is proposed for tracking and 3D reconstruction of human motion from the image sequence of a monocular static camera. In this method, 2D tracking is used for 3D reconstruction, and a database of selected frames is used to correct the tracking process. The method utilizes a new image descriptor based on the discrete cosine transform (DCT), which is employed in different stages of the algorithm. The advantage of this descriptor is the capability to select appropriate frequency regions for various tasks, which results in efficient tracking and pose-matching algorithms. The tracking and matching algorithms are based on reference descriptor matrices (RDMs), which are updated after each stage according to the frequency regions in the DCT blocks. Finally, 3D reconstruction is performed using Taylor's method. Experimental results show the promise of the algorithm.
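To illustrate the idea of a DCT-based block descriptor, a minimal sketch follows. This is not the authors' implementation: the 8×8 block size, the orthonormal DCT-II basis, and the selection of low-frequency coefficients by index sum `u + v` are all assumptions made for illustration.

```python
import numpy as np

def dct_matrix(n=8):
    # Orthonormal DCT-II basis matrix (rows are frequency components)
    k = np.arange(n)
    basis = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    basis[0] /= np.sqrt(2.0)
    return basis * np.sqrt(2.0 / n)

def block_descriptor(patch, keep=10):
    # 2D DCT of a square patch, keeping the lowest-frequency coefficients
    # (zig-zag order approximated here by the index sum u + v)
    basis = dct_matrix(patch.shape[0])
    coeffs = basis @ patch @ basis.T
    u, v = np.indices(coeffs.shape)
    order = np.argsort((u + v).ravel(), kind="stable")
    return coeffs.ravel()[order][:keep]

patch = np.arange(64, dtype=float).reshape(8, 8)
desc = block_descriptor(patch)
```

Restricting the descriptor to selected frequency regions is what would allow different stages (tracking vs. pose matching) to emphasize coarse structure or fine detail.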


2011 ◽  
Vol 88-89 ◽  
pp. 755-758
Author(s):  
Bing Yan He ◽  
Jian Jun Cui

This paper studies the general procedure and methods of 3D modeling of a mine terrain surface. The terrain model is reconstructed with a Delaunay triangular network; on this basis, Bézier triangular surfaces are adopted for approximation, which effectively solves the problems of unsmooth surfaces and the large data volume caused by 3D reconstruction.
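The smoothing step can be sketched as the evaluation of a cubic triangular Bézier patch over one triangle of the network. This is a generic illustration, not the paper's method: the control net below is hypothetical, and the degree (cubic) and Bernstein-form evaluation are assumptions.

```python
import numpy as np
from math import factorial

def bezier_triangle(control, u, v):
    # Evaluate a cubic triangular Bezier patch at barycentric (u, v, w).
    # control maps (i, j, k) with i + j + k = 3 to a 3D control point.
    w = 1.0 - u - v
    point = np.zeros(3)
    for (i, j, k), c in control.items():
        bern = factorial(3) / (factorial(i) * factorial(j) * factorial(k))
        point += bern * (u ** i) * (v ** j) * (w ** k) * np.asarray(c, float)
    return point

# Hypothetical control net over one terrain triangle (10 control points)
control = {(i, j, k): [i / 3.0, j / 3.0, 0.1 * (i + j)]
           for i in range(4) for j in range(4) for k in range(4)
           if i + j + k == 3}

corner = bezier_triangle(control, 1.0, 0.0)  # reproduces control point (3,0,0)
```

Evaluating such patches at a few interior points per triangle yields a smooth surface without densifying the underlying Delaunay mesh, which is where the data-volume saving comes from.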


2020 ◽  
Vol 1550 ◽  
pp. 032051
Author(s):  
Yun-peng Liu ◽  
Xing-peng Yan ◽  
Ning Wang ◽  
Xin Zhang ◽  
Zhe Li

Author(s):  
Se-Won Park ◽  
Ra Gyoung Yoon ◽  
Hyunwoo Lee ◽  
Heon-Jin Lee ◽  
Yong-Do Choi ◽  
...  

In cone-beam computed tomography (CBCT), a minimum gray-value threshold of segmentation is set to convert the CBCT images into a 3D mesh reconstruction model. This study aimed to assess the accuracy of registering optical scans to 3D CBCT reconstructions created with different segmentation gray-value thresholds in partially edentulous jaw conditions. CBCT of a dentate jaw was reconstructed into 3D mesh models using three gray-value thresholds (−500, 500, and 1500), and three partially edentulous models with different numbers of remaining teeth (4, 8, and 12) were made from each 3D reconstruction model. To merge the CBCT and optical scan data, the optical scan images were registered to the respective 3D CBCT reconstructions using a point-based best-fit algorithm. Registration accuracy was assessed by measuring the positional deviation between the matched 3D images. The Kruskal–Wallis test and a post hoc Mann–Whitney U test with Bonferroni correction were used to compare the groups (α = 0.05), and interactions between the experimental factors were evaluated with two-way analysis of variance. The positional deviations were lowest with the threshold of 500, followed by 1500 and then −500. A significant interaction was found between the gray-value threshold and the number of remaining teeth on registration accuracy; the largest deviation was observed in the arch model with four teeth reconstructed with a gray-value threshold of −500. Thus, the gray-value threshold of CBCT segmentation affects the accuracy of registering optical scans to the 3D CBCT reconstruction model. A gray value that properly visualizes the anatomical structures should be chosen, especially when few teeth remain in the dental arch.
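The effect of the threshold choice can be illustrated with a toy example. The gray values below are illustrative only, not real CBCT data; the point is simply that a lower threshold admits many more voxel types into the mesh.

```python
import numpy as np

# Illustrative gray values only: air, soft tissue, and
# progressively denser hard-tissue-like voxels
volume = np.array([-1000, -200, 300, 600, 1200, 1800])

def segment(vol, threshold):
    # Voxels at or above the gray-value threshold are kept for meshing
    return vol >= threshold

low, mid, high = (segment(volume, t).sum() for t in (-500, 500, 1500))
# A threshold of -500 sweeps in soft tissue; 1500 keeps only the densest voxels
```

With few teeth remaining, the extra soft-tissue surface admitted by a low threshold dominates the point-based best-fit registration, which is consistent with the reported worst case.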


2020 ◽  
Vol 148 ◽  
pp. 103800
Author(s):  
F. Mouzo ◽  
F. Michaud ◽  
U. Lugris ◽  
J. Cuadrado

Sensors ◽  
2019 ◽  
Vol 19 (10) ◽  
pp. 2333 ◽  
Author(s):  
Simone Mentasti ◽  
Federico Pedersini

In this paper we present a simple stand-alone system that performs the autonomous acquisition of multiple pictures all around large objects, i.e., objects too big to be photographed from every side with a hand-held camera. In this approach, a camera carried by a drone (an off-the-shelf quadcopter) acquires an image sequence that constitutes a valid dataset for 3D reconstruction of the captured scene. Both the drone flight and the choice of viewpoints for shooting each picture are automatically controlled by the developed application, which runs on a tablet wirelessly connected to the drone and controls the entire process in real time. The system and the acquisition workflow were conceived to keep user intervention minimal and as simple as possible, requiring no particular skill of the user. The system has been experimentally tested on several subjects of different shapes and sizes, showing the ability to follow the requested trajectory with good robustness against flight perturbations. The collected images are provided to a scene reconstruction software package, which generates a 3D model of the acquired subject. The quality of the obtained reconstructions, in terms of accuracy and richness of detail, has proved the reliability and efficacy of the proposed system.
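The viewpoint-selection step can be sketched as computing evenly spaced positions on a circular orbit around the subject, each with a yaw pointing back at the centre. This is a hypothetical helper, not the authors' planner: the circular trajectory, single flight height, and `orbit_viewpoints` name are assumptions.

```python
import numpy as np

def orbit_viewpoints(center_xy, radius, n_views, height):
    # Evenly spaced camera positions on a circle around the subject,
    # each paired with a yaw angle (degrees) facing back at the centre
    angles = np.linspace(0.0, 2.0 * np.pi, n_views, endpoint=False)
    positions = np.stack([center_xy[0] + radius * np.cos(angles),
                          center_xy[1] + radius * np.sin(angles),
                          np.full(n_views, height)], axis=1)
    yaw = np.degrees(angles + np.pi) % 360.0
    return positions, yaw

positions, yaw = orbit_viewpoints((0.0, 0.0), radius=5.0, n_views=8, height=2.0)
```

Keeping the yaw locked on the subject guarantees overlap between consecutive shots, which is what structure-from-motion reconstruction software needs from the dataset.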

