The Study for Automatic 3D Reconstruction of Endoscopic Video Image

Author(s):  
Toshiaki Nagakura ◽  
Kenji Okazaki ◽  
Tomoki Michida ◽  
Motohiro Hirao ◽  
Masako Kawai

2021 ◽  
Vol 7 (2) ◽  
pp. 335-338
Author(s):  
Sina Walluscheck ◽  
Thomas Wittenberg ◽  
Volker Bruns ◽  
Thomas Eixelberger ◽  
Ralf Hackner

Abstract For the image-based documentation of a colonoscopy procedure, a 3D reconstruction of the hollow colon structure from endoscopic video streams is desirable. To obtain this reconstruction, 3D information about the colon has to be extracted from monocular colonoscopy image sequences. This information can be provided by estimating depth through shape-from-motion approaches, using the image information from two successive frames together with exact knowledge of their disparity. However, during a standard colonoscopy the spatial offset between successive frames changes continuously. In this work, deep convolutional neural networks (DCNNs) are therefore applied to obtain piecewise depth maps and point clouds of the colon, which can then be fused into a partial 3D reconstruction.
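As a minimal illustration of the back-projection step implied above, the following sketch converts a predicted depth map into a point cloud under a pinhole camera model. The intrinsics (fx, fy, cx, cy) and the upstream depth network are assumptions for the example, not details from the paper.

```python
# Minimal sketch: back-projecting a DCNN-predicted depth map into a point
# cloud with a pinhole camera model (intrinsics are placeholder values).
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Convert an H x W depth map to an (N, 3) point cloud."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx          # pinhole back-projection, x axis
    y = (v - cy) * z / fy          # pinhole back-projection, y axis
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # drop invalid (zero-depth) pixels

# Piecewise clouds from successive frames could then be fused, e.g. by
# applying each frame's estimated camera pose and concatenating the results.
```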


2010 ◽  
pp. 2214-2219
Author(s):  
Adrian R.W. Hatfield

Flexible fibre optic endoscopes were developed in the mid-1960s, leading to the growth of gastrointestinal endoscopy as we now know it. The recent availability of cheaper, miniaturized colour chips has led to the development of video endoscopes, which provide an excellent, clear view that does not deteriorate with age (as it does with fibre optic devices). With improvements in software, the endoscopic video image can be magnified: modern instruments will zoom up to 25× magnification, and mucosal detail can also be enhanced electronically so that small lesions a few millimetres in size can be seen quite clearly. The modern video endoscope image can be instantly printed out and archived digitally on a computer system....


2009 ◽  
Author(s):  
Atsushi Yamada ◽  
Kento Nishibori ◽  
Yuichiro Hayashi ◽  
Junichi Tokuda ◽  
Nobuhiko Hata ◽  
...  

This document describes a surgical robot console system based on 3D Slicer for image-guided surgery. In image-guided surgery, image workstations have complex user interfaces (UIs) and many extra functions, so such a UI is poorly suited to the surgeon who operates the robot. The proposed robot console is therefore designed as a simple UI for the robot operator, displaying the endoscopic video image, sensor data, robot status, and simple guidance images for the surgery, so that the surgeon can concentrate on the operation itself. At the same time, because the console is based on 3D Slicer, it can draw on 3D Slicer's extensive image-processing functions, and it gains flexible tool connectivity through the OpenIGTLink protocol. In addition, since the video image is captured with the multifunctional OpenCV library, the proposed system can be readily extended.
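As an illustration of the video-capture side of such a console, the following Python sketch grabs and displays frames using OpenCV's VideoCapture API. The device index and window name are assumptions, and the paper's 3D Slicer / OpenIGTLink integration is not reproduced here.

```python
# Hedged sketch: displaying endoscopic video frames with OpenCV.
# Device index 0 stands in for the endoscope framegrabber.
import cv2

cap = cv2.VideoCapture(0)
if not cap.isOpened():
    raise RuntimeError("Could not open video device")

while True:
    ok, frame = cap.read()                       # fetch one video frame
    if not ok:
        break
    cv2.imshow("Robot console - endoscope view", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):        # quit on 'q'
        break

cap.release()
cv2.destroyAllWindows()
```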


2012 ◽  
Vol 26 (9) ◽  
pp. 1216-1220 ◽  
Author(s):  
Tomokazu Sazuka ◽  
Yoichi Kambara ◽  
Takuro Ishii ◽  
Kazuyoshi Nakamura ◽  
Shinichi Sakamoto ◽  
...  

2018 ◽  
Vol 63 (4) ◽  
pp. 461-466 ◽  
Author(s):  
Quentin Péntek ◽  
Simon Hein ◽  
Arkadiusz Miernik ◽  
Alexander Reiterer

Abstract Bladder cancer is likely to recur after resection. For this reason, bladder cancer survivors often undergo follow-up cystoscopy for years after treatment to look for recurrence. 3D modeling of the bladder could provide more reliable cystoscopic documentation by giving an overall picture of the organ and of tumor positions. However, 3D reconstruction of the urinary bladder from endoscopic images is challenging because of the endoscope's small field of view, considerable image distortion, and occlusion by urea, blood, or particles. In this paper, we demonstrate a method for converting uncalibrated, monocular, endoscopic videos of the bladder into a 3D model using structure-from-motion (SfM). First, frames are extracted from the video sequences. Distortions are then corrected in a calibration procedure. Finally, the 3D reconstruction algorithm generates a sparse surface approximation of the bladder lining from the corrected frames. The method was tested on an endoscopic video of a phantom that mimics the rich structure of the bladder. The reconstructed 3D model covered a large part of the object, with an average reprojection error of 1.15 pixels and a relative accuracy of 99.4%.
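The distortion-correction step described above might look roughly like the OpenCV sketch below. The intrinsic matrix and distortion coefficients are placeholder values that would in practice come from a prior calibration (e.g. cv2.calibrateCamera with a checkerboard target), not numbers from the paper.

```python
# Hedged sketch: undistorting one extracted video frame before SfM.
import cv2
import numpy as np

K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])            # assumed intrinsics
dist = np.array([-0.35, 0.12, 0.0, 0.0, 0.0])    # assumed distortion terms

frame = cv2.imread("frame_0001.png")             # one extracted video frame
undistorted = cv2.undistort(frame, K, dist)      # corrected frame for SfM
cv2.imwrite("frame_0001_undistorted.png", undistorted)
```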


2009 ◽  
Vol 09 (04) ◽  
pp. 609-620 ◽  
Author(s):  
TATSUO IGARASHI ◽  
SATOKI ZENBUTSU ◽  
YUKIO NAYA ◽  
TAKURO ISHII ◽  
WEN-WEI YU ◽  
...  

We report a novel method for reconstructing the 3D structure of the prostatic urethra and measuring its elasticity from endoscopic video images, and discuss its clinical relevance. Information about pixel color and brightness in the endoscopic video image is converted to the relative distance between the object and the light source. An opened 3D image of the prostatic urethra is obtained from video captured as the endoscope is slowly pulled through the urethra. The elasticity of the urethra is determined by recording video of the endoscope fixed in the prostatic urethra, with and without irrigation under a water pressure of approximately 80 cmH2O. Angulation of the prostatic urethra is estimated from the number of intersections between the outline of the protruded prostate and the midline of the urethra, both in patients with severe voiding dysfunction scheduled for transurethral resection of the prostate and in patients scheduled for transurethral resection of a bladder tumor without apparent discomfort during urination. The number of intersections was related to voiding symptoms. In conclusion, reconstruction of the 3D structure of the prostatic urethra from endoscopic video images is feasible and shows promise for elucidating the mechanism of voiding dysfunction.
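The brightness-to-distance conversion is not specified in detail here. One common simplification assumes an inverse-square illumination model, so that pixel intensity falls off as 1/d² and relative distance scales as 1/sqrt(intensity). The sketch below illustrates that assumption only; it is not the authors' exact mapping.

```python
# Hedged sketch: relative distance from brightness under an assumed
# inverse-square illumination model (I ~ 1/d^2, so d ~ 1/sqrt(I)).
import cv2
import numpy as np

frame = cv2.imread("urethra_frame.png")
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float64)
gray = np.clip(gray, 1.0, None)        # avoid division by zero

rel_distance = 1.0 / np.sqrt(gray)     # relative, unitless distances
rel_distance /= rel_distance.max()     # normalise to [0, 1]
```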


Author(s):  
Jose-Maria Carazo ◽  
I. Benavides ◽  
S. Marco ◽  
J.L. Carrascosa ◽  
E.L. Zapata

Obtaining the three-dimensional (3D) structure of negatively stained biological specimens at a resolution of, typically, 2–4 nm is becoming relatively common practice in an increasing number of laboratories. A combination of new conceptual approaches, new software tools, and faster computers has made this possible. However, all these 3D reconstruction processes are quite computer-intensive, and the medium-term future is full of proposals entailing an even greater need for computing power. Up to now, all published 3D reconstructions in this field have been performed on conventional (sequential) computers, but new parallel computer architectures offer potential order-of-magnitude increases in computing power and should therefore be considered for the most computing-intensive tasks.

We have studied both shared-memory computer architectures, like the BBN Butterfly, and local-memory architectures, mainly hypercubes implemented on transputers, where we have used the algorithmic mapping method proposed by Zapata et al. In this work we have developed the basic software tools needed to obtain a 3D reconstruction of non-crystalline specimens (“single particles”) using the so-called Random Conical Tilt Series Method. We start from a pair of images of the same field, first tilted (by ≃55°) and then untilted. It is then assumed that we can supply the system with an image of the particle we are looking for (ideally, a 2D average from a previous study) and with a matrix describing the geometrical relationship between the tilted and untilted fields (this step is currently accomplished by interactively marking a few pairs of corresponding features in the two fields). From there, the 3D reconstruction process can run automatically.
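The matrix relating the tilted and untilted fields could, for instance, be estimated by a least-squares affine fit to the user-marked point pairs. The sketch below shows that idea with NumPy; the coordinates are invented for illustration and do not come from the paper.

```python
# Hedged sketch: fitting a 2D affine transform to a few marked point
# pairs relating the untilted and tilted fields (coordinates invented).
import numpy as np

untilted = np.array([[12.0, 30.0], [85.0, 44.0], [50.0, 90.0], [20.0, 70.0]])
tilted   = np.array([[10.0, 18.0], [80.0, 27.0], [47.0, 55.0], [18.0, 43.0]])

# Solve A @ M = tilted in the least-squares sense, where A appends a
# constant column so M (3x2) encodes both the linear part and the offset.
A = np.hstack([untilted, np.ones((len(untilted), 1))])
M, *_ = np.linalg.lstsq(A, tilted, rcond=None)

residuals = tilted - A @ M                     # per-point alignment error
print("affine matrix (transposed):\n", M.T)
print("RMS residual:", np.sqrt((residuals ** 2).mean()))
```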

