3D Reconstruction of CT Scans For Visualization in Virtual Reality

Author(s):  
Naďa Tylová ◽  
Jan Egermaier ◽  
Jakub Tomeš ◽  
Jan Kohout ◽  
Jan Mareš
2021 ◽  
Vol 11 (18) ◽  
pp. 8590
Author(s):  
Zhihan Lv ◽  
Jing-Yan Wang ◽  
Neeraj Kumar ◽  
Jaime Lloret

Augmented Reality is a key technology that will facilitate a major paradigm shift in the way users interact with data, and it has only recently been recognized as a viable solution for solving many critical needs [...]


Sensors ◽  
2020 ◽  
Vol 20 (10) ◽  
pp. 2962 ◽  
Author(s):  
Santiago González Izard ◽  
Ramiro Sánchez Torres ◽  
Óscar Alonso Plaza ◽  
Juan Antonio Juanes Méndez ◽  
Francisco José García-Peñalvo

The visualization of medical images with advanced techniques, such as augmented reality and virtual reality, represents a breakthrough for medical professionals. In contrast to more traditional visualization tools lacking 3D capabilities, these systems use all three available dimensions. To visualize medical images in 3D, the anatomical areas of interest must be segmented. Currently, manual segmentation, the most commonly used technique, and semi-automatic approaches can be time consuming because a doctor is required, making segmentation of each individual case unfeasible. Using new technologies, such as computer vision and artificial intelligence for segmentation algorithms and augmented and virtual reality for visualization, we designed a complete platform to solve this problem and allow medical professionals to work more frequently with anatomical 3D models obtained from medical imaging. As a result, the Nextmed project, through its different software applications, permits the importation of Digital Imaging and Communications in Medicine (DICOM) images on a secure cloud platform and the automatic segmentation of certain anatomical structures with new algorithms that improve upon current research results. A 3D mesh of the segmented structure is then automatically generated that can be 3D printed or visualized, using the designed software systems, in both augmented and virtual reality. The Nextmed project is unique in that it covers the whole process from uploading DICOM images to automatic segmentation, 3D reconstruction, 3D visualization, and manipulation using augmented and virtual reality. There is much research on the application of augmented and virtual reality to 3D visualization of medical images; however, those systems are not automated platforms. Although other anatomical structures can be studied, we focused on one case: a lung study. Applying the platform to more than 1000 DICOM images and analyzing the results with medical specialists, we concluded that installing this system in hospitals would provide a considerable improvement as a tool for medical image visualization.
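The first steps of such a DICOM-to-3D pipeline, converting raw CT pixel values to Hounsfield units and thresholding a lung-range mask, can be sketched as follows. This is a minimal illustration, not Nextmed's actual algorithm: the function names, the HU window, and the synthetic volume are all assumptions.

```python
import numpy as np

def to_hounsfield(pixel_array, slope=1.0, intercept=-1024.0):
    """Convert raw CT pixel values to Hounsfield units (HU)
    using the DICOM RescaleSlope / RescaleIntercept tags."""
    return pixel_array.astype(np.float32) * slope + intercept

def segment_lung(hu_volume, lo=-1000.0, hi=-400.0):
    """Return a binary mask of voxels in a typical lung HU range."""
    return (hu_volume >= lo) & (hu_volume <= hi)

# Example: a synthetic 3-slice "scan" with an air-filled region
raw = np.full((3, 8, 8), 1024, dtype=np.int16)  # rescales to 0 HU (soft tissue)
raw[:, 2:6, 2:6] = 200                          # rescales to -824 HU (lung-like)
mask = segment_lung(to_hounsfield(raw))
print(mask.sum())  # 48 voxels flagged as lung
```

A real pipeline would then feed such a mask to a surface-extraction step (e.g. marching cubes) to produce the printable and viewable 3D mesh the abstract describes.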


2013 ◽  
Vol 587 ◽  
pp. 412-415
Author(s):  
Serban Costin ◽  
Constantin Anton Micu ◽  
Stefan Cristea

Severe acetabular defects can usually be addressed with a standard acetabular cage. The procedure becomes more complicated when standard devices have to be adapted to differently shaped acetabular defects, especially larger and more widespread ones. The proposed cage tries to fix problems that standard cages sometimes do not resolve, for example insufficient fixation due to the inferior flange failing to engage the ischium, and graft resorption, while at the same time trying to remain affordable. The implant is built from a 3D reconstruction of the patient's hip from CT scans with the help of additive technologies, and it uses standard components (cup, screws). The particular implant was designed and produced and is available for patient implantation. The proposed solution offers a better-fitting alternative for the patient, but realization of the implant is very time consuming, although a systematized process was attempted and achieved.


Author(s):  
Kevin Lesniak ◽  
Conrad S. Tucker ◽  
Sven Bilen ◽  
Janis Terpenny ◽  
Chimay Anumba

Immersive virtual reality systems have the potential to transform the manner in which designers create prototypes and collaborate in teams. Using technologies such as the Oculus Rift or the HTC Vive, a designer can attain a sense of "presence" and "immersion" typically not experienced by traditional CAD-based platforms. However, one of the fundamental challenges of creating a high quality immersive virtual reality experience is actually creating the immersive virtual reality environment itself. Typically, designers spend a considerable amount of time manually designing virtual models that replicate physical, real world artifacts. While there exists the ability to import standard 3D models into these immersive virtual reality environments, these models are typically generic in nature and do not represent the designer's intent. To mitigate these challenges, the authors of this work propose the real time translation of physical objects into an immersive virtual reality environment using readily available RGB-D sensing systems and standard networking connections. The emergence of commercial, off-the-shelf RGB-D sensing systems such as the Microsoft Kinect has enabled the rapid 3D reconstruction of physical environments. The authors present a methodology that employs 3D mesh reconstruction algorithms and real time rendering techniques to capture physical objects in the real world and represent their 3D reconstruction in an immersive virtual reality environment with which the user can then interact. A case study involving a commodity RGB-D sensor and multiple computers connected through standard TCP internet connections is presented to demonstrate the viability of the proposed methodology.
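The networking step, streaming reconstructed mesh data between machines over standard TCP connections, can be sketched as a loopback demo. The length-prefixed framing format and the function names below are assumptions for illustration, not the authors' actual implementation.

```python
import socket
import struct
import threading

import numpy as np

def send_mesh(sock, vertices):
    """Send a float32 (N, 3) vertex array: 4-byte big-endian count, then raw bytes."""
    data = vertices.astype(np.float32).tobytes()
    sock.sendall(struct.pack("!I", len(vertices)) + data)

def _recv_exact(sock, n):
    """Read exactly n bytes, looping over partial TCP reads."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("socket closed mid-frame")
        buf += chunk
    return buf

def recv_mesh(sock):
    """Receive a vertex array framed by send_mesh."""
    (count,) = struct.unpack("!I", _recv_exact(sock, 4))
    raw = _recv_exact(sock, count * 3 * 4)  # 3 float32 coordinates per vertex
    return np.frombuffer(raw, dtype=np.float32).reshape(count, 3)

# Loopback demo: one triangle streamed from a background thread
tri = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0]], dtype=np.float32)
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=lambda: send_mesh(server.accept()[0], tri)).start()
client = socket.create_connection(("127.0.0.1", port))
received = recv_mesh(client)
print(received.shape)  # (3, 3)
```

In a real system the sender would run next to the RGB-D reconstruction process and stream updated meshes continuously, with the VR host re-rendering each frame as it arrives.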


2020 ◽  
Author(s):  
Oliver Mietzner ◽  
Andre Mastmeyer

Abstract: The ability to generate 3D patient models in a fast and reliable way is of great importance, e.g. for the simulation of liver punctures in virtual reality [1], [2], [3], [4]. The aim is to automatically detect and segment abdominal structures in CT scans. Among the selected organ group, the pancreas in particular poses a challenge. We use a combination of random regression forests and U-Nets to detect bounding boxes and generate segmentation masks for five abdominal organs (liver, both kidneys, spleen, pancreas). Training and testing are carried out on 50 CT scans from various public sources. The results show Dice coefficients of up to 0.71.
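The Dice coefficient reported above measures overlap between a predicted and a reference mask, 2|A∩B| / (|A| + |B|). A minimal NumPy sketch (the function name and example masks are illustrative, not the authors' code):

```python
import numpy as np

def dice(pred, truth, eps=1e-8):
    """Dice coefficient between two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum() + eps)

# Two 2x2 blobs overlapping in exactly one pixel
pred = np.zeros((4, 4)); pred[0:2, 0:2] = 1
truth = np.zeros((4, 4)); truth[1:3, 1:3] = 1
print(round(dice(pred, truth), 2))  # 0.25
```

A Dice score of 1.0 means perfect overlap; the 0.71 reported for this organ group reflects the difficulty of structures like the pancreas, whose boundaries are poorly contrasted in CT.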


2018 ◽  
Vol 6 (7_suppl4) ◽  
pp. 2325967118S0008
Author(s):  
Drew A. Lansdown ◽  
Robert Dawe ◽  
Gregory L. Cvetanovich ◽  
Nikhil N. Verma ◽  
Brian J. Cole ◽  
...  

Objectives: Glenoid bone loss is frequently present in the setting of recurrent shoulder instability. The magnitude of bone loss is an important determinant of the optimal surgical treatment. The current gold standard for measurement of glenoid bone loss is three-dimensional (3D) reconstruction of a computed tomography (CT) scan. CT scans, however, carry an inherent risk of radiation and increased cost for a second modality. Magnetic resonance imaging (MRI) offers excellent soft tissue contrast and may allow resolution of bony structures to generate 3D reconstructions without a risk of ionizing radiation. We hypothesized that automated 3D MRI reconstruction would offer similar measurements of glenoid bone loss as recorded from a 3D CT scan in a clinical setting. Methods: A retrospective review was performed for fourteen patients who had both a pre-operative MRI scan and a CT scan of the shoulder. All MR scans were performed on a 1.5 T scanner (Siemens) utilizing a Dixon chemical shift separation sequence and the out-of-phase images with 0.90 mm slice thickness. Reconstructions of the glenoid were performed from axial images (Figure 1A) using an open-platform image processing system (3D Slicer; slicer.org). A single point on the glenoid was selected and a standard threshold was used to build a 3D model (Figure 1B). High-resolution CT scans underwent 3D reconstruction in Slicer based on Hounsfield unit thresholding. Glenoid bone loss on both scans was measured with the Pico method by defining a circle of best fit using the inferior 2/3 of the glenoid and determining the percent area missing from this circle. Pearson's correlation coefficient was utilized to determine the similarity between MR- and CT-based measurements. Statistical significance was defined as p<0.05. Results: The correlation between 3D MR- and CT-based measurements of glenoid bone loss was excellent (r = 0.95, p<0.0001). The mean bone loss was 13.2 ± 7.2% by 3D MR and 12.5 ± 8.6% by 3D CT reconstruction (p=0.32). Bone loss in this cohort ranged from 3.7-25.4% on 3D MR and 1.4-26.0% on 3D CT. The root-mean-square difference between measurements was 2.7%. Conclusion: There was excellent agreement between automated 3D MR and 3D CT measurements of glenoid bone loss and minimal differences between these measurements. This reconstruction method requires minimal post-processing, no manual segmentation, and is obtained with widely-available MR sequences. This method has the potential to decrease the utilization of CT scans in determining glenoid bone loss. [Figure: see text]
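The Pico-style calculation above fits a best-fit circle to the inferior glenoid and reports the defect as a percentage of the circle's area. A simplified 2-D sketch, using a Kåsa least-squares circle fit on hypothetical rim points and an assumed defect area (not the study's actual measurement code):

```python
import numpy as np

def fit_circle(points):
    """Algebraic least-squares (Kasa) circle fit to 2-D rim points.
    Solves x^2 + y^2 = 2*cx*x + 2*cy*y + c, with c = r^2 - cx^2 - cy^2."""
    x, y = points[:, 0], points[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones(len(x))])
    b = x**2 + y**2
    cx, cy, c = np.linalg.lstsq(A, b, rcond=None)[0]
    r = np.sqrt(c + cx**2 + cy**2)
    return (cx, cy), r

def bone_loss_percent(circle_area, defect_area):
    """Pico-style estimate: defect area as a percentage of the best-fit circle."""
    return 100.0 * defect_area / circle_area

# Hypothetical rim points sampled from a unit circle centred at the origin
theta = np.linspace(0, 2 * np.pi, 12, endpoint=False)
pts = np.column_stack([np.cos(theta), np.sin(theta)])
(cx, cy), r = fit_circle(pts)
area = np.pi * r**2
print(round(bone_loss_percent(area, 0.4), 1))  # 12.7 (% for a 0.4-unit^2 defect)
```

In the clinical workflow, the rim points and the missing area both come from the 3D reconstruction; the arithmetic reduces to the same defect-over-circle ratio.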


2014 ◽  
Vol 12 (9) ◽  
pp. 912-917 ◽  
Author(s):  
Mirela Erić ◽  
Andraš Anderla ◽  
Darko Stefanović ◽  
Miodrag Drapšin
