Real-time volume rendering of four-dimensional images based on three-dimensional texture mapping

2001 ◽  
Vol 14 (S1) ◽  
pp. 202-204 ◽  
Author(s):  
Jinwoo Hwang ◽  
June Sic Kim ◽  
Jae Seok Kim ◽  
In Young Kim ◽  
Sun I. Kim
Author(s):  
Yanyang Zeng ◽  
Panpan Jia

Underwater acoustics is the primary and most effective means of underwater object detection, and the complex underwater acoustic battlefield environment can be described visually by a three-dimensional (3D) energy field. Traditionally, the underwater acoustic volume data are obtained by solving 3D propagation models, but this requires a large amount of calculation. In this paper, a novel modeling approach is proposed that transforms the two-dimensional (2D) wave equation into 2D space and optimizes the energy-loss propagation model. In this way, the resulting volume data do not lose too much information, while still meeting the data-processing requirements of real-time visualization. In the volume rendering stage, a 3D texture mapping method is used. The experimental results are evaluated in terms of data size and frame rate, showing that our approach outperforms other approaches and achieves better real-time performance and visual effects.
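As a rough illustration of the rendering side (not the authors' implementation), the sketch below mimics 3D-texture-mapping volume rendering on the CPU: the volume is treated as a stack of axis-aligned slices that are alpha-blended back to front, which is effectively what the GPU does when it samples a 3D texture on slice polygons. The linear opacity transfer function and the synthetic test volume are assumptions made for the example.

```python
import numpy as np

def composite_slices(volume, opacity_scale=0.05):
    """Back-to-front alpha compositing of axis-aligned slices.

    CPU analogue of 3D-texture-mapping volume rendering: the GPU samples
    a 3D texture on view-aligned slice polygons and blends them into the
    framebuffer in exactly this back-to-front order.
    """
    h, w = volume.shape[1], volume.shape[2]
    image = np.zeros((h, w))  # accumulated grey-scale colour
    for z in reversed(range(volume.shape[0])):
        s = volume[z].astype(float)
        a = np.clip(s * opacity_scale, 0.0, 1.0)  # simple linear opacity transfer function
        image = a * s + (1.0 - a) * image         # "over" blending of this slice
    return image

# Synthetic 64^3 test volume: a bright sphere in the centre.
zz, yy, xx = np.mgrid[0:64, 0:64, 0:64]
vol = (np.sqrt((xx - 32.0)**2 + (yy - 32.0)**2 + (zz - 32.0)**2) < 20).astype(float)
print(composite_slices(vol).shape)
```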



Author(s):  
Daniel Jie Yuan Chin ◽  
Ahmad Sufril Azlan Mohamed ◽  
Khairul Anuar Shariff ◽  
Mohd Nadhir Ab Wahab ◽  
Kunio Ishikawa

Three-dimensional reconstruction plays an important role in helping doctors and surgeons assess the healing progress of bone defects. Common three-dimensional reconstruction methods include surface rendering and volume rendering. Since the focus here is on the shape of the bone, volume rendering is omitted. Many improvements have been made to surface rendering methods such as Marching Cubes and Marching Tetrahedra, but few work towards real-time or near real-time surface rendering for large medical images, or study how different parameter settings affect those improvements. Hence, in this study, an attempt is made towards near real-time surface rendering for large medical images. Different parameter values are tested to study their effect on reconstruction accuracy, reconstruction and rendering time, and the number of vertices and faces. The proposed improvement, which combines three-dimensional data smoothing with a Gaussian convolution kernel of size 0.5 and mesh simplification with a reduction factor of 0.1, is the best parameter combination for balancing high reconstruction accuracy, low total execution time, and a low number of vertices and faces. It increased reconstruction accuracy by 0.0235%, decreased total execution time by 69.81%, and decreased the number of vertices and faces by 86.57% and 86.61%, respectively.
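Such a pipeline, 3D Gaussian smoothing followed by Marching Cubes surface extraction and mesh simplification, can be sketched with off-the-shelf Python libraries. This is not the study's code: mapping "Gaussian size 0.5" onto the filter's sigma and reading the reduction factor 0.1 as the fraction of triangles kept are assumptions made purely for illustration.

```python
import numpy as np
import open3d as o3d
from scipy.ndimage import gaussian_filter
from skimage import measure

def reconstruct_surface(volume, iso_level, sigma=0.5, keep_fraction=0.1):
    """3D Gaussian smoothing -> Marching Cubes -> quadric mesh simplification."""
    smoothed = gaussian_filter(volume.astype(float), sigma=sigma)
    verts, faces, _, _ = measure.marching_cubes(smoothed, level=iso_level)
    mesh = o3d.geometry.TriangleMesh(
        o3d.utility.Vector3dVector(verts),
        o3d.utility.Vector3iVector(faces.astype(np.int32)),
    )
    # Keep roughly `keep_fraction` of the original triangles.
    target = max(4, int(len(faces) * keep_fraction))
    return mesh.simplify_quadric_decimation(target_number_of_triangles=target)

# Synthetic test volume: a solid sphere.
zz, yy, xx = np.mgrid[0:64, 0:64, 0:64]
vol = (np.sqrt((xx - 32.0)**2 + (yy - 32.0)**2 + (zz - 32.0)**2) < 20).astype(float)
mesh = reconstruct_surface(vol, iso_level=0.5)
print(len(mesh.vertices), len(mesh.triangles))
```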


1997 ◽  
Vol 36 (01) ◽  
pp. 1-10 ◽  
Author(s):  
M. Haubner ◽  
A. Lösch ◽  
F. Eckstein ◽  
M. D. Seemann ◽  
W. van Eimeren ◽  
...  

Abstract: The most important rendering methods applied in medical imaging are surface and volume rendering techniques. Each approach has its own advantages and limitations: Fast surface-oriented methods are able to support real-time interaction and manipulation. The underlying representation, however, depends on intensive image processing to extract the object surfaces. In contrast, volume visualization is not necessarily based on extensive image processing and interpretation. No data reduction to geometric primitives, such as polygons, is required. Therefore, the process of volume rendering currently does not operate in real time. In order to provide radiological diagnosis with additional information, as well as to enable simulation and preoperative treatment planning, we developed a new hybrid rendering method which combines the advantages of surface and volume presentation and minimizes the limitations of both approaches. We developed a common data representation method for both techniques. A preprocessing module enables the construction of a data volume by interpolation as well as the calculation of object surfaces by semiautomatic image interpretation and surface construction. The hybrid rendering system is based on transparency and texture mapping features. It is embedded in a user-friendly open system which enables the support of new application fields such as virtual reality and stereolithography. The efficiency of our new method is described for 3-D subtraction angiography and the visualization of morpho-functional relationships.
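At its core, combining a semi-transparent surface presentation with a volume-rendered background comes down to the standard "over" compositing operator. The minimal sketch below only illustrates that blending step; it is not the hybrid system described here, which works on a shared data representation with texture mapping.

```python
import numpy as np

def blend_over(surface_rgb, surface_alpha, volume_rgb):
    """Composite a semi-transparent surface layer over a volume-rendered image."""
    a = surface_alpha[..., None]                    # broadcast alpha over RGB channels
    return a * surface_rgb + (1.0 - a) * volume_rgb

# Toy 2x2 example: a half-transparent grey surface over a darker volume image.
surface = np.full((2, 2, 3), 0.8)
alpha = np.full((2, 2), 0.5)
volume = np.full((2, 2, 3), 0.2)
print(blend_over(surface, alpha, volume))
```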


Author(s):  
Johnny Kuo ◽  
Gregory Bredthauer ◽  
John Castellucci ◽  
Olaf Von Ramm

2009 ◽  
Author(s):  
Emad M. Boctor ◽  
Mohammad Matinfar ◽  
Omar Ahmad ◽  
Hassan Rivaz ◽  
Michael Choti ◽  
...  

2007 ◽  
Vol 6 (4) ◽  
pp. 245-252 ◽  
Author(s):  
Naka TAKATOSHI ◽  
Shigeyoshi YAMAMOTO ◽  
Yasuyo HATANO ◽  
Mamoru ENDO ◽  
Masashi YAMADA ◽  
...  

2019 ◽  
Vol 40 (Supplement_1) ◽  
Author(s):  
J D Kasprzak ◽  
M Kierepka ◽  
J Z Peruga ◽  
D Dudek ◽  
B Machura ◽  
...  

Abstract Background: Three-dimensional (3D) echocardiographic data acquired from the transesophageal (TEE) window are commonly used in planning and during percutaneous structural cardiac interventions (PSCI). Purpose: We hypothesized that an innovative, interactive mixed reality display can be integrated into the procedural PSCI workflow to improve perception and interpretation of 3D data representing cardiac anatomy. Methods: 3D TEE datasets were acquired before, during and after the completion of PSCI in 8 patients (occluders: 2 atrial appendage, 2 patent foramen ovale and 3 atrial septal implantations, and percutaneous mitral commissurotomy). 30 Cartesian DICOM files were used to test the feasibility of mixed reality with a commercially available head-mounted device (overlaying a hologram of the 3D TEE data onto the real-world view) as a display for the interventional or imaging operator. Dedicated software was used for file conversion, 3D rendering of the data to the display device (in 1 case with real-time Wi-Fi streaming from the echocardiograph) and spatial manipulation of the hologram during PSCI. A custom viewer was used to perform volume rendering and adjustment (cropping, transparency and shading control). Results: Pre- and intraprocedural 3D TEE was performed in all 8 patients (5 women, age 40–83). Thirty selected 3D TEE datasets were successfully transferred and displayed in the mixed reality head-mounted device as a holographic image overlaying the real-world view. The analysis was performed both before and during the procedure and compared with the flat-screen 2-D display of the echocardiograph. In one case, real-time data transfer was successfully implemented during mitral balloon commissurotomy. The quality of visualization was judged as good, without loss of diagnostic content, in all (100%) datasets. Both target structures and additional anatomical details were clearly presented, including fenestrations of an atrial septal defect, a prominent Eustachian valve and earlier cardiac implants. Volume-rendered views were manipulated touchlessly and displayed with a selection of intensity windows, transfer functions and filters. Detail display was judged comparable to current 2-D volume rendering on commercial workstations, and the touchless user interface was comfortable for optimization of views during PSCI. Conclusions: Mixed reality display using a commercially available head-mounted device can be successfully integrated with the preparation and execution of PSCI. The benefits of this solution include touchless image control and an unobstructed real-world view facilitating intraprocedural use, thus showing superiority over virtual or enhanced reality solutions. Expected progress includes integration of color flow data and optimization of the real-time streaming option.
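The intensity-window and cropping adjustments mentioned for the custom viewer can be illustrated with a small sketch; the window parameters and the slice-range cropping below are assumptions for the example, not values from the study.

```python
import numpy as np

def apply_window(volume, center, width):
    """Map raw intensities to [0, 1] with a linear intensity window."""
    lo, hi = center - width / 2.0, center + width / 2.0
    return np.clip((volume - lo) / (hi - lo), 0.0, 1.0)

def crop(volume, z=(None, None), y=(None, None), x=(None, None)):
    """Crop the volume to a region of interest before rendering."""
    return volume[slice(*z), slice(*y), slice(*x)]

# Hypothetical example: window a raw volume and keep a central block.
vol = np.random.randint(0, 4096, size=(128, 128, 128))
roi = crop(apply_window(vol, center=1500, width=2000),
           z=(32, 96), y=(32, 96), x=(32, 96))
print(roi.shape, roi.min(), roi.max())
```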


2012 ◽  
Vol 505 ◽  
pp. 282-286 ◽  
Author(s):  
Yao Wang ◽  
Li Xia Sun ◽  
Jiu Chen Fan

This paper introduces the modular structure of an NC turning simulation system and the characteristics of OpenGL. OpenGL is a standard interface for three-dimensional graphics software; its main functions include modeling, transformation, color mode setting, lighting and material setting, texture mapping, double-buffered animation, etc. The application of OpenGL in the development of the NC turning simulation system is described, and its role in three aspects, solid modeling, realistic rendering and animation demonstration, is illustrated in particular. Simulation results show that the NC turning simulation system based on OpenGL runs reliably and can effectively simulate the machining process in real time.
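Several of the OpenGL features listed (transformations, fixed-function lighting and double-buffered animation) appear in the minimal PyOpenGL/GLUT sketch below. It is only a toy stand-in for the NC turning system, with a torus in place of the simulated workpiece; the window size, rotation speed and GLUT primitives are choices made for the example.

```python
from OpenGL.GL import *
from OpenGL.GLU import *
from OpenGL.GLUT import *

angle = 0.0

def display():
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)
    glLoadIdentity()
    gluLookAt(0.0, 2.0, 5.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0)  # viewing transformation
    glRotatef(angle, 0.0, 1.0, 0.0)                         # modeling transformation
    glutSolidTorus(0.3, 1.0, 32, 32)                        # stand-in for the workpiece
    glutSwapBuffers()                                       # double-buffered animation

def reshape(w, h):
    glViewport(0, 0, w, h)
    glMatrixMode(GL_PROJECTION)
    glLoadIdentity()
    gluPerspective(45.0, w / max(h, 1), 0.1, 100.0)
    glMatrixMode(GL_MODELVIEW)

def idle():
    global angle
    angle = (angle + 0.5) % 360.0
    glutPostRedisplay()

glutInit()
glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH)
glutInitWindowSize(640, 480)
glutCreateWindow(b"turning demo")
glEnable(GL_DEPTH_TEST)
glEnable(GL_LIGHTING)   # fixed-function lighting
glEnable(GL_LIGHT0)
glutDisplayFunc(display)
glutReshapeFunc(reshape)
glutIdleFunc(idle)
glutMainLoop()
```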

