Virtual dissection of the real brain: integration of photographic 3D models into virtual reality and its effect on neurosurgical resident education

2021 ◽  
Vol 51 (2) ◽  
pp. E16
Author(s):  
Tae Hoon Roh ◽  
Ji Woong Oh ◽  
Chang Ki Jang ◽  
Seonah Choi ◽  
Eui Hyun Kim ◽  
...  

OBJECTIVE Virtual reality (VR) is increasingly being used for education and surgical simulation in neurosurgery. So far, the 3D sources for VR simulation have been derived from medical images, which lack real color. The authors made photographic 3D models from dissected cadavers and integrated them into the VR platform. This study aimed to introduce a method of developing a photograph-integrated VR and to evaluate the educational effect of these models. METHODS A silicone-injected cadaver head was prepared. A CT scan of the specimen was taken, and the soft tissue and skull were segmented into 3D objects. The cadaver was dissected layer by layer, and each layer was 3D scanned by a photogrammetric method. The objects were imported into a free VR application and layered. Using the head-mounted display and controllers, various neurosurgical approaches were demonstrated to neurosurgical residents. After performing hands-on virtual surgery with the photographic 3D models, a feedback survey was collected from 31 participants. RESULTS The photographic 3D models were seamlessly integrated into the VR platform. Various skull base approaches were successfully performed with the photograph-integrated VR. During virtual dissection, landmark anatomical structures were identified based on their color and shape. Respondents rated photographic 3D models higher than conventional 3D models (4.3 ± 0.8 vs 3.2 ± 1.1, respectively; p = 0.001). They responded that performing virtual surgery with photographic 3D models would help them to improve their surgical skills and to develop and study new surgical approaches. CONCLUSIONS The authors introduced photographic 3D models to the virtual surgery platform for the first time. Integrating photographs with the 3D model and the layering technique enhanced the educational effect of the 3D models. In the future, as computer technology advances, more realistic simulations will be possible.
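The layering described in the Methods can be pictured as a stack of scanned meshes, where virtual dissection simply hides layers from the outside in. The following is a minimal sketch of that idea only; the layer names are hypothetical and not taken from the study:

```python
# Each dissection depth is a separate photogrammetric mesh; "dissecting"
# in VR means hiding the outermost meshes one by one.
layers = ["skin", "muscle", "skull", "dura", "brain"]  # illustrative names

def visible_layers(depth):
    """Return the layers still rendered after removing `depth` outer layers."""
    return layers[depth:]

print(visible_layers(2))  # ['skull', 'dura', 'brain']
```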

2014 ◽  
Vol 7 (1) ◽  
Author(s):  
Claudio Pensieri ◽  
Maddalena Pennacchini

Background: Virtual reality (VR) was defined as a collection of technological devices: “a computer capable of interactive 3D visualization, a head-mounted display and data gloves equipped with one or more position trackers”. Today, many scientists define VR as a simulation of the real world based on computer graphics: a three-dimensional world in which communities of real people interact, create content, items and services, producing real economic value through e-commerce. Objective: To report the results of a systematic review of articles and reviews published on the theme “Virtual Reality in Medicine”. Methods: We used the search query strings “Virtual Reality”, “Metaverse”, “Second Life”, “Virtual World” and “Virtual Life” to find out how many articles were written on these themes. For the meta-review we used only “Virtual Reality” AND “Review”. We searched the following databases: PsycINFO, the Journal of Medical Internet Research and Isiknowledge up to September 2011, and PubMed up to February 2012. We included any source published either in print or on the Internet, available in any language, and containing texts that define or attempt to define VR in explicit terms. Results: We retrieved 3,443 articles on PubMed in 2012 and 8,237 on Isiknowledge in 2011. This large number of articles covered a wide range of themes but showed no clear consensus about VR. We identified four general uses of VR in medicine and searched for the existing reviews about them. We found 364 reviews in 2011, although only 197 were pertinent to our aims: 1. communication interface (11 reviews); 2. medical education (49 reviews); 3. surgical simulation (49 reviews); and 4. psychotherapy (88 reviews). Conclusion: We found a large number of articles but no clear consensus about the meaning of the term VR in medicine. Many of the articles published on these topics have been reviewed. We decided to group these reviews into four areas in order to provide a systematic overview of the subject matter and to enable those interested to learn more about these particular topics.


2021 ◽  
Vol 17 (3) ◽  
pp. 415-431 ◽  
Author(s):  
Martina Paatela-Nieminen

This article explores digital material/ism by examining student teachers’ experiences, processes and products with fully immersive virtual reality (VR) as part of visual art education. The students created and painted a virtual world, given the name Gretan puutarha (‘Greta’s Garden’), using the Google application Tilt Brush. They also applied photogrammetry techniques to scan 3D objects from the real world in order to create 3D models for their VR world. Additionally, they imported 2D photographs and drawings and applied animated effects to construct their VR world digitally, thereby remixing elements from real life and fantasy. The students were asked open-ended questions to find out how they created art virtually, and the results were analysed using Burdea’s VR concepts of immersion, interaction and imagination. Digital material was created intersubjectively and intermedially, while it was also remixed with the real and the imaginary. Various webs of meaning were created, both intertextual and rhizomatic in nature.


2020 ◽  
Author(s):  
Siddavatam Rammohan Reddy

This paper focuses on converting photographs into embossed 3D models and bringing them to life with a 3D printer. A lithophane is a three-dimensional rendition of a two-dimensional image; the 3D representation of the photo can be seen only when it is illuminated from behind. Turning images into 3D objects adds feeling and, literally, a new dimension. A lithophane can be manufactured by an automated additive manufacturing process such as 3D printing, and lithophanes are a simple way to enhance your favourite photos. 3D-printed photos, also known as 3D-printed lithophanes, are an extremely unique and creative application. The process adopted for the lithophane is FDM technology, in which materials such as PLA (polylactic acid) and ABS (acrylonitrile butadiene styrene) are used. The filament material is heated to its melting point and deposited layer by layer; the combination of many layers gives the final 3D-printed model.
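The core of lithophane generation is mapping pixel brightness to material thickness: darker pixels get thicker material, so less backlight passes through. A minimal sketch of that mapping follows; the minimum and maximum thicknesses (0.8 mm and 3.0 mm) are assumptions, chosen within ranges commonly used for backlit prints, not values from the paper:

```python
import numpy as np

def lithophane_heights(gray, t_min=0.8, t_max=3.0):
    """Map 8-bit grayscale values (0 = black, 255 = white) to material
    thickness in millimetres; darker pixels become thicker walls."""
    g = np.asarray(gray, dtype=float) / 255.0
    return t_max - g * (t_max - t_min)

print(lithophane_heights([[0, 128, 255]]))  # black -> 3.0 mm, white -> 0.8 mm
```

The resulting height map would then be turned into a mesh (e.g. an STL) and sliced for FDM printing.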


Author(s):  
Thomas Kersten ◽  
Daniel Drenkhan ◽  
Simon Deggim

Abstract Technological advancements in the area of Virtual Reality (VR) in the past years have the potential to fundamentally impact our everyday lives. VR makes it possible to explore a digital world with a Head-Mounted Display (HMD) in an immersive, embodied way. In combination with current tools for 3D documentation and modelling and software for creating interactive virtual worlds, VR has the means to play an important role in the conservation and visualisation of cultural heritage (CH) for museums, educational institutions and other cultural areas. Corresponding game engines offer tools for interactive 3D visualisation of CH objects, which makes a new form of knowledge transfer possible with the direct participation of users in the virtual world. However, to ensure smooth and optimal real-time visualisation of the data in the HMD, VR applications should run at 90 frames per second (fps). This frame rate depends on several criteria, including the amount of data and the number of dynamic objects. In this contribution, the performance of a VR application has been investigated using digital 3D models of the fortress Al Zubarah in Qatar at various resolutions. We demonstrate how the amount of data and the hardware equipment influence real-time performance, and that developers of VR applications should find a compromise between the amount of data and the available computer hardware to guarantee smooth real-time visualisation at approximately 90 fps. CAD models therefore offer better performance for real-time VR visualisation than meshed models due to their significantly reduced data volume.
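The trade-off between frame rate and data volume can be made concrete with a back-of-the-envelope budget. The numbers below (triangle throughput per millisecond, fixed per-frame engine overhead) are illustrative assumptions, not measurements from this study:

```python
def max_triangles(fps_target=90, tris_per_ms=200_000, overhead_ms=3.0):
    """Rough per-frame triangle budget: frame time at the target fps,
    minus fixed engine overhead, times an assumed rasterization rate."""
    frame_ms = 1000.0 / fps_target          # 90 fps -> ~11.1 ms per frame
    return int(max(frame_ms - overhead_ms, 0.0) * tris_per_ms)

print(max_triangles())       # budget at the 90 fps HMD target
print(max_triangles(45))     # a lower fps target more than doubles the budget
```

This is why a lean CAD model can hit 90 fps where a dense photogrammetric mesh of the same scene cannot.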


Seminar.net ◽  
2018 ◽  
Vol 14 (1) ◽  
pp. 1-12
Author(s):  
Jonna Häkkilä ◽  
Ashley Colley ◽  
Jani Väyrynen ◽  
Antti-Jussi Yliharju

In this paper, we address the introduction of Virtual Reality (VR) tools to the education of industrial design (ID) university students. We present three cases of how we have introduced VR technology in different courses of the industrial design curriculum at the University of Lapland, Finland. As the first example (Case I), we introduced a VR simulation as an empathetic design tool to simulate visual disabilities. The second example (Case II) is reported from a course where students created concepts for a head-mounted display (HMD) AR application in smart buildings, and tried out interaction with an HMD VR application. In the third example (Case III), VR was used as a display environment to exhibit students’ 3D industrial design concept models. We report our experiences and lessons learnt, as well as recorded student feedback from the trials. As salient findings, we report the generally positive feedback, successful integration with the taught themes, especially when connected to physical 3D models, as well as suggested improvements. As factors hindering adoption of the technology from the teaching point of view, we report the lack of infrastructure for multi-user groups in classrooms, the additional effort required to set up the technical system, and the limited features supporting multimodality.


Neurosurgery ◽  
2003 ◽  
Vol 52 (3) ◽  
pp. 499-505 ◽  
Author(s):  
Antonio Bernardo ◽  
Mark C. Preul ◽  
Joseph M. Zabramski ◽  
Robert F. Spetzler

Abstract OBJECTIVE This project involves the development of a three-dimensional surgical simulator called interactive virtual dissection, which is designed to teach surgeons the visuospatial skills required to navigate through a transpetrosal approach. METHODS A robotically controlled microscope is used for surgical planning and data collection. The spatial anatomic data are recorded from sequentially deeper cadaveric head dissections as a series of superimposed anatomic pictures in stereoscopic digital format. The sequential series of images are then merged to form the final virtual representation. RESULTS The current three-dimensional virtual reality simulator allows the user to drill the petrous bone progressively deeper and to identify crucial structures much like an experienced surgeon drilling the petrous bone. The program allows surgeons and trainees to manipulate the virtual “surgical field” by interacting with the surgical anatomy. The interactive system functions on a desktop computer. CONCLUSION The ability to visualize and understand anatomic spatial relationships is crucial in surgical planning, as is a surgeon's confidence in performing the surgery. The virtual reality simulator does not replace the need for practicing surgery on cadavers. However, it is designed to facilitate, via stereoscopic projection, learning how to manipulate a drill in complicated or unfamiliar surgical approaches (e.g., a transpetrosal approach).


Author(s):  
Esin Onbaşıoğlu ◽  
Başar Atalay ◽  
Dionysis Goularas ◽  
Ahu H. Soydan ◽  
Koray K. Şafak ◽  
...  

Virtual reality-based surgical training has great potential as an alternative to traditional training methods. In neurosurgery, state-of-the-art training devices are limited, and surgical experience accumulates only after many procedures. Incorrect surgical movements can be destructive, leaving patients paralyzed, comatose or dead. Traditional techniques for surgical training use animals, phantoms, cadavers and real patients; most training is based either on these or on observation behind windows. The aim of this research is the development of a novel virtual reality training system for neurosurgical interventions based on a real surgical microscope, for better visual and tactile realism. The simulation relies on accurate tissue modeling, a force feedback device and a representation of the virtual scene on the screen or directly on the oculars of the operating microscope. An intraoperative presentation of the preoperative three-dimensional data will be prepared in our laboratory, and on this existing platform virtual organs will be reconstructed from real patients’ images. VISPLAT is a platform for virtual surgery simulation. It is designed as a patient-specific system that provides a database where patient information and CT images are stored. It acts as a framework for modeling 3D objects from CT images, visualization of surgical operations, haptic interaction and mechanistic material-removal models for surgical operations. It addresses challenging problems in surgical simulation, such as real-time interaction with complex 3D datasets, photorealistic visualization, and haptic (force-feedback) modeling. Surgical training on this system for educational and preoperative planning purposes will increase surgical success and provide a better quality of life for patients.
Surgical residents trained to perform surgery using virtual reality simulators will be more proficient and make fewer errors in their first operations than those who received no virtual reality-based education. VISPLAT will help to accelerate the learning curve. In the future, VISPLAT will offer more sophisticated task-training programs for minimally invasive surgery; the system will record errors and supply a way of measuring operative efficiency and performance, working both as an educational tool and a surgical planning platform.
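Force feedback in such simulators is commonly implemented with a penalty-based model: the deeper the virtual tool penetrates the tissue surface, the stronger the restoring force sent to the haptic device. This is a generic sketch of that widely used technique, not VISPLAT's actual model; the stiffness value and units are assumptions:

```python
def haptic_force(penetration_mm, stiffness=0.5):
    """Penalty-based haptic feedback: restoring force proportional to
    tool penetration depth (zero when the tool is above the surface)."""
    return stiffness * max(penetration_mm, 0.0)

print(haptic_force(2.0))   # tool 2 mm into tissue -> 1.0 (assumed N)
print(haptic_force(-1.0))  # tool above the surface -> no force
```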


2021 ◽  
Vol 11 (7) ◽  
pp. 3090
Author(s):  
Sangwook Yoo ◽  
Cheongho Lee ◽  
Seongah Chin

To experience a real soap bubble show, materials and tools are required, as are skilled performers who produce the show. However, in a virtual space, where spatial and temporal constraints do not exist, bubble art can be performed without real materials and tools while still giving a sense of immersion. The realistic expression of soap bubbles is therefore an interesting topic for virtual reality (VR). However, the current rendering of VR soap bubbles does not satisfy the high expectations of users. In this study, we therefore propose a physically based approach that reproduces the shape of the bubble by calculating the measured parameters required for bubble modeling and the physical motion of bubbles. In addition, we applied the change in the flow of the soap bubble's surface, measured in practice, to the VR rendering. To improve users’ VR experience, we propose that they experience the bubble show in a VR head-mounted display (HMD) environment.
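One physically based ingredient of realistic soap-bubble rendering is thin-film interference, which produces the characteristic iridescent colors. The following is a minimal two-beam sketch (not the authors' model) of relative reflected intensity as a function of film thickness and wavelength, including the half-wave phase shift at the outer surface:

```python
import math

def film_reflectance(thickness_nm, wavelength_nm, n=1.33, cos_theta=1.0):
    """Relative reflected intensity of a thin soap film under two-beam
    interference; n is the film's refractive index (water, assumed)."""
    # The optical path difference 2*n*d*cos(theta), plus the pi phase
    # shift at the first surface, collapses to this sin^2 form.
    return math.sin(2.0 * math.pi * n * thickness_nm * cos_theta
                    / wavelength_nm) ** 2

print(film_reflectance(0.0, 550.0))  # a fully drained film reflects nothing
```

Evaluating this per pixel across visible wavelengths yields the shifting colors as the film drains, which is the surface-flow effect the abstract alludes to.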


Sensors ◽  
2021 ◽  
Vol 21 (14) ◽  
pp. 4663
Author(s):  
Janaina Cavalcanti ◽  
Victor Valls ◽  
Manuel Contero ◽  
David Fonseca

An effective warning attracts attention, elicits knowledge, and enables compliance behavior. Game mechanics, which are directly linked to human desires, stand out as training, evaluation, and improvement tools. Immersive virtual reality (VR) facilitates training without risk to participants, evaluates the impact of an incorrect action/decision, and creates a smart training environment. The present study analyzes the user experience in a gamified virtual environment of risks using the HTC Vive head-mounted display. The game was developed in the Unreal game engine and consisted of a walk-through maze composed of evident dangers and different signaling variables while user action data were recorded. To demonstrate which aspects provide better interaction, experience, perception and memory, three different warning configurations (dynamic, static and smart) and two different levels of danger (low and high) were presented. To properly assess the impact of the experience, we conducted a survey about personality and knowledge before and after using the game. We proceeded with the qualitative approach by using questions in a bipolar laddering assessment that was compared with the recorded data during the game. The findings indicate that when users are engaged in VR, they tend to test the consequences of their actions rather than maintaining safety. The results also reveal that textual signal variables are not accessed when users are faced with the stress factor of time. Progress is needed in implementing new technologies for warnings and advance notifications to improve the evaluation of human behavior in virtual environments of high-risk surroundings.

