3D Visualization of Cultural Heritage Artefacts with Virtual Reality Devices

Author(s):  
S. Gonizzi Barsanti ◽  
G. Caruso ◽  
L. L. Micoli ◽  
M. Covarrubias Rodriguez ◽  
G. Guidi

Although 3D models are useful for preserving information about historical artefacts, the potential of these digital contents is not fully realized until they are used to interactively communicate their significance to non-specialists. Starting from this consideration, a new way to provide museum visitors with more information was investigated. The research aims to valorise, and make more accessible, the Egyptian funerary objects exhibited in the Sforza Castle in Milan. Its results will be used for the renewal of the current exhibition at the Archaeological Museum in Milan, making it more attractive. A 3D virtual interactive scenario concerning the “path of the dead”, an important ritual in ancient Egypt, was created to augment the public’s experience and comprehension through interactivity. Four important artefacts were considered for this purpose: two ushabtis, a wooden sarcophagus and a heart scarab. The scenario was realized by integrating low-cost Virtual Reality technologies, such as the Oculus Rift DK2 and the Leap Motion controller, and by developing dedicated software in Unity. The 3D models were augmented with responsive points of interest tied to important symbols or features of each artefact. This makes it possible to highlight individual parts of an artefact in order to better identify the hieroglyphs and provide their translation. The paper describes the process of optimizing the 3D models, the implementation of the interactive scenario, and the results of tests carried out in the lab.
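The responsive points of interest described above can be pictured as labelled 3D anchors with activation radii. The sketch below is in Python rather than the paper's actual Unity implementation, and its labels, positions and annotations are invented for illustration; it shows only the basic proximity test such a POI system relies on.

```python
import math

# Hypothetical sketch: each point of interest (POI) on an artefact is a
# labelled 3D anchor with an activation radius. When the user's gaze or
# hand cursor comes within the radius, the POI's annotation (e.g. a
# hieroglyph translation) is returned for display. All values are made up.
POIS = [
    {"label": "heart scarab inscription", "pos": (0.0, 0.1, 0.0),
     "radius": 0.05, "note": "inscription translation (hypothetical)"},
    {"label": "cartouche", "pos": (0.2, 0.0, 0.0),
     "radius": 0.04, "note": "royal name enclosure (hypothetical)"},
]

def active_poi(cursor, pois=POIS):
    """Return the first POI whose activation sphere contains the cursor."""
    for poi in pois:
        if math.dist(cursor, poi["pos"]) <= poi["radius"]:
            return poi
    return None

hit = active_poi((0.0, 0.12, 0.0))
print(hit["label"] if hit else "no POI")  # within 0.05 of the scarab anchor
```

In the Unity scenario this check would typically be done with colliders and raycasts rather than explicit distance tests, but the logic is the same.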

Author(s):  
Abdelhak Belhi ◽  
Abdelaziz Bouras

Museums and cultural institutions in general face a constant challenge in adding value to their collections. The attractiveness of an asset is tightly related to its value, following the law of supply and demand. Multiple studies have shown that new digital visualization technologies generate more excitement, especially among younger generations. Museums around the world are therefore trying to promote their collections through new multimedia and digital technologies such as 3D modeling, Virtual Reality (VR), Augmented Reality (AR), serious games, etc. However, the difficulty and the resources required to implement such technologies present a real challenge. Through this poster, we propose a 3D acquisition and visualization framework aimed mostly at increasing the value of cultural collections. The framework respects cost and time constraints while still introducing new ways of visualizing and interacting with high-quality 3D models of cultural objects. It leverages a new acquisition setup that simplifies the visual capturing process by using consumer-level hardware. The acquired images are enhanced using frame interpolation and super-resolution, and a photogrammetry tool is then used to generate the asset's 3D model. The model is displayed on a screen attached to a Leap Motion Controller, which allows hand interaction without sophisticated controllers or headgear, enabling almost natural interaction.
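The frame-interpolation step in this pipeline can be illustrated with a minimal sketch. Real systems use optical-flow or learned interpolation; simple linear blending is only the most basic stand-in. Frames are assumed here to be small grayscale images stored as nested lists, with invented pixel values.

```python
# Minimal sketch of frame interpolation as linear blending, assuming
# grayscale frames of identical size stored as nested lists of intensities.
def interpolate_frames(frame_a, frame_b, t=0.5):
    """Blend two same-sized frames pixelwise: (1 - t) * A + t * B."""
    return [
        [(1 - t) * a + t * b for a, b in zip(row_a, row_b)]
        for row_a, row_b in zip(frame_a, frame_b)
    ]

a = [[0, 100], [50, 200]]    # invented frame at time i
b = [[100, 100], [150, 0]]   # invented frame at time i + 1
mid = interpolate_frames(a, b)  # synthetic halfway frame
print(mid)  # [[50.0, 100.0], [100.0, 100.0]]
```

Inserting such synthetic frames densifies the capture sequence before the photogrammetry stage; the super-resolution enhancement mentioned in the abstract is a separate, more involved step.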


Author(s):  
Daniel Probst ◽  
Jean-Louis Reymond

The recent general availability of low-cost virtual reality headsets, and the accompanying 3D engine support, presents an opportunity to bring the concept of chemical space into virtual environments. While virtual reality applications are widespread tools in other fields, their use in the visualization and exploration of abstract data such as chemical spaces has remained experimental. In our previous work we established the concept of interactive 2D maps of chemical spaces, followed by interactive web-based 3D visualizations, culminating in the interactive web-based 3D visualization of extremely large chemical spaces. Virtual reality chemical spaces are a natural extension of these concepts. As 2D and 3D embeddings and projections of high-dimensional chemical fingerprint spaces have been shown to be valuable tools in chemical space visualization and exploration, existing data mining and preparation pipelines can be extended for use in virtual reality applications. Here we present an application based on the Unity engine and the Virtual Reality Toolkit (VRTK) that allows the interactive exploration of a chemical space populated by DrugBank compounds in virtual reality. The source code of the application, as well as the most recent build, is available on GitHub.
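The high-dimensional fingerprint spaces mentioned above are commonly built on binary chemical fingerprints compared with the Tanimoto (Jaccard) coefficient. The sketch below illustrates that comparison only; the bit indices are made up and this is not taken from the paper's actual pipeline.

```python
# Sketch of binary fingerprint comparison with the Tanimoto coefficient.
# Fingerprints are represented as sets of "on" bit indices; the specific
# bit assignments below are invented for illustration.
def tanimoto(fp_a: set, fp_b: set) -> float:
    """|A & B| / |A | B| for two binary fingerprints."""
    if not fp_a and not fp_b:
        return 1.0
    return len(fp_a & fp_b) / len(fp_a | fp_b)

compound_a = {12, 45, 101, 310, 512}   # hypothetical on-bits
compound_b = {12, 45, 222, 310, 640}   # hypothetical on-bits
print(round(tanimoto(compound_a, compound_b), 3))  # 3 shared of 7 total bits
```

Pairwise similarities like these are what the 2D/3D embedding and projection steps attempt to preserve when laying out a chemical space.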


Author(s):  
Kristen Jones

The field of virtual reality is growing quickly across many disciplines, none more important than archaeology and cultural heritage. Numerous artifacts are uncovered each year by archaeological excavations around the world, yet only a select few are displayed and recorded in museums while the rest remain hidden away in storage facilities. Virtual reality photography provides a potential solution to this problem. This project aims to optimize a computational workflow for digitally documenting these artifacts through an in-depth analysis of the Diniacopoulos Collection of Greek and Egyptian artifacts, in collaboration with the Art Conservation department at Queen's University. The Diniacopoulos Collection has been held by Queen's since its donation in 2001 by the estate of Olga Diniacopoulos. The project combines studio photogrammetry with a method known as focus stacking to optimize the quality of each image. First, images of each object are used to generate scaled photogrammetric models in Agisoft PhotoScan. The same images can also be used to create lower-resolution virtual reality movies that are easily shared on websites using the GardenGnome ObjectVR software. Utilizing another growing industry, 3D printing, takes this method one step further: 3D-printed archaeological finds give people a tactile experience with artifacts that would otherwise be kept safe inside museum cases or warehouses where the public has no access. These methods have applications not only in archaeology but in a number of collaborative fields.
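Focus stacking, as used in this workflow, combines a stack of images focused at different depths by taking each output pixel from the frame that is locally sharpest there. The sketch below is an illustrative simplification (a crude Laplacian sharpness measure on tiny nested-list "images"), not the implementation of any tool named above.

```python
# Toy focus stacking: for each pixel, pick the value from whichever frame
# has the highest local contrast (crude Laplacian) at that position.
def laplacian(img, y, x):
    """Absolute difference between a pixel and its in-bounds neighbours."""
    h, w = len(img), len(img[0])
    neighbours = [
        img[ny][nx]
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
        if 0 <= ny < h and 0 <= nx < w
    ]
    return abs(len(neighbours) * img[y][x] - sum(neighbours))

def focus_stack(frames):
    """Merge a stack of same-sized frames by per-pixel sharpness."""
    h, w = len(frames[0]), len(frames[0][0])
    return [
        [max(frames, key=lambda f: laplacian(f, y, x))[y][x] for x in range(w)]
        for y in range(h)
    ]

blurred = [[5, 5], [5, 5]]   # out-of-focus frame: flat, low contrast
focused = [[0, 9], [9, 0]]   # in-focus frame: strong local contrast
print(focus_stack([blurred, focused]))  # picks the high-contrast pixels
```

Production focus-stacking tools add alignment, smoothed sharpness maps and blending between frames, but the per-pixel "sharpest wins" selection is the core idea.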


Neurosurgery ◽  
2019 ◽  
Vol 85 (2) ◽  
pp. E343-E349 ◽  
Author(s):  
David Bairamian ◽  
Shinuo Liu ◽  
Behzad Eftekhar

Abstract BACKGROUND Three-dimensional (3D) visualization of neurovascular structures aids preoperative surgical planning. 3D printed models and virtual reality (VR) devices are 2 options for improving 3D stereovision and stereoscopic depth perception of cerebrovascular anatomy for aneurysm surgery. OBJECTIVE To investigate and compare the practicality and potential of 3D printed and VR models in a neurosurgical education context. METHODS The VR angiogram was introduced through the development and testing of a VR smartphone app. Ten neurosurgical trainees from Australia and New Zealand participated in a 2-part interactive exercise using three 3D printed and VR angiogram models, followed by a questionnaire about their experience. In a separate exercise investigating the learning curve effect in VR angiogram use, a qualified neurosurgeon performed 15 exercises involving the manipulation of VR angiogram models. RESULTS The VR angiogram outperformed the 3D printed model in terms of resolution. It had a statistically significant advantage in ability to zoom, resolution, ease of manipulation, model durability, and educational potential, and a higher total questionnaire score than the 3D models. The 3D printed models had a statistically significant advantage in depth perception and ease of manipulation. The results were independent of trainee year level, sequence of the tests, and anatomy. CONCLUSION In selected cases with challenging cerebrovascular anatomy, where stereoscopic depth perception is helpful, the VR angiogram should be considered a viable alternative to 3D printed models for neurosurgical training and preoperative planning. An immersive virtual environment offers excellent resolution and ability to zoom, with potential as an untapped educational tool.


Sensors ◽  
2019 ◽  
Vol 19 (5) ◽  
pp. 1072 ◽  
Author(s):  
Tibor Guzsvinecz ◽  
Veronika Szucs ◽  
Cecilia Sik-Lanyi

As the need for sensors increases with the rise of virtual reality, augmented reality and mixed reality, the purpose of this paper is to evaluate the suitability of the two Kinect devices and the Leap Motion Controller. In evaluating suitability, the authors focused on the state of the art, device comparison, accuracy, precision, existing gesture recognition algorithms, and the price of the devices. The aim of this study is to give insight into whether these devices could substitute for more expensive sensors in industry or on the market. While in general the answer is yes, it is not as simple as it seems: there are significant differences between the devices, even between the two Kinects, such as different measurement ranges, error distributions on each axis, and depth precision that changes with distance.
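The accuracy/precision distinction this comparison rests on can be made concrete: given repeated depth readings of a target at a known ground-truth distance, accuracy corresponds to the systematic bias of the mean reading, and precision to the spread of the readings. A minimal sketch with invented sensor values:

```python
import statistics

# Illustrative separation of accuracy (bias of the mean) and precision
# (repeatability, here the sample standard deviation). The readings are
# invented and stand in for repeated depth measurements in millimetres.
def accuracy_and_precision(readings_mm, ground_truth_mm):
    bias = statistics.mean(readings_mm) - ground_truth_mm  # systematic error
    spread = statistics.stdev(readings_mm)                 # repeatability
    return bias, spread

readings = [1003.0, 1001.0, 1005.0, 1003.0]  # invented depth readings, mm
bias, spread = accuracy_and_precision(readings, 1000.0)
print(bias)  # 3.0 -> this imaginary sensor reads 3 mm long on average
```

Characterising how both quantities change along each axis and with distance is exactly the kind of per-device profile the paper compares.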


2017 ◽  
Vol 27 (2) ◽  
pp. 25935
Author(s):  
Nayron Medeiros Soares ◽  
Gabriela Magalhães Pereira ◽  
Renata Italiano da Nóbrega Figueiredo ◽  
Gleydson Silva Morais ◽  
Sandy Gonzaga De Melo

Virtual reality therapy using the Leap Motion Controller for post-stroke upper limb rehabilitation

AIMS: To evaluate the applicability of a virtual reality-based motion sensor for post-stroke upper limb rehabilitation.

CASE DESCRIPTIONS: Three post-stroke patients underwent virtual reality training for rehabilitation of their upper limbs, using the Leap Motion Controller and the game Playground 3D®, for 3 consecutive days. On the first and last days, the Box and Blocks test, the De Melo Eye-Hand Coordination Test, and transcranial magnetic stimulation were applied. On the last day, the patients were evaluated with the Experience Evaluation Form. After the proposed training, a lower motor threshold was observed in both cerebral hemispheres, as well as better performance in the tests evaluating hand and eye-hand coordination skills. The proposed therapy was well received by the patients.

CONCLUSIONS: No adverse effects were observed, and promising and precise results were obtained for virtual reality-based training using the Leap Motion Controller and Playground 3D®. The training allowed patients to take an active role in the rehabilitation of stroke-induced upper limb sequelae.


2021 ◽  
Vol 10 (6) ◽  
pp. 1201
Author(s):  
Maciej Błaszczyk ◽  
Redwan Jabbar ◽  
Bartosz Szmyd ◽  
Maciej Radek

We developed a practical and cost-effective method of producing a 3D-printed model of the arterial Circle of Willis of patients treated for an intracranial aneurysm. We present and explain the steps necessary to produce a 3D model from medical image data, and demonstrate the significant value such models have in patient-specific preoperative planning as well as education. A Digital Imaging and Communications in Medicine (DICOM) viewer is used to create a 3D visualization from the patient's Computed Tomography Angiography (CTA) images. After generating the reconstruction, we manually remove the anatomical components we wish to exclude from the print, using tools provided with the imaging software. We then export this 3D reconstruction to a Standard Triangulation Language (STL) file, which is run through slicer software to generate a G-code file for the printer. After the print is complete, the supports created during printing are removed manually. The 3D-printed models we created were of good accuracy and scale. The median production time for the models described in this manuscript was 4.4 h (range: 3.9–4.5 h). The models were evaluated by neurosurgical teams at a local hospital for quality and practicality in urgent and non-urgent care. We hope we have given readers adequate insight into the equipment and software they would need to quickly produce their own accurate and cost-effective 3D models from CT angiography images. It has become quite clear to us that the cost-benefit ratio of producing such a simplified model is worthwhile.
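The STL stage of this pipeline is conceptually simple: an ASCII STL file is just a list of triangular facets, each with a normal vector. The minimal writer below is a hypothetical illustration (a single hand-picked facet); real CTA-derived meshes contain many thousands of facets and are exported by the imaging software, not written by hand.

```python
import io

def facet_normal(v0, v1, v2):
    """Unnormalised cross product of two triangle edges."""
    ax, ay, az = (v1[i] - v0[i] for i in range(3))
    bx, by, bz = (v2[i] - v0[i] for i in range(3))
    return (ay * bz - az * by, az * bx - ax * bz, ax * by - ay * bx)

def write_ascii_stl(out, name, triangles):
    """Write triangles in the ASCII STL facet/outer-loop/vertex layout."""
    out.write(f"solid {name}\n")
    for v0, v1, v2 in triangles:
        n = facet_normal(v0, v1, v2)
        out.write(f"  facet normal {n[0]} {n[1]} {n[2]}\n    outer loop\n")
        for v in (v0, v1, v2):
            out.write(f"      vertex {v[0]} {v[1]} {v[2]}\n")
        out.write("    endloop\n  endfacet\n")
    out.write(f"endsolid {name}\n")

buf = io.StringIO()
tri = ((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0))
write_ascii_stl(buf, "demo", [tri])
print(buf.getvalue().splitlines()[0])  # solid demo
```

The slicer then walks such a triangle soup layer by layer to emit the G-code toolpaths mentioned above.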


Author(s):  
S. Gonizzi Barsanti ◽  
S. G. Malatesta ◽  
F. Lella ◽  
B. Fanini ◽  
F. Sala ◽  
...  

Nowadays, the best way to disseminate culture is to create virtual and augmented reality scenarios that supply museum visitors with a powerful, interactive tool for learning sometimes difficult concepts in an easy, entertaining way. 3D models derived from reality-based techniques are now used to preserve, document and restore historical artefacts. These digital contents are also a powerful instrument for interactively communicating their significance to non-specialists, making it easier to understand concepts that are sometimes complicated or unclear. Virtual and Augmented Reality are certainly valid tools for interacting with 3D models and a fundamental help in making culture more accessible to the wider public. These technologies can help museum curators adapt the cultural offering and the information about the artefacts to different categories of visitors. They allow visitors to travel through space and time, and serve a great educational function by permitting information and concepts that could prove complicated to be explained in an easy and attractive way. The aim of this paper is to create a virtual scenario and an augmented reality app that recreate specific spaces in the Capitoline Museum in Rome as they were during Winckelmann's time, placing specific statues in their original 18th-century positions.


10.2196/11925 ◽  
2019 ◽  
Vol 21 (5) ◽  
pp. e11925 ◽  
Author(s):  
Fernando Alvarez-Lopez ◽  
Marcelo Fabián Maina ◽  
Francesc Saigí-Rubió

Background The increasingly pervasive presence of technology in the operating room raises the need to study the interaction between the surgeon and computer system. A new generation of tools known as commercial off-the-shelf (COTS) devices enabling touchless gesture–based human-computer interaction is currently being explored as a solution in surgical environments. Objective The aim of this systematic literature review was to provide an account of the state of the art of COTS devices in the detection of manual gestures in surgery and to identify their use as a simulation tool for motor skills teaching in minimally invasive surgery (MIS). Methods For this systematic literature review, a search was conducted in PubMed, Excerpta Medica dataBASE, ScienceDirect, Espacenet, OpenGrey, and the Institute of Electrical and Electronics Engineers databases. Articles published between January 2000 and December 2017 on the use of COTS devices for gesture detection in surgical environments and in simulation for surgical skills learning in MIS were evaluated and selected. Results A total of 3180 studies were identified, 86 of which met the search selection criteria. Microsoft Kinect (Microsoft Corp) and the Leap Motion Controller (Leap Motion Inc) were the most widely used COTS devices. The most common intervention was image manipulation in surgical and interventional radiology environments, followed by interaction with virtual reality environments for educational or interventional purposes. The possibility of using this technology to develop portable low-cost simulators for skills learning in MIS was also examined. As most of the articles identified in this systematic review were proof-of-concept, prototype user testing, or feasibility testing studies, we concluded that the field is still in the exploratory phase in areas requiring touchless manipulation within environments and settings that must adhere to asepsis and antisepsis protocols, such as angiography suites and operating rooms. Conclusions COTS devices applied to hand and instrument gesture–based interfaces in the field of simulation for skills learning and training in MIS could open up a promising field to achieve ubiquitous training and presurgical warm-up.

