VRMiner

Author(s):  
H. Azzag ◽  
F. Picarougne ◽  
C. Guinot ◽  
G. Venturini

We present in this chapter a new interactive 3D method, named VRMiner, for visualizing multimedia data with virtual reality. We consider that an expert in a specific domain has collected a set of examples described not only with numeric and symbolic attributes but also with sounds, images, videos, Web sites or 3D models, and that this expert wishes to explore these data to understand their structure. We use a 3D stereoscopic display to let the expert easily visualize and observe the data. We add to this display contextual information such as texts and small images, voice synthesis and sound. Larger images, videos and Web sites are displayed on a second computer in order to ensure real-time display. Navigating through the data is done in a very intuitive and precise way with a 3D sensor that simulates a virtual camera. Interactive requests can be formulated by the expert with a data glove that recognizes hand gestures. We show how this tool has been successfully applied to several real-world applications.
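The camera-style navigation described above can be illustrated with a minimal sketch. The pose format (yaw/pitch angles plus a position) and the function names are assumptions for illustration; the chapter's abstract does not specify the 3D sensor's actual API.

```python
import math

def view_forward(yaw_deg, pitch_deg):
    """Convert a tracked sensor orientation (yaw/pitch, in degrees)
    into a forward vector for a virtual camera, as in camera-style
    navigation driven by a handheld 3D sensor."""
    yaw = math.radians(yaw_deg)
    pitch = math.radians(pitch_deg)
    return (
        math.cos(pitch) * math.sin(yaw),   # x: left/right
        math.sin(pitch),                   # y: up/down
        math.cos(pitch) * math.cos(yaw),   # z: depth
    )

def move_camera(position, yaw_deg, pitch_deg, step):
    """Advance the camera along its viewing direction, so that
    pointing the sensor steers the flight through the data."""
    f = view_forward(yaw_deg, pitch_deg)
    return tuple(p + step * c for p, c in zip(position, f))
```

With a neutral orientation the camera simply advances along the depth axis; turning the sensor 90 degrees redirects the motion sideways.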

2015 ◽  
Vol 791 ◽  
pp. 119-124
Author(s):  
Mikuláš Hajduk ◽  
Juraj Kováč

The contribution deals with the generation of interactive spatial solutions for manufacturing systems by means of virtual reality. It describes experimental work aimed at creating 3D models of manufacturing systems with the hardware and software resources of virtual reality. As an example of this innovative approach, a data glove is used in a virtual manufacturing environment: the glove serves to place models of production equipment within the three-dimensional manufacturing space.
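The glove-driven placement described above can be sketched as follows. The grid-snapping step and the function names are assumptions for illustration; the contribution does not specify its placement logic.

```python
def snap_to_grid(position, cell=0.5):
    """Snap a glove-tracked 3D position to the nearest layout grid
    point, so a piece of production equipment lands on a clean
    location on the virtual factory floor."""
    return tuple(round(c / cell) * cell for c in position)

def place_model(layout, model_id, glove_position):
    """Record a production-equipment model at the snapped position
    in the manufacturing layout."""
    layout[model_id] = snap_to_grid(glove_position)
    return layout
```

A raw glove position such as (1.26, 0.1, 2.74) would be stored as the grid point (1.5, 0.0, 2.5).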


2013 ◽  
Vol 397-400 ◽  
pp. 2701-2704
Author(s):  
Yu Jia

Virtual Reality (VR) simulates a real-world environment for multimedia teaching systems, from which users can derive useful knowledge. Research in virtual reality has very broad prospects, but the difficulties are also very great. This paper addresses the implementation of a university network multimedia teaching system, focusing mainly on the problem of synchronizing images and other multimedia information with voice, and on the end-to-end jitter that affects demonstrations and presentations when information is disseminated over the network. For both of these main problems in network multimedia teaching systems, the paper presents a solution for multimedia information synchronization and a buffering strategy for received multimedia data.
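The receive-side buffering idea mentioned above can be illustrated with a minimal jitter-buffer sketch. The class, its depth parameter, and the release policy are assumptions for illustration; the paper's actual buffering strategy is not detailed in the abstract.

```python
import heapq

class JitterBuffer:
    """Minimal receive-side buffer: packets arrive out of order and
    with variable delay; we hold up to `depth` packets and release
    them in sequence order, smoothing network jitter."""

    def __init__(self, depth=3):
        self.depth = depth
        self.heap = []       # min-heap of (sequence_number, payload)
        self.next_seq = 0    # next sequence number to play out

    def push(self, seq, payload):
        heapq.heappush(self.heap, (seq, payload))

    def pop_ready(self):
        """Release packets once the buffer is deep enough, dropping
        late duplicates that were already played out."""
        out = []
        while self.heap and len(self.heap) >= self.depth:
            seq, payload = heapq.heappop(self.heap)
            if seq < self.next_seq:
                continue                 # late duplicate, drop it
            self.next_seq = seq + 1
            out.append(payload)
        return out
```

Holding a few packets before playout trades a small fixed latency for a steady playback rate, which is the usual remedy for end-to-end jitter.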


2021 ◽  
Vol 10 (7) ◽  
pp. 1511
Author(s):  
Katherine Nameth ◽  
Theresa Brown ◽  
Kim Bullock ◽  
Sarah Adler ◽  
Giuseppe Riva ◽  
...  

Binge-eating disorder (BED) and bulimia nervosa (BN) have adverse psychological and medical consequences. Innovative interventions, like the integration of virtual reality (VR) with cue-exposure therapy (VR-CET), enhance outcomes for refractory patients compared to cognitive behavior therapy (CBT). Little is known about the feasibility and acceptability of translating VR-CET into real-world settings. To investigate this question, adults previously treated for BED or BN with at least one objective or subjective binge episode/week were recruited from an outpatient university eating disorder clinic to receive up to eight weekly one-hour VR-CET sessions. Eleven of 16 (68.8%) eligible patients were enrolled; nine (82%) completed treatment; and 82% (9/11) provided follow-up data 7.1 (SD = 2.12) months post-treatment. Overall, participant and therapist acceptability of VR-CET was high. Intent-to-treat objective binge episodes (OBEs) decreased significantly from 3.3 to 0.9/week (p < 0.001). The post-treatment OBE 7-day abstinence rate for completers was 56%, with 22% abstinent for 28 days at follow-up. Among participants purging at baseline, episodes decreased from a mean of one to zero/week, with 100% abstinence maintained at follow-up. The adoption of VR-CET into real-world clinic settings appears feasible and acceptable, with a preliminary signal of effectiveness. Findings, including some loss of treatment gains during follow-up, may inform future treatment development.


2020 ◽  
Vol 22 (Supplement_3) ◽  
pp. iii461-iii461
Author(s):  
Andrea Carai ◽  
Angela Mastronuzzi ◽  
Giovanna Stefania Colafati ◽  
Paul Voicu ◽  
Nicola Onorini ◽  
...  

Abstract Tridimensional (3D) rendering of volumetric neuroimaging is increasingly being used to assist the surgical management of brain tumors. New technologies allowing immersive virtual reality (VR) visualization of the obtained models offer the opportunity to appreciate neuroanatomical details and the spatial relationship between the tumor and normal neuroanatomical structures to a level never seen before. We present our preliminary experience with the Surgical Theatre, a commercially available 3D VR system, in 60 consecutive neurosurgical oncology cases. 3D models were developed from volumetric CT scans and standard and advanced MR sequences. The system allows the loading of 6 different layers at the same time, with the possibility to modulate opacity and threshold in real time. 3D VR was used during preoperative planning, allowing a better definition of the surgical strategy. A tailored craniotomy and brain dissection can be simulated in advance and precisely performed in the OR, connecting the system to intraoperative neuronavigation. Smaller blood vessels are generally not included in the 3D rendering; however, real-time intraoperative threshold modulation of the 3D model assisted in their identification, improving surgical confidence and safety during the procedure. VR was also used offline, both before and after surgery, in the setting of case discussion within the neurosurgical team and during MDT discussion. Finally, 3D VR was used during informed consent, improving communication with families and young patients. 3D VR allows surgical strategies to be tailored to the individual patient, contributing to procedural safety and efficacy and to the overall improvement of neurosurgical oncology care.
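The real-time threshold modulation described above can be illustrated with a minimal sketch over a NumPy intensity volume. The function name and the per-voxel alpha scheme are assumptions for illustration; the Surgical Theatre's actual rendering pipeline is proprietary and not described in the abstract.

```python
import numpy as np

def layer_alpha(volume, lower, opacity=1.0):
    """Return per-voxel alpha for one volumetric layer: voxels whose
    intensity falls below `lower` become transparent, the rest take
    the layer's opacity. Interactively lowering `lower` reveals
    fainter structures such as small vessels."""
    return np.where(volume >= lower, opacity, 0.0)
```

Sliding the threshold down at the bedside corresponds to calling this with a smaller `lower`, so previously hidden low-intensity voxels acquire a nonzero alpha.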


2013 ◽  
Vol 2013 ◽  
pp. 1-9 ◽  
Author(s):  
Michelle Wang ◽  
Denise Reid

Background. This pilot study investigated the efficacy of a novel virtual reality-cognitive rehabilitation (VR-CR) intervention to improve contextual processing of objects in children with autism. Previous research supports that children with autism show deficits in contextual processing, as well as deficits in its elementary components: abstraction and cognitive flexibility. Methods. Four children with autism participated in a multiple-baseline, single-subject study. The children were taught how to see objects in context by reinforcing attention to pivotal contextual information. Results. All children demonstrated statistically significant improvements in contextual processing and cognitive flexibility. Mixed results were found on the control test and changes in context-related behaviours. Conclusions. Larger-scale studies are warranted to determine the effectiveness and usability in comprehensive educational programs.

