Hybrid Design Tools in a Social Virtual Reality Using Networked Oculus Rift: A Feasibility Study in Remote Real-Time Interaction

Author(s):  
Robert E. Wendrich ◽  
Kris-Howard Chambers ◽  
Wadee Al-Halabi ◽  
Eric J. Seibel ◽  
Olaf Grevenstuk ◽  
...  

Hybrid Design Tool Environments (HDTE) allow designers and engineers to use real tangible tools and physical objects and/or artifacts to create real-time virtual representations and presentations on the fly. Manipulations of the real tangible objects (e.g., wire mesh, clay, sketches) are translated into 2-D and/or 3-D digital CAD software and/or virtual instances. The HDTE is equipped with a Loosely Fitted Design Synthesizer (NXt-LFDS) to support this multi-user interaction and design processing. The current study explores, for the first time, the feasibility of using an NXt-LFDS in a networked, immersive, multi-participant social virtual reality environment (VRE). Using Oculus Rift goggles and PCs at each location linked via Skype, team members physically located in several countries had the illusion of being co-located in a single virtual world, where they used rawshaping technologies (RST) to design a woman’s purse as a 3-D virtual representation, with the option of printing the purse out on the spot (i.e., anywhere within the networked loop) with a 2-D or 3-D printer. Affordable immersive virtual reality (VR) technology (and 3-D AM) is becoming commercially available and widely used by mainstream consumers, a major development that could transform the collaborative design process. The results of the current feasibility study suggest that product design may become considerably more individualized within collaborative multi-user settings and less inhibited in the coming ‘Diamond Age’ [1] of VR and collaborative networks, with profound implications for the design (e.g., fashion) and engineering industries. This paper presents the proposed system architecture, a collaborative use-case scenario, and preliminary results on interaction, coordination, cooperation, and communication in immersive VR.

Sensors ◽  
2019 ◽  
Vol 19 (5) ◽  
pp. 1121 ◽  
Author(s):  
Nassr Alsaeedi ◽  
Dieter Wloka

The aim of this study is to develop a real-time eyeblink detection algorithm that detects eyeblinks during the closing phase for a virtual reality headset (VR headset) and accordingly classifies the eye’s current state (open or closed). The proposed method uses motion vector analysis to detect eyelid closure and a Haar cascade classifier (HCC) to localise the eye in the captured frame. When a downward motion vector (DMV) is detected, cross-correlation between the current region of interest (the eye in the current frame) and a template image of an open eye is used to verify eyelid closure. A finite state machine makes decisions about eyeblink occurrence and tracks the eye state in a real-time video stream. The main contributions of this study are, first, the ability of the proposed algorithm to detect eyeblinks during the closing or pause phases, before the reopening phase of the eyeblink occurs, and second, the realisation of the proposed approach as a working real-time eyeblink detection sensor for a VR headset, based on a real-world scenario; the sensor is used in an ongoing study that we are conducting. The proposed method achieved 83.9% accuracy, 91.8% precision, and 90.4% recall, and processing each frame took approximately 11 milliseconds. Additionally, we present a new dataset for a non-frontal eye-monitoring configuration for eyeblink tracking inside a VR headset. The data annotations are also included, so that the dataset can be used for method validation and performance evaluation in future studies.
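The decision-making step described above can be sketched as a small finite state machine. The following is a minimal illustration, not the authors' implementation: the state names, the correlation threshold, and the per-frame inputs (a downward-motion flag and an open-eye template correlation score) are assumptions chosen to mirror the abstract's description of detecting a blink already in the closing phase.

```python
from enum import Enum, auto

class EyeState(Enum):
    OPEN = auto()
    CLOSED = auto()

class BlinkFSM:
    """Sketch of the decision logic: a downward motion vector combined with a
    low cross-correlation against the open-eye template signals eyelid closure."""

    def __init__(self, corr_threshold=0.6):
        self.corr_threshold = corr_threshold  # hypothetical threshold value
        self.state = EyeState.OPEN
        self.blink_count = 0

    def update(self, downward_motion: bool, open_eye_corr: float) -> EyeState:
        if self.state is EyeState.OPEN:
            # Closure is verified only when downward motion coincides with the
            # eye region no longer matching the open-eye template.
            if downward_motion and open_eye_corr < self.corr_threshold:
                self.state = EyeState.CLOSED
                self.blink_count += 1  # blink registered during the closing phase
        else:
            # Reopening: the region resembles the open-eye template again.
            if open_eye_corr >= self.corr_threshold:
                self.state = EyeState.OPEN
        return self.state
```

In a real pipeline the two inputs would come from the optical-flow motion analysis and a template-matching call on the Haar-cascade-localised eye region; here they are passed in directly so the state transitions are easy to follow.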


2022 ◽  
Vol 2 ◽  
Author(s):  
Elin A. Björling ◽  
Ada Kim ◽  
Katelynn Oleson ◽  
Patrícia Alves-Oliveira

Virtual reality (VR) offers potential as a collaborative tool for both technology design and human-robot interaction. We utilized a participatory, human-centered design (HCD) methodology to develop a collaborative, asymmetric VR game to explore teens’ perceptions of, and interactions with, social robots. Our paper illustrates three stages of our design process: ideation, prototyping, and usability testing with users. Through these stages we identified important design requirements for our mid-fidelity environment. We then describe findings from our pilot test of the mid-fidelity VR game with teens. Owing to the unique asymmetric virtual reality design, we observed successful collaborations and interesting collaboration styles across teens. This study highlights the potential of asymmetric VR as a collaborative design tool as well as an appropriate medium for successful teen-to-teen collaboration.


2019 ◽  
Vol 16 (2) ◽  
pp. 22-31
Author(s):  
Christian Zabel ◽  
Gernot Heisenberg

Driven by popular products and applications such as the Oculus Rift, Pokémon Go, or the Samsung Gear, virtual reality, augmented reality, and mixed reality are attracting growing interest. Although the underlying technologies have been in use since the 1990s, broader adoption has only been observed relatively recently. As a result, a rapidly developing ecosystem for VR and AR has emerged (Berg & Vance, 2017). From a (media) policy perspective, the question of interest is which location factors favor the settlement and agglomeration of these firms. Since their value-creation activities differ markedly from those of classical media products, both with respect to target markets and to production (e.g., heavy use of IT and hardware in product creation), one may ask in particular whether VR, MR, and AR companies should be regarded as part of the media industry for the purposes of location policy, and thus respond to a similar degree to the factors that are especially relevant for media companies. This paper is the result of a research project commissioned by Mediennetzwerk NRW, a subsidiary of the Film- und Medienstiftung NRW.


2020 ◽  
Vol 6 (3) ◽  
pp. 127-130
Author(s):  
Max B. Schäfer ◽  
Kent W. Stewart ◽  
Nico Lösch ◽  
Peter P. Pott

Abstract Access to systems for robot-assisted surgery is limited due to high costs. To enable widespread use, numerous issues have to be addressed to improve and/or simplify their components. Current systems commonly use universal linkage-based input devices; only a few application-oriented and specialized designs are in use. A versatile virtual reality controller is proposed as an alternative input device for controlling a seven-degree-of-freedom articulated robotic arm. The real-time capabilities of the setup, which replicates a system for robot-assisted teleoperated surgery, are investigated to assess suitability. Image-based assessment showed a considerable system latency of 81.7 ± 27.7 ms. However, due to its versatility, the virtual reality controller is a promising alternative to current input devices for research on medical telemanipulation systems.
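The reported latency figure (mean ± standard deviation) comes from an image-based assessment: the gap between the frame in which a controller input occurs and the frame in which the robot arm first visibly moves. A minimal sketch of that aggregation step, with made-up timestamp values and a hypothetical function name, could look like this:

```python
import statistics

def summarize_latency(input_ts_ms, motion_ts_ms):
    """Image-based latency sketch (hypothetical): each latency sample is the
    difference between the timestamp of the frame showing the controller
    input and the timestamp of the frame first showing robot motion (ms).
    Returns (mean, sample standard deviation), the form of the 81.7 +/- 27.7 ms
    figure reported above."""
    latencies = [m - i for i, m in zip(input_ts_ms, motion_ts_ms)]
    return statistics.mean(latencies), statistics.stdev(latencies)
```

In practice the timestamps would be extracted from a video recording of both the input device and the robot arm; the example only shows how the per-event samples are reduced to a mean ± standard deviation.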


Sensors ◽  
2020 ◽  
Vol 21 (1) ◽  
pp. 26
Author(s):  
David González-Ortega ◽  
Francisco Javier Díaz-Pernas ◽  
Mario Martínez-Zarzuela ◽  
Míriam Antón-Rodríguez

Drivers’ gaze information can be crucial in driving research because of its relation to driver attention. In particular, including gaze data in driving simulators broadens the scope of research studies, as drivers’ gaze patterns can be related to their features and performance. In this paper, we present two gaze region estimation modules integrated in a driving simulator: one uses the 3D Kinect device and the other uses the virtual reality Oculus Rift device. In every processed frame of the route, the modules detect which of the seven regions into which the driving scene was divided the driver is gazing at. Four gaze estimation methods, which learn the relation between gaze displacement and head movement, were implemented and compared: two simpler, point-based methods that try to capture this relation, and two based on classifiers such as MLP and SVM. Experiments were carried out with 12 users who drove the same scenario twice, each time with a different visualization display: first a big screen and later the Oculus Rift. Overall, the Oculus Rift outperformed the Kinect as hardware for gaze estimation. The best-performing Oculus-based gaze region estimation method achieved an accuracy of 97.94%. The information provided by the Oculus Rift module enriches the driving simulator data and makes multimodal driving performance analysis possible, in addition to the immersion and realism provided by the virtual reality experience.
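A point-based method of the kind mentioned above can be sketched as a nearest-centroid lookup: a calibration step records the mean head pose while the user fixates each scene region, and at run time the current pose is assigned to the closest centroid. The region names, centroid values, and function below are illustrative assumptions, not the paper's actual seven regions or trained models:

```python
import math

# Hypothetical calibration data: mean head pose (yaw, pitch in degrees)
# recorded while the user fixates each of seven scene regions.
REGION_CENTROIDS = {
    "left_mirror": (-45.0, -5.0),
    "left_window": (-25.0, 0.0),
    "road_ahead": (0.0, 0.0),
    "rear_mirror": (10.0, 8.0),
    "dashboard": (0.0, -15.0),
    "right_window": (25.0, 0.0),
    "right_mirror": (45.0, -5.0),
}

def estimate_gaze_region(yaw: float, pitch: float) -> str:
    """Point-based estimator sketch: assign the current head pose to the
    region whose calibration centroid is nearest in (yaw, pitch) space."""
    return min(
        REGION_CENTROIDS,
        key=lambda r: math.hypot(yaw - REGION_CENTROIDS[r][0],
                                 pitch - REGION_CENTROIDS[r][1]),
    )
```

The classifier-based variants (MLP, SVM) replace this distance rule with a model trained on labelled head-pose samples, which is what allows them to capture non-linear relations between head movement and gaze displacement.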


2021 ◽  
pp. 104687812110082
Author(s):  
Omamah Almousa ◽  
Ruby Zhang ◽  
Meghan Dimma ◽  
Jieming Yao ◽  
Arden Allen ◽  
...  

Objective. Although simulation-based medical education is fundamental for the acquisition and maintenance of knowledge and skills, simulators are often located in urban centers and are not easily accessible due to cost, time, and geographic constraints. Our objective is to develop a proof-of-concept innovative prototype using virtual reality (VR) technology for clinical tele-simulation training, to facilitate access and global academic collaborations. Methodology. Our project is a VR-based system using the Oculus Quest as a standalone, portable, wireless head-mounted device, along with a digital platform to deliver immersive clinical simulation sessions. An Instructor’s Control Panel (ICP) application is designed to create VR clinical scenarios remotely, live-stream sessions, communicate with learners, and control VR clinical training in real time. Results. The Virtual Clinical Simulation (VCS) system offers realistic clinical training in a virtual space that mimics hospital environments. The VR clinical scenarios are customizable to suit training needs, with high-fidelity, lifelike characters designed to deliver an interactive and immersive learning experience. The real-time connection and live stream between the ICP and the VR training system enable interactive academic learning and facilitate access to tele-simulation training. Conclusions. The VCS system provides innovative solutions to major challenges associated with conventional simulation training, such as access, cost, personnel, and curriculum. VCS facilitates the delivery of academic and interactive clinical training that is similar to real-life settings. Tele-clinical simulation systems like VCS facilitate necessary academic-community partnerships, as well as a global education network between resource-rich and low-income countries.


2020 ◽  
Vol 22 (Supplement_3) ◽  
pp. iii461-iii461
Author(s):  
Andrea Carai ◽  
Angela Mastronuzzi ◽  
Giovanna Stefania Colafati ◽  
Paul Voicu ◽  
Nicola Onorini ◽  
...  

Abstract Tridimensional (3D) rendering of volumetric neuroimaging is increasingly being used to assist the surgical management of brain tumors. New technologies allowing immersive virtual reality (VR) visualization of the obtained models offer the opportunity to appreciate neuroanatomical details and the spatial relationship between the tumor and normal neuroanatomical structures to a level never seen before. We present our preliminary experience with the Surgical Theatre, a commercially available 3D VR system, in 60 consecutive neurosurgical oncology cases. 3D models were developed from volumetric CT scans and standard and advanced MR sequences. The system allows six different layers to be loaded at the same time, with the possibility of modulating opacity and threshold in real time. 3D VR was used during preoperative planning, allowing a better definition of the surgical strategy. A tailored craniotomy and brain dissection can be simulated in advance and precisely performed in the OR by connecting the system to intraoperative neuronavigation. Smaller blood vessels are generally not included in the 3D rendering; however, real-time intraoperative threshold modulation of the 3D model assisted in their identification, improving surgical confidence and safety during the procedure. VR was also used offline, both before and after surgery, for case discussion within the neurosurgical team and during MDT discussion. Finally, 3D VR was used during informed consent, improving communication with families and young patients. 3D VR allows surgical strategies to be tailored to the individual patient, contributing to procedural safety and efficacy and to the global improvement of neurosurgical oncology care.


2021 ◽  
pp. 135910452110261
Author(s):  
Sophie C Alsem ◽  
Anouk van Dijk ◽  
Esmée E Verhulp ◽  
Bram O De Castro

Evidence-based cognitive behavioral therapies (CBTs) for children with aggressive behavior problems have only modest effects. Research is needed into new methods to enhance CBT effectiveness. The aims of the present study were to (1) examine whether interactive virtual reality is a feasible treatment method for children with aggressive behavior problems; (2) investigate children’s appreciation of the method; and (3) explore whether children’s aggression decreased during the ten-session treatment. Six boys (8–12 years) participated at two clinical centers in the Netherlands. Newly developed weekly reports were collected on treatment feasibility (therapist report), treatment appreciation (child report), and children’s aggression (child/parent report). Results supported treatment feasibility: therapists delivered on average 98% of the session content, provided more than the recommended practice time in virtual reality, experienced few technical issues, and were satisfied with their treatment delivery. Children highly appreciated the treatment. Parents reported decreases in children’s aggression over the treatment period (i.e., between week 1 and week 10), but children did not. The promising findings of this feasibility study warrant randomized controlled trials to determine whether interactive virtual reality enhances CBT effectiveness for children with aggressive behavior problems.
