A User Study of Virtual Reality for Visualizing Digitized Canadian Cultural Objects

Author(s):  
Miguel Angel Garcia-Ruiz ◽  
Pedro Cesar Santana-Mancilla ◽  
Laura Sanely Gaytan-Lugo

Algoma University holds an important collection of Canadian objects from the Anishinaabe culture dating from 1880. Some of these objects have been on display in the university's library, but most remain stored in the university's archive, limiting opportunities to use them in teaching and learning activities. This chapter describes a research project on digitizing and visualizing cultural artifacts using virtual reality (VR) technology, with the aim of supporting the learning of Canadian heritage in cross-cultural courses. The chapter covers technical aspects of the objects' 3D digitization process and then describes a user study in which students viewed a 3D model displayed on a low-cost VR headset. Results from the study show that visualization of the 3D model on the VR headset was effective, efficient, and satisfying enough to use, motivating students to keep using it in further sessions. Technology integration of VR in educational settings is also analyzed and discussed.

2021 ◽  
Vol 3 (1) ◽  
pp. 6-7
Author(s):  
Kathryn MacCallum

Mixed reality (MR) provides new opportunities for creative and innovative learning. MR supports the merging of real and virtual worlds to produce new environments and visualisations where physical and digital objects co-exist and interact in real time (MacCallum & Jamieson, 2017). The MR continuum links virtual and augmented reality: virtual reality (VR) immerses learners in a completely virtual world, while augmented reality (AR) blends the real and the virtual. MR embraces the spectrum between the real and the virtual; the mix of the two worlds may vary depending on the application. The integration of MR into education provides specific affordances that make it uniquely suited to supporting learning (Parson & MacCallum, 2020; Bacca, Baldiris, Fabregat, Graf & Kinshuk, 2014). These affordances give students unique opportunities to learn and to develop 21st-century learning capabilities (Schrier, 2006; Bower, Howe, McCredie, Robinson, & Grover, 2014).

In general, most integration of MR in the classroom has focused on students as consumers of these experiences. Enabling students to create their own experiences, however, allows a wider range of learning outcomes to be incorporated into the learning experience. Making students the creators and designers of their own MR experiences provides a unique opportunity to integrate learning across the curriculum and supports the development of computational thinking and stronger digital skills. The integration of student-created artefacts has in particular been shown to produce greater engagement and outcomes for all students (Ananiadou & Claro, 2009).

In the past, the development of student-created MR experiences has been difficult, largely because of the steep learning curve of the technology and the expense of acquiring the tools needed to develop these experiences. The recent development of low-cost mobile and online MR tools and technologies has, however, opened up a scaffolded approach to the development of student-driven artefacts that does not require significant technical ability (MacCallum & Jamieson, 2017). Thanks to these advances, students can now create their own MR digital experiences that drive learning across the curriculum.

This presentation explores how teachers at two high schools in NZ have started to explore and integrate MR into their STEAM classes. It draws on the results of a Teaching and Learning Research Initiative (TLRI) project investigating the experiences and reflections of a group of secondary teachers exploring the use and adoption of mixed reality (augmented and virtual reality) for cross-curricular teaching. The presentation will show how these teachers have begun to engage with MR to support the principles of student-created digital experiences integrated into STEAM domains.


2017 ◽  
Author(s):  
Benjamin O'Sullivan ◽  
Fahad Alam ◽  
Clyde Matava

This article provides a framework for producing immersive 360-degree videos for pediatric and adult patients in hospitals. This information may be useful to hospitals across the globe that wish to produce similar videos for their patients. Advancements in immersive 360-degree technologies have allowed us to produce our own "virtual experience" in which children can prepare for anesthesia by "experiencing" all the sights and sounds of receiving and recovering from an anesthetic. We have shown that health care professionals, children, and their parents find this form of preparation valid, acceptable, and fun. Perhaps more importantly, children and parents have self-reported that undertaking our virtual experience reduced their anxiety when they went to the operating room. We provide definitions and technical aspects to assist other health care professionals in the development of low-cost 360-degree videos.
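The article's production pipeline is not reproduced in this abstract. As a rough illustration of the geometry underlying any 360-degree video player, the following minimal Python sketch (numpy assumed; the function name is ours) maps an equirectangular video pixel to the viewing direction a headset would assign it:

```python
import numpy as np

def equirect_to_direction(u, v, width, height):
    """Map an equirectangular pixel (u, v) to a unit viewing direction.

    u in [0, width) spans yaw from -pi to pi; v in [0, height)
    spans pitch from +pi/2 (top) to -pi/2 (bottom).
    """
    yaw = (u / width) * 2.0 * np.pi - np.pi
    pitch = np.pi / 2.0 - (v / height) * np.pi
    x = np.cos(pitch) * np.sin(yaw)
    y = np.sin(pitch)
    z = np.cos(pitch) * np.cos(yaw)
    return np.array([x, y, z])

# Example: the centre pixel of a 1920x1080 frame looks straight ahead (+z).
print(equirect_to_direction(960, 540, 1920, 1080))
```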


2018 ◽  
Author(s):  
Yoshihito Masuoka ◽  
Hiroyuki Morikawa ◽  
Takashi Kawai ◽  
Toshio Nakagohri

BACKGROUND: Virtual reality (VR) technology has started to gain attention as a form of surgical support in medical settings. Likewise, the widespread use of smartphones has led to the development of various medical applications; Google Cardboard, for example, can be used to build a simple head-mounted display (HMD). However, in the absence of observed and reported outcomes of using three-dimensional (3D) organ models in relevant environments, the effects of and issues with such VR technology have yet to be determined.

OBJECTIVE: The aim of this paper was to study the issues that arise while observing a 3D model of an organ, created from an actual surgical case, through a smartphone-based simple HMD. We then evaluated and gathered feedback on the performance and usability of the simple observation environment we had created.

METHODS: We downloaded our data to a smartphone (Galaxy S6; Samsung, Seoul, Korea) and created a simple HMD system using Google Cardboard (Google). A total of 17 medical students performed 2 experiments: an observation conducted by a single observer and another carried out by multiple observers using simple HMDs. Afterward, they assessed the results by responding to a questionnaire survey.

RESULTS: The evaluation of the dissection model was largely favorable, but low scores were given because of visually induced motion sickness and eye fatigue. In an introspective report on simultaneous observations by multiple observers, positive opinions noted clear image quality and shared understanding, but displeasure caused by visually induced motion sickness, eye fatigue, and hardware problems was also expressed.

CONCLUSIONS: We established a simple system that enables multiple persons to observe a 3D model. Although the observation by multiple observers was successful, problems likely arose because of poor smartphone performance. Improving smartphone performance may therefore be a key factor in establishing a low-cost and user-friendly 3D observation environment.
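The rendering code of the Cardboard-based viewer is not given here. As a minimal sketch of the core idea behind any simple HMD, rendering the scene twice from eye positions offset by the interpupillary distance, the following Python/numpy fragment derives left and right view matrices from a single head pose (all names and the IPD value are illustrative assumptions):

```python
import numpy as np

IPD = 0.064  # assumed interpupillary distance in metres

def look_at(eye, target, up=np.array([0.0, 1.0, 0.0])):
    """Build a right-handed view matrix for a camera at `eye`."""
    f = target - eye
    f = f / np.linalg.norm(f)
    s = np.cross(f, up)
    s = s / np.linalg.norm(s)
    u = np.cross(s, f)
    m = np.eye(4)
    m[0, :3], m[1, :3], m[2, :3] = s, u, -f
    m[:3, 3] = -m[:3, :3] @ eye   # move the world opposite the camera
    return m

def stereo_views(head_pos, target):
    """Left/right view matrices: shift the head pose by +-IPD/2
    along the camera's right axis, keeping the same look target."""
    f = target - head_pos
    f = f / np.linalg.norm(f)
    right = np.cross(f, np.array([0.0, 1.0, 0.0]))
    right = right / np.linalg.norm(right)
    left_eye = head_pos - right * IPD / 2
    right_eye = head_pos + right * IPD / 2
    return look_at(left_eye, target), look_at(right_eye, target)
```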


Author(s):  
Robin Horst ◽  
Ramtin Naraghi-Taghi-Off ◽  
Linda Rau ◽  
Ralf Dörner

Every Virtual Reality (VR) experience has to end at some point. While concepts already exist for designing transitions that take users into a virtual world, their return to the physical world should be considered as well, as it is part of the overall VR experience. We call these outro-transitions. In contrast to the offboarding of VR experiences, which takes place after taking off VR hardware (e.g., HMDs), outro-transitions are still part of the immersive experience. Such transitions occur more frequently when VR is experienced periodically and only for short times. One example where transition techniques are necessary is an auditorium where the audience has individual VR headsets available, for example in a presentation using PowerPoint slides with brief VR experiences sprinkled between the slides. The audience must put on and take off HMDs frequently, every time they switch from common presentation media to VR and back. In such a one-to-many VR scenario, it is challenging for presenters to manage multiple people coming back from the virtual to the physical world at once. Direct communication may be constrained while VR users are wearing an HMD, so presenters need a means of signaling them to stop the VR session and switch back to the slide presentation. Virtual visual cues can help presenters or other external entities (e.g., automated/scripted events) request that VR users end a VR session. Such transitions become part of the audience's overall experience and thus must be considered. This paper explores visual cues that serve as outro-transitions from a virtual world back to the physical world, and their utility in enabling presenters to request that VR users end a VR session. We propose and investigate eight transition techniques, focusing on their usage in short consecutive VR experiences and including both established and novel techniques. The techniques are evaluated in a user study to draw conclusions on the effects of outro-transitions on participants' overall experience and presence. We also consider how long an outro-transition may take and how comfortable participants found the proposed techniques. The study shows that participants preferred non-interactive outro-transitions over interactive ones, except for a transition that allowed VR users to communicate with presenters. Furthermore, we explore the presenter-VR user relation within a presentation scenario that uses short VR experiences: the study indicates that involving presenters who can stop a VR session was not merely acceptable but preferred by our participants.
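The paper's eight techniques are not reproduced in this abstract. As a generic, engine-agnostic sketch of one plausible non-interactive outro-transition, a timed fade of the virtual scene triggered by the presenter, consider the following Python fragment; `set_scene_opacity` and `on_session_end` are hypothetical placeholders for whatever the host engine provides, not real APIs:

```python
import time

FADE_SECONDS = 2.0  # assumed duration of the outro fade

def fade_out_vr(set_scene_opacity, on_session_end):
    """Generic timed fade: ramp the virtual scene's opacity from
    1 to 0, then hand control back to the physical world.

    Both callables are placeholders for engine hooks, not real APIs.
    """
    start = time.monotonic()
    while True:
        t = (time.monotonic() - start) / FADE_SECONDS
        if t >= 1.0:
            break
        set_scene_opacity(1.0 - t)  # linear ramp; an ease curve also works
        time.sleep(1 / 90)          # assume a 90 Hz HMD refresh
    set_scene_opacity(0.0)
    on_session_end()  # e.g., prompt the user to remove the HMD
```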


2021 ◽  
Author(s):  
Valentin Holzwarth ◽  
Johannes Schneider ◽  
Joshua Handali ◽  
Joy Gisler ◽  
Christian Hirt ◽  
...  

Inferring users' perceptions of Virtual Environments (VEs) is essential for Virtual Reality (VR) research. Traditionally, this is achieved by assessing users' affective states before and after exposure to a VE, based on standardized self-assessment questionnaires. The main disadvantage of questionnaires is their sequential administration: a user's affective state is measured asynchronously to its generation within the VE. A synchronous measurement of users' affective states would be highly favorable, e.g., in the context of adaptive systems. Drawing on nonverbal behavior research, we argue that behavioral measures could be a powerful approach to assessing users' affective states in VR. In this paper, we contribute methods and measures, evaluated in a user study involving 42 participants, for assessing users' affective states by measuring head movements during VR exposure. We show that head yaw significantly correlates with presence, mental and physical demand, perceived performance, and system usability. We also exploit the identified relationships for two practical tasks based on head yaw: (1) predicting a user's affective state, and (2) detecting manipulated questionnaire answers, i.e., answers that are possibly non-truthful. We found that affective states can be predicted significantly better than by a naive estimate for mental demand, physical demand, perceived performance, and usability. Manipulated or non-truthful answers can also be detected significantly better than by a naive approach. These findings mark an initial step in the development of novel methods for assessing user perception of VEs.
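The paper's exact head-movement features are not listed in this abstract; a minimal sketch of the kind of analysis it describes, assuming per-participant yaw logs, matching questionnaire scores, and SciPy/scikit-learn, might look like this:

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.linear_model import LinearRegression

def yaw_feature(yaw_series):
    """One assumed per-user feature: total absolute yaw rotation
    over the session (radians), after unwrapping angle jumps."""
    return np.sum(np.abs(np.diff(np.unwrap(yaw_series))))

def analyse(yaw_logs, mental_demand):
    """Correlate a head-yaw feature with a questionnaire score and
    fit a simple predictor; inputs are hypothetical study data.

    yaw_logs: list of per-participant yaw arrays (radians)
    mental_demand: matching per-participant questionnaire scores
    """
    x = np.array([yaw_feature(y) for y in yaw_logs])
    rho, p = spearmanr(x, mental_demand)  # monotonic association
    model = LinearRegression().fit(x.reshape(-1, 1), mental_demand)
    return rho, p, model  # model.predict(...) gives the estimate
```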


Author(s):  
Wilver Auccahuasi ◽  
Mónica Diaz ◽  
Fernando Sernaque ◽  
Edward Flores ◽  
Justiniano Aybar ◽  
...  

Sensors ◽  
2021 ◽  
Vol 21 (2) ◽  
pp. 397
Author(s):  
Qimeng Zhang ◽  
Ji-Su Ban ◽  
Mingyu Kim ◽  
Hae Won Byun ◽  
Chang-Hun Kim

We propose a low-asymmetry interface to improve the presence of non-head-mounted-display (non-HMD) users in shared virtual reality (VR) experiences with HMD users. The low-asymmetry interface ensures that the HMD and non-HMD users' perceptions of the VR environment are nearly the same; that is, the point-of-view (PoV) asymmetry and behavior asymmetry between HMD and non-HMD users are reduced. Our system comprises a portable mobile device serving as a visual display to provide a changing PoV for the non-HMD user, and a walking simulator serving as an in-place walking detection sensor to enable the same level of realistic and unrestricted physical-walking-based locomotion for all users. Because this allows non-HMD users to experience the same level of visualization and free movement as HMD users, both can engage as main actors in movement scenarios. Our user study revealed that the low-asymmetry interface enables non-HMD users to feel a presence similar to that of HMD users when performing equivalent locomotion tasks in a virtual environment. Furthermore, our system allows one HMD user and multiple non-HMD users to participate together in a virtual world; our experiments show that non-HMD user satisfaction increases with the number of non-HMD participants, owing to increased presence and enjoyment.
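The walking simulator's detection algorithm is not specified here; a common stand-in for in-place walking detection is peak-picking on vertical acceleration. A minimal SciPy sketch, with thresholds that are pure assumptions to be tuned per device:

```python
import numpy as np
from scipy.signal import find_peaks

SAMPLE_RATE = 100  # Hz, assumed sensor rate

def detect_steps(accel_z):
    """Count in-place steps from vertical acceleration (m/s^2).

    A step shows up as an acceleration peak; the `height` and
    `distance` thresholds below are assumptions, not values from
    the paper.
    """
    signal = accel_z - np.mean(accel_z)  # remove the gravity offset
    peaks, _ = find_peaks(
        signal,
        height=1.0,                  # minimum peak size, m/s^2
        distance=SAMPLE_RATE // 3,   # peaks at least ~0.33 s apart
    )
    return len(peaks)
```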


2015 ◽  
Vol 24 (4) ◽  
pp. 298-321 ◽  
Author(s):  
Ernesto de la Rubia ◽  
Antonio Diaz-Estrella

Virtual reality has become a promising field in recent decades, and its potential now seems clearer than ever. With the development of handheld devices and wireless technologies, interest in virtual reality is also increasing, and with it interest in inertial sensors, which offer advantages such as small size and low cost. Such sensors can also operate wirelessly and be used in a growing number of interactive applications. One example related to virtual reality is the ability to move naturally through virtual environments. This is the objective of the real-walking navigation technique, for which a number of advantages have previously been reported in terms of presence, object searching, and collision avoidance, among other concerns. In this article, we address the use of foot-mounted inertial sensors to achieve real-walking navigation in a wireless virtual reality system. First, an overall description of the problem is presented. Then, specific difficulties are identified, and a corresponding technique is proposed to overcome each: tracking of foot movements; determination of the user's position; percentage estimation of the gait cycle, including oscillating movements of the head; stabilization of the velocity of the point of view; and synchronization of head and body yaw angles. Finally, a preliminary evaluation of the system was conducted in which data and comments were collected from participants.
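The article's technique for determining the user's position is not detailed in this abstract; a standard building block for foot-mounted inertial tracking, offered here only as a hedged illustration, is zero-velocity-update (ZUPT) dead reckoning. A single-axis Python sketch (a real system works in 3-D with orientation estimation, omitted for brevity):

```python
import numpy as np

DT = 1.0 / 100.0     # assumed 100 Hz IMU sample period
STANCE_THRESH = 0.3  # m/s^2: below this the foot is assumed planted

def zupt_dead_reckoning(accel):
    """Integrate foot acceleration to position, resetting velocity
    to zero whenever the foot is detected to be on the ground.

    `accel` is a 1-D array of gravity-compensated acceleration
    along the walking direction; both constants are assumptions.
    """
    vel, pos = 0.0, 0.0
    positions = []
    for a in accel:
        if abs(a) < STANCE_THRESH:
            vel = 0.0        # zero-velocity update: kill drift
        else:
            vel += a * DT    # integrate acceleration -> velocity
        pos += vel * DT      # integrate velocity -> position
        positions.append(pos)
    return np.array(positions)
```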


2020 ◽  
Vol 5 (3) ◽  
pp. 32-38
Author(s):  
Fakhriddin Nuraliev ◽  
Ulugbek Giyosov

Over the last few decades, virtual reality (VR) and augmented reality (AR) interfaces have shown the potential to enhance teaching and learning by combining physical and virtual worlds and leveraging the advantages of both. Conventional techniques of content presentation (fixed video, audio, scripts) lack personalization and interaction.

