Effect of Collaboration Mode and Position Arrangement on Immersive Analytics Tasks in Virtual Reality: A Pilot Study

2021 ◽  
Vol 11 (21) ◽  
pp. 10473
Author(s):  
Lei Chen ◽  
Hai-Ning Liang ◽  
Feiyu Lu ◽  
Jialin Wang ◽  
Wenjun Chen ◽  
...  

[Background] Virtual reality (VR) technology can provide unique immersive experiences for groups of users, especially for analytics tasks involving visual information in learning. Providing a shared control/view may improve task performance and enhance the user experience during VR collaboration. [Objectives] This research therefore explores the effect of collaboration modes and user position arrangements on task performance, user engagement, and collaboration behaviors and patterns in a VR learning environment that supports immersive collaborative tasks. [Method] The study involved two collaboration modes (shared and non-shared view and control) and three position arrangements (side-by-side, corner-to-corner, and back-to-back). A user study was conducted with 30 participants divided into three groups (Single, Shared, and Non-Shared) using a VR application that allowed users to explore the structural and transformational properties of 3D geometric shapes. [Results] The results showed that the shared mode led to higher task performance than single-user work for learning analytics tasks in VR. In addition, the side-by-side position scored higher and was favored for enhancing the collaborative experience. [Conclusion] A shared view appears more suitable for improving task performance in collaborative VR, and the side-by-side position may provide a better user experience when collaborating in VR learning. From these results, a set of guidelines for the design of collaborative visualizations for VR environments is distilled and presented at the end of the paper. Although our experiment is based on a colocated setting with two users, the results are applicable to both colocated and distributed collaborative scenarios with two or more users.

Sensors ◽  
2021 ◽  
Vol 21 (24) ◽  
pp. 8403
Author(s):  
Lei Chen ◽  
Hai-Ning Liang ◽  
Jialin Wang ◽  
Yuanying Qu ◽  
Yong Yue

Large interactive displays can provide suitable workspaces for learners to conduct collaborative learning tasks with visual information in co-located settings. In this research, we explored the use of these displays to support collaborative engagement and exploratory tasks with visual representations. Our investigation looked at the effect of four factors (number of virtual workspaces within the display, number of displays, position arrangement of the collaborators, and collaborative modes of interaction) on learners’ knowledge acquisition, engagement level, and task performance. To this end, a user study was conducted with 72 participants divided into 6 groups using an interactive tool developed to support the collaborative exploration of 3D visual structures. The results of this study showed that learners with one shared workspace and one single display can achieve better user performance and engagement levels. In addition, the back-to-back position with learners sharing their view and control of the workspaces was the most favorable. It also led to improved learning outcomes and engagement levels during the collaboration process.


Author(s):  
Maryam Sadat Mirzaei ◽  
Qiang Zhang ◽  
Kourosh Meshgi ◽  
Toyoaki Nishida

We developed a story creation platform that allows collaborative content creation in a 3D environment using avatars, animations, objects, and backgrounds. Our story envisioning platform provides a shared virtual space that promotes collaborative interaction for story construction, involving a high degree of learner input and control. It allows L2 learners to perform as actors and directors to create the story, and it supports offline or online collaboration (online chatting). Using state-of-the-art technologies, the system creates 3D stories from text to be presented in virtual reality. The learner can choose premade assets and input the story script for conversion into story elements and timelines. Experiments with 35 intermediate learners of English on the usability of the system and user engagement confirmed the system’s effectiveness in promoting learner collaboration, peer support, negotiation, opinion exchange, and critical thinking. According to questionnaire results, learners found the system to be a powerful tool for visualizing their thoughts and revising or expanding their stories. The system offers engaging and intensive language practice that encourages learners to participate actively in the learning process through collaboration.


Author(s):  
Weiyu Zhang ◽  
Se-Hoon Jeong ◽  
Martin Fishbein†

This study investigates how multitasking interacts with levels of sexually explicit content to influence an individual’s ability to recognize TV content. A 2 (multitasking vs. nonmultitasking) by 3 (low, medium, and high sexual content) between-subjects experiment was conducted. The analyses revealed that multitasking not only impaired task performance, but also decreased TV recognition. An inverted-U relationship between degree of sexually explicit content and recognition of TV content was found, but only when subjects were multitasking. In addition, multitasking interfered with subjects’ ability to recognize audio information more than their ability to recognize visual information.


2020 ◽  
Author(s):  
David Harris ◽  
Mark Wilson ◽  
Tim Holmes ◽  
Toby de Burgh ◽  
Samuel James Vine

Head-mounted eye tracking has been fundamental for developing an understanding of sporting expertise, as the way in which performers sample visual information from the environment is a major determinant of successful performance. There is, however, a long-running tension between the desire to study realistic, in-situ gaze behaviour and the difficulty of acquiring accurate ocular measurements in dynamic and fast-moving sporting tasks. Here, we describe how immersive technologies, such as virtual reality, offer an increasingly compelling approach for conducting eye movement research in sport. The possibility of studying gaze behaviour in representative and realistic environments, but with high levels of experimental control, could enable significant strides forward for eye tracking in sport and improve understanding of how eye movements underpin sporting skills. By providing a rationale for virtual reality as an optimal environment for eye tracking research, as well as outlining practical considerations related to hardware, software and data analysis, we hope to guide researchers and practitioners in the use of this approach.


Author(s):  
Robin Horst ◽  
Ramtin Naraghi-Taghi-Off ◽  
Linda Rau ◽  
Ralf Dörner

Every Virtual Reality (VR) experience has to end at some point. While concepts already exist for designing transitions for users entering a virtual world, their return to the physical world should be considered as well, as it is part of the overall VR experience. We call the latter outro-transitions. In contrast to offboarding of VR experiences, which takes place after taking off VR hardware (e.g., HMDs), outro-transitions are still part of the immersive experience. Such transitions occur more frequently when VR is experienced periodically and for only short times. One example where transition techniques are necessary is an auditorium where the audience has individual VR headsets available, for example, in a presentation using PowerPoint slides together with brief VR experiences sprinkled between the slides. The audience must put on and take off HMDs every time they switch from common presentation media to VR and back. In such a one-to-many VR scenario, it is challenging for presenters to manage the process of multiple people coming back from the virtual to the physical world at once. Direct communication may be constrained while VR users are wearing an HMD. Presenters need a tool to signal VR users to stop the VR session and switch back to the slide presentation. Virtual visual cues can help presenters or other external entities (e.g., automated/scripted events) request that VR users end a VR session. Such transitions become part of the overall experience of the audience and thus must be considered. This paper explores visual cues as outro-transitions from a virtual world back to the physical world and their utility in enabling presenters to request that VR users end a VR session. We propose and investigate eight transition techniques, focusing on their usage in short consecutive VR experiences and including both established and novel techniques.
The transition techniques are evaluated in a user study to draw conclusions about the effects of outro-transitions on participants’ overall experience and presence. We also take into account how long an outro-transition may take and how comfortable participants perceived the proposed techniques to be. The study shows that participants preferred non-interactive outro-transitions over interactive ones, except for a transition that allowed VR users to communicate with presenters. Furthermore, we explore the presenter–VR user relationship within a presentation scenario that uses short VR experiences. The study indicates that involving presenters who can stop a VR session was not merely acceptable but preferred by our participants.


2021 ◽  
Author(s):  
Valentin Holzwarth ◽  
Johannes Schneider ◽  
Joshua Handali ◽  
Joy Gisler ◽  
Christian Hirt ◽  
...  

Inferring users’ perceptions of Virtual Environments (VEs) is essential for Virtual Reality (VR) research. Traditionally, this is achieved by assessing users’ affective states before and after exposure to a VE, based on standardized self-assessment questionnaires. The main disadvantage of questionnaires is their sequential administration; that is, a user’s affective state is measured asynchronously to its generation within the VE. A synchronous measurement of users’ affective states would be highly favorable, e.g., in the context of adaptive systems. Drawing on nonverbal behavior research, we argue that behavioral measures could be a powerful approach to assessing users’ affective states in VR. In this paper, we contribute methods and measures, evaluated in a user study involving 42 participants, for assessing a user’s affective state by measuring head movements during VR exposure. We show that head yaw significantly correlates with presence, mental and physical demand, perceived performance, and system usability. We also exploit the identified relationships for two practical tasks based on head yaw: (1) predicting a user’s affective state, and (2) detecting manipulated questionnaire answers, i.e., answers that are possibly non-truthful. We found that affective states can be predicted significantly better than by a naive estimate for mental demand, physical demand, perceived performance, and usability. Manipulated or non-truthful answers can also be detected significantly better than by a naive approach. These findings mark an initial step in the development of novel methods to assess user perception of VEs.
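The core of the analysis described above is a correlation between a head-movement summary statistic and questionnaire scores. As a minimal illustrative sketch (not the authors' code or data; the per-participant values below are hypothetical), this is what correlating a head-yaw measure with presence ratings could look like:

```python
# Illustrative sketch: Pearson correlation between a head-yaw summary
# statistic and a post-exposure questionnaire score. All data values
# are hypothetical and chosen only to demonstrate the computation.
from math import sqrt


def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)


# Hypothetical per-participant data: accumulated head-yaw rotation
# (degrees) during VR exposure, and a presence questionnaire score.
yaw = [310.0, 450.5, 280.2, 520.8, 390.1, 610.3]
presence = [3.1, 4.2, 2.8, 4.9, 3.6, 5.4]

r = pearson_r(yaw, presence)
print(f"r = {r:.3f}")
```

In practice one would also report a p-value and use a validated questionnaire scale; this sketch only shows the shape of the relationship being tested.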


2021 ◽  
pp. 146144482110127
Author(s):  
Marcus Carter ◽  
Ben Egliston

Virtual reality (VR) is an emerging technology with the potential to extract significantly more data about learners and the learning process. In this article, we present an analysis of how VR education technology companies frame, use and analyse these data. We found both an expansion and an acceleration of what data are being collected about learners and how these data are being mobilised in potentially discriminatory and problematic ways. Beyond providing evidence for how VR represents an intensification of the datafication of education, we discuss three interrelated critical issues that are specific to VR: the fantasy that VR data is ‘perfect’, the datafication of soft-skills training, and the commercialisation and commodification of VR data. In the context of the issues identified, we caution against the unregulated and uncritical application of learning analytics to the data collected from VR training.


Author(s):  
Laura Broeker ◽  
Harald Ewolds ◽  
Rita F. de Oliveira ◽  
Stefan Künzell ◽  
Markus Raab

The aim of this study was to examine the impact of predictability on dual-task performance by systematically manipulating predictability in either of two tasks, as well as between tasks. According to capacity-sharing accounts of multitasking, which assume a general pool of resources that two tasks draw upon, predictability should reduce the need for resources and allow more resources to be used by the other task. However, it is currently not well understood what drives resource-allocation policy in dual tasks or which resource-allocation policies participants pursue. We used a continuous tracking task together with an audiomotor task, manipulating advance visual information about the tracking path in the first experiment and a sound sequence in the second (Experiments 2a/b). Results show that performance predominantly improved in the predictable task but not in the unpredictable task, suggesting that participants did not invest more resources in the unpredictable task. One possible explanation is that the re-investment of resources into another task requires some relationship between the tasks. Therefore, in the third experiment, we covaried the two tasks by playing sounds 250 ms before turning points in the tracking curve. This enabled participants to improve performance in both tasks, suggesting that resources were shared better between tasks.

