Towards estimating affective states in Virtual Reality based on behavioral data

2021 ◽  
Author(s):  
Valentin Holzwarth ◽  
Johannes Schneider ◽  
Joshua Handali ◽  
Joy Gisler ◽  
Christian Hirt ◽  
...  

Abstract: Inferring users’ perceptions of Virtual Environments (VEs) is essential for Virtual Reality (VR) research. Traditionally, this is achieved by assessing users’ affective states before and after exposure to a VE, using standardized self-assessment questionnaires. The main disadvantage of questionnaires is their sequential administration, i.e., a user’s affective state is measured asynchronously to its generation within the VE. A synchronous measurement of users’ affective states would be highly favorable, e.g., in the context of adaptive systems. Drawing from nonverbal behavior research, we argue that behavioral measures could be a powerful approach to assessing users’ affective states in VR. In this paper, we contribute methods and measures, evaluated in a user study involving 42 participants, for assessing a user’s affective state by measuring head movements during VR exposure. We show that head yaw significantly correlates with presence, mental and physical demand, perceived performance, and system usability. We also exploit the identified relationships for two practical tasks based on head yaw: (1) predicting a user’s affective state, and (2) detecting manipulated questionnaire answers, i.e., answers that are possibly non-truthful. We found that affective states can be predicted significantly better than by a naive estimate for mental demand, physical demand, perceived performance, and usability. Further, manipulated or non-truthful answers can also be detected significantly better than by a naive approach. These findings mark an initial step in the development of novel methods to assess user perception of VEs.
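The correlation analysis the abstract describes can be illustrated with a minimal sketch. The `pearson_r` helper and the per-participant values below are hypothetical stand-ins, not data or code from the study; they only show how a head-movement summary statistic could be related to a questionnaire score.

```python
from statistics import mean, stdev

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length samples."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)
    return cov / (stdev(xs) * stdev(ys))

# Hypothetical per-participant values: total head-yaw rotation (degrees)
# during VR exposure vs. a self-reported mental-demand score.
head_yaw = [310.0, 150.0, 420.0, 95.0, 270.0, 380.0]
mental_demand = [2.0, 4.5, 1.5, 5.0, 3.0, 2.5]

r = pearson_r(head_yaw, mental_demand)
print(f"r = {r:.2f}")  # negative here: more yaw, lower reported demand
```

In a real analysis the correlation would be computed per affective-state dimension and tested for significance, e.g., with `scipy.stats.pearsonr`.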

Sports ◽  
2018 ◽  
Vol 6 (3) ◽  
pp. 71 ◽  
Author(s):  
David Neumann ◽  
Robyn Moffitt

Engaging in physical exercise in a virtual reality (VR) environment has been reported to improve physical effort and affective states. However, these conclusions might be influenced by experimental design factors, such as comparing VR environments against a non-VR environment without actively controlling for the presence of visual input in the non-VR conditions. The present study addressed this issue to examine affective and attentional states in a virtual running task. Participants (n = 40) completed a 21 min run on a treadmill at 70% of Vmax. One group of participants ran in a computer-generated VR environment that included other virtual runners, while another group ran while viewing neutral images. Participants in both conditions showed a pattern of reduced positive affect and increased tension during the run, with a return to high positive affect after the run. In the VR condition, higher levels of immersive tendencies and attention/absorption in the virtual environment were associated with more positive affect after the run. In addition, participants in the VR condition focused attention more on external task-relevant stimuli and less on internal states than participants in the neutral images condition. However, the neutral images condition produced less negative affect and more enjoyment after the run than the VR condition. The findings suggest that the effects of exercising in a VR environment depend on individual difference factors (e.g., attention/absorption in the virtual world) and that VR may not always be better than distracting attention away from exercise-related cues.


Author(s):  
Robin Horst ◽  
Ramtin Naraghi-Taghi-Off ◽  
Linda Rau ◽  
Ralf Dörner

Abstract: Every Virtual Reality (VR) experience has to end at some point. While concepts already exist for designing transitions that bring users into a virtual world, their return to the physical world should be considered as well, since it is part of the overall VR experience. We call the latter outro-transitions. In contrast to the offboarding of VR experiences, which takes place after taking off VR hardware (e.g., HMDs), outro-transitions are still part of the immersive experience. Such transitions occur more frequently when VR is experienced periodically and only for short durations. One example where transition techniques are necessary is an auditorium where the audience has individual VR headsets available, for example, in a presentation using PowerPoint slides together with brief VR experiences sprinkled between the slides. The audience must put on and take off HMDs frequently, every time they switch from common presentation media to VR and back. In such a one-to-many VR scenario, it is challenging for presenters to manage the process of multiple people coming back from the virtual to the physical world at once. Direct communication may be constrained while VR users are wearing an HMD. Presenters need a tool to signal VR users to stop the VR session and switch back to the slide presentation. Virtual visual cues can help presenters or other external entities (e.g., automated/scripted events) request that VR users end a VR session. Such transitions become part of the overall experience of the audience and thus must be considered. This paper explores visual cues serving as outro-transitions from a virtual world back to the physical world and their utility in enabling presenters to request that VR users end a VR session. We propose and investigate eight transition techniques. We focus on their usage in short consecutive VR experiences and include both established and novel techniques.
The transition techniques are evaluated within a user study to draw conclusions on the effects of outro-transitions on the overall experience and presence of participants. We also take into account how long an outro-transition may take and how comfortable our participants perceived the proposed techniques to be. The study indicates that participants preferred non-interactive outro-transitions over interactive ones, except for a transition that allowed VR users to communicate with presenters. Furthermore, we explore the presenter-VR user relation within a presentation scenario that uses short VR experiences. The study indicates that involving presenters who can stop a VR session was not only acceptable but preferred by our participants.


Author(s):  
Mircea Zloteanu ◽  
Eva G. Krumhuber ◽  
Daniel C. Richardson

Abstract: People are accurate at classifying emotions from facial expressions but much poorer at determining whether such expressions are spontaneously felt or deliberately posed. We explored whether the method used by senders to produce an expression influences the decoder’s ability to discriminate authenticity, drawing inspiration from two well-known acting techniques: the Stanislavski (internal) and Mimic (external) methods. We compared spontaneous surprise expressions in response to a jack-in-the-box (genuine condition) with posed displays of senders who focused either on their past affective state (internal condition) or on the outward expression (external condition). Although decoders performed better than chance at discriminating the authenticity of all expressions, their accuracy was lower in classifying external surprise compared to internal surprise. Decoders also found it harder to discriminate external surprise from spontaneous surprise and were less confident in their decisions, perceiving these expressions to be similarly intense but less genuine-looking. The findings suggest that senders are capable of voluntarily producing genuine-looking expressions of emotion with minimal effort, especially by mimicking a genuine expression. Implications for research on emotion recognition are discussed.


2021 ◽  
Vol 27 (4) ◽  
Author(s):  
Francisco Lara

Abstract: Can Artificial Intelligence (AI) be more effective than human instruction for the moral enhancement of people? The author argues that it would be only if the use of this technology were aimed at increasing the individual's capacity to decide reflectively for themselves, rather than at directly influencing behaviour. To support this, it is shown how a disregard for personal autonomy, in particular, invalidates the main proposals for applying new technologies, both biomedical and AI-based, to moral enhancement. As an alternative to these proposals, this article proposes a virtual assistant that, through dialogue, neutrality, and virtual reality technologies, can teach users to make better moral decisions on their own. The author concludes that, as long as certain precautions are taken in its design, such an assistant could do this better than a human instructor adopting the same educational methodology.


Sensors ◽  
2021 ◽  
Vol 21 (2) ◽  
pp. 397
Author(s):  
Qimeng Zhang ◽  
Ji-Su Ban ◽  
Mingyu Kim ◽  
Hae Won Byun ◽  
Chang-Hun Kim

We propose a low-asymmetry interface to improve the presence of non-head-mounted-display (non-HMD) users in shared virtual reality (VR) experiences with HMD users. The low-asymmetry interface ensures that the HMD and non-HMD users’ perceptions of the VR environment are closely matched; that is, the point-of-view (PoV) asymmetry and behavior asymmetry between HMD and non-HMD users are reduced. Our system comprises a portable mobile device as a visual display, providing a changing PoV for the non-HMD user, and a walking simulator as an in-place walking detection sensor, enabling the same level of realistic and unrestricted physical-walking-based locomotion for all users. Because this allows non-HMD users to experience the same level of visualization and free movement as HMD users, both can engage as main actors in movement scenarios. Our user study revealed that the low-asymmetry interface enables non-HMD users to feel a presence similar to that of HMD users when performing equivalent locomotion tasks in a virtual environment. Furthermore, our system can enable one HMD user and multiple non-HMD users to participate together in a virtual world; moreover, our experiments show that non-HMD user satisfaction increases with the number of non-HMD participants owing to increased presence and enjoyment.


Vision ◽  
2021 ◽  
Vol 5 (2) ◽  
pp. 17
Author(s):  
Maria Elisa Della-Torre ◽  
Daniele Zavagno ◽  
Rossana Actis-Grosso

E-motions are defined as those affective states whose expressions (conveyed either by static faces or by body posture) embody a dynamic component and, consequently, convey a higher sense of dynamicity than other emotional expressions. An experiment is presented, aimed at testing whether e-motions are perceived as such also by individuals with autism spectrum disorders (ASDs), which have been associated with impairments in emotion recognition and in motion perception. To this aim we replicated with ASD individuals a study, originally conducted with typically developed individuals (TDs), in which we showed to both ASD and TD participants 14 bodiless heads and 14 headless bodies taken from 11 static artworks and 4 drawings. The experiment was divided into two sessions. In Session 1, participants were asked to freely associate each stimulus with an emotion or an affective state (Task 1, option A); if they were unable to find a specific emotion, the experimenter showed them a list of eight possible emotions (words) and asked them to choose the one from that list that best described the affective state portrayed in the image (Task 1, option B). After their choice, they were asked to rate the intensity of the perceived emotion on a seven-point Likert scale (Task 2). In Session 2, participants were requested to evaluate the degree of dynamicity conveyed by each stimulus on a seven-point Likert scale. Results showed that ASDs and TDs shared a similar range of verbal expressions defining emotions; however, ASDs (i) showed an impairment in the ability to spontaneously assign an emotion to a headless body, and (ii) more frequently used terms denoting negative emotions (for both faces and bodies) as compared to neutral emotions, which in turn were more frequently used by TDs. No difference emerged between the two groups for positive emotions, with happiness being the emotion best recognized in both faces and bodies.
Although overall there are no significant differences between the two groups with respect to the emotions assigned to the images and the degree of perceived dynamicity, the Artwork × Group interaction showed that for some images ASDs assigned a different value to perceived dynamicity than TDs. Moreover, two images were interpreted by ASDs as conveying completely different emotions from those perceived by TDs. Results are discussed in light of the ability of ASDs to resolve ambiguity, and of possible differences in the cognitive styles characterizing the aesthetic/emotional experience.


2013 ◽  
Vol 13 (1) ◽  
pp. 18-28 ◽  
Author(s):  
Chao-Hung Lin ◽  
Jyun-Yuan Chen ◽  
Shun-Siang Hsu ◽  
Yun-Huan Chung

Tourist maps are designed to direct tourists to attractions in unfamiliar areas. A well-designed tourist map can provide tourists with sufficient and intuitive information about places of interest. Thus, providing up-to-date information on places of interest and selecting their representative icons are fundamental and important steps in the automatic generation of tourist maps. In this article, approaches for determining places of interest and their representative icons are introduced. In contrast to general digital tourist maps that use text, simple shapes, or three-dimensional models, we use photos, which offer abundant visual features of places of interest, as icons in tourist maps. The photos are automatically extracted from a repository of photos downloaded from photo-sharing communities. Tourist attractions and their corresponding image icons are determined by means of photo voting and photo quality assessment. Qualitative analyses, including a user study and experiments in several areas with numerous tourist attractions, indicated that the proposed method can generate visually pleasant and elaborate tourist maps. In addition, the analyses indicated that the map produced by our method is better than maps generated by related methods and is comparable to hand-designed tourist maps.
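The combination of photo voting and quality assessment can be sketched as a scoring problem. The `select_icon` function, the equal weighting, and the sample photo records below are illustrative assumptions, not the scoring used in the article.

```python
def select_icon(photos):
    """Pick a representative icon for a place of interest by combining
    community popularity (votes) with a technical quality score.
    The 50/50 weighting is illustrative only."""
    def score(p):
        # Normalise votes into [0, 1] against the most-voted photo,
        # then blend with the quality score (also assumed in [0, 1]).
        max_votes = max(q["votes"] for q in photos)
        popularity = p["votes"] / max_votes if max_votes else 0.0
        return 0.5 * popularity + 0.5 * p["quality"]
    return max(photos, key=score)

# Hypothetical photos of one attraction from a photo-sharing community.
photos = [
    {"id": "a", "votes": 120, "quality": 0.55},
    {"id": "b", "votes": 80, "quality": 0.95},
    {"id": "c", "votes": 10, "quality": 0.99},
]
print(select_icon(photos)["id"])  # "b": well-voted and high quality
```

A blended score rewards photos that are both popular and sharp, rather than letting either signal dominate on its own.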


2016 ◽  
Vol 6 (2) ◽  
pp. 1 ◽  
Author(s):  
Michael Fartoukh ◽  
Lucile Chanquoy

<p>We analysed the influence of classroom activities on children’s affective states. Children perform many different activities in the course of an ordinary school day, some of which may trigger changes in their affective state and thus in the availability of their cognitive resources and their degree of motivation. To observe the effects of two such activities (listening to a text and performing a dictation) on affective state, according to grade, we asked 39 third graders and 40 fifth graders to specify their affective state at several points in the day. Results showed that this state varied from one activity to another, and was also dependent on grade level. Third graders differed from fifth graders in the feelings elicited by the activities. The possible implications of these findings for the field of educational psychology and children’s academic performance are discussed.</p>


Author(s):  
M. Doležal ◽  
M. Vlachos ◽  
M. Secci ◽  
S. Demesticha ◽  
D. Skarlatos ◽  
...  

<p><strong>Abstract.</strong> Underwater archaeological discoveries bring new challenges to the field, but such sites are more difficult to reach and, due to natural influences, they tend to deteriorate quickly. Photogrammetry is one of the most powerful tools used for archaeological fieldwork. Photogrammetric techniques are used to document the state of a site in digital form for later analysis, without the risk of damaging any of the artefacts or the site itself. Archaeologists use this technology to record discovered artefacts or even whole archaeological sites. Data gathering underwater brings several problems and limitations, so specific steps should be taken to get the best possible results: divers should come prepared with knowledge of measurement and photo-capture methods, and they should be well prepared before starting work at an underwater site. Using immersive virtual reality, we have developed educational software to introduce maritime archaeology students to photogrammetry techniques. To test the feasibility of the software, a user study was performed and evaluated by experts. In the software, the user is tasked with putting markers on the site, measuring distances between them, and then taking photos of the site, from which a 3D mesh is generated offline. Initial results show that the system is useful for understanding the basics of underwater photogrammetry.</p>
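The marker-measurement task described above reduces to computing distances between 3D points. The coordinates below are hypothetical examples, not data from the software:

```python
import math

def marker_distance(p, q):
    """Euclidean distance between two 3D marker positions (metres)."""
    return math.dist(p, q)

# Hypothetical marker coordinates on a virtual wreck site (x, y, depth).
m1 = (0.0, 0.0, -12.0)
m2 = (3.0, 4.0, -12.0)
print(marker_distance(m1, m2))  # 5.0: a 3-4-5 triangle in the seabed plane
```

Distances measured between physical markers like these serve as scale references when the photogrammetric 3D mesh is reconstructed offline.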


PLoS ONE ◽  
2021 ◽  
Vol 16 (10) ◽  
pp. e0258103
Author(s):  
Andreas Bueckle ◽  
Kilian Buehling ◽  
Patrick C. Shih ◽  
Katy Börner

Working with organs and extracted tissue blocks is an essential task in many medical surgery and anatomy environments. In order to prepare specimens from human donors for further analysis, wet-bench workers must properly dissect human tissue and collect metadata for downstream analysis, including information about the spatial origin of tissue. The Registration User Interface (RUI) was developed to allow stakeholders in the Human Biomolecular Atlas Program (HuBMAP) to register tissue blocks—i.e., to record the size, position, and orientation of human tissue data with regard to reference organs. The RUI has been used by tissue mapping centers across the HuBMAP consortium to register a total of 45 kidney, spleen, and colon tissue blocks, with planned support for 17 organs in the near future. In this paper, we compare three setups for registering one 3D tissue block object to another 3D reference organ (target) object. The first setup is a 2D Desktop implementation featuring a traditional screen, mouse, and keyboard interface. The remaining setups are both virtual reality (VR) versions of the RUI: VR Tabletop, where users sit at a physical desk which is replicated in virtual space; VR Standup, where users stand upright while performing their tasks. All three setups were implemented using the Unity game engine. We then ran a user study for these three setups involving 42 human subjects completing 14 increasingly difficult and then 30 identical tasks in sequence and reporting position accuracy, rotation accuracy, completion time, and satisfaction. All study materials were made available in support of future study replication, alongside videos documenting our setups. 
We found that while VR Tabletop and VR Standup users are about three times as fast and about a third more accurate in terms of rotation than 2D Desktop users (for the sequence of 30 identical tasks), there are no significant differences between the three setups for position accuracy when normalized by the height of the virtual kidney across setups. When extrapolating from the 2D Desktop setup with a 113-mm-tall kidney, the absolute performance values for the 2D Desktop version (22.6 seconds per task, 5.88 degrees rotation, and 1.32 mm position accuracy after 8.3 tasks in the series of 30 identical tasks) confirm that the 2D Desktop interface is well-suited for allowing users in HuBMAP to register tissue blocks at a speed and accuracy that meets the needs of experts performing tissue dissection. In addition, the 2D Desktop setup is cheaper, easier to learn, and more practical for wet-bench environments than the VR setups.
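The position-accuracy normalization mentioned above can be sketched simply: dividing the placement error by the rendered organ height makes setups that display the organ at different scales comparable. The function name is a hypothetical illustration; the 1.32 mm error and 113 mm kidney height are the values reported in the abstract.

```python
def normalized_position_error(error_mm, organ_height_mm):
    """Express a placement error as a fraction of the reference organ's
    height, so setups rendering the organ at different scales compare
    on equal terms."""
    return error_mm / organ_height_mm

# Values from the abstract: 1.32 mm mean position error against a
# 113-mm-tall kidney in the 2D Desktop setup.
rel = normalized_position_error(1.32, 113.0)
print(f"{rel:.4f}")  # roughly 1.2% of organ height
```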

