Continuity in intuition and insight: from real to naturalistic virtual environment

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
M. Eskinazi ◽  
I. Giannopulu

Intuition and insight can be placed on the same continuum. Intuition is the unconscious ability to create links between pieces of information; insight is the process by which a sudden comprehension and resolution of a situation arises (i.e., a eureka moment). In the present study, real and virtual environments were used to trigger intuition and insight. The study hypothesised that immersion in a primed real environment would facilitate the emergence of intuition and insight in a virtual environment. Forty-nine healthy participants were randomly assigned to two groups: "primed" and "non-primed." "Primed" participants were immersed in a real environment with olfactory and visual cues; "non-primed" participants did not receive any cues. All participants were then exposed, via a Head-Mounted Display (HMD), to a 3D naturalistic virtual environment representing a district of Paris. The locations presented in the virtual scene (i.e., café locations) were related to both the olfactory and visual primes (i.e., café) and preserved continuity between the real and virtual environments. Once immersed in the virtual environment, all participants were instructed to use their intuition to envision the selected locations, during which Skin Conductance Responses (SCRs) and verbal declarations were recorded. Comparing the initiation (a) and immersion (b) phases in the virtual environment, "primed" participants had higher SCRs during the immersion phase than during the initiation phase, and showed higher SCRs during the first part of the virtual immersion than "non-primed" participants. During the phenomenological interview, "primed" participants reported a higher number of correct intuitive answers than "non-primed" participants. Moreover, "primed" participants "with" insight had higher SCRs during the real-environment immersion than "primed" participants "without" insight. The findings are consistent with the idea that intuitive decisions in various tasks are based on the activation of pre-existing knowledge that is unconsciously retrieved but can nevertheless elicit an intuitive impression of coherence and generate insight.
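
As a rough illustration of the phase comparison described above, the following Python sketch contrasts per-participant mean SCR amplitudes between the initiation and immersion phases with a paired t-test. The data layout, variable names, and values are assumptions for illustration only, not the authors' analysis pipeline.

```python
# Hypothetical sketch: comparing mean skin conductance responses (SCRs)
# between the initiation and immersion phases of a virtual immersion.
# The data layout and values are illustrative assumptions.
import numpy as np
from scipy import stats

def compare_phases(scr_initiation, scr_immersion):
    """Paired comparison of per-participant mean SCR amplitude (microsiemens)."""
    init = np.asarray(scr_initiation, dtype=float)
    imm = np.asarray(scr_immersion, dtype=float)
    t, p = stats.ttest_rel(imm, init)  # paired t-test across participants
    return {"mean_initiation": init.mean(), "mean_immersion": imm.mean(),
            "t": t, "p": p}

# Example with made-up values for three participants:
print(compare_phases([0.12, 0.09, 0.15], [0.21, 0.18, 0.25]))
```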

1996 ◽  
Vol 5 (3) ◽  
pp. 330-345 ◽  
Author(s):  
Edward J. Rinalducci

This paper provides an overview of the literature on the visual system, placing special emphasis on the visual characteristics regarded as necessary to produce adequate visual fidelity in virtual environments. These visual cues apply to the creation of various virtual environments, including those involving flying, driving, sailing, or walking. A variety of cues are examined, in particular motion, color, stereopsis, pictorial and secondary cues, physiological cues, texture, vertical development, luminance, field of view, and spatial resolution. Conclusions and recommendations for research are also presented.


2022 ◽  
Author(s):  
Jonathan Kelly ◽  
Taylor Doty ◽  
Morgan Ambourn ◽  
Lucia Cherep

Distances in virtual environments (VEs) viewed on a head-mounted display (HMD) are typically underperceived relative to the intended distance. This paper presents an experiment comparing perceived egocentric distance in a real environment with that in a matched VE presented in the Oculus Quest and Oculus Quest 2. Participants made verbal judgments and blind-walking judgments to an object on the ground. Both the Quest and Quest 2 produced underperception compared to the real environment. Verbal judgments in the VE were 86% and 79% of real-world judgments in the Quest and Quest 2, respectively. Blind-walking judgments were 78% and 79% of real-world judgments in the Quest and Quest 2, respectively. This project shows that significant underperception of distance persists even in modern HMDs.
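
The percentages above can be read as a simple ratio measure: the mean VE judgment expressed as a fraction of the matched real-environment judgment. A minimal sketch of that computation, with illustrative numbers only (not the study's data):

```python
# Hypothetical sketch of the ratio measure reported above: each VE judgment
# is expressed as a percentage of the matched real-environment judgment.
# Variable names and values are illustrative only.
import numpy as np

def judgment_ratio(virtual_judgments, real_judgments):
    """Mean VE judgment as a percentage of the matched real-world judgment."""
    v = np.asarray(virtual_judgments, dtype=float)
    r = np.asarray(real_judgments, dtype=float)
    return 100.0 * (v / r).mean()

# e.g., two targets judged at 3.4 m and 4.8 m in the VE vs. 4.0 m and 6.0 m
# in the real environment:
print(judgment_ratio([3.4, 4.8], [4.0, 6.0]))  # 82.5 (% of real-world judgment)
```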


1999 ◽  
Vol 8 (4) ◽  
pp. 469-473 ◽  
Author(s):  
Jeffrey S. Pierce ◽  
Randy Pausch ◽  
Christopher B. Sturgill ◽  
Kevin D. Christiansen

For entertainment applications, a successful virtual experience based on a head-mounted display (HMD) needs to overcome some or all of the following problems: entering a virtual world is a jarring experience, people do not naturally turn their heads or talk to each other while wearing an HMD, putting on the equipment is hard, and people do not realize when the experience is over. In the Electric Garden at SIGGRAPH 97, we presented the Mad Hatter's Tea Party, a shared virtual environment experienced by more than 1,500 SIGGRAPH attendees. We addressed these HMD-related problems with a combination of back story, see-through HMDs, virtual characters, continuity of real and virtual objects, and the layout of the physical and virtual environments.


2006 ◽  
Vol 18 (4) ◽  
pp. 467-475 ◽  
Author(s):  
Marcia K. O’Malley ◽  
Gina Upperman

The performance levels of human subjects in size identification and size discrimination experiments in both real and virtual environments are presented. The virtual environments are displayed with a PHANToM desktop three-degree-of-freedom haptic interface. Results indicate that performance of the size identification and size discrimination tasks in the virtual environment is comparable to that in the real environment, implying that the haptic device simulates reality well for these tasks. Additionally, performance in the virtual environment was measured with two machine parameters, maximum endpoint force and maximum virtual surface stiffness, set below their maximum levels. The scores for the perception tasks in this sub-optimal virtual environment were comparable to those in the real environment, supporting previous claims that, for these perceptual tasks, haptic interface hardware may be able to convey sufficient perceptual information to the user at relatively low levels of machine quality in terms of these two parameters. Results are comparable to those found for similar experiments conducted with other haptic interface hardware, further supporting this claim. Finally, it was found that varying maximum output force and virtual surface stiffness simultaneously does not have a compounding effect that significantly affects performance for size discrimination tasks.
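
The two machine parameters above bound the standard penalty-based rendering of a virtual surface, in which contact force grows linearly with penetration depth up to the device's maximum endpoint force. Below is a minimal sketch of that generic rendering law, assuming a flat virtual wall and illustrative parameter values; it is not the authors' implementation.

```python
# Minimal sketch of penalty-based haptic wall rendering, the scheme that the
# two machine parameters above (virtual surface stiffness and maximum
# endpoint force) would cap. Flat-wall geometry and values are assumptions.

def wall_force(probe_z, wall_z=0.0, k=1000.0, f_max=6.0):
    """Return the normal force (N) for a probe at height probe_z (m).

    k     : virtual surface stiffness (N/m), bounded by the device's maximum
    f_max : maximum endpoint force the device can output (N)
    """
    penetration = wall_z - probe_z      # positive once the probe is inside the wall
    if penetration <= 0.0:
        return 0.0                      # no contact, no force
    return min(k * penetration, f_max)  # Hooke's law, clamped to the hardware limit

# 2 mm of penetration at 1000 N/m yields 2 N; deeper penetration saturates:
print(wall_force(-0.002))  # 2.0
print(wall_force(-0.02))   # 6.0 (clamped at f_max)
```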


2021 ◽  
Author(s):  
Andres Pinilla ◽  
Jaime Garcia ◽  
William Raffe ◽  
Jan-Niklas Voigt-Antons ◽  
Sebastian Möller

One of the challenges of the post-COVID pandemic era will be to foster social connections between people. Previous research suggests that people who are able to regulate their emotions tend to have better social connections with others. Additional studies indicate that the ability to regulate emotions voluntarily can be trained using a procedure that involves three steps: (1) asking participants to evoke an autobiographical memory associated with a positive emotion; (2) analyzing participants' brain activity in real time to estimate their emotional state; and (3) providing visual feedback about the emotions evoked by the autobiographical memory. However, there is not enough research on how to provide the visual feedback required for the third step. This manuscript therefore introduces five virtual environments that can be used to provide emotional visual feedback. Each virtual environment was designed on the basis of evidence from previous studies suggesting that certain visual cues, such as colors, shapes, and motion patterns, tend to be associated with particular emotions. Across the virtual environments, the visual cues were varied with the intent of representing five emotional categories. An experiment was conducted to analyze the emotions that participants associated with the virtual environments. The results indicate that each environment is associated with the emotional category it was meant to represent.
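
To make the third step concrete, the sketch below maps a hypothetical valence/arousal estimate onto simple visual cue parameters (hue, saturation, motion speed). The dimensional emotion model and the specific mappings are assumptions for illustration, not the cue designs used in the five environments described here.

```python
# Hypothetical sketch of the third step above: turning an estimated emotional
# state into visual feedback parameters. The valence/arousal inputs and the
# specific cue mappings are illustrative assumptions.

def visual_feedback(valence, arousal):
    """Map valence and arousal in [-1, 1] to simple scene parameters."""
    hue = 30.0 if valence >= 0 else 220.0        # warm hue for positive, cool for negative
    saturation = min(1.0, abs(valence))          # stronger emotion, stronger color
    motion_speed = 0.5 + 0.5 * (arousal + 1) / 2 # faster motion at higher arousal
    return {"hue": hue, "saturation": saturation, "motion_speed": motion_speed}

print(visual_feedback(valence=0.8, arousal=0.4))
```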


Author(s):  
Daniel Mellet-d'Huart

This chapter addresses the questions of why, when, and how to use virtual reality to support learning processes for human beings. It focuses therefore on what can and cannot be done in a real environment versus what can and cannot be done in a virtual environment, as well as on how using virtual reality can make some types of learning easier as long as certain conditions are fulfilled. These conditions include the shifting of some inner beliefs and the choice of an accurate paradigm. The paradigm of enaction will be presented as an example of an accurate paradigm for virtual reality. Some conceptual keys and landmarks for design will be proposed in the context of the Trinologic metamodel developed by the author. Such metamodels should facilitate the connection between human actions, learning, and the characteristics of the outer world, whether this world is real or virtual.


Author(s):  
Doug A. Bowman ◽  
Ameya Datey ◽  
Young Sam Ryu ◽  
Umer Farooq ◽  
Omar Vasnaik

Although a wide range of display devices is used in virtual environment (VE) systems, no guidelines exist to choose an appropriate display for a particular VE application. Our goal in this research is to develop such guidelines on the basis of empirical results. In this paper, we present a preliminary experiment comparing human behavior and performance between a head-mounted display (HMD) and a four-sided spatially immersive display (SID). In particular, we studied users' preferences for real vs. virtual turns in the VE. The results indicate that subjects have a significant preference for real turns in the HMD and for virtual turns in the SID. The experiment also found that females are more likely to choose real turns than males. We suggest that HMDs are an appropriate choice when users perform frequent turns and require spatial orientation.
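
For readers unfamiliar with the distinction, a "virtual turn" rotates the scene (or the camera rig) in response to controller input, while a "real turn" comes from tracked physical rotation. A minimal sketch of that idea, with a hypothetical API that is not drawn from the paper:

```python
# Illustrative sketch of the real-vs-virtual turn distinction studied above:
# a real turn updates heading from head tracking, while a virtual turn applies
# an input-driven yaw to the camera rig. The class and method names are hypothetical.
import math

class CameraRig:
    def __init__(self):
        self.rig_yaw = 0.0  # extra rotation accumulated from virtual turns (radians)

    def heading(self, tracked_head_yaw):
        """Effective view heading = physical head yaw + rig yaw."""
        return (tracked_head_yaw + self.rig_yaw) % (2 * math.pi)

    def virtual_turn(self, angle):
        """Rotate the world around the user without any physical movement."""
        self.rig_yaw = (self.rig_yaw + angle) % (2 * math.pi)

rig = CameraRig()
rig.virtual_turn(math.pi / 2)           # 90-degree virtual turn via controller input
print(rig.heading(tracked_head_yaw=0))  # view heading changes with no physical turn
```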


2008 ◽  
Vol 17 (2) ◽  
pp. 176-198 ◽  
Author(s):  
Victoria Interrante ◽  
Brian Ries ◽  
Jason Lindquist ◽  
Michael Kaeding ◽  
Lee Anderson

Ensuring veridical spatial perception in immersive virtual environments (IVEs) is an important yet elusive goal. In this paper, we present the results of two experiments that seek further insight into this problem. In the first of these experiments, initially reported in Interrante, Ries, Lindquist, and Anderson (2007), we seek to disambiguate two alternative hypotheses that could explain our recent finding (Interrante, Anderson, and Ries, 2006a) that participants appear not to significantly underestimate egocentric distances in HMD-based IVEs, relative to the real world, in the special case that they unambiguously know, through first-hand observation, that the presented virtual environment is a high-fidelity 3D model of their concurrently occupied real environment. Specifically, we seek to determine whether people are able to make similarly veridical judgments of egocentric distances in these matched real and virtual environments because (1) they are able to use metric information gleaned from their exposure to the real environment to calibrate their judgments of sizes and distances in the matched virtual environment, or because (2) their prior exposure to the real environment enables them to achieve a heightened sense of presence in the matched virtual environment, which leads them to act on the visual stimulus provided through the HMD as a computer-mediated view of an actual real environment rather than merely a computer-generated picture, with all of the uncertainties that the latter would imply. In our second experiment, we seek to investigate the extent to which augmenting a virtual environment model with faithfully modeled replicas of familiar objects might enhance people's ability to make accurate judgments of egocentric distances in that environment.


Author(s):  
Ryan A. Pavlik ◽  
Judy M. Vance ◽  
Greg R. Luecke

Ground-based haptic devices provide the capability of adding force feedback to virtual environments; however, the physical workspace of such devices is very limited due to the fixed base. By mounting a haptic device on a mobile robot rather than a fixed stand, the reachable volume can be extended to cover full-scale virtual environments. This work presents the hardware, software, and integration developed to use such a mobile base with a Haption Virtuose™ 6D35-45. A mobile robot with a Mecanum-style omnidirectional drive base and an Arduino-compatible microcontroller development board communicates with software on a host computer to provide a VRPN-based control and data acquisition interface. The position of the mobile robot in the physical space is tracked using an optical tracking system. The SPARTA virtual assembly software was extended to (1) apply transformations to the haptic device data based on the tracked base position, and (2) capture the error between the haptic device's end effector and the center of its workspace and command the robot over VRPN to minimize this error. The completed system allows the haptic device to be used in wide-area projection-screen or head-mounted-display virtual environments, providing smooth free-space motion and stiff display of forces to the user throughout the entire space. The availability of haptics in large immersive environments can contribute to future advances in virtual assembly planning, factory simulation, and other operations where haptics is an essential part of the simulation experience.
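
The re-centering behavior described in step (2) amounts to a simple proportional controller: measure the offset of the end effector from the workspace center and command the base to drive that offset toward zero. Below is a minimal planar sketch under assumed gains and frame conventions; the actual SPARTA/VRPN integration is not shown.

```python
# Minimal sketch of the workspace-recentering idea described above: measure the
# offset of the haptic end effector from the center of its reachable workspace
# and command the omnidirectional base to drive that offset toward zero.
# The gain, deadband, frame handling, and VRPN plumbing are assumptions.
import numpy as np

KP = 1.5         # proportional gain (1/s), assumed tuned for smooth re-centering
DEADBAND = 0.02  # metres; ignore small offsets to avoid base jitter

def base_velocity(end_effector_local, workspace_center_local, base_yaw):
    """Return a world-frame (vx, vy) velocity command for the Mecanum base."""
    error = np.asarray(end_effector_local) - np.asarray(workspace_center_local)
    if np.linalg.norm(error) < DEADBAND:
        return np.zeros(2)
    # Rotate the device-frame error into the world frame using the tracked yaw.
    c, s = np.cos(base_yaw), np.sin(base_yaw)
    world_error = np.array([[c, -s], [s, c]]) @ error
    return KP * world_error  # drive the base toward the end effector

print(base_velocity([0.10, -0.05], [0.0, 0.0], base_yaw=0.0))  # [0.15, -0.075]
```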

