Fidelity Metrics for Virtual Environment Simulations Based on Spatial Memory Awareness States

2003 ◽  
Vol 12 (3) ◽  
pp. 296-310 ◽  
Author(s):  
Katerina Mania ◽  
Tom Troscianko ◽  
Rycharde Hawkes ◽  
Alan Chalmers

This paper describes a methodology based on human judgments of memory awareness states for assessing the simulation fidelity of a virtual environment (VE) in relation to its real-scene counterpart. To demonstrate the distinction between task-performance-based approaches and additional human evaluation of cognitive awareness states, a photorealistic VE was created. The resulting scenes, displayed on a head-mounted display (HMD) with or without head tracking or on a desktop monitor, were then compared to the real-world task situation they represented, investigating spatial memory after exposure. Participants described how they completed their spatial recollections by selecting one of four awareness states after retrieval, in an initial test and in a retention test a week after exposure to the environment. These states reflected the level of visual mental imagery involved during retrieval and the familiarity of the recollection, and also included guesses, even if informed. Experimental results revealed variations in the distribution of participants' awareness states across conditions, while in certain cases task performance failed to reveal any. Experimental conditions that incorporated head tracking were not associated with visually induced recollections. In general, simulation of task performance does not necessarily entail simulation of the awareness states involved in completing a memory task. The premise of this research is a focus on how tasks are achieved, rather than only on what is achieved. The extent to which judgments of human memory recall, memory awareness states, and presence are similar in the physical and virtual environments provides a fidelity metric for the simulation in question.

2019 ◽  
Author(s):  
Umesh Vivekananda ◽  
Daniel Bush ◽  
James A Bisby ◽  
Sallie Baxendale ◽  
Roman Rodionov ◽  
...  

Abstract. Hippocampal theta oscillations have been implicated in spatial memory function in both rodents and humans. What is less clear is how hippocampal theta interacts with higher-frequency oscillations during spatial memory function, and how this relates to subsequent behaviour. Here we asked ten human epilepsy patients undergoing intracranial EEG recording to perform a desktop virtual reality spatial memory task, and found that increased theta power in two discrete bands (‘low’ 2-5 Hz and ‘high’ 6-9 Hz) during cued retrieval was associated with improved task performance. Similarly, increased coupling between ‘low’ theta phase and gamma amplitude during the same period was associated with improved task performance. These results support a role for theta oscillations and theta-gamma phase-amplitude coupling in human spatial memory function.
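The theta-gamma coupling measure described above can be illustrated with a short sketch. This is not the authors' analysis code: it uses a mean-vector-length estimator of phase-amplitude coupling, and the gamma band (30-80 Hz) and filter settings are assumptions; only the ‘low’ theta band (2-5 Hz) comes from the abstract.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def bandpass(x, lo, hi, fs, order=3):
    """Zero-phase Butterworth band-pass filter."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def pac_mvl(x, fs, phase_band=(2, 5), amp_band=(30, 80)):
    """Mean-vector-length estimate of phase-amplitude coupling:
    |mean(amplitude * exp(i * phase))| across the recording.
    Large values mean gamma amplitude is concentrated at a
    preferred phase of the slow (theta) oscillation."""
    phase = np.angle(hilbert(bandpass(x, *phase_band, fs)))
    amp = np.abs(hilbert(bandpass(x, *amp_band, fs)))
    return np.abs(np.mean(amp * np.exp(1j * phase)))
```

On a synthetic signal whose 40 Hz amplitude is modulated by the phase of a 4 Hz rhythm, this estimator returns a markedly larger value than for a constant-amplitude control signal.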


1999 ◽  
Vol 8 (4) ◽  
pp. 435-448 ◽  
Author(s):  
Karl-Erik Bystrom ◽  
Woodrow Barfield

This paper describes a study on the sense of presence and task performance in a virtual environment as affected by copresence (one subject working alone versus two subjects working as partners), level of control (control of movement and control of navigation through the virtual environment), and head tracking. Twenty subjects navigated through six versions of a virtual environment and were asked to identify changes in locations of objects within the environment. After each trial, subjects completed a questionnaire designed to assess their level of presence within the virtual environment. Results indicated that collaboration did not increase the sense of presence in the virtual environment, but did improve the quality of the experience in the virtual environment. Level of control did not affect the sense of presence, but subjects did prefer to control both movement and navigation. Head tracking did not affect the sense of presence, but did contribute to the spatial realism of the virtual environment. Task performance was affected by the presence of another individual, by head tracking, and by level of control, with subjects performing significantly more poorly when they were both alone and without control and head tracking. In addition, a factor analysis indicated that questions designed to assess the subjects' experience in the virtual environment could be grouped into three factors: (1) presence in the virtual environment, (2) quality of the virtual environment, and (3) task difficulty.


1999 ◽  
Vol 6 (1) ◽  
pp. 54-61 ◽  
Author(s):  
Norton W. Milgram ◽  
Beth Adams ◽  
Heather Callahan ◽  
Elizabeth Head ◽  
Bill Mackay ◽  
...  

Allocentric spatial memory was studied in dogs of varying ages and sources using a landmark discrimination task. The primary goal of this study was to develop a protocol for testing landmark discrimination learning in the dog. Using a modified version of a landmark test developed for use in monkeys, we successfully trained dogs to make a spatial discrimination on the basis of the position of a visual landmark relative to two identical discriminanda. Task performance decreased, however, as the distance between the landmark and the discriminandum was increased. A subgroup of these dogs was also tested on a delayed non-matching to position (DNMP) spatial memory task, which relies on egocentric spatial cues. Together, these findings suggest that dogs can acquire both allocentric and egocentric spatial tasks, and the protocol provides a useful tool for evaluating the ability of canines to use allocentric cues in spatial learning.


2021 ◽  
Vol 2 ◽  
Author(s):  
Juno Kim ◽  
Stephen Palmisano ◽  
Wilson Luu ◽  
Shinichi Iwasaki

Humans rely on multiple senses to perceive their self-motion in the real world. For example, a sideways linear head translation can be sensed either by lamellar optic flow of the visual scene projected on the retina of the eye or by stimulation of vestibular hair cell receptors found in the otolith macula of the inner ear. Mismatches in visual and vestibular information can induce cybersickness during head-mounted display (HMD) based virtual reality (VR). In this pilot study, participants were immersed in a virtual environment using two recent consumer-grade HMDs: the Oculus Go (3DOF angular-only head tracking) and the Oculus Quest (6DOF angular and linear head tracking). On each trial they generated horizontal linear head oscillations along the interaural axis at a rate of 0.5 Hz. This head movement should generate greater sensory conflict when viewing the virtual environment on the Oculus Go (compared to the Quest) due to the absence of linear tracking. We found that perceived scene instability always increased with the degree of linear visual-vestibular conflict. However, seven of the fourteen participants experienced no cybersickness; the remaining participants experienced it in at least one of the stereoscopic viewing conditions (six of whom also reported cybersickness in monoscopic viewing conditions). No statistical difference in spatial presence was found across conditions, suggesting that participants could tolerate considerable scene instability while retaining the feeling of being there in the virtual environment. Levels of perceived scene instability, spatial presence, and cybersickness were found to be similar between the Oculus Go and the Oculus Quest with linear tracking disabled. The limited effect of linear coupling on cybersickness, compared with its strong effect on perceived scene instability, suggests that perceived scene instability may not always be associated with cybersickness. However, perceived scene instability does appear to provide explanatory power over the cybersickness observed in stereoscopic viewing conditions.


2012 ◽  
Vol 510 (1) ◽  
pp. 58-61 ◽  
Author(s):  
Marcel M. van Gaalen ◽  
Ana L. Relo ◽  
Bernhard K. Mueller ◽  
Gerhard Gross ◽  
Mario Mezler

Author(s):  
Patrick Bonin ◽  
Margaux Gelin ◽  
Betty Laroche ◽  
Alain Méot ◽  
Aurélia Bugaiska

Abstract. Animates are better remembered than inanimates. According to the adaptive view of human memory ( Nairne, 2010 ; Nairne & Pandeirada, 2010a , 2010b ), this observation results from the fact that animates are more important for survival than inanimates. This ultimate explanation of animacy effects has to be complemented by proximate explanations. Moreover, animacy currently represents an uncontrolled word characteristic in most cognitive research ( VanArsdall, Nairne, Pandeirada, & Cogdill, 2015 ). In four studies, we therefore investigated the “how” of animacy effects. Study 1 revealed that words denoting animates were recalled better than those referring to inanimates in an intentional memory task. Study 2 revealed that adding a concurrent memory load when processing words for the animacy dimension did not impede the animacy effect on recall rates. Study 3A was an exact replication of Study 2 and Study 3B used a higher concurrent memory load. In these two follow-up studies, animacy effects on recall performance were again not altered by a concurrent memory load. Finally, Study 4 showed that using interactive imagery to encode animate and inanimate words did not alter the recall rate of animate words but did increase the recall of inanimate words. Taken together, the findings suggest that imagery processes contribute to these effects.


Author(s):  
Maryam Daniali ◽  
Dario D. Salvucci ◽  
Maria T. Schultheis

Concussion is a common cause of cognitive impairment, but its effects on task performance in general, and on driving in particular, are not well understood. To better understand the effects of concussion on driving, we investigated previously gathered data on twenty-two people with a concussion driving in a virtual-reality driving simulator (VRDS) and twenty-two non-concussed matched drivers. Participants were asked to perform a behavioral task (either coin sorting or a verbal memory task) while driving. In this study, we chose a few common metrics from the VRDS and tracked their changes through time for each participant. Our proposed method, the use of convolutional neural networks for classification and analysis, can accurately classify concussed driving and extract local features of driving sequences that translate to behavioral driving signatures. Overall, our method improves identification and understanding of clinically relevant driving behaviors for concussed individuals and should generalize well to other types of impairments.
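The classification pipeline described above, convolutional feature extraction over multichannel driving time series, can be sketched minimally. This is not the authors' network: the channel choices (e.g. speed, lane offset, steering angle), layer sizes, and the single conv + pooling + logistic read-out are all illustrative assumptions.

```python
import numpy as np

def conv1d(x, w, b):
    """Valid-mode 1D convolution: x is (C_in, T), w is (C_out, C_in, K),
    b is (C_out,). Each output channel is a learned local filter slid
    over the driving-metric sequence."""
    c_out, c_in, k = w.shape
    out = np.zeros((c_out, x.shape[1] - k + 1))
    for o in range(c_out):
        for c in range(c_in):
            # np.convolve flips the kernel, so flip it back to get correlation
            out[o] += np.convolve(x[c], w[o, c][::-1], mode="valid")
        out[o] += b[o]
    return out

def classify(x, w, b, v, c):
    """Conv -> ReLU -> global average pool over time -> logistic read-out,
    yielding a probability for the 'concussed driving' class."""
    h = np.maximum(conv1d(x, w, b), 0.0)   # ReLU feature maps
    pooled = h.mean(axis=1)                # pool over the time axis
    return 1.0 / (1.0 + np.exp(-(pooled @ v + c)))
```

The pooled convolutional features are what give the local, time-localized "driving signatures" the abstract refers to: each filter responds to a short characteristic pattern wherever it occurs in the drive.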


2021 ◽  
Vol 11 (7) ◽  
pp. 935
Author(s):  
Ying Xing Feng ◽  
Masashi Kiguchi ◽  
Wei Chun Ung ◽  
Sarat Chandra Dass ◽  
Ahmad Fadzil Mohd Hani ◽  
...  

The effect of stress on task performance is complex: too much or too little stress negatively affects performance, and there exists an optimal level of stress that drives optimal performance. Task difficulty and external affective factors are distinct stressors that impact cognitive performance. Neuroimaging studies have shown that mood affects working memory performance and that the correlates are changes in haemodynamic activity in the prefrontal cortex (PFC). We investigated the interactive effects of affective state and working memory load (WML) on working memory task performance and haemodynamic activity, using functional near-infrared spectroscopy (fNIRS) neuroimaging of the PFC in healthy participants. We sought to understand whether haemodynamic responses could tell apart workload-related stress from situational stress arising from external affective distraction. We found that the haemodynamic changes associated with the affective stressor and with workload-related stress were more dominant in the medial and lateral PFC, respectively. Our study reveals distinct affective-state-dependent modulations of haemodynamic activity with increasing WML in n-back tasks, which correlate with decreasing performance. The influence of negative affect on performance is greater at higher WML, and haemodynamic activity showed changes in the timing, spatial distribution, and strength of activation with increasing WML.
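The n-back working-memory task used above to vary WML can be made concrete with a small scorer: a trial is a target when the current stimulus matches the one presented n steps earlier, and performance is summarized by hits and false alarms. The stimuli and response indices below are illustrative, not the study's materials.

```python
def nback_targets(seq, n):
    """Indices where the stimulus repeats the one shown n steps earlier."""
    return [i for i in range(n, len(seq)) if seq[i] == seq[i - n]]

def score_nback(seq, responded, n):
    """Hits and false alarms, given the set of indices the subject flagged
    as matches. Raising n increases working memory load (WML)."""
    targets = set(nback_targets(seq, n))
    hits = len(targets & responded)
    false_alarms = len(responded - targets)
    return hits, false_alarms
```

For the letter stream "ABABC" at n = 2, positions 2 and 3 are targets; a subject who flags positions 2 and 4 scores one hit and one false alarm.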


Sensors ◽  
2021 ◽  
Vol 21 (2) ◽  
pp. 397
Author(s):  
Qimeng Zhang ◽  
Ji-Su Ban ◽  
Mingyu Kim ◽  
Hae Won Byun ◽  
Chang-Hun Kim

We propose a low-asymmetry interface to improve the presence of non-head-mounted-display (non-HMD) users in shared virtual reality (VR) experiences with HMD users. The low-asymmetry interface ensures that HMD and non-HMD users perceive the VR environment in nearly the same way; that is, the point-of-view (PoV) asymmetry and behavior asymmetry between HMD and non-HMD users are reduced. Our system comprises a portable mobile device, used as a visual display to provide a changing PoV for the non-HMD user, and a walking simulator, used as an in-place walking detection sensor to enable the same level of realistic, unrestricted physical-walking-based locomotion for all users. Because this allows non-HMD users to experience the same level of visualization and free movement as HMD users, both can engage as main actors in movement scenarios. Our user study revealed that the low-asymmetry interface enables non-HMD users to feel a presence similar to that of HMD users when performing equivalent locomotion tasks in a virtual environment. Furthermore, our system allows one HMD user and multiple non-HMD users to participate together in a virtual world, and our experiments show that non-HMD user satisfaction increases with the number of non-HMD participants owing to increased presence and enjoyment.

