Evaluating Visual Perception by Tracking Eye Movement in Architectural Space During Virtual Reality Experiences

Author(s):  
Nayeon Kim ◽  
Hyunsoo Lee
Sensors ◽  
2021 ◽  
Vol 21 (6) ◽  
pp. 2193


Author(s):  
Juan Luis Higuera-Trujillo ◽  
Carmen Llinares ◽  
Eduardo Macagno

Humans respond cognitively and emotionally to the built environment. The modern possibility of recording subjects' neural activity during exposure to environmental situations, using neuroscientific techniques and virtual reality, provides a promising framework for future design and studies of the built environment. The discipline derived from this is termed "neuroarchitecture". Given neuroarchitecture's transdisciplinary nature, its progress needs to be reviewed in a contextualised way, together with its precursor approaches. The present article presents a scoping review, which maps out the broad areas on which the new discipline is based. The limitations, controversies, benefits, impact on the professional sectors involved, and potential of neuroarchitecture and its precursor approaches are critically addressed.


Author(s):  
Jordan Sasser ◽  
Fernando Montalvo ◽  
Rhyse Bendell ◽  
P. A. Hancock ◽  
Daniel S. McConnell

Prior research has indicated that perception of acceleration may be a direct process. This direct process may be conceptually linked to the ecological approach to visual perception and a further extension of direct social perception. The present study examines how perceived acceleration in virtual reality affects participants' judgments of a virtual human-like robot agent's attributes (perceived intelligence and animacy) and of the agent's competitiveness/cooperativeness. Perceptual judgments were collected after participants experienced one of five conditions defined relative to the participant's own acceleration: mirrored acceleration, faster acceleration, slowed acceleration, varied acceleration resulting in a win, and varied acceleration resulting in a loss. Participants experienced each condition twice in a counterbalanced fashion. The focus of the experiment was to determine whether different accelerations influenced observers' perceptual judgments. Results suggest that faster acceleration was perceived as more competitive, while slower acceleration was rated low in both animacy and perceived intelligence.


2020 ◽  
Vol 31 (3) ◽  
pp. 675-691 ◽  
Author(s):  
Jella Pfeiffer ◽  
Thies Pfeiffer ◽  
Martin Meißner ◽  
Elisa Weiß

How can we tailor assistance systems, such as recommender systems or decision support systems, to consumers’ individual shopping motives? How can companies unobtrusively identify shopping motives without explicit user input? We demonstrate that eye movement data allow building reliable prediction models for identifying goal-directed and exploratory shopping motives. Our approach is validated in a real supermarket and in an immersive virtual reality supermarket. Several managerial implications of using gaze-based classification of information search behavior are discussed: First, the advent of virtual shopping environments makes using our approach straightforward as eye movement data are readily available in next-generation virtual reality devices. Virtual environments can be adapted to individual needs once shopping motives are identified and can be used to generate more emotionally engaging customer experiences. Second, identifying exploratory behavior offers opportunities for marketers to adapt marketing communication and interaction processes. Personalizing the shopping experience and profiling customers’ needs based on eye movement data promises to further increase conversion rates and customer satisfaction. Third, eye movement-based recommender systems do not need to interrupt consumers and thus do not take away attention from the purchase process. Finally, our paper outlines the technological basis of our approach and discusses the practical relevance of individual predictors.
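The gaze-based prediction of shopping motives described above can be illustrated with a small sketch. The features (mean fixation duration and total scan-path length) and the threshold value are illustrative assumptions, not the paper's actual model, which the abstract does not specify.

```python
# Hypothetical sketch of gaze-based motive classification. Feature choices
# and the threshold are illustrative assumptions, not the authors' model.

def gaze_features(fixations):
    """fixations: list of (duration_ms, x, y) tuples from an eye tracker."""
    durations = [f[0] for f in fixations]
    mean_duration = sum(durations) / len(durations)
    # Total scan-path length: a common proxy for how widely a shopper explores.
    path_length = sum(
        ((fixations[i][1] - fixations[i - 1][1]) ** 2
         + (fixations[i][2] - fixations[i - 1][2]) ** 2) ** 0.5
        for i in range(1, len(fixations))
    )
    return mean_duration, path_length

def classify_motive(fixations, duration_threshold=250.0):
    """Long, focused fixations suggest goal-directed search; short fixations
    spread over a wide scan path suggest exploratory browsing."""
    mean_duration, _ = gaze_features(fixations)
    return "goal-directed" if mean_duration >= duration_threshold else "exploratory"
```

In practice such features would feed a trained classifier rather than a fixed threshold; the sketch only shows the shape of the feature-to-motive mapping.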


2012 ◽  
Vol 25 (0) ◽  
pp. 171-172
Author(s):  
Fumio Mizuno ◽  
Tomoaki Hayasaka ◽  
Takami Yamaguchi

Humans can flexibly adapt to visual stimulation, such as spatial inversion, in which a person wears glasses that display images upside down for long periods of time (Ewert, 1930; Snyder and Pronko, 1952; Stratton, 1887). To investigate the feasibility of extending vision and the flexible adaptation of the human visual system under binocular rivalry, we developed a system that gives a human user the artificial oculomotor ability to control each eye independently in arbitrary directions; we named the system Virtual Chameleon, after the independently moving eyes of chameleons (Mizuno et al., 2010, 2011). Successful users of the system were able to actively control their visual axes by manipulating 3D sensors held in both hands, to watch independent fields of view presented to the left and right eyes, and to look around as chameleons do. Although the independent fields of view provided to the user were thought to be formed by eye-movement control corresponding to human pursuit movements, the system had no control mechanisms for the saccadic and compensatory movements that numerous animals, including humans, perform. Fluctuations in dominance and suppression under binocular rivalry are irregular, but it is possible to bias these fluctuations by boosting the strength of one rival image over the other (Blake and Logothetis, 2002). We assumed that visual stimuli induced by various eye movements affect predominance. Therefore, in this research, we focused on the influence of eye-movement patterns on visual perception under binocular rivalry, and implemented functions to produce saccadic movements in Virtual Chameleon.
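The distinction between pursuit and saccadic control of a visual axis can be sketched as follows. This is an illustrative model, not the authors' implementation; the gain and angular speed are assumed values.

```python
# Illustrative sketch (not the Virtual Chameleon implementation) of the
# distinction drawn above: smooth pursuit gradually reduces gaze error,
# while a saccade is a rapid ballistic jump toward the target direction.
# pursuit_gain and saccade_speed are assumed values for illustration.

def update_gaze(current_deg, target_deg, mode, dt,
                pursuit_gain=4.0, saccade_speed=500.0):
    """Advance the gaze angle (degrees) by one frame of duration dt seconds."""
    error = target_deg - current_deg
    if mode == "pursuit":
        # Close a fraction of the remaining error each frame (smooth tracking).
        return current_deg + pursuit_gain * error * dt
    if mode == "saccade":
        # Move at a fixed high angular speed and clamp at the target.
        step = saccade_speed * dt
        if abs(error) <= step:
            return target_deg
        return current_deg + step * (1.0 if error > 0 else -1.0)
    raise ValueError("mode must be 'pursuit' or 'saccade'")
```

At 500 deg/s, a 10-degree saccade completes within a single 100 ms frame, while the pursuit mode converges gradually over many frames.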


1900 ◽  
Vol 7 (5) ◽  
pp. 454-465 ◽  
Author(s):  
Raymond Dodge

2012 ◽  
Vol 5 (1) ◽  
pp. 1-10
Author(s):  
Mateusz Woźniak

The brain systems responsible for visual perception have been extensively studied. The visual system analyses a wide variety of stimuli in order to let us create an adaptive representation of the surrounding world. Among the vast amounts of processed information are visual cues describing our own bodies. These cues constitute our so-called body image. We tend to perceive it as a relatively stable structure, but recent research, especially within the domain of virtual reality, casts doubt on this assumption. New problems arise concerning how we perceive others' and our own bodies in virtual space, and how this influences our experience of ourselves and of true reality. Recent studies show that how we see our avatars influences how we behave in artificial worlds. This introduces a brand new way of thinking about human embodiment. Virtual reality allows us to transcend the usual visual-sensory-motor integration and create new ways to experience embodiment, temporarily replacing the permanent body image with almost any imaginable digital one.


2009 ◽  
Vol 18 (6) ◽  
pp. 413-420 ◽  
Author(s):  
Marcos Hilsenrat ◽  
Miriam Reiner

Unaware haptic perception is often inferred but rarely demonstrated empirically. In this paper we present evidence for the effects of unaware haptic stimuli on users' motor interaction with virtual objects. Using a 3D hapto-visual virtual reality system, we ran a texture-difference recognition test in which subjects glided a pen-like stylus along a virtual surface with varying roughness. We found that subjects were not aware of changes in texture roughness below a threshold limit, yet the normal force they applied changed. Subjects did not consciously register changes in the sensory cues, but behaved as if they did. These results suggest that performance can be affected by subliminal cues. Based on results from visual perception studies, we also tested the impact of background context conditions on the perception of unaware cues. We measured the threshold of awareness of changes in texture for several reference stimuli. We found that, as in visual perception, this threshold for discriminating between the roughness of surfaces increases as the texture gets smoother; that is, sensitivity changes as a function of the background context. The implications of this work lie mainly in the design of VR, especially for the remote manipulation of objects.
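The reported context effect, in which the awareness threshold grows as the reference surface gets smoother, can be sketched as a simple inverse relation. The functional form and the constant `k` are assumptions made for illustration; the paper reports the trend, not this equation.

```python
# Hedged sketch of the context effect reported above: the awareness
# threshold for a roughness change grows as the reference surface gets
# smoother. The inverse form and the constant k are illustrative
# assumptions, not measured values from the study.

def roughness_threshold(reference_roughness, k=0.02):
    """Assumed awareness threshold: inversely related to reference roughness,
    so smoother surfaces (smaller values) yield larger thresholds."""
    return k / reference_roughness

def change_noticed(reference, test, k=0.02):
    """True if the roughness change exceeds the assumed awareness threshold."""
    return abs(test - reference) >= roughness_threshold(reference, k)
```

Under this sketch, the same absolute change in roughness can be noticed against a rough reference yet go unnoticed against a smooth one, mirroring the background-context dependence the study reports.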


Author(s):  
Lester C. Loschky ◽  
George W. McConkie

This study investigated perceptual disruptions in gaze-contingent multi-resolutional displays (GCMRDs) caused by delays in updating the image after an eye movement. GCMRDs can be used to save processing resources and transmission bandwidth in many single-user display applications such as virtual reality, simulators, video-telephony, remote piloting, and teleoperation. The current study found that image update delays after an eye movement could be as long as 60 ms without significantly increasing the detectability of image degradation and/or transients due to the update. This is good news for designers of GCMRD applications, since it provides ample time to update their displays after an eye movement without disrupting perception.
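The timing budget this finding implies can be expressed as a trivial check. The 60 ms figure comes from the abstract; the event timestamps and function name are illustrative.

```python
# Minimal sketch of the timing budget reported above: an image update that
# completes within about 60 ms of the end of an eye movement is unlikely to
# be detected. The 60 ms window is from the study; timestamps and the
# function name are illustrative.

DETECTION_WINDOW_MS = 60.0

def update_unnoticed(saccade_end_ms, update_done_ms,
                     window_ms=DETECTION_WINDOW_MS):
    """True if the display update falls inside the undetectable window."""
    return 0.0 <= (update_done_ms - saccade_end_ms) <= window_ms
```

A GCMRD pipeline could use such a check to log how many frames miss the post-saccade budget under a given rendering and transmission load.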

