gaze interaction
Recently Published Documents


TOTAL DOCUMENTS: 89 (FIVE YEARS: 28)
H-INDEX: 14 (FIVE YEARS: 3)

2021
Author(s): Chiara Mazzocconi, Vladislav Maraev, Vidya Somashekarappa, Christine Howes

2021
Author(s): Thiago Donizetti dos Santos, Vagner Figueredo de Santana

2021, Vol 5 (CHI PLAY), pp. 1-24
Author(s): Argenis Ramirez Gomez, Michael Lankes

Gaze interaction has grown rapidly as a compelling tool for control and immersion in gameplay. Here, we present a conceptual framework focusing on the aesthetic player experience and the potential interpretation (meaning) players could give to playing with gaze interaction capabilities. The framework is illustrated by a survey of state-of-the-art research-based and commercial games. We complement existing frameworks by reflecting on gaze interaction in games as the attention relationship between the player (the subject) and the game (the object) along four dimensions: Identity, Mapping, Attention, and Direction. The framework serves as a design and inquiry toolbox to analyse and communicate gaze mechanics in games, reflect on the complexity of gaze interaction, and formulate new research questions. We visualise the resulting design space, highlighting future opportunities for gaze interaction design and HCI gaze research through the framework's lens. We believe this novel approach advocates for the design of gaze-based interactions that reveal the richness of gaze input in future meaningful game experiences.
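To make the four-dimension design space concrete, one could encode a gaze mechanic as a point along those axes. The sketch below is a hypothetical illustration, not the paper's framework: the specific value sets for each dimension (and the `stare_down` example mechanic) are assumptions for demonstration only.

```python
from dataclasses import dataclass
from enum import Enum, auto

# Illustrative value sets for the four dimensions; these particular
# categories are invented for the sketch, not taken from the framework.
class Identity(Enum):
    NATURAL = auto()   # gaze framed as ordinary looking
    MAGICAL = auto()   # gaze framed as a supernatural power

class Mapping(Enum):
    DIRECT = auto()    # gaze point maps one-to-one to a game action
    ABSTRACT = auto()  # gaze is reinterpreted symbolically

class Attention(Enum):
    PLAYER_TO_GAME = auto()  # the game reacts to where the player looks
    GAME_TO_PLAYER = auto()  # the game directs the player's looking

class Direction(Enum):
    AT = auto()        # looking at an object triggers the mechanic
    AWAY = auto()      # looking away triggers the mechanic

@dataclass(frozen=True)
class GazeMechanic:
    """One point in the design space spanned by the four dimensions."""
    name: str
    identity: Identity
    mapping: Mapping
    attention: Attention
    direction: Direction

# Hypothetical example: a stealth mechanic where enemies react
# when the player looks directly at them.
stare_down = GazeMechanic(
    name="stare-down",
    identity=Identity.NATURAL,
    mapping=Mapping.DIRECT,
    attention=Attention.PLAYER_TO_GAME,
    direction=Direction.AT,
)
```

Tagging mechanics this way would let a designer enumerate unoccupied corners of the design space the abstract mentions.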


2021
Author(s): Olga Dal Monte, Siqi Fan, Nicholas Fagan, Cheng-Chi Chu, Michael Zhou, ...

Social gaze interaction powerfully shapes interpersonal communication in humans and other primates. However, little is known about the neural underpinnings of these social behavioral exchanges. Here, we studied neural responses associated with naturalistic, face-to-face, social gaze interactions between pairs of macaques. We examined spiking activity in a large number of neurons spanning four different brain regions involved in social behaviors - the amygdala, orbitofrontal cortex, anterior cingulate cortex, and dorsomedial prefrontal cortex. We observed widespread single-cell representations of social gaze interaction functionalities in these brain regions - social discriminability, social gaze monitoring, and mutual eye contact selectivity. Many of these neurons discriminated looking at social versus non-social stimuli with rich temporal heterogeneity, or parametrically tracked the gaze positions of oneself or the conspecific. Furthermore, many neurons displayed selectivity for mutual eye contact as a function of the initiator or follower of mutual gaze events. Crucially, a significant proportion of neurons coded for more than one of these three signatures of social gaze interaction, supporting the recruitment of partially overlapping neuronal ensembles. Our findings emphasize integrated contributions of the amygdala and prefrontal circuits within the social interaction networks in processing real-life social interactions.


Author(s): Oskar Palinko, Trine Ungermann Fredskild, Eva Tansem Andersen, Conny Heidtmann, Andreas Risskov Sorensen, ...

2021, Vol 12
Author(s): Arne Hartz, Björn Guth, Mathis Jording, Kai Vogeley, Martin Schulte-Rüther

To navigate the social world, humans rely heavily on gaze for non-verbal communication, as it conveys information in a highly dynamic and complex, yet concise, manner: for instance, humans effortlessly use gaze to direct and infer the attention of a possible interaction partner. However, many traditional paradigms in social gaze research rely on static ways of assessing gaze interaction, e.g., by using images or prerecorded videos as stimulus material. Emerging gaze-contingent paradigms, in which algorithmically controlled virtual characters can respond flexibly to the gaze behavior of humans, provide high ecological validity. Ideally, these are based on models of human behavior which allow for a precise, parameterized characterization of behavior, and should include variable interactive settings and different communicative states of the interacting agents. The present study provides a complete definition and empirical description of a behavioral parameter space of human gaze behavior in extended gaze encounters. To this end, we (i) modeled a shared 2D virtual environment on a computer screen in which a human could interact via gaze with an agent, with objects presented simultaneously to create instances of joint attention, and (ii) quantitatively determined the free model parameters (temporal and probabilistic) of behavior within this environment to provide a first complete, detailed description of the behavioral parameter space governing joint attention. This knowledge is essential for modeling interacting agents with a high degree of ecological validity, be it for cognitive studies or applications in human-robot interaction.
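A gaze-contingent agent of the kind described above can be caricatured with two free parameters loosely analogous to the temporal and probabilistic ones the study estimates: a response latency and a probability of following the partner's gaze to an object. This is a minimal sketch under those assumptions; the state names, default values, and update rule are invented for illustration and are not the study's model.

```python
import random

class GazeContingentAgent:
    """Toy agent that either holds eye contact or follows the human's
    gaze to an object, governed by a latency and a follow probability."""

    def __init__(self, latency_ms=300, p_follow=0.8, rng=None):
        self.latency_ms = latency_ms  # temporal parameter: reaction delay
        self.p_follow = p_follow      # probabilistic parameter: chance to follow
        self.rng = rng or random.Random(0)
        self.gaze_target = "partner"  # start in mutual gaze

    def step(self, human_gaze, elapsed_ms):
        """Update the agent's gaze target given where the human looks."""
        if elapsed_ms < self.latency_ms:
            return self.gaze_target   # still within the response latency
        if human_gaze != "partner" and self.rng.random() < self.p_follow:
            self.gaze_target = human_gaze  # follow gaze: joint attention
        else:
            self.gaze_target = "partner"   # return to eye contact
        return self.gaze_target

# With p_follow=1.0 the agent deterministically follows once the
# latency has elapsed, creating a joint-attention episode.
agent = GazeContingentAgent(p_follow=1.0)
assert agent.step("object_A", elapsed_ms=100) == "partner"   # too early
assert agent.step("object_A", elapsed_ms=400) == "object_A"  # follows
```

Fitting such parameters to recorded human behavior is what would give an agent like this its ecological validity.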


Author(s): Sofia Papavlasopoulou, Kshitij Sharma, David Melhart, Jasper Schellekens, Serena Lee-Cultura, ...

2021 ◽  
Vol 2 ◽  
Author(s):  
Allison Jing ◽  
Kieran May ◽  
Gun Lee ◽  
Mark Billinghurst

Gaze is one of the predominant communication cues and can provide valuable implicit information, such as intention or focus, when performing collaborative tasks. However, little research has been done on how virtual gaze cues combining spatial and temporal characteristics affect real-life physical tasks during face-to-face collaboration. In this study, we explore the effect of showing joint gaze interaction in an Augmented Reality (AR) interface by evaluating three bi-directional collaborative (BDC) gaze visualisations with three levels of gaze behaviours. Across three independent tasks, we found that all BDC visualisations were rated significantly better at representing joint attention and user intention than a non-collaborative (NC) condition, and hence were considered more engaging. The Laser Eye condition, spatially embodied with gaze direction, was perceived as significantly more effective, as it encourages mutual gaze awareness with relatively low mental effort in a less constrained workspace. In addition, by offering an additional virtual representation that compensates for verbal descriptions and hand pointing, BDC gaze visualisations can encourage more conscious use of gaze cues coupled with deictic references during co-located symmetric collaboration. We provide a summary of the lessons learned, limitations of the study, and directions for future research.
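Detecting the joint attention that such bi-directional visualisations represent reduces, in the simplest case, to checking whether two collaborators' gaze points land near the same location. The sketch below is an illustrative simplification (the coordinates and the 10 cm threshold are assumptions), not the study's implementation, which would also account for dwell time and gaze direction.

```python
import math

def joint_attention(p1, p2, threshold=0.1):
    """True if two gaze points (x, y, z in metres) fall within
    `threshold` metres of each other, i.e. a shared fixation target."""
    return math.dist(p1, p2) <= threshold

# Both collaborators fixate near the same corner of a shared object.
assert joint_attention((0.50, 0.20, 1.0), (0.52, 0.18, 1.0))
# Gaze points on opposite sides of the workspace: no joint attention.
assert not joint_attention((0.50, 0.20, 1.0), (0.90, 0.60, 1.0))
```

An AR interface could trigger a visualisation such as a shared highlight whenever this predicate holds for some minimum duration.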

