Social modulation of on-screen looking behaviour

2020 ◽  
Author(s):  
Jill A. Dosso ◽  
Nicola C. Anderson ◽  
Basil Wahn ◽  
Gini Choi ◽  
Alan Kingstone

While passive social information (e.g. pictures of people) routinely draws one's eyes, our willingness to look at live others is more nuanced. People tend not to stare at strangers and will modify their gaze behaviour to avoid sending undesirable social signals; yet they often continue to monitor others covertly "out of the corner of their eyes." What this means for looks made near live others is unknown. Will the eyes be drawn towards the other person, or pushed away? We evaluate changes in two elements of gaze control: image-independent principles guiding how we look (e.g. biases to make eye movements along the cardinal directions) and image-dependent principles guiding what we look at (e.g. a preference for meaningful content within a scene). Participants were asked to freely view semantically unstructured images (fractals) and semantically structured images (rotated landscapes), half of which were located in the space near a live other. We found that eye movements were horizontally displaced away from the visible other starting 700 msec after stimulus onset when fractals, but not landscapes, were viewed. These data suggest that the avoidance of looking towards live others extends to the near space around them, at least in the absence of semantically meaningful gaze targets.
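
The time course reported here can be captured with a simple binning analysis. Below is a minimal sketch (not the authors' pipeline) that averages horizontal fixation position within 100 ms bins, sign-flipped so that positive values mean "towards the other person"; the toy data and column names are assumptions for illustration.

```python
# Sketch: time-binned horizontal gaze displacement relative to a live other.
# Hypothetical data layout: one gaze sample per row with time since stimulus
# onset (ms), horizontal position x (deg, 0 = image centre), and the side on
# which the other person sat (-1 = left, +1 = right).
import numpy as np
import pandas as pd

def displacement_by_bin(samples: pd.DataFrame, bin_ms: int = 100) -> pd.Series:
    # Sign-flip x so positive always means "towards the other person",
    # then average within time bins; negative means gaze pushed away.
    towards = samples["x"] * samples["other_side"]
    bins = (samples["t_ms"] // bin_ms) * bin_ms
    return towards.groupby(bins).mean()

# Toy samples in which gaze drifts away from the other after ~700 ms.
rng = np.random.default_rng(0)
t = np.arange(0, 2000, 10)
df = pd.DataFrame({"t_ms": t,
                   "x": np.where(t < 700, 0.0, -1.5) + rng.normal(0, 0.5, t.size),
                   "other_side": 1})
print(displacement_by_bin(df).head())
```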

2018 ◽  
Vol 71 (10) ◽  
pp. 2162-2173 ◽  
Author(s):  
Ross G Macdonald ◽  
Benjamin W Tatler

People communicate using verbal and non-verbal cues, including gaze cues. Gaze allocation can be influenced by social factors; however, most research on gaze cueing has not considered these factors. The presence of social roles was manipulated in a natural, everyday collaborative task while eye movements were measured. In pairs, participants worked together to make a cake. Half of the pairs were given roles (“Chef” or “Gatherer”) and the other half were not. Across all participants we found, contrary to the results of static-image experiments, that participants spent very little time looking at each other, challenging the generalisability of the conclusions from lab-based paradigms. However, participants were more likely than not to look at their partner when receiving an instruction, highlighting the typical coordination of gaze cues and verbal communication in natural interactions. The mean duration of instances in which the partners looked at each other (partner gaze) was longer in the roles condition, and these participants were quicker to align their gaze with their partners (shared gaze). In addition, we found some indication that when hearing spoken instructions, listeners in the roles condition looked at the speaker more than listeners in the no roles condition. We conclude that social context can affect our gaze behaviour during a social interaction.
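
One way to quantify shared gaze from dual eye-tracking data is sketched below: given two synchronised streams of area-of-interest labels (a hypothetical coding format, not the authors' pipeline), measure how quickly one partner's gaze aligns with the other's.

```python
# Sketch: latency to shared gaze from two synchronised AOI label streams,
# sampled at a common rate (dt_ms per sample). Illustrative only.
def shared_gaze_latencies(aoi_a, aoi_b, dt_ms=20):
    """For each span in which A looks at an AOI, return the delay (ms) until
    B first looks at the same AOI, if B does so before A moves on."""
    latencies = []
    i, n = 0, len(aoi_a)
    while i < n:
        target = aoi_a[i]
        j = i
        while j < n and aoi_a[j] == target:  # span of A's look
            j += 1
        for k in range(i, j):
            if aoi_b[k] == target:
                latencies.append((k - i) * dt_ms)
                break
        i = j
    return latencies

# B aligns with A's look at the bowl after 3 samples (60 ms).
print(shared_gaze_latencies(["bowl"] * 5 + ["whisk"] * 5,
                            ["whisk"] * 3 + ["bowl"] * 7))
```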


2007 ◽  
Vol 1 (2) ◽  
Author(s):  
John Semmlow ◽  
Yung-Fu Chen ◽  
Tara L. Alvarez ◽  
Claude Pedrono

If two targets are carefully aligned so that they fall along the cyclopean axis, the required eye movement will be symmetrical, with the two eyes turning equally inward or outward. When such "pure vergence stimuli" are used, only a "pure vergence movement" is required, yet almost all responses include saccadic eye movements: rapid conjugate movements of the two eyes. When saccades occur, they must either produce an error in the desired symmetrical response or correct an error from an asymmetrical vergence response. Eye movement responses to pure convergence stimuli (4.0 deg step stimuli) were recorded from 12 subjects, and the occurrence, timing, and amplitude of saccades were measured. Early saccades (within 400 msec of stimulus onset) appeared in 80% to 100% of the responses. In most subjects, the first saccade increased the asymmetry of the response, taking the eyes away from the midline position. In three subjects, these asymmetry-inducing saccades brought one eye, the preferred or dominant eye, close to the target; in the other subjects, these saccades were probably due to the distraction caused by the transient diplopic image generated by a pure vergence stimulus. While many of these asymmetry-inducing saccades showed saccade-like enhancements of vergence, they were, with the exception of two subjects, primarily divergent and did not facilitate the ongoing convergence movement. All subjects had some responses in which the first saccade improved response symmetry, correcting an asymmetry brought about by unequal vergence movements in the two eyes. In five subjects, large symmetry-inducing saccades corrected an asymmetrical vergence response, bringing the eyes back to the midline (to within a few tenths of a degree).
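
The symmetry analysis rests on the standard decomposition of binocular recordings into a conjugate (version) component and a disconjugate (vergence) component. A minimal sketch follows, with illustrative sign conventions and thresholds rather than the authors' algorithm.

```python
# Sketch: decompose binocular position traces into version and vergence and
# flag saccades with a simple velocity threshold. Rightward rotation is
# positive for both eyes; thresholds are illustrative assumptions.
import numpy as np

def decompose(left_deg: np.ndarray, right_deg: np.ndarray, fs: float = 500.0):
    version = (left_deg + right_deg) / 2.0     # conjugate (saccadic) component
    vergence = left_deg - right_deg            # disconjugate component
    version_vel = np.gradient(version) * fs    # deg/s
    saccade_mask = np.abs(version_vel) > 30.0  # illustrative 30 deg/s threshold
    return version, vergence, saccade_mask

# Toy response to a 4 deg symmetric convergence step: each eye turns 2 deg inward.
t = np.linspace(0, 1, 500)
left = 2 * (1 - np.exp(-t / 0.15))    # left eye rotates nasally (rightward, +)
right = -2 * (1 - np.exp(-t / 0.15))  # right eye rotates nasally (leftward, -)
version, vergence, sacc = decompose(left, right)
print(f"final vergence: {vergence[-1]:.2f} deg, saccade samples: {sacc.sum()}")
```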


Author(s):  
Demian Scherer ◽  
Dirk Wentura

Abstract. Recent theories assume that simultaneously activated concepts facilitate each other when they overlap semantically. We provide evidence for this claim using a semantic priming paradigm. To test for mutual facilitation of related concepts, a perceptual identification task was employed in which prime-target pairs were presented briefly and masked, with an SOA of 0 ms (i.e., prime and target were presented concurrently, one above the other). Participants were instructed to identify the target. In Experiment 1, a cue defining the target was presented at stimulus onset, whereas in Experiment 2 the cue was not presented until stimulus offset; accordingly, in Experiment 2 a post-cue task was merged with the perceptual identification task. We obtained significant semantic priming effects in both experiments. This result is compatible with the view that two related concepts can be activated in parallel and mutually facilitate each other.
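
For illustration, the priming effect in such a paradigm reduces to an identification-accuracy contrast between related and unrelated prime-target pairs; a toy computation with invented numbers, not the reported data:

```python
# Sketch: priming effect as the accuracy advantage for related pairs.
import pandas as pd

trials = pd.DataFrame({
    "relatedness": ["related"] * 4 + ["unrelated"] * 4,
    "correct":     [1, 1, 1, 0,      1, 0, 0, 1],  # 1 = target identified
})
acc = trials.groupby("relatedness")["correct"].mean()
print(f"priming effect: {acc['related'] - acc['unrelated']:+.2f}")
```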


2020 ◽  
Author(s):  
David Harris ◽  
Mark Wilson ◽  
Tim Holmes ◽  
Toby de Burgh ◽  
Samuel James Vine

Head-mounted eye tracking has been fundamental to developing an understanding of sporting expertise, as the way in which performers sample visual information from the environment is a major determinant of successful performance. There is, however, a long-running tension between the desire to study realistic, in-situ gaze behaviour and the difficulties of acquiring accurate ocular measurements in dynamic and fast-moving sporting tasks. Here, we describe how immersive technologies, such as virtual reality, offer an increasingly compelling approach for conducting eye movement research in sport. The possibility of studying gaze behaviour in representative and realistic environments, but with high levels of experimental control, could enable significant strides forward for eye tracking in sport and improve understanding of how eye movements underpin sporting skills. By providing a rationale for virtual reality as an optimal environment for eye tracking research, as well as outlining practical considerations related to hardware, software, and data analysis, we hope to guide researchers and practitioners in the use of this approach.
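
One data-analysis step that VR eye tracking requires is reconstructing a world-space gaze ray from the headset pose and the tracker's eye-in-head direction. A minimal sketch, assuming a (w, x, y, z) quaternion convention (an assumption of this example, not a detail from the paper):

```python
# Sketch: eye-in-head gaze direction -> world-space gaze ray via head pose.
import numpy as np

def quat_rotate(q, v):
    # Rotate vector v by unit quaternion q = (w, x, y, z).
    w, x, y, z = q
    u = np.array([x, y, z])
    return 2 * np.dot(u, v) * u + (w * w - np.dot(u, u)) * v + 2 * w * np.cross(u, v)

def world_gaze_ray(head_pos, head_quat, gaze_dir_in_head):
    direction = quat_rotate(head_quat, gaze_dir_in_head)
    return head_pos, direction / np.linalg.norm(direction)

# Head turned 90 deg about the vertical axis; eye looking straight ahead (-z).
origin, direction = world_gaze_ray(
    head_pos=np.array([0.0, 1.7, 0.0]),
    head_quat=np.array([np.cos(np.pi / 4), 0.0, np.sin(np.pi / 4), 0.0]),
    gaze_dir_in_head=np.array([0.0, 0.0, -1.0]),
)
print(origin, direction.round(3))  # gaze now points along -x in world space
```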


2018 ◽  
Author(s):  
Fatima Maria Felisberti

Visual field asymmetries (VFA) in the encoding of groups of faces, rather than individual faces, have rarely been investigated. Here, eye movements (dwell time (DT) and fixations (Fix)) were recorded during the encoding of three groups of four faces tagged with cheating, cooperative, or neutral behaviours. Faces in each of the three groups were placed in the upper left (UL), upper right (UR), lower left (LL), or lower right (LR) quadrant. Face recognition was equally high in the three groups. In contrast, the proportions of DT and Fix were higher for faces in the left than the right hemifield and in the upper rather than the lower hemifield. The overall time spent looking at the UL quadrant was higher than for the other quadrants. The findings are relevant to the understanding of VFA in face processing, especially for groups of faces, and might be linked to environmental cues and/or reading habits.
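
The quadrant and hemifield measures reduce to dwell-time proportions over screen coordinates. A minimal sketch with a hypothetical fixation table (the column names and toy values are assumptions):

```python
# Sketch: proportion of dwell time per quadrant and per hemifield.
# Coordinates have (0, 0) at the screen centre; durations are in ms.
import pandas as pd

fix = pd.DataFrame({"x": [-200, -150, 180, -90, 120],
                    "y": [160, 140, 150, -80, -120],
                    "dur_ms": [320, 280, 250, 200, 180]})

# Label quadrants UL/UR/LL/LR from the sign of x and y.
fix["quadrant"] = fix.apply(
    lambda r: ("U" if r.y > 0 else "L") + ("L" if r.x < 0 else "R"), axis=1)
dt_prop = fix.groupby("quadrant")["dur_ms"].sum() / fix["dur_ms"].sum()
left_prop = fix.loc[fix.x < 0, "dur_ms"].sum() / fix["dur_ms"].sum()
print(dt_prop.round(2), f"left hemifield: {left_prop:.2f}", sep="\n")
```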


2021 ◽  
Vol 4 (1) ◽  
pp. 71-95
Author(s):  
Juha Lång ◽  
Hana Vrzakova ◽  
Lauri Mehtätalo

One of the main rules of subtitling states that subtitles should be formatted and timed so that viewers have enough time to read and understand the text but also to follow the picture. In this paper we examine the factors that influence the time viewers spend looking at subtitles, concentrating on the lexical and structural properties of the subtitles. The participant group (N = 14) watched a television documentary with Russian narration and Finnish subtitles (the participants' native language) while their eye movements were tracked. Using a linear mixed-effects model, we identified significant effects of subtitle duration and character count on the time participants spent looking at the subtitles. The model also revealed significant inter-individual differences, despite the fact that the participant group was seemingly homogeneous. The findings underline the complexity of subtitled audiovisual material as a stimulus of cognitive processing. We provide a starting point for more comprehensive modelling of the factors involved in gaze behaviour when watching subtitled content.

Lay summary: Subtitles have become a popular way of watching foreign series and films, even in countries that have traditionally used dubbing. Because subtitles are visible to the viewer for only a short, limited time, they should be composed so that they are easy to read and leave the viewer time to follow the image as well. Nevertheless, the factors that affect the time it takes to read a subtitle are not well known. We wanted to find out what makes people who are watching subtitled television shows spend more time gazing at the subtitles. To answer this question, we recorded the eye movements of 14 participants while they watched a short, subtitled television documentary. We built a statistical model of gaze behaviour from the eye movement data and found that the length of the subtitle and the time the subtitle is visible are separate contributing factors. We also found large differences between individual viewers. Our conclusion is that people process subtitled content in very different ways, but some common tendencies exist. Our model is a solid starting point for more comprehensive modelling of the gaze behaviour of people watching subtitled audiovisual material.
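
A model of the kind described can be sketched with statsmodels: fixed effects for subtitle duration and character count, and a random intercept per participant to absorb the inter-individual differences. The data below are simulated; the column names and effect sizes are assumptions for illustration, not the study's dataset.

```python
# Sketch: linear mixed-effects model of subtitle dwell time.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_subj, n_subs = 14, 40  # 14 participants, 40 subtitles each
df = pd.DataFrame({
    "participant": np.repeat(np.arange(n_subj), n_subs),
    "duration_ms": rng.uniform(1000, 6000, n_subj * n_subs),
    "char_count": rng.integers(10, 70, n_subj * n_subs),
})
subj_offset = rng.normal(0, 150, n_subj)[df["participant"]]  # per-person baseline
df["dwell_ms"] = (0.35 * df["duration_ms"] + 12 * df["char_count"]
                  + subj_offset + rng.normal(0, 200, len(df)))

# Random intercept for participant captures inter-individual differences.
result = smf.mixedlm("dwell_ms ~ duration_ms + char_count",
                     data=df, groups=df["participant"]).fit()
print(result.summary())
```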


2021 ◽  
Author(s):  
Marek A. Pedziwiatr ◽  
Elisabeth von dem Hagen ◽  
Christoph Teufel

Humans constantly move their eyes to explore the environment and obtain information. Competing theories of gaze guidance consider the factors driving eye movements within a dichotomy between low-level visual features and high-level object representations. However, recent developments in object perception indicate a complex and intricate relationship between features and objects. Specifically, image-independent object-knowledge can generate objecthood by dynamically reconfiguring how feature space is carved up by the visual system. Here, we adopt this emerging perspective of object perception, moving away from the simplifying dichotomy between features and objects in explanations of gaze guidance. We recorded eye movements in response to stimuli that appear as meaningless patches on initial viewing but are experienced as coherent objects once relevant object-knowledge has been acquired. We demonstrate that gaze guidance differs substantially depending on whether observers experienced the same stimuli as meaningless patches or organised them into object representations. In particular, fixations on identical images became object-centred, less dispersed, and more consistent across observers once exposed to relevant prior object-knowledge. Observers' gaze behaviour also indicated a shift from exploratory information-sampling to a strategy of extracting information mainly from selected, object-related image areas. These effects were evident from the first fixations on the image. Importantly, however, eye movements were not fully determined by object representations but were best explained by a simple model that integrates image-computable features and high-level, knowledge-dependent object representations. Overall, the results show how information sampling via eye movements in humans is guided by a dynamic interaction between image-computable features and knowledge-driven perceptual organisation.
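
Two of the reported gaze measures can be illustrated as simple statistics over fixation coordinates; the definitions below are generic stand-ins for illustration, not the measures or model used in the paper.

```python
# Sketch: fixation dispersion and inter-observer consistency.
import numpy as np

def dispersion(fix_xy: np.ndarray) -> float:
    """Mean distance of fixations from their centroid (pixels)."""
    centroid = fix_xy.mean(axis=0)
    return float(np.linalg.norm(fix_xy - centroid, axis=1).mean())

def consistency(centroids: np.ndarray) -> float:
    """Negative mean pairwise distance between observers' fixation
    centroids; higher (less negative) means more consistent."""
    d = [np.linalg.norm(a - b) for i, a in enumerate(centroids)
         for b in centroids[i + 1:]]
    return -float(np.mean(d))

# One observer's fixations before vs after acquiring object-knowledge.
before = np.array([[50.0, 40.0], [300.0, 90.0], [150.0, 260.0]])
after = np.array([[140.0, 120.0], [150.0, 130.0], [145.0, 118.0]])
print(dispersion(before), dispersion(after))  # dispersion shrinks

# Fixation centroids of three observers, same contrast.
print(consistency(np.array([[50, 40], [300, 90], [150, 260]])),
      consistency(np.array([[140, 120], [150, 130], [145, 118]])))
```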


2018 ◽  
Vol 11 (2) ◽  
Author(s):  
Sarah Vandemoortele ◽  
Kurt Feyaerts ◽  
Mark Reybrouck ◽  
Geert De Bièvre ◽  
Geert Brône ◽  
...  

Few investigations into nonverbal communication in ensemble playing have focused on gaze behaviour to date. In this study, the gaze behaviour of musicians playing in trios was recorded using the recently developed technique of mobile eye-tracking. Four trios (clarinet, violin, piano) were recorded while rehearsing and while playing several run-throughs of the same musical fragment. The current article reports on an initial exploration of the data in which we describe how often gazing at the partner occurred. On the one hand, we aim to identify possible contrasting cases; on the other, we look for tendencies across the run-throughs. We discuss the quantified gaze behaviour in relation to the existing literature and the current research design.


1999 ◽  
Vol 81 (6) ◽  
pp. 3105-3109 ◽  
Author(s):  
T. Belton ◽  
R. A. McCrea

Contribution of the cerebellar flocculus to gaze control during active head movements. The flocculus and ventral paraflocculus are adjacent regions of the cerebellar cortex that are essential for controlling smooth pursuit eye movements and for altering the performance of the vestibulo-ocular reflex (VOR). The question addressed in this study is whether these regions of the cerebellum are more globally involved in controlling gaze, regardless of whether eye or active head movements are used to pursue moving visual targets. Single-unit recordings were obtained from Purkinje (Pk) cells in the floccular region of squirrel monkeys that were trained to fixate and pursue small visual targets. Cell firing rate was recorded during smooth pursuit eye movements, cancellation of the VOR, combined eye-head pursuit, and spontaneous gaze shifts in the absence of targets. Pk cells were found to be much less sensitive to gaze velocity during combined eye-head pursuit than during ocular pursuit. They were not sensitive to gaze or head velocity during gaze saccades. Temporary inactivation of the floccular region by muscimol injection compromised ocular pursuit but had little effect on the ability of monkeys to pursue visual targets with head movements or to cancel the VOR during active head movements. Thus, the signals produced by Pk cells in the floccular region are necessary for controlling smooth pursuit eye movements but not for coordinating gaze during active head movements. The results imply that individual functional modules in the cerebellar cortex are less involved in the global organization and coordination of movements than in the parametric control of movements produced by a specific part of the body.
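
The contrast between pursuit conditions is commonly summarised as a gaze-velocity sensitivity: the slope of a fit of firing rate against gaze velocity. A generic analysis sketch with simulated data, not the study's recordings:

```python
# Sketch: gaze-velocity sensitivity of a unit in two pursuit conditions,
# estimated as the least-squares slope of firing rate on gaze velocity.
import numpy as np

def velocity_sensitivity(firing_hz: np.ndarray, gaze_vel: np.ndarray) -> float:
    slope, _intercept = np.polyfit(gaze_vel, firing_hz, 1)
    return slope  # spikes/s per deg/s

rng = np.random.default_rng(2)
vel = rng.uniform(-40, 40, 200)  # gaze velocity samples, deg/s
pursuit_rate = 60 + 1.2 * vel + rng.normal(0, 5, 200)   # strong modulation
eye_head_rate = 60 + 0.3 * vel + rng.normal(0, 5, 200)  # weaker modulation
print(velocity_sensitivity(pursuit_rate, vel),
      velocity_sensitivity(eye_head_rate, vel))
```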


Author(s):  
Dzmitry A. Kaliukhovich ◽  
Nikolay V. Manyakov ◽  
Abigail Bangerter ◽  
Seth Ness ◽  
Andrew Skalkin ◽  
...  

Abstract. Participants with autism spectrum disorder (ASD) (n = 121, mean [SD] age: 14.6 [8.0] years) and typically developing (TD) controls (n = 40, 16.4 [13.3] years) were presented with a series of videos representing biological motion on one side of a computer monitor screen and non-biological motion on the other, while their eye movements were recorded. As predicted, participants with ASD spent less overall time looking at presented stimuli than TD participants (P < 10⁻³) and showed less preference for biological motion (P < 10⁻⁵). Participants with ASD also had greater average latencies than TD participants of the first fixation on both biological (P < 0.01) and non-biological motion (P < 0.02). Findings suggest that individuals with ASD differ from TD individuals on multiple properties of eye movements and biological motion preference.
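
The two headline measures can be computed from a labelled gaze stream; a minimal sketch, where the label scheme and sampling interval are assumptions for illustration:

```python
# Sketch: biological-motion preference index and first-fixation latency.
def preference_and_latency(labels, dt_ms=16.7):
    """labels: per-sample gaze target, 'bio', 'nonbio', or None (off-stimulus).
    Returns (proportion of on-stimulus time on biological motion,
    latency in ms of the first fixation on biological motion)."""
    bio = labels.count("bio")
    nonbio = labels.count("nonbio")
    preference = bio / (bio + nonbio) if bio + nonbio else float("nan")
    latency = next((i * dt_ms for i, l in enumerate(labels) if l == "bio"), None)
    return preference, latency

labels = [None, None, "nonbio", "nonbio", "bio", "bio", "bio", None]
print(preference_and_latency(labels))  # (0.6, ~66.8 ms)
```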

