Visual Attention in Real‐World Conversation: Gaze Patterns Are Modulated by Communication and Group Size

2020 ◽  
Author(s):  
Thomas Maran ◽  
Marco Furtner ◽  
Simon Liegl ◽  
Theo Ravet‐Brown ◽  
Lucas Haraped ◽  
...  
2020 ◽  
Author(s):  
Dean Mobbs ◽  
Ellen Tedeschi ◽  
Anastasia Buyalskaya ◽  
Brian Silston

According to Hamilton’s Selfish Herd Theory, a crucial survival benefit of group living is that it provides a ‘risk dilution’ function against predation. Despite a large literature on the benefits of group living in animals, few studies have examined how group size alters subjective fear or threat perception in humans, or what factors drive preferences for being in groups when facing threats. We conducted seven experiments (N=3,838) to test (A) whether the presence of others decreases perceived threat under a variety of conditions. In studies 1 to 3, we experimentally manipulated group size in hypothetical and real-world situations and showed that fear responses decreased as group size increased. In studies 4 to 7, we again used a combination of hypothetical, virtual, and real-world decisions to test (B) how internal states (e.g., anxiety) and external factors (e.g., threat level, availability of help) affected participants’ preference for groups. Participants consistently chose larger groups when threat and anxiety were high. Overall, our findings show that group size provides a salient signal of protection and safety.


2020 ◽  
Author(s):  
Timothy F. Brady ◽  
Viola S. Störmer ◽  
Anna Shafer-Skelton ◽  
Jamal Rodgers Williams ◽  
Angus F. Chapman ◽  
...  

Both visual attention and visual working memory tend to be studied either with very simple stimuli and low-level paradigms, designed to allow us to understand the representations and processes in detail, or with fully realistic stimuli that make such precise understanding difficult but are more representative of the real world. In this chapter we argue for an intermediate approach in which visual attention and visual working memory are studied by scaling up from the simplest settings to more complex settings that capture some aspects of the complexity of the real world, while still remaining in the realm of well-controlled stimuli and well-understood tasks. We believe this approach, which we have been taking in our labs, will yield a more generalizable body of knowledge about visual attention and visual working memory while maintaining the rigor and control typical of vision science and psychophysics studies.


2019 ◽  
Author(s):  
Gwendolyn L Rehrig ◽  
Candace Elise Peacock ◽  
Taylor Hayes ◽  
Fernanda Ferreira ◽  
John M. Henderson

The world is visually complex, yet we can efficiently describe it by extracting the information that is most relevant to convey. How do the properties of real-world scenes help us decide where to look and what to say? Image salience has been the dominant explanation for what drives visual attention and production as we describe displays, but new evidence shows scene meaning predicts attention better than image salience. Here we investigated the relevance of one aspect of meaning, graspability (the grasping interactions objects in the scene afford), given that affordances have been implicated in both visual and linguistic processing. We quantified image salience, meaning, and graspability for real-world scenes. In three eyetracking experiments, native English speakers described possible actions that could be carried out in a scene. We hypothesized that graspability would preferentially guide attention due to its task-relevance. In two experiments using stimuli from a previous study, meaning explained visual attention better than graspability or salience did, and graspability explained attention better than salience. In a third experiment we quantified image salience, meaning, graspability, and reach-weighted graspability for scenes that depicted reachable spaces containing graspable objects. Graspability and meaning explained attention equally well in the third experiment, and both explained attention better than salience. We conclude that speakers use object graspability to allocate attention to plan descriptions when scenes depict graspable objects within reach, and otherwise rely more on general meaning. The results shed light on what aspects of meaning guide attention during scene viewing in language production tasks.
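For a sense of how such map-based comparisons can be run, here is a minimal Python sketch (not the authors' pipeline): it computes the squared linear correlation between each candidate map (salience, meaning, graspability) and a fixation-density map. The arrays below are random placeholders; in practice the maps would come from the salience model and the meaning and graspability ratings described above.

```python
# Hypothetical sketch: comparing how well different scene maps explain where
# people fixate. Assumes each candidate map and the fixation-density map are
# 2-D arrays of the same shape; this is not the authors' actual analysis.
import numpy as np

def map_r_squared(feature_map: np.ndarray, fixation_density: np.ndarray) -> float:
    """Squared linear correlation between a feature map and a fixation-density map."""
    r = np.corrcoef(feature_map.ravel(), fixation_density.ravel())[0, 1]
    return float(r ** 2)

# Placeholder maps standing in for real salience/meaning/graspability maps.
rng = np.random.default_rng(0)
fixation_density = rng.random((64, 64))
candidate_maps = {
    "salience": rng.random((64, 64)),
    "meaning": rng.random((64, 64)),
    "graspability": rng.random((64, 64)),
}

for name, m in candidate_maps.items():
    print(f"{name}: R^2 = {map_r_squared(m, fixation_density):.3f}")
```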


2015 ◽  
Vol 9 (4) ◽  
Author(s):  
Songpo Li ◽  
Xiaoli Zhang ◽  
Fernando J. Kim ◽  
Rodrigo Donalisio da Silva ◽  
Diedra Gustafson ◽  
...  

Laparoscopic robots have been widely adopted in modern medical practice. However, explicitly interacting with these robots can increase the physical and cognitive load on the surgeon. An attention-aware robotic laparoscope system has been developed to free the surgeon from the technical limitations of visualization through the laparoscope. The system implicitly recognizes the surgeon's visual attention by interpreting the surgeon's natural eye movements using fuzzy logic, and then automatically steers the laparoscope to focus on that viewing target. Experimental results show that the system makes surgeon–robot interaction more effective and intuitive, and has the potential to make the execution of surgery smoother and faster.
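As a rough illustration of the kind of fuzzy-logic gaze interpretation described here, the following Python sketch combines two inputs (fixation duration and gaze dispersion) with simple membership functions into an attention confidence that gates a re-centering command. The membership functions, thresholds, and variable names are assumptions for illustration, not the published system's parameters.

```python
# Minimal sketch of fuzzy-logic gaze interpretation for an attention-aware
# laparoscope. All thresholds and rules are illustrative assumptions.
import numpy as np

def ramp_up(x: float, lo: float, hi: float) -> float:
    """Membership rising linearly from 0 at `lo` to 1 at `hi`."""
    return float(np.clip((x - lo) / (hi - lo), 0.0, 1.0))

def ramp_down(x: float, lo: float, hi: float) -> float:
    """Membership falling linearly from 1 at `lo` to 0 at `hi`."""
    return 1.0 - ramp_up(x, lo, hi)

def attention_confidence(fixation_ms: float, dispersion_deg: float) -> float:
    """Fuzzy AND (min) of 'fixation is long' and 'gaze is stable'."""
    long_fixation = ramp_up(fixation_ms, 150.0, 600.0)
    stable_gaze = ramp_down(dispersion_deg, 0.5, 2.0)
    return min(long_fixation, stable_gaze)

# Steer the laparoscope only when the inferred attention is strong enough.
gaze_x, gaze_y = 0.62, 0.40  # normalized gaze point in the camera image
conf = attention_confidence(fixation_ms=550, dispersion_deg=0.8)
if conf > 0.7:
    print(f"Re-center laparoscope on ({gaze_x:.2f}, {gaze_y:.2f})")
else:
    print("Keep current view")
```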


Author(s):  
Samia Hussein

The present study examined the effect of scene context on the guidance of attention during visual search in real‐world scenes. Prior research has demonstrated that when searching for an object, attention is usually guided to the region of a scene most likely to contain that target object. This study examined two possible mechanisms underlying efficient search: enhancement of attention to previously attended regions (facilitation) and reduced attention to previously ignored regions (inhibition). Participants (N=20) were shown an object name and then searched through scenes for the target while their eye movements were tracked. Scenes were divided into target‐relevant contextual regions (upper, middle, lower), and participants searched repeatedly in the same scene for different targets located either in the same region or in different regions. When repeated searches target the same region of a scene, we expect search to be faster and more efficient because attention was previously deployed there (facilitation of attention); when repeated searches target a different region, we expect search to be slower and less efficient because that region was previously ignored (inhibition of attention). Results from this study help clarify how mechanisms of visual attention operate within scene contexts during visual search.
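The predicted facilitation/inhibition contrast amounts to a paired comparison of search times for same-region versus different-region repetitions. A minimal Python sketch of that comparison follows; the reaction-time values are simulated placeholders, not data from the study.

```python
# Illustrative sketch of the predicted same-region vs. different-region
# comparison; the numbers below are simulated, not the study's results.
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(1)
n_participants = 20

# Mean search time (ms) per participant in each repetition condition.
same_region_rt = rng.normal(loc=900, scale=120, size=n_participants)   # facilitation predicted
diff_region_rt = rng.normal(loc=1050, scale=120, size=n_participants)  # inhibition predicted

t_stat, p_value = ttest_rel(same_region_rt, diff_region_rt)
print(f"Same-region mean RT:      {same_region_rt.mean():.0f} ms")
print(f"Different-region mean RT: {diff_region_rt.mean():.0f} ms")
print(f"Paired t-test: t = {t_stat:.2f}, p = {p_value:.3f}")
```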


2005 ◽  
Vol 272 (1570) ◽  
pp. 1373-1377 ◽  
Author(s):  
Shinsuke Suzuki ◽  
Eizo Akiyama

The evolution of cooperation in social dilemmas has been of considerable concern in fields such as sociobiology, economics, and sociology. In the real world, reputation may play an important role in the evolution of cooperation. Recent studies of indirect reciprocity have shown that cooperation can evolve through reputation even when pairs of individuals interact only a few times. To our knowledge, however, most indirect reciprocity models have presumed dyadic interaction; no studies have analysed the evolution of cooperation in large communities when the effect of reputation is included. We investigate the evolution of cooperation in sizable groups in which the reputation of individuals affects the decision-making process. This paper presents the following findings: (i) cooperation can evolve in the four-person case; (ii) the evolution of cooperation becomes more difficult as group size increases, even when the effect of reputation is included; and (iii) three kinds of final social states exist. In medium-sized communities, cooperative species can stably coexist with defecting species.
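To make the setup concrete, here is a toy Python simulation of reputation-conditioned cooperation in an n-person game (image scoring in groups of four). It is a simplified illustration under assumed payoffs and update rules, not the evolutionary model analysed in the paper.

```python
# Toy sketch of reputation-based cooperation in an n-person game.
# Payoffs, strategies, and the reputation rule are illustrative assumptions.
import random

random.seed(0)
GROUP_SIZE = 4
ROUNDS = 50
BENEFIT, COST = 3.0, 1.0

# Each agent: strategy ('D' always defects; 'R' cooperates only with
# good-standing co-players), a public reputation flag, and a payoff total.
agents = [{"strategy": random.choice(["D", "R"]), "good": True, "payoff": 0.0}
          for _ in range(40)]

for _ in range(ROUNDS):
    group = random.sample(agents, GROUP_SIZE)
    # Reciprocators cooperate only if every co-player is in good standing.
    actions = []
    for a in group:
        others_good = all(b["good"] for b in group if b is not a)
        actions.append(a["strategy"] == "R" and others_good)
    pot = BENEFIT * sum(actions)  # public good produced by the cooperators
    for a, cooperated in zip(group, actions):
        a["payoff"] += pot / GROUP_SIZE - (COST if cooperated else 0.0)
        a["good"] = cooperated    # image scoring: cooperation earns good standing

for s in ("R", "D"):
    pool = [a["payoff"] for a in agents if a["strategy"] == s]
    if pool:
        print(f"{s}: n={len(pool)}, mean payoff={sum(pool)/len(pool):.2f}")
```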


2017 ◽  
Vol 26 (10) ◽  
pp. 4777-4789 ◽  
Author(s):  
Olivier Le Meur ◽  
Antoine Coutrot ◽  
Zhi Liu ◽  
Pia Rama ◽  
Adrien Le Roch ◽  
...  
