visual salience
Recently Published Documents

TOTAL DOCUMENTS: 150 (five years: 44)
H-INDEX: 23 (five years: 5)
Perception ◽  
2021 ◽  
pp. 030100662110695
Author(s):  
María Silva-Gago ◽  
Flora Ioannidou ◽  
Annapaola Fedato ◽  
Timothy Hodgson ◽  
Emiliano Bruner

The study of lithic technology can provide information on human cultural evolution. This article aims to analyse the visual behaviour associated with the exploration of ancient stone artefacts and how it relates to perceptual mechanisms in humans. In Experiment 1, we used eye tracking to record patterns of eye fixations while participants viewed images of stone tools, including examples of worked pebbles and handaxes. The results showed that gaze was directed more towards the upper regions of worked pebbles and towards the basal areas of handaxes. Knapped surfaces also attracted more fixations than natural cortex for both tool types. The fixation distribution differed from that predicted by computational models of visual salience. Experiment 2 was an online study using a mouse-click attention-tracking technique and included images of unworked pebbles and ‘mixed’ images combining the handaxe’s outline with the pebble’s unworked texture. The pattern of clicks corresponded to that revealed by eye tracking, and clicks differed between tools and other images. Overall, the findings suggest that visual exploration is directed towards the functional aspects of tools. Studies of visual attention and exploration can supply useful information to inform our understanding of human cognitive evolution and tool use.


2021 ◽  
Vol 21 (9) ◽  
pp. 2095
Author(s):  
Cindy Xiong ◽  
Chase Stokes ◽  
Steve Franconeri

2021 ◽  
Vol 1 ◽  
pp. 273
Author(s):  
Ilana Torres ◽  
Kathryn Slusarczyk ◽  
Malihe Alikhani ◽  
Matthew Stone

In image-text presentations from online discourse, pronouns can refer to entities depicted in images, even if these entities are not otherwise referred to in a text caption. While visual salience may be enough to allow a writer to use a pronoun to refer to a prominent entity in the image, coherence theory suggests that pronoun use is more restricted. Specifically, language users may need an appropriate coherence relation between text and imagery to license and resolve pronouns. To explore this hypothesis and better understand the relationship between image context and text interpretation, we annotated an image-text data set with coherence relations and pronoun information. We find that pronoun use reflects a complex interaction between the content of the pronoun, the grammar of the text, and the relation of text and image.


2021 ◽  
pp. 109467052110124
Author(s):  
Sarah Köcher ◽  
Sören Köcher

In this article, the authors demonstrate a tendency among consumers to use the arithmetic mode as a heuristic basis when drawing inferences from graphical displays of online rating distributions in such a way that service evaluations inferred from rating distributions systematically vary by the location of the mode. The rationale underlying this phenomenon is that the mode (i.e., the most frequent rating which is represented by the tallest bar in a graphical display) attracts consumers’ attention because of its visual salience and is thus disproportionately weighted when they draw conclusions. Across a series of eight studies, the authors provide strong empirical evidence for the existence of the mode heuristic, shed light on this phenomenon at the process level, and demonstrate how consumers’ inferences based on the mode heuristic depend on the visual salience of the mode. Together, the findings of these studies contribute to a better understanding of how service customers process and interpret graphical illustrations of online rating distributions and provide companies with a new key figure that—aside from rating volume, average ratings, and rating dispersion—should be incorporated in the monitoring, analyzing, and evaluating of review data.
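The mode heuristic described above can be illustrated with a short sketch. This is not the authors' experimental material, just a hypothetical example showing how two rating distributions with identical averages can differ in their mode (the tallest bar in a rating chart), which is what the heuristic latches onto:

```python
from collections import Counter

def mode_of_ratings(ratings):
    """Most frequent rating: the tallest bar in a rating-distribution chart."""
    counts = Counter(ratings)
    return max(counts, key=counts.get)

# Two hypothetical rating sets with identical means but different modes:
a = [1, 3, 4, 5, 5, 5, 5]   # mean 4.0, mode 5
b = [4, 4, 4, 4, 4, 4, 4]   # mean 4.0, mode 4
# A consumer relying on the mode heuristic would judge product a more
# favourably than product b, even though their average ratings are identical.
```

Because the mode is visually salient in bar-chart displays while the mean is not shown at all, the two products above would be evaluated differently despite equal averages.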


Sensors ◽  
2021 ◽  
Vol 21 (9) ◽  
pp. 3099
Author(s):  
V. Javier Traver ◽  
Judith Zorío ◽  
Luis A. Leiva

Temporal salience considers how visual attention varies over time. Although visual salience has been widely studied from a spatial perspective, its temporal dimension has been mostly ignored, despite arguably being of utmost importance for understanding the temporal evolution of attention on dynamic content. To address this gap, we proposed Glimpse, a novel measure to compute temporal salience based on the spatio-temporal consistency of raw gaze data across observers. The measure is conceptually simple, training-free, and provides a semantically meaningful quantification of visual attention over time. As an extension, we explored scoring algorithms to estimate temporal salience from spatial salience maps predicted with existing computational models. However, these approaches generally fall short when compared with our proposed gaze-based measure. Glimpse could serve as the basis for several downstream tasks such as segmentation or summarization of videos. Glimpse’s software and data are publicly available.
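The core intuition, scoring frames by how consistently observers look at the same place, can be sketched as follows. This is a loose illustration of the idea under simple assumptions (salience as the inverse of gaze dispersion across observers), not the published Glimpse measure:

```python
import numpy as np

def temporal_salience(gaze):
    """Per-frame temporal salience from multi-observer gaze consistency.

    gaze: array of shape (n_observers, n_frames, 2) with raw (x, y) gaze points.
    A frame scores high when observers' gaze points cluster tightly
    (consistent attention) and low when they are dispersed.
    """
    centroid = gaze.mean(axis=0)                                    # (n_frames, 2)
    spread = np.linalg.norm(gaze - centroid, axis=-1).mean(axis=0)  # (n_frames,)
    return 1.0 / (1.0 + spread)  # monotone inverse of gaze dispersion

# Three observers, two frames: agreement in frame 0, dispersion in frame 1.
gaze = np.array([
    [[0.0, 0.0], [0.0, 0.0]],
    [[0.1, 0.0], [9.0, 0.0]],
    [[0.0, 0.1], [0.0, 9.0]],
])
scores = temporal_salience(gaze)  # frame 0 scores higher than frame 1
```

Peaks in such a score over time would mark moments of shared attention, which is what makes the measure useful for downstream tasks like video summarization.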


2021 ◽  
pp. 1-18
Author(s):  
Matthew D. Hilchey ◽  
Matthew Osborne ◽  
Dilip Soman

Regulators require lenders to display a subset of credit card features in summary tables before customers finalize a credit card choice. Some jurisdictions require some features to be displayed more prominently than others to help ensure that consumers are made aware of them. This approach could lead to untoward effects on choice, such that relevant but nonprominent product features do not factor in as significantly. To test this possibility, we instructed a random sample of 1,615 adults to choose between two hypothetical credit cards whose features were shown side by side in tables. The sample was instructed to select the card that would result in the lowest financial charges, given a hypothetical scenario. Critically, we randomly varied whether the annual interest rates and fees were made visually salient by making one, both, or neither brighter than other features. The findings show that even among credit-savvy individuals, choice tends strongly toward the product that outperforms the other on a salient feature. As a result, we encourage regulators to consider not only whether a key feature should be made more salient, but also the guidelines regarding when a key feature should be displayed prominently during credit card acquisition.


Author(s):  
Alexander Krüger ◽  
Ingrid Scharlau

Visual salience is a key component of attentional selection, the process that guards the scarce resources needed for conscious recognition and perception. In previous work, we proposed a measure of visual salience based on a formal theory of visual selection. However, the strength of visual salience depends on the time course as well as on local physical contrasts. Evidence from multiple experimental designs in the literature suggests that the strength of salience rises initially and declines after approximately 150 ms. The present article extends the theory-based salience measure beyond local physical contrasts to the time course of salience. It does so through a first experiment which reveals that, contrary to expectations, salience is not reduced during the first 150 ms after onset. Instead, the overall visual processing capacity is severely reduced, which corresponds to a reduced processing speed for all stimuli in the visual field. A second experiment confirms this conclusion by replicating the result. We argue that the slower stimulus processing may have been overlooked previously because the attentional selection mechanism had not yet been modeled in studies on the time course of salience.


2021 ◽  
Vol 13 (3) ◽  
pp. 338
Author(s):  
Shaobo Xia ◽  
Dong Chen ◽  
Jiju Peethambaran ◽  
Pu Wang ◽  
Sheng Xu

Tree localization in point clouds of forest scenes is critical for forest inventory. Most existing methods for terrestrial laser scanning (TLS) forest data are based on model fitting or point-wise features, which are time-consuming and sensitive to data incompleteness and complex tree structures. Furthermore, these methods often require extensive preprocessing, such as ground filtering and noise removal. The fast and easy-to-use top-based methods widely applied to airborne laser scanning (ALS) point clouds are not applicable to localizing trees in TLS point clouds because of data incompleteness and complex canopy structures. The objective of this study is to make top-based methods applicable to TLS forest point clouds. To this end, a novel point cloud transformation is presented that enhances the visual salience of tree instances and adapts top-based methods to TLS forest scenes. The input to the proposed method is the raw point cloud, and no other preprocessing steps are needed. The new method is tested on an international benchmark, and the experimental results demonstrate its necessity and effectiveness. Finally, detailed analysis and tests show that the proposed method has the potential to benefit other object localization tasks in different scenes.
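For context, the "top-based" family of methods referred to above is commonly realized as local-maxima detection on a canopy height model (CHM) raster. The sketch below illustrates that generic idea under simple assumptions; it is not the paper's transformation, and the window and height threshold are arbitrary illustrative values:

```python
import numpy as np
from scipy.ndimage import maximum_filter

def top_based_tree_tops(chm, window=5, min_height=2.0):
    """Candidate tree tops in a canopy height model (CHM) raster.

    A cell is a candidate top if it equals the maximum height within a
    (window x window) neighbourhood and exceeds min_height (metres),
    which filters out ground and low vegetation.
    """
    is_local_max = maximum_filter(chm, size=window) == chm
    return np.argwhere(is_local_max & (chm > min_height))

# Toy 10 x 10 CHM with two isolated peaks at (3, 3) and (7, 7).
chm = np.zeros((10, 10))
chm[3, 3] = 12.0
chm[7, 7] = 8.0
tops = top_based_tree_tops(chm)  # rows are (row, col) indices of peaks
```

This kind of detector is fast and needs no training, which is why making it work on (transformed) TLS data is attractive.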

