Visual Search
Recently Published Documents

TOTAL DOCUMENTS: 5614 (five years: 855)
H-INDEX: 124 (five years: 8)

2022 · Vol 22 (1) · pp. 7
Author(s): Avi M. Aizenman, Krista A. Ehinger, Farahnaz A. Wick, Ruggero Micheletto, Jungyeon Park, ...

Author(s): Sabrina Bouhassoun, Nicolas Poirel, Noah Hamlin, Gaelle E. Doucet

Abstract: Selecting relevant visual information in complex scenes, by processing either global information or local parts, helps us act efficiently within our environment and achieve goals. A global advantage (faster global than local processing) and global interference (global processing interferes with local processing) together constitute the global precedence phenomenon documented in early adulthood. However, the impact of healthy aging on this phenomenon remains unclear. We therefore collected behavioral data during a visual search task using three-level hierarchical stimuli (i.e., global, intermediate, and local levels) with several hierarchical distractors, in 50 healthy adults (26 younger, mean age 26 years, and 24 older, mean age 62 years). Results revealed that processing of information presented at the global and intermediate levels was independent of age. Conversely, older adults were slower than younger adults at local processing, suggesting lower efficiency in dealing with visual distractors during detail-oriented visual search. Although healthy older adults continued to exhibit a global precedence phenomenon, they were disproportionately less efficient at local aspects of information processing, especially when multiple visual items were displayed. Our results could have important implications for many everyday situations by suggesting that visual information processing is affected by healthy aging, even when objectively similar visual stimuli are presented.
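The global advantage and interference described above are simple reaction-time contrasts, so a minimal sketch of how such an index might be computed is shown below; the column names and RT values are hypothetical placeholders and are not the authors' data or analysis code.

```python
# Hedged sketch: global advantage = RT(local) - RT(global) per age group.
# All values are illustrative placeholders, not data from the study.
import pandas as pd

rt = pd.DataFrame({
    "age_group": ["young", "young", "older", "older"],
    "target_level": ["global", "local", "global", "local"],
    "mean_rt_ms": [520.0, 580.0, 530.0, 660.0],
})

for group, sub in rt.groupby("age_group"):
    by_level = sub.set_index("target_level")["mean_rt_ms"]
    advantage = by_level["local"] - by_level["global"]   # positive = global advantage
    print(f"{group}: global advantage = {advantage:.0f} ms")
```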


Author(s): Mike E. Le Pelley, Rhonda Ung, Chisato Mine, Steven B. Most, Poppy Watson, ...

Abstract: Existing research demonstrates different ways in which attentional prioritization of salient nontarget stimuli is shaped by prior experience: Reward learning renders signals of high-value outcomes more likely to capture attention than signals of low-value outcomes, whereas statistical learning can produce attentional suppression of the location in which salient distractor items are likely to appear. The current study combined manipulations of the value and location associated with salient distractors in visual search to investigate whether these different effects of selection history operate independently or interact to determine overall attentional prioritization of salient distractors. In Experiment 1, high-value and low-value distractors most frequently appeared in the same location; in Experiment 2, high-value and low-value distractors typically appeared in distinct locations. In both experiments, effects of distractor value and location were additive, suggesting that attention-promoting effects of value and attention-suppressing effects of statistical location-learning independently modulate overall attentional priority. Our findings are consistent with a view that sees attention as mediated by a common priority map that receives and integrates separate signals relating to physical salience and value, with signal suppression based on statistical learning determined by physical salience, but not incentive salience.
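The additive account summarized above lends itself to a toy priority-map illustration; the sketch below is a simplified reading of that account (the array size, weights, and location indices are assumptions), not the authors' model.

```python
# Hedged sketch of an additive priority map: physical salience plus learned
# value minus location-based suppression, combined with no interaction term.
# All numbers are hypothetical.
import numpy as np

n_locations = 6
salience = np.array([1.0, 1.0, 3.0, 1.0, 1.0, 1.0])          # salient distractor at index 2
value_signal = np.zeros(n_locations); value_signal[2] = 0.8   # high-value distractor colour
suppression = np.zeros(n_locations); suppression[2] = 0.5     # frequent distractor location

priority = salience + value_signal - suppression              # additive combination
print(priority, "-> highest priority at location", int(np.argmax(priority)))
```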


2022
Author(s): Aiqiang Lu, Dongmei Wang, Shengxi He, Qiuyi Zhongcheng, Wei Zhang, ...

PLoS ONE · 2022 · Vol 17 (1) · pp. e0261882
Author(s): Tamara S. Satmarean, Elizabeth Milne, Richard Rowe

Aggression and trait anger have been linked to attentional biases toward angry faces and to the attribution of hostile intent in ambiguous social situations. Memory and emotion play a crucial role in social-cognitive models of aggression, but their mechanisms of influence are not fully understood. Combining a memory task with a visual search task, this study investigated how visual working memory (WM) templates guide attention toward naturalistic face targets during visual search in 113 participants who self-reported having served a custodial sentence. Searches were faster when angry faces were held in working memory, regardless of the emotional valence of the visual search target. Higher aggression and trait anger predicted an increased working-memory-modulated attentional bias. These results are consistent with the Social Information Processing model, demonstrating that internal representations bias attention allocation toward threat and that this bias is linked to aggression and trait anger.
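As a rough illustration of the working-memory-modulated bias referred to above, a sketch of the kind of reaction-time contrast that could quantify it follows; the condition labels and values are hypothetical and do not reproduce the study's design or data.

```python
# Hedged sketch: compare search RTs as a function of which face is held in
# working memory. All values are illustrative placeholders.
import pandas as pd

trials = pd.DataFrame({
    "wm_face": ["angry", "angry", "neutral", "neutral"],
    "search_rt_ms": [640.0, 655.0, 690.0, 700.0],
})

means = trials.groupby("wm_face")["search_rt_ms"].mean()
# A positive difference = faster search when an angry face is maintained in WM.
print("WM-modulated bias:", means["neutral"] - means["angry"], "ms")
```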


Author(s): Bethany Growns, James D. Dunn, Erwin J. A. T. Mattijssen, Adele Quigley-McBride, Alice Towler

Abstract: Visual comparison, in which visual stimuli (e.g., fingerprints) are compared side by side to determine whether they originate from the same or a different source (i.e., whether they "match"), is a complex discrimination task involving many cognitive and perceptual processes. Despite the real-world consequences of this task, which is often conducted by forensic scientists, little is understood about the psychological processes underpinning this ability. There are substantial individual differences in visual comparison accuracy amongst both professionals and novices. The source of this variation is unknown, but it may reflect a domain-general and naturally varying perceptual ability. Here, we investigate this by comparing individual differences (N = 248 across two studies) in four visual comparison domains: faces, fingerprints, firearms, and artificial prints. Accuracy on all comparison tasks was significantly correlated, accounting for a substantial portion of variance in performance across tasks (e.g., 42% in Exp. 1). Importantly, this relationship cannot be attributed to participants' intrinsic motivation or to skill in other visual-perceptual tasks (visual search and visual statistical learning). This paper provides novel evidence of a reliable, domain-general visual comparison ability.
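One common way to summarize shared variance of the kind described above is through the correlation matrix and the first principal component of the task scores; the sketch below does this on simulated scores under that assumption and is not the authors' analysis.

```python
# Hedged sketch: correlations across four comparison domains and the share
# of variance carried by the first principal component. Simulated data only.
import numpy as np

rng = np.random.default_rng(0)
n = 248
latent = rng.normal(size=n)                        # hypothetical domain-general ability
domains = ["faces", "fingerprints", "firearms", "artificial_prints"]
scores = np.column_stack([latent + rng.normal(scale=1.0, size=n) for _ in domains])

print(np.corrcoef(scores, rowvar=False).round(2))  # all tasks positively correlated

eigvals = np.linalg.eigvalsh(np.cov(scores, rowvar=False))[::-1]
print("variance explained by PC1:", round(float(eigvals[0] / eigvals.sum()), 2))
```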


Author(s): T. van Biemen, R.R.D. Oudejans, G.J.P. Savelsbergh, F. Zwenk, D.L. Mann

In foul decision-making by football referees, visual search is important for gathering the task-specific information needed to determine whether a foul has occurred. Yet little is known about the visual search behaviours underpinning excellent on-field decisions. The aim of this study was to examine the on-field visual search behaviour of elite and sub-elite football referees when calling a foul during a match. In doing so, we also compared the accuracy and gaze behaviour for correct and incorrect calls. Elite and sub-elite referees (elite: N = 5, mean age ± SD = 29.8 ± 4.7 yrs, mean experience ± SD = 14.8 ± 3.7 yrs; sub-elite: N = 9, mean age ± SD = 23.1 ± 1.6 yrs, mean experience ± SD = 8.4 ± 1.8 yrs) officiated an actual football game while wearing a mobile eye-tracker, with on-field visual search behaviour compared between skill levels when calling a foul (elite: N = 66 calls; sub-elite: N = 92 calls). Results revealed that elite referees relied on a higher search rate (more fixations of shorter duration) than sub-elites, but with no differences in where they allocated their gaze, indicating that elites searched faster but did not necessarily direct gaze towards different locations. Correct decisions were associated with higher gaze entropy (i.e., less structure). In relying on more structured gaze patterns when making incorrect decisions, referees may fail to pick up information specific to the foul situation. Referee development programmes might benefit from challenging the speed of information pickup while avoiding pre-determined gaze patterns, to improve the interpretation of fouls and increase the decision-making performance of referees.
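The two gaze measures named above, search rate and gaze entropy, are straightforward to compute from fixation data, and a minimal sketch is given below; the areas of interest and durations are invented for illustration and are not the study's recordings.

```python
# Hedged sketch: search rate (fixations per second) and Shannon entropy of
# the fixation-location distribution. Fixation data are hypothetical.
import math
from collections import Counter

fixations = [("ball", 180), ("attacker", 220), ("defender", 150),
             ("contact_point", 240), ("attacker", 200)]   # (area of interest, duration in ms)

total_s = sum(dur for _, dur in fixations) / 1000.0
print(f"search rate: {len(fixations) / total_s:.1f} fixations/s")

counts = Counter(aoi for aoi, _ in fixations)
probs = [c / len(fixations) for c in counts.values()]
entropy = -sum(p * math.log2(p) for p in probs)   # higher entropy = less structured scanning
print(f"gaze entropy: {entropy:.2f} bits")
```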


Author(s): Victoria Laxton, Andrew K. Mackenzie, David Crundall

2022
Author(s): Qi Zhang, Zhibang Huang, Liang Li, Sheng Li

Visual search in a complex environment requires efficient discrimination between targets and distractors. Training is an effective way to improve visual search performance when the target does not automatically pop out from the distractors. In the present study, we trained subjects on a conjunction visual search task and examined the training effects on behavior and eye movements in Experiments 1 to 4. The results showed that training improved behavioral performance and reduced the number of saccades and the overall scanning time. Training also increased the search initiation time before the first saccade and the proportion of trials in which subjects correctly identified the target without any saccade, although these effects were modulated by stimulus parameters. In Experiment 5, we replicated these training effects while recording eye movements and EEG signals simultaneously. The results revealed significant N2pc components after stimulus onset (i.e., stimulus-locked) and before the first saccade (i.e., saccade-locked) when the search target was the trained one. These N2pc components can be considered neural signatures of the training-induced boost of covert attention to the trained target. The enhanced covert attention led to a beneficial tradeoff between search initiation time and the number of saccades, as a small increase in search initiation time could produce a larger reduction in scanning time. These findings suggest that enhanced covert attention to the target and optimized overt eye movements are coordinated to facilitate visual search training.
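The N2pc mentioned above is conventionally quantified as the contralateral-minus-ipsilateral voltage difference at posterior electrodes in roughly the 180-300 ms window; the sketch below illustrates that computation on simulated waveforms and is not the authors' EEG pipeline (the electrode pair, time window, and amplitudes are assumptions).

```python
# Hedged sketch: stimulus-locked N2pc as contralateral minus ipsilateral
# voltage averaged over a posterior time window. Simulated data only.
import numpy as np

srate = 500                                  # Hz (assumed sampling rate)
times = np.arange(-0.2, 0.6, 1 / srate)      # seconds relative to stimulus onset
rng = np.random.default_rng(1)

ipsi = rng.normal(scale=0.2, size=times.size)                         # e.g. PO7/PO8, ipsilateral
contra = ipsi + np.where((times > 0.18) & (times < 0.30), -1.0, 0.0)  # added contralateral negativity

window = (times >= 0.18) & (times <= 0.30)
n2pc = (contra - ipsi)[window].mean()
print(f"mean N2pc amplitude, 180-300 ms: {n2pc:.2f} microvolts")
```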


2022 · pp. 202-230
Author(s): Renu Sharma, Mamta Mohan, Prabha Mariappan

This chapter gives an overview of how artificial intelligence is used by the retail sector to enhance customer experience and improve profitability. It describes the role of the pandemic in stimulating AI adoption by retailers and discusses how AI tools help retailers engage customers online and in stores. Firms gain a better understanding of customers, design immersive experiences, and enhance customer lifetime value using cost-effective technology solutions. The chapter covers popular AI algorithms such as recommendation, association, classification, and predictive algorithms. Popular applications in retail include chatbots, visual search, voice search engine optimisation, in-store assistance, and virtual fitting rooms.
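To make the retail visual search application mentioned above concrete, the sketch below ranks catalogue items by cosine similarity between image embeddings; the embeddings, product names, and vector dimensionality are hypothetical placeholders rather than any particular retailer's system.

```python
# Hedged sketch: rank catalogue items by cosine similarity to a query image
# embedding. The toy vectors stand in for the output of a real image-embedding model.
import numpy as np

catalogue = {
    "red_sneaker":  np.array([0.90, 0.10, 0.30]),
    "blue_sneaker": np.array([0.80, 0.20, 0.40]),
    "leather_boot": np.array([0.10, 0.90, 0.70]),
}
query = np.array([0.85, 0.15, 0.35])   # embedding of the shopper's photo

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

ranked = sorted(catalogue, key=lambda name: cosine(query, catalogue[name]), reverse=True)
print(ranked)   # most visually similar products first
```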

