The Top-Down Influences of Characteristic Sounds on Visual Search Performance in Realistic Scenes

2019 ◽  
Author(s):  
Ghazaleh Mahzouni
2021 ◽  
pp. 174702182098635
Author(s):  
Hana Yabuki ◽  
Stephanie C Goodhew

Visual search is a psychological function integral to most people's daily lives. The extent to which visual search efficiency, and in particular the ability to use top-down attention in visual search, changes across the lifespan has been the focus of ongoing research. Here we sought to understand how the ability to frequently and dynamically change the target in a conjunction search task was affected by ageing. To do this, we compared the visual search performance of a group of younger and older adults under conditions in which the target type was determined by a cue and could change on a trial-to-trial basis (Intermixed), versus when the target type was fixed for a block of trials (Blocked). Although older adults were overall slower at the conjunction visual search task, and both groups were slower in the Intermixed than the Blocked condition, older adults were not disproportionately affected by the Intermixed relative to the Blocked condition. These results indicate that the ability to frequently change the target of visual search is preserved in older adults. This conclusion is consistent with an emerging consensus that many aspects of visual search, and top-down contributions to it, are preserved across the lifespan. It is also consistent with a growing body of work challenging neurocognitive theories of ageing that predict sweeping deficits in complex top-down components of cognition.


2019 ◽  
Author(s):  
Arnab Biswas ◽  
Devpriya Kumar

Searching for things is an essential part of our everyday life, and the way we search offers clues to how our cognitive processes function. Scientists have used the visual search task to study attention, perception, and memory. Visual search performance depends on a combination of stimulus-driven, bottom-up information; goal-oriented, top-down information; and selection history bias. These factors are difficult to separate because they interact closely. The current study presents a paradigm to isolate the effects of top-down factors in visual search. In our experiments, we asked subjects to perform two different search tasks. A subset of the trials in each task had the same bottom-up information: the same target, distractors, and target-distractor arrangement. We controlled for selection history bias by keeping the proportion of target types equivalent across tasks and randomizing the trial order for each subject. We compared the mean response times on these critical trials, which shared identical bottom-up information across the pair of tasks in each experiment. The results showed a significant difference in the mean response times on critical trials in both experiments. This paradigm therefore allows us to compare differences in top-down guidance while controlling for bottom-up factors. Pairwise comparison of top-down guidance for different features, given the same bottom-up information, lets us ask questions such as: for which features can visual search guidance be easily increased by top-down processes, and for which can it not? Answers to these questions can shed further light on the ecological and evolutionary importance of such features in perception.
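The critical-trial logic described above can be sketched as a simple comparison of mean response times for identical displays embedded in two different task contexts. All names and values below are hypothetical illustrations, not the authors' data:

```python
import random
import statistics

random.seed(0)

# Hypothetical response times (ms) for the *same* critical displays embedded
# in two different task contexts; the context labels and effect size are
# invented for illustration.
rt_task_a = [random.gauss(650, 60) for _ in range(40)]  # e.g. shape-search context
rt_task_b = [random.gauss(700, 60) for _ in range(40)]  # e.g. colour-search context

mean_a = statistics.mean(rt_task_a)
mean_b = statistics.mean(rt_task_b)

# Because the bottom-up input on critical trials is identical, any difference
# in mean RT is attributed to the top-down task set.
diff = mean_b - mean_a
print(round(mean_a, 1), round(mean_b, 1), round(diff, 1))
```

In the actual paradigm the trial order is randomized per subject and target-type proportions are equated, so selection history cannot produce this difference.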


1993 ◽  
Vol 77 (3) ◽  
pp. 867-881 ◽  
Author(s):  
Theo Boersema ◽  
Harm J. G. Zwaga ◽  
Kees Jorens

The effect distracting objects have on visual-search performance in real-life situations cannot readily be predicted from current search theories. The validity of an approach to close this gap was tested by comparing search performance for color slides of scenes in public buildings with performance for simplified computer-generated images derived from these slides. The target was always a blue rectangle in both the original slides of scenes and the computer simulations. The distractors were differently colored rectangles (not blue), and their number was varied systematically. Analysis showed a significant linear increase in search time with number of distractors, which challenges predictions of current search theories. An explanation for this contradiction is proposed. Also, search times for color slides were significantly longer than those for computer images; however, there was no significant interaction between type of stimulus and number of distractors. It is concluded that the simulated scenes yielded adequate predictions of the effect of distracting objects on search performance in real-life situations.


2015 ◽  
Vol 74 (1) ◽  
pp. 55-60 ◽  
Author(s):  
Alexandre Coutté ◽  
Gérard Olivier ◽  
Sylvane Faure

Computer use generally requires manual interaction with human-computer interfaces. In this experiment, we studied the influence of manual response preparation on co-occurring shifts of attention to information on a computer screen. Participants carried out a visual search task on a computer screen while simultaneously preparing to reach for either a proximal or a distal switch on a horizontal device, with either their right or left hand. The response properties were not predictive of the target's spatial position. The results showed that preparing a manual response influenced visual search: (1) a visual target whose location was congruent with the goal of the prepared response was found faster; (2) a visual target whose location was congruent with the laterality of the response hand was found faster; (3) these two effects had a cumulative influence on visual search performance; and (4) the magnitude of the response-goal effect on visual search was marginally negatively correlated with the speed of response execution. These results are discussed within the general framework of structural coupling between perception and motor planning.
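Finding (3), that the goal-congruence and hand-congruence effects combine cumulatively, amounts to an additive model of response-time benefits. The sketch below illustrates that idea; the baseline and effect sizes are invented for illustration:

```python
import itertools

# Hypothetical additive model of the two congruence benefits on search RT (ms).
BASE_RT = 700.0
GOAL_BENEFIT = 30.0   # target congruent with the prepared movement goal (proximal/distal)
HAND_BENEFIT = 20.0   # target congruent with the side of the responding hand

def predicted_rt(goal_congruent: bool, hand_congruent: bool) -> float:
    """Each congruence effect subtracts its benefit independently (additivity)."""
    rt = BASE_RT
    if goal_congruent:
        rt -= GOAL_BENEFIT
    if hand_congruent:
        rt -= HAND_BENEFIT
    return rt

for goal, hand in itertools.product([True, False], repeat=2):
    print(f"goal={goal!s:5} hand={hand!s:5} rt={predicted_rt(goal, hand)}")
```

Under this model the doubly congruent condition is fastest by exactly the sum of the two single-effect benefits, which is the signature of a cumulative (non-interacting) influence.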


Author(s):  
Gwendolyn Rehrig ◽  
Reese A. Cullimore ◽  
John M. Henderson ◽  
Fernanda Ferreira

Abstract According to the Gricean Maxim of Quantity, speakers provide the amount of information listeners require to correctly interpret an utterance, and no more (Grice in Logic and conversation, 1975). However, speakers often violate the Maxim of Quantity, especially when the redundant information improves reference precision (Degen et al. in Psychol Rev 127(4):591–621, 2020). Redundant (non-contrastive) information may facilitate real-world search if it narrows the spatial scope under consideration or improves target template specificity. The current study investigated whether non-contrastive modifiers that improve reference precision facilitate visual search in real-world scenes. In two visual search experiments, we compared search performance when perceptually relevant but non-contrastive modifiers were included in the search instruction with performance when they were not. Participants (NExp. 1 = 48, NExp. 2 = 48) searched for a unique target object following a search instruction that contained either no modifier, a location modifier (Experiment 1: on the top left; Experiment 2: on the shelf), or a color modifier (the black lamp). In Experiment 1 only, the target was located faster when the verbal instruction included either modifier, and a combined analysis of the scenes and conditions common to both experiments showed an overall benefit of color modifiers. The results suggest that violations of the Maxim of Quantity can facilitate search when the violations include task-relevant information that either augments the target template or constrains the search space, and when at least one modifier provides a highly reliable cue. Consistent with Degen et al. (2020), we conclude that listeners benefit from non-contrastive information that improves reference precision and engage in rational reference comprehension.
Significance statement This study investigated whether providing more information than someone needs to find an object in a photograph helps them to find that object more easily, even though it means they need to interpret a more complicated sentence. Before searching a scene, participants were either given information about where the object would be located in the scene, what color the object was, or were only told what object to search for. The results showed that providing additional information helped participants locate an object in an image more easily only when at least one piece of information communicated what part of the scene the object was in, which suggests that more information can be beneficial as long as that information is specific and helps the recipient achieve a goal. We conclude that people will pay attention to redundant information when it supports their task. In practice, our results suggest that instructions in other contexts (e.g., real-world navigation, using a smartphone app, prescription instructions, etc.) can benefit from the inclusion of what appears to be redundant information.


Ergonomics ◽  
1992 ◽  
Vol 35 (3) ◽  
pp. 243-252 ◽  
Author(s):  
Dohyung Kee ◽  
Eui S. Jung ◽  
Min K. Chung

2016 ◽  
Vol 16 (10) ◽  
pp. 18 ◽  
Author(s):  
Anna E. Hughes ◽  
Rosy V. Southwell ◽  
Iain D. Gilchrist ◽  
David J. Tolhurst
