Self-Directed Speech Affects Visual Search Performance

2012 ◽  
Vol 65 (6) ◽  
pp. 1068-1085 ◽  
Author(s):  
Gary Lupyan ◽  
Daniel Swingley

People often talk to themselves, yet very little is known about the functions of this self-directed speech. We explore effects of self-directed speech on visual processing by using a visual search task. According to the label feedback hypothesis (Lupyan, 2007a), verbal labels can change ongoing perceptual processing—for example, actually hearing “chair” compared to simply thinking about a chair can temporarily make the visual system a better “chair detector”. Participants searched for common objects while sometimes being asked to speak the target's name aloud. Speaking facilitated search, particularly when there was a strong association between the name and the visual target. As the discrepancy between the name and the target increased, speaking began to impair performance. Together, these results speak to the power of words to modulate ongoing visual processing.

2017 ◽  
Vol 117 (1) ◽  
pp. 388-402 ◽  
Author(s):  
Michael A. Cohen ◽  
George A. Alvarez ◽  
Ken Nakayama ◽  
Talia Konkle

Visual search is a ubiquitous visual behavior, and efficient search is essential for survival. Different cognitive models have explained the speed and accuracy of search based either on the dynamics of attention or on similarity of item representations. Here, we examined the extent to which performance on a visual search task can be predicted from the stable representational architecture of the visual system, independent of attentional dynamics. Participants performed a visual search task with 28 conditions reflecting different pairs of categories (e.g., searching for a face among cars, body among hammers, etc.). The time it took participants to find the target item varied as a function of category combination. In a separate group of participants, we measured the neural responses to these object categories when items were presented in isolation. Using representational similarity analysis, we then examined whether the similarity of neural responses across different subdivisions of the visual system had the requisite structure needed to predict visual search performance. Overall, we found strong brain/behavior correlations across most of the higher-level visual system, including both the ventral and dorsal pathways when considering both macroscale sectors as well as smaller mesoscale regions. These results suggest that visual search for real-world object categories is well predicted by the stable, task-independent architecture of the visual system. NEW & NOTEWORTHY Here, we ask which neural regions have neural response patterns that correlate with behavioral performance in a visual processing task. We found that the representational structure across all of high-level visual cortex has the requisite structure to predict behavior. Furthermore, when directly comparing different neural regions, we found that they all had highly similar category-level representational structures. These results point to a ubiquitous and uniform representational structure in high-level visual cortex underlying visual object processing.
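The brain/behavior comparison described in this abstract can be sketched in a few lines: compute the pairwise dissimilarity of neural response patterns across categories and correlate it with the corresponding pairwise search times. This is a minimal illustration under assumed inputs (the function and variable names are not the authors' analysis code):

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def brain_behavior_correlation(neural_patterns, search_times):
    """Correlate a neural representational dissimilarity matrix (RDM)
    with pairwise visual-search times.

    neural_patterns: (n_categories, n_voxels) mean response per category
    search_times:    (n_categories, n_categories) mean time to find a
                     target of category i among distractors of category j
                     (assumed symmetric here for simplicity)
    """
    # Condensed upper-triangle neural RDM: 1 - Pearson r between patterns
    neural_rdm = pdist(neural_patterns, metric="correlation")
    # Matching upper triangle of the behavioral matrix (same pair order)
    iu = np.triu_indices(search_times.shape[0], k=1)
    behavioral = search_times[iu]
    # Rank correlation: do neurally similar pairs yield slower search?
    rho, p = spearmanr(neural_rdm, behavioral)
    return rho, p
```

In this scheme, each region or sector of interest would contribute its own `neural_patterns` matrix, so macroscale sectors and smaller mesoscale regions can be compared on the same behavioral data.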


SLEEP ◽  
2018 ◽  
Vol 41 (suppl_1) ◽  
pp. A252-A252
Author(s):  
E Giora ◽  
A Galbiati ◽  
M Zucconi ◽  
L Ferini-Strambi

Author(s):  
Jeffrey C. Joe ◽  
Casey R. Kovesdi ◽  
Andrea Mack ◽  
Tina Miyake

This study examined the relationship between how visual information is organized and people’s visual search performance. Specifically, we systematically varied how visual search information was organized (from well-organized to disorganized) and then asked participants to perform a visual search task: finding and identifying a number of visual targets within a field of visual non-targets. We hypothesized that the visual search task would be easier when the information was well organized than when it was disorganized. We further speculated that visual search performance would be mediated by cognitive workload, and that the results could be generally described by the well-established speed-accuracy tradeoff phenomenon. This paper presents the details of the study we designed and our results.


1992 ◽  
Vol 74 (1) ◽  
pp. 67-76 ◽  
Author(s):  
Don Diener ◽  
Francine Linda Greenstein ◽  
P. Diane Turnbough

Groups of women differing in the severity of reported premenstrual symptoms were compared over two menstrual cycles on a digit-span task, a visual-search task, and a combination of the two. Neither group exhibited large performance changes during the premenstrual phase of the cycle. High-symptom women differed somewhat from low-symptom women in the effect of menstrual phase on digit-span performance, recalling slightly fewer series correctly during the premenstrual phase. The response latency of high-symptom women on the visual-search task was substantially longer than that of the low-symptom women regardless of menstrual phase. These results suggest that there may be stable differences between high-symptom and low-symptom subjects that are greater than the cyclical fluctuation within either group.


Author(s):  
P. Manivannan ◽  
Sara Czaja ◽  
Colin Drury ◽  
Chi Ming Ip

Visual search is an important component of many real-world tasks such as industrial inspection and driving. Several studies have shown that age has an impact on visual search performance. In general, older people demonstrate poorer performance on such tasks than younger people. However, there is controversy regarding the source of the age-performance effect. The objective of this study was to examine the relationship between component abilities and visual search performance, in order to identify the locus of age-related performance differences. Six abilities, including reaction time, working memory, selective attention, and spatial localization, were identified as important components of visual search performance. Thirty-two subjects ranging in age from 18 to 84 years, categorized into three age groups (young, middle, and older), participated in the study. Their component abilities were measured and they performed a visual search task. The visual search task varied in complexity in terms of the type of targets to be detected. Significant relationships were found between some of the component skills and search performance. Significant age effects were also observed. A model was developed using hierarchical multiple linear regression to explain the variance in search performance. Results indicated that reaction time, selective attention, and age were important predictors of search performance, with reaction time and selective attention accounting for most of the variance.
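Hierarchical multiple linear regression of the kind reported here enters predictor blocks in a fixed order and asks how much variance each block adds beyond the earlier ones. A minimal sketch with synthetic data (the block order and variable names are assumptions, not the study's analysis code):

```python
import numpy as np

def r_squared(X, y):
    """R^2 of an ordinary least-squares fit of y on X (with intercept)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - np.sum(resid ** 2) / np.sum((y - y.mean()) ** 2)

def hierarchical_regression(y, blocks):
    """Enter predictor blocks in order; report each step's cumulative R^2
    and the increment (Delta R^2) it adds beyond the previous blocks."""
    results, X, prev_r2 = [], np.empty((len(y), 0)), 0.0
    for name, block in blocks:
        X = np.column_stack([X, block])
        r2 = r_squared(X, y)
        results.append((name, r2, r2 - prev_r2))
        prev_r2 = r2
    return results
```

Entering age before (or after) the ability measures shows whether age explains variance in search performance beyond, say, reaction time and selective attention, which is how the locus of an age effect can be probed.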


Author(s):  
Brian D. Simpson ◽  
Robert S. Bolia ◽  
Richard L. McKinley ◽  
Douglas S. Brungart

The effects of hearing protection on sound localization were examined in the context of an auditory-cued visual search task. Participants were required to locate a visual target in a field of 5, 20, or 50 visual distractors randomly distributed throughout ±180° of azimuth and from approximately −70° to +90° in elevation. Four conditions were examined in which an auditory cue, spatially co-located with the visual target, was presented. In these conditions, participants wore (1) earplugs, (2) earmuffs, (3) both earplugs and earmuffs, or (4) no hearing protection. In addition, a control condition was examined in which no auditory cue was provided. Visual search times and head motion data suggest that the degree to which localization cues are disrupted with hearing protection devices varies with the type of device worn. Moreover, when both earplugs and earmuffs are worn, search times approach those found with no auditory cue, suggesting that sound localization cues are nearly completely eliminated in this condition.


2020 ◽  
Author(s):  
Nir Shalev ◽  
Sage Boettcher ◽  
Hannah Wilkinson ◽  
Gaia Scerif ◽  
Anna C. Nobre

It is believed that children have difficulties in guiding attention while facing distraction. However, developmental accounts of spatial attention rely on traditional search designs using static displays. In real life, dynamic environments can embed regularities that afford anticipation and benefit performance. We developed a dynamic visual-search task to test the ability of children to benefit from spatio-temporal regularities to detect goal-relevant targets appearing within an extended dynamic context amidst irrelevant distracting stimuli. We compared children and adults in detecting predictable vs. unpredictable targets fading in and out among competing distracting stimuli. While overall search performance was poorer in children, both groups detected more predictable targets. This effect was confined to task-relevant information. Additionally, we report how predictions are related to individual differences in attention. Altogether, our results indicate a striking capacity of prediction-led guidance towards task-relevant information in dynamic environments, refining traditional views about poor goal-driven attention in childhood.


2021 ◽  
pp. 174702182098635
Author(s):  
Hana Yabuki ◽  
Stephanie C. Goodhew

Visual search is a psychological function integral to most people’s daily lives. The extent to which visual search efficiency, and in particular the ability to use top-down attention in visual search, changes across the lifespan has been the focus of ongoing research. Here we sought to understand how the ability to frequently and dynamically change the target in a conjunction search task was affected by ageing. To do this, we compared visual search performance of a group of younger and older adults under conditions in which the target type was determined by a cue and could change on a trial-to-trial basis (Intermixed), versus when the target type was fixed for a block of trials (Blocked). Although older adults were overall slower at the conjunction visual search task, and both groups were slower in the Intermixed than in the Blocked condition, older adults were not disproportionately affected by the Intermixed relative to the Blocked condition. These results indicate that the ability to frequently change the target of visual search is preserved in older adults. This conclusion is consistent with an emerging consensus that many aspects of visual search and top-down contributions to it are preserved across the lifespan. It is also consistent with a growing body of work which challenges the neurocognitive theories of ageing that predict sweeping deficits in complex top-down components of cognition.


1981 ◽  
Vol 53 (2) ◽  
pp. 411-418
Author(s):  
Lance A. Portnoff ◽  
Jerome A. Yesavage ◽  
Mary B. Acker

Disturbances in attention are among the most frequent cognitive abnormalities in schizophrenia. Recent research has suggested that some schizophrenics have difficulty with visual tracking, which is suggestive of attentional deficits. To investigate differential visual-search performance by schizophrenics, 15 chronic undifferentiated and 15 paranoid schizophrenics were compared with 15 normals on two tests measuring visual search in a systematic and an unsystematic stimulus mode. Chronic schizophrenics showed difficulty with both kinds of visual-search tasks. In contrast, paranoids had only a deficit in the systematic visual-search task. Their ability for visual search in an unsystematized stimulus array was equivalent to that of normals. Although replication and cross-validation are needed to confirm these findings, it appears that the two tests of visual search may provide a useful ancillary method for differential diagnosis between these two types of schizophrenia.


2021 ◽  
Author(s):  
Thomas L. Botch ◽  
Brenda D. Garcia ◽  
Yeo Bi Choi ◽  
Caroline E. Robertson

Visual search is a universal human activity in naturalistic environments. Traditionally, visual search is investigated under tightly controlled conditions, where head-restricted participants locate a minimalistic target in a cluttered array presented on a computer screen. Do classic findings of visual search extend to naturalistic settings, where participants actively explore complex, real-world scenes? Here, we leverage advances in virtual reality (VR) technology to relate individual differences in classic visual search paradigms to naturalistic search behavior. In a naturalistic visual search task, participants looked for an object within their environment via a combination of head-turns and eye-movements using a head-mounted display. Then, in a classic visual search task, participants searched for a target within a simple array of colored letters using only eye-movements. We tested how set size, a property known to limit visual search within computer displays, predicts the efficiency of search behavior inside immersive, real-world scenes that vary in levels of visual clutter. We found that participants' search performance was impacted by the level of visual clutter within real-world scenes. Critically, we also observed that individual differences in visual search efficiency in classic search predicted efficiency in real-world search, but only when the comparison was limited to the forward-facing field of view for real-world search. These results demonstrate that set size is a reliable predictor of individual performance across computer-based and active, real-world visual search behavior.
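Set-size effects of the kind compared in this abstract are conventionally summarized as a search slope: the extra time each additional display item costs. A minimal sketch of computing and comparing that index across participants (illustrative only; the function names are assumptions, not the authors' code):

```python
import numpy as np

def search_slope(set_sizes, reaction_times):
    """Classic index of search efficiency: the slope (seconds per item)
    of a linear fit of reaction time against display set size."""
    slope, _intercept = np.polyfit(set_sizes, reaction_times, deg=1)
    return slope

def classic_vs_realworld(classic_slopes, realworld_slopes):
    """Pearson correlation of per-participant efficiency in a classic
    letter-array task with efficiency in an immersive real-world task."""
    return np.corrcoef(classic_slopes, realworld_slopes)[0, 1]
```

A flatter slope indicates more efficient search; correlating per-participant slopes across the two tasks is one way to test whether classic set-size effects generalize to immersive scenes.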

