Classification without Identification in Visual Search

1971 ◽  
Vol 23 (2) ◽  
pp. 178-186 ◽  
Author(s):  
Joan Brand

Six subjects scanned displays of random consonants for a single target which was (a) another consonant; (b) a given number; or (c) any number. A second group of six subjects took part in three comparable conditions with number displays, and letters or numbers as targets. Scanning for a number in a letter display, or for a letter in a number display, was more rapid than scanning for a target drawn from the same set as the background. Several unpractised subjects, and all the subjects who practised the task, were able to scan as fast through letters for “any number” as for a specific number, or, conversely, through digits for “any letter” as for a specific letter. The finding of different scanning rates for two targets, each precisely specified physically, depending on which class they were drawn from, runs counter to an explanation of high-speed scanning in terms of the operation of visual feature analysers. It is suggested that familiar categorization responses may be immediate and may provide the basis for the discrimination of relevant from irrelevant items in rapid visual scanning.
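
A minimal sketch of how a per-item scanning rate is conventionally estimated in this kind of task: regress scan time on the target's serial position in the display, and the slope gives the time per item. All numbers below are invented for illustration, not Brand's data.

```python
import numpy as np

# Invented scan times (s) for targets placed at different serial
# positions in the display; the slope of scan time against target
# position estimates the time spent per item.
positions = np.array([5, 10, 15, 20, 25, 30])
scan_times = np.array([0.61, 1.05, 1.52, 2.03, 2.49, 2.95])  # invented

slope, intercept = np.polyfit(positions, scan_times, 1)
print(f"scanning rate ~ {1.0 / slope:.1f} items/second")
print(f"non-search overhead ~ {intercept:.2f} s")
```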

1978 ◽  
Vol 47 (3) ◽  
pp. 803-808
Author(s):  
Deborah Lott Holmes ◽  
Lynne Werner Olsho ◽  
Richard Peper ◽  
Ann Schulte ◽  
Philip Green

54 subjects participated in a visual scanning study in which each subject was given only a single target set (1, 2, 4, 6, 8, or 10 letters). Each subject completed eight sessions of 30 trials. Although there were slight differences in the rate at which performance improved over trials, these were not systematically related to the size of the target set. Moreover, even in the last session, there were large differences in performance across the different target-set sizes. These findings suggest that Neisser's evidence for parallel preattentive processing in such tasks may have been confounded by his use of nested target sets and a within-subjects design.
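
A toy sketch of the between-subjects logic described above: each simulated target-set size gets its own learning curve, and improvement rate and final-session performance are summarized per size. Every number is invented for illustration.

```python
import numpy as np

set_sizes = [1, 2, 4, 6, 8, 10]     # one size per subject group
sessions = np.arange(1, 9)          # eight sessions of 30 trials

for k in set_sizes:
    # Invented search times: larger target sets start slower, improve
    # with practice at a similar rate, but never reach small-set speed.
    times = 0.4 + 0.05 * k + 0.08 * k * np.exp(-0.2 * sessions)
    rate = np.polyfit(sessions, times, 1)[0]
    print(f"set size {k:2d}: final time {times[-1]:.2f} s, "
          f"improvement {rate:+.3f} s/session")
```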


2019 ◽  
Author(s):  
Liuba Papeo ◽  
Salvador Soto-Faraco

Humans can search visual scenes effectively by spatial location, visual feature, or whole object. Here, we show that visual search can also benefit from fast appraisal of relations between individuals in human groups. Healthy adults searched for a facing (seemingly interacting) body dyad among nonfacing dyads, or vice versa. We varied the task parameters to emphasize processing of targets or of distractors. Facing-dyad targets were more likely to recruit attention than nonfacing-dyad targets (Experiments 1, 2, and 4). Facing-dyad distractors were checked and rejected more efficiently than nonfacing-dyad distractors (Experiment 3). Moreover, search for an individual body was harder when it was embedded in a facing dyad than in a nonfacing dyad (Experiment 5). We propose that fast grouping of interacting bodies into one attentional unit is the mechanism that accounts for efficient processing of dyads, and for the inefficient access to individual parts within a dyad.
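
For readers unfamiliar with the convention, search efficiency in such paradigms is usually summarized as the slope of response time over display set size; a shallower slope means more efficient search. A hypothetical sketch (all RTs invented, not the authors' data):

```python
import numpy as np

set_sizes = np.array([2, 4, 6, 8])   # displays of 2-8 dyads (invented)

# Invented mean RTs (ms): a shallower slope for facing-dyad targets
# would indicate that they recruit attention more efficiently.
rt_facing    = np.array([620, 680, 735, 800])
rt_nonfacing = np.array([640, 760, 885, 1005])

for label, rts in [("facing", rt_facing), ("nonfacing", rt_nonfacing)]:
    slope = np.polyfit(set_sizes, rts, 1)[0]
    print(f"{label:9s} target: {slope:5.1f} ms/item")
```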


1980 ◽  
Vol 24 (1) ◽  
pp. 317-319
Author(s):  
Anita V. Kak ◽  
James L. Knight

Text page layout may influence both reading speed and comprehension. Available data obtained from low-speed readers suggest that a two-column format is superior to a full-page format. As reading speed increases, however, visual scanning strategies change. The appropriateness of layouts designed for low-speed strategies is therefore evaluated in the context of high-speed reading.


2018 ◽  
Vol 30 (12) ◽  
pp. 1902-1915 ◽  
Author(s):  
Nick Berggren ◽  
Martin Eimer

Mental representations of target features (attentional templates) control the selection of candidate target objects in visual search. The question of where templates are maintained remains controversial. We employed the N2pc component as an electrophysiological marker of template-guided target selection to investigate whether, and under which conditions, templates are held in visual working memory (vWM). In two experiments, participants memorized one or four shapes (low vs. high vWM load) before either being tested on their memory or performing a visual search task. When targets were defined by one of two possible colors (e.g., red or green), target N2pcs were delayed under high vWM load. This suggests that the maintenance of multiple shapes in vWM interfered with the activation of color-specific search templates, supporting the hypothesis that these templates are held in vWM. This was the case despite participants always searching for the same two target colors. In contrast, the speed of target selection in a task where a single target color remained relevant throughout was unaffected by concurrent load, indicating that a constant search template for a single feature may be maintained outside vWM, in a different store. In addition, early visual N1 components to search and memory test displays were attenuated under high load, suggesting competition between external and internal attention. The size of this attenuation predicted individual vWM performance. These results provide new electrophysiological evidence for impairment of top–down attentional control mechanisms by high vWM load, demonstrating that vWM is involved in the guidance of attentional target selection during search.
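
A minimal sketch of how an N2pc is conventionally quantified, assuming the usual PO7/PO8 electrode pair, a 200–300 ms post-stimulus window, and epochs time-locked to display onset; this is not the authors' analysis code.

```python
import numpy as np

def n2pc_amplitude(po7, po8, target_side, sfreq=500):
    """Mean contralateral-minus-ipsilateral voltage in the 200-300 ms
    window. po7/po8: voltage arrays from the left/right posterior
    electrodes, time-locked to search-display onset at sample 0."""
    contra, ipsi = (po8, po7) if target_side == "left" else (po7, po8)
    win = slice(int(0.200 * sfreq), int(0.300 * sfreq))
    return float((contra[win] - ipsi[win]).mean())

# Usage on invented single-trial data (epochs of 0-500 ms at 500 Hz):
rng = np.random.default_rng(0)
po7, po8 = rng.standard_normal((2, 250))
print(f"N2pc ~ {n2pc_amplitude(po7, po8, 'left'):.2f} microvolts")
```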


1976 ◽  
Vol 43 (3) ◽  
pp. 699-702
Author(s):  
Frank H. Farley ◽  
Shu-Jen Yen

The influence of target-word affective properties on information-processing time in a high-speed visual-search task was studied. The 24 target words were embedded in random-letter matrices, with one word per matrix. Ten subjects (5 male, 5 female) were tested. Words extreme in emotionality (positive or negative affect) yielded significantly longer latencies than neutral words. The results were discussed in the light of related list-learning and problem-solving research.
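
Purely to illustrate the kind of comparison behind "significantly longer latencies," a hypothetical two-sample test (all latencies invented, not the authors' data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
emotional = 4.0 + 0.6 * rng.standard_normal(12)  # invented latencies (s)
neutral   = 3.2 + 0.6 * rng.standard_normal(12)

t, p = stats.ttest_ind(emotional, neutral)
print(f"t = {t:.2f}, p = {p:.3f}")
```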


2020 ◽  
Vol 13 (5) ◽  
Author(s):  
Jacob G. Martin ◽  
Charles E. Davis ◽  
Maximilian Riesenhuber ◽  
Simon J. Thorpe

Here, we provide an analysis of the microsaccades that occurred during continuous visual search and targeting of small faces that we pasted either into cluttered background photos or into a simple gray background. Subjects continuously used their eyes to target single 3-degree upright or inverted faces in changing scenes. As soon as the participant’s gaze reached the target face, a new face was displayed at a different, random location. Regardless of the experimental context (background scene vs. no background scene) or target eccentricity (from 4 to 20 degrees of visual angle), we found that the microsaccade rate dropped to near-zero levels within only 12 milliseconds after stimulus onset. There were almost never any microsaccades after stimulus onset and before the first saccade to the face. One subject completed 118 consecutive trials without a single microsaccade. However, in about 20% of trials there was a single microsaccade, occurring almost immediately after the preceding saccade’s offset. These microsaccades were task-oriented: their facial-landmark targeting distributions matched those of saccades in both the upright- and inverted-face conditions. Our findings suggest that a single feedforward pass through the visual hierarchy for each stimulus is likely all that is needed to sustain prolonged continuous visual search. In addition, we provide evidence that microsaccades can serve perceptual functions during continuous visual search, such as correcting saccades or accomplishing task-oriented goals.
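
A hedged sketch of the standard velocity-threshold approach to microsaccade detection (in the spirit of Engbert & Kliegl, 2003); the lambda multiplier and sampling rate below are conventional defaults, not necessarily this study's parameters. In practice, flagged samples are then grouped into events and filtered by amplitude (e.g., below 1 degree).

```python
import numpy as np

def detect_microsaccades(x, y, sfreq=1000, lam=6.0):
    """Flag samples whose 2-D eye velocity exceeds a robust, median-based
    threshold (lam "standard deviations") jointly on both axes.
    x, y: gaze position traces in degrees of visual angle."""
    vx = np.gradient(x) * sfreq          # horizontal velocity, deg/s
    vy = np.gradient(y) * sfreq          # vertical velocity, deg/s
    sx = np.sqrt(np.median(vx**2) - np.median(vx)**2)
    sy = np.sqrt(np.median(vy**2) - np.median(vy)**2)
    # A sample belongs to a (micro)saccade if it falls outside the
    # elliptic threshold in velocity space.
    return (vx / (lam * sx))**2 + (vy / (lam * sy))**2 > 1.0
```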


2019 ◽  
Author(s):  
Mohammad Shahdloo ◽  
Emin Çelik ◽  
Tolga Çukur

Humans divide their attention among multiple visual targets in daily life, and visual search becomes more difficult as the number of targets increases. The biased competition (BC) hypothesis has been put forth as an explanation for this phenomenon. BC suggests that brain responses during divided attention are a weighted linear combination of the responses during search for each target individually, and that this combination is biased by the intrinsic selectivity of cortical regions. Yet it is unknown whether attentional modulations of semantic representations of cluttered, dynamic natural scenes are consistent with this hypothesis. Here, we investigated whether BC accounts for semantic representation during natural category-based visual search. Human subjects viewed natural movies, and their whole-brain BOLD responses were recorded while they attended to “humans”, “vehicles” (i.e., single-target attention tasks), or “both humans and vehicles” (i.e., divided attention) in separate runs. We computed a voxelwise linearity index to assess whether semantic representation during divided attention can be modeled as a weighted combination of representations during the two single-target attention tasks. We then examined the bias in the weights of this linear combination across cortical ROIs. We find that semantic representations during divided attention are linear to a substantial degree, and that they are biased toward the preferred target in category-selective areas across ventral temporal cortex. Taken together, these results suggest that the biased competition hypothesis is a compelling account of attentional modulations of semantic representation across cortex.

Significance Statement: Natural vision is a complex task that involves splitting attention between multiple search targets. According to the biased competition (BC) hypothesis, the limited representational capacity of cortex inevitably leads to competition among the representations of these targets, and this competition is biased by the intrinsic selectivity of cortical areas. Here we examined BC for semantic representation of hundreds of object and action categories in natural movies. We observed that (1) semantic representation during simultaneous attention to two object categories is a weighted linear combination of the representations during attention to each category alone, and (2) this linear combination is biased toward the semantic representation of the preferred object category in strongly category-selective areas. These findings suggest BC as a compelling account of attentional modulations of semantic representation across cortex in natural vision.
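
A minimal sketch of the linearity test described above; the variable names, least-squares fit, and bias index are illustrative reconstructions under the stated model, not the authors' code.

```python
import numpy as np

def fit_biased_competition(r_h, r_v, r_div):
    """Fit r_div ~ w_h * r_h + w_v * r_v for one voxel and summarize.
    r_h, r_v: responses while attending to humans / vehicles alone;
    r_div: responses during divided attention."""
    X = np.column_stack([r_h, r_v])
    w, *_ = np.linalg.lstsq(X, r_div, rcond=None)
    linearity = np.corrcoef(X @ w, r_div)[0, 1]   # fit quality
    bias = (w[0] - w[1]) / (w[0] + w[1])          # >0: toward "humans"
    return w, linearity, bias

# Invented data: a voxel whose divided-attention response leans toward
# its preferred ("humans") single-task response.
rng = np.random.default_rng(2)
r_h, r_v = rng.standard_normal((2, 200))
r_div = 0.7 * r_h + 0.3 * r_v + 0.1 * rng.standard_normal(200)
w, lin, bias = fit_biased_competition(r_h, r_v, r_div)
print(f"weights = {np.round(w, 2)}, linearity = {lin:.2f}, bias = {bias:+.2f}")
```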

