Is early visual processing attention impenetrable?

1999 ◽  
Vol 22 (3) ◽  
pp. 400-400 ◽  
Author(s):  
Su-Ling Yeh ◽  
I-Ping Chen

Pylyshyn's effort in establishing the cognitive impenetrability of early vision is welcome. However, his view of the role of attention in early vision seems oversimplified. The allocation of focal attention manifests its effects across multiple stages of the early vision system; it is not confined to the input and output levels.

1999 ◽  
Vol 22 (3) ◽  
pp. 383-384
Author(s):  
Cyril Latimer

Pylyshyn makes a convincing case that early visual processing is cognitively impenetrable, and although I question the utility of binary oppositions such as penetrable/impenetrable, for the most part I am in agreement. However, the author does not provide explicit definitions or denotations for the terms penetrable and impenetrable, which consequently appear quite arbitrary. Furthermore, the appeal to focal attention smacks of a homunculus, and the account slips too easily between the perceptual, the cognitive, and the neurophysiological.


2005 ◽  
Vol 58 (6) ◽  
pp. 1103-1118 ◽  
Author(s):  
Sarah V. Stevenage ◽  
Elizabeth A. Lee ◽  
Nick Donnelly

Two experiments are reported to test the proposition that facial familiarity influences processing on a face classification task. Thatcherization was used to generate distorted versions of familiar and unfamiliar individuals. Using both a 2AFC ("which is odd?") task with pairs of images (Experiment 1) and an "odd/normal" task with single images (Experiment 2), results were consistent and indicated that familiarity with the target face facilitated the face classification decision. These results accord with the proposal that familiarity influences the early visual processing of faces. Results are evaluated with respect to four theoretical developments of Valentine's (1991) face-space model, and can be accommodated by the two models that assume familiarity to be encoded within a region of face space.


1993 ◽  
Vol 5 (5) ◽  
pp. 695-718 ◽  
Author(s):  
Yair Weiss ◽  
Shimon Edelman ◽  
Manfred Fahle

Performance of human subjects in a wide variety of early visual processing tasks improves with practice. HyperBF networks (Poggio and Girosi 1990) constitute a mathematically well-founded framework for understanding such improvement in performance, or perceptual learning, in the class of tasks known as visual hyperacuity. The present article concentrates on two issues raised by the recent psychophysical and computational findings reported in Poggio et al. (1992b) and Fahle and Edelman (1992). First, we develop a biologically plausible extension of the HyperBF model that takes into account basic features of the functional architecture of early vision. Second, we explore various learning modes that can coexist within the HyperBF framework and focus on two unsupervised learning rules that may be involved in hyperacuity learning. Finally, we report results of psychophysical experiments that are consistent with the hypothesis that activity-dependent presynaptic amplification may be involved in perceptual learning in hyperacuity.


2021 ◽  
Author(s):  
Mara De Rosa ◽  
Davide Crepaldi

Research on visual word identification has extensively investigated the role of morphemes, recurrent letter chunks that convey a fairly regular meaning (e.g., lead-er-ship). Masked priming studies highlighted morpheme identification in complex (e.g., sing-er) and pseudo-complex (corn-er) words, as well as in nonwords (e.g., basket-y). The present study investigated whether such sensitivity to morphemes could be rooted in the visual system's sensitivity to statistics of letter (co)occurrence. To this aim, we assessed masked priming as induced by nonword primes obtained by combining a stem (e.g., bulb) with (i) naturally frequent, derivational suffixes (e.g., -ment), (ii) non-morphological, equally frequent word endings (e.g., -idge), and (iii) non-morphological, infrequent word endings (e.g., -kle). In two additional tasks, we collected interpretability and word-likeness measures for morphologically structured nonwords, to assess whether priming is modulated by such factors. Results indicate that masked priming is not affected by either the frequency or the morphological status of word endings. Our findings are in line with models of early visual processing based on automatic stem/word extraction, and rule out letter chunk frequency as a main player in the early stages of visual word identification. Nonword interpretability and word-likeness do not affect this pattern.


Author(s):  
Mara De Rosa ◽  
Davide Crepaldi

Research on visual word identification has extensively investigated the role of morphemes, recurrent letter chunks that convey a fairly regular meaning (e.g., lead-er-ship). Masked priming studies highlighted morpheme identification in complex (e.g., sing-er) and pseudo-complex (corn-er) words, as well as in nonwords (e.g., basket-y). The present study investigated whether such sensitivity to morphemes could be rooted in the visual system's sensitivity to statistics of letter (co)occurrence. To this aim, we assessed masked priming as induced by nonword primes obtained by combining a stem (e.g., bulb) with (i) naturally frequent, derivational suffixes (e.g., -ment), (ii) non-morphological, equally frequent word endings (e.g., -idge), and (iii) non-morphological, infrequent word endings (e.g., -kle). In two additional tasks, we collected interpretability and word-likeness measures for morphologically structured nonwords, to assess whether priming is modulated by such factors. Results indicate that masked priming is not affected by either the frequency or the morphological status of word endings, a pattern that was replicated in a second experiment that also included lexical primes. Our findings are in line with models of early visual processing based on automatic stem/word extraction, and rule out letter chunk frequency as a main player in the early stages of visual word identification. Nonword interpretability and word-likeness do not affect this pattern.


1999 ◽  
Vol 22 (3) ◽  
pp. 341-365 ◽  
Author(s):  
Zenon Pylyshyn

Although the study of visual perception has made more progress in the past 40 years than any other area of cognitive science, there remain major disagreements as to how closely vision is tied to cognition. This target article sets out some of the arguments for both sides (arguments from computer vision, neuroscience, psychophysics, perceptual learning, and other areas of vision science) and defends the position that an important part of visual perception, corresponding to what some people have called early vision, is prohibited from accessing relevant expectations, knowledge, and utilities in determining the function it computes – in other words, it is cognitively impenetrable. That part of vision is complex and involves top-down interactions that are internal to the early vision system. Its function is to provide a structured representation of the 3-D surfaces of objects sufficient to serve as an index into memory, with somewhat different outputs being made available to other systems such as those dealing with motor control. The paper also addresses certain conceptual and methodological issues raised by this claim, such as whether signal detection theory and event-related potentials can be used to assess cognitive penetration of vision. A distinction is made among several stages in visual processing, including, in addition to the inflexible early-vision stage, a pre-perceptual attention-allocation stage and a post-perceptual evaluation, selection, and inference stage, which accesses long-term memory. These two stages provide the primary ways in which cognition can affect the outcome of visual perception.
The paper discusses arguments from computer vision and psychology showing that vision is “intelligent” and involves elements of “problem solving.” The cases of apparently intelligent interpretation sometimes cited in support of this claim do not show cognitive penetration; rather, they show that certain natural constraints on interpretation, concerned primarily with optical and geometrical properties of the world, have been compiled into the visual system. The paper also examines a number of examples where instructions and “hints” are alleged to affect what is seen. In each case it is concluded that the evidence is more readily assimilated to the view that when cognitive effects are found, they have a locus outside early vision, in such processes as the allocation of focal attention and the identification of the stimulus.


2004 ◽  
Vol 63 (3) ◽  
pp. 143-149 ◽  
Author(s):  
Fred W. Mast ◽  
Charles M. Oman

The role of top-down processing in the horizontal-vertical line length illusion was examined by means of an ambiguous room with dual visual verticals. In one of the test conditions, the subjects were cued to one of the two verticals and were instructed to cognitively reassign the apparent vertical to the cued orientation. Once they had mentally adjusted their perception, two lines in a plus-sign configuration appeared and the subjects had to judge which line was longer. The results showed that a line appeared longer when it was aligned with the direction of the vertical currently perceived by the subject. In another test condition, in which the subjects had all perceptual cues available, the influence was even stronger. This study provides a demonstration that top-down processing influences lower-level visual processing mechanisms.
