Expert perceivers and perceptual learning

1999 ◽  
Vol 22 (3) ◽  
pp. 396-397 ◽  
Author(s):  
Paul T. Sowden

Expert perceivers may learn more than just where to apply visual processing, or which part of the output from the visual system to attend to. Their early visual system may be modified, as a result of their specific needs, through a process of early visual learning. We argue that this is, in effect, a form of long-term, indirect cognitive penetration of early vision.

2020 ◽  
pp. 287-296
Author(s):  
Daniel C. Javitt

Glutamate theories of schizophrenia were first proposed over 30 years ago and have since become increasingly accepted. These theories are supported by the ability of N-methyl-D-aspartate receptor (NMDAR) antagonists such as phencyclidine (PCP) or ketamine to induce symptoms that closely resemble those of schizophrenia. Moreover, NMDAR antagonists uniquely reproduce the level of negative symptoms and cognitive deficits observed in schizophrenia, suggesting that such models may be particularly appropriate to poor-outcome forms of the disorder. As opposed to dopamine, which is most prominent within frontostriatal brain regions, glutamate neurons are present throughout cortex and subcortical structures. Thus, NMDAR theories predict widespread disturbances across cortical and thalamic pathways, including sensory brain regions. In auditory cortex, NMDARs play a critical role in the generation of mismatch negativity (MMN), which may therefore serve as a translational marker of NMDAR dysfunction across species. In the visual system, NMDARs play a critical role in the function of the magnocellular pathway. Deficits in both auditory and visual processing contribute to social and communication deficits, which, in turn, lead to poor functional outcome. By contrast, NMDAR dysfunction within the frontohippocampal system may contribute to well-described deficits in working memory, executive processing, and long-term memory formation. Deficits in NMDAR function may be driven by disturbances in presynaptic glutamate release, impaired metabolism of NMDAR modulators such as glycine or D-serine, or intrinsic abnormalities in NMDARs themselves.


Perception ◽  
1997 ◽  
Vol 26 (1_suppl) ◽  
pp. 24-24 ◽  
Author(s):  
J H van Hateren

The first steps of processing in the visual system of the blowfly are well suited for studying the relationship between the properties of the environment and the function of visual processing (eg Srinivasan et al, 1982 Proceedings of the Royal Society, London B 216 427; van Hateren, 1992 Journal of Comparative Physiology A 171 157). Although the early visual system appears to be linear to some extent, there are also reports of functionally significant nonlinearities (Laughlin, 1981 Zeitschrift für Naturforschung 36c 910). Recent theories using information theory to understand the early visual system perform reasonably well, but not quite as well as the real visual system when confronted with natural stimuli [eg van Hateren, 1992 Nature (London) 360 68]. The main problem seems to be that they lack a component that adapts with the right time course to changes in stimulus statistics (eg the local average light intensity). In order to study this problem of adaptation with a relatively simple, yet realistic, stimulus I recorded time series of natural intensities, and played them back via a high-brightness LED to the visual system of the blowfly (Calliphora vicina). The power spectra of the intensity measurements and photoreceptor responses behave approximately as 1/f, with f the temporal frequency, whilst those of second-order neurons (LMCs) are almost flat. The probability distributions of the responses of LMCs are almost gaussian and largely independent of the input contrast, unlike the distributions of photoreceptor responses and intensity measurements. These results suggest that LMCs are in effect executing a form of contrast normalisation in the time domain.
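The closing claim, that LMCs effectively perform contrast normalisation in the time domain, can be illustrated with a toy sketch (my construction under assumed parameters, not van Hateren's model): divisively normalising a signal by running estimates of its local mean and local contrast yields responses whose spread is largely independent of the input contrast.

```python
import math
import random

def normalise(xs, alpha=0.05, eps=1e-6):
    """Divide out running estimates of local mean and contrast."""
    mean, var, out = xs[0], 0.0, []
    for x in xs:
        mean = (1 - alpha) * mean + alpha * x            # local average intensity
        var = (1 - alpha) * var + alpha * (x - mean) ** 2
        out.append((x - mean) / (math.sqrt(var) + eps))  # contrast-normalised response
    return out

def spread(xs):
    """Standard deviation of a sequence."""
    m = sum(xs) / len(xs)
    return math.sqrt(sum((x - m) ** 2 for x in xs) / len(xs))

random.seed(0)
# The same kind of signal presented at low and at high contrast.
low  = [1.0 + 0.1 * random.gauss(0, 1) for _ in range(5000)]
high = [1.0 + 0.5 * random.gauss(0, 1) for _ in range(5000)]

# Raw spreads differ fivefold; after normalisation (discarding an
# initial adaptation period) they should be nearly equal.
print(round(spread(normalise(low)[500:]), 2),
      round(spread(normalise(high)[500:]), 2))
```

The exponential smoothing constant `alpha` plays the role of the adaptation time course the abstract says existing theories lack; the real neural dynamics are of course richer than a single fixed time constant.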


2019 ◽  
pp. 3-37
Author(s):  
Kevin Connolly

This introductory chapter explains perceptual learning as long-term changes in perception that are the result of practice or experience. It distinguishes perceptual learning from other nearby concepts, including perceptual development and cognitive penetration. It then delineates different kinds of perceptual learning. For instance, some kinds of perceptual learning involve changes in how one attends, while other cases involve a learned ability to differentiate two properties, or to perceive two properties as unified. The chapter uses this taxonomy to distinguish different cases of perceptual learning in the philosophical literature, including by contemporary philosophers such as Susanna Siegel, Christopher Peacocke, and Charles Siewert. Finally, it outlines the function of perceptual learning. Perceptual learning serves to offload onto our quick perceptual systems what would be a slower and more cognitively taxing task were it to be done in a controlled, deliberate manner. The upshot is that this frees up cognitive resources for other tasks.


1993 ◽  
Vol 5 (5) ◽  
pp. 695-718 ◽  
Author(s):  
Yair Weiss ◽  
Shimon Edelman ◽  
Manfred Fahle

Performance of human subjects in a wide variety of early visual processing tasks improves with practice. HyperBF networks (Poggio and Girosi 1990) constitute a mathematically well-founded framework for understanding such improvement in performance, or perceptual learning, in the class of tasks known as visual hyperacuity. The present article concentrates on two issues raised by the recent psychophysical and computational findings reported in Poggio et al. (1992b) and Fahle and Edelman (1992). First, we develop a biologically plausible extension of the HyperBF model that takes into account basic features of the functional architecture of early vision. Second, we explore various learning modes that can coexist within the HyperBF framework and focus on two unsupervised learning rules that may be involved in hyperacuity learning. Finally, we report results of psychophysical experiments that are consistent with the hypothesis that activity-dependent presynaptic amplification may be involved in perceptual learning in hyperacuity.
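The HyperBF idea, Gaussian basis units whose output weights are tuned by experience, can be sketched in miniature (a toy construction with assumed parameters, not the authors' model): a one-dimensional network learns to judge the sign of a small offset, loosely analogous to a vernier hyperacuity judgement improving with practice.

```python
import math
import random

CENTERS = [c / 10.0 for c in range(-10, 11)]   # Gaussian basis-unit centres on [-1, 1]
SIGMA = 0.15                                   # basis-unit width (assumed)

def activations(x):
    """Responses of the Gaussian basis units to input x."""
    return [math.exp(-((x - c) ** 2) / (2 * SIGMA ** 2)) for c in CENTERS]

def predict(weights, x):
    """Network output: weighted sum of basis activations."""
    return sum(w * a for w, a in zip(weights, activations(x)))

random.seed(1)
weights = [0.0] * len(CENTERS)
for t in range(3000):                          # supervised delta-rule training
    lr = 0.2 / (1 + t / 300.0)                 # decaying learning rate
    x = random.uniform(-1, 1)
    target = 1.0 if x > 0 else -1.0            # sign of the offset
    err = target - predict(weights, x)
    weights = [w + lr * err * a for w, a in zip(weights, activations(x))]

# Fraction of small test offsets whose sign is now judged correctly.
tests = [i / 100.0 for i in range(-50, 51) if i != 0]
correct = sum((predict(weights, x) > 0) == (x > 0) for x in tests)
print(f"{correct} of {len(tests)} test offsets classified correctly")
```

The article's point is that several learning modes, including unsupervised rules, can coexist in this framework; the supervised delta rule above stands in only to show how basis-function output weights can be shaped by practice.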


1999 ◽  
Vol 22 (3) ◽  
pp. 341-365 ◽  
Author(s):  
Zenon Pylyshyn

Although the study of visual perception has made more progress in the past 40 years than any other area of cognitive science, there remain major disagreements as to how closely vision is tied to cognition. This target article sets out some of the arguments for both sides (arguments from computer vision, neuroscience, psychophysics, perceptual learning, and other areas of vision science) and defends the position that an important part of visual perception, corresponding to what some people have called early vision, is prohibited from accessing relevant expectations, knowledge, and utilities in determining the function it computes – in other words, it is cognitively impenetrable. That part of vision is complex and involves top-down interactions that are internal to the early vision system. Its function is to provide a structured representation of the 3-D surfaces of objects sufficient to serve as an index into memory, with somewhat different outputs being made available to other systems such as those dealing with motor control. The paper also addresses certain conceptual and methodological issues raised by this claim, such as whether signal detection theory and event-related potentials can be used to assess cognitive penetration of vision. A distinction is made among several stages in visual processing, including, in addition to the inflexible early-vision stage, a pre-perceptual attention-allocation stage and a post-perceptual evaluation, selection, and inference stage, which accesses long-term memory. These two stages provide the primary ways in which cognition can affect the outcome of visual perception.
The paper discusses arguments from computer vision and psychology showing that vision is “intelligent” and involves elements of “problem solving.” The cases of apparently intelligent interpretation sometimes cited in support of this claim do not show cognitive penetration; rather, they show that certain natural constraints on interpretation, concerned primarily with optical and geometrical properties of the world, have been compiled into the visual system. The paper also examines a number of examples where instructions and “hints” are alleged to affect what is seen. In each case it is concluded that the evidence is more readily assimilated to the view that when cognitive effects are found, they have a locus outside early vision, in such processes as the allocation of focal attention and the identification of the stimulus.


1999 ◽  
Vol 22 (3) ◽  
pp. 368-369 ◽  
Author(s):  
Jeffrey S. Bowers

According to Pylyshyn, the early visual system is able to categorize perceptual inputs into shape classes based on visual similarity criteria; it is also suggested that written words may be categorized within early vision. This speculation is contradicted by the fact that visually unrelated exemplars of a given letter (e.g., a/A) or word (e.g., read/READ) map onto common visual categories.


2004 ◽  
Vol 44 (17) ◽  
pp. 2083-2089 ◽  
Author(s):  
Tobi Delbrück ◽  
Shih-Chii Liu

2017 ◽  
Vol 117 (1) ◽  
pp. 388-402 ◽  
Author(s):  
Michael A. Cohen ◽  
George A. Alvarez ◽  
Ken Nakayama ◽  
Talia Konkle

Visual search is a ubiquitous visual behavior, and efficient search is essential for survival. Different cognitive models have explained the speed and accuracy of search based either on the dynamics of attention or on similarity of item representations. Here, we examined the extent to which performance on a visual search task can be predicted from the stable representational architecture of the visual system, independent of attentional dynamics. Participants performed a visual search task with 28 conditions reflecting different pairs of categories (e.g., searching for a face among cars, body among hammers, etc.). The time it took participants to find the target item varied as a function of category combination. In a separate group of participants, we measured the neural responses to these object categories when items were presented in isolation. Using representational similarity analysis, we then examined whether the similarity of neural responses across different subdivisions of the visual system had the requisite structure needed to predict visual search performance. Overall, we found strong brain/behavior correlations across most of the higher-level visual system, including both the ventral and dorsal pathways when considering both macroscale sectors as well as smaller mesoscale regions. These results suggest that visual search for real-world object categories is well predicted by the stable, task-independent architecture of the visual system.

NEW & NOTEWORTHY Here, we ask which neural regions have neural response patterns that correlate with behavioral performance in a visual processing task. We found that the representational structure across all of high-level visual cortex has the requisite structure to predict behavior. Furthermore, when directly comparing different neural regions, we found that they all had highly similar category-level representational structures. These results point to a ubiquitous and uniform representational structure in high-level visual cortex underlying visual object processing.
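The representational-similarity logic can be sketched with toy numbers (my construction, not the study's data or analysis code): build a matrix of pairwise neural dissimilarities between category response patterns, build the matching matrix of pairwise search times, and rank-correlate the two. Since search among neurally similar categories is assumed to be slower, the expected signature is a strong negative rank correlation. The category names are taken from the abstract's examples.

```python
import math
import random
from itertools import combinations

random.seed(2)

# Toy "neural" response patterns, one vector per object category.
patterns = {
    "face":   [random.gauss(0, 1) for _ in range(50)],
    "body":   [random.gauss(0, 1) for _ in range(50)],
    "car":    [random.gauss(0, 1) for _ in range(50)],
    "hammer": [random.gauss(0, 1) for _ in range(50)],
}

def dissimilarity(u, v):
    """1 - Pearson correlation between two response patterns."""
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    su = math.sqrt(sum((a - mu) ** 2 for a in u))
    sv = math.sqrt(sum((b - mv) ** 2 for b in v))
    return 1 - cov / (su * sv)

pairs = list(combinations(patterns, 2))
neural = [dissimilarity(patterns[a], patterns[b]) for a, b in pairs]

# Toy search times, constructed so that more similar category pairs
# (lower neural dissimilarity) yield slower search, plus a little noise.
search_rt = [2.0 - d + random.gauss(0, 0.05) for d in neural]

def ranks(xs):
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0] * len(xs)
    for rank, i in enumerate(order):
        r[i] = rank
    return r

def spearman(x, y):
    """Spearman rank correlation (no tie correction)."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

rho = spearman(neural, search_rt)
print("Spearman rho, neural dissimilarity vs search time:", round(rho, 2))
```

In the study this comparison is repeated per brain region, which is how the authors localise where the representational structure predicts behaviour; the toy above shows only a single such comparison.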


2016 ◽  
Vol 16 (12) ◽  
pp. 554
Author(s):  
Antoine Barbot ◽  
Krystel Huxlin ◽  
Duje Tadin ◽  
Geunyoung Yoon

2021 ◽  
pp. 106808
Author(s):  
Luís Miguel Lacerda ◽  
Alki Liasis ◽  
Sian E Handley ◽  
Martin Tisdall ◽  
J Helen Cross ◽  
...  
