Cognition does not affect perception: Evaluating the evidence for “top-down” effects

Author(s):  
Chaz Firestone ◽  
Brian J. Scholl

Abstract
What determines what we see? In contrast to the traditional “modular” understanding of perception, according to which visual processing is encapsulated from higher-level cognition, a tidal wave of recent research alleges that states such as beliefs, desires, emotions, motivations, intentions, and linguistic representations exert direct, top-down influences on what we see. There is a growing consensus that such effects are ubiquitous, and that the distinction between perception and cognition may itself be unsustainable. We argue otherwise: None of these hundreds of studies – either individually or collectively – provides compelling evidence for true top-down effects on perception, or “cognitive penetrability.” In particular, and despite their variety, we suggest that these studies all fall prey to only a handful of pitfalls. And whereas abstract theoretical challenges have failed to resolve this debate in the past, our presentation of these pitfalls is empirically anchored: In each case, we show not only how certain studies could be susceptible to the pitfall (in principle), but also how several alleged top-down effects actually are explained by the pitfall (in practice). Moreover, these pitfalls are perfectly general, with each applying to dozens of other top-down effects. We conclude by extracting the lessons provided by these pitfalls into a checklist that future work could use to convincingly demonstrate top-down effects on visual perception. The discovery of substantive top-down effects of cognition on perception would revolutionize our understanding of how the mind is organized; but without addressing these pitfalls, no such empirical report will license such exciting conclusions.

2004 ◽  
Vol 12 (1) ◽  
pp. 45-64 ◽  
Author(s):  
MALCOLM JEEVES

Rapid developments in neuroscience over the past four decades continue to receive wide media attention. Each new reported advance points to ever tightening links between mind and brain. For many centuries, what is today called ‘mind-talk’ was familiar as ‘soul-talk’. Since, for some, the possession of a soul is what makes us human, the challenges of cognitive neuroscience directly address this. This paper affords the non-specialist a brief overview of some of the scientific evidence pointing to the ever tightening of the mind-brain links and explores its wider implications for our understanding of human nature. In particular it brings together the findings from so-called bottom-up research, in which we observe changes in behaviour and cognition resulting from experimental interventions in neural processes, with top-down research where we track changes in neural substrates accompanying habitual modes of cognition or behaviour. Further reflection alerts one to how the dualist views widely held by New Agers, some humanists and many religious people, contrast with the views of academic philosophers, theologians and biblical scholars, who agree in emphasizing the unity of the person.


Author(s):  
Martin V. Butz ◽  
Esther F. Kutter

This chapter addresses primary visual perception, detailing how visual information comes about and, as a consequence, which visual properties provide particularly useful information about the environment. The brain extracts this information systematically, and also separates redundant and complementary visual information aspects to improve the effectiveness of visual processing. Computationally, image smoothing, edge detectors, and motion detectors must be at work. These need to be applied in a convolutional manner over the fixated area – computations that are well suited to being carried out by the cortical columnar structures of the brain. On the next level, the extracted information needs to be integrated to be able to segment and detect object structures. The brain solves this highly challenging problem by incorporating top-down expectations and by integrating complementary visual information aspects, such as light reflections, texture information, line convergence information, shadows, and depth information. In conclusion, the need for integrating top-down visual expectations to form complete and stable perceptions is made explicit.
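The convolutional edge detection mentioned above can be sketched in a few lines: a small kernel is slid over the image and a weighted sum is taken at each position. The Sobel kernel and the toy image below are our own illustrative choices, not taken from the chapter; note also that, as in most image-processing libraries, the kernel is applied without flipping (strictly cross-correlation rather than convolution).

```python
# Horizontal-gradient (Sobel) kernel: responds to vertical intensity edges.
SOBEL_X = [[-1, 0, 1],
           [-2, 0, 2],
           [-1, 0, 1]]

def convolve2d(image, kernel):
    """Valid-mode 2D convolution (no padding, kernel applied unflipped)."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = []
    for i in range(out_h):
        row = []
        for j in range(out_w):
            acc = 0
            for di in range(kh):
                for dj in range(kw):
                    acc += kernel[di][dj] * image[i + di][j + dj]
            row.append(acc)
        out.append(row)
    return out

# Toy image: dark on the left, bright on the right -> a vertical edge.
image = [[0, 0, 1, 1]] * 4
edges = convolve2d(image, SOBEL_X)
# Every valid position here straddles the edge, so the filter responds;
# on a uniform image the same filter would return all zeros.
```

In the cortex, the chapter's point is that many such local filters are applied in parallel across the fixated area, which is exactly the structure a columnar architecture supports.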


Author(s):  
Tao Mei ◽  
Wei Zhang ◽  
Ting Yao

Abstract
Vision and language are two fundamental capabilities of human intelligence. Humans routinely perform tasks through the interaction of vision and language, supporting the uniquely human capacity to talk about what they see or to hallucinate a picture from a natural-language description. The question of how language interacts with vision motivates researchers to expand the horizons of the computer vision field. In particular, “vision to language” has probably been one of the most popular topics of the past five years, with significant growth in both the volume of publications and the range of applications, e.g. captioning, visual question answering, visual dialog, language navigation, etc. Such tasks enrich visual perception with more comprehensive understanding and diverse linguistic representations. Going beyond the progress made in “vision to language,” language can also contribute to vision understanding and offer new possibilities for visual content creation, i.e. “language to vision.” This process acts as a prism through which visual content is created, conditioned on the language inputs. This paper reviews the recent advances along these two dimensions: “vision to language” and “language to vision.” More concretely, the former mainly focuses on the development of image/video captioning, as well as typical encoder–decoder structures and benchmarks, while the latter summarizes the technologies of visual content creation. Real-world deployments and services of vision-and-language technologies are discussed as well.


2016 ◽  
Vol 39 ◽  
Author(s):  
Diane M. Beck ◽  
John Clevenger

Abstract
Although the authors do a valuable service by elucidating the pitfalls of inferring top-down effects, they overreach by claiming that vision is cognitively impenetrable. Their argument, and the entire question of cognitive penetrability, seems rooted in a discrete, stage-like model of the mind that is unsupported by neural data.


1999 ◽  
Vol 22 (3) ◽  
pp. 382-383
Author(s):  
Lester E. Krueger

Pylyshyn could have strengthened his case by avoiding side issues and by taking a sterner, firmer line on the unresolved (and perhaps unresolvable) problems plaguing the sensitivity (d') measure of top-down, cognitive effects, as well as the general (nearly utter!) lack of convincing evidence provided by proponents of the cognitive penetrability of visual perception.


2004 ◽  
Vol 63 (3) ◽  
pp. 143-149 ◽  
Author(s):  
Fred W. Mast ◽  
Charles M. Oman

The role of top-down processing in the horizontal-vertical line length illusion was examined by means of an ambiguous room with dual visual verticals. In one of the test conditions, the subjects were cued to one of the two verticals and were instructed to cognitively reassign the apparent vertical to the cued orientation. Once they had mentally adjusted their perception, two lines in a plus-sign configuration appeared, and the subjects had to judge which line was longer. The results showed that a line appeared longer when it was aligned with the direction of the vertical currently perceived by the subject. In another test condition, in which the subjects had all perceptual cues available, the influence was even stronger. This study provides a demonstration that top-down processing influences lower-level visual processing mechanisms.


2011 ◽  
Vol 13 (2) ◽  
pp. 201-171
Author(s):  
Nāṣir Al-Dīn Abū Khaḍīr

The ʿUthmānic way of writing (al-rasm al-ʿUthmānī) is a science that specialises in the writing of Qur'anic words in accordance with a specific ‘pattern’. It follows the writing style of the Companions at the time of the third caliph, ʿUthmān b. ʿAffān, and was attributed to ʿUthmān on the basis that he was the one who ordered the collection and copying of the Qur'an into the actual muṣḥaf. This article aims to expound on the two fundamental functions of al-rasm al-ʿUthmānī: that of paying regard to the ‘correct’ pronunciation of the words in the muṣḥaf, and the pursuit of the preclusion of ambiguity which may arise in the mind of the reader and his auditor. There is a further practical aim for this study: to show the connection between modern orthography and the ʿUthmānic rasm in order that we, nowadays, are thereby able to overcome the problems faced by calligraphers and writers of the past in their different ages and cultures.


2020 ◽  
Author(s):  
Amandine Lassalle ◽  
Michael X Cohen ◽  
Laura Dekkers ◽  
Elizabeth Milne ◽  
Rasa Gulbinaite ◽  
...  

Background: People with an Autism Spectrum Condition diagnosis (ASD) are hypothesized to show atypical neural dynamics, reflecting differences in neural structure and function. However, previous results regarding neural dynamics in autistic individuals have not converged on a single pattern of differences. It is possible that the differences are cognitive-set-specific, and we therefore measured EEG in autistic individuals and matched controls during three different cognitive states: resting, visual perception, and cognitive control.

Methods: Young adults with and without an ASD (N=17 in each group) matched on age (range 20 to 30 years), sex, and estimated Intelligence Quotient (IQ) were recruited. We measured their behavior and their EEG during rest, a task requiring low-level visual perception of gratings of varying spatial frequency, and the “Simon task” to elicit activity in the executive control network. We computed EEG power and Inter-Site Phase Clustering (ISPC; a measure of connectivity) in various frequency bands.

Results: During rest, there were no ASD vs. controls differences in EEG power, suggesting typical oscillation power at baseline. During visual processing, without pre-baseline normalization, we found decreased broadband EEG power in ASD vs. controls, but this was not the case during the cognitive control task. Furthermore, the behavioral results of the cognitive control task suggest that autistic adults were better able to ignore irrelevant stimuli.

Conclusions: Together, our results defy a simple explanation of overall differences between ASD and controls, and instead suggest a more nuanced pattern of altered neural dynamics that depends on which neural networks are engaged.
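The connectivity measure named in the Methods, Inter-Site Phase Clustering (ISPC), can be sketched as follows: the phase-angle differences between two electrode sites are treated as unit vectors in the complex plane, and the length of their mean vector (between 0 and 1) indexes how consistently the two sites are phase-locked. The toy phase values below are our own; a real analysis would first extract per-trial phase angles in each frequency band, e.g. via a wavelet or Hilbert transform.

```python
import cmath
import math

def ispc(phases_a, phases_b):
    """ISPC over trials/time points: |mean of exp(i * (phi_a - phi_b))|."""
    diffs = [cmath.exp(1j * (a - b)) for a, b in zip(phases_a, phases_b)]
    return abs(sum(diffs) / len(diffs))

# Perfectly phase-locked sites (constant 0.5 rad lag) -> ISPC of 1.
locked_a = [0.0, 1.0, 2.0, 3.0]
locked_b = [0.5, 1.5, 2.5, 3.5]

# Phase differences that point in opposite directions cancel -> ISPC near 0.
random_a = [0.0, math.pi, 0.0, math.pi]
random_b = [0.0, 0.0, math.pi, math.pi]
```

Note that ISPC is sensitive only to the consistency of the phase difference, not to its size: a fixed nonzero lag still yields an ISPC of 1, which is why the measure is read as connectivity rather than synchrony at zero lag.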

