The Visual System of Maps Portrayed in Films - A Study of the Representation of Nature in The Map Against the World and FENGSHUI

2019 ◽  
(26) ◽  
pp. 193-216
Author(s):  
우현정


Author(s):  
Elisabeth Hein

The Ternus effect refers to an ambiguous apparent motion display in which two or three elements presented in succession and shifted horizontally by one position can be perceived either as a group of elements moving together or as one element jumping across the other(s). This chapter introduces the phenomenon and describes observations made by Pikler and Ternus at the beginning of the twentieth century. Next, reasons for continued interest in the Ternus effect are discussed and an overview of factors that influence it is offered, including low-level image-based factors, for example luminance, as well as higher-level scene-based factors, for example perceptual grouping. The chapter ends with a discussion of theories regarding the mechanisms underlying the Ternus effect, providing insight into how the visual system is able to perceive coherent objects in the world despite discontinuities in the input (e.g., as a consequence of eye movements or object occlusion).


2016 ◽  
Vol 23 (5) ◽  
pp. 529-541 ◽  
Author(s):  
Sara Ajina ◽  
Holly Bridge

Damage to the primary visual cortex removes the major input from the eyes to the brain, causing significant visual loss as patients are unable to perceive the side of the world contralateral to the damage. Some patients, however, retain the ability to detect visual information within this blind region; this is known as blindsight. By studying the visual pathways that underlie this residual vision in patients, we can uncover additional aspects of the human visual system that likely contribute to normal visual function but cannot be revealed under physiological conditions. In this review, we discuss the residual abilities and neural activity that have been described in blindsight and the implications of these findings for understanding the intact system.


2018 ◽  
Author(s):  
Balaji Sriram ◽  
Alberto Cruz-Martin ◽  
Lillian Li ◽  
Pamela Reinagel ◽  
Anirvan Ghosh

The cortical code that underlies perception must enable subjects to perceive the world at timescales relevant for behavior. We find that mice can integrate visual stimuli very quickly (<100 ms) to reach plateau performance in an orientation discrimination task. To define the features of cortical activity that underlie performance at these timescales, we measured single-unit responses in the mouse visual cortex at timescales relevant to this task. In contrast to high-contrast stimuli of longer duration, which elicit reliable activity in individual neurons, stimuli at the threshold of perception elicit extremely sparse and unreliable responses in V1, such that the activity of individual neurons does not reliably report orientation. Integrating information across neurons, however, quickly improves performance. Using a linear decoding model, we estimate that integrating information over 50-100 neurons is sufficient to account for behavioral performance. Thus, at the limits of perception, the visual system is able to integrate information across a relatively small number of highly unreliable single units to generate reliable behavior.
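The pooling logic behind this result can be illustrated with a toy simulation (this is not the authors' model; the two-orientation setup, neuron counts, and firing probabilities are illustrative assumptions): each simulated neuron fires sparsely and only weakly prefers one of two orientations, yet a simple linear read-out over the population discriminates far better than any single unit.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_trial(orientation, n_neurons, p_spike=0.1):
    # Each neuron weakly prefers one of two orientations (0 or 1).
    # Near threshold, responses are sparse and unreliable: a neuron fires
    # only slightly more often for its preferred stimulus.
    prefs = np.arange(n_neurons) % 2
    p = np.where(prefs == orientation, p_spike * 1.5, p_spike)
    return rng.binomial(1, p)

def decode(spike_counts, n_neurons):
    # Linear read-out: compare summed activity of the two sub-populations;
    # break ties at random.
    prefs = np.arange(n_neurons) % 2
    evidence = spike_counts[prefs == 1].sum() - spike_counts[prefs == 0].sum()
    return int(evidence > 0) if evidence != 0 else int(rng.integers(2))

def accuracy(n_neurons, n_trials=2000):
    correct = 0
    for _ in range(n_trials):
        ori = int(rng.integers(2))
        correct += decode(simulate_trial(ori, n_neurons), n_neurons) == ori
    return correct / n_trials

for n in (2, 20, 100):
    print(n, round(accuracy(n), 2))
```

With these assumed parameters, a single pair of neurons performs near chance while a pool of ~100 supports reliable discrimination, mirroring the decoding estimate in the abstract.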


Author(s):  
Anitha Pasupathy ◽  
Yasmine El-Shamayleh ◽  
Dina V. Popovkina

Humans and other primates rely on vision. Our visual system endows us with the ability to perceive, recognize, and manipulate objects, to avoid obstacles and dangers, to choose foods appropriate for consumption, to read text, and to interpret facial expressions in social interactions. To support these visual functions, the primate brain captures a high-resolution image of the world in the retina and, through a series of intricate operations in the cerebral cortex, transforms this representation into a percept that reflects the physical characteristics of objects and surfaces in the environment. To construct a reliable and informative percept, the visual system discounts the influence of extraneous factors such as illumination, occlusions, and viewing conditions. This perceptual “invariance” can be thought of as the brain’s solution to an inverse inference problem in which the physical factors that gave rise to the retinal image are estimated. While the processes of perception and recognition seem fast and effortless, they pose a challenging computational problem that engages a substantial proportion of the primate brain.


Algorithms ◽  
2020 ◽  
Vol 13 (7) ◽  
pp. 167 ◽  
Author(s):  
Dan Malowany ◽  
Hugo Guterman

Computer vision is currently one of the most exciting and rapidly evolving fields of science, affecting numerous industries. Research and development breakthroughs, mainly in the field of convolutional neural networks (CNNs), have opened the way to unprecedented sensitivity and precision in object detection and recognition tasks. Nevertheless, findings in recent years on the sensitivity of neural networks to additive noise, lighting conditions, and the completeness of the training dataset indicate that this technology still lacks the robustness needed for the autonomous robotics industry. In an attempt to bring computer vision algorithms closer to the capabilities of a human operator, the mechanisms of the human visual system were analyzed in this work. Recent studies show that the mechanisms behind the recognition process in the human brain include continuous generation of predictions based on prior knowledge of the world. These predictions enable rapid generation of contextual hypotheses that bias the outcome of the recognition process. This mechanism is especially advantageous in situations of uncertainty, when visual input is ambiguous. In addition, the human visual system continuously updates its knowledge about the world based on the gaps between its predictions and the visual feedback. CNNs are feed-forward in nature and lack such top-down contextual mechanisms. As a result, although they process massive amounts of visual information during their operation, the information is not transformed into knowledge that can be used to generate contextual predictions and improve their performance. In this work, an architecture was designed that aims to integrate the concepts behind the top-down prediction and learning processes of the human visual system with state-of-the-art bottom-up object recognition models, e.g., deep CNNs.
The work focuses on two mechanisms of the human visual system: anticipation-driven perception and reinforcement-driven learning. Imitating these top-down mechanisms, together with state-of-the-art bottom-up feed-forward algorithms, resulted in an accurate, robust, and continuously improving target recognition model.
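The general idea of biasing a bottom-up recognizer with a top-down contextual prediction can be sketched as follows (a minimal illustration, not the paper's architecture; the class names, prior values, and Bayes-style combination rule are assumptions made for the example):

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax: raw bottom-up scores -> likelihood.
    e = np.exp(logits - logits.max())
    return e / e.sum()

def contextual_recognition(bottom_up_logits, context_prior):
    # Bayes-style combination: posterior ∝ likelihood × top-down prior.
    likelihood = softmax(bottom_up_logits)
    posterior = likelihood * context_prior
    return posterior / posterior.sum()

# Ambiguous bottom-up evidence over three hypothetical classes.
classes = ["cat", "dog", "car"]
logits = np.array([2.0, 1.9, -1.0])           # "cat" vs "dog" nearly tied
indoor_prior = np.array([0.60, 0.35, 0.05])   # cars are unlikely indoors

posterior = contextual_recognition(logits, indoor_prior)
print(classes[posterior.argmax()])  # prints "cat"
```

A feedback step in the spirit of the paper's reinforcement-driven learning would then nudge the prior toward classes confirmed by subsequent input; the prior is kept fixed here for brevity.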


Author(s):  
Matt Duncan

It seems like experience plays a positive—even essential—role in generating some knowledge. The problem is, it’s not clear what that role is. To see this, suppose that when my visual system takes in information about the world it skips the experience step and just immediately generates beliefs in me about my surroundings. A lot of philosophers think that I would still know, via perception, about the world around me. But then that raises the question: How does experience contribute to my having knowledge of my surroundings? Philosophers have given many different answers to this question. In this chapter I offer and defend a different answer that avoids the pitfalls of other answers. I argue that experience is, all by itself, a kind of knowledge—what Bertrand Russell calls “knowledge of things.” So I argue that experience helps generate knowledge simply by being knowledge.


Perception ◽  
10.1068/p5652 ◽  
2007 ◽  
Vol 36 (9) ◽  
pp. 1275-1289 ◽  
Author(s):  
Brian Rogers ◽  
Kenneth Brecher

Helmholtz's famous pincushioned chessboard figure has been used to make the point that straight lines in the world are not always perceived as straight and, conversely, that curved lines in the world can sometimes be seen as straight. However, there is little agreement as to the cause of these perceptual errors. Some authors have attributed the errors to the shape of the retina, or to the amount of cortex devoted to the processing of images falling on different parts of the retina, while others have taken the effects to indicate that visual space itself is curved. Helmholtz himself claimed that the ‘uncurved lines on the visual globe’ corresponded to ‘direction circles’ defined as those arcs described by the line of fixation when the eye moves according to Listing's law. Careful re-reading of Helmholtz, together with some additional observations, leads us to the conclusion that two other factors are also involved in the effect: (i) a lack of information about the distance of peripherally viewed objects and (ii) the preference of the visual system for seeing the pincushion squares as similar in size.


Author(s):  
Dale Purves

The reason for using vision as an example in the previous three chapters is that more is known about the human visual system and visual psychophysics than about other neural systems. But this choice raises the question of whether other systems corroborate the evidence drawn from vision. Is the same empirical strategy used in other sensory systems to contend with the same problem (i.e., the inability of animals to measure the actual properties of the world)? Based on accumulated anatomical, physiological, and psychophysical information, audition is the best bet for addressing this question in another modality. This chapter examines whether the perception of sound can also be explained empirically as a way to deal with a world in which the physical parameters of sound sources can’t be apprehended.


2019 ◽  
Vol LXXX (4) ◽  
pp. 256-267
Author(s):  
Ewa Boksa ◽  
Renata Cuprych

Because their etiological origins are often difficult to identify, reading and writing difficulties are described with inconsistent terminology in the literature. This article is a review and attempts to initiate a discussion about visual dyslexia. The authors ask whether, in the context of new neuroimaging methods and the broadly defined neurosciences, there exist reading and writing difficulties that stem from impaired functioning of the visual system, and whether these can be classified as developmental dyslexia. If developmental dyslexia is assumed to be linguistic in nature, it is phonological deficits that come to the fore in children entering the world of reading. These phonological processing deficits impair word decoding (word identification), making word recognition impossible and thus preventing access to higher-order linguistic processes, that is, comprehending the meaning of texts or building one’s own narratives.


Perception ◽  
1998 ◽  
Vol 27 (8) ◽  
pp. 889-935 ◽  
Author(s):  
Peter Lennie

The visual system has a parallel and hierarchical organization, evident at every stage from the retina onwards. Although the general benefits of parallel and hierarchical organization in the visual system are easily understood, it has not been easy to discern the function of the visual cortical modules. I explore the view that striate cortex segregates information about different attributes of the image, and dispatches it for analysis to different extrastriate areas. I argue that visual cortex does not undertake multiple relatively independent analyses of the image from which it assembles a unified representation that can be interrogated about the what and where of the world. Instead, occipital cortex is organized so that perceptually relevant information can be recovered at every level in the hierarchy, that information used in making decisions at one level is not passed on to the next level, and, with one rather special exception (area MT), through all stages of analysis all dimensions of the image remain intimately coupled in a retinotopic map. I then offer some explicit suggestions about the analyses undertaken by visual areas in occipital cortex, and conclude by examining some objections to the proposals.

