Dissociation between Visual Perception of Allocentric Distance and Visually Directed Walking of its Extent

Perception ◽  
10.1068/p5444 ◽  
2005 ◽  
Vol 34 (11) ◽  
pp. 1399-1416 ◽  
Author(s):  
Nobuo Kudoh

Walking without vision to previously viewed targets was compared with visual perception of allocentric distance in two experiments. Experimental evidence had shown that physically equal distances in a sagittal plane on the ground were perceptually underestimated as compared with those in a frontoparallel plane, even under full-cue conditions. In spite of this perceptual anisotropy of space, Loomis et al (1992 Journal of Experimental Psychology: Human Perception and Performance 18 906–921) found that subjects could match both types of distances in a blind-walking task. In experiment 1 of the present study, subjects were required to reproduce the extent of allocentric distance between two targets either by walking towards the targets or by walking in a direction incompatible with the locations of the targets. The latter condition required subjects to derive an accurate allocentric distance from information based on the perceived locations of the two targets. The walked distance in the two conditions was almost identical whether the two targets were presented in depth (depth-presentation condition) or in the frontoparallel plane (width-presentation condition). The results of a perceptual-matching task showed that the depth distances had to be much greater than the width distances in order to be judged equal in length (depth compression). In experiment 2, subjects were required to reproduce the extent of allocentric distance from the viewing point by blindly walking in a direction other than toward the targets. The walked distance in the depth-presentation condition was shorter than that in the width-presentation condition. This anisotropy in motor responses, however, was mainly caused by apparent overestimation of length oriented in width, not by depth compression. In addition, the walked distances were much better scaled than those in experiment 1.
These results suggest that the perceptual and motor systems share a common representation of the location of targets, whereas a dissociation in allocentric distance exists between the two systems in full-cue conditions.


Author(s):  
Jakub Krukar ◽  
Charu Manivannan ◽  
Mehul Bhatt ◽  
Carl Schultz

Isovist analysis has typically been applied to the study of human perception in indoor built-up spaces. Although predominantly performed in 2D, recent work has explored isovist techniques in 3D. However, 3D applications of isovist analysis simply extrapolate the assumptions of the 2D counterpart, without questioning whether these assumptions remain valid in 3D. They do not: because human perception is embodied, the perception of vertical space differs from the perception of horizontal space. We present a user study demonstrating that an embodied 3D isovist that accounts for this phenomenon (formalised on the basis of the notion of spatial artefacts) predicts human perception of space more accurately than the generic volumetric 3D isovist, specifically with respect to spaciousness and complexity. We suggest that the embodied 3D isovist should be used for 3D analyses in which human perception is of key interest.


Perception ◽  
1994 ◽  
Vol 23 (9) ◽  
pp. 1037-1048 ◽  
Author(s):  
Sachio Nakamizo ◽  
Koichi Shimono ◽  
Michiaki Kondo ◽  
Hiroshi Ono

Visual directions of the two stimuli in Panum's limiting case with different interstimulus and convergence distances confirmed the predictions from the reformulated Wells–Hering's laws of visual direction. In experiment 1, six observers each converged on the midpoint of the interstimulus axis at 30, 60, and 90 cm from the eyes and adjusted a probe on the fixation plane to be in the same visual direction as that of each stimulus. Visual direction of the far stimulus was always nonveridical, whereas that of the near stimulus was veridical only when its retinal disparity was small. In experiment 2, three observers each converged on the intersection of the mid-sagittal plane and (a) the frontoparallel plane of the near stimulus, (b) that of the midpoint between the two stimuli, or (c) that of the far stimulus. The midpoint of the interstimulus axis was 60 cm from the eyes. Visual direction of the far stimulus was veridical only with convergence at the far plane. Visual direction of the near stimulus was veridical with convergence at the near plane, and also, only when its retinal disparity was small, with convergence at the two other planes.


2020 ◽  
Vol 287 (1930) ◽  
pp. 20200825
Author(s):  
Zixuan Wang ◽  
Yuki Murai ◽  
David Whitney

Perceiving the positions of objects is a prerequisite for most other visual and visuomotor functions, but human perception of object position varies from one individual to the next. The source of these individual differences in perceived position and their perceptual consequences are unknown. Here, we tested whether idiosyncratic biases in the underlying representation of visual space propagate across different levels of visual processing. In Experiment 1, using a position matching task, we found stable, observer-specific compressions and expansions within local regions throughout the visual field. We then measured Vernier acuity (Experiment 2) and perceived size of objects (Experiment 3) across the visual field and found that individualized spatial distortions were closely associated with variations in both visual acuity and apparent object size. Our results reveal idiosyncratic biases in perceived position and size, originating from a heterogeneous spatial resolution that carries across the visual hierarchy.


2012 ◽  
Vol 24 (7) ◽  
pp. 1610-1624 ◽  
Author(s):  
Jon Andoni Duñabeitia ◽  
Maria Dimitropoulou ◽  
Jonathan Grainger ◽  
Juan Andrés Hernández ◽  
Manuel Carreiras

This study was designed to explore whether the human visual system has different degrees of tolerance to character position changes for letter strings, digit strings, and symbol strings. An explicit perceptual matching task was used (same–different judgment), and participants' electrophysiological activity was recorded. Materials included trials in which the referent stimulus and the target stimulus were identical or differed either by two character replacements or by transposing two characters. Behavioral results showed clear differences in the magnitude of the transposed-character effect for letters as compared with digit and symbol strings. Electrophysiological data confirmed this observation, showing an N2 character transposition effect that was only present for letter strings. An earlier N1 transposition effect was also found for letters but was absent for symbols and digits, whereas a later P3 effect was found for all types of string. These results provide evidence for a position coding mechanism that is specific to letter strings, that was most prominent in an epoch between 200 and 325 msec, and that operates in addition to more general-purpose position coding mechanisms.


2020 ◽  
Author(s):  
Wen-Kai You ◽  
Shreesh P. Mysore

Mice are increasingly commonly used to study visual behaviors, but the time-course of their perceptual dynamics is unclear. Here, using conditional accuracy analysis, a powerful method used to analyze human perception, and drift-diffusion modeling, we investigated the dynamics and limits of mouse visual perception with a 2AFC orientation discrimination task. We found that it includes two stages: a short, sensory-encoding stage lasting ∼300 ms, which involves the speed-accuracy tradeoff, and a longer visual short-term memory (VSTM)-dependent stage lasting ∼1700 ms. Manipulating stimulus features or adding a foil affected the sensory-encoding stage, and manipulating stimulus duration altered the VSTM stage, of mouse perception. Additionally, mice discriminated targets as brief as 100 ms, and exhibited classic psychometric curves in a visual search task. Our results reveal surprising parallels between mouse and human visual perceptual processes, and provide a quantitative scaffold for exploring neural circuit mechanisms of visual perception.
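The drift-diffusion framework mentioned in this abstract can be illustrated with a minimal simulation (the parameter values below are illustrative assumptions, not the paper's fitted values): noisy evidence accumulates at drift rate `v` until it hits a decision bound `±a`, jointly producing a choice and a decision time, which is what gives rise to the speed-accuracy tradeoff.

```python
# Minimal drift-diffusion model (DDM) sketch for a 2AFC decision.
# Evidence x accumulates with drift v plus Gaussian noise until it
# reaches +a (correct bound) or -a (error bound); the hit time is
# the decision time. Parameters are illustrative only.
import random

def ddm_trial(v=0.3, a=1.0, dt=0.01, sigma=1.0, max_t=5.0):
    """Simulate one trial; return (correct, decision_time_in_s)."""
    x, t = 0.0, 0.0
    while abs(x) < a and t < max_t:
        # Euler step: drift plus scaled Gaussian noise.
        x += v * dt + sigma * random.gauss(0.0, dt ** 0.5)
        t += dt
    return (x >= a), t

def accuracy(n=2000, **kw):
    """Proportion correct over n simulated trials."""
    random.seed(0)  # reproducible
    return sum(ddm_trial(**kw)[0] for _ in range(n)) / n
```

With these settings the expected accuracy is roughly 1 / (1 + exp(-2·v·a/σ²)) ≈ 0.65; raising the bound `a` trades longer decision times for higher accuracy.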


2016 ◽  
Vol 39 ◽  
Author(s):  
Andreas Keller

Firestone & Scholl (F&S) consider the distinction between judgment and perception to be clear and intuitive. Their intuition is based on considerations about visual perception. That such a distinction is clear, or even existent, is less obvious in nonvisual modalities. Failing to distinguish between perception and judgment is therefore not a flaw in investigating top-down effects of cognition on perception, as the authors suggest. Instead, it is the result of considering the variety of human perception.


2020 ◽  
Author(s):  
Aurora De Bortoli Vizioli ◽  
Anna M. Borghi ◽  
Luca Tummolini

Neurological evidence has shown that brain damage can selectively impair the ability to discriminate between objects belonging to others and those that we feel are our own. Despite the ubiquity and relevance of this sense of object ownership in our lives, the underlying cognitive mechanisms are still poorly understood. Here we ask whether psychological ownership of an object can be based on its incorporation into one's body image. To explore this possibility with healthy participants, we employed a modified version of the rubber-hand illusion in which both the participant and the rubber hand wore a ring. We used the self-prioritization effect in a perceptual matching task as an indirect measure of the sense of (dis)ownership over objects. Results indicate that undermining the bodily self has cascade effects on the representation of owned objects, at least for those associated with the body for a long time.


2017 ◽  
Vol 10 (04) ◽  
pp. 817-823
Author(s):  
Suman Chakraborty ◽  
Anil Bikash Chowdhury

Today the Internet has become everyone's trusted factotum. Almost all payments, such as tax, insurance, bank transactions, healthcare payments, and payments in e-commerce, are made digitally through debit or credit cards or through e-wallets. People share their personal information through social media such as Facebook, Twitter, and WhatsApp. The government of every developing country is moving to embrace e-Governance systems to interact with people more promptly. The information shared through these applications is a prime target for intruders. This paper exploits the imperceptibility and robustness of steganography techniques, which are increased by embedding multiple bits in a particular region selected either on the basis of image attributes or by human visual perception.
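The multi-bit embedding idea can be sketched with a plain least-significant-bit (LSB) scheme. This is a simplified stand-in, not the paper's exact method: the region here is an arbitrary list of pixel indices, whereas the paper selects regions via image attributes or human-visual-perception criteria.

```python
# LSB steganography sketch: hide 2 message bits per selected pixel
# by overwriting the pixel value's two least-significant bits.
# `pixels` is a flat list of 8-bit grayscale values; `region` is the
# (placeholder) list of pixel indices chosen to carry the payload.

def embed_bits(pixels, bits, region):
    """Hide the bit string `bits` in the 2 LSBs of the region's pixels."""
    out = list(pixels)
    it = iter(region)
    for i in range(0, len(bits), 2):
        idx = next(it)
        chunk = bits[i:i + 2].ljust(2, "0")        # pad a trailing odd bit
        out[idx] = (out[idx] & ~0b11) | int(chunk, 2)
    return out

def extract_bits(pixels, nbits, region):
    """Recover `nbits` bits from the 2 LSBs of the region's pixels."""
    bits = ""
    for idx in region:
        bits += format(pixels[idx] & 0b11, "02b")
        if len(bits) >= nbits:
            break
    return bits[:nbits]
```

Because only the two low bits of each carrier pixel change, every modified value differs from the original by at most 3 out of 255 gray levels, which is the sense in which such embedding stays imperceptible.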

