A Perplexing Puzzle Involving Perception of Straight Ahead

Perception ◽  
1996 ◽  
Vol 25 (1_suppl) ◽  
pp. 41-41
Author(s):  
J T Enright

Perception of visual direction was investigated by requiring subjects repeatedly to adjust a single small light, in an otherwise darkened room, to perceived ‘straight ahead’. This task presumably requires comparing concurrent extra-retinal information (either proprioception or an efference copy) with an internally stored ‘standard’ of comparison. Moment-to-moment precision in that performance is remarkably good, with a median threshold (standard deviation) of 47 arc min. Nevertheless, the responses often involved a monotonic shift of direction over a few minutes during a test session in this reduced visual environment. These trends led to final settings that were immediately recognised as grossly erroneous when the room was relit, implying that the presumptive internal standard of comparison, while unstable, can be rapidly updated in a full visual environment. There are clear similarities between this phenomenon and the sudden ‘visual capture’ that occurs in a re-illuminated room, following distortions of visual direction that arose in a similarly reduced setting for subjects whose extraocular muscles were partially paralysed (Matin et al, 1982 Science 216 198 – 201). In both cases, the visual stimuli that underlie rapid recalibration are unknown. Among the several possibilities that can be imagined, the strongest candidate hypothesis for this calibration of the straight-ahead direction is that, during fixation in a lit room, one utilises the directional distribution of image motion that arises because of microscale drift of the eye, as it moves toward its equilibrium orientation, much as a moving observer can use optic flow to evaluate ‘heading’ (the dynamic analogue of ‘straight ahead’).
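The closing analogy can be made concrete with a small numerical sketch: if fixational drift produced a roughly radial pattern of image motion, its focus of expansion would mark the calibrating ‘straight ahead’, just as the focus of expansion of optic flow marks heading. The following Python sketch is illustrative only and is not part of the study; the flow field and all values are synthetic.

```python
import numpy as np

def estimate_foe(points, flows):
    """Least-squares focus-of-expansion estimate from a radial flow field.

    Each flow vector should point away from (or toward) the FOE, so the
    2-D cross product (p - foe) x v is ~0, giving one linear equation per
    sample: v_y*fx - v_x*fy = x*v_y - y*v_x.
    """
    A = np.column_stack([flows[:, 1], -flows[:, 0]])
    b = points[:, 0] * flows[:, 1] - points[:, 1] * flows[:, 0]
    foe, *_ = np.linalg.lstsq(A, b, rcond=None)
    return foe

# Synthetic radial flow around a hypothetical 'straight ahead', plus noise
rng = np.random.default_rng(0)
true_foe = np.array([0.5, -0.2])                  # deg of visual angle
pts = rng.uniform(-10, 10, size=(200, 2))
flow = 0.05 * (pts - true_foe) + rng.normal(0, 0.01, size=(200, 2))
print(estimate_foe(pts, flow))                    # close to [0.5, -0.2]
```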

Perception ◽  
1996 ◽  
Vol 25 (1_suppl) ◽  
pp. 150-150 ◽  
Author(s):  
L S Stone ◽  
J Lorenceau ◽  
B R Beutter

There has long been qualitative evidence that humans can pursue an object defined only by the motion of its parts (eg Steinbach, 1976 Vision Research 16 1371 – 1375). We explored this quantitatively using an occluded diamond stimulus (Lorenceau and Shiffrar, 1992 Vision Research 32 263 – 275). Four subjects (one naive) tracked a line-figure diamond moving along an elliptical path (0.9 Hz) either clockwise (CW) or counterclockwise (CCW) behind either an X-shaped aperture (CROSS) or two vertical rectangular apertures (BARS), which obscured the corners. Although the stimulus consisted of only four line segments (108 cd m−2), moving within a visible aperture (0.2 cd m−2) behind a foreground (38 cd m−2), it was largely perceived as a coherently moving diamond. The intersaccadic portions of eye-position traces were fitted with sinusoids. All subjects tracked object motion with considerable temporal accuracy. The mean phase lag was 5°/6° (CROSS/BARS), and the mean relative phase between the horizontal and vertical components was +95°/+92° (CW) and −85°/−75° (CCW), close to the ideal ±90°. Furthermore, a χ² analysis showed that 56% of BARS trials were consistent with tracking the correct elliptical shape (p < 0.05), although segment motion was purely vertical. These data disprove the main tenet of most models of pursuit: that it is a system that seeks to minimise retinal image motion through negative feedback. Rather, the main drive must be a visual signal that has already integrated spatiotemporal retinal information into an object-motion signal.
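The analysis described here (sinusoidal fits to the intersaccadic portions of the eye-position traces, then phase and relative phase) can be sketched as follows. The exact fitting procedure is not given in the abstract, so this is a minimal least-squares version run on synthetic traces at the 0.9 Hz stimulus frequency.

```python
import numpy as np

F_TARGET = 0.9  # Hz, stimulus frequency reported in the abstract

def fit_sinusoid(t, pos, f=F_TARGET):
    """Least-squares fit of pos(t) ~ A*cos(2*pi*f*t + phi) + offset.

    Returns amplitude A and phase phi (radians).
    """
    w = 2 * np.pi * f
    X = np.column_stack([np.cos(w * t), np.sin(w * t), np.ones_like(t)])
    (a, b, _), *_ = np.linalg.lstsq(X, pos, rcond=None)
    return np.hypot(a, b), np.arctan2(-b, a)

# Synthetic CW-like tracking: horizontal leads vertical by ~90 deg
t = np.linspace(0.0, 2.0, 500)
h = 2.0 * np.cos(2 * np.pi * F_TARGET * t)
v = 1.0 * np.cos(2 * np.pi * F_TARGET * t - np.pi / 2)
_, ph_h = fit_sinusoid(t, h)
_, ph_v = fit_sinusoid(t, v)
rel = np.degrees((ph_h - ph_v + np.pi) % (2 * np.pi) - np.pi)
print(rel)  # ~ +90, the ideal relative phase for an ellipse
```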


1974 ◽  
Vol 38 (3) ◽  
pp. 719-725 ◽  
Author(s):  
J. Roger Ware

A series of 3 experiments concerned with the perception of visual direction was conducted using a single adjustable luminous rod in a completely darkened room. In Exp. I, perceptual accuracies of primary (vertical and horizontal) and intermediate (all other directions) visual directions were compared. Accuracy for primary directions was significantly better (t = 10.73, p < .001). Head-tilts of 5°, 10°, 20°, and 30° to the right and left of 0° in Exp. II did not significantly affect perceptual accuracy, but perceptual accuracy differed significantly between primary and intermediate directions (F = 182.11, p < .001). The introduction of non-verbal knowledge of results in Exp. III yielded little improvement in the perceptual accuracy of intermediate visual direction, but a significant practice effect was found. The results were discussed in terms of previous research, and suggestions for further research were outlined.


Perception ◽  
1997 ◽  
Vol 26 (1_suppl) ◽  
pp. 237-237
Author(s):  
J Li ◽  
M M Cohen ◽  
C W DeRoshia ◽  
L T Guzy

Perceived eye position and/or the perceived location of visual targets are altered when the orientation of the surrounding visual environment (Cohen et al, 1995 Perception & Psychophysics 57 433) or that of the observer (Cohen and Guzy, 1995 Aviation, Space, and Environmental Medicine 66 505) is changed. Fourteen subjects used biteboards as they lay on a rotary bed that was oriented head-down −15°, −7.5°, supine, head-up +7.5°, and +15°. In the dark, subjects directed their gaze and set a target to the apparent zenith (exocentric location); they also gazed at a subjective ‘straight ahead’ position with respect to their head (egocentric location). Angular deviations of target settings and changes in vertical eye position were recorded with an ISCAN infrared tracking system. Results indicated that, for exocentric locations, the eyes deviated systematically from the true zenith. The gain for compensating changes in head orientation was 0.69 for gaze direction and 0.73 for target settings. In contrast, ‘straight ahead’ eye positions were not significantly affected by changes in the subject's orientation. We conclude that subjects make systematic errors when directing their gaze to an exocentric location in near-supine positions. This suggests a systematic bias in the integration of extra-ocular signals with information regarding head orientation. The bias may result from underestimating changes in the orientation of the head in space. In contrast, for egocentric locations, where head-orientation information can potentially be discarded, gaze directions were unaffected by head orientation near supine.
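On one common reading, the reported compensation gains are slopes of the mean deviation against head pitch across the five bed orientations; a minimal sketch of that computation, using hypothetical deviation values chosen to give a gain near the reported 0.7, is shown below.

```python
import numpy as np

# Bed orientations used in the study (deg; negative = head-down)
head_pitch = np.array([-15.0, -7.5, 0.0, 7.5, 15.0])

def compensation_gain(head_pitch, deviation):
    """Gain = slope of eye (or target-setting) deviation against head pitch.

    A gain of 1.0 would mean changes in gaze fully compensate changes in
    head orientation when pointing at the zenith; the abstract reports
    gains of 0.69 (gaze) and 0.73 (target settings).
    """
    slope, _intercept = np.polyfit(head_pitch, deviation, 1)
    return slope

# Hypothetical mean deviations (deg), not the study's data
deviation = 0.7 * head_pitch + np.array([0.3, -0.2, 0.0, 0.1, -0.1])
print(compensation_gain(head_pitch, deviation))  # ~0.7
```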


Perception ◽  
10.1068/p5292 ◽  
2005 ◽  
Vol 34 (4) ◽  
pp. 453-475 ◽  
Author(s):  
René J V Bertin ◽  
Isabelle Israël

Human observers can detect their heading direction on a short time scale on the basis of optic flow. We investigated the visual perception and reconstruction of visually travelled two-dimensional (2-D) trajectories from optic flow, with and without a landmark. As in our previous study, seated, stationary subjects wore a head-mounted display in which optic-flow stimuli were shown that simulated various manoeuvres: linear or curvilinear 2-D trajectories over a horizontal plane, with observer orientation either fixed in space, fixed relative to the path, or changing relative to both. Afterwards, they reproduced the perceived manoeuvre with a model vehicle, whose position and orientation were recorded. Previous results had suggested that our stimuli can induce illusory percepts when translation and yaw are unyoked. We tested that hypothesis and investigated how perception of the travelled trajectory depends on the amount of yaw and the average path-relative orientation. Using a structured visual environment instead of only dots, or making available additional extra-retinal information, can improve perception of ambiguous optic-flow stimuli. We therefore also investigated the amount of structuring necessary, specifically the effect of the additional visual and/or extra-retinal information provided by a single landmark in conditions where illusory percepts occur. While yaw was perceived correctly, the travelled path was perceived less accurately, but still adequately when the simulated orientation was fixed in space or relative to the trajectory. When the amount of yaw was not equal to the rotation of the path, or was in the opposite direction, subjects still perceived orientation as fixed relative to the trajectory. This caused trajectory misperception because yaw was wrongly attributed to a rotation of the path: path perception is governed by the amount of yaw in the manoeuvre. Trajectory misperception also occurred when orientation was fixed relative to a curvilinear path but not tangential to it. A single landmark could improve perception. Our results confirm and extend previous findings that, for unambiguous perception of ego-motion from optic flow, additional information is required in many cases, and that this information can take the form of fairly minimal visual information.
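The misattribution described here, in which yaw is read as a turn of the path, can be illustrated with a toy dead-reckoning sketch (not part of the study; all values hypothetical): if a straight trajectory is combined with yaw but the reconstruction assumes the observer always translates along the body axis, the recovered path curves.

```python
import numpy as np

def reconstruct_path(speed, yaw_rate, dt, n_steps, assume_path_locked):
    """Dead-reckon a 2-D path from forward speed and yaw rate.

    With assume_path_locked=True, every yaw step is interpreted as a turn
    of the path (orientation assumed fixed relative to the trajectory), so
    yaw on a straight trajectory yields an illusory curved path.
    """
    heading = 0.0
    pos = np.zeros(2)
    pts = [pos.copy()]
    for _ in range(n_steps):
        if assume_path_locked:
            heading += yaw_rate * dt      # yaw misattributed to the path
        pos += speed * dt * np.array([np.cos(heading), np.sin(heading)])
        pts.append(pos.copy())
    return np.array(pts)

veridical = reconstruct_path(1.0, np.radians(10), 0.1, 100, False)  # straight
illusory = reconstruct_path(1.0, np.radians(10), 0.1, 100, True)    # curved
print(veridical[-1].round(2), illusory[-1].round(2))
```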


1969 ◽  
Vol 28 (2) ◽  
pp. 591-597 ◽  
Author(s):  
John N. Park

Helmholtz's proprioceptive theory of apparent visual direction predicts a displacement of egocentric straight ahead as an aftereffect of deviation of the eyes from normal frontal position. In a test of this prediction, 91 Ss (a) selected from a line of lighted discs that disc which appeared to be straight ahead, (b) fixated the eyes for 30 sec. on a point in the line of discs which was either 30° from frontal position or at the most extreme position attainable (approximately 45° from frontal position), (c) returned the eyes to what seemed to be frontal position and selected the disc which appeared to be straight ahead. Ocular deviation produced as an aftereffect a displacement of apparent straight ahead which had a mean value of 3.12° and occurred in the same meridian and in the same direction as the eyes had been deviated. The amount of displacement was not significantly affected by the degree of prior ocular deviation or by the orientation of the line of discs (vertical, horizontal, or diagonal).


Author(s):  
Songquan Sun ◽  
Richard D. Leapman

Analyses of ultrathin cryosections are generally performed after freeze-drying because the presence of water renders the specimens highly susceptible to radiation damage. The water content of a subcellular compartment is an important quantity that must be known, for example, to convert the dry weight concentrations of ions to the physiologically more relevant molar concentrations. Water content can be determined indirectly from dark-field mass measurements provided that there is no differential shrinkage between compartments and that there exists a suitable internal standard. The potential advantage of a more direct method for measuring water has led us to explore the use of electron energy loss spectroscopy (EELS) for characterizing biological specimens in their frozen-hydrated state. We have obtained preliminary EELS measurements from pure amorphous ice and from cryosectioned frozen protein solutions. The specimens were cryotransferred into a VG-HB501 field-emission STEM equipped with a Gatan 666 parallel-detection spectrometer and analyzed at approximately −160 °C.
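The conversion that motivates the water measurement can be written, in one standard approximate form, as molar concentration ≈ dry-weight concentration × dry mass fraction × wet density of the compartment. A minimal sketch, with an assumed density (the specific numbers are illustrative, not taken from the abstract):

```python
def dry_to_molar(c_dry_mmol_per_kg, water_mass_fraction, density_kg_per_l=1.05):
    """Approximate conversion of a dry-weight concentration to molar units.

    c_dry_mmol_per_kg   : concentration per kg dry mass (e.g. from x-ray
                          microanalysis)
    water_mass_fraction : compartment water content, 0..1
    density_kg_per_l    : assumed wet density of the compartment
    """
    dry_mass_fraction = 1.0 - water_mass_fraction
    return c_dry_mmol_per_kg * dry_mass_fraction * density_kg_per_l

# Example: 400 mmol (kg dry wt)^-1 in a compartment that is 80% water
print(dry_to_molar(400.0, 0.80))  # ~84 mmol per litre
```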


Author(s):  
R.D. Leapman ◽  
S.Q. Sun ◽  
S-L. Shi ◽  
R.A. Buchanan ◽  
S.B. Andrews

Recent advances in rapid-freezing and cryosectioning techniques coupled with use of the quantitative signals available in the scanning transmission electron microscope (STEM) can provide us with new methods for determining the water distributions of subcellular compartments. The water content is an important physiological quantity that reflects how fluid and electrolytes are regulated in the cell; it is also required to convert dry weight concentrations of ions obtained from x-ray microanalysis into the more relevant molar ionic concentrations. Here we compare the information about water concentrations from both elastic (annular dark-field) and inelastic (electron energy loss) scattering measurements. In order to utilize the elastic signal it is first necessary to increase contrast by removing the water from the cryosection. After dehydration the tissue can be digitally imaged under low-dose conditions, in the same way that STEM mass mapping of macromolecules is performed. The resulting pixel intensities are then converted into dry mass fractions by using an internal standard, e.g., the mean intensity of the whole image may be taken as representative of the bulk water content of the tissue.
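One reading of the internal-standard step, in which pixel intensities are scaled so that the image mean corresponds to the known bulk composition of the tissue, can be sketched as follows (illustrative only; the intensity values and bulk dry fraction are hypothetical).

```python
import numpy as np

def dry_mass_fraction_map(intensity, bulk_dry_fraction):
    """Scale a dark-field image of a freeze-dried cryosection to dry mass fractions.

    Assumes the dark-field signal is proportional to local dry mass and uses
    the image mean as the internal standard for the bulk dry mass fraction.
    """
    scale = bulk_dry_fraction / intensity.mean()
    return np.clip(intensity * scale, 0.0, 1.0)

# Hypothetical image and a bulk dry mass fraction of 0.22 (78% water)
img = np.array([[80.0,  90.0, 100.0, 110.0],
                [85.0,  95.0, 105.0, 115.0],
                [70.0, 100.0, 120.0, 130.0],
                [60.0,  90.0, 110.0, 140.0]])
water = 1.0 - dry_mass_fraction_map(img, 0.22)   # local water fractions
print(water.round(3))
```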


Author(s):  
Nicolas Poirel ◽  
Claire Sara Krakowski ◽  
Sabrina Sayah ◽  
Arlette Pineau ◽  
Olivier Houdé ◽  
...  

The visual environment consists of global structures (e.g., a forest) made up of local parts (e.g., trees). When compound stimuli are presented (e.g., large global letters composed of arrangements of small local letters), the global unattended information slows responses to local targets. Using a negative priming paradigm, we investigated whether inhibition is required to process hierarchical stimuli when information at the local level is in conflict with that at the global level. The results show that when local and global information is in conflict, global information must be inhibited to process local information, but that the reverse is not true. This finding has potentially direct implications for brain models of visual recognition, by suggesting that when local information conflicts with global information, inhibitory control reduces feedback activity from global information (e.g., inhibits the forest), which allows the visual system to process local information (e.g., to focus attention on a particular tree).

