Effects of Visual-Field Inversions on the Reverse-Perspective Illusion

Perception
10.1068/p3336
2002
Vol 31 (9)
pp. 1147-1151
Author(s):
Norman D Cook
Takefumi Hayashi
Toshihiko Amemiya
Kimihiro Suzuki
Lorenz Leumann

The ‘reverse-perspective’ illusion entails the apparent motion of a stationary scene painted in relief and containing misleading depth cues. We have found that, when prism goggles are used to induce horizontal or vertical visual-field reversals, the illusory motion is greatly reduced or eliminated in the direction in which the goggles reverse the visual field. We argue that the illusion is a consequence of the observer's inability to reconcile changes in visual information due to body movement with implicit knowledge concerning anticipated changes. As such, the reverse-perspective illusion may prove to be useful in the study of the integration of linear perspective and motion parallax information.


1992
Vol 8 (2)
pp. 151-164
Author(s):
Martin Egelhaaf
Alexander Borst

Abstract
Visual information is processed in a series of successive steps. The performance of each of these steps depends not only on the computations it performs itself but also on the representation of the visual surround on which it operates. Here we investigate the consequences of signal preprocessing for the performance of the motion-detection system of the fly. In particular, we analyze whether the retinal input signals are rectified and segregated into separate ON and OFF channels, which then feed independent parallel motion-detection pathways. We recorded the activity of an identified directionally selective interneuron (H1-cell) in response to apparent-motion stimuli, i.e. sequential brightness changes at two neighboring locations in the visual field, as well as to brightness changes at only a single location. For apparent-motion stimuli, the motion-dependent response component was determined by subtracting from the overall response the responses to the individual stimulus components when presented alone. The following conclusions could be derived: (1) Apparent-motion stimuli consisting of a sequence of brightness increments or of brightness decrements at two locations in the visual field have the same optimum interstimulus time interval (Fig. 3). (2) Sequences of brightness steps of like polarity (either increments or decrements) elicit positive and negative motion-dependent response components when mimicking motion in the cell's preferred and null direction, respectively. The motion-dependent response components are inverted in sign when the brightness steps of a stimulus sequence have different polarities (Fig. 7). (3) The responses to the beginning and the end of a brightness pulse depend on the pulse duration. For pulse durations of less than 2 s, both events interact with each other (Fig. 9). None of these results provides any indication that the fly processes motion information in independent ON and OFF motion detectors. Brightness changes of both signs are instead represented at the input of the same movement detectors, and interactions between signals resulting from brightness increments and decrements take their sign into account. This type of preprocessing of the retinal input is argued to render a motion-detection system particularly robust against noise.
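The sign inversion described in point (2) falls out naturally from a correlation-type (Reichardt-style) motion detector fed with signed, unrectified brightness signals. The sketch below is purely illustrative and is not the authors' model; the stimulus timing and filter time constant are invented:

```python
import numpy as np

def lowpass(sig, tau, dt=1e-3):
    # first-order low-pass filter acting as the detector's delay line
    out = np.zeros_like(sig)
    for i in range(1, len(sig)):
        out[i] = out[i - 1] + dt / tau * (sig[i - 1] - out[i - 1])
    return out

def emd_response(s1, s2, tau=0.05):
    # correlation-type detector: the delayed signal from one location is
    # multiplied with the undelayed signal from the other, and the two
    # mirror-symmetric subunits are subtracted
    return float(np.sum(lowpass(s1, tau) * s2 - lowpass(s2, tau) * s1))

def step(t_on, polarity, n=1000, dt=1e-3):
    # brightness step of a given sign starting at time t_on (seconds)
    s = np.zeros(n)
    s[int(t_on / dt):] = polarity
    return s

# like-polarity sequence (location 1 then location 2): positive response
r_same = emd_response(step(0.10, +1), step(0.15, +1))
# opposite-polarity sequence: the response inverts in sign
r_mixed = emd_response(step(0.10, +1), step(0.15, -1))
print(r_same > 0, r_mixed < 0)  # True True
```

Because the multiplication preserves the sign of its inputs, flipping the polarity of one step exactly inverts the summed response, mirroring the inversion the recordings show.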



2020
Vol 11 (1)
pp. 3
Author(s):
Laura Gonçalves Ribeiro
Olli J. Suominen
Ahmed Durmush
Sari Peltonen
Emilio Ruiz Morales
...

Visual technologies have an indispensable role in safety-critical applications, where tasks must often be performed through teleoperation. Because conventional images lack stereoscopic and motion-parallax depth cues, alignment tasks pose a significant challenge to remote operation. In this context, machine vision can provide mission-critical information to augment the operator’s perception. In this paper, we propose a retro-reflective marker-based teleoperation aid to be used in hostile remote-handling environments. The system computes the remote manipulator’s position with respect to the target using a set of one or two low-resolution cameras attached to its wrist. We develop an end-to-end pipeline of calibration, marker detection, and pose estimation, and extensively study the performance of the overall system. The results demonstrate that we have successfully engineered a retro-reflective marker from materials that can withstand the extreme temperature and radiation levels of the environment. Furthermore, we demonstrate that the proposed marker-based approach provides robust and reliable estimates and significantly outperforms a previous stereo-matching-based approach, even with a single camera.
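As a rough illustration of how a single camera can recover pose from a planar marker, the sketch below estimates the pose of a synthetic four-point marker via a DLT homography and its decomposition, assuming known camera intrinsics. All numbers are invented and this is not the authors' pipeline:

```python
import numpy as np

# Synthetic single-camera setup (all numbers hypothetical): a planar
# four-point marker, a pinhole camera with known intrinsics K, and a
# ground-truth pose to be recovered from the marker's image alone.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
marker = np.array([[-0.05, -0.05, 0.0], [0.05, -0.05, 0.0],
                   [0.05, 0.05, 0.0], [-0.05, 0.05, 0.0]])  # metres, Z = 0

def project(points, R, t):
    # pinhole projection of 3D points given camera rotation R, translation t
    cam = points @ R.T + t
    uv = cam @ K.T
    return uv[:, :2] / uv[:, 2:3]

a = 0.1  # ground truth: small rotation about Y, marker 1 m from the camera
R_true = np.array([[np.cos(a), 0.0, np.sin(a)],
                   [0.0, 1.0, 0.0],
                   [-np.sin(a), 0.0, np.cos(a)]])
t_true = np.array([0.02, -0.01, 1.0])
img = project(marker, R_true, t_true)

# DLT: estimate the plane-to-image homography from the 4 correspondences
A = []
for (X, Y, _), (u, v) in zip(marker, img):
    A += [[X, Y, 1, 0, 0, 0, -u * X, -u * Y, -u],
          [0, 0, 0, X, Y, 1, -v * X, -v * Y, -v]]
H = np.linalg.svd(np.array(A))[2][-1].reshape(3, 3)

# Decompose H = K [r1 r2 t] (up to scale) into rotation and translation
B = np.linalg.inv(K) @ H
B /= np.linalg.norm(B[:, 0])
if B[2, 2] < 0:            # keep the marker in front of the camera
    B = -B
r1, r2, t_est = B[:, 0], B[:, 1], B[:, 2]
R_est = np.column_stack([r1, r2, np.cross(r1, r2)])
print(np.round(t_est, 3))  # close to t_true
```

With noise-free correspondences the homography decomposition recovers the pose essentially exactly; a real pipeline would additionally handle detection noise, lens distortion, and non-orthogonality of the estimated rotation.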



Development
1981
Vol 65 (1)
pp. 199-217
Author(s):
C. Kennard

The extent and development of the ipsilateral retinothalamic projection in the frog Xenopus laevis have been studied using terminal-degeneration and autoradiographic techniques. This ipsilateral projection derives only from those retinal areas receiving visual information from the binocular portion of the visual field. In Xenopus, the ipsilateral retinothalamic projection arises from a larger area of the retina than was found to be the case in earlier studies on Rana. This correlates with the fact that Xenopus has a larger binocular visual field than does Rana. The ipsilateral retinothalamic projection is just detectable at about stage 56 of larval life, considerably later than its contralateral counterpart. Experimental manipulation of the developing eye vesicle at early larval stages, followed by histological studies of the ipsilateral retinothalamic projections, showed, however, that the retinal areas which give rise to this projection are determined by stage 32 of larval life. Further studies, in which monocular enucleation was performed at different larval stages with subsequent examination of the retinothalamic projections from the remaining eye, indicated that the selective pattern of decussation and non-decussation of retinothalamic fibres at the optic chiasma does not require interactions, at the chiasma, between optic fibres from the two eyes.



Author(s):  
Elizabeth Schechter

The largest fibre tract in the human brain connects the two cerebral hemispheres. A ‘split-brain’ surgery severs this structure, sometimes together with other white matter tracts connecting the right hemisphere and the left. Split-brain surgeries have long been performed on non-human animals for experimental purposes, but a number of these surgeries were also performed on adult human beings in the second half of the twentieth century, as a medical treatment for severe cases of epilepsy. A number of these people afterwards agreed to participate in ongoing research into the psychobehavioural consequences of the procedure. These experiments have helped to show that the corpus callosum is a significant source of interhemispheric interaction and information exchange in the ‘neurotypical’ brain. After split-brain surgery, the two hemispheres operate unusually independently of each other in the realm of perception, cognition, and the control of action. For instance, each hemisphere receives visual information directly from the opposite (‘contralateral’) side of space, the right hemisphere from the left visual field and the left hemisphere from the right visual field. This is true of the normal (‘neurotypical’) brain too, but in the neurotypical case interhemispheric tracts allow either hemisphere to gain access to the information that the other has received. In a split-brain subject, however, the information more or less stays put in whichever hemisphere initially received it. And it isn’t just visual information that is confined to one hemisphere or the other after the surgery. Rather, after split-brain surgery, each hemisphere is the source of proprietary perceptual information of various kinds, and is also the source of proprietary memories, intentions, and aptitudes. Various notions of psychological unity or integration have always been central to notions of mind, personhood, and the self. Although split-brain surgery does not prevent interhemispheric interaction or exchange, it naturally alters and impedes it. So does the split-brain subject as a whole nonetheless remain a unitary psychological being? Or could there now be two such psychological beings within one human animal, sharing one body, one face, one voice? Prominent neuropsychologists working with the subjects have often appeared to argue or assume that a split-brain subject has a divided or disunified consciousness and even two minds. Although a number of philosophers agree, the majority seem to have resisted these conscious and mental ‘duality claims’, defending alternative interpretations of the split-brain experimental results. The sources of resistance are diverse, including everything from a commitment to the necessary unity of consciousness, to recognition of those psychological processes that remain interhemispherically integrated, to concerns about what the moral and legal consequences would be of recognizing multiple psychological beings in one body. On the other hand, underlying most of these arguments against the various ‘duality’ claims is the simple fact that the split-brain subject does not appear to be two persons, but one. There are powerful conceptual, social, and moral connections between being a unitary person on the one hand and having a unified consciousness and mind on the other.



2004
Vol 92 (4)
pp. 2380-2393
Author(s):
M. A. Admiraal
N.L.W. Keijsers
C.C.A.M. Gielen

We have investigated pointing movements toward remembered targets after an intervening self-generated body movement. We tested to what extent visual information about the environment or finger position is used in updating target position relative to the body after a step, and whether gaze plays a role in the accuracy of the pointing movement. Subjects were tested in three visual conditions: complete darkness (DARK), complete darkness with visual feedback of the finger (FINGER), and vision of a well-defined environment together with feedback of the finger (FRAME). Pointing accuracy was rather poor in the FINGER and DARK conditions, which did not provide vision of the environment. Constant pointing errors were mainly in the direction of the step and ranged from about 10 to 20 cm. Differences between binocular fixation and target position were often related to the step size and direction. At the beginning of the trial, when the target was visible, fixation was on target. After target extinction, fixation moved away from the target relative to the subject. The variability in the pointing positions appeared to be related to the variable errors in fixation, and the covariance increased during the delay period after the step, reaching a highly significant value at the time of pointing. The significant covariance between fixation position and pointing is not the result of a mutual dependence on the step, since we corrected for any direct contributions of the step in both signals. We conclude that the covariance between fixation and pointing position reflects 1) a common command signal for gaze and arm movements and 2) an effect of fixation on pointing accuracy at the time of pointing.



2020
Vol 225 (6)
pp. 1839-1853
Author(s):
Jan W. Kurzawski
Kyriaki Mikellidou
Maria Concetta Morrone
Franco Pestilli

Abstract
The human visual system is capable of processing visual information from the fovea to the far peripheral visual field. Recent fMRI studies have shown a full and detailed retinotopic map in area prostriata, located ventro-dorsally and anterior to the calcarine sulcus along the parieto-occipital sulcus, with a strong preference for peripheral and wide-field stimulation. Here, we report the anatomical pattern of white-matter connections between area prostriata and the thalamus encompassing the lateral geniculate nucleus (LGN). To this end, we developed and utilized an automated pipeline comprising a series of Apps that run openly on the cloud computing platform brainlife.io to analyse 139 subjects of the Human Connectome Project (HCP). We observe a continuous and extended bundle of white-matter fibers from which two subcomponents can be extracted: one passing ventrally, parallel to the optic radiations (OR), and another passing dorsally, circumventing the lateral ventricle. Interestingly, the loop travelling dorsally connects the thalamus with the central visual field representation of prostriata located anteriorly, while the other loop, travelling more ventrally, connects the LGN with the more peripheral visual field representation located posteriorly. We then analyse an additional cohort of 10 HCP subjects using a manual plane-extraction method outside brainlife.io to study the relationship between the two extracted white-matter subcomponents and the eccentricity, myelin, and cortical-thickness gradients within prostriata. Our results are consistent with a retinotopic segregation recently demonstrated in the OR, connecting the LGN and V1 in humans, and reveal for the first time a retinotopic segregation in the trajectory of a fiber bundle between the thalamus and an associative visual area.



2002
Vol 14 (5)
pp. 687-701
Author(s):
Jason Proksch
Daphne Bavelier

There is much anecdotal suggestion of improved visual skills in congenitally deaf individuals. However, careful investigations of visual skills in deaf individuals have yielded only mixed support for this claim. Psychophysical assessments of visual functions have failed, for the most part, to validate the view of enhanced visual skills after deafness. Only a few studies have shown an advantage for deaf individuals in visual tasks. Interestingly, all of these studies share the requirement that participants process visual information in their peripheral visual field under demanding conditions of attention. This work has led us to propose that congenital auditory deprivation alters the gradient of visual attention from the central to the peripheral field by enhancing peripheral processing. This hypothesis was tested by adapting a search task from Lavie and colleagues in which the interference from distracting information on the search task provides a measure of attentional resources. These authors established that during an easy central search for a target, any surplus attention will involuntarily process a peripheral distractor that the subject has been instructed to ignore. Attentional resources can therefore be measured by adjusting the difficulty of the search task to the point at which no surplus resources are available for the distractor. Through modification of this paradigm, central and peripheral attentional resources were compared in deaf and hearing individuals. Deaf individuals possessed greater attentional resources in the periphery but fewer in the center when compared with hearing individuals. Furthermore, results from native hearing signers showed that sign language alone could not be responsible for these changes. We conclude that auditory deprivation from birth leads to compensatory changes within the visual system that enhance attentional processing of the peripheral visual field.



1993
Vol 70 (4)
pp. 1578-1584
Author(s):
P. DiZio
C. E. Lathan
J. R. Lackner

1. In the oculobrachial illusion, a target light attached to the unseen stationary hand is perceived as moving and changing spatial position when illusory motion of the forearm is elicited by brachial muscle vibration. Our goal was to see whether we could induce apparent motion and displacement of two retinally fixed targets in opposite directions by the use of oculobrachial illusions. 2. We vibrated both biceps brachii, generating illusory movements of the two forearms in opposite directions, and measured any associated changes in perceived distance between target lights on the unseen stationary hands. The stability of visual fixation of one of the targets was also measured. 3. The seen distance between the stationary targets increased significantly when vibration induced an illusory increase in felt distance between the hands, both with binocular and monocular viewing. 4. Subjects maintained fixation accuracy equally well during vibration-induced illusory increases in visual target separation and in a no-vibration control condition. Fixation errors were not correlated with the extent or direction of illusory visual separation. 5. These findings indicate that brachial muscle spindle signals can contribute to an independent representation of felt target location in head-centric coordinates that can be interrelated with a visual representation of target location generated by retinal and oculomotor signals. 6. A model of how these representations are interrelated is proposed, and its relation to other intersensory interactions is discussed.



Perception
1997
Vol 26 (1_suppl)
pp. 59-59
Author(s):
J M Zanker
M P Davey

Visual information processing in primate cortex is based on a highly ordered representation of the surrounding world. In addition to the retinotopic mapping of the visual field, systematic variations of the orientation tuning of neurons have been described electrophysiologically for the first stages of the visual stream. To understand how position and orientation are jointly represented, and to give an adequate account of cortical architecture, an essential step is to define the minimum spatial requirements for the detection of orientation. We addressed this basic question by comparing computer simulations of simple orientation filters with psychophysical experiments in which the orientation of small lines had to be detected at various positions in the visual field. At sufficiently high contrast levels, the minimum physical length of a line whose orientation can just be resolved is not constant across eccentricities but covaries inversely with the cortical magnification factor. A line needs to span less than 0.2 mm on the cortical surface in order to be recognised as oriented, independently of the eccentricity at which the stimulus is presented. This indicates that human performance in this task approaches the physical limits, requiring hardly more than approximately three input elements to be activated in order to detect the orientation of a highly visible line segment. Combined with the estimates of receptive-field sizes of orientation-selective filters derived from computer simulations, this experimental result may nourish speculations about how the rather local elementary process underlying orientation detection in the human visual system can be assembled to form the much larger receptive fields of the orientation-sensitive neurons known to exist in the primate visual system.
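The inverse covariation with cortical magnification can be illustrated with a standard inverse-linear approximation M(E) = M0 / (1 + E/E2). The parameter values below are assumed textbook-style figures, not estimates from this study:

```python
# Illustrative only: M0 and E2 are assumed textbook-style values for human
# cortical magnification, not parameters reported in this abstract.
M0 = 8.0    # magnification at the fovea, mm of cortex per degree
E2 = 3.67   # eccentricity (deg) at which magnification halves

def magnification(ecc_deg):
    # inverse-linear approximation M(E) = M0 / (1 + E / E2)
    return M0 / (1 + ecc_deg / E2)

# If orientation detection needs a fixed ~0.2 mm of cortex, the minimum
# line length in visual angle grows as the inverse of M(E):
min_len = {ecc: 0.2 / magnification(ecc) for ecc in (0, 5, 10, 20)}
for ecc, length in min_len.items():
    print(f"{ecc:>2} deg eccentricity: minimum length {length:.3f} deg")
```

A constant cortical span thus translates into a threshold line length that grows roughly linearly with eccentricity, which is the pattern the psychophysical data show.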



1997
Vol 6 (5)
pp. 513-531
Author(s):
R. Troy Surdick
Elizabeth T. Davis
Robert A. King
Larry F. Hodges

The ability to simulate distance effectively and accurately in virtual and augmented reality systems is a challenge currently facing R&D. To examine this issue, we separately tested each of seven visual depth cues (relative brightness, relative size, relative height, linear perspective, foreshortening, texture gradient, and stereopsis), as well as the condition in which all seven of these cues were present and simultaneously providing distance information in a simulated display. The viewing distances were 1 and 2 m. In developing simulated displays to convey distance and depth, three questions arise. First, which cues provide effective depth information (so that only a small change in the depth cue results in a perceived change in depth)? Second, which cues provide accurate depth information (so that the perceived distances of two equidistant objects perceptually match)? Finally, how do the effectiveness and accuracy of these depth cues change as a function of the viewing distance? Ten college-aged subjects were tested with each depth-cue condition at both viewing distances, using a method-of-constant-stimuli procedure and a modified Wheatstone stereoscopic display. The perspective cues (linear perspective, foreshortening, and texture gradient) were found to be more effective than the other depth cues, while the effectiveness of relative brightness was vastly inferior. Moreover, relative brightness, relative height, and relative size all decreased significantly in effectiveness with an increase in viewing distance. The depth cues did not differ in accuracy at either viewing distance. Finally, some subjects experienced difficulty in rapidly perceiving the distance information provided by stereopsis, but no subjects had difficulty in effectively and accurately perceiving distance with the perspective information used in our experiment.
A second experiment demonstrated that a previously stereo-anomalous subject could be trained to perceive stereoscopic depth in a binocular display. We conclude that the use of perspective cues in simulated displays may be more important than the other depth cues tested because these cues are the most effective and accurate cues at both viewing distances, can be easily perceived by all subjects, and can be readily incorporated into simpler, less complex displays (e.g., biocular HMDs) or more complex ones (e.g., binocular or see-through HMDs).
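Method-of-constant-stimuli data of the kind described above are typically summarized by fitting a psychometric function. The sketch below is a minimal probit analysis on invented response data (not the study's data), showing how a point of subjective equality (an accuracy measure) and a JND-like slope measure (an effectiveness measure) can be extracted:

```python
from statistics import NormalDist
import numpy as np

# Hypothetical responses for one depth cue: comparison distances (cm) and
# the proportion of "farther" judgments at each level. All values invented.
levels = np.array([96.0, 98.0, 100.0, 102.0, 104.0])
p_far = np.array([0.08, 0.27, 0.55, 0.80, 0.95])

# Probit analysis: transform proportions to z-scores and fit a line,
# i.e. approximate the psychometric function by a cumulative Gaussian.
z = np.array([NormalDist().inv_cdf(p) for p in p_far])
slope, intercept = np.polyfit(levels, z, 1)

pse = -intercept / slope  # point of subjective equality -> accuracy
jnd = 1.0 / slope         # steeper slope (smaller JND) -> more effective cue
print(f"PSE = {pse:.2f} cm, JND-like spread = {jnd:.2f} cm")
```

In these terms, an "effective" cue yields a steep psychometric function (small JND), while an "accurate" cue yields a PSE close to the true equidistant point.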


