Neither the richness of optic flow, nor the precision of heading judgements, predicts walking trajectories

2019 ◽  
Author(s):  
Danlu Cen ◽  
Simon Rushton ◽  
Seralynne Vann

When an observer translates through space, a pattern of image motion, “optic flow”, is projected onto the back of each eye. If the translation is forwards, a radial pattern of optic flow results, with the center of the pattern specifying the direction of translation. It is commonly assumed that humans use optic flow in the visual guidance of walking and that the contribution of optic flow is proportional to its richness. A further assumption is that “heading” judgements (judgements of the direction of translation) and guidance of walking rely on the same visual information and thus the processes involved in the latter can be studied using the former. These assumptions underpin a very extensive body of psychophysical, behavioral, neurophysiological, clinical, computational modelling and imaging research. We measure the form of walking trajectories (using a standard perturbation design in Experiment 1) and the precision of heading judgements (Experiment 2) in four different visual environments. We find that neither the richness of optic flow nor the precision of heading judgements predicts walking trajectories. These results challenge the widely held assumptions about optic flow, perception of heading and the visual guidance of walking.
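The radial flow pattern described above follows from the classical motion-field geometry for a translating pinhole eye. The sketch below is our illustration, not the paper's method: it computes the translational flow at the projections of 3-D points and shows that for forward translation the flow is radial, with its singularity (the focus of expansion) marking the heading direction.

```python
import numpy as np

def optic_flow(points, T, f=1.0):
    """Retinal flow (u, v) at the image projections of 3-D points for an
    observer translating with velocity T = (Tx, Ty, Tz); translational
    motion field only, pinhole model with focal length f."""
    X, Y, Z = points.T
    x, y = f * X / Z, f * Y / Z              # image coordinates
    Tx, Ty, Tz = T
    u = (-f * Tx + x * Tz) / Z               # flow magnitude scales with 1/Z
    v = (-f * Ty + y * Tz) / Z
    return np.stack([x, y], axis=1), np.stack([u, v], axis=1)

# Pure forward translation: flow at each image point is radial, i.e.
# parallel to that point's offset from the image center (the FOE).
pts = np.array([[1.0, 1.0, 2.0], [-1.0, 0.5, 4.0]])
xy, uv = optic_flow(pts, T=(0.0, 0.0, 1.0))
```

For an oblique heading T = (Tx, 0, Tz) the flow vanishes at the image point x = f·Tx/Tz, which is exactly the "center of the pattern specifying the direction of translation" referred to in the abstract.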

1997 ◽  
Vol 77 (2) ◽  
pp. 554-561 ◽  
Author(s):  
Jong-Nam Kim ◽  
Kathleen Mulligan ◽  
Helen Sherk

Kim, Jong-Nam, Kathleen Mulligan, and Helen Sherk. Simulated optic flow and extrastriate cortex. I. Optic flow versus texture. J. Neurophysiol. 77: 554–561, 1997. A locomoting observer sees a very different visual scene than an observer at rest: images throughout the visual field accelerate and expand, and they follow approximately radial outward paths from a single origin. This so-called optic flow field is presumably used for visual guidance, and it has been suggested that particular areas of visual cortex are specialized for the analysis of optic flow. In the cat, the lateral suprasylvian visual area (LS) is a likely candidate. To test the hypothesis that LS is specialized for analysis of optic flow fields, we recorded cell responses to optic flow displays. Stimulus movies simulated the experience of a cat trotting slowly across an endless plain covered with small balls. In different simulations we varied the size of balls, their organization (randomly or regularly dispersed), and their color (all one gray level, or multiple shades of gray). For each optic flow movie, a “texture” movie composed of the same elements but lacking optic flow cues was tested. In anesthetized cats, >500 neurons in LS were studied with a variety of movies. Most (70%) of 454 visually responsive cells responded to optic flow movies. Visually responsive cells generally preferred optic flow to texture movies (69% of those responsive to any movie). The direction in which a movie was shown (forward or reverse) was also an important factor. Most cells (68%) strongly preferred forward motion, which corresponded to visual experience during locomotion.


2005 ◽  
Vol 94 (2) ◽  
pp. 1084-1090 ◽  
Author(s):  
Anne K. Churchland ◽  
Stephen G. Lisberger

We have used antidromic activation to determine the functional discharge properties of neurons that project to the frontal pursuit area (FPA) from the medial-superior temporal visual area (MST). In awake rhesus monkeys, MST neurons were considered to be activated antidromically if they emitted action potentials at fixed, short latencies after stimulation in the FPA and if the activation passed the collision test. Antidromically activated neurons (n = 37) and a sample of the overall population of MST neurons (n = 110) were then studied during pursuit eye movements across a dark background and, during fixation, during laminar motion of a large random-dot texture and during optic flow expansion and contraction. Antidromically activated neurons showed direction tuning during pursuit (25/37), during laminar image motion (21/37), or both (16/37). Of 27 neurons tested with optic flow stimuli, 14 showed tuning for optic flow expansion (n = 10) or contraction (n = 4). There were no statistically significant differences in the response properties of the antidromically activated and control samples. Preferred directions for pursuit and laminar image motion did not show any statistically significant biases, and the preferred directions for eye versus image motion in each sample tended to be equally divided between aligned and opposed. There were small differences between the control and antidromically activated populations in preferred speeds for laminar motion and optic flow; these might have reached statistical significance with larger samples of antidromically activated neurons. We conclude that the population of MST neurons projecting to the FPA is highly diverse and quite similar to the general population of neurons in MST.


2014 ◽  
Vol 112 (10) ◽  
pp. 2470-2480 ◽  
Author(s):  
Andre Kaminiarz ◽  
Anja Schlack ◽  
Klaus-Peter Hoffmann ◽  
Markus Lappe ◽  
Frank Bremmer

The patterns of optic flow seen during self-motion can be used to determine the direction of one's own heading. Tracking eye movements, which typically occur during everyday life, alter this task, since they add further retinal image motion and (predictably) distort the retinal flow pattern. Humans employ both visual and nonvisual (extraretinal) information to solve a heading task in such cases. Likewise, it has been shown that neurons in the monkey medial superior temporal area (area MST) use both signals during the processing of self-motion information. In this article we report that neurons in the macaque ventral intraparietal area (area VIP) use visual information derived from the distorted flow patterns to encode heading during (simulated) eye movements. We recorded responses of VIP neurons to simple radial flow fields and to distorted flow fields that simulated self-motion plus eye movements. In 59% of the cases, cell responses compensated for the distortion and kept the same heading selectivity irrespective of different simulated eye movements. In addition, response modulations were smaller during real than during simulated eye movements, consistent with reafferent signaling involved in the processing of the visual consequences of eye movements in area VIP. We conclude that the motion selectivities found in area VIP, like those in area MST, provide a way to successfully analyze and use flow fields during self-motion and simultaneous tracking movements.
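The distortion described here is easy to make concrete. In the toy sketch below (our illustration, not the paper's stimulus code), a horizontal eye rotation adds a depth-independent component to the radial flow of forward self-motion, displacing the flow singularity away from the true heading; subtracting that rotational field, as an efference copy of the eye movement would in principle allow, restores the radial pattern.

```python
import numpy as np

f = 1.0
rng = np.random.default_rng(0)
x = rng.uniform(-0.5, 0.5, 200)              # image coordinates
y = rng.uniform(-0.5, 0.5, 200)
Z = rng.uniform(2.0, 10.0, 200)              # depths of scene points

W = 1.0                                      # forward translation speed
wy = 0.05                                    # simulated horizontal eye rotation (rad/s)

u_t, v_t = x * W / Z, y * W / Z              # radial flow, FOE at the heading
u_r = -(f + x**2 / f) * wy                   # rotational flow (pinhole model),
v_r = -(x * y / f) * wy                      # independent of depth Z

u, v = u_t + u_r, v_t + v_r                  # distorted retinal flow
u_comp, v_comp = u - u_r, v - v_r            # compensated: radial again
```

A cell that "compensates for the distortion" in the paper's sense behaves like the last line: its heading selectivity reflects (u_comp, v_comp) rather than the raw retinal flow (u, v).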


eLife ◽  
2017 ◽  
Vol 6 ◽  
Author(s):  
Jen-Chun Hsiang ◽  
Keith P Johnson ◽  
Linda Madisen ◽  
Hongkui Zeng ◽  
Daniel Kerschensteiner

Neurons receive synaptic inputs on extensive neurite arbors. How information is organized across arbors and how local processing in neurites contributes to circuit function are mostly unknown. Here, we used two-photon Ca2+ imaging to study visual processing in VGluT3-expressing amacrine cells (VG3-ACs) in the mouse retina. Contrast preferences (ON vs. OFF) varied across VG3-AC arbors depending on the laminar position of neurites, with ON responses preferring larger stimuli than OFF responses. Although arbors of neighboring cells overlap extensively, imaging population activity revealed continuous topographic maps of visual space in the VG3-AC plexus. All VG3-AC neurites responded strongly to object motion, but remained silent during global image motion. Thus, VG3-AC arbors limit vertical and lateral integration of contrast and location information, respectively. We propose that this local processing enables the dense VG3-AC plexus to contribute precise object motion signals to diverse targets without distorting target-specific contrast preferences and spatial receptive fields.


2017 ◽  
Author(s):  
Jen-Chun Hsiang ◽  
Keith Johnson ◽  
Linda Madisen ◽  
Hongkui Zeng ◽  
Daniel Kerschensteiner

Synaptic inputs to neurons are distributed across extensive neurite arborizations. To what extent arbors process inputs locally or integrate them globally is, for most neurons, unknown. This question is particularly relevant for amacrine cells, a diverse class of retinal interneurons, which receive input and provide output through the same neurites. Here, we used two-photon Ca2+ imaging to analyze visual processing in VGluT3-expressing amacrine cells (VG3-ACs), an important component of object motion sensitive circuits in the retina. VG3-AC neurites differed in their preferred stimulus contrast (ON vs. OFF); and ON and OFF responses varied in transience and preferred stimulus size. Contrast preference changed predictably with the laminar position of neurites in the inner plexiform layer. Yet, neurites at all depths were strongly activated by local but not by global image motion. Thus, VG3-AC neurites process visual information locally, exhibiting diverse responses to contrast steps, but uniform object motion selectivity.


2021 ◽  
Author(s):  
Julien R. Serres ◽  
Antoine H.P. Morice ◽  
Constance Blary ◽  
Romain Miot ◽  
Gilles Montagne ◽  
...  

To investigate altitude control in honeybees, an optical context was designed to make honeybees crash. It has been widely accepted that honeybees rely on the optic flow generated by the ground to control their altitude. Identifying an optical context capable of decoupling forward speed from altitude in honeybees’ flight was therefore a first step towards better understanding altitude control in honeybees. This optical context aims to put honeybees in the same flight conditions as an open-sky flight above mirror-smooth water. An optical manipulation, based on a pair of opposed horizontal mirrors, was designed to remove any visual information coming from the floor and ceiling. This manipulation quantitatively reproduced the seminal experiment of Heran & Lindauer (1963) and revealed that honeybees control their altitude by detecting optic flow over a visual field that extends to approximately 165°.
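The standard account the paper builds on can be caricatured in a few lines. The toy simulation below is our illustration, not the paper's model: an agent that regulates the ventral optic flow, omega = v / h, around a setpoint settles at the altitude h = v / omega_set; remove the ground signal, as opposed horizontal mirrors or mirror-smooth water would, and the loop loses its altitude cue and descends until it "crashes".

```python
def simulate(v=2.0, h0=3.0, omega_set=1.0, gain=0.5, dt=0.1, steps=600,
             ground_visible=True):
    """Ventral optic-flow regulator: omega = v / h is the flow cue;
    flow above the setpoint means the agent is too low, so it climbs,
    and vice versa. Altitude is clamped at 0.01 m (the 'crash' floor)."""
    h = h0
    for _ in range(steps):
        omega = v / h if ground_visible else 0.0   # ventral flow cue
        h = max(h + gain * (omega - omega_set) * dt, 0.01)
    return h
```

With the ground visible, the agent converges to h = v / omega_set = 2 m regardless of its starting altitude; with the ground signal removed, omega reads zero, the regulator interprets this as "too high", and altitude falls to the floor.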


1996 ◽  
Vol 199 (1) ◽  
pp. 253-261 ◽  
Author(s):  
M Lehrer

In a series of behavioural studies we found that bees use depth information extracted from self-induced image motion in several visual tasks involving pin-pointing the goal. Some of the results are reviewed here in an attempt to emphasise the active nature of this performance. They show that bees acquire depth information during free flight by employing two different strategies. One is to adapt flight behaviour, upon arrival at the food source, to the requirements of the task, a performance that is based on a learning process. The other is based on a stereotyped, innate flight pattern performed upon departure from the food source. The latter has probably evolved specifically for the acquisition of depth information.


10.5772/6491 ◽  
2009 ◽  
Author(s):  
Nicolas Franceschini ◽  
Franck Ruffier ◽  
Julien Serres ◽  
Stephane Viollet

i-Perception ◽  
2017 ◽  
Vol 8 (3) ◽  
pp. 204166951770820 ◽  
Author(s):  
Diederick C. Niehorster ◽  
Li Li

How do we perceive object motion during self-motion using visual information alone? Previous studies have reported that the visual system can use optic flow to identify and globally subtract the retinal motion component resulting from self-motion to recover scene-relative object motion, a process called flow parsing. In this article, we developed a retinal motion nulling method to directly measure and quantify the magnitude of flow parsing (i.e., the flow parsing gain) in various scenarios, to examine the accuracy and tuning of flow parsing for the visual perception of object motion during self-motion. We found that flow parsing gains were below unity for all displays in all experiments and that increasing self-motion and object motion speed did not alter the flow parsing gain. We conclude that visual information alone is not sufficient for the accurate perception of scene-relative motion during self-motion. Although flow parsing performs global subtraction, its accuracy also depends on local motion information in the retinal vicinity of the moving object. Furthermore, the flow parsing gain was constant across common self-motion and object motion speeds. These results can be used to inform and validate computational models of flow parsing.
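The consequence of a sub-unity gain can be sketched in a toy formulation of flow parsing (ours, not the paper's model): the perceived scene-relative motion is the object's retinal motion minus a fraction g of the self-motion flow at the object's location. With g = 1 the subtraction is perfect; with g < 1, as the paper reports, a residual self-motion component biases the percept.

```python
import numpy as np

def perceived_object_motion(retinal_motion, self_motion_flow, gain):
    """Flow parsing as partial subtraction: remove a fraction `gain` of
    the self-motion flow component from the object's retinal motion."""
    return retinal_motion - gain * self_motion_flow

object_world = np.array([0.0, 1.0])       # deg/s, true scene-relative motion (upward)
flow_at_object = np.array([2.0, 0.0])     # deg/s, flow due to self-motion
retinal = object_world + flow_at_object   # motion actually landing on the retina

perfect = perceived_object_motion(retinal, flow_at_object, gain=1.0)
partial = perceived_object_motion(retinal, flow_at_object, gain=0.7)
```

Here `perfect` recovers the true upward motion, while `partial` retains a spurious horizontal component of 0.6 deg/s, the kind of residual bias the retinal motion nulling method is designed to measure.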

