A simple approach to ignoring irrelevant variables by population decoding based on multisensory neurons

2016 ◽  
Vol 116 (3) ◽  
pp. 1449-1467 ◽  
Author(s):  
HyungGoo R. Kim ◽  
Xaq Pitkow ◽  
Dora E. Angelaki ◽  
Gregory C. DeAngelis

Sensory input reflects events that occur in the environment, but multiple events may be confounded in sensory signals. For example, under many natural viewing conditions, retinal image motion reflects some combination of self-motion and movement of objects in the world. To estimate one stimulus event and ignore others, the brain can perform marginalization operations, but the neural bases of these operations are poorly understood. Using computational modeling, we examine how multisensory signals may be processed to estimate the direction of self-motion (i.e., heading) and to marginalize out effects of object motion. Multisensory neurons represent heading based on both visual and vestibular inputs and come in two basic types: “congruent” and “opposite” cells. Congruent cells have matched heading tuning for visual and vestibular cues and have been linked to perceptual benefits of cue integration during heading discrimination. Opposite cells have mismatched visual and vestibular heading preferences and are ill-suited for cue integration. We show that decoding a mixed population of congruent and opposite cells substantially reduces errors in heading estimation caused by object motion. In addition, we present a general formulation of an optimal linear decoding scheme that approximates marginalization and can be implemented biologically by simple reinforcement learning mechanisms. We also show that neural response correlations induced by task-irrelevant variables may greatly exceed intrinsic noise correlations. Overall, our findings suggest a general computational strategy by which neurons with mismatched tuning for two different sensory cues may be decoded to perform marginalization operations that dissociate possible causes of sensory inputs.
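
A minimal Python sketch of the general idea described above (not the authors' published model; the tuning curves, noise level, and the way object motion perturbs the visual drive are illustrative assumptions): a linear read-out of a mixed population of congruent and opposite cells, trained by least squares while the nuisance variable (object motion) varies across trials, can learn weights whose heading estimate is largely insensitive to that nuisance, approximating marginalization.

```python
# Illustrative sketch only: tuning shapes, noise, and the effect of object motion
# on the visual drive are assumptions, not the published model.
import numpy as np

rng = np.random.default_rng(0)
n_cells = 200
pref = rng.uniform(-180, 180, n_cells)       # vestibular heading preference (deg)
congruent = rng.random(n_cells) < 0.5        # half congruent, half opposite cells

def tuning(theta_deg, pref_deg, width_deg=60.0):
    """Broad, bell-shaped heading tuning (von Mises-like)."""
    d = np.deg2rad(theta_deg - pref_deg)
    return np.exp((np.cos(d) - 1.0) / np.deg2rad(width_deg) ** 2)

def population_response(heading, object_shift):
    """Object motion perturbs only the visual drive; opposite cells have
    visual preferences offset by 180 deg from their vestibular preferences."""
    vis_pref = np.where(congruent, pref, pref + 180.0)
    visual = tuning(heading + object_shift, vis_pref)
    vestibular = tuning(heading, pref)
    return visual + vestibular + 0.05 * rng.standard_normal(n_cells)

# Train a linear read-out (least squares onto cos/sin of heading) while the
# nuisance variable varies across trials, so the solution is pushed toward
# weights whose output ignores object motion -- an approximate marginalization.
train_headings = rng.uniform(-180, 180, 2000)
train_shifts = rng.uniform(-30, 30, 2000)
R = np.stack([population_response(h, s) for h, s in zip(train_headings, train_shifts)])
Y = np.column_stack([np.cos(np.deg2rad(train_headings)), np.sin(np.deg2rad(train_headings))])
W, *_ = np.linalg.lstsq(R, Y, rcond=None)

def decode_heading(heading, object_shift):
    c, s = population_response(heading, object_shift) @ W
    return np.rad2deg(np.arctan2(s, c))

print(decode_heading(heading=20.0, object_shift=25.0))   # estimate of the true 20 deg heading
```

The key design choice is that object motion is varied independently of heading during training, so the regression cannot exploit it and instead settles on weights that read out the heading signal shared by visual and vestibular drives.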

i-Perception ◽  
10.1068/ic366 ◽  
2011 ◽  
Vol 2 (4) ◽  
pp. 366-366 ◽  
Author(s):  
Keisuke Araki ◽  
Masaya Kato ◽  
Takehiro Nagai ◽  
Kowa Koida ◽  
Shigeki Nakauchi ◽  
...  

Perception ◽  
1998 ◽  
Vol 27 (10) ◽  
pp. 1153-1176 ◽  
Author(s):  
Michiteru Kitazaki ◽  
Shinsuke Shimojo

The visual system perceptually decomposes retinal image motion into three basic components that are ecologically significant for the human observer: object depth, object motion, and self-motion. Using this conceptual framework, we explored the relationship between them by examining perception of objects’ depth order and relative motion during self-motion. We found that the visual system obeyed what we call the parallax-sign constraint, but in different ways depending on whether the retinal image motion contained a velocity discontinuity or not. When a velocity discontinuity existed (eg in dynamic occlusion or transparent motion), observers perceptually interpreted image motion as relative motion between surfaces with stable depth order. When no velocity discontinuity existed, they perceived depth-order reversal but no relative motion. The results suggest that the existence of surface discontinuity, or of multiple surfaces indexed by velocity discontinuity, inhibits the reversal of global depth order.


Perception ◽  
1998 ◽  
Vol 27 (8) ◽  
pp. 937-949 ◽  
Author(s):  
Takanao Yajima ◽  
Hiroyasu Ujike ◽  
Keiji Uchikawa

The two main questions addressed in this study were (a) what effect does yoking the relative expansion and contraction (EC) of retinal images to forward and backward head movements have on the resultant magnitude and stability of perceived depth, and (b) how does this relative EC image motion interact with the depth cues of motion parallax? Relative EC image motion was produced by moving a small CCD camera toward and away from the stimulus, two random-dot surfaces separated in depth, in synchrony with the observers' forward and backward head movements. Observers viewed the stimuli monocularly, on a helmet-mounted display, while moving their heads at various velocities, including zero velocity. The results showed that (a) the magnitude of perceived depth was smaller with smaller head velocities (<10 cm s−1), including the zero-head-velocity condition, than with a larger velocity (10 cm s−1), and (b) perceived depth, when the motion parallax and EC image motion cues were presented simultaneously, was equal to the greater of the two depths produced by either cue alone. The results suggest a role for nonvisual information about self-motion in perceiving depth.


Perception ◽  
1996 ◽  
Vol 25 (1_suppl) ◽  
pp. 10-10 ◽  
Author(s):  
B R Beutter ◽  
J Lorenceau ◽  
L S Stone

For four subjects (one naive), we measured pursuit of a line-figure diamond moving along an elliptical path behind an invisible X-shaped aperture under two conditions. The diamond's corners were occluded and only four moving line segments were visible over the background (38 cd m−2). At low segment luminance (44 cd m−2), the percept is largely a coherently moving diamond. At high luminance (108 cd m−2), the percept is largely four independently moving segments. Along with this perceptual effect, there were parallel changes in pursuit. In the low-contrast condition, pursuit was more related to object motion. A χ² analysis showed (p > 0.05) that for 98% of trials subjects were more likely tracking the object than the segments, for 29% of trials one could not reject the hypothesis that subjects were tracking the object and not the segments, and for 100% of trials one could reject the hypothesis that subjects were tracking the segments and not the object. Conversely, in the high-contrast condition, pursuit appeared more related to segment motion. For 66% of trials subjects were more likely tracking the segments than the object; for 94% of trials one could reject the hypothesis that subjects were tracking the object and not the segments; and for 13% of trials one could not reject the hypothesis that subjects were tracking the segments and not the object. These results suggest that pursuit is driven by the same object-motion signal as perception, rather than by simple retinal image motion.
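
A brief, illustrative sketch of the kind of χ² hypothesis test described above (the traces, noise level, and sample count are made-up placeholders, not the authors' analysis code): compare a measured pursuit trace against the object-motion and segment-motion predictions and ask which hypothesis can be rejected.

```python
# Made-up traces and noise level; shows the form of the test, not the authors' code.
import numpy as np
from scipy.stats import chi2

def chi_square_stat(eye, prediction, sigma):
    """Goodness of fit of one motion hypothesis to the measured trace."""
    return np.sum((eye - prediction) ** 2 / sigma ** 2)

t = np.linspace(0.0, 1.0, 100)
object_pred = np.sin(2 * np.pi * t)       # hypothetical object-motion prediction
segment_pred = np.zeros_like(t)           # hypothetical segment-motion prediction
sigma = 0.3
rng = np.random.default_rng(1)
eye = object_pred + sigma * rng.standard_normal(t.size)   # simulated object-tracking trace

for name, pred in [("object", object_pred), ("segments", segment_pred)]:
    stat = chi_square_stat(eye, pred, sigma)
    p = chi2.sf(stat, df=t.size)          # probability of a fit this poor arising by chance
    verdict = "rejected" if p < 0.05 else "not rejected"
    print(f"tracking-the-{name} hypothesis: chi2 = {stat:.1f}, p = {p:.3f} -> {verdict}")
```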


Perception ◽  
1996 ◽  
Vol 25 (1_suppl) ◽  
pp. 150-150 ◽  
Author(s):  
L S Stone ◽  
J Lorenceau ◽  
B R Beutter

There has long been qualitative evidence that humans can pursue an object defined only by the motion of its parts (eg Steinbach, 1976 Vision Research 16 1371–1375). We explored this quantitatively using an occluded diamond stimulus (Lorenceau and Shiffrar, 1992 Vision Research 32 263–275). Four subjects (one naive) tracked a line-figure diamond moving along an elliptical path (0.9 Hz) either clockwise (CW) or counterclockwise (CCW) behind either an X-shaped aperture (CROSS) or two vertical rectangular apertures (BARS), which obscured the corners. Although the stimulus consisted of only four line segments (108 cd m−2), moving within a visible aperture (0.2 cd m−2) behind a foreground (38 cd m−2), it is largely perceived as a coherently moving diamond. The intersaccadic portions of eye-position traces were fitted with sinusoids. All subjects tracked object motion with considerable temporal accuracy. The mean phase lag was 5°/6° (CROSS/BARS) and the mean relative phase between the horizontal and vertical components was +95°/+92° (CW) and −85°/−75° (CCW), close to the ideal ±90°. Furthermore, a χ² analysis showed that 56% of BARS trials were consistent with tracking the correct elliptical shape (p < 0.05), although segment motion was purely vertical. These data disprove the main tenet of most models of pursuit: that it is a system that seeks to minimise retinal image motion through negative feedback. Rather, the main drive must be a visual signal which has already integrated spatiotemporal retinal information into an object-motion signal.
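
A minimal sketch, under stated assumptions, of the sinusoid-fitting step described above: fit a sinusoid of known frequency (0.9 Hz) to horizontal and vertical eye-position traces by linear regression on cosine/sine regressors, then compare the fitted phases; a horizontal-vertical relative phase near ±90° corresponds to tracing an ellipse. The simulated traces below are placeholders, not recorded data.

```python
# Placeholder simulated traces; the fitting step itself is standard linear regression.
import numpy as np

def fit_sinusoid(t, y, freq_hz):
    """Fit y ~ A*cos(2*pi*f*t - phase) + offset; return (amplitude, phase in deg)."""
    w = 2.0 * np.pi * freq_hz
    X = np.column_stack([np.cos(w * t), np.sin(w * t), np.ones_like(t)])
    (a, b, _), *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.hypot(a, b), np.rad2deg(np.arctan2(b, a))

t = np.linspace(0.0, 2.0, 400)
rng = np.random.default_rng(2)
h = 2.0 * np.cos(2 * np.pi * 0.9 * t)                              # horizontal eye position
v = 1.0 * np.cos(2 * np.pi * 0.9 * t - np.pi / 2) + 0.05 * rng.standard_normal(t.size)  # vertical

_, phase_h = fit_sinusoid(t, h, 0.9)
_, phase_v = fit_sinusoid(t, v, 0.9)
relative_phase = (phase_h - phase_v + 180.0) % 360.0 - 180.0       # wrap to (-180, 180]
print(f"horizontal-vertical relative phase: {relative_phase:.1f} deg")   # ~ -90 for this ellipse
```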


2019 ◽  
Vol 116 (18) ◽  
pp. 9060-9065 ◽  
Author(s):  
Kalpana Dokka ◽  
Hyeshin Park ◽  
Michael Jansen ◽  
Gregory C. DeAngelis ◽  
Dora E. Angelaki

The brain infers our spatial orientation and properties of the world from ambiguous and noisy sensory cues. Judging self-motion (heading) in the presence of independently moving objects poses a challenging inference problem because the image motion of an object could be attributed to movement of the object, self-motion, or some combination of the two. We test whether perception of heading and object motion follows predictions of a normative causal inference framework. In a dual-report task, subjects indicated whether an object appeared stationary or moving in the virtual world, while simultaneously judging their heading. Consistent with causal inference predictions, the proportion of object stationarity reports, as well as the accuracy and precision of heading judgments, depended on the speed of object motion. Critically, biases in perceived heading declined when the object was perceived to be moving in the world. Our findings suggest that the brain interprets object motion and self-motion using a causal inference framework.
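
An illustrative one-dimensional sketch of the kind of normative causal-inference computation referred to above (in the spirit of standard Bayesian causal-inference models, not the authors' fitted model): compute the posterior probability that the object is stationary in the world, and weight the heading estimate accordingly. All priors and noise parameters below are placeholder assumptions.

```python
# All priors and noise standard deviations are placeholders, not fitted values.
import numpy as np
from scipy.stats import norm

def causal_inference(vestibular_heading, object_image_motion,
                     sigma_vest=5.0, sigma_obj=5.0, sigma_world=20.0,
                     prior_stationary=0.7):
    # C = 1: object stationary in the world, so its image motion should mirror
    #        self-motion (in this 1-D toy: expected image motion = -heading).
    # C = 2: object moves independently, with a broad prior on its motion.
    like_c1 = norm.pdf(object_image_motion, loc=-vestibular_heading,
                       scale=np.hypot(sigma_vest, sigma_obj))
    like_c2 = norm.pdf(object_image_motion, loc=0.0,
                       scale=np.hypot(sigma_obj, sigma_world))
    post_c1 = (like_c1 * prior_stationary /
               (like_c1 * prior_stationary + like_c2 * (1.0 - prior_stationary)))

    # Heading estimate: under C = 1 the object's image motion is an extra heading
    # cue; under C = 2 it is ignored. Model averaging weights the two by post_c1.
    heading_c1 = np.average([vestibular_heading, -object_image_motion],
                            weights=[1.0 / sigma_vest**2, 1.0 / sigma_obj**2])
    heading_c2 = vestibular_heading
    return post_c1, post_c1 * heading_c1 + (1.0 - post_c1) * heading_c2

print(causal_inference(10.0, -11.0))   # image motion consistent with self-motion -> likely stationary
print(causal_inference(10.0, 25.0))    # inconsistent image motion -> likely an independently moving object
```

This captures the qualitative prediction tested in the abstract: as object speed in the world increases, the posterior probability of stationarity falls and the object's influence on (and bias in) the heading estimate declines.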


2014 ◽  
Vol 112 (10) ◽  
pp. 2470-2480 ◽  
Author(s):  
Andre Kaminiarz ◽  
Anja Schlack ◽  
Klaus-Peter Hoffmann ◽  
Markus Lappe ◽  
Frank Bremmer

The patterns of optic flow seen during self-motion can be used to determine the direction of one's own heading. Tracking eye movements, which typically occur during everyday life, alter this task since they add further retinal image motion and (predictably) distort the retinal flow pattern. Humans employ both visual and nonvisual (extraretinal) information to solve a heading task in such cases. Likewise, it has been shown that neurons in the monkey medial superior temporal area (area MST) use both signals during the processing of self-motion information. In this article we report that neurons in the macaque ventral intraparietal area (area VIP) use visual information derived from the distorted flow patterns to encode heading during (simulated) eye movements. We recorded responses of VIP neurons to simple radial flow fields and to distorted flow fields that simulated self-motion plus eye movements. In 59% of the cases, cell responses compensated for the distortion and kept the same heading selectivity irrespective of different simulated eye movements. In addition, response modulations during real compared with simulated eye movements were smaller, consistent with reafferent signaling involved in the processing of the visual consequences of eye movements in area VIP. We conclude that the motion selectivities found in area VIP, like those in area MST, provide a way to successfully analyze and use flow fields during self-motion and simultaneous tracking movements.


1996 ◽  
Vol 82 (2) ◽  
pp. 627-635 ◽  
Author(s):  
Shinji Nakamura

To investigate the effects of background stimulation upon eye-movement information (EMI), the perceived deceleration of target motion during pursuit eye movements (the Aubert-Fleischl paradox) was analyzed. In the experiment, a striped pattern with various brightness contrasts and spatial frequencies was used as the background stimulus, to systematically manipulate its attributes. Analysis showed that the retinal-image motion of the background stimulus (optic flow) affected eye-movement information and that the effects of optic flow became stronger when high-contrast, low-spatial-frequency stripes were presented as the background stimulus. In conclusion, optic flow is one source of eye-movement information in determining real object motion, and its effectiveness depends on the attributes of the background stimulus.


2019 ◽  
Vol 121 (4) ◽  
pp. 1207-1221 ◽  
Author(s):  
Ryo Sasaki ◽  
Dora E. Angelaki ◽  
Gregory C. DeAngelis

Multiple areas of macaque cortex are involved in visual motion processing, but their relative functional roles remain unclear. The medial superior temporal (MST) area is typically divided into lateral (MSTl) and dorsal (MSTd) subdivisions that are thought to be involved in processing object motion and self-motion, respectively. Whereas MSTd has been studied extensively with regard to processing visual and nonvisual self-motion cues, little is known about self-motion signals in MSTl, especially nonvisual signals. Moreover, little is known about how self-motion and object motion signals interact in MSTl and how this differs from interactions in MSTd. We compared the visual and vestibular heading tuning of neurons in MSTl and MSTd using identical stimuli. Our findings reveal that both visual and vestibular heading signals are weaker in MSTl than in MSTd, suggesting that MSTl is less well suited to participate in self-motion perception than MSTd. We also tested neurons in both areas with a variety of combinations of object motion and self-motion. Our findings reveal that vestibular signals improve the separability of coding of heading and object direction in both areas, albeit more strongly in MSTd due to the greater strength of vestibular signals. Based on a marginalization technique, population decoding reveals that heading and object direction can be more effectively dissociated from MSTd responses than MSTl responses. Our findings help to clarify the respective contributions that MSTl and MSTd make to processing of object motion and self-motion, although our conclusions may be somewhat specific to the multipart moving objects that we employed. NEW & NOTEWORTHY Retinal image motion reflects contributions from both the observer’s self-motion and the movement of objects in the environment. The neural mechanisms by which the brain dissociates self-motion and object motion remain unclear. This study provides the first systematic examination of how the lateral subdivision of area MST (MSTl) contributes to dissociating object motion and self-motion. We also examine, for the first time, how MSTl neurons represent translational self-motion based on both vestibular and visual cues.


2013 ◽  
Vol 13 (9) ◽  
pp. 204-204 ◽  
Author(s):  
T. Uehara ◽  
Y. Tani ◽  
T. Nagai ◽  
K. Koida ◽  
S. Nakauchi ◽  
...  
