Processing of object motion and self-motion in the lateral subdivision of the medial superior temporal area in macaques

2019 ◽  
Vol 121 (4) ◽  
pp. 1207-1221 ◽  
Author(s):  
Ryo Sasaki ◽  
Dora E. Angelaki ◽  
Gregory C. DeAngelis

Multiple areas of macaque cortex are involved in visual motion processing, but their relative functional roles remain unclear. The medial superior temporal (MST) area is typically divided into lateral (MSTl) and dorsal (MSTd) subdivisions that are thought to be involved in processing object motion and self-motion, respectively. Whereas MSTd has been studied extensively with regard to processing visual and nonvisual self-motion cues, little is known about self-motion signals in MSTl, especially nonvisual signals. Moreover, little is known about how self-motion and object motion signals interact in MSTl and how this differs from interactions in MSTd. We compared the visual and vestibular heading tuning of neurons in MSTl and MSTd using identical stimuli. Our findings reveal that both visual and vestibular heading signals are weaker in MSTl than in MSTd, suggesting that MSTl is less well suited to participate in self-motion perception than MSTd. We also tested neurons in both areas with a variety of combinations of object motion and self-motion. Our findings reveal that vestibular signals improve the separability of coding of heading and object direction in both areas, albeit more strongly in MSTd due to the greater strength of vestibular signals. Based on a marginalization technique, population decoding reveals that heading and object direction can be more effectively dissociated from MSTd responses than MSTl responses. Our findings help to clarify the respective contributions that MSTl and MSTd make to processing of object motion and self-motion, although our conclusions may be somewhat specific to the multipart moving objects that we employed. NEW & NOTEWORTHY Retinal image motion reflects contributions from both the observer’s self-motion and the movement of objects in the environment. The neural mechanisms by which the brain dissociates self-motion and object motion remain unclear. 
This study provides the first systematic examination of how the lateral subdivision of area MST (MSTl) contributes to dissociating object motion and self-motion. We also examine, for the first time, how MSTl neurons represent translational self-motion based on both vestibular and visual cues.
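The marginalization-based population decoding mentioned above can be sketched in miniature: build a joint likelihood over heading and object direction from a population response, then sum out the nuisance variable before reading out each quantity. Everything below (Gaussian joint tuning, grid sizes, peak rates) is an illustrative stand-in, not the authors' actual population or decoder:

```python
import numpy as np

rng = np.random.default_rng(0)

# Grids over heading and object direction (degrees).
headings = np.arange(-180.0, 180.0, 10.0)
obj_dirs = np.arange(-180.0, 180.0, 10.0)

def circ_dist(a, b):
    return (a - b + 180.0) % 360.0 - 180.0

# Hypothetical population: 36 neurons jointly tuned to heading and object
# direction (Gaussian bumps on the circle, width 40 deg, peak 20 spikes/s).
prefs = [(ph, po) for ph in np.arange(-180.0, 180.0, 60.0)
                  for po in np.arange(-180.0, 180.0, 60.0)]

def mean_rates(h, o):
    return np.array([20.0 * np.exp(-(circ_dist(h, ph) ** 2 + circ_dist(o, po) ** 2)
                                   / (2 * 40.0 ** 2)) for ph, po in prefs])

true_h, true_o = 30.0, -90.0
r = rng.poisson(mean_rates(true_h, true_o))  # one noisy population response

# Poisson log likelihood on the joint (heading, object-direction) grid.
H, O = np.meshgrid(headings, obj_dirs, indexing="ij")
logL = np.zeros_like(H)
for ri, (ph, po) in zip(r, prefs):
    f = 20.0 * np.exp(-(circ_dist(H, ph) ** 2 + circ_dist(O, po) ** 2) / (2 * 40.0 ** 2))
    logL += ri * np.log(f) - f

# Marginalize out the nuisance variable (log-sum-exp over one axis) and
# decode each variable from its own marginal.
m = logL.max()
dec_h = headings[np.argmax(np.log(np.exp(logL - m).sum(axis=1)))]
dec_o = obj_dirs[np.argmax(np.log(np.exp(logL - m).sum(axis=0)))]
```

With joint tuning like this, the two marginals recover heading and object direction separately even though every neuron confounds them; how cleanly they separate depends on the population, which is the comparison the study makes between MSTd and MSTl.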

1988 ◽  
Vol 60 (3) ◽  
pp. 940-965 ◽  
Author(s):  
M. R. Dursteler ◽  
R. H. Wurtz

1. Previous experiments have shown that punctate chemical lesions within the middle temporal area (MT) of the superior temporal sulcus (STS) produce deficits in the initiation and maintenance of pursuit eye movements (10, 34). The present experiments were designed to test the effect of such chemical lesions in an area within the STS to which MT projects, the medial superior temporal area (MST). 2. We injected ibotenic acid into localized regions of MST, and we observed two deficits in pursuit eye movements, a retinotopic deficit and a directional deficit. 3. The retinotopic deficit in pursuit initiation was characterized by the monkey's inability to match eye speed to target speed or to adjust the amplitude of the saccade made to acquire the target to compensate for target motion. This deficit was related to the initiation of pursuit to targets moving in any direction in the visual field contralateral to the side of the brain with the lesion. This deficit was similar to the deficit we found following damage to extrafoveal MT except that the affected area of the visual field frequently extended throughout the entire contralateral visual field tested. 4. The directional deficit in pursuit maintenance was characterized by a failure to match eye speed to target speed once the fovea had been brought near the moving target. This deficit occurred only when the target was moving toward the side of the lesion, regardless of whether the target began to move in the ipsilateral or contralateral visual field. There was no deficit in the amplitude of saccades made to acquire the target, or in the amplitude of the catch-up saccades made to compensate for the slowed pursuit. The directional deficit is similar to the one we described previously following chemical lesions of the foveal representation in the STS. 5. Retinotopic deficits resulted from any of our injections in MST. 
Directional deficits resulted from lesions limited to subregions within MST, particularly lesions that invaded the floor of the STS and the posterior bank of the STS just lateral to MT. Extensive damage to the densely myelinated area of the anterior bank or to the posterior parietal area on the dorsal lip of the anterior bank produced minimal directional deficits. 6. We conclude that damage to visual motion processing in MST underlies the retinotopic pursuit deficit just as it does in MT. MST appears to be a sequential step in visual motion processing that occurs before all of the visual motion information is transmitted to the brainstem areas related to pursuit.(ABSTRACT TRUNCATED AT 400 WORDS)


Author(s):  
Tyler S. Manning ◽  
Kenneth H. Britten

The ability to see motion is critical to survival in a dynamic world. Decades of physiological research have established that motion perception is a distinct sub-modality of vision supported by a network of specialized structures in the nervous system. These structures are arranged hierarchically according to the spatial scale of the calculations they perform, with more local operations preceding those that are more global. The different operations serve distinct purposes, from the interception of small moving objects to the calculation of self-motion from image motion spanning the entire visual field. Each cortical area in the hierarchy has an independent representation of visual motion. These representations, together with computational accounts of their roles, provide clues to the functions of each area. Comparisons between neural activity in these areas and psychophysical performance can identify which representations are sufficient to support motion perception. Experimental manipulation of this activity can also define which areas are necessary for motion-dependent behaviors like self-motion guidance.


2021 ◽  
Vol 118 (32) ◽  
pp. e2106235118
Author(s):  
Reuben Rideaux ◽  
Katherine R. Storrs ◽  
Guido Maiello ◽  
Andrew E. Welchman

Sitting in a static railway carriage can produce illusory self-motion if the train on an adjoining track moves off. While our visual system registers motion, vestibular signals indicate that we are stationary. The brain is faced with a difficult challenge: is there a single cause of sensations (I am moving) or two causes (I am static, another train is moving)? If a single cause, integrating signals produces a more precise estimate of self-motion, but if not, one cue should be ignored. In many cases, this process of causal inference works without error, but how does the brain achieve it? Electrophysiological recordings show that the macaque medial superior temporal area contains many neurons that encode combinations of vestibular and visual motion cues. Some respond best to vestibular and visual motion in the same direction (“congruent” neurons), while others prefer opposing directions (“opposite” neurons). Congruent neurons could underlie cue integration, but the function of opposite neurons remains a puzzle. Here, we seek to explain this computational arrangement by training a neural network model to solve causal inference for motion estimation. Like biological systems, the model develops congruent and opposite units and recapitulates known behavioral and neurophysiological observations. We show that all units (both congruent and opposite) contribute to motion estimation. Importantly, however, it is the balance between their activity that distinguishes whether visual and vestibular cues should be integrated or separated. This explains the computational purpose of puzzling neural representations and shows how a relatively simple feedforward network can solve causal inference.
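The proposed readout, in which the balance of congruent versus opposite activity signals whether to integrate or separate the cues, can be illustrated with a toy population. The cosine tuning and peak-response readout below are illustrative assumptions, not the trained network from the study:

```python
import numpy as np

def unit_response(vis_dir, vest_dir, pref, congruent=True):
    """Toy rate of a multisensory unit with cosine tuning to both cues (deg).
    Congruent units prefer the same direction for both cues; opposite units
    prefer visual and vestibular directions 180 deg apart."""
    rad = np.deg2rad
    vest_pref = pref if congruent else pref + 180.0
    return (1 + np.cos(rad(vis_dir - pref))) + (1 + np.cos(rad(vest_dir - vest_pref)))

prefs = np.arange(0.0, 360.0, 45.0)

def best_responses(vis_dir, vest_dir):
    cong = max(unit_response(vis_dir, vest_dir, p, True) for p in prefs)
    opp = max(unit_response(vis_dir, vest_dir, p, False) for p in prefs)
    return cong, opp

# Cues agree: the best-driven congruent unit dominates -> integrate.
cong, opp = best_responses(0.0, 0.0)
agree_ratio = cong / opp

# Cues conflict by 180 deg: the best-driven opposite unit dominates -> separate.
cong, opp = best_responses(0.0, 180.0)
conflict_ratio = cong / opp
```

When the cues agree, some congruent unit is driven strongly by both, while no opposite unit can be; under full conflict the situation reverses, so the congruent/opposite balance carries exactly the integrate-versus-separate signal the abstract describes.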


2019 ◽  
Vol 116 (18) ◽  
pp. 9060-9065 ◽  
Author(s):  
Kalpana Dokka ◽  
Hyeshin Park ◽  
Michael Jansen ◽  
Gregory C. DeAngelis ◽  
Dora E. Angelaki

The brain infers our spatial orientation and properties of the world from ambiguous and noisy sensory cues. Judging self-motion (heading) in the presence of independently moving objects poses a challenging inference problem because the image motion of an object could be attributed to movement of the object, self-motion, or some combination of the two. We test whether perception of heading and object motion follows predictions of a normative causal inference framework. In a dual-report task, subjects indicated whether an object appeared stationary or moving in the virtual world, while simultaneously judging their heading. Consistent with causal inference predictions, the proportion of object stationarity reports, as well as the accuracy and precision of heading judgments, depended on the speed of object motion. Critically, biases in perceived heading declined when the object was perceived to be moving in the world. Our findings suggest that the brain interprets object motion and self-motion using a causal inference framework.
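The normative framework being tested can be sketched with a standard Gaussian causal-inference model in the style of Körding et al. (2007): compute the likelihood of the visual and vestibular heading measurements under a single common cause (object stationary) versus two causes (object moving), then model-average the heading estimate. The noise parameters and prior below are illustrative, not values fitted in the study:

```python
import math

def gauss(x, mu, var):
    return math.exp(-0.5 * (x - mu) ** 2 / var) / math.sqrt(2 * math.pi * var)

def causal_inference(x_vis, x_vest, sig_vis=2.0, sig_vest=4.0, sig_prior=20.0,
                     p_common=0.5):
    """Posterior probability of a common cause and the model-averaged
    heading estimate (deg). All parameter values are illustrative."""
    v1, v2, vp = sig_vis ** 2, sig_vest ** 2, sig_prior ** 2
    # Likelihood of the two measurements under one common cause (object
    # stationary in the world) vs. two independent causes (object moving).
    denom = v1 * v2 + v1 * vp + v2 * vp
    L_c1 = math.exp(-0.5 * ((x_vis - x_vest) ** 2 * vp
                            + x_vis ** 2 * v2 + x_vest ** 2 * v1) / denom) \
           / (2 * math.pi * math.sqrt(denom))
    L_c2 = gauss(x_vis, 0.0, v1 + vp) * gauss(x_vest, 0.0, v2 + vp)
    p_c1 = L_c1 * p_common / (L_c1 * p_common + L_c2 * (1 - p_common))
    # Heading estimate under each hypothesis (precision-weighted means with a
    # zero-mean prior); if the causes are separate, the flow is attributed
    # partly to the object, so heading relies on the vestibular cue alone.
    s_c1 = (x_vis / v1 + x_vest / v2) / (1 / v1 + 1 / v2 + 1 / vp)
    s_c2 = (x_vest / v2) / (1 / v2 + 1 / vp)
    return p_c1, p_c1 * s_c1 + (1 - p_c1) * s_c2
```

Consistent measurements yield a high probability of a common cause and an integrated heading estimate; a large discrepancy (fast object motion) drives that probability down and pulls the heading estimate toward the vestibular cue, reproducing the qualitative pattern of declining heading bias reported in the abstract.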


2014 ◽  
Vol 112 (10) ◽  
pp. 2470-2480 ◽  
Author(s):  
Andre Kaminiarz ◽  
Anja Schlack ◽  
Klaus-Peter Hoffmann ◽  
Markus Lappe ◽  
Frank Bremmer

The patterns of optic flow seen during self-motion can be used to determine the direction of one's own heading. Tracking eye movements, which typically occur during everyday life, alter this task since they add further retinal image motion and (predictably) distort the retinal flow pattern. Humans employ both visual and nonvisual (extraretinal) information to solve a heading task in such cases. Likewise, it has been shown that neurons in the monkey medial superior temporal area (area MST) use both signals during the processing of self-motion information. In this article we report that neurons in the macaque ventral intraparietal area (area VIP) use visual information derived from the distorted flow patterns to encode heading during (simulated) eye movements. We recorded responses of VIP neurons to simple radial flow fields and to distorted flow fields that simulated self-motion plus eye movements. In 59% of the cases, cell responses compensated for the distortion and kept the same heading selectivity irrespective of different simulated eye movements. In addition, response modulations during real compared with simulated eye movements were smaller, consistent with reafferent signaling involved in the processing of the visual consequences of eye movements in area VIP. We conclude that the motion selectivities found in area VIP, like those in area MST, provide a way to successfully analyze and use flow fields during self-motion and simultaneous tracking movements.


2016 ◽  
Vol 115 (1) ◽  
pp. 286-300 ◽  
Author(s):  
Oliver W. Layton ◽  
Brett R. Fajen

Many forms of locomotion rely on the ability to accurately perceive one's direction of locomotion (i.e., heading) based on optic flow. Although accurate in rigid environments, heading judgments may be biased when independently moving objects are present. The aim of this study was to systematically investigate the conditions in which moving objects influence heading perception, with a focus on the temporal dynamics and the mechanisms underlying this bias. Subjects viewed stimuli simulating linear self-motion in the presence of a moving object and judged their direction of heading. Experiments 1 and 2 revealed that heading perception is biased when the object crosses or almost crosses the observer's future path toward the end of the trial, but not when the object crosses earlier in the trial. Nonetheless, heading perception is not based entirely on the instantaneous optic flow toward the end of the trial. This was demonstrated in Experiment 3 by varying the portion of the earlier part of the trial leading up to the last frame that was presented to subjects. When the stimulus duration was long enough to include the part of the trial before the moving object crossed the observer's path, heading judgments were less biased. The findings suggest that heading perception is affected by the temporal evolution of optic flow. The time course of dorsal medial superior temporal area (MSTd) neuron responses may play a crucial role in perceiving heading in the presence of moving objects, a property not captured by many existing models.
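The distinction between instantaneous flow and its temporal evolution can be grounded in a numerical example: under pure translation, the optic flow radiates from a focus of expansion (FOE) that marks the heading, and a least-squares FOE estimate computed from a single flow snapshot is pulled off the true heading once an independently moving object's dots are included. The scene, object placement, and velocities below are a made-up minimal example, not the study's stimuli:

```python
import numpy as np

rng = np.random.default_rng(1)

# Image positions of scene dots on a fronto-parallel plane (focal length 1).
pts = rng.uniform(-0.5, 0.5, size=(200, 2))
Z = 10.0                               # distance of the plane
T = np.array([0.0, 0.0, 1.0])          # forward translation: true heading at (0, 0)

# Instantaneous translational optic flow: u = (x*Tz - Tx)/Z, v = (y*Tz - Ty)/Z.
def translational_flow(p):
    return np.column_stack([(p[:, 0] * T[2] - T[0]) / Z,
                            (p[:, 1] * T[2] - T[1]) / Z])

flow = translational_flow(pts)

# Independently moving object: dots in a vertical strip get a common
# rightward image-motion component (hypothetical object velocity).
obj = np.abs(pts[:, 0] - 0.2) < 0.1
flow_obj = flow.copy()
flow_obj[obj, 0] += 0.05

# Least-squares FOE (heading) estimate: each flow vector should point
# radially away from the FOE, i.e. (x - fx)*v - (y - fy)*u = 0.
def estimate_foe(p, f):
    A = np.column_stack([f[:, 1], -f[:, 0]])
    b = p[:, 0] * f[:, 1] - p[:, 1] * f[:, 0]
    foe, *_ = np.linalg.lstsq(A, b, rcond=None)
    return foe

foe_clean = estimate_foe(pts, flow)       # recovers the true heading
foe_biased = estimate_foe(pts, flow_obj)  # shifted away from the object's motion
```

A snapshot estimator like this is necessarily biased whenever object dots are mixed in; an observer exploiting the temporal evolution of the flow, as the findings suggest, could discount the object segment identified earlier in the trial.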


PLoS ONE ◽  
2021 ◽  
Vol 16 (6) ◽  
pp. e0253067
Author(s):  
Benedict Wild ◽  
Stefan Treue

Modern accounts of visual motion processing in the primate brain emphasize a hierarchy of different regions within the dorsal visual pathway, especially primary visual cortex (V1) and the middle temporal area (MT). However, recent studies have called the idea of a processing pipeline with fixed contributions to motion perception from each area into doubt. Instead, the role that each area plays appears to depend on properties of the stimulus as well as perceptual history. We propose to test this hypothesis in human subjects by comparing motion perception of two commonly used stimulus types: drifting sinusoidal gratings (DSGs) and random dot patterns (RDPs). To avoid potential biases in our approach we are pre-registering our study. We will compare the effects of size and contrast levels on the perception of the direction of motion for DSGs and RDPs. In addition, based on intriguing results in a pilot study, we will also explore the effects of a post-stimulus mask. Our approach will offer valuable insights into how motion is processed by the visual system and guide further behavioral and neurophysiological research.


2020 ◽  
Vol 117 (50) ◽  
pp. 32165-32168
Author(s):  
Arvid Guterstam ◽  
Michael S. A. Graziano

Recent evidence suggests a link between visual motion processing and social cognition. When person A watches person B, the brain of A apparently generates a fictitious, subthreshold motion signal streaming from B to the object of B’s attention. These previous studies, being correlative, were unable to establish any functional role for the false motion signals. Here, we directly tested whether subthreshold motion processing plays a role in judging the attention of others. We asked, if we contaminate people’s visual input with a subthreshold motion signal streaming from an agent to an object, can we manipulate people’s judgments about that agent’s attention? Participants viewed a display including faces, objects, and a subthreshold motion hidden in the background. Participants’ judgments of the attentional state of the faces were significantly altered by the hidden motion signal. Faces from which subthreshold motion was streaming toward an object were judged as paying more attention to the object. Control experiments showed the effect was specific to the agent-to-object motion direction and to judging attention, not action or spatial orientation. These results suggest that when the brain models other minds, it uses a subthreshold motion signal, streaming from an individual to an object, to help represent attentional state. This type of social-cognitive model, tapping perceptual mechanisms that evolved to process physical events in the real world, may help to explain the extraordinary cultural persistence of beliefs in mind processes having physical manifestation. These findings, therefore, may have larger implications for human psychology and cultural belief.


1988 ◽  
Vol 60 (2) ◽  
pp. 580-603 ◽  
Author(s):  
H. Komatsu ◽  
R. H. Wurtz

1. Among the multiple extrastriate visual areas in monkey cerebral cortex, several areas within the superior temporal sulcus (STS) are selectively related to visual motion processing. In this series of experiments we have attempted to relate this visual motion processing at a neuronal level to a behavior that is dependent on such processing, the generation of smooth-pursuit eye movements. 2. We studied two visual areas within the STS, the middle temporal area (MT) and the medial superior temporal area (MST). For the purposes of this study, MT and MST were defined functionally as those areas within the STS having a high proportion of directionally selective neurons. MST was distinguished from MT by using the established relationship of receptive-field size to eccentricity, with MST having larger receptive fields than MT. 3. A subset of these visually responsive cells within the STS was identified as pursuit cells--those cells that discharge during smooth pursuit of a small target in an otherwise dark room. Pursuit cells were found only in localized regions--in the foveal region of MT (MTf), in a dorsal-medial area of MST on the anterior bank of the STS (MSTd), and in a lateral-anterior area of MST on the floor and the posterior bank of the STS (MSTl). 4. Pursuit cells showed two characteristics in common when their visual properties were studied while the monkey was fixating. Almost all cells showed direction selectivity for moving stimuli and included the fovea within their receptive fields. 5. The visual responses of pursuit cells in the several areas differed in two ways. Cells in MTf preferred small moving spots of light, whereas cells in MSTd preferred large moving stimuli, such as a pattern of random dots. Cells in MTf had small receptive fields; those in MSTd usually had large receptive fields. Visual responses of pursuit neurons in MSTl were heterogeneous; some resembled those in MTf, whereas others were similar to those in MSTd.
This suggests that the pursuit cells in MSTd and MSTl belong to different subregions of MST.


PLoS ONE ◽  
2021 ◽  
Vol 16 (1) ◽  
pp. e0243381
Author(s):  
Meaghan McManus ◽  
Laurence R. Harris

Human perception is based on expectations. We expect visual upright and gravity upright, sensed through vision, vestibular and other sensory systems, to agree. Equally, we expect that visual and vestibular information about self-motion will correspond. What happens when these assumptions are violated? Tilting a person from upright so that gravity is not where it should be impacts both visually induced self-motion (vection) and the perception of upright. How might the two be connected? Using virtual reality, we varied the strength of visual orientation cues, and hence the probability of participants experiencing a visual reorientation illusion (VRI) in which visual cues to orientation dominate gravity, using an oriented corridor and a starfield while also varying head-on-trunk orientation and body posture. The effectiveness of the optic flow in simulating self-motion was assessed by how much visual motion was required to evoke the perception that the participant had reached the position of a previously presented target. VRI was assessed by questionnaire. When participants reported higher levels of VRI they also required less visual motion to evoke the sense of traveling through a given distance, regardless of head or body posture, or the type of visual environment. We conclude that experiencing a VRI, in which visual-vestibular conflict is resolved and the direction of upright is reinterpreted, affects the effectiveness of optic flow at simulating motion through the environment. Therefore, any apparent effects of head or body posture or type of environment are largely indirect effects related, instead, to the level of VRI experienced by the observer. We discuss potential mechanisms for this such as reinterpreting gravity information or altering the weighting of orientation cues.

