Acoustic facilitation of object movement detection during self-motion

2011, Vol. 278(1719), pp. 2840-2847
Author(s): F. J. Calabro, S. Soto-Faraco, L. M. Vaina

In humans, as well as most animal species, perception of object motion is critical to successful interaction with the surrounding environment. Yet, as the observer also moves, the retinal projections of the various motion components add to each other and extracting accurate object motion becomes computationally challenging. Recent psychophysical studies have demonstrated that observers use a flow-parsing mechanism to estimate and subtract self-motion from the optic flow field. We investigated whether concurrent acoustic cues for motion can facilitate visual flow parsing, thereby enhancing the detection of moving objects during simulated self-motion. Participants identified an object (the target) that moved either forward or backward within a visual scene containing nine identical textured objects simulating forward observer translation. We found that spatially co-localized, directionally congruent, moving auditory stimuli enhanced object motion detection. Interestingly, subjects who performed poorly on the visual-only task benefited more from the addition of moving auditory stimuli. When auditory stimuli were not co-localized to the visual target, improvements in detection rates were weak. Taken together, these results suggest that parsing object motion from self-motion-induced optic flow can operate on multisensory object representations.
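
The flow-parsing idea described above lends itself to a small worked example. The numpy sketch below is illustrative only, not the study's stimulus or model: the retinal motion of each element is taken as the sum of an optic-flow component due to forward self-motion and any scene-relative object motion, so estimating the global flow from the background and subtracting it at the target recovers the object's motion. All positions, speeds, and the pure-expansion flow model are assumptions made up for the example.

```python
import numpy as np

def radial_flow(points, heading=(0.0, 0.0), speed=1.0):
    """Optic flow for simulated forward translation: expansion away from the
    focus of expansion at `heading`, assuming a stationary scene."""
    return speed * (points - np.asarray(heading))

rng = np.random.default_rng(0)
background = rng.uniform(-1, 1, size=(200, 2))   # image positions of static scene elements
target = np.array([[0.3, 0.2]])                  # image position of the candidate object
object_motion = np.array([[0.05, 0.0]])          # its scene-relative (world) motion

retinal_background = radial_flow(background)
retinal_target = radial_flow(target) + object_motion

# Flow parsing: estimate the self-motion flow field from the background
# (here, by estimating the expansion rate) and subtract it at the target.
speed_hat = np.mean(np.sum(retinal_background * background, axis=1) /
                    np.sum(background * background, axis=1))
recovered = retinal_target - radial_flow(target, speed=speed_hat)
print(recovered)   # approximately the scene-relative object motion [0.05, 0.0]
```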

2019, Vol. 116(18), pp. 9060-9065
Author(s): Kalpana Dokka, Hyeshin Park, Michael Jansen, Gregory C. DeAngelis, Dora E. Angelaki

The brain infers our spatial orientation and properties of the world from ambiguous and noisy sensory cues. Judging self-motion (heading) in the presence of independently moving objects poses a challenging inference problem because the image motion of an object could be attributed to movement of the object, self-motion, or some combination of the two. We test whether perception of heading and object motion follows predictions of a normative causal inference framework. In a dual-report task, subjects indicated whether an object appeared stationary or moving in the virtual world, while simultaneously judging their heading. Consistent with causal inference predictions, the proportion of object stationarity reports, as well as the accuracy and precision of heading judgments, depended on the speed of object motion. Critically, biases in perceived heading declined when the object was perceived to be moving in the world. Our findings suggest that the brain interprets object motion and self-motion using a causal inference framework.
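
As a rough illustration of the causal-inference logic described above (not the authors' fitted model), the sketch below compares the object's retinal speed with the speed predicted if the object were stationary in the world and its image motion arose purely from self-motion; the posterior probability of stationarity falls as the discrepancy grows. The priors, noise levels, and one-dimensional geometry are assumptions invented for the example.

```python
from scipy.stats import norm

def p_stationary(object_retinal_speed, predicted_from_heading,
                 sigma=1.0, prior_stationary=0.7, sigma_moving=5.0):
    """Posterior probability that the object is stationary in the world."""
    like_stationary = norm.pdf(object_retinal_speed,
                               loc=predicted_from_heading, scale=sigma)
    # If the object moves independently, its retinal speed is only loosely
    # constrained; model that with a broad likelihood.
    like_moving = norm.pdf(object_retinal_speed,
                           loc=predicted_from_heading, scale=sigma_moving)
    post = prior_stationary * like_stationary
    return post / (post + (1 - prior_stationary) * like_moving)

# Slow independent object motion is mostly attributed to self-motion;
# fast object motion is not (stationarity reports decline with object speed).
for extra_speed in (0.0, 1.0, 4.0):
    print(extra_speed, round(p_stationary(2.0 + extra_speed, 2.0), 3))
```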


2016, Vol. 115(1), pp. 286-300
Author(s): Oliver W. Layton, Brett R. Fajen

Many forms of locomotion rely on the ability to accurately perceive one's direction of locomotion (i.e., heading) based on optic flow. Although accurate in rigid environments, heading judgments may be biased when independently moving objects are present. The aim of this study was to systematically investigate the conditions in which moving objects influence heading perception, with a focus on the temporal dynamics and the mechanisms underlying this bias. Subjects viewed stimuli simulating linear self-motion in the presence of a moving object and judged their direction of heading. Experiments 1 and 2 revealed that heading perception is biased when the object crosses or almost crosses the observer's future path toward the end of the trial, but not when the object crosses earlier in the trial. Nonetheless, heading perception is not based entirely on the instantaneous optic flow toward the end of the trial. This was demonstrated in Experiment 3 by varying the portion of the earlier part of the trial leading up to the last frame that was presented to subjects. When the stimulus duration was long enough to include the part of the trial before the moving object crossed the observer's path, heading judgments were less biased. The findings suggest that heading perception is affected by the temporal evolution of optic flow. The time course of dorsal medial superior temporal area (MSTd) neuron responses may play a crucial role in perceiving heading in the presence of moving objects, a property not captured by many existing models.


Vision, 2019, Vol. 3(2), p. 13
Author(s): Pearl Guterman, Robert Allison

When the head is tilted, an objectively vertical line viewed in isolation is typically perceived as tilted. We explored whether this shift also occurs when viewing global motion displays perceived as either object-motion or self-motion. Observers stood and lay left side down while viewing (1) a static line, (2) a random-dot display of 2-D (planar) motion or (3) a random-dot display of 3-D (volumetric) global motion. On each trial, the line orientation or motion direction was tilted from the gravitational vertical and observers indicated whether the tilt was clockwise or counter-clockwise from the perceived vertical. Psychometric functions were fit to the data and shifts in the point of subjective verticality (PSV) were measured. When the whole body was tilted, the perceived tilt of both a static line and the direction of optic flow were biased in the direction of the body tilt, demonstrating the so-called A-effect. However, we found significantly larger shifts for the static line than for volumetric global motion, as well as larger shifts for volumetric displays than for planar displays. The A-effect was larger when the motion was experienced as self-motion than when it was experienced as object-motion, and discrimination was also more precise (smaller thresholds) in the self-motion than in the object-motion conditions. The different magnitudes of the A-effect for the line and motion conditions, and for object-motion and self-motion, may be due to differences in how idiotropic (body) and vestibular signals are combined, particularly in the case of vection, which occurs despite visual-vestibular conflict.
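
The psychometric analysis mentioned above can be sketched in a few lines: fit a cumulative Gaussian to the proportion of "clockwise" responses as a function of stimulus tilt, read off the PSV as its 50% point, and take the discrimination threshold from its spread. The response proportions below are invented for illustration and are not the study's data.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def psychometric(tilt_deg, psv, sigma):
    """Probability of a 'clockwise' response as a function of stimulus tilt."""
    return norm.cdf(tilt_deg, loc=psv, scale=sigma)

tilts = np.array([-12, -8, -4, 0, 4, 8, 12], dtype=float)       # deg from gravitational vertical
p_cw = np.array([0.02, 0.10, 0.25, 0.55, 0.85, 0.95, 0.99])     # illustrative proportions

(psv, sigma), _ = curve_fit(psychometric, tilts, p_cw, p0=(0.0, 4.0))
print(f"PSV shift = {psv:.2f} deg, discrimination threshold ~ {sigma:.2f} deg")
```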


i-Perception, 2017, Vol. 8(3), pp. 204166951770820
Author(s): Diederick C. Niehorster, Li Li

How do we perceive object motion during self-motion using visual information alone? Previous studies have reported that the visual system can use optic flow to identify and globally subtract the retinal motion component resulting from self-motion to recover scene-relative object motion, a process called flow parsing. In this article, we developed a retinal motion nulling method to directly measure and quantify the magnitude of flow parsing (i.e., the flow parsing gain) in various scenarios, in order to examine the accuracy and tuning of flow parsing for the visual perception of object motion during self-motion. We found that flow parsing gains were below unity for all displays in all experiments and that increasing self-motion and object motion speed did not alter the flow parsing gain. We conclude that visual information alone is not sufficient for the accurate perception of scene-relative motion during self-motion. Although flow parsing performs global subtraction, its accuracy also depends on local motion information in the retinal vicinity of the moving object. Furthermore, the flow parsing gain was constant across common self-motion and object motion speeds. These results can be used to inform and validate computational models of flow parsing.
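
A flow parsing gain of this kind can be pictured with a toy calculation. The simple ratio and the numbers below are assumptions for illustration, not the paper's exact nulling procedure: if self-motion adds a known component to the object's retinal motion, the gain is the fraction of that component the observer discounts when judging scene-relative motion, and a value below 1 means the subtraction is incomplete.

```python
def flow_parsing_gain(nulled_retinal_speed, self_motion_component):
    """Fraction of the self-motion-induced retinal component that is
    discounted when judging scene-relative object motion (1.0 = complete)."""
    return nulled_retinal_speed / self_motion_component

# Example: simulated self-motion adds 4 deg/s to the object's retinal motion,
# but the object appears scene-stationary when given only 3 deg/s of opposing
# retinal motion, implying a flow parsing gain of 0.75 (below unity).
print(flow_parsing_gain(3.0, 4.0))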


Author(s): Katja M. Mayer, Hugh Riddell, Markus Lappe

Flow parsing is a way to estimate the direction of scene-relative motion of independently moving objects during self-motion of the observer. So far, this has been tested for simple geometric shapes such as dots or bars. Whether further cues such as prior knowledge about typical directions of an object’s movement, e.g., typical human motion, are considered in the estimations is currently unclear. Here, we adjudicated between the theory that the direction of scene-relative motion of humans is estimated exclusively by flow parsing, just like for simple geometric objects, and the theory that prior knowledge about biological motion affects estimation of perceived direction of scene-relative motion of humans. We placed a human point-light walker in optic flow fields that simulated forward motion of the observer. We introduced conflicts between biological features of the walker (i.e., facing and articulation) and the direction of scene-relative motion. We investigated whether perceived direction of scene-relative motion was biased towards biological features and compared the results to perceived direction of scene-relative motion of scrambled walkers and dot clouds. We found that for humans the perceived direction of scene-relative motion was biased towards biological features. Additionally, we found larger flow parsing gain for humans compared to the other walker types. This indicates that flow parsing is not the only visual mechanism relevant for estimating the direction of scene-relative motion of independently moving objects during self-motion: observers also rely on prior knowledge about typical object motion, such as typical facing and articulation of humans.
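
One simple way to picture the conclusion above, offered purely as an illustrative assumption rather than the authors' model, is a weighted combination of the flow-parsing direction estimate with a prior expectation derived from the walker's biological features (here, its facing direction); the weight and the vector-averaging rule are invented for the example.

```python
import numpy as np

def perceived_direction(flow_parsing_deg, facing_deg, w_prior=0.3):
    """Combine two direction estimates on the circle via weighted unit vectors."""
    angles = np.radians([flow_parsing_deg, facing_deg])
    weights = np.array([1.0 - w_prior, w_prior])
    vec = weights @ np.column_stack([np.cos(angles), np.sin(angles)])
    return np.degrees(np.arctan2(vec[1], vec[0]))

# The walker faces rightward (0 deg) but flow parsing says it drifts at 40 deg:
# the perceived scene-relative direction is biased toward the facing direction.
print(round(perceived_direction(40.0, 0.0), 1))
```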


2019, Vol. 121(4), pp. 1207-1221
Author(s): Ryo Sasaki, Dora E. Angelaki, Gregory C. DeAngelis

Multiple areas of macaque cortex are involved in visual motion processing, but their relative functional roles remain unclear. The medial superior temporal (MST) area is typically divided into lateral (MSTl) and dorsal (MSTd) subdivisions that are thought to be involved in processing object motion and self-motion, respectively. Whereas MSTd has been studied extensively with regard to processing visual and nonvisual self-motion cues, little is known about self-motion signals in MSTl, especially nonvisual signals. Moreover, little is known about how self-motion and object motion signals interact in MSTl and how this differs from interactions in MSTd. We compared the visual and vestibular heading tuning of neurons in MSTl and MSTd using identical stimuli. Our findings reveal that both visual and vestibular heading signals are weaker in MSTl than in MSTd, suggesting that MSTl is less well suited to participate in self-motion perception than MSTd. We also tested neurons in both areas with a variety of combinations of object motion and self-motion. Our findings reveal that vestibular signals improve the separability of coding of heading and object direction in both areas, albeit more strongly in MSTd due to the greater strength of vestibular signals. Based on a marginalization technique, population decoding reveals that heading and object direction can be more effectively dissociated from MSTd responses than MSTl responses. Our findings help to clarify the respective contributions that MSTl and MSTd make to processing of object motion and self-motion, although our conclusions may be somewhat specific to the multipart moving objects that we employed.

NEW & NOTEWORTHY: Retinal image motion reflects contributions from both the observer’s self-motion and the movement of objects in the environment. The neural mechanisms by which the brain dissociates self-motion and object motion remain unclear. This study provides the first systematic examination of how the lateral subdivision of area MST (MSTl) contributes to dissociating object motion and self-motion. We also examine, for the first time, how MSTl neurons represent translational self-motion based on both vestibular and visual cues.
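
The marginalization-based decoding mentioned above can be sketched as follows: given neurons tuned jointly to heading and object direction, compute the posterior over the joint grid from a population response, then sum over object direction to read out heading alone. Gaussian tuning, Poisson noise, and all parameter values here are illustrative assumptions, not the recorded MSTd/MSTl data or the study's decoder.

```python
import numpy as np

rng = np.random.default_rng(1)
headings = np.linspace(-90, 90, 37)          # candidate headings (deg)
obj_dirs = np.linspace(-90, 90, 37)          # candidate object directions (deg)
H, O = np.meshgrid(headings, obj_dirs, indexing="ij")

n_neurons = 60
pref_h = rng.uniform(-90, 90, n_neurons)     # assumed heading preferences
pref_o = rng.uniform(-90, 90, n_neurons)     # assumed object-direction preferences

def rates(h, o):
    """Mean firing rates (Hz) of the population for heading h and object direction o."""
    return 5 + 20 * np.exp(-((h - pref_h) ** 2 + (o - pref_o) ** 2) / (2 * 30.0 ** 2))

# Simulated population response to one stimulus combination.
true_h, true_o = 20.0, -40.0
r = rng.poisson(rates(true_h, true_o))

# Poisson log-likelihood over the joint (heading, object direction) grid.
loglik = np.zeros_like(H)
for i in range(H.shape[0]):
    for j in range(H.shape[1]):
        lam = rates(H[i, j], O[i, j])
        loglik[i, j] = np.sum(r * np.log(lam) - lam)

post = np.exp(loglik - loglik.max())
post /= post.sum()
heading_posterior = post.sum(axis=1)         # marginalize over object direction
print("decoded heading:", headings[np.argmax(heading_posterior)])
```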


2012, Vol. 107(12), pp. 3446-3457
Author(s): Pei Liang, Jochen Heitwerth, Roland Kern, Rafael Kurtz, Martin Egelhaaf

Three motion-sensitive key elements of a neural circuit, presumably involved in processing object and distance information, were analyzed with optic flow sequences as experienced by blowflies in a three-dimensional environment. This optic flow is largely shaped by the blowfly's saccadic flight and gaze strategy, which separates translational flight segments from fast saccadic rotations. By modifying this naturalistic optic flow, all three analyzed neurons could be shown to respond during the intersaccadic intervals not only to nearby objects but also to changes in the distance to background structures. In the presence of strong background motion, the three types of neuron differ in their sensitivity to object motion. Object-induced response increments are largest in FD1, a neuron long known to respond better to moving objects than to spatially extended motion patterns, but weakest in VCH, a neuron that integrates wide-field motion from both eyes and, by inhibiting the FD1 cell, is responsible for its object preference. Small but significant object-induced response increments are present in HS cells, which serve both as major input neurons of VCH and as output neurons of the visual system. In both HS and FD1, intersaccadic background responses decrease with increasing distance to the animal, although much more prominently in FD1. This strong dependence of FD1 on background distance is concluded to be a consequence of VCH, which dramatically increases its activity, and thus its inhibitory strength, with increasing distance.


2021
Author(s): HyungGoo Kim, Dora Angelaki, Gregory DeAngelis

Detecting objects that move in a scene is a fundamental computation performed by the visual system. This computation is greatly complicated by observer motion, which causes most objects to move across the retinal image. How the visual system detects scene-relative object motion during self-motion is poorly understood. Human behavioral studies suggest that the visual system may identify local conflicts between motion parallax and binocular disparity cues to depth, and may use these signals to detect moving objects. We describe a novel mechanism for performing this computation based on neurons in macaque area MT with incongruent depth tuning for binocular disparity and motion parallax cues. Neurons with incongruent tuning respond selectively to scene-relative object motion, and their responses are predictive of perceptual decisions when animals are trained to detect a moving object during self-motion. This finding establishes a novel functional role for neurons with incongruent tuning for multiple depth cues.
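
The proposed computation can be caricatured in a few lines: for a laterally translating observer, the retinal speed of a scene-stationary point is proportional to its nearness, so a point whose parallax-implied nearness disagrees with its disparity-defined nearness is a candidate moving object. The geometry, units, and threshold below are illustrative assumptions, not the model or stimuli used in the study.

```python
def nearness_from_parallax(retinal_speed, observer_speed):
    """Nearness (inverse depth) implied by motion parallax for a laterally
    translating observer, assuming the point is stationary in the scene."""
    return retinal_speed / observer_speed

def flags_moving(retinal_speed, disparity_nearness, observer_speed, tol=0.2):
    """Flag a local conflict between parallax- and disparity-defined depth."""
    mismatch = abs(nearness_from_parallax(retinal_speed, observer_speed)
                   - disparity_nearness)
    return mismatch > tol

observer_speed = 1.0  # arbitrary units
# Scene-stationary point: retinal speed consistent with its disparity-defined nearness.
print(flags_moving(0.5, 0.5, observer_speed))   # False
# Moving object: extra retinal motion, inconsistent with its nearness.
print(flags_moving(1.2, 0.5, observer_speed))   # True
```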


Perception, 1998, Vol. 27(9), pp. 1067-1077
Author(s): Stephen Palmisano, Barbara Gillam

While early research suggested that peripheral vision dominates the perception of self-motion, subsequent studies found little or no effect of stimulus eccentricity. In contradiction to these broad notions of ‘peripheral dominance’ and ‘eccentricity independence’, the present experiments showed that the spatial frequency of optic flow interacts with its eccentricity to determine circular vection magnitude—central stimulation producing the most compelling vection for high-spatial-frequency stimuli and peripheral stimulation producing the most compelling vection for lower-spatial-frequency stimuli. This interaction appeared to be due, at least in part, to the effect that the higher-spatial-frequency moving pattern had on subjects’ ability to organise the optic flow into rotary motion about a single axis. For example, far-peripheral exposure to this high-spatial-frequency pattern caused many subjects to organise the optic flow into independent local regions of motion (a situation which clearly favoured the perception of object motion rather than self-motion). It is concluded that both high-spatial-frequency and low-spatial-frequency mechanisms are involved in the visual perception of self-motion, with their activities depending on the nature and eccentricity of the motion stimulation.

