Influence of visual blur on object-motion detection, self-motion detection and postural balance

1990 ◽  
Vol 40 (1) ◽  
pp. 1-6 ◽  
Author(s):  
A. Straube ◽  
W. Paulus ◽  
T. Brandt
2020 ◽  
Author(s):  
Maria-Bianca Leonte ◽  
Aljoscha Leonhardt ◽  
Alexander Borst ◽  
Alex S. Mauss

Visual motion detection is among the best understood neuronal computations. One assumed behavioural role is to detect self-motion and to counteract involuntary course deviations, as extensively investigated in tethered walking or flying flies. In free flight, however, any deviation from a straight course is signalled both by the visual system and by proprioceptive mechanoreceptors called ‘halteres’, the fly's equivalent of the vestibular system in vertebrates. It therefore remains unclear to what extent motion vision contributes to course control, or whether straight flight is controlled entirely by proprioceptive feedback from the halteres. To answer these questions, we genetically rendered flies motion-blind by blocking their primary motion-sensitive neurons and quantified their free-flight performance. We found that such flies have difficulty maintaining a straight flight trajectory, much like control flies in the dark. By unilateral wing clipping, we generated an asymmetry in propulsive force and tested the flies' ability to compensate for this perturbation. While wild-type flies showed a remarkable level of compensation, motion-blind animals exhibited pronounced circling behaviour. Our results therefore demonstrate unequivocally that motion vision is necessary to fly straight under realistic conditions.
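The free-flight quantification described above lends itself to simple trajectory metrics. The sketch below shows one conventional way to score path straightness and turning rate from tracked positions, assuming an (N, 2) array of (x, y) coordinates at a fixed frame rate; the function names and input format are illustrative assumptions, not the study's actual pipeline.

```python
import numpy as np

def straightness_index(xy: np.ndarray) -> float:
    """Net displacement divided by total path length (1.0 = perfectly straight)."""
    steps = np.diff(xy, axis=0)                      # per-frame displacement vectors
    path_length = np.linalg.norm(steps, axis=1).sum()
    return np.linalg.norm(xy[-1] - xy[0]) / path_length

def heading_angular_speed(xy: np.ndarray, fps: float) -> np.ndarray:
    """Absolute turning rate (rad/s) along the trajectory; persistently high
    values would correspond to the circling behaviour of motion-blind flies."""
    steps = np.diff(xy, axis=0)
    headings = np.unwrap(np.arctan2(steps[:, 1], steps[:, 0]))
    return np.abs(np.diff(headings)) * fps

# Synthetic example: a gently curving, near-straight trajectory
t = np.linspace(0.0, 1.0, 200)
xy = np.stack([t, 0.05 * np.sin(2 * np.pi * t)], axis=1)
print(straightness_index(xy))                        # close to 1.0
print(heading_angular_speed(xy, fps=200.0).mean())   # low mean turning rate
```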


2019 ◽  
Vol 19 (10) ◽  
pp. 294a ◽
Author(s):  
Scott T Steinmetz ◽  
Oliver W Layton ◽  
N. Andrew Browning ◽  
Nathaniel V Powell ◽  
Brett R Fajen

2009 ◽  
Vol 09 (04) ◽  
pp. 609-627 ◽  
Author(s):  
J. WANG ◽  
N. V. PATEL ◽  
W. I. GROSKY ◽  
F. FOTOUHI

In this paper, we address the problem of camera and object motion detection in the compressed domain. Camera motion estimation and moving object segmentation have been studied widely in a variety of video-analysis contexts, owing to their ability to provide essential clues for interpreting the high-level semantics of video sequences. A novel compressed-domain motion estimation and segmentation scheme is presented and applied in this paper. MPEG-2 compressed-domain information, namely Motion Vectors (MV) and Discrete Cosine Transform (DCT) coefficients, is filtered and manipulated to obtain a dense and reliable Motion Vector Field (MVF) over consecutive frames. An iterative segmentation scheme based upon the generalized affine transformation model is then used to detect global camera motion. Foreground spatiotemporal objects are separated from the background by applying a temporal consistency check to the output of the iterative segmentation; this check coalesces the resulting foreground blocks and weeds out unqualified blocks. Illustrative examples are provided to demonstrate the efficacy of the proposed approach.
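The core of such a scheme, fitting a global affine model to the block motion vectors and treating blocks that violate it as foreground candidates, can be sketched as follows. This is a minimal illustration assuming a six-parameter affine model, per-axis least squares, and a fixed residual threshold; the paper's actual iterative procedure and temporal consistency check are more elaborate.

```python
import numpy as np

def fit_global_affine(pos, mv, iters=5, thresh=2.0):
    """Iteratively fit mv ≈ [1, x, y] · params (one system per MV axis),
    rejecting blocks whose residual exceeds `thresh` as non-global motion.
    pos: (N, 2) block-centre coordinates; mv: (N, 2) motion vectors.
    Returns the 2x3 affine parameters and a boolean foreground mask."""
    X = np.column_stack([np.ones(len(pos)), pos])    # design matrix [1, x, y]
    inliers = np.ones(len(pos), dtype=bool)
    for _ in range(iters):
        params, *_ = np.linalg.lstsq(X[inliers], mv[inliers], rcond=None)
        residual = np.linalg.norm(X @ params - mv, axis=1)
        new_inliers = residual < thresh
        if np.array_equal(new_inliers, inliers):     # converged
            break
        inliers = new_inliers
    return params.T, ~inliers                        # outlier blocks = foreground

# Synthetic example: a global pan of (2, 0) px with one independently moving block
pos = np.array([[x, y] for y in range(4) for x in range(4)], dtype=float)
mv = np.tile([2.0, 0.0], (16, 1))
mv[5] = [8.0, 4.0]                                   # the moving object's block
params, foreground = fit_global_affine(pos, mv)
print(foreground.nonzero()[0])                       # -> [5]
```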


2003 ◽  
Vol 90 (2) ◽  
pp. 723-730 ◽  
Author(s):  
Kai V. Thilo ◽  
Andreas Kleinschmidt ◽  
Michael A. Gresty

In a previous functional neuroimaging study we found that early visual areas deactivated when a rotating optical-flow stimulus elicited the illusion of self-motion (vection), compared with when it was perceived as a moving object. Here, we investigated whether electrical cortical responses to an independent central visual probe stimulus change as a function of whether optical-flow stimulation in the periphery induces the illusion of self-motion. Visual evoked potentials (VEPs) were obtained in response to pattern reversals in the central visual field in the presence of a constant peripheral large-field optokinetic stimulus that rotated around the naso-occipital axis and induced intermittent sensations of vection. As a control, VEPs were also recorded during a stationary peripheral stimulus and showed no difference from those obtained during optokinetic stimulation. The VEPs recorded during constant peripheral stimulation were then divided into two groups according to the periods in which subjects reported object-motion or self-motion, respectively. The N70 VEP component showed a significant amplitude reduction when the peripheral stimulus made subjects experience self-motion, compared with when it was perceived as object-motion. This finding supplements and corroborates our recent evidence from functional neuroimaging that early visual cortex deactivates when a visual flow stimulus elicits the illusion of self-motion compared with when the same sensory input is interpreted as object-motion. This dampened responsiveness might reflect a redistribution of sensory and attentional resources when the monitoring of self-motion relies on sustained and veridical processing of optic flow, which may be compromised by other sources of visual input.
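Measuring the N70 from pattern-reversal epochs follows standard evoked-potential practice: epoch around each reversal, baseline-correct, average, and read off the most negative deflection in the component's latency window. Below is a minimal single-channel sketch; the sampling rate, window boundaries, and synthetic data are illustrative assumptions rather than the study's parameters.

```python
import numpy as np

def n70_amplitude(eeg, events, fs, win=(0.06, 0.09)):
    """Average pattern-reversal epochs from a 1-D EEG trace and return the
    most negative value in the N70 latency window (the N70 is a negativity
    around 70 ms). `events` holds reversal sample indices; `fs` is in Hz."""
    pre, post = int(0.1 * fs), int(0.3 * fs)         # 100 ms baseline, 300 ms epoch
    epochs = np.stack([eeg[e - pre:e + post] for e in events])
    epochs -= epochs[:, :pre].mean(axis=1, keepdims=True)  # baseline-correct
    erp = epochs.mean(axis=0)                        # average evoked response
    t = np.arange(-pre, post) / fs
    mask = (t >= win[0]) & (t <= win[1])
    return erp[mask].min()

# Epochs would be sorted by perceptual report before averaging, e.g.
# n70_amplitude(eeg, events_during_vection, fs) vs.
# n70_amplitude(eeg, events_during_object_motion, fs).
rng = np.random.default_rng(0)
fs, eeg = 1000, rng.normal(0.0, 1.0, 60_000)         # synthetic noise trace
events = np.arange(500, 59_000, 600)
print(n70_amplitude(eeg, events, fs))
```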


2012 ◽  
Vol 108 (6) ◽  
pp. 1685-1694 ◽  
Author(s):  
Lionel Bringoux ◽  
Jean-Claude Lepecq ◽  
Frédéric Danion

Accurate control of grip force during object manipulation is necessary to prevent the object from slipping, especially to compensate for the gravitational and inertial forces resulting from hand/object motion. The goal of the current study was to assess whether the control of grip force was influenced by visually induced self-motion (i.e., vection), which would normally be accompanied by changes in object load. The main task involved holding a 400-g object between the thumb and the index finger while seated within a virtual immersive environment that simulated the vertical motion of an elevator across floors. Different visual motions were tested, including oscillatory (0.21 Hz) and constant-speed displacements of the virtual scene. Different arm-loading conditions were also tested: with or without the hand-held object and with or without oscillatory arm motion (0.9 Hz). At the perceptual level, participants' ratings showed that both oscillatory and constant-speed motion of the elevator rapidly induced a long-lasting sensation of self-motion. At the sensorimotor level, the compellingness of vection altered arm movement control: spectral analyses revealed that arm motion was entrained by the oscillatory motion of the elevator. However, we found no evidence that the grip force used to hold the object was visually affected. Specifically, spectral analyses revealed no component in grip force that would mirror the virtual change in object load associated with the oscillatory motion of the elevator, so the grip-to-load force coupling remained unaffected. Altogether, our findings show that the neural mechanisms underlying vection interfere with arm movement control but not with the delicate modulation of grip force. More generally, these results provide evidence that the strength of the coupling between the sensorimotor system and perception can be modulated depending on the effector.
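The entrainment test can be illustrated with a Welch power-spectrum estimate: a spectral component at the elevator's oscillation frequency (0.21 Hz) should show up in arm position but, per the findings above, not in grip force. The sampling rate, signal names, and synthetic data below are illustrative assumptions, not the study's recordings.

```python
import numpy as np
from scipy import signal

def power_at(freq_hz, x, fs):
    """Welch power spectral density of `x`, read off at the bin nearest `freq_hz`."""
    f, pxx = signal.welch(x, fs=fs, nperseg=min(len(x), 4096))
    return pxx[np.argmin(np.abs(f - freq_hz))]

fs = 100.0                                # assumed sampling rate (Hz)
t = np.arange(0.0, 120.0, 1.0 / fs)       # two minutes of data
rng = np.random.default_rng(1)
# Synthetic signals: arm position entrained at 0.21 Hz; grip force is not
arm_pos = np.sin(2 * np.pi * 0.21 * t) + 0.3 * rng.normal(size=t.size)
grip_force = 4.0 + 0.3 * rng.normal(size=t.size)   # steady hold, no 0.21 Hz component

print(power_at(0.21, arm_pos, fs))        # clear peak -> entrainment
print(power_at(0.21, grip_force, fs))     # no peak -> grip force unaffected
```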


2019 ◽  
Vol 116 (18) ◽  
pp. 9060-9065 ◽  
Author(s):  
Kalpana Dokka ◽  
Hyeshin Park ◽  
Michael Jansen ◽  
Gregory C. DeAngelis ◽  
Dora E. Angelaki

The brain infers our spatial orientation and properties of the world from ambiguous and noisy sensory cues. Judging self-motion (heading) in the presence of independently moving objects poses a challenging inference problem because the image motion of an object could be attributed to movement of the object, self-motion, or some combination of the two. We test whether perception of heading and object motion follows predictions of a normative causal inference framework. In a dual-report task, subjects indicated whether an object appeared stationary or moving in the virtual world, while simultaneously judging their heading. Consistent with causal inference predictions, the proportion of object stationarity reports, as well as the accuracy and precision of heading judgments, depended on the speed of object motion. Critically, biases in perceived heading declined when the object was perceived to be moving in the world. Our findings suggest that the brain interprets object motion and self-motion using a causal inference framework.
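The causal inference prediction can be made concrete with a toy Bayesian model: the posterior probability that the object is stationary in the world falls as the residual (world-frame) object velocity grows, so stationarity reports should depend on object speed. The generative assumptions below (Gaussian sensory noise, a zero-mean Gaussian prior on the velocity of moving objects, a 0.5 prior on stationarity) are a simplified stand-in for the paper's normative framework.

```python
import numpy as np
from scipy.stats import norm

def p_stationary(measured_v, sigma_meas, sigma_move, prior_stat=0.5):
    """Posterior probability that the object is stationary in the world.
    `measured_v` is the object velocity left over after discounting
    self-motion, corrupted by sensory noise `sigma_meas`; truly moving
    objects draw their velocity from N(0, sigma_move**2)."""
    like_stat = norm.pdf(measured_v, 0.0, sigma_meas)
    # Marginal likelihood under "moving": noise and velocity prior convolve
    like_move = norm.pdf(measured_v, 0.0, np.hypot(sigma_meas, sigma_move))
    post = like_stat * prior_stat
    return post / (post + like_move * (1.0 - prior_stat))

# Slow residual motion is attributed to self-motion (object judged stationary);
# fast residual motion is attributed to the object itself.
for v in (0.2, 1.0, 5.0):
    print(v, p_stationary(v, sigma_meas=1.0, sigma_move=3.0))
```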

