Motion Clouds: model-based stimulus synthesis of natural-like random textures for the study of motion perception

2012 ◽  
Vol 107 (11) ◽  
pp. 3217-3226 ◽  
Author(s):  
Paula Sanz Leon ◽  
Ivo Vanzetta ◽  
Guillaume S. Masson ◽  
Laurent U. Perrinet

Choosing an appropriate set of stimuli is essential to characterize the response of a sensory system along a particular functional dimension, such as the eye movements that follow the motion of a visual scene. Here, we describe a framework for generating random texture movies with controlled information content: Motion Clouds. These stimuli are defined by a generative model based on a controlled experimental parametrization. We show that Motion Clouds correspond to a dense mixture of localized moving gratings with random positions. Their global envelope is similar to natural-like stimulation with an approximate full-field translation corresponding to a retinal slip. We describe the construction of these stimuli mathematically and propose an open-source Python-based implementation. Examples of the use of this framework are shown. We also propose extensions to other sensory modalities such as color vision, touch, and audition.
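As an illustration of the generative model, here is a minimal Python/NumPy sketch of a Motion-Cloud-like texture: random phases shaped in Fourier space by a band-pass spatial-frequency envelope concentrated around the plane of a full-field translation. The parameter names (f0, bf, v, bv) are illustrative assumptions and do not reproduce the API of the authors' published implementation.

```python
import numpy as np

def motion_cloud(n=64, nt=32, f0=0.125, bf=0.05, v=1.0, bv=0.3, seed=0):
    """Motion-Cloud-like random texture movie of shape (n, n, nt):
    a spectral envelope (band-pass around spatial frequency f0,
    concentrated near the plane of a horizontal translation at
    speed v) applied to uniformly random phases."""
    rng = np.random.default_rng(seed)
    fx = np.fft.fftfreq(n)[:, None, None]
    fy = np.fft.fftfreq(n)[None, :, None]
    ft = np.fft.fftfreq(nt)[None, None, :]
    fr = np.sqrt(fx**2 + fy**2)                        # radial spatial frequency
    env_sf = np.exp(-(fr - f0)**2 / (2 * bf**2))       # band-pass envelope
    env_v = np.exp(-(ft + v * fx)**2 / (2 * (bv * (fr + 1e-6))**2))  # speed plane
    phase = np.exp(2j * np.pi * rng.random((n, n, nt)))  # random phases
    movie = np.fft.ifftn(env_sf * env_v * phase).real
    return movie / np.abs(movie).max()                 # normalize contrast

frames = motion_cloud()  # each frames[:, :, t] is one texture frame
```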

2021 ◽  
Vol 255 ◽  
pp. 106620
Author(s):  
A. Elouneg ◽  
D. Sutula ◽  
J. Chambert ◽  
A. Lejeune ◽  
S.P.A. Bordas ◽  
...  

2006 ◽  
Vol 16 (1-2) ◽  
pp. 23-28 ◽  
Author(s):  
W. Geoffrey Wright ◽  
Paul DiZio ◽  
James R. Lackner

We evaluated the influence of moving visual scenes and of knowledge of the spatial and physical context on visually induced self-motion perception in an immersive virtual environment. A sinusoidal, vertically oscillating visual stimulus induced perceptions of self-motion that matched changes in visual acceleration. Subjects reported peaks of perceived self-motion in synchrony with peaks of visual acceleration and opposite in direction to the visual scene motion. Spatial context was manipulated by testing subjects either in the environment that matched the room in the visual scene or in a separate chamber. Physical context was manipulated by testing subjects seated either in a stable, earth-fixed desk chair or in an apparatus capable of large linear motions; in both conditions, no actual motion occurred. The compellingness of perceived self-motion increased significantly when the spatial context matched the visual input and actual body displacement was possible; the latency and amplitude of perceived self-motion, however, were unaffected by the spatial or physical context. We propose that two dissociable processes are involved in self-motion perception: one, driven primarily by visual input, determines vection latency and path integration; the other, receiving cognitive input, determines the compellingness of perceived self-motion.
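The reported phase relation follows directly from differentiating a sinusoidal displacement: acceleration is 180° out of phase with position, so self-motion peaks locked to the acceleration peaks are necessarily opposite in direction to the scene displacement:

```latex
x(t) = A\sin(\omega t)
\quad\Longrightarrow\quad
\ddot{x}(t) = -A\omega^{2}\sin(\omega t) = -\omega^{2}\,x(t).
```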


2005 ◽  
Vol 15 (4) ◽  
pp. 185-195 ◽  
Author(s):  
W.G. Wright ◽  
P. DiZio ◽  
J.R. Lackner

We evaluated visual and vestibular contributions to vertical self-motion perception by exposing subjects to various combinations of 0.2 Hz vertical linear oscillation and visual scene motion. The visual stimuli, presented via a head-mounted display, consisted of video recordings of the test chamber from the perspective of a subject seated in the oscillator. In the dark, subjects accurately reported the amplitude of vertical linear oscillation, with only a slight tendency to underestimate it. In the absence of inertial motion, even low-amplitude oscillatory visual motion induced the perception of vertical self-oscillation. When visual and vestibular stimulation were combined, self-motion perception persisted in the presence of large visual-vestibular discordances. A dynamic visual input with magnitude discrepancies tended to dominate the resulting apparent self-motion, but vestibular effects were also evident. With visual and vestibular stimulation spatially or temporally out of phase with one another, the input that dominated depended on their amplitudes; high-amplitude visual scene motion was almost completely dominant at the levels tested. These findings are inconsistent with self-motion perception being determined by a simple weighted summation of visual and vestibular inputs and constitute evidence against sensory-conflict models. They indicate that when the presented visual scene is an accurate representation of the physical test environment, it dominates over vestibular inputs in determining apparent spatial position relative to external space.
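For reference, the simple weighted-summation model that these findings argue against can be written in its standard linear cue-combination form (the notation is ours, not the authors'):

```latex
\hat{s} = w_{\mathrm{vis}}\,s_{\mathrm{vis}} + w_{\mathrm{vest}}\,s_{\mathrm{vest}},
\qquad w_{\mathrm{vis}} + w_{\mathrm{vest}} = 1,
```

with fixed weights. The observed dependence of dominance on stimulus amplitude implies that any effective weights would have to vary with the stimuli themselves, contrary to fixed-weight summation.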


1994 ◽  
Vol 78 (1) ◽  
pp. 112-114
Author(s):  
Kazuhito Noguchi ◽  
Koichi Haishi ◽  
Daisuke Sato

We report a phenomenon that may help elucidate the role of eye movements in motion perception. When tracking a target whose position follows a triangular wave, viewers perceive the target as moving like a ball bouncing between two walls. We measured eye movements with electrooculography (EOG) while subjects tracked such a target. In all four adult subjects, the eye passed the turning point, returned rapidly to the target with a saccade, and then resumed smooth tracking. We suggest that extraretinal information about eye position during the saccade may be the main contributor to this illusion.
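A triangular-wave target trajectory of the kind used here is simple to generate; the following Python sketch (amplitude, frequency, and sampling rate are illustrative choices, not the authors' stimulus parameters) produces constant-velocity ramps with abrupt reversals, like a ball bouncing between two walls:

```python
import numpy as np

def triangular_target(duration=10.0, freq=0.5, amplitude=10.0, rate=500):
    """Target position (e.g., in degrees) following a triangular wave:
    constant velocity between the extremes, instantaneous reversal
    of direction at each peak."""
    t = np.arange(0.0, duration, 1.0 / rate)
    phase = (t * freq) % 1.0                 # fractional position in the cycle
    tri = 4.0 * np.abs(phase - 0.5) - 1.0    # triangle wave in [-1, 1]
    return t, amplitude * tri

t, x = triangular_target()  # x reverses direction abruptly at the turning points
```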


1983 ◽  
Vol 92 (2) ◽  
pp. 165-171 ◽  
Author(s):  
Carsten Wennmo ◽  
Bengt Hindfelt ◽  
Ilmari Pyykkö

We report a quantitative analysis of eye-movement disturbances in patients with isolated cerebellar disorders and in patients with cerebellar disorders and concomitant brainstem involvement. The most characteristic abnormalities in the exclusively cerebellar patients were increased slow-phase velocities of vestibular nystagmus induced by rotation in the dark and increased peak velocities of the fast phases of optokinetic nystagmus induced by full-field optokinetic stimuli. Saccadic dysmetria was found in three of the six cerebellar patients and gaze nystagmus in all six. The typical findings in the combined cerebello-brainstem group were reduced peak velocities of voluntary saccades, defective smooth pursuit, and reduced peak velocities of the fast component of nystagmus during rotation in both dark and light. All patients with combined cerebello-brainstem disorders had dysmetric voluntary saccades and gaze nystagmus, and the number of saccades superimposed on smooth pursuit was uniformly increased. Release of inhibition in cerebellar disorders may explain the hyperresponsiveness and inaccuracy of the eye movements found in this study. When lesions also involve the brainstem, however, integrative centers coding eye velocity are affected, leading to slow and inaccurate eye movements. These features, elicited clinically, may be useful in the diagnosis of cerebellar and brainstem disorders.


1984 ◽  
Vol 52 (6) ◽  
pp. 1140-1153 ◽  
Author(s):  
S. G. Lisberger ◽  
F. A. Miles ◽  
D. S. Zee

Adaptive changes were induced in the vestibuloocular reflex (VOR) of monkeys by oscillating them while they viewed the visual scene through optical devices (“spectacles”) that required changes in the amplitude of eye movement during head turns. The “gain” of the VOR (eye velocity divided by head velocity) during sinusoidal oscillation in darkness underwent gradual changes that were appropriate to reduce the motion of images on the retina during the adapting procedures. Bilateral ablation of the flocculus and ventral paraflocculus caused a complete and enduring loss of the ability to undergo adaptive changes in the VOR. Partial lesions caused a substantial but incomplete loss of the adaptive capability. We conclude that the flocculus is necessary for adaptive changes in the monkey's VOR. Further experiments in normal animals determined the types of stimuli that were necessary and/or sufficient to cause changes in VOR gain. Full-field visual stimulation was not necessary to induce adaptive changes in the VOR. Monkeys tracked a small spot in conditions that elicited the same combination of eye and head movements seen during passive oscillation with spectacles. The gain of the VOR showed changes 50-70% as large as those produced by the same duration of oscillation with spectacles. Since the effective tracking conditions cause a consistent correlation of floccular output with vestibular inputs, these data are compatible with our previous suggestion that the flocculus may provide signals used by the central nervous system to compute errors in the gain of the VOR. Prolonged sinusoidal optokinetic stimulation with the head stationary caused only a slight increase in VOR gain. Left-right reversal of vision and eye movement during sinusoidal vestibular oscillation caused decreases in VOR gain. In rabbits, both of these stimulus conditions produced large increases in the gain of the VOR, which implied that eye velocity signals were used instead of vestibular inputs to compute errors in the VOR. Our different results argue that vestibular signals are necessary for computing errors in VOR gain in the monkey. The species difference may reflect the additional role that smooth pursuit eye movements play in stabilizing gaze during head turns in monkeys.
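The gain definition (eye velocity divided by head velocity) invites a small worked example. The Python sketch below estimates VOR gain as the least-squares slope of eye velocity against head velocity during sinusoidal oscillation; it is a minimal illustration of the definition, not the authors' analysis code:

```python
import numpy as np

def vor_gain(head_vel, eye_vel):
    """Estimate VOR gain as the least-squares slope of eye velocity
    on head velocity. The compensatory eye movement opposes the head,
    so the sign is flipped to report a positive gain."""
    head = np.asarray(head_vel) - np.mean(head_vel)
    eye = np.asarray(eye_vel) - np.mean(eye_vel)
    return -np.dot(eye, head) / np.dot(head, head)

# Synthetic check: a reflex at gain 0.85 during 0.5 Hz oscillation.
t = np.linspace(0.0, 10.0, 2000)
head = 40.0 * np.sin(2 * np.pi * 0.5 * t)   # head velocity, deg/s
eye = -0.85 * head                          # perfectly compensatory at gain 0.85
print(vor_gain(head, eye))                  # prints ~0.85
```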


Author(s):  
G. Vacca

Abstract. In the photogrammetric process of 3D reconstruction of an object or a building, multi-image orientation is one of the most important tasks and often includes simultaneous camera calibration. The accuracy of image orientation and camera calibration significantly affects the quality and accuracy of all subsequent photogrammetric processes, such as determining the spatial coordinates of individual points or 3D modeling. In the context of artificial vision, a full-field analysis procedure is used that leads to so-called Structure from Motion (SfM), which comprises the simultaneous determination of the camera's interior and exterior orientation parameters and of the 3D model. Such procedures were originally designed and developed within photogrammetry, but their greatest development and innovation came from computer vision from the late 1990s onward, together with the SfM method. Reconstructions based on this method were initially useful for visualization purposes rather than for photogrammetry and mapping. Thanks to advances in computer technology and performance, a large number of images can now be oriented automatically in an arbitrarily defined coordinate system by various algorithms, often available in open-source software (VisualSFM, Bundler, PMVS2, CMVS, etc.) or as web services (Microsoft Photosynth, Autodesk 123D Catch, My3DScanner, etc.). However, it is important to assess the accuracy and reliability of these automated procedures. This paper presents the results of close-range photogrammetric surveys of a dome, processed with several open-source packages using the Structure from Motion approach: VisualSFM, OpenDroneMap (ODM) and Regard3D. The photogrammetric surveys were also processed with the commercial software Agisoft PhotoScan.

For the photogrammetric survey we used a Canon EOS M3 digital camera (24.2 megapixels, pixel size 3.72 µm). We also surveyed the dome with a Faro Focus 3D terrestrial laser scanner (TLS). A single scan was carried out from ground level at a resolution setting of ¼ with 3x quality, corresponding to a resolution of 7 mm at 10 m. Both the TLS point cloud and the PhotoScan point cloud were used as references to validate the point clouds from VisualSFM, OpenDroneMap and Regard3D. The validation was carried out with the open-source software CloudCompare.
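The validation described above computes, for each point of the evaluated cloud, the distance to the nearest point of a reference cloud. Here is a minimal sketch of that cloud-to-cloud metric, assuming the Open3D library and hypothetical file names (the comparison in the paper itself was done with CloudCompare):

```python
import numpy as np
import open3d as o3d  # assumed available; file names below are hypothetical

# Reference cloud (e.g., the TLS scan) and the cloud under evaluation.
reference = o3d.io.read_point_cloud("dome_tls.ply")
candidate = o3d.io.read_point_cloud("dome_visualsfm.ply")

# Nearest-neighbour distance from each candidate point to the reference,
# the same cloud-to-cloud metric that CloudCompare reports.
d = np.asarray(candidate.compute_point_cloud_distance(reference))
print(f"mean: {d.mean():.4f}  std: {d.std():.4f}  max: {d.max():.4f}")
```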

