Visuomotor extrapolation

2008 ◽  
Vol 31 (2) ◽  
pp. 220-221 ◽  
Author(s):  
David Whitney

Accurate perception of moving objects would be useful; accurate visually guided action is crucial. Visual motion across the scene influences both the perceived location of objects and the trajectory of reaching movements toward them. In this commentary, I propose that the visual system assigns the position of any object based on the predominant motion present in the scene, and that this assignment is used to guide reaching movements, compensating for delays in visuomotor processing.
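
A minimal sketch of the proposed compensation (illustrative only; the delay and motion values below are assumptions, not taken from the commentary): if visuomotor processing lags by some delay, shifting each object's coded position along the scene's predominant motion vector by velocity times delay would point the reach at the object's current location.

```python
import numpy as np

# Hypothetical illustration of delay compensation by motion extrapolation.
# Assumed values: a 100 ms visuomotor delay and a uniform scene motion.
VISUOMOTOR_DELAY = 0.100             # seconds (assumed)
scene_motion = np.array([3.0, 0.0])  # deg/s, predominant motion in the scene

def extrapolated_position(retinal_position, motion=scene_motion,
                          delay=VISUOMOTOR_DELAY):
    """Shift the coded position along the scene motion to offset the delay."""
    return retinal_position + motion * delay

# By the time a reach lands, a target moving with the scene has advanced
# by motion * delay; the extrapolated estimate points at its current spot.
coded = np.array([10.0, 5.0])        # position when the light hit the eye
print(extrapolated_position(coded))  # -> [10.3  5. ]
```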

2001 ◽  
Vol 13 (6) ◽  
pp. 1243-1253 ◽  
Author(s):  
Rajesh P. N. Rao ◽  
David M. Eagleman ◽  
Terrence J. Sejnowski

When a flash is aligned with a moving object, subjects perceive the flash to lag behind the moving object. Two different models have been proposed to explain this “flash-lag” effect. In the motion extrapolation model, the visual system extrapolates the location of the moving object to counteract neural propagation delays, whereas in the latency difference model, it is hypothesized that moving objects are processed and perceived more quickly than flashed objects. However, recent psychophysical experiments suggest that neither of these interpretations is feasible (Eagleman & Sejnowski, 2000a, 2000b, 2000c); those studies hypothesize instead that the visual system uses data from the future of an event before committing to an interpretation. We formalize this idea in terms of the statistical framework of optimal smoothing and show that a model based on smoothing accounts for the shape of psychometric curves from a flash-lag experiment involving random reversals of motion direction. The smoothing model demonstrates how the visual system may enhance perceptual accuracy by relying not only on data from the past but also on data collected from the immediate future of an event.
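
The smoothing idea can be sketched with a standard fixed-interval (Rauch-Tung-Striebel) smoother on a one-dimensional constant-velocity model; the authors' actual model and noise parameters may differ. The causal filter lags behind a motion reversal, whereas the smoother, which also folds in samples from just after each moment, tracks the trajectory through the reversal:

```python
import numpy as np

# 1D constant-velocity state [position, velocity]; a sketch, not the
# authors' exact model. Process/measurement noises are assumed values.
dt, q, r = 0.01, 5.0, 0.05
F = np.array([[1, dt], [0, 1]])
Q = q * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])
H = np.array([[1.0, 0.0]])
R = np.array([[r]])

# Simulate motion with a direction reversal halfway through.
n = 200
vel = np.where(np.arange(n) < n // 2, 1.0, -1.0)
pos = np.cumsum(vel * dt)
z = pos + np.sqrt(r) * np.random.default_rng(0).standard_normal(n)

# Forward Kalman filter (causal: uses only past and present samples).
x, P = np.zeros(2), np.eye(2)
xf, Pf, xp, Pp = [], [], [], []
for zk in z:
    x, P = F @ x, F @ P @ F.T + Q            # predict
    xp.append(x.copy()); Pp.append(P.copy())
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
    x = x + K @ (np.array([zk]) - H @ x)     # update with the new sample
    P = (np.eye(2) - K @ H) @ P
    xf.append(x.copy()); Pf.append(P.copy())

# Backward RTS pass (acausal: folds in samples from the immediate future).
xs = [xf[-1]]
for k in range(n - 2, -1, -1):
    C = Pf[k] @ F.T @ np.linalg.inv(Pp[k + 1])
    xs.insert(0, xf[k] + C @ (xs[0] - xp[k + 1]))

err_f = np.mean((np.array(xf)[:, 0] - pos) ** 2)
err_s = np.mean((np.array(xs)[:, 0] - pos) ** 2)
print(f"filter MSE {err_f:.4f}  vs  smoother MSE {err_s:.4f}")
```

The smoother's position error is typically lower than the filter's, most visibly around the reversal, which is the qualitative pattern the smoothing account uses to explain the flash-lag data.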


Perception ◽  
10.1068/p5405 ◽  
2005 ◽  
Vol 34 (6) ◽  
pp. 717-740 ◽  
Author(s):  
Brett R Fajen

Tasks such as steering, braking, and intercepting moving objects constitute a class of behaviors, known as visually guided actions, that are typically carried out under continuous control on the basis of visual information. Several decades of research on visually guided action have resulted in an inventory of control laws describing, for each task, how information about the sufficiency of one's current state is used to make ongoing adjustments. Although a considerable amount of important research has been generated within this framework, it cannot capture several aspects of these tasks that are essential for successful performance. The purpose of this paper is to provide an overview of the existing framework, discuss its limitations, and introduce a new framework that emphasizes the necessity of calibration and perceptual learning. Within the proposed framework, successful human performance on these tasks is a matter of learning to detect and calibrate optical information about the boundaries that separate possible from impossible actions. This resolves a long-standing incompatibility between theories of visually guided action and the concept of an affordance. The implications of adopting this framework for the design of experiments and models of visually guided action are discussed.
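
For context, one classic control law from this inventory (a standard example from the literature, not a model proposed in this paper) is Lee's tau-dot braking strategy: regulate deceleration so that time-to-contact tau = distance / closing speed changes at a constant rate of about -0.5, which dissipates closing speed exactly at the obstacle. A minimal simulation with assumed initial conditions and gain:

```python
# Lee's tau-dot braking law (a classic control law from this literature,
# not this paper's own model). tau = distance / closing speed; holding
# d(tau)/dt at -0.5 yields a stop exactly at the obstacle.
TAU_DOT_TARGET, GAIN, DT = -0.5, 4.0, 0.01

d, v, brake = 50.0, 20.0, 0.0   # assumed initial distance (m) and speed (m/s)
while d > 0.01 and v > 0.01:
    # tau_dot = -1 + d*a/v^2, so the deceleration that holds tau_dot at
    # the target value is a = (1 + tau_dot_target) * v^2 / d.
    required = (1 + TAU_DOT_TARGET) * v**2 / d
    brake += GAIN * (required - brake) * DT  # adjust brake toward requirement
    v = max(v - brake * DT, 0.0)
    d -= v * DT
print(f"stopped {d:.2f} m from the obstacle at {v:.2f} m/s")
```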


2012 ◽  
Vol 25 (0) ◽  
pp. 106 ◽  
Author(s):  
Luc Tremblay ◽  
Joanne Wong ◽  
Gerome Manson

We recently used an audiovisual illusion (Shams et al., 2000) during fast and accurate reaching movements and showed that susceptibility to the fusion illusion is reduced at high limb velocities (Tremblay and Nguyen, 2010). This study aimed to determine whether auditory information processing is suppressed during voluntary action (Chapman and Beauchamp, 2006), which could explain the reduced fusion during reaching movements. Instead of asking our participants to report the number of flashes, we asked them to report the number of beeps (Andersen et al., 2004). Before each trial, participants fixated a target LED presented on a horizontal reaching surface. The secondary stimuli crossed three flash levels (0, 1, or 2 flashes) with two beep levels (1 or 2 beeps). During control tests, the secondary stimuli were presented at rest. In the experimental phase, stimuli were presented at 0, 100, or 200 ms after the onset of a fast and accurate movement. Participants reported the number of beeps after each trial. A 3 flash × 2 beep × 4 presentation condition (0, 100, 200 ms + control) ANOVA revealed that participants were less accurate at perceiving the actual number of beeps during the movement than in the control condition. More importantly, the number of flashes influenced the number of perceived beeps during the movement but not in the control condition. Lastly, no relationship was found between limb velocity and the number of perceived beeps. These results indicate that auditory information is significantly suppressed during goal-directed action, but this mechanism alone fails to explain the link between limb velocity and the fusion illusion.
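
A sketch of how such a 3 × 2 × 4 within-subject design could be analyzed; the toy data, effect sizes, subject count, and the statsmodels AnovaRM call are illustrative assumptions, not the authors' analysis:

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Toy long-format data for the 3 (flashes) x 2 (beeps) x 4 (presentation)
# within-subject design; values are simulated, not the study's data.
rng = np.random.default_rng(1)
rows = []
for subject in range(12):
    for flashes in (0, 1, 2):
        for beeps in (1, 2):
            for cond in ("0ms", "100ms", "200ms", "control"):
                # Assumed effect: flashes bias the beep report during movement.
                bias = 0.3 * (flashes - 1) if cond != "control" else 0.0
                rows.append(dict(subject=subject, flashes=flashes,
                                 beeps=beeps, condition=cond,
                                 reported=beeps + bias + rng.normal(0, 0.4)))
df = pd.DataFrame(rows)

# Repeated-measures ANOVA over the three within-subject factors; the
# flashes x condition interaction mirrors the reported key result.
res = AnovaRM(df, depvar="reported", subject="subject",
              within=["flashes", "beeps", "condition"]).fit()
print(res)
```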


Author(s):  
Maggie Shiffrar

The accurate visual perception of an object’s motion requires the simultaneous integration of motion information arising from that object and the segmentation of that information from the motion of other objects. When moving objects are seen through apertures, or viewing windows, the resultant illusions highlight some of the challenges that the visual system faces as it balances motion segmentation with motion integration. One example is the barber-pole illusion, in which lines appear to translate orthogonally to their true direction of motion. Another is the illusory perception of incoherence when simple rectilinear objects translate or rotate behind disconnected apertures. Studies of these illusions suggest that visual motion processes frequently rely on simple form cues.
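
The aperture problem underlying these illusions has a simple geometric core: through a small aperture, only the component of a line's velocity perpendicular to the line's orientation is recoverable. A minimal sketch with illustrative numbers:

```python
import numpy as np

def aperture_velocity(true_velocity, line_orientation_deg):
    """Velocity component recoverable through a small aperture:
    the projection of the true velocity onto the line's unit normal."""
    theta = np.radians(line_orientation_deg)
    normal = np.array([-np.sin(theta), np.cos(theta)])  # unit normal to line
    return np.dot(true_velocity, normal) * normal

# A 45-degree line drifting horizontally (as on a barber pole) is seen
# through the aperture as moving along the line's normal instead.
true_v = np.array([1.0, 0.0])
print(aperture_velocity(true_v, 45.0))   # -> [ 0.5 -0.5]
```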


2019 ◽  
Vol 5 (1) ◽  
pp. 247-268 ◽  
Author(s):  
Peter Thier ◽  
Akshay Markanday

The cerebellar cortex is a crystal-like structure consisting of an almost endless repetition of a canonical microcircuit that applies the same computational principle to different inputs. The output of this transformation is broadcast to extracerebellar structures by way of the deep cerebellar nuclei. Visually guided eye movements are accommodated by different parts of the cerebellum. This review primarily discusses the role of the oculomotor part of the vermal cerebellum [the oculomotor vermis (OMV)] in the control of visually guided saccades and smooth-pursuit eye movements. Both types of eye movements require the mapping of retinal information onto motor vectors, a transformation that the OMV optimizes by taking information on past performance into account. Unlike the role of the OMV in the guidance of eye movements, the contribution of the adjoining vermal cortex to visual motion perception is nonmotor and involves a cerebellar influence on information processing in the cerebral cortex.
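
The optimization based on past performance can be illustrated with a generic error-driven gain-adaptation scheme of the kind commonly used to model saccadic adaptation; this is a schematic with assumed parameters, not the review's own model:

```python
# Schematic saccadic gain adaptation: after each saccade, the post-saccadic
# visual error nudges the retina-to-motor gain. Learning rate is assumed.
LEARNING_RATE = 0.05

gain = 1.0        # motor command = gain * retinal target eccentricity
true_gain = 0.8   # simulated plant change the cerebellum must correct for

for trial in range(100):
    target = 10.0                            # deg, retinal target eccentricity
    landing = gain * target * true_gain      # where the saccade actually lands
    error = target - landing                 # post-saccadic visual error (deg)
    gain += LEARNING_RATE * error / target   # error-driven gain update
print(f"adapted gain: {gain:.3f} (compensates a plant gain of {true_gain})")
```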


2019 ◽  
Vol 9 (10) ◽  
pp. 2003 ◽  
Author(s):  
Tung-Ming Pan ◽  
Kuo-Chin Fan ◽  
Yuan-Kai Wang

Intelligent analysis of surveillance videos over networks requires high recognition accuracy, which in turn demands good-quality video and hence significant bandwidth. Video quality degraded by high object dynamics during wireless transmission raises even more critical issues for smart video surveillance. In this paper, an object-based source coding method is proposed to preserve constant quality of video streaming over wireless networks. The inverse relationship between video quality and object dynamics (i.e., decreasing video quality with the occurrence of large, fast-moving objects) is characterized statistically as a linear model. A regression algorithm based on robust M-estimator statistics is proposed to construct the linear model with respect to different bitrates. The linear model is applied to predict the bitrate increment required to enhance video quality. A simulated wireless environment is set up to verify the proposed method under different wireless conditions, and experiments with real surveillance videos spanning a variety of object dynamics are conducted to evaluate its performance. Experimental results demonstrate significant improvement of streaming video in both visual and quantitative terms.
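
A sketch of the robust-regression step, using the Huber M-estimator from statsmodels as a stand-in; the paper's exact estimator, features, and the quality-to-bitrate conversion factor are assumptions here:

```python
import numpy as np
import statsmodels.api as sm

# Toy data standing in for per-frame measurements: an object-dynamics
# score (e.g., size * speed of moving objects) vs. quality drop, with
# outlier frames that an ordinary least-squares fit would chase.
rng = np.random.default_rng(2)
dynamics = rng.uniform(0, 10, 200)
quality_drop = 0.6 * dynamics + rng.normal(0, 0.5, 200)
quality_drop[:10] += 8.0                       # outlier frames

# Robust M-estimation with Huber weights: down-weight outliers when
# fitting the linear quality/dynamics model.
X = sm.add_constant(dynamics)
fit = sm.RLM(quality_drop, X, M=sm.robust.norms.HuberT()).fit()
intercept, slope = fit.params
print(f"quality_drop ~ {intercept:.2f} + {slope:.2f} * dynamics")

# Predicted quality drop for an incoming frame, used to choose a bitrate
# increment that restores constant quality (the conversion factor between
# quality units and kbps is an assumed placeholder).
KBPS_PER_QUALITY_UNIT = 50.0                   # assumed
frame_dynamics = 7.5
extra_kbps = (intercept + slope * frame_dynamics) * KBPS_PER_QUALITY_UNIT
print(f"allocate ~{extra_kbps:.0f} kbps extra for this frame")
```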


2005 ◽  
Vol 43 (2) ◽  
pp. 216-226 ◽  
Author(s):  
Jonathan S. Cant ◽  
David A. Westwood ◽  
Kenneth F. Valyear ◽  
Melvyn A. Goodale

2002 ◽  
Vol 13 (2) ◽  
pp. 125-129 ◽  
Author(s):  
Hirokazu Ogawa ◽  
Yuji Takeda ◽  
Akihiro Yagi

Inhibitory tagging is a process that prevents focal attention from revisiting previously checked items in inefficient searches, facilitating search performance. Recent studies have suggested that inhibitory tagging is object-based rather than location-based, but it was unclear whether it operates on moving objects. The present study investigated the tagging effect on moving objects. Participants were asked to search for a moving target among randomly and independently moving distractors. After either an efficient or an inefficient search, participants performed a probe-detection task that measured the inhibitory effect on search items. An inhibitory effect on distractors was observed only after inefficient searches. These results support the concept of object-based inhibitory tagging.

