Neuromorphic Configurable Architecture for Robust Motion Estimation

2008 ◽  
Vol 2008 ◽  
pp. 1-9 ◽  
Author(s):  
Guillermo Botella ◽  
Manuel Rodríguez ◽  
Antonio García ◽  
Eduardo Ros

The robustness with which the human visual system recovers motion estimates in almost any visual situation is enviable: it performs enormous computational tasks continuously, robustly, efficiently, and effortlessly. There is clearly a great deal we can learn from our own visual system. Many optical flow algorithms exist, but none deals efficiently with noise, illumination changes, second-order motion, occlusions, and so on. The main contribution of this work is the efficient implementation of a biologically inspired motion algorithm that takes nature as a template for architectural design and builds on a specific model of human visual motion perception, the multichannel gradient model (McGM). This customizable neuromorphic architecture for robust optical flow can be implemented on FPGA or ASIC devices, exploits properties of the cortical motion pathway, and constitutes a useful framework for building future complex bio-inspired systems that run in real time despite high computational complexity. The work reports resource usage and performance figures, together with a comparison against existing systems. Owing to its bio-inspired robustness, the hardware suits many application fields, such as object recognition, navigation, and tracking in difficult environments.
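The McGM itself combines a large bank of spatiotemporal derivative filters and ratios of higher-order derivatives, which is beyond a short sketch; the Python fragment below shows only the first-order core that gradient models of this family share: recovering velocity from the brightness-constancy equation by local least squares. The function and parameter names are illustrative and not taken from the paper's hardware design.

```python
import numpy as np
from scipy.signal import convolve2d

def gradient_flow(frame0, frame1, window=5, eps=1e-6):
    """Per-pixel flow from the brightness-constancy equation
    Ix*u + Iy*v + It = 0, solved by least squares over a local window."""
    Iy, Ix = np.gradient(frame0.astype(float))        # spatial derivatives
    It = frame1.astype(float) - frame0.astype(float)  # temporal derivative

    k = np.ones((window, window))
    S = lambda a: convolve2d(a, k, mode="same")       # sum over local window

    # 2x2 normal equations accumulated over the window.
    Axx, Axy, Ayy = S(Ix * Ix), S(Ix * Iy), S(Iy * Iy)
    bx, by = -S(Ix * It), -S(Iy * It)

    det = Axx * Ayy - Axy * Axy + eps                 # regularized determinant
    u = (Ayy * bx - Axy * by) / det
    v = (Axx * by - Axy * bx) / det
    return u, v
```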

2001 ◽  
Vol 13 (6) ◽  
pp. 1243-1253 ◽  
Author(s):  
Rajesh P. N. Rao ◽  
David M. Eagleman ◽  
Terrence J. Sejnowski

When a flash is aligned with a moving object, subjects perceive the flash to lag behind the moving object. Two different models have been proposed to explain this “flash-lag” effect. In the motion extrapolation model, the visual system extrapolates the location of the moving object to counteract neural propagation delays, whereas in the latency difference model, moving objects are hypothesized to be processed and perceived more quickly than flashed objects. However, recent psychophysical experiments suggest that neither of these interpretations is feasible (Eagleman & Sejnowski, 2000a, 2000b, 2000c); they point instead to the hypothesis that the visual system uses data from the future of an event before committing to an interpretation. We formalize this idea in terms of the statistical framework of optimal smoothing and show that a model based on smoothing accounts for the shape of psychometric curves from a flash-lag experiment involving random reversals of motion direction. The smoothing model demonstrates how the visual system may enhance perceptual accuracy by relying not only on data from the past but also on data collected from the immediate future of an event.
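The smoothing idea can be made concrete with a standard state-space sketch. Below is a minimal fixed-interval smoother (a forward Kalman filter followed by a Rauch-Tung-Striebel backward pass) for a 1-D constant-velocity model; it illustrates the model class only, with made-up noise parameters rather than the paper's fitted values.

```python
import numpy as np

def rts_smooth(obs, q=0.01, r=0.5):
    """Kalman filter + RTS smoother for a 1-D constant-velocity model.
    State = [position, velocity]; returns filtered and smoothed positions."""
    F = np.array([[1.0, 1.0], [0.0, 1.0]])   # constant-velocity dynamics
    H = np.array([[1.0, 0.0]])               # position is observed
    Q = q * np.eye(2)
    R = np.array([[r]])

    n = len(obs)
    x = np.zeros((n, 2)); P = np.zeros((n, 2, 2))
    xp = np.zeros((n, 2)); Pp = np.zeros((n, 2, 2))
    xf, Pf = np.zeros(2), np.eye(2)
    for t in range(n):
        xp[t] = F @ xf; Pp[t] = F @ Pf @ F.T + Q            # predict
        K = Pp[t] @ H.T @ np.linalg.inv(H @ Pp[t] @ H.T + R)
        xf = xp[t] + (K @ (obs[t] - H @ xp[t])).ravel()     # update
        Pf = (np.eye(2) - K @ H) @ Pp[t]
        x[t], P[t] = xf, Pf

    xs = x.copy()
    for t in range(n - 2, -1, -1):                          # backward pass
        G = P[t] @ F.T @ np.linalg.inv(Pp[t + 1])
        xs[t] = x[t] + G @ (xs[t + 1] - xp[t + 1])
    return x[:, 0], xs[:, 0]
```

Run on noisy positions of a target that randomly reverses direction, the smoothed trajectory near a reversal is pulled toward the upcoming motion, which is the signature the paper's psychometric curves capture.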


1989 ◽  
Vol 146 (1) ◽  
pp. 115-139
Author(s):  
C. Koch ◽  
H. T. Wang ◽  
B. Mathur

Computing motion on the basis of the time-varying image intensity is a difficult problem for both artificial and biological vision systems. We show how one well-known gradient-based computer algorithm for estimating visual motion can be implemented within the primate visual system. This relaxation algorithm computes the optical flow field by minimizing a variational functional of a form commonly encountered in early vision, and proceeds in two stages: in the first, local motion is computed; in the second, spatial integration occurs. Neurons in the second stage represent the optical flow field via a population-coding scheme, such that the vector sum of all neurons at each location codes for the direction and magnitude of the velocity at that location. The resulting network maps onto the magnocellular pathway of the primate visual system, in particular onto cells in the primary visual cortex (V1) as well as in the middle temporal area (MT). Our algorithm reproduces a number of psychophysical phenomena and illusions (perception of coherent plaids, motion capture, motion coherence) as well as response properties seen in electrophysiological recordings. Thus, a single unifying principle, ‘the final optical flow should be as smooth as possible’ (except at isolated motion discontinuities), explains a large number of phenomena and links single-cell behavior with perception and computational theory.
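The variational scheme in question is the classic smoothness-regularized formulation of Horn and Schunck (1981), which the paper maps onto a neural architecture. A minimal sketch of its relaxation iteration, with illustrative parameter values:

```python
import numpy as np
from scipy.ndimage import convolve

def horn_schunck(frame0, frame1, alpha=1.0, iters=100):
    """Iteratively minimize the brightness-constancy + smoothness functional."""
    f0, f1 = frame0.astype(float), frame1.astype(float)
    Iy, Ix = np.gradient(f0)
    It = f1 - f0
    # Averaging kernel: the neighborhood ("spatial integration") term.
    avg = np.array([[1, 2, 1], [2, 0, 2], [1, 2, 1]]) / 12.0
    u, v = np.zeros_like(f0), np.zeros_like(f0)
    for _ in range(iters):
        ubar, vbar = convolve(u, avg), convolve(v, avg)
        # Closed-form update from minimizing the regularized functional.
        common = (Ix * ubar + Iy * vbar + It) / (alpha**2 + Ix**2 + Iy**2)
        u = ubar - Ix * common
        v = vbar - Iy * common
    return u, v
```

Each iteration pulls the flow toward its local average (the spatial-integration stage) while re-imposing the brightness-constancy constraint (the local-motion stage); the population code described in the abstract amounts to reading the converged (u, v) out as a vector sum of direction-tuned units.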


2017 ◽  
Vol 87 ◽  
pp. 1-14 ◽  
Author(s):  
Andry Maykol Pinto ◽  
Paulo G. Costa ◽  
Miguel V. Correia ◽  
Anibal C. Matos ◽  
A. Paulo Moreira

2020 ◽  
Vol 117 (39) ◽  
pp. 24581-24589
Author(s):  
Johannes Bill ◽  
Hrag Pailian ◽  
Samuel J. Gershman ◽  
Jan Drugowitsch

In the real world, complex dynamic scenes often arise from the composition of simpler parts. The visual system exploits this structure by hierarchically decomposing dynamic scenes: When we see a person walking on a train or an animal running in a herd, we recognize the individual’s movement as nested within a reference frame that is, itself, moving. Despite its ubiquity, surprisingly little is understood about the computations underlying hierarchical motion perception. To address this gap, we developed a class of stimuli that grant tight control over statistical relations among object velocities in dynamic scenes. We first demonstrate that structured motion stimuli benefit human multiple object tracking performance. Computational analysis revealed that the performance gain is best explained by human participants making use of motion relations during tracking. A second experiment, using a motion prediction task, reinforced this conclusion and provided fine-grained information about how the visual system flexibly exploits motion structure.
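One way to make the stimulus construction concrete: velocities with controlled statistical relations can be generated by assigning each object a subset of latent motion sources and summing them, with each source following mean-reverting (Ornstein-Uhlenbeck) dynamics. The assignment matrix and parameters below are illustrative, not the paper's exact stimulus set.

```python
import numpy as np

def simulate(T=500, dt=0.01, tau=0.5, sigma=1.0, seed=0):
    """Generate hierarchically structured velocities for 3 objects."""
    rng = np.random.default_rng(seed)
    # Rows = objects, columns = latent sources. Column 0 is a shared
    # ("group") source; columns 1-3 are per-object sources.
    C = np.array([[1, 1, 0, 0],
                  [1, 0, 1, 0],
                  [1, 0, 0, 1]], dtype=float)
    s = np.zeros(C.shape[1])
    vel = np.zeros((T, C.shape[0]))
    for t in range(T):
        # OU update: mean-reverting, with stationary std sigma.
        s += (-s / tau) * dt + sigma * np.sqrt(2 * dt / tau) * rng.standard_normal(len(s))
        vel[t] = C @ s          # observed object velocities
    return vel
```

The shared column induces velocity correlations across objects; an observer or tracking model that exploits those correlations gains exactly the kind of advantage the experiments measure.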


2018 ◽  
Vol 4 (1) ◽  
pp. 143-163 ◽  
Author(s):  
Helen H. Yang ◽  
Thomas R. Clandinin

Motion in the visual world provides critical information to guide the behavior of sighted animals. Furthermore, as visual motion estimation requires comparisons of signals across inputs and over time, it represents a paradigmatic and generalizable neural computation. Focusing on the Drosophila visual system, where an explosion of technological advances has recently accelerated experimental progress, we review our understanding of how, algorithmically and mechanistically, motion signals are first computed.
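The best-known algorithmic account in this literature is the Hassenstein-Reichardt correlator: delay one photoreceptor's signal, correlate it with its neighbor's, and subtract the mirror-symmetric arm to obtain a direction-opponent output. A minimal sketch follows; the array layout and the simple exponential delay filter are illustrative choices, not a claim about the circuit's biophysics.

```python
import numpy as np

def hrc_response(stimulus, tau=5.0):
    """stimulus: array of shape (time, space) of luminance signals.
    Returns the opponent motion signal per detector, shape (time, space-1)."""
    # First-order low-pass filter serving as the "delay" line.
    lp = np.zeros_like(stimulus, dtype=float)
    for t in range(1, stimulus.shape[0]):
        lp[t] = lp[t - 1] + (stimulus[t] - lp[t - 1]) / tau
    a, b = stimulus[:, :-1], stimulus[:, 1:]   # neighboring inputs
    a_d, b_d = lp[:, :-1], lp[:, 1:]           # delayed copies
    return a_d * b - b_d * a                   # direction-opponent output
```

A positive response indicates motion from the first input toward its neighbor; reversing the stimulus direction flips the sign.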


2019 ◽  
Vol 27 (1) ◽  
pp. 211-220
Author(s):  
王向军 WANG Xiang-jun ◽  
张继龙 ZHANG Ji-long ◽  
阴雷 YIN Lei
