Optical Flow in Deep Visual Tracking

2020 ◽  
Vol 34 (07) ◽  
pp. 12112-12119
Author(s):  
Mikko Vihlman ◽  
Arto Visala

Single-target tracking of generic objects is a difficult task, since the tracker is given only the information present in the first frame of a video. In recent years, more and more trackers have been based on deep neural networks that learn generic features relevant for tracking. Optical flow is intuitively useful for tracking, yet most deep trackers must learn it implicitly. This paper argues that deep architectures are often well suited to learning such implicit representations of optical flow, and it is among the first to study the role of optical flow in deep visual tracking. The architecture of a typical tracker is modified to reveal the presence of implicit representations of optical flow and to assess the effect of using the flow information more explicitly. The results show that the network considered implicitly learns an effective representation of optical flow. The implicit representation can be replaced by an explicit flow input without a notable effect on performance, while using the implicit and explicit representations at the same time does not improve tracking accuracy. An explicit flow input could, however, enable lighter tracking networks.
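To make "explicit flow input" concrete, the sketch below estimates a coarse per-block displacement field by exhaustive block matching. This is a minimal, library-free illustration of what a dense flow field encodes; it is not the paper's network or flow method, and the block size and search radius are arbitrary choices for the example.

```python
import numpy as np

def block_matching_flow(prev, curr, block=8, search=4):
    """Estimate a coarse flow field by exhaustive block matching.

    For each non-overlapping `block` x `block` patch of `prev`, find the
    displacement within +/- `search` pixels that minimizes the sum of
    squared differences against `curr`. Returns a dict mapping the
    patch's top-left corner (y, x) to its best (dy, dx) displacement.
    """
    H, W = prev.shape
    flows = {}
    for y in range(0, H - block + 1, block):
        for x in range(0, W - block + 1, block):
            ref = prev[y:y + block, x:x + block]
            best, best_err = (0, 0), np.inf
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    yy, xx = y + dy, x + dx
                    # Skip candidate windows that fall outside the frame
                    if yy < 0 or xx < 0 or yy + block > H or xx + block > W:
                        continue
                    cand = curr[yy:yy + block, xx:xx + block]
                    err = np.sum((ref - cand) ** 2)
                    if err < best_err:
                        best_err, best = err, (dy, dx)
            flows[(y, x)] = best
    return flows
```

Applied to a frame and a copy shifted by two rows and one column, the interior blocks recover the displacement (2, 1). A learned or classical dense flow (e.g., Farnebäck) plays the same role at per-pixel resolution when fed to a tracker as an extra input.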

1999 ◽  
Vol 22 (5) ◽  
pp. 759-760
Author(s):  
Bruce Bridgeman

The visual system captures a unique contrast between implicit and explicit representation where the same event (location of a visible object) is coded in both ways in parallel. A method of differentiating the two representations is described using an illusion that affects only the explicit representation. Consistent with predictions, implicit information is available only from targets presently visible, but, surprisingly, a two-alternative decision does not disturb the implicit representation.


Geosciences ◽  
2018 ◽  
Vol 9 (1) ◽  
pp. 8 ◽  
Author(s):  
Christophe Baron-Hyppolite ◽  
Christopher Lashley ◽  
Juan Garzon ◽  
Tyler Miesse ◽  
Celso Ferreira ◽  
...  

Assessing the accuracy of nearshore numerical models—such as SWAN—is important to ensure their effectiveness in representing physical processes and predicting flood hazards. In particular, for application to coastal wetlands, it is important that the model accurately represents wave attenuation by vegetation. In SWAN, vegetation can be represented either implicitly, as enhanced bottom friction, or explicitly, as drag on an immersed body. While previous studies suggest that the implicit representation underestimates dissipation, field data have only recently been used for assessment, and only for fully submerged vegetation. The present study therefore investigates the performance of both the implicit and explicit representations of vegetation in SWAN in simulating wave attenuation over a natural emergent marsh. The wave and flow modules within Delft3D are used to create an open-ocean model to simulate offshore wave conditions. The domain is then decomposed to simulate nearshore processes and provide the boundary conditions necessary to run a standalone SWAN model, in which the implicit and explicit representations of vegetation are finally assessed. Results show that treating vegetation simply as enhanced bottom roughness (implicitly) under-represents the complexity of wave–vegetation interaction and, consequently, underestimates wave energy dissipation (error > 30%). The explicit vegetation representation, however, shows good agreement with field data (error < 20%).
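The two representations differ in what they compute. The explicit route rests on a quadratic (Morison-type) drag force on each plant stem, and the resulting bulk attenuation of wave height over a vegetated transect is commonly summarized by a Mendez–Losada-type decay H(x) = H0 / (1 + βx). The sketch below illustrates both; all coefficient values are hypothetical placeholders, not SWAN settings or values from this study.

```python
import math

RHO = 1025.0  # seawater density, kg/m^3 (illustrative value)

def morison_drag_per_length(u, cd=1.0, stem_width=0.008):
    """Drag force per unit stem height on a single plant stem, in N/m.

    This is the quadratic drag term underlying explicit vegetation
    models: F = 1/2 * rho * Cd * b * u * |u|, with b the stem width
    and u the local wave orbital velocity.
    """
    return 0.5 * RHO * cd * stem_width * u * abs(u)

def wave_height_decay(h0, beta, x):
    """Mendez-Losada-type wave height decay over a vegetated transect.

    H(x) = H0 / (1 + beta * x), where beta is an aggregate damping
    factor bundling stem density, width, drag coefficient, water depth,
    and wave period.
    """
    return h0 / (1.0 + beta * x)
```

With a (hypothetical) damping factor β = 0.01 m⁻¹, a 1 m incident wave is halved after 100 m of marsh; an implicit friction formulation that underestimates β would overpredict the wave height landward of the vegetation.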


2018 ◽  
Author(s):  
Naohide Yamamoto ◽  
Dagmara E. Mach ◽  
John W. Philbeck ◽  
Jennifer Van Pelt

Generally, imagining an action and physically executing it are thought to be controlled by common motor representations. However, imagined walking to a previewed target tends to be terminated more quickly than real walking to the same target, raising the question of what representations underlie the two modes of walking. To address this question, the present study advanced the hypothesis that both explicit and implicit representations of gait are involved in imagined walking, and further proposed that the underproduction of imagined walking duration largely stems from the explicit representation, owing to its susceptibility to a general undershooting tendency in time production (i.e., the error of anticipation). Properties of the explicit and implicit representations were examined by manipulating their relative dominance during imagined walking through concurrent bodily motions, and also by using non-spatial tasks that extracted the temporal structure of imagined walking. Results showed that the duration of imagined walking subserved by the implicit representation was equal to that of real walking, and that a time production task exhibited an underproduction bias equivalent to that in imagined walking tasks based on the explicit representation. These findings are interpreted as evidence for the dual-representation view of imagined walking.


Sensors ◽  
2022 ◽  
Vol 22 (2) ◽  
pp. 509
Author(s):  
Dipayan Mitra ◽  
Aranee Balachandran ◽  
Ratnasingham Tharmarasa

Airborne angle-only sensors can be used to track stationary or mobile ground targets. In order to make the problem observable in 3-dimensions (3-D), the height of the target (i.e., the height of the terrain) above sea level must be known. In most existing works, the terrain height is assumed to be known accurately. In practice, however, it is usually obtained from Digital Terrain Elevation Data (DTED), which is available at different resolution levels. Ignoring the terrain height uncertainty in a tracking algorithm will lead to a bias in the estimated states. In addition to the terrain uncertainty, another common source of uncertainty in angle-only sensors is the sensor biases. Both of these uncertainties must be handled properly to obtain better tracking accuracy. In this paper, we propose algorithms to estimate the sensor biases with the target(s) of opportunity and algorithms to track targets under terrain and sensor bias uncertainties. Sensor bias uncertainties can be reduced by estimating the biases using measurements from target(s) of opportunity with known horizontal positions; this step is optional in an angle-only tracking problem. We propose algorithms to pick optimal targets of opportunity for better bias estimation and algorithms to estimate the biases with the selected target(s) of opportunity. Finally, we provide a filtering framework to track the targets under terrain and bias uncertainties. The Posterior Cramér–Rao Lower Bound (PCRLB), which provides the lower bound on achievable estimation error, is derived for single-target filtering with an angle-only sensor under terrain uncertainty and measurement biases. The effectiveness of the proposed algorithms is verified by Monte Carlo simulations. The simulation results show that the sensor biases can be estimated accurately using the target(s) of opportunity, and that the tracking accuracies of the targets can be improved significantly using the proposed algorithms when terrain and bias uncertainties are present.
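The observability point above—that the terrain height must be known to fix a 3-D position from angles alone—can be made concrete with simple geometry. The sketch below (an illustrative flat-terrain construction, not the authors' filter or bias-estimation algorithm) intersects a single azimuth/elevation line of sight with the terrain plane and shows how a terrain-height error maps into horizontal position error.

```python
import math

def ground_fix_from_angles(sensor, az, el, terrain_h):
    """Localize a ground target from one azimuth/elevation measurement.

    The line of sight from the sensor is intersected with the horizontal
    plane z = terrain_h; without a known terrain height, range along the
    line of sight is unobservable from the angles alone. Azimuth is
    measured from the x-axis in the horizontal plane; elevation is
    negative when looking down. Angles are in radians.
    """
    xs, ys, zs = sensor
    # Unit line-of-sight vector
    ux = math.cos(el) * math.cos(az)
    uy = math.cos(el) * math.sin(az)
    uz = math.sin(el)
    if uz == 0.0:
        raise ValueError("horizontal line of sight never meets the terrain")
    t = (terrain_h - zs) / uz  # range along the line of sight
    if t <= 0.0:
        raise ValueError("terrain plane is behind the sensor")
    return (xs + t * ux, ys + t * uy, terrain_h)

def terrain_bias_sensitivity(el):
    """Horizontal position error per metre of terrain-height error.

    A terrain error dh slides the fix along the line of sight, shifting
    the horizontal estimate by roughly dh / tan(|el|) -- large at
    shallow elevation angles, which is why DTED resolution matters.
    """
    return 1.0 / math.tan(abs(el))
```

For a sensor at 1000 m altitude looking down at 45°, the fix lands 1000 m away horizontally, and each metre of terrain-height error shifts it by about 1 m; at a 10° depression angle the same metre of height error shifts the fix by roughly 5.7 m.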


2016 ◽  
Vol 108 ◽  
pp. 110-111
Author(s):  
Fabiola R. Gómez-Velázquez ◽  
Itzel Vergara-Basulto ◽  
Andrés A. González-Garrido ◽  
R. Malatesha Joshi ◽  
Alicia Martínez-Ramos
