Using Image Sequences for Long-Term Visual Localization

Author(s): Erik Stenborg, Torsten Sattler, Lars Hammarstrand
Author(s): Mathias Burki, Marcin Dymczyk, Igor Gilitschenski, Cesar Cadena, Roland Siegwart, ...

2020, Vol 12 (9), pp. 1409
Author(s): Ewerton Silva, Ricardo da S. Torres, Bruna Alberton, Leonor Patricia C. Morellato, Thiago S. F. Silva

One of the challenges in remote phenology studies lies in efficiently managing the large volumes of data obtained as long-term sequences of high-resolution images. A promising approach is image foveation, which reduces the computational resources (e.g., memory storage) required in several applications. In this paper, we propose an image foveation approach for plant phenology tracking, in which relevant changes within an image time series guide the creation of foveal models used to resample unseen images. In this way, images are mapped to a space-variant domain where regions vary in resolution according to their contextual relevance for the application. We validated the approach on a dataset of vegetation image sequences previously used in plant phenology studies.
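As a rough illustration of the idea (not the authors' implementation), space-variant resampling can be sketched as block-wise reduction of resolution guided by a relevance map. Here `foveate`, the block size, and the `keep_frac` threshold are all assumptions; the per-pixel `relevance` map would come from elsewhere, e.g. from temporal change across the image sequence:

```python
import numpy as np

def foveate(image, relevance, block=8, keep_frac=0.25):
    """Sketch of space-variant resampling: blocks whose mean relevance
    falls below a quantile threshold are stored at reduced resolution
    (here, collapsed to their mean), while relevant regions keep full
    detail. `relevance` is a per-pixel map in [0, 1]."""
    h, w = image.shape[:2]
    out = image.astype(float).copy()
    # keep roughly the top `keep_frac` most relevant pixels at full resolution
    thresh = np.quantile(relevance, 1.0 - keep_frac)
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            if relevance[y:y + block, x:x + block].mean() < thresh:
                # low-relevance block: replace with its mean (coarse resolution)
                out[y:y + block, x:x + block] = out[y:y + block, x:x + block].mean()
    return out
```

In a real system the coarse blocks would be stored subsampled rather than merely flattened, which is where the memory saving comes from; the sketch only shows the space-variant selection step.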


2020, Vol 5 (2), pp. 1492-1499
Author(s): Lee Clement, Mona Gridseth, Justin Tomasi, Jonathan Kelly

Perception, 1996, Vol 25 (2), pp. 207-220
Author(s): James V Stone

An unsupervised method is presented which permits a set of model neurons, or a microcircuit, to learn low-level vision tasks, such as the extraction of surface depth. Each microcircuit implements a simple, generic strategy which is based on a key assumption: perceptually salient visual invariances, such as surface depth, vary smoothly over time. In the process of learning to extract smoothly varying invariances, each microcircuit maximises a microfunction. This is achieved by means of a learning rule which maximises the long-term variance of the state of a model neuron and simultaneously minimises its short-term variance. The learning rule involves a linear combination of anti-Hebbian and Hebbian weight changes, over short and long time scales, respectively. The method is demonstrated on a hyperacuity task: estimating subpixel stereo disparity from a temporal sequence of random-dot stereograms. After learning, the microcircuit generalises, without additional learning, to previously unseen image sequences. It is proposed that the approach adopted here may be used to define a canonical microfunction, which can be used to learn many perceptually salient invariances.


1996, Vol 8 (7), pp. 1463-1492
Author(s): James V. Stone

A model is presented for unsupervised learning of low-level vision tasks, such as the extraction of surface depth. A key assumption is that perceptually salient visual parameters (e.g., surface depth) vary smoothly over time. This assumption is used to derive a learning rule that maximizes the long-term variance of each unit's outputs, whilst simultaneously minimizing its short-term variance. The length of the half-life associated with each of these variances is not critical to the success of the algorithm. The learning rule involves a linear combination of anti-Hebbian and Hebbian weight changes, over short and long time scales, respectively. This maximizes the information throughput with respect to low-frequency parameters implicit in the input sequence. The model is used to learn stereo disparity from temporal sequences of random-dot and gray-level stereograms containing synthetically generated subpixel disparities. The presence of temporal discontinuities in disparity does not prevent learning or generalization to previously unseen image sequences. The implications of this class of unsupervised methods for learning in perceptual systems are discussed.
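The objective described in both abstracts, the ratio of long-term to short-term output variance, can be sketched as follows. This is an illustrative reconstruction, not Stone's published code: the function names, the half-life values, and the simplification of treating the moving averages as constants when taking the gradient are all assumptions. The gradient splits into a Hebbian term over the long time scale and an anti-Hebbian term over the short one, matching the rule's description:

```python
import numpy as np

def ema(signal, half_life):
    """Exponentially weighted moving average with the given half-life."""
    lam = 0.5 ** (1.0 / half_life)
    out = np.empty_like(signal)
    acc = signal[0]
    for t, v in enumerate(signal):
        acc = lam * acc + (1 - lam) * v
        out[t] = acc
    return out

def stone_objective(z, short_hl=4, long_hl=64):
    """F = log(V / U): long-term over short-term variance of an output
    sequence z. Smooth, slowly varying signals score high; noise scores low."""
    U = np.mean((z - ema(z, short_hl)) ** 2)  # short-term variance
    V = np.mean((z - ema(z, long_hl)) ** 2)   # long-term variance
    return np.log(V / U)

def weight_update(X, w, eta=0.01, short_hl=4, long_hl=64):
    """One gradient-ascent step on F for a linear unit z = X @ w,
    treating the moving averages as constants (an approximation).
    The first term is Hebbian (long time scale), the second anti-Hebbian
    (short time scale)."""
    z = X @ w
    z_s, z_l = ema(z, short_hl), ema(z, long_hl)
    U = np.mean((z - z_s) ** 2)
    V = np.mean((z - z_l) ** 2)
    grad = ((z - z_l) @ X) / (len(z) * V) - ((z - z_s) @ X) / (len(z) * U)
    return w + eta * grad
```

Because F depends only on the ratio V/U, the scale of the output is unconstrained, which is consistent with the abstracts' remark that the exact half-lives are not critical.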

