Scale-Covariant and Scale-Invariant Gaussian Derivative Networks

Author(s):  
Tony Lindeberg

Abstract: This paper presents a hybrid approach between scale-space theory and deep learning, in which a deep learning architecture is constructed by coupling parameterized scale-space operations in cascade. By sharing the learnt parameters between multiple scale channels, and by using the transformation properties of the scale-space primitives under scaling transformations, the resulting network becomes provably scale covariant. By additionally performing max pooling over the multiple scale channels, or some other permutation-invariant pooling over scales, the resulting network architecture for image classification also becomes provably scale invariant. We investigate the performance of such networks on the MNIST Large Scale dataset, which contains rescaled images from the original MNIST dataset, with scale variations over a factor of 4 for the training data and over a factor of 16 for the testing data. It is demonstrated that the resulting approach allows for scale generalization, enabling good performance for classifying patterns at scales not spanned by the training data.
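As a rough illustration of the architecture outlined in the abstract, the following sketch (assuming a PyTorch-style implementation; all class names, layer sizes, scale values and the restriction to derivatives up to order two are illustrative assumptions, not the paper's reference code) builds a cascade of Gaussian derivative layers, shares the learned mixing weights across a set of scale channels, and max-pools over the scale channels to obtain a scale-invariant classification output:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def gaussian_derivative_1d(sigma, order, radius):
    """Sampled 1D Gaussian derivative kernel of the given order (0, 1 or 2)."""
    x = torch.arange(-radius, radius + 1, dtype=torch.float32)
    g = torch.exp(-x ** 2 / (2 * sigma ** 2))
    g = g / g.sum()
    if order == 0:
        return g
    if order == 1:
        return -x / sigma ** 2 * g
    if order == 2:
        return (x ** 2 - sigma ** 2) / sigma ** 4 * g
    raise ValueError("order must be 0, 1 or 2")


class GaussDerivLayer(nn.Module):
    """One cascade step: scale-normalized Gaussian derivatives up to order 2,
    mixed by learned 1x1 weights that are shared across all scale channels."""

    def __init__(self, in_ch, out_ch):
        super().__init__()
        # 6 basis responses per input channel: L, Lx, Ly, Lxx, Lxy, Lyy.
        self.mix = nn.Conv2d(6 * in_ch, out_ch, kernel_size=1)

    def forward(self, f, sigma):
        radius = max(1, int(4 * sigma))
        kx = {o: gaussian_derivative_1d(sigma, o, radius).view(1, 1, 1, -1)
              for o in range(3)}
        ky = {o: k.transpose(2, 3) for o, k in kx.items()}
        c = f.shape[1]

        def sep(fx, fy):
            # Separable (depthwise) filtering: first along x, then along y.
            wx = fx.expand(c, 1, -1, -1).contiguous()
            wy = fy.expand(c, 1, -1, -1).contiguous()
            h = F.conv2d(f, wx, padding=(0, radius), groups=c)
            return F.conv2d(h, wy, padding=(radius, 0), groups=c)

        # Gamma-normalized derivatives (gamma = 1): multiply by sigma^order.
        basis = torch.cat([
            sep(kx[0], ky[0]),               # smoothed input L
            sigma * sep(kx[1], ky[0]),       # Lx
            sigma * sep(kx[0], ky[1]),       # Ly
            sigma ** 2 * sep(kx[2], ky[0]),  # Lxx
            sigma ** 2 * sep(kx[1], ky[1]),  # Lxy
            sigma ** 2 * sep(kx[0], ky[2]),  # Lyy
        ], dim=1)
        return F.relu(self.mix(basis))


class ScaleChannelNet(nn.Module):
    """Cascade of Gaussian derivative layers applied in several scale channels
    with shared weights, followed by max pooling over the scale channels."""

    def __init__(self, channels=(1, 16, 32), n_classes=10,
                 sigma0=1.0, scale_ratio=2.0, n_scales=4):
        super().__init__()
        self.sigmas = [sigma0 * scale_ratio ** k for k in range(n_scales)]
        self.layers = nn.ModuleList(GaussDerivLayer(channels[i], channels[i + 1])
                                    for i in range(len(channels) - 1))
        self.head = nn.Linear(channels[-1], n_classes)

    def forward(self, f):
        per_scale = []
        for sigma in self.sigmas:            # same learned weights in every scale channel
            x = f
            for layer in self.layers:
                x = layer(x, sigma)
            per_scale.append(x.mean(dim=(2, 3)))       # global average pooling over space
        pooled, _ = torch.stack(per_scale).max(dim=0)  # permutation-invariant pooling over scales
        return self.head(pooled)


# Example usage: logits = ScaleChannelNet()(torch.randn(8, 1, 112, 112))
```

The design point mirrored here is that only the 1x1 mixing weights and the classification head are learned, while the Gaussian derivative kernels are fixed analytic functions of the scale parameter, so a rescaling of the input approximately maps responses between scale channels before the max pooling over scales.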


1990
Author(s):
Vadim A. Markel,
Leonid S. Muratov,
Mark I. Stockman,
Thomas F. George

Author(s):  
Flavio Mercati

The best matching procedure described in Chapter 4 is equivalent to the introduction of a principal fibre bundle in configuration space. Essentially, one introduces a one-dimensional gauge connection on the time axis, which is a representation of the Euclidean group of rotations and translations (or, possibly, the similarity group, which includes dilatations). To accommodate temporal relationalism, the variational principle needs to be invariant under reparametrizations. The simplest way to realize this in point-particle mechanics is to use Jacobi's reformulation of Maupertuis' principle. The chapter concludes with the relational reformulation of the Newtonian N-body problem (and its scale-invariant variant).
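For reference, Jacobi's reformulation of Maupertuis' principle mentioned above can be written, in standard (assumed) notation rather than the chapter's, as the reparametrization-invariant action:

```latex
% Jacobi action (standard form; q_i are particle positions, m_i their masses,
% E the fixed total energy, V the potential, \lambda an arbitrary curve parameter).
S_{\mathrm{Jacobi}}
  = 2 \int \mathrm{d}\lambda \,
    \sqrt{\bigl(E - V(q)\bigr)\, T_\lambda},
\qquad
T_\lambda = \frac{1}{2} \sum_i m_i \,
    \frac{\mathrm{d}q_i}{\mathrm{d}\lambda} \cdot
    \frac{\mathrm{d}q_i}{\mathrm{d}\lambda}.
% Since T_\lambda is quadratic in dq/d\lambda, the integrand picks up a factor
% |df/d\lambda| under \lambda \to f(\lambda), so the action does not depend on
% the choice of parametrization.
```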


Author(s):  
S. G. Rajeev

The initial value problem of the incompressible Navier–Stokes equations is explained. Leray's classic study of it (using Picard iteration) is simplified and described in the language of physics. The ideas of Lebesgue and Sobolev norms are explained. The L2 norm, being the energy, cannot increase. This gives sufficient control to establish existence, regularity and uniqueness in two-dimensional flow. The L3 norm is not guaranteed to decrease, so this strategy fails in three dimensions. Leray's proof of regularity for a finite time is outlined. His attempts to construct a scale-invariant singular solution, and modern work showing that this is impossible, are then explained. The physical consequences of a negative answer to the regularity of Navier–Stokes solutions are explained. This chapter is meant as an introduction, for physicists, to a difficult field of analysis.
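To make the norm statements above concrete, the following are the standard energy identity and scaling symmetry of the incompressible Navier–Stokes equations (written in generic notation, which may differ from the chapter's):

```latex
% u is the velocity field, p the pressure, \nu the kinematic viscosity.
% Energy identity for smooth, decaying solutions: the L^2 norm cannot increase.
\frac{\mathrm{d}}{\mathrm{d}t}\,\frac{1}{2}\int |u|^2 \,\mathrm{d}x
  \;=\; -\,\nu \int |\nabla u|^2 \,\mathrm{d}x \;\le\; 0.
% Scaling symmetry underlying the scale-invariant (self-similar) ansatz:
% if (u, p) solves the equations, so does
u_\lambda(x,t) = \lambda\, u(\lambda x, \lambda^2 t),
\qquad
p_\lambda(x,t) = \lambda^2\, p(\lambda x, \lambda^2 t).
```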


2015
Vol 2015 (2)
pp. P02010
Author(s):
Xiao Chen,
Gil Young Cho,
Thomas Faulkner,
Eduardo Fradkin
