Parametric Transformation
Recently Published Documents

TOTAL DOCUMENTS: 30 (five years: 8)
H-INDEX: 5 (five years: 1)

Symmetry ◽  
2021 ◽  
Vol 14 (1) ◽  
pp. 37
Author(s):  
Fernando Nuez

In this paper, algebraic relations are established that determine the invariance of a transformed number after several transformations. The restrictions that give these relations a group structure are analyzed, including the case of the Klein group. Parametric Kr functions associated with the existence of cycles are presented, together with the role of the number of their links in grouping numbers into higher-order equivalence classes. To this end, we develop a methodology based on binary equivalence relations and a complete parameterization of the Kaprekar routine using parametric-transformation Ki functions.
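
To make the routine concrete, here is a minimal sketch of the classic Kaprekar iteration on four-digit numbers together with cycle detection; the helper names (kaprekar_step, kaprekar_cycle) are illustrative, and the paper's parametric Ki/Kr functions are not reproduced here:

```python
def kaprekar_step(n: int, digits: int = 4) -> int:
    """One Kaprekar transformation: sort the digits descending and
    ascending, and subtract the smaller number from the larger."""
    s = str(n).zfill(digits)
    hi = int("".join(sorted(s, reverse=True)))
    lo = int("".join(sorted(s)))
    return hi - lo

def kaprekar_cycle(n: int, digits: int = 4) -> list[int]:
    """Iterate until a value repeats; the returned tail is the cycle."""
    seen, seq = {}, []
    while n not in seen:
        seen[n] = len(seq)
        seq.append(n)
        n = kaprekar_step(n, digits)
    return seq[seen[n]:]

print(kaprekar_cycle(3524))  # -> [6174], Kaprekar's constant
```

For four-digit inputs with at least two distinct digits, the detected cycle is the fixed point 6174 (Kaprekar's constant); repdigits collapse to 0. The equivalence classes the paper studies group inputs by the cycles this routine induces.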


Test ◽  
2021 ◽  
Author(s):  
Nick Kloodt ◽  
Natalie Neumeyer ◽  
Ingrid Van Keilegom

In transformation regression models, the response is transformed before a regression model is fitted to the covariates and the transformed response. We assume such a model in which the errors are independent of the covariates and the regression function is modeled nonparametrically. We suggest a goodness-of-fit test for a parametric transformation class based on a distance between a nonparametric transformation estimator and the parametric class. We present asymptotic theory under the null hypothesis that the semiparametric model is valid and under local alternatives. A bootstrap algorithm is suggested for applying the test. We also consider relevant hypotheses to distinguish between large and small distances of the parametric transformation class from the 'true' transformation.
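
As a rough illustration of the minimum-distance idea (not the authors' estimator), the sketch below measures the L2 distance between a given nonparametric transformation estimate and the Box-Cox class; the function names and the toy h_hat are assumptions, and the bootstrap calibration that makes this a test is omitted:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def boxcox_transform(y, lam):
    """Box-Cox family, the parametric class in this toy example."""
    return np.log(y) if abs(lam) < 1e-8 else (y ** lam - 1.0) / lam

def distance_to_class(h_hat, grid):
    """Minimise the L2 distance between a (given) nonparametric
    transformation estimate h_hat and the Box-Cox class over lambda."""
    def loss(lam):
        return np.mean((h_hat(grid) - boxcox_transform(grid, lam)) ** 2)
    res = minimize_scalar(loss, bounds=(-2.0, 2.0), method="bounded")
    return res.fun, res.x

# Demo: pretend h_hat is a noisy estimate of the log transformation,
# which lies inside the Box-Cox class (lambda = 0).
rng = np.random.default_rng(0)
grid = np.linspace(0.5, 5.0, 200)
noise = rng.normal(scale=0.02, size=grid.size)
h_hat = lambda y: np.log(y) + np.interp(y, grid, noise)

T_n, lam_hat = distance_to_class(h_hat, grid)
print(f"distance {T_n:.4f} at lambda = {lam_hat:.3f}")  # small: class fits
```

In the paper the nonparametric transformation estimator and the bootstrap calibration carry the statistical content; this snippet only shows the shape of the minimum-distance statistic that would be compared against bootstrap quantiles.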


2019 ◽  
Vol 72 (6) ◽  
pp. 1287-1315
Author(s):  
Natalie Neumeyer ◽  
Leonie Selk ◽  
Charles Tillier

Author(s):  
Dipan K. Pal ◽  
Marios Savvides

ConvNets, through their architecture, enforce invariance only to translation. In this paper, we introduce a new class of deep convolutional architectures called Non-Parametric Transformation Networks (NPTNs), which can learn general invariances and symmetries directly from data. NPTNs are a natural generalization of ConvNets and can be optimized directly using gradient descent. Unlike almost all previous work on deep architectures, they make no assumption about the structure of the invariances present in the data and are in that respect flexible and powerful. We also model ConvNets and NPTNs under a unified framework called Transformation Networks (TNs), which yields a better understanding of the connection between the two. We demonstrate the efficacy of NPTNs on MNIST with extreme transformations and on CIFAR-10, where they outperform baselines, and they further outperform several recent algorithms on ETH-80, all while using the same number of parameters. We also show that they are more effective than ConvNets at modelling symmetries and invariances from data, without explicit knowledge of the added arbitrary nuisance transformations. Finally, we replace ConvNets with NPTNs within Capsule Networks and show that this enables Capsule Networks to perform even better.
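
A minimal PyTorch-style sketch of an NPTN-like layer, under the assumption that each input-output channel pair learns |G| filters whose responses are max-pooled over the G learned transformations and then pooled across input channels; the class and parameter names are illustrative, not the authors' code:

```python
import torch
import torch.nn as nn

class NPTNLayer(nn.Module):
    """Sketch of a Non-Parametric Transformation Network layer.
    Each (input, output) channel pair owns G learned filters; responses
    are max-pooled over the G transformations (the learned invariance)
    and then pooled (here averaged) across input channels."""

    def __init__(self, in_channels, out_channels, G, kernel_size, padding=0):
        super().__init__()
        self.in_channels, self.out_channels, self.G = in_channels, out_channels, G
        # One grouped conv gives each input channel its own
        # out_channels * G private filters.
        self.conv = nn.Conv2d(in_channels, in_channels * out_channels * G,
                              kernel_size, padding=padding,
                              groups=in_channels, bias=False)

    def forward(self, x):
        n = x.shape[0]
        y = self.conv(x)  # (n, in * out * G, h, w)
        y = y.view(n, self.in_channels, self.out_channels, self.G,
                   y.shape[-2], y.shape[-1])
        y = y.max(dim=3).values  # max over the G learned transformations
        return y.mean(dim=1)     # pool over input channels

# Toy usage: a drop-in for a 3 -> 16 channel conv layer with G = 4.
layer = NPTNLayer(3, 16, G=4, kernel_size=5, padding=2)
out = layer(torch.randn(2, 3, 32, 32))
print(out.shape)  # torch.Size([2, 16, 32, 32])
```

With G = 1 the max is a no-op and the layer reduces to an ordinary (channel-pooled) convolution, which reflects the claim that NPTNs generalize ConvNets.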


2018 ◽  
Vol 2 (OOPSLA) ◽  
pp. 1-28 ◽  
Author(s):  
James Koppel ◽  
Varot Premtoon ◽  
Armando Solar-Lezama
