Robust Learning with Implicit Residual Networks

2020 ◽  
Vol 3 (1) ◽  
pp. 34-55
Author(s):  
Viktor Reshniak ◽  
Clayton G. Webster

In this effort, we propose a new deep architecture utilizing residual blocks inspired by implicit discretization schemes. As opposed to standard feed-forward networks, the outputs of the proposed implicit residual blocks are defined as the fixed points of appropriately chosen nonlinear transformations. We show that this choice leads to improved stability of both the forward and backward propagations, has a favorable impact on generalization power, and allows one to control the robustness of the network with only a few hyperparameters. In addition, the proposed reformulation of ResNet does not introduce new parameters and can potentially lead to a reduction in the number of required layers due to improved forward stability. Finally, we derive a memory-efficient training algorithm, propose a stochastic regularization technique, and provide numerical results in support of our findings.
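As an illustration of the idea (not the authors' implementation), an implicit residual block defines its output y through a relation such as y = x + f(y) and evaluates it by fixed-point iteration. The sketch below assumes a simple tanh transformation, a Picard iteration solver, and a small weight norm to keep the map contractive; all of these are illustrative choices.

```python
import numpy as np

def implicit_residual_block(x, W, b, n_iter=50, tol=1e-8):
    """Evaluate y = x + f(y) by fixed-point (Picard) iteration.

    Here f(y) = tanh(W @ y + b) stands in for the block's nonlinear
    transformation; the true architecture and solver are choices made
    in the paper and are not reproduced here.
    """
    y = x.copy()                          # warm start from the input
    for _ in range(n_iter):
        y_next = x + np.tanh(W @ y + b)
        if np.linalg.norm(y_next - y) < tol:
            break                         # fixed point reached
        y = y_next
    return y

# Usage: a 4-dimensional block with a small random weight matrix.
rng = np.random.default_rng(0)
W = 0.1 * rng.standard_normal((4, 4))     # small norm keeps the map contractive
b = np.zeros(4)
x = rng.standard_normal(4)
y = implicit_residual_block(x, W, b)
```

Differentiating through the fixed point via the implicit function theorem, rather than through the stored iterates, is what typically makes training such blocks memory-efficient.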

2002 ◽  
Vol 11 (04) ◽  
pp. 475-497
Author(s):  
JOHN F. VASSILOPOULOS ◽  
CRIS KOUTSOUGERAS ◽  
ARTURO HERNÁNDEZ-AGUIRRE

The Coulomb Energy network offers a unique perspective on nonlinear transformations. However, its training, as originally proposed by C. Scofield [1], presented difficulties that prevented its general use. We have investigated this model and present here the reasons for its shortcomings. We address these problems by constraining the network's architecture (topology), derive the associated training algorithm, and discuss further refinements of both the model and the algorithm, together with a study of various other modifications. Genetic algorithms and simulated annealing are also evaluated as alternative training techniques. Simulation results are presented.
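Since simulated annealing is one of the training techniques the abstract evaluates, a generic sketch may serve as a point of reference. The toy quadratic objective, move size, and geometric cooling schedule below are illustrative assumptions, not Scofield's or the authors' formulation.

```python
import numpy as np

def simulated_annealing(loss, w0, n_steps=5000, t0=1.0, cooling=0.999,
                        step=0.1, rng=None):
    """Generic simulated-annealing weight search.

    `loss` maps a weight vector to a scalar training error; it is an
    illustrative stand-in for the network's energy function.
    """
    if rng is None:
        rng = np.random.default_rng()
    w, e = w0.copy(), loss(w0)
    t = t0
    for _ in range(n_steps):
        w_new = w + step * rng.standard_normal(w.shape)   # random move
        e_new = loss(w_new)
        # Accept downhill moves always, uphill moves with Boltzmann probability.
        if e_new < e or rng.random() < np.exp(-(e_new - e) / t):
            w, e = w_new, e_new
        t *= cooling                                      # cool the temperature
    return w, e

# Usage on a toy quadratic "training error" with minimum at w = (3, 3).
w_opt, e_opt = simulated_annealing(lambda w: np.sum((w - 3.0) ** 2),
                                   np.zeros(2))
```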


2012 ◽  
Vol 132 (11) ◽  
pp. 1011-1017 ◽  
Author(s):  
Yasutaka Amagai ◽  
Hiroyuki Fujiki ◽  
Koji Shimizume ◽  
Shigeru Hidaka

2020 ◽  
Vol 2020 (8) ◽  
pp. 114-1–114-7
Author(s):  
Bryan Blakeslee ◽  
Andreas Savakis

Change detection in image pairs has traditionally been a binary process, reporting either “Change” or “No Change.” In this paper, we present LambdaNet, a novel deep architecture for performing pixel-level directional change detection based on a four-class classification scheme. LambdaNet successfully incorporates the notion of “directional change” and identifies differences between two images as “Additive Change” when a new object appears, “Subtractive Change” when an object is removed, “Exchange” when different objects are present in the same location, and “No Change.” To obtain pixel-annotated change maps for training, we generated directional change class labels for the Change Detection 2014 dataset. Our tests indicate that LambdaNet is suitable for situations where the type of change is unstructured, such as change detection scenarios in satellite imagery.
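To make the four-class scheme concrete, the sketch below derives a per-pixel directional change map from a pair of hypothetical object-id masks. The mask format, label encoding, and the directional_change_map helper are illustrative assumptions, not the paper's actual annotation pipeline for Change Detection 2014.

```python
import numpy as np

# Assumed label encoding (illustrative, not the paper's).
NO_CHANGE, ADDITIVE, SUBTRACTIVE, EXCHANGE = 0, 1, 2, 3

def directional_change_map(mask_before, mask_after):
    """Per-pixel four-class directional change labels.

    Each mask holds 0 for background and a positive integer object id
    elsewhere. A pixel is Additive where an object appears, Subtractive
    where one disappears, Exchange where two different objects occupy
    the same location, and No Change otherwise.
    """
    before = mask_before > 0
    after = mask_after > 0
    labels = np.full(mask_before.shape, NO_CHANGE, dtype=np.uint8)
    labels[~before & after] = ADDITIVE                    # object appears
    labels[before & ~after] = SUBTRACTIVE                 # object removed
    labels[before & after & (mask_before != mask_after)] = EXCHANGE
    return labels
```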


PIERS Online ◽  
2007 ◽  
Vol 3 (4) ◽  
pp. 374-378 ◽  
Author(s):  
Yu Liu ◽  
Ziqiang Yang ◽  
Zheng Liang ◽  
Limei Qi
