A Reduced-Order Gauss-Newton Method for Nonlinear Problems Based on Compressed Sensing for PDE Applications

Author(s):  
Horacio Florez ◽  
Miguel Argáez

2021 ◽ 
Vol 446 ◽  
pp. 110666 ◽  
Author(s):  
Wenqian Chen ◽  
Qian Wang ◽  
Jan S. Hesthaven ◽  
Chuhua Zhang

Materials ◽  
2019 ◽  
Vol 12 (8) ◽  
pp. 1227 ◽  
Author(s):  
Dingfei Jin ◽  
Yue Yang ◽  
Tao Ge ◽  
Daole Wu

In this paper, we propose a fast sparse recovery algorithm based on the approximate l0 norm (FAL0), which helps improve the practicality of compressed sensing theory. We adopt a simple continuous and differentiable function to approximate the l0 norm. With the aim of minimizing the l0 norm, we derive a sparse recovery algorithm using a modified Newton method. In addition, we neglect the zero elements during the computation, which greatly reduces the amount of work. In a computer simulation experiment, we test the image denoising and signal recovery performance of different sparse recovery algorithms. The results show that the proposed method converges faster and achieves nearly the same accuracy as other algorithms, improving signal recovery efficiency under the same conditions.
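The abstract does not spell out the algorithm, so the following is only a rough sketch of the general idea it describes: replace the l0 norm with a smooth surrogate, take descent steps on that surrogate, and project back onto the measurement constraint. The snippet implements the classical smoothed-l0 (SL0) scheme rather than the paper's FAL0 method; the Gaussian surrogate, the sigma schedule, and all parameter values are assumptions made for illustration.

```python
import numpy as np

def smoothed_l0_recover(A, y, sigma_decay=0.7, sigma_min=1e-4, inner_iters=3, step=2.0):
    """Sketch of smoothed-l0 sparse recovery (SL0-style), not the paper's exact FAL0.

    The l0 norm is approximated by n - sum_i exp(-x_i^2 / (2 sigma^2)); for a
    decreasing sequence of sigma, gradient steps on this surrogate alternate
    with projection back onto the constraint A x = y."""
    A_pinv = np.linalg.pinv(A)
    x = A_pinv @ y                         # minimum-l2-norm feasible starting point
    sigma = 2.0 * np.max(np.abs(x))
    while sigma > sigma_min:
        for _ in range(inner_iters):
            # Surrogate gradient: entries already near zero barely move, which
            # mimics "neglecting the zero elements" during the iteration.
            x = x - step * x * np.exp(-x**2 / (2 * sigma**2))
            x = x - A_pinv @ (A @ x - y)   # project back onto {x : A x = y}
        sigma *= sigma_decay
    return x

# Toy usage: recover a 5-sparse vector from 40 random measurements.
rng = np.random.default_rng(0)
n, m, k = 100, 40, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
x_hat = smoothed_l0_recover(A, A @ x_true)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```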


Author(s):  
S. Indrapriyadarsini ◽  
Shahrzad Mahboubi ◽  
Hiroshi Ninomiya ◽  
Takeshi Kamio ◽  
Hideki Asai

Gradient-based methods are popularly used in training neural networks and can be broadly categorized into first and second order methods. Second order methods have been shown to have better convergence than first order methods, especially in solving highly nonlinear problems. The BFGS quasi-Newton method is the most commonly studied second order method for neural network training. Recent methods have been shown to speed up the convergence of the BFGS method using Nesterov's accelerated gradient and momentum terms. The SR1 quasi-Newton method, though less commonly used in training neural networks, is known to have interesting properties and to provide good Hessian approximations when used with a trust-region approach. Thus, this paper aims to investigate accelerating the Symmetric Rank-1 (SR1) quasi-Newton method with Nesterov's gradient for training neural networks, and briefly discusses its convergence. The performance of the proposed method is evaluated on function approximation and image classification problems.
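As an illustration of the idea summarized above, the sketch below combines an SR1 update of an inverse-Hessian approximation with a gradient evaluated at a Nesterov-style look-ahead point, applied to a small quadratic problem. It is not the authors' algorithm: the update order, learning rate, momentum coefficient and skip safeguard are assumptions made for the example.

```python
import numpy as np

def nesterov_sr1(grad, w0, lr=0.1, mu=0.9, skip_tol=1e-8, iters=200):
    """Sketch: SR1 quasi-Newton step with the gradient evaluated at a
    Nesterov-style look-ahead point.  Illustrative only; not the exact
    method proposed in the paper."""
    w = np.asarray(w0, dtype=float)
    v = np.zeros_like(w)                    # momentum (velocity) term
    H = np.eye(len(w))                      # SR1 approximation of the inverse Hessian
    g = grad(w)
    for _ in range(iters):
        g_ahead = grad(w + mu * v)          # Nesterov: gradient at the extrapolated point
        v = mu * v - lr * (H @ g_ahead)     # quasi-Newton direction scaled by lr
        w_new = w + v
        g_new = grad(w_new)
        s, t = w_new - w, g_new - g         # step and gradient change
        r = s - H @ t
        # Standard SR1 safeguard: skip the rank-1 correction when its denominator is tiny.
        if abs(r @ t) > skip_tol * np.linalg.norm(r) * np.linalg.norm(t):
            H = H + np.outer(r, r) / (r @ t)
        w, g = w_new, g_new
    return w

# Toy usage: minimize the strictly convex quadratic 0.5 w'Qw - b'w.
Q = np.array([[3.0, 0.5], [0.5, 1.0]])
b = np.array([1.0, -2.0])
w_hat = nesterov_sr1(lambda w: Q @ w - b, np.zeros(2))
print("error:", np.linalg.norm(w_hat - np.linalg.solve(Q, b)))
```

As the abstract notes, SR1 is usually paired with a trust-region strategy because the rank-1 correction need not keep the approximation positive definite; the skip test above is only the usual minimal safeguard.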


Mathematics ◽  
2020 ◽  
Vol 8 (3) ◽  
pp. 452
Author(s):  
Giro Candelario ◽  
Alicia Cordero ◽  
Juan R. Torregrosa

In the recent literature, some fractional one-point Newton-type methods have been proposed in order to find roots of nonlinear equations using fractional derivatives. In this paper, we introduce a new fractional Newton-type method with order of convergence α + 1 and compare it with the existing fractional Newton method of order 2α. Moreover, we also introduce a multipoint fractional Traub-type method with order 2α + 1 and compare its performance with that of its first step. Numerical tests and an analysis of the dependence on the initial estimates are carried out for each case, including a comparison with the classical Newton method (α = 1 in the first step of the class) and the classical Traub scheme (α = 1 in the proposed fractional multipoint method). In this comparison, some cases are found where the classical Newton and Traub methods do not converge and the proposed methods do, among other advantages.
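For concreteness, the sketch below runs a fractional Newton-type iteration of the simple form x_{k+1} = x_k - Γ(α+1) f(x_k)/D^α f(x_k), with the Caputo fractional derivative of a polynomial evaluated term by term; for α = 1 it reduces to the classical Newton method. The paper's exact variants (and their stated convergence orders) may differ from this form, and the test polynomial, the value of α and the tolerance are illustrative assumptions.

```python
import math

def caputo_deriv_poly(coeffs, alpha, x):
    """Caputo derivative (base point 0, 0 < alpha <= 1) of sum_j coeffs[j] * x**j
    at x > 0, using the term-by-term rule
        D^alpha x^j = Gamma(j+1) / Gamma(j+1-alpha) * x**(j-alpha),  j >= 1,
    with the derivative of the constant term equal to zero."""
    return sum(c * math.gamma(j + 1) / math.gamma(j + 1 - alpha) * x**(j - alpha)
               for j, c in enumerate(coeffs) if j >= 1)

def fractional_newton(coeffs, x0, alpha=0.9, tol=1e-10, max_iter=50):
    """Sketch of a fractional Newton-type iteration
        x_{k+1} = x_k - Gamma(alpha+1) * f(x_k) / D^alpha f(x_k);
    for alpha = 1 it is the classical Newton method.  The paper's variants
    may use a different correction (e.g. a 1/alpha exponent)."""
    f = lambda z: sum(c * z**j for j, c in enumerate(coeffs))
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        x -= math.gamma(alpha + 1) * fx / caputo_deriv_poly(coeffs, alpha, x)
    return x

# Toy usage: f(x) = x^3 - 2, real root 2**(1/3), started from x0 = 1.5 > 0
# (the iterate must stay positive so that x**(j - alpha) is real).
coeffs = [-2.0, 0.0, 0.0, 1.0]
for a in (0.9, 1.0):
    print(f"alpha = {a}:", fractional_newton(coeffs, 1.5, alpha=a))
```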


Algorithms ◽  
2021 ◽  
Vol 15 (1) ◽  
pp. 6
Author(s):  
S. Indrapriyadarsini ◽  
Shahrzad Mahboubi ◽  
Hiroshi Ninomiya ◽  
Takeshi Kamio ◽  
Hideki Asai

Gradient-based methods are popularly used in training neural networks and can be broadly categorized into first and second order methods. Second order methods have been shown to have better convergence than first order methods, especially in solving highly nonlinear problems. The BFGS quasi-Newton method is the most commonly studied second order method for neural network training. Recent methods have been shown to speed up the convergence of the BFGS method using Nesterov's accelerated gradient and momentum terms. The SR1 quasi-Newton method, though less commonly used in training neural networks, is known to have interesting properties and to provide good Hessian approximations when used with a trust-region approach. Thus, this paper aims to investigate accelerating the Symmetric Rank-1 (SR1) quasi-Newton method with Nesterov's gradient for training neural networks, and to briefly discuss its convergence. The performance of the proposed method is evaluated on function approximation and image classification problems.


Author(s):  
A. Abdulle ◽  
Y. Bai

A general framework to combine numerical homogenization and reduced-order modelling techniques for partial differential equations (PDEs) with multiple scales is described. Numerical homogenization methods are usually efficient at approximating the effective solution of PDEs with multiple scales. However, classical numerical homogenization techniques require the numerical solution of a large number of so-called microproblems to approximate the effective data at selected grid points of the computational domain. Such computations become particularly expensive for high-dimensional, time-dependent or nonlinear problems. In this paper, we explain how numerical homogenization methods can benefit from reduced-order modelling techniques that allow one to identify offline and online computational procedures. The effective data are computed accurately only at a carefully selected number of grid points (offline stage) and appropriately ‘interpolated’ in the online stage, resulting in an online cost comparable to that of a single-scale solver. The methodology is presented for a class of PDEs with multiple scales, including elliptic, parabolic, wave and nonlinear problems. Numerical examples, including wave propagation in inhomogeneous media and solute transport in unsaturated porous media, illustrate the proposed method.
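To make the offline/online split concrete, the sketch below uses a drastically simplified one-dimensional setting in which the 'microproblem' reduces to a harmonic mean of the oscillatory coefficient over its fast period (the exact homogenized coefficient in 1D): effective coefficients are computed offline at a handful of macro sample points, interpolated online, and fed to a coarse finite-difference solve. This only illustrates the offline/online idea, not the paper's reduced-basis construction; the coefficient a(x, y), the source term and the grid sizes are arbitrary assumptions.

```python
import numpy as np

# ---- Offline stage: effective coefficient at a few selected macro points. ----
def a_fast(x, y):
    # Hypothetical multiscale coefficient: smooth macro part, oscillatory micro part.
    return (1.1 + np.sin(2 * np.pi * x)) * (1.1 + np.cos(2 * np.pi * y))

def a_eff(x, n_quad=200):
    # In 1D the homogenized coefficient is the harmonic mean over the fast period;
    # this stands in for the paper's microproblems / reduced-basis offline stage.
    y = (np.arange(n_quad) + 0.5) / n_quad
    return 1.0 / np.mean(1.0 / a_fast(x, y))

x_samples = np.linspace(0.0, 1.0, 9)              # selected macro grid points
a_samples = np.array([a_eff(x) for x in x_samples])

# ---- Online stage: interpolate the effective data and solve the coarse problem.
# Solve -(a_eff u')' = f on (0, 1) with u(0) = u(1) = 0 by centred finite differences.
N = 200
x = np.linspace(0.0, 1.0, N + 1)
h = x[1] - x[0]
a_mid = np.interp((x[:-1] + x[1:]) / 2, x_samples, a_samples)   # a_eff at cell midpoints
f = np.ones(N - 1)                                              # source term f = 1

main = (a_mid[:-1] + a_mid[1:]) / h**2
off = -a_mid[1:-1] / h**2
A = np.diag(main) + np.diag(off, -1) + np.diag(off, 1)
u = np.zeros(N + 1)
u[1:-1] = np.linalg.solve(A, f)
print("max of coarse homogenized solution:", u.max())
```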

