Anderson acceleration based on the $H^{-s}$ Sobolev norm for contractive and noncontractive fixed-point operators

2022 ◽  
Vol 403 ◽  
pp. 113844
Author(s):  
Yunan Yang ◽  
Alex Townsend ◽  
Daniel Appelö
2011 ◽  
Vol 49 (4) ◽  
pp. 1715-1735 ◽  
Author(s):  
Homer F. Walker ◽  
Peng Ni

Author(s):  
Aizuddin Mohamed ◽  
Razi Abdul-Rahman

An implementation of a fully automatic adaptive finite element method (AFEM) for the computation of nonlinear thermoelectric problems in three dimensions is presented. Adaptivity of the nonlinear solvers is based on well-established hp-adaptivity, where the mesh refinement and the polynomial order of the elements are methodically controlled to reduce the discretization errors of the coupled field variables, temperature and electric potential. A single mesh is used for both fields, and the nonlinear coupling of temperature and electric potential is accounted for in the computation of the a posteriori error estimate, where the residuals are computed element-wise. Mesh refinements are implemented for tetrahedral meshes such that conformity with neighboring elements is preserved. Multiple nonlinear solution schemes are assessed, including variations of the fixed-point method with Anderson acceleration algorithms; the Barzilai-Borwein algorithm for optimizing the nonlinear solution steps is also assessed. Promising results have been observed: all the nonlinear methods achieve the same accuracy and tend toward convergence as more elements are refined. Anderson acceleration is the most efficient of the nonlinear solvers studied, with a total computing time less than half that of the more conventional fixed-point iteration.
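The abstract does not spell out the Barzilai-Borwein update it assesses, but the idea can be illustrated generically: the step length for the next update is inferred from the last two iterates and gradients (or residuals). Below is a minimal Python sketch, not the authors' FEM code; the function name, the bootstrap step, and the small quadratic test problem are illustrative assumptions.

```python
import numpy as np

def bb_gradient_descent(grad, x0, n_iter=50, alpha0=1e-3, tol=1e-12):
    """Gradient descent with Barzilai-Borwein (BB1) step lengths.

    Generic sketch: `grad` returns the gradient (or nonlinear residual) at a
    point; it is not tied to the thermoelectric FEM solver described above."""
    x_prev = np.asarray(x0, dtype=float)
    g_prev = grad(x_prev)
    x = x_prev - alpha0 * g_prev            # bootstrap with a small fixed step
    for _ in range(n_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:         # converged
            break
        s, y = x - x_prev, g - g_prev       # changes in iterates and gradients
        alpha = (s @ s) / (s @ y)           # BB1 step length
        x_prev, g_prev = x, g
        x = x - alpha * g
    return x

# Hypothetical usage: minimize 0.5 * x^T A x - b^T x, whose gradient is A x - b.
A = np.diag([1.0, 10.0, 100.0])
b = np.ones(3)
x_min = bb_gradient_descent(lambda x: A @ x - b, np.zeros(3))
```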


Geophysics ◽  
2021 ◽  
Vol 86 (1) ◽  
pp. R99-R108
Author(s):  
Yunan Yang

State-of-the-art seismic imaging techniques treat inversion tasks such as full-waveform inversion (FWI) and least-squares reverse time migration (LSRTM) as partial differential equation-constrained optimization problems. Because of their large scale, gradient-based optimization algorithms are preferred in practice for updating the model iteratively. Higher-order methods converge in fewer iterations but often require higher computational cost, more line-search steps, and larger memory storage, so a balance among these aspects has to be struck. We evaluate Anderson acceleration (AA), a popular strategy for speeding up the convergence of fixed-point iterations, as a way to accelerate the steepest-descent algorithm, which we treat as a fixed-point iteration. Independent of the dimensionality of the unknown parameters, the extra computational cost of the method reduces to an extremely low-dimensional least-squares problem, and it can be reduced further by a low-rank update. We establish the theoretical connections and differences between AA and other well-known optimization methods such as L-BFGS and the restarted generalized minimal residual method, and compare their computational cost and memory requirements. Numerical examples of FWI and LSRTM applied to the Marmousi benchmark demonstrate the acceleration effect of AA. Compared with the steepest-descent method, AA achieves faster convergence and provides results competitive with some quasi-Newton methods, making it an attractive optimization strategy for seismic inversion.
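The central device of the paper, viewing steepest descent as the fixed-point map G(x) = x - step * grad(x) and wrapping it in Anderson acceleration, can be sketched generically. The following is a minimal Python sketch of AA in the Walker-Ni difference form, not the paper's FWI/LSRTM implementation; the memory size, step length, and quadratic test problem are illustrative assumptions.

```python
import numpy as np

def anderson_fixed_point(G, x0, m=5, n_iter=100, tol=1e-10):
    """Anderson acceleration of the fixed-point map x -> G(x) (Walker-Ni difference form)."""
    x = np.asarray(x0, dtype=float)
    g_hist, f_hist = [], []                 # stored G(x_k) and residuals f_k = G(x_k) - x_k
    for _ in range(n_iter):
        g = G(x)
        f = g - x
        if np.linalg.norm(f) < tol:
            break
        g_hist.append(g); f_hist.append(f)
        if len(g_hist) > m + 1:             # keep at most m + 1 past evaluations
            g_hist.pop(0); f_hist.pop(0)
        if len(g_hist) == 1:
            x = g                           # plain fixed-point step until the history builds up
            continue
        # Small least-squares problem with at most m columns, independent of the model size.
        dF = np.column_stack([f_hist[i + 1] - f_hist[i] for i in range(len(f_hist) - 1)])
        dG = np.column_stack([g_hist[i + 1] - g_hist[i] for i in range(len(g_hist) - 1)])
        gamma, *_ = np.linalg.lstsq(dF, f, rcond=None)
        x = g - dG @ gamma                  # Anderson-mixed iterate
    return x

# Steepest descent viewed as a fixed-point map: G(x) = x - step * grad(x).
A = np.diag(np.linspace(1.0, 50.0, 20))    # toy quadratic objective 0.5 x^T A x - b^T x
b = np.ones(20)
step = 1.0 / 50.0                           # 1 / L for this toy problem
x_aa = anderson_fixed_point(lambda x: x - step * (A @ x - b), np.zeros(20), m=5)
```

The least-squares problem has at most m columns regardless of the number of unknowns, which is why the per-iteration overhead stays low-dimensional, as the abstract notes.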


Acta Numerica ◽  
2018 ◽  
Vol 27 ◽  
pp. 207-287 ◽  
Author(s):  
C. T. Kelley

This article is about numerical methods for the solution of nonlinear equations. We consider both the fixed-point form $\mathbf{x}=\mathbf{G}(\mathbf{x})$ and the equations form $\mathbf{F}(\mathbf{x})=0$ and explain why both versions are necessary to understand the solvers. We include the classical methods to make the presentation complete and discuss less familiar topics such as Anderson acceleration, semi-smooth Newton’s method, and pseudo-arclength and pseudo-transient continuation methods.
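To make the two forms concrete, a toy scalar problem can be written either as x = G(x) or as F(x) = 0, with Picard iteration natural for the first and Newton's method for the second. The cosine example below is an illustrative assumption, not taken from the article.

```python
import numpy as np

# One scalar problem in both forms: x = G(x) = cos(x)  <=>  F(x) = x - cos(x) = 0.
G = np.cos
F = lambda x: x - np.cos(x)
dF = lambda x: 1.0 + np.sin(x)   # F'(x), needed by Newton's method

x_fp = 1.0
for _ in range(50):              # Picard (fixed-point) iteration on x = G(x): linear convergence
    x_fp = G(x_fp)

x_nt = 1.0
for _ in range(6):               # Newton's method on F(x) = 0: quadratic convergence
    x_nt = x_nt - F(x_nt) / dF(x_nt)

# Both approach the same root x* ~ 0.739085; the fixed-point form is what
# accelerators such as Anderson acceleration operate on.
```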


Author(s):  
Mattia Filippini ◽  
Piergiorgio Alotto ◽  
Alessandro Giust

Purpose: The purpose of this paper is to implement Anderson acceleration for different formulations of electromagnetic nonlinear problems and to analyze the method's efficiency and strategies for obtaining fast convergence.
Design/methodology/approach: The paper is structured as follows. The general class of fixed-point nonlinear problems is presented first, highlighting the requirements for convergence. The acceleration method is then described along with its pseudo-code. Finally, the algorithm is tested on different formulations (finite element, finite element/boundary element) and material properties (nonlinear iron, hysteresis models for laminates). The results in terms of convergence and iterations required are compared with the non-accelerated case.
Findings: Anderson acceleration provides accelerations of up to 75 per cent in the test cases analyzed. For the hysteresis test case, a restart technique, in analogy with the restarted GMRES technique, is shown to be helpful.
Originality/value: The acceleration suggested in this paper is rarely adopted for the electromagnetic case (it is normally adopted in electronic simulation). The procedure is general and works with different magneto-quasistatic formulations, as shown in the paper. The obtained accelerations allow the number of iterations required to be reduced by up to 75 per cent in the benchmark cases. The method is also a good candidate for the hysteresis case, where fixed-point schemes are normally preferred to Newton ones.
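The restart strategy mentioned under Findings can be sketched as a thin wrapper that periodically discards the Anderson history, in analogy with restarted GMRES. The sketch below reuses the anderson_fixed_point routine from the sketch following the Geophysics abstract above; the cycle length and other parameter names are illustrative assumptions, not the paper's finite-element or boundary-element formulations.

```python
import numpy as np

def restarted_anderson(G, x0, m=4, restart=20, n_cycles=10, tol=1e-10):
    """Anderson acceleration with periodic restarts: run AA for `restart`
    iterations, discard the stored history, and continue from the current
    iterate, in analogy with restarted GMRES."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_cycles):
        # anderson_fixed_point is the sketch given after the Geophysics abstract;
        # each fresh call starts with an empty history, which is the restart.
        x = anderson_fixed_point(G, x, m=m, n_iter=restart, tol=tol)
        if np.linalg.norm(G(x) - x) < tol:   # converged during this cycle
            break
    return x
```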


Symmetry ◽  
2021 ◽  
Vol 13 (4) ◽  
pp. 636
Author(s):  
Xia Tang ◽  
Chun Wen ◽  
Xian-Ming Gu ◽  
Zhao-Li Shen

Anderson(m0) extrapolation, an accelerator for fixed-point iterations, stores m0+1 prior evaluations of the fixed-point map and computes a linear combination of those evaluations as the new iterate. The computational cost of Anderson(m0) acceleration grows as the parameter m0 increases, so a small m0 is the common choice in practice. In this paper, with the aim of improving the computation of PageRank problems, a new method is developed by applying Anderson(1) extrapolation at periodic intervals within the Arnoldi-Inout method. The new method is called the AIOA method. Convergence analysis of the AIOA method is discussed in detail. Numerical results on several PageRank problems are presented to illustrate the effectiveness of the proposed method.
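With m0 = 1 the least-squares problem in the extrapolation is one-dimensional and has a closed-form solution, which is what makes Anderson(1) cheap. The sketch below applies Anderson(1) mixing to the plain PageRank power iteration only; it is not the AIOA method (there is no Arnoldi-Inout component), and the damping factor, tolerance, and the tiny example matrix are illustrative assumptions.

```python
import numpy as np

def pagerank_anderson1(P, v, damping=0.85, n_iter=200, tol=1e-10):
    """PageRank power iteration with Anderson(1) mixing at every step.

    P is a column-stochastic transition matrix and v a teleportation vector;
    the fixed-point map is G(x) = damping * P x + (1 - damping) * v."""
    G = lambda x: damping * (P @ x) + (1.0 - damping) * v
    x = np.asarray(v, dtype=float).copy()
    g_prev = f_prev = None
    for _ in range(n_iter):
        g = G(x)
        f = g - x                                # fixed-point residual
        if np.linalg.norm(f, 1) < tol:
            break
        if g_prev is None:
            x = g                                # plain power-iteration step
        else:
            df = f - f_prev
            denom = df @ df
            gamma = (df @ f) / denom if denom > 0.0 else 0.0   # closed-form 1-D least squares
            x = g - gamma * (g - g_prev)         # Anderson(1) mixing of the last two evaluations
        g_prev, f_prev = g, f
    return x / x.sum()

# Tiny hypothetical example: a 3-node graph with a column-stochastic matrix.
P = np.array([[0.0, 0.5, 1.0],
              [0.5, 0.0, 0.0],
              [0.5, 0.5, 0.0]])
v = np.ones(3) / 3.0
scores = pagerank_anderson1(P, v)
```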


2020 ◽  
Vol 30 (4) ◽  
pp. 3170-3197
Author(s):  
Junzi Zhang ◽  
Brendan O'Donoghue ◽  
Stephen Boyd
