Global convergence: Recently Published Documents

TOTAL DOCUMENTS: 1115 (five years: 179)
H-INDEX: 52 (five years: 6)

2022 · Vol. 6(1) · pp. 46
Author(s): Fouad Othman Mallawi, Ramandeep Behl, Prashanth Maroju

Few papers address the global convergence of iterative methods in the setting of Banach spaces. The main purpose of this paper is to discuss the global convergence of a third-order iterative method. The convergence analysis of this method is carried out under the assumption that the first-order Fréchet derivative satisfies a Hölder continuity condition. Finally, we consider some integral equations and a boundary value problem (BVP) in order to illustrate the applicability of the theoretical results.
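The abstract does not reproduce the method itself. Purely as an illustration of a third-order iteration, the sketch below applies the classical two-step Newton-type (Potra–Pták) scheme to a scalar equation; the paper's actual method and its Banach-space setting are not shown here.

```python
# Illustrative stand-in of the same convergence order (third order);
# NOT the paper's scheme, which the abstract does not specify.
def third_order_step(f, df, x):
    """One step: Newton predictor, then a corrector that reuses f'(x)."""
    y = x - f(x) / df(x)        # Newton predictor
    return y - f(y) / df(x)     # corrector (third-order overall)

f = lambda t: t**3 - 2.0        # solve t^3 = 2
df = lambda t: 3.0 * t**2

x = 1.0
for _ in range(6):
    x = third_order_step(f, df, x)
# x ≈ 1.2599210498948732, the cube root of 2
```

Reusing the derivative from the predictor step is what lifts the order from two to three at the cost of one extra function evaluation per iteration.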


2022 · Vol. 2022(1)
Author(s): Zabidin Salleh, Adel Almarashi, Ahmad Alhawarat

Abstract: The conjugate gradient method can be applied in many fields, such as neural networks, image restoration, machine learning, and deep learning. The Polak–Ribière–Polyak (PRP) and Hestenes–Stiefel (HS) conjugate gradient methods are considered among the most efficient methods for solving nonlinear optimization problems. However, neither method satisfies the descent property or the global convergence property for general nonlinear functions. In this paper, we present two new modifications of the PRP method with restart conditions. The proposed conjugate gradient methods satisfy the global convergence property and the descent property for general nonlinear functions. The numerical results show that the new modifications are more efficient than recent CG methods in terms of number of iterations, number of function evaluations, number of gradient evaluations, and CPU time.
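The two modified PRP schemes themselves are not given in the abstract. As an illustration of this class of methods, the sketch below shows a PRP conjugate gradient iteration with a simple nonnegativity restart (the well-known PRP+ truncation) and a steepest-descent fallback; the objective, line search, and restart rule here are assumptions, not the authors' modifications.

```python
import numpy as np

def prp_cg(f, grad, x0, max_iter=500, tol=1e-8):
    """Illustrative PRP conjugate gradient with a PRP+ restart and a
    steepest-descent fallback (not the paper's specific conditions)."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        alpha = 1.0                           # Armijo backtracking line search
        while f(x + alpha * d) > f(x) + 1e-4 * alpha * (g @ d):
            alpha *= 0.5
        x_new = x + alpha * d
        g_new = grad(x_new)
        beta = max(0.0, g_new @ (g_new - g) / (g @ g))  # PRP+: clip negative beta
        d = -g_new + beta * d
        if g_new @ d >= 0:                    # restart if not a descent direction
            d = -g_new
        x, g = x_new, g_new
    return x

# smooth strictly convex test function with minimizer (1, -2)
target = np.array([1.0, -2.0])
f = lambda z: np.sum(np.log(np.cosh(z - target)))
grad = lambda z: np.tanh(z - target)
x_min = prp_cg(f, grad, np.zeros(2))
```

Clipping beta at zero and falling back to the negative gradient are the standard devices that recover the descent property PRP lacks for general nonlinear functions.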


Author(s): Tobias Lehmann, Max-K. von Renesse, Alexander Sambale, André Uschmajew

Abstract: We derive an a priori parameter range for overrelaxation of the Sinkhorn algorithm which guarantees global convergence and strictly faster asymptotic local convergence. Guided by the spectral analysis of the linearized problem, we pursue a zero-cost procedure to choose a near-optimal relaxation parameter.
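As a sketch of the iteration being analyzed, the following implements Sinkhorn with multiplicative overrelaxation on an entropic optimal transport problem; omega = 1 recovers plain Sinkhorn. The paper's contribution, an a priori admissible range and a near-optimal choice of omega, is not reproduced here: omega is just a hand-picked parameter in this sketch.

```python
import numpy as np

def sinkhorn_overrelaxed(C, a, b, eps=0.5, omega=1.2, iters=1000):
    """Sinkhorn iterations with multiplicative overrelaxation.
    The admissible range for omega is problem-dependent (the subject
    of the paper) and is not computed here."""
    K = np.exp(-C / eps)                      # Gibbs kernel
    u, v = np.ones_like(a), np.ones_like(b)
    for _ in range(iters):
        u = u ** (1 - omega) * (a / (K @ v)) ** omega
        v = v ** (1 - omega) * (b / (K.T @ u)) ** omega
    return u[:, None] * K * v[None, :]        # entropic transport plan

# toy problem: uniform marginals, random cost matrix
rng = np.random.default_rng(0)
C = rng.random((5, 5))
a = np.full(5, 0.2)
b = np.full(5, 0.2)
P = sinkhorn_overrelaxed(C, a, b)
```

At convergence the plan's row and column sums match the prescribed marginals a and b; the exponent omega interpolates between the old scaling vector and the plain Sinkhorn update.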


2021
Author(s): Shicong Cen, Chen Cheng, Yuxin Chen, Yuting Wei, Yuejie Chi

Preconditioning and Regularization Enable Faster Reinforcement Learning

Natural policy gradient (NPG) methods, in conjunction with entropy regularization to encourage exploration, are among the most popular policy optimization algorithms in contemporary reinforcement learning. Despite the empirical success, the theoretical underpinnings for NPG methods remain severely limited. In “Fast Global Convergence of Natural Policy Gradient Methods with Entropy Regularization”, Cen, Cheng, Chen, Wei, and Chi develop nonasymptotic convergence guarantees for entropy-regularized NPG methods under softmax parameterization, focusing on tabular discounted Markov decision processes. Assuming access to exact policy evaluation, the authors demonstrate that the algorithm converges linearly at an astonishing rate that is independent of the dimension of the state-action space. Moreover, the algorithm is provably stable vis-à-vis inexactness of policy evaluation. Accommodating a wide range of learning rates, this convergence result highlights the role of preconditioning and regularization in enabling fast convergence.
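A minimal tabular sketch of the update analyzed here, assuming exact soft policy evaluation on a small random MDP. The step size eta = (1 - gamma)/tau reduces the multiplicative entropy-regularized NPG update to soft policy iteration; the toy MDP and all parameter values are illustrative, not from the paper.

```python
import numpy as np

S, A, gamma, tau = 4, 3, 0.9, 0.1           # illustrative toy sizes/parameters
rng = np.random.default_rng(1)
P = rng.random((S, A, S))
P /= P.sum(axis=-1, keepdims=True)          # random transition kernel
r = rng.random((S, A))                      # random rewards in [0, 1]

def soft_q(pi, iters=200):
    """Exact entropy-regularized policy evaluation by fixed-point iteration."""
    Q = np.zeros((S, A))
    for _ in range(iters):
        V = (pi * (Q - tau * np.log(pi))).sum(axis=1)   # soft state values
        Q = r + gamma * (P @ V)
    return Q

pi = np.full((S, A), 1.0 / A)               # start from the uniform policy
eta = (1 - gamma) / tau                     # this step size gives soft policy iteration
for _ in range(200):
    Q = soft_q(pi)
    # multiplicative NPG update under softmax parameterization
    logits = (1 - eta * tau / (1 - gamma)) * np.log(pi) + (eta / (1 - gamma)) * Q
    pi = np.exp(logits - logits.max(axis=1, keepdims=True))
    pi /= pi.sum(axis=1, keepdims=True)

# at convergence the policy matches softmax(Q/tau)
Q_star = soft_q(pi)
pi_star = np.exp((Q_star - Q_star.max(axis=1, keepdims=True)) / tau)
pi_star /= pi_star.sum(axis=1, keepdims=True)
```

The geometric mixing of the old policy with exp(Q) is the "preconditioned" multiplicative form; smaller eta keeps more of the old policy, while the choice above discards it entirely each step.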


2021 · Vol. 2021(1)
Author(s): Abdulkarim Hassan Ibrahim, Poom Kumam, Auwal Bala Abubakar, Jamilu Abubakar

Abstract: In recent times, various algorithms have incorporated an inertial extrapolation step to speed up the convergence of the sequences they generate. As far as we know, very few results exist on inertial derivative-free projection methods for solving convex constrained monotone nonlinear equations. In this article, we study the convergence of a derivative-free iterative algorithm (Liu and Feng in Numer. Algorithms 82(1):245–262, 2019) equipped with an inertial extrapolation step for solving large-scale convex constrained monotone nonlinear equations. The proposed method generates a sufficient descent direction at each iteration. Under some mild assumptions, the global convergence of the sequence generated by the proposed method is established. Furthermore, some experimental results are presented to support the theoretical analysis of the proposed method.
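The article's algorithm is not reproduced in detail in the abstract. The sketch below shows the generic shape of an inertial derivative-free projection method for constrained monotone equations: an inertial extrapolation, a derivative-free line search, and a Solodov–Svaiter-style hyperplane projection step. The test problem, constraint set (nonnegative orthant), and parameter values are assumptions for illustration.

```python
import numpy as np

def inertial_dfpm(F, x0, theta=0.3, sigma=1e-4, rho=0.5, tol=1e-8, max_iter=1000):
    """Generic inertial derivative-free projection method for F(x) = 0
    with x constrained to the nonnegative orthant (illustrative only)."""
    proj = lambda z: np.maximum(z, 0.0)       # projection onto the feasible set
    x_prev = x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        w = x + theta * (x - x_prev)          # inertial extrapolation step
        Fw = F(w)
        if np.linalg.norm(Fw) < tol:
            return w
        d = -Fw                               # derivative-free search direction
        alpha = 1.0                           # backtracking line search
        while -F(w + alpha * d) @ d < sigma * alpha * (d @ d):
            alpha *= rho
        z = w + alpha * d
        Fz = F(z)
        if np.linalg.norm(Fz) < tol:
            return z
        zeta = Fz @ (w - z) / (Fz @ Fz)       # hyperplane projection step size
        x_prev, x = x, proj(w - zeta * Fz)
    return x

# toy monotone equation: F(x) = exp(x) - 1 (componentwise), solution x = 0
sol = inertial_dfpm(lambda x: np.exp(x) - 1.0, np.array([1.0, 2.0, 0.5]))
```

The projection step uses only function values, never a Jacobian, which is what makes this family attractive for large-scale problems.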


Author(s): José Ignacio Nazif-Munoz, Amélie Quesnel-Vallée, Axel van den Berg

Abstract: Global convergence of public policies has been regarded as a defining feature of the late twentieth century. This study explores the generalizability of this thesis for three road safety measures: (i) road safety agencies; (ii) child restraint laws; and (iii) mandatory use of daytime running lights. It analyzes cross-national longitudinal data using survival analysis for the years 1964–2015 in 181 countries. The first main finding is that only child restraint laws have globally converged; in contrast, the other two policies exhibit a fractured global convergence process, likely the result of competing international and national forces. This finding may reflect the lack of necessary conditions, at the regional and national levels, required to accelerate the spread of policies globally, adding further nuance to the global convergence thesis. A second finding is that mechanisms of policy adoption such as imitation/learning and competition, rather than coercion, more consistently explain global and regional convergence outcomes in the road safety realm. This finding reinforces the idea of specific elective affinities in explaining why the diffusion of policies may or may not result in convergence. Lastly, by recognizing fractured convergence processes, these results call for revisiting the global convergence thesis and reintegrating regional analyses more consistently into policy diffusion and convergence studies.


Author(s): Amina Boumediene, Tahar Bechouat, Rachid Benzine, Ghania Hadji

The nonlinear conjugate gradient method (CGM) is a very effective way of solving large-scale optimization problems. Zhang et al. proposed a new CG coefficient, defined by [Formula: see text]. They proved the sufficient descent condition and global convergence for nonconvex minimization under the strong Wolfe line search. In this paper, we prove that this CG coefficient possesses the sufficient descent condition and global convergence property under the exact line search.

