An adaptive discretization method solving semi-infinite optimization problems with quadratic rate of convergence

Optimization ◽  
2020 ◽  
pp. 1-29
Author(s):  
Tobias Seidel ◽  
Karl-Heinz Küfer
2010 ◽  
Vol 18 (2) ◽  
pp. 199-228 ◽  
Author(s):  
Ying-ping Chen ◽  
Chao-Hong Chen

An adaptive discretization method, called split-on-demand (SoD), enables estimation of distribution algorithms (EDAs) for discrete variables to solve continuous optimization problems. SoD randomly splits a continuous interval if the number of search points within the interval exceeds a threshold, which is decreased at every iteration. After the split operation, the nonempty intervals are assigned integer codes, and the search points are discretized accordingly. As an example of using SoD with EDAs, the integration of SoD and the extended compact genetic algorithm (ECGA) is presented and numerically examined. In this integration, we adopt a local search mechanism as an optional component of our back-end optimization engine. As a result, the proposed framework can be considered a memetic algorithm, and SoD can potentially be applied to other memetic algorithms. The numerical experiments consist of two parts: (1) a set of benchmark functions, on which ECGA with SoD is compared to ECGA with two well-known discretization methods, the fixed-height histogram (FHH) and the fixed-width histogram (FWH); (2) a real-world application, the economic dispatch problem, on which ECGA with SoD is compared to other methods. The experimental results indicate that SoD is the better discretization method to use with ECGA. Moreover, ECGA with SoD works quite well on the economic dispatch problem and delivers better solutions than the best known results obtained by other existing methods.
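To make the mechanism concrete, the following is a minimal sketch of a single-variable SoD pass, assuming a simple rescan-after-split loop; the function name, the details of the random split rule, and the left-to-right coding order are illustrative assumptions, not the authors' exact procedure:

    import bisect
    import random

    def split_on_demand(points, lo, hi, threshold):
        """One SoD pass for a single continuous variable (sketch)."""
        bounds = [lo, hi]                   # sorted interval boundaries
        changed = True
        while changed:
            changed = False
            for i in range(len(bounds) - 1):
                a, b = bounds[i], bounds[i + 1]
                if sum(a <= p < b for p in points) > threshold and b - a > 1e-9:
                    bisect.insort(bounds, random.uniform(a, b))
                    changed = True          # boundaries changed: rescan
                    break
        # locate each point, clamping p == hi into the last interval
        idx = [min(bisect.bisect_right(bounds, p) - 1, len(bounds) - 2)
               for p in points]
        # assign consecutive integer codes to the non-empty intervals
        code_of = {j: c for c, j in enumerate(sorted(set(idx)))}
        return [code_of[j] for j in idx], bounds

In an EDA loop, the threshold would be decreased at every iteration, so crowded regions of the search space are adaptively refined into finer intervals.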


2005 ◽  
Vol 15 (2) ◽  
pp. 301-306 ◽  
Author(s):  
Nada Djuranovic-Milicic

In this paper, an algorithm for LC1 unconstrained optimization problems that uses the second-order Dini upper directional derivative is considered. The purpose of the paper is to establish general algorithm hypotheses under which convergence to optimal points occurs. A convergence proof is given, as well as an estimate of the rate of convergence.
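For reference, the standard definition of the second-order Dini upper directional derivative of an LC1 function f (continuously differentiable with a locally Lipschitz gradient) at a point x in a direction d is (the paper's notation may differ):

    $$ f''_{D}(x; d) \;=\; \limsup_{t \downarrow 0} \frac{\big(\nabla f(x + t d) - \nabla f(x)\big)^{\top} d}{t}. $$

Because the gradient is locally Lipschitz, this difference quotient is bounded near x, so the limsup is finite; it plays the role of the Hessian-based second-order term for functions that need not be twice differentiable.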


2021 ◽  
Vol 27 (11) ◽  
pp. 563-574
Author(s):  
V. V. Kureychik ◽ 
S. I. Rodzin

Computational models of bio-heuristics based on physical and cognitive processes are presented. Such characteristics of bio-heuristics (including evolutionary and swarm bio-heuristics) as the rate of convergence, computational complexity, required amount of memory, configuration of the algorithm parameters, and difficulty of software implementation are compared. The balance between the convergence rate of bio-heuristics and the diversification of the search space for solutions to optimization problems is estimated. Experimental results are presented for the problem of placing Peco graphs in a lattice with the minimum total length of the graph edges.


2018 ◽  
Vol 39 (2) ◽  
pp. 545-578 ◽  
Author(s):  
Raghu Bollapragada ◽  
Richard H Byrd ◽  
Jorge Nocedal

The paper studies the solution of stochastic optimization problems in which approximations to the gradient and Hessian are obtained through subsampling. We first consider Newton-like methods that employ these approximations and discuss how to coordinate the accuracy in the gradient and Hessian to yield a superlinear rate of convergence in expectation. The second part of the paper analyzes an inexact Newton method that solves linear systems approximately using the conjugate gradient (CG) method, and that samples the Hessian and not the gradient (the gradient is assumed to be exact). We provide a complexity analysis for this method based on the properties of the CG iteration and the quality of the Hessian approximation, and compare it with a method that employs a stochastic gradient iteration instead of the CG method. We report preliminary numerical results that illustrate the performance of inexact subsampled Newton methods on machine learning applications based on logistic regression.
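As an illustration of the second method analyzed, here is a minimal numpy sketch of one inexact subsampled Newton-CG step: the gradient is exact, the Hessian is estimated on a random subsample, and the Newton system is solved approximately by conjugate gradients. The callable names, the residual test, and the fixed unit step are illustrative assumptions, not the authors' exact algorithm:

    import numpy as np

    def subsampled_newton_cg_step(x, grad_full, hess_batch, n_data,
                                  sample_size, cg_tol=0.5, cg_iters=20,
                                  rng=np.random.default_rng()):
        """One inexact Newton step with a subsampled Hessian (sketch)."""
        g = grad_full(x)                        # exact gradient
        batch = rng.choice(n_data, size=sample_size, replace=False)
        H = hess_batch(x, batch)                # Hessian over the subsample
        # CG on H p = -g, stopped at a relative residual of cg_tol
        p = np.zeros_like(x)
        r = -g.copy()                           # residual of H p = -g at p = 0
        d = r.copy()
        for _ in range(cg_iters):
            if np.linalg.norm(r) <= cg_tol * np.linalg.norm(g):
                break
            Hd = H @ d
            alpha = (r @ r) / (d @ Hd)
            p = p + alpha * d
            r_new = r - alpha * Hd
            d = r_new + ((r_new @ r_new) / (r @ r)) * d
            r = r_new
        return x + p                            # unit step; a line search may be added

In practice the product H @ d would be computed as a Hessian-vector product over the subsample, so the Hessian estimate never has to be formed explicitly.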


Author(s):  
Peter Bamidele Shola

In this paper, a population-based meta-heuristic algorithm for optimization problems in a continuous space is presented. The algorithm, here called cheapest-shop seeker, is modeled after a group of shoppers seeking to identify the cheapest shop (among many available) for shopping. The algorithm was tested on many benchmark functions, and the results were compared with those from some other methods. In spite of its simplicity, the algorithm appears to have a better success rate of hitting the global optimum point of a function and a better rate of convergence (in terms of the number of iterations required to reach the optimum value) for some functions.
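The abstract does not spell out the update rule, so the following is only a generic sketch of a population-based search loop of the kind described, with a hypothetical drift-toward-the-best-found-position step standing in for the actual cheapest-shop-seeker update:

    import numpy as np

    def population_search(f, dim, n_shoppers=30, iters=200, step=0.3,
                          bounds=(-10.0, 10.0), seed=0):
        """Generic population-based minimizer (illustrative stand-in)."""
        rng = np.random.default_rng(seed)
        lo, hi = bounds
        pop = rng.uniform(lo, hi, size=(n_shoppers, dim))
        best = min(pop, key=f).copy()           # 'cheapest shop' found so far
        for _ in range(iters):
            pull = rng.uniform(0, 1, (n_shoppers, 1)) * (best - pop)
            noise = rng.normal(scale=step, size=pop.shape)
            pop = np.clip(pop + pull + noise, lo, hi)
            cand = min(pop, key=f)
            if f(cand) < f(best):
                best = cand.copy()
        return best, f(best)

For example, population_search(lambda x: float(np.sum(x**2)), dim=5) should return a point near the origin; the real algorithm's shopper dynamics would replace the pull-plus-noise move above.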


2008 ◽  
Vol 18 (1) ◽  
pp. 47-52 ◽  
Author(s):  
Nada Djuranovic-Milicic

In this paper, a multi-step algorithm for LC1 unconstrained optimization problems is presented. The method uses iterative information from previous steps together with a curve search to generate new iterates. A convergence proof is given, as well as an estimate of the rate of convergence.
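As a sketch of the general shape of such an iteration (an assumption about the typical form of a multi-step curve search, not necessarily the paper's exact scheme), the next iterate is generated along a curve rather than a ray:

    $$ x_{k+1} \;=\; x_k + \alpha_k d_k + \alpha_k^{2} e_k, \qquad \alpha_k > 0, $$

where the directions d_k and e_k are built from the current and previous iterates and gradients, and the step size \alpha_k is chosen along the curve to guarantee sufficient decrease.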

