Continuous Optimization Problems
Recently Published Documents

TOTAL DOCUMENTS: 196 (Five Years: 57)
H-INDEX: 27 (Five Years: 6)

2022 · Vol 11 (1) · pp. 55-72
Author(s): Anima Naik, Pradeep Kumar Chokkalingam

In this paper, we propose a binary version of the Social Group Optimization algorithm (BSGO) for solving the 0-1 knapsack problem. The standard Social Group Optimization (SGO) algorithm is designed for continuous optimization problems, so a transformation function is used to convert the continuous values generated by SGO into binary ones. Experiments are carried out on both low-dimensional and high-dimensional knapsack instances, and the results obtained by BSGO are compared with those of other binary optimization algorithms. The experimental results reveal the superiority of BSGO in solution quality over the compared algorithms and show that it is among the best-performing algorithms, especially in high-dimensional cases.
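The abstract does not reproduce the transformation function itself; a common choice in binary metaheuristics is a sigmoid transfer followed by stochastic thresholding. A minimal Python sketch of that idea on a made-up knapsack instance (the function names and the instance are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(42)

def sigmoid_transfer(x):
    """Map continuous values to probabilities, then threshold stochastically."""
    prob = 1.0 / (1.0 + np.exp(-x))
    return (rng.random(x.shape) < prob).astype(int)

def knapsack_value(bits, values, weights, capacity):
    """Total value of a selection, or -inf if it violates the capacity."""
    return np.dot(bits, values) if np.dot(bits, weights) <= capacity else -np.inf

# Illustrative 0-1 knapsack instance (not from the paper).
values = np.array([60, 100, 120, 30])
weights = np.array([10, 20, 30, 5])
capacity = 50

continuous_position = rng.normal(size=values.shape)  # e.g., one SGO agent
bits = sigmoid_transfer(continuous_position)
print(bits, knapsack_value(bits, values, weights, capacity))
```

In a full BSGO run, each agent's continuous position would presumably be re-binarized this way before every fitness evaluation, so the continuous SGO update rules can be reused unchanged.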


2021 · Vol 8 (4) · pp. 041418
Author(s): Blake A. Wilson, Zhaxylyk A. Kudyshev, Alexander V. Kildishev, Sabre Kais, Vladimir M. Shalaev, ...

2021 · Vol 11 (21) · pp. 9828
Author(s): Vincent A. Cicirello

The runtime behavior of Simulated Annealing (SA), similar to other metaheuristics, is controlled by hyperparameters. For SA, hyperparameters affect how “temperature” varies over time, and “temperature” in turn affects SA’s decisions on whether or not to transition to neighboring states. It is typically necessary to tune the hyperparameters ahead of time. However, there are adaptive annealing schedules that use search feedback to evolve the “temperature” during the search. A classic and generally effective adaptive annealing schedule is the Modified Lam. Although effective, the Modified Lam can be sensitive to the scale of the cost function, and is sometimes slow to converge to its target behavior. In this paper, we present a novel variation of the Modified Lam that we call Self-Tuning Lam, which uses early search feedback to auto-adjust its self-adaptive behavior. Using a variety of discrete and continuous optimization problems, we demonstrate the ability of the Self-Tuning Lam to nearly instantaneously converge to its target behavior independent of the scale of the cost function, as well as its run length. Our implementation is integrated into Chips-n-Salsa, an open-source Java library for parallel and self-adaptive local search.
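The Self-Tuning Lam itself ships in Chips-n-Salsa; as background, here is a hedged Python sketch of the classic Modified Lam behavior it builds on, using the target acceptance-rate trajectory and update constants that commonly appear in descriptions of the schedule (the paper's self-tuning variant differs and should be consulted for specifics):

```python
import math
import random

def modified_lam_target(fraction):
    """Target acceptance rate as a function of elapsed run fraction:
    decays from ~1.0 to 0.44, holds at 0.44, then decays toward 0."""
    if fraction < 0.15:
        return 0.44 + 0.56 * (560.0 ** (-fraction / 0.15))
    elif fraction < 0.65:
        return 0.44
    else:
        return 0.44 * (440.0 ** (-(fraction - 0.65) / 0.35))

class ModifiedLamSchedule:
    """Adaptive schedule: nudges the temperature so the observed
    acceptance rate tracks the target trajectory."""

    def __init__(self, max_evals, temperature=1.0):
        self.max_evals = max_evals
        self.temperature = temperature
        self.accept_rate = 0.5  # exponentially weighted estimate
        self.evals = 0

    def accept(self, delta):
        """Metropolis criterion for a proposed cost change `delta`."""
        return delta <= 0 or random.random() < math.exp(-delta / self.temperature)

    def update(self, accepted):
        self.accept_rate = 0.998 * self.accept_rate + (0.002 if accepted else 0.0)
        self.evals += 1
        target = modified_lam_target(self.evals / self.max_evals)
        if self.accept_rate > target:
            self.temperature *= 0.999  # accepting too often: cool down
        else:
            self.temperature /= 0.999  # accepting too rarely: heat up
```

Note how the initial temperature and the fixed 0.999 adjustment rate are independent of the cost function's scale; that is precisely the sensitivity the abstract describes, and what the Self-Tuning Lam's early-feedback auto-adjustment is designed to remove.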


2021 · pp. 31-45
Author(s): Yuichi Yoshida

Abstract: In this chapter, we consider constant-time algorithms for continuous optimization problems. Specifically, we consider quadratic function minimization and tensor decomposition, both of which have numerous applications in machine learning and data mining. The key component in our analysis is graph limit theory, which was originally developed to study graphs analytically.
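The chapter's precise formulations and guarantees are not reproduced here; as a rough illustration of the constant-time idea only (sample a constant-size subproblem and use its rescaled optimum as an estimate), a hypothetical Python sketch for quadratic minimization, assuming an objective along the lines of minimizing v^T A v + n b^T v:

```python
import numpy as np

def sampled_quadratic_min_estimate(A, b, k, rng=None):
    """Estimate (min_v v^T A v + n b^T v) / n^2 from a random k x k
    subproblem. Illustrative sketch of the sampling idea only; the
    chapter's algorithm and normalization may differ."""
    rng = rng or np.random.default_rng()
    n = A.shape[0]
    S = rng.choice(n, size=k, replace=False)
    A_S = A[np.ix_(S, S)]
    b_S = b[S]
    # Minimizer of z^T A_S z + k b_S^T z (assuming A_S positive definite):
    # set the gradient (A_S + A_S^T) z + k b_S to zero.
    z = np.linalg.solve(A_S + A_S.T, -k * b_S)
    return (z @ A_S @ z + k * (b_S @ z)) / k**2
```

The point of the constant-time guarantee is that k depends only on the desired accuracy, not on n, so the estimate costs O(k^2) time regardless of the input size.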


2021
Author(s): Nisheeth K. Vishnoi

In the last few years, Algorithms for Convex Optimization have revolutionized algorithm design, both for discrete and continuous optimization problems. For problems like maximum flow, maximum matching, and submodular function minimization, the fastest algorithms involve essential methods such as gradient descent, mirror descent, interior point methods, and ellipsoid methods. The goal of this self-contained book is to enable researchers and professionals in computer science, data science, and machine learning to gain an in-depth understanding of these algorithms. The text emphasizes how to derive key algorithms for convex optimization from first principles and how to establish precise running time bounds. This modern text explains the success of these algorithms in problems of discrete optimization, as well as how these methods have significantly pushed the state of the art of convex optimization itself.
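As a small taste of the first-principles style the blurb describes, a minimal gradient descent sketch in Python (illustrative only; the book derives the method and its running-time bounds rigorously):

```python
import numpy as np

def gradient_descent(grad, x0, step=0.1, iters=1000):
    """Minimize a convex differentiable f via x_{t+1} = x_t - step * grad(x_t)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = x - step * grad(x)
    return x

# Example: minimize f(x) = ||x - c||^2, whose gradient is 2(x - c).
c = np.array([1.0, -2.0, 3.0])
x_min = gradient_descent(lambda x: 2.0 * (x - c), x0=np.zeros(3))
print(x_min)  # converges to c
```

The book's contribution is exactly what a sketch like this omits: how the step size, iteration count, and error guarantees are derived, and how such continuous methods yield fast algorithms for discrete problems like maximum flow and matching.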

