GENO – Optimization for Classical Machine Learning Made Fast and Easy

2020 · Vol 34 (09) · pp. 13620-13621
Author(s): Sören Laue, Matthias Mitterreiter, Joachim Giesen

Most problems from classical machine learning can be cast as optimization problems. We introduce GENO (GENeric Optimization), a framework that lets the user specify a constrained or unconstrained optimization problem in an easy-to-read modeling language. GENO then generates a solver, i.e., Python code, that can solve this class of optimization problems. The generated solver is usually as fast as hand-written, problem-specific, well-engineered solvers, and is often faster by a large margin than recently developed solvers tailored to a specific problem class. An online interface to our framework can be found at http://www.geno-project.org.
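For concreteness, the sketch below shows the kind of hand-written, problem-specific baseline the abstract refers to: ℓ2-regularized logistic regression solved with SciPy's L-BFGS. It is an illustrative stand-in only, not GENO's generated code or its modeling-language interface (documented at the URL above); the helper name `fit_logreg` is hypothetical.

```python
# Illustrative hand-written baseline (not GENO output): L2-regularized
# logistic regression,  min_w  sum_i log(1 + exp(-y_i x_i^T w)) + lam/2 ||w||^2.
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

def fit_logreg(X, y, lam=1.0):
    def obj(w):
        m = -y * (X @ w)
        # log(1 + exp(m)) computed stably via logaddexp
        return np.sum(np.logaddexp(0.0, m)) + 0.5 * lam * (w @ w)

    def grad(w):
        m = -y * (X @ w)
        return X.T @ (-y * expit(m)) + lam * w

    w0 = np.zeros(X.shape[1])
    res = minimize(obj, w0, jac=grad, method="L-BFGS-B")
    return res.x

# Usage on synthetic data:
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))
y = np.sign(X @ rng.standard_normal(10) + 0.1 * rng.standard_normal(200))
w = fit_logreg(X, y, lam=0.5)
```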

2021 · Vol 2021 · pp. 1-9
Author(s): Ahmad Alhawarat, Thoi Trung Nguyen, Ramadan Sabra, Zabidin Salleh

To solve unconstrained optimization problems, a conjugate gradient (CG) method is commonly used, since, unlike Newton's method or the Broyden–Fletcher–Goldfarb–Shanno (BFGS) method, it does not require storing second-derivative information. Recently, a modification of the Polak–Ribière–Polyak (PRP) method with a new restart condition was proposed, yielding the so-called AZPRP method. In this paper, we propose a new modification of the AZPRP CG method for solving large-scale unconstrained optimization problems, based on a modified restart condition. The new parameter satisfies the descent property, and global convergence is established under the strong Wolfe–Powell line search. The numerical results show that the new CG method is highly competitive with the CG_Descent method. The comparisons are made on a set of more than 140 standard functions from the CUTEst library and cover the number of iterations and the CPU time.
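The abstract does not reproduce the AZPRP parameter or its restart condition, so the sketch below illustrates the general scheme with the classical PRP+ formula (clipping β at zero acts as the restart) and SciPy's strong Wolfe line search; the function name `prp_cg` is illustrative.

```python
# A minimal PRP-type nonlinear CG sketch with strong Wolfe line search.
# The exact AZPRP parameter and restart test are not given in the abstract;
# the PRP+ rule (beta reset to 0 when PRP turns negative) stands in for them.
import numpy as np
from scipy.optimize import line_search

def prp_cg(f, grad, x0, tol=1e-6, max_iter=1000):
    x = x0.copy()
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        # Strong Wolfe line search along the current direction (c2=0.1 is typical for CG).
        alpha, *_ = line_search(f, grad, x, d, gfk=g, c1=1e-4, c2=0.1)
        if alpha is None:
            d = -g  # line search failed: restart with steepest descent
            alpha, *_ = line_search(f, grad, x, d, gfk=g, c1=1e-4, c2=0.1)
            if alpha is None:
                break
        x = x + alpha * d
        g_new = grad(x)
        # PRP+ update; beta = 0 amounts to a restart from steepest descent.
        beta = max(0.0, g_new @ (g_new - g) / (g @ g))
        d = -g_new + beta * d
        g = g_new
    return x

# Usage on the Rosenbrock function:
from scipy.optimize import rosen, rosen_der
x_star = prp_cg(rosen, rosen_der, np.zeros(5))
```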


2021 · Vol 36 · pp. 04007
Author(s): Gillian Yi Han Woo, Hong Seng Sim, Yong Kheng Goh, Wah June Leong

In this paper, we propose a spectral proximal method for solving sparse optimization problems. Sparse optimization refers to an optimization problem involving the ℓ0-norm in the objective or constraints. Previous research showed that the spectral gradient method outperforms other standard unconstrained optimization methods, because it replaces the full-rank matrix with a diagonal matrix, reducing the memory requirement from O(n²) to O(n). Since the ℓ0-norm term is nonconvex and nonsmooth, the problem cannot be solved by standard optimization algorithms. We consider the ℓ0-norm problem with an underdetermined linear system as its constraint. Using the Lagrange method, this problem is transformed into an unconstrained optimization problem. We propose a new method, the spectral proximal method, which combines the proximal method with the spectral gradient method, and apply it to the ℓ0-norm unconstrained optimization problem. The code is written in Python to compare the efficiency of the proposed method with some existing methods; the comparison is based on the number of iterations, the number of function calls, and the computational time. Theoretically, the proposed method requires less storage and less computational time.
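A minimal sketch of the two ingredients named above, assuming the penalized form min ½‖Ax − b‖² + λ‖x‖₀: a Barzilai–Borwein (spectral) step combined with the hard-thresholding proximal operator of the ℓ0 term. The authors' exact algorithm may differ in its details; the names `hard_threshold` and `spectral_proximal` are illustrative.

```python
# Spectral proximal sketch for  min_x  0.5*||A x - b||^2 + lam*||x||_0.
import numpy as np

def hard_threshold(v, alpha, lam):
    # Prox of alpha*lam*||.||_0: keep entries with |v_i| > sqrt(2*alpha*lam).
    out = v.copy()
    out[np.abs(v) <= np.sqrt(2.0 * alpha * lam)] = 0.0
    return out

def spectral_proximal(A, b, lam, max_iter=500, tol=1e-8):
    x = np.zeros(A.shape[1])
    g = A.T @ (A @ x - b)          # gradient of the smooth part
    alpha = 1.0                    # initial spectral step length
    for _ in range(max_iter):
        x_new = hard_threshold(x - alpha * g, alpha, lam)
        g_new = A.T @ (A @ x_new - b)
        s, y = x_new - x, g_new - g
        if np.linalg.norm(s) < tol:
            break
        # Barzilai-Borwein step: a scalar (diagonal) surrogate for the Hessian,
        # which is what keeps the memory cost at O(n) instead of O(n^2).
        alpha = (s @ s) / (s @ y) if s @ y > 0 else 1.0
        x, g = x_new, g_new
    return x

# Usage: recover a sparse vector from an underdetermined system (m < n).
rng = np.random.default_rng(1)
A = rng.standard_normal((50, 200))
x_true = np.zeros(200)
x_true[:5] = rng.standard_normal(5)
b = A @ x_true
x_hat = spectral_proximal(A, b, lam=0.01)
```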


Author(s): Xinghuo Yu, Weixing Zheng, Baolin Wu, Xin Yao, ...

In this paper, a novel penalty function approach is proposed for constrained optimization problems with linear and nonlinear constraints. It is shown that, by using a mapping function to "wrap" up the constraints, a constrained optimization problem can be converted into an unconstrained one. It is also proved mathematically that the best solution of the converted unconstrained problem approaches the best solution of the constrained problem as the tuning parameter of the wrapping function approaches zero. A tailored genetic algorithm incorporating an adaptive tuning method is then used to search for the global optimal solutions of the converted unconstrained problems. Four test examples demonstrate the effectiveness of the approach.
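The abstract does not give the wrapping function's exact form, so the sketch below uses a standard quadratic exterior penalty with a shrinking tuning parameter mu, and SciPy's differential evolution stands in for the tailored genetic algorithm; the objective, constraints, and the name `penalized` are all illustrative choices.

```python
# Penalty sketch: a quadratic exterior penalty stands in for the paper's
# "wrapping" function. As mu shrinks, the unconstrained minimizer approaches
# the constrained one.
import numpy as np
from scipy.optimize import differential_evolution

def f(x):                      # illustrative objective
    return (x[0] - 2.0) ** 2 + (x[1] - 1.0) ** 2

def g(x):                      # inequality constraints, g_i(x) <= 0
    return np.array([x[0] ** 2 - x[1],          # x1^2 <= x2
                     x[0] + x[1] - 2.0])        # x1 + x2 <= 2

def penalized(x, mu):
    # Constraint violations folded into the objective; 1/mu is the penalty weight.
    viol = np.maximum(0.0, g(x))
    return f(x) + (1.0 / mu) * np.sum(viol ** 2)

bounds = [(-5, 5), (-5, 5)]
x_best = None
for mu in [1.0, 0.1, 0.01, 0.001]:   # adaptively tighten the penalty
    res = differential_evolution(penalized, bounds, args=(mu,), seed=0)
    x_best = res.x
print(x_best)   # approaches the constrained optimum at (1, 1)
```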


Author(s): K. J. Kachiashvili

There are different methods of statistical hypothesis testing [1–4]; among them is the Bayesian approach. A generalization of the Bayesian rule for testing many hypotheses is given below. It increases the dimensionality of the decision rule with respect to the number of tested hypotheses, which allows decisions that are more differentiated than in the classical case and makes it possible to state a constrained optimization problem instead of an unconstrained one. This enables guaranteed decisions concerning the errors of rejecting true hypotheses, which is the key point when solving a number of practical problems. These generalizations are given both for sets of simple hypotheses, each containing one point of the space, and for hypotheses containing a finite set of separated points of the space.
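As a rough illustration of such a differentiated decision rule (an assumption-laden sketch, not the paper's exact formulation): accept a hypothesis only when its posterior probability clears a threshold, and otherwise withhold a decision. The threshold plays the role of the constraint bounding the probability of rejecting a true hypothesis; the hypotheses, priors, and the name `bayes_decide` are all illustrative.

```python
# Differentiated Bayesian decision among several simple hypotheses about a
# Gaussian mean: commit to H_i only when its posterior clears a threshold,
# otherwise return no decision.
import numpy as np
from scipy.stats import norm

def bayes_decide(x, means, priors, threshold=0.95, sigma=1.0):
    # Log-likelihood of the sample under each simple hypothesis H_i: mean = means[i].
    log_lik = np.array([norm.logpdf(x, loc=m, scale=sigma).sum() for m in means])
    log_post = np.log(priors) + log_lik
    post = np.exp(log_post - log_post.max())   # normalize stably
    post /= post.sum()
    i = int(np.argmax(post))
    # Commit only when the evidence is strong enough; otherwise withhold.
    return (i, post) if post[i] >= threshold else (None, post)

# Usage: three simple hypotheses about a Gaussian mean.
rng = np.random.default_rng(2)
x = rng.normal(loc=1.0, scale=1.0, size=20)
decision, post = bayes_decide(x, means=[0.0, 1.0, 2.0],
                              priors=np.array([1/3, 1/3, 1/3]))
```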

