Advanced First- and Second-Order Optimization Methods

2020 ◽  
Vol 28 (4) ◽  
pp. 473-510 ◽  
Author(s):  
Maad Mohsin Mijwil ◽  
Rana Ali Abttan

In this paper, we apply a genetic algorithm to the selection of suitable RC (resistor/capacitor) values, an essential step in the design of analogue active filters. The classical manual approach to choosing passive elements is laborious and error-prone. To reduce both the frequency of errors and the human effort involved, evolutionary optimization methods are employed to select the RC values. In this study, a genetic algorithm (GA) is proposed to optimize a second-order active filter: it must find values of the passive RC elements that yield a filter configuration which reduces sensitivity to component variations and keeps design errors below a defined threshold, with respect to given specifications. The problem the GA must solve is thus a multi-objective optimization problem (MOOP). The GA was run under two scenarios for the values that the resistors and capacitors may take. The experimental results show that the GA can produce filter configurations that meet the specified requirements.
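As a minimal sketch of the idea (not the authors' implementation), the Python fragment below evolves E12-series resistor and capacitor values for a unity-gain Sallen-Key low-pass stage, scalarizing the two objectives (cutoff-frequency error and Q error) into one weighted fitness. The topology, value series, targets, and GA operators are all assumptions chosen for illustration.

```python
import random
import math

# Hypothetical sketch: pick R1, R2, C1, C2 for a unity-gain Sallen-Key
# low-pass stage so that cutoff frequency f0 and quality factor Q approach
# target specs. The E12 series and weighted-sum fitness are assumptions.

E12 = [1.0, 1.2, 1.5, 1.8, 2.2, 2.7, 3.3, 3.9, 4.7, 5.6, 6.8, 8.2]
R_VALUES = [m * 10**e for e in range(3, 6) for m in E12]    # 1 kOhm .. 820 kOhm
C_VALUES = [m * 10**e for e in range(-9, -6) for m in E12]  # 1 nF .. 820 nF

F0_TARGET, Q_TARGET = 1_000.0, 0.707  # assumed Butterworth-like spec

def fitness(genes):
    r1, r2, c1, c2 = genes
    f0 = 1.0 / (2 * math.pi * math.sqrt(r1 * r2 * c1 * c2))
    q = math.sqrt(r1 * r2 * c1 * c2) / (c2 * (r1 + r2))  # unity-gain Sallen-Key
    # Weighted-sum scalarization of the two objectives (one common MOOP approach).
    return abs(f0 - F0_TARGET) / F0_TARGET + abs(q - Q_TARGET) / Q_TARGET

def random_individual():
    return [random.choice(R_VALUES), random.choice(R_VALUES),
            random.choice(C_VALUES), random.choice(C_VALUES)]

def evolve(pop_size=100, generations=200, p_mut=0.1):
    pop = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        survivors = pop[: pop_size // 2]          # truncation selection
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, 4)          # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < p_mut:           # mutation: redraw one gene
                i = random.randrange(4)
                child[i] = random.choice(R_VALUES if i < 2 else C_VALUES)
            children.append(child)
        pop = survivors + children
    return min(pop, key=fitness)

best = evolve()
print("R1, R2, C1, C2 =", best, " fitness =", fitness(best))
```

Restricting the gene pool to a standard value series corresponds to the discrete scenario mentioned in the abstract; allowing arbitrary positive reals would correspond to the continuous one.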


2014 ◽  
Vol 200 (2) ◽  
pp. 720-744 ◽  
Author(s):  
Clara Castellanos ◽  
Ludovic Métivier ◽  
Stéphane Operto ◽  
Romain Brossier ◽  
Jean Virieux

1992 ◽  
Vol 4 (2) ◽  
pp. 141-166 ◽  
Author(s):  
Roberto Battiti

On-line first-order backpropagation is sufficiently fast and effective for many large-scale classification problems, but for very high-precision mappings, batch processing may be the method of choice. This paper reviews first- and second-order optimization methods for learning in feedforward neural networks. The viewpoint is that of optimization: many methods can be cast in the language of optimization techniques, allowing the transfer to neural nets of detailed results about computational complexity and of safeguards that ensure convergence and avoid numerical problems. The review is not intended to deliver detailed prescriptions for the most appropriate methods in specific applications, but to illustrate the main characteristics of the different methods and their mutual relations.
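As a minimal illustration of this optimization viewpoint (an addition, not part of the review), the sketch below contrasts a first-order steepest-descent iteration with a single second-order Newton step on a toy least-squares training objective; the data, step size, and iteration count are assumptions chosen for demonstration.

```python
import numpy as np

# Toy quadratic training objective: 0.5 * ||Xw - y||^2. The first-order
# method takes many cheap gradient steps; the second-order method uses the
# exact Hessian and, for a quadratic, reaches the minimizer in one step.

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))          # inputs
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true                        # noiseless targets

def loss(w):
    return 0.5 * np.sum((X @ w - y) ** 2)

def gradient(w):
    return X.T @ (X @ w - y)

H = X.T @ X                           # Hessian is constant for this quadratic

w_gd = np.zeros(3)
for _ in range(100):                  # first order: many cheap steps
    w_gd -= 1e-2 * gradient(w_gd)

w_newton = np.zeros(3)
w_newton -= np.linalg.solve(H, gradient(w_newton))  # second order: one step

print(f"steepest descent loss after 100 steps: {loss(w_gd):.2e}")
print(f"Newton loss after 1 step:              {loss(w_newton):.2e}")
```

For real networks the Hessian is too large to form and factor, which is why the methods the review covers (conjugate gradient, quasi-Newton, and relatives) approximate second-order information rather than computing it exactly.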


Processes ◽  
2019 ◽  
Vol 7 (4) ◽  
pp. 221 ◽  
Author(s):  
Huiyi Cao ◽  
Yingkai Song ◽  
Kamil A. Khan

Convex relaxations of functions are used to provide bounding information to deterministic global optimization methods for nonconvex systems. To be useful, these relaxations must converge rapidly to the original system as the considered domain shrinks. This article examines the convergence rates of convex outer approximations for functions and nonlinear programs (NLPs), constructed using affine subtangents of an existing convex relaxation scheme. It is shown that these outer approximations inherit rapid second-order pointwise convergence from the original scheme under certain assumptions. To support this analysis, the notion of second-order pointwise convergence is extended to constrained optimization problems, general sufficient conditions for guaranteeing this convergence are developed, and their implications are discussed. An implementation of subtangent-based relaxations of NLPs in Julia is described and applied to example problems for illustration.
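To make the construction concrete (a one-dimensional sketch under assumptions; the article treats general relaxation schemes and its implementation is in Julia), the fragment below starts from an alphaBB-style convex relaxation of a nonconvex function, linearizes it at a reference point to obtain an affine subtangent, and minimizes that affine function over the box to get a cheap valid lower bound.

```python
import numpy as np

# Sketch of the subtangent construction: u is a convex relaxation of the
# nonconvex f on [xL, xU] (an alphaBB relaxation is assumed here). Any
# tangent line of u underestimates u, hence also f, on the whole box.

xL, xU = -2.0, 2.0

def f(x):                      # nonconvex original function
    return x**4 - 3.0 * x**2

def u(x, alpha=3.0):           # convex relaxation: f plus alphaBB quadratic term
    return f(x) + alpha * (x - xL) * (x - xU)

def du(x, alpha=3.0):          # derivative of the relaxation
    return 4.0 * x**3 - 6.0 * x + alpha * (2.0 * x - (xL + xU))

def subtangent_bound(x_ref):
    """Affine subtangent of u at x_ref, minimized over [xL, xU]."""
    slope = du(x_ref)
    intercept = u(x_ref) - slope * x_ref
    # An affine function attains its box minimum at an endpoint.
    return min(slope * xL + intercept, slope * xU + intercept)

xs = np.linspace(xL, xU, 401)
true_min = f(xs).min()                    # fine-grid proxy for min f
print("min f (approx):       %.4f" % true_min)
print("subtangent bound at 0: %.4f" % subtangent_bound(0.0))
```

The article's convergence result concerns how quickly such bounds tighten as the box [xL, xU] shrinks: under its assumptions the affine outer approximation retains the second-order pointwise convergence of the underlying relaxation scheme.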


2005 ◽  
Vol 78 (2) ◽  
pp. 257-272 ◽  
Author(s):  
Dragan S. Djordjević ◽  
Predrag S. Stanimirović

We develop several iterative methods for computing generalized inverses using both first- and second-order optimization methods in C*-algebras. Known steepest descent iterative methods are generalized in C*-algebras. We introduce second-order methods based on the minimization of the norms ‖Ax − b‖² and ‖x‖² by means of the known second-order unconstrained minimization methods. We give several examples which illustrate our theory.
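For a finite-dimensional flavour of the first-order approach (the paper works in general C*-algebras; this matrix sketch is an illustrative assumption), the fragment below minimizes ‖Ax − b‖² by steepest descent with exact line search. Starting from x = 0 keeps every iterate in the row space of A, so the limit is the minimum-norm least-squares solution A⁺b, checked here against numpy's pseudoinverse.

```python
import numpy as np

# Steepest descent for 0.5 * ||Ax - b||^2 with exact line search. With a
# zero initial point the iterates stay in range(A^T), so the method
# converges to the minimum-norm least-squares solution, i.e. pinv(A) @ b.

rng = np.random.default_rng(1)
A = rng.normal(size=(6, 4))
A[:, 3] = A[:, 0] + A[:, 1]        # make A rank-deficient on purpose
b = rng.normal(size=6)

x = np.zeros(4)
for _ in range(500):
    g = A.T @ (A @ x - b)          # gradient of 0.5 * ||Ax - b||^2
    Ag = A @ g
    denom = Ag @ Ag
    if denom < 1e-15:              # gradient numerically zero: done
        break
    t = (g @ g) / denom            # exact line-search step for the quadratic
    x -= t * g

print("steepest descent:", x)
print("pinv solution:   ", np.linalg.pinv(A) @ b)
```

The second-order variants in the paper replace the line-searched gradient step with Newton-type updates for the same two norm-minimization problems.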

