REDUCING DIMENSION IN GLOBAL OPTIMIZATION

2011 ◽  
Vol 08 (03) ◽  
pp. 535-544 ◽  
Author(s):  
BOUDJEHEM DJALIL ◽  
BOUDJEHEM BADREDDINE ◽  
BOUKAACHE ABDENOUR

In this paper, we propose an idea that makes global optimization easier and less costly. The main idea is to reduce the dimension of the optimization problem at hand to a one-dimensional one using variable coding. At this level, the algorithm looks for the global optimum of a one-dimensional cost function. The new algorithm is able to avoid local optima, reduces the number of function evaluations, and improves convergence speed. The method is suitable for functions that have many extrema. Our algorithm can determine a narrow region around the global optimum in a very restricted time, based on stochastic tests and an adaptive partition of the search space. Illustrative examples are presented to show the efficiency of the proposed idea. The algorithm was found to locate the global optimum even when the objective function has a large number of optima.
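
A minimal sketch of the coding idea: the digit-interleaving map below is a hypothetical stand-in for the paper's variable coding (it sends a single scalar in [0, 1) to a point in the multi-dimensional search space), and the one-dimensional search is plain random sampling rather than the paper's stochastic tests with adaptive partitioning.

```python
import numpy as np

def decode(t, dim, bits=16):
    """Map a scalar t in [0, 1) to a point in [0, 1)^dim by
    de-interleaving the binary digits of t (one possible coding,
    used here only for illustration)."""
    coords = np.zeros(dim)
    scale = np.ones(dim)
    for i in range(bits * dim):
        t *= 2.0
        bit = int(t)
        t -= bit
        scale[i % dim] /= 2.0
        coords[i % dim] += bit * scale[i % dim]
    return coords

def optimize_1d(f, dim, n_iter=2000, seed=0):
    """Search over the single coded variable t instead of the
    dim-dimensional space."""
    rng = np.random.default_rng(seed)
    best_t, best_y = 0.0, f(decode(0.0, dim))
    for t in rng.random(n_iter):
        y = f(decode(t, dim))
        if y < best_y:
            best_t, best_y = t, y
    return best_t, best_y

def f(x):
    """Multimodal test function on [0, 1)^2, global minimum 0 at (0.5, 0.5)."""
    z = x - 0.5
    return np.sum(z**2) + 0.05 * np.sum(1 - np.cos(10 * np.pi * z))
```

Because the binary digits of a uniform scalar are independent fair bits, the decoded points cover the multi-dimensional box uniformly, so nothing is lost by searching in one dimension.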

2020 ◽  
Author(s):  
Alberto Bemporad ◽  
Dario Piga

This paper proposes a method for solving optimization problems in which the decision maker cannot evaluate the objective function, but can only express a preference such as "this is better than that" between two candidate decision vectors. The algorithm described in this paper aims at reaching the global optimizer by iteratively proposing to the decision maker a new comparison to make, based on actively learning a surrogate of the latent (unknown and perhaps unquantifiable) objective function from past sampled decision vectors and pairwise preferences. A radial-basis-function surrogate is fit via linear or quadratic programming, satisfying, if possible, the preferences expressed by the decision maker on existing samples. The surrogate is used to propose a new sample of the decision vector for comparison with the current best candidate, based on two possible criteria: minimize a combination of the surrogate and an inverse distance weighting function, to balance exploitation of the surrogate against exploration of the decision space; or maximize a function related to the probability that the new candidate will be preferred. Compared to active preference learning based on Bayesian optimization, we show that our approach is competitive: within the same number of comparisons, it usually approaches the global optimum more closely and is computationally lighter. Applications of the proposed algorithm to a set of benchmark global optimization problems, to multi-objective optimization, and to optimal tuning of a cost-sensitive neural network classifier for object recognition from images are described in the paper. MATLAB and Python implementations of the algorithms described in the paper are available at http://cse.lab.imtlucca.it/~bemporad/glis.
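
The preference-satisfying surrogate fit can be sketched as a linear program. The inverse-quadratic kernel, margin sigma, and l1 regularization below are illustrative choices in the spirit of the paper, not its exact formulation; the demo data (a latent objective x^2 and four sampled decisions) are hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

def rbf(X, centers, eps=1.0):
    """Inverse-quadratic radial-basis features."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return 1.0 / (1.0 + eps * d2)

def fit_preference_surrogate(X, prefs, sigma=0.1, reg=1e-4):
    """Fit RBF coefficients c so that s(x_i) <= s(x_j) - sigma (up to
    a slack) for each preference (i, j) meaning 'x_i preferred over x_j',
    minimizing total slack plus a small l1 penalty on c, as an LP."""
    n, p = len(X), len(prefs)
    Phi = rbf(X, X)
    # variables: [c (n, free), eps (p, >= 0), u (n, >= 0 with |c| <= u)]
    A_ub, b_ub = [], []
    for k, (i, j) in enumerate(prefs):
        row = np.zeros(2 * n + p)
        row[:n] = Phi[i] - Phi[j]      # s(x_i) - s(x_j)
        row[n + k] = -1.0              # minus slack eps_k
        A_ub.append(row); b_ub.append(-sigma)
    for i in range(n):                 # |c_i| <= u_i
        r1 = np.zeros(2 * n + p); r1[i] = 1.0;  r1[n + p + i] = -1.0
        r2 = np.zeros(2 * n + p); r2[i] = -1.0; r2[n + p + i] = -1.0
        A_ub += [r1, r2]; b_ub += [0.0, 0.0]
    cost = np.concatenate([np.zeros(n), np.ones(p), reg * np.ones(n)])
    bounds = [(None, None)] * n + [(0, None)] * (p + n)
    res = linprog(cost, A_ub=np.array(A_ub), b_ub=np.array(b_ub), bounds=bounds)
    return res.x[:n]

# Hypothetical demo: latent objective x^2, four sampled decisions
X = np.array([[-1.0], [-0.5], [0.2], [0.9]])
prefs = [(2, 1), (1, 0), (2, 3), (3, 0)]   # (i, j): x_i preferred over x_j
c = fit_preference_surrogate(X, prefs)
s = rbf(X, X) @ c                          # surrogate values at the samples
```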


2021 ◽  
Vol 12 (4) ◽  
pp. 98-116
Author(s):  
Noureddine Boukhari ◽  
Fatima Debbat ◽  
Nicolas Monmarché ◽  
Mohamed Slimane

Evolution strategies (ES) are a family of powerful stochastic methods for global optimization and have proved more capable of avoiding local optima than many other optimization methods. Many researchers have investigated different versions of the original evolution strategy with good results on a variety of optimization problems. However, the convergence rate of the algorithm towards the global optimum remains asymptotic. In order to accelerate convergence, a hybrid approach is proposed that uses the nonlinear simplex method (Nelder-Mead) together with an adaptive scheme to control when the local search is applied, and the authors demonstrate that this combination yields significantly better convergence. The proposed method has been tested on 15 complex benchmark functions, applied to the bi-objective portfolio optimization problem, and compared with other state-of-the-art techniques. Experimental results show that the hybridization improves performance in terms of both solution quality and convergence.
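
A minimal sketch of the hybridization, assuming a basic (mu, lambda)-ES with deterministic step-size decay and a fixed-period trigger for the Nelder-Mead refinement in place of the paper's adaptive control scheme:

```python
import numpy as np
from scipy.optimize import minimize

def rastrigin(x):
    """Standard multimodal benchmark, global minimum 0 at the origin."""
    x = np.asarray(x)
    return 10 * len(x) + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

def hybrid_es(f, dim=2, mu=5, lam=20, gens=60, local_every=10, seed=1):
    """(mu, lambda)-ES with a periodic Nelder-Mead local search on the
    incumbent best; a sketch of the hybrid idea, not the paper's method."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform(-5, 5, (mu, dim))
    sigma = 1.0
    best = min(pop, key=f)
    for g in range(gens):
        parents = pop[rng.integers(mu, size=lam)]
        off = parents + sigma * rng.standard_normal((lam, dim))
        pop = np.array(sorted(off, key=f)[:mu])   # comma selection
        sigma *= 0.97                             # simple step-size decay
        if f(pop[0]) < f(best):
            best = pop[0]
        if (g + 1) % local_every == 0:            # periodic local refinement
            res = minimize(f, best, method="Nelder-Mead",
                           options={"maxiter": 200})
            if res.fun < f(best):
                best = res.x
    return best, f(best)

best_x, best_val = hybrid_es(rastrigin)
```

The ES handles the global exploration while Nelder-Mead sharpens the incumbent, which is exactly the division of labor the abstract describes.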


2014 ◽  
Vol 2014 ◽  
pp. 1-14 ◽  
Author(s):  
Hui Lu ◽  
Zheng Zhu ◽  
Xiaoteng Wang ◽  
Lijuan Yin

The test task scheduling problem (TTSP) is a typical combinatorial optimization scheduling problem. This paper proposes a variable neighborhood MOEA/D (VNM) to solve the multiobjective TTSP. Two minimization objectives, the maximal completion time (makespan) and the mean workload, are considered together. In order to bring the obtained solutions closer to the true Pareto front, a variable neighborhood strategy is adopted; the variable neighborhood approach keeps the crossover span reasonable. Additionally, because the search space of the TTSP is so large that many duplicate solutions and local optima exist, a starting mutation is applied to prevent solutions from becoming trapped in local optima. It is proved, using a Markov chain model and its transition matrix, that the solutions obtained by VNM converge to the global optimum. Experiments comparing VNM, MOEA/D, and CNSGA (chaotic nondominated sorting genetic algorithm) indicate that VNM performs better than MOEA/D and CNSGA in solving the TTSP. The results demonstrate that the proposed algorithm VNM is an efficient approach to solving the multiobjective TTSP.
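
The two objectives can be illustrated on a toy decoding of a task permutation. The greedy earliest-available-instrument rule and the data below are hypothetical simplifications: the real TTSP also involves alternative instrument assignments per task.

```python
import numpy as np

def schedule_objectives(order, task_times, n_instruments):
    """Decode a task permutation by assigning each task to the
    earliest-available instrument, then return the two objectives
    from the paper: makespan and mean instrument workload."""
    finish = np.zeros(n_instruments)
    for t in order:
        k = np.argmin(finish)          # earliest-available instrument
        finish[k] += task_times[t]
    return finish.max(), finish.mean() # makespan, mean workload

# Hypothetical instance: 4 tasks on 2 instruments
makespan, mean_wl = schedule_objectives([0, 1, 2, 3], [3.0, 2.0, 2.0, 1.0], 2)
```

Note the two objectives conflict in general: packing tasks to minimize makespan tends to load all instruments, which raises the mean workload a balanced-but-longer schedule might avoid.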


2009 ◽  
Vol 2009 ◽  
pp. 1-21 ◽  
Author(s):  
Joaquín Cervera ◽  
Alfonso Baños

This work focuses on the problem of automatic loop shaping in robust control, more specifically in the framework given by Quantitative Feedback Theory (QFT). Traditionally, the search for an optimum design, a nonconvex and nonlinear optimization problem, is simplified by linearizing and/or convexifying the problem. In this work, the authors propose a suboptimal solution using a fixed compensator structure and evolutionary optimization. The main novelty with respect to previous work is the study of fractional compensators, whose singular properties allow the open-loop gain function to be shaped automatically with a minimum set of parameters, which is crucial for the success of evolutionary algorithms. Additional heuristics are proposed to guide the evolutionary process towards near-optimum solutions, focusing on local optima avoidance.
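
A toy version of the approach, with a hypothetical first-order plant, a single fractional element C(s) = k/s^alpha as the low-parameter compensator, and a crossover-gain/phase-margin cost standing in for the real QFT bounds; scipy's differential evolution plays the role of the evolutionary optimizer.

```python
import numpy as np
from scipy.optimize import differential_evolution

def loop_response(params, w):
    """Open loop L(jw) = C(jw) P(jw), with fractional compensator
    C(s) = k / s^alpha and hypothetical plant P(s) = 1 / (s + 1)."""
    k, alpha = params
    s = 1j * w
    return (k / s**alpha) * (1.0 / (s + 1.0))

def design_cost(params, wc=1.0, target_pm_deg=60.0):
    """Penalize deviation from unit loop gain at wc and from the target
    phase margin there (a toy loop-shaping objective, not QFT bounds)."""
    L = loop_response(params, np.array([wc]))[0]
    gain_err = (abs(L) - 1.0) ** 2
    pm = 180.0 + np.degrees(np.angle(L))
    return gain_err + ((pm - target_pm_deg) / 180.0) ** 2

# Only two parameters (k, alpha) need tuning, which is the point of
# the fractional structure: a small search space for the evolutionary step.
res = differential_evolution(design_cost, bounds=[(0.1, 10.0), (0.1, 1.5)],
                             seed=3, tol=1e-10, maxiter=200)
k_opt, alpha_opt = res.x
```

For this toy problem the optimum can be checked by hand: |L(j1)| = k/sqrt(2) and the phase margin is 135 - 90*alpha degrees, so k = sqrt(2) and alpha = 5/6.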


2008 ◽  
Vol 33-37 ◽  
pp. 1407-1412
Author(s):  
Ying Hui Lu ◽  
Shui Lin Wang ◽  
Hao Jiang ◽  
Xiu Run Ge

In geotechnical engineering, based on the theory of displacement inverse analysis, the problem of identifying material parameters can be transformed into an optimization problem. Commonly, because of the nonlinear relationship between the identified parameters and the displacement, the objective function is multimodal in the variable space. To better handle this multimodality in nonlinear inverse analysis, a new global optimization algorithm is proposed that integrates the dynamic descent algorithm and the modified BFGS (Broyden-Fletcher-Goldfarb-Shanno) algorithm. Five typical multimodal functions are used to verify that the new algorithm quickly converges to the best point with few function evaluations. In a practical application, the new algorithm is employed to identify the Young's modulus of four different materials. The identification results further show that the proposed algorithm is highly efficient and robust.
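
A simplified stand-in for the global-local coupling: multistart BFGS (scipy's implementation rather than the paper's modified BFGS, and random restarts rather than dynamic descent) on a standard multimodal test function.

```python
import numpy as np
from scipy.optimize import minimize

def multistart_bfgs(f, dim, n_starts=30, seed=4):
    """Repeated BFGS descents from random starting points, keeping the
    best local optimum found; the local solver alone would stall in
    whichever basin it starts in."""
    rng = np.random.default_rng(seed)
    best_x, best_y = None, np.inf
    for _ in range(n_starts):
        res = minimize(f, rng.uniform(-3, 3, dim), method="BFGS")
        if res.fun < best_y:
            best_x, best_y = res.x, res.fun
    return best_x, best_y

def six_hump_camel(x):
    """Standard test function: six local minima, two of them global
    with value about -1.0316."""
    x1, x2 = x
    return ((4 - 2.1 * x1**2 + x1**4 / 3) * x1**2
            + x1 * x2 + (-4 + 4 * x2**2) * x2**2)

best_x, best_y = multistart_bfgs(six_hump_camel, dim=2)
```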


2006 ◽  
Vol 128 (6) ◽  
pp. 1272-1284 ◽  
Author(s):  
Bram Demeulenaere ◽  
Erwin Aertbeliën ◽  
Myriam Verschuure ◽  
Jan Swevers ◽  
Joris De Schutter

This paper focuses on reducing the dynamic reactions (shaking force, shaking moment, and driving torque) of planar crank-rocker four-bars through counterweight addition. Determining the counterweight mass parameters constitutes a nonlinear optimization problem, which suffers from local optima. This paper, however, proves that it can be reformulated as a convex program, that is, a nonlinear optimization problem of which any local optimum is also globally optimal. Because of this unique property, it is possible to investigate (and by virtue of the guaranteed global optimum, in fact prove) the ultimate limits of counterweight balancing. In a first example a design procedure is presented that is based on graphically representing the ultimate limits in design charts. A second example illustrates the versatility and power of the convex optimization framework by reformulating an earlier counterweight balancing method as a convex program and providing improved numerical results for it.
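
The property the reformulation buys can be demonstrated on a small convex stand-in: a local solver started from two different points reaches the same global optimum. The quadratic below is illustrative only, not the paper's counterweight-balancing model.

```python
import numpy as np
from scipy.optimize import minimize

# Convex quadratic: 0.5 x'Qx + b'x with Q positive definite, so any
# local optimum is the unique global optimum.
Q = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, -1.0])

def f(x):
    return 0.5 * x @ Q @ x + b @ x

x_star = np.linalg.solve(Q, -b)          # analytic minimizer
sol1 = minimize(f, np.array([10.0, -7.0])).x   # two very different starts
sol2 = minimize(f, np.array([-3.0, 5.0])).x
```

Both runs land on the same point, which is why a convex reformulation lets the authors prove, rather than merely observe, the ultimate limits of counterweight balancing.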


Author(s):  
T.A. Agasiev

Methods of landscape analysis are developed to estimate various characteristic features of the objective function in an optimization problem. The accuracy of the estimates largely depends on the chosen experiment design for the landscape sampling, i.e. on the number and location of the points in the search space that form a discrete representation of the objective function landscape. The information content method is the most resistant to changes in the experiment design, but it requires building a route that traverses the obtained landscape-sample points. A method for characterizing the objective function of an optimization problem is proposed based on a landscape sample without building a route through its points. The notion of a variability map of the objective function is introduced. Informativeness criteria are formulated for groups of points of a landscape sample. A method of constructing the so-called full variability map is proposed, as well as a generalized information content function for analyzing the characteristic features of the objective function. The method yields more accurate estimates of objective function characteristics that are resistant to variations of the experiment design.
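
For context, the route-based information-content measure that the proposed method avoids can be sketched as follows. This is the classic formulation over a fitness sequence sampled along a route; the paper's variability map replaces the route entirely.

```python
import numpy as np

def information_content(fvals, eps=0.0):
    """Classic information content of a fitness sequence along a route:
    encode consecutive differences as symbols in {-1, 0, 1} (with
    sensitivity eps), then take the entropy of adjacent pairs of
    differing symbols, log base 6 for the six such pairs."""
    d = np.diff(np.asarray(fvals, float))
    s = np.where(d > eps, 1, np.where(d < -eps, -1, 0))
    pairs = list(zip(s[:-1], s[1:]))
    counts = {}
    for p, q in pairs:
        if p != q:                       # only differing-symbol pairs
            counts[(p, q)] = counts.get((p, q), 0) + 1
    if not pairs or not counts:
        return 0.0
    probs = np.array([c / len(pairs) for c in counts.values()])
    return float(-(probs * (np.log(probs) / np.log(6))).sum())
```

A monotone sequence carries no rugosity information (value 0), while an oscillating one scores high; the measure depends on the traversal order, which is exactly the dependence the variability-map approach removes.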


Author(s):  
Liqun Wang ◽  
Songqing Shan ◽  
G. Gary Wang

The presence of black-box functions in engineering design, which are usually computation-intensive, demands efficient global optimization methods. This work proposes a new global optimization method for black-box functions. It is based on a novel mode-pursuing sampling (MPS) method which systematically generates more sample points in the neighborhood of the function mode while statistically covering the entire search space. Quadratic regression is performed to detect the region containing the global optimum. The sampling and detection process iterates until the global optimum is obtained. Through intensive testing, this method is found to be effective, efficient, robust, and applicable to both continuous and discontinuous functions. It supports simultaneous computation and applies to both unconstrained and constrained optimization problems. Because it does not call any existing global optimization tool, it can also be used as a standalone global optimization method for inexpensive problems. Limitations of the method are also identified and discussed.
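
The mode-pursuing step can be sketched as weighted resampling of uniform candidates: low objective values get high resampling weight, so points accumulate around the current mode while the uniform proposals keep covering the whole space. The weighting rule is an illustrative choice, and the quadratic-regression detection stage is omitted.

```python
import numpy as np

def mode_pursuing_sample(f, bounds, n_iter=30, batch=50, seed=5):
    """Accumulate samples biased toward the function mode: draw uniform
    candidates, then resample them with probability proportional to how
    good (low) their objective value is."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    X = rng.uniform(lo, hi, (batch, len(lo)))
    for _ in range(n_iter):
        cand = rng.uniform(lo, hi, (batch, len(lo)))
        y = np.array([f(x) for x in cand])
        w = (y.max() - y) + 1e-12          # lower value -> larger weight
        idx = rng.choice(batch, size=batch, p=w / w.sum())
        X = np.vstack([X, cand[idx]])
    ybest = min(f(x) for x in X)
    return X, ybest

# Hypothetical smooth objective on [0, 1]^2 with minimum at (0.7, 0.7)
X, ybest = mode_pursuing_sample(lambda x: np.sum((x - 0.7) ** 2),
                                (np.zeros(2), np.ones(2)))
```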


2019 ◽  
Vol 2019 ◽  
pp. 1-14 ◽  
Author(s):  
Hassan Smaoui ◽  
Abdelkabir Maqsoud ◽  
Sami Kaidi

The solution of inverse problems in groundwater flow has been extensively investigated by researchers around the world. This type of problem is formulated as a constrained optimization problem whose constraint is the direct problem (DP) itself. Thus, solution algorithms are developed that simultaneously solve the direct problem (Darcy's equation) and the associated optimization problem. Several papers published in the literature use optimization methods based on computing the gradients of the objective function. This type of method cannot guarantee a global optimum, and it has the further disadvantage of not being applicable to objective functions with discontinuous derivatives. This paper proposes to avoid these disadvantages. For the optimization phase, we use random-search methods that require no derivative computations, relying only on a search step followed by evaluations of the objective function, repeated as many times as necessary to converge towards the global optimum. Among the algorithms of this type, we adopted the genetic algorithm (GA). The numerical solution of the direct problem is accomplished by the Control Volume Finite Element Method (CVFEM), whose mathematical formulation ensures mass conservation in a natural way. The resulting computation code HySubF-CVFEM (Hydrodynamic of Subsurface Flow by Control Volume Finite Element Method) solves the Darcy equation in a heterogeneous porous medium. This paper describes the integrated optimization code, called HySubF-CVFEM/GA, which was implemented and validated against a schematic flow case with an analytical solution; the comparison shows excellent accuracy.
To identify the transmissivity field of a realistic study area, the HySubF-CVFEM/GA code was applied to the coastal "Chaouia" aquifer located in western Morocco. This highly heterogeneous aquifer is an essential water resource for the Casablanca region. The analysis of results shows that the developed code provides high-accuracy transmissivity fields that represent the heterogeneity observed in situ. However, compared with gradient-based optimization, the HySubF-CVFEM/GA code converges slowly to the optimal solution (high CPU time). Despite this disadvantage, and given the high accuracy of the obtained results, the HySubF-CVFEM/GA code can be recommended for solving parameter identification problems in hydrogeology efficiently and effectively.
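
The optimization half of the pipeline can be sketched with a toy forward model in place of HySubF-CVFEM. The one-parameter flow profile, the GA operators, and all numerical values below are hypothetical simplifications.

```python
import numpy as np

def forward(T, xs, q=1e-3, L=100.0):
    """Toy 1-D steady-flow head profile under uniform recharge q and a
    single transmissivity T; a stand-in for the CVFEM direct solver."""
    return q * xs * (L - xs) / (2.0 * T)

def ga_identify(h_obs, xs, pop_size=40, gens=60, seed=7):
    """Minimal real-coded GA (tournament selection, arithmetic crossover,
    Gaussian mutation) minimizing the squared head misfit; no derivative
    of the objective is ever needed."""
    rng = np.random.default_rng(seed)
    lo, hi = 1e-4, 1.0                      # assumed search bounds on T
    misfit = lambda T: np.sum((forward(T, xs) - h_obs) ** 2)
    pop = rng.uniform(lo, hi, pop_size)
    for _ in range(gens):
        fit = np.array([misfit(T) for T in pop])
        children = []
        for _ in range(pop_size):
            i, j, k, l = rng.integers(pop_size, size=4)
            a = pop[i] if fit[i] < fit[j] else pop[j]   # tournament picks
            b = pop[k] if fit[k] < fit[l] else pop[l]
            child = 0.5 * (a + b) + 0.02 * (hi - lo) * rng.standard_normal()
            children.append(np.clip(child, lo, hi))
        pop = np.array(children)
    return min(pop, key=misfit)

xs = np.linspace(0.0, 100.0, 21)
h_obs = forward(0.05, xs)          # synthetic observations, true T = 0.05
T_est = ga_identify(h_obs, xs)
```

Each fitness evaluation calls the forward model once, which is why the abstract reports large CPU times when the direct solver is a full CVFEM run rather than a closed-form profile.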


2018 ◽  
Vol 8 (9) ◽  
pp. 1664 ◽  
Author(s):  
Abdul Wadood ◽  
Saeid Gholami Farkoush ◽  
Tahir Khurshaid ◽  
Chang-Hwan Kim ◽  
Jiangtao Yu ◽  
...  

In electrical engineering problems, bio- and nature-inspired optimization techniques are valuable ways to minimize or maximize an objective function. We use the root tree optimization (RTO) algorithm, inspired by the random movement of roots, to search for the global optimum and best solve the coordination problem of overcurrent relays (OCRs). This is a complex and highly constrained linear optimization problem. It has one type of design variable, the time multiplier setting (TMS) of each relay in the circuit. The objective function minimizes the total operating time of all primary relays to avoid excessive interruptions. In this paper, three case studies are considered. The simulation results show that RTO with suitable parameter settings performs better than other up-to-date algorithms.
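
With TMS as the only design variable and fixed pickup currents, each relay's IEC standard-inverse operating time is linear in its TMS, so a small coordination instance can even be posed as a linear program (shown below as a reference baseline, not as the RTO method). The three-relay data, and the assumption that a backup relay sees the same current multiple as for its own primary fault, are hypothetical simplifications.

```python
import numpy as np
from scipy.optimize import linprog

def iec_coeff(M):
    """IEC standard-inverse curve: t = TMS * 0.14 / (M**0.02 - 1),
    so this returns the coefficient multiplying TMS for a given
    fault-current multiple M."""
    return 0.14 / (M ** 0.02 - 1.0)

# Hypothetical 3-relay radial example
M = np.array([10.0, 8.0, 6.0])        # current multiples seen by each relay
a = iec_coeff(M)                      # operating time t_i = a_i * TMS_i
pairs = [(1, 0), (2, 1)]              # (backup, primary) relay pairs
CTI = 0.3                             # coordination time interval (s)

# minimize total primary operating time sum_i a_i * TMS_i
# subject to t_backup - t_primary >= CTI for each pair
A_ub, b_ub = [], []
for bk, pr in pairs:
    row = np.zeros(3)
    row[pr] = a[pr]                   # + t_primary
    row[bk] = -a[bk]                  # - t_backup
    A_ub.append(row); b_ub.append(-CTI)
res = linprog(a, A_ub=A_ub, b_ub=b_ub, bounds=[(0.05, 1.1)] * 3)
tms = res.x
```

The LP pushes every TMS to the smallest value consistent with the coordination chain, which is also the structure a metaheuristic like RTO must discover.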

