A Robust Optimization Approach Using Taguchi’s Loss Function for Solving Nonlinear Optimization Problems

Author(s): Balaji Ramakrishnan, S. S. Rao

Abstract: The application of the concept of robust design, based on Taguchi's loss function, to formulating and solving nonlinear optimization problems is investigated. The effectiveness of the approach is illustrated with two examples. The first is a machining parameter optimization problem in which production cost, tool life, and production rate are optimized subject to limits on machining characteristics such as cutting power, cutting tool temperature, and surface finish. The second is a welded beam design problem in which the dimensions of the weldment and the beam are found without exceeding the stated limits on the shear stress in the weld, the normal stress in the beam, the buckling load on the beam, and the tip deflection of the beam. The results are highlighted by comparing the solutions of the robust formulation with those obtained from the conventional formulation. The methodology presented in this work is expected to be useful in the design of products and processes that are least sensitive to noise and therefore of higher quality.
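
At the heart of such robust formulations is Taguchi's quadratic loss, whose expectation decomposes into a bias term and a variance term; the variance term is what penalizes sensitivity to noise. A minimal sketch of this expected loss, with an illustrative loss coefficient k:

```python
import numpy as np

def expected_taguchi_loss(samples, target, k=1.0):
    """Expected Taguchi quadratic loss E[L] = k * ((mean - target)^2 + var).

    `samples` are response values of a design simulated under noise;
    `k` is a problem-specific loss coefficient. Minimizing this trades off
    mean-on-target performance against sensitivity to noise."""
    y = np.asarray(samples, dtype=float)
    return k * ((y.mean() - target) ** 2 + y.var())
```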

2019, Vol 2019, pp. 1-19
Author(s): NingNing Du, Yan-Kui Liu, Ying Liu

In financial optimization problems, the optimal portfolios usually depend heavily on the distributions of the uncertain return rates. When distributional information about the uncertain return rates is only partially available, it is important for investors to find a robust solution that is immunized against this distributional uncertainty. The main contribution of this paper is to develop an ambiguous value-at-risk (VaR) optimization framework for portfolio selection problems in which the distributions of the uncertain return rates are partially available. For tractability, we derive new safe approximations of the ambiguous probabilistic constraints under two types of random perturbation sets and obtain two equivalent tractable formulations of these constraints. Finally, to demonstrate the potential for solving portfolio optimization problems, we provide a practical example based on the Chinese stock market. The advantage of the proposed robust optimization method is also illustrated by comparing it with an existing optimization approach via numerical experiments.
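
To convey the flavor of a safe approximation, the sketch below replaces a chance (VaR) constraint with a conditional value-at-risk (CVaR) constraint in the Rockafellar-Uryasev form, which is convex and conservative for the chance constraint. The scenario data, risk level, and loss cap are all illustrative, and the paper's perturbation-set machinery is not reproduced.

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
S, n = 500, 5                               # scenarios x assets (synthetic)
R = rng.normal(0.001, 0.02, size=(S, n))    # sampled return scenarios

w = cp.Variable(n, nonneg=True)             # portfolio weights
t = cp.Variable()                           # VaR-level auxiliary variable
eps = 0.05                                  # risk level of the chance constraint

# CVaR >= VaR, so "CVaR <= cap" is a convex, conservative ("safe")
# approximation of the chance constraint P(loss > cap) <= eps.
losses = -(R @ w)
cvar = t + cp.sum(cp.pos(losses - t)) / (S * eps)
problem = cp.Problem(cp.Maximize(R.mean(axis=0) @ w),
                     [cp.sum(w) == 1, cvar <= 0.02])
problem.solve()
```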


Symmetry, 2020, Vol 12 (9), pp. 1529
Author(s): Jung-Fa Tsai, Ming-Hua Lin, Duan-Yi Wen

Several structural design problems that involve continuous and discrete variables are very challenging because of the combinatorial and non-convex characteristics of the problems. Although the deterministic optimization approach theoretically guarantees finding the global optimum, it usually incurs a significant computational burden. This article studies a deterministic approach for globally solving mixed-discrete structural optimization problems. An improved method that symmetrically reduces the number of constraints for linearly expressing signomial terms with purely discrete variables is applied to significantly enhance the computational efficiency of obtaining the exact global optimum of the mixed-discrete structural design problem. Numerical experiments on the stepped cantilever beam design problem and the pressure vessel design problem are conducted to show the efficiency and effectiveness of the presented approach. Compared with existing methods, this study introduces fewer convex terms and constraints for transforming the mixed-discrete structural problem and uses much less computational time to solve the reformulated problem to global optimality.
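
The exact linearization such reformulations build on can be sketched for a single discrete variable: a signomial power of x in {d_1, ..., d_m} is expressed through binary selection variables. The article's contribution, a symmetric scheme that needs fewer constraints for multi-variable terms, is not reproduced here.

```latex
% One-variable sketch: pick exactly one admissible value d_j for x,
% so any power x^alpha becomes a linear expression in the binaries.
\sum_{j=1}^{m}\lambda_j = 1,\qquad
x=\sum_{j=1}^{m} d_j\,\lambda_j,\qquad
x^{\alpha}=\sum_{j=1}^{m} d_j^{\alpha}\,\lambda_j,\qquad
\lambda_j\in\{0,1\}.
```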


Author(s): Yu Gu, Xiaoping Qian

In this paper, we present an extension of the B-spline based density representation to a robust formulation of topology optimization. In our B-spline based topology optimization approach, we use separate representations for the material density distribution and the analysis: B-splines represent the density, while the usual finite elements are used for analysis. The density undergoes a Heaviside projection to reduce the grayness in the optimized structures. To ensure minimal length control, so that the resulting designs are robust to manufacturing imprecision, we adopt a three-structure formulation during the optimization; that is, dilated, intermediate, and eroded designs are used in the optimization formulation. We give an analytical description of the minimal feature length in optimized designs. Numerical examples have been implemented on three common topology optimization problems: minimal compliance, heat conduction, and compliant mechanisms. They demonstrate that the proposed approach is effective in generating designs with a crisp black/white transition and is accurate in minimal length control.
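
A common form of the smoothed Heaviside projection used in such robust (dilate/intermediate/erode) formulations is sketched below; the threshold and sharpness values are illustrative, and the paper's B-spline density representation may differ in detail.

```python
import numpy as np

def heaviside_projection(rho, beta, eta):
    """Smoothed Heaviside projection of a density field rho in [0, 1].

    beta controls sharpness (larger -> closer to a 0/1 step); eta is the
    threshold. Shifting eta yields the dilated (small eta), intermediate,
    and eroded (large eta) designs of a robust formulation."""
    return ((np.tanh(beta * eta) + np.tanh(beta * (rho - eta)))
            / (np.tanh(beta * eta) + np.tanh(beta * (1.0 - eta))))

rho = np.linspace(0.0, 1.0, 101)
dilated = heaviside_projection(rho, beta=16.0, eta=0.3)       # illustrative
intermediate = heaviside_projection(rho, beta=16.0, eta=0.5)  # threshold
eroded = heaviside_projection(rho, beta=16.0, eta=0.7)        # choices
```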


Author(s): Burak Kocuk

In this paper, we consider a Kullback-Leibler divergence constrained distributionally robust optimization model. This model considers an ambiguity set that consists of all distributions whose Kullback-Leibler divergence to an empirical distribution is bounded. Utilizing the fact that this divergence measure has an exponential cone representation, we obtain the robust counterpart of the Kullback-Leibler divergence constrained distributionally robust optimization problem as a dual exponential cone constrained program under mild assumptions on the underlying optimization problem. The resulting conic reformulation of the original optimization problem can be directly solved by a commercial conic programming solver. We specialize our generic formulation to two classical optimization problems, namely, the Newsvendor Problem and the Uncapacitated Facility Location Problem. Our computational study in an out-of-sample analysis shows that the solutions obtained via the distributionally robust optimization approach yield significantly better performance in terms of the dispersion of the cost realizations while the central tendency deteriorates only slightly compared to the solutions obtained by stochastic programming.
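
For intuition, the worst-case expectation over a Kullback-Leibler ball around the empirical distribution admits a one-dimensional dual that is exponential-cone representable. A minimal numerical sketch of that dual follows; it is not the paper's full conic program, and the search interval is illustrative.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import logsumexp

def worst_case_mean(costs, p_hat, radius):
    """sup { E_p[cost] : KL(p || p_hat) <= radius } via its scalar dual
    min_{a > 0} a * radius + a * log(sum_i p_hat_i * exp(cost_i / a))."""
    c = np.asarray(costs, dtype=float)
    dual = lambda a: a * radius + a * logsumexp(c / a, b=p_hat)
    res = minimize_scalar(dual, bounds=(1e-6, 1e6), method='bounded')
    return res.fun

# e.g. worst_case_mean([1.0, 2.0, 5.0], np.ones(3) / 3, radius=0.1)
```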


2019, pp. 2022-2029
Author(s): Saba Nasser Majeed

In this paper, we propose new types of non-convex functions, called strongly --vex functions and semi strongly --vex functions, and study some of their properties. As an application of these functions to optimization problems, we discuss some optimality properties of the generalized nonlinear optimization problem whose objective function is a strongly --vex or semi strongly --vex function.


2021, Vol 63 (3), pp. 293-298
Author(s): Nantiwat Pholdee, Vivek K. Patel, Sadiq M. Sait, Sujin Bureerat, Ali Rıza Yıldız

Abstract: In this research, a novel optimization algorithm, the hybrid spotted hyena-Nelder-Mead (HSHO-NM) algorithm, is introduced for solving grinding optimization problems. A well-known grinding optimization problem is solved to demonstrate the superiority of HSHO-NM, and its results are compared with those of other algorithms. The results show that HSHO-NM is an efficient optimization approach for obtaining the optimal manufacturing variables in grinding operations.
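
The general pattern of such hybrids, a global metaheuristic for exploration followed by Nelder-Mead simplex refinement, can be sketched as follows. Plain random search stands in for the spotted hyena optimizer here, so this illustrates the hybridization pattern rather than the authors' algorithm.

```python
import numpy as np
from scipy.optimize import minimize

def hybrid_optimize(f, bounds, n_global=200, seed=0):
    """Global exploration (random search as a stand-in for a metaheuristic)
    followed by local Nelder-Mead refinement of the best candidate."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    candidates = rng.uniform(lo, hi, size=(n_global, lo.size))
    best = min(candidates, key=f)                     # exploration stage
    result = minimize(f, best, method='Nelder-Mead')  # exploitation stage
    return result.x, result.fun

# e.g. hybrid_optimize(lambda x: (x[0] - 1) ** 2 + x[1] ** 2, [(-5, 5), (-5, 5)])
```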


2020, Vol 12 (21), pp. 3541
Author(s): Saori Takeyama, Shunsuke Ono, Itsuo Kumazawa

We propose a new constrained optimization approach to hyperspectral (HS) image restoration. Most existing methods restore a desirable HS image by solving an optimization problem whose objective combines regularization and data-fidelity terms. Because both kinds of terms must be handled simultaneously in one objective function, the hyperparameter(s) balancing them must be controlled carefully. However, setting such hyperparameters is often a troublesome task because their suitable values depend strongly on the regularization terms adopted and on the noise intensities of a given observation. Our proposed method is formulated as a convex optimization problem that uses a novel hybrid regularization technique named Hybrid Spatio-Spectral Total Variation (HSSTV) and incorporates data-fidelity as hard constraints. HSSTV has a strong noise and artifact removal ability while avoiding oversmoothing and spectral distortion, without combining other regularizations such as low-rank modeling-based ones. In addition, the constraint-type data-fidelity lets us translate the hyperparameters that balance regularization against data-fidelity into upper bounds on the degree of data-fidelity, which can be set in a much easier manner. We also develop an efficient algorithm based on the alternating direction method of multipliers (ADMM) to solve the optimization problem. We illustrate the advantages of the proposed method over various HS image restoration methods, including state-of-the-art ones, through comprehensive experiments.
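
Schematically, the constraint-type formulation has the shape below, where the observation operator, the observed data, and the fidelity bound (written here as Phi, v, and epsilon) are illustrative placeholders; the paper's exact constraint set may differ. The point is that the user sets the bound epsilon directly instead of tuning a balancing weight.

```latex
% Constraint-type data-fidelity (schematic):
\min_{\mathbf{u}}\ \operatorname{HSSTV}(\mathbf{u})
\quad\text{subject to}\quad
\|\Phi\mathbf{u}-\mathbf{v}\|_{2}\le\varepsilon,
\qquad \mathbf{u}\in[\mu_{\min},\mu_{\max}]^{N}.
```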


2004, Vol 21 (02), pp. 207-224
Author(s): Herminia I. Calvete, Carmen Galé

Bilevel programming involves two optimization problems in which the constraint region of the first-level problem is implicitly determined by another optimization problem. This model has been applied to decentralized planning problems involving a decision process with a hierarchical structure. In this paper, we consider the bilevel linear fractional/linear programming problem, in which the objective function of the first level is linear fractional, the objective function of the second level is linear, and the common constraint region is a polyhedron. For this problem, taking into account the relationship between the second-level optimization problem and its dual, a global optimization approach is proposed that uses an exact penalty function based on the duality gap of the second-level problem.
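
The construction can be sketched with illustrative notation: if the second level is min_y { d^T y : By >= b - Ax, y >= 0 } with dual variables u, the nonnegative duality gap d^T y - (b - Ax)^T u vanishes exactly when y is second-level optimal, so it can be moved into the leader's objective as an exact penalty with a sufficiently large M > 0:

```latex
% Single-level exact-penalty reformulation (schematic notation):
\min_{x,\,y,\,u}\;
\frac{c_1^{\top}x + d_1^{\top}y + \alpha}{c_2^{\top}x + d_2^{\top}y + \beta}
+ M\,\bigl(d^{\top}y - (b - Ax)^{\top}u\bigr)
\quad\text{s.t.}\quad
By \ge b - Ax,\;\; B^{\top}u \le d,\;\; y \ge 0,\;\; u \ge 0.
```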


2020, Vol 2 (1), pp. 37-55
Author(s): Carl Leake, Daniele Mortari

This article presents a new methodology called Deep Theory of Functional Connections (TFC) that estimates the solutions of partial differential equations (PDEs) by combining neural networks with the TFC. The TFC is used to transform PDEs into unconstrained optimization problems by analytically embedding the PDE’s constraints into a “constrained expression” containing a free function. In this research, the free function is chosen to be a neural network, which is used to solve the now unconstrained optimization problem. This optimization problem consists of minimizing a loss function chosen to be the square of the residuals of the PDE. The neural network is trained in an unsupervised manner to minimize this loss function. This methodology has two major differences from popular methods used to estimate the solutions of PDEs. First, this methodology does not need to discretize the domain into a grid; rather, it can randomly sample points from the domain during the training phase. Second, after training, this methodology produces an accurate analytical approximation of the solution throughout the entire training domain. Because the methodology produces an analytical solution, it is straightforward to obtain the solution at any point within the domain and to perform further manipulation if needed, such as differentiation. In contrast, other popular methods require extra numerical techniques if the estimated solution is desired at points that do not lie on the discretized grid, or if further manipulation of the estimated solution must be performed.
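
A minimal one-dimensional sketch of a constrained expression illustrates the idea: the boundary values hold exactly for any free function g, which Deep TFC would realize as a neural network. The paper's general multivariate machinery is not reproduced here.

```python
import numpy as np

def constrained_expression(x, g, a=0.0, b=1.0, ua=0.0, ub=0.0):
    """TFC-style constrained expression on [a, b] with u(a)=ua, u(b)=ub.

    The correction terms cancel g at the boundary, so the constraints
    are satisfied exactly no matter what the free function g is."""
    phi_a = (b - x) / (b - a)
    phi_b = (x - a) / (b - a)
    return g(x) + phi_a * (ua - g(a)) + phi_b * (ub - g(b))

x = np.linspace(0.0, 1.0, 5)
u = constrained_expression(x, g=np.sin, ua=2.0, ub=-1.0)
# u[0] == 2.0 and u[-1] == -1.0 exactly, regardless of the choice of g
```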


Author(s): Adam N. Elmachtoub, Paul Grigas

Many real-world analytics problems involve two significant challenges: prediction and optimization. Because of the typically complex nature of each challenge, the standard paradigm is predict-then-optimize. By and large, machine learning tools are intended to minimize prediction error and do not account for how the predictions will be used in the downstream optimization problem. In contrast, we propose a new and very general framework, called Smart “Predict, then Optimize” (SPO), which directly leverages the optimization problem structure—that is, its objective and constraints—for designing better prediction models. A key component of our framework is the SPO loss function, which measures the decision error induced by a prediction. Training a prediction model with respect to the SPO loss is computationally challenging, and, thus, we derive, using duality theory, a convex surrogate loss function, which we call the SPO+ loss. Most importantly, we prove that the SPO+ loss is statistically consistent with respect to the SPO loss under mild conditions. Our SPO+ loss function can tractably handle any polyhedral, convex, or even mixed-integer optimization problem with a linear objective. Numerical experiments on shortest-path and portfolio-optimization problems show that the SPO framework can lead to significant improvement under the predict-then-optimize paradigm, in particular, when the prediction model being trained is misspecified. We find that linear models trained using SPO+ loss tend to dominate random-forest algorithms, even when the ground truth is highly nonlinear. This paper was accepted by Yinyu Ye, optimization.
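
For a minimization problem min_{w in S} c^T w, the SPO+ loss can be evaluated with two calls to an optimization oracle. The sketch below assumes a user-supplied `solve(cost)` routine that returns a minimizing decision for the given cost vector (for example, a shortest-path solver); that interface is an assumption of this sketch, not the paper's API.

```python
import numpy as np

def spo_plus_loss(c_hat, c, solve):
    """SPO+ surrogate loss for min_{w in S} c^T w:
    l(c_hat, c) = max_{w in S} (c - 2 c_hat)^T w + 2 c_hat^T w*(c) - z*(c),
    where w*(c) is an optimal decision under the true cost c."""
    c_hat, c = np.asarray(c_hat, float), np.asarray(c, float)
    w_star = solve(c)               # optimal decision under the true cost
    w_tilde = solve(2 * c_hat - c)  # minimizer of (2 c_hat - c)^T w, i.e.
                                    # maximizer of (c - 2 c_hat)^T w
    return (c - 2 * c_hat) @ w_tilde + 2 * (c_hat @ w_star) - c @ w_star
```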

