A stochastic design optimization methodology to reduce emission spread in combustion engines

Author(s):  
Kadir Mourat ◽  
Carola Eckstein ◽  
Thomas Koch

Abstract This paper introduces a method for efficiently solving stochastic optimization problems in the field of engine calibration. The main objective is to enable more informed decisions during the base engine calibration process by accounting for the system uncertainty caused by component tolerances, thus enabling a more robust design, lower emissions, and the avoidance of expensive recalibration steps that generate costs and can postpone the start of production. The idea behind the approach is to optimize the design parameters of the engine control unit (ECU) that are subject to uncertainty while considering the resulting output uncertainty. The premise is that a cheap-to-evaluate model of the system under study exists and that the system tolerance is known. Furthermore, it is essential that the stochastic optimization problem can be formulated such that the objective and constraint functions are expressed using suitable metrics such as the value at risk (VaR). The key step is to derive closed-form expressions for the VaR that are cheap to evaluate and thus reduce the computational effort of evaluating the objective and constraints. To this end, the VaR is learned as a function of the input parameters of the initial model using a supervised learning algorithm; for this work, we employ Gaussian process regression models. To illustrate the benefits of the approach, it is applied to a representative engine calibration problem. The results show a significant improvement in emissions compared to the deterministic setting, where the optimization problem is constructed using safety coefficients. We also show that the computation time is comparable to the deterministic setting and orders of magnitude lower than solving the problem with Monte Carlo or quasi-Monte Carlo methods.
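A minimal sketch of the surrogate idea described above (our illustration, not the authors' code; the toy response function, tolerance distribution, and grid below are assumptions): estimate the empirical VaR of a cheap system model under component tolerances on a grid of nominal design parameters, then fit a Gaussian process to obtain a VaR surrogate that is cheap to call inside an optimizer.

```python
# Sketch: learn VaR of a toy system response as a function of a nominal
# design parameter, then use the cheap GP surrogate inside an optimizer.
# The response function and tolerance level are illustrative assumptions.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

def emission_model(theta, eps):
    # Hypothetical cheap system model: nominal parameter theta,
    # component tolerance eps entering the response.
    return (theta - 1.5) ** 2 + 0.5 * eps * theta + eps ** 2

def empirical_var(theta, alpha=0.95, n_mc=2000, tol_sd=0.1):
    eps = rng.normal(0.0, tol_sd, size=n_mc)   # assumed tolerance distribution
    return np.quantile(emission_model(theta, eps), alpha)

# Evaluate the expensive quantile on a coarse design grid ...
thetas = np.linspace(0.0, 3.0, 25)
var_vals = np.array([empirical_var(t) for t in thetas])

# ... and learn VaR(theta) with a GP, as the abstract suggests.
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), normalize_y=True)
gp.fit(thetas.reshape(-1, 1), var_vals)

# The surrogate is now cheap enough to call inside an optimization loop.
theta_grid = np.linspace(0.0, 3.0, 601).reshape(-1, 1)
theta_opt = theta_grid[np.argmin(gp.predict(theta_grid))]
print("design minimizing the VaR surrogate:", float(theta_opt[0]))
```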

2021 ◽  
Author(s):  
Paul Embrechts ◽  
Alexander Schied ◽  
Ruodu Wang

We study issues of robustness in the context of Quantitative Risk Management and Optimization. We develop a general methodology for determining whether a given risk-measurement-related optimization problem is robust, a notion we call "robustness against optimization." The new notion is studied for various classes of risk measures and expected utility and loss functions. Motivated by practical issues in financial regulation, special attention is given to the two most widely used risk measures in the industry, Value-at-Risk (VaR) and Expected Shortfall (ES). We establish that for a class of general optimization problems, VaR leads to non-robust optimizers, whereas convex risk measures generally lead to robust ones. Our results offer additional insight into the ongoing discussion about the comparative advantages of VaR and ES in banking and insurance regulation. Our notion of robustness is conceptually different from that in the field of robust optimization, to which some interesting links are derived.
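For readers less familiar with the two risk measures being compared, here is a minimal sketch (our illustration, not from the paper) of estimating VaR and ES from a sample of losses:

```python
# Sketch: empirical Value-at-Risk and Expected Shortfall at level alpha.
# Convention assumed here: losses are positive, VaR is the alpha-quantile
# of the loss, and ES averages the losses beyond the VaR.
import numpy as np

def var_es(losses, alpha=0.975):
    losses = np.sort(np.asarray(losses))
    var = np.quantile(losses, alpha)
    tail = losses[losses >= var]
    es = tail.mean()  # ES >= VaR by construction
    return var, es

rng = np.random.default_rng(1)
sample = rng.standard_t(df=4, size=100_000)  # heavy-tailed toy losses
v, e = var_es(sample)
print(f"VaR_0.975 ~ {v:.3f}, ES_0.975 ~ {e:.3f}")
```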


2019 ◽  
Vol 2019 ◽  
pp. 1-19
Author(s):  
NingNing Du ◽  
Yan-Kui Liu ◽  
Ying Liu

In financial optimization problems, the optimal portfolios usually depend heavily on the distributions of the uncertain return rates. When the distributional information about the return rates is only partially available, it is important for investors to find a robust solution that is immunized against the distribution uncertainty. The main contribution of this paper is to develop an ambiguous value-at-risk (VaR) optimization framework for portfolio selection problems in which the distributions of the uncertain return rates are partially available. For tractability, we derive new safe approximations of the ambiguous probabilistic constraints under two types of random perturbation sets and obtain two equivalent tractable formulations of these constraints. Finally, to demonstrate the potential for solving portfolio optimization problems, we provide a practical example based on the Chinese stock market. The advantage of the proposed robust optimization method is also illustrated by comparing it with an existing optimization approach in numerical experiments.
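As a rough illustration of what a "safe approximation" of a probabilistic constraint can look like (our sketch, not the paper's perturbation-set formulation; the constant kappa and the toy data are assumptions), a chance constraint P(return >= r_min) >= 1 - eps can be conservatively replaced by the deterministic constraint mu'x - kappa * sqrt(x' Sigma x) >= r_min:

```python
# Sketch: portfolio selection with a conservative (safe) surrogate for the
# chance constraint P(return >= r_min) >= 1 - eps.  Under sub-Gaussian-style
# assumptions one may take kappa = sqrt(2*log(1/eps)); this is only a
# stand-in for the perturbation-set machinery developed in the paper.
import numpy as np
from scipy.optimize import minimize

mu = np.array([0.08, 0.05, 0.03])                 # toy expected returns
Sigma = np.array([[4e-4, 1e-4, 0.0],
                  [1e-4, 2e-4, 0.0],
                  [0.0,  0.0,  1e-4]])            # toy covariance
eps, r_min = 0.05, 0.0                            # require non-negative return w.p. 1-eps
kappa = np.sqrt(2.0 * np.log(1.0 / eps))

safe_chance = {"type": "ineq",                    # mu'x - kappa*||Sigma^0.5 x|| - r_min >= 0
               "fun": lambda x: mu @ x - kappa * np.sqrt(x @ Sigma @ x) - r_min}
budget = {"type": "eq", "fun": lambda x: x.sum() - 1.0}

res = minimize(lambda x: -(mu @ x), x0=np.full(3, 1 / 3),
               constraints=[safe_chance, budget],
               bounds=[(0.0, 1.0)] * 3, method="SLSQP")
print("weights:", np.round(res.x, 3), " expected return:", round(-res.fun, 4))
```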


2021 ◽  
Author(s):  
Florian Wechsung ◽  
Andrew Giuliani ◽  
M. Landreman ◽  
Antoine J Cerfon ◽  
Georg Stadler

Abstract We extend the single-stage stellarator coil design approach for quasi-symmetry on axis from [Giuliani et al, 2020] to additionally take coil manufacturing errors into account. By modeling coil errors independently of the coil discretization, we have the flexibility to consider realistic forms of coil error. The corresponding stochastic optimization problems are formulated using risk-neutral and risk-averse approaches. We present an efficient gradient-based descent algorithm, relying on analytical derivatives, to solve these problems. In a comprehensive numerical study, we compare the coil designs resulting from deterministic and risk-neutral stochastic optimization and find that the risk-neutral formulation yields more robust configurations and reduces the number of local minima of the optimization problem. We also compare the deterministic and risk-neutral approaches in terms of quasi-symmetry on and away from the magnetic axis, and in terms of the confinement of particles released close to the axis. Finally, we show that for the optimization problems we consider, a risk-averse objective using the Conditional Value-at-Risk leads to results similar to those of the risk-neutral objective.
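To make the distinction between the two objectives concrete (a toy sketch of ours, unrelated to the stellarator code; the loss function and error model are assumptions), the risk-neutral objective averages the loss over sampled errors, while the Rockafellar-Uryasev form of CVaR introduces an auxiliary variable t:

```python
# Sketch: risk-neutral vs. CVaR_alpha objectives for a toy design problem.
#   min_x   E[f(x, xi)]                                  (risk-neutral, SAA)
#   min_x,t t + E[(f(x, xi) - t)_+] / (1 - alpha)        (CVaR, Rockafellar-Uryasev)
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
xi = rng.normal(0.0, 0.3, size=2000)        # sampled "manufacturing errors"
alpha = 0.9

def f(x, xi):
    # Hypothetical loss: the error term penalizes aggressive designs (large x)
    # in the upper tail, so the risk-averse optimum shrinks x below 1.
    return (x - 1.0) ** 2 + xi * x

risk_neutral = minimize(lambda z: f(z[0], xi).mean(), x0=[0.0])

def cvar_obj(z):
    x, t = z
    return t + np.maximum(f(x, xi) - t, 0.0).mean() / (1.0 - alpha)

risk_averse = minimize(cvar_obj, x0=[0.0, 0.0], method="Nelder-Mead")

print("risk-neutral design :", round(risk_neutral.x[0], 3))   # ~1.0
print("CVaR_0.9 design     :", round(risk_averse.x[0], 3))    # noticeably smaller
```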


Complexity ◽  
2021 ◽  
Vol 2021 ◽  
pp. 1-8
Author(s):  
Jia-Tong Li ◽  
Jie Shen ◽  
Na Xu

For the nonsmooth CVaR (conditional value-at-risk) portfolio optimization problem, we propose an infeasible incremental bundle method based on the improvement function and on the main idea of incremental methods for solving convex finite min-max problems. The presented algorithm employs only the information of the objective function and of one component of the constraint functions to form the approximate model of the improvement function. By introducing an aggregation technique, we keep the information of previous iterates that would otherwise be deleted from the bundle, overcoming difficulties of numerical computation and storage. Our algorithm enforces neither the feasibility of the iterates nor the monotonicity of the objective function, and its global convergence is established under mild conditions. Compared with available results, our method relaxes the requirement of computing the whole constraint function, which makes the algorithm easier to implement.
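The improvement function at the current iterate x_k is typically H(y; x_k) = max{f(y) - f(x_k), g(y)}: minimizing it drives both objective decrease and feasibility. A minimal subgradient sketch on a toy problem (our illustration, far simpler than the bundle machinery in the paper):

```python
# Sketch: the improvement function H(y; x_k) = max(f(y) - f(x_k), g(y))
# for a toy nonsmooth problem  min f(x) s.t. g(x) <= 0, handled by plain
# switching subgradient steps (a stand-in for the incremental bundle method).
import numpy as np

f = lambda x: np.abs(x[0] - 2.0) + np.abs(x[1])   # nonsmooth objective
g = lambda x: x[0] + x[1] - 1.0                   # constraint g(x) <= 0

def subgrad_H(y, xk):
    # Subgradient of H(.; xk): pick the active branch of the max.
    if f(y) - f(xk) >= g(y):
        return np.sign(y - np.array([2.0, 0.0]))  # a subgradient of f
    return np.ones(2)                              # gradient of g

x = np.array([0.0, 0.0])
for k in range(500):
    step = 1.0 / (k + 1)                           # diminishing step size
    x = x - step * subgrad_H(x, x)                 # at y = xk the f-branch tests 0 >= g(x)
    # (iterates need not stay feasible, mirroring the "infeasible" method)
print("approx. solution:", np.round(x, 3), " g(x) =", round(g(x), 3))
```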


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Mohammad Khodabakhshi ◽  
Mehdi Ahmadi

Purpose The paper aims to present an approach to cost-benefit analysis with stochastic data. Determining the type and the values of the alternatives' factors is probably the most important issue in this approach. Therefore, in the proposed approach, a competitive advantage model is built to measure the values of the alternatives' factors. A satisfactory cost-benefit analysis model with random data is then proposed to evaluate the alternatives. The cost-benefit analysis of each alternative is carried out to obtain a cost-benefit result that is realistic and satisfactory to the decision-maker.

Design/methodology/approach The approach is expressed as a mathematical problem whose analysis requires solving an optimization problem. The paper is based on linear optimization under uncertainty. Optimization under uncertainty refers to the branch of optimization in which the data or the model involve uncertainties; such problems are commonly known as stochastic optimization problems.

Findings As stated in the Purpose section, this paper presents an approach to cost-benefit analysis using competitive advantage with stochastic data. In this regard, a stochastic optimization problem for assessing competitive advantage is proposed. This optimization problem identifies the values of the alternatives' factors, which is the most important step in cost-benefit analysis. An optimization problem for the cost-benefit analysis itself is proposed as well.

Practical implications To investigate different aspects of the proposed approach, a case study with random data on 21 economic projects is considered.

Originality/value Cost-benefit analysis is a systematic approach to estimating the strengths and weaknesses of alternatives, used to determine the options that best achieve benefits while preserving savings; it is closely related to cost-effectiveness analysis. Benefits and costs are expressed in monetary terms and adjusted for the time value of money: all flows of benefits and costs over time are expressed on a common basis in terms of their net present value, regardless of when they are incurred. The paper uses competitive advantage to determine the values of the alternatives' factors; since a competitive advantage model analyzes the advantages and disadvantages of alternatives, this idea is used here to determine the costs and benefits. Two stochastic optimization problems at the core of the approach are proposed, which assess competitive advantage and perform the cost-benefit analysis, respectively.
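A minimal sketch of cost-benefit comparison with random data (our illustration only; the cash flows, distributions, and discount rate are assumptions): each alternative's uncertain benefit flows are discounted to a net present value, and the distribution of the NPV is summarized for the decision-maker.

```python
# Sketch: Monte Carlo cost-benefit comparison of two hypothetical projects.
# Annual benefits are random; costs and the discount rate are fixed here.
import numpy as np

rng = np.random.default_rng(3)
rate, years, n_sim = 0.05, 5, 10_000
discount = 1.0 / (1.0 + rate) ** np.arange(1, years + 1)

def npv_samples(benefit_mean, benefit_sd, annual_cost, capex):
    benefits = rng.normal(benefit_mean, benefit_sd, size=(n_sim, years))
    net_flows = benefits - annual_cost            # per-year net benefit
    return net_flows @ discount - capex           # discounted, minus upfront cost

projects = {"A": npv_samples(120.0, 30.0, 40.0, 250.0),
            "B": npv_samples(100.0, 10.0, 35.0, 220.0)}

for name, npv in projects.items():
    # Report mean NPV plus a downside quantile, since the data are stochastic.
    print(f"project {name}: mean NPV = {npv.mean():7.1f}, "
          f"5%-quantile = {np.quantile(npv, 0.05):7.1f}")
```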


2016 ◽  
Vol 33 (1-2) ◽  
Author(s):  
Edgars Jakobsons

Abstract The statistical functional expectile has recently attracted the attention of researchers in the area of risk management, because it is the only risk measure that is both coherent and elicitable. In this article, we consider the portfolio optimization problem with an expectile objective. Portfolio optimization problems corresponding to other risk measures are often solved by formulating a linear program (LP) based on a sample of asset returns. We derive three different LP formulations for the portfolio expectile optimization problem, which can be considered as counterparts to the LP formulations for the Conditional Value-at-Risk (CVaR) objective in the works of Rockafellar and Uryasev.
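To illustrate the risk measure itself (our sketch; this shows the expectile's defining asymmetric least-squares problem, not the paper's LP formulations), the tau-expectile of a sample can be computed by iterating the first-order condition as a weighted-mean fixed point:

```python
# Sketch: computing the tau-expectile of a return sample by iterating the
# first-order condition of the asymmetric least-squares problem
#   e_tau = argmin_e  E[ tau*(X - e)_+^2 + (1 - tau)*(e - X)_+^2 ].
import numpy as np

def expectile(x, tau=0.8, tol=1e-10, max_iter=200):
    x = np.asarray(x, dtype=float)
    e = x.mean()                                  # tau = 0.5 gives exactly the mean
    for _ in range(max_iter):
        w = np.where(x > e, tau, 1.0 - tau)       # asymmetric weights
        e_new = np.average(x, weights=w)          # weighted-mean fixed point
        if abs(e_new - e) < tol:
            break
        e = e_new
    return e

rng = np.random.default_rng(4)
returns = rng.normal(0.01, 0.05, size=50_000)     # toy asset returns
print("mean         :", round(returns.mean(), 5))
print("0.8-expectile:", round(expectile(returns, 0.8), 5))
```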


Author(s):  
Kentaro Yaji ◽  
Shintaro Yamasaki ◽  
Shohji Tsushima ◽  
Kikuo Fujita

Abstract We propose a novel framework based on multi-fidelity design optimization for indirectly solving computationally hard topology optimization problems. The primary concept of the proposed framework is to divide an original topology optimization problem into two subproblems, i.e., low- and high-fidelity design optimization problems. Artificial design parameters, referred to as seeding parameters, are incorporated into the low-fidelity design optimization problem, which is formulated on the basis of a pseudo-topology optimization problem. The role of high-fidelity design optimization, meanwhile, is to obtain a promising initial guess from a dataset comprising topology-optimized design candidates and subsequently solve a surrogate optimization problem over a restricted design solution space. We apply the proposed framework to a topology optimization problem for the design of flow fields in battery systems and confirm its efficacy through numerical investigations.
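The two-stage division might look like the following control flow (a structural sketch under our own assumptions; every function body here is a hypothetical placeholder, not the authors' formulation):

```python
# Sketch: control flow of a two-stage multi-fidelity framework.
# All functions are hypothetical stand-ins for the paper's subproblems.
import numpy as np

rng = np.random.default_rng(5)

def solve_low_fidelity(seed_params):
    # Placeholder: a cheap pseudo-topology optimization driven by the
    # artificial "seeding" parameters; returns a candidate design vector.
    return np.clip(seed_params + 0.1 * rng.standard_normal(seed_params.size), 0, 1)

def high_fidelity_cost(design):
    # Placeholder for the expensive evaluation (e.g., a flow-field solve).
    return float(np.sum((design - 0.3) ** 2))

def refine_high_fidelity(design, steps=20, lr=0.1):
    # Placeholder surrogate optimization restricted to a neighborhood
    # of the chosen initial guess.
    for _ in range(steps):
        design = np.clip(design - lr * 2.0 * (design - 0.3), 0, 1)
    return design

# Stage 1: sweep seeding parameters, build a dataset of cheap candidates.
candidates = [solve_low_fidelity(rng.uniform(0, 1, size=8)) for _ in range(50)]

# Stage 2: pick the most promising candidate as initial guess, then refine.
best = min(candidates, key=high_fidelity_cost)
final = refine_high_fidelity(best)
print("cost of best candidate:", round(high_fidelity_cost(best), 4))
print("cost after refinement :", round(high_fidelity_cost(final), 4))
```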


2019 ◽  
Vol 28 (02) ◽  
pp. 1950007 ◽  
Author(s):  
Lev V. Utkin ◽  
Mikhail A. Ryabinin

This paper proposes a Discriminative Deep Forest (DisDF) as a metric learning algorithm. It is based on the Deep Forest, or gcForest, proposed by Zhou and Feng, and can be viewed as a gcForest modification. The fully supervised case is studied, in which the class labels of the individual training examples are known. The main idea underlying the algorithm is to assign weights to the decision trees in the random forest in order to reduce distances between objects from the same class and to increase them between objects from different classes. The weights are the training parameters. A specific objective function which combines Euclidean and Manhattan distances and simplifies the optimization problem for training the DisDF is proposed. Numerical experiments illustrate the proposed distance metric algorithm.
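The tree-weighting idea can be sketched as follows (a much-simplified illustration of ours, not the DisDF objective from the paper; the leaf-agreement distance and the projected-gradient update are assumptions):

```python
# Sketch: learn per-tree weights so that a forest-based distance is small
# within classes and large between classes (a simplified stand-in for the
# DisDF training objective).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=300, n_features=10, random_state=0)
forest = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
leaves = forest.apply(X)                          # (n_samples, n_trees) leaf ids

def pair_features(i, j):
    # Per-tree disagreement: 1 if the pair falls in different leaves.
    return (leaves[i] != leaves[j]).astype(float)

# Pair data: the weighted distance is d_w(i, j) = w . pair_features(i, j).
rng = np.random.default_rng(6)
pairs = [(i, j) for i in range(100) for j in rng.choice(300, 3, replace=False)]
F = np.array([pair_features(i, j) for i, j in pairs])
same = np.array([1.0 if y[i] == y[j] else -1.0 for i, j in pairs])

w = np.ones(F.shape[1]) / F.shape[1]
for _ in range(200):
    grad = (same[:, None] * F).mean(axis=0)       # shrink same-class, grow cross-class
    w = np.clip(w - 0.05 * grad, 0.0, None)       # projected gradient step
    w /= w.sum() + 1e-12                          # keep weights on the simplex

d = F @ w
print("mean same-class distance :", round(d[same > 0].mean(), 3))
print("mean cross-class distance:", round(d[same < 0].mean(), 3))
```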

