Integration of Second-Order Sensitivity Method and CoKriging Surrogate Model

Mathematics ◽  
2021 ◽  
Vol 9 (4) ◽  
pp. 401
Author(s):  
Zebin Zhang ◽  
Martin Buisson ◽  
Pascal Ferrand ◽  
Manuel Henner

The global exploration capability of a surrogate model makes it a useful intermediary for design optimization, and the accuracy of the surrogate model is closely tied to the efficiency of the optimum search. The cokriging approach described in the present study can significantly improve surrogate model accuracy and cut the turnaround time spent on the modeling process. Compared with the universal Kriging method, the cokriging method interpolates not only the sampled data but also their associated derivatives. However, the derivatives, especially high-order ones, are too computationally costly to be easily affordable, forming a bottleneck for the application of derivative-enhanced methods. Based on a sensitivity analysis of the Navier–Stokes equations, the current study introduces a low-cost method to compute high-order derivatives, making high-order derivative-enhanced cokriging modeling practically achievable. As a methodological illustration, second-order derivatives of the regression and correlation models are proposed. A second-order derivative-enhanced cokriging-based optimization tool was developed and tested on the optimal design of an automotive engine cooling fan. This approach improves modern design optimization efficiency and suggests a new direction for large-scale optimization problems.
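
The derivative-interpolation idea is easiest to see in one dimension. Below is a minimal zero-mean (simple) gradient-enhanced cokriging sketch with a Gaussian correlation function; it is an illustration only. The kernel choice, the length-scale `theta`, and the sin test function are assumptions, and the paper's method extends this first-order construction to second-order derivatives obtained from Navier–Stokes sensitivity analysis.

```python
import numpy as np

def gauss_corr(r, theta):
    """Gaussian correlation k = exp(-theta*r^2) and its cross-derivatives."""
    k = np.exp(-theta * r**2)
    dk = 2.0 * theta * r * k                              # d k / d x'
    d2k = 2.0 * theta * (1.0 - 2.0 * theta * r**2) * k    # d2 k / dx dx'
    return k, dk, d2k

def cokrige(xs, ys, gs, xq, theta=4.0, nugget=1e-10):
    """Predict at xq from sampled values ys and sampled derivatives gs (1-D)."""
    r = xs[:, None] - xs[None, :]
    k, dk, d2k = gauss_corr(r, theta)
    # Block covariance over [values; gradients]; dk is antisymmetric in r,
    # so the matrix below is symmetric.
    K = np.block([[k, dk], [-dk, d2k]]) + nugget * np.eye(2 * xs.size)
    w = np.linalg.solve(K, np.concatenate([ys, gs]))
    rq = xq[:, None] - xs[None, :]
    kq, dkq, _ = gauss_corr(rq, theta)
    return np.concatenate([kq, dkq], axis=1) @ w

# Five samples of sin(3x) plus exact derivatives approximate the function well.
xs = np.linspace(0.0, 2.0, 5)
xq = np.linspace(0.0, 2.0, 21)
pred = cokrige(xs, np.sin(3 * xs), 3 * np.cos(3 * xs), xq)
print(np.max(np.abs(pred - np.sin(3 * xq))))   # max interpolation error
```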

2010 ◽  
Vol 19 (01) ◽  
pp. 45-58 ◽  
Author(s):  
SAJAD NAJAFI RAVADANEGH ◽  
ARASH VAHIDNIA ◽  
HOJAT HATAMI

Optimal planning of large-scale distribution networks is a multiobjective combinatorial optimization problem with many complexities. This paper proposes an improved genetic algorithm (GA) for the optimal design of large-scale distribution systems, providing optimal sizing and siting of high-voltage (HV) substations and routing of medium-voltage (MV) feeders, using their corresponding fixed and variable costs together with operational and optimization constraints. The novel approach presented in the paper solves hard, highly constrained optimization problems in large-scale distribution networks. The paper presents a new concept based on the minimum spanning tree (MST) from graph theory combined with a GA for optimally locating HV substations and routing MV feeders in a real-size distribution network. An MST computed with Prim's algorithm is employed to generate a set of feasible initial individuals. To reduce the computational burden and avoid a huge search space that leads to infeasible solutions, a special encoding is devised for the GA operators that solve the optimal feeder routing. The proposed encoding guarantees the validity of solutions as the GA progresses toward the global optimum. The developed GA-based software is tested on a real-size, large-scale distribution system, and satisfactory results are presented.
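
To make the MST-based initialization concrete, here is a short Prim's algorithm sketch. The dense cost matrix standing in for feeder-routing costs is purely hypothetical, and the paper's encoding of MST edges into GA chromosomes is not reproduced.

```python
import heapq

def prim_mst(cost):
    """Prim's algorithm on a dense, symmetric cost matrix.
    Returns the MST edge list and total cost; cost[u][v] stands in for a
    (hypothetical) MV-feeder routing cost between nodes u and v."""
    n = len(cost)
    visited = [False] * n
    pq = [(0.0, 0, -1)]                 # (edge cost, node, parent)
    edges, total = [], 0.0
    while pq:
        c, u, parent = heapq.heappop(pq)
        if visited[u]:
            continue                     # stale entry for an already-added node
        visited[u] = True
        if parent >= 0:
            edges.append((parent, u))
            total += c
        for v in range(n):
            if not visited[v] and v != u:
                heapq.heappush(pq, (cost[u][v], v, u))
    return edges, total

cost = [[0, 2, 9, 4],
        [2, 0, 3, 8],
        [9, 3, 0, 1],
        [4, 8, 1, 0]]
print(prim_mst(cost))   # ([(0, 1), (1, 2), (2, 3)], 6.0)
```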


Author(s):  
P. K. KAPUR ◽  
ANU. G. AGGARWAL ◽  
KANICA KAPOOR ◽  
GURJEET KAUR

The demand for complex, large-scale software systems is increasing rapidly, so the development of high-quality, reliable, and low-cost computer software has become a critical issue in the enormous worldwide computer technology market. To develop such large and complex software, small independent modules are integrated and tested independently during the module-testing phase of software development. In the process, testing resources such as time and testing personnel are consumed, and these resources are not unlimited. Consequently, it is important for the project manager to allocate these limited resources optimally among the modules during the testing process. Another major concern in software development is cost: it is, in effect, profit to management if the cost of the software is reduced while still meeting customer requirements. In this paper, we investigate an optimal resource allocation problem that minimizes the cost of software testing under a limited amount of available resources, given a reliability constraint. To solve the optimization problem we present a genetic algorithm, a powerful tool for search and optimization problems. A key reason for using a genetic algorithm in the field of software reliability is its ability to find near-optimal solutions by learning from historical data. A numerical example is discussed to illustrate the applicability of the approach.
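
A minimal GA sketch of this allocation problem appears below. The module parameters, the exponential fault-detection model, and the penalty weights are all invented for illustration; the paper's actual cost and reliability models are not reproduced here.

```python
import math, random

random.seed(1)

# Hypothetical modules: a = expected faults, b = detection rate, c = unit cost.
a = [120.0, 80.0, 150.0]
b = [0.04, 0.06, 0.03]
c = [1.0, 1.5, 0.8]
BUDGET, R_TARGET, PENALTY = 300.0, 0.90, 1e4

def reliability(x):
    # Assumed surrogate: reliability decays with the residual fault content.
    residual = sum(ai * math.exp(-bi * xi) for ai, bi, xi in zip(a, b, x))
    return math.exp(-residual / 100.0)

def fitness(x):
    """Lower is better: testing cost plus penalties for violating the
    reliability target or the resource budget."""
    cost = sum(ci * xi for ci, xi in zip(c, x))
    viol = max(0.0, R_TARGET - reliability(x)) + max(0.0, sum(x) - BUDGET)
    return cost + PENALTY * viol

def evolve(pop_size=60, gens=200):
    pop = [[random.uniform(0, BUDGET / 3) for _ in a] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        parents = pop[: pop_size // 2]            # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            p, q = random.sample(parents, 2)      # blend crossover + mutation
            child = [(pi + qi) / 2 + random.gauss(0, 5) for pi, qi in zip(p, q)]
            children.append([max(0.0, xi) for xi in child])
        pop = parents + children
    return min(pop, key=fitness)

best = evolve()
print([round(x, 1) for x in best], round(reliability(best), 3))
```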


2020 ◽  
Author(s):  
Qizhuang Cen ◽  
Tengfei Hao ◽  
Hao Ding ◽  
Shanhong Guan ◽  
Zhiqiang Qin ◽  
...  

Abstract Ising machines based on analog systems have the potential to accelerate the solution of ubiquitous combinatorial optimization problems. Although artificial spins that support large-scale Ising machines have been reported, e.g., superconducting qubits in quantum annealers and short optical pulses in coherent Ising machines, the spin coherence is fragile because of the ultra-low equivalent temperature or the sensitivity to optical phase. In this paper, we propose using short microwave pulses generated by an optoelectronic parametric oscillator as the spins, implementing an Ising machine that is both large-scale and highly coherent at room temperature. The proposed machine supports 10,000 spins, and the high coherence leads to accurate computation. Moreover, the Ising machine is highly compatible with high-speed electronic devices for programmability, paving a low-cost, accurate, and easy-to-implement way toward solving real-world optimization problems.
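
The computation such a machine performs physically can be emulated in software. The following simulated-annealing sketch of the underlying Ising minimization, min over s in {-1,+1}^n of E(s) = -(1/2) s^T J s, uses an invented random coupling matrix; it illustrates the problem class the proposed optoelectronic hardware solves, not the hardware itself.

```python
import numpy as np

rng = np.random.default_rng(0)

def anneal_ising(J, steps=20000, t_hot=2.0, t_cold=0.05):
    """Metropolis single-spin-flip annealing for E(s) = -0.5 * s^T J s
    (J symmetric with zero diagonal)."""
    n = J.shape[0]
    s = rng.choice([-1, 1], size=n)
    for t in range(steps):
        temp = t_hot * (t_cold / t_hot) ** (t / steps)   # geometric cooling
        i = rng.integers(n)
        dE = 2.0 * s[i] * (J[i] @ s)          # energy change if spin i flips
        if dE < 0 or rng.random() < np.exp(-dE / temp):
            s[i] = -s[i]
    return s, -0.5 * s @ J @ s

# Random symmetric couplings with zero diagonal (a stand-in problem instance).
n = 64
J = rng.normal(size=(n, n))
J = (J + J.T) / 2.0
np.fill_diagonal(J, 0.0)
spins, energy = anneal_ising(J)
print(energy)
```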


2021 ◽  
Author(s):  
Taozeng Zhu ◽  
Jingui Xie ◽  
Melvyn Sim

Many real-world optimization problems have input parameters estimated from data whose inherent imprecision can lead to fragile solutions that may impede desired objectives and/or render constraints infeasible. We propose a joint estimation and robustness optimization (JERO) framework to mitigate estimation uncertainty in optimization problems by seamlessly incorporating both the parameter estimation procedure and the optimization problem. Toward that end, we construct an uncertainty set that incorporates all of the data, and the size of the uncertainty set is based on how well the parameters are estimated from that data when using a particular estimation procedure: regressions, the least absolute shrinkage and selection operator, and maximum likelihood estimation (among others). The JERO model maximizes the uncertainty set’s size and so obtains solutions that—unlike those derived from models dedicated strictly to robust optimization—are immune to parameter perturbations that would violate constraints or lead to objective function values exceeding their desired levels. We describe several applications and provide explicit formulations of the JERO framework for a variety of estimation procedures. To solve the JERO models with exponential cones, we develop a second-order conic approximation that limits errors beyond an operating range; with this approach, we can use state-of-the-art second-order conic programming solvers to solve even large-scale convex optimization problems. This paper was accepted by J. George Shanthikumar, special issue on data-driven prescriptive analytics.
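
The mechanics of "maximize the uncertainty set's size subject to the targets remaining achievable" can be shown on a toy linear program. The sketch below bisects on the box-uncertainty radius rho of a single resource constraint; the numbers, the box uncertainty set, and the use of scipy's LP solver are all assumptions for illustration and do not reproduce the paper's exponential-cone formulations.

```python
import numpy as np
from scipy.optimize import linprog

# Toy data: returns r, estimated resource use a_hat, budget b, return target tau.
r = np.array([1.0, 1.4, 1.2])
a_hat = np.array([0.10, 0.20, 0.15])
b, tau = 0.25, 1.2

def robust_feasible(rho):
    """Box uncertainty |a - a_hat|_inf <= rho with x >= 0 gives the robust
    constraint (a_hat + rho*1)^T x <= b. Check whether some x >= 0 attains
    r^T x >= tau under it, by minimizing the worst-case resource use."""
    res = linprog(c=a_hat + rho, A_ub=[-r], b_ub=[-tau],
                  bounds=[(0, None)] * r.size, method="highs")
    return res.status == 0 and res.fun <= b

lo, hi = 0.0, 1.0                # bisect for the largest feasible radius
for _ in range(50):
    mid = (lo + hi) / 2.0
    lo, hi = (mid, hi) if robust_feasible(mid) else (lo, mid)
print(round(lo, 4))              # largest rho still meeting the return target
```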


Geophysics ◽  
2016 ◽  
Vol 81 (2) ◽  
pp. F1-F15 ◽  
Author(s):  
Ludovic Métivier ◽  
Romain Brossier

The SEISCOPE optimization toolbox is a set of FORTRAN 90 routines that implement first-order methods (steepest descent and nonlinear conjugate gradient) and second-order methods (l-BFGS and truncated Newton) for the solution of large-scale nonlinear optimization problems. An efficient line-search strategy ensures the robustness of these implementations. The routines are provided as black boxes that are easy to interface with any computational code in which such large-scale minimization problems must be solved. Traveltime tomography, least-squares migration, and full-waveform inversion are examples of such problems in geophysics. Integrating the toolbox for solving this class of problems has two advantages. First, thanks to the reverse-communication protocol, it separates the routines that depend on the physics of the problem from those related to the minimization itself, which enhances flexibility in code development and maintenance. Second, it allows users to switch easily between optimization algorithms; in particular, it reduces the complexity of implementing second-order methods. Because the latter enjoy faster convergence rates than first-order methods, significant savings in computational effort can be expected.
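
The reverse-communication pattern is worth sketching: the optimizer never calls the user's code; instead it returns control and asks the caller for misfit and gradient values. The toy fixed-step steepest-descent loop below only illustrates that control flow in Python; it is not the toolbox's actual FORTRAN 90 interface, which also drives a line search.

```python
import numpy as np

class SteepestDescentRC:
    """Toy reverse-communication optimizer: the caller owns the physics."""
    def __init__(self, x0, step=0.1, tol=1e-6, max_iter=1000):
        self.x = np.array(x0, dtype=float)
        self.step, self.tol, self.max_iter = step, tol, max_iter
        self.it, self.flag = 0, "NEED_GRAD"     # ask caller for f and gradient

    def iterate(self, fval, grad):
        """Advance one step; fval is unused here because the step is fixed
        (a real implementation would use it in the line search)."""
        if np.linalg.norm(grad) < self.tol or self.it >= self.max_iter:
            self.flag = "CONVERGED"
        else:
            self.x -= self.step * grad          # descent update
            self.it += 1
        return self.x

# Driver loop: the user evaluates the misfit and gradient, not the optimizer.
opt = SteepestDescentRC([3.0, -2.0])
x = opt.x
while opt.flag == "NEED_GRAD":
    fval, grad = np.sum(x**2), 2.0 * x          # user-side toy misfit f = |x|^2
    x = opt.iterate(fval, grad)
print(opt.it, x)
```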


2015 ◽  
Vol 18 (4) ◽  
pp. 985-1011 ◽  
Author(s):  
Liang Pan ◽  
Kun Xu

Abstract In this paper, a compact third-order gas-kinetic scheme is proposed for the compressible Euler and Navier–Stokes equations. The development of such a high-order scheme with a compact stencil, involving only neighboring cells, is made feasible by the use of a high-order gas evolution model. Besides evaluating the time-dependent flux function across a cell interface, the high-order gas evolution model also provides an accurate time-dependent solution for the flow variables at the interface. Therefore, the current scheme not only updates the cell-averaged conservative flow variables inside each control volume but also tracks the flow variables at the cell interface at the next time level. As a result, with both cell-averaged and cell-interface values, the high-order reconstruction in the current scheme can be done compactly. Unlike the weak formulation used for high-order accuracy in the discontinuous Galerkin method, the current scheme is based on the strong solution, in which the flow evolution starting from piecewise-discontinuous high-order initial data is followed precisely. The time-dependent flow variables at the cell interface can be used as initial data for the reconstruction at the beginning of the next time step. Even with a compact stencil, the current scheme achieves third-order accuracy in smooth flow regions and has favorable shock-capturing properties in discontinuous regions. It can be used faithfully from the incompressible limit to hypersonic flow computations, and many test cases are used to validate it. In comparison with many other high-order schemes, the current method avoids Gaussian quadrature points for the flux evaluation along the cell interface and multi-stage Runge–Kutta time stepping. Owing to its multidimensional property of including derivatives of the flow variables in both the normal and tangential directions of a cell interface, viscous flow solutions, especially those with vortex structures, can be captured accurately. With the same stencil as a second-order scheme, numerical tests demonstrate that the current scheme is as robust as well-developed second-order shock-capturing schemes but provides more accurate numerical solutions than its second-order counterparts.
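
The compactness argument can be illustrated in one dimension: a cell average plus the two interface values determine a quadratic, i.e., third-order, reconstruction using no neighboring cells at all. The short sketch below illustrates this counting argument only (it is not the paper's full gas-kinetic scheme): it solves for the quadratic on a normalized cell and checks it against a smooth function.

```python
import numpy as np

def compact_quadratic(avg, w_left, w_right):
    """Quadratic p(xi) = a + b*xi + c*xi^2 on xi in [-1/2, 1/2] matching
    three conditions on one cell:
        integral of p over the cell = avg,  p(-1/2) = w_left,  p(+1/2) = w_right."""
    c = 3.0 * (w_left + w_right) - 6.0 * avg
    b = w_right - w_left
    a = avg - c / 12.0
    return a, b, c

# Check on f = sin over the cell [0, h]: the reconstructed center value
# p(0) = a should match f(h/2) to high order.
h = 0.1
avg = (np.cos(0.0) - np.cos(h)) / h        # exact cell average of sin
a, b, c = compact_quadratic(avg, np.sin(0.0), np.sin(h))
print(a, np.sin(h / 2))                    # 0.04997... vs 0.04997...
```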


Author(s):  
Paul Cronin ◽  
Harry Woerde ◽  
Rob Vasbinder
