Sensitivity analysis with dynamic programming

1984 ◽  
Vol 11 (1) ◽  
pp. 127-130 ◽  
Author(s):  
S. O. Denis Russell

A method is presented for performing a sensitivity analysis that determines how much the total cost increases as one moves away from the optimal solution obtained by dynamic programming. It is illustrated with a simple storm-drain optimization problem. Key words: dynamic programming, sensitivity analysis, optimization.

Author(s):  
R. Giancarlo

In this chapter we present some general algorithmic techniques that have proved useful in speeding up the computation of several families of dynamic programming recurrences with applications in sequence alignment, paragraph formation, and prediction of RNA secondary structure. The material presented here is related to the computation of Levenshtein distances and approximate string matching, discussed in the previous three chapters.

Dynamic programming is a general technique for solving discrete optimization (minimization or maximization) problems that can be represented by decision processes and for which the principle of optimality holds. We can view a decision process as a directed graph in which nodes represent the states of the process and edges represent decisions. The optimization problem at hand is represented as a decision process by decomposing it into a set of subproblems of smaller size. This recursive decomposition is continued until only trivial subproblems remain, which can be solved directly. Each node in the graph corresponds to a subproblem, and each edge (a, b) indicates that one way to solve subproblem a optimally is to first solve subproblem b optimally. An optimal solution, or policy, is then typically given by a path on the graph that minimizes or maximizes some objective function.

The correctness of this approach is guaranteed by the principle of optimality, which must be satisfied by the optimization problem: an optimal policy has the property that whatever the initial node (state) and initial edge (decision) are, the remaining edges (decisions) must form an optimal policy with regard to the node (state) resulting from the first transition. A further consequence of the principle of optimality is that we can express the optimal cost (and solution) of a subproblem in terms of the optimal costs (and solutions) of problems of smaller size; that is, we can express optimal costs through a recurrence relation. This is a key component of dynamic programming, since we can compute the optimal cost of each subproblem only once, store the result in a table, and look it up when needed.
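As a small illustration of this table-driven scheme, the Levenshtein distance recurrence mentioned above can be memoized directly. The following Python sketch is the generic textbook recurrence, not the sped-up variants discussed in the chapter:

```python
from functools import lru_cache

def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance via the classic DP recurrence.

    Each subproblem (i, j) is the distance between prefixes a[:i] and
    b[:j]; the principle of optimality lets us express it through the
    optimal costs of smaller prefixes, computed once and cached.
    """
    @lru_cache(maxsize=None)
    def d(i: int, j: int) -> int:
        if i == 0:            # trivial subproblem: insert all of b[:j]
            return j
        if j == 0:            # trivial subproblem: delete all of a[:i]
            return i
        sub = 0 if a[i - 1] == b[j - 1] else 1
        return min(d(i - 1, j) + 1,        # deletion
                   d(i, j - 1) + 1,        # insertion
                   d(i - 1, j - 1) + sub)  # substitution or match
    return d(len(a), len(b))

print(edit_distance("kitten", "sitting"))  # 3
```

Without the cache the recursion is exponential; with it, each of the O(|a||b|) subproblems is solved once, exactly as the recurrence-plus-table view above prescribes.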


2020 ◽  
Vol 30 (4) ◽  
pp. 525-542
Author(s):  
Chandra Jaggi ◽  
Jitendra Singh

Our environment can be hit by disasters such as earthquakes, floods, tornadoes, and hurricanes, and when one occurs, organizations (private or governmental) begin to provide facilities for evacuation and for the supply of relief commodities in the affected regions. This paper formulates an inventory-relief-chain (IRC) model with a relief supply chain for relief commodity distribution. The results of the model provide the optimal number of intermediate distribution and relief centers; in the study, every center is considered to distribute relief commodities. The model is formulated with the core objective of optimizing the total cost of the relief operation. An algorithm is proposed to find the optimal solution of the model, and a numerical analysis is carried out on a numerical example to validate the efficiency of the algorithm. A sensitivity analysis is performed on the optimal solution with respect to the model's parameters, showing the extent to which the objective is sensitive to these parameters and thereby demonstrating the strength of the model. The model can help relief organizations manage relief supplies to disaster areas.


2020 ◽  
Vol 11 (3) ◽  
pp. 120-132
Author(s):  
Fazilet Özer ◽  
Ismail Hakki Toroslu ◽  
Pinar Karagoz

In the automated teller machine (ATM) cash replenishment problem, banks aim to reduce the number of out-of-cash ATMs and the duration of out-of-cash status, while also reducing the cost of cash replenishment. The problem conventionally involves forecasting ATM cash withdrawals and then optimizing cash replenishment based on the forecast. The authors assume that reliable forecasts of the amount of cash needed in ATMs are already available; the focus of the article is replenishment optimization. After introducing linear programming-based solutions, the authors propose a solution based on dynamic programming. Experiments conducted on real data reveal that the proposed approach can find the optimal solution more efficiently than linear programming.
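The article's formulation is not given here, but the flavor of a dynamic programming approach to replenishment can be sketched with a deliberately simplified model: each refill incurs a fixed cost, and cash loaded early incurs a linear holding cost for each day it sits in the ATM. The model and all parameters below are hypothetical, not the authors' formulation:

```python
def replenishment_plan(demand, fixed_cost, holding_cost):
    """Minimal DP sketch (Wagner-Whitin style) for a simplified
    replenishment model.  demand[t] is the forecast withdrawal on day t;
    each replenishment costs `fixed_cost`, and cash loaded on day s to
    cover day d costs holding_cost per unit per day of waiting.
    Returns the minimum total cost over the horizon.
    """
    n = len(demand)
    INF = float("inf")
    best = [0.0] + [INF] * n   # best[t] = min cost to cover days 0..t-1
    for t in range(1, n + 1):
        for s in range(t):     # last replenishment happens on day s
            hold = sum(demand[d] * (d - s) * holding_cost
                       for d in range(s, t))
            best[t] = min(best[t], best[s] + fixed_cost + hold)
    return best[n]
```

The recurrence conditions on the day of the last refill, so each prefix of the horizon is an independent subproblem, which is exactly the structure dynamic programming exploits.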


Author(s):  
Empya Charlie ◽  
Siti Rusdiana ◽  
Rini Oktavia

This research aims to optimize the scheduling of employees at CV. Karya Indah Bordir in performing certain tasks using the Hungarian method, and to analyze the sensitivity of the optimal solution when the time employees need to finish the tasks is reduced. The Hungarian method was applied to the embroidery process, involving 11 employees and 10 tasks. The optimal schedule minimizes the company's embroidery production time: employee 1 works on the Mambo bag, employee 2 the Elli bag, employee 3 the Lonjong bag, employee 4 the Tampang bunga bag, employee 6 the Ransel bag, employee 7 the Tima bag, employee 8 the Keong bag, employee 9 the Alexa bag, employee 10 the Luna bag, and employee 11 the Mikha bag, for a total work time of 13.7 hours. After the Hungarian method was applied, CV. Karya Indah Bordir saw a revenue increase of 9.09%. The sensitivity analysis was conducted by reducing the time employees take to embroider the bags; its results are bounds on the basic and non-basic variables within which the optimal solution is maintained.
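For readers who want to reproduce an assignment of this kind, SciPy's `linear_sum_assignment` solves the same optimal assignment problem as the Hungarian method, including rectangular cost matrices such as the study's 11 employees versus 10 tasks. The matrix below is a small hypothetical stand-in for the study's table of embroidery times:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical 4x3 cost matrix (hours per employee-task pair) standing in
# for the article's 11x10 embroidery-time matrix.  With more rows than
# columns, one employee is left unassigned, as in the study.
hours = np.array([
    [4.0, 2.0, 8.0],
    [4.0, 3.0, 7.0],
    [3.0, 1.0, 6.0],
    [5.0, 2.0, 9.0],
])
rows, cols = linear_sum_assignment(hours)
print(list(zip(rows, cols)))
print(hours[rows, cols].sum())  # minimum total time: 12.0 hours
```

A sensitivity analysis of the kind described above would then perturb individual entries of `hours` and check whether the optimal assignment (and its total) changes.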


Author(s):  
Alexander D. Bekman ◽  
Sergey V. Stepanov ◽  
Alexander A. Ruchkin ◽  
Dmitry V. Zelenin

The quantitative evaluation of producer and injector well interference based on well operation data (profiles of flow rates/injectivities and bottomhole/reservoir pressures) with the help of CRM (Capacitance-Resistive Models) is an optimization problem with a large set of variables and constraints. An analytical solution cannot be found because of the complex form of the objective function. Attempts to find the solution with stochastic algorithms take unacceptable time, and the result may be far from the optimal solution. Besides, the use of universal (commercial) optimizers hides the details of the step-by-step solution from the user, for example, the ambiguity of the solution resulting from data inaccuracy.

The present article concerns two variants of the CRM problem. The authors present a new algorithm for solving these problems with the help of a "General Quadratic Programming Algorithm". The main advantage of the new algorithm is its greater performance in comparison with other known algorithms; another advantage is the possibility of ambiguity analysis. The article studies the conditions which guarantee that the first variant of the problem has a unique solution, which can be found with the presented algorithm. Another algorithm, for finding an approximate solution to the second variant of the problem, is also considered. A method for visualizing the set of approximate solutions is presented, and the results of experiments comparing the new algorithm with previously known ones are given.
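To convey the shape of the fitting step, here is an illustrative, heavily simplified sketch: estimating nonnegative interwell connectivities for a single producer from injection-rate history by nonnegative least squares. Time constants, pressure terms, and the full CRM constraint structure are omitted, and this is not the authors' General Quadratic Programming Algorithm; the data are synthetic:

```python
import numpy as np
from scipy.optimize import nnls

# Synthetic example: producer rate is a nonnegative mix of 3 injectors'
# rates plus noise; recover the mixing coefficients (connectivities)
# under the constraint f >= 0 via nonnegative least squares.
rng = np.random.default_rng(0)
inj = rng.uniform(50, 150, size=(200, 3))     # injection-rate history
f_true = np.array([0.5, 0.3, 0.0])            # "true" connectivities
prod = inj @ f_true + rng.normal(0, 1, 200)   # noisy producer rate
f_hat, resid = nnls(inj, prod)
print(np.round(f_hat, 2))
```

The ambiguity the article analyzes corresponds, in this toy setting, to near-collinear columns of `inj`: several connectivity vectors can then fit the noisy data almost equally well.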


Mathematics ◽  
2021 ◽  
Vol 9 (4) ◽  
pp. 303
Author(s):  
Nikolai Krivulin

We consider a decision-making problem to evaluate absolute ratings of alternatives from the results of their pairwise comparisons according to two criteria, subject to constraints on the ratings. We formulate the problem as a bi-objective optimization problem of constrained matrix approximation in the Chebyshev sense in logarithmic scale. The problem is to approximate the pairwise comparison matrices for each criterion simultaneously by a common consistent matrix of unit rank, which determines the vector of ratings. We represent and solve the optimization problem in the framework of tropical (idempotent) algebra, which deals with the theory and applications of idempotent semirings and semifields. The solution involves the introduction of two parameters that represent the minimum values of approximation error for each matrix and thereby describe the Pareto frontier for the bi-objective problem. The optimization problem then reduces to a parametrized vector inequality. The necessary and sufficient conditions for solutions of the inequality serve to derive the Pareto frontier for the problem. All solutions of the inequality, which correspond to the Pareto frontier, are taken as a complete Pareto-optimal solution to the problem. We apply these results to the decision problem of interest and present illustrative examples.
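A minimal sketch of the idempotent arithmetic underlying such a framework is the max-plus semiring, where "addition" is max and "multiplication" is ordinary +. The paper works in a multiplicative (log-scale Chebyshev) setting, so this is an analogy for the flavor of tropical matrix algebra rather than the paper's exact construction:

```python
import numpy as np

def maxplus_matvec(A, x):
    """Max-plus 'matrix-vector product': entry i is
    max_j (A[i, j] + x[j]), i.e. sums replace products and
    max replaces the sum of ordinary linear algebra."""
    return np.max(A + x[np.newaxis, :], axis=1)

A = np.array([[0.0, 2.0],
              [1.0, 3.0]])
x = np.array([1.0, 0.0])
print(maxplus_matvec(A, x))  # [2. 3.]
```

Addition (max) is idempotent, max(a, a) = a, which is the defining property of the idempotent semirings the paper's solution method is built on.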


2021 ◽  
Vol 17 (4) ◽  
pp. 1-20
Author(s):  
Serena Wang ◽  
Maya Gupta ◽  
Seungil You

Given a classifier ensemble and a dataset, many examples may be confidently and accurately classified after only a subset of the base models in the ensemble has been evaluated. Dynamically deciding to classify early can reduce both mean latency and CPU usage without harming the accuracy of the original ensemble. To achieve such gains, we propose jointly optimizing the evaluation order of the base models and the early-stopping thresholds. Our proposed objective is a combinatorial optimization problem, but we provide a greedy algorithm that achieves a 4-approximation of the optimal solution under certain assumptions, which is also the best achievable polynomial-time approximation bound. Experiments on benchmark and real-world problems show that the proposed Quit When You Can (QWYC) algorithm can speed up average evaluation time by 1.8–2.7 times, even on jointly trained ensembles, which are more difficult to speed up than independently or sequentially trained ensembles. QWYC's joint optimization of ordering and thresholds also performed better in experiments than previous fixed orderings, including gradient boosted trees' ordering.
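The evaluation-time mechanics can be sketched for an additive ensemble: after each base model, stop as soon as the running score clears a per-stage confidence band. The ordering and thresholds below are assumed inputs; QWYC's contribution is choosing them jointly with a greedy procedure, which this sketch does not reproduce:

```python
def early_exit_predict(models, order, lo, hi, x):
    """Early-exit evaluation of an additive scoring ensemble.

    models: list of callables x -> float (additive score contributions).
    order:  evaluation order of model indices.
    lo, hi: per-stage thresholds; after the k-th evaluated model, quit
            early if the partial score falls outside (lo[k], hi[k]).
    Returns (predicted label, number of models actually evaluated).
    """
    score = 0.0
    for k, idx in enumerate(order):
        score += models[idx](x)
        if score <= lo[k]:
            return 0, k + 1          # confidently negative: quit early
        if score >= hi[k]:
            return 1, k + 1          # confidently positive: quit early
    return (1 if score >= 0 else 0), len(order)
```

Easy examples exit after one or two models, while borderline examples fall through to the full ensemble, which is how the mean latency drops without changing the hard decisions.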


Author(s):  
Amin Hosseini ◽  
Touraj Taghikhany ◽  
Milad Jahangiri

In the past few years, many studies have proved the efficiency of Simple Adaptive Control (SAC) in mitigating earthquake damage to building structures. Nevertheless, the weighting matrices of this controller must be selected after a large number of sensitivity analyses, a step that is time-consuming and does not necessarily yield a controller with optimum performance. In the current study, an innovative method is introduced for tuning the SAC weighting matrices which dispenses with excessive sensitivity analysis. To this end, we define an optimization problem using an intelligent evolutionary algorithm and utilize control indices in the objective function. The efficiency of the introduced method is investigated on a 6-story building structure equipped with magnetorheological dampers under different seismic actions, with and without uncertainty in the structural model. The results indicate that a controller designed by the introduced method performs well under different conditions of model uncertainty, and that it improves the seismic performance of the structure compared to controllers designed through sensitivity analysis.
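The core idea of replacing grid-style sensitivity studies with an evolutionary search can be sketched generically. The objective `J` below is a stand-in (in the study it would be control indices computed from simulated seismic response), and the simple (1+1) evolution strategy is a generic choice, not the authors' specific algorithm:

```python
import random

def tune_weights(J, w0, sigma=0.5, iters=200, seed=1):
    """(1+1) evolution strategy: perturb the current weight vector with
    Gaussian noise and keep the candidate only if it lowers J.
    Returns the best weights found and their objective value."""
    rng = random.Random(seed)
    w, best = list(w0), J(w0)
    for _ in range(iters):
        cand = [wi + rng.gauss(0, sigma) for wi in w]
        val = J(cand)
        if val < best:
            w, best = cand, val
    return w, best

# Toy objective with optimum at w = (2, -1), standing in for a
# simulation-based control index.
J = lambda w: (w[0] - 2) ** 2 + (w[1] + 1) ** 2
w_opt, J_opt = tune_weights(J, [0.0, 0.0])
```

Each objective evaluation replaces one run of the sensitivity study, and the search allocates those runs adaptively instead of on a fixed grid.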


Author(s):  
Guang Dong ◽  
Zheng-Dong Ma ◽  
Gregory Hulbert ◽  
Noboru Kikuchi ◽  
Sudhakar Arepally ◽  
...  

Efficient and reliable sensitivity analyses are critical for topology optimization, especially for multibody dynamics systems, because of the large number of design variables and the complexity and expense of solving the state equations. This research addresses a general and efficient sensitivity analysis method for topology optimization with design objectives associated with time-dependent dynamic responses of multibody dynamics systems that include nonlinear geometric effects associated with large translational and rotational motions. An iterative sensitivity analysis relation is proposed, based on typical finite difference methods for the differential algebraic equations (DAEs). These iterative equations can be simplified in specific cases to obtain more efficient sensitivity analysis methods. Since finite difference methods are general and widely used, the iterative sensitivity analysis is applicable to various numerical solution approaches. The proposed method is demonstrated on a truss structure topology optimization problem in which the dynamic response includes large translational and rotational motions. The topology optimization problem for the general truss structure is formulated using the SIMP (Solid Isotropic Material with Penalization) assumption for the design variables associated with each truss member. It is shown that the proposed iterative sensitivity analysis method is both reliable and efficient.
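The finite-difference idea underlying such sensitivities can be shown in isolation. The abstract's method is an iterative relation coupled to the DAE time integration; the sketch below only illustrates the basic central-difference gradient of a scalar response with respect to design variables, with a made-up response function:

```python
def fd_sensitivity(g, rho, h=1e-6):
    """Central-difference gradient of scalar response g with respect to
    design variables rho: d g / d rho_i ~ (g(rho + h e_i) - g(rho - h e_i)) / 2h.
    In topology optimization, each g evaluation would be a full
    (expensive) dynamics solve, which motivates the article's more
    efficient iterative relation."""
    base = list(rho)
    grad = []
    for i in range(len(rho)):
        up = base.copy(); up[i] += h
        dn = base.copy(); dn[i] -= h
        grad.append((g(up) - g(dn)) / (2 * h))
    return grad

# Hypothetical compliance-like response: g(rho) = 1/rho1 + 4/rho2
g = lambda r: 1.0 / r[0] + 4.0 / r[1]
print(fd_sensitivity(g, [1.0, 2.0]))  # approximately [-1.0, -1.0]
```

The cost scales with the number of design variables (two solves per variable), which is exactly why a reusable iterative sensitivity relation matters for topology optimization with many members.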

