On the principle of optimality for nonstationary deterministic dynamic programming

2008 ◽  
Vol 4 (4) ◽  
pp. 519-525 ◽  
Author(s):  
Takashi Kamihigashi
Author(s):  
Chen Zhang ◽  
Ardalan Vahidi ◽  
Xiaopeng Li ◽  
Dean Essenmacher

This paper investigates the role of partial or complete knowledge of future driving conditions in the fuel economy of Plug-in Hybrid Electric Vehicles (PHEVs). We show that with knowledge of only the distance to the next charging station, a substantial reduction in fuel use, up to 18%, is possible by planning a blended utilization of the electric motor and the engine throughout the entire trip. To achieve this we formulate a modified Equivalent Consumption Minimization Strategy (ECMS) that takes the traveling distance into account. We show that a further fuel economy gain, on the order of 1–5%, is possible if the future terrain and velocity are known; we quantify this additional increase in fuel economy for a number of velocity cycles and a hilly terrain profile via deterministic dynamic programming.
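The deterministic dynamic programming mentioned above can be illustrated with a minimal sketch. This is not the paper's model: the per-step energy demands, the state-of-charge (SOC) discretization, and the fuel coefficient below are invented for illustration. The idea is only that, when the whole trip is known, a backward DP over a discretized SOC grid can decide how much of each step's demand to cover electrically so that total fuel is minimized.

```python
# Hypothetical sketch of deterministic DP for a blended power split.
# The trip is split into N steps with known integer energy demand (kWh).
# State: battery SOC, discretized in 1 kWh units. At each step we choose
# how many units e come from the battery; the rest burns fuel.
# All numbers here are illustrative, not from the paper.

def plan_power_split(demand, soc_steps=21, fuel_per_kwh=0.08):
    """Minimize total fuel over the trip, starting with a full battery."""
    n = len(demand)
    INF = float("inf")
    # cost[s] = minimal fuel from the current step to the end, given SOC s
    cost = [0.0] * soc_steps          # terminal cost: trip over, no fuel
    for k in range(n - 1, -1, -1):    # backward over trip steps
        new_cost = [INF] * soc_steps
        for s in range(soc_steps):
            # electric share e limited by demand and remaining charge
            for e in range(0, min(s, int(demand[k])) + 1):
                fuel = (demand[k] - e) * fuel_per_kwh
                c = fuel + cost[s - e]
                if c < new_cost[s]:
                    new_cost[s] = c
        cost = new_cost
    return cost[soc_steps - 1]        # value at full initial SOC
```

With a small battery (5 usable units) and 10 kWh of total demand, the planner covers 5 kWh electrically wherever it likes and burns fuel for the rest; with a battery larger than the whole trip, fuel use drops to zero.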


2015 ◽  
Vol 169 (2) ◽  
pp. 631-655 ◽  
Author(s):  
Irina S. Dolinskaya ◽  
Marina A. Epelman ◽  
Esra Şişikoğlu Sir ◽  
Robert L. Smith

Author(s):  
R. Giancarlo

In this chapter we present some general algorithmic techniques that have proved useful in speeding up the computation of some families of dynamic programming recurrences with applications in sequence alignment, paragraph formation, and prediction of RNA secondary structure. The material presented here is related to the computation of Levenshtein distances and approximate string matching discussed in the previous three chapters.

Dynamic programming is a general technique for solving discrete optimization (minimization or maximization) problems that can be represented by decision processes and for which the principle of optimality holds. We can view a decision process as a directed graph in which nodes represent the states of the process and edges represent decisions. The optimization problem at hand is represented as a decision process by decomposing it into a set of subproblems of smaller size. Such recursive decomposition continues until only trivial subproblems remain, which can be solved directly. Each node in the graph corresponds to a subproblem, and each edge (a, b) indicates that one way to solve subproblem a optimally is to first solve subproblem b optimally. An optimal solution, or policy, is then typically given by a path on the graph that minimizes or maximizes some objective function. The correctness of this approach is guaranteed by the principle of optimality, which must be satisfied by the optimization problem: an optimal policy has the property that whatever the initial node (state) and initial edge (decision) are, the remaining edges (decisions) must form an optimal policy with regard to the node (state) resulting from the first transition.

Another consequence of the principle of optimality is that we can express the optimal cost (and solution) of a subproblem in terms of the optimal costs (and solutions) of problems of smaller size; that is, we can express optimal costs through a recurrence relation. This is a key component of dynamic programming, since we can compute the optimal cost of a subproblem only once, store the result in a table, and look it up when needed.
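As a concrete instance of this tabulation idea, here is a minimal sketch (not from the chapter) computing the Levenshtein distance: each table cell d[i][j] is a subproblem, and the recurrence expresses its optimal cost through three subproblems of smaller size, each computed once and stored.

```python
def levenshtein(a: str, b: str) -> int:
    """Edit distance via dynamic programming: d[i][j] is the minimal
    number of insertions, deletions, and substitutions turning
    a[:i] into b[:j]."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i              # delete all of a[:i]
    for j in range(n + 1):
        d[0][j] = j              # insert all of b[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            sub = 0 if a[i - 1] == b[j - 1] else 1
            # recurrence: each cell from three smaller subproblems
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + sub)  # match / substitution
    return d[m][n]
```

For example, `levenshtein("kitten", "sitting")` returns 3 (substitute k→s, substitute e→i, insert g). In the graph view above, cell (i, j) is a node with edges to (i-1, j), (i, j-1), and (i-1, j-1), and the returned value is the cost of an optimal path from (m, n) back to (0, 0).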

