The equivalence of viscosity and distributional subsolutions for convex subequations — a strong Bellman principle

2013 ◽  
Vol 44 (4) ◽  
pp. 621-652 ◽  
Author(s):  
F. Reese Harvey ◽  
H. Blaine Lawson
2019 ◽  
Vol 9 (1) ◽  
Author(s):  
Rahul Kumar ◽  
Leonardo Binetti ◽  
T. Hien Nguyen ◽  
Lourdes S. M. Alwis ◽  
Arti Agrawal ◽  
...  

Abstract: Knowledge of the distribution of aspect ratios (ARs) in a chemically synthesized colloidal solution of gold nanorods (GNRs) is an important measure of the quality of synthesis, and consequently of the performance of the GNRs in various applications. In this work, an algorithm based on the Bellman Principle of Optimality has been developed to readily determine the AR distribution of synthesized GNRs in colloidal solutions. This is achieved by theoretically fitting the longitudinal plasmon resonance of GNRs obtained by UV-visible spectroscopy. The AR distributions obtained with the algorithm show good agreement with theoretically generated ones as well as with previously reported results. After benchmarking, the algorithm has been applied to determine the mean and standard deviation of the AR distribution of two GNR solutions synthesized and examined in this work. Comparison with experimental results from expensive Transmission Electron Microscopy imaging and the Dynamic Light Scattering technique shows that the algorithm offers a fast and thus potentially cost-effective way to determine the quality of synthesized GNRs, as needed for many potential applications in advanced sensor systems.
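The abstract does not spell out the algorithm, so the sketch below is only a rough illustration of the spectral-fitting idea, not the authors' Bellman-based method: it recovers the mean and standard deviation of an assumed Gaussian AR distribution from a simulated longitudinal-plasmon spectrum by grid search. The linear AR-to-wavelength relation, the Lorentzian lineshape, and all numerical coefficients are illustrative assumptions.

```python
import numpy as np

def lspr_wavelength(ar):
    # Assumed linear relation between gold-nanorod aspect ratio and the
    # longitudinal plasmon peak (nm); coefficients are illustrative only.
    return 95.0 * ar + 420.0

def simulated_spectrum(wavelengths, mean_ar, std_ar, n_bins=50):
    """Ensemble extinction spectrum: a sum of Lorentzian lines, one per
    AR bin, weighted by a Gaussian AR distribution."""
    ars = np.linspace(mean_ar - 3 * std_ar, mean_ar + 3 * std_ar, n_bins)
    weights = np.exp(-0.5 * ((ars - mean_ar) / std_ar) ** 2)
    weights /= weights.sum()
    gamma = 30.0  # linewidth (nm), illustrative
    peaks = lspr_wavelength(ars)
    lor = gamma**2 / ((wavelengths[:, None] - peaks[None, :]) ** 2 + gamma**2)
    return lor @ weights

def fit_ar_distribution(wavelengths, measured):
    """Grid search (a stand-in for the paper's optimization) over
    (mean AR, std AR), minimizing the squared spectral error."""
    best = (None, None, np.inf)
    for mean_ar in np.linspace(2.0, 5.0, 61):
        for std_ar in np.linspace(0.1, 1.0, 19):
            model = simulated_spectrum(wavelengths, mean_ar, std_ar)
            err = np.sum((model - measured) ** 2)
            if err < best[2]:
                best = (mean_ar, std_ar, err)
    return best[:2]

wl = np.linspace(500, 1000, 200)          # wavelength axis in nm
target = simulated_spectrum(wl, mean_ar=3.5, std_ar=0.4)
mean_fit, std_fit = fit_ar_distribution(wl, target)
```

Because the target spectrum here is generated by the same forward model, the grid search recovers the true parameters exactly; on measured UV-visible data the fit quality would depend on how well the assumed lineshape matches reality.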


2020 ◽  
Vol 26 ◽  
pp. 109
Author(s):  
Manil T. Mohan

In this work, we consider the controlled two-dimensional tidal dynamics equations in bounded domains. A distributed optimal control problem is formulated as the minimization of a suitable cost functional subject to the controlled 2D tidal dynamics equations. The existence of an optimal control is shown, and the dynamic programming method for the optimal control of the 2D tidal dynamics system is also described. We show that the feedback control can be obtained from the solution of an infinite-dimensional Hamilton-Jacobi equation. The non-differentiability and lack of smoothness of the value function force us to use the method of viscosity solutions to obtain a solution of the infinite-dimensional Hamilton-Jacobi equation. The Bellman principle of optimality for the value function is also obtained. We show that a viscosity solution to the Hamilton-Jacobi equation can be used to derive the Pontryagin maximum principle, which gives us the first-order necessary conditions of optimality. Finally, we characterize the optimal control using the adjoint variable.
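Schematically, and with all symbols generic rather than taken from the paper, the Bellman principle and the associated Hamilton-Jacobi equation the abstract refers to take the following form:

```latex
% Dynamic programming (Bellman) principle for the value function
% (schematic; X(\cdot) is the controlled state, u the control, L a running cost):
V(t,x) \;=\; \inf_{u(\cdot)} \Big[ \int_t^{t+h} L\big(s, X(s), u(s)\big)\,\mathrm{d}s
      \;+\; V\big(t+h,\, X(t+h)\big) \Big].
% Sending h \to 0 formally yields a Hamilton-Jacobi equation
\partial_t V \;+\; \inf_{u} \Big\{ \big\langle F(x,u),\, D V(t,x) \big\rangle + L(t,x,u) \Big\} \;=\; 0,
% which, when V is not differentiable (as in the abstract), must be
% interpreted in the viscosity sense.
```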


2013 ◽  
Vol 12 (05) ◽  
pp. 1021-1053 ◽  
Author(s):  
WLODZIMIERZ OGRYCZAK ◽  
PATRICE PERNY ◽  
PAUL WENG

A Markov decision process (MDP) is a general model for solving planning problems under uncertainty. It has been extended to multiobjective MDP to address multicriteria or multiagent problems in which the value of a decision must be evaluated according to several viewpoints, sometimes conflicting. Although most of the studies concentrate on the determination of the set of Pareto-optimal policies, we focus here on a more specialized problem that concerns the direct determination of policies achieving well-balanced tradeoffs. To this end, we introduce a reference point method based on the optimization of a weighted ordered weighted average (WOWA) of individual disachievements. We show that the resulting notion of optimal policy does not satisfy the Bellman principle and depends on the initial state. To overcome these difficulties, we propose a solution method based on a linear programming (LP) reformulation of the problem. Finally, we illustrate the feasibility of the proposed method on two types of planning problems under uncertainty arising in navigation of an autonomous agent and in inventory management.
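For contrast with the WOWA criterion, which the authors show violates the Bellman principle, the classical expected-discounted-reward criterion does satisfy it; that baseline recursion can be sketched as value iteration on a toy MDP. All transition and reward numbers below are illustrative, not from the paper.

```python
import numpy as np

# Toy 3-state, 2-action MDP; numbers are illustrative only.
P = np.array([  # P[a, s, s'] transition probabilities
    [[0.9, 0.1, 0.0], [0.0, 0.9, 0.1], [0.0, 0.0, 1.0]],
    [[0.5, 0.5, 0.0], [0.0, 0.5, 0.5], [0.0, 0.0, 1.0]],
])
R = np.array([  # R[a, s] expected immediate reward
    [0.0, 0.0, 1.0],
    [0.1, 0.1, 1.0],
])
gamma = 0.9  # discount factor

def value_iteration(P, R, gamma, tol=1e-8):
    """Iterate the Bellman optimality operator
    V(s) <- max_a [ R(a,s) + gamma * sum_s' P(a,s,s') V(s') ]
    to its fixed point; return the value function and a greedy policy."""
    V = np.zeros(P.shape[1])
    while True:
        Q = R + gamma * (P @ V)   # Q[a, s]; stacked matmul contracts over s'
        V_new = Q.max(axis=0)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=0)
        V = V_new

V, policy = value_iteration(P, R, gamma)
```

State 2 is absorbing with reward 1 per step, so its value converges to 1/(1 - gamma) = 10; the recursion is exactly the per-state decomposition that the WOWA-optimal policies of the paper fail to admit.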


2021 ◽  
Author(s):  
Gabriela Kováčová ◽  
Birgit Rudloff

When dealing with dynamic optimization problems, time consistency is a desirable property as it allows one to solve the problem efficiently through a backward recursion. The mean-risk problem is known to be time inconsistent when considered in its scalarized form. However, when left in its original bi-objective form, it turns out to satisfy a more general time consistency property that seems better suited to a vector optimization problem. In “Time Consistency of the Mean-Risk Problem,” Kováčová and Rudloff introduce a set-valued version of the famous Bellman principle and show that the bi-objective mean-risk problem does satisfy it. Then, the upper image, a set that contains the efficient frontier on its boundary, recurses backward in time. Kováčová and Rudloff present conditions under which this recursion can be exploited directly to compute a solution in the spirit of dynamic programming. This opens the door for a new branch in mathematics: dynamic multivariate programming.
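As a schematic sketch, not taken from the paper: in the scalar case, time consistency is the familiar backward recursion, while in the set-valued setting the object that recurses is the upper image. Under an assumed deterministic transition map f_t, this might be written as follows.

```latex
% Classical (scalar) Bellman recursion:
V_t(x) \;=\; \min_{u} \Big\{ c_t(x,u) + V_{t+1}\big(f_t(x,u)\big) \Big\}.
% Set-valued analogue (schematic): the upper image \mathcal{P}_t recurses
% backward via Minkowski sums and a union over controls,
\mathcal{P}_t(x) \;=\; \operatorname{cl} \bigcup_{u} \Big( C_t(x,u) \;+\; \mathcal{P}_{t+1}\big(f_t(x,u)\big) \Big),
% where C_t(x,u) \subset \mathbb{R}^2 collects the (mean, risk) contributions
% at time t and + denotes the Minkowski sum of sets.
```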


Author(s):  
Andreas Lichtenstern ◽  
Rudi Zagst

Abstract: In this article we consider the post-retirement phase optimization problem for a specific pension product in Germany that comes without guarantees. The continuous-time optimization problem has two special features: first, a product-specific pension adjustment mechanism based on a certain capital coverage ratio, which stipulates compulsory pension adjustments if the pension fund is underfunded or significantly overfunded. Second, due to the retiree’s fear of and aversion to pension reductions, we introduce a total wealth distribution across an investment portfolio and a buffer portfolio to lower the probability of future pension reductions. The target functional to be maximized in the optimization is the client’s expected accumulated utility from the stochastic future pension cash flows. The optimization outcome is the optimal investment strategy in the proposed model. Due to the inherent complexity of the continuous-time framework, the discrete-time version of the optimization problem is considered and solved via the Bellman principle. In addition, for computational reasons, a policy function iteration algorithm is introduced to find a stationary solution to the problem in a computationally efficient and elegant fashion. A numerical case study on optimization and simulation completes the work, highlighting the benefits of the proposed model.
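As a generic illustration of policy function iteration (not the paper's model): in a made-up discretized setting with three funding states and two allocation actions, Howard-style policy iteration alternates exact policy evaluation (a linear solve) with greedy improvement until the policy is stationary. All numbers are invented for illustration.

```python
import numpy as np

# Illustrative funding-state MDP: states = (underfunded, funded, overfunded),
# actions = (safe allocation, risky allocation). Numbers are made up.
P = np.array([  # P[a, s, s'] transition probabilities
    [[0.8, 0.2, 0.0], [0.1, 0.8, 0.1], [0.0, 0.2, 0.8]],
    [[0.6, 0.3, 0.1], [0.3, 0.4, 0.3], [0.1, 0.3, 0.6]],
])
R = np.array([  # R[a, s] utility of the pension payout per state, illustrative
    [0.0, 1.0, 1.2],
    [0.0, 1.0, 1.2],
])
gamma = 0.95  # discount factor

def policy_iteration(P, R, gamma):
    """Howard policy iteration: exact evaluation of the current policy
    by solving (I - gamma * P_pi) V = R_pi, then greedy improvement;
    stops at a stationary (unimproved) policy."""
    n_actions, n_states, _ = P.shape
    policy = np.zeros(n_states, dtype=int)
    while True:
        P_pi = P[policy, np.arange(n_states)]   # rows P[policy[s], s, :]
        R_pi = R[policy, np.arange(n_states)]
        V = np.linalg.solve(np.eye(n_states) - gamma * P_pi, R_pi)
        Q = R + gamma * (P @ V)                 # one-step lookahead, Q[a, s]
        new_policy = Q.argmax(axis=0)
        if np.array_equal(new_policy, policy):
            return V, policy
        policy = new_policy

V, policy = policy_iteration(P, R, gamma)
```

At termination the returned V satisfies the Bellman optimality equation, which is the stationarity the abstract's algorithm seeks, though the paper's state space, adjustment mechanism, and utility are of course richer than this toy.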

