dynamic programs
Recently Published Documents


TOTAL DOCUMENTS: 127 (FIVE YEARS: 22)
H-INDEX: 19 (FIVE YEARS: 2)

2021 · Vol. 68 (6) · pp. 1-47
Author(s): Jonathan Sterling, Robert Harper

The theory of program modules is of interest to language designers not only for its practical importance to programming, but also because it lies at the nexus of three fundamental concerns in language design: the phase distinction, computational effects, and type abstraction. We contribute a fresh “synthetic” take on program modules that treats modules as the fundamental constructs, in which the usual suspects of prior module calculi (kinds, constructors, dynamic programs) are rendered as derived notions in terms of a modal type-theoretic account of the phase distinction. We simplify the account of type abstraction (embodied in the generativity of module functors) through a lax modality that encapsulates computational effects, placing projectibility of module expressions on a type-theoretic basis. Our main result is a (significant) proof-relevant and phase-sensitive generalization of the Reynolds abstraction theorem for a calculus of program modules, based on a new kind of logical relation called a parametricity structure. Parametricity structures generalize the proof-irrelevant relations of classical parametricity to proof-relevant families, where there may be non-trivial evidence witnessing the relatedness of two programs—simplifying the metatheory of strong sums over the collection of types, for although there can be no “relation classifying relations,” one easily accommodates a “family classifying small families.” Using the insight that logical relations/parametricity is itself a form of phase distinction between the syntactic and the semantic, we contribute a new synthetic approach to phase-separated parametricity based on the slogan logical relations as types, by iterating our modal account of the phase distinction. We axiomatize a dependent type theory of parametricity structures using two pairs of complementary modalities (syntactic, semantic) and (static, dynamic), substantiated using the topos-theoretic Artin gluing construction. Then, to construct a simulation between two implementations of an abstract type, one simply programs a third implementation whose type component carries the representation invariant.
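The closing idea above can be pictured outside the paper's type theory. Below is a loose Haskell sketch, not the authors' module calculus; all names (Counter, observeAfter, counterSim) are invented for illustration. Two implementations of an abstract counter are related by a third implementation whose hidden carrier pairs their states, the informal shape of "a third implementation whose type component carries the representation invariant."

```haskell
{-# LANGUAGE ExistentialQuantification #-}
-- A loose analogue only: to relate two implementations of an abstract type,
-- write a third one whose hidden carrier pairs the two representations and
-- is intended to maintain the invariant relating them.

-- An abstract "counter module": hidden carrier s, initial state, step, observe.
data Counter = forall s. Counter s (s -> s) (s -> Int)

-- Clients see only the abstract interface.
observeAfter :: Int -> Counter -> Int
observeAfter k (Counter start step view) = view (iterate step start !! k)

-- Implementation A: the carrier is an Int.
counterInt :: Counter
counterInt = Counter (0 :: Int) (+ 1) id

-- Implementation B: the carrier is a unary list.
counterList :: Counter
counterList = Counter ([] :: [()]) (() :) length

-- The simulation: carrier (Int, [()]), intended invariant n == length xs.
-- Each operation acts on both halves in lock-step, so the invariant is
-- preserved; projecting either half recovers A or B.
counterSim :: Counter
counterSim =
  Counter ((0 :: Int), ([] :: [()])) (\(n, xs) -> (n + 1, () : xs)) fst

main :: IO ()
main = print (map (observeAfter 3) [counterInt, counterList, counterSim])
-- prints [3,3,3]: the two implementations agree, as witnessed by the third.
```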


2021
Author(s): David B. Brown, Jingwei Zhang

Allocating Resources Across Systems Coupled by Shared Information
Many sequential decision problems involve repeatedly allocating a limited resource across subsystems that are jointly affected by randomly evolving exogenous factors. For example, in adaptive clinical trials, a decision maker needs to allocate patients to treatments in an effort to learn about the efficacy of treatments, but the number of available patients may vary randomly over time. In capital budgeting problems, firms may allocate resources to conduct R&D on new products, but funding budgets may evolve randomly. In many inventory management problems, firms need to allocate limited production capacity to satisfy uncertain demands at multiple locations, and these demands may be correlated due to vagaries in shared market conditions. In this paper, we develop a model involving “shared resources and signals” that captures these and potentially many other applications. The framework is naturally described as a stochastic dynamic program, but this problem is quite difficult to solve. We develop an approximation method based on a “dynamic fluid relaxation”: in this approximation, the subsystem state evolution is approximated by a deterministic fluid model, but the exogenous states (the signals) retain their stochastic evolution. We develop an algorithm for solving the dynamic fluid relaxation. We analyze the corresponding feasible policies and performance bounds from the dynamic fluid relaxation and show that these are asymptotically optimal as the number of subsystems grows large. We show that competing state-of-the-art approaches used in the literature on weakly coupled dynamic programs in general fail to provide asymptotic optimality. Finally, we illustrate the approach on the aforementioned dynamic capital budgeting and multilocation inventory management problems.
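To make the problem structure concrete, here is a toy Haskell skeleton of the "shared resources and signals" setting. It is neither the authors' model nor their dynamic fluid relaxation; the names, dynamics, and the myopic policy are invented for illustration. An exogenous signal path fixes a per-period budget, a simple policy splits the budget across subsystems, and each subsystem's state then evolves with its own allocation.

```haskell
-- Toy sketch of subsystems coupled only through a random shared budget.
import Data.List (sortBy)
import Data.Ord (Down (..), comparing)

type Budget   = Int      -- exogenous signal: resource available this period
type SubState = Double   -- one subsystem's state (e.g. remaining demand)

-- Myopic policy: give one unit of budget to the subsystems with the largest
-- states.  (The paper's point is that good policies need more than this.)
allocate :: Budget -> [SubState] -> [Int]
allocate budget states =
  [ if i `elem` chosen then 1 else 0 | i <- [0 .. length states - 1] ]
  where
    chosen = take budget
               (map fst (sortBy (comparing (Down . snd)) (zip [0 ..] states)))

-- Subsystem dynamics: a served subsystem sheds half of its remaining state.
stepSub :: SubState -> Int -> SubState
stepSub s a = if a == 1 then s / 2 else s

-- Run the coupled system along one sampled signal path.
simulate :: [Budget] -> [SubState] -> [SubState]
simulate signalPath states0 = foldl step states0 signalPath
  where
    step states budget = zipWith stepSub states (allocate budget states)

main :: IO ()
main = print (simulate [2, 0, 3, 1, 2] [10, 8, 6, 4])
```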


Author(s): Xiaoyue Li, John M. Mulvey

The contributions of this paper are threefold. First, by combining dynamic programs and neural networks, we provide an efficient numerical method to solve a large multiperiod portfolio allocation problem under a regime-switching market and transaction costs. Second, the performance of our combined method is shown to be close to optimal in a stylized case. To our knowledge, this is the first paper to carry out such a comparison. Last, the superiority of the combined method opens up the possibility for more research on financial applications of generic methods, such as neural networks, provided that solutions to simplified subproblems are available via traditional methods. The research on combining fast starts with neural networks began about four years ago. We observed that Professor Weinan E’s approach for solving systems of differential equations by neural networks had much improved performance when starting close to an optimal solution and could stall if the current iterate was far from an optimal solution. As we all know, this behavior is common with Newton-based algorithms. As a consequence, we discovered that combining a system of differential equations with a feedforward neural network could much improve overall computational performance. In this paper, we follow a similar direction for dynamic portfolio optimization within a regime-switching market with transaction costs. We investigate how to improve efficiency by combining dynamic programming with a recurrent neural network. Traditional methods face the curse of dimensionality. In contrast, the running time of our combined approach grows approximately linearly with the number of risky assets. It is inspiring to explore the possibilities of combined methods in financial management; we believe a careful linkage of existing dynamic optimization algorithms and machine learning will be an active domain going forward. Relationship of the authors: Professor John M. Mulvey is Xiaoyue Li’s doctoral advisor.
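The warm-start motivation mentioned above (Newton-type iterations behave well near a solution and can stall far from one) can be seen in a minimal, self-contained Haskell example; the function and starting points below are chosen only for illustration and do not come from the paper.

```haskell
-- Newton's method on f(x) = x^3 - 2x + 2, whose only real root is near
-- x = -1.7693.  Started nearby it converges quickly; started at 0 the
-- iterates cycle between 0 and 1 and never make progress.

f, f' :: Double -> Double
f  x = x ** 3 - 2 * x + 2
f' x = 3 * x ** 2 - 2

-- First (steps + 1) Newton iterates from a given starting point.
newton :: Int -> Double -> [Double]
newton steps x0 = take (steps + 1) (iterate (\x -> x - f x / f' x) x0)

main :: IO ()
main = do
  putStrLn "Warm start (x0 = -1.5): converges in a few steps"
  print (newton 6 (-1.5))
  putStrLn "Cold start (x0 = 0): the iterates cycle 0, 1, 0, 1, ..."
  print (newton 6 0)
```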


Author(s): Tim Vieira, Ryan Cotterell, Jason Eisner

2021 · Vol. 31
Author(s): Martin Erwig, Prashant Kumar

In this paper, we present a method for explaining the results produced by dynamic programming (DP) algorithms. Our approach is based on retaining a granular representation of values that are aggregated during program execution. The explanations that are created from the granular representations can answer questions of why one result was obtained instead of another and can therefore increase confidence in the correctness of program results. Our focus on dynamic programming is motivated by the fact that it offers a systematic approach to implementing a large class of optimization algorithms that produce decisions based on aggregated value comparisons. It is those decisions that the granular representation can help explain. Moreover, the fact that dynamic programming can be formalized using semirings supports the creation of a Haskell library for dynamic programming that has two important features. First, it allows programmers to specify programs by recurrence relationships from which efficient implementations are derived automatically. Second, the dynamic programs can be formulated generically (as type classes), which supports a smooth transition from programs that only produce results to programs that run with granular representations and also produce explanations. Finally, we demonstrate how to anticipate user questions about program results and how to produce corresponding explanations automatically in advance.
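As a rough illustration of the semiring view of dynamic programming, the same recurrence can compute an optimal cost or count solutions depending on the semiring supplied. This is not the API of the library described above; the Semiring class, the Cost type, and the paths recurrence are invented for this sketch, and memoization is omitted for brevity.

```haskell
-- One recurrence, written against a semiring, interpreted two ways.
class Semiring r where
  zero, one :: r
  (<+>), (<.>) :: r -> r -> r   -- aggregation and combination

-- Tropical semiring: <+> is min, <.> is (+); computes the cheapest solution.
newtype Cost = Cost Double deriving (Show)

instance Semiring Cost where
  zero = Cost (1 / 0)
  one  = Cost 0
  Cost a <+> Cost b = Cost (min a b)
  Cost a <.> Cost b = Cost (a + b)

-- Counting semiring: counts how many solutions exist.
instance Semiring Integer where
  zero = 0
  one  = 1
  (<+>) = (+)
  (<.>) = (*)

-- Recurrence for monotone lattice paths from (0,0) to (m,n), each step
-- carrying the given weight.  A real DP library would tabulate/memoize.
paths :: Semiring r => r -> Int -> Int -> r
paths step = go
  where
    go 0 0 = one
    go i j = horiz <+> vert
      where
        horiz = if i > 0 then step <.> go (i - 1) j else zero
        vert  = if j > 0 then step <.> go i (j - 1) else zero

main :: IO ()
main = do
  print (paths (Cost 1) 2 2)        -- Cost 4.0 : cheapest path length
  print (paths (1 :: Integer) 2 2)  -- 6        : number of paths
```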

