The continuous-time problem with interval-valued functions: applications to economic equilibrium
2018, Vol 34(6), pp. 1123-1144
Author(s): G. Ruiz-Garzón, R. Osuna-Gómez, A. Rufián-Lizana, Y. Chalco-Cano
1996, Vol 28(4), pp. 1145-1176
Author(s): Sid Browne, Ward Whitt

We derive optimal gambling and investment policies for cases in which the underlying stochastic process has parameter values that are unobserved random variables. For the objective of maximizing logarithmic utility when the underlying stochastic process is a simple random walk in a random environment, we show that a state-dependent control is optimal; it generalizes the celebrated Kelly strategy: the optimal strategy is to bet a fraction of current wealth equal to a linear function of the posterior mean increment. To approximate more general stochastic processes, we consider a continuous-time analog involving Brownian motion. To analyze the continuous-time problem, we study the diffusion limit of random walks in a random environment. We prove that they converge weakly to a Kiefer process, or tied-down Brownian sheet. We then find conditions under which the discrete-time process converges to a diffusion, and analyze the resulting process. We analyze in detail the case of the natural conjugate prior, where the success probability has a beta distribution, and show that the resulting limit diffusion can be viewed as a rescaled Brownian motion. These results allow explicit computation of the optimal control policies for the continuous-time gambling and investment problems without resorting to continuous-time stochastic-control procedures. Moreover, they allow an explicit quantitative evaluation of the financial value of randomness, the financial gain of perfect information, and the financial cost of learning in the Bayesian problem.
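As a rough illustration of the state-dependent rule described above, the sketch below simulates log-utility betting on a coin whose success probability is drawn from a Beta prior; the bet fraction is taken to be the posterior mean increment 2·E[p | record] − 1 (the Kelly fraction evaluated at the posterior mean), floored at zero. The even-money payoff and this particular choice of fraction are illustrative assumptions, not the paper's exact policy.

```python
import numpy as np

rng = np.random.default_rng(0)

def bayesian_kelly_path(a=2.0, b=2.0, n_bets=1000, w0=1.0):
    """Simulate log-utility betting on a coin with unknown success
    probability p ~ Beta(a, b).  The bet fraction is the posterior mean
    increment 2*E[p | wins, losses] - 1, floored at 0 (no bet when the
    posterior favours losses).  This is a sketch of a state-dependent
    Kelly-type rule, not the paper's exact policy."""
    p = rng.beta(a, b)            # true (unobserved) success probability
    wins = losses = 0
    wealth = w0
    for _ in range(n_bets):
        post_mean = (a + wins) / (a + b + wins + losses)
        frac = max(0.0, 2.0 * post_mean - 1.0)   # posterior-mean Kelly fraction
        win = rng.random() < p
        wealth *= 1.0 + frac if win else 1.0 - frac
        wins += win
        losses += not win
    return p, wealth

p, w = bayesian_kelly_path()
print(f"true p = {p:.3f}, final wealth = {w:.3f}")
```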


1996, Vol 28(3), pp. 763-783
Author(s): Terence Chan

The ‘Mabinogion sheep’ problem, originally due to D. Williams, is a nice illustration in discrete time of the martingale optimality principle and the use of local time in stochastic control. The use of singular controls involving local time is even more strikingly highlighted in the context of continuous time. This paper considers a class of diffusion versions of the discrete-time Mabinogion sheep problem. The stochastic version of the Bellman dynamic programming approach leads to a free boundary problem in each case. The most surprising feature in the continuous-time context is the existence of diffusion versions of the original discrete-time problem for which the optimal boundary is different from that in the discrete-time case; even when the optimal boundary is the same, the value functions can be very different.
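For readers unfamiliar with the discrete-time problem, the sketch below simulates the standard Mabinogion dynamics (a uniformly chosen sheep bleats and converts one sheep of the opposite colour, if any) under a hypothetical threshold policy for removing white sheep. The threshold rule used here is an illustrative assumption, not the optimal boundary derived in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def mabinogion(black, white, reduce_to=None, n_paths=5_000):
    """Monte Carlo estimate of the expected final black-flock size for the
    discrete-time Mabinogion sheep dynamics: a uniformly chosen sheep bleats
    and one sheep of the opposite colour (if any) changes colour.  After each
    step the controller removes white sheep down to reduce_to(black) whenever
    white >= black.  The threshold below is an illustrative assumption, not
    the paper's optimal boundary."""
    if reduce_to is None:
        reduce_to = lambda b: max(b - 1, 0)   # hypothetical threshold rule
    totals = 0
    for _ in range(n_paths):
        b, w = black, white
        while b > 0 and w > 0:
            if rng.random() < b / (b + w):     # a black sheep bleats
                b, w = b + 1, w - 1
            else:                              # a white sheep bleats
                b, w = b - 1, w + 1
            if b > 0 and w >= b:               # controller intervenes
                w = reduce_to(b)
        totals += b
    return totals / n_paths

print(mabinogion(black=30, white=30))
```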


1985, Vol 22(2), pp. 447-453
Author(s): Peter Guttorp, Reg Kulperger, Richard Lockhart

Weak convergence to reflected Brownian motion is deduced for certain upwardly drifting random walks by coupling them to a simple reflected random walk. The argument is quite elementary, and also gives the right conditions on the drift. A similar argument works for a corresponding continuous-time problem.
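The weak-convergence statement above can be visualized with a standard diffusive rescaling. The sketch below uses the usual Donsker-type scaling rather than the paper's coupling argument: it generates a simple random walk reflected at the origin and rescales space by 1/√n and time by 1/n, so that for large n the path approximates reflected Brownian motion on [0, 1].

```python
import numpy as np

rng = np.random.default_rng(2)

def reflected_walk_path(n):
    """Simple +/-1 random walk reflected at the origin, rescaled diffusively
    (space by 1/sqrt(n), time by 1/n).  For large n the rescaled path is close
    in distribution to reflected Brownian motion on [0, 1]; this is the
    standard Donsker-type picture, not the paper's coupling construction."""
    steps = rng.choice([-1, 1], size=n)
    x = 0
    path = np.empty(n + 1)
    path[0] = 0.0
    for k, s in enumerate(steps, start=1):
        x = abs(x + s)            # reflection at 0
        path[k] = x
    return np.linspace(0.0, 1.0, n + 1), path / np.sqrt(n)

t, y = reflected_walk_path(100_000)
print(f"max of rescaled reflected walk on [0,1]: {y.max():.3f}")
```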


1988, Vol 20(3), pp. 635-645
Author(s): David Heath, Robert P. Kertz

A player starts at x in (-G, G) and attempts to leave the interval in a limited playing time. In the discrete-time problem, G is a positive integer and the position is described by a random walk starting at integer x, with zero-mean increments and variance increment chosen by the player from [0, 1] at each integer playing time. In the continuous-time problem, the player's position is described by an Itô diffusion process with infinitesimal mean parameter zero and infinitesimal diffusion parameter chosen by the player from [0, 1] at each time instant of play. To maximize the probability of leaving the interval (-G, G) in a limited playing time, the player should play boldly: always choose the largest possible variance increment in the discrete-time setting and the largest possible diffusion parameter in the continuous-time setting, until the player leaves the interval. In the discrete-time setting, this result affirms a conjecture of Spencer. In the continuous-time setting, the value function of play is identified.
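A small Monte Carlo sketch of the discrete-time bold-play claim: using mean-zero increments of the form ±√v (one admissible choice of variance v in [0, 1], assumed here purely for illustration), it compares the exit probability under bold play (v = 1 throughout) with a timid constant-variance policy.

```python
import numpy as np

rng = np.random.default_rng(3)

def exit_probability(G, x0, horizon, variance, n_paths=20_000):
    """Estimate the probability that a mean-zero random walk started at x0,
    with increments +/- sqrt(variance) (each with probability 1/2), leaves
    (-G, G) within `horizon` steps.  The +/- sqrt(v) increments are one
    admissible mean-zero, variance-v choice, used only for illustration."""
    exits = 0
    step = np.sqrt(variance)
    for _ in range(n_paths):
        x = x0
        for _ in range(horizon):
            x += step if rng.random() < 0.5 else -step
            if abs(x) >= G:
                exits += 1
                break
    return exits / n_paths

# Bold play (variance 1) versus a timid constant-variance policy.
print("bold :", exit_probability(G=5, x0=0, horizon=50, variance=1.0))
print("timid:", exit_probability(G=5, x0=0, horizon=50, variance=0.25))
```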


Fractals, 2000, Vol 8(2), pp. 139-145
Author(s): Govindan Rangarajan, Mingzhou Ding

We study the first passage time (FPT) problem for biased continuous-time random walks. Using the recently formulated framework of fractional Fokker-Planck equations, we obtain the Laplace transform of the FPT density function when the bias is constant. When the bias depends linearly on the position, the full FPT density function is derived in terms of Hermite polynomials and generalized Mittag-Leffler functions.
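The paper's results are analytic (the Laplace transform of the FPT density via the fractional Fokker-Planck equation); the sketch below only illustrates the underlying process numerically, simulating a biased continuous-time random walk with Pareto-tailed waiting times (a common heavy-tailed choice assumed here, not taken from the paper) and recording first passage times to a fixed level.

```python
import numpy as np

rng = np.random.default_rng(4)

def ctrw_first_passage(level=10.0, p_right=0.6, alpha=0.7,
                       n_paths=5_000, max_jumps=200_000):
    """Monte Carlo first passage times for a biased continuous-time random
    walk: unit jumps to the right with probability p_right, waiting times
    drawn as u**(-1/alpha) - 1 with u uniform (a Pareto-type tail with
    exponent alpha in (0, 1), hence infinite mean).  The waiting-time law is
    an illustrative assumption; the paper derives the FPT density
    analytically from the fractional Fokker-Planck equation."""
    fpts = []
    for _ in range(n_paths):
        t, x = 0.0, 0.0
        for _ in range(max_jumps):
            t += rng.random() ** (-1.0 / alpha) - 1.0   # heavy-tailed wait
            x += 1.0 if rng.random() < p_right else -1.0
            if x >= level:
                fpts.append(t)
                break
    return np.asarray(fpts)

fpts = ctrw_first_passage()
print(f"paths absorbed: {fpts.size}, median FPT: {np.median(fpts):.2f}")
```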

