Dynkin game under g-expectation in continuous time

2020 ◽  
Vol 9 (2) ◽  
pp. 459-470
Author(s):  
Helin Wu ◽  
Yong Ren ◽  
Feng Hu

In this paper, we investigate a kind of Dynkin game under g-expectation induced by a backward stochastic differential equation (BSDE for short). The lower and upper value functions $$\underline{V}_t=\mathop{\mathrm{ess\,sup}}\nolimits_{\tau \in {\mathcal {T}_t}} \mathop{\mathrm{ess\,inf}}\nolimits_{\sigma \in {\mathcal {T}_t}}\mathcal {E}^g_t[R(\tau ,\sigma )]$$ and $$\overline{V}_t=\mathop{\mathrm{ess\,inf}}\nolimits_{\sigma \in {\mathcal {T}_t}} \mathop{\mathrm{ess\,sup}}\nolimits_{\tau \in {\mathcal {T}_t}}\mathcal {E}^g_t[R(\tau ,\sigma )]$$ are defined, respectively. Under suitable assumptions, a pair of saddle points is obtained and the value function of the Dynkin game, $$V(t)=\underline{V}_t=\overline{V}_t,$$ follows. Furthermore, we also consider the constrained case of the Dynkin game.
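For reference, the saddle-point property underlying the last statement can be written out as follows (a standard formulation; the paper's precise assumptions on the generator $g$ and the payoff process $R$ are not repeated here). A pair $(\tau^*,\sigma^*)\in\mathcal{T}_t\times\mathcal{T}_t$ is a saddle point if, for all $\tau,\sigma\in\mathcal{T}_t$, $$\mathcal{E}^g_t[R(\tau,\sigma^*)]\le\mathcal{E}^g_t[R(\tau^*,\sigma^*)]\le\mathcal{E}^g_t[R(\tau^*,\sigma)],$$ in which case $$V(t)=\underline{V}_t=\overline{V}_t=\mathcal{E}^g_t[R(\tau^*,\sigma^*)].$$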

2002 ◽  
Vol 34 (01) ◽  
pp. 141-157 ◽  
Author(s):  
Paul Dupuis ◽  
Hui Wang

We consider a class of optimal stopping problems where the ability to stop depends on an exogenous Poisson signal process: we can only stop at the Poisson jump times. Even though the time variable in these problems has a discrete aspect, a variational inequality can be obtained by considering an underlying continuous-time structure. Depending on whether stopping is allowed at t = 0, the value function exhibits different properties across the optimal exercise boundary. Indeed, the value function is only 𝒞⁰ across the optimal boundary when stopping is allowed at t = 0 and 𝒞² otherwise, both contradicting the usual 𝒞¹ smoothness that is necessary and sufficient for the application of the principle of smooth fit. Also discussed is an equivalent stochastic control formulation for these stopping problems. Finally, we derive the asymptotic behaviour of the value functions and optimal exercise boundaries as the intensity of the Poisson process goes to infinity or, roughly speaking, as the problems converge to the classical continuous-time optimal stopping problems.
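The constrained-stopping mechanism is easy to illustrate numerically. Below is a minimal sketch (the CRR binomial tree, the American put payoff, the parameter values, and the function name `poisson_constrained_put` are all illustrative assumptions, not the paper's setup): exercise is only permitted on steps in which an independent Poisson signal with intensity `lam` fires, and as `lam` grows the computed value approaches the classical American put value, mirroring the asymptotics described above.

```python
import numpy as np

def poisson_constrained_put(S0=100.0, K=100.0, r=0.05, sigma=0.2, T=1.0,
                            n=500, lam=5.0):
    """American put on a CRR binomial tree where early exercise is only
    allowed on steps in which an independent Poisson signal (intensity lam)
    fires; the payoff at expiry is assumed to be collected regardless."""
    dt = T / n
    u = np.exp(sigma * np.sqrt(dt))        # up factor
    d = 1.0 / u                            # down factor
    q = (np.exp(r * dt) - d) / (u - d)     # risk-neutral up probability
    disc = np.exp(-r * dt)
    p_signal = 1.0 - np.exp(-lam * dt)     # P(at least one signal in a step)

    # terminal payoffs
    S = S0 * u ** np.arange(n, -1, -1) * d ** np.arange(0, n + 1)
    V = np.maximum(K - S, 0.0)

    # backward induction: stopping is only available when the signal fires
    for i in range(n - 1, -1, -1):
        S = S0 * u ** np.arange(i, -1, -1) * d ** np.arange(0, i + 1)
        cont = disc * (q * V[:-1] + (1.0 - q) * V[1:])
        exercise = np.maximum(K - S, 0.0)
        V = p_signal * np.maximum(exercise, cont) + (1.0 - p_signal) * cont
    return V[0]

# as lam grows, the value approaches the unconstrained American put value
for lam in (1.0, 10.0, 100.0, 1000.0):
    print(f"lam = {lam:7.1f}   value = {poisson_constrained_put(lam=lam):.4f}")
```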


1992 ◽  
Vol 29 (01) ◽  
pp. 104-115 ◽  
Author(s):  
M. Sun

This paper introduces several versions of the starting-stopping problem for a diffusion model defined in terms of a stochastic differential equation. The problem can be regarded as a stochastic differential game in which the player can only decide when to start the game and when to quit it in order to maximize his fortune. Nested variational inequalities arise in the study of such problems; with them, we characterize the value function and obtain optimal strategies.
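Schematically (a paraphrase of the standard nested structure, with a hypothetical running reward $f$ and quit-time reward $g$; the paper's exact formulation may differ), writing $\mathcal{L}$ for the generator of the diffusion, the value $v_1$ after the game has started and the value $v_0$ before it has started satisfy $$\min\{-\mathcal{L}v_1 - f,\; v_1 - g\} = 0, \qquad \min\{-\mathcal{L}v_0,\; v_0 - v_1\} = 0,$$ so that the pre-start problem is an optimal stopping problem whose obstacle is itself the value of another optimal stopping problem.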


2014 ◽  
Vol 2 (4) ◽  
pp. 313-334
Author(s):  
Jianfen Feng ◽  
Dianfa Chen ◽  
Mei Yu

In this paper, a new approach is developed to estimate the value of defaultable securities under the actual probability measure. The model gives a pricing framework by means of backward stochastic differential equations. Such a method resolves some problems found in much of the existing literature on pricing credit risk and relaxes certain market restrictions. We provide the price of defaultable securities in discrete time and in continuous time, respectively, which makes the approach useful in practice for financial institutions managing real credit risk.
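To make the discrete-time side of this concrete, here is a minimal sketch of pricing a defaultable claim by backward induction on a discretised BSDE (an explicit backward Euler scheme with polynomial regression for the conditional expectations). The geometric Brownian state, the linear reduced-form driver with default intensity `lam` and recovery `R`, the terminal payoff, and all parameter values are illustrative assumptions, not the authors' model.

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical parameters: risk-free rate, default intensity, recovery,
# state dynamics, horizon and discretisation
r, lam, R = 0.03, 0.02, 0.4
mu, sigma, x0 = 0.05, 0.2, 1.0
T, n_steps, n_paths = 1.0, 50, 20_000
dt = T / n_steps

# simulate the forward state X (geometric Brownian motion, an assumption)
X = np.empty((n_steps + 1, n_paths))
X[0] = x0
for i in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt), n_paths)
    X[i + 1] = X[i] * np.exp((mu - 0.5 * sigma ** 2) * dt + sigma * dW)

def driver(y):
    # linear driver: risk-free discounting plus the expected default loss rate
    return -(r + (1.0 - R) * lam) * y

# terminal payoff of the defaultable claim (illustrative): unit notional capped by X_T
Y = np.minimum(X[-1], 1.0)

# explicit backward Euler: Y_i = E[ Y_{i+1} + driver(Y_{i+1}) * dt | X_i ],
# with the conditional expectation approximated by a cubic regression on X_i
for i in range(n_steps - 1, 0, -1):
    target = Y + driver(Y) * dt
    coeffs = np.polyfit(X[i], target, deg=3)
    Y = np.polyval(coeffs, X[i])

# at time 0 the state is deterministic, so the conditional expectation is a plain mean
price = np.mean(Y + driver(Y) * dt)
print("time-0 price of the defaultable claim:", price)
```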


Author(s):  
Yangchen Pan ◽  
Hengshuai Yao ◽  
Amir-massoud Farahmand ◽  
Martha White

Dyna is an architecture for model-based reinforcement learning (RL), where simulated experience from a model is used to update policies or value functions. A key component of Dyna is search control, the mechanism that generates the states and actions from which the agent queries the model, which remains largely unexplored. In this work, we propose to generate such states by using the trajectory obtained from hill climbing (HC) on the current estimate of the value function. This has the effect of propagating value from high-value regions and of preemptively updating value estimates of the regions that the agent is likely to visit next. We derive a noisy projected natural gradient algorithm for hill climbing, and highlight a connection to Langevin dynamics. We provide an empirical demonstration on four classical domains that our algorithm, HC Dyna, can obtain significant sample efficiency improvements. We study the properties of different sampling distributions for search control, and find that there appears to be a benefit specifically from using the samples generated by climbing on current value estimates from low-value to high-value regions.
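As a rough illustration of the search-control idea, the sketch below runs noisy gradient ascent (hill climbing) on a toy value-function estimate and collects the visited states; in Dyna these states would then be used to query the model for planning updates. The quadratic `V`, the finite-difference gradient, and all step-size and noise parameters are hypothetical stand-ins, not the authors' projected natural-gradient implementation.

```python
import numpy as np

def V(s):
    # toy value-function estimate with a single high-value region near [1, 2];
    # in practice this would be the agent's current learned value function
    return -np.sum((s - np.array([1.0, 2.0])) ** 2)

def grad_V(s, eps=1e-5):
    # finite-difference gradient of the value estimate (a stand-in for autodiff)
    g = np.zeros_like(s)
    for i in range(len(s)):
        e = np.zeros_like(s)
        e[i] = eps
        g[i] = (V(s + e) - V(s - e)) / (2.0 * eps)
    return g

def hill_climb(s0, n_steps=30, step=0.1, noise=0.05, seed=0):
    # noisy gradient ascent on V (Langevin-style); the visited states are the
    # search-control states from which Dyna would query the model
    rng = np.random.default_rng(seed)
    s = np.array(s0, dtype=float)
    states = []
    for _ in range(n_steps):
        s = s + step * grad_V(s) + noise * rng.normal(size=s.shape)
        states.append(s.copy())
    return states

search_control_states = hill_climb([0.0, 0.0])
print(search_control_states[-1])  # ends up near the high-value region [1, 2]
```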


Author(s):  
Georg A. Gottwald ◽  
Ian Melbourne

A recent paper of Melbourne & Stuart (2011 A note on diffusion limits of chaotic skew product flows. Nonlinearity 24 , 1361–1367 (doi:10.1088/0951-7715/24/4/018)) gives a rigorous proof of convergence of a fast–slow deterministic system to a stochastic differential equation with additive noise. In contrast to other approaches, the assumptions on the fast flow are very mild. In this paper, we extend this result from continuous time to discrete time. Moreover, we show how to deal with one-dimensional multiplicative noise. This raises the issue of how to interpret certain stochastic integrals; it is proved that the integrals are of Stratonovich type for continuous time and neither Stratonovich nor Itô for discrete time. We also provide a rigorous derivation of super-diffusive limits where the stochastic differential equation is driven by a stable Lévy process. In the case of one-dimensional multiplicative noise, the stochastic integrals are of Marcus type both in the discrete and continuous time contexts.
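For orientation, the issue of interpreting multiplicative stochastic integrals can be recalled through the standard one-dimensional conversion (a classical fact, not specific to this paper): $$dX_t = \sigma(X_t)\circ dW_t \quad\Longleftrightarrow\quad dX_t = \tfrac{1}{2}\,\sigma(X_t)\,\sigma'(X_t)\,dt + \sigma(X_t)\,dW_t,$$ so different interpretations of the limiting integral genuinely change the limiting dynamics in the multiplicative case.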

