Mean-Field SDE Driven by a Fractional Brownian Motion and Related Stochastic Control Problem

2017 ◽  
Vol 55 (3) ◽  
pp. 1500-1533 ◽  
Author(s):  
Rainer Buckdahn ◽  
Shuai Jing


2020 ◽  
Vol 28 (4) ◽  
pp. 291-306
Author(s):  
Tayeb Bouaziz ◽  
Adel Chala

We consider a stochastic control problem in which the control domain is convex and the system is governed by a fractional Brownian motion with Hurst parameter H ∈ (1/2, 1) and a standard Wiener process. The criterion to be minimized is of general form and includes an initial cost. We derive a stochastic maximum principle of optimality using two well-known approaches: the first is the Doss–Sussmann transformation and the second is the Malliavin derivative.
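
As an illustrative aside (not from the paper), the driving fractional noise with H ∈ (1/2, 1) can be sampled exactly from its covariance function. The Python sketch below uses the Cholesky method; the function name, grid, and parameters are assumptions chosen for this example.

```python
import numpy as np

def fbm_cholesky(n_steps: int, H: float, T: float = 1.0, seed: int = 0) -> np.ndarray:
    """Sample a fractional Brownian motion path on [0, T] with Hurst index H.

    Exact Cholesky method: the fBm covariance is
    Cov(B^H_s, B^H_t) = 0.5 * (s**(2H) + t**(2H) - |t - s|**(2H)).
    """
    rng = np.random.default_rng(seed)
    t = np.linspace(T / n_steps, T, n_steps)      # time grid, excluding t = 0
    s, u = np.meshgrid(t, t)
    cov = 0.5 * (s**(2 * H) + u**(2 * H) - np.abs(s - u)**(2 * H))
    L = np.linalg.cholesky(cov)                   # cov = L @ L.T
    path = L @ rng.standard_normal(n_steps)
    return np.concatenate(([0.0], path))          # fBm starts at 0

# The regime studied above is H in (1/2, 1), i.e. positively correlated increments.
bh = fbm_cholesky(n_steps=500, H=0.75)
```

For long paths the O(n³) Cholesky factorization becomes costly; circulant-embedding schemes such as Davies–Harte are the usual faster alternative.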


1984 ◽  
Vol 16 (1) ◽  
pp. 16-16 ◽  
Author(s):  
Ioannis Karatzas ◽  
Steven E. Shreve

The stochastic control problem of tracking a Brownian motion by a process of bounded variation is reduced to a control problem with reflection at the origin, and the latter is related to a question of optimal stopping of Brownian motion absorbed at the origin. Direct probabilistic arguments can be used to show equivalences between the various problems.
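
To make the reduction concrete, here is a minimal Python sketch (an illustration, not the authors' construction) of the Skorokhod reflection at the origin: the pushing process L is the minimal nondecreasing process keeping X = B + L nonnegative, and it increases only when X sits at zero.

```python
import numpy as np

def reflected_bm(n_steps: int = 1000, T: float = 1.0, seed: int = 0):
    """Skorokhod reflection of a Brownian path at the origin.

    L_t = -min_{s<=t} B_s is the minimal nondecreasing process such that
    X_t = B_t + L_t stays nonnegative; L increases only when X hits 0.
    """
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    b = np.concatenate(([0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n_steps))))
    l = -np.minimum.accumulate(b)      # running minimum is <= 0 since B_0 = 0
    x = b + l                          # reflected path, x >= 0 everywhere
    return x, l
```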


1984 ◽  
Vol 16 (1) ◽  
pp. 15-15
Author(s):  
Ioannis Karatzas ◽  
Steven E. Shreve

The stochastic control problem of tracking a Brownian motion by a non-decreasing process (monotone follower) is related to a question of optimal stopping. Direct probabilistic arguments are employed to show that the two problems are equivalent and that both admit optimal solutions.
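
Below is a minimal Python sketch of a monotone-follower control of barrier type, the form such optimizers typically take; the barrier level and all names here are illustrative assumptions, not the paper's solution.

```python
import numpy as np

def monotone_follower(barrier: float = 0.5, n_steps: int = 1000, T: float = 1.0, seed: int = 0):
    """Barrier-type monotone follower tracking a Brownian path.

    The nondecreasing control xi_t = max(0, max_{s<=t} (B_s - barrier))
    pushes the tracked state X_t = B_t - xi_t just enough to keep it at
    or below the barrier (an illustrative policy, not the paper's optimum).
    """
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    b = np.concatenate(([0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n_steps))))
    xi = np.maximum(0.0, np.maximum.accumulate(b - barrier))   # nondecreasing, xi_0 = 0
    x = b - xi                                                 # controlled state, x <= barrier
    return x, xi
```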


2018 ◽  
Vol 24 (1) ◽  
pp. 437-461 ◽  
Author(s):  
Huyên Pham ◽  
Xiaoli Wei

We consider the stochastic optimal control problem of a McKean–Vlasov stochastic differential equation whose coefficients may depend on the joint law of the state and control. By using feedback controls, we reformulate the problem as a deterministic control problem in which the marginal distribution of the process is the controlled state variable, and we prove that the dynamic programming principle holds in its general form. Then, relying on the notion of differentiability with respect to probability measures recently introduced by P.L. Lions [Cours au Collège de France: Théorie des jeux à champ moyens, audio conference 2006–2012] and a special Itô formula for flows of probability measures, we derive the (dynamic programming) Bellman equation for the mean-field stochastic control problem and prove a verification theorem in our McKean–Vlasov framework. We give explicit solutions to the Bellman equation for the linear-quadratic mean-field control problem, with applications to mean-variance portfolio selection and a systemic risk model. We also consider a notion of lifted viscosity solutions for the Bellman equation and show the viscosity property and uniqueness of the value function of the McKean–Vlasov control problem. Finally, we consider the McKean–Vlasov control problem with open-loop controls and compare the associated dynamic programming equation with the case of closed-loop controls.
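
As a hedged illustration of the McKean–Vlasov setting (coefficients and feedback gains are assumptions, not the paper's model), the sketch below approximates the law of the controlled state by the empirical measure of an interacting particle system, using a linear feedback control in the state and its mean, the form that solves linear-quadratic mean-field problems.

```python
import numpy as np

def simulate_mkv_particles(n_particles: int = 5000, n_steps: int = 200,
                           T: float = 1.0, seed: int = 0):
    """Particle approximation of a controlled McKean-Vlasov SDE.

    Illustrative linear dynamics (all coefficients are assumptions):
        dX_t = (a X_t + abar E[X_t] + b alpha_t) dt + sigma dW_t,
    with E[X_t] replaced by the empirical mean over the particles and a
    linear feedback control alpha = -(k1 X + k2 E[X]).
    """
    a, abar, b, sigma = 0.5, 0.3, 1.0, 0.4
    k1, k2 = 1.0, 0.5
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    x = np.full(n_particles, 1.0)                 # common initial state X_0 = 1
    for _ in range(n_steps):
        m = x.mean()                              # empirical proxy for E[X_t]
        alpha = -(k1 * x + k2 * m)                # feedback in (state, empirical mean)
        x = x + (a * x + abar * m + b * alpha) * dt \
              + sigma * np.sqrt(dt) * rng.standard_normal(n_particles)
    return x

samples = simulate_mkv_particles()
print(samples.mean(), samples.std())              # empirical law of X_T
```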


2012 ◽  
Author(s):  
Krishnamoorthy Kalyanam ◽  
Swaroop Darbha ◽  
Myoungkuk Park ◽  
Meir Pachter ◽  
Phil Chandler ◽  
...  
