Bilevel Integer Programs with Stochastic Right-Hand Sides

Author(s):  
Junlong Zhang ◽  
Osman Y. Özaltın

We develop an exact value function-based approach to solve a class of bilevel integer programs with stochastic right-hand sides. We first study structural properties and design two methods to efficiently construct the value function of a bilevel integer program. Most notably, we generalize the integer complementary slackness theorem to bilevel integer programs. We also show that the value function of a bilevel integer program can be characterized by its values on a set of so-called bilevel minimal vectors. We then solve the value function reformulation of the original bilevel integer program with stochastic right-hand sides using a branch-and-bound algorithm. We demonstrate the performance of our solution methods on a set of randomly generated instances. We also apply the proposed approach to a bilevel facility interdiction problem. Our computational experiments show that the proposed solution methods can efficiently optimize large-scale instances. The performance of our value function-based approach is relatively insensitive to the number of scenarios, but it is sensitive to the number of constraints with stochastic right-hand sides. Summary of Contribution: Bilevel integer programs arise in many different application areas of operations research including supply chain, energy, defense, and revenue management. This paper derives structural properties of the value functions of bilevel integer programs. Furthermore, it proposes exact solution algorithms for a class of bilevel integer programs with stochastic right-hand sides. These algorithms extend the applicability of bilevel integer programs to a larger set of decision-making problems under uncertainty.
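For orientation, a generic sketch of the objects involved (illustrative notation; not the paper's exact formulation): the value function of a bilevel integer program maps a right-hand side $$b$$ to the optimal upper-level objective,
$$z(b)=\max\Bigl\{c^{\top}x+d^{\top}y:\;Ax\le b,\;x\in\mathbb{Z}^{n}_{+},\;y\in\arg\max\bigl\{q^{\top}y':\,Wy'\le h-Tx,\;y'\in\mathbb{Z}^{m}_{+}\bigr\}\Bigr\},$$
and a stochastic right-hand side enters through a scenario expectation of the form $$\max_{u\in U}\;f(u)+\sum_{s=1}^{S}p_{s}\,z\bigl(b_{s}-Bu\bigr),$$ which is the kind of value function reformulation that a branch-and-bound algorithm can then address once $$z(\cdot)$$ has been constructed.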

Author(s):  
Yangchen Pan ◽  
Hengshuai Yao ◽  
Amir-massoud Farahmand ◽  
Martha White

Dyna is an architecture for model-based reinforcement learning (RL), in which simulated experience from a model is used to update policies or value functions. A key component of Dyna is search control, the mechanism that generates the states and actions from which the agent queries the model, and it remains largely unexplored. In this work, we propose to generate such states by using the trajectory obtained from Hill Climbing (HC) on the current estimate of the value function. This has the effect of propagating value from high-value regions and of preemptively updating value estimates of the regions the agent is likely to visit next. We derive a noisy projected natural gradient algorithm for hill climbing and highlight a connection to Langevin dynamics. We provide an empirical demonstration on four classical domains that our algorithm, HC Dyna, can obtain significant sample-efficiency improvements. We study the properties of different sampling distributions for search control and find that there appears to be a benefit specifically from using samples generated by climbing the current value estimates from low-value to high-value regions.
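As a rough illustration of the search-control idea (a minimal sketch with a hand-written toy value estimate and assumed hyperparameters, not the authors' HC Dyna implementation):

```python
# Minimal sketch: generate search-control states for Dyna by noisy (Langevin-like)
# gradient ascent on the current value estimate. The quadratic toy value function
# and all parameter names below are illustrative assumptions.
import numpy as np

def value(s):
    # Toy value estimate: higher near the point (1, 1).
    return -np.sum((s - 1.0) ** 2)

def value_grad(s):
    # Gradient of the toy value estimate above.
    return -2.0 * (s - 1.0)

def hill_climb_states(s0, n_steps=20, step_size=0.05, noise_scale=0.1, bounds=(-2.0, 2.0)):
    """Collect states along a noisy hill-climbing trajectory on the value estimate."""
    states, s = [], np.array(s0, dtype=float)
    for _ in range(n_steps):
        s = s + step_size * value_grad(s) + noise_scale * np.random.randn(*s.shape)
        s = np.clip(s, *bounds)   # crude projection back onto the state space
        states.append(s.copy())   # these states later seed model-based planning updates
    return states

if __name__ == "__main__":
    trajectory = hill_climb_states(s0=[-1.5, -1.5])
    print(trajectory[-1])         # the trajectory drifts toward high-value regions
```

In a full Dyna loop, each collected state would be paired with an action, passed to the learned model, and the simulated transition used for an additional value-function update.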


2020 ◽  
Vol 9 (2) ◽  
pp. 459-470
Author(s):  
Helin Wu ◽  
Yong Ren ◽  
Feng Hu

In this paper, we investigate a class of Dynkin games under g-expectation induced by backward stochastic differential equations (BSDEs). The lower and upper value functions $$\underline{V}_t=\mathop{\mathrm{ess\,sup}}\nolimits_{\tau \in \mathcal{T}_t}\,\mathop{\mathrm{ess\,inf}}\nolimits_{\sigma \in \mathcal{T}_t}\mathcal{E}^g_t[R(\tau,\sigma)]$$ and $$\overline{V}_t=\mathop{\mathrm{ess\,inf}}\nolimits_{\sigma \in \mathcal{T}_t}\,\mathop{\mathrm{ess\,sup}}\nolimits_{\tau \in \mathcal{T}_t}\mathcal{E}^g_t[R(\tau,\sigma)]$$ are defined, respectively. Under suitable assumptions, a pair of saddle points is obtained, and the value function of the Dynkin game $$V(t)=\underline{V}_t=\overline{V}_t$$ follows. Furthermore, we also consider the constrained case of the Dynkin game.
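For reference, a saddle point in this setting is a pair of stopping times $$(\tau^{*},\sigma^{*})$$ satisfying (in the abstract's notation; the paper's precise conditions may include further qualifications)
$$\mathcal{E}^g_t[R(\tau,\sigma^{*})]\;\le\;\mathcal{E}^g_t[R(\tau^{*},\sigma^{*})]\;\le\;\mathcal{E}^g_t[R(\tau^{*},\sigma)]\quad\text{for all }\tau,\sigma\in\mathcal{T}_t,$$
which forces $$\underline{V}_t=\overline{V}_t=\mathcal{E}^g_t[R(\tau^{*},\sigma^{*})]$$.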


2009 ◽  
Vol 9 (1) ◽  
Author(s):  
Axel Anderson

This paper characterizes the behavior of value functions in dynamic stochastic discounted programming models near fixed points of the state space. When the second derivative of the flow payoff function is bounded, the value function is proportional to a linear function plus a geometric term. A specific formula for the exponent of this geometric term is provided; this exponent falls continuously in the rate of patience. If the state variable is a martingale, the second derivative of the value function is unbounded. If the state variable is instead a strict local submartingale, then the same holds for the first derivative of the value function. Thus, the proposed approximation is more accurate than a Taylor series approximation. The approximation result is used to characterize locally optimal policies in several fundamental economic problems.
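Reading the abstract's claim in symbols (an illustrative form, not the paper's exact statement), near a fixed point $$x^{*}$$ of the state space the value function behaves like
$$V(x)\;\approx\;a+b\,(x-x^{*})+c\,|x-x^{*}|^{\beta},$$
with the exponent $$\beta$$ given by an explicit formula and falling continuously in the rate of patience; the geometric term is what a Taylor expansion around $$x^{*}$$ cannot capture when the relevant derivatives blow up.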


Mathematics ◽  
2020 ◽  
Vol 8 (7) ◽  
pp. 1109 ◽  
Author(s):  
Agnieszka Wiszniewska-Matyszkiel ◽  
Rajani Singh

We study general classes of discrete-time dynamic optimization problems and dynamic games with feedback controls. In such problems, the solution is usually found via the Bellman or Hamilton–Jacobi–Bellman equation for the value function in the case of dynamic optimization, and a set of such coupled equations in the case of dynamic games, which cannot always be solved accurately. We derive general rules stating what kinds of errors in the calculation or computation of the value function do not result in errors in the calculation or computation of an optimal control or a Nash equilibrium along the corresponding trajectory. This general result covers not only errors arising from numerical methods but also errors arising from preliminary assumptions that replace the actual value functions by a priori assumed constraints on certain subsets. We illustrate the results with a motivating example, the Fish Wars, which has singularities in the payoffs.
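For context, the feedback (Bellman) equation referred to here has, in the single decision-maker case, the standard form (generic notation)
$$V(x)\;=\;\max_{u\in U(x)}\bigl\{P(x,u)+\beta\,V\bigl(f(x,u)\bigr)\bigr\},$$
with one such coupled equation per player in the dynamic-game case, each player maximizing with the other players' feedback strategies held fixed.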


2002 ◽  
Vol 34 (01) ◽  
pp. 141-157 ◽  
Author(s):  
Paul Dupuis ◽  
Hui Wang

We consider a class of optimal stopping problems in which the ability to stop depends on an exogenous Poisson signal process: we can stop only at the Poisson jump times. Even though the time variable in these problems has a discrete aspect, a variational inequality can be obtained by considering an underlying continuous-time structure. Depending on whether stopping is allowed at t = 0, the value function exhibits different properties across the optimal exercise boundary. Indeed, the value function is only 𝒞⁰ across the optimal boundary when stopping is allowed at t = 0 and 𝒞² otherwise, both contradicting the usual 𝒞¹ smoothness that is necessary and sufficient for applying the principle of smooth fit. We also discuss an equivalent stochastic control formulation of these stopping problems. Finally, we derive the asymptotic behaviour of the value functions and optimal exercise boundaries as the intensity of the Poisson process goes to infinity or, roughly speaking, as the problems converge to the classical continuous-time optimal stopping problems.
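For comparison, in a classical continuous-time optimal stopping problem with reward $$g$$, discount rate $$r$$ and generator $$\mathcal{L}$$, the value function solves a variational inequality of the generic form
$$\min\bigl\{rV(x)-\mathcal{L}V(x),\;V(x)-g(x)\bigr\}=0,$$
and smooth fit asks for 𝒞¹ regularity of $$V$$ across the exercise boundary; the point above is that this 𝒞¹ property fails in the Poisson-constrained setting.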


2003 ◽  
Vol 05 (02) ◽  
pp. 167-189 ◽  
Author(s):  
Ştefan Mirică

We give complete proofs of the verification theorems recently announced by the author for the "pairs of relatively optimal feedback strategies" of an autonomous differential game. These concepts are intended to describe the possibly optimal solutions of a differential game, while the corresponding value functions are used as "instruments" for proving the relative optimality and also as "auxiliary characteristics" of the differential game. The six verification theorems in the paper are proved under different regularity assumptions, accompanied by suitable differential inequalities satisfied by the generalized derivatives, mainly of contingent type, of the value function.

