Modeling the Marginal Value of Rainforest Losses: A Dynamic Value Function Approach

2015
Author(s): Jon Strand

Optimization
2018, Vol 68 (2-3), pp. 433-455
Author(s): Stephan Dempe, Boris S. Mordukhovich, Alain B. Zemkoho

2017, Vol 27 (1), pp. 5-27
Author(s): Dmitry B. Rokhlin, Anatoly Usov

Abstract We consider a model of fishery management in which n agents exploit a single population with a strictly concave, continuously differentiable growth function of Verhulst type. If the agents' actions are coordinated and directed towards maximizing the discounted cooperative revenue, then the biomass stabilizes at the level defined by the well-known "golden rule". We show that, for independent myopic harvesting agents, such optimal (or ε-optimal) cooperative behavior can be induced by a proportional tax that depends on the resource stock and equals the marginal value function of the cooperative problem. To implement this taxation scheme we prove that this value function is strictly concave and continuously differentiable, although the instantaneous individual revenues may be neither concave nor differentiable.
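
For intuition, the following minimal sketch (not from the article) computes the golden-rule steady state for a Verhulst (logistic) growth function under a constant discount rate, assuming a constant unit price and zero harvesting cost; these simplifying assumptions and the parameter values are illustrative only.

```python
import numpy as np

# Minimal sketch of the "golden rule" steady state for a Verhulst (logistic)
# growth function f(x) = r*x*(1 - x/K) under discount rate delta, assuming
# constant unit price and zero harvesting cost (illustrative assumptions only).
# In that simple case the optimal steady-state biomass x* solves f'(x*) = delta.

r, K, delta = 0.8, 100.0, 0.05   # hypothetical growth rate, capacity, discount rate

def growth(x):
    return r * x * (1.0 - x / K)

def growth_prime(x):
    return r * (1.0 - 2.0 * x / K)

# Closed form for logistic growth: f'(x) = r*(1 - 2x/K) = delta
x_star = 0.5 * K * (1.0 - delta / r)
h_star = growth(x_star)           # sustainable harvest at the steady state

print(f"steady-state biomass x* = {x_star:.2f}, check f'(x*) = {growth_prime(x_star):.3f}")
print(f"sustainable harvest  h* = {h_star:.2f}")
```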


Author(s): Kaisheng Liu, Yumei Xing

This article puts forward bi-matrix games with crisp parametric payoffs based on an interval value function approach. We conclude that the equilibrium solutions of the game model can be converted into optimal solutions of a pair of non-linear optimization problems. Finally, experimental results show the efficiency of the model.
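
For context, the sketch below solves a small crisp bi-matrix game through the classical Mangasarian–Stone program, a standard way to recast Nash equilibria as solutions of a non-linear optimization problem; it is not the interval value function formulation of the article, and the payoff matrices and solver choice are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# Classical Mangasarian-Stone program for a crisp bi-matrix game (A, B):
# a Nash equilibrium (x, y) is a global solution, with objective value 0, of
#   max  x^T (A + B) y - alpha - beta
#   s.t. A y <= alpha * 1,  B^T x <= beta * 1,  x, y in the simplex.
# Payoff matrices below are illustrative only.
A = np.array([[3.0, 0.0], [0.0, 2.0]])
B = np.array([[2.0, 0.0], [0.0, 3.0]])
m, n = A.shape

def unpack(z):
    x, y, alpha, beta = z[:m], z[m:m + n], z[m + n], z[m + n + 1]
    return x, y, alpha, beta

def neg_objective(z):
    x, y, alpha, beta = unpack(z)
    return -(x @ (A + B) @ y - alpha - beta)

constraints = [
    {"type": "eq",   "fun": lambda z: unpack(z)[0].sum() - 1.0},          # x on simplex
    {"type": "eq",   "fun": lambda z: unpack(z)[1].sum() - 1.0},          # y on simplex
    {"type": "ineq", "fun": lambda z: unpack(z)[2] - A @ unpack(z)[1]},   # alpha*1 - A y >= 0
    {"type": "ineq", "fun": lambda z: unpack(z)[3] - B.T @ unpack(z)[0]}, # beta*1 - B^T x >= 0
]
bounds = [(0, 1)] * (m + n) + [(None, None)] * 2

# The program is non-convex, so a local solver such as SLSQP may need restarts;
# a candidate is an equilibrium only if the reported gap is (close to) zero.
z0 = np.concatenate([np.full(m, 1.0 / m), np.full(n, 1.0 / n), [A.max(), B.max()]])
res = minimize(neg_objective, z0, method="SLSQP", bounds=bounds, constraints=constraints)
x, y, alpha, beta = unpack(res.x)
print("x =", np.round(x, 3), "y =", np.round(y, 3), "gap =", round(-res.fun, 6))
```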


2020, Vol 56 (3), pp. 675-693
Author(s): Hans M. Amman, Marco P. Tucci

Abstract In a previous paper, Amman et al. (Macroecon Dyn, 2018) compare the two dominant approaches for solving models with optimal experimentation (also called active learning): the value function approach and the approximation method. Using the same model and dataset as Beck and Wieland (J Econ Dyn Control 26:1359–1377, 2002), they find that the approximation method produces solutions close to those generated by the value function approach, and they identify some elements of the model specification that affect the difference between the two solutions. They conclude that the differences are small when the effects of learning are limited. However, the dataset used in that experiment describes a situation where the controller is dealing with a nonstationary process and there is no penalty on the control. The goal of this paper is to see whether their conclusions hold in the more commonly studied case of a controller facing a stationary process and a positive penalty on the control.
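
To make the setting concrete, the sketch below runs a one-parameter learning-and-control loop with a stationary process and a positive control penalty, updating a Gaussian belief about the unknown slope with standard Kalman/Bayes formulas; it is a certainty-equivalent baseline with illustrative parameter values, not the Beck–Wieland model itself nor the dual-control solutions compared in the paper.

```python
import numpy as np

# Illustrative sketch only: a one-parameter learning-and-control loop in the
# spirit of the setting discussed above, NOT the exact Beck-Wieland model.
# The controller faces y_t = beta * u_t + eps_t with unknown constant beta
# (a stationary case), pays a penalty lam * u_t**2 on the control, and updates
# a Gaussian belief (b, v) about beta by the standard Kalman/Bayes formulas.
rng = np.random.default_rng(0)
beta_true, sigma2, lam, y_target = -0.5, 1.0, 0.1, 1.0   # assumed values
b, v = 1.0, 4.0          # prior mean and variance for beta (assumed values)

for t in range(20):
    # Certainty-equivalent control: minimize (b*u - y_target)**2 + lam*u**2 over u
    u = b * y_target / (b**2 + lam)
    y = beta_true * u + rng.normal(scale=np.sqrt(sigma2))
    # Bayesian update of the belief about beta given the observation (y, u)
    if abs(u) > 1e-12:
        k = v * u / (u**2 * v + sigma2)            # Kalman gain
        b, v = b + k * (y - b * u), (1.0 - k * u) * v
    print(f"t={t:2d}  u={u:+.3f}  belief b={b:+.3f}  var={v:.3f}")
```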

