Risk aversion and risk seeking in multicriteria forest management: a Markov decision process approach

2017 · Vol. 47 (6) · pp. 800-807
Author(s):  
Joseph Buongiorno ◽  
Mo Zhou ◽  
Craig Johnston

Markov decision process models were extended to reflect some consequences of the risk attitude of forestry decision makers. One approach consisted of maximizing the expected value of a criterion subject to an upper bound on the variance or, symmetrically, minimizing the variance subject to a lower bound on the expected value. The other method used the certainty equivalent criterion, a weighted average of the expected value and variance. The two approaches were applied to data for mixed softwood–hardwood forests in the southern United States with multiple financial and ecological criteria. Compared with risk neutrality or risk seeking, financial risk aversion reduced expected annual financial returns and production and led to shorter cutting cycles that lowered the expected diversity of tree species and size, stand basal area, stored CO2e, and old-growth area.
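The certainty-equivalent criterion described above, a weighted average of the expected value and the variance of a criterion, can be sketched for a small Markov chain. The transition probabilities, rewards, and risk-aversion weight below are illustrative placeholders, not values from the paper.

```python
import numpy as np

# Illustrative two-state Markov chain: the annual reward depends on the state.
# Transition probabilities and rewards are hypothetical, not from the paper.
P = np.array([[0.8, 0.2],
              [0.3, 0.7]])   # row-stochastic transition matrix
r = np.array([100.0, 40.0])  # reward received in each state

# Stationary distribution pi solves pi P = pi with sum(pi) = 1; it is the
# eigenvector of P^T for eigenvalue 1, normalized to sum to one.
eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
pi = pi / pi.sum()

mean = float(pi @ r)              # long-run expected annual reward
var = float(pi @ (r - mean) ** 2) # long-run variance of the annual reward

# Certainty equivalent: expected value penalized by variance. A positive
# weight lam expresses risk aversion; a negative weight, risk seeking.
lam = 0.01
ce = mean - lam * var

print(mean, var, ce)
```

With risk aversion (`lam > 0`), a policy with a lower mean but much lower variance can have the higher certainty equivalent, which is what drives the shorter cutting cycles reported above.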

2021
Author(s):  
Martin Sieberer ◽  
Torsten Clemens

Abstract

Hydrocarbon field (re-)development requires that a multitude of decisions be made under uncertainty. These decisions include the type and size of surface facilities and the location, configuration, and number of wells, but also which data to acquire. Both types of decisions, which development to choose and which data to acquire, are strongly coupled. The aim of appraisal is to maximize value while minimizing data acquisition costs. These decisions have to be made under uncertainty owing to the inherent uncertainty of the subsurface, but also of costs and economic parameters. Conventional Value of Information (VOI) evaluations can be used to determine how much can be spent to acquire data. However, VOI is very challenging to calculate for complex sequences of decisions with various costs and when including the risk attitude of the decision maker. We use a fully observable Markov decision process (MDP) to determine the policy for the sequence and type of measurements and decisions to make. A fully observable MDP is characterised by states (here: a description of the system at a certain point in time), actions (here: measurements and development scenarios), a transition function (the probabilities of transitioning from one state to the next), and rewards (costs of measurements, Expected Monetary Value (EMV) of development options). Solving the MDP gives the optimal policy, the sequence of decisions, the Probability of Maturation (POM) of a project, the EMV, the expected loss, the expected appraisal costs, and the Probability of Economic Success (PES). These key performance indicators can then be used to select, from a portfolio of projects, the ones generating the highest expected reward for the company. Combining production forecasts from numerical model ensembles with probabilistic capital and operating expenditures and economic parameters allows for quantitative decision making under uncertainty.
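An appraisal MDP of the kind described above can be sketched with value iteration: each state has a set of actions, each (state, action) pair has an immediate reward and a transition distribution, and the optimal policy maximizes the action value in every state. The states, probabilities, and rewards below are hypothetical placeholders, not the authors' model.

```python
# Minimal fully observable appraisal MDP solved by value iteration.
# All states, actions, probabilities, and rewards are illustrative.
STATES = ["undrilled", "appraised_good", "appraised_poor"]
ACTIONS = {
    "undrilled": ["measure", "develop", "abandon"],
    "appraised_good": ["develop", "abandon"],
    "appraised_poor": ["develop", "abandon"],
}
# transition[(state, action)] = list of (next_state, probability);
# only non-terminal actions transition to another state.
TRANSITION = {
    ("undrilled", "measure"): [("appraised_good", 0.6), ("appraised_poor", 0.4)],
}
# Immediate rewards: measurement cost, and the EMV of developing or
# abandoning from each state (hypothetical monetary units).
REWARD = {
    ("undrilled", "measure"): -10.0,
    ("undrilled", "develop"): 20.0,   # develop without appraisal data
    ("undrilled", "abandon"): 0.0,
    ("appraised_good", "develop"): 80.0,
    ("appraised_good", "abandon"): 0.0,
    ("appraised_poor", "develop"): -40.0,
    ("appraised_poor", "abandon"): 0.0,
}
TERMINAL_ACTIONS = {"develop", "abandon"}  # end the decision sequence

def q_value(s, a, V, gamma):
    """Immediate reward plus discounted expected value of the next state."""
    q = REWARD[(s, a)]
    if a not in TERMINAL_ACTIONS:
        q += gamma * sum(p * V[s2] for s2, p in TRANSITION[(s, a)])
    return q

def solve(gamma=1.0, iterations=100):
    """Value iteration; returns the optimal state values and policy."""
    V = {s: 0.0 for s in STATES}
    for _ in range(iterations):
        for s in STATES:
            V[s] = max(q_value(s, a, V, gamma) for a in ACTIONS[s])
    policy = {s: max(ACTIONS[s], key=lambda a: q_value(s, a, V, gamma))
              for s in STATES}
    return V, policy

V, policy = solve()
print(V, policy)
```

In this toy instance the optimal policy is to measure first, develop only on a good appraisal result, and abandon otherwise; the value of the initial state is the project's EMV net of expected appraisal costs, the kind of key performance indicator used above to rank a portfolio of projects.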
