State of the Art of Contingent Valuation

Author(s):  
Tim Haab ◽  
Lynne Lewis ◽  
John Whitehead

The contingent valuation method (CVM) is a stated preference approach to the valuation of non-market goods. It has a history of more than 50 years, beginning with a clever suggestion to simply ask people for their consumer surplus. The first study was conducted in the 1960s, and over 10,000 studies have been conducted to date. The CVM is used to estimate the use and non-use values of changes in the environment. It is one of the more flexible valuation methods, having been applied in a large number of contexts and policies. The CVM requires construction of a hypothetical scenario that makes clear what will be received in exchange for payment. The scenario must be realistic and consequential. Economists prefer revealed preference methods for environmental valuation because they rely on actual behavior data. In unguarded moments, economists are quick to condemn stated preference methods because they rely on hypothetical behavior data. Stated preference methods should be seen as approaches for estimating the value of changes in the allocation of environmental and natural resources for which no other method can be used. The CVM has a tortured history, having suffered slings and arrows from industry-funded critics following the Exxon Valdez and British Petroleum (BP)–Deepwater Horizon oil spills. The critics have harped on studies that fail certain tests of hypothetical bias and scope, among others. Nonetheless, CVM proponents have found that it produces value estimates similar to those obtained from revealed preference methods such as the travel cost and hedonic methods. The CVM has produced willingness to pay (WTP) estimates that exhibit internal validity. CVM research teams must have a range of capabilities. A CVM study involves survey design so that the elicited WTP estimates have face validity. Questionnaire development and data collection are skills that must be mastered. Welfare economic theory is used to guide empirical tests of theory such as the scope test. Limited dependent variable econometric methods are often used with panel data to test value models and develop estimates of WTP. The popularity of the CVM is on the wane; indeed, another name for this article could be “the rise and fall of CVM,” not because the CVM is any less useful than other valuation methods, but because best practice in the CVM is merging with discrete choice experiments, and researchers increasingly prefer to call their approach discrete choice experiments. Nevertheless, the problems that plague discrete choice experiments are the same as those that plague contingent valuation. Discrete choice experiment–contingent valuation–stated preference researchers should continue down the same familiar path of methods development.
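As a concrete illustration of the limited dependent variable methods mentioned above, the sketch below shows how mean WTP can be recovered from single-bounded dichotomous-choice responses with a probit model. It is a minimal example on simulated data, not drawn from any study cited here; the bid amounts, sample size, and variable names are illustrative assumptions.

```python
# Minimal sketch (simulated data): mean WTP from single-bounded dichotomous-choice
# contingent valuation responses via a probit model, i.e. the kind of limited
# dependent variable method referred to in the abstract above.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 1000
bid = rng.choice([5, 10, 25, 50, 100], size=n)     # randomly assigned bid amounts ($)
true_wtp = rng.normal(loc=40, scale=25, size=n)    # latent WTP in the simulated population
yes = (true_wtp > bid).astype(int)                 # respondent says "yes" if WTP exceeds the bid

X = sm.add_constant(bid.astype(float))             # Pr(yes) = Phi(b0 + b1 * bid), with b1 < 0 expected
probit = sm.Probit(yes, X).fit(disp=False)
b0, b1 = probit.params
mean_wtp = -b0 / b1                                # mean (and median) WTP under the linear probit model
print(f"Estimated mean WTP: ${mean_wtp:.2f}")      # should land near the simulated mean of $40
```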

2019 ◽  
Vol 47 (3) ◽  
pp. 1133-1172
Author(s):  
Nathan P Kemper ◽  
Jennie S Popp ◽  
Rodolfo M Nayga

Abstract One limitation of stated-preference methods is the potential for hypothetical bias. To address this, the honesty oath has been used as an ex ante technique to reduce hypothetical bias. Our study provides a Query Theory account of the honesty oath in a discrete-choice experiment setting, examining the mechanism behind the oath's effectiveness. Our results show that the honesty oath can change the content and order of queries, potentially reducing hypothetical bias in discrete choice experiments. The study suggests the potential usefulness of Query Theory for examining respondents' thought processes in valuation studies.


2015 ◽  
Vol 44 (1) ◽  
pp. 1-16 ◽  
Author(s):  
Frederik Haarig ◽  
Stephan Mühlig

Background: With the growing importance of patient orientation and patient participation in health care, determining the subjective treatment-goal preferences of different stakeholders (patients, clinicians, relatives) is attracting increasing research interest. Stated preference methods allow patient-oriented research questions to be investigated systematically. Objectives: To identify and describe (by formal, methodological, and content-related characteristics) studies that use stated preference methods (conjoint measurement, conjoint analysis, discrete choice experiments) in the care of patients with mental disorders, and to derive from them an assessment of the method's applicability (potential, benefits, limitations) for future patient-oriented research. Method: Systematic literature search with the following inclusion criteria: Participants: interventions for the treatment of patients with a mental disorder; Intervention: psychotherapeutic, psychiatric, or primary-care treatment (inpatient, day-patient, outpatient); Comparison: studies with no control group (single-group design) or with at least one control group; Outcomes: conjoint-specific utility estimates. Results: Conjoint analyses are used to measure treatment-goal preferences across a variety of research designs and heterogeneous conditions (sample, disorder, setting, intervention, target dimension). The conjoint design is usually constructed as a reduced (orthogonal) design with the help of software packages, and data are collected by questionnaire. Conclusions: Conjoint analyses permit differentiated statements about treatment-preference structures based on relational judgment scenarios and thus provide a sounder basis for improving patient orientation in health care. The evidence shows that the method is suitable for investigating patient-oriented questions (mostly concerning pharmacotherapy and combination treatment) in the care of mental disorders (depressive disorders, ADHD, schizophrenia, bipolar disorders, tobacco and alcohol dependence, and chronic pain). However, successful use of the method depends on several prerequisites (including independence of the treatment-goal aspects under consideration and design complexity). Further research is needed, among other things, on disorders not yet studied (e.g., somatoform, anxiety, eating, and personality disorders) and on interventions not yet studied (e.g., psychotherapy alone, disorder-specific treatments).


2020 ◽  
Vol 10 (4) ◽  
pp. 756-767 ◽  
Author(s):  
James B. Tidwell

Abstract Significant investment is needed to improve peri-urban sanitation. Consumer willingness to pay may bridge some of this gap. While contingent valuation has been frequently used to assess this demand, there are few comparative studies validating this method for water and sanitation. We use contingent valuation to estimate demand for flushing toilets, solid doors, and inside and outside locks on doors, and compare the results with those from hedonic pricing and discrete choice experiments. We collected data for a randomized, controlled trial in peri-urban Lusaka, Zambia in 2017. Tenants were randomly allocated to discrete choice experiments (n = 432) or contingent valuation (n = 458). Estimates using contingent valuation were lower than discrete choice experiments for solid doors (US$2.6 vs. US$3.4), higher for flushing toilets ($3.4 vs. $2.2), and of the opposite sign for inside and outside locks ($1.6 vs. −$1.1). Hedonic pricing aligned more closely with discrete choice experiments for flushing toilets ($1.7) and locks (−$0.9), suggesting significant and inconsistent bias in contingent valuation estimates. While these results provide strong evidence of consumer willingness to pay for sanitation, researchers and policymakers should carefully consider demand assessment methods because of the inconsistent, and often inflated, bias of contingent valuation.


Author(s):  
Denzil G. Fiebig ◽  
Hong Il Yoo

Stated preference methods are used to collect individual-level data on what respondents say they would do when faced with a hypothetical but realistic situation. The hypothetical nature of the data has long been a source of concern among researchers as such data stand in contrast to revealed preference data, which record the choices made by individuals in actual market situations. But there is considerable support for stated preference methods as they are a cost-effective means of generating data that can be specifically tailored to a research question and, in some cases, such as gauging preferences for a new product or non-market good, there may be no practical alternative source of data. While stated preference data come in many forms, the primary focus in this article is data generated by discrete choice experiments, and thus the econometric methods will be those associated with modeling binary and multinomial choices with panel data.
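To make the econometric setting concrete, the sketch below fits a pooled conditional logit to simulated discrete choice experiment data with a panel structure (several choice tasks per respondent). It is an illustrative baseline under stated assumptions, not the authors' code; the attribute values, sample sizes, and coefficient names are made up, and in practice a mixed logit with respondent-level random coefficients is the usual way to exploit the panel dimension.

```python
# Minimal sketch (simulated data): pooled conditional logit for panel stated-choice data.
import numpy as np
from scipy.optimize import minimize
from scipy.special import logsumexp

rng = np.random.default_rng(0)
n_resp, n_tasks, n_alts, n_attr = 200, 8, 3, 2
true_beta = np.array([1.0, -0.5])                   # e.g. a quality attribute and a cost attribute

# Attribute levels for every respondent x task x alternative
X = rng.uniform(0, 1, size=(n_resp, n_tasks, n_alts, n_attr))
# Simulated utility-maximizing choices with type-I extreme value (Gumbel) errors
eps = rng.gumbel(size=(n_resp, n_tasks, n_alts))
choice = np.argmax(X @ true_beta + eps, axis=-1)    # index of the chosen alternative per task

def neg_loglik(beta):
    v = X @ beta                                    # systematic utilities
    logp = v - logsumexp(v, axis=-1, keepdims=True) # log choice probabilities (conditional logit)
    chosen = np.take_along_axis(logp, choice[..., None], axis=-1)
    return -chosen.sum()                            # pooled over respondents and tasks

res = minimize(neg_loglik, x0=np.zeros(n_attr), method="BFGS")
print("Estimated attribute coefficients:", res.x)   # should be close to true_beta
print("Implied WTP for attribute 1:", -res.x[0] / res.x[1])
```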


Author(s):  
Deborah J. Street ◽  
Rosalie Viney

Discrete choice experiments (DCEs) are a popular stated preference tool in health economics and have been used to address policy questions, establish consumer preferences for health and healthcare, and value health states, among other applications. They are particularly useful when revealed preference data are not available. Most commonly in choice experiments, respondents are presented with a situation in which a choice must be made and with a set of possible options. The options are described by a number of attributes, each of which takes a particular level for each option. The set of possible options is called a “choice set,” and a set of choice sets comprises the choice experiment. The attributes and levels are chosen by the analyst to allow modeling of the underlying preferences of respondents. Respondents are assumed to make utility-maximizing decisions, and the goal of the choice experiment is to estimate how the attribute levels affect the utility of the individual. Utility is assumed to have a systematic component (related to the attributes and levels) and a random component (which may relate to unobserved determinants of utility, individual characteristics, or random variation in choices), and an assumption must be made about the distribution of the random component. The structure of the set of choice sets shown to respondents, drawn from the universe of possible choice sets represented by the attributes and levels, determines which models can be fitted to the observed choice data and how accurately the effects of the attribute levels can be estimated. Important structural issues include the number of options in each choice set and whether or not options in the same choice set have common attribute levels. Two broad approaches to constructing the set of choice sets that make up a DCE exist, theoretical and algorithmic, and there is no consensus about which approach consistently delivers better designs, although simulation studies and in-field comparisons of designs constructed by both approaches exist.
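A standard way to write down the utility structure described above is sketched below in generic notation (the symbols are illustrative, not taken from the article): respondent n facing choice set s assigns option j a utility with a systematic part driven by the attribute levels and a random part, and if the random parts are assumed independent type-I extreme value, the choice probabilities take the familiar conditional logit form.

```latex
% Generic random utility formulation (illustrative notation):
% respondent n, choice set s, option j, attribute vector x_{nsj}.
U_{nsj} = V_{nsj} + \varepsilon_{nsj}, \qquad V_{nsj} = \mathbf{x}_{nsj}'\boldsymbol{\beta}

% Under i.i.d. type-I extreme value errors, the probability that option j
% is chosen from choice set C_{ns} is
P_{nsj} = \frac{\exp(\mathbf{x}_{nsj}'\boldsymbol{\beta})}
               {\sum_{k \in C_{ns}} \exp(\mathbf{x}_{nsk}'\boldsymbol{\beta})}
```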


Author(s):  
Anders Dugstad ◽  
Kristine M. Grimsrud ◽  
Gorm Kipperberg ◽  
Henrik Lindhjem ◽  
Ståle Navrud

Abstract Sensitivity to scope in nonmarket valuation refers to the property that people are willing to pay more for a higher quality or quantity of a nonmarket public good. Establishing significant scope sensitivity has been an important check of validity and a point of contention for decades in stated preference research, primarily in contingent valuation. Recently, researchers have begun to differentiate between statistical and economic significance. This paper contributes to this line of research by studying the significance of scope effects in discrete choice experiments (DCEs) using the concept of the scope elasticity of willingness to pay. We first formalize scope elasticity in a DCE context and relate it to economic significance. Next, we review a selection of DCE studies from the environmental valuation literature and derive their implied scope elasticity estimates. We find that scope sensitivity analysis as a validity diagnostic is uncommon in the DCE literature, and many studies assume unit-elastic scope sensitivity by employing a restrictive functional form in estimation. When more flexible specifications are employed, the tendency is towards inelastic scope sensitivity. Then, we apply the scope elasticity concept to primary DCE data on people’s preferences for expanding the production of renewable energy in Norway. We find that the estimated scope elasticities vary between 0.13 and 0.58, depending on the attribute analyzed, model specification, geographic subsample, and the unit of measurement for a key attribute. While there is no strict and universally applicable benchmark for determining whether scope effects are economically significant, we deem these estimates to be of an adequate and plausible order of magnitude. Implications of the results for future DCE research are provided.
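For reference, one generic way to define the scope elasticity of WTP consistent with the abstract's usage (values between 0 and 1 indicate positive but inelastic scope sensitivity, a value of 1 unit-elastic sensitivity) is shown below; the article's exact formalization may differ.

```latex
% Point and arc versions of the scope elasticity of WTP with respect to the
% quantity or quality level Q of the nonmarket good (generic definition).
\eta_Q = \frac{\partial\,\mathrm{WTP}(Q)}{\partial Q}\cdot\frac{Q}{\mathrm{WTP}(Q)},
\qquad
\eta_Q \approx \frac{\ln \mathrm{WTP}(Q_2) - \ln \mathrm{WTP}(Q_1)}{\ln Q_2 - \ln Q_1}
\quad \text{for scope levels } Q_1 < Q_2.
```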

