Risk Preferences and Incentives for Evidence Acquisition and Disclosure

2020 ◽  
Vol 36 (2) ◽  
pp. 314-342
Author(s):  
Erin Giffin ◽  
Erik Lillethun

Abstract Civil disputes feature parties with biased incentives acquiring evidence with costly effort. Evidence may then be revealed at trial or concealed to persuade a judge or jury. Using a persuasion game, we examine how a litigant’s risk preferences influence evidence acquisition incentives. We find that high risk aversion depresses equilibrium evidence acquisition. We then study the problem of designing legal rules to balance good decision making against the costs of acquisition. We characterize the optimal design, which differs from equilibrium decision rules. Notably, for very risk-averse litigants, the design is “over-incentivized” with stronger rewards and punishments than in equilibrium. We find similar results for various common legal rules, including admissibility of evidence and maximum awards. These results have implications for how rules could differentiate between high risk aversion types (e.g., individuals) and low risk aversion types (e.g., corporations) to improve evidence acquisition efficiency.
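A minimal numerical sketch of the mechanism above, under assumptions of our own rather than the authors’ persuasion-game model (CRRA utility and made-up awards, probabilities, and effort cost): acquiring evidence is a gamble relative to settling for the default award, so its certainty equivalent falls as risk aversion rises, and a sufficiently risk-averse litigant stops acquiring.

```python
import numpy as np

def crra(w, gamma):
    """CRRA utility over final wealth; gamma = 0 is risk neutral."""
    w = np.asarray(w, dtype=float)
    if np.isclose(gamma, 1.0):
        return np.log(w)
    return w ** (1.0 - gamma) / (1.0 - gamma)

def certainty_equivalent(outcomes, probs, gamma):
    """Wealth level whose utility equals the lottery's expected utility."""
    eu = np.dot(probs, crra(outcomes, gamma))
    if np.isclose(gamma, 1.0):
        return np.exp(eu)
    return ((1.0 - gamma) * eu) ** (1.0 / (1.0 - gamma))

# Illustrative numbers only (not from the paper).
wealth = 100.0        # litigant's baseline wealth
safe_award = 10.0     # award when no evidence is acquired
cost = 5.0            # effort cost of acquiring evidence
p_favorable = 0.6     # chance the acquired evidence is favorable
award_good, award_bad = 40.0, 5.0

no_acquire = wealth + safe_award                          # riskless payoff
acquire_outcomes = np.array([wealth - cost + award_good,
                             wealth - cost + award_bad])
acquire_probs = np.array([p_favorable, 1.0 - p_favorable])

for gamma in [0.0, 1.0, 3.0, 6.0, 9.0, 12.0]:
    ce = certainty_equivalent(acquire_outcomes, acquire_probs, gamma)
    choice = "acquire" if ce > no_acquire else "do not acquire"
    print(f"gamma={gamma:4.1f}  CE(acquire)={ce:6.1f}  safe={no_acquire:5.1f}  -> {choice}")
```

With these numbers the expected value of acquiring (121) beats the safe 110, so low-gamma litigants acquire; past a high enough gamma the certainty equivalent drops below 110 and acquisition stops, which is the qualitative force behind the finding that high risk aversion depresses equilibrium evidence acquisition.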

2004 ◽  
Vol 2 (2) ◽  
Author(s):  
Gary E. Marche

Although the corruption and optimal law enforcement literatures have addressed the effects of corruption, little has been done to analyze the decision to become corrupt. For example, little is known about risk preferences and how they might affect the nature of a corrupt exchange scheme. To address this question, a theoretical analysis is developed that considers the noncoercive incentives and circumstances necessary for a law enforcement official, assumed averse to criminal risk, to choose a corrupt exchange with organized crime that involves murder. Risk aversion and the severity of the crime involved are shown to reduce the likelihood of detecting the corruption scheme, and murder is shown to be optimal. Corruption schemes involving less risk-averse offenders are analyzed and compared.


2015 ◽  
Vol 22 (4) ◽  
pp. 426-435 ◽  
Author(s):  
Nelleke C. van Wouwe ◽  
Kristen E. Kanoff ◽  
Daniel O. Claassen ◽  
K. Richard Ridderinkhof ◽  
Peter Hedera ◽  
...  

Abstract Objectives: Huntington’s disease (HD) is a neurodegenerative disorder that produces a bias toward risky, reward-driven decisions in situations where the outcomes of decisions are uncertain and must be discovered. However, it is unclear whether HD patients show similar biases in decision-making when learning demands are minimized and prospective risks and outcomes are known explicitly. We investigated how risk decision-making strategies and adjustments are altered in HD patients when reward contingencies are explicit. Methods: HD (N=18) and healthy control (HC; N=17) participants completed a risk-taking task in which they made a series of independent choices between a low-risk/low-reward option and a high-risk/high-reward option. Results: Computational modeling showed that, compared to HC, who showed a clear preference for low-risk over high-risk decisions, the HD group valued high-risk decisions more than low-risk decisions, especially when high-risk choices were rewarded. The strategy analysis indicated that when high-risk options were rewarded, HC adopted a conservative risk strategy on the next trial by preferring the low-risk option (i.e., they counted their blessings and then played the surer bet). In contrast, following a rewarded high-risk choice, HD patients showed a clear preference for repeating the high-risk choice. Conclusions: These results indicate a pattern of high-risk/high-reward decision bias in HD that persists when outcomes and risks are certain. The allure of high-risk/high-reward decisions in situations of risk certainty and uncertainty expands our insight into the dynamic decision-making deficits that create considerable clinical burden in HD. (JINS, 2016, 22, 426–435)
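The “computational modeling” of choices between explicitly known risky options is usually some variant of a subjective-valuation rule combined with a softmax choice rule. The sketch below is a generic, hypothetical member of that model family; the option payoffs, the curvature parameter alpha, and the inverse temperature beta are our illustrative choices, not the study’s fitted values.

```python
import numpy as np

# Two options with explicitly known payoffs (hypothetical values):
# low-risk : 80% chance of 20 points
# high-risk: 20% chance of 80 points
OPTIONS = {"low": (0.8, 20.0), "high": (0.2, 80.0)}

def subjective_value(p, magnitude, alpha):
    """Probability-weighted value with curvature alpha:
    alpha < 1 bends toward risk aversion, alpha > 1 toward risk seeking."""
    return p * magnitude ** alpha

def p_choose_high(alpha, beta):
    """Softmax probability of selecting the high-risk option."""
    v_low = subjective_value(*OPTIONS["low"], alpha)
    v_high = subjective_value(*OPTIONS["high"], alpha)
    return 1.0 / (1.0 + np.exp(-beta * (v_high - v_low)))

for label, alpha in [("risk-averse agent (HC-like)", 0.8),
                     ("risk-seeking agent (HD-like)", 1.1)]:
    print(f"{label}: P(choose high-risk) = {p_choose_high(alpha, beta=0.5):.2f}")
```

Fitting alpha (and beta) per participant is what lets models of this kind say, as the abstract does, that one group “valued” the high-risk option more than the other.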


Blood ◽  
2014 ◽  
Vol 124 (21) ◽  
pp. 4833-4833
Author(s):  
Shannon M Bates ◽  
Pablo Alonso-Coello ◽  
Mark Eckman ◽  
Kari A Tikkinen ◽  
Shanil Ebrahim ◽  
...  

Abstract Background: The risk of pregnancy-related venous thromboembolism (VTE) is increased in women with a history of thrombosis. Although antepartum low molecular weight heparin (LMWH) prophylaxis can reduce this risk, the baseline risk of recurrence and the absolute magnitude of the risk reduction with prophylaxis are uncertain. Further, LMWH prophylaxis is costly, burdensome, medicalizes pregnancy, and may increase the risk of bleeding. Therefore, uncertainty persists regarding the net benefit of thromboprophylaxis, and recommendations about the use of antepartum LMWH should be sensitive to pregnant women’s values and preferences, which have not previously been studied. Methods: We undertook an international multicenter cross-sectional interview study that included women with a history of VTE who were pregnant, planning pregnancy, or might consider pregnancy in the future. Women were classified as being at high (5 to 10%) or low (1 to 5%) risk of recurrent antepartum VTE. We ascertained willingness to receive LMWH during pregnancy through direct choice exercises involving real-life scenarios using the participant’s estimated VTE (high or low) and bleeding risks, hypothetical scenarios (low, medium, and high risk of recurrence), and a probability trade-off exercise. Study outcomes included the minimum absolute reduction in VTE risk at which women changed from declining to accepting LMWH, along with possible determinants of this threshold, and participant choice of management strategy in her real-life and the three hypothetical scenarios. Results: 123 women from seven centers in six countries participated. Using a fixed 16% VTE risk without prophylaxis, the mean threshold reduction in risk at which women were willing to use LMWH was 4.3% (95% CI, 3.5–5.1%). Pregnant women and those planning a pregnancy (compared to those who might consider pregnancy in the future) and those with less than 2 weeks of experience with using LMWH during pregnancy (compared to those with more experience) required a greater risk reduction to use prophylaxis. In the real-life scenario, there was a significant difference in the proportion of women choosing prophylaxis between those at high risk (87.1%) and low risk (60.0%) of recurrence (p=0.01). The proportion of women choosing to use LMWH prophylaxis was 65.1% for the low-risk hypothetical scenario (4% risk of recurrence), 79.7% for the medium-risk scenario (10% risk of recurrence), and 87.8% for the high-risk scenario (16% risk of recurrence). Conclusions: Most women with prior VTE will choose prophylaxis during a subsequent pregnancy, regardless of whether they are categorized as high or low risk of recurrence. Nevertheless, 40% of lower-risk women will decline LMWH, as will over 10% of high-risk women. Thus, these results mandate both individualized clinical decision-making for women considering LMWH use during pregnancy and weak guideline recommendations for LMWH use that highlight the need for such individualized decision-making. Disclosures No relevant conflicts of interest to declare.
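A minimal sketch of the arithmetic behind the probability trade-off exercise. The 16% baseline risk and the 4.3% mean threshold come from the abstract; the LMWH efficacy figure is a hypothetical placeholder. The decision rule is simply that a participant accepts prophylaxis once its absolute risk reduction meets her personal threshold.

```python
def absolute_risk_reduction(baseline_risk, relative_risk_reduction):
    """ARR = baseline risk x proportional reduction from prophylaxis."""
    return baseline_risk * relative_risk_reduction

def accepts_lmwh(baseline_risk, relative_risk_reduction, threshold):
    """Accept prophylaxis if the absolute reduction meets the personal threshold."""
    return absolute_risk_reduction(baseline_risk, relative_risk_reduction) >= threshold

baseline = 0.16          # fixed antepartum VTE risk used in the exercise
mean_threshold = 0.043   # mean acceptance threshold reported in the abstract
assumed_rrr = 0.70       # hypothetical LMWH efficacy, for illustration only

arr = absolute_risk_reduction(baseline, assumed_rrr)
print(f"ARR = {arr:.1%}; accept at the mean threshold? "
      f"{accepts_lmwh(baseline, assumed_rrr, mean_threshold)}")
```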


1999 ◽  
Vol 84 (1) ◽  
pp. 114-116 ◽  
Author(s):  
DeAnna L. Mori ◽  
Wayne Klein ◽  
Patricia Gallagher

Psychosocial factors are presented that affect clinical decision-making regarding the allocation of renal organs. Patients were rated as either High Risk or Low Risk transplant candidates. High Risk candidates scored significantly differently from Low Risk candidates on many psychosocial variables. Interestingly, significant differences were not found between these two groups on either the MMPI-2 or the Beck Depression Inventory. The validity of using information from these inventories to allocate organs is discussed.


2015 ◽  
Vol 3 (2) ◽  
pp. 130-144 ◽  
Author(s):  
Nikolay Zubanov

Purpose – The purpose of this paper is to consider the influence of individual risk preferences on the effectiveness of incentive pay schemes, by examining the link between individual effort and risk aversion in situations where outcome uncertainty multiplies with effort. Such “multiplicative noise” situations are common, occurring whenever payment is awarded per success rather than per attempt. Design/methodology/approach – The paper develops a theoretical model which predicts a negative risk aversion-effort link under multiplicative noise without a performance target (PT), and a weaker negative link once the target is introduced. This model is then taken to data from a lab experiment in which participants were randomly assigned to a control group, which received fixed pay, and a treatment group, which received a piece rate awarded with a certain probability, with and without a PT. Risk aversion is measured with a menu of lottery choices offered at the end of the experiment. Findings – Compared to their peers in the control group, the more risk-averse participants in the treatment group put in progressively less effort in the absence of a PT. The introduction of a PT substantially weakens this negative risk aversion-effort link, so that there are no longer significant differences in performance between the more and the less risk-averse participants. Research limitations/implications – The paper’s findings speak to the empirical puzzles of incentive pay schemes backfiring and of the proliferation of PTs. The negative risk aversion-effort link may be one reason behind the failure of incentive schemes to deliver improved performance, whereas the weakening of this link may be one justification for the existence of PTs. Practical implications – In multiplicative noise environments, managers should take their workers’ risk preferences into account when designing incentive pay schemes. A PT may be a useful motivational tool for risk-averse workers who are more likely to under-perform. Originality/value – The multiplicative noise environment has been largely overlooked by the existing literature, yet it is common in practice. An example is the work of a sales agent who receives a bonus per sale, where each customer contact succeeds only with a certain probability. This paper is one of the first to model, and test experimentally, worker performance in this environment.
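A numerical sketch of the multiplicative-noise logic, using a simplified model of our own rather than the paper’s (CARA utility, a quadratic effort cost, binomial successes, and all parameter values are illustrative assumptions; the performance-target case is omitted): because each attempt pays only with some probability, earnings variance grows with effort, so a more risk-averse worker optimally chooses less effort.

```python
import numpy as np
from scipy.stats import binom

# Hypothetical parameters, chosen only to illustrate the mechanism.
PIECE_RATE = 10.0   # payment per success, not per attempt
P_SUCCESS = 0.5     # chance that any one attempt succeeds (the multiplicative noise)
COST = 0.2          # quadratic effort cost: c(e) = COST * e**2
EFFORTS = np.arange(0, 26)

def cara(x, a):
    """CARA utility; a -> 0 recovers risk neutrality."""
    return x if a == 0 else -np.exp(-a * x) / a

def expected_utility(effort, a):
    """Exact expected utility when successes ~ Binomial(effort, P_SUCCESS)."""
    successes = np.arange(effort + 1)
    probs = binom.pmf(successes, effort, P_SUCCESS)
    income = PIECE_RATE * successes - COST * effort ** 2
    return np.dot(probs, cara(income, a))

def optimal_effort(a):
    return max(EFFORTS, key=lambda e: expected_utility(e, a))

for a in [0.0, 0.02, 0.05, 0.1, 0.2]:
    print(f"risk aversion a = {a:>4}: optimal effort = {optimal_effort(a)}")
```

Scanning the grid shows optimal effort falling as the CARA coefficient rises, which is the no-PT prediction; modeling the performance target would require committing to a specific target scheme, so it is left out of this sketch.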


2019 ◽  
Vol 61 (3) ◽  
pp. 34-48
Author(s):  
Matthew Rabin ◽  
Max Bazerman

Managers often engage in risk-averse behavior, and economists, decision analysts, and managers treat risk aversion as a preference. In many cases, acting in a risk-averse manner is a mistake, but managers can correct this mistake with greater reflection. This article provides guidance on how individuals and organizations can move toward greater reflection and a more profitable aggregate portfolio of decisions. Inconsistency in risk preferences across decisions is a costly mistake for both individuals and organizations.
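A short simulation of the aggregation point, using a made-up gamble rather than anything from the article: a 50-50 chance of losing $100 or gaining $110 is easy to reject one decision at a time, yet over a portfolio of many independent decisions of this kind the expected total is substantial and the chance of ending up behind is small.

```python
import numpy as np

rng = np.random.default_rng(42)

n_gambles = 1_000        # independent accept/reject decisions in the portfolio
n_portfolios = 100_000   # Monte Carlo replications

# Each gamble: 50-50 lose $100 or gain $110 (hypothetical stakes).
wins = rng.binomial(n_gambles, 0.5, size=n_portfolios)
totals = wins * 110.0 - (n_gambles - wins) * 100.0

print(f"mean total gain over {n_gambles} gambles: ${totals.mean():,.0f}")
print(f"chance the portfolio ends up behind:      {np.mean(totals < 0):.1%}")
```

Rejecting every such gamble forgoes an expected gain of roughly $5,000 to avoid a loss probability of under 10%, which is the sense in which routinely risk-averse choices are a costly aggregate mistake.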


2018 ◽  
Vol 36 (4_suppl) ◽  
pp. 614-614 ◽  
Author(s):  
Frank A. Sinicrope ◽  
Qian Shi ◽  
Fabienne Hermitte ◽  
Erica N Heying ◽  
Al Bowen Benson ◽  
...  

614 Background: A consensus interpretation of the IDEA colon cancer study results suggested that risk categories based on T/N stage grouping be used to guide decision-making for the duration (3 vs 6 months) of adjuvant FOLFOX or CapeOX chemotherapy. Given the prognostic potential of immune biomarkers, we examined the immunoscore and individual T and B lymphocyte markers in low and high risk T/N subsets of stage III colon carcinoma patients (N=600) treated with adjuvant FOLFOX. Methods: Immunoscore (CD3+, CD8+) and individual T-cell and CD20+ B-cell immunostain densities in the central tumor (CT) and invasive margin (IM) of FFPE sections were quantified by image analysis. A predetermined immunoscore categorization was used [high (2-4) vs low (0-1)]. Individual markers were analyzed by backwards selection, wherein CD3+ IM was most robust for prognosis and an optimized cutoff was then determined. Associations with disease-free survival (DFS) were analyzed by multivariable Cox regression adjusting for age, T/N stage, sidedness, KRAS/BRAF, and DNA mismatch repair. Results: In both low and high risk T/N patient subsets, the immunoscore and CD3+ IM were each significantly discriminant for prognosis. Among low risk (T1-3N1) patients, a high vs low immunoscore was associated with a 91% vs 77% 3-year DFS [HR 0.57, 95% confidence interval (CI) 0.34-0.95, adjusted (adj) P = 0.026]. Among high risk (T4 or N2) patients, a high vs low immunoscore was associated with a 68% vs 54% 3-year DFS (HR 0.64, 95% CI 0.42-0.98, Padj = 0.034). Similarly, a high vs low intratumoral CD3+ density at the invasive margin was significantly associated with prognosis in the low risk [HR 0.37, 95% CI 0.21-0.66, Padj < 0.0003] and high risk [HR 0.47, 95% CI 0.27-0.80, Padj < 0.0028] patient subsets. Conclusions: Immunoscore and CD3+ IM were shown to prognostically stratify FOLFOX-treated patients within both low and high risk T/N subsets. These data underscore limitations of the T/N risk classification for adjuvant treatment decisions in stage III patients, and demonstrate the ability of T-cell markers to enhance prognostication to guide clinical decision-making.
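A hedged sketch of the kind of multivariable Cox model described in the Methods, using the lifelines package on simulated data; the column names, cohort size, and effect sizes are hypothetical stand-ins, not the trial data.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 400  # hypothetical cohort size

# Simulated covariates loosely mirroring the adjustment set in the abstract.
df = pd.DataFrame({
    "immunoscore_high": rng.integers(0, 2, n),
    "age":              rng.normal(62, 10, n),
    "high_risk_tn":     rng.integers(0, 2, n),   # T4 or N2
    "right_sided":      rng.integers(0, 2, n),
    "kras_braf_mutant": rng.integers(0, 2, n),
    "mmr_deficient":    rng.integers(0, 2, n),
})

# Simulated disease-free survival: higher hazard with high-risk T/N stage,
# lower hazard with a high immunoscore (made-up effect sizes).
log_hazard = (0.6 * df["high_risk_tn"]
              - 0.5 * df["immunoscore_high"]
              + 0.01 * (df["age"] - 62))
times = rng.exponential(scale=36 * np.exp(-log_hazard))
df["dfs_event"] = (times < 36).astype(int)     # administrative censoring at 36 months
df["dfs_months"] = np.minimum(times, 36.0)

# Multivariable Cox regression: adjusted hazard ratio for a high immunoscore.
cph = CoxPHFitter()
cph.fit(df, duration_col="dfs_months", event_col="dfs_event")
print(cph.summary[["exp(coef)", "exp(coef) lower 95%", "exp(coef) upper 95%", "p"]])
```

The exp(coef) column is the adjusted hazard ratio, the quantity quoted as "HR" with its 95% CI in the Results above.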


2018 ◽  
Vol 10 (8) ◽  
pp. 1
Author(s):  
Fan Liu

Risk and time preferences influence insurance purchase decisions under uncertainty. Accident forgiveness, often described as “premium insurance,” protects policyholders against a premium increase in the next period if an at-fault accident occurs. In this paper, by conducting a unique experiment under controlled laboratory conditions, we examine the role of risk and time preferences in accident forgiveness purchase decisions. We find that individual discount rates and product price significantly affect the premium insurance purchase decision. Interestingly, we also find evidence that less risk-averse policyholders generally behave as if risk neutral when making insurance decisions. Risk attitudes affect insurance decision-making only among those who have a relatively high degree of risk aversion.
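A minimal two-period decision sketch consistent with the framing above; the utility and discounting forms and every number are illustrative assumptions, not the experiment’s parameters. A policyholder buys accident forgiveness when the discounted expected utility of avoiding next period’s surcharge outweighs paying the add-on price today, so both the discount factor and risk aversion enter the decision.

```python
import numpy as np

def crra(w, gamma):
    """CRRA utility of wealth; gamma = 0 is risk neutral."""
    return np.log(w) if np.isclose(gamma, 1.0) else w ** (1 - gamma) / (1 - gamma)

def buys_forgiveness(gamma, delta, wealth=1000.0, price=40.0,
                     p_accident=0.15, surcharge=300.0):
    """Compare two-period discounted expected utility with and without the add-on."""
    # Without forgiveness: nothing is paid now, but with probability p_accident
    # the next-period premium rises by `surcharge`.
    eu_without = crra(wealth, gamma) + delta * (
        p_accident * crra(wealth - surcharge, gamma)
        + (1 - p_accident) * crra(wealth, gamma))
    # With forgiveness: pay `price` now; next-period wealth is untouched.
    eu_with = crra(wealth - price, gamma) + delta * crra(wealth, gamma)
    return eu_with > eu_without

def threshold_delta(gamma):
    """Smallest discount factor (on a grid) at which the policyholder buys."""
    for delta in np.linspace(0.0, 1.0, 1001):
        if buys_forgiveness(gamma, delta):
            return delta
    return None

for gamma in [0.0, 1.0, 3.0]:
    print(f"gamma={gamma}: buys forgiveness once delta >= {threshold_delta(gamma):.2f}")
```

In this toy model more patient policyholders (higher delta) buy the add-on, and the required patience falls as risk aversion rises, echoing the finding that both discount rates and risk attitudes shape the purchase decision.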


2021 ◽  
pp. 194855062199927
Author(s):  
Roxie Chuang ◽  
Keiko Ishii ◽  
Heejung S. Kim ◽  
David K. Sherman

This research investigated cross-cultural differences in strategic risky decisions in baseball—among professional baseball teams in North America and Japan (Study 1) and among baseball fans in the United States and Japan (Study 2—preregistered). Study 1 analyzed archival data from professional baseball leagues and demonstrated that outcomes reflecting high risk-high payoff strategies were more prevalent in North America, whereas outcomes reflecting low risk-low payoff strategies were more prevalent in Japan. Study 2 investigated fans’ strategic decision making with a wider range of baseball strategies as well as an underlying reason for the difference: approach/avoidance motivational orientation. European American participants preferred high risk-high payoff strategies, Japanese participants preferred low risk-low payoff strategies, and this cultural variation was explained by cultural differences in motivational orientation. Baseball, which exemplifies a domain where strategic decision making has observable consequences, can demonstrate the power of culture through the actions and preferences of players and fans alike.

