How robust is rational choice?

2021 ◽  
Author(s):  
Felix Jan Nitsch ◽  
Tobias Kalenscher

Neoclassical economic choice theory assumes that decision-makers make choices as if they were rational agents. This assumption has been critically challenged over the last decades, yet systematic aggregation of evidence beyond single experiments is still surprisingly sparse. Here, we asked how robust choice-consistency, as a proxy for rationality, is to endogenous and exogenous factors. To this end, we conducted a systematic quantitative literature search, reviewing 5327 articles and identifying 44 relevant ones that contained hypothesis tests on possible influence factors of choice-consistency. To assess the evidential value of any effect of such influence factors on choice-consistency, we conducted a robust p-curve analysis. Our results indicate that choice-consistency is affected by endogenous or exogenous factors. This result holds under multiple testing procedures and a robustness check. However, due to the breadth of the contemporary research agenda, the lack of replications, and the unavailability of original data in the field of choice-consistency, it is currently not possible to draw meaningful conclusions regarding specific influence factors. Despite this lack of specificity, our results imply that people's decisions might be a noisier and more biased indicator of their underlying preferences than previously thought. Hence, we provide systematic evidence for the widespread belief that rationality cannot be assumed unconditionally.
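To make the p-curve logic concrete, the following is a minimal Python sketch of the basic p-curve test for evidential value via Stouffer's method; the robust variant the authors apply differs in its details, and the input p-values below are invented for illustration.

```python
# Minimal sketch of a p-curve test for evidential value (Stouffer approach),
# assuming a list of two-sided p-values from independent significant tests.
import numpy as np
from scipy import stats

def p_curve_test(p_values, alpha=0.05):
    """Test whether significant p-values are right-skewed (evidential value)."""
    p = np.asarray([x for x in p_values if x < alpha])
    # Under H0, a significant p-value is uniform on (0, alpha);
    # rescale so pp ~ Uniform(0, 1) under the null.
    pp = p / alpha
    # Stouffer's method: many small pp-values (right skew) give a negative z.
    z = stats.norm.ppf(pp)
    z_combined = z.sum() / np.sqrt(len(z))
    # One-sided test: evidential value if z_combined is significantly negative.
    return z_combined, stats.norm.cdf(z_combined)

z, p_evidential = p_curve_test([0.003, 0.012, 0.021, 0.040, 0.001])
print(f"z = {z:.2f}, p = {p_evidential:.4f}")
```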

Author(s):  
Damian Clarke ◽  
Joseph P. Romano ◽  
Michael Wolf

When multiple hypothesis tests are considered simultaneously, standard statistical techniques will lead to overrejection of null hypotheses unless the multiplicity of the testing framework is explicitly considered. In this article, we discuss the Romano–Wolf multiple-hypothesis correction and document its implementation in Stata. The Romano–Wolf correction (asymptotically) controls the familywise error rate, that is, the probability of rejecting at least one true null hypothesis among a family of hypotheses under test. This correction is considerably more powerful than earlier multiple-testing procedures, such as the Bonferroni and Holm corrections, given that it takes into account the dependence structure of the test statistics by resampling from the original data. We describe a command, rwolf, that implements this correction and provide several examples based on a wide range of models. We document and discuss the performance gains from using rwolf over other multiple-testing procedures that control the familywise error rate.
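As an illustration of the resampling-based stepdown idea behind the correction (not the rwolf command itself), here is a minimal Python sketch. It assumes the observed statistics and a matrix of bootstrap statistics under the null are already available; constructing that null distribution from the original data is the substantive step rwolf performs.

```python
# Minimal sketch of a maxT stepdown correction in the spirit of
# Westfall-Young / Romano-Wolf, assuming precomputed bootstrap statistics.
import numpy as np

def romano_wolf(t_obs, t_boot):
    """t_obs: (S,) observed |t|-statistics; t_boot: (B, S) null bootstrap draws.
    Returns familywise-error-adjusted p-values."""
    S = len(t_obs)
    order = np.argsort(-t_obs)            # most significant hypothesis first
    p_adj = np.empty(S)
    prev = 0.0
    for k, j in enumerate(order):
        # Max statistic over the hypotheses not yet handled at this step;
        # taking the max over correlated draws absorbs the dependence structure.
        max_boot = t_boot[:, order[k:]].max(axis=1)
        p = (max_boot >= t_obs[j]).mean()
        prev = max(prev, p)               # enforce monotone adjusted p-values
        p_adj[j] = prev
    return p_adj

rng = np.random.default_rng(0)
t_obs = np.abs(rng.normal(size=5)) + np.array([2.5, 0, 0, 0, 0])
t_boot = np.abs(rng.normal(size=(1000, 5)))
print(romano_wolf(t_obs, t_boot).round(3))
```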


Author(s):  
Robin Markwica

Chapter 2 develops the logic of affect, or emotional choice theory, as an alternative action model alongside the traditional logics of consequences and appropriateness. Drawing on research in psychology and sociology, the model captures not only the social nature of emotions but also their bodily and dynamic character. It posits that the interplay between identities, norms, and five key emotions—fear, anger, hope, pride, and humiliation—can shape decision-making in profound ways. The chapter derives a series of propositions about how these five key emotions tend to influence the choice behavior of political leaders whose countries are targeted by coercive diplomacy. These propositions specify the affective conditions under which target leaders are likely to accept or reject a coercer's demands. However, even when emotions produce powerful impulses, humans will not necessarily act on them. The chapter therefore also incorporates decision-makers' limited ability to regulate their emotions into the logic of affect.


Author(s):  
Vijitashwa Pandey ◽  
Deborah Thurston

Design for disassembly and reuse focuses on developing methods to minimize the difficulty of disassembly for maintenance or reuse. These methods can gain substantially if the relationship between component attributes (material mix, ease of disassembly, etc.) and their likelihood of reuse or disposal is understood. For products already in the marketplace, a feedback approach that evaluates the willingness of manufacturers or customers (decision makers) to reuse a component can reveal how the attributes of a component affect reuse decisions. This paper introduces some metrics and combines them with ones proposed in the literature into a measure that captures the overall value of a decision made by the decision makers. The premise is that the decision makers would choose the decision that has the maximum value. Four decisions regarding a component's fate after recovery are considered, ranging from direct reuse to disposal. A method along the lines of discrete choice theory is used, in which maximum likelihood estimation determines the parameters that define the value function. The maximum likelihood method can take inputs from actual decisions made by the decision makers to assess the value function. This function can then be used to determine the likelihood that a component takes a certain path (one of the four decisions), taking its attributes as input, which can facilitate long-range planning and also help determine ways in which reuse decisions can be influenced.
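A minimal Python sketch of the kind of discrete-choice estimation described here: a multinomial logit over the four recovery decisions, fit by maximum likelihood. The attribute names, the linear value function, and the simulated decisions are illustrative assumptions, not the authors' specification.

```python
# Minimal multinomial-logit sketch: estimate value-function weights from
# observed decisions (0=reuse, ..., 3=disposal) and component attributes.
import numpy as np
from scipy.optimize import minimize

def neg_log_likelihood(beta, X, y, n_choices=4):
    """beta: flattened (n_choices-1, n_features); choice 0 is the baseline."""
    B = np.vstack([np.zeros(X.shape[1]), beta.reshape(n_choices - 1, -1)])
    V = X @ B.T                                   # value of each alternative
    V -= V.max(axis=1, keepdims=True)             # numerical stability
    log_p = V - np.log(np.exp(V).sum(axis=1, keepdims=True))
    return -log_p[np.arange(len(y)), y].sum()

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))    # hypothetical attributes: ease of disassembly, wear, material mix
y = rng.integers(0, 4, size=200) # observed reuse/disposal decisions (simulated)
res = minimize(neg_log_likelihood, np.zeros(9), args=(X, y), method="BFGS")
print(res.x.reshape(3, 3).round(2))               # fitted value-function weights
```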


Symmetry ◽  
2021 ◽  
Vol 13 (5) ◽  
pp. 810
Author(s):  
Zitai Xu ◽  
Chunfang Chen ◽  
Yutao Yang

In the decision-making process, decision-makers may make different decisions because of their different experiences and knowledge. Abnormal preference values given by a biased decision-maker (values that are too large or too small relative to the original data) may affect the decision result. To make the decision fair and objective, this paper combines the advantages of the power average (PA) operator and the Bonferroni mean (BM) operator to define the generalized fuzzy soft power Bonferroni mean (GFSPBM) operator and the generalized fuzzy soft weighted power Bonferroni mean (GFSWPBM) operator. The new operator not only considers the overall balance of the data but also the possible interrelationships between attributes. The desirable properties and special cases of these aggregation operators are studied. On this basis, the idea of the bidirectional projection method based on the GFSWPBM operator is introduced, and a multi-attribute decision-making method that accounts for correlations between attributes is proposed. The decision method proposed in this paper is applied to a software selection problem and compared to existing methods to verify its effectiveness and feasibility.
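For intuition, here is a minimal Python sketch of the crisp power average and Bonferroni mean operators that the GFSPBM operator combines. The support function below is one common choice and may differ from the paper's definition; the fuzzy soft extension wraps the same arithmetic around generalized fuzzy soft numbers.

```python
# Minimal sketch of the crisp PA and BM operators (illustrative assumptions).
import numpy as np

def power_average(a):
    """PA: weights each value by how well the other values support it,
    which dampens abnormally large or small inputs."""
    a = np.asarray(a, dtype=float)
    # Support of a_i: 1 minus the normalized distance to the other values
    # (one common choice; the paper may define support differently).
    d = np.abs(a[:, None] - a[None, :])
    sup = 1 - d / d.max() if d.max() > 0 else np.ones_like(d)
    T = sup.sum(axis=1) - np.diag(sup)            # exclude self-support
    w = (1 + T) / (1 + T).sum()
    return (w * a).sum()

def bonferroni_mean(a, p=1, q=1):
    """BM^{p,q}: captures pairwise interrelationships between the inputs."""
    a = np.asarray(a, dtype=float)
    n = len(a)
    total = sum(a[i]**p * a[j]**q for i in range(n) for j in range(n) if i != j)
    return (total / (n * (n - 1))) ** (1 / (p + q))

vals = [0.60, 0.70, 0.65, 0.20]                   # 0.20 is an "abnormal" value
print(power_average(vals), bonferroni_mean(vals))
```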


2021 ◽  
Vol 18 (5) ◽  
pp. 521-528
Author(s):  
Eric S Leifer ◽  
James F Troendle ◽  
Alexis Kolecki ◽  
Dean A Follmann

Background/aims: The two-by-two factorial design randomizes participants to receive treatment A alone, treatment B alone, both treatments A and B (AB), or neither treatment (C). When the combined effect of A and B is less than the sum of the A and B effects, called a subadditive interaction, there can be low power to detect the A effect using an overall test, that is, a factorial analysis which compares the A and AB groups to the C and B groups. Such an interaction may have occurred in the Action to Control Cardiovascular Risk in Diabetes blood pressure trial (ACCORD BP), which simultaneously randomized participants to receive intensive or standard blood pressure control and intensive or standard glycemic control. For the primary outcome of major cardiovascular events, the overall test for efficacy of intensive blood pressure control was nonsignificant. In such an instance, simple effect tests of A versus C and B versus C may be useful since they are not affected by a subadditive interaction, but they can have lower power since they use half the participants of the overall trial. We investigate multiple testing procedures which exploit the overall tests' sample size advantage and the simple tests' robustness to a potential interaction. Methods: In the time-to-event setting, we use the stratified and ordinary logrank statistics' asymptotic means to calculate the power of the overall and simple tests under various scenarios. We consider the A and B research questions to be unrelated and allocate a 0.05 significance level to each. For each question, we investigate three multiple testing procedures which allocate the type 1 error in different proportions for the overall and simple effects as well as the AB effect. The Equal Allocation 3 procedure allocates equal type 1 error to each of the three effects, the Proportional Allocation 2 procedure allocates 2/3 of the type 1 error to the overall A (respectively, B) effect and the remaining type 1 error to the AB effect, and the Equal Allocation 2 procedure allocates equal amounts to the simple A (respectively, B) and AB effects. These procedures are applied to ACCORD BP. Results: Across various scenarios, Equal Allocation 3 had robust power for detecting a true effect. For ACCORD BP, all three procedures would have detected a benefit of intensive glycemic control. Conclusions: When there is no interaction, Equal Allocation 3 has less power than a factorial analysis. However, Equal Allocation 3 often has greater power when there is an interaction. The R package factorial2x2 can be used to explore the power gain or loss for different scenarios.
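A minimal Python sketch of the three allocation schemes, applied as simple Bonferroni-style splits of the 0.05 level to hypothetical p-values; the actual procedures in factorial2x2 exploit the correlation between the test statistics rather than plain splits.

```python
# Illustrative alpha splits for one research question (here, treatment A);
# the fractions follow the abstract's description of the three procedures.
ALLOCATIONS = {
    # fractions of alpha for (overall A, simple A vs C, AB vs C)
    "Equal Allocation 3":        (1/3, 1/3, 1/3),
    "Proportional Allocation 2": (2/3, 0.0, 1/3),
    "Equal Allocation 2":        (0.0, 1/2, 1/2),
}

def test_effects(p_overall, p_simple, p_ab, alpha=0.05):
    results = {}
    for name, (f_o, f_s, f_ab) in ALLOCATIONS.items():
        results[name] = {
            "overall": f_o > 0 and p_overall < alpha * f_o,
            "simple":  f_s > 0 and p_simple < alpha * f_s,
            "AB":      f_ab > 0 and p_ab < alpha * f_ab,
        }
    return results

# Hypothetical p-values: a subadditive interaction weakens the overall test
# but leaves the simple and AB tests informative.
for name, res in test_effects(0.09, 0.012, 0.004).items():
    print(name, res)
```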


2015 ◽  
Vol 105 (01-02) ◽  
pp. 65-71
Author(s):  
A. Martini ◽  
A. Rohe ◽  
U. Stache ◽  
F. Trenker

The complexity of planning and optimizing internal milkrun systems is a consequence of the multitude of different design options and the interdependencies between the factors of influence. The calculation method presented in this article for measuring the influence of different dimensioning parameters serves to rank system-specific influence factors. Hypotheses for further research are obtained via quantitative-exploratory studies.


Author(s):  
Dan Lin ◽  
Ziv Shkedy ◽  
Dani Yekutieli ◽  
Tomasz Burzykowski ◽  
Hinrich W.H. Göhlmann ◽  
...  

Dose-response studies are commonly used in pharmaceutical research to investigate the dependence of the response on dose, i.e., a trend of the response (such as a toxicity level) with respect to dose. In this paper, we focus on dose-response experiments within a microarray setting in which several microarrays are available for a sequence of increasing dose levels. A gene is called differentially expressed if there is a monotonic trend (with respect to dose) in its expression. We review several testing procedures that can be used to test equality among the gene expression means against ordered alternatives with respect to dose, namely Williams' (Williams 1971 and 1972), Marcus' (Marcus 1976), the global likelihood ratio test (Bartholomew 1961, Barlow et al. 1972, and Robertson et al. 1988), and the M (Hu et al. 2005) statistics. Additionally, we introduce a modification to the standard error of the M statistic. We compare the performance of these five test statistics. Moreover, we discuss the issue of one-sided versus two-sided testing procedures. The False Discovery Rate (Benjamini and Hochberg 1995, Ge et al. 2003) and the resampling-based Familywise Error Rate (Westfall and Young 1993) are used to handle the multiple testing issue. The methods above are applied to a data set with 4 doses (3 arrays per dose) and 16,998 genes. Results on the number of significant genes from each statistic are discussed. A simulation study is conducted to investigate the power of each statistic. An R library, IsoGene, implementing the methods is available from the first author.
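As a concrete example of the multiplicity handling mentioned above, here is a minimal Python sketch of the Benjamini-Hochberg step-up procedure; the p-values are simulated, not taken from the 16,998-gene data set.

```python
# Minimal sketch of Benjamini-Hochberg FDR control across many genes.
import numpy as np

def benjamini_hochberg(p, q=0.05):
    """Return a boolean mask of rejected hypotheses at FDR level q."""
    p = np.asarray(p)
    m = len(p)
    order = np.argsort(p)
    thresholds = q * np.arange(1, m + 1) / m      # step-up thresholds q*i/m
    below = p[order] <= thresholds
    k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
    reject = np.zeros(m, dtype=bool)
    reject[order[:k]] = True                      # reject the k smallest p-values
    return reject

rng = np.random.default_rng(2)
p = np.concatenate([rng.uniform(0, 0.001, 50),    # simulated dose-responsive genes
                    rng.uniform(0, 1, 950)])      # simulated null genes
print(benjamini_hochberg(p).sum(), "genes declared significant")
```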

