Getting the Assumptions Right: A Reply to Laver and Shepsle

1999 ◽  
Vol 29 (2) ◽  
pp. 402-412 ◽  
Author(s):  
PAUL V. WARWICK

For some time now, formal modelling has been touted by its supporters as a panacea for political science – or at least as a major step forward in the discipline's development. Certainly, it embodies a number of praiseworthy elements. Its insistence on starting with a parsimonious and precisely formulated set of assumptions cannot help but constrain slippery thinking, for example, and its rigorous working out of implications, while often demonstrating the obvious, occasionally leads to unanticipated and intriguing results. Moreover, the combination of precision and rigour holds forth the promise of generating relatively clear-cut tests of rival explanations, a major boon – if it proves true – in a discipline more inclined to abandon theories than to disconfirm them. How much better the analytical or formal orientation is, then, than the ‘funnel of causality’ approach of empiricists whose quest for the highest explained variance seldom produces more than a miscellaneous grab-bag of influences on the dependent phenomenon. Empirical work of that sort may have some limited utility in identifying possible causes, to be sure, but at some point the scholarly enterprise must move to the higher level of elaborating a clear logical structure among causal factors. Here, empirical success in accounting for observed phenomena cannot be the sole guide: the best theory is the one that provides the most accurate idea of what is actually going on in the real world, not the one with the best correlations.

2017 ◽  
Author(s):  
James Gibson

Despite what we learn in law school about the “meeting of the minds,” most contracts are merely boilerplate—take-it-or-leave-it propositions. Negotiation is nonexistent; we rely on our collective market power as consumers to regulate contracts’ content. But boilerplate imposes certain information costs because it often arrives late in the transaction and is hard to understand. If those costs get too high, then the market mechanism fails. So how high are boilerplate’s information costs? A few studies have attempted to measure them, but they all use a “horizontal” approach—i.e., they sample a single stratum of boilerplate and assume that it represents the whole transaction. Yet real-world transactions often involve multiple layers of contracts, each with its own information costs. What is needed, then, is a “vertical” analysis, a study that examines fewer contracts of any one kind but tracks all the contracts the consumer encounters, soup to nuts. This Article presents the first vertical study of boilerplate. It casts serious doubt on the market mechanism and shows that existing scholarship fails to appreciate the full scale of the information cost problem. It then offers two regulatory solutions. The first works within contract law’s unconscionability doctrine, tweaking what the parties need to prove and who bears the burden of proving it. The second, more radical solution involves forcing both sellers and consumers to confront and minimize boilerplate’s information costs—an approach I call “forced salience.” In the end, the boilerplate experience is as deep as it is wide. Our empirical work should reflect that fact, and our policy proposals should too.


2021 ◽  
Vol 11 (6) ◽  
pp. 478
Author(s):  
Ching Chang ◽  
Chien-Hao Huang ◽  
Hsiao-Jung Tseng ◽  
Fang-Chen Yang ◽  
Rong-Nan Chien

Background: Hepatic encephalopathy (HE), a neuropsychiatric complication of decompensated cirrhosis, is associated with high mortality and a high risk of recurrence. Rifaximin add-on to lactulose for 3 to 6 months is recommended for the prevention of recurrent episodes of HE after the second episode. However, whether the combination for more than 6 months is superior to lactulose alone in the maintenance of HE remission is less evident. Therefore, the aim of this study was to evaluate the one-year efficacy of rifaximin add-on to lactulose for the maintenance of HE remission in Taiwan. Methods: We conducted a real-world single-center retrospective cohort study to compare the long-term efficacy of rifaximin add-on to lactulose (group R + L) versus lactulose alone (group L, control group). The treatment efficacy before and after rifaximin add-on to lactulose was also analyzed. The primary endpoint was time to first HE recurrence (Conn score ≥ 2). All patients were followed up every three months until death and censored at one year if still alive. Results and Conclusions: 12 patients were enrolled in group R + L and another 31 patients were stratified into group L. Sex, comorbidity, ammonia level, and ascites grade were matched, while age, HE grade, and model for end-stage liver disease (MELD) score were adjusted for in the multivariable logistic regression model. Compared with group L, group R + L showed significant improvement in the maintenance of HE remission and fewer episodes and days of HE-related hospitalization. Serum ammonia levels were significantly lower at the 3rd and 6th months in group R + L. Concerning changes before and after rifaximin add-on in group R + L, Mini-Mental State Examination (MMSE) scores, episodes of hospitalization, and variceal bleeding also improved at 6 and 12 months, and days of hospitalization and serum ammonia levels improved at the 6th month. Apart from concerns over price, no patient discontinued rifaximin because of adverse events or complications. These results provide evidence for the one-year use of rifaximin add-on to lactulose to reduce HE recurrence and HE-related hospitalization in patients with decompensated cirrhosis.
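For readers who want to see what a time-to-first-recurrence comparison of this kind looks like in practice, the sketch below sets one up on synthetic data: Kaplan-Meier estimation and a log-rank test comparing a rifaximin add-on group with a lactulose-only group, censored at one year. It is not the authors' analysis; the group sizes mirror the abstract, but the event times, the `simulate_group` helper, and the use of the lifelines package are illustrative assumptions.

```python
# Illustrative only: synthetic data, not the study's records.
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(1)

def simulate_group(n, median_days):
    """Hypothetical recurrence times, censored at 365 days of follow-up."""
    t = rng.exponential(median_days / np.log(2), size=n)
    observed = (t <= 365).astype(int)      # 1 = recurrence seen within one year
    return np.minimum(t, 365), observed

t_rl, e_rl = simulate_group(12, median_days=400)   # group R + L (assumed slower recurrence)
t_l, e_l = simulate_group(31, median_days=150)     # group L (lactulose alone)

# Kaplan-Meier estimate of time to first HE recurrence in each group.
for label, t, e in [("R + L", t_rl, e_rl), ("L", t_l, e_l)]:
    kmf = KaplanMeierFitter()
    kmf.fit(t, event_observed=e, label=label)
    print(label, "median time to first recurrence:", kmf.median_survival_time_)

# Log-rank test for a difference between the two recurrence curves.
result = logrank_test(t_rl, t_l, event_observed_A=e_rl, event_observed_B=e_l)
print("log-rank p-value:", result.p_value)
```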


2020 ◽  
Vol 36 (S1) ◽  
pp. 37-37
Author(s):  
Americo Cicchetti ◽  
Rossella Di Bidino ◽  
Entela Xoxi ◽  
Irene Luccarini ◽  
Alessia Brigido

Introduction: Different value frameworks (VFs) have been proposed in order to translate available evidence on the risk-benefit profiles of new treatments into Pricing & Reimbursement (P&R) decisions. However, limited evidence is available on the impact of their implementation. It is relevant to distinguish between VFs proposed by scientific societies and providers, which are usually applicable to all treatments, and VFs elaborated by regulatory agencies and health technology assessment (HTA) bodies, which focus on specific therapeutic areas. Such heterogeneity in VFs has significant implications in terms of the value dimensions considered and the criteria adopted to define or support a price decision. Methods: A literature review was conducted to identify VFs already proposed or adopted for onco-hematology treatments. Both the scientific and the grey literature were investigated. An ad hoc data collection was then conducted for multiple myeloma; breast, prostate and urothelial cancer; and non-small cell lung cancer (NSCLC) therapies. Pharmaceutical products authorized by the European Medicines Agency from January 2014 to December 2019 were identified. The primary sources of data were European Public Assessment Reports and P&R decisions taken by the Italian Medicines Agency (AIFA) up to September 2019. Results: The analysis allowed us to define a taxonomy distinguishing the categories of VF relevant to onco-hematological treatments. We identified the “real-world” VF that emerged from past P&R decisions taken at the Italian level. Data were collected on both clinical and economic outcomes/indicators, as well as on decisions taken on the innovativeness of therapies. Relevant differences emerged between the real-world value framework and the one that should be applied given the normative framework of the Italian Health System. Conclusions: The value framework that emerged from the analysis addressed specific aspects of onco-hematological treatments identified in an ad hoc analysis of treatments authorized in the last five years. The perspective adopted in elaborating the VF was that of an HTA agency responsible for P&R decisions at the national level. Furthermore, by comparing the real-world value framework with the one based on the general criteria defined by national legislation, our analysis allowed identification of the most critical points of the current national P&R process in terms of the sustainability of current and future therapies, such as advanced therapies and tumor-agnostic therapies.


1973 ◽  
Vol 93 ◽  
pp. 74-103 ◽  
Author(s):  
John Gould

To Professor E. R. Dodds, through his edition of Euripides' Bacchae and again in The Greeks and the Irrational, we owe an awareness of new possibilities in our understanding of Greek literature and of the world that produced it. No small part of that awareness was due to Professor Dodds' masterly and tactful use of comparative ethnographic material to throw light on the relation between literature and social institutions in ancient Greece. It is in the hope that something of my own debt to him may be conveyed that this paper is offered here, equally in gratitude, admiration and affection. The working out of the anger of Achilles in the Iliad begins with a great scene of divine supplication in which Thetis prevails upon Zeus to change the course of things before Troy in order to restore honour to Achilles; it ends with another, human act in which Priam supplicates Achilles to abandon his vengeful treatment of the dead body of Hector and restore it for a ransom. The first half of the Odyssey hinges about another supplication scene of crucial significance, Odysseus' supplication of Arete and Alkinoos on Scherie. Aeschylus and Euripides both wrote plays called simply Suppliants, and two cases of a breach of the rights of suppliants, the cases of the coup of Kylon and that of Pausanias, the one dating from the mid-sixth century, the other from around 470 B.C. or soon after, played a dominant role in the diplomatic propaganda of the Spartans and Athenians on the eve of the Peloponnesian War.


2020 ◽  
Vol 3 (1) ◽  
pp. 681-693
Author(s):  
Ariel Furstenberg

This article proposes to narrow the gap between the space of reasons and the space of causes. By articulating the standard phenomenology of reasons and causes, we investigate the cases in which the clear-cut divide between reasons and causes starts to break down, thus substituting the simple picture of the relationship between the space of reasons and the space of causes with an inverted and complex one, in which reasons can have a causal-like phenomenology and causes can have a reason-like phenomenology. This is attained by focusing on “swift reasoned actions” on the one hand and on “causal noisy brain mechanisms” on the other. In the final part of the article, I show how an analogous move, that of narrowing the gap between one's normative framework and the space of reasons, can be seen as an extension of narrowing the gap between the space of causes and the space of reasons.


2021 ◽  
Vol 13 (11) ◽  
pp. 5870
Author(s):  
Philipp Kruse

Social Entrepreneurship (SE) describes a new entrepreneurial form combining the generation of financial and social value. In recent years, research interest in SE has increased across various disciplines, with a particular focus on the characteristics of social enterprises. While a clear-cut definition of SE has yet to be found, there is evidence that culture and the economy affect and shape features of SE activity; sector-dependent differences are also presumed. Building on Institutional Theory and employing a mixed qualitative and quantitative approach, this study sheds light on the existence of international and inter-sector differences by examining 161 UK and Indian social enterprises. A content analysis and analyses of variance were employed and yielded similarities as well as several significant differences at the international and inter-sector level, e.g., regarding innovativeness and the generation of revenue. The current study contributes to a more nuanced picture of the SE landscape by comparing social enterprise characteristics in a developed and a developing country on the one hand and across different sectors on the other. Furthermore, I highlight the benefits of jointly applying qualitative and quantitative methodologies. Future research should pay more attention to the innate heterogeneity among social enterprises and further consolidate and extend these findings.


2021 ◽  
Vol 23 (1) ◽  
pp. 69-85
Author(s):  
Hemank Lamba ◽  
Kit T. Rodolfa ◽  
Rayid Ghani

Applications of machine learning (ML) to high-stakes policy settings - such as education, criminal justice, healthcare, and social service delivery - have grown rapidly in recent years, sparking important conversations about how to ensure fair outcomes from these systems. The machine learning research community has responded to this challenge with a wide array of proposed fairness-enhancing strategies for ML models, but despite the large number of methods that have been developed, little empirical work exists evaluating these methods in real-world settings. Here, we seek to fill this research gap by investigating the performance of several methods that operate at different points in the ML pipeline across four real-world public policy and social good problems. Across these problems, we find a wide degree of variability and inconsistency in the ability of many of these methods to improve model fairness, but postprocessing by choosing group-specific score thresholds consistently removes disparities, with important implications for both the ML research community and practitioners deploying machine learning to inform consequential policy decisions.
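As a concrete illustration of the postprocessing approach the authors find most reliable, the sketch below picks a separate score threshold for each group so that recall among true positives is approximately equalized, which is one common way of operationalising group-specific thresholds. It is a minimal sketch on synthetic scores, not the authors' implementation; the function names and the target-recall criterion are assumptions made for the example.

```python
# Minimal sketch of fairness postprocessing via group-specific score thresholds:
# for each group, choose the highest threshold whose recall on that group's
# positives reaches a common target, so recall is roughly equal across groups.
import numpy as np

def group_thresholds_for_recall(scores, labels, groups, target_recall=0.6):
    """Per group, the highest threshold achieving recall >= target_recall."""
    thresholds = {}
    for g in np.unique(groups):
        pos = (groups == g) & (labels == 1)            # positives in this group
        pos_scores = np.sort(scores[pos])[::-1]        # descending
        k = int(np.ceil(target_recall * len(pos_scores)))
        # The k-th highest positive score gives recall k / n_pos (or more, with ties).
        thresholds[g] = pos_scores[k - 1] if k > 0 else np.inf
    return thresholds

def predict_with_group_thresholds(scores, groups, thresholds):
    return np.array([scores[i] >= thresholds[groups[i]] for i in range(len(scores))])

# Toy usage with synthetic scores; in practice these come from a trained model.
rng = np.random.default_rng(0)
n = 1000
groups = rng.choice(["A", "B"], size=n)
labels = rng.binomial(1, 0.3, size=n)
scores = np.clip(0.3 * labels + rng.normal(0.4, 0.2, size=n), 0, 1)

th = group_thresholds_for_recall(scores, labels, groups, target_recall=0.6)
preds = predict_with_group_thresholds(scores, groups, th)
for g in ["A", "B"]:
    m = (groups == g) & (labels == 1)
    print(g, "recall:", preds[m].mean().round(2))      # both near the 0.6 target
```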


2002 ◽  
Vol 7 (3) ◽  
pp. 299-311
Author(s):  
Barbara F. Walter

Although the literature on international negotiation is rich with studies attempting to explain why some wars end in negotiated settlements while others do not, the theoretical and empirical work focuses almost entirely on explaining a single dichotomous variable: whether the parties reach agreement or not. This article argues that in order to truly understand how conflicts end, the resolution process must be viewed as taking place in three distinct stages: it begins with the decision to initiate negotiations, continues with the decision to strike a mutually agreeable bargain, and ends with the decision to implement the terms of a treaty. Each of these stages is likely to be driven by very different causal factors, and only by drawing clear conceptual and theoretical distinctions between the stages (and then testing them this way) can we begin to understand the full range of factors that truly bring peace.


2012 ◽  
Vol 20 (2) ◽  
pp. 28-50 ◽  
Author(s):  
Piet Strydom

This article offers a critical assessment of the prospects of the emergence of a global cosmopolitan society. For this purpose, it presents an analysis of the different interrelated types of structure formation in the process of cosmopolitisation and the mechanisms sustaining each. It deals with both the generation of a variety of actor-based models of world openness at the micro and meso level and with the reflexive meta-principle of cosmopolitanism forming part of the cognitive order of society at the macro level. But the focus is on the formation of an intermediate, substantive, situational, cultural model of cosmopolitanism which is on the one hand guided by the abstract principle of cosmopolitanism and on the other selectively brings together the actor models. Central to this analysis of cultural model formation is the threefold or triple contingency structure of the communication involved. The diagnosis, which takes a variety of conditions into account, is that the vital central moment of the formation of a substantive cultural model that would frame the organisation of a normative social order is deficient, which implies that the societal learning process supposed to engender it is being diverted, impeded or blocked. An explanation along the lines of critical social theory is proposed with reference to socio-structural and sociocultural causal factors.


2019 ◽  
Author(s):  
Daniel Tang

Agent-based models are a powerful tool for studying the behaviour of complex systems that can be described in terms of multiple, interacting “agents”. However, because of their inherently discrete and often highly non-linear nature, it is very difficult to reason about the relationship between the state of the model, on the one hand, and our observations of the real world on the other. In this paper we consider agents that have a discrete set of states and that, at any instant, act with a probability that may depend on the environment or the state of other agents. Given this, we show how the mathematical apparatus of quantum field theory can be used to reason probabilistically about the state and dynamics of the model, and describe an algorithm to update our belief in the state of the model in the light of new, real-world observations. Using a simple predator-prey model on a 2-dimensional spatial grid as an example, we demonstrate the assimilation of incomplete, noisy observations and show that this leads to an increase in the mutual information between the actual state of the observed system and the posterior distribution given the observations, when compared to a null model.
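Stripped of the quantum-field-theoretic machinery, the belief update the abstract describes can be pictured as a Bayesian filter over a finite set of model states: push the current belief through the model dynamics, then reweight by the likelihood of the observation that actually arrived. The sketch below shows that generic predict/update cycle on a toy three-state example; it is an interpretation offered for illustration, not the paper's algorithm, and all names and numbers are assumptions.

```python
# Generic discrete Bayes-filter sketch of data assimilation for a model with a
# finite set of states. Not the paper's method; purely illustrative.
import numpy as np

def assimilate(prior, transition, likelihood, observation):
    """One predict/update cycle over a finite set of model states.

    prior       : (S,) belief over states at time t
    transition  : (S, S) matrix, transition[i, j] = P(state j at t+1 | state i at t)
    likelihood  : (S, O) matrix, likelihood[j, o] = P(observation o | state j)
    observation : integer index of the observation actually seen
    """
    predicted = prior @ transition                 # push belief through the dynamics
    posterior = predicted * likelihood[:, observation]
    return posterior / posterior.sum()             # renormalize

# Toy example: 3 latent model states, 2 possible observations.
prior = np.array([0.5, 0.3, 0.2])
transition = np.array([[0.8, 0.1, 0.1],
                       [0.1, 0.8, 0.1],
                       [0.1, 0.1, 0.8]])
likelihood = np.array([[0.9, 0.1],
                       [0.5, 0.5],
                       [0.2, 0.8]])

posterior = assimilate(prior, transition, likelihood, observation=1)
print(posterior)  # belief shifts toward states that explain observation 1
```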

