predictive success
Recently Published Documents

TOTAL DOCUMENTS: 38 (FIVE YEARS: 15)
H-INDEX: 7 (FIVE YEARS: 0)

2021 ◽  
pp. 1-45
Author(s):  
Lanny Zrill

Abstract: Simple functional forms for utility require restrictive structural assumptions that are often contrary to observed behavior. Even so, they are widely used in applied economic research. I address this issue using a two-part adaptive experimental design to compare the predictions of a popular parametric model of decision making under risk to those of non-parametric bounds on indifference curves. Interpreting the latter as an approximate upper bound, I find the parametric model sacrifices very little in terms of predictive success. This suggests that, despite their restrictiveness, simple functional forms may nevertheless be useful representations of preferences over risky alternatives.


2021 ◽  
Vol 28 (4) ◽  
pp. 2830-2839
Author(s):  
Rebecca Y. Xu ◽  
Diana Kato ◽  
Gregory R. Pond ◽  
Stephen Sundquist ◽  
James Schoales ◽  
...  

The Canadian Cancer Clinical Trials Network (3CTN) was established in 2014 to address the decline in academic cancer clinical trials (ACCT) activity. Funding was provided to cancer centres to conduct a Portfolio of ACCTs. Larger centres received core funding and were paired with smaller centres to enable support and sharing of resources. All centres were eligible for incentive-based funding for recruitment above the pre-3CTN baseline. Established performance measures were collected and tracked. The overall recruitment target was 50% above the pre-3CTN baseline by Year 4. An analysis was completed to identify predictive success factors, and descriptive statistics were used to summarize site characteristics and outcomes. From 2014 to 2018, a total of 11,275 patients were recruited to 559 Portfolio trials; an overall increase of 59.6% above the pre-3CTN baseline was observed in Year 4. Twenty-five (51%) adult centres met the Year 4 recruitment target, and the overall recruitment target was met within three years. Three factors correlated with sites’ achieving recruitment targets: time period, region, and number of baseline trials. 3CTN was successful in meeting its objectives and will continue to support ACCTs and member cancer centres, monitor performance over time, and seek continued funding to ensure success, better trial access, and better outcomes for patients.


2021 ◽  
pp. 33-55
Author(s):  
Jonathon Hricko

This chapter examines the work in chemistry that led to the discovery of boron and explores the implications of this episode for the scientific realism debate. This episode begins with Lavoisier’s oxygen theory of acidity and his prediction that boracic acid contains oxygen and a hypothetical, combustible substance that he called the boracic radical. The episode culminates in the work of Davy, Gay-Lussac, and Thénard, who used potassium to extract oxygen from boracic acid and thereby discovered boron. This chapter shows that Lavoisier’s theory of acidity, which was not even approximately true, exhibited novel predictive success. Selective realists attempt to accommodate such false-but-successful theories by showing that their success is due to the fact that they have approximately true parts. However, this chapter argues that this episode poses a strong challenge to selective realism because the parts of Lavoisier’s theory that are responsible for its success are not even approximately true.


2021 ◽  
Author(s):  
Kenneth John Locey ◽  
Thomas A. Webb ◽  
Bala Hota

The prevention of unplanned 30-day readmissions of patients discharged with a diagnosis of heart failure (HF) remains a profound challenge among hospital enterprises. Despite the many models and indices developed to predict which HF patients will readmit for any unplanned cause within 30 days, predictive success has been meager. Using simulations of HF readmission models and the diagnostics most often used to evaluate them (C-statistics, ROC curves), we demonstrate common factors that have contributed to the lack of predictive success among studies. We reveal a greater need for precision and alternative metrics such as partial C-statistics and precision-recall curves and demonstrate via simulations how those tools can be used to better gauge predictive success. We suggest how studies can improve their applicability to hospitals and call for a greater understanding of the uncertainty underlying 30-day all-cause HF readmission. Finally, using insights from sampling theory, we suggest a novel uncertainty-based perspective for predicting readmissions and non-readmissions.
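The abstract's point about alternative metrics can be made concrete. The sketch below (illustrative only, not the authors' code or data) shows how, with a rare outcome such as 30-day readmission, a model can post a respectable ROC-AUC while precision at any usable threshold stays low; the hypothetical 5% prevalence and all scores are invented for illustration.

```python
# Illustrative sketch: ROC-AUC vs. precision under class imbalance,
# the gap that precision-recall analysis exposes. All numbers are hypothetical.

def roc_auc(scores_pos, scores_neg):
    """Probability a random positive outranks a random negative (ties count half)."""
    wins = sum((p > n) + 0.5 * (p == n) for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

def precision_at(threshold, scores_pos, scores_neg):
    """Fraction of flagged patients who actually readmit at a given score cutoff."""
    tp = sum(s >= threshold for s in scores_pos)
    fp = sum(s >= threshold for s in scores_neg)
    return tp / (tp + fp) if tp + fp else 0.0

# Hypothetical cohort: 50 readmissions among 1000 discharges (5% prevalence).
pos = [0.2 + 0.6 * i / 49 for i in range(50)]   # risk scores, readmitted patients
neg = [0.5 * i / 949 for i in range(950)]       # risk scores, non-readmitted patients

auc = roc_auc(pos, neg)
prec = precision_at(0.4, pos, neg)
print(f"ROC-AUC: {auc:.2f}")           # looks strong
print(f"Precision at 0.4: {prec:.2f}")  # most flagged patients never readmit
```

Because negatives vastly outnumber positives, false positives barely dent the ROC curve but dominate the precision denominator, which is why precision-recall curves (and partial C-statistics over the clinically relevant specificity range) give a more honest picture of usefulness.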


2021 ◽  
Author(s):  
Joshua R de Leeuw ◽  
Benjamin Motz ◽  
Emily Fyfe ◽  
Paulo F. Carvalho ◽  
Robert Goldstone

Emphasizing the predictive success and practical utility of psychological science is an admirable goal, but it will require a substantive shift in how we design research. Applied research often assumes that findings are transferable to all practices, insensitive to variation between implementations. We describe efforts to quantify and close this practice-to-practice gap in education research.


Synthese ◽  
2021 ◽  
Author(s):  
Gerhard Schurz

Abstract: The paper starts with the distinction between conjunction-of-parts accounts and disjunction-of-possibilities accounts of truthlikeness (Sects. 1, 2). In Sect. 3, three distinctions between kinds of truthlikeness measures (t-measures) are introduced: (i) comparative versus numeric t-measures, (ii) t-measures for qualitative versus quantitative theories, and (iii) t-measures for deterministic versus probabilistic truth. These three kinds of truthlikeness are explicated and developed within a version of conjunctive part accounts based on content elements (Sects. 4, 5). The focus lies on measures of probabilistic truthlikeness, which are divided into t-measures for statistical probabilities and single case probabilities (Sect. 4). The logical notion of probabilistic truthlikeness (evaluated relative to true probabilistic laws) can be treated as a subcase of deterministic truthlikeness for quantitative theories (Sects. 4–6). In contrast, the epistemic notion of probabilistic truthlikeness (evaluated relative to given empirical evidence) creates genuinely new problems, especially for hypotheses about single case probabilities that are evaluated not by comparison to observed frequencies (as statistical probabilities are), but by comparison to the truth values of single event statements (Sect. 6). By the method of meta-induction, competing theories about single case probabilities can be aggregated into a combined theory with optimal predictive success and epistemic truthlikeness (Sect. 7).
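The aggregation step in the final sentence can be sketched in code. The following is a minimal success-weighted scheme in the spirit of meta-induction, not Schurz's exact formalism: each competing theory's single-case forecasts are discounted by past error, so the combined forecast tracks the best performer. The two "theories" (forecasting 0.8 and 0.2), the event sequence, and the learning rate are all invented for illustration.

```python
# Hedged sketch of meta-inductive aggregation: weight competing predictors
# by their track record, then combine their single-case probability forecasts.
import math

def combined_forecast(predictions, weights):
    """Weighted average of the competitors' probability forecasts."""
    total = sum(weights)
    return sum(w * p for w, p in zip(weights, predictions)) / total

def update_weights(weights, predictions, outcome, rate=2.0):
    """Discount each predictor by its squared error on the observed event (0 or 1)."""
    return [w * math.exp(-rate * (p - outcome) ** 2)
            for w, p in zip(weights, predictions)]

# Two hypothetical theories: one well-calibrated (0.8), one poorly calibrated
# (0.2), for a sequence of single events that mostly occur.
forecasts = [0.8, 0.2]
events = [1, 1, 0, 1, 1, 1, 0, 1, 1, 1]

weights = [1.0, 1.0]
for e in events:
    weights = update_weights(weights, forecasts, e)

combined = combined_forecast(forecasts, weights)
print(f"weights: {weights[0]:.4f}, {weights[1]:.6f}")
print(f"combined forecast: {combined:.3f}")  # dominated by the better theory
```

After ten events, the better-calibrated theory carries nearly all the weight, so the combined forecast sits close to 0.8; this is the sense in which the aggregate inherits the predictive success of the best competitor.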


2021 ◽  
pp. 206-240
Author(s):  
Peter Millican

Peter Millican addresses the issue of how best to interpret Hume’s iconic passages on causation and causal powers and aims to cut through the various interpretations by fixing twelve ‘key points’ and arguing that a reductivist reading makes best sense of them. With these twelve points regarding Hume’s theory fixed, Millican turns toward adjudicating between reductivist, subjectivist, and projectivist interpretations. First, Millican attacks subjectivist interpretations on the grounds that they emphasize melodramatic passages in tension with Hume’s more considered claims, especially the first definition of necessity. Millican backs up the critical comments about subjectivism with a plausibly Humean account of what his ‘impression of power or necessary connexion’ might be. Then he turns to projectivist interpretations. Here, he argues that projectivist readings can be accommodated by the reductivist reading he is defending. After that, he turns to the ‘New Hume’, who allegedly accepted ‘thick’ causal powers, which push beyond the two definitions of cause. However, Millican emphasizes that Hume did accept causal powers in some thinner sense, powers that reduce to causal structures in the world that allow the discovery of laws and enable predictive success.


2021 ◽  
Vol 7 (3) ◽  
pp. 61-82
Author(s):  
Peter Galbács

This paper provides a look into what Lucas meant by the term ‘analogue systems’ and how he conceived making them useful. It is argued that any model with remarkable predictive success can be regarded as an analogue system; the term is thus neutral in terms of usefulness. To be useful, Lucas supposed, models must meet further requirements. These prerequisites are introduced in two steps in the paper. First, some properties of ‘useless’ Keynesian macroeconometric models come to the fore as contrasting cases. Second, it is argued that Lucas suggested two assumptions as the keys to usefulness because he conceived them as referring to genuine components of social reality and hence as true propositions. One is money as a causal instrument, and the other is the choice-theoretic framework used to describe the causal mechanisms underlying large-scale fluctuations. Extensive quotes from Lucas’s unpublished materials underpin the claims.


2020 ◽  
Vol 50 (6) ◽  
pp. 373-386
Author(s):  
Michael F. Gorman ◽  
Lakshminarayana Nittala ◽  
Jeffrey M. Alden

Each year, the INFORMS Edelman Award celebrates the best and most impactful implementations of operations research, management science, and analytics. As the Edelman Award approaches its 50-year mark, we provide a history and characterization of the award’s finalists and winners. We provide some basic descriptive analytics about the participating organizations and authors, the impact of their work, and the methods they employed. We also conduct predictive analytics on finalist submissions, gauging contributors to success in establishing winning entries. We find that predicting Edelman winners a priori is extremely difficult; however, given a set of finalists, predictive models based on monetary impact could have predicted the winner over half the time in recent years, but would have had less predictive success in the early years of the competition. We suggest that, by characterizing the finalists, we can give future entrants a better picture of what it takes to compete for the Edelman Award.


2020 ◽  
Vol 30 (5) ◽  
pp. 729-734
Author(s):  
Freek Oude Maatman

Kalis and Borsboom (2020) defend their realism about folk psychology against my challenge to provide a grounding argument for the correctness of folk psychological explanation (Oude Maatman, 2020). In this reply, I show how their clarified realism in fact vindicates this challenge, as it heavily relies on the predictive success of folk psychology. I then proceed by describing how their realist interpretation of “intentional content” complicates the usability of network theory, and show that both their antireductionism and realism are grounded in an empirical gamble against alternatives. I end with a brief defense of my own version of network theory.

