probability estimates
Recently Published Documents

Total documents: 403 (five years: 83)
H-index: 34 (five years: 5)

Author(s): Ash Bullement, Benjamin Kearns

Abstract Survival extrapolation plays a key role within cost-effectiveness analysis and is often subject to substantial uncertainty. Use of external data to improve extrapolations has been identified as a key research priority. We present findings from a pilot study using data from the COU-AA-301 trial of abiraterone acetate for metastatic castration-resistant prostate cancer, to explore how external trial data may be incorporated into survival extrapolations. External trial data were identified via a targeted search of technology assessment reports. Four methods using external data were compared to simple parametric models (SPMs): informal reference to external data to select appropriate SPMs, piecewise models with and without hazard ratio adjustment, and Bayesian models fitted with a prior on the shape parameter(s). Survival and hazard plots were compared, and summary metrics (point estimate accuracy and restricted mean survival time) were calculated. Without consideration of external data, several SPMs may have been selected as the ‘best-fitting’ model. The range of survival probability estimates was generally reduced when external data were included in model estimation, and external hazard plots aided model selection. Different methods yielded varied results, even with the same data source, highlighting potential issues when integrating external trial data within model estimation. By using external trial data, the most (in)appropriate models may be more easily identified. However, benefits of using external data are contingent upon their applicability to the research question, and the choice of method can have a large impact on extrapolations.
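
The piecewise approach described above can be illustrated with a short sketch. This is not the authors' code; the hazards, hazard ratio, and cutoff are illustrative placeholders rather than COU-AA-301 values. It simply shows how a trial-based hazard can be joined to a hazard-ratio-adjusted external hazard and converted into survival probabilities and restricted mean survival time.

```python
import numpy as np

# Minimal sketch of a piecewise extrapolation: keep the trial-based hazard during
# observed follow-up, then switch to an external hazard scaled by an assumed
# hazard ratio. All numbers are illustrative placeholders.

months = np.arange(1, 121)                 # months 1..120 (10 years)
trial_hazard = np.full(120, 0.030)         # placeholder monthly hazard from the trial SPM
external_hazard = np.full(120, 0.022)      # placeholder monthly hazard from external data
hazard_ratio = 1.10                        # assumed HR adjusting external data to the target population
cutoff = 36                                # end of reliable trial follow-up (months)

# Piecewise hazard: trial hazard up to the cutoff, HR-adjusted external hazard afterwards
hazard = np.where(months <= cutoff, trial_hazard, hazard_ratio * external_hazard)

# Survival at the end of month k is the product of monthly survival probabilities
survival = np.cumprod(1.0 - hazard)

# Crude one-month-step approximation of restricted mean survival time over 10 years
rmst = survival.sum()
print(f"S(60 months) ≈ {survival[59]:.3f}, RMST(120 months) ≈ {rmst:.1f} months")
```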


2021, Vol 923 (2), pp. 236
Author(s): Dorian S. Abbot, Robert J. Webber, Sam Hadden, Darryl Seligman, Jonathan Weare

Abstract Due to the chaotic nature of planetary dynamics, there is a non-zero probability that Mercury’s orbit will become unstable in the future. Previous efforts have estimated the probability of this happening between 3 and 5 billion years in the future using a large number of direct numerical simulations with an N-body code, but were not able to obtain accurate estimates before 3 billion years in the future because Mercury instability events are too rare. In this paper we use a new rare-event sampling technique, Quantile Diffusion Monte Carlo (QDMC), to estimate that the probability of a Mercury instability event in the next 2 billion years is approximately 10⁻⁴ in the REBOUND N-body code. We show that QDMC provides unbiased probability estimates at a computational cost up to 100 times lower than direct numerical simulation. QDMC is easy to implement and could be applied to many problems in planetary dynamics in which it is necessary to estimate the probability of a rare event.
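
QDMC itself is a specialized algorithm, so the sketch below illustrates a simpler relative, fixed-effort multilevel splitting, on a toy drifting random walk rather than an N-body integration. It is only meant to show the general shape of rare-event probability estimation by staged resampling; every level, drift, and walker count here is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

def run_until(x, lower, upper, drift=-0.2, sigma=1.0, max_steps=10_000):
    """Simulate a drifting random walk from x until it crosses `upper` (success)
    or `lower` (failure). Returns (reached_upper, final_state)."""
    for _ in range(max_steps):
        x += drift + sigma * rng.standard_normal()
        if x >= upper:
            return True, x
        if x <= lower:
            return False, x
    return False, x

def multilevel_splitting(levels, n_walkers=1_000, lower=-5.0):
    """Estimate the rare-event probability of reaching levels[-1] before `lower`
    by multiplying conditional crossing fractions between intermediate levels."""
    states = np.zeros(n_walkers)          # all walkers start at 0
    estimate = 1.0
    for level in levels:
        survivors = []
        for x in states:
            reached, y = run_until(x, lower, level)
            if reached:
                survivors.append(y)
        if not survivors:
            return 0.0
        estimate *= len(survivors) / len(states)
        # Resample walkers from the survivors to keep the ensemble size fixed
        states = rng.choice(survivors, size=n_walkers)
    return estimate

print("P(rare event) ≈", multilevel_splitting(levels=[2, 4, 6, 8, 10]))
```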


Mathematics, 2021, Vol 9 (22), pp. 2982
Author(s): Liangjun Yu, Shengfeng Gan, Yu Chen, Dechun Luo

Naive Bayes (NB) is easy to construct but surprisingly effective, and it is one of the top ten classification algorithms in data mining. The conditional independence assumption of NB ignores the dependency between attributes, so its probability estimates are often suboptimal. Hidden naive Bayes (HNB) adds a hidden parent to each attribute, which can reflect dependencies from all the other attributes. Compared with other Bayesian network algorithms, it offers significant improvements in classification performance and avoids structure learning. However, HNB's implicit assumption that every instance contributes equally to probability estimation does not always hold in real-world applications. To reflect the different influences of different instances, we modify HNB into an improved HNB model and propose a novel hybrid approach called instance-weighted hidden naive Bayes (IWHNB). IWHNB combines instance weighting with the improved HNB model in one uniform framework: instance weights are incorporated into the improved HNB model when calculating probability estimates. Extensive experimental results show that IWHNB obtains significant improvements in classification performance compared with NB, HNB and other state-of-the-art competitors, while maintaining the low time complexity that characterizes HNB.
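
As a rough illustration of how instance weights enter probability estimation, the sketch below implements a plain instance-weighted discrete naive Bayes (not the full IWHNB model with hidden parents): each training instance contributes its weight, rather than a count of one, to the frequency tables.

```python
import numpy as np

def weighted_nb_fit(X, y, w, n_values, n_classes, alpha=1.0):
    """Fit a discrete naive Bayes model in which every training instance
    contributes its weight w[i] to the counts instead of a count of 1.
    X holds integer-coded attribute values; alpha is Laplace smoothing."""
    n, d = X.shape
    class_w = np.zeros(n_classes)
    cond_w = [np.zeros((n_classes, n_values[j])) for j in range(d)]
    for i in range(n):
        class_w[y[i]] += w[i]
        for j in range(d):
            cond_w[j][y[i], X[i, j]] += w[i]
    priors = (class_w + alpha) / (class_w.sum() + alpha * n_classes)
    likelihoods = [
        (cond_w[j] + alpha) / (class_w[:, None] + alpha * n_values[j])
        for j in range(d)
    ]
    return priors, likelihoods

def weighted_nb_predict_proba(x, priors, likelihoods):
    """Class-membership probability estimates for one instance x."""
    log_p = np.log(priors)
    for j, lk in enumerate(likelihoods):
        log_p = log_p + np.log(lk[:, x[j]])
    p = np.exp(log_p - log_p.max())
    return p / p.sum()

# Toy example: two binary attributes, two classes, extra weight on instance 0
X = np.array([[0, 1], [1, 1], [1, 0], [0, 0]])
y = np.array([0, 0, 1, 1])
w = np.array([2.0, 1.0, 1.0, 1.0])
priors, likes = weighted_nb_fit(X, y, w, n_values=[2, 2], n_classes=2)
print(weighted_nb_predict_proba(np.array([0, 1]), priors, likes))
```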


2021
Author(s): Lara Bertram, Eric Schulz, Jonathan D. Nelson

Information about risks and probabilities is ubiquitous in our environment, forming the basis for decisions in an uncertain world. Emotions are known to modulate subjective probability assessments when probabilistic information is emotionally valenced, yet little is known about the role of emotions in subjective probability assessment of affectively neutral events. We investigated this in one correlational study (Study 1, N = 162) and one experimental study (Study 2, N = 119). As predicted, we found that emotional dominance modulated the degree of conservatism in respondents’ neutral probability estimates. Remarkably, this pattern also transferred to realistic risk assessments. Furthermore, high-dominance individuals showed an increased tendency to use the representativeness heuristic as a proxy for probability. Our findings highlight the importance of considering emotions, particularly the little-understood emotion dimension of dominance, in research on probabilistic cognition.


2021, Vol 25 (6), pp. 1431-1451
Author(s): Li-Min Wang, Peng Chen, Musa Mammadov, Yang Liu, Si-Yuan Wu

Of the numerous proposals to refine naive Bayes by weakening its attribute independence assumption, averaged one-dependence estimators (AODE) has been shown to achieve significantly higher classification accuracy at a moderate cost in classification efficiency. However, all one-dependence estimators (ODEs) in AODE have the same weight and are treated equally. To address this issue, model weighting, which assigns discriminative weights to ODEs and then linearly combines their probability estimates, has proved to be an efficient and effective approach. Most information-theoretic weighting metrics, including mutual information, the Kullback-Leibler measure and information gain, place more emphasis on the correlation between the root attribute (value) and the class variable. We argue that the topology of each ODE can be divided into a set of local directed acyclic graphs (DAGs) based on the independence assumption, and we introduce multivariate mutual information to measure the extent to which the DAGs fit the data. On this premise, we propose a novel weighted AODE algorithm, called AWODE, that adaptively selects weights to alleviate the independence assumption and make the learned probability distribution fit the instance. The proposed approach is validated on 40 benchmark datasets from the UCI machine learning repository. The experimental results reveal that AWODE achieves a bias-variance trade-off and is a competitive alternative to single-model Bayesian learners (such as TAN and KDB) and other weighted AODEs (such as WAODE).
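
The final combination step can be shown in a few lines. In the sketch below the per-ODE probability estimates and the weights are placeholders; in AWODE the weights would come from the multivariate mutual information criterion described above.

```python
import numpy as np

# Minimal sketch of weighted AODE's combination step. Each row of `ode_probs`
# is one ODE's class-probability estimate for the same test instance; the
# weights are placeholders standing in for an information-theoretic criterion.
ode_probs = np.array([
    [0.70, 0.30],   # ODE rooted at attribute 1
    [0.55, 0.45],   # ODE rooted at attribute 2
    [0.80, 0.20],   # ODE rooted at attribute 3
])
weights = np.array([0.5, 0.2, 0.3])   # assumed weights, e.g. from multivariate MI

combined = weights @ ode_probs        # linear combination of the estimates
combined /= combined.sum()            # renormalize to a probability vector
print(combined)
```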


2021, pp. 026921632110483
Author(s): Nicola White, Linda JM Oostendorp, Victoria Vickerstaff, Christina Gerlach, Yvonne Engels, ...

Background: The Surprise Question (‘Would I be surprised if this patient died within 12 months?’) identifies patients in the last year of life. It is unclear whether ‘surprised’ means the same thing to each clinician, and whether their responses are internally consistent. Aim: To determine the consistency with which the Surprise Question is used. Design: A cross-sectional online study of participants located in Belgium, Germany, Italy, The Netherlands, Switzerland and the UK. Participants completed 20 hypothetical patient summaries (‘vignettes’). Primary outcome measure: a continuous estimate of the probability of death within 12 months (0% [certain survival] to 100% [certain death]). A threshold (the probability estimate above which Surprise Question responses were consistently ‘no’) and an inconsistency range (the range of probability estimates over which respondents vacillated between responses) were calculated. Univariable and multivariable linear regression explored differences in consistency. Trial registration: NCT03697213. Setting/participants: registered General Practitioners (GPs). Of the 307 GPs who started the study, 250 completed 15 or more vignettes. Results: Participants had a consistency threshold of 49.8% (SD 22.7) and an inconsistency range of 17% (SD 22.4). Italy had a significantly higher threshold than the other countries (p = 0.002). Threshold levels also differed with clinician age: for every additional year of age, participants had a higher threshold. There was no difference in inconsistency between countries (p = 0.53). Conclusions: There is variation between clinicians in how the Surprise Question is used. Over half of GPs were not internally consistent in their responses to the Surprise Question. Future research with standardised terms and real patients is warranted.
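
The threshold and inconsistency range can be operationalised per respondent roughly as follows; this is one plausible reading of the definitions above, not the study's published algorithm, and the example responses are invented.

```python
import numpy as np

def threshold_and_inconsistency(prob_estimates, answered_no):
    """One plausible operationalisation (an assumption, not the study's exact
    algorithm). `prob_estimates` are a respondent's 0-100% death-probability
    estimates for the vignettes; `answered_no` marks vignettes answered
    'No, I would not be surprised if this patient died within 12 months'."""
    p = np.asarray(prob_estimates, dtype=float)
    no = np.asarray(answered_no, dtype=bool)
    highest_yes = p[~no].max() if (~no).any() else 0.0   # highest estimate still answered 'yes'
    lowest_no = p[no].min() if no.any() else 100.0       # lowest estimate answered 'no'
    threshold = highest_yes                              # above this, answers were consistently 'no'
    inconsistency = max(0.0, highest_yes - lowest_no)    # overlap where answers vacillated
    return float(threshold), float(inconsistency)

# Invented respondent: 'no' answers cluster at high probability estimates but overlap with 'yes'
est = [10, 25, 40, 45, 55, 60, 70, 85]
no  = [False, False, False, True, False, True, True, True]
print(threshold_and_inconsistency(est, no))   # -> (55.0, 10.0)
```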


2021, Vol 73 (1)
Author(s): Takahiro Tsuyuki, Akio Kobayashi, Reiko Kai, Takeshi Kimura, Satoshi Itaba

Abstract Along the Nankai Trough subduction zone, southwest Japan, short-term slow slip events (SSEs) are commonly detected in strain and tilt records. These observational data have been used in rectangular fault models with uniform slip to analyze SSEs; however, the assumption of uniform slip precludes mapping the slip distribution in detail. We report here an inversion method, based on the joint use of strain and tilt data and evaluated in terms of the Akaike Bayesian information criterion (ABIC), to estimate the slip distributions of short-term SSEs on the plate interface. Tests of this method yield slip distributions with smaller errors than are possible with strain or tilt data alone. The method provides detailed spatial slip distributions of short-term SSEs, including probability estimates, enabling improved monitoring of their locations and amounts of slip.
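
Generically, such a joint inversion stacks the strain and tilt data with their respective Green's functions, weights each datum by its error, and adds a smoothness constraint whose strength is selected via ABIC. The sketch below shows only that damped weighted least-squares structure with placeholder matrices; the ABIC search itself and the fault geometry are omitted.

```python
import numpy as np

def joint_slip_inversion(G_strain, d_strain, sig_strain,
                         G_tilt, d_tilt, sig_tilt,
                         L, alpha2):
    """Damped weighted least-squares sketch of a joint strain + tilt inversion:
    minimize ||W(Gm - d)||^2 + alpha2 * ||L m||^2, where W weights each datum
    by its error, L is a smoothing operator, and alpha2 is the hyperparameter
    that the paper selects via ABIC (the ABIC search is omitted here)."""
    G = np.vstack([G_strain / sig_strain[:, None], G_tilt / sig_tilt[:, None]])
    d = np.concatenate([d_strain / sig_strain, d_tilt / sig_tilt])
    A = G.T @ G + alpha2 * (L.T @ L)
    slip = np.linalg.solve(A, G.T @ d)
    cov = np.linalg.inv(A)            # formal parameter covariance for error estimates
    return slip, np.sqrt(np.diag(cov))

# Toy usage with random placeholder geometry (purely illustrative)
rng = np.random.default_rng(1)
G_s, G_t = rng.normal(size=(12, 20)), rng.normal(size=(8, 20))
m_true = np.zeros(20)
m_true[8:12] = 1.0                    # a patch of slip on 4 of 20 subfaults
d_s = G_s @ m_true + 0.01 * rng.normal(size=12)
d_t = G_t @ m_true + 0.01 * rng.normal(size=8)
slip, slip_err = joint_slip_inversion(G_s, d_s, np.full(12, 0.01),
                                      G_t, d_t, np.full(8, 0.01),
                                      np.eye(20), alpha2=0.1)
print(slip.round(2))
```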


Author(s): Fabio Rigat

Abstract “What data will show the truth?” is a fundamental question emerging early in any empirical investigation. From a statistical perspective, experimental design is the appropriate tool to address this question by ensuring control of the error rates of planned data analyses and of the ensuing decisions. From an epistemological standpoint, planned data analyses describe in mathematical and algorithmic terms a pre-specified mapping of observations into decisions. The value of exploratory data analyses is often less clear, resulting in confusion about which characteristics of design and analysis are necessary for decision making and which may be useful to inspire new questions. This point is addressed here by illustrating the Popper-Miller theorem in plain terms and with graphical support. Popper and Miller proved that probability estimates cannot generate hypotheses on behalf of investigators. Consistent with Popper-Miller, we show that probability estimation can only reduce uncertainty about the truth of a merely possible hypothesis. This fact clearly identifies exploratory analysis as one of the tools supporting a dynamic process of hypothesis generation and refinement which cannot be purely analytic. A clear understanding of these facts will enable stakeholders, mathematical modellers and data analysts to better engage on a level playing field when designing experiments and when interpreting the results of planned and exploratory data analyses.


2021, pp. JCO.21.00194
Author(s): AnnaLynn M. Williams, Kevin R. Krull, Carrie R. Howell, Pia Banerjee, Tara M. Brinkman, ...

PURPOSE Eight percent of young-adult childhood cancer survivors meet criteria for frailty, an aging phenotype associated with poor health. In the elderly general population, frailty is associated with neurocognitive decline; this association has not been examined in adult survivors of childhood cancer. METHODS Childhood cancer survivors 18-45 years old (≥ 10 years from diagnosis) were clinically evaluated for prefrailty or frailty (respectively defined as ≥ 2 or ≥ 3 of: muscle wasting, muscle weakness, low energy expenditure, slow walking speed, and exhaustion [Fried criteria]) and completed neuropsychologic assessments at enrollment (January 2008-June 2013) and 5 years later. Weighted linear regression, using the inverse of the sampling probability estimates as weights, compared differences in neurocognitive decline between prefrail and frail survivors and nonfrail survivors, adjusting for diagnosis age, sex, race, CNS-directed therapy (cranial radiation, intrathecal chemotherapy, and neurosurgery), and baseline neurocognitive performance. RESULTS Survivors were on average 30 years old and 22 years from diagnosis; 18% were prefrail and 6% frail at enrollment. Frail survivors declined an average of 0.54 standard deviations (95% CI, −0.93 to −0.15) in short-term verbal recall, whereas nonfrail survivors did not decline (β = .22; difference of βs = −.76; 95% CI, −1.19 to −0.33). Frail survivors declined more than nonfrail survivors on visual-motor processing speed (β = −.40; 95% CI, −0.67 to −0.12), cognitive flexibility (β = −.62; 95% CI, −1.02 to −0.22), and verbal fluency (β = −.23; 95% CI, −0.41 to −0.05). Prefrail and frail survivors experienced greater declines in focused attention (prefrail β = −.35; 95% CI, −0.53 to −0.17; frail β = −.48; 95% CI, −0.83 to −0.12) compared with nonfrail survivors. CONCLUSION Over approximately 5 years, prefrail and frail young-adult survivors had greater declines in cognitive domains associated with aging and dementia compared with nonfrail survivors. Interventions with a global impact, designed to target the mechanistic underpinnings of frailty, may also mitigate or prevent neurocognitive decline.
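
The weighting scheme amounts to inverse-probability-weighted linear regression. The sketch below is a generic illustration with simulated placeholder data and hypothetical variable names, not the study's analysis code.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Minimal sketch of inverse-probability-weighted linear regression: each
# survivor's weight is 1 / (probability of being sampled), and the outcome is
# the change in a neurocognitive z-score over follow-up. All data simulated.
rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "frail": rng.integers(0, 2, n),             # 1 = frail, 0 = nonfrail (placeholder)
    "age_dx": rng.uniform(1, 18, n),            # age at diagnosis
    "female": rng.integers(0, 2, n),
    "baseline_z": rng.normal(0, 1, n),
    "sampling_prob": rng.uniform(0.2, 0.9, n),  # assumed known sampling probabilities
})
df["decline_z"] = -0.3 * df["frail"] + 0.2 * df["baseline_z"] + rng.normal(0, 1, n)

X = sm.add_constant(df[["frail", "age_dx", "female", "baseline_z"]])
model = sm.WLS(df["decline_z"], X, weights=1.0 / df["sampling_prob"]).fit()
print(model.params)
```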


2021, Vol 10 (3), pp. 16-31
Author(s): Saibal Kumar Saha, Anindita Adhikary, Ajeya Jha, Vijay K. Mehta

Medication non-compliance is common among patients suffering from chronic disease. This research aims to assess the effectiveness of food timing as an intervention to improve medication compliance. 509 patients who were on medication and receiving treatment at Central Referral Hospital, Sikkim, were interviewed. Probability estimates, risk difference, relative risk, and odds ratios were used to analyse and predict medication compliance when food timing was used as a reminder. Analysis with 95% confidence intervals indicated that the results were attributable to the use of the reminder rather than to chance. The study reveals that, with food timing used as a reminder, a patient has 50.2% lower odds of deviating from the scheduled time of medication, 129.2% greater odds of completing the course of medication, 41.4% lower odds of consciously missing the medication, and 56.6% lower odds of missing a medication dose. These figures indicate the effectiveness of this form of reminder.
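
The reported measures can be reproduced in form (though not in value) from a 2x2 table of reminder use versus compliance. The sketch below uses standard Wald-type formulas with placeholder counts, not the study's data.

```python
import numpy as np

def two_by_two_measures(a, b, c, d):
    """Risk difference, relative risk and odds ratio with 95% CIs from a 2x2 table:
        a = exposed & event, b = exposed & no event,
        c = unexposed & event, d = unexposed & no event.
    (Here the event could be 'missed a dose'; exposure is using food timing as a reminder.)"""
    z = 1.96
    r1, r0 = a / (a + b), c / (c + d)
    rd = r1 - r0
    rd_se = np.sqrt(r1 * (1 - r1) / (a + b) + r0 * (1 - r0) / (c + d))
    rr = r1 / r0
    rr_se = np.sqrt(1/a - 1/(a + b) + 1/c - 1/(c + d))     # SE of log(RR)
    orr = (a * d) / (b * c)
    or_se = np.sqrt(1/a + 1/b + 1/c + 1/d)                 # SE of log(OR)
    return {
        "risk_difference": (rd, rd - z * rd_se, rd + z * rd_se),
        "relative_risk": (rr, np.exp(np.log(rr) - z * rr_se), np.exp(np.log(rr) + z * rr_se)),
        "odds_ratio": (orr, np.exp(np.log(orr) - z * or_se), np.exp(np.log(orr) + z * or_se)),
    }

# Placeholder counts, not the study's data
print(two_by_two_measures(a=40, b=210, c=70, d=189))
```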

