The win ratio approach for composite endpoints: practical guidance based on previous experience

2020 ◽  
Vol 41 (46) ◽  
pp. 4391-4399 ◽  
Author(s):  
Björn Redfors ◽  
John Gregson ◽  
Aaron Crowley ◽  
Thomas McAndrew ◽  
Ori Ben-Yehuda ◽  
...  

Abstract The win ratio was introduced in 2012 as a new method for examining composite endpoints and has since been widely adopted in cardiovascular (CV) trials. Improving upon conventional methods for analysing composite endpoints, the win ratio accounts for the relative priorities of the components and allows the components to be different types of outcomes. For example, the win ratio can combine the time to death with the number of occurrences of a non-fatal outcome such as CV-related hospitalizations (CVHs) in a single hierarchical composite endpoint. The win ratio can provide greater statistical power to detect and quantify a treatment difference by using all available information contained in the component outcomes. The win ratio can also incorporate quantitative outcomes such as exercise tests or quality-of-life scores. There is a need for more practical guidance on how best to design trials using the win ratio approach. This manuscript provides an overview of the principles behind the win ratio and offers insights into how to implement it in CV trial design and reporting, including how to determine trial size.
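To make the hierarchical comparison concrete, the sketch below computes a win ratio for a two-level composite (death, then CVH count) on hypothetical data. It deliberately simplifies the method (no censoring handling, no stratification), so it illustrates the principle rather than the full approach described in the paper.

```python
# Minimal win-ratio sketch for a two-level hierarchical composite endpoint
# (death first, then number of CV hospitalizations). All data are hypothetical
# and censoring is ignored for simplicity.

def compare_pair(trt, ctl):
    """Return 'win' if the treatment patient fares better, 'loss' if worse, else 'tie'."""
    # Level 1: death. Surviving (or surviving longer) wins.
    if trt["died"] and ctl["died"]:
        if trt["death_time"] != ctl["death_time"]:
            return "win" if trt["death_time"] > ctl["death_time"] else "loss"
    elif trt["died"] != ctl["died"]:
        return "loss" if trt["died"] else "win"
    # Level 2: fewer CV hospitalizations wins.
    if trt["n_cvh"] != ctl["n_cvh"]:
        return "win" if trt["n_cvh"] < ctl["n_cvh"] else "loss"
    return "tie"

def win_ratio(treatment, control):
    wins = losses = 0
    for t in treatment:          # all treatment-vs-control pairs
        for c in control:
            result = compare_pair(t, c)
            wins += result == "win"
            losses += result == "loss"
    return wins / losses          # ratio > 1 favours treatment

# Hypothetical example with three patients per arm.
treatment = [{"death_time": 24, "died": False, "n_cvh": 0},
             {"death_time": 18, "died": True,  "n_cvh": 2},
             {"death_time": 24, "died": False, "n_cvh": 1}]
control   = [{"death_time": 24, "died": False, "n_cvh": 3},
             {"death_time": 12, "died": True,  "n_cvh": 1},
             {"death_time": 24, "died": False, "n_cvh": 2}]
print(win_ratio(treatment, control))
```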

2017 ◽  
Vol 28 (1) ◽  
pp. 151-169
Author(s):  
Abderrahim Oulhaj ◽  
Anouar El Ghouch ◽  
Rury R Holman

Composite endpoints are frequently used in clinical outcome trials to capture more events, thereby increasing statistical power. A key requirement for a composite endpoint to be meaningful is the absence of so-called qualitative heterogeneity, which ensures a valid overall interpretation of any treatment effect identified. Qualitative heterogeneity occurs when individual components of a composite endpoint exhibit differences in the direction of a treatment effect. In this paper, we develop a general statistical method to test for qualitative heterogeneity, that is, to test whether a given set of parameters share the same sign. This method is based on the intersection–union principle and, provided that the sample size is large, is valid whatever model is used to estimate the parameters. We propose two versions of the testing procedure, one based on random sampling from a Gaussian distribution and another based on bootstrapping. Our work covers both the case of completely observed data and the case where some observations are censored, an important issue in many clinical trials. We evaluated the size and power of the proposed tests in extensive Monte Carlo simulations for multivariate time-to-event data, designed under a variety of conditions on dimensionality, censoring rate, sample size, and correlation structure. The testing procedure showed very good performance in terms of statistical power and type I error. The proposed test was applied to a data set from a single-center, randomized, double-blind controlled trial in the area of Alzheimer’s disease.
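As a rough illustration of the intersection–union idea, the sketch below declares "same sign" only when every component effect is individually significant in the same direction. The estimates and standard errors are hypothetical, and the paper's Gaussian-simulation and bootstrap versions, as well as its handling of censoring, are not reproduced here.

```python
# Intersection-union style check that a set of estimated treatment effects
# all share the same sign. Inputs are hypothetical; this is a simplified
# sketch, not the full procedure from the paper.
import numpy as np
from scipy.stats import norm

def same_sign_iu_test(beta_hat, se, alpha=0.05):
    """True if all effects are significantly positive, or all significantly negative."""
    beta_hat, se = np.asarray(beta_hat), np.asarray(se)
    z = beta_hat / se
    crit = norm.ppf(1 - alpha)
    all_positive = np.all(z > crit)    # every component individually > 0
    all_negative = np.all(z < -crit)   # every component individually < 0
    return bool(all_positive or all_negative)

# Hypothetical effects of a treatment on three component endpoints
# (e.g. log hazard ratios) with their standard errors.
print(same_sign_iu_test(beta_hat=[-0.30, -0.22, -0.41], se=[0.10, 0.08, 0.15]))
```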


2021 ◽  
Author(s):  
Anthony J Webster

Clinical trials and epidemiological cohort studies often group similar diseases together into a composite endpoint to increase statistical power. A common example is using a 3-digit code from the International Classification of Diseases (ICD) to represent a collection of several 4-digit coded diseases. More recently, data-driven studies have used associations with risk factors to cluster diseases, prompting this article to reconsider the assumptions needed to study a composite endpoint of several potentially distinct diseases. An important assumption is that the (possibly multivariate) associations are the same for all diseases in a composite endpoint (i.e. are not heterogeneous). Multivariate measures of heterogeneity from meta-analysis are therefore considered, including multivariate versions of the I² statistic and Cochran's Q statistic. Whereas meta-analysis offers tools to test the heterogeneity of clustering studies, clustering models suggest an alternative heterogeneity test: whether the data are better described by one cluster, or by more than one cluster, of elements with the same mean. The assumptions needed to model composite endpoints with a proportional hazards model are also considered. It is found that the model can fail if one or more diseases in the composite endpoint have different associations. Tests of the proportional hazards assumption can help identify when this occurs. It is emphasised that in multi-stage diseases such as cancer, some germline genetic variants can strongly modify the baseline hazard function; these cannot be adjusted for and must instead be used to stratify the data.
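As a concrete reference point for the heterogeneity measures mentioned above, the sketch below computes the classic univariate Cochran's Q and I² for hypothetical per-disease effect estimates; the multivariate extensions discussed in the article are not implemented here.

```python
# Univariate Cochran's Q and I^2 for a set of per-disease effect estimates.
# The estimates and variances are hypothetical (e.g. associations of one risk
# factor with four ICD-coded diseases grouped into a composite endpoint).
import numpy as np
from scipy.stats import chi2

def cochran_q_i2(theta, var):
    theta, var = np.asarray(theta), np.asarray(var)
    w = 1.0 / var                               # inverse-variance weights
    theta_bar = np.sum(w * theta) / np.sum(w)   # pooled (fixed-effect) estimate
    q = np.sum(w * (theta - theta_bar) ** 2)    # Cochran's Q
    k = len(theta)
    p_value = chi2.sf(q, df=k - 1)              # Q ~ chi^2_{k-1} under homogeneity
    i2 = max(0.0, (q - (k - 1)) / q)            # variation beyond chance
    return q, p_value, i2

q, p, i2 = cochran_q_i2(theta=[0.12, 0.35, 0.10, 0.40],
                        var=[0.01, 0.02, 0.015, 0.03])
print(f"Q = {q:.2f}, p = {p:.3f}, I^2 = {i2:.2f}")
```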


2021 ◽  
Vol 21 (1) ◽  
Author(s):  
I. E. Ceyisakar ◽  
N. van Leeuwen ◽  
Diederik W. J. Dippel ◽  
Ewout W. Steyerberg ◽  
H. F. Lingsma

Abstract Background There is growing interest in assessing the quality of hospital care based on outcome measures. Many quality-of-care comparisons rely on binary outcomes, for example mortality rates. Due to low numbers, the observed differences in outcome are partly subject to chance. We aimed to quantify the gain in efficiency from ordinal instead of binary outcome analyses for hospital comparisons, using patients with traumatic brain injury (TBI) and stroke as examples. Methods We sampled patients from two trials. We simulated ordinal and dichotomous outcomes based on the modified Rankin Scale (stroke) and the Glasgow Outcome Scale (TBI) in scenarios with and without true between-hospital differences in outcome. The potential efficiency gain of ordinal outcomes, analyzed with ordinal logistic regression, compared to dichotomous outcomes, analyzed with binary logistic regression, was expressed as the possible reduction in sample size while keeping the same statistical power to detect outliers. Results In the IMPACT study (9578 patients in 265 hospitals, mean number of patients per hospital = 36), analysis of the ordinal scale rather than the dichotomized scale (‘unfavorable outcome’) allowed for up to 32% fewer patients in the analysis without a loss of power. In the PRACTISE trial (1657 patients in 12 hospitals, mean number of patients per hospital = 138), ordinal analysis allowed for 13% fewer patients. Compared to mortality, ordinal outcome analyses allowed for up to 37% to 63% fewer patients. Conclusions Ordinal analyses provide the statistical power of substantially larger studies analyzed with dichotomized endpoints. We advise exploiting ordinal outcome measures for hospital comparisons in order to increase the efficiency of quality-of-care measurements. Trial registration We do not report the results of a health care intervention.
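The sketch below illustrates the kind of efficiency gain reported here with a small simulation on hypothetical modified Rankin Scale distributions. As a simpler stand-in for the ordinal and binary logistic regressions used in the paper, it compares a Wilcoxon rank-sum test on the full scale with a chi-square test on the dichotomized outcome.

```python
# Simulation sketch: power of an ordinal analysis vs. analysis of the
# dichotomised outcome. Outcome distributions approximate a 7-level mRS and
# are hypothetical; the tests are stand-ins for the regressions in the paper.
import numpy as np
from scipy.stats import mannwhitneyu, chi2_contingency

rng = np.random.default_rng(0)
levels = np.arange(7)                                      # mRS 0 (best) to 6 (dead)
p_ref  = np.array([.10, .15, .15, .20, .20, .10, .10])     # reference hospitals
p_hosp = np.array([.15, .20, .15, .20, .15, .08, .07])     # slightly better hospital

def power(n_per_group, n_sim=2000, alpha=0.05):
    hits_ordinal = hits_binary = 0
    for _ in range(n_sim):
        a = rng.choice(levels, size=n_per_group, p=p_ref)
        b = rng.choice(levels, size=n_per_group, p=p_hosp)
        # Ordinal analysis: rank-based comparison of the full scale.
        hits_ordinal += mannwhitneyu(a, b).pvalue < alpha
        # Binary analysis: dichotomise at 'unfavorable outcome' (mRS >= 3).
        table = [[np.sum(a >= 3), np.sum(a < 3)],
                 [np.sum(b >= 3), np.sum(b < 3)]]
        _, p_binary, _, _ = chi2_contingency(table)
        hits_binary += p_binary < alpha
    return hits_ordinal / n_sim, hits_binary / n_sim

# At the same sample size the ordinal analysis typically has higher power,
# which translates into the 'fewer patients needed' figures reported above.
print(power(n_per_group=300))
```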


Molecules ◽  
2021 ◽  
Vol 26 (6) ◽  
pp. 1672
Author(s):  
Ysadora A. Mirabelli-Montan ◽  
Matteo Marangon ◽  
Antonio Graça ◽  
Christine M. Mayr Marangon ◽  
Kerry L. Wilkinson

Smoke taint has become a prominent issue for the global wine industry as climate change continues to increase the length and severity of fire seasons around the world. Although the issue has prompted a surge in research in recent years, no single solution has yet been identified that is capable of maintaining the quality of wine made from smoke-affected grapes. In this review, we summarize the main research on smoke taint, the key discoveries, and the prevailing uncertainties. We also examine methods for mitigating smoke taint in the vineyard, in the winery, and post-production, and we assess the effectiveness of remediation methods (proposed and actual) based on the available research. In agreement with previous studies, we find that the most viable remedies for smoke taint remain the commercially available activated carbon fining and reverse osmosis treatments, but that the quality of the final treated wines depends fundamentally on the initial severity of the taint. We suggest future studies to improve understanding of methods that have so far only been preliminarily investigated. Using regions that have already experienced severe wildfires, and therefore smoke taint (particularly Australia and California), as case studies, we inform other wine-producing countries that are likely to be affected in the future and suggest specific data collection and policy actions that should be taken, even in countries not yet affected by smoke taint. Ultimately, we consolidate the available information on smoke taint, place it in a global perspective that considers the various stakeholders involved, and provide a launching point for further research on the topic.


PEDIATRICS ◽  
1992 ◽  
Vol 90 (5) ◽  
pp. 729-732
Author(s):  
Pieter J. J. Sauer

Modern technology makes it possible to keep more sick, extremely small, and vulnerable neonates alive. Many neonatologists in the Netherlands believe they should be concerned not only about the rate of survival of their patients, but also about the way the graduates of their care do, in fact, survive beyond the neonatal period. In most cases, we use all available methods to keep newborns alive. However, in some instances there is great concern about the quality of life, if the newborn should survive; here questions do arise about continuing or withholding treatment. In this commentary, I present my impression of the opinions held by a majority of practicing neonatologists in the Netherlands, as well as some personal thoughts and ideas. Recently, a committee convened by the Ministers of Justice and Health in the Netherlands issued an official report regarding the practice of euthanasia and the rules of medical practice when treatment is withheld.1 In this report of more than 250 pages, only 2 pages focus on the newborn. The following conclusions were made in this small section of the report. In almost one half of the instances of a fatal outcome in a neonatal intensive care unit in the Netherlands, discussions about sustaining or withholding treatment did take place at some stage of the hospital stay. A consideration of the future quality of life was always included in the discussion. The committee agreed with doctors interviewed for the report that there are circumstances in which continuation of intensive care treatment is not necessarily in the best interest of a neonate.


Author(s):  
Yoan Lavie Badie ◽  
Fabien Vannier ◽  
Eve Cariou ◽  
Pauline Fournier ◽  
Romain Itier ◽  
...  

Background: The durability of the results of MitraClip procedures is a source of concern. Aims: To investigate risk factors for recurrence of severe mitral regurgitation (MR) after MitraClip implantation for primary MR. Methods and results: Eighty-three patients who underwent successful MitraClip procedures were retrospectively included. Valve anatomy and MitraClip placement were comprehensively analyzed by post-processing 3D echocardiographic acquisitions. The primary composite endpoint was the recurrence of severe MR. Mean age was 83±7 years and 37 patients (44%) were female. Median follow-up was 381 days (IQR 195-717), and 17 (20%) patients reached the primary endpoint. The main causes of recurrence of severe MR were relapse of a prolapse (64%) and single leaflet detachment (23%). Posterior coaptation line length (HR 1.06, 95% CI 1.01-1.12; p=0.02), poor imaging quality (HR 3.84, 95% CI 1.12-13.19; p=0.03), and inter-clip distance (HR 1.60, 95% CI 1.27-2.02; p<0.01) were associated with the occurrence of the primary endpoint. Conclusions: Recurrence of severe MR after a MitraClip procedure for primary MR is common and results from a complex interplay between anatomical factors (tissue excess) and procedural factors (quality of ultrasound guidance and MitraClip spacing).
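The hazard ratios above are typical of a time-to-event regression; the abstract does not state the exact model, so the sketch below only illustrates how such estimates are commonly obtained with Cox proportional hazards regression. All data, column names, and values are invented for illustration.

```python
# Generic Cox proportional hazards sketch (lifelines) on hypothetical data,
# mirroring the kind of predictors named above. Not the authors' analysis.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "time_days":          [381, 195, 717, 250, 400, 120, 600, 90, 500, 320],
    "recurrence":         [1,   0,   0,   1,   0,   1,   0,   1,  0,   1],
    "post_coaptation_mm": [22,  15,  14,  25,  20,  28,  13,  18, 24,  21],
    "poor_image_quality": [1,   0,   1,   0,   0,   1,   0,   0,  1,   1],
    "interclip_dist_mm":  [8,   4,   3,   9,   6,   10,  4,   5,  7,   6],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time_days", event_col="recurrence")
cph.print_summary()   # hazard ratios with 95% confidence intervals
```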


2016 ◽  
Vol. 17 no. 3 (Distributed Computing and...) ◽  
Author(s):  
Milan Erdelj ◽  
Nathalie Mitton ◽  
Tahiry Razafindralambo

In this work we present a decentralized deployment algorithm for wireless mobile sensor networks focused on deployment Efficiency, connectivity Maintenance, and network Reparation (EMR). We assume that a group of mobile sensors is placed in the area of interest to be covered, without any prior knowledge of the environment. The goal of the algorithm is to maximize the covered area and cope with sudden sensor failures. By relying on locally available information about the environment and the neighborhood, and without the need for any kind of synchronization in the network, each sensor iteratively chooses its next-step movement location so as to form a hexagonal lattice grid. Relying on the graph of wireless mobile sensors, we establish properties regarding the quality of coverage, the connectivity of the graph, and the termination of the algorithm. We run extensive simulations to assess the compactness of the deployment and to evaluate robustness against sensor failures. We show through analysis and simulation that the EMR algorithm is robust to node failures and can restore the lattice grid. We also show that, even after a failure, the EMR algorithm can still provide a compact deployment in a reasonable time.
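To give a flavour of the local rule described above, the sketch below moves a sensor toward the nearest free vertex of a hexagonal lattice induced by its neighbours, using only locally known positions. It is a hypothetical illustration of the general idea, not the actual EMR algorithm or its reparation logic.

```python
# Illustrative local movement rule toward a hexagonal lattice position,
# using only neighbour positions (no global coordination). Hypothetical.
import math

def hex_targets(anchor, r):
    """Six hexagonal-lattice positions at distance r around an anchor node."""
    ax, ay = anchor
    return [(ax + r * math.cos(k * math.pi / 3),
             ay + r * math.sin(k * math.pi / 3)) for k in range(6)]

def next_step(position, neighbours, r, step=0.5):
    """Move one step toward the closest unoccupied lattice vertex
    generated by the neighbouring sensors."""
    candidates = [t for n in neighbours for t in hex_targets(n, r)
                  if all(math.dist(t, n2) > 0.1 * r for n2 in neighbours)]
    if not candidates:
        return position                        # nowhere better to go locally
    target = min(candidates, key=lambda t: math.dist(position, t))
    dx, dy = target[0] - position[0], target[1] - position[1]
    d = math.hypot(dx, dy)
    if d <= step:
        return target
    return (position[0] + step * dx / d, position[1] + step * dy / d)

# Hypothetical sensor near the origin with two neighbours, spacing r = 1.
print(next_step((0.3, 0.2), [(1.0, 0.0), (0.5, 0.9)], r=1.0))
```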


2021 ◽  
Author(s):  
Kay Wilhelm ◽  
Tonelle Handley ◽  
Catherine McHugh ◽  
David Lowenstein ◽  
Kristy Arrold

BACKGROUND The internet is increasingly seen as an important source of health information for consumers and their families. Accessing information related to their illness and treatment enables consumers to discuss their health and treatments with their doctors more confidently, but the abundance of readily available information can also be confusing, making it difficult to judge how reliable it is and whether it enables consumers, families, and clinicians to participate in decision-making about care. OBJECTIVE The current study aimed to rate the quality of websites with psychosis-related information, using a validated instrument (DISCERN) and a purpose-developed Psychosis Website Quality Checklist (PWQC), to assess quality over time and to aid professionals in directing consumers to the best available information. METHODS Entering the search terms ‘psychotic’, ‘psychosis’, ‘schizophrenia’, ‘delusion’, and ‘hallucination’ into the search engine Google (www.google.com.au) yielded 25 websites, which were evaluated with DISCERN and the PWQC at two time points, January-March 2014 and January-March 2018, by three health professionals from different disciplines. RESULTS Only the six highest-ranked websites achieved DISCERN scores indicating “good” quality. The overall mean score across websites was 43.96 (SD=12.08), indicating “fair” quality. PWQC ratings were high on “availability and usability” but poor on “credibility,” “currency,” and “breadth and accuracy,” with no substantial improvement in quality over time. Having an editorial/review process (56% of websites) was significantly associated with higher quality scores on both scales. CONCLUSIONS The quality of available information was ‘fair’ and had not significantly improved over time. While higher-quality websites exist, there is no easy way to assess this at face value. Having a readily identifiable editorial/review process was one indicator of website quality. CLINICALTRIAL Not applicable


Author(s):  
Besma Khalfi ◽  
Cyril De Runz ◽  
Herman Akdag

When analyzing spatial issues, geographers are often confronted with problems concerning the uncertainty of the available information. These problems may affect the geometric or semantic quality of objects, and as a result precision is low. It is therefore necessary to develop representation and modeling methods suited to the imprecise nature of geographic data. This recently led to the proposal of F-Perceptory for modeling fuzzy geographic data. The model described in Zoghlami et al. (2011) has some limitations: F-Perceptory does not manage fuzzy composite geographic objects. This paper proposes to enhance the approach by managing this type of object in the modeling and in its transformation to UML. On the technical level, commonly used object modeling tools do not take fuzzy data into account. The authors propose new functional modules integrated into an existing CASE tool.
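Purely as an illustration of what a fuzzy composite geographic object might look like as a data structure, the sketch below models a composite whose parts belong to it with a membership degree in [0, 1]. It is a generic, hypothetical example, not the F-Perceptory model or its UML mapping.

```python
# Generic sketch of a fuzzy composite geographic object: parts carry a
# membership degree expressing how strongly they belong to the composite.
from dataclasses import dataclass, field

@dataclass
class FuzzyComponent:
    name: str              # e.g. a land parcel or sub-zone (hypothetical)
    geometry: object       # placeholder for a geometric primitive
    membership: float      # degree of belonging to the composite, in [0, 1]

@dataclass
class FuzzyCompositeObject:
    name: str
    components: list = field(default_factory=list)

    def add(self, component: FuzzyComponent):
        self.components.append(component)

    def core(self, threshold: float = 1.0):
        """Parts that (almost) fully belong to the composite."""
        return [c for c in self.components if c.membership >= threshold]

# Hypothetical flood-risk zone composed of parcels with uncertain membership.
zone = FuzzyCompositeObject("flood_risk_zone")
zone.add(FuzzyComponent("parcel_A", None, 1.0))
zone.add(FuzzyComponent("parcel_B", None, 0.6))
print([c.name for c in zone.core(threshold=0.9)])
```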


2001 ◽  
Vol 21 (2) ◽  
pp. 36-38 ◽  
Author(s):  
M MacKlin

Heart failure is a common reason for admission to the hospital and to critical care units. The care of patients with heart failure is changing almost daily as new research and therapies become available. Nurses caring for these patients must use available information and assessment findings to discern which type of heart failure exists in each patient. In this way, the care provided can be enhanced, and outcomes can be optimized. Critically thinking nurses can positively influence patients' quality of life and potentially reduce the devastating morbidity and mortality associated with heart failure.

