statistical strategy
Recently Published Documents

TOTAL DOCUMENTS: 67 (five years: 19)
H-INDEX: 9 (five years: 1)

Stroke ◽ 2021 ◽ Author(s): Nawaf Yassi, Kathryn S. Hayward, Bruce C.V. Campbell, Leonid Churilov

The coronavirus disease 2019 (COVID-19) pandemic has presented unique challenges to stroke care and research internationally. In particular, clinical trials in stroke are vulnerable to the impacts of the pandemic at multiple stages, including design, recruitment, intervention, follow-up, and interpretation of outcomes. A carefully considered approach is required to ensure the appropriate conduct of stroke trials during the pandemic and to maintain patient and participant safety. This has recently been addressed by the International Council for Harmonisation, which, in November 2019, released an addendum to the Statistical Principles for Clinical Trials guideline entitled Estimands and Sensitivity Analysis in Clinical Trials. In this article, we present the International Council for Harmonisation estimand framework for the design and conduct of clinical trials, with a specific focus on its application to stroke clinical trials. This framework aims to align the clinical and scientific objectives of a trial with its design and end points. It also encourages the prospective consideration of potential postrandomization intercurrent events which may occur during a trial and impact either the ability to measure an end point or its interpretation. We describe the different categories of such events and the proposed strategies for dealing with them, specifically focusing on the COVID-19 pandemic as a source of intercurrent events. We also describe potential practical impacts posed by the COVID-19 pandemic on trials, health systems, study groups, and participants, all of which should be carefully reviewed by investigators to ensure an adequate practical and statistical strategy is in place to protect trial integrity. We provide examples of the implementation of the estimand framework within hypothetical stroke trials in intracerebral hemorrhage and stroke recovery. While the focus of this article is on COVID-19 impacts, the strategies and principles proposed are well suited to other potential events or issues that may impact clinical trials in the field of stroke.


2021 ◽ Vol 2021 ◽ pp. 1-11 ◽ Author(s): Zhiwei Ji, Jiaheng Gong, Jiarui Feng

Anomalies in time series, also called "discords," are abnormal subsequences. The occurrence of anomalies in a time series may indicate that a fault or disease will occur soon. The development of novel computational approaches for anomaly detection (discord search) in time series is therefore of great significance for the state monitoring and early warning of real-time systems. Previous studies show that many algorithms have been successfully developed and used for anomaly classification, e.g., health monitoring, traffic detection, and intrusion detection. However, anomaly detection in time series has not been as well studied. In this paper, we propose a long short-term memory- (LSTM-) based anomaly detection method (LSTMAD) for discord search in univariate time series data. LSTMAD learns structural features from normal (nonanomalous) training data and then performs anomaly detection via a statistical strategy based on the prediction error for observed data. In our experimental evaluation on public ECG datasets and real-world datasets, LSTMAD detects anomalies more accurately than existing approaches.
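
As a rough illustration of the prediction-error strategy described above, the sketch below trains a small LSTM on normal data only and flags points whose one-step prediction error falls outside a Gaussian reference fitted to the training residuals. This is a minimal sketch of the general idea, not the authors' LSTMAD implementation; the window length, network size, and 3-sigma threshold are illustrative assumptions.

```python
# Minimal sketch of LSTM-based anomaly detection via prediction error.
# Window size, hyperparameters, and the 3-sigma rule are illustrative
# assumptions, not the published LSTMAD configuration.
import numpy as np
import torch
import torch.nn as nn

WINDOW = 30  # length of the input subsequence (assumed)

def make_windows(series, window=WINDOW):
    """Slice a 1-D series into (input window, next value) pairs."""
    X = np.stack([series[i:i + window] for i in range(len(series) - window)])
    y = series[window:]
    return (torch.tensor(X, dtype=torch.float32).unsqueeze(-1),
            torch.tensor(y, dtype=torch.float32))

class Predictor(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)
    def forward(self, x):
        out, _ = self.lstm(x)
        return self.head(out[:, -1]).squeeze(-1)  # predict the next value

def fit(model, X, y, epochs=50, lr=1e-3):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(X), y)
        loss.backward()
        opt.step()

# Train on normal (nonanomalous) data only.
normal = np.sin(np.linspace(0, 60, 1500))          # stand-in for a normal signal
X_tr, y_tr = make_windows(normal)
model = Predictor()
fit(model, X_tr, y_tr)

# Statistical detection rule: errors on normal data define a Gaussian
# reference; observed points whose error exceeds 3 standard deviations
# from the reference mean are flagged as anomalous.
with torch.no_grad():
    ref_err = (model(X_tr) - y_tr).numpy()
mu, sigma = ref_err.mean(), ref_err.std()

test = normal.copy(); test[700:720] += 1.5         # inject a discord
X_te, y_te = make_windows(test)
with torch.no_grad():
    err = (model(X_te) - y_te).numpy()
anomalies = np.where(np.abs(err - mu) > 3 * sigma)[0] + WINDOW
print("flagged indices:", anomalies)
```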


2021 ◽ Author(s): Suzanne B Hendrix, Robin Mogg, Sue Jane Wang, Aloka Chakravarty, Klaus Romero, ...

Qualification of a biomarker for use in a medical product development program requires a statistical strategy that aligns the available evidence with the proposed context of use (COU), identifies any data gaps to be filled, and plans any additional research required to support the qualification. The accumulation, interpretation, and analysis of available data are outlined step by step, illustrated by a qualified enrichment biomarker and a safety biomarker currently undergoing qualification. The detailed steps aid requestors seeking qualification of biomarkers, allowing them to organize the available evidence and identify potential gaps. This provides a statistical perspective for assessing evidence that parallels clinical considerations and is intended to guide the overall evaluation of evidentiary criteria supporting a specific biomarker COU.


Trials ◽ 2021 ◽ Vol 22 (1) ◽ Author(s): Thea Nørgaard Rønsbo, Jens Laigaard, Casper Pedersen, Ole Mathiesen, Anders Peder Højer Karlsen

Abstract Background The Consolidated Standards of Reporting Trials (CONSORT) statement aims to improve the transparent reporting of randomised clinical trials. It includes a participant flow diagram reporting the essential numbers for enrolment, allocation, and analyses. We aimed to quantify the use of participant flow diagrams in randomised clinical trials on postoperative pain management after total hip and knee arthroplasty. Methods We searched PubMed, Embase, and CENTRAL up to January 2020. The primary outcome was the proportion of trials with adequate reporting of participant flow diagrams, defined as reporting the numbers of participants screened for eligibility, randomised, and included in the primary analysis. Secondary outcomes were recruitment (randomised:screened) and retention (analysed:randomised) rates, reporting of a statistical strategy, reasons for exclusion from the primary analysis, and handling of missing outcome data. Trends over time were assessed with statistical process control. Results Of the 570 included trials, we found adequate reporting in 240 (42%). Reporting with a participant flow diagram increased significantly over time. Median recruitment was 73% (IQR 44–91%), and retention was 97% (IQR 93–100%). These rates did not change over time. Trials with adequate reporting of participant flow were more likely to report a statistical strategy (41% vs 8%), reasons for post-randomisation exclusions (100% vs 55%), and handling of missing outcome data (14% vs 6%). Conclusions Adherence to participant flow diagrams for RCTs has increased significantly over time. Still, there is room for improvement in the adequate reporting of flow diagrams to increase the transparency of trial details.
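
For readers unfamiliar with statistical process control in this context, the sketch below builds a simple p-chart for a yearly reporting proportion: the centre line is the pooled proportion, and the control limits sit three standard errors away. The yearly counts are invented for illustration and are not the paper's data.

```python
# Sketch of a p-chart (statistical process control) for the proportion of
# trials per year with adequate flow-diagram reporting. All counts below
# are hypothetical; the paper's actual yearly data are not reproduced here.
import numpy as np

years     = np.arange(2010, 2020)
adequate  = np.array([ 8, 10, 14, 15, 20, 24, 27, 30, 34, 38])  # hypothetical
published = np.array([40, 42, 45, 48, 52, 55, 58, 60, 62, 65])  # hypothetical

p = adequate / published
p_bar = adequate.sum() / published.sum()          # centre line (pooled rate)
sigma = np.sqrt(p_bar * (1 - p_bar) / published)  # per-year standard error
ucl = p_bar + 3 * sigma                           # upper control limit
lcl = np.clip(p_bar - 3 * sigma, 0, None)         # lower control limit

for yr, pi, lo, hi in zip(years, p, lcl, ucl):
    flag = "signal" if (pi > hi or pi < lo) else ""
    print(f"{yr}: p={pi:.2f}  limits=({lo:.2f}, {hi:.2f}) {flag}")
```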


Author(s): Elysia Saputra, Amanda Kowalczyk, Luisa Cusick, Nathan Clark, Maria Chikina

Abstract Many evolutionary comparative methods seek to identify associations between phenotypic traits or between traits and genotypes, often with the goal of inferring potential functional relationships between them. Comparative genomics methods aimed at this goal measure the association between evolutionary changes at the genetic level and traits evolving convergently across phylogenetic lineages. However, these methods have complex statistical behaviors that are influenced by nontrivial and oftentimes unknown confounding factors. Consequently, using standard statistical analyses in interpreting the outputs of these methods leads to potentially inaccurate conclusions. Here, we introduce phylogenetic permulations, a novel statistical strategy that combines phylogenetic simulations and permutations to calculate accurate, unbiased P values from phylogenetic methods. Permulations construct the null expectation for P values from a given phylogenetic method by empirically generating null phenotypes. Subsequently, empirical P values that capture the true statistical confidence given the correlation structure in the data are directly calculated based on the empirical null expectation. We examine the performance of permulation methods by analyzing both binary and continuous phenotypes, including marine, subterranean, and long-lived large-bodied mammal phenotypes. Our results reveal that permulations improve the statistical power of phylogenetic analyses and correctly calibrate statements of confidence in rejecting complex null distributions while maintaining or improving the enrichment of known functions related to the phenotype. We also find that permulations refine pathway enrichment analyses by correcting for nonindependence in gene ranks. Our results demonstrate that permulations are a powerful tool for improving statistical confidence in the conclusions of phylogenetic analysis when the parametric null is unknown.
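
The sketch below illustrates the permulation idea for a continuous trait under stated assumptions: null phenotypes are simulated by Brownian motion on the phylogeny (here a stand-in covariance matrix rather than a real tree), the observed trait values are rank-matched onto each simulation so their empirical distribution is preserved, and an empirical P value is computed from the resulting null statistics. It is a conceptual sketch, not the authors' implementation.

```python
# Minimal sketch of "permulations" for a continuous trait: simulate null
# phenotypes by Brownian motion on the phylogeny, rank-match the observed
# values onto them, recompute the association statistic, and take an
# empirical P value. The covariance matrix and the Pearson statistic are
# stand-ins for a real tree and a real phylogenetic method.
import numpy as np

rng = np.random.default_rng(0)
n_species = 20

# Tip covariance under Brownian motion reflects shared branch lengths;
# here a random positive-definite matrix stands in for a real tree.
A = rng.normal(size=(n_species, n_species))
C = A @ A.T / n_species + np.eye(n_species)

trait = rng.normal(size=n_species)               # observed phenotype
gene = 0.5 * trait + rng.normal(size=n_species)  # e.g., a gene's rate vector

def stat(x, y):
    """Association statistic; Pearson correlation as a placeholder."""
    return np.corrcoef(x, y)[0, 1]

obs = stat(trait, gene)

L = np.linalg.cholesky(C)
n_perm = 1000
null = np.empty(n_perm)
for i in range(n_perm):
    sim = L @ rng.normal(size=n_species)  # Brownian-motion null phenotype
    # Rank-match: reassign the *observed* values in the simulated order,
    # preserving the empirical trait distribution (the "permulation" step).
    null_trait = np.empty(n_species)
    null_trait[np.argsort(sim)] = np.sort(trait)
    null[i] = stat(null_trait, gene)

# Two-sided empirical P value against the permulated null.
p_emp = (1 + np.sum(np.abs(null) >= abs(obs))) / (n_perm + 1)
print(f"observed stat={obs:.3f}, empirical P={p_emp:.3f}")
```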


Author(s): Olatz Lopez-Fernandez, José Luis Losada-Lopez, Mª Luisa Honrubia-Serrano

This study uses an innovative statistical strategy to test the role of certain variables as predictors of problematic Internet and mobile phone usage among adolescents in Spain and the United Kingdom. A paper-and-pencil questionnaire was used, with socio-demographics and patterns of technology usage as variables, and two tests were administered: the Problematic Internet Entertainment Use Scale for Adolescents (PIEUSA) and the Mobile Phone Problem Use Scale for Adolescents (MPPUSA). The overall sample comprised 2228 high school students aged 11 to 18 from Barcelona and London. PIEUSA and MPPUSA scores were transformed into normed scores, and both were then dichotomized according to three statistical criteria as cut-off points (i.e., the median, the 80th percentile, and extreme scores below the 25th percentile and above the 75th percentile) in order to establish, using binary logistic regression, the relationship between the variables above and excessive use of the Internet or mobile phones. The results show that the best predictive model for both technologies includes socio-demographic variables as predictors of extreme scores for excessive Internet and mobile phone usage, with good sensitivity, specificity, and classification accuracy, as well as notable discrimination according to the receiver-operating characteristic curve. Implications of these findings are discussed.
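
A minimal sketch of the dichotomize-then-classify strategy follows, using synthetic data: normed scores are split at the 80th percentile (one of the three cut-off criteria above), socio-demographic predictors enter a binary logistic regression, and discrimination is summarized by the area under the ROC curve. Variable names and effect sizes are illustrative assumptions, not the study's data.

```python
# Sketch of the dichotomize-then-classify strategy: scale scores are split
# at a cut-off (here the 80th percentile), and socio-demographic predictors
# enter a binary logistic regression evaluated by ROC AUC.
# All data below are synthetic; variable names are illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 1000
age = rng.integers(11, 19, n)                  # years, 11-18
female = rng.integers(0, 2, n)                 # 0/1
daily_hours = rng.gamma(2.0, 1.5, n)           # self-reported daily use

# Synthetic scale score loosely driven by the predictors.
score = 0.3 * age + 0.5 * female + 0.8 * daily_hours + rng.normal(0, 2, n)

# Dichotomize at the 80th percentile (one of the three cut-off criteria).
y = (score >= np.percentile(score, 80)).astype(int)

X = np.column_stack([age, female, daily_hours])
model = LogisticRegression(max_iter=1000).fit(X, y)
auc = roc_auc_score(y, model.predict_proba(X)[:, 1])
print(f"in-sample AUC = {auc:.2f}")            # discrimination via ROC curve
```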


2021 ◽ Author(s): Silvio Davison, Francesco Barbariol, Alvise Benetazzo, Luigi Cavaleri, Paola Mercogliano

Over the past decade, model reanalysis data products have found widespread application in many areas of research and have often been used for the assessment of the past and present atmospheric climate. They provide reliable fields at high temporal resolution (1 hour), albeit generally at low-to-mid spatial resolution (0.25°-1.00°). On the other hand, climatological analyses, quite often down-scaled (to a few km) so as to also represent conditions in enclosed basins, lack the actual historical sequence of events and are often provided at poor temporal resolution (6 hours or daily).

In this context, we investigated the possibility of using climate model data to scale ERA5 reanalysis wind (25-km and 1-hour resolution data) to assess the Mediterranean Sea wind and wave climate. We propose a statistical strategy to fuse ERA5 wind speeds over the sea with the past and future wind speeds produced by the COSMO-CLM climatological model (8-km and daily-mean data). In this method, the probability density function of the ERA5 wind speed at each grid point is adjusted to match that of COSMO-CLM using a histogram equalization strategy. In this way, past ERA5 data are corrected to account for the COSMO-CLM wind distribution, while the scaled ERA5 wind sequence can also be projected into the future under COSMO-CLM scenarios. Comparison with past observations of wind and waves confirms the validity of the adopted method.

We have tested this strategy for the assessment of the changing wind climate and, after WAVEWATCH III model runs, also the wave climate in the northern Adriatic Sea, especially in front of Venice and the MOSE barriers. In general, this data fusion strategy may be applied to produce a scaled wind dataset in enclosed basins and to improve past and scenario wave modeling applications based on any reanalysis wind data.
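
As a rough sketch of the histogram-equalization step at a single grid point, the code below maps each ERA5-like wind speed through matched empirical quantiles onto a COSMO-CLM-like target distribution, preserving the hour-by-hour sequence while adopting the target's distribution. Both series are synthetic stand-ins, and the quantile-mapping details are assumptions rather than the authors' exact procedure.

```python
# Sketch of histogram equalization (quantile mapping) at one grid point:
# each ERA5 wind speed is mapped through its empirical CDF onto the target
# (COSMO-CLM-like) distribution, so the adjusted series keeps ERA5's
# temporal sequence but matches the target's distribution.
# Both series below are synthetic stand-ins.
import numpy as np

rng = np.random.default_rng(2)
era5  = rng.weibull(2.0, 20000) * 6.0   # hourly ERA5-like wind speeds (m/s)
cosmo = rng.weibull(2.2, 3000) * 7.0    # daily-mean COSMO-CLM-like speeds

def quantile_map(x, target, n_q=1001):
    """Map x onto target's distribution via matched empirical quantiles."""
    q = np.linspace(0, 1, n_q)
    src_q = np.quantile(x, q)
    tgt_q = np.quantile(target, q)
    return np.interp(x, src_q, tgt_q)   # monotone CDF-to-CDF transfer

era5_scaled = quantile_map(era5, cosmo)
print(f"means: era5={era5.mean():.2f}, target={cosmo.mean():.2f}, "
      f"scaled={era5_scaled.mean():.2f}")
```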


DYNA ◽ 2021 ◽ Vol 88 (216) ◽ pp. 152-159 ◽ Author(s): Paula Daniela Cuadrado Osorio, Carlos Rafael Castillo Saldarriaga, Jaime Andres Cubide Cardenas, Martha Isabel Gómez Alvarez, Eddy Johana Bautista Bautista

Resistance structures such as chlamydospores produced by the fungus Duddingtonia flagrans allow the reduction of infectious larvae from gastrointestinal nematodes. The objective of this research was to study the effect of carbon and nitrogen sources on the production of chlamydospores of the fungus in a solid state fermentation (SSF) system. Twelve substances were screened using a statistical strategy to evaluate their effect on chlamydospore production. Finally, using an optimization strategy, the modifications of the substances that favoured chlamydospore production were defined, and their effect on the predatory capacity of the fungus was evaluated. The optimal conditions were 0.25% w/w ammonium sulfate and 0.56% w/w sodium acetate in broken rice. The maximum concentration reached under these conditions was 2.27×10⁷ chlamydospores per gram of dry substrate, with a productivity of 1.62×10⁶ chlamydospores per gram of dry substrate per day.
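
The abstract does not name the screening design used, but a two-level factorial screen is a common statistical strategy for this kind of substrate study. The sketch below shows, at a reduced scale of three hypothetical substances, how main effects on chlamydospore yield would be estimated from a coded ±1 design with simulated responses; substance names, levels, and responses are all invented for illustration.

```python
# Hedged sketch of a two-level screening design for media components.
# Three hypothetical factors stand in for the twelve substances studied;
# the responses are simulated, not experimental data.
import itertools
import numpy as np

rng = np.random.default_rng(3)
factors = ["ammonium_sulfate", "sodium_acetate", "yeast_extract"]  # hypothetical

# Full 2^3 factorial in coded units (-1 = low level, +1 = high level).
design = np.array(list(itertools.product([-1, 1], repeat=len(factors))))

# Simulated chlamydospore yield (spores per g dry substrate, arbitrary):
# acetate helps, excess nitrogen hurts, plus experimental noise.
y = 1e7 * (1.5 + 0.4 * design[:, 1] - 0.3 * design[:, 0]
           + rng.normal(0, 0.05, len(design)))

# Main effect of each factor: mean response at +1 minus mean at -1.
for j, name in enumerate(factors):
    effect = y[design[:, j] == 1].mean() - y[design[:, j] == -1].mean()
    print(f"{name:16s} effect = {effect:+.2e}")
```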


2020 ◽ Author(s): Elysia Saputra, Amanda Kowalczyk, Luisa Cusick, Nathan Clark, Maria Chikina

Abstract The wealth of high-quality genomes for numerous species has motivated many investigations into the genetic underpinnings of phenotypes. Comparative genomics methods approach this task by identifying convergent shifts at the genetic level that are associated with traits evolving convergently across independent lineages. However, these methods have complex statistical behaviors that are influenced by non-trivial and oftentimes unknown confounding factors. Consequently, using standard statistical analyses in interpreting the outputs of these methods leads to potentially inaccurate conclusions. Here, we introduce phylogenetic permulations, a novel statistical strategy that combines phylogenetic simulations and permutations to calculate accurate, unbiased p-values from phylogenetic methods. Permulations construct the null expectation for p-values from a given phylogenetic method by empirically generating null phenotypes. Subsequently, empirical p-values that capture the true statistical confidence given the correlation structure in the data are directly calculated based on the empirical null expectation. We examine the performance of permulation methods by analyzing both binary and continuous phenotypes, including marine, subterranean, and long-lived large-bodied mammal phenotypes. Our results reveal that permulations improve the statistical power of phylogenetic analyses and correctly calibrate statements of confidence in rejecting complex null distributions while maintaining or improving the enrichment of known functions related to the phenotype. We also find that permulations refine pathway enrichment analyses by correcting for non-independence in gene ranks. Our results demonstrate that permulations are a powerful tool for improving statistical confidence in the conclusions of phylogenetic analysis when the parametric null is unknown.

