Sample size and classification error for Bayesian change-point models with unlabelled sub-groups and incomplete follow-up

2016 ◽  
Vol 27 (5) ◽  
pp. 1476-1497 ◽  
Author(s):  
Simon R White ◽  
Graciela Muniz-Terrera ◽  
Fiona E Matthews

Many medical (and ecological) processes involve a change of shape, whereby one trajectory changes into another at a specific time point. There has been little investigation into the study design needed to investigate these models. We consider the class of fixed-effect change-point models with an underlying shape comprising two joined linear segments, also known as broken-stick models. We extend this model to include two sub-groups with different trajectories at the change-point, a change class and a no-change class, and also include a missingness model to account for individuals with incomplete follow-up. Through a simulation study, we consider the relationship of sample size to the estimates of the underlying shape, the existence of a change-point, and the classification error of sub-group labels. We use a Bayesian framework to account for the missing labels, and the analysis of each simulation is performed using standard Markov chain Monte Carlo techniques. Our simulation study is inspired by cognitive decline as measured by the Mini-Mental State Examination, where our extended model is appropriate because studies commonly observe a mixture of individuals who do or do not exhibit accelerated decline. We find that even for studies of modest size (n = 500, with 50 individuals observed past the change-point) in the fixed-effect setting, a change-point can be detected and reliably estimated across a range of observation errors.
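As an illustration, the broken-stick mean shape described above can be sketched in a few lines of Python. The intercept, slopes, and change-point below are hypothetical values chosen to mimic MMSE-style decline, not the authors' simulation settings:

```python
import random

def broken_stick(t, intercept, slope1, slope2, tau):
    """Two joined linear segments: the slope changes from slope1 to slope2 at the change-point tau."""
    if t <= tau:
        return intercept + slope1 * t
    return intercept + slope1 * tau + slope2 * (t - tau)

def simulate_individual(times, tau, declines, sigma, rng):
    """One noisy trajectory; 'declines' flags the (unlabelled) sub-group:
    no-change individuals keep the pre-change slope throughout."""
    slope2 = -1.5 if declines else -0.1   # hypothetical accelerated vs stable decline
    return [broken_stick(t, 28.0, -0.1, slope2, tau) + rng.gauss(0, sigma)
            for t in times]

rng = random.Random(1)
times = list(range(10))
decliner = simulate_individual(times, tau=5, declines=True, sigma=0.5, rng=rng)
stable = simulate_individual(times, tau=5, declines=False, sigma=0.5, rng=rng)
```

The two segments are constrained to join at the change-point, which is what makes the model a broken stick rather than two unrelated lines.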

2019 ◽  
Author(s):  
Ashley Edwards ◽  
Keanan Joyner ◽  
Chris Schatschneider

The accuracy of certain internal consistency estimators has been questioned in recent years. The present study tests the accuracy of six reliability estimators (Cronbach's alpha, omega, omega hierarchical, Revelle's omega, and greatest lower bound) in 140 simulated conditions of unidimensional continuous data with uncorrelated errors, varying sample size, number of items, population reliability, and factor loadings. Under these conditions, alpha and omega yielded the most accurate estimates of the simulated population reliability. Alpha consistently underestimated population reliability, supporting its interpretation as a lower bound. Underestimation by alpha was greater when tau equivalence was not met; however, the underestimation was small, and alpha still provided more accurate estimates than all of the estimators except omega. Estimates of reliability were affected by sample size, degree of violation of tau equivalence, population reliability, and number of items in a scale. Under the conditions simulated here, alpha and omega yielded the most accurate reflection of population reliability values. A follow-up regression comparing alpha and omega revealed alpha to be more sensitive to the degree of violation of tau equivalence, whereas omega was more affected by sample size and number of items, especially when population reliability was low.
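For readers unfamiliar with the estimators compared, Cronbach's alpha has a simple closed form, alpha = (k/(k-1))(1 - sum of item variances / variance of total scores). The short Python sketch below implements that standard formula and is illustrative only, not the simulation code used in the study:

```python
def variance(xs):
    """Unbiased sample variance."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(items):
    """items: k lists of item scores, each of length n (one score per respondent)."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # total score per respondent
    return k / (k - 1) * (1 - sum(variance(col) for col in items) / variance(totals))
```

For example, two perfectly correlated items with equal variances (the tau-equivalent case) give alpha = 1.0, while violations of tau equivalence push alpha below the true reliability, as the abstract describes.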


2020 ◽  
Vol 27 (12) ◽  
pp. 1844-1849
Author(s):  
Ian Barnett ◽  
John Torous ◽  
Harrison T Reeder ◽  
Justin Baker ◽  
Jukka-Pekka Onnela

Objective
Studies that use patient smartphones to collect ecological momentary assessment and sensor data, an approach frequently referred to as digital phenotyping, have grown in popularity in recent years. There are no formal guidelines for designing new digital phenotyping studies so that they are powered to detect both population-level longitudinal associations and individual-level change points in multivariate time series. In particular, determining the appropriate balance between sample size and the targeted duration of follow-up is a challenge.
Materials and Methods
We used data from 2 prior smartphone-based digital phenotyping studies to provide reasonable ranges of effect sizes and parameters. We considered likelihood ratio tests for generalized linear mixed models as well as for change point detection in individual-level multivariate time series.
Results
We propose a joint procedure that sequentially calculates first an appropriate length of follow-up and then the minimum sample size required to provide adequate power. In addition, we developed an accompanying, accessible sample size and power calculator.
Discussion
The 2-parameter problem of identifying both an appropriate sample size and duration of follow-up for a longitudinal study requires the simultaneous consideration of 2 analysis methods during study design.
Conclusion
The temporally dense longitudinal data collected by digital phenotyping studies may warrant a variety of applicable analysis choices. Our use of generalized linear mixed models as well as change point detection to guide sample size and study duration calculations provides a tool to effectively power new digital phenotyping studies.
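A minimal sketch of the kind of sample-size/follow-up trade-off the abstract describes, using Monte Carlo simulation with per-subject OLS slopes and a normal approximation in place of the authors' likelihood ratio tests for generalized linear mixed models; all parameter values here are hypothetical:

```python
import math
import random

def estimate_power(n_subjects, n_days, effect, sd_slope, sd_noise, n_sims=200, seed=0):
    """Monte Carlo power: for each simulated study, fit a per-subject OLS slope
    over n_days of observations, then test the mean slope against zero
    (normal approximation to the test)."""
    rng = random.Random(seed)
    days = list(range(n_days))
    day_mean = sum(days) / n_days
    sxx = sum((d - day_mean) ** 2 for d in days)
    hits = 0
    for _ in range(n_sims):
        slopes = []
        for _ in range(n_subjects):
            b = effect + rng.gauss(0, sd_slope)               # subject-level true slope
            y = [b * d + rng.gauss(0, sd_noise) for d in days]
            y_mean = sum(y) / n_days
            slopes.append(sum((d - day_mean) * (v - y_mean)
                              for d, v in zip(days, y)) / sxx)
        m = sum(slopes) / n_subjects
        s = math.sqrt(sum((x - m) ** 2 for x in slopes) / (n_subjects - 1))
        if abs(m) / (s / math.sqrt(n_subjects)) > 1.96:       # two-sided 5% level
            hits += 1
    return hits / n_sims
```

Longer follow-up shrinks the noise in each subject's slope estimate, so power can be raised either by adding subjects or by extending the study duration, which is the 2-parameter trade-off the authors formalize.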


BMJ Open ◽  
2020 ◽  
Vol 10 (1) ◽  
pp. e033510 ◽  
Author(s):  
Ayako Okuyama ◽  
Matthew Barclay ◽  
Cong Chen ◽  
Takahiro Higashi

Objectives
The accuracy of vital-status ascertainment affects the validity of cancer survival estimates. This study assesses the potential impact of loss to follow-up on survival estimates in Japan, both nationally and in the samples seen at individual hospitals.
Design
Simulation study.
Setting and participants
Data on patients diagnosed in 2007, provided by the hospital-based cancer registries of 177 hospitals throughout Japan.
Primary and secondary outcome measures
We performed simulations for each cancer site, for sample sizes of 100, 1000 and 8000 patients, and for loss to follow-up ranging from 1% to 5%. We estimated the average bias and the variation in bias in survival due to loss to follow-up.
Results
The expected bias was not associated with the sample size (about 2.1% for the all-cancers cohort with 5% loss to follow-up), but smaller sample sizes led to more variable bias. Samples of around 100 patients, as may be seen at individual hospitals, showed highly variable bias: with 5% loss to follow-up for all cancers, 25% of samples had a bias below 1.02% and 25% of samples had a bias above 3.06%.
Conclusion
Survival estimates should be interpreted with caution when loss to follow-up is a concern, especially for poor-prognosis cancers and for small-area estimates.
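The direction and behaviour of this bias can be reproduced with a toy model: if lost-to-follow-up patients who in truth died are counted as alive, the observed survival proportion is inflated. The sketch below is a simplified illustration with made-up parameters, not the registry-based simulation used in the study:

```python
import math
import random

def bias_distribution(n, true_survival, loss_rate, n_sims=400, seed=0):
    """Each lost patient who in truth died is misclassified as alive,
    inflating the observed survival proportion. Returns the mean and
    standard deviation of the bias across simulated samples."""
    rng = random.Random(seed)
    biases = []
    for _ in range(n_sims):
        died = sum(rng.random() > true_survival for _ in range(n))
        misclassified = sum(rng.random() < loss_rate for _ in range(died))
        observed = (n - died + misclassified) / n
        biases.append(observed - true_survival)
    mean = sum(biases) / n_sims
    sd = math.sqrt(sum((b - mean) ** 2 for b in biases) / (n_sims - 1))
    return mean, sd

small_mean, small_sd = bias_distribution(100, 0.6, 0.05)    # hospital-sized sample
large_mean, large_sd = bias_distribution(8000, 0.6, 0.05)   # national-scale sample
# Expected bias is roughly loss_rate * (1 - true_survival) = 2% at either size,
# but its spread across samples is far larger when n is small.
```

This matches the abstract's finding: the expected bias does not depend on sample size, while small samples show much more variable bias.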


2021 ◽  
pp. 001316442199418
Author(s):  
Ashley A. Edwards ◽  
Keanan J. Joyner ◽  
Christopher Schatschneider

The accuracy of certain internal consistency estimators has been questioned in recent years. The present study tests the accuracy of six reliability estimators (Cronbach's alpha, omega, omega hierarchical, Revelle's omega, and greatest lower bound) in 140 simulated conditions of unidimensional continuous data with uncorrelated errors, varying sample size, number of items, population reliability, and factor loadings. Estimators that have been proposed to replace alpha were compared with alpha as well as with each other. Estimates of reliability were affected by sample size, degree of violation of tau equivalence, population reliability, and number of items in a scale. Under the conditions simulated here, alpha and omega yielded the most accurate reflection of population reliability values. A follow-up regression comparing alpha and omega revealed alpha to be more sensitive to the degree of violation of tau equivalence, whereas omega was more affected by sample size and number of items, especially when population reliability was low.


Author(s):  
Oumaima Bounou ◽  
Abdellah El Barkany ◽  
Ahmed El Biyaali

Maintenance management is an orderly procedure for planning, organizing, monitoring and evaluating maintenance activities and their associated costs. It provides an efficient tool for managing both preventive and corrective activity, optimizing the production equipment, and tracking costs and performance. A good maintenance management system can help prevent problems and damage to the operating and storage environment, extend the life of assets, and reduce operating costs. In this paper, we first present our model for the joint management of spare parts and maintenance. We then report a simulation study of this model, presented in the first section of the paper; the results are presented in the second section, showing the influence of certain model parameters on the operation of the system under consideration. The study was carried out using Matlab's graphical interface, one of the available performance-evaluation tools, which allows visualization of the variations and anomalies that can arise in the system, such as when unforeseen breakdowns outpace the repair of the machines.
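A minimal Python analogue of such a joint spare-parts/repair simulation (the study itself used Matlab); the failure probability, stock levels, and lead time below are hypothetical:

```python
import random

def simulate_maintenance(days, failure_prob, initial_stock, reorder_qty, lead_time, seed=0):
    """Minimal joint spare-parts/repair simulation: each day a machine may fail;
    a repair consumes one spare, and an order placed when stock runs out
    arrives after a fixed lead time. Returns total days of downtime."""
    rng = random.Random(seed)
    stock, downtime = initial_stock, 0
    pending = []  # (arrival_day, quantity) for outstanding orders
    for day in range(days):
        stock += sum(qty for arrive, qty in pending if arrive == day)
        pending = [(a, q) for a, q in pending if a > day]
        if rng.random() < failure_prob:
            if stock > 0:
                stock -= 1        # spare available: repair immediately
            else:
                downtime += 1     # no spare: machine waits, downtime accrues
        if stock == 0 and not pending:
            pending.append((day + lead_time, reorder_qty))
    return downtime
```

Varying the stock level and lead time in such a sketch shows the qualitative effect the paper studies: larger stocks or shorter lead times reduce downtime at the cost of holding more inventory.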


2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Moses M. Ngari ◽  
Susanne Schmitz ◽  
Christopher Maronga ◽  
Lazarus K. Mramba ◽  
Michel Vaillant

Background
Survival analysis methods (SAMs) are central to analysing time-to-event outcomes. Appropriate application and reporting of such methods are important to ensure correct interpretation of the data. In this study, we systematically review the application and reporting of SAMs in studies of tuberculosis (TB) patients in Africa. It is the first review to assess the application and reporting of SAMs in this context.
Methods
Systematic review of studies involving TB patients from Africa published in English between January 2010 and April 2020. Studies were eligible if they reported use of SAMs. Application and reporting of SAMs were evaluated against seven author-defined criteria.
Results
Seventy-six studies were included, with patient numbers ranging from 56 to 182,890. Forty-three (57%) studies involved a statistician/epidemiologist. The number of published papers per year applying SAMs increased from two in 2010 to 18 in 2019 (P = 0.004). Sample size estimation was not reported by 67 (88%) studies, and 22 (29%) studies did not report summary follow-up time. The survival function was most commonly presented using Kaplan-Meier survival curves (n = 51; 67%), and group comparisons were performed using log-rank tests (n = 44; 58%). Sixty-seven (91%), 3 (4.1%) and 4 (5.4%) studies reported Cox proportional hazards, competing risk and parametric survival regression models, respectively. A total of 37 (49%) studies had hierarchical clustering, of which 28 (76%) did not adjust for the clustering in the analysis. Reporting was adequate in 4.0%, 1.3% and 6.6% of studies for sample size estimation, plotting of survival curves and testing of the assumptions underlying the survival regression, respectively. Forty-five (59%), 52 (68%) and 73 (96%) studies adequately reported comparison of survival curves, follow-up time and measures of effect, respectively.
Conclusion
The quality of reporting of survival analyses remains inadequate despite their increasing application. Because similar reporting deficiencies may be common for other diseases in low- and middle-income countries, reporting guidelines, additional training and more capacity building are needed, along with greater vigilance by reviewers and journal editors.
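The Kaplan-Meier product-limit estimator that most of the reviewed studies used to present the survival function can be computed directly; a minimal, self-contained Python sketch:

```python
from itertools import groupby

def kaplan_meier(times, events):
    """Kaplan-Meier product-limit estimate.
    times: follow-up times; events: 1 = death observed, 0 = censored.
    Returns [(t, S(t))] at each distinct event time."""
    data = sorted(zip(times, events))
    at_risk, surv, curve = len(data), 1.0, []
    for t, group in groupby(data, key=lambda x: x[0]):
        group = list(group)
        deaths = sum(e for _, e in group)
        if deaths:
            surv *= 1 - deaths / at_risk   # survival drops only at event times
            curve.append((t, surv))
        at_risk -= len(group)              # deaths and censorings both leave the risk set
    return curve

curve = kaplan_meier([1, 2, 3, 4, 5], [1, 1, 0, 1, 0])
# e.g. S(2) = (4/5) * (3/4) = 0.6 after the second death
```

Censored observations reduce the risk set without dropping the curve, which is why inadequate reporting of follow-up time, one of the review's findings, makes such curves hard to interpret.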


Trials ◽  
2013 ◽  
Vol 14 (S1) ◽  
Author(s):  
Erinn Hade ◽  
Gregory Young ◽  
David Jarjoura ◽  
Richard Love
