type 2 error
Recently Published Documents

TOTAL DOCUMENTS: 26 (FIVE YEARS: 14)
H-INDEX: 4 (FIVE YEARS: 1)

2022
pp. 174749302110624
Author(s): Coralie English, Maria Gabriella Ceravolo, Simone Dorsch, Avril Drummond, Dorcas BC Gandhi, ...

Aims: The aim of this rapid review and opinion paper is to present the state of the current evidence and to outline future directions for telehealth research and clinical service delivery for stroke rehabilitation. Methods: We conducted a rapid review of published trials in the field. We searched Medline using key terms related to stroke rehabilitation and telehealth or virtual care. We also searched clinical trial registers to identify key ongoing trials. Results: The evidence for telehealth to deliver stroke rehabilitation interventions is not strong and is predominantly based on small trials prone to Type 2 error. To move the field forward, we need to progress to trials of implementation that include measures of adoption and reach, as well as effectiveness. We also need to understand which outcome measures can be reliably measured remotely, and/or develop new ones. We present tools to assist with the deployment of telehealth for rehabilitation after stroke. Conclusion: The current, and likely long-term, pandemic means that we cannot wait for stronger evidence before implementing telehealth. As a research and clinical community, we owe it to people living with stroke internationally to investigate the best possible telehealth solutions for providing the highest quality rehabilitation.


2021
Vol 12
Author(s): Xintong Li, Lana YH Lai, Anna Ostropolets, Faaizah Arshad, Eng Hooi Tan, ...

Using real-world data and past vaccination data, we conducted a large-scale experiment to quantify the bias, precision and timeliness of different study designs for estimating historical background (expected) versus post-vaccination (observed) rates of safety events for several vaccines. We used negative (not causally related) and positive control outcomes; the latter were synthetically generated true safety signals with incidence rate ratios ranging from 1.5 to 4. Observed vs. expected analysis using within-database historical background rates is a sensitive but unspecific method for the identification of potential vaccine safety signals. Despite good discrimination, most analyses showed a tendency to overestimate risks, with 20%-100% type 1 error but low (0% to 20%) type 2 error in the large databases included in our study. Efforts to improve the comparability of background and post-vaccine rates, including age-sex adjustment and anchoring background rates around a healthcare visit, reduced type 1 error and improved precision, but residual systematic error persisted. Additionally, empirical calibration dramatically reduced type 1 error to nominal levels but came at the cost of increased type 2 error.
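To make the observed-vs-expected comparison concrete, here is a minimal Python sketch; the counts, person-time and background rate are hypothetical assumptions, not values from any of the databases or outcomes analyzed in the study.

# Hedged sketch: observed vs. expected analysis for one vaccine safety outcome.
# All numbers are hypothetical placeholders.
from scipy.stats import poisson

background_rate = 12.0 / 100_000      # historical (expected) events per person-year
post_vax_person_years = 250_000       # follow-up accrued after vaccination
observed_events = 41                  # events seen in the post-vaccination window

expected_events = background_rate * post_vax_person_years
irr = observed_events / expected_events

# One-sided exact Poisson test: probability of seeing >= observed events
# if the true rate equals the historical background rate.
p_value = poisson.sf(observed_events - 1, expected_events)

print(f"expected={expected_events:.1f}, observed={observed_events}, "
      f"IRR={irr:.2f}, one-sided p={p_value:.4f}")

Running this kind of comparison over many negative control outcomes is what yields the type 1 error estimates quoted above, while positive controls yield the type 2 error estimates.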


2021
Author(s): Xintong Li, Lana YH Lai, Anna Ostropolets, Faaizah Arshad, Eng Hooi Tan, ...

Using real-world data and past vaccination data, we conducted a large-scale experiment to quantify the bias, precision and timeliness of different study designs for estimating historical background (expected) versus post-vaccination (observed) rates of safety events for several vaccines. We used negative (not causally related) and positive control outcomes; the latter were synthetically generated true safety signals with incidence rate ratios ranging from 1.5 to 4. Observed vs. expected analysis using within-database historical background rates is a sensitive but unspecific method for the identification of potential vaccine safety signals. Despite good discrimination, most analyses showed a tendency to overestimate risks, with 20%-100% type 1 error but low (0% to 20%) type 2 error in the large databases included in our study. Efforts to improve the comparability of background and post-vaccine rates, including age-sex adjustment and anchoring background rates around a healthcare visit, reduced type 1 error and improved precision, but residual systematic error persisted. Additionally, empirical calibration dramatically reduced type 1 error to nominal levels but came at the cost of increased type 2 error. Our study found that within-database background rate comparison is a sensitive but unspecific method to identify vaccine safety signals. The method is positively biased, with low (≤20%) type 2 error, while 20% to 100% of negative control outcomes were incorrectly identified as safety signals due to type 1 error. Age-sex adjustment and anchoring background rate estimates around a healthcare visit are useful strategies to reduce false positives, with little impact on type 2 error. Sufficient sensitivity for the identification of safety signals was reached by month 1-2 for vaccines with quick uptake (e.g., seasonal influenza), but much later (up to month 9) for vaccines with slower uptake (e.g., varicella-zoster or papillomavirus). Finally, we report that empirical calibration using negative control outcomes reduces type 1 error to nominal levels at the cost of increased type 2 error.


2021
Author(s): Maximilian Maier, Daniel Lakens

The default use of an alpha level of 0.05 is suboptimal for two reasons. First, decisions based on data can be made more efficiently by choosing an alpha level that minimizes the combined Type 1 and Type 2 error rate. Second, in studies with very high statistical power, p-values lower than the alpha level can be more likely when the null hypothesis is true than when the alternative hypothesis is true (i.e., Lindley's paradox). This manuscript explains two approaches that can be used to justify a better choice of alpha level than relying on the default threshold of 0.05. The first approach is based on the idea of either minimizing or balancing Type 1 and Type 2 error rates. The second approach lowers the alpha level as a function of the sample size to prevent Lindley's paradox. An R package and Shiny app are provided to perform the required calculations. Both approaches have their limitations (e.g., the challenge of specifying relative costs and priors), but they can offer an improvement over current practices, especially when sample sizes are large. The use of alpha levels that have a better justification should improve statistical inferences and can increase the efficiency and informativeness of scientific research.
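As a rough illustration of the first approach (not the authors' R package), the Python sketch below searches for the alpha level that minimizes a weighted sum of Type 1 and Type 2 error rates for a two-sample t-test; the effect size, sample size, and 50/50 prior are assumptions chosen only for the example.

# Hedged sketch: pick the alpha level that minimizes a weighted sum of
# Type 1 and Type 2 error rates for a two-sample t-test.
# Effect size, sample size, and the prior on H1 are illustrative assumptions.
import numpy as np
from statsmodels.stats.power import TTestIndPower

effect_size = 0.4     # assumed standardized difference (Cohen's d)
n_per_group = 200
prior_h1 = 0.5        # assumed prior probability that the alternative is true

analysis = TTestIndPower()
alphas = np.linspace(0.001, 0.20, 400)
power = np.array([
    analysis.power(effect_size=effect_size, nobs1=n_per_group,
                   alpha=a, ratio=1.0, alternative='two-sided')
    for a in alphas
])
combined_error = (1 - prior_h1) * alphas + prior_h1 * (1 - power)

best = alphas[np.argmin(combined_error)]
print(f"alpha minimizing the weighted Type 1 + Type 2 error rate: {best:.4f}")

With larger samples (and hence higher power at any given alpha), the minimizing alpha shifts downward, which is the intuition behind lowering alpha as a function of sample size.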


2021
Vol 39 (15_suppl)
pp. 6037-6037
Author(s): Haitham Mirghani, Caroline Even, Alicia Larive, Jerome Fayette, Karen Benezery, ...

6037 Background: Among HPV-positive oropharyngeal cancer (OPC) patients (pts), some have a less favorable prognosis (T4, N2/N3, smokers >10 pack-years [p/y]). We hypothesized that neoadjuvant immunotherapy might improve their oncological outcomes, so we tested nivolumab (N) prior to chemoradiation (CRT). Methods: The study population is restricted to HPV-positive OPC pts (both p16+ and HPV-DNA+) with advanced disease (T4, N2/N3) or a smoking history >10 p/y. Pts were randomly allocated 1:2 to receive either cisplatin-based CRT (n=20) or 2 cycles of N 240 mg followed by CRT (n=41). The primary endpoint (PE) is the rate of pts who can receive Full Treatment in Due Time (FTDT), according to these criteria: a) 2 N infusions, on day 1 and on day 14-16; b) CRT started between days 28-37 after the 1st N infusion; c) no RT break ≥1 week; d) RT dose received >95% of the theoretical dose; and e) cisplatin dose received ≥200 mg/m². To achieve FTDT, all criteria are required in the experimental arm (EA), while only criteria c), d), and e) are required in the control arm (CA). In the EA, the trial was designed in 2 steps, with an FTDT rate of 88% considered unacceptable versus an alternative of 98%, a type 1 error of 0.10, and a type 2 error of 0.08. As per protocol, patient accrual was temporarily suspended after inclusion of 19 pts in the EA (1st step) and results were reviewed by an Independent Data Monitoring Committee (IDMC). To resume pts’ inclusion, FTDT had to be achieved in 18 pts in the EA. Results: From 07/2019 to 09/2020, 30 pts were enrolled, including 11 in the CA (demographics are summarized in the table). Two pts in the EA did not reach the PE. For the 1st patient, the cisplatin dose was <200 mg/m² due to grade 1 hearing loss and grade 2 tinnitus (1st cycle: 100 mg/m², 2nd cycle: 80 mg/m², no 3rd cycle). For the 2nd patient, CRT began at day 38 due to logistical issues (maintenance of RT devices). As this delay was unrelated to N or to the patient's condition, the IDMC considered that inclusions could resume for the 2nd step. Seven N-related adverse events (AEs) were reported in 4 pts, including 3 serious AEs (ankylosing spondylitis flare-up, colitis, diabetic ketoacidosis). Conclusions: Neoadjuvant N before CRT seems feasible for the treatment of OPC pts. The trial has reopened to inclusion as recommended by the IDMC. Clinical trial information: NCT03838263. [Table: see text]
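To show how operating characteristics like these translate into a design, the sketch below searches for a single-stage exact binomial design using the abstract's parameters (p0 = 0.88 unacceptable, p1 = 0.98 alternative, type 1 error 0.10, type 2 error 0.08); it is a simplified illustration and does not reproduce the trial's actual two-stage design.

# Hedged sketch: single-stage exact binomial design search (A'Hern/Fleming-style),
# using the abstract's operating characteristics. Not the trial's two-stage design.
from scipy.stats import binom

p0, p1, alpha, beta = 0.88, 0.98, 0.10, 0.08

for n in range(10, 200):
    for r in range(n + 1):
        type1 = binom.sf(r - 1, n, p0)   # P(X >= r | p0): falsely declare success
        type2 = binom.cdf(r - 1, n, p1)  # P(X <  r | p1): miss a truly good regimen
        if type1 <= alpha and type2 <= beta:
            print(f"n={n}, require >= {r} pts achieving FTDT "
                  f"(type 1 error {type1:.3f}, type 2 error {type2:.3f})")
            break
    else:
        continue
    break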


2021
pp. 46-60
Author(s): Inayatul Lutfiyyah, Loggar Bhilawa

This study aims to identify an accurate financial distress prediction model for English Premier League football clubs and to compare it with previous research, so as to obtain a prediction model that can be used for all football clubs. The sample was selected using a purposive sampling technique from a population of 49 English Premier League clubs covering 1992-2018, yielding 37 samples, which were then grouped into financial distress and non-financial distress categories. The data were analyzed with an accuracy test, comparing each model's predictions against the financial distress and non-financial distress sample categories and considering each model's type 1 and type 2 error rates. A type 1 error is counted when a club that is actually in financial distress is predicted as non-distressed, and a type 2 error when the reverse occurs. The results show that the model with the highest accuracy for predicting financial distress in English Premier League football clubs is the Zmijewski model, with an accuracy rate of 72%. Keywords: Financial Distress, Football Club, Accuracy Test, Error Rate
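A minimal sketch of the accuracy test described above, using made-up labels rather than the study's 37-club sample: model predictions are compared with the actual distress categories, and type 1 and type 2 error counts are tallied alongside overall accuracy.

# Hedged sketch of an accuracy test for a distress prediction model.
# The labels below are synthetic placeholders, not the study data.
actual    = ["distress", "distress", "non", "non", "distress", "non", "non", "distress"]
predicted = ["non",      "distress", "non", "distress", "distress", "non", "non", "distress"]

type1 = sum(a == "distress" and p == "non" for a, p in zip(actual, predicted))
type2 = sum(a == "non" and p == "distress" for a, p in zip(actual, predicted))
correct = sum(a == p for a, p in zip(actual, predicted))

n = len(actual)
print(f"accuracy: {correct / n:.0%}")
print(f"type 1 errors (distressed predicted as healthy): {type1} ({type1 / n:.0%})")
print(f"type 2 errors (healthy predicted as distressed): {type2} ({type2 / n:.0%})")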


Stroke
2021
Vol 52 (Suppl_1)
Author(s): Mohammad Hajighasemi, Jeffrey Quinn, Laurie B Paletz, Shlee S Song, Konrad Schlick, ...

Introduction: Exposure to telestroke in neurology residency is sparse despite its implementation in daily practice. Due to COVID-19, we increased our utilization of telestroke. Given this change in workflow, we investigated how residents utilize telestroke for “Code Brain” (CB) acute stroke evaluations and whether expanding the use of telestroke in neurology residency programs is justified. Methods: We retrospectively compared the number of CB evaluations, door-to-needle (DTN) times, door-to-decision (DTD) times, and NIHSS from March - June 2019 (in-person evaluations) against those from March - June 2020 (telestroke evaluations). We limited our study to resident-involved cases. Nighttime and weekend CBs are run by a neurology resident and are remotely supervised; daytime CBs can be run by a resident, fellow, or attending. We therefore treated nighttime CBs as resident-run and daytime evaluations as the stroke team’s evaluations. Statistical analysis was performed using R. Results: There were a total of 217 CBs in March - June 2019 and 115 in March - June 2020. In 2019, there were 120 daytime and 97 nighttime CBs; in 2020 there were 62 and 53, respectively. The mean daytime DTN for 2019 and 2020 was 45.9 ± 23.2 and 48.4 ± 20.2 (P=0.08). The mean nighttime DTN for 2019 and 2020 was 67.3 ± 53.7 and 53.8 ± 25.8 (P=0.64). The mean daytime DTD for 2019 and 2020 was 48.1 ± 41.0 and 44.9 ± 37.6 (P=0.25). The mean nighttime DTD for 2019 and 2020 was 60.9 ± 51.1 and 65.8 ± 94.6 (P=0.68). Using a generalized linear model approach to adjust for shift (day/night), age, and NIHSS, we found no significant differences in DTN or DTD before and after implementing resident-run telestroke. Using Cohen's (1988) approach, we estimated the Type 2 error probability at 7%. Discussion: Our findings show that resident-run telestroke CBs are comparable to CBs run by fellows and attendings, as DTN and DTD were not significantly different between CBs run by residents in person or via telestroke. These metrics are also comparable for CBs in which a fellow or attending supervises or runs the CB. Implementing resident-run telestroke consultation showed no effect on DTN or DTD after accounting for patient age, NIHSS, or the time of day. These data should reassure programs seeking to grant telestroke privileges to trainees.
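The kind of calculation behind the reported Type 2 error estimate can be sketched as follows; the standardized effect size and the use of the nighttime group sizes are assumptions for illustration only and do not reproduce the authors' exact 7% figure.

# Hedged sketch: post hoc power / Type 2 error for comparing two mean times
# (e.g., nighttime DTN in 2019 vs 2020) using Cohen's d conventions.
# The effect size is an assumed "medium" value; group sizes come from the abstract.
from statsmodels.stats.power import TTestIndPower

effect_size = 0.5          # assumed standardized difference (Cohen, 1988)
n_2019, n_2020 = 97, 53    # nighttime CB counts reported in the abstract

power = TTestIndPower().power(effect_size=effect_size, nobs1=n_2019,
                              alpha=0.05, ratio=n_2020 / n_2019,
                              alternative='two-sided')
print(f"power = {power:.2f}, Type 2 error = {1 - power:.2f}")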


2021
Author(s): James A Watson, Stephen Kissler, Nicholas PJ Day, Yonatan H. Grad, Nicholas J White

There is no agreed methodology for the pharmacometric assessment of candidate antiviral drugs in COVID-19. The most widely used measure of virological response in clinical trials so far is the time to viral clearance, assessed by qPCR of viral nucleic acid in eluates from serial nasopharyngeal swabs. We posited that the rate of viral clearance would have better discriminatory value. Using a pharmacodynamic model fit to individual SARS-CoV-2 viral clearance data from 46 uncomplicated COVID-19 infections in a cohort of prospectively followed adults, we simulated qPCR viral load data to compare type 2 errors when using time to clearance versus rate of clearance under varying antiviral effects, sample sizes, sampling frequencies and durations of follow-up. The rate of viral clearance is a uniformly superior endpoint compared with time to clearance with respect to type 2 error, and it is not dependent on initial viral load or assay sensitivity. For greatest efficiency, pharmacometric assessments should be conducted in early illness, and daily qPCR samples should be taken over 7 to 10 days in each patient studied. Adaptive randomisation and early stopping for success permit more rapid identification of active interventions.
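A much-simplified simulation in the spirit of this comparison is sketched below; the linear decay model, noise level, 30% treatment effect, limit of detection, and sample size are all illustrative assumptions rather than the fitted pharmacodynamic model from the paper.

# Hedged simulation sketch: type 2 error of rate-of-clearance (per-patient
# regression slope of log10 viral load) vs time-to-clearance (first day below
# the assay limit of detection) as trial endpoints. All parameters are assumed.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
N_SIM, N_PER_ARM = 500, 25
DAYS = np.arange(0, 10)
LOD = 1.0                      # log10 copies/mL limit of detection (assumed)

def simulate_arm(n, clearance_boost):
    """Daily log10 viral loads for n patients with a given boost to clearance rate."""
    baseline = rng.normal(7.0, 1.0, size=n)                  # log10 load at day 0
    slope = -rng.normal(0.6, 0.15, size=n) * (1 + clearance_boost)
    loads = baseline[:, None] + slope[:, None] * DAYS
    loads += rng.normal(0.0, 0.5, size=loads.shape)          # qPCR measurement noise
    return loads

def endpoints(loads):
    """Per-patient clearance rate (slope) and time to clearance (first day < LOD)."""
    rates, times = [], []
    for row in loads:
        rates.append(np.polyfit(DAYS, np.maximum(row, LOD), 1)[0])
        below = np.nonzero(row < LOD)[0]
        times.append(DAYS[below[0]] if below.size else DAYS[-1] + 1)  # right-censored
    return np.array(rates), np.array(times)

hits_rate = hits_time = 0
for _ in range(N_SIM):
    ctrl = endpoints(simulate_arm(N_PER_ARM, 0.0))
    trt = endpoints(simulate_arm(N_PER_ARM, 0.3))            # 30% faster clearance
    hits_rate += stats.ttest_ind(trt[0], ctrl[0]).pvalue < 0.05
    hits_time += stats.mannwhitneyu(trt[1], ctrl[1]).pvalue < 0.05

print(f"rate-of-clearance endpoint: power={hits_rate / N_SIM:.2f}, "
      f"type 2 error={1 - hits_rate / N_SIM:.2f}")
print(f"time-to-clearance endpoint: power={hits_time / N_SIM:.2f}, "
      f"type 2 error={1 - hits_time / N_SIM:.2f}")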


2021
Author(s): Stanton Hudja, Jason Ralston, Siyu Wang, Jason Anthony Aimone, Lucas Rentschler, ...

2020
Vol 41 (S1)
pp. s15-s16
Author(s): Kyle Gontjes, Kristen Gibson, Bonnie Lansing, Marco Cassone, Lona Mody

Background: Although active surveillance for multidrug-resistant organism (MDRO) colonization permits timely intervention, obtaining cultures can be time-consuming, costly, and uncomfortable for patients. We evaluated clinical differences between patients with and without attainable perianal cultures, and we sought to determine whether environmental surveillance could replace perianal screening. Methods: We collected active surveillance cultures from patient hands, nares, groin, and perianal area upon enrollment, at day 14, and monthly thereafter in 6 Michigan nursing homes. Methicillin-resistant Staphylococcus aureus (MRSA), vancomycin-resistant enterococci (VRE), and resistant gram-negative bacilli (RGNB) were identified using standard methods. Patient characteristics were collected by trained research professionals. This substudy focused on visits during which all body sites were sampled. To determine the contribution of perianal screening to MDRO detection, site of colonization was categorized into 2 groups: perianal and non-perianal. We evaluated the utility of multisite surveillance (e.g., type 1 and type 2 error) using non-perianal sites and environmental surveillance. To evaluate characteristics associated with the acquisition of perianal cultures (e.g., selection bias), we compared the clinical characteristics, overall patient colonization, and room environment contamination of patients in whom all body sites were sampled during a study visit (533 patients; 1,026 visits) with those of patients in whom all body sites except the perianal site were sampled during a study visit (108 patients; 168 visits). Results: Of 651 patients, 533 met the inclusion criteria; the average age was 74.5 years, 42.6% were male, and 60.8% were white. Of 1,026 eligible visits, 620 detected MDRO-colonized patients: 155 MRSA, 363 VRE, and 386 RGNB (Table 1). Had perianal cultures not been collected, non-perianal surveillance would have missed 7.7%, 41.3%, and 45.1% of MRSA-, VRE-, and RGNB-colonized visits, respectively. The addition of environmental surveillance to non-perianal screening detected 95.5%, 82.9%, and 67.9% of MRSA-, VRE-, and RGNB-colonized visits, respectively. The specificity of environmental screening was 85.3%, 72.7%, and 73.4% for MRSA, VRE, and RGNB, respectively. Patients without attainable perianal cultures had significantly more comorbidities, worse functional status, shorter length of stay, and higher baseline presence of wounds than patients with attainable perianal cultures, introducing potential selection bias to surveillance efforts (Table 2). No significant differences in overall patient colonization or room contamination were noted between patients with and without attainable perianal cultures. Conclusion: Perianal screening is important for the detection of VRE and RGNB colonization. Infection prevention programs must be cognizant of the trade-off between reducing type 2 error and the selection bias that occurs when attainment of perianal cultures is required. In the absence of perianal cultures, environmental surveillance improves MDRO detection while introducing type 1 error. Funding: None. Disclosures: None.
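As a schematic of the trade-off described in the conclusion, the sketch below treats full multi-site culturing as the reference standard and computes the sensitivity (1 minus type 2 error) and specificity (1 minus type 1 error) of a reduced screening strategy; the per-visit flags are synthetic placeholders, not the study data.

# Hedged sketch: sensitivity/specificity of a reduced surveillance strategy
# (non-perianal body sites plus environment) against full multi-site screening.
visits = [
    # (colonized_by_full_screening, detected_by_reduced_strategy)
    (True, True), (True, False), (True, True), (False, False),
    (False, True), (True, True), (False, False), (True, True),
]

colonized = [detected for truly, detected in visits if truly]
clean = [detected for truly, detected in visits if not truly]

sensitivity = sum(colonized) / len(colonized)
specificity = sum(not d for d in clean) / len(clean)

print(f"sensitivity: {sensitivity:.0%}  (type 2 error: {1 - sensitivity:.0%})")
print(f"specificity: {specificity:.0%}  (type 1 error: {1 - specificity:.0%})")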

