Bias, Precision and Timeliness of Historical (Background) Rate Comparison Methods for Vaccine Safety Monitoring: An Empirical Multi-Database Analysis

2021
Vol 12
Author(s):
Xintong Li
Lana YH Lai
Anna Ostropolets
Faaizah Arshad
Eng Hooi Tan
...  

Using real-world data and past vaccination data, we conducted a large-scale experiment to quantify the bias, precision and timeliness of different study designs for estimating historical background (expected) compared to post-vaccination (observed) rates of safety events for several vaccines. We used negative (not causally related) and positive control outcomes. The latter were synthetically generated true safety signals with incidence rate ratios ranging from 1.5 to 4. Observed vs. expected analysis using within-database historical background rates is a sensitive but unspecific method for the identification of potential vaccine safety signals. Despite good discrimination, most analyses showed a tendency to overestimate risks, with 20%-100% type 1 error but low (0% to 20%) type 2 error in the large databases included in our study. Efforts to improve the comparability of background and post-vaccine rates, including age-sex adjustment and anchoring background rates around a visit, reduced type 1 error and improved precision, but residual systematic error persisted. Additionally, empirical calibration dramatically reduced type 1 error to nominal but came at the cost of increasing type 2 error.

Our study found that within-database background rate comparison is a sensitive but unspecific method to identify vaccine safety signals. The method is positively biased, with low (<=20%) type 2 error, and 20% to 100% of negative control outcomes were incorrectly identified as safety signals due to type 1 error. Age-sex adjustment and anchoring background rate estimates around a healthcare visit are useful strategies to reduce false positives, with little impact on type 2 error. Sufficient sensitivity was reached for the identification of safety signals by month 1-2 for vaccines with quick uptake (e.g., seasonal influenza), but much later (up to month 9) for vaccines with slower uptake (e.g., varicella-zoster or papillomavirus).
Finally, we reported that empirical calibration using negative control outcomes reduces type 1 error to nominal at the cost of increasing type 2 error.
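The observed-vs-expected comparison at the core of these analyses can be illustrated with a minimal sketch (hypothetical rates and counts, not the study's data): an expected count is derived from a historical background rate and post-vaccination person-time, and a one-sided Poisson test flags a potential signal.

```python
import math

def poisson_sf(k, mu):
    """P(X >= k) for X ~ Poisson(mu): one-sided test for excess events."""
    return 1.0 - sum(math.exp(-mu) * mu**i / math.factorial(i) for i in range(k))

# Hypothetical inputs: 12 events observed over 10,000 post-vaccination
# person-years, against a historical background rate of 5 per 10,000.
observed = 12
person_years = 10_000
background_rate = 5 / 10_000

expected = background_rate * person_years   # 5.0 expected events
irr = observed / expected                   # observed/expected ratio = 2.4
p_value = poisson_sf(observed, expected)    # small p -> potential safety signal
print(irr, p_value)
```

Age-sex adjustment, as described above, would correspond to computing `expected` as a sum of stratum-specific background rates times stratum person-time rather than a single crude rate.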


2020
Author(s):
Hao Cheng

Universities commercialize their discoveries at an increasing pace in order to maximize their economic impact and generate additional funding for research. They form technology transfer offices (TTOs) to evaluate the commercial value of university inventions and choose the most promising ones to patent and commercialize. Uncertainty and asymmetric information in project selection make the TTO's choices difficult and can cause both type 1 errors (forgoing valuable discoveries) and type 2 errors (selecting low-value discoveries). In this dissertation, I examine the TTO's project selection process and the factors that influence the choice of academic inventions for patenting and commercialization, the type 1 error committed, and the final licensing outcome. The dissertation contains three essays. In the first essay, I analyze project selection under uncertainty when both the quality of the proposed project and the motives of the applicant are uncertain. Some inventors may have an incentive to disguise the true quality and commercial value of their discoveries in order to conform to organizational expectations of disclosure while retaining the rights to potentially pursue commercialization of their discoveries outside the organization's boundaries for their own benefit. Inventors may equally, ex post, lose interest in the commercialization of their invention due to competing job demands. I develop a model to examine the decision process of a university TTO responsible for the commercialization of academic inventions under such circumstances. The model describes the conditions that prompt type 1 and type 2 errors and allows for inferences for minimizing each. Little is known about the factors that make project selection effective or ineffective, and there has been limited empirical analysis in this area.
The few empirical studies that are available examine the sources of type 2 error, but there is no empirical work that analyzes type 1 error and its contributing factors. Research on type 1 error encounters two main difficulties: first, it is difficult to ascertain the decision process, and second, it is challenging to approximate the counterfactual. Using data from the TTO of the University of Missouri, in the second essay I study the factors that influence the TTO's project selection process and the ex post type 1 error realized. In most cases, universities pursue commercialization of their inventions through licensing. A few empirical studies have researched the factors that affect licensing and their relative importance. In the third essay, I examine the characteristics of university inventions that are licensed, using almost 10 years of data on several hundred inventions, their characteristics, and their licensing status.


Author(s):  
Jiao Chen
Yuan Li
Jianfeng Yu
Wenbin Tang

Tolerance modeling is the most basic issue in Computer Aided Tolerancing (CAT). If the resulting model cannot accurately represent variations in the tolerance zone, it will to a great extent negatively influence the performance of subsequent activities such as tolerance analysis. According to the ASME Y14.5M standard [1], there is a class of profile tolerances for lines and surfaces that should also be interpreted correctly. Aimed at this class of tolerances, the paper proposes a unified framework called DOFAS for representing them, which is composed of three parts: a basic DOF (Degrees of Freedom) model for interpreting geometric variations of profiles, an assessment method for filtering out and rejecting those profiles that cannot be accurately represented, and a split algorithm for splitting rejected profiles into sub-profiles to make their variations interpretable. The scope of discussion in this paper is restricted to line profiles; we will focus on surface profiles in forthcoming papers. From the DOF model, two types of errors resulting from the rotations of the features are identified and formulated. One type of error results from misalignment between the profile boundary and the tolerance zone boundary (noted as type 1); the other forms if the feature itself exceeds the range of the tolerance zone (noted as type 2). Specifically, it is required that the boundary points of the line profile align with the corresponding boundary lines of the tolerance zone, and that an arbitrary point of the line profile lie within the tolerance zone, when the line profile rotates in the tolerance zone. To make the DOF model as accurate as possible, an assessment method and a split algorithm are developed to evaluate and eliminate these two types of errors. Clearly, not all line features carry the two types of errors; as such, the assessment method is used as a filter that checks and retains those features satisfying the error conditions.
In general, a feature with simple geometry is error-free and selected by the filter, whereas a feature with complex geometry is rejected. According to the two types of errors, two sub-procedures of the assessment process are introduced. The first is mathematically a scheme for solving the maximum deviation of the rotation trajectories of the profile boundary, so that the type 1 error can be neglected if it approaches zero. The other solves the maximum deviation of the trajectories of all points of the feature: the type 2 error can be ignored when the retrieved maximum deviation is not greater than a prescribed threshold, so that the feature always stays within the tolerance zone. For features rejected by the filter, i.e., those that do not satisfy the error conditions, the split algorithm, which covers the three cases of occurrence of type 1 error, occurrence of type 2 error, and concurrence of both, is developed to eliminate their errors. By utilizing and analyzing the geometric and kinematic properties of the feature, the split point is recognized and obtained accordingly. Two sub-features are derived at the split point and then substituted into the DOFAS framework recursively until all split features can be represented at the desired resolution. The split algorithm is efficient and self-adapting, in that the rules applied ensure a high convergence rate and the expected results. Finally, the implementation with two examples indicates that the DOFAS framework is capable of representing profile tolerances with enhanced accuracy and thus supports the feasibility of the proposed approach.
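The type 2 check described above can be illustrated with a minimal sketch (hypothetical code, not the authors' implementation): rotate sampled profile points about the tolerance-zone centerline and compare the largest normal deviation against the zone half-width.

```python
import math

def max_normal_deviation(points, theta):
    # Rotate sampled profile points (x, y) by theta about the origin of the
    # tolerance-zone centerline and return the largest |y'| after rotation,
    # i.e. the largest deviation normal to the zone.
    return max(abs(x * math.sin(theta) + y * math.cos(theta)) for x, y in points)

def exceeds_zone(points, theta, half_width):
    # Type 2 condition: some point of the rotated feature leaves the zone.
    return max_normal_deviation(points, theta) > half_width

# A straight 10 mm line profile rotated by 0.05 rad deviates ~0.5 mm at its end.
profile = [(float(x), 0.0) for x in range(11)]
print(exceeds_zone(profile, 0.05, half_width=0.6))  # stays inside the zone
print(exceeds_zone(profile, 0.05, half_width=0.4))  # type 2 error occurs
```

In the framework described above, a feature failing this check would be passed to the split algorithm and re-evaluated as two sub-features.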


1986
Vol 20 (2)
pp. 189-200
Author(s):
Kevin D. Bird
Wayne Hall

Statistical power is neglected in much psychiatric research, with the consequence that many studies do not provide a reasonable chance of detecting differences between groups if they exist in the population. This paper attempts to improve current practice by providing an introduction to the essential quantities required for performing a power analysis (sample size, effect size, type 1 and type 2 error rates). We provide simplified tables for estimating the sample size required to detect a specified size of effect with a type 1 error rate of α and a type 2 error rate of β, and for estimating the power provided by a given sample size for detecting a specified size of effect with a type 1 error rate of α. We show how to modify these tables to perform power analyses for multiple comparisons in univariate and some multivariate designs. Power analyses for each of these types of design are illustrated by examples.
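The quantities involved can be illustrated numerically (a normal-approximation sketch, not the paper's tables, which are t-based and give slightly larger values): the per-group sample size for a two-sided two-sample comparison follows from the alpha and beta quantiles of the standard normal distribution.

```python
import math

def phi(x):
    # standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def phi_inv(p):
    # standard normal quantile via bisection
    lo, hi = -10.0, 10.0
    for _ in range(100):
        mid = (lo + hi) / 2.0
        if phi(mid) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

def n_per_group(effect_size, alpha, beta):
    # n = 2 * ((z_{1-alpha/2} + z_{1-beta}) / d)^2 for a two-sided
    # two-sample comparison of means with standardized effect size d
    z = phi_inv(1 - alpha / 2) + phi_inv(1 - beta)
    return math.ceil(2 * (z / effect_size) ** 2)

# Medium effect (d = 0.5), alpha = 0.05, power = 0.8 (beta = 0.2):
print(n_per_group(0.5, 0.05, 0.20))  # 63 per group (exact t tables give ~64)
```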


1984
Vol 64 (3)
pp. 563-568
Author(s):
J. N. B. SHRESTHA
P. S. FISER
L. AINSWORTH

A comparison was made of raddle markings and presence of spermatozoa in vaginal smears as a means of predicting which ewes will subsequently lamb. Data were obtained from 308 ewe lambs (6–7.5 mo old) and 464 sexually mature ewes (26–28 mo old) mated to experienced rams at the synchronized estrus induced by treatment with fluorogestone-acetate-impregnated intravaginal sponges and i.m. injection of pregnant mares' serum gonadotrophin. Ewes raddled during 12 h intervals from 0 to 72 h after sponge removal were recorded. Vaginal smears taken 48 and 72 h after sponge removal were examined microscopically for presence of spermatozoa. Of the ewes raddled, 41% of the ewe lambs and 86% of the mature ewes lambed. Corresponding results based on presence of spermatozoa in the combined vaginal smear data were 51% and 89%. In ewe lambs, 96% of the ewes with no raddle markings and 100% of the ewes with no spermatozoa in vaginal smears were not pregnant. Corresponding results for mature ewes were 56% and 95%, respectively. This study shows that the presence of spermatozoa in vaginal smears 48 and 72 h after induction of a synchronized estrus improves the accuracy of predicting ewes that subsequently lamb to matings at the synchronized estrus over that based on ewes raddled. This improvement results from a reduction in the number of ewe lambs predicted as bred that failed to lamb (Type 1 error), and a substantial improvement in the percentage of mature ewes predicted as non-pregnant that subsequently lambed (Type 2 error). Key words: Raddle markings, spermatozoa, vaginal smears, synchronized estrus, lambing, ewes


Diabetes
2020
Vol 69 (Supplement 1)
pp. 126-LB
Author(s):
STEPHANIE HABIF
ALEXANDRA CONSTANTIN
LARS MUELLER
HARSIMRAN SINGH

2021
Author(s):
Rachana Tank
Joey Ward
Daniel J. Smith
Kristin E. Flegal
Donald M. Lyall

Abstract
Importance: Recent research has suggested that genetic variation in the Klotho (KL) locus may modify the association between apolipoprotein E (APOE) e4 genotype and cognitive impairment.
Objective: Large-scale testing for associations and interactions between KL and APOE genotypes vs. risk of dementia (n=1,570 cases), cognitive abilities (n=174,513) and brain structure (n=13,158) in older (60+ years) participants.
Design, Setting and Participants: Cross-sectional and prospective data (UK Biobank).
Main Outcomes and Measures: KL status was indexed with heterozygosity of the rs9536314 polymorphism (vs. not), in unrelated people with vs. without APOE e4 genotype, using regression and interaction tests. We assessed non-demented cognitive scores (processing speed; reasoning; memory; executive function), multiple structural brain imaging measures, and clinical dementia outcomes. All tests were corrected for age, sex, assessment centre, eight principal components for population stratification, genotypic array, smoking history, deprivation, and self-reported medication history.
Results: APOE e4 presence (vs. not) was associated with increased risk of dementia, worse cognitive abilities and brain structure differences. KL heterozygosity was associated with less frontal lobe grey matter. There were no significant APOE/KL interactions for cognitive, dementia or brain imaging measures (all P>0.05).
Conclusions and Relevance: We found no evidence of APOE/KL interactions on cognitive, dementia or brain imaging outcomes. This could be due to some degree of cognitive test imprecision, generally preserved participant health potentially due to relatively young age, type 1 error in prior studies, or a significant age-dependent KL effect arising only in the context of marked AD pathology.
Key Points
Question: Klotho genotype has previously been shown to 'offset' a substantial amount of the APOE e4/cognitive impairment association. Is this modification effect apparent in large-scale independent data, in terms of non-demented cognitive abilities, brain structure and dementia prevalence?
Findings: In participants aged 60 years and above from UK Biobank, we found significant associations of APOE and Klotho genotypes with cognitive, structural brain and dementia outcomes, but no significant interactions.
Meaning: This could reflect somewhat healthy participants, prior type 1 error or cognitive/dementia ascertainment imprecision, and/or that Klotho genotypic effects are age- and neuropathology-dependent.


2021
Author(s):
Maximilian Maier
Daniel Lakens

The default use of an alpha level of 0.05 is suboptimal for two reasons. First, decisions based on data can be made more efficiently by choosing an alpha level that minimizes the combined Type 1 and Type 2 error rate. Second, it is possible that in studies with very high statistical power, p-values lower than the alpha level can be more likely when the null hypothesis is true than when the alternative hypothesis is true (i.e., Lindley's paradox). This manuscript explains two approaches that can be used to justify a better choice of an alpha level than relying on the default threshold of 0.05. The first approach is based on the idea of either minimizing or balancing Type 1 and Type 2 error rates. The second approach lowers the alpha level as a function of the sample size to prevent Lindley's paradox. An R package and Shiny app are provided to perform the required calculations. Both approaches have their limitations (e.g., the challenge of specifying relative costs and priors), but can offer an improvement to current practices, especially when sample sizes are large. The use of alpha levels that have a better justification should improve statistical inferences and can increase the efficiency and informativeness of scientific research.
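The first approach can be sketched numerically (a normal-approximation illustration with equal error costs, not the authors' R package): scan candidate alpha levels and keep the one that minimizes alpha + beta for a given effect size and per-group sample size.

```python
import math

def phi(x):
    # standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def phi_inv(p):
    # standard normal quantile via bisection
    lo, hi = -10.0, 10.0
    for _ in range(100):
        mid = (lo + hi) / 2.0
        if phi(mid) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

def combined_error(alpha, d, n):
    # Two-sided two-sample z-test: Type 2 rate at noncentrality d*sqrt(n/2)
    # (the negligible far tail of the rejection region is ignored)
    ncp = d * math.sqrt(n / 2.0)
    beta = phi(phi_inv(1 - alpha / 2.0) - ncp)
    return alpha + beta

# Grid search: with d = 0.5 and n = 64 per group, the minimizing alpha
# lies near 0.10, well above the 0.05 default.
alphas = [a / 1000.0 for a in range(1, 500)]
best = min(alphas, key=lambda a: combined_error(a, d=0.5, n=64))
print(best)
```

Weighting the two error rates by their relative costs, as the manuscript discusses, would amount to minimizing `w * alpha + beta` instead of the equal-cost sum used here.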


Author(s):  
Elizabeth M. McNally
Douglas L. Mann
Yigal Pinto
Deepak Bhakta
Gordon Tomaselli
...  

Abstract
Myotonic dystrophy is an inherited systemic disorder affecting skeletal muscle and the heart. Genetic testing for myotonic dystrophy is diagnostic and identifies those at risk for cardiac complications. The 2 major genetic forms of myotonic dystrophy, type 1 and type 2, differ in genetic etiology yet share clinical features. The cardiac management of myotonic dystrophy should include surveillance for arrhythmias and left ventricular dysfunction, both of which occur in a progressive manner and contribute to morbidity and mortality. To promote the development of care guidelines for myotonic dystrophy, the Myotonic Foundation solicited the input of care experts and organized the drafting of these recommendations. Because myotonic dystrophy is a rare disorder, large-scale clinical trial data to guide its management are largely lacking. The following recommendations represent expert consensus opinion from those with experience in the management of myotonic dystrophy, in part supported by literature-based evidence where available.

