Statistical approaches for analyzing mutational spectra: some recommendations for categorical data.

Genetics ◽  
1994 ◽  
Vol 136 (1) ◽  
pp. 403-416
Author(s):  
W W Piegorsch ◽  
A J Bailer

Abstract In studies examining the patterns or spectra of mutational damage, the primary variables of interest are typically expressed as discrete counts within defined categories of damage. Various statistical methods can be applied to test for heterogeneity among the observed spectra of different classes, treatment groups, and/or doses of a mutagen. These are described and compared via computer simulations to determine which are most appropriate for practical use in the evaluation of spectral data. Our results suggest that selected, simple modifications of the usual Pearson X² statistic for contingency tables provide stable false positive error rates near the usual α = 0.05 level and also acceptable sensitivity to detect differences among spectra. Extensions to the problem of identifying individual differences within and among mutant spectra are noted.
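As a minimal illustration of the kind of heterogeneity test discussed above, the sketch below applies the ordinary Pearson X² test to a contingency table of mutant counts per damage category for two hypothetical treatment groups; the counts and category labels are invented for illustration, and the authors' modified statistics are not reproduced.

```python
# Minimal sketch: Pearson chi-square test for heterogeneity between two
# mutational spectra (hypothetical counts; not the authors' modified statistics).
import numpy as np
from scipy.stats import chi2_contingency

# Rows = treatment groups, columns = categories of mutational damage
# (e.g., transitions, transversions, deletions, insertions) -- illustrative only.
spectra = np.array([
    [42, 17,  9,  5],   # control
    [58, 33, 21,  4],   # treated
])

chi2, p_value, dof, expected = chi2_contingency(spectra, correction=False)
print(f"Pearson X^2 = {chi2:.2f}, df = {dof}, p = {p_value:.4f}")
# A small p-value (e.g., below alpha = 0.05) suggests heterogeneity
# between the observed spectra.
```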

2020 ◽  
Author(s):  
Kristy Martire ◽  
Agnes Bali ◽  
Kaye Ballantyne ◽  
Gary Edmond ◽  
Richard Kemp ◽  
...  

We do not know how often false positive reports are made in a range of forensic science disciplines. In the absence of this information, it is important to understand the naive beliefs held by potential jurors about the reliability of forensic science evidence, because it is these beliefs that will shape evaluations at trial. This descriptive study adds to our knowledge about naive beliefs by: 1) measuring jury-eligible (lay) perceptions of reliability for the largest range of forensic science disciplines to date, over three waves of data collection between 2011 and 2016 (n = 674); 2) calibrating reliability ratings against false positive report estimates; and 3) comparing lay reliability estimates with those of an opportunity sample of forensic practitioners (n = 53). Overall, the data suggest that both jury-eligible participants and practitioners consider forensic evidence highly reliable. When compared with best or plausible estimates of reliability and error in the forensic sciences, these views appear to overestimate reliability and underestimate the frequency of false positive errors. This result highlights the importance of collecting and disseminating empirically derived estimates of false positive error rates to ensure that practitioners and potential jurors have a realistic impression of the value of forensic science evidence.
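A hedged sketch of the kind of calibration described in point 2: a perceived reliability rating is translated into the false positive report rate it implies (one minus reliability) and compared against an external estimate. All disciplines and numbers below are hypothetical placeholders, not the study's data.

```python
# Sketch: translate perceived reliability into an implied false positive
# report rate and compare with an external estimate (all values hypothetical).
perceived_reliability = {        # proportion of reports believed correct
    "DNA comparison": 0.96,
    "fingerprint comparison": 0.94,
    "bite-mark comparison": 0.90,
}
external_fp_estimate = {         # plausible false positive rates (placeholders)
    "DNA comparison": 0.01,
    "fingerprint comparison": 0.03,
    "bite-mark comparison": 0.15,
}

for discipline, reliability in perceived_reliability.items():
    implied_fp = 1.0 - reliability
    gap = external_fp_estimate[discipline] - implied_fp
    print(f"{discipline}: implied FP rate {implied_fp:.2f}, "
          f"external estimate {external_fp_estimate[discipline]:.2f}, "
          f"difference {gap:+.2f}")
```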


1990 ◽  
Vol 15 (1) ◽  
pp. 39-52 ◽  
Author(s):  
Huynh Huynh

False positive and false negative error rates are studied for competency testing in which examinees are permitted to retake the test if they fail to pass. Formulae are provided for the beta-binomial and Rasch models, and estimates based on these two models are compared for several typical situations. Although Rasch estimates are expected to be more accurate than beta-binomial estimates, the differences between them are not substantial in a number of practical situations. Under relatively general conditions, when test retaking is permitted the probability of making a false negative error is zero, while for an examinee who is a true nonmaster the conditional probability of making a false positive error is one.
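The limiting behaviour described above can be seen in a simple binomial sketch: with enough retakes, a true nonmaster eventually passes, so the false positive probability approaches one. The item count, cut score, and per-item success rate below are hypothetical, and the article's beta-binomial and Rasch formulae are not reproduced.

```python
# Sketch: probability that a true nonmaster eventually passes a mastery test
# when retakes are allowed, under a simple binomial model (hypothetical values).
from scipy.stats import binom

n_items, cut_score = 40, 30          # pass if at least 30 of 40 items correct
p_correct_nonmaster = 0.65           # true nonmaster's per-item success rate

# Probability of passing a single attempt: P(score >= cut_score).
p_pass_once = binom.sf(cut_score - 1, n_items, p_correct_nonmaster)

# Probability of passing at least once within k attempts.
for k in (1, 3, 10, 50):
    p_pass_by_k = 1.0 - (1.0 - p_pass_once) ** k
    print(f"attempts = {k:3d}: P(false positive) = {p_pass_by_k:.3f}")
# As k grows, P(false positive | true nonmaster) -> 1, mirroring the
# limiting result stated in the abstract.
```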


2019 ◽  
Vol 302 ◽  
pp. 109877 ◽  
Author(s):  
Kristy A. Martire ◽  
Kaye N. Ballantyne ◽  
Agnes Bali ◽  
Gary Edmond ◽  
Richard I. Kemp ◽  
...  

2019 ◽  
Author(s):  
Scott D. Blain ◽  
Julia Longenecker ◽  
Rachael Grazioplene ◽  
Bonnie Klimes-Dougan ◽  
Colin G. DeYoung

Positive symptoms of schizophrenia and its extended phenotype—often termed psychoticism or positive schizotypy—are characterized by the inclusion of novel, erroneous mental contents. One promising framework for explaining positive symptoms involves “apophenia,” conceptualized here as a disposition toward false positive errors. Apophenia and positive symptoms have shown relations to Openness to Experience, and all of these constructs involve tendencies toward pattern seeking. Nonetheless, few studies have investigated the relations between psychoticism and non-self-report indicators of apophenia, let alone the role of normal personality variation. The current research used structural equation models to test associations between psychoticism, openness, intelligence, and non-self-report indicators of apophenia comprising false positive error rates on a variety of computerized tasks. In Sample 1, 1193 participants completed digit identification, theory of mind, and emotion recognition tasks. In Sample 2, 195 participants completed auditory signal detection and semantic word association tasks. Openness and psychoticism were positively correlated. Self-reported psychoticism, openness, and their shared variance were positively associated with apophenia, as indexed by false positive error rates, whether or not intelligence was controlled for. Apophenia was not associated with other personality traits, and openness and psychoticism were not associated with false negative errors. Standardized regression paths from openness-psychoticism to apophenia were in the range of .61 to .75. Findings provide insights into the measurement of apophenia and its relation to personality and psychopathology. Apophenia and pattern seeking may be promising constructs for unifying openness with the psychosis spectrum and for providing an explanation of positive symptoms. Results are discussed in the context of possible adaptive characteristics of apophenia, as well as potential risk factors for the development of psychotic disorders.
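As a hedged sketch of how a non-self-report apophenia indicator can be scored, the snippet below computes false positive (false alarm) rates on noise-only trials of a detection task and correlates them with a self-report psychoticism score. The data are simulated and the variable names are hypothetical; the study's full latent-variable (SEM) analysis would require dedicated modeling software.

```python
# Sketch: false positive (false alarm) rate on a detection task and its
# correlation with a self-report trait score (all data simulated/hypothetical).
import numpy as np

rng = np.random.default_rng(0)
n_participants, n_noise_trials = 200, 60

# Simulated per-participant tendency to report a signal on noise-only trials.
apophenia_tendency = rng.beta(2, 10, size=n_participants)
false_alarms = rng.binomial(n_noise_trials, apophenia_tendency)
false_positive_rate = false_alarms / n_noise_trials

# Simulated self-report psychoticism score that partly tracks the same tendency.
psychoticism = 0.7 * apophenia_tendency + 0.3 * rng.normal(0, 0.1, n_participants)

r = np.corrcoef(false_positive_rate, psychoticism)[0, 1]
print(f"correlation(false positive rate, psychoticism) = {r:.2f}")
```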


2018 ◽  
Author(s):  
Stephen D Benning ◽  
Rachel L. Bachrach ◽  
Edward Smith ◽  
Andrew Justin Freeman ◽  
Aidan G.C. Wright

Clinical scientists can use a continuum of registration efforts that vary in their disclosure and timing relative to data collection and analysis. Broadly speaking, registration benefits investigators by offering stronger, more powerful tests of theory with particular methods in tandem with better control of long-run false positive error rates. Registration helps clinical researchers in thinking through tensions between bandwidth and fidelity that surround recruiting participants, defining clinical phenotypes, handling comorbidity, treating missing data, and analyzing rich and complex data. In particular, registration helps record and justify the reasons behind specific study design decisions, though it also provides the opportunity to register entire decision trees with specific endpoints. Creating ever more faithful registrations and standard operating procedures may offer alternative methods of judging a clinical investigator’s scientific skill and eminence because study registration increases the transparency of clinical researchers’ work.
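A minimal simulation of the "long-run false positive error rate" point: when no true effect exists, testing several analysis variants and reporting whichever one reaches significance inflates the false positive rate well above the nominal level, whereas a single preregistered test holds it near alpha. Everything below is illustrative and not drawn from the article.

```python
# Sketch: inflation of the long-run false positive rate when the "best" of
# several analysis variants is reported, versus one preregistered test.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)
n_sims, n_per_group, alpha = 2000, 30, 0.05
flexible_hits = preregistered_hits = 0

for _ in range(n_sims):
    a = rng.normal(0, 1, (n_per_group, 3))    # three candidate outcome measures
    b = rng.normal(0, 1, (n_per_group, 3))    # no true group difference
    p_values = [ttest_ind(a[:, j], b[:, j]).pvalue for j in range(3)]
    flexible_hits += min(p_values) < alpha     # report whichever outcome "works"
    preregistered_hits += p_values[0] < alpha  # outcome fixed in advance

print(f"flexible analysis:  false positive rate ~ {flexible_hits / n_sims:.3f}")
print(f"preregistered test: false positive rate ~ {preregistered_hits / n_sims:.3f}")
```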


2021 ◽  
Author(s):  
Tshifhiwa Nkwenika ◽  
Samuel Manda

Abstract
Background: Death certification remains a challenge, mostly in low-resource countries, which results in poor availability and incompleteness of vital statistics. In such settings, public health and developmental policies concerning the burden of disease are limited in their derivation and application. The study aimed at developing and evaluating appropriate cause-specific mortality risk scores using Verbal Autopsy (VA) data.
Methods: A logistic regression model was used to identify independent predictors of non-communicable disease (NCD), AIDS/TB, and communicable disease (CD) specific causes of death. Risk scores were derived using a point scoring system. Receiver operating characteristic (ROC) curves were used to validate the models by matching the number of reported deaths to the number of deaths predicted by the models.
Results: The models provided accurate prediction results, with sensitivities of 86%, 46%, and 40% and false-positive error rates of 44%, 11%, and 12% for NCDs, AIDS/TB, and CDs respectively.
Conclusion: This study has shown that, in low- and middle-income countries, simple risk scores built from information collected with a verbal autopsy questionnaire could be adequately used to assign causes of death for non-communicable diseases and AIDS/TB.
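A hedged sketch of the general workflow outlined in the Methods, using simulated data and hypothetical predictors rather than the study's VA dataset: fit a logistic regression, round the coefficients into integer point scores, and read sensitivity and false positive rate off an ROC curve.

```python
# Sketch of the general workflow: logistic regression -> integer point scores
# -> ROC-based sensitivity / false positive rate (simulated data only).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_curve

rng = np.random.default_rng(2)
n = 1000
X = rng.binomial(1, 0.3, size=(n, 4))                  # four binary VA-style indicators
true_beta = np.array([1.2, 0.8, -0.5, 0.4])            # hypothetical effects
y = rng.binomial(1, 1 / (1 + np.exp(-(X @ true_beta - 0.8))))  # e.g., NCD death

model = LogisticRegression().fit(X, y)
points = np.round(model.coef_[0] / np.abs(model.coef_[0]).min()).astype(int)
risk_score = X @ points                                 # simple additive point score

fpr, tpr, thresholds = roc_curve(y, risk_score)
best = np.argmax(tpr - fpr)                             # Youden-style cut-off
print(f"point weights: {points}")
print(f"cut-off: score >= {thresholds[best]:.0f}, "
      f"sensitivity = {tpr[best]:.2f}, false positive rate = {fpr[best]:.2f}")
```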


Author(s):  
Xinwei Deng ◽  
Ying Hung ◽  
C. Devon Lin

Computer experiments refer to the study of complex systems using mathematical models and computer simulations. Their use has become popular for studying complex systems in science and engineering, and the design and analysis of computer experiments have received broad attention over the past decades. In this chapter, we present several widely used statistical approaches for the design and analysis of computer experiments, including space-filling designs and Gaussian process modeling. Special emphasis is given to recently developed design and modeling techniques for computer experiments with quantitative and qualitative factors.
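A minimal sketch of the two ingredients named above: a space-filling (Latin hypercube) design and a Gaussian process surrogate fitted to the resulting computer-experiment runs. The test function, design size, and kernel settings are illustrative assumptions, not the chapter's examples.

```python
# Sketch: Latin hypercube design + Gaussian process surrogate for a
# computer experiment (illustrative test function and settings).
import numpy as np
from scipy.stats import qmc
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def simulator(x):
    # Stand-in for an expensive computer model.
    return np.sin(6 * x[:, 0]) + 0.5 * np.cos(4 * x[:, 1])

# Space-filling design: 20 runs in [0, 1]^2.
design = qmc.LatinHypercube(d=2, seed=0).random(n=20)
response = simulator(design)

# Gaussian process emulator with an RBF (squared-exponential) kernel.
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.3), normalize_y=True)
gp.fit(design, response)

x_new = np.array([[0.25, 0.75]])
mean, std = gp.predict(x_new, return_std=True)
print(f"emulator prediction at {x_new[0]}: {mean[0]:.3f} +/- {std[0]:.3f}")
```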


1998 ◽  
Vol 61 (3) ◽  
pp. 334-338 ◽  
Author(s):  
M. A. GRANT

U.S. Food and Drug Administration regulations governing bottled water include microbiological quality guidelines based on coliform counts. Recently, a new MF medium for simultaneous detection of total coliforms and Escherichia coli was developed. This medium, m-ColiBlue24 (m-CB), was compared with m-Endo medium and an International Organization for Standardization standard coliform medium, lactose agar with Tergitol 7. Coliform analysis was conducted on 104 brands of bottled water from 10 countries. Some samples were additionally analyzed for heterotrophic plate count and Pseudomonas sp. populations, including P. aeruginosa. Presumptive coliform colonies were found in 5.8% of the samples with m-CB, 1.9% with m-Endo, and 11.5% with lactose agar with Tergitol 7. None of the presumptive coliforms from any of the three media were verified as true coliforms in subsequent analysis; consequently, the presumptive recovery rates actually represented false-positive error (FPE) rates. The FPE rates for m-CB and m-Endo were not significantly different at the P < 0.05 level, but the FPE rate for lactose agar with Tergitol 7 was significantly higher.
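To make the comparison of false-positive error rates concrete, the sketch below reconstructs the reported percentages as counts out of 104 samples and applies Fisher's exact test to each pairwise comparison of media. This is an illustration of the kind of proportion comparison involved, under the stated assumptions, not necessarily the paper's exact analysis.

```python
# Sketch: pairwise comparison of false-positive error proportions
# (counts reconstructed from the reported percentages of 104 samples;
# illustrative, not necessarily the paper's exact analysis).
from itertools import combinations
from scipy.stats import fisher_exact

n_samples = 104
presumptive_positives = {"m-CB": 6, "m-Endo": 2, "Tergitol 7": 12}  # ~5.8%, 1.9%, 11.5%

for medium_a, medium_b in combinations(presumptive_positives, 2):
    a, b = presumptive_positives[medium_a], presumptive_positives[medium_b]
    table = [[a, n_samples - a], [b, n_samples - b]]
    odds_ratio, p_value = fisher_exact(table)
    print(f"{medium_a} vs {medium_b}: p = {p_value:.3f}")
```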

