Evaluation of DNA Mixtures Accounting for Sampling Variability

2010 ◽  
pp. 437-444
Author(s):  
Yuk-Ka Chung ◽  
Yue-Qing Hu ◽  
De-Gang Zhu ◽  
Wing K. Fung


2021 ◽  
Vol 1 (1) ◽  
pp. 33-45
Author(s):  
Dennis McNevin ◽  
Kirsty Wright ◽  
Mark Barash ◽  
Sara Gomes ◽  
Allan Jamieson ◽  
...  

Continuous probabilistic genotyping (PG) systems are becoming the default method for calculating likelihood ratios (LRs) for competing propositions about DNA mixtures. Calculation of the LR relies on numerical methods and simultaneous probabilistic simulations of multiple variables rather than on analytical solutions alone. Some systems also require modelling of the individual laboratory processes that give rise to electropherogram artefacts and peak height variance. For these reasons, it has been argued that any LR produced by continuous PG is unique and cannot be compared with another. We challenge this assumption and demonstrate that there is a set of conditions, defining specific DNA mixtures, that can produce an aspirational LR and thereby provide a measure of reproducibility for DNA profiling systems incorporating PG. Such DNA mixtures could serve as the basis for inter-laboratory comparisons, even when different STR amplification kits are employed. We propose a procedure for an inter-laboratory comparison consistent with these conditions.
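A minimal sketch of the kind of Monte Carlo LR calculation the abstract alludes to, assuming a toy single-locus, semi-continuous model with a per-allele drop-out probability and no drop-in; the allele frequencies, observed allele set, genotype of the person of interest, and drop-out rate are hypothetical, and this is not the model implemented by any commercial PG system.

```python
# Toy single-locus LR for a two-person mixture: Hp (POI + one unknown) vs
# Hd (two unknowns). Drop-out is applied per distinct contributed allele and
# drop-in is ignored -- deliberate simplifications for illustration only.
import random

freqs = {"10": 0.2, "11": 0.3, "12": 0.4, "13": 0.1}  # hypothetical allele frequencies
observed = {"10", "12"}                               # alleles called in the profile
poi = ("10", "12")                                    # person of interest genotype
drop_out = 0.05                                       # per-allele drop-out probability
alleles, weights = list(freqs), list(freqs.values())

def locus_likelihood(g1, g2):
    """P(observed alleles | the two contributor genotypes) under the toy model."""
    contributed = set(g1) | set(g2)
    if not observed <= contributed:    # no drop-in, so every observed allele
        return 0.0                     # must come from a contributor
    p = 1.0
    for a in contributed:
        p *= drop_out if a not in observed else (1.0 - drop_out)
    return p

def mc_likelihood(fixed=None, n=200_000):
    """Monte Carlo average over Hardy-Weinberg genotypes of the unknown(s)."""
    total = 0.0
    for _ in range(n):
        g1 = fixed if fixed else tuple(random.choices(alleles, weights, k=2))
        g2 = tuple(random.choices(alleles, weights, k=2))
        total += locus_likelihood(g1, g2)
    return total / n

random.seed(1)
lr = mc_likelihood(fixed=poi) / mc_likelihood()
print(f"Single-locus LR estimate: {lr:.2f}")
```

Rerunning the same calculation with a different seed gives a slightly different estimate; it is this run-to-run and system-to-system variability that the proposed inter-laboratory mixtures are intended to bound.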


2018 ◽  
Vol 58 (3) ◽  
pp. 191-199 ◽  
Author(s):  
Katherine Farash ◽  
Erin K. Hanson ◽  
Jack Ballantyne

The Lancet ◽  
1986 ◽  
Vol 327 (8480) ◽  
pp. 523-525 ◽  
Author(s):  
B. Maharaj ◽  
W.P. Leary ◽  
A.D. Naran ◽  
R.J. Maharaj ◽  
R.M. Cooppan ◽  
...  

1990 ◽  
Vol 47 (1) ◽  
pp. 2-15 ◽  
Author(s):  
Randall M. Peterman

Ninety-eight percent of recently surveyed papers in fisheries and aquatic sciences that did not reject some null hypothesis (H0) failed to report β, the probability of making a type II error (not rejecting H0 when it should have been), or statistical power (1 – β). However, 52% of those papers drew conclusions as if H0 were true. A false H0 could have been missed because of a low-power experiment, caused by small sample size or large sampling variability. Costs of type II errors can be large (for example, for cases that fail to detect harmful effects of some industrial effluent or a significant effect of fishing on stock depletion). Past statistical power analyses show that abundance estimation techniques usually have high β and that only large effects are detectable. I review relationships among β, power, detectable effect size, sample size, and sampling variability. I show how statistical power analysis can help interpret past results and improve designs of future experiments, impact assessments, and management regulations. I make recommendations for researchers and decision makers, including routine application of power analysis, more cautious management, and reversal of the burden of proof to put it on industry, not management agencies.
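A brief sketch of the relationships the review describes, assuming a two-sided, two-sample comparison of means with a normal approximation; the effect size, standard deviation, and sample sizes below are made-up numbers, not values from the surveyed papers.

```python
# Power (1 - beta) of a two-sided, two-sample z-test as a function of
# sample size, effect size and sampling variability (normal approximation).
from scipy.stats import norm

def power_two_sample(effect, sd, n_per_group, alpha=0.05):
    """Approximate power for detecting a true difference `effect` between two means."""
    se = sd * (2.0 / n_per_group) ** 0.5          # standard error of the difference
    z_crit = norm.ppf(1.0 - alpha / 2.0)          # two-sided critical value
    return norm.cdf(abs(effect) / se - z_crit)    # P(reject H0 | true effect)

# Hypothetical numbers: a 10% decline in abundance with a CV of 30%.
for n in (5, 10, 20, 50, 100):
    p = power_two_sample(effect=0.10, sd=0.30, n_per_group=n)
    print(f"n = {n:3d}  power = {p:.2f}  beta = {1 - p:.2f}")
```

The loop makes the review's point directly: with small samples and large sampling variability, beta stays high and a non-significant result says little about whether H0 is true.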


Author(s):  
M. K. Abu Husain ◽  
N. I. Mohd Zaki ◽  
M. B. Johari ◽  
G. Najafian

For an offshore structure, wind, wave, current, tide, ice and gravitational forces are all important sources of loading that exhibit a high degree of statistical uncertainty. The ability to predict the probability distribution of the response extreme values during the service life of the structure is essential for its safe and economical design. Many techniques have been introduced for evaluating the statistical properties of the response; in each case, sea states are characterised by an appropriate water surface elevation spectrum covering a wide range of frequencies. The most versatile and reliable of these techniques is time domain simulation. In particular, the conventional time simulation (CTS) procedure, commonly called the Monte Carlo time simulation method, is the best-known technique for predicting the short-term and long-term statistical properties of the response to random wave loading because it can account for various nonlinearities. However, it requires very long simulations to reduce the sampling variability to acceptable levels. In this paper, the sampling variability of the Monte Carlo technique is investigated.
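A minimal sketch of the sampling-variability issue, using a stand-in Gaussian response record instead of a real wave-load model; the record length, number of records, and repeat counts are illustrative only.

```python
# Sampling variability of Monte Carlo estimates of an extreme response value.
# The 'response' here is a stand-in Gaussian record, not a structural model.
import numpy as np

rng = np.random.default_rng(0)

def extreme_of_one_record(n_samples=3600):
    """Maximum of one simulated (zero-mean, unit-variance) response record."""
    return rng.standard_normal(n_samples).max()

def estimate_mean_extreme(n_records):
    """Monte Carlo estimate of the expected record maximum from n_records records."""
    return np.mean([extreme_of_one_record() for _ in range(n_records)])

# Repeat each estimate 50 times to see how its spread shrinks with more records.
for n_records in (10, 50, 200):
    estimates = [estimate_mean_extreme(n_records) for _ in range(50)]
    print(f"{n_records:4d} records: mean = {np.mean(estimates):.3f}, "
          f"std across repeats = {np.std(estimates):.3f}")
```

The spread of the repeated estimates shrinks roughly with the square root of the number of records, which is why acceptably low sampling variability demands very long simulations.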


2017 ◽  
Vol 31 ◽  
pp. 34-39 ◽  
Author(s):  
Yu Tan ◽  
Li Wang ◽  
Hui Wang ◽  
Huan Tian ◽  
Zhilong Li ◽  
...  

2019 ◽  
Author(s):  
Payton J. Jones ◽  
Donald Ray Williams ◽  
Richard J. McNally

Forbes, Wright, Markon, and Krueger claim that psychopathology networks have "limited" or "poor" replicability, supporting their argument primarily with data from two waves of an observational study on depression and anxiety. They developed “direct metrics” to gauge change across networks (e.g., change in edge sign), and used these results to support their conclusion. Three key flaws undermine their critique. First, nonreplication across empirical datasets does not provide evidence against a method; such evaluations of methods are possible only in controlled simulations when the data-generating model is known. Second, they assert that the removal of shared variance necessarily decreases reliability. This is not true. Depending on the causal model, it can either increase or decrease reliability. Third, their direct metrics do not account for normal sampling variability, leaving open the possibility that the direct differences between samples are due to normal, unproblematic fluctuations. As an alternative to their direct metrics, we provide a Bayesian re-analysis that quantifies uncertainty and compares relative evidence for replication (i.e., equivalence, H0) versus nonreplication (i.e., nonequivalence, "not H0") for each network edge. This approach provides a principled roadmap for future assessments of network replicability. Our analysis indicated substantial evidence for replication and scant evidence for nonreplication.
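A rough sketch in the spirit of the re-analysis (not the authors' actual code), assuming each edge weight has an approximately normal sampling distribution summarised by its estimate and standard error; the edge estimates, standard errors, and equivalence margin below are invented for illustration.

```python
# Probability that the difference between the same network edge in two waves
# lies inside an equivalence region (replication) versus outside it, assuming
# independent, approximately normal distributions for the two edge weights.
import numpy as np
from scipy.stats import norm

def edge_replication_probability(w1, se1, w2, se2, margin=0.10):
    """P(|edge_wave1 - edge_wave2| < margin) under independent normal approximations."""
    diff_mean = w1 - w2
    diff_sd = np.sqrt(se1**2 + se2**2)
    return norm.cdf(margin, diff_mean, diff_sd) - norm.cdf(-margin, diff_mean, diff_sd)

# Invented example: an edge estimated at 0.21 in wave 1 and 0.17 in wave 2.
p_equiv = edge_replication_probability(w1=0.21, se1=0.04, w2=0.17, se2=0.05)
print(f"P(equivalent within +/-0.10) = {p_equiv:.2f}, "
      f"P(not equivalent) = {1 - p_equiv:.2f}")
```

The point of quantifying uncertainty this way is that a raw difference between waves (for example, a flipped edge sign) is not itself evidence of nonreplication until normal sampling variability has been ruled out.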


2020 ◽  
Author(s):  
Brett Whitty ◽  
John F. Thompson

Background: Low levels of sample contamination can have disastrous effects on the accurate identification of somatic variation in tumor samples. Detection of sample contamination in DNA is generally based on observation of low-frequency variants that suggest more than a single source of DNA is present. This strategy works with standard DNA samples but is especially problematic in solid tumor FFPE samples because there can be huge variations in allele frequency (AF) due to massive copy number changes arising from large gains and losses across the genome. The tremendously variable allele frequencies make detection of contamination challenging. A method not based on individual AF is needed for accurate determination of whether a sample is contaminated and to what degree.

Methods: We used microhaplotypes to determine whether sample contamination is present. Microhaplotypes are sets of variants on the same sequencing read that can be unambiguously phased. Instead of measuring AF, the number and frequency of microhaplotypes is determined. Contamination detection becomes based on fundamental genomic properties, linkage disequilibrium (LD) and the diploid nature of human DNA, rather than variant frequencies. We optimized microhaplotype content based on 164 single nucleotide variant sets located in genes already sequenced within a cancer panel. Thus, contamination detection uses existing sequence data and does not require sequencing of any extraneous regions. The content is chosen based on LD data from the 1000 Genomes Project to be ancestry agnostic, providing the same sensitivity for contamination detection with samples from individuals of African, East Asian, and European ancestry.

Results: Detection of contamination at 1% and below is possible using this design. The methods described here can also be extended to other DNA mixtures, such as forensic and non-invasive prenatal testing samples, where DNA mixes of 1% or less can be similarly detected.

Conclusions: The microhaplotype method allows sensitive detection of DNA contamination in FFPE tumor samples. These methods provide a foundation for examining DNA mixtures in a variety of contexts. With the appropriate panels and high sequencing depth, low levels of secondary DNA can be detected, and this can be valuable in a variety of applications.
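A simplified sketch of the counting logic behind the approach, assuming phased microhaplotype calls per read are already available; the locus haplotypes, read counts, and the 1% noise floor are hypothetical, and the read-fraction estimate below ignores haplotypes shared between the main and contaminating individuals.

```python
# Flag likely contamination when a microhaplotype locus shows more than two
# haplotypes above a noise floor, and estimate the contaminating fraction from
# the read share of the extra (third and beyond) haplotypes.
from collections import Counter

NOISE_FLOOR = 0.01  # hypothetical per-haplotype noise threshold (1% of reads)

def assess_locus(haplotype_reads):
    """haplotype_reads: mapping of phased haplotype string -> supporting read count."""
    counts = Counter(haplotype_reads)
    total = sum(counts.values())
    # Keep haplotypes supported above the noise floor, most abundant first.
    real = [(h, c) for h, c in counts.most_common() if c / total > NOISE_FLOOR]
    extra = real[2:]                       # a clean diploid sample allows at most two
    contam_fraction = sum(c for _, c in extra) / total
    return len(extra) > 0, contam_fraction

# Hypothetical locus: two major haplotypes plus a low-level third one.
example = {"ACG": 4800, "ATG": 5050, "GTG": 140, "GCG": 10}
flagged, fraction = assess_locus(example)
print(f"contamination suspected: {flagged}, extra-haplotype read fraction: {fraction:.3f}")
```

Because the decision rests on how many phased haplotypes appear at a locus rather than on individual allele frequencies, it is unaffected by the copy-number-driven AF swings that confound contamination checks in FFPE tumor samples.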


2016 ◽  
Author(s):  
Brett G. Amidan ◽  
Janine R. Hutchison
