false positive error
Recently Published Documents


TOTAL DOCUMENTS: 74 (FIVE YEARS: 30)

H-INDEX: 10 (FIVE YEARS: 4)

2021 · Vol 11 (1)
Author(s): Aleksandra Krawiec, Łukasz Pawela, Zbigniew Puchała

Abstract: Certification of quantum channels is based on quantum hypothesis testing and also involves preparing an input state and choosing the final measurement. This work focuses primarily on the scenario in which the false negative error cannot occur, even if this increases the probability of the false positive error. We establish a condition under which it is possible to exclude the false negative error after a finite number of parallel queries to the quantum channel, and we provide an upper bound on the number of queries. On top of that, we identify a class of channels that allow the false negative error to be excluded after a finite number of parallel queries but cannot be distinguished unambiguously. Moreover, we prove that a parallel certification scheme is always sufficient, although the number of steps may be reduced by using an adaptive scheme. Finally, we consider examples of certification of various classes of quantum channels and measurements.
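
The zero-false-negative constraint has a simple classical analogue: if the "reject" verdict is issued only on outcomes that are impossible under the claimed channel, a false negative can never occur, and the false positive probability shrinks with the number of parallel queries. The sketch below illustrates only this intuition; it does not reproduce the authors' quantum conditions or bounds, and the two outcome distributions are invented for illustration.

```python
import numpy as np

# Toy classical analogue of certification with no false negative error.
# p_claimed: outcome distribution promised for the channel (hypothesis H0).
# q_actual:  outcome distribution of the channel actually supplied (H1).
p_claimed = np.array([0.7, 0.3, 0.0])   # outcome 2 is impossible under H0
q_actual  = np.array([0.5, 0.3, 0.2])   # outcome 2 can occur under H1

# Accept H0 unless an outcome outside the support of p_claimed is observed;
# the false negative probability is then exactly zero by construction.
support = p_claimed > 0

def false_positive_prob(n_queries: int) -> float:
    """Probability of wrongly accepting H0 when H1 holds, after n parallel queries."""
    # H1 survives undetected only if every query lands inside supp(p_claimed).
    return float(q_actual[support].sum() ** n_queries)

for n in (1, 5, 10, 20):
    print(n, false_positive_prob(n))   # decays geometrically as 0.8**n
```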


2021 · Vol 11 (1)
Author(s): Daniel M. Fernandes, Olivia Cheronet, Pere Gelabert, Ron Pinhasi

Abstract: Estimation of genetic relatedness between individuals is playing an increasingly important role in the ancient DNA field. In recent years, the number of sequenced individuals from single sites has been increasing, reflecting a growing interest in understanding the familial and social organisation of ancient populations. Although a few methods have been developed specifically for ancient DNA, namely to tackle issues such as low-coverage homozygous data, they require a minimum average genomic coverage of 0.1–1× per analysed pair of individuals. Here we present an updated version of a method that enables estimates of 1st- and 2nd-degree relatedness with as little as 0.026× average coverage, or around 18,000 SNPs from 1.3 million aligned reads per sample with an average length of 62 bp, four times less data than 0.1× coverage at similar read lengths. By using simulated data to estimate false positive error rates, we further show that even a threshold as low as 0.012×, or around 4,000 SNPs from 600,000 reads, will always show 1st-degree relationships as related. Lastly, by applying this method to published data, we are able to identify previously undocumented relationships using individuals that had been excluded from prior kinship analyses due to their very low coverage. This methodological improvement has the potential to enable relatedness estimation on ancient whole-genome shotgun data during routine low-coverage screening, and therefore to improve project management when deciding which individuals should be sequenced further.
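
The general logic behind low-coverage relatedness estimators of this kind can be sketched as follows: take one allele call per overlapping SNP in each individual (pseudo-haploid data), compute the pairwise mismatch rate, normalise it by the rate expected for unrelated individuals from the same population, and compare it against the expectations for unrelated, 2nd-degree and 1st-degree pairs. This is a generic sketch under those assumptions, not the authors' published implementation; the cut-offs are illustrative midpoints and the baseline must be estimated from the data set at hand.

```python
import numpy as np

def pairwise_mismatch_rate(calls_a, calls_b):
    """Mismatch rate over SNPs where both individuals have a pseudo-haploid call.

    calls_a, calls_b: integer arrays of allele codes, with -1 meaning 'no data'.
    Returns (mismatch rate, number of overlapping SNPs).
    """
    both = (calls_a >= 0) & (calls_b >= 0)
    if both.sum() == 0:
        raise ValueError("no overlapping SNPs")
    return float((calls_a[both] != calls_b[both]).mean()), int(both.sum())

def classify_relatedness(mismatch, unrelated_baseline):
    """Map a normalised mismatch rate to a coarse degree of relatedness."""
    r = mismatch / unrelated_baseline
    # Expected normalised rates: ~1.0 unrelated, ~0.875 2nd degree, ~0.75 1st degree.
    # The cut-offs below are simple midpoints, not the published thresholds.
    if r < 0.8125:
        return "1st degree"
    if r < 0.9375:
        return "2nd degree"
    return "unrelated"
```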


2021
Author(s): Chen Mo, Zhenyao Ye, Kathryn Hatch, Yuan Zhang, Qiong Wu, ...

Abstract
Background: Fine-mapping is an analytical step for causal prioritization of the polymorphic variants in a trait-associated genomic region identified by genome-wide association studies (GWAS). Prioritization of causal variants can be challenging due to the linkage disequilibrium (LD) patterns among the hundreds to thousands of polymorphisms associated with a trait. Hence, we propose an ℓ0 graph norm shrinkage algorithm that disentangles LD patterns into dense LD blocks consisting of highly correlated single nucleotide polymorphisms (SNPs), and we incorporate this dense LD structure into fine-mapping. In graph-theoretic terms, "dense" refers to a block composed mainly of highly correlated SNPs. We demonstrate the application of our new fine-mapping method using a large UK Biobank (UKBB) sample related to nicotine addiction, and we evaluate and compare its performance with existing fine-mapping algorithms using simulations.
Results: Our results suggest that polymorphic variance in both neighboring and distant variants can be consolidated into dense blocks of highly correlated loci. Dense-LD outperformed comparable fine-mapping methods, with increased sensitivity and a reduced false-positive error rate for causal variant selection. Applied to a UKBB sample, the method replicated loci reported in previous studies and suggested a strong association with nicotine addiction.
Conclusion: We found that the dense LD block structure can guide fine-mapping and accurately determine a parsimonious set of potential causal variants. Our approach is computationally efficient and allows fine-mapping of thousands of polymorphisms.
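
A minimal illustration of the dense-block idea (not the authors' ℓ0 graph norm algorithm): threshold the SNP correlation matrix to build a graph, take connected components as candidate blocks, and keep only the components whose internal edge density is high. The function name and the two thresholds below are placeholders chosen for the sketch.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def dense_ld_blocks(r2, edge_thresh=0.6, density_thresh=0.7):
    """Group SNPs into dense blocks of highly correlated loci.

    r2: (p, p) matrix of squared LD correlations between SNPs.
    Returns a list of SNP index arrays, one per dense block.
    """
    adj = (r2 >= edge_thresh).astype(int)
    np.fill_diagonal(adj, 0)
    n_comp, labels = connected_components(csr_matrix(adj), directed=False)
    blocks = []
    for c in range(n_comp):
        idx = np.flatnonzero(labels == c)
        n = idx.size
        if n < 2:
            continue
        sub = adj[np.ix_(idx, idx)]
        # Fraction of SNP pairs within the component that are in strong LD.
        density = np.triu(sub, k=1).sum() / (n * (n - 1) / 2)
        if density >= density_thresh:
            blocks.append(idx)
    return blocks
```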


Author(s): Valentyna I. Borysova, Bohdan P. Karnaukh

As a result of recent amendments to the procedural legislation of Ukraine, one may observe a tendency in judicial practice to differentiate the standards of proof depending on the type of litigation. Thus, in commercial litigation the so-called "probability of evidence" standard applies, while in criminal proceedings the "beyond a reasonable doubt" standard applies. The purpose of this study was to find a rational justification for the differentiation of the standards of proof applied in civil (commercial) and criminal cases and to explain how the same fact can be considered proven for the purposes of a civil lawsuit yet not proven for the purposes of a criminal charge. The study is based on the methodology of Bayesian decision theory. The paper demonstrates how the principles of Bayesian decision theory can be applied to judicial fact-finding. According to Bayesian theory, the standard of proof applied depends on the ratio of the disutility of a false positive error to the disutility of a false negative error. Since both types of error have the same disutility in civil litigation, the threshold value of conviction is 50+ percent. In a criminal case, on the other hand, the disutility of a false positive error considerably exceeds the disutility of a false negative one, and therefore the threshold value of conviction must be much higher, amounting to 90 percent. Bayesian decision theory is premised on probabilistic assessments. And since the concept of probability has many meanings, the results of applying Bayesian theory to judicial fact-finding can be interpreted in a variety of ways. When dealing with statistical evidence, it is crucial to distinguish between subjective and objective probability. Statistics indicate objective probability, while the standard of proof refers to subjective probability. Yet in some cases, especially when statistical data are the only available evidence, the subjective probability may be roughly equivalent to the objective probability. In such cases, statistics cannot be ignored.
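
The thresholds mentioned in the abstract follow from a standard expected-disutility comparison. Writing p for the fact-finder's degree of belief that the claim is true, D_FP for the disutility of a false positive finding and D_FN for that of a false negative, a sketch of the derivation is given below; the 9:1 ratio in the criminal example is an illustrative value consistent with the 90 percent figure, not one taken from the paper.

```latex
% Find for the claimant iff the expected disutility of doing so is the smaller one:
\[
(1-p)\,D_{FP} \;<\; p\,D_{FN}
\quad\Longleftrightarrow\quad
p \;>\; \frac{D_{FP}}{D_{FP}+D_{FN}}.
\]
% Civil litigation: D_{FP} = D_{FN} gives a threshold of 1/2 (the "50+ percent" standard).
% Criminal case: D_{FP} = 9\,D_{FN} gives a threshold of 9/10, i.e. the 90 percent figure.
```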


2021
Author(s): Daniel M Fernandes, Olivia Cheronet, Pere Gelabert, Ron Pinhasi

Estimation of genetic relatedness between individuals is playing an increasingly important role in the ancient DNA field. In recent years, the number of sequenced individuals from single sites has been increasing, reflecting a growing interest in understanding the familial and social organisation of ancient populations. Although a few methods have been developed specifically for ancient DNA, namely to tackle issues such as low-coverage homozygous data, they require a minimum average genomic coverage of 0.1–1x per analysed pair of individuals. Here we present an updated version of a method that enables estimates of 1st- and 2nd-degree relatedness with as little as 0.026x average coverage, or around 1.3 million aligned reads per sample, four times less data than 0.1x. By using simulated data to estimate false positive error rates, we further show that even a threshold as low as 0.012x, or around 600,000 reads, will always show 1st-degree relationships as related. Lastly, by applying this method to published data, we are able to identify previously undocumented relationships using individuals that had previously been excluded from kinship analysis due to their very low coverage. This methodological improvement has the potential to enable relatedness estimation on ancient whole-genome shotgun data during routine low-coverage screening, and therefore to improve project management when deciding which individuals are to be further sequenced.
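
The false positive error rates reported here come from simulations; conceptually, this amounts to repeatedly downsampling each individual's SNP calls to a target coverage and checking how often the classification of a truly related pair changes. The sketch below shows only that resampling loop, reusing the hypothetical pairwise_mismatch_rate and classify_relatedness helpers from the earlier sketch; it is not the authors' simulation pipeline.

```python
import numpy as np

def downsample_calls(calls, keep_fraction, rng):
    """Randomly mask pseudo-haploid calls to mimic lower sequencing coverage."""
    calls = calls.copy()
    calls[rng.random(calls.size) > keep_fraction] = -1   # -1 marks 'no data'
    return calls

def misclassification_rate(calls_a, calls_b, true_degree, baseline,
                           keep_fraction, n_rep=1000, seed=0):
    """Fraction of downsampling replicates in which a truly related pair
    is no longer assigned its true degree of relatedness."""
    rng = np.random.default_rng(seed)
    errors = 0
    for _ in range(n_rep):
        a = downsample_calls(calls_a, keep_fraction, rng)
        b = downsample_calls(calls_b, keep_fraction, rng)
        rate, _ = pairwise_mismatch_rate(a, b)      # helper from the sketch above
        if classify_relatedness(rate, baseline) != true_degree:
            errors += 1
    return errors / n_rep
```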


2021 · Vol 11 (1)
Author(s): Andrew Buxton, Eleni Matechou, Jim Griffin, Alex Diana, Richard A. Griffiths

Abstract: Ecological surveys risk incurring false negative and false positive detections of the target species. With indirect survey methods, such as environmental DNA, such errors can occur at two stages: sample collection and laboratory analysis. Here we analyse a large qPCR-based eDNA data set using two occupancy models: one by Griffin et al. (J R Stat Soc Ser C Appl Stat 69: 377–392, 2020) that accounts for false positive error, and one by Stratton et al. (Methods Ecol Evol 11: 1113–1120, 2020) that assumes no false positive error. Additionally, we apply the Griffin et al. (2020) model to simulated data to determine optimal levels of replication at both sampling stages. The Stratton et al. (2020) model, which assumes no false positive results, consistently overestimated both overall and individual site occupancy compared with the Griffin et al. (2020) model and with previous estimates of pond occupancy for the target species. Including replication at both stages of eDNA analysis (sample collection and laboratory analysis) reduces both bias and credible-interval width in estimates of occupancy and detectability. Even collecting more than one sample from a site can improve parameter estimates more than increasing the number of laboratory replicates alone.
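
The two-stage error structure can be made concrete with a small likelihood calculation: a site is occupied with probability psi, each water sample from it contains target DNA with probability theta (or, for an unoccupied site, contaminant DNA with a small sample-level false positive probability), and each qPCR replicate returns a detection with probability p11 if DNA is present in the sample and p10 otherwise. The sketch below computes the probability of one site's replicate-level detection history under that generic multi-scale model; the parameter names are placeholders and this is not the exact parameterisation of either cited model.

```python
import numpy as np

def site_history_prob(detections, psi, theta, p10_sample, p11, p10):
    """Probability of a site's qPCR detection history under a two-stage model.

    detections: list of per-sample 0/1 replicate results,
                e.g. [[1, 0, 1], [0, 0, 0]] for two water samples x three replicates.
    """
    def sample_prob(reps, dna_prob):
        # Marginalise over whether target DNA made it into this sample.
        reps = np.asarray(reps)
        k, n = reps.sum(), reps.size
        with_dna = dna_prob * p11**k * (1 - p11)**(n - k)
        without = (1 - dna_prob) * p10**k * (1 - p10)**(n - k)
        return with_dna + without

    occupied = psi * np.prod([sample_prob(r, theta) for r in detections])
    unoccupied = (1 - psi) * np.prod([sample_prob(r, p10_sample) for r in detections])
    return occupied + unoccupied

# Example: two samples, three qPCR replicates each.
print(site_history_prob([[1, 0, 1], [0, 0, 0]],
                        psi=0.6, theta=0.7, p10_sample=0.05, p11=0.8, p10=0.02))
```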


2021
Author(s): Tshifhiwa Nkwenika, Samuel Manda

Abstract
Background: Death certification remains a challenge, mostly in low-resource countries, which results in poor availability and incompleteness of vital statistics. In such settings, the derivation and application of public health and development policies concerning the burden of disease are limited. This study aimed to develop and evaluate appropriate cause-specific mortality risk scores using Verbal Autopsy (VA) data.
Methods: A logistic regression model was used to identify independent predictors of death from non-communicable diseases (NCDs), AIDS/TB, and communicable diseases (CDs). Risk scores were derived using a point scoring system. Receiver operating characteristic (ROC) curves were used to validate the models by matching the number of reported deaths to the number of deaths predicted by the models.
Results: The models provided accurate predictions, with sensitivities of 86%, 46%, and 40% and false-positive error rates of 44%, 11%, and 12% for NCDs, AIDS/TB, and CDs, respectively.
Conclusion: This study has shown that, in low- and middle-income countries, simple risk scores based on information collected with a verbal autopsy questionnaire can be used to adequately assign causes of death for non-communicable diseases and AIDS/TB.
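
A common way to build such a point score (a generic sketch, not the authors' exact scoring system) is to fit a logistic regression, scale and round the coefficients to small integers, sum the resulting points for each death, and choose a cut-off from the ROC curve. The code below uses scikit-learn with made-up binary VA symptom indicators.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_curve

rng = np.random.default_rng(1)
X = rng.integers(0, 2, size=(500, 6))      # made-up binary VA symptom indicators
y = rng.integers(0, 2, size=500)           # 1 = death from the cause of interest

model = LogisticRegression().fit(X, y)

# Point scoring: scale the coefficients and round to small integers.
points = np.round(model.coef_[0] / np.abs(model.coef_[0]).max() * 5).astype(int)
scores = X @ points

# Choose a cut-off and report sensitivity and false positive error rate.
fpr, tpr, thresholds = roc_curve(y, scores)
best = np.argmax(tpr - fpr)                # Youden's J as one simple criterion
print(f"cut-off {thresholds[best]}: sensitivity {tpr[best]:.2f}, FPR {fpr[best]:.2f}")
```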


2021 · Vol 2 (1) · pp. p1
Author(s): Edward J. Lusk

Focus: Decision-making is often aided by examining false positive error-risk profiles [FPEs]. This research report addresses the decision-making jeopardy that one invites by eschewing the exact factorial-binomial probability values used to form the FPEs in favor of (i) normal approximations [NA] or (ii) continuity-corrected normal approximations [CCNA].
Results: In an audit context where testing sample sizes for Re-Performance & Re-Calculation protocols are, by economic necessity, in the range of 20 to 100 account items, there are indications that audit decisions would benefit from using the exact probability values. Specifically, using a jeopardy screen of ±2.5% created by benchmarking the NA and the CCNA against the exact FPEs, it is observed that: (i) for sample sizes of 100 there is little difference between the exact and the CCNA FPEs; (ii) almost uniformly, for both sample extremes of 20 and 100, the FPEs created using the NA are lower and fall outside the jeopardy screen; and (iii) for the CCNA arm with sample sizes of n = 20, the CCNA FPEs are only sometimes inside the jeopardy screen. These results call into question the practice of not using the exact factorial-binomial results. Finally, an illustrative example is offered of an a priori FPE-risk decision grid that can be parametrized and used in a decision-making context.
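
The comparison described here can be reproduced in outline: for a sample of n items with k observed deviations, compute the upper-tail probability P(X ≥ k) under a hypothesized deviation rate exactly from the binomial distribution, then with the normal approximation with and without the 0.5 continuity correction, and check whether each approximation stays within a ±2.5% screen of the exact value. The deviation rate and k below are made-up inputs, not figures from the report.

```python
import numpy as np
from scipy.stats import binom, norm

def upper_tail_probs(n, k, p):
    """P(X >= k) for X ~ Binomial(n, p): exact, normal approx., continuity-corrected."""
    exact = binom.sf(k - 1, n, p)               # exact factorial-binomial tail
    mu, sd = n * p, np.sqrt(n * p * (1 - p))
    na = norm.sf(k, loc=mu, scale=sd)           # plain normal approximation
    ccna = norm.sf(k - 0.5, loc=mu, scale=sd)   # with the 0.5 continuity correction
    return exact, na, ccna

screen = 0.025
for n in (20, 100):
    exact, na, ccna = upper_tail_probs(n, k=3, p=0.05)   # illustrative inputs
    print(n, round(exact, 4), round(na, 4), round(ccna, 4),
          "NA within screen:", abs(na - exact) <= screen,
          "CCNA within screen:", abs(ccna - exact) <= screen)
```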


2020
Author(s): Chen Mo, Zhenyao Ye, Kathryn Hatch, Yuan Zhang, Qiong Wu, ...

Abstract: Fine-mapping is an analytical step for causal prioritization of the polymorphic variants in a trait-associated genomic region identified by genome-wide association studies (GWAS). The prioritization of causal variants can be challenging due to the linkage disequilibrium (LD) patterns among the hundreds to thousands of polymorphisms associated with a trait. We propose a novel ℓ0 graph norm shrinkage algorithm to select causal variants from dense LD blocks consisting of highly correlated SNPs that need not be proximal or contiguous. We extract dense LD blocks and perform regression shrinkage to calculate a prioritization score for selecting a parsimonious set of causal variants. Our approach is computationally efficient and allows fine-mapping of thousands of polymorphisms. We demonstrate its application using a large UK Biobank (UKBB) sample related to nicotine addiction. Our results suggest that polymorphic variance in both neighboring and distant variants can be consolidated into dense blocks of highly correlated loci. Simulations were used to evaluate and compare the performance of our method and existing fine-mapping algorithms. The results demonstrate that our method outperformed comparable fine-mapping methods, with increased sensitivity and a reduced false-positive error rate in causal variant selection. Applying the method to a smoking-severity trait in the UKBB sample replicated previously reported loci and suggested the causal prioritization of genetic effects on nicotine dependency.
Author summary: Disentangling the complex linkage disequilibrium (LD) pattern and selecting the underlying causal variants have been a long-term challenge for genetic fine-mapping. We find that the LD pattern within GWAS loci is intrinsically organized in delicate graph topological structures, which can be effectively learned by our novel ℓ0 graph norm shrinkage algorithm. The extracted LD graph structure is critical for causal variant selection. Moreover, our method is less constrained by the width of GWAS loci and can therefore fine-map a massive number of correlated SNPs.
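
As a companion to the block-extraction sketch shown earlier, the shrinkage step can be illustrated generically: regress the phenotype on the SNPs of each dense block with an L1 penalty and use the absolute shrunken coefficients as prioritization scores. This is an illustrative stand-in for the idea, not the paper's ℓ0 graph norm estimator or its scoring formula; the function names and the penalty value are placeholders.

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.preprocessing import StandardScaler

def block_prioritization_scores(genotypes, phenotype, blocks, alpha=0.01):
    """Rank SNPs within each dense LD block by shrunken effect size.

    genotypes: (n_samples, n_snps) dosage matrix; phenotype: (n_samples,) vector;
    blocks: list of SNP index arrays, e.g. from dense_ld_blocks() above.
    Returns {snp_index: score}, larger meaning higher causal priority.
    """
    scores = {}
    y = phenotype - phenotype.mean()
    for idx in blocks:
        Xb = StandardScaler().fit_transform(genotypes[:, idx])
        coefs = Lasso(alpha=alpha).fit(Xb, y).coef_
        for snp, beta in zip(idx, coefs):
            scores[int(snp)] = float(abs(beta))
    return scores
```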


2020 · pp. jclinpath-2020-206726
Author(s): Cornelia Margaret Szecsei, Jon D Oxley

Aim: To examine the effects of specialist reporting on error rates in prostate core biopsy diagnosis.
Method: Biopsies were reported by eight specialist uropathologists over 3 years. New cancer diagnoses were double-reported, and all biopsies were reviewed for the multidisciplinary team (MDT) meeting. Diagnostic alterations were recorded in supplementary reports, and error rates were compared with those from a decade earlier.
Results: 2600 biopsies were reported; 64.1% contained adenocarcinoma, a 19.7% increase. The false-positive error rate had fallen from 0.4% to 0.06%. The false-negative error rate had increased from 1.5% to 1.8%, but this represented fewer absolute errors because of the increased cancer incidence.
Conclusions: Specialisation and double-reporting have reduced false-positive errors. MDT review of negative cores continues to identify a very small number of false-negative errors. Our data represent a ‘gold standard’ for prostate biopsy diagnostic error rates. Increased use of MRI-targeted biopsies may alter error rates and their future clinical significance.

