Revisiting the STRmix™ likelihood ratio probability interval coverage considering multiple factors

2021 ◽  
Author(s):  
Jo-Anne Bright ◽  
Shan-I Lee ◽  
John Buckleton ◽ 
Duncan Alexander Taylor

In previously reported work, a method for applying a lower bound to the variation induced by the Monte Carlo effect was trialled. This is implemented in the widely used probabilistic genotyping system, STRmix™. The approach did not give the desired 99% coverage. However, the method for assigning the lower bound to the MCMC variability is only one of a number of layers of conservatism applied in a typical application. We tested all but one of these sources of variability collectively and term the result the near-global coverage. The near-global coverage for all tested samples was greater than 99.5% for inclusionary average LRs of known donors. This suggests that, when included in the probability interval method, the other layers of conservatism are more than adequate to compensate for the intermittent underperformance of the MCMC variability component. Running the MCMC for extended numbers of accepts was also shown to improve precision.
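The coverage question can be illustrated with a toy simulation (our sketch, not the STRmix™ procedure; the distribution, noise level, and replicate count are invented): repeatedly report one noisy log10(LR), assign it a one-sided 99% lower bound using a variability estimate from replicate runs, and count how often the bound sits below the "infinite-run" value.

```python
import numpy as np

rng = np.random.default_rng(7)

TRUE_LOG10_LR = 12.0    # assumed "infinite-run" value for the toy experiment
MC_SD = 0.15            # assumed run-to-run SD of log10(LR)
N_REPLICATES = 8        # replicate runs used to estimate that SD
N_TRIALS = 20_000
Z99 = 2.326             # one-sided 99% normal quantile

covered = 0
for _ in range(N_TRIALS):
    reported = rng.normal(TRUE_LOG10_LR, MC_SD)              # one reported run
    reps = rng.normal(TRUE_LOG10_LR, MC_SD, N_REPLICATES)    # replicate runs
    lower = reported - Z99 * reps.std(ddof=1)                # 99% lower bound
    covered += lower <= TRUE_LOG10_LR

print(f"empirical coverage: {covered / N_TRIALS:.4f} (nominal 0.99)")
```

Because the Monte Carlo variability must itself be estimated from a handful of replicates, the empirical coverage typically lands slightly below the nominal 99%, which is the flavour of shortfall the probability interval method has to absorb.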

2021 ◽  
Author(s):  
Jo-Anne Bright ◽  
Duncan Alexander Taylor ◽  
James Michael Curran ◽  
John Buckleton

Two methods for applying a lower bound to the variation induced by the Monte Carlo effect are trialled. One of these is implemented in the widely used probabilistic genotyping system, STRmix™. Neither approach gives the desired 99% coverage; in some cases the coverage is much lower than the desired 99%. The discrepancy (i.e. the distance, in log10(LR), between the bound corresponding to the desired coverage and the bound at the observed coverage) is not large. For example, the discrepancy of 0.23 for approach 1 suggests the lower bounds should be moved downwards by a factor of about 1.7 to achieve the desired 99% coverage. Although less effective than desired, these methods provide a layer of conservatism that is additional to the other layers. These other layers arise from factors such as the conservatism within the sub-population model, the choice of conservative measures of co-ancestry, the consideration of relatives within the population, and the resampling method used for allele probabilities, all of which tend to understate the strength of the findings.
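The factor of 1.7 is simply the antilog of the discrepancy: reading 0.23 in $\log_{10}$ units, moving the lower bound down by 0.23 on the log scale corresponds to dividing the LR bound by $10^{0.23} \approx 1.70$.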


Author(s):  
Markus Kiderlen ◽  
Florian Pausinger

Abstract. We extend the notion of jittered sampling to arbitrary partitions and study the discrepancy of the related point sets. Let $\boldsymbol{\Omega}=(\Omega_1,\ldots,\Omega_N)$ be a partition of $[0,1]^d$ and let the $i$th point in $\mathcal{P}$ be chosen uniformly in the $i$th set of the partition (and stochastically independent of the other points), $i=1,\ldots,N$. For the study of such sets we introduce the concept of a uniformly distributed triangular array and compare this notion to related notions in the literature. We prove that the expected $\mathcal{L}_p$-discrepancy, $\mathbb{E}\,\mathcal{L}_p(\mathcal{P}_{\boldsymbol{\Omega}})^p$, of a point set $\mathcal{P}_{\boldsymbol{\Omega}}$ generated from any equivolume partition $\boldsymbol{\Omega}$ is always strictly smaller than the expected $\mathcal{L}_p$-discrepancy of a set of $N$ uniform random samples for $p>1$. For fixed $N$ we consider classes of stratified samples based on equivolume partitions of the unit cube into convex sets or into sets with a uniform positive lower bound on their reach. It is shown that these classes contain at least one minimizer of the expected $\mathcal{L}_p$-discrepancy. We illustrate our results with explicit constructions for small $N$. In addition, we present a family of partitions that seems to improve the expected discrepancy of Monte Carlo sampling by a factor of 2 for every $N$.
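A minimal sketch of the comparison (our construction, not code from the paper) for the simplest equivolume partition, an axis-aligned $m^d$ grid with $N=m^d$: draw one uniform point per cell and compare the expected squared $\mathcal{L}_2$ star discrepancy against $N$ i.i.d. uniform points, using Warnock's closed-form formula.

```python
import numpy as np

rng = np.random.default_rng(0)

def l2_star_discrepancy_sq(x):
    """Warnock's closed form for the squared L2 star discrepancy."""
    n, d = x.shape
    term1 = 3.0 ** (-d)
    term2 = np.prod((1.0 - x**2) / 2.0, axis=1).sum() * 2.0 / n
    pair_max = np.maximum(x[:, None, :], x[None, :, :])   # shape (n, n, d)
    term3 = np.prod(1.0 - pair_max, axis=2).sum() / n**2
    return term1 - term2 + term3

def jittered(m, d):
    """One uniform point per cell of the m^d equivolume grid partition."""
    corners = np.stack(np.meshgrid(*[np.arange(m)] * d, indexing="ij"),
                       axis=-1).reshape(-1, d)
    return (corners + rng.random(corners.shape)) / m

m, d = 8, 2
N = m ** d
trials = 200
jit = np.mean([l2_star_discrepancy_sq(jittered(m, d)) for _ in range(trials)])
iid = np.mean([l2_star_discrepancy_sq(rng.random((N, d))) for _ in range(trials)])
print(f"E[L2^2], jittered grid: {jit:.3e}   i.i.d. uniform: {iid:.3e}")
```

The grid is only one member of the partition classes studied above, but for $p=2$ the run illustrates the strict improvement of stratified over plain Monte Carlo sampling.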


2021 ◽  
Vol 15 (5) ◽  
pp. 1-32
Author(s):  
Quang-huy Duong ◽  
Heri Ramampiaro ◽  
Kjetil Nørvåg ◽  
Thu-lan Dam

Dense subregion (subgraph & subtensor) detection is a well-studied area with a wide range of applications, and numerous efficient approaches and algorithms have been proposed. Approximation approaches are commonly used for detecting dense subregions due to the complexity of the exact methods. Existing algorithms are generally efficient for dense subtensor and subgraph detection, and can perform well in many applications. However, most of the existing works utilize the state-of-the-art greedy 2-approximation algorithm, which provides solutions with only a loose theoretical density guarantee. The main drawback of most of these algorithms is that they can estimate only one subtensor, or subgraph, at a time, with a low guarantee on its density. While some methods can, on the other hand, estimate multiple subtensors, they can give a guarantee on the density with respect to the input tensor for the first estimated subtensor only. We address these drawbacks by providing both a theoretical and a practical solution for estimating multiple dense subtensors in tensor data and giving a higher lower bound on the density. In particular, we guarantee and prove a higher bound on the lower-bound density of the estimated subgraphs and subtensors. We also propose a novel approach to show that there are multiple dense subtensors with a guarantee on their density that is greater than the lower bound used in the state-of-the-art algorithms. We evaluate our approach with extensive experiments on several real-world datasets, which demonstrate its efficiency and feasibility.
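The greedy 2-approximation referred to above is, in the graph case, Charikar's peeling algorithm: repeatedly delete a minimum-degree vertex and keep the densest intermediate subgraph, with density measured as |E|/|V|. A compact sketch of that single-subgraph baseline (the paper's contribution, multiple subtensors with tighter bounds, is not reproduced here):

```python
from collections import defaultdict

def densest_subgraph_peel(edges):
    """Charikar's greedy peeling: 2-approximation for max |E|/|V| density."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    alive = set(adj)
    m = len(edges)
    best_density, best_set = 0.0, set(alive)
    while alive:
        density = m / len(alive)
        if density > best_density:
            best_density, best_set = density, set(alive)
        u = min(alive, key=lambda w: len(adj[w]))   # minimum-degree vertex
        for w in adj[u]:
            adj[w].discard(u)                        # detach u from neighbours
        m -= len(adj[u])
        adj.pop(u)
        alive.remove(u)
    return best_set, best_density

# K4 plus a pendant vertex: the densest subgraph is the K4, density 6/4 = 1.5.
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3), (3, 4)]
print(densest_subgraph_peel(edges))   # ({0, 1, 2, 3}, 1.5)
```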


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
R. A. Abdelghany ◽  
A.-B. A. Mohamed ◽  
M. Tammam ◽  
Watson Kuo ◽  
H. Eleuch

Abstract. We formulate the tripartite entropic uncertainty relation and predict its lower bound in a three-qubit Heisenberg XXZ spin chain when measuring an arbitrary pair of incompatible observables on one qubit while the other two serve as quantum memories. Our study reveals that the entanglement between the nearest neighbours plays an important role in reducing the uncertainty in measurement outcomes. In addition, we show that Dolatkhah's lower bound (Phys Rev A 102(5):052227, 2020) is tighter than that of Ming (Phys Rev A 102(1):012206, 2020) and that their dynamics under phase decoherence depends on the choice of the observable pair. In the absence of phase decoherence, Ming's lower bound is time-invariant regardless of the chosen observable pair, while Dolatkhah's lower bound is perfectly identical with the tripartite uncertainty for a specific choice of pair.
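A numerical sanity check is straightforward for the baseline tripartite relation $S(X|B)+S(Z|C)\ge -\log_2 c$ (Renes–Boileau), which the Ming and Dolatkhah bounds refine. The sketch below (our toy, numpy only, and not the bounds from the cited papers) evaluates it for Pauli X/Z measurements on qubit A of a ground state of a three-qubit XXZ chain; the couplings J and Δ are illustrative values, not the paper's parameters.

```python
import numpy as np

# Pauli matrices
I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def kron3(a, b, c):
    return np.kron(a, np.kron(b, c))

# Three-qubit open XXZ chain; J and Delta are illustrative values only.
J, Delta = 1.0, 0.5
H = J * (kron3(sx, sx, I2) + kron3(sy, sy, I2) + Delta * kron3(sz, sz, I2)
         + kron3(I2, sx, sx) + kron3(I2, sy, sy) + Delta * kron3(I2, sz, sz))

_, vecs = np.linalg.eigh(H)
psi = vecs[:, 0]                          # a ground state (qubit order A, B, C)
rho = np.outer(psi, psi.conj())

def entropy(r):
    """Von Neumann entropy in bits."""
    p = np.linalg.eigvalsh(r)
    p = p[p > 1e-12]
    return float(-(p * np.log2(p)).sum())

def ptrace_keep(r, keep):
    """Reduce the 3-qubit state to qubit 'B' or 'C'."""
    R = r.reshape(2, 2, 2, 2, 2, 2)
    return (np.einsum('abcadc->bd', R) if keep == 'B'
            else np.einsum('abcabd->cd', R))

def cond_entropy_after_measuring(rho, obs, keep):
    """S(X|memory): measure `obs` on qubit A, keep qubit `keep` as memory."""
    _, v = np.linalg.eigh(obs)
    rho_xm = np.zeros((4, 4), dtype=complex)
    for x in range(2):
        P = np.outer(v[:, x], v[:, x].conj())
        M = kron3(P, I2, I2)
        sigma = ptrace_keep(M @ rho @ M, keep)   # unnormalised, trace = p_x
        flag = np.zeros((2, 2), dtype=complex)
        flag[x, x] = 1.0
        rho_xm += np.kron(flag, sigma)           # classical-quantum state
    return entropy(rho_xm) - entropy(ptrace_keep(rho, keep))

lhs = (cond_entropy_after_measuring(rho, sx, 'B')
       + cond_entropy_after_measuring(rho, sz, 'C'))
c = 0.5   # max overlap |<x_i|z_j>|^2 of the Pauli X and Z eigenbases
print(f"S(X|B) + S(Z|C) = {lhs:.4f} >= -log2(c) = {-np.log2(c):.4f}")
```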


1970 ◽  
Vol 37 (2) ◽  
pp. 267-270 ◽  
Author(s):  
D. Pnueli

A method is presented to obtain both upper and lower bounds to eigenvalues when a variational formulation of the problem exists. The method consists of a systematic shift in the weight function. A detailed procedure is offered for one-dimensional problems, which makes improvement of the bounds possible and which involves the same order of detailed computation as the Rayleigh-Ritz method. The main contribution of this method is that it yields the "other bound", i.e., the one which cannot be obtained by the Rayleigh-Ritz method.
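For orientation, the bound that Rayleigh-Ritz does supply is the upper one, via the Rayleigh quotient. A standard textbook example (ours, not the paper's) for $-u'' = \lambda u$ on $[0,1]$ with $u(0)=u(1)=0$ and trial function $u=x(1-x)$:

$$\lambda_1 \;\le\; \frac{\int_0^1 (u')^2\,dx}{\int_0^1 u^2\,dx} \;=\; \frac{\int_0^1 (1-2x)^2\,dx}{\int_0^1 x^2(1-x)^2\,dx} \;=\; \frac{1/3}{1/30} \;=\; 10,$$

compared with the exact value $\lambda_1=\pi^2\approx 9.87$. The paper's weight-function shift targets the complementary lower bound, which no choice of trial function in the Rayleigh quotient can provide.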


2021 ◽  
Vol 12 (3) ◽  
pp. 150-156
Author(s):  
A. V. Galatenko ◽ 
V. A. Kuzovikhina

We propose an automata model of computer system security. A system is represented by a finite automaton with states partitioned into two subsets: "secure" and "insecure". System functioning is secure if the number of consecutive insecure states is not greater than some nonnegative integer k. This definition allows one to formally reflect responsiveness to security breaches. The set of all input sequences that preserve security for the given value of k is referred to as a k-secure language. We prove that if a language is k-secure for some natural k and automaton V, then it is also k′-secure for any 0 ≤ k′ < k and some automaton V′ = V′(k′). Reduction of the value of k is performed at the cost of an increase in the number of states. On the other hand, for any non-negative integer k there exists a k-secure language that is not k″-secure for any natural k″ > k. The problem of reconstruction of a k-secure language using a conditional experiment splits into two subcases. If the cardinality of the input alphabet is bounded by some constant, then the order of the Shannon function of experiment complexity is the same for all k; otherwise there emerges a lower bound of the order n^k.
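The security definition translates directly into code. The sketch below is a direct reading of the definition on a hypothetical two-state automaton (names and transition table are illustrative, not from the paper):

```python
def runs_securely(delta, q0, insecure, word, k):
    """True iff processing `word` never visits more than k consecutive
    insecure states. delta: dict mapping (state, symbol) -> state."""
    state = q0
    run = 1 if q0 in insecure else 0     # count the initial state as visited
    if run > k:
        return False
    for sym in word:
        state = delta[(state, sym)]
        run = run + 1 if state in insecure else 0
        if run > k:
            return False
    return True

# Toy automaton: 'a' drives it into the insecure state, 'b' recovers it.
delta = {("s", "a"): "i", ("s", "b"): "s",
         ("i", "a"): "i", ("i", "b"): "s"}
insecure = {"i"}
print(runs_securely(delta, "s", insecure, "abab", k=1))   # True
print(runs_securely(delta, "s", insecure, "aab", k=1))    # False: two in a row
```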


Algorithms ◽  
2018 ◽  
Vol 11 (11) ◽  
pp. 187
Author(s):  
Faisal Abu-Khzam ◽  
Henning Fernau ◽  
Ryuhei Uehara

The study of reconfiguration problems has grown into a field of its own. The basic idea is to consider the scenario of moving from one given (feasible) solution to another, maintaining feasibility for all intermediate solutions. The solution space is often represented by a "reconfiguration graph", where vertices represent solutions to the problem at hand and an edge between two vertices means that one can be obtained from the other in one step. A typical application background would be reorganization or repair work that has to be done without interruption to the service that is provided.
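A tiny worked instance (our illustration, not from the paper): the reconfiguration graph of the independent sets of the path 0-1-2 under the token-addition/removal rule, where two solutions are adjacent iff they differ in exactly one vertex.

```python
from itertools import combinations

def independent_sets(n, edges):
    """All independent sets of a graph on vertices 0..n-1 (brute force)."""
    e = {frozenset(x) for x in edges}
    return [frozenset(comb)
            for r in range(n + 1)
            for comb in combinations(range(n), r)
            if not any(frozenset(p) in e for p in combinations(comb, 2))]

def reconfiguration_edges(solutions):
    """Join solutions differing in exactly one element (add/remove a token)."""
    return [(a, b) for a, b in combinations(solutions, 2) if len(a ^ b) == 1]

# Path graph 0-1-2: independent sets and their one-step reconfiguration moves.
sols = independent_sets(3, [(0, 1), (1, 2)])
for a, b in reconfiguration_edges(sols):
    print(set(a), "<->", set(b))
```

Solving a reconfiguration problem then amounts to a connectivity or shortest-path question on this (usually exponentially large, hence only implicitly stored) graph.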


2018 ◽  
Vol 615 ◽  
pp. A62 ◽  
Author(s):  
G. Valle ◽  
M. Dell’Omodarme ◽  
P. G. Prada Moroni ◽  
S. Degl’Innocenti

Aims. The capability of grid-based techniques to estimate the age together with the convective core overshooting efficiency of stars in detached eclipsing binary systems has previously been investigated for main sequence stars. We have extended this investigation to later evolutionary stages and have evaluated the bias and variability in the recovered age and convective core overshooting parameter, accounting for both observational and internal uncertainties. Methods. We considered synthetic binary systems, whose age and overshooting efficiency should be recovered by applying the SCEPtER pipeline to the same grid of models used to build the mock stars. We focus our attention on a binary system composed of a 2.50 M⊙ primary star coupled with a 2.38 M⊙ secondary. To explore different evolutionary scenarios, we performed the estimation at three different times: when the primary is at the end of the central hydrogen burning, when it is at the bottom of the RGB, and when it is in the helium core burning phase. The Monte Carlo simulations have been carried out for two typical values of accuracy on the mass determination, that is, 1% and 0.1%. Results. Adopting typical observational uncertainties, we found that the recovered age and overshooting efficiency are biased towards low values in all three scenarios. For an uncertainty on the masses of 1%, the underestimation is particularly relevant for a primary in the central helium burning stage, reaching −8.5% in age and −0.04 (−25% relative error) in the overshooting parameter β. In the other scenarios, an undervaluation of the age by about 4% occurs. A large variability in the fitted values between Monte Carlo simulations was found: for an individual system calibration, the value of the overshooting parameter can vary from β = 0.0 to β = 0.26. When adopting a 0.1% error on the masses, the biases remain nearly unchanged but the global variability is suppressed by a factor of about two. We also explored the effect of a systematic discrepancy between the artificial systems and the model grid by accounting for an offset in the effective temperature of the stars of ±150 K. For a mass error of 1% the overshooting parameter is largely biased towards the edges of the explored range, while for the lower mass uncertainty it is basically unconstrained from 0.0 to 0.2. We also evaluated the possibility of individually recovering the β value for both binary stars. We found that this is impossible for a primary near central hydrogen exhaustion owing to huge biases for the primary star of +0.14 (90% relative error), while in the other cases the fitted β are consistent, but always biased by about −0.04 (−25% relative error). Finally, the possibility of distinguishing models computed with mild overshooting from models with no overshooting was evaluated, resulting in a reassuring power of distinction greater than 80%. However, the scenario with a primary in the central helium burning phase was a notable exception, showing a power of distinction lower than 5%.
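The flavour of these biases can be caricatured with a toy grid-recovery experiment (in no way the SCEPtER pipeline; the mapping, grid, and noise level are invented): a weakly sensitive observable plus Gaussian noise, inverted by grid search, yields a broad and slightly low-biased recovered parameter.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy 1-D "model grid": overshooting parameter beta -> one observable.
# Both the mapping and the noise level are illustrative assumptions only.
beta_grid = np.linspace(0.0, 0.3, 61)

def observable(beta):
    return 1.0 + 0.8 * beta**2          # weakly sensitive at low beta

TRUE_BETA = 0.16
SIGMA_OBS = 0.01                        # assumed observational uncertainty

fits = []
for _ in range(5000):
    y_obs = observable(TRUE_BETA) + rng.normal(0.0, SIGMA_OBS)
    chi2 = (observable(beta_grid) - y_obs) ** 2
    fits.append(beta_grid[np.argmin(chi2)])  # best-fitting grid point

# The mean recovered beta falls slightly below the true value: the concave
# inversion of a weakly sensitive observable, plus the beta >= 0 grid edge,
# pull the fits downward, and the scatter between trials is large.
fits = np.array(fits)
print(f"true beta = {TRUE_BETA}, mean fit = {fits.mean():.3f}, "
      f"central 68% range = [{np.quantile(fits, 0.16):.3f}, "
      f"{np.quantile(fits, 0.84):.3f}]")
```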

