Universal Lower Bound for Finite-Sample Reconstruction Error and Its Relation to Prolate Spheroidal Functions

2018 ◽  
Vol 25 (1) ◽  
pp. 50-54 ◽  
Author(s):  
Talha Cihad Gulcu ◽  
Haldun M. Ozaktas


Author(s):  
ZHI-YONG LIU ◽  
HONG QIAO ◽  
LEI XU

By minimizing the mean square reconstruction error, multisets mixture learning (MML) provides a general approach for object detection in images. Because the object template is represented by a set of contour points, MML computes each sample's reconstruction error by enumerating the distances between the sample and all contour points, which is inefficient. In this paper, we develop the line segment approximation (LSA) algorithm to calculate the reconstruction error, and show theoretically and experimentally that it is more efficient than the enumeration method. Experiments also illustrate that the MML-based algorithm is more robust to noise than its generalized Hough transform (GHT) based counterpart.
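The enumeration the abstract refers to is a per-contour-point distance computation; the core primitive a line-segment approximation relies on instead is the point-to-segment distance. A minimal sketch of that primitive (function names are illustrative, not from the paper):

```python
import math

def point_segment_distance(p, a, b):
    """Euclidean distance from point p to the line segment from a to b.

    Replacing many per-point distances with a few segment distances is the
    idea behind approximating a contour by line segments.
    """
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    seg_len_sq = dx * dx + dy * dy
    if seg_len_sq == 0.0:                      # degenerate segment: a == b
        return math.hypot(px - ax, py - ay)
    # Project p onto the line through a and b, clamping to the segment.
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / seg_len_sq))
    cx, cy = ax + t * dx, ay + t * dy
    return math.hypot(px - cx, py - cy)

def distance_to_contour(p, segments):
    """Minimum distance from p to a contour given as a list of (a, b) segments."""
    return min(point_segment_distance(p, a, b) for a, b in segments)
```

With a contour of m points approximated by k << m segments, each reconstruction-error evaluation drops from m distance computations to k.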


2015 ◽  
Vol 25 (03) ◽  
pp. 187-205 ◽  
Author(s):  
Niccolò Cavazza ◽  
Massimo Ferri ◽  
Claudia Landi

An exact computation of the persistent Betti numbers of a submanifold [Formula: see text] of a Euclidean space is possible only in a theoretical setting. In practical situations, only a finite sample of [Formula: see text] is available. We show that, under suitable density conditions, it is possible to estimate the multidimensional persistent Betti numbers of [Formula: see text] from those of a union of balls centered on the sample points; this even yields the exact value in restricted areas of the domain. Using these inequalities, we improve a previous lower bound for the natural pseudodistance used to assess dissimilarity between the shapes of two objects from samplings of them. Similar inequalities are proved relating the multidimensional persistent Betti numbers of the ball union to those of a combinatorial description of it.
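Computing multidimensional persistent Betti numbers is well beyond a snippet, but the simplest instance, the 0th Betti number (number of connected components) of the ball union, can be computed from the sample alone: two radius-r balls intersect exactly when their centers are within 2r of each other. A minimal sketch of that special case (not the paper's construction; the function name is illustrative):

```python
def betti0_ball_union(points, r):
    """0th Betti number (connected-component count) of the union of
    radius-r balls centered at the given points.

    Two balls overlap iff their centers lie within 2*r, so beta_0 equals
    the number of components of that intersection graph, found here with
    a small union-find structure.
    """
    n = len(points)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]      # path halving
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            d2 = sum((a - b) ** 2 for a, b in zip(points[i], points[j]))
            if d2 <= (2 * r) ** 2:
                parent[find(i)] = find(j)      # merge overlapping balls

    return len({find(i) for i in range(n)})
```

Varying r and recording when components merge is exactly the 0-dimensional persistence information of the ball union.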


2011 ◽  
Vol 23 (7) ◽  
pp. 1862-1898 ◽  
Author(s):  
Nathan D. VanderKraats ◽  
Arunava Banerjee

For any memoryless communication channel with a binary-valued input and a one-dimensional real-valued output, we introduce a probabilistic lower bound on the mutual information given empirical observations of the channel. The bound is built on the Dvoretzky–Kiefer–Wolfowitz inequality and is distribution-free. A quadratic-time algorithm is described for computing the bound and its corresponding class-conditional distribution functions. We compare our approach to existing techniques and show that our bound is superior to a method inspired by Fano's inequality in which the continuous random variable is discretized.
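The Dvoretzky–Kiefer–Wolfowitz inequality underlying the bound gives a distribution-free confidence band around an empirical CDF: with probability at least 1 - alpha, the true CDF stays within eps = sqrt(ln(2/alpha) / (2n)) of the empirical one, uniformly in x. A minimal sketch of that band (the paper's full mutual-information bound is not reproduced here):

```python
import math

def dkw_band(samples, alpha=0.05):
    """Distribution-free DKW confidence band for the empirical CDF.

    Returns the sorted sample values, the lower and upper band evaluated
    at each order statistic, and the half-width eps.
    """
    n = len(samples)
    eps = math.sqrt(math.log(2.0 / alpha) / (2.0 * n))
    xs = sorted(samples)
    ecdf = [(i + 1) / n for i in range(n)]     # F_n at each order statistic
    lower = [max(0.0, f - eps) for f in ecdf]  # bands clipped to [0, 1]
    upper = [min(1.0, f + eps) for f in ecdf]
    return xs, lower, upper, eps
```

Because eps shrinks like 1/sqrt(n) regardless of the underlying distribution, any quantity computed from the banded CDFs inherits a finite-sample, distribution-free guarantee.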


PLoS ONE ◽  
2021 ◽  
Vol 16 (12) ◽  
pp. e0261889
Author(s):  
Meraj Hashemi ◽  
Kristan A. Schneider

Background
The UN’s Sustainable Development Goals are devoted to eradicating a range of infectious diseases to achieve global well-being. These efforts require monitoring disease transmission at a level that differentiates between pathogen variants at the genetic/molecular level. In fact, the advantages of genetic (molecular) measures like multiplicity of infection (MOI) over traditional metrics, e.g., R0, are being increasingly recognized. MOI refers to the presence of multiple pathogen variants within an infection due to multiple infective contacts. Maximum-likelihood (ML) methods have been proposed to derive MOI and pathogen-lineage frequencies from molecular data. However, these methods are biased.

Methods and findings
Based on a single molecular marker, we derive a bias-corrected ML estimator for MOI and pathogen-lineage frequencies. We further improve these estimators by heuristic adjustments that compensate for shortcomings in the derivation of the bias correction, which implicitly assumes that the data lie in the interior of the observational space. The finite-sample properties of the different variants of the bias-corrected estimators are investigated in a systematic simulation study. In particular, we investigate the performance of the estimators in terms of bias, variance, and robustness against model violations. The corrections successfully remove bias except for extreme parameters that likely yield uninformative data, which cannot sustain accurate parameter estimation. Heuristic adjustments further improve the bias correction, particularly for small sample sizes. The bias corrections also reduce the estimators’ variances, which coincide with the Cramér–Rao lower bound. The estimators are reasonably robust against model violations.

Conclusions
Applying bias corrections can substantially improve the quality of MOI estimates, particularly in areas of low as well as areas of high transmission, since in both cases estimates tend to be biased. The bias-corrected estimators are (almost) unbiased, and their variance coincides with the Cramér–Rao lower bound, suggesting that no further improvements are possible unless additional information is provided. Additional information can be obtained by combining data from several molecular markers, or by including information that allows stratifying the data into heterogeneous groups.
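The paper's correction is specific to its MOI model and is not reproduced here. As a generic, self-contained illustration of the pattern it describes (an ML estimator with finite-sample bias removed by a multiplicative correction), the sketch below uses the textbook case of the normal-variance MLE; all values are illustrative, not from the paper:

```python
import random

def variance_mle(xs):
    """Maximum-likelihood variance estimate (divides by n), which is
    biased downward by the factor (n - 1) / n in finite samples."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

random.seed(0)
n, trials = 5, 20000
# Simulate many small samples from a standard normal (true variance 1).
mle = [variance_mle([random.gauss(0.0, 1.0) for _ in range(n)])
       for _ in range(trials)]
corrected = [v * n / (n - 1) for v in mle]   # multiplicative bias correction
# The raw MLE averages about (n - 1) / n = 0.8; the corrected estimator
# averages about 1, mirroring the bias-removal pattern in the abstract.
```

The same logic, bias derived analytically and then removed, is what a bias-corrected ML estimator applies to the MOI likelihood.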


Author(s):  
Sh. Azhgaliyev ◽  
Sh. Abikenova

In this paper, we study the problem of reconstructing a function from values of its Radon transforms within the framework of the Computational (Numerical) Diameter (C(N)D) approach. C(N)D consists of solving two independent problems: obtaining lower bounds on the reconstruction error from exact information, and specifying a computing tool that attains the upper bounds (preferably coinciding with the lower bounds up to constants). The C(N)D approach is a mathematical model of experiments for describing various processes (physical, chemical, technical, etc.). An important role in setting up such experiments is played by the types of measuring instruments, reflected in C(N)D as types of functionals. The next important point is the choice of location and balancing of instruments, i.e., the selection of the functionals' parameters. The final step is to build an optimal computing tool using the obtained data. The most studied types of functionals are function values at points and Fourier coefficients. An important difference between this work and previously obtained results is the study of the approximation capabilities of another type of functional, the Radon transform, i.e., a mathematical model of tomography and similar technologies. This paper is devoted to obtaining lower bounds on the error of reconstructing functions from Sobolev and Korobov spaces.
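As a toy illustration of the kind of exact information Radon-type functionals provide (not the paper's construction), the sketch below computes the simplest discrete projections of a 2D array: line sums at angles 0° and 90°. Each value is the integral of the function along one horizontal or vertical line, which is the type of measurement a tomographic instrument delivers:

```python
def projections(image):
    """Discrete Radon-type data at 0 and 90 degrees: sums of a 2D array
    along its rows and along its columns."""
    rows = [sum(row) for row in image]          # horizontal line integrals
    cols = [sum(col) for col in zip(*image)]    # vertical line integrals
    return rows, cols

img = [[0, 1, 0],
       [2, 3, 1],
       [0, 1, 0]]
rows, cols = projections(img)
```

Real tomography samples many angles; choosing which lines to measure and how to weight them is exactly the "location and balancing of instruments" step that C(N)D formalizes as the choice of functionals' parameters.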


Author(s):  
Pranab K. Sen ◽  
Julio M. Singer ◽  
Antonio C. Pedroso de Lima

Methodology ◽  
2012 ◽  
Vol 8 (1) ◽  
pp. 23-38 ◽  
Author(s):  
Manuel C. Voelkle ◽  
Patrick E. McKnight

The use of latent curve models (LCMs) has increased almost exponentially during the last decade. Researchers often regard the LCM as a “new” method for analyzing change, with little attention paid to the fact that the technique was originally introduced as an “alternative to standard repeated measures ANOVA and first-order auto-regressive methods” (Meredith & Tisak, 1990, p. 107). In the first part of the paper, this close relationship is reviewed, and it is demonstrated how “traditional” methods, such as repeated measures ANOVA and MANOVA, can be formulated as LCMs. Given that latent curve modeling is essentially a large-sample technique, compared to “traditional” finite-sample approaches, the second part of the paper uses a Monte Carlo simulation to address the degree to which the more flexible LCMs can actually replace some of the older tests. In addition, a structural equation modeling alternative to Mauchly’s (1940) test of sphericity is explored. Although “traditional” methods may be expressed as special cases of more general LCMs, we found that the equivalence holds only asymptotically. For practical purposes, however, no approach always outperformed the alternatives in terms of power and Type I error, so the best method depends on the situation. We provide detailed recommendations on when to use which method.
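The paper's simulation design is not reproduced here, but the general Monte Carlo recipe it applies, simulating data under the null and recording how often a test rejects, can be sketched minimally. The example below estimates the empirical Type I error of a paired t-type test, assuming normally distributed null data and a normal approximation to the t critical value (both assumptions are illustrative):

```python
import math
import random

def type1_rate(n=50, trials=4000, seed=1):
    """Monte Carlo estimate of a paired test's Type I error under the
    null hypothesis of no change between two measurement occasions.

    For moderate n the paired t statistic is close to normal, so we
    reject when |t| > 1.96; the empirical rejection rate should land
    near the nominal 5% level.
    """
    rng = random.Random(seed)
    rejections = 0
    for _ in range(trials):
        diffs = [rng.gauss(0.0, 1.0) for _ in range(n)]   # true mean diff = 0
        m = sum(diffs) / n
        sd = math.sqrt(sum((d - m) ** 2 for d in diffs) / (n - 1))
        t = m / (sd / math.sqrt(n))
        if abs(t) > 1.96:
            rejections += 1
    return rejections / trials
```

Comparing such empirical rates (and the corresponding power under alternatives) across competing methods is exactly how a simulation study adjudicates between LCMs and the "traditional" tests.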


2006 ◽  
Vol 54 (3) ◽  
pp. 343-350 ◽  
Author(s):  
C. F. H. Longin ◽  
H. F. Utz ◽  
A. E. Melchinger ◽  
J.C. Reif

The optimum allocation of breeding resources is crucial for the efficiency of breeding programmes. The objectives were to (i) compare the selection gain ΔGk for finite and infinite sample sizes, (ii) compare ΔGk with the probability of identifying superior hybrids (Pk), and (iii) determine the optimum allocation of the number of hybrids and test locations in hybrid maize breeding using doubled haploids. Infinite compared with finite sample sizes led to almost identical optimum allocations of test resources, but to an inflation of ΔGk. This inflation decreased as the budget and the number of finally selected hybrids increased. A reasonable Pk was reached for hybrids belonging to the q = 1% best of the population. The optimum allocations for Pk(q) and ΔGk were similar, indicating that Pk(q) is promising for optimizing breeding programmes.
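The quantities ΔGk and the selected fraction q are linked through standard truncation-selection theory: for a normally distributed trait, the standardized selection intensity is i = φ(z)/q with z the (1 - q) quantile of the standard normal, and expected gain follows the classic breeder's equation. A hedged sketch (the heritability and phenotypic standard deviation below are illustrative values, not from the paper):

```python
from statistics import NormalDist

def selection_intensity(q):
    """Standardized selection intensity i for truncation selection of the
    best fraction q of a normally distributed trait: i = phi(z) / q,
    where z is the (1 - q) quantile of the standard normal."""
    nd = NormalDist()
    z = nd.inv_cdf(1.0 - q)
    return nd.pdf(z) / q

# Breeder's equation with illustrative (assumed) parameters:
h2, sigma_p = 0.5, 10.0        # heritability and phenotypic std. dev.
q = 0.01                       # select the best 1%, as in the abstract
gain = selection_intensity(q) * h2 * sigma_p
```

Selecting the best 1% gives i of roughly 2.67, so tightening q raises the per-cycle gain; the paper's point is that with finite samples this idealized gain is inflated, and the test-resource allocation must be optimized jointly with q.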

