true parameter
Recently Published Documents

TOTAL DOCUMENTS: 87 (FIVE YEARS: 31)
H-INDEX: 14 (FIVE YEARS: 2)

2022 ◽  
Vol 8 (1) ◽  
Author(s):  
Tailong Xiao ◽  
Jianping Fan ◽  
Guihua Zeng

Abstract Parameter estimation is a pivotal task in which quantum technologies can greatly enhance precision. We investigate time-dependent parameter estimation based on deep reinforcement learning, deriving the noise-free and noisy bounds of parameter estimation from a geometrical perspective. We propose a physically inspired linear time-correlated control ansatz and a general, well-defined reward function integrated with the derived bounds to accelerate network training for fast generation of quantum control signals. Using the proposed scheme, we validate the performance of time-dependent and time-independent parameter estimation under noise-free and noisy dynamics. In particular, we evaluate the transferability of the scheme when the estimation parameter is shifted from the true parameter. Simulations showcase the robustness and sample efficiency of the scheme, which achieves state-of-the-art performance. Our work highlights the universality and global optimality of deep reinforcement learning over conventional methods for practical parameter estimation in quantum sensing.
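
For reference, the precision limits invoked here are conventionally expressed through the quantum Cramér–Rao bound; the following is a standard textbook statement of that kind of bound, not the paper's specific time-dependent derivation:

\[
\mathrm{Var}\big(\hat{\theta}\big) \;\ge\; \frac{1}{\nu\, F_Q(\theta)},
\qquad
F_Q(\theta) = \mathrm{Tr}\!\big[\rho_\theta L_\theta^2\big],
\qquad
\partial_\theta \rho_\theta = \tfrac{1}{2}\big(L_\theta \rho_\theta + \rho_\theta L_\theta\big),
\]

where \(\nu\) is the number of experimental repetitions, \(F_Q\) the quantum Fisher information of the parameter-dependent state \(\rho_\theta\), and \(L_\theta\) the symmetric logarithmic derivative.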


2022 ◽  
Author(s):  
Hanne Kekkonen

Abstract We consider the statistical non-linear inverse problem of recovering the absorption term f > 0 in the heat equation, with sufficiently smooth given functions describing the boundary and initial values, respectively. The data consist of N discrete noisy point evaluations of the solution u_f. We study the statistical performance of Bayesian nonparametric procedures based on a large class of Gaussian process priors. We show that, as the number of measurements increases, the resulting posterior distributions concentrate around the true parameter generating the data, and we derive a convergence rate for the reconstruction error of the associated posterior means. We also consider the optimality of the contraction rates, prove a lower bound on the minimax convergence rate for inferring f from the data, and show that optimal rates can be achieved with truncated Gaussian priors.
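
Schematically, a posterior contraction result of the kind described takes the following form (notation illustrative, not the paper's exact theorem), with f_0 the true absorption term, D_N the N noisy point evaluations, and Π(· | D_N) the posterior:

\[
\Pi\big( f : \|f - f_0\|_{L^2} > M\,\varepsilon_N \;\big|\; D_N \big) \;\longrightarrow\; 0
\quad \text{in probability as } N \to \infty,
\]

for a rate \(\varepsilon_N \to 0\) depending on the smoothness of f_0 and on the prior; the associated posterior mean \(\bar{f}_N = E[f \mid D_N]\) then satisfies \(\|\bar{f}_N - f_0\|_{L^2} = O_P(\varepsilon_N)\), and a minimax lower bound limits how fast any such rate can be.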


2021 ◽  
Vol 162 (6) ◽  
pp. 304
Author(s):  
Jacob Golomb ◽  
Graça Rocha ◽  
Tiffany Meshkat ◽  
Michael Bottom ◽  
Dimitri Mawet ◽  
...  

Abstract The work presented here attempts to answer the following question: how do we decide whether a given detection in exoplanet direct-imaging data is a planet or just residual noise? To this end, we implement a metric intended to replace empirical frequentist-based detection thresholds. Our method, implemented within a Bayesian framework, introduces an "evidence-based" approach to help decide whether a given detection is a true planet or just noise. We apply this metric jointly with a post-processing technique, Karhunen–Loève Image Processing (KLIP), which models and subtracts the stellar PSF from the image. As a proof of concept, we implemented a new routine named PlanetEvidence that integrates the nested sampling technique (MultiNest) with the KLIP algorithm. This is a first step toward recasting such post-processing methods into a fully Bayesian perspective. We test our approach on real direct-imaging data, specifically GPI data of β Pictoris b, and on synthetic data. For the former, we find that the method strongly favors the presence of a planet (as expected) and recovers the true parameter posterior distributions. For the latter, our approach allows us to detect (true) dim sources invisible to the naked eye as real planets rather than background noise, and to set a new lower detection threshold at the ∼2.5σ level. Further, it allows us to quantify our confidence that a given detection is a real planet and not just residual noise.
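
The "evidence-based" criterion amounts to comparing model evidences computed by nested sampling; schematically (illustrative notation, not the paper's exact definitions):

\[
\mathcal{Z}_{\mathcal{M}} = \int \mathcal{L}(\mathbf{d} \mid \boldsymbol{\theta}, \mathcal{M})\, \pi(\boldsymbol{\theta} \mid \mathcal{M})\, \mathrm{d}\boldsymbol{\theta},
\qquad
\ln B = \ln \mathcal{Z}_{\text{planet}} - \ln \mathcal{Z}_{\text{no planet}},
\]

so a candidate is favoured as a real planet when the log-Bayes factor \(\ln B\) between the "planet present" and "residual noise only" models exceeds a chosen threshold.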


2021 ◽  
Vol 5 (5) ◽  
pp. 755-774
Author(s):  
Yadpirun Supharakonsakun

The Bayesian approach, a non-classical estimation technique, is widely used in statistical inference for real-world situations. The parameter is treated as a random variable, and knowledge of the prior distribution is used to update the parameter estimate. Herein, two Bayesian approaches for Poisson parameter estimation are proposed, derived from the posterior distribution under squared error or quadratic loss functions. Their performance was compared with frequentist (maximum likelihood) and empirical Bayes approaches through Monte Carlo simulations. The mean squared error (MSE) was used as the criterion for comparing the point-estimation methods; the smallest value indicates the best-performing method, with the estimated parameter value closest to the true parameter value. Coverage probabilities (CPs) and average lengths (ALs) were obtained to evaluate the methods for constructing confidence intervals. The results reveal that the Bayesian approaches were excellent for point estimation when the true parameter value was small (0.5, 1, and 2). In the credible-interval comparison, these methods obtained CP values close to the nominal 0.95 confidence level and the smallest ALs for large sample sizes (50 and 100) when the true parameter value was small (0.5, 1, and 2). Doi: 10.28991/esj-2021-01310
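
For intuition, here is a minimal Monte Carlo sketch of the kind of comparison described, assuming a conjugate Gamma(a, b) prior (the paper's actual prior and simulation settings may differ); under squared error loss the Bayes estimator is the posterior mean.

```python
import numpy as np

rng = np.random.default_rng(0)
true_lambda, n = 0.5, 30            # small true parameter, modest sample size (assumed values)
a, b = 1.0, 1.0                     # Gamma(a, b) prior with rate b (assumed prior)
n_rep = 10_000

mse_mle = mse_bayes = 0.0
for _ in range(n_rep):
    x = rng.poisson(true_lambda, size=n)
    lam_mle = x.mean()                          # frequentist (maximum likelihood) estimate
    lam_bayes = (a + x.sum()) / (b + n)         # posterior mean = Bayes estimator under squared error loss
    mse_mle += (lam_mle - true_lambda) ** 2
    mse_bayes += (lam_bayes - true_lambda) ** 2

print(f"MSE (MLE):   {mse_mle / n_rep:.5f}")
print(f"MSE (Bayes): {mse_bayes / n_rep:.5f}")
```

Depending on the prior and sample size, the shrinkage of the posterior mean toward the prior can yield a smaller MSE than the MLE, which is the pattern the abstract reports for small true parameter values.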


2021 ◽  
Vol 12 ◽  
Author(s):  
Nick van Osta ◽  
Feddo P. Kirkels ◽  
Tim van Loon ◽  
Tijmen Koopsen ◽  
Aurore Lyon ◽  
...  

Introduction: Computational models of the cardiovascular system are widely used to simulate cardiac (dys)function. Personalization of such models for patient-specific simulation of cardiac function remains challenging. Measurement uncertainty affects the accuracy of parameter estimates. In this study, we present a methodology for patient-specific estimation and uncertainty quantification of parameters in the closed-loop CircAdapt model of the human heart and circulation using echocardiographic deformation imaging. Based on the patient-specific estimated parameters, we aim to reveal the mechanical substrate underlying deformation abnormalities in patients with arrhythmogenic cardiomyopathy (AC).
Methods: We used adaptive multiple importance sampling to estimate the posterior distribution of regional myocardial tissue properties. This methodology is implemented in the CircAdapt cardiovascular modeling platform and applied to estimate active and passive tissue properties underlying regional deformation patterns, left ventricular volumes, and right ventricular diameter. First, we tested the accuracy of the method and its inter- and intraobserver variability using nine datasets obtained in AC patients. Second, we tested the trueness of the estimation using nine in silico generated virtual-patient datasets representative of various stages of AC. Finally, we applied the method to two longitudinal series of echocardiograms from two pathogenic mutation carriers without established myocardial disease at baseline.
Results: Tissue characteristics of virtual patients were accurately estimated, with a highest density interval containing the true parameter value of 9% (95% CI [0–79]). Variances of the estimated posterior distributions in patient data and virtual data were comparable, supporting the reliability of the patient estimations. Estimations were highly reproducible, with an overlap in posterior distributions of 89.9% (95% CI [60.1–95.9]). Clinically measured deformation, ejection fraction, and end-diastolic volume were accurately simulated. In the presence of worsening deformation over time, the estimated tissue properties also revealed functional deterioration.
Conclusion: This method facilitates patient-specific, simulation-based estimation of regional ventricular tissue properties from non-invasive imaging data, taking into account both measurement and model uncertainties. Two proof-of-principle case studies suggest that this cardiac digital twin technology enables quantitative monitoring of AC disease progression in early stages of the disease.
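
For readers unfamiliar with the sampler, the following is a generic adaptive multiple importance sampling loop on a toy two-parameter posterior; the target density, Gaussian proposal family, and update rule here are illustrative assumptions, not the CircAdapt implementation:

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(1)

# Toy un-normalised posterior standing in for the simulator-based likelihood (assumed).
target = multivariate_normal(mean=[1.0, -2.0], cov=[[1.0, 0.6], [0.6, 2.0]])

mu, cov = np.zeros(2), 4.0 * np.eye(2)            # initial Gaussian proposal
samples, proposals = [], []

for it in range(8):                               # adaptation iterations
    samples.append(rng.multivariate_normal(mu, cov, size=500))
    proposals.append((mu.copy(), cov.copy()))
    X = np.vstack(samples)
    # Deterministic-mixture weights: target density over the average of all past proposals.
    mix = np.mean([multivariate_normal(m, c).pdf(X) for m, c in proposals], axis=0)
    w = target.pdf(X) / mix
    w /= w.sum()
    mu = w @ X                                    # moment-match the proposal to the weighted sample
    cov = (X - mu).T @ ((X - mu) * w[:, None]) + 1e-6 * np.eye(2)

print("weighted posterior mean estimate:", mu)    # should approach [1.0, -2.0]
```

Each iteration reuses all past samples with deterministic-mixture weights, which is what distinguishes adaptive multiple importance sampling from plain adaptive importance sampling.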


Author(s):  
M. A. Yunusa ◽  
A. Audu ◽  
N. Musa ◽  
D. O. Beki ◽  
A. Rashida ◽  
...  

The estimation of the population coefficient of variation has been one of the challenging aspects of sample survey techniques over the past decades, and much effort has been devoted to developing estimators that produce efficient estimates of it. In this paper, we propose a logarithmic ratio-type estimator for estimating the population coefficient of variation, using a logarithmic transformation of both the population and sample variances of the auxiliary character. The expression for the mean squared error (MSE) of the proposed estimator has been derived using a first-order Taylor series approximation. Conditions under which the proposed estimator is more efficient than the other estimators in the study have also been derived. An empirical study was conducted using two sets of populations, and the results show that the proposed estimator is more efficient. This implies that the estimates from the proposed estimator will be closer to the true parameter than those of the other estimators in the study.
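
The first-order Taylor (delta-method) route to the MSE mentioned above has the generic form below; the specific logarithmic estimator and its design-dependent terms are not reproduced here:

\[
\widehat{\theta} = g(\hat{\boldsymbol{\eta}}),
\qquad
\mathrm{MSE}\big(\widehat{\theta}\big) \;\approx\; \nabla g(\boldsymbol{\eta})^{\!\top}\, \mathrm{Cov}\big(\hat{\boldsymbol{\eta}}\big)\, \nabla g(\boldsymbol{\eta}),
\]

obtained by expanding g around the true moment vector \(\boldsymbol{\eta}\) (here built from the means and variances of the study and auxiliary variables) and retaining only first-order terms, so that to this order the squared-bias contribution is negligible relative to the variance.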


Author(s):  
Oluwadare O Ojo

In this work, we describe a Bayesian procedure for detecting an unknown change point in a regression model. A Bayesian approach with posterior inference for change points was used to identify the optimal change point, while a Gibbs sampler was used to estimate the parameters of the change-point model. Simulation experiments show that all the posterior means are quite close to their true parameter values. The method is also recommended for multiple change points.
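
As a minimal illustration of the Gibbs-sampling idea, the sketch below uses a simplified mean-shift change-point model with known noise variance and conjugate normal priors, rather than the authors' full regression specification:

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated series: mean shifts from 0 to 2 at t = 60 (the true change point), known noise sigma.
T, k_true, sigma = 100, 60, 1.0
y = np.concatenate([rng.normal(0.0, sigma, k_true), rng.normal(2.0, sigma, T - k_true)])

tau2 = 100.0                                   # vague N(0, tau2) prior on each segment mean
mu1, mu2, k = 0.0, 0.0, T // 2
draws = []

for it in range(5000):
    # 1) Segment means given the change point (conjugate normal full conditionals).
    v1 = 1.0 / (k / sigma**2 + 1.0 / tau2)
    mu1 = rng.normal(v1 * y[:k].sum() / sigma**2, np.sqrt(v1))
    v2 = 1.0 / ((T - k) / sigma**2 + 1.0 / tau2)
    mu2 = rng.normal(v2 * y[k:].sum() / sigma**2, np.sqrt(v2))
    # 2) Change point given the means: discrete full conditional over k = 1, ..., T-1.
    sq1 = np.cumsum((y - mu1) ** 2)                     # prefix SSE under mu1
    sq2 = np.cumsum(((y - mu2) ** 2)[::-1])[::-1]       # suffix SSE under mu2
    loglik = -0.5 * (sq1[:-1] + sq2[1:]) / sigma**2
    p = np.exp(loglik - loglik.max())
    k = int(rng.choice(np.arange(1, T), p=p / p.sum()))
    if it >= 1000:                             # keep post burn-in draws of the change point
        draws.append(k)

print("posterior mode of the change point:", np.bincount(draws).argmax())   # should be near 60
```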


2021 ◽  
Vol 12 ◽  
Author(s):  
Hawre Jalal ◽  
Thomas A. Trikalinos ◽  
Fernando Alarid-Escudero

Purpose: Bayesian calibration is generally superior to standard direct-search algorithms in that it estimates the full joint posterior distribution of the calibrated parameters. However, there are many barriers to using Bayesian calibration in health decision sciences, stemming from the need to program complex models in probabilistic programming languages and the associated computational burden. In this paper, we propose artificial neural networks (ANNs) as one practical solution to these challenges.
Methods: Bayesian Calibration using Artificial Neural Networks (BayCANN) involves (1) training an ANN metamodel on a sample of model inputs and outputs, and (2) calibrating the trained ANN metamodel, instead of the full model, in a probabilistic programming language to obtain the joint posterior distribution of the calibrated parameters. We illustrate BayCANN using a colorectal cancer natural history model. We conduct a confirmatory simulation analysis by first obtaining parameter estimates from the literature and then using them to generate adenoma prevalence and cancer incidence targets. We compare the performance of BayCANN in recovering these "true" parameter values against performing a Bayesian calibration directly on the simulation model using an incremental mixture importance sampling (IMIS) algorithm.
Results: We were able to apply BayCANN using only a dataset of the model inputs and outputs and minor modification of BayCANN's code. In this example, BayCANN was slightly more accurate in recovering the true posterior parameter estimates than IMIS. Obtaining the dataset of samples and running BayCANN took 15 minutes, compared with 80 minutes for IMIS. In applications involving computationally more expensive simulations (e.g., microsimulations), BayCANN may offer higher relative speed gains.
Conclusions: BayCANN uses only a dataset of model inputs and outputs to obtain the calibrated joint parameter distributions. Thus, it can be adapted to models of various levels of complexity with minor or no change to its structure. In addition, BayCANN's efficiency can be especially useful in computationally expensive models. To facilitate BayCANN's wider adoption, we provide an open-source implementation in R and Stan.
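
A compressed sketch of the two BayCANN steps on a toy model follows, using an sklearn MLP as the metamodel and a plain random-walk Metropolis sampler as a stand-in for the Stan calibration (the toy simulator, prior, and settings are assumptions; the published implementation is in R and Stan):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)

def simulator(theta):
    """Toy 'simulation model' mapping 2 parameters to 3 calibration targets (assumed)."""
    a, b = theta
    return np.array([a + b, a * b, np.sin(a) + b**2])

# Step 1: train an ANN metamodel on a sample of model inputs and outputs.
X = rng.uniform(-2, 2, size=(2000, 2))
Y = np.array([simulator(t) for t in X])
ann = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0).fit(X, Y)

# Step 2: calibrate the metamodel to observed targets with random-walk Metropolis.
theta_true = np.array([0.8, -0.5])
targets = simulator(theta_true) + rng.normal(0, 0.05, 3)   # noisy calibration targets
sigma = 0.05

def log_post(theta):
    if np.any(np.abs(theta) > 2):                          # uniform prior on [-2, 2]^2
        return -np.inf
    pred = ann.predict(theta.reshape(1, -1))[0]            # metamodel replaces the simulator here
    return -0.5 * np.sum((pred - targets) ** 2) / sigma**2

theta, lp, chain = np.zeros(2), log_post(np.zeros(2)), []
for it in range(20000):
    prop = theta + rng.normal(0, 0.1, 2)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    chain.append(theta.copy())

print("posterior mean:", np.mean(chain[5000:], axis=0), "true:", theta_true)
```

Because the sampler only ever evaluates the trained metamodel, the expensive simulator is called solely to build the training set, which is where the reported speed gain comes from.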


2021 ◽  
Author(s):  
Ben Stephens Hemingway ◽  
Paul Swinton ◽  
Ben Ogorek

Fitting a fitness-fatigue model (FFM) via nonlinear least squares (NLS) in practice assumes that a unique optimal solution exists and can be found by the algorithm applied. However, this idealistic scenario may not hold for two reasons: 1) the absolute minimum may not be unique; and 2) local minima, saddle points, and/or plateau features may exist that cause problems for certain algorithms. If different parameter sets in the domain share the same global minimum under standard NLS, the parameters are not uniquely identified without additional constraints or regularisation terms. More likely, however, problems with the typical FFM fitting process will stem from the existence of local minima, saddles, or plateau features that cause the algorithm to converge to a solution not equal to the global minimum. Local optima can provoke sensitivities in the fitting process for first- and second-order algorithms, which are by definition local optimisers. This manifests as sensitivity to the initial parameter estimates (i.e., the starting point from which the algorithm initialises its search). The extent of starting-point sensitivity is largely unknown in the context of FFMs for the algorithms commonly adopted and has not been studied directly. Given this concern, research reporting a single model solution derived from 'one-shot' minimisation of NLS via typical first- and second-order algorithms is fundamentally limited by uncertainty as to whether the fitted estimates are global minimisers. Therefore, the primary aim of this study was to investigate the sensitivity of a classical first-order search algorithm to the selection of initial estimates when fitting an FFM via NLS, and subsequently to assess the existence of local optima. A secondary aim was to examine the implications of any findings in relation to previous research and provide considerations for future experimentation. The aims were addressed through a computer experiment (in silico) approach that adopted the deterministic assumption that the FFM completely specified the athlete's response. Under this assumption, two FFMs (standard and fitness-delay) were simulated under a set of hypothetical model inputs and manually selected 'true' parameter values (for each FFM), generating a set of synthetic performance data. The two FFMs were refitted to the synthetic performance data without noise (and under the same model inputs) by the quasi-Newton L-BFGS-B algorithm in a repetitive fashion initiated from multiple starting points in the parameter space, attempting at each search to recover the true parameter values. Estimates obtained from this process were then transformed into prediction errors quantifying in-sample model fit across the iterations and non-true solutions. Within the standard-model scenarios, 69.1-70.3% of the solutions found were the true parameters. In contrast, within the fitness-delay model scenarios, 17.6-17.9% of the solutions found were the true parameters. A large number of unique non-true solutions were found for both the standard model (N=275-353) and the fitness-delay model (N=383-550) in this idealistic environment. Many of the non-true extrema found by the algorithm were local minima or saddles. Strong in-sample model fit was also observed across the non-true solutions for both models.
Collectively, these results indicate that the typical NLS approach to fitting FFMs is harder for a hill-climbing algorithm to solve than previously recognised in the literature, particularly for models of higher complexity. The findings add weight to the hypothesis that there is substantial doubt in reported estimates across the prior literature where local optimisers have been used or where models more complex than the standard FFM have been applied, particularly when the optimisation procedures reported lack the detail needed to indicate that these issues were considered. Future research should consider the use of global optimisation algorithms, hybrid approaches, or different perspectives (e.g., Bayesian optimisation).
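
A condensed sketch of the multi-start experiment for a standard (Banister-style) two-component FFM is given below; the synthetic daily loads, 'true' parameter values, and bounds are illustrative assumptions rather than the study's actual inputs:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)

def ffm(params, w):
    """Standard two-component fitness-fatigue model prediction for days 1..T."""
    p0, k1, k2, tau1, tau2 = params
    days = np.arange(1, len(w) + 1)
    lag = days[:, None] - days[None, :]            # lag between performance day and load day
    lag = np.where(lag > 0, lag, np.inf)           # only loads strictly before the day contribute
    fitness = np.exp(-lag / tau1) @ w
    fatigue = np.exp(-lag / tau2) @ w
    return p0 + k1 * fitness - k2 * fatigue

def sse(params, w, y):
    return np.sum((ffm(params, w) - y) ** 2)

# Synthetic, noise-free data from manually selected 'true' parameters (assumed values).
w = rng.uniform(0, 1, 100)                          # hypothetical daily training loads
true = np.array([100.0, 1.0, 1.8, 42.0, 7.0])       # p0, k1, k2, tau1, tau2
y = ffm(true, w)

bounds = [(50, 150), (0.01, 5), (0.01, 5), (1, 80), (1, 80)]
solutions = []
for _ in range(50):                                 # repeated fits from random starting points
    x0 = np.array([rng.uniform(lo, hi) for lo, hi in bounds])
    res = minimize(sse, x0, args=(w, y), method="L-BFGS-B", bounds=bounds)
    solutions.append(res.x)

recovered = sum(np.allclose(s, true, rtol=1e-2) for s in solutions)
print(f"{recovered}/50 starts recovered the true parameters")
```

With noise-free data, any start that fails the recovery check has converged to a non-true extremum, which is the phenomenon the study quantifies.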


2021 ◽  
Author(s):  
Kristin Baltrusaitis ◽  
Craig Dalton ◽  
Sandra Carlson ◽  
Laura White

Introduction: Traditional surveillance methods have been enhanced by the emergence of online participatory syndromic surveillance systems that collect health-related digital data. These systems have many applications, including tracking the weekly prevalence of Influenza-Like Illness (ILI), predicting probable infection with Coronavirus Disease 2019 (COVID-19), and determining risk factors of ILI and COVID-19. However, not every volunteer consistently completes surveys. In this study, we assess how different missing data methods affect estimates of ILI burden using data from FluTracking, a participatory surveillance system in Australia.
Methods: We estimate the incidence rate, the incidence proportion, and weekly prevalence using five missing data methods: available case, complete case, assume missing is non-ILI, multiple imputation (MI), and delta (δ) MI, a flexible and transparent method for imputing missing data under Missing Not at Random (MNAR) assumptions. We evaluate these methods using simulated and FluTracking data.
Results: Our simulations show that the optimal missing data method depends on the measure of ILI burden and the underlying missingness model. Of note, the δ-MI method provides estimates of ILI burden that are similar to the true parameter under MNAR models. When we apply these methods to FluTracking, we find that the δ-MI method accurately predicted complete, end-of-season weekly prevalence estimates from real-time data.
Conclusion: Missing data is an important problem in participatory surveillance systems. Here, we show that accounting for missingness using statistical approaches leads to different inferences from the data.
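
As a toy illustration of the δ-MI idea for a binary weekly ILI indicator (the covariate-free imputation model, the simulated missingness mechanism, and the δ values are assumptions; FluTracking's actual models are richer):

```python
import numpy as np

rng = np.random.default_rng(5)

# One simulated survey week: true ILI status and MNAR missingness (ILI cases respond less often).
N, p_ili = 5000, 0.04
ili = rng.binomial(1, p_ili, N)
respond = rng.binomial(1, np.where(ili == 1, 0.5, 0.8)).astype(bool)
obs = ili[respond]

def delta_mi_prevalence(obs, n_missing, delta, M=50):
    """delta-MI: impute missing ILI status after shifting the observed log-odds by delta, then pool."""
    p_obs = obs.mean()
    logit = np.log(p_obs / (1 - p_obs)) + delta     # MNAR sensitivity shift on the log-odds scale
    p_mis = 1.0 / (1.0 + np.exp(-logit))
    draws = [np.concatenate([obs, rng.binomial(1, p_mis, n_missing)]).mean() for _ in range(M)]
    return np.mean(draws)                            # pooled prevalence across M imputations

n_missing = N - respond.sum()
print("true weekly prevalence:   ", ili.mean())
print("complete-case estimate:   ", obs.mean())
print("delta-MI, delta = 0 (MAR):", delta_mi_prevalence(obs, n_missing, 0.0))
print("delta-MI, delta = ln 4:   ", delta_mi_prevalence(obs, n_missing, np.log(4)))  # matches the simulated mechanism
```

Here δ = 0 reproduces a MAR imputation, while non-zero δ encodes an assumed difference in ILI odds between respondents and non-respondents; varying δ is the sensitivity analysis.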

