Improved parameter estimation and uncertainty propagation in Bayesian Radiocarbon-dated Event Count (REC) models

2021 ◽  
Author(s):  
William Christopher Carleton ◽  
Dave Campbell

Data about the past contain chronological uncertainty that needs to be accounted for in statistical models. Recently, a method called Radiocarbon-dated Event Count (REC) modelling has been explored as a way to improve the handling of chronological uncertainty in the context of statistical regression. REC modelling has so far employed a Bayesian hierarchical framework for parameter estimation to account for chronological uncertainty in count series of radiocarbon dates. This approach, however, suffers from two limitations: it is computationally inefficient, which limits the amount of chronological uncertainty that can be accounted for, and the hierarchical framework can produce biased, but highly precise, parameter estimates. Here we report the results of an investigation in which we compared hierarchical REC models to an alternative, using simulated data and a new R package called "chronup". Our results indicate that the hierarchical framework can produce correct high-precision estimates given enough data, but it is susceptible to sampling bias and has an inflated Type I error rate. In contrast, the alternative better handles small samples and fully propagates chronological uncertainty into parameter estimates. In light of these results, we think the alternative method is more generally suitable for palaeo-science applications.
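
The core move in REC modelling is to turn calibrated radiocarbon-date densities into count series before regression. The following is a minimal base-R sketch of a single such draw, not the chronup implementation; the toy normal "calibrated densities", bin widths, and all object names are illustrative assumptions.

```r
## Minimal sketch of one REC regression draw (illustrative, not the chronup
## implementation). Each element of `date_densities` holds calendar years
## (`t`) and a toy calibrated probability (`p`) for one dated event.
set.seed(1)
date_densities <- lapply(seq(-5000, -4000, length.out = 50), function(mu) {
  t <- seq(mu - 200, mu + 200, by = 1)
  list(t = t, p = dnorm(t, mu, 50) / sum(dnorm(t, mu, 50)))
})

## 1. Sample one plausible calendar date per event from its density.
sampled_dates <- vapply(date_densities,
                        function(d) sample(d$t, 1, prob = d$p),
                        numeric(1))

## 2. Bin the sampled dates into an event-count series.
breaks <- seq(-5300, -3700, by = 100)
counts <- hist(sampled_dates, breaks = breaks, plot = FALSE)$counts
mid    <- head(breaks, -1) + 50

## 3. Fit a count-based regression (Poisson here) to the sampled sequence.
fit <- glm(counts ~ mid, family = poisson)
summary(fit)$coefficients
```

Repeating this draw over many sampled sequences is what allows chronological uncertainty to enter the parameter estimates (see the pooling sketch after the last abstract below).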

Methodology ◽  
2015 ◽  
Vol 11 (1) ◽  
pp. 3-12 ◽  
Author(s):  
Jochen Ranger ◽  
Jörg-Tobias Kuhn

In this manuscript, a new approach to the analysis of person fit is presented that is based on the information matrix test of White (1982). This test can be interpreted as a test of trait stability during the measurement situation. The test statistic approximately follows a χ2-distribution; in small samples, the approximation can be improved by a higher-order expansion. The performance of the test is explored in a simulation study, which suggests that the test adheres well to the nominal Type I error rate, although it tends to be conservative for very short scales. The power of the test is compared to the power of four alternative tests of person fit, and this comparison corroborates that the power of the information matrix test is similar to that of the alternatives. Advantages and areas of application of the information matrix test are discussed.
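
To make the idea concrete: White's test checks whether the outer-product-of-gradients and Hessian estimates of the information matrix agree. Below is a simplified base-R stand-in for a Rasch model with known item difficulties; it ignores the correction for estimating the trait level that the full test requires, so treat it as an illustration of the principle rather than Ranger and Kuhn's exact statistic.

```r
## Simplified information-matrix person-fit check for a Rasch model.
## `x` is one person's 0/1 response vector, `b` the known item
## difficulties, `theta` the trait level. Under a correctly specified,
## stable model, E[(x - p)^2 - p(1 - p)] = 0 item by item.
person_fit_imt <- function(x, b, theta) {
  p <- plogis(theta - b)                 # Rasch response probabilities
  d <- (x - p)^2 - p * (1 - p)           # OPG term minus expected information
  v <- sum(p * (1 - p) * (1 - 2 * p)^2)  # variance of sum(d) under the model
  stat <- sum(d)^2 / v
  c(statistic = stat,
    p_value = pchisq(stat, df = 1, lower.tail = FALSE))
}

set.seed(2)
b <- seq(-2, 2, length.out = 20)
theta <- 0.5
x <- rbinom(length(b), 1, plogis(theta - b))  # model-conforming responses
person_fit_imt(x, b, theta)
```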


Entropy ◽  
2021 ◽  
Vol 23 (4) ◽  
pp. 384 ◽
Author(s):  
Rocío Hernández-Sanjaime ◽  
Martín González ◽  
Antonio Peñalver ◽  
Jose J. López-Espín

The presence of unaccounted-for heterogeneity in simultaneous equation models (SEMs) is frequently problematic in many real-life applications. Under the usual assumption of homogeneity, the model can be seriously misspecified, potentially inducing substantial bias in the parameter estimates. This paper focuses on SEMs in which the data are heterogeneous and tend to form clustering structures in the endogenous-variable dataset. Because the identification of the different clusters is not straightforward, we provide a two-step strategy that first forms groups among the endogenous observations and then applies the standard simultaneous-equation scheme within them. Methodologically, the proposed approach is based on a variational Bayes learning algorithm and does not need to be run repeatedly for varying numbers of groups in order to identify the number that adequately fits the data. We describe the statistical theory, evaluate the performance of the suggested algorithm using simulated data, and apply the two-step method to a macroeconomic problem.
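
A minimal sketch of the two-step idea follows. It uses k-means as a crude stand-in for the paper's variational Bayes clustering step and hand-coded two-stage least squares as the standard simultaneous-equation estimator; the data-generating process and all variable names are illustrative assumptions.

```r
## Two-step sketch: cluster the endogenous observations, then apply a
## standard simultaneous-equation estimator (2SLS) within each group.
set.seed(3)
n <- 300
g <- rep(1:2, each = n / 2)                    # latent grouping
z <- rnorm(n)                                  # exogenous instrument
u <- rnorm(n)                                  # shared error -> endogeneity
y2 <- ifelse(g == 1, 4, -4) + 2 * z + u + rnorm(n)   # endogenous regressor
y1 <- ifelse(g == 1, 0.5, -0.5) * y2 + u + rnorm(n)  # structural equation

## Step 1: recover groups from the endogenous variables
## (k-means stands in for the variational Bayes step).
cl <- kmeans(cbind(y1, y2), centers = 2)$cluster

## Step 2: per-cluster 2SLS for y1 = beta * y2 + error, instrumenting with z.
for (k in 1:2) {
  idx    <- cl == k
  y2_hat <- fitted(lm(y2[idx] ~ z[idx]))   # first stage
  beta_k <- coef(lm(y1[idx] ~ y2_hat))[2]  # second stage
  cat(sprintf("cluster %d: beta = %.2f\n", k, beta_k))
}
```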


1989 ◽  
Vol 236 (1285) ◽  
pp. 385-416 ◽  

Patch-clamp data may be analysed in terms of Markov process models of channel-gating mechanisms. We present a maximum likelihood algorithm for estimating gating parameters from records in which only a single channel is present. Computer-simulated data for three different models of agonist-receptor-gated channels are used to demonstrate the performance of the procedure, and full details of the implementation are given for an example gating mechanism. The effects on parameter estimation of omitting brief openings and closings from the single-channel data are explored. A strategy for discriminating between alternative possible gating models, based on the Schwarz criterion, is described; omission of brief events is shown not to lead to incorrect model identification, except in extreme circumstances. Finally, the algorithm is extended to cover channel-gating models exhibiting multiple conductance levels.
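
For intuition, here is a minimal maximum-likelihood fit for the simplest gating scheme, a two-state closed/open channel with exponential dwell times. This sketch assumes idealised dwell-time data with no missed brief events; the paper's algorithm handles richer Markov schemes and event omission, which this does not.

```r
## Maximum-likelihood sketch for a two-state (closed <-> open) gating
## scheme, fitted to simulated dwell times.
set.seed(4)
alpha <- 50; beta <- 200                 # true opening/closing rates (s^-1)
t_closed <- rexp(500, rate = alpha)      # closed dwell times ~ Exp(alpha)
t_open   <- rexp(500, rate = beta)       # open dwell times ~ Exp(beta)

## Negative log-likelihood; rates parameterised on the log scale to
## keep them positive during optimisation.
negloglik <- function(log_rates) {
  a <- exp(log_rates[1]); b <- exp(log_rates[2])
  -(sum(dexp(t_closed, a, log = TRUE)) + sum(dexp(t_open, b, log = TRUE)))
}

fit <- optim(c(0, 0), negloglik)
exp(fit$par)                             # estimated (alpha, beta)

## Schwarz criterion (BIC) for comparing alternative gating models:
## 2 * NLL + (number of rates) * log(number of dwell times).
2 * fit$value + 2 * log(length(t_closed) + length(t_open))
```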


Author(s):  
Luan L. Lee ◽  
Miguel G. Lizarraga ◽  
Natanael R. Gomes ◽  
Alessandro L. Koerich

This paper describes a prototype for Brazilian bankcheck recognition. The description is divided into three topics: bankcheck information extraction, digit amount recognition, and signature verification. In bankcheck information extraction, our algorithms provide signature and digit-amount images free of background patterns and of the bankcheck's printed information. In digit amount recognition, we address digit amount segmentation and implement a complete numeral character recognition system involving image processing, feature extraction, and neural classification. In signature verification, we design and implement a static signature verification system suitable for banking and commercial applications; the verification algorithm is capable of detecting simple, random, and skilled forgeries. The proposed automatic bankcheck recognition prototype was tested intensively on real as well as simulated bankcheck data, with the following performance results: for skilled forgeries, a 4.7% equal error rate; for random forgeries, zero Type I error and a 7.3% Type II error; for bankcheck numerals, a 92.7% correct recognition rate.
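
As a reminder of how a figure like the reported 4.7% equal error rate (EER) is obtained, the sketch below computes an EER from verifier scores, i.e. the threshold at which the false-accept and false-reject rates coincide. The score distributions here are simulated stand-ins, not the paper's data.

```r
## Equal error rate from verification scores (simulated example).
set.seed(5)
genuine <- rnorm(500, mean = 2)   # scores for genuine signatures
forgery <- rnorm(500, mean = 0)   # scores for skilled forgeries

thresholds <- sort(c(genuine, forgery))
far <- sapply(thresholds, function(th) mean(forgery >= th))  # false accepts
frr <- sapply(thresholds, function(th) mean(genuine <  th))  # false rejects

i <- which.min(abs(far - frr))    # threshold where FAR and FRR cross
c(threshold = thresholds[i], eer = (far[i] + frr[i]) / 2)
```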


1985 ◽  
Vol 231 (1) ◽  
pp. 171-177 ◽  
Author(s):  
L Matyska ◽  
J Kovář

Four known jackknife methods (the standard jackknife, weighted jackknife, linear jackknife, and weighted linear jackknife) for determining parameters and their confidence regions were tested and compared with simple Marquardt's technique (including calculation of confidence intervals from the variance-covariance matrix). Simulated data corresponding to the Michaelis-Menten equation, with a defined structure and magnitude of error in the dependent variable, were used for fitting. There were no essential differences between the tested methods in either point or interval parameter estimates. Marquardt's procedure yielded slightly better results than the jackknives for five scattered data points, so its use is advisable for routine analyses. The classical jackknife was slightly superior to the other methods for 20 data points and can be recommended for very precise calculations when large numbers of data are available. Weighting does not appear necessary for this type of equation: parameter estimates obtained with constant weights were comparable to those calculated with weights corresponding exactly to the real error structure, whereas relative weighting led to rather worse results.
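
The standard (delete-one) jackknife compared above is easy to sketch. The code below uses nls() as a stand-in for Marquardt's routine and simulates data with constant error, as in the paper's setting; the substrate concentrations, true parameters, and starting values are illustrative assumptions.

```r
## Delete-one jackknife for Michaelis-Menten parameters (Vmax, Km).
set.seed(6)
S <- c(0.5, 1, 2, 5, 10, 20)                        # substrate concentrations
v <- 10 * S / (2 + S) + rnorm(length(S), sd = 0.3)  # true Vmax = 10, Km = 2

fit_mm <- function(S, v)
  coef(nls(v ~ Vmax * S / (Km + S), start = list(Vmax = 8, Km = 1)))

n    <- length(S)
full <- fit_mm(S, v)                                     # fit to all points
loo  <- t(sapply(1:n, function(i) fit_mm(S[-i], v[-i]))) # delete-one fits

## Pseudo-values n * theta_hat - (n - 1) * theta_hat(-i), then jackknife
## point estimates and standard errors.
pseudo <- n * matrix(full, n, 2, byrow = TRUE) - (n - 1) * loo
colMeans(pseudo)
apply(pseudo, 2, sd) / sqrt(n)
```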


Author(s):  
Zachary R. McCaw ◽  
Hanna Julienne ◽  
Hugues Aschard

Although missing data are prevalent in applications, existing implementations of Gaussian mixture models (GMMs) require complete data. Standard practice is to perform complete-case analysis or imputation prior to model fitting. Both approaches have serious drawbacks, potentially resulting in biased and unstable parameter estimates. Here we present MGMM, an R package for fitting GMMs in the presence of missing data. Using three case studies on real and simulated data sets, we demonstrate that, when the underlying distribution is close to a GMM, MGMM is more effective at recovering the true cluster assignments than state-of-the-art imputation followed by a standard GMM. Moreover, MGMM provides an accurate assessment of cluster-assignment uncertainty even when the generative distribution is not a GMM; this assessment may be used to identify unassignable observations. MGMM is available as an R package on CRAN: https://CRAN.R-project.org/package=MGMM.
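
A hypothetical usage sketch follows. The rGMM()/FitGMM() calls follow the package's documented interface as I recall it, so treat the exact argument names as assumptions and check ?FitGMM after installing from CRAN.

```r
## Hypothetical MGMM usage sketch; verify argument names against the
## package documentation.
# install.packages("MGMM")
library(MGMM)

set.seed(7)
## Simulate 2-D data from a 3-component GMM with ~10% of entries missing.
dat <- rGMM(n = 500, d = 2, k = 3, miss = 0.1,
            means = list(c(-2, -2), c(0, 2), c(2, -2)))

## Fit directly in the presence of missing data, with no imputation step.
fit <- FitGMM(dat, k = 3)
fit   # prints estimated means, covariances, and cluster assignments
```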


2020 ◽  
Author(s):  
William Christopher Carleton

Chronological uncertainty complicates attempts to use radiocarbon dates as proxies for processes like human population growth/decline, forest fires, and marine ingression. Established approaches involve turning databases of radiocarbon-date densities into single summary proxies that cannot fully account for chronological uncertainty. Here, I use simulated data to explore an alternative Bayesian approach that instead models the data as what they are, namely radiocarbon-dated event counts. The approach involves assessing possible event-count sequences by sampling radiocarbon-date densities and then applying MCMC to estimate the parameters of an appropriate count-based regression model. The regressions based on the individual sampled sequences were placed in a multilevel framework, which allowed for the estimation of hyperparameters that account for chronological uncertainty in individual event times. Two processes were used to produce the simulated data: one represented a simple monotonic change in event counts, and the other was based on a real palaeoclimate proxy record. In both cases, the method produced estimates that had the correct sign and were consistently biased toward zero. These results indicate that the approach is widely applicable and could form the basis of a new class of quantitative models for use in exploring long-term human and environmental processes.
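
The propagation step can be sketched by repeating the sample-then-regress draw from the first abstract's example over many plausible event-count sequences and pooling the slope estimates. This is a Monte Carlo stand-in for the multilevel MCMC model described above, with the same toy "calibrated densities" and illustrative names as before.

```r
## Monte Carlo propagation of chronological uncertainty: resample event
## dates many times, refit the count regression, and pool the slopes.
set.seed(8)
date_densities <- lapply(seq(-5000, -4000, length.out = 50), function(mu) {
  t <- seq(mu - 200, mu + 200, by = 1)
  list(t = t, p = dnorm(t, mu, 50) / sum(dnorm(t, mu, 50)))
})
breaks <- seq(-5300, -3700, by = 100)
mid    <- head(breaks, -1) + 50

slopes <- replicate(1000, {
  dates  <- vapply(date_densities,
                   function(d) sample(d$t, 1, prob = d$p),
                   numeric(1))
  counts <- hist(dates, breaks = breaks, plot = FALSE)$counts
  coef(glm(counts ~ mid, family = poisson))[2]
})

## The spread of the pooled slopes reflects chronological uncertainty.
quantile(slopes, c(0.025, 0.5, 0.975))
```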

