Epistemic Considerations About Uncertainty and Model Selection in Computational Archaeology: A Case Study on Exploratory Modeling

Author(s):  
Hans Peeters ◽  
Jan-Willem Romeijn
2006 ◽  
Vol 23 (5) ◽  
pp. 365-376 ◽  
Author(s):  
Henkjan Honing

While the most common way of evaluating a computational model is to see whether it shows a good fit with the empirical data, recent literature on theory testing and model selection criticizes the assumption that this is actually strong evidence for the validity of a model. This article presents a case study from music cognition (modeling the ritardandi in music performance) and compares two families of computational models (kinematic and perceptual) using three different model selection criteria: goodness-of-fit, model simplicity, and the degree of surprise in the predictions. In the light of what counts as strong evidence for a model's validity—namely that it makes limited-range, nonsmooth, and relatively surprising predictions—the perception-based model is preferred over the kinematic model.
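The trade-off between the first two criteria, goodness-of-fit and model simplicity, is often operationalized with an information criterion such as AIC. The following is a minimal sketch of that trade-off; the residual sums of squares, parameter counts, and sample size are illustrative assumptions, not values from the article, and the third criterion (degree of surprise) is not captured by a score of this kind.

```python
import math

def aic(rss, n, k):
    """Akaike information criterion from a least-squares fit: rewards
    goodness-of-fit (small rss) while penalizing complexity (k parameters)."""
    return n * math.log(rss / n) + 2 * k

# Hypothetical fit results for the two candidate model families.
n = 20  # number of tempo measurements in an imagined ritardando dataset
candidates = {
    "kinematic":  {"rss": 4.0, "k": 2},  # decent fit, fewer parameters
    "perceptual": {"rss": 3.5, "k": 3},  # better fit, one extra parameter
}

scores = {name: aic(c["rss"], n, c["k"]) for name, c in candidates.items()}
best = min(scores, key=scores.get)  # lower AIC is preferred
```

With these made-up numbers the perceptual model's better fit outweighs its extra parameter, but a small change in either value could flip the choice—which is exactly why the article argues that fit and simplicity alone are not decisive.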


2015 ◽  
Vol 36 (2) ◽  
pp. 326-339 ◽  
Author(s):  
Matteo Tonietto ◽  
Gaia Rizzo ◽  
Mattia Veronese ◽  
Masahiro Fujita ◽  
Sami S Zoghbi ◽  
...  

Full kinetic modeling of dynamic PET images requires the measurement of radioligand concentrations in the arterial plasma. The unchanged parent radioligand must, however, be separated from its radiometabolites by chromatographic methods. Thus, usually only a few samples can be analyzed, and the resulting measurements are often noisy. Therefore, the measurements must be fitted with a mathematical model. This work presents a comprehensive analysis of the different models proposed in the literature to describe the plasma parent fraction (PPf) and of the alternative approaches for radiometabolite correction. Finally, we used a dataset of [11C]PBR28 brain PET data as a case study to guide the reader through the PPf model selection process.
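A Hill-type function is one common choice among the PPf models discussed in this literature. The sketch below fits such a function to hypothetical parent-fraction samples; the functional form is one conventional parameterization, the data are invented, and a coarse grid search stands in for the weighted nonlinear fit one would use in practice.

```python
def hill_ppf(t, a, b, c):
    """Hill-type plasma parent fraction model: fraction of unchanged parent
    radioligand at time t, decaying from 1 toward (1 - a) as
    radiometabolites accumulate. a, b, c are free parameters."""
    return 1.0 - (a * t**b) / (t**b + c)

# Hypothetical noisy parent-fraction measurements: (minutes, fraction).
samples = [(1, 0.98), (5, 0.85), (10, 0.70), (30, 0.45), (60, 0.35), (90, 0.30)]

def sse(params):
    """Sum of squared residuals of the model against the samples."""
    a, b, c = params
    return sum((f - hill_ppf(t, a, b, c)) ** 2 for t, f in samples)

# Crude grid search in place of a proper weighted nonlinear fit.
best = min(
    ((a / 10, b / 2, c) for a in range(5, 10) for b in range(1, 6) for c in range(5, 60, 5)),
    key=sse,
)
```

Competing PPf models (e.g., sums of exponentials) would be fitted the same way and then compared on a criterion such as AIC, which is the shape of the selection process the case study walks through.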


2006 ◽  
Vol 24 (1) ◽  
pp. 159-170 ◽  
Author(s):  
S. L. Kosakovsky Pond ◽  
F. V. Mannino ◽  
M. B. Gravenor ◽  
S. V. Muse ◽  
S. D. W. Frost

Author(s):  
Jungmok Ma ◽  
Harrison M. Kim

As awareness of environmental issues increases, pressure from the public and from policy makers has pushed OEMs to consider remanufacturing as a key product design option. To make remanufacturing operations more profitable, forecasting product returns, which are uncertain in both quantity and timing, is critical. This paper proposes a predictive model selection algorithm that deals with this uncertainty by identifying better predictive models. Unlike the major approach in the literature (the distributed lag model, or DLM), the predictive model selection algorithm focuses on predictive power over new or future returns. The proposed algorithm extends the set of candidate models that should be considered: the autoregressive integrated moving average model, or ARIMA (previous returns predicting future returns), the DLM (previous sales predicting future returns), and a mixed model (both previous sales and previous returns predicting future returns). A prediction performance measure computed on holdout samples is used to select among them. The univariate ARIMA model has been largely unexplored in this setting because of the criticism that it cannot utilize previous sales; the case study of reusable bottles, however, shows that ARIMA can predict better than the DLM depending on the relationship between returns and sales. The mixed model, in turn, offers a chance to find a better predictive model by combining the ARIMA and the DLM. The case study also shows that the DLM within the predictive model selection algorithm can provide good predictive performance when there is a relatively strong and static relationship between returns and sales.
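The holdout comparison at the heart of the algorithm can be sketched in a few lines. Here a one-lag autoregression stands in for ARIMA and a single lagged-sales regression stands in for the DLM; the sales and returns series are invented, and the lag, split point, and no-intercept regressions are simplifying assumptions rather than the paper's specification.

```python
import math

def rmse(pred, actual):
    return math.sqrt(sum((p - a) ** 2 for p, a in zip(pred, actual)) / len(actual))

def ar1_fit(returns):
    """AR(1) stand-in for ARIMA: next return regressed on previous return."""
    num = sum(returns[t] * returns[t - 1] for t in range(1, len(returns)))
    den = sum(r * r for r in returns[:-1])
    return num / den

def dlm_fit(returns, sales, lag):
    """DLM stand-in: return at t regressed on sales at t - lag."""
    num = sum(returns[t] * sales[t - lag] for t in range(lag, len(returns)))
    den = sum(sales[t - lag] ** 2 for t in range(lag, len(returns)))
    return num / den

# Hypothetical monthly sales and product returns (returns lag sales by ~2 periods).
sales   = [100, 110, 120, 115, 130, 125, 140, 135, 150, 145]
returns = [0, 0, 40, 45, 49, 46, 53, 51, 57, 55]

train, hold = 8, 2  # hold out the last two periods
phi  = ar1_fit(returns[:train])
beta = dlm_fit(returns[:train], sales[:train], lag=2)

# One-step-ahead predictions over the holdout periods.
ar1_pred = [phi * returns[t - 1] for t in range(train, train + hold)]
dlm_pred = [beta * sales[t - 2] for t in range(train, train + hold)]

scores = {"ARIMA-like AR(1)": rmse(ar1_pred, returns[train:]),
          "DLM": rmse(dlm_pred, returns[train:])}
best = min(scores, key=scores.get)  # lowest holdout RMSE wins
```

Because this toy series was built with a strong, static sales-to-returns relationship, the DLM wins the holdout comparison here; with a weaker or drifting relationship, the autoregressive candidate could come out ahead, which mirrors the case-study finding.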


2013 ◽  
Vol 70 (12) ◽  
pp. 1723-1740 ◽  
Author(s):  
Jon Brodziak ◽  
William A. Walsh

One key issue for standardizing catch per unit effort (CPUE) of bycatch species is how to model observations of zero catch per fishing operation. Typically, the fraction of zero catches is high, and catch counts may be overdispersed. In this study, we develop a model selection and multimodel inference approach to standardize CPUE in a case study of oceanic whitetip shark (Carcharhinus longimanus) bycatch in the Hawaii-based pelagic longline fishery. Alternative hypotheses for shark catch per longline set were characterized by the variance-to-mean ratio of the count distribution. Zero-inflated and non-inflated Poisson, negative binomial, and delta-gamma models were fit to fishery observer data using stepwise variable selection. Alternative hypotheses were compared using multimodel inference. Results from the best-fitting zero-inflated negative binomial model showed that standardized CPUE of oceanic whitetip sharks decreased by about 90% during 1995–2010 because of an increase in zero-catch sets and a decrease in CPUE on sets with positive catch. Our model selection approach provides an objective way to address the question of how to treat zero catches when analyzing bycatch CPUE.
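The core comparison—does an excess-zeros component earn its extra parameter?—can be illustrated with the simplest pair of candidates, a plain Poisson versus a zero-inflated Poisson, scored by AIC. The catch counts below are invented, the grid search replaces proper maximum-likelihood fitting, and the study's actual best model was the richer zero-inflated negative binomial with covariates.

```python
import math

def poisson_ll(counts, lam):
    """Poisson log-likelihood for a sample of counts."""
    return sum(k * math.log(lam) - lam - math.lgamma(k + 1) for k in counts)

def zip_ll(counts, pi, lam):
    """Zero-inflated Poisson: with probability pi a set yields a structural
    zero; otherwise the catch count is Poisson(lam)."""
    ll = 0.0
    for k in counts:
        if k == 0:
            ll += math.log(pi + (1 - pi) * math.exp(-lam))
        else:
            ll += math.log(1 - pi) + k * math.log(lam) - lam - math.lgamma(k + 1)
    return ll

def aic(ll, k):
    return 2 * k - 2 * ll

# Hypothetical shark catch counts per longline set: many excess zeros.
counts = [0] * 70 + [1] * 12 + [2] * 9 + [3] * 5 + [4] * 3 + [6] * 1

mean = sum(counts) / len(counts)  # Poisson MLE for lambda
models = {
    "Poisson": aic(poisson_ll(counts, mean), 1),
    # pi and lam chosen by a coarse grid search in place of a proper MLE fit
    "zero-inflated Poisson": min(
        aic(zip_ll(counts, p / 100, l / 10), 2)
        for p in range(30, 80, 5) for l in range(5, 40, 5)
    ),
}
best = min(models, key=models.get)  # lower AIC is preferred
```

With 70% zeros against a sample mean of 0.63, the zero-inflation parameter pays for itself, and the same logic extends to the negative binomial and delta-gamma candidates compared in the study.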

