true model
Recently Published Documents


TOTAL DOCUMENTS: 112 (five years: 41)

H-INDEX: 16 (five years: 2)

Paleobiology, 2021, pp. 1-13
Author(s): Chi Zhang

Abstract: Relaxed clock models are fundamental in Bayesian clock dating, but a single distribution characterizing the clock variation is typically selected. Hence, I developed a new reversible-jump Markov chain Monte Carlo (rjMCMC) algorithm for drawing posterior samples between the independent lognormal (ILN) and independent gamma rates (IGR) clock models. The ability of the rjMCMC algorithm to infer the true model was verified through simulations. I then applied the algorithm to the Mesozoic bird data previously analyzed under the white noise (WN) clock model. In comparison, averaging over the ILN and IGR models provided more reliable estimates of the divergence times and evolutionary rates. The ILN model showed slightly better fit than the IGR model and much better fit than the autocorrelated lognormal (ALN) clock model. When the data were partitioned, different partitions showed heterogeneous model fit for ILN and IGR clocks. The implementation provides a general framework for selecting and averaging relaxed clock models in Bayesian dating analyses.
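To make the between-model sampling concrete, below is a minimal, hypothetical Python sketch of the model-indicator move. Because ILN and IGR rates live in the same positive orthant, the jump shown here reduces to a Metropolis-Hastings switch driven by a prior-density ratio; the paper's actual rjMCMC also updates the rates and hyperparameters and includes the tree likelihood, all of which this sketch omits. All numeric settings are illustrative.

```python
# Hypothetical sketch: averaging over two relaxed clock priors by sampling
# a model indicator. Hyperparameters, rates, and move design are
# illustrative, not the paper's implementation.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
rates = rng.lognormal(mean=-0.5, sigma=0.4, size=20)  # dummy branch rates

def log_prior(rates, model):
    if model == "ILN":  # independent lognormal rates
        return stats.lognorm.logpdf(rates, s=0.4, scale=np.exp(-0.5)).sum()
    # independent gamma rates (IGR)
    return stats.gamma.logpdf(rates, a=4.0, scale=0.25).sum()

model, trace = "ILN", []
for _ in range(10_000):
    other = "IGR" if model == "ILN" else "ILN"
    # Both priors share the same dimension, so the "jump" keeps the rates
    # fixed and accepts with a prior-density ratio (Jacobian = 1). A real
    # analysis would multiply in the likelihood and also update the rates.
    log_ratio = log_prior(rates, other) - log_prior(rates, model)
    if np.log(rng.uniform()) < log_ratio:
        model = other
    trace.append(model)

print("posterior probability of ILN ~", np.mean(np.array(trace) == "ILN"))
```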


Psychometrika, 2021
Author(s): José H. Lozano, Javier Revuelta

Abstract: The present paper introduces a new explanatory item response model to account for the learning that takes place during a psychometric test due to the repeated use of the operations involved in the items. The proposed model is an extension of the operation-specific learning model (Fischer and Formann in Appl Psychol Meas 6:397–416, 1982; Scheiblechner in Z für Exp Angew Psychol 19:476–506, 1972; Spada in Spada and Kempf (eds.) Structural models of thinking and learning, Huber, Bern, Germany, pp 227–262, 1977). The paper discusses special cases of the model, which, together with the general formulation, differ in the type of response through which learning is assumed to occur: (1) correct and incorrect responses equally (non-contingent learning); (2) correct responses only (contingent learning); and (3) correct and incorrect responses to different extents (differential contingent learning). A Bayesian framework is adopted for model estimation and evaluation. A simulation study examines how well the estimation and evaluation methods recover the true parameters and select the true model. Finally, an empirical study illustrates the applicability of the model for detecting learning effects in real data.
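In equation form, the structure described here can be sketched as follows. The notation is mine, a plausible reconstruction rather than the authors' exact parameterization: person v answers item i, q_{io} indicates whether item i requires operation o, and learning accumulates with prior practice of each operation.

```latex
\[
\operatorname{logit} P(X_{vi}=1)
  = \theta_v - \sum_{o} q_{io}\,\eta_o
    + \sum_{o} q_{io}\left(\delta^{+}_{o}\, s^{+}_{vio}
    + \delta^{-}_{o}\, s^{-}_{vio}\right),
\]
% s^+ and s^- count person v's previous correct and incorrect uses of
% operation o. The special cases constrain the learning parameters:
% delta^+ = delta^- (non-contingent), delta^- = 0 (contingent), and both
% free (differential contingent learning).
```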


2021, Vol 8 (1)
Author(s): Ive Weygers, Manon Kok, Thomas Seel, Darshan Shah, Orçun Taylan, ...

Abstract: Skin-attached inertial sensors are increasingly used for kinematic analysis. However, their ability to measure outside the lab can only be exploited after correctly aligning the sensor axes with the underlying anatomical axes. Emerging model-based inertial-sensor-to-bone alignment methods relate inertial measurements to a model of the joint, avoiding the need for calibration movements and sensor-placement assumptions. It is unclear, however, how well such alignment methods can identify the anatomical axes. Any misalignment results in kinematic cross-talk errors, which makes model validation and the interpretation of the resulting kinematic measurements challenging. This study provides an anatomically correct ground-truth reference dataset from dynamic motions on a cadaver. In contrast to existing references, this enables a true model evaluation free from the influence of soft-tissue artifacts and orientation and manual palpation errors. The dataset comprises extensive dynamic movements recorded with multimodal measurements, including trajectories of optical and virtual (via computed tomography) anatomical markers, reference kinematics, inertial measurements, transformation matrices, and visualization tools. It can be used either as a ground-truth reference or to advance research in inertial-sensor-to-bone alignment.


2021
Author(s): Martin Green, Eliana Lima, Robert Hyde

Abstract: Epidemiological research commonly involves identifying causal factors from within high-dimensional (wide) data, where predictor variables outnumber observations. In this situation, conventional stepwise selection procedures perform poorly. Selection stability is one method to aid robust variable selection: a model is refitted to repeated resamples of the data and the proportion of times each covariate is selected is calculated. A key problem when applying selection stability is determining the threshold of stability above which a covariate is deemed 'important'. In this research we describe and illustrate a two-step process to implement a stability threshold for covariate selection. First, covariate stability distributions were established with a permuted model (randomly reordering the outcome to sever its relationship with the predictors) using a cumulative distribution function. Subsequently, covariate stability was estimated using the true model outcome, and covariates with stability above a threshold defined from the permuted model were selected in a final model. The proposed method performed well across 22 varied, simulated datasets with known outcomes; selection error rates were consistently lower than those from conventional implementations of equivalent models. This method of covariate selection appears to offer substantial advantages over current methods in accurately identifying the correct covariates within a large, complex parameter space.
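A minimal sketch of the two-step procedure, assuming a lasso selector, bootstrap resampling, and a 95% quantile of the null stability distribution as the threshold; all of these are illustrative choices, not necessarily the paper's settings:

```python
# Sketch of a permutation-derived stability threshold for covariate
# selection; selector, resample count, and quantile are assumptions.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p, n_boot = 200, 500, 100
X = rng.standard_normal((n, p))
y = X[:, :5] @ np.ones(5) + rng.standard_normal(n)  # 5 true covariates

def stability(X, y):
    """Proportion of bootstrap refits in which each covariate is selected."""
    counts = np.zeros(X.shape[1])
    for _ in range(n_boot):
        idx = rng.integers(0, len(y), len(y))  # bootstrap resample
        fit = Lasso(alpha=0.1, max_iter=5_000).fit(X[idx], y[idx])
        counts += fit.coef_ != 0
    return counts / n_boot

# Step 1: null stability distribution from a permuted outcome.
null_stability = stability(X, rng.permutation(y))
threshold = np.quantile(null_stability, 0.95)

# Step 2: stability under the true outcome; keep covariates above threshold.
selected = np.where(stability(X, y) > threshold)[0]
print(f"threshold = {threshold:.2f}, selected covariates: {selected}")
```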


2021, Vol 12
Author(s): Blaire Steven, Josephine Hyde, Jacquelyn C. LaReau, Doug E. Brackney

The increasing availability of modern research tools has enabled a revolution in studies of non-model organisms. Yet one aspect that remains difficult or impossible to control in many model and most non-model organisms is the presence and composition of the host-associated microbiota, or microbiome. In this review, we explore the development of axenic (microbe-free) mosquito models and what these systems reveal about the role of the microbiome in mosquito biology. An axenic host is also a blank template into which a microbiome of known composition can be introduced, yielding a gnotobiotic organism. Finally, we identify a "most wanted" list of common mosquito microbiome members that show the greatest potential to influence host phenotypes; we propose these as high-value targets for future gnotobiotic studies. The use of axenic and gnotobiotic organisms will turn the microbiome into another experimental variable that can be manipulated and controlled. Through these efforts, the mosquito will become a true model for examining host-microbiome interactions.


Entropy, 2021, Vol 23 (6), p. 773
Author(s): Amichai Painsky, Meir Feder

Learning and making inferences from a finite set of samples are among the fundamental problems in science. In most popular applications, the paradigmatic approach is to seek a model that best explains the data. This approach has many desirable properties when the number of samples is large. However, in many practical setups, data acquisition is costly and only a limited number of samples is available. In this work, we study an alternative approach for this challenging setup. Our framework suggests that the role of the training set is not to provide a single estimated model, which may be inaccurate due to the limited number of samples. Instead, we define a class of "reasonable" models. The worst-case performance over the class is then controlled by a minimax estimator with respect to it. Further, we introduce a robust estimation scheme that provides minimax guarantees even when the true model is not a member of the model class. Our results draw important connections to universal prediction, the redundancy-capacity theorem, and channel capacity theory. We demonstrate our suggested scheme in different setups, showing a significant improvement in worst-case performance over currently known alternatives.
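In stylized form (my notation, not necessarily the authors'), the idea is to replace the single best-fit estimate with a predictor that minimizes worst-case regret over the class of reasonable models:

```latex
\[
q^{*} \;=\; \arg\min_{q}\; \max_{p \in \mathcal{P}}\;
  \mathbb{E}_{p}\!\left[\log \frac{p(X)}{q(X)}\right],
\]
% where \mathcal{P} is the class of models deemed reasonable given the
% training set. The redundancy-capacity theorem identifies this minimax
% redundancy with the capacity of the channel from the model index to the
% data, which is the connection the abstract alludes to.
```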


2021, Vol 19 (1), pp. 2-15
Author(s): Tahir R. Dikheel, Alaa Q. Yaseen

The lag-weighted lasso was introduced to deal with lag effects when identifying the true model in a time series. The method relies on weights that reflect both the coefficient size and the lag effects. However, the lag-weighted lasso is not robust. To overcome this problem, we propose robust lag-weighted lasso methods. Both a simulation study and a real-data example show that the proposed methods outperform existing alternatives.
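As a sketch, for an autoregressive model of order p the lag-weighted lasso objective takes roughly the following form (my notation; the robust variants proposed here would replace the squared loss with a robust alternative such as Huber's loss):

```latex
\[
\hat{\beta} \;=\; \arg\min_{\beta}\;
  \sum_{t=p+1}^{T}\Bigl(y_t - \sum_{j=1}^{p}\beta_j\, y_{t-j}\Bigr)^{2}
  \;+\; \lambda \sum_{j=1}^{p} w_j\,\lvert\beta_j\rvert,
\]
% where the weight w_j grows with lag j (and shrinks with a preliminary
% estimate of |beta_j|), so distant lags are penalized more heavily.
```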


2021, Vol 29 (2)
Author(s): Mustapha Adejo Mohammed, Nordiana Mohd Muztaza, Rosli Saad

Two-dimensional electrical resistivity tomography (2-D ERT) is one of the most common geophysical tools employed to satisfy the ever-growing need for subsurface information. Most conventional electrode arrays used for 2-D ERT surveys are built on the theoretical assumption that the survey lines are straight, guaranteeing four collinear electrodes at every point of measurement. However, due to surface constraints associated with most survey areas, it is rarely possible to conduct a two-dimensional resistivity survey on a straight line. A 2-D ERT survey conducted on a constrained site therefore requires shifting one or more electrodes off the survey line, which violates the underlying assumption; consequently, the result may be prone to false anomalies. Thus, this study aimed to devise a new approach that mitigates the false anomalies caused by non-collinearity of electrodes in 2-D ERT results. An ABEM Terrameter SAS4000 with a Wenner array configuration was adopted for the survey. The data were acquired with all electrodes inline and with one or more electrodes offline at stepwise distances, respectively. Based on the results obtained, the new approach mitigates the offline-electrode effect, as the inverse resistivity tomograms resolve the geometries of the true model reasonably well. Moreover, it yields a high R-value (>90%), indicating proximity to the true model. Hence, it is concluded that the new approach is effective in mitigating offline-electrode effects in 2-D ERT results.


2021
Author(s): Benjamin Domingue, Klint Kanopka, Sam Trejo, Jeremy Freese

Years of education is a commonly used outcome variable in many life-course studies. We argue that such studies may derive additional insights from treating years of education as an ordinal outcome rather than using the standard linear-model treatment. Via simulation, we show that the ordinal approach performs well even when the linear model is the true model, whereas estimates from the linear model may be somewhat suboptimal when the ordinal model is the true model. We use data from the Health and Retirement Study to illustrate additional insights readily derived from the ordinal model and offer a suggested workflow for future analyses.
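A minimal sketch contrasting the two treatments on simulated data, assuming statsmodels' OrderedModel for the ordinal fit; the data-generating process and all settings are illustrative, not the paper's:

```python
# Linear vs. ordinal treatment of an ordered outcome; simulated data only.
import numpy as np
import statsmodels.api as sm
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(0)
n = 2_000
x = rng.standard_normal(n)                # some life-course predictor
latent = 0.8 * x + rng.logistic(size=n)   # latent educational propensity
cuts = np.quantile(latent, [0.1, 0.35, 0.7, 0.9])
years = np.digitize(latent, cuts)         # five ordered "education" levels

# Standard treatment: linear model on the outcome.
linear = sm.OLS(years, sm.add_constant(x)).fit()

# Ordinal treatment: ordered logit on the same outcome (no constant; it is
# absorbed into the estimated thresholds).
ordinal = OrderedModel(years, x[:, None], distr="logit").fit(
    method="bfgs", disp=False)

print("linear slope: ", linear.params[1])
print("ordinal slope:", ordinal.params[0])  # exog coefficient comes first
```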


Author(s): Adam Martin-Schwarze, Jarad Niemi, Philip Dixon

Abstract: Removal and distance modeling are two common methods to adjust counts for imperfect detection in point-count surveys. Several recent articles have formulated models to combine them into a distance-removal framework. We observe that these models fall into two groups building from different assumptions about the joint distribution of observed distances and first times to detection. One approach assumes the joint distribution results from a Poisson process (PP). The other assumes an independent joint (IJ) distribution with its joint density being the product of its marginal densities. We compose an IJ+PP model that more flexibly models the joint distribution and accommodates both existing approaches as special cases. The IJ+PP model matches the bias and coverage of the true model for data simulated from either PP or IJ models. In contrast, PP models underestimate abundance from IJ simulations, while IJ models overestimate abundance from PP simulations. We apply all three models to surveys of golden-crowned sparrows in Alaska. Only the IJ+PP model reasonably fits the joint distribution of observed distances and first times to detection. Model choice affects estimates of abundance and detection but has little impact on the magnitude of estimated covariate effects on availability and perceptibility.

Supplementary materials accompanying this paper appear online.
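In stylized form (my notation, not the authors' exact densities), the two assumptions contrasted above can be written, for an observed distance r and first time to detection t, as:

```latex
\[
\text{IJ:}\quad f(r,t) \;=\; f_R(r)\, f_T(t),
\qquad
\text{PP:}\quad f(r,t) \;\propto\; \pi(r)\,\lambda(r)\, e^{-\lambda(r)\,t},
\]
% IJ assumes distance and first detection time are independent; PP lets a
% distance-dependent Poisson detection rate \lambda(r) couple them, with
% \pi(r) the density of bird distances. The IJ+PP model nests both forms
% as special cases.
```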

