Bayesian analysis for comparison of nonlinear regression model parameters: an application to ruminal degradability data

2010, Vol 39 (2), pp. 419-424
Author(s): Robson Marcelo Rossi, Elias Nunes Martins, Terezinha Aparecida Guedes, Clóves Cabreira Jobim

This paper presents the Bayesian approach as an alternative to the classical analysis of nonlinear models for ruminal degradation data. The data set was obtained from a Latin square experimental design established to test the ruminal degradation of dry matter, crude protein and neutral detergent fiber of three silages: elephant grass (Pennisetum purpureum Schum) with bacterial inoculant or enzyme-bacterial inoculant and corn silage (Zea mays L.). The incubation times were 0, 2, 6, 12, 24, 48, 72 and 96 hours. The parameter estimates of the equations fitted by the two methods showed small differences, but only the Bayesian approach allowed the estimates to be compared correctly; this is not possible with the frequentist methodology, which is far more restricted in its applications because of the larger number of assumptions it demands.
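The abstract does not name the degradation equation; the sketch below assumes the common exponential model p(t) = a + b(1 - exp(-c t)) and contrasts a frequentist least-squares fit with a minimal Bayesian fit via a random-walk Metropolis sampler (hypothetical data, not the authors' code).

```python
# A minimal sketch: classical vs. Bayesian fit of an assumed exponential
# ruminal degradation model p(t) = a + b*(1 - exp(-c*t)).
import numpy as np
from scipy.optimize import curve_fit

def degradation(t, a, b, c):
    """Soluble fraction a plus fraction b degraded at fractional rate c."""
    return a + b * (1.0 - np.exp(-c * t))

# Hypothetical incubation times (h) and degradability (%) for one silage.
t = np.array([0, 2, 6, 12, 24, 48, 72, 96], dtype=float)
y = np.array([22, 28, 36, 45, 58, 68, 72, 74], dtype=float)

# Classical (frequentist) fit by nonlinear least squares.
theta_ls, _ = curve_fit(degradation, t, y, p0=[20, 50, 0.05])

# Bayesian fit with flat priors via a short random-walk Metropolis sampler.
def log_post(theta, sigma=2.0):
    a, b, c = theta
    if c <= 0 or b <= 0:          # simple positivity constraints acting as priors
        return -np.inf
    resid = y - degradation(t, a, b, c)
    return -0.5 * np.sum((resid / sigma) ** 2)

rng = np.random.default_rng(1)
theta = theta_ls.copy()
lp = log_post(theta)
samples = []
for _ in range(20000):
    prop = theta + rng.normal(scale=[0.5, 0.5, 0.005])
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    samples.append(theta)
samples = np.array(samples[5000:])   # discard burn-in

print("least squares :", np.round(theta_ls, 3))
print("posterior mean:", np.round(samples.mean(axis=0), 3))
```

With posterior draws in hand, the parameters of two silages can be compared directly through the posterior distribution of their difference, which is the kind of comparison the abstract attributes to the Bayesian approach.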

2021
Author(s): Oliver Lüdtke, Alexander Robitzsch, Esther Ulitzsch

The bivariate Stable Trait, AutoRegressive Trait, and State (STARTS) model provides a general approach for estimating reciprocal effects between constructs over time. However, previous research has shown that this model is difficult to estimate using the maximum likelihood (ML) method (e.g., nonconvergence). In this article, we introduce a Bayesian approach for estimating the bivariate STARTS model and implement it in the software Stan. We discuss issues of model parameterization and show how appropriate prior distributions for model parameters can be selected. Specifically, we propose the four-parameter beta distribution as a flexible prior distribution for the autoregressive and cross-lagged effects. Using a simulation study, we show that the proposed Bayesian approach provides more accurate estimates than ML estimation in challenging data constellations. An example is presented to illustrate how the Bayesian approach can be used to stabilize the parameter estimates of the bivariate STARTS model.
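As a sketch of the prior the abstract proposes, the snippet below builds a four-parameter beta distribution (a Beta(a, b) density rescaled from (0, 1) to a chosen support) using assumed shape values; it illustrates the idea only and is not the authors' Stan implementation.

```python
# A minimal sketch of a four-parameter beta prior for an autoregressive effect:
# a Beta(a, b) density rescaled from (0, 1) to the interval (lower, upper).
import numpy as np
from scipy.stats import beta

lower, upper = -1.0, 1.0          # admissible range for the autoregressive effect
a, b = 3.0, 3.0                   # shape parameters (assumed values for illustration)

prior = beta(a, b, loc=lower, scale=upper - lower)

draws = prior.rvs(size=10000, random_state=0)    # prior predictive draws
print("prior mean:", round(draws.mean(), 3))     # ~0 for symmetric shapes
print("P(effect > 0.5):", round(prior.sf(0.5), 3))   # prior mass on strong positive effects
```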


Mathematics, 2019, Vol 7 (5), pp. 474
Author(s): Muhammad Rizwan Khan, Biswajit Sarkar

Airborne particulate matter (PM) is a key air pollutant that adversely affects human health. Exposure to high concentrations of such particles may cause premature death, heart disease, respiratory problems, or reduced lung function. Previous work on particulate matter (PM2.5 and PM10) was limited to specific areas. Therefore, more studies are required to investigate airborne particulate matter patterns, given their complex and varying properties and their associated (PM10 and PM2.5) concentrations and compositions, to assess the numerical productivity of pollution control programs for air quality. Consequently, to control particulate matter pollution and to plan effective countermeasures, it is important to measure the efficiency and efficacy of the policies applied by the Ministry of Environment. The primary purpose of this research is to construct a simulation model for the identification of a change point in particulate matter (PM2.5 and PM10) concentrations, and of whether one occurs, in different areas of the world. The methodology is based on the Bayesian approach for the analysis of different data structures, and a likelihood ratio test is used to detect a change point at an unknown time (k). Real-time data of particulate matter concentrations at different locations have been used for numerical verification. The model parameters before the change point (θ) and after the change point (λ) have been critically analyzed so that the proficiency and success of environmental policies for particulate matter (PM2.5 and PM10) concentrations can be evaluated. The main reason for using different areas is their considerably different features, i.e., environment, population densities, and transportation vehicle densities. Consequently, this study also provides insights into how well the suggested model could perform in different areas.
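A minimal illustration of detecting a change point at an unknown time k with a likelihood ratio scan is sketched below for a single mean shift in a simulated PM2.5 series; the paper's full Bayesian treatment of different data structures is not reproduced.

```python
# A minimal sketch of likelihood-ratio change-point detection at an unknown time k
# for a single mean shift in a simulated PM2.5 series.
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical daily PM2.5 concentrations with a drop after day 60 (e.g., a policy change).
x = np.concatenate([rng.normal(80, 10, 60), rng.normal(55, 10, 40)])
n = len(x)

def neg2loglik(segment):
    """-2 log-likelihood (up to constants) of a normal fit with the segment's own mean."""
    return len(segment) * np.log(np.var(segment) + 1e-12)

base = neg2loglik(x)                    # single-regime model (no change point)
lr = np.full(n, -np.inf)
for k in range(5, n - 5):               # avoid very short segments
    lr[k] = base - (neg2loglik(x[:k]) + neg2loglik(x[k:]))

k_hat = int(np.argmax(lr))
print("estimated change point:", k_hat)
print("mean before (theta):", round(x[:k_hat].mean(), 1),
      " mean after (lambda):", round(x[k_hat:].mean(), 1))
```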


2013, Vol 19 (3), pp. 344-353
Author(s): Keith R. Shockley

Quantitative high-throughput screening (qHTS) experiments can simultaneously produce concentration-response profiles for thousands of chemicals. In a typical qHTS study, a large chemical library is subjected to a primary screen to identify candidate hits for secondary screening, validation studies, or prediction modeling. Different algorithms, usually based on the Hill equation logistic model, have been used to classify compounds as active or inactive (or inconclusive). However, observed concentration-response activity relationships may not adequately fit a sigmoidal curve. Furthermore, it is unclear how to prioritize chemicals for follow-up studies given the large uncertainties that often accompany parameter estimates from nonlinear models. Weighted Shannon entropy can address these concerns by ranking compounds according to profile-specific statistics derived from estimates of the probability mass distribution of response at the tested concentration levels. This strategy can be used to rank all tested chemicals in the absence of a prespecified model structure, or the approach can complement existing activity call algorithms by ranking the returned candidate hits. The weighted entropy approach was evaluated here using data simulated from the Hill equation model. The procedure was then applied to a chemical genomics profiling data set interrogating compounds for androgen receptor agonist activity.
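The sketch below illustrates the general idea of entropy-based ranking of concentration-response profiles; the weighting used here (overall signal magnitude) is an assumption for illustration, not necessarily the paper's exact statistic.

```python
# A minimal sketch of entropy-based ranking of concentration-response profiles.
# Responses at the tested concentrations are turned into a probability mass
# distribution and scored by Shannon entropy; the magnitude weighting is an assumption.
import numpy as np

def entropy_score(responses):
    mag = np.abs(np.asarray(responses, dtype=float))
    total = mag.sum()
    if total == 0:
        return 0.0
    p = mag / total                                  # probability mass over concentrations
    h = -np.sum(p[p > 0] * np.log2(p[p > 0]))        # Shannon entropy (bits)
    h_max = np.log2(len(p))
    return (1.0 - h / h_max) * total                 # concentrated, strong response -> high score

profiles = {
    "compound_A": [0, 1, 2, 5, 20, 60, 85],          # sigmoidal-looking activity
    "compound_B": [3, -2, 4, -1, 2, -3, 1],          # noise around baseline
    "compound_C": [0, 0, 0, 0, 0, 0, 90],            # activity only at the top concentration
}
ranking = sorted(profiles, key=lambda c: entropy_score(profiles[c]), reverse=True)
print(ranking)
```

Because the score needs no prespecified curve shape, it can rank all tested compounds even when the Hill model fits poorly.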


2015, Vol 15 (08), pp. 1540026
Author(s): Q. Hu, H. F. Lam, S. A. Alabi

The identification of railway ballast damage under a concrete sleeper is investigated following the Bayesian approach. Using a discrete modeling method to capture the distribution of ballast stiffness under the sleeper introduces artificial stiffness discontinuities between different ballast regions, which increases the effects of modeling errors and reduces the accuracy of the ballast damage detection results. In this paper, a continuous modeling method was developed to overcome this difficulty. The uncertainties induced by modeling error and measurement noise are the major difficulties of vibration-based damage detection methods. In the proposed methodology, a Bayesian probabilistic approach is adopted to explicitly address the uncertainties associated with the identified model parameters. In the model updating process, the stiffness of the ballast foundation is assumed to be continuous along the sleeper and is represented by a polynomial of order N. One of the contributions of this paper is to determine the order N conditional on a given set of measurements using the Bayesian model class selection method. The proposed ballast damage detection methodology was verified with vibration data obtained from a segment of full-scale ballasted track under laboratory conditions, and the experimental results are very encouraging, showing that the Bayesian approach together with the newly developed continuous modeling method can be used for ballast damage detection.
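As a rough illustration of choosing the polynomial order N, the sketch below approximates Bayesian model class selection with BIC on a simple stiffness-profile fit; the paper's evidence computation from measured vibration data is not reproduced.

```python
# A minimal sketch of selecting the polynomial order N for a continuous ballast
# stiffness profile, using BIC as a crude stand-in for Bayesian model class selection.
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(0.0, 1.0, 40)                            # normalized position along the sleeper
true_stiffness = 1.0 - 0.6 * (x - 0.5) ** 2              # hypothetical smooth profile
k_meas = true_stiffness + rng.normal(0, 0.02, x.size)    # "identified" stiffness with noise

def bic(order):
    coeffs = np.polyfit(x, k_meas, order)
    resid = k_meas - np.polyval(coeffs, x)
    n, p = x.size, order + 1
    return n * np.log(np.mean(resid ** 2)) + p * np.log(n)   # fit term plus complexity penalty

orders = range(0, 6)
best = min(orders, key=bic)
print({N: round(bic(N), 2) for N in orders})
print("selected polynomial order:", best)
```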


2020, Vol 36 (1), pp. 89-115
Author(s): Harvey Goldstein, Natalie Shlomo

The requirement to anonymise data sets that are to be released for secondary analysis should be balanced by the need to allow their analysis to provide efficient and consistent parameter estimates. The proposal in this article is to integrate the process of anonymisation and data analysis. The first stage uses the addition of random noise with known distributional properties to some or all variables in a released (already pseudonymised) data set, in which the values of some identifying and sensitive variables for data subjects of interest are also available to an external ‘attacker’ who wishes to identify those data subjects in order to interrogate their records in the data set. The second stage of the analysis consists of specifying the model of interest so that parameter estimation accounts for the added noise. Where the characteristics of the noise are made available to the analyst by the data provider, we propose a new method that allows a valid analysis. This is formally a measurement error model and we describe a Bayesian MCMC algorithm that recovers consistent estimates of the true model parameters. A new method for handling categorical data is presented. The article shows how an appropriate noise distribution can be determined.
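The snippet below illustrates why knowledge of the added-noise distribution permits consistent estimation, using a simple method-of-moments correction of the attenuated regression slope; the article itself uses a Bayesian MCMC measurement-error model.

```python
# A minimal sketch: a variable released with known added noise biases the naive
# regression slope towards zero, but the known noise variance allows a correction.
import numpy as np

rng = np.random.default_rng(3)
n = 5000
x_true = rng.normal(0, 1, n)
y = 2.0 * x_true + rng.normal(0, 1, n)                 # true slope = 2

sigma_noise = 0.8                                      # noise s.d. published by the data provider
x_released = x_true + rng.normal(0, sigma_noise, n)    # anonymised (noise-added) variable

naive = np.cov(x_released, y)[0, 1] / np.var(x_released, ddof=1)
reliability = (np.var(x_released, ddof=1) - sigma_noise ** 2) / np.var(x_released, ddof=1)
corrected = naive / reliability

print("naive slope    :", round(naive, 3))             # attenuated towards zero
print("corrected slope:", round(corrected, 3))         # close to the true value 2
```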


2016, Vol 40 (3)
Author(s): Jehad Al-Jararha, Mohammed Al-Haj Ebrahem, Abedel-Qader Al-Masri

The need for autocorrelation models for degradation data arises from the fact that degradation measurements are often correlated, since such measurements are taken over time. Time series can exhibit autocorrelation caused by modeling error or by cyclic changes in ambient conditions, either in the measurement errors or in the degradation process itself. Generally, autocorrelation becomes stronger when the times between measurements are relatively short and less noticeable when the times between measurements are longer. In this paper, we assume that the error terms are autocorrelated and follow an autoregressive process of order one, AR(1). This is a more general case than the assumption that the error terms are independently and identically normally distributed, since when the error terms are uncorrelated over time the estimate of the AR(1) parameter is approximately zero. If the AR(1) parameter is unknown, it can be estimated from the data set. Using two real data sets, the model parameters are estimated and compared with those obtained when the error terms are assumed independent and identically distributed. Such computations are available through the AUTOREG and MODEL procedures in SAS. The computations show that an AR(1) model is a useful tool for removing the autocorrelation between the residuals.
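A small illustration of the idea, in Python rather than the SAS AUTOREG and MODEL procedures, is sketched below: the AR(1) parameter is estimated from the ordinary least-squares residuals and the model is refitted after a Cochrane-Orcutt transformation (simulated data).

```python
# A minimal sketch of handling AR(1) errors in a linear degradation trend.
import numpy as np

rng = np.random.default_rng(4)
t = np.arange(50, dtype=float)
e = np.zeros(50)
for i in range(1, 50):                       # AR(1) errors with true parameter 0.6
    e[i] = 0.6 * e[i - 1] + rng.normal(0, 0.5)
y = 10.0 - 0.2 * t + e                       # hypothetical degradation measurements

# OLS fit and lag-1 autocorrelation of residuals (estimate of the AR(1) parameter).
b1, b0 = np.polyfit(t, y, 1)
resid = y - (b0 + b1 * t)
rho = np.sum(resid[1:] * resid[:-1]) / np.sum(resid[:-1] ** 2)
print("estimated AR(1) parameter:", round(rho, 3))   # near zero if errors are independent

# Cochrane-Orcutt transformation removes the autocorrelation before refitting.
y_star, t_star = y[1:] - rho * y[:-1], t[1:] - rho * t[:-1]
b1_star, _ = np.polyfit(t_star, y_star, 1)
print("slope after correction:", round(b1_star, 3))
```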


2020, Vol 21
Author(s): Simone Daniela Sartorio de Medeiros, César Gonçalves de Lima, Taciana Villela Savian, Euclides Braga Malheiros, Simone Silmara Werner

Classical methods of analysis of nonlinear models are widely used in studies of ruminal degradation kinetics. Because this type of study involves repeated measurements on the same experimental unit, the use of mixed nonlinear models (MNLM) is proposed in order to address heterogeneity of the response variances, correlation among repeated measurements and the consequent lack of sphericity in the covariance matrix. The aims of this work are to evaluate the applicability of MNLM for estimating parameters that describe the in situ ruminal degradation kinetics of the dry matter of Tifton 85 hay and to compare the results with those obtained from the usual two-phase analysis. The steers used in the trial were fed diets composed of three different combinations of roughage and concentrate and two hays of different nutritional quality. The proposed approach proved to be as effective as the traditional one for estimating the model parameters. However, it adequately models the correlation among the longitudinal data, which can affect the estimates obtained and their standard errors and potentially change the results of the inferences. This makes it quite attractive when the research seeks to understand the behavior of the food degradation process throughout the incubation times.
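As a crude stand-in for a full MNLM fit, the sketch below uses a two-stage approach: the exponential degradation model (assumed here; the abstract does not name the equation) is fitted to each steer separately, and the between-animal spread of the parameter estimates is then summarized on hypothetical data.

```python
# A minimal two-stage sketch of repeated measurements per animal: per-steer fits
# of the exponential degradation model, then a summary of between-animal variation.
# A full MNLM would instead estimate the random effects jointly.
import numpy as np
from scipy.optimize import curve_fit

def degradation(t, a, b, c):
    return a + b * (1.0 - np.exp(-c * t))

t = np.array([0, 2, 6, 12, 24, 48, 72, 96], dtype=float)
# Hypothetical in situ dry-matter degradability (%) of Tifton 85 hay, one row per steer.
animals = {
    "steer_1": [15, 20, 28, 37, 50, 61, 65, 67],
    "steer_2": [18, 24, 31, 41, 55, 66, 70, 71],
    "steer_3": [14, 19, 27, 35, 48, 59, 63, 65],
}

estimates = np.array([curve_fit(degradation, t, y, p0=[15, 55, 0.05])[0]
                      for y in animals.values()])
print("per-animal estimates (a, b, c):\n", np.round(estimates, 3))
print("mean parameters    :", np.round(estimates.mean(axis=0), 3))
print("between-animal s.d.:", np.round(estimates.std(axis=0, ddof=1), 3))
```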


Psych, 2021, Vol 3 (3), pp. 360-385
Author(s): Manuel Arnold, Andreas M. Brandmaier, Manuel C. Voelkle

Unmodeled differences between individuals or groups can bias parameter estimates and may lead to false-positive or false-negative findings. Such instances of heterogeneity can often be detected and predicted with additional covariates. However, predicting differences with covariates can be challenging or even infeasible, depending on the modeling framework and type of parameter. Here, we demonstrate how the individual parameter contribution (IPC) regression framework, as implemented in the R package ipcr, can be leveraged to predict differences in any parameter across a wide range of parametric models. First and foremost, IPC regression is an exploratory analysis technique to determine if and how the parameters of a fitted model vary as a linear function of covariates. After introducing the theoretical foundation of IPC regression, we use an empirical data set to demonstrate how parameter differences in a structural equation model can be predicted with the ipcr package. Then, we analyze the performance of IPC regression in comparison to alternative methods for modeling parameter heterogeneity in a Monte Carlo simulation.
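The toy sketch below conveys the IPC idea in Python, not the ipcr package itself: case-wise contributions to a parameter estimate are regressed on a covariate, and a clearly non-zero slope flags parameter heterogeneity that the covariate predicts.

```python
# A toy sketch of the IPC idea for a normal model: contributions used here are
# y_i for the mean parameter and (y_i - mean)^2 for the variance parameter.
import numpy as np

rng = np.random.default_rng(5)
n = 500
z = rng.uniform(0, 1, n)                       # covariate, e.g. age or group membership
y = 1.0 + 2.0 * z + rng.normal(0, 1, n)        # the "mean" parameter actually varies with z

mu_hat = y.mean()
ipc_mean = y                                    # case-wise contributions to the mean estimate
ipc_var = (y - mu_hat) ** 2                     # case-wise contributions to the variance estimate

for name, ipc in [("mean", ipc_mean), ("variance", ipc_var)]:
    slope, intercept = np.polyfit(z, ipc, 1)
    print(f"{name:8s}: IPC ~ {intercept:.2f} + {slope:.2f} * z")
# The slope near 2 for the mean contributions flags heterogeneity in that parameter,
# while the linear trend in the variance contributions stays close to zero.
```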


Author(s): Bettina Grün, Gertraud Malsiner-Walli, Sylvia Frühwirth-Schnatter

In model-based clustering, the Galaxy data set is often used as a benchmark to study the performance of different modeling approaches. Aitkin (Stat Model 1:287–304) compares maximum likelihood and Bayesian analyses of the Galaxy data set and expresses reservations about the Bayesian approach because the prior assumptions imposed remain rather obscure while playing a major role in the results obtained and conclusions drawn. The aim of this paper is to address Aitkin's concerns about the Bayesian approach by shedding light on how the specified priors influence the number of estimated clusters. We perform a sensitivity analysis of different prior specifications for the mixture of finite mixtures model, i.e., the mixture model in which a prior on the number of components is included. We use an extensive set of different prior specifications in a full factorial design and assess their impact on the estimated number of clusters for the Galaxy data set. The results highlight the interaction effects of the prior specifications and provide insights into which prior specifications are recommended to obtain a sparse clustering solution. A simulation study with artificial data provides further empirical evidence to support the recommendations. A clear understanding of the impact of the prior specifications removes restraints preventing the use of Bayesian methods due to the complexity of selecting suitable priors. Moreover, the regularizing properties of the priors may be intentionally exploited to obtain a suitable clustering solution that meets the prior expectations and needs of the application.
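A minimal sensitivity-style sketch is given below using scikit-learn's variational BayesianGaussianMixture on simulated data, not the MCMC mixture of finite mixtures analysis of the Galaxy velocities; it only illustrates how a prior concentration setting shifts the number of clusters that receive non-negligible weight.

```python
# A minimal sketch of a prior sensitivity check for clustering: vary the weight
# concentration prior of a variational Bayesian Gaussian mixture and count the
# clusters with non-negligible estimated weight.
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(6)
# Simulated univariate data with three well-separated groups (not the Galaxy velocities).
X = np.concatenate([rng.normal(10, 1, 60), rng.normal(21, 1, 25),
                    rng.normal(33, 1, 15)]).reshape(-1, 1)

for conc in (0.01, 1.0, 100.0):                      # factorial levels of the weight prior
    gmm = BayesianGaussianMixture(n_components=10,
                                  weight_concentration_prior=conc,
                                  max_iter=500,
                                  random_state=0).fit(X)
    n_clusters = int(np.sum(gmm.weights_ > 0.02))    # clusters with non-negligible weight
    print(f"concentration prior {conc:>6}: {n_clusters} clusters")
```

Smaller concentration values favor sparse solutions with few occupied clusters, which mirrors the role the paper attributes to shrinkage-inducing prior specifications.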

