Development of Distress Progression Models Using Panel Data Sets of In-Service Pavements

Author(s):  
Samer Madanat ◽  
Hee Cheol Shin

Pavement distress progression models predict the extent of a distress on pavement sections as a function of age, design characteristics, traffic loads, and environmental factors. These models are usually developed using data from in-service facilities to calibrate the parameters of mechanistic deterioration models. The data used for the statistical estimation of such models consist of observations of pavements for which the distress has already appeared. Unfortunately, common statistical methods, when applied to such data sets, produce biased and inconsistent model parameters. This type of bias is known as selectivity bias, and it results from the fact that less durable pavement sections are over-represented in the sample used for model estimation. A joint pavement distress initiation and progression model, consisting of a discrete model of distress initiation and a continuous model of distress progression, is presented. This approach explicitly accounts for the self-selected nature of the sample used in developing the progression model through the use of appropriate correction terms. Moreover, previous research is extended by accounting for the potential presence of unobserved heterogeneity in the model, which arises from the use of a panel data set for model estimation. This is achieved by using a random effects specification for both the discrete and continuous models. An empirical case study demonstrates the application of this approach to highway pavement cracking models.
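The selectivity-correction idea can be illustrated with a minimal two-step (Heckman-style) sketch: a discrete probit initiation model fitted on all sections supplies an inverse Mills ratio term that corrects the continuous progression regression fitted on the self-selected (cracked) subsample. All column names (age, traffic, thickness, crack_extent) are hypothetical, and the paper's random-effects panel specification is not reproduced here.

```python
# Hedged sketch: two-step selectivity correction, in the spirit of the joint
# initiation/progression model described above (random effects omitted).
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

def selection_corrected_progression(df):
    # Step 1: discrete initiation model (probit) on all pavement sections
    X_init = sm.add_constant(df[["age", "traffic", "thickness"]])
    initiated = (df["crack_extent"] > 0).astype(int)
    probit = sm.Probit(initiated, X_init).fit(disp=False)

    # Inverse Mills ratio evaluated at the fitted linear index
    xb = np.asarray(X_init) @ np.asarray(probit.params)
    imr = norm.pdf(xb) / norm.cdf(xb)

    # Step 2: continuous progression model on the cracked subsample,
    # augmented with the selectivity-correction term
    sel = np.asarray(initiated == 1)
    X_prog = sm.add_constant(df.loc[sel, ["age", "traffic", "thickness"]])
    X_prog["imr"] = imr[sel]
    ols = sm.OLS(df.loc[sel, "crack_extent"], X_prog).fit()
    return probit, ols
```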

2019 ◽  
Vol XVI (2) ◽  
pp. 1-11
Author(s):  
Farrukh Jamal ◽  
Hesham Mohammed Reyad ◽  
Soha Othman Ahmed ◽  
Muhammad Akbar Ali Shah ◽  
Emrah Altun

A new three-parameter continuous model called the exponentiated half-logistic Lomax distribution is introduced in this paper. Basic mathematical properties of the proposed model were investigated, including raw and incomplete moments, skewness, kurtosis, generating functions, Rényi entropy, the Lorenz, Bonferroni and Zenga curves, probability weighted moments, the stress-strength model, order statistics, and record statistics. The model parameters were estimated using the maximum likelihood criterion, and the behaviour of these estimates was examined by conducting a simulation study. The applicability of the new model is illustrated by applying it to a real data set.
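The maximum-likelihood-plus-simulation workflow described above can be sketched as follows. The exact exponentiated half-logistic Lomax density is not reproduced here; the plain two-parameter Lomax from scipy stands in for it purely to show the pattern (simulate, minimize the negative log-likelihood, repeat to study estimator behaviour).

```python
# Hedged sketch of an MLE simulation study with a stand-in Lomax density.
import numpy as np
from scipy import stats
from scipy.optimize import minimize

def neg_log_lik(params, x):
    c, scale = params
    if c <= 0 or scale <= 0:
        return np.inf
    return -np.sum(stats.lomax.logpdf(x, c, scale=scale))

def simulation_study(true_c=2.0, true_scale=1.5, n=200, reps=500, seed=1):
    rng = np.random.default_rng(seed)
    estimates = []
    for _ in range(reps):
        x = stats.lomax.rvs(true_c, scale=true_scale, size=n, random_state=rng)
        res = minimize(neg_log_lik, x0=[1.0, 1.0], args=(x,), method="Nelder-Mead")
        estimates.append(res.x)
    estimates = np.array(estimates)
    # mean and spread of the estimates reveal bias and sampling variability
    return estimates.mean(axis=0), estimates.std(axis=0)
```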


2020 ◽  
Vol 70 (1) ◽  
pp. 145-161 ◽  
Author(s):  
Marnus Stoltz ◽  
Boris Baeumer ◽  
Remco Bouckaert ◽  
Colin Fox ◽  
Gordon Hiscott ◽  
...  

Abstract We describe a new and computationally efficient Bayesian methodology for inferring species trees and demographics from unlinked binary markers. Likelihood calculations are carried out using diffusion models of allele frequency dynamics combined with novel numerical algorithms. The diffusion approach allows for analysis of data sets containing hundreds or thousands of individuals. The method, which we call Snapper, has been implemented as part of the BEAST2 package. We conducted simulation experiments to assess numerical error, computational requirements, and accuracy in recovering known model parameters. A reanalysis of soybean SNP data demonstrates that the models implemented in Snapp and Snapper can be difficult to distinguish in practice, a characteristic which we tested with further simulations. We demonstrate the scale of analysis possible using a SNP data set sampled from 399 freshwater turtles in 41 populations. [Bayesian inference; diffusion models; multi-species coalescent; SNP data; species trees; spectral methods.]


2003 ◽  
Vol 184 ◽  
pp. 99-110 ◽  
Author(s):  
Thomas Zwick

This paper finds substantial effects of ICT investments on productivity in a large and representative German establishment panel data set. In contrast to the bulk of the literature, establishments without ICT capital are also included, and lagged effects of ICT investments are analysed. In addition, a broad range of establishment and employee characteristics is taken into account in order to avoid omitted-variable bias. It is shown that taking into account unobserved heterogeneity of the establishments and the endogeneity of ICT investments increases the estimated lagged productivity impact of ICT investments.
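A minimal panel-regression sketch of this kind of analysis, with establishment and year effects and a lagged ICT-investment term, might look as follows. The column and index names are hypothetical, and the instrumental-variable treatment of endogenous ICT investment used in the paper is not shown.

```python
# Hedged sketch: lagged ICT effect in a panel with entity and time effects.
import pandas as pd
from linearmodels.panel import PanelOLS

def lagged_ict_effect(df):
    # df is indexed by (establishment_id, year)
    df = df.sort_index()
    df["ict_lag1"] = df.groupby(level="establishment_id")["ict_investment"].shift(1)
    df = df.dropna(subset=["ict_lag1"])

    model = PanelOLS.from_formula(
        "log_value_added ~ ict_lag1 + log_employment + log_capital"
        " + EntityEffects + TimeEffects",
        data=df,
    )
    # cluster standard errors at the establishment level
    return model.fit(cov_type="clustered", cluster_entity=True)
```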


2013 ◽  
Vol 427-429 ◽  
pp. 1506-1509
Author(s):  
Yong Yan Yu

A robust estimation procedure is necessary to estimate the true model parameters in computer vision. Evaluating multiple-model estimators in the presence of outliers is a fundamentally different task from the single-model problem. Although many diverse multi-model estimation algorithms exist, it is difficult to choose an effective and appropriate one. We therefore present a novel quantitative evaluation of multi-model estimation algorithms; efficiency may be assessed either by examining the asymptotic efficiency of the algorithms or by running them on a series of data sets of increasing size. To this end, we create a dedicated test data set and introduce a performance metric, Strongest-Intersection, used together with a model-aware correctness criterion. Finally, we show the validity of the evaluation strategy with line-fitting experiments.
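As an illustration of the kind of multi-model estimator such a benchmark evaluates, here is a minimal sequential RANSAC line-fitting sketch. The Strongest-Intersection metric itself is not reproduced; a model-aware correctness check could be layered on top of the recovered (slope, intercept) pairs against known ground-truth lines.

```python
# Hedged sketch: sequential RANSAC for fitting several lines amid outliers.
import numpy as np

def fit_lines_sequential_ransac(points, n_models=2, iters=1000, tol=0.05, seed=None):
    rng = np.random.default_rng(seed)
    remaining = points.copy()          # points is an (N, 2) array of (x, y)
    models = []
    for _ in range(n_models):
        best_inliers, best_line = None, None
        for _ in range(iters):
            i, j = rng.choice(len(remaining), size=2, replace=False)
            (x1, y1), (x2, y2) = remaining[i], remaining[j]
            if np.isclose(x1, x2):
                continue                               # skip degenerate pairs
            slope = (y2 - y1) / (x2 - x1)
            intercept = y1 - slope * x1
            resid = np.abs(remaining[:, 1] - (slope * remaining[:, 0] + intercept))
            inliers = resid < tol
            if best_inliers is None or inliers.sum() > best_inliers.sum():
                best_inliers, best_line = inliers, (slope, intercept)
        models.append(best_line)
        remaining = remaining[~best_inliers]           # remove explained points
    return models
```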


Author(s):  
Victor H Aguiar ◽  
Nail Kashaev

Abstract A long-standing question about consumer behaviour is whether individuals’ observed purchase decisions satisfy the revealed preference (RP) axioms of the utility maximization theory (UMT). Researchers using survey or experimental panel data sets on prices and consumption to answer this question face the well-known problem of measurement error. We show that ignoring measurement error in the RP approach may lead to overrejection of the UMT. To solve this problem, we propose a new statistical RP framework for consumption panel data sets that allows for testing the UMT in the presence of measurement error. Our test is applicable to all consumer models that can be characterized by their first-order conditions. Our approach is non-parametric, allows for unrestricted heterogeneity in preferences and requires only a centring condition on measurement error. We develop two applications that provide new evidence about the UMT. First, we find support in a survey data set for the dynamic and time-consistent UMT in single-individual households, in the presence of nonclassical measurement error in consumption. In the second application, we cannot reject the static UMT in a widely used experimental data set in which measurement error in prices is assumed to be the result of price misperception due to the experimental design. The first finding stands in contrast to the conclusions drawn from the deterministic RP test of Browning (1989, International Economic Review, 979–992). The second finding reverses the conclusions drawn from the deterministic RP test of Afriat (1967, International Economic Review, 8, 67–77) and Varian (1982, Econometrica, 945–973).
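For reference, the deterministic revealed-preference check of Afriat and Varian that the paper's statistical framework is contrasted with can be sketched as a GARP test over observed prices and bundles. The array names are assumptions; the paper's measurement-error-robust test is not reproduced here.

```python
# Hedged sketch of the deterministic GARP (Afriat/Varian) consistency check.
import numpy as np

def satisfies_garp(prices, quantities):
    # prices, quantities: T x K arrays of observed prices and bundles
    expenditure = prices @ quantities.T          # expenditure[i, j] = p_i . x_j
    own = np.diag(expenditure)
    R0 = expenditure <= own[:, None]             # i directly revealed preferred to j
    P0 = expenditure < own[:, None]              # strictly revealed preferred

    # Transitive closure of the revealed-preference relation (Warshall)
    R = R0.copy()
    T = len(own)
    for k in range(T):
        R = R | (R[:, [k]] & R[[k], :])

    # GARP is violated if x_i R x_j while x_j is strictly revealed preferred to x_i
    violation = R & P0.T
    np.fill_diagonal(violation, False)
    return not violation.any()
```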


2017 ◽  
Vol 5 (4) ◽  
pp. 1
Author(s):  
I. E. Okorie ◽  
A. C. Akpanta ◽  
J. Ohakwe ◽  
D. C. Chikezie ◽  
C. U. Onyemachi ◽  
...  

This paper introduces a new generator of probability distributions, the adjusted log-logistic generalized (ALLoG) distribution, and a new extension of the standard one-parameter exponential distribution called the adjusted log-logistic generalized exponential (ALLoGExp) distribution. The ALLoGExp distribution is a special case of the ALLoG distribution, and we provide some of its statistical and reliability properties. Notably, the failure rate can be monotonically decreasing, increasing, or upside-down bathtub shaped, depending on the values of the parameters $\delta$ and $\theta$. The method of maximum likelihood estimation is proposed to estimate the model parameters. The importance and flexibility of the ALLoGExp distribution are demonstrated with a real and uncensored lifetime data set, and its fit is compared with five other exponential-related distributions. The results obtained from the model fittings show that the ALLoGExp distribution provides a reasonably better fit than the other fitted distributions. The ALLoGExp distribution is therefore recommended for effective modelling of lifetime data sets.
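The model-comparison step described above can be sketched as fitting several exponential-related candidates to a lifetime sample and ranking them by an information criterion. The ALLoGExp density itself is not available in scipy, so only standard stand-in candidates are shown.

```python
# Hedged sketch: compare lifetime-distribution fits by AIC.
import numpy as np
from scipy import stats

def compare_lifetime_fits(data):
    candidates = {
        "exponential": stats.expon,
        "Weibull": stats.weibull_min,
        "gamma": stats.gamma,
        "log-logistic": stats.fisk,
    }
    results = {}
    for name, dist in candidates.items():
        params = dist.fit(data, floc=0)              # fix location at zero for lifetimes
        loglik = np.sum(dist.logpdf(data, *params))
        k = len(params) - 1                          # loc was held fixed
        results[name] = {"params": params, "AIC": 2 * k - 2 * loglik}
    # smallest AIC first
    return dict(sorted(results.items(), key=lambda kv: kv[1]["AIC"]))
```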


2021 ◽  
Vol 37 (3) ◽  
pp. 481-490
Author(s):  
Chenyong Song ◽  
Dongwei Wang ◽  
Haoran Bai ◽  
Weihao Sun

Highlights: The proposed data enhancement method can be used for small-scale data sets with rich sample image features. The accuracy of the new model reaches 98.5%, which is better than the traditional CNN method.

Abstract: GoogLeNet offers far better performance in identifying apple disease compared to traditional methods. However, the complexity of GoogLeNet is relatively high, and for small volumes of data it does not achieve the same performance as it does with large-scale data. We propose a new apple disease identification model using GoogLeNet’s inception module. The model adopts a variety of methods to improve its generalization ability. First, the data set is amplified using geometric transformation and image modification enhancement methods (including rotation, scaling, noise interference, random elimination, and color space enhancement), applied with random probabilities and in appropriate combinations. Second, we employ a deep convolutional generative adversarial network (DCGAN) to enhance the richness of generated images by increasing the diversity of the generator's noise distribution. Finally, we optimize the GoogLeNet model structure to reduce model complexity and the number of model parameters, making it more suitable for identifying apple tree diseases. The experimental results show that our approach quickly detects and classifies apple diseases including rust, spotted leaf disease, and anthrax. It outperforms the original GoogLeNet in recognition accuracy and model size, with identification accuracy reaching 98.5%, making it a feasible method for apple disease classification.

Keywords: Apple disease identification, Data enhancement, DCGAN, GoogLeNet.
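An augmentation pipeline along the lines listed in the abstract (rotation, scaling, noise interference, random elimination, color-space changes, applied with random probabilities) could be sketched as below. This is not the authors' exact recipe, and the DCGAN-based augmentation and the modified inception model are omitted.

```python
# Hedged sketch: randomized image augmentation for a small leaf-disease data set.
import torch
from torchvision import transforms

def add_gaussian_noise(img, std=0.02):
    # img is a tensor in [0, 1]; inject mild pixel noise
    return torch.clamp(img + torch.randn_like(img) * std, 0.0, 1.0)

train_transform = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.8, 1.0)),                  # scaling
    transforms.RandomRotation(degrees=30),                                 # rotation
    transforms.RandomApply([transforms.ColorJitter(0.3, 0.3, 0.3, 0.1)], p=0.5),
    transforms.ToTensor(),
    transforms.Lambda(add_gaussian_noise),                                 # noise interference
    transforms.RandomErasing(p=0.3),                                       # random elimination
])
```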


Author(s):  
Rajendra Prasad ◽  
Lalit Kumar Gupta ◽  
A. Beesham ◽  
G. K. Goswami ◽  
Anil Kumar Yadav

In this paper, we investigate an exact Bianchi type I Universe by taking into account the cosmological constant as the source of energy at the present epoch. We have performed a [Formula: see text] test to obtain the best-fit values of the model parameters of the Universe in the derived model. We have used two types of data sets, viz., (i) 31 values of the Hubble parameter and (ii) the 1048-point Pantheon data set of various supernovae distance moduli and apparent magnitudes. From both data sets, we have estimated the current values of the Hubble constant and the density parameters [Formula: see text] and [Formula: see text]. The dynamics of the deceleration parameter show that the Universe was in a decelerating phase for redshift [Formula: see text]. At a transition redshift [Formula: see text], the present Universe entered an accelerating phase of expansion. The current age of the Universe is obtained as [Formula: see text] Gyr. This is in good agreement with the value of [Formula: see text] calculated from the Planck Collaboration results and WMAP observations.
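A best-fit search against Hubble-parameter measurements of the kind described above could be sketched as follows, assuming the "[Formula: see text] test" refers to a chi-square minimization. The flat-LCDM expansion law below is only a stand-in; the paper's Bianchi type I model and the Pantheon supernova likelihood are not reproduced.

```python
# Hedged sketch: chi-square fit of an expansion law to H(z) measurements.
import numpy as np
from scipy.optimize import minimize

def fit_hubble_data(z_obs, H_obs, H_err):
    # z_obs, H_obs, H_err: arrays of redshifts, H(z) values, and 1-sigma errors

    def H_model(z, H0, Om):
        return H0 * np.sqrt(Om * (1 + z) ** 3 + (1 - Om))

    def chi2(params):
        H0, Om = params
        return np.sum(((H_obs - H_model(z_obs, H0, Om)) / H_err) ** 2)

    best = minimize(chi2, x0=[70.0, 0.3], method="Nelder-Mead")
    return best.x  # best-fit (H0, Omega_m)
```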


2021 ◽  
Author(s):  
Gah-Yi Ban ◽  
N. Bora Keskin

We consider a seller who can dynamically adjust the price of a product at the individual customer level, by utilizing information about customers’ characteristics encoded as a d-dimensional feature vector. We assume a personalized demand model, parameters of which depend on s out of the d features. The seller initially does not know the relationship between the customer features and the product demand but learns this through sales observations over a selling horizon of T periods. We prove that the seller’s expected regret, that is, the revenue loss against a clairvoyant who knows the underlying demand relationship, is at least of order [Formula: see text] under any admissible policy. We then design a near-optimal pricing policy for a semiclairvoyant seller (who knows which s of the d features are in the demand model) who achieves an expected regret of order [Formula: see text]. We extend this policy to a more realistic setting, where the seller does not know the true demand predictors, and show that this policy has an expected regret of order [Formula: see text], which is also near-optimal. Finally, we test our theory on simulated data and on a data set from an online auto loan company in the United States. On both data sets, our experimentation-based pricing policy is superior to intuitive and/or widely-practiced customized pricing methods, such as myopic pricing and segment-then-optimize policies. Furthermore, our policy improves upon the loan company’s historical pricing decisions by 47% in expected revenue over a six-month period. This paper was accepted by Noah Gans, stochastic models and simulation.
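A stylized explore-then-exploit loop conveys the basic idea of learning a feature-based demand model from sales and then pricing on the estimate. This is only an illustration of the experimentation-based pricing theme, not the paper's near-optimal policy or its regret guarantees; all names are hypothetical, and the sparse (s-out-of-d) structure is reflected only loosely through a lasso fit.

```python
# Hedged sketch: personalized pricing with a learned sparse linear demand model.
import numpy as np
from sklearn.linear_model import Lasso

def pricing_loop(contexts, observe_demand, price_grid, T_explore, seed=None):
    rng = np.random.default_rng(seed)
    price_grid = np.asarray(price_grid, dtype=float)
    model, X, demands = Lasso(alpha=0.05), [], []
    revenue = 0.0
    for t, x in enumerate(contexts):
        if t < T_explore:
            price = rng.choice(price_grid)                     # experimentation phase
        else:
            feats = np.array([np.append(x, p) for p in price_grid])
            price = price_grid[np.argmax(price_grid * model.predict(feats))]
        demand = observe_demand(x, price)                      # realized sales
        revenue += price * demand
        X.append(np.append(x, price))
        demands.append(demand)
        if t >= T_explore - 1:
            model.fit(np.array(X), np.array(demands))          # refresh demand estimate
    return revenue
```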


Geophysics ◽  
2016 ◽  
Vol 81 (2) ◽  
pp. ID1-ID24 ◽  
Author(s):  
Alan W. Roberts ◽  
Richard W. Hobbs ◽  
Michael Goldstein ◽  
Max Moorkamp ◽  
Marion Jegen ◽  
...  

Understanding the uncertainty associated with large joint geophysical surveys, such as 3D seismic, gravity, and magnetotelluric (MT) studies, is a challenge, both conceptually and practically. By demonstrating the use of emulators, we have adopted a Monte Carlo forward screening scheme to globally test a prior model space for plausibility. This methodology makes the incorporation of all types of uncertainty conceptually straightforward: candidate models are drawn from an appropriately designed prior model space, on which the results depend. We have tested the approach on a salt dome target, over which three data sets had been obtained: wide-angle seismic refraction, MT, and gravity data. We have considered the data sets together using an empirically measured, uncertain physical relationship connecting the three different model parameters (seismic velocity, density, and resistivity), and we have indicated the value of a joint approach, rather than considering individual parameter models. The results were probability density functions over the model parameters, together with a halite probability map. The emulators give a considerable speed advantage over running the full simulator codes, and we consider their use to have great potential in the development of geophysical statistical constraint methods.
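The emulator-based screening idea can be sketched as follows: train a cheap statistical emulator on a small number of expensive forward-model runs, then screen a large Monte Carlo sample from the prior by how plausibly the emulated output matches the observations. The Gaussian-process emulator and implausibility threshold here are illustrative assumptions, not the authors' specific emulator construction.

```python
# Hedged sketch: Monte Carlo forward screening of a prior model space via an emulator.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def screen_prior(forward_model, prior_samples, design, observed, obs_var, threshold=3.0):
    # Run the expensive simulator only on a small design of model vectors
    design_outputs = np.array([forward_model(m) for m in design])
    emulator = GaussianProcessRegressor(normalize_y=True).fit(design, design_outputs)

    # Emulate the forward model over the full prior sample
    pred, pred_std = emulator.predict(prior_samples, return_std=True)

    # Keep candidate models whose emulated output is statistically close to the data
    implausibility = np.abs(pred - observed) / np.sqrt(pred_std**2 + obs_var)
    return prior_samples[implausibility < threshold]
```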

