A method for estimating the probability distribution of the lifetime for new technical equipment based on expert judgement

2021, Vol 23 (4), pp. 757-769
Author(s):  
Karol Andrzejczak ◽  
Lech Bukowski

Managing the exploitation of technical equipment under conditions of uncertainty requires probabilistic prediction models in the form of probability distributions of the lifetime of these objects. The parameters of these distributions are estimated with statistical methods based on historical data about actual realizations of the lifetime of the examined objects. However, when completely new solutions are introduced into service, such data are not available, and the only feasible approach to an initial assessment of the expected lifetime of technical objects is expert judgement. The aim of the study is to present a method for estimating the probability distribution of the lifetime of new technical objects based on expert assessments of three parameters characterizing their expected lifetime. The method rests on a subjective Bayesian approach to randomness and is integrated with models of classical probability theory. Owing to its wide application in the maintenance of machinery and technical equipment, a Weibull model is proposed and its possible practical applications are shown. A new method of expert elicitation of probabilities for any continuous random variable is developed. A general procedure for applying this method is proposed, the individual steps of its implementation are discussed, and the mathematical models necessary for estimating the parameters of the probability distribution are presented. A practical example of the application of the developed method with specific numerical values is also given.
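
As a loose illustration of how such an elicitation might be turned into distribution parameters, the sketch below fits a two-parameter Weibull model to three expert-stated lifetime quantiles by linearizing the quantile function. The quantile levels, the lifetimes, and the least-squares fitting route are all assumptions made for illustration, not the authors' procedure.

```python
import numpy as np

# Hypothetical expert judgement: three lifetime quantiles (values and
# their probability levels). The numbers are invented for illustration.
probs = np.array([0.10, 0.50, 0.90])   # quantile levels
times = np.array([4.0, 9.0, 15.0])     # elicited lifetimes (years)

# Weibull quantile: t_p = lam * (-ln(1 - p))**(1/k). Taking logs gives
# a line, ln(-ln(1-p)) = k*ln(t) - k*ln(lam), so a least-squares fit
# recovers both parameters from the three elicited points.
x = np.log(times)
y = np.log(-np.log(1.0 - probs))
k, c = np.polyfit(x, y, 1)             # slope k, intercept c = -k*ln(lam)
lam = np.exp(-c / k)

print(f"shape k = {k:.3f}, scale lambda = {lam:.3f}")
# Implied reliability function R(t) = exp(-(t/lam)**k)
t = 10.0
print(f"R({t}) = {np.exp(-(t / lam) ** k):.3f}")
```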

2020, Vol 92 (6), pp. 51-58
Author(s):  
S.A. SOLOVYEV

The article describes a method for reliability (probability of non-failure) analysis of structural elements based on p-boxes. An algorithm for constructing two p-boxes is shown. The first p-box is used in the absence of information about the shape of a random variable's probability distribution. The second p-box is used when the probability distribution function is known but its parameters are given only inaccurately, as intervals. The reliability analysis algorithm is presented through a numerical example: the reliability analysis of a wooden beam in bending, with wood strength as the limit-state criterion. The result of the analysis is an interval bounding the non-failure probability. Recommendations are given for narrowing the reliability bounds by reducing epistemic uncertainty. On the basis of the proposed approach, particular methods of reliability analysis can be developed for any structural element. Design equations are given for a comprehensive assessment of the reliability of a structural element as a system, taking into account all the limit-state criteria.
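
For the second type of p-box (a known distribution shape with interval-valued parameters), the bounds on the non-failure probability can be sketched numerically. The distribution family, the parameter intervals, and the stress value below are invented for illustration; the article's own limit-state equations are not reproduced.

```python
from itertools import product
from statistics import NormalDist

# Illustrative (invented) numbers: wood bending strength modelled as
# Normal with interval parameters; the acting stress is fixed.
mu_lo, mu_hi = 32.0, 36.0   # mean strength, MPa
sd_lo, sd_hi = 3.0, 5.0     # standard deviation, MPa
stress = 24.0               # design bending stress, MPa

# The non-failure probability P(strength > stress) is bracketed by
# sweeping the parameter box; since the quantity is monotone in each
# parameter here, evaluating the four corners suffices.
rels = [1.0 - NormalDist(mu, sd).cdf(stress)
        for mu, sd in product((mu_lo, mu_hi), (sd_lo, sd_hi))]
print(f"non-failure probability in [{min(rels):.4f}, {max(rels):.4f}]")
```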


Risks, 2021, Vol 9 (4), pp. 70
Author(s):  
Małgorzata Just ◽  
Krzysztof Echaust

The appropriate choice of a threshold level, which separates the tails of the probability distribution of a random variable from its middle part, is considered a very complex and challenging task. This paper provides an empirical study of various methods of optimal tail selection in risk measurement. The results indicate which methods may be useful in practice for investors and for financial and regulatory institutions. Some methods that perform well in simulation studies based on theoretical distributions may not perform well when real data are in use. We analyze twelve methods with different parameters for forty-eight world indices, using returns from the period 2000–Q1 2020 and four sub-periods. The research objective is to compare the methods and to identify those which can be recognized as useful in risk measurement. The results suggest that only four tail selection methods may be useful in practical applications: the Path Stability algorithm, minimization of the Asymptotic Mean Squared Error, the automated Eyeball method with carefully selected tuning parameters, and the Hall single bootstrap procedure.
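
The flavor of such threshold-selection procedures can be sketched as follows: compute the Hill estimator along a path of candidate exceedance counts k and pick a flat region of the path. This is only a crude stand-in for the Path Stability idea, with invented data and an arbitrary window width, not the calibrated procedures compared in the paper.

```python
import numpy as np

rng = np.random.default_rng(42)
# Illustrative heavy-tailed losses (Pareto with tail index 3); real
# applications would use negative index returns instead.
losses = rng.pareto(3.0, size=2000) + 1.0

x = np.sort(losses)[::-1]              # descending order statistics
ks = np.arange(10, 500)
# Hill estimator of 1/alpha for each candidate number of exceedances
# k: mean of log(X_(i)/X_(k+1)) over the k largest observations.
hill = np.array([np.mean(np.log(x[:k] / x[k])) for k in ks])

# Crude path-stability stand-in: slide a window over the Hill path,
# pick the flattest window, and take its midpoint k* as the choice;
# the threshold is then the corresponding order statistic.
w = 50
spread = np.array([hill[i:i + w].std() for i in range(len(ks) - w)])
k_star = ks[np.argmin(spread) + w // 2]
print(f"chosen k = {k_star}, threshold u = {x[k_star]:.3f}, "
      f"tail index ~ {1.0 / hill[k_star - ks[0]]:.2f}")
```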


1980, Vol 17 (4), pp. 1016-1024
Author(s):  
K. D. Glazebrook

A collection of jobs is to be processed by a single machine. The amount of processing required by each job is a random variable with a known probability distribution. The jobs must be processed in a manner consistent with a precedence relation, but the machine is free to switch from one job to another at any time; such switches are costly, however. This paper discusses conditions under which there is an optimal strategy for allocating the machine to the jobs that is given by a fixed permutation of the jobs, indicating the order in which they should be processed. When this is so, existing algorithms may be helpful in finding the best job ordering.
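
When a fixed permutation is known to be optimal, finding it reduces to a search over orderings consistent with the precedence relation. The toy sketch below enumerates such permutations for a hypothetical instance and scores each by expected flow time plus switching costs; the cost model and all numbers are assumptions for illustration, not Glazebrook's conditions.

```python
from itertools import permutations

# Toy instance (invented): expected processing times, one precedence
# constraint, and a fixed cost per switch between jobs.
jobs = {"A": 3.0, "B": 1.0, "C": 2.0}   # expected processing times
prec = [("A", "C")]                     # A must precede C
switch_cost = 0.5

def respects(order, prec):
    pos = {j: i for i, j in enumerate(order)}
    return all(pos[a] < pos[b] for a, b in prec)

def expected_cost(order):
    # Expected sum of completion times plus switching costs, assuming
    # each job runs to completion in the listed order.
    t, total = 0.0, 0.0
    for j in order:
        t += jobs[j]
        total += t
    return total + switch_cost * (len(order) - 1)

best = min((o for o in permutations(jobs) if respects(o, prec)),
           key=expected_cost)
print(best, expected_cost(best))
```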


2016, Vol 5 (4), pp. 106-113
Author(s):  
Tamer El Nashar

The objective of this paper is to examine the impact of inclusive business on internal ethical values and internal control quality from an accounting perspective. The hypothesis is that directing organizations' awareness toward the inclusive business approach will significantly affect organizational culture, and in turn ethical values and internal control quality. To analyze this potential impact, I use a test based on the expected value and variance of a random variable, supported by both discrete and continuous probability distributions. I find a probability of 85.5% that inclusive business has a significant potential impact, at a 100% score, on internal ethical values and internal control quality, and that it can thereby help contribute to sustainable growth, poverty reduction, and improved organizational culture and learning.
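
The expected-value and variance calculation underlying such a test can be sketched for a hypothetical discrete impact score; the score values and probabilities below are invented for illustration and are not the paper's data.

```python
# Minimal sketch: mean and variance of a discrete random variable.
scores = [0, 25, 50, 75, 100]            # impact score on a 0-100 scale
probs  = [0.02, 0.04, 0.08, 0.31, 0.55]  # must sum to 1

mean = sum(s * p for s, p in zip(scores, probs))
var = sum((s - mean) ** 2 * p for s, p in zip(scores, probs))
print(f"E[X] = {mean:.1f}, Var(X) = {var:.1f}, sd = {var ** 0.5:.1f}")
```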


Author(s):  
Shuguang Song ◽  
Hanlin Liu ◽  
Mimi Zhang ◽  
Min Xie

In this paper, we propose and study a new bivariate Weibull model, called the Bi-level Weibull Model, which arises when one failure occurs after the other. Under some specific regularity conditions, the reliability function of the second event can lie above the reliability function of the first event, and always lies above the reliability function of the transformed first event, which is a univariate Weibull random variable. The model is motivated by a common physical feature that arises in several real applications. The two marginal distributions are a Weibull distribution and a generalized three-parameter Weibull mixture distribution. Some useful properties of the model are derived, and we also present the maximum likelihood estimation method. A real example is provided to illustrate the application of the model.
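
The article's exact construction is not reproduced here, but the qualitative feature (the second failure occurring after the first, with its reliability curve lying above that of the first event) can be illustrated by simulation under an assumed additive-delay mechanism:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Loose illustration only (not the authors' construction): the first
# failure time T1 is Weibull, and the second failure occurs a further
# Weibull-distributed delay after the first, so T2 > T1 always and
# R2(t) >= R1(t) for all t.
t1 = 2.0 * rng.weibull(1.5, n)           # scale 2.0, shape 1.5
t2 = t1 + 1.0 * rng.weibull(1.2, n)      # second event strictly later

for t in np.linspace(0.0, 10.0, 6):
    print(f"t={t:4.1f}  R1(t)={(t1 > t).mean():.3f}  "
          f"R2(t)={(t2 > t).mean():.3f}")
```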


2005, Vol 23 (6), pp. 429-461
Author(s):  
Ian Lerche ◽  
Brett S. Mudford

This article derives an estimation procedure to evaluate how many Monte Carlo realisations need to be done in order to achieve prescribed accuracies in the estimated mean value, and also in the cumulative probabilities of achieving values greater than, or less than, a particular value as that chosen value is allowed to vary. In addition, by inverting the argument and asking what accuracies result for a prescribed number of Monte Carlo realisations, one can assess the computer time that would be involved should one choose to carry out the realisations. The arguments and numerical illustrations are carried through in detail for four distributions: lognormal, binomial, Cauchy, and exponential. The procedure is valid for any choice of distribution function: the general method given in Lerche and Mudford (2005) is not merely a coincidence owing to the nature of the Gaussian distribution but is of universal validity. This article provides (in the Appendices) the general procedure for obtaining equivalent results for any distribution and shows quantitatively how the procedure operates for the four specific distributions. The methodology is therefore available for any choice of probability distribution function.

Some distributions need more than two parameters to be defined precisely. Estimates of the mean value and the standard error around the mean determine only two parameters for each distribution, so any distribution with more than two parameters has degrees of freedom that either have to be constrained from other information or are unknown and can be freely specified. That fluidity in such distributions allows a similar fluidity in the estimates of the number of Monte Carlo realisations needed to achieve prescribed accuracies, as well as in the estimates of achievable accuracy for a prescribed number of realisations. Without some way to control the free parameters in such distributions one will, presumably, always have such dynamic uncertainties. Even when the free parameters are known precisely, considerable uncertainty remains in determining the number of Monte Carlo realisations needed to achieve prescribed accuracies, and in the accuracies achievable with a prescribed number of Monte Carlo realisations, because of the different functional forms of probability distribution from which one can choose to draw the realisations. Without knowledge of the underlying distribution functions appropriate to a given problem, the choices made in the numerical implementation of the basic logic procedure will presumably bias the estimates of achievable accuracy and of the number of realisations one should undertake.

The cautionary note, which is the main point of this article and is exhibited sharply with numerical illustrations, is that one must specify precisely which distributions one is using and precisely which free parameter values one has chosen (and why) when assessing the achievable accuracy and the number of Monte Carlo realisations needed. Without such information it is not a very useful exercise to undertake Monte Carlo realisations, because other investigations, using other distributions and other values of the available free parameters, will arrive at very different conclusions.
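
The core accuracy argument for the mean can be sketched directly: by the central limit theorem the standard error of a Monte Carlo mean is σ/√N, so a prescribed half-width ε at confidence factor z requires roughly N = (zσ/ε)² realisations. The lognormal parameters and targets below are illustrative only, not values from the article.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative lognormal case: log-mean 0, log-sd 1.
mu, sd = 0.0, 1.0
sigma = np.sqrt((np.exp(sd**2) - 1) * np.exp(2 * mu + sd**2))  # lognormal std
eps, z = 0.01, 1.96                   # target half-width, 95% confidence

# Required number of realisations for the mean: N = (z*sigma/eps)**2.
n = int(np.ceil((z * sigma / eps) ** 2))
print(f"required realisations: N ~ {n}")

# Empirical check: the achieved half-width should be close to eps.
sample = rng.lognormal(mu, sd, n)
print(f"achieved half-width: {z * sample.std(ddof=1) / np.sqrt(n):.4f}")
```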


Author(s):  
Xiuyang Zou ◽  
Ji Pan ◽  
Zhe Sun ◽  
Bowen Wang ◽  
Zhiyu Jin ◽  
...  

The degradation of anion exchange membranes (AEMs) has hindered the practical application of alkaline membrane fuel cells. This issue has inspired a large number of both experimental and theoretical studies. However,...


Vestnik NSUEM, 2021, pp. 146-155
Author(s):  
A. V. Ganicheva ◽  
A. V. Ganichev

The problem of reducing the number of observations needed to construct a confidence interval for the variance with a given accuracy and reliability is considered. The new method of constructing an interval estimate of the variance developed in the article is formulated in three statements and justified by four proven theorems. Formulas are derived for calculating the required number of observations as a function of the accuracy and reliability of the estimate. The results of the calculations are presented in a table and shown in a diagram. The universality and effectiveness of the method are demonstrated: it is universal in that it applies to any probability distribution law, not only the normal law, and its effectiveness is justified by comparing its performance with other known methods.
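
For comparison, the classical normal-theory route (which the article's distribution-free method is designed to improve upon) sizes the sample from the chi-square confidence interval for the variance. The sketch below, assuming SciPy is available, searches for the smallest n meeting a hypothetical relative-width target; it does not reproduce the article's method.

```python
from scipy.stats import chi2

# Normal-theory (1-alpha) CI for the variance from a sample of size n:
#   [ (n-1)s^2 / chi2_{1-a/2, n-1} , (n-1)s^2 / chi2_{a/2, n-1} ].
alpha = 0.05
target_ratio = 1.5   # hypothetical accuracy target: upper/lower bound <= 1.5

def ci_ratio(n):
    # Ratio of CI bounds; s^2 cancels, so it depends only on n.
    lo = (n - 1) / chi2.ppf(1 - alpha / 2, n - 1)
    hi = (n - 1) / chi2.ppf(alpha / 2, n - 1)
    return hi / lo

n = 2
while ci_ratio(n) > target_ratio:
    n += 1
print(f"observations needed (normal theory): n = {n}")
```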


Author(s):  
M. Vidyasagar

This chapter provides an introduction to some elementary aspects of information theory, including entropy in its various forms. Entropy refers to the level of uncertainty associated with a random variable (or more precisely, the probability distribution of the random variable). When there are two or more random variables, it is worthwhile to study the conditional entropy of one random variable with respect to another. The last concept is relative entropy, also known as the Kullback–Leibler divergence, which measures the “disparity” between two probability distributions. The chapter first considers convex and concave functions before discussing the properties of the entropy function, conditional entropy, uniqueness of the entropy function, and the Kullback–Leibler divergence.
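
These quantities are straightforward to compute for discrete distributions; a minimal sketch (with an invented joint distribution) follows.

```python
import numpy as np

def entropy(p):
    """Shannon entropy H(p) in bits; 0*log(0) treated as 0."""
    p = np.asarray(p, float)
    nz = p > 0
    return -np.sum(p[nz] * np.log2(p[nz]))

def kl_divergence(p, q):
    """Relative entropy D(p||q) in bits; requires q > 0 wherever p > 0."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    nz = p > 0
    return np.sum(p[nz] * np.log2(p[nz] / q[nz]))

# Invented joint distribution of (X, Y): rows index x, columns index y.
pxy = np.array([[0.25, 0.25],
                [0.40, 0.10]])
px, py = pxy.sum(axis=1), pxy.sum(axis=0)

# Conditional entropy via the chain rule: H(Y|X) = H(X,Y) - H(X).
h_y_given_x = entropy(pxy.ravel()) - entropy(px)
print(f"H(X)      = {entropy(px):.3f} bits")
print(f"H(Y|X)    = {h_y_given_x:.3f} bits")
print(f"D(px||py) = {kl_divergence(px, py):.3f} bits")
```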

