A Discussion on Simple Support Functions

Author(s):  
D.N. Kandekar

Predicting each and every incident in our daily life is impossible, but we can predict some of them. Probability is the most helpful tool for predicting the outcomes or conclusions of such incidents. The incidents in our life always follow some known or unknown statistical probability distribution, which may have a simple or a complicated probability density function. Therefore, with the help of probability distributions, we get at least a rough idea of how the incidents in our life behave. Using some commonly used probability distributions, we obtain conclusions that are helpful in decision making. Support functions, in particular simple support functions, are very useful in decision making. In this paper, we quote some results and applications regarding simple support functions based on probability transformations.
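Since the abstract names simple support functions without defining them, here is a rough illustrative sketch (the frame of discernment and masses are hypothetical, not the paper's examples): a simple support function commits a mass s to one focal set and the remaining 1 - s to the whole frame.

```python
# Illustrative sketch only: a simple support function on a frame Theta commits
# mass s to a single focal set A and the remainder 1 - s to Theta itself.

def simple_support(frame, focal, s):
    """Basic mass assignment of a simple support function focused on `focal`."""
    assert 0.0 <= s <= 1.0 and focal < frame
    return {frozenset(focal): s, frozenset(frame): 1.0 - s}

def belief(mass, subset):
    """Bel(B): total mass of focal elements contained in B."""
    return sum(m for focal, m in mass.items() if focal <= frozenset(subset))

theta = {"rain", "no_rain"}
m = simple_support(theta, {"rain"}, 0.7)    # evidence supports "rain" to degree 0.7
print(belief(m, {"rain"}))                  # 0.7
print(belief(m, theta))                     # 1.0
```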

2015, Vol. 56
Author(s):  
Valentinas Podvezko ◽  
Askoldas Podviezko

Multiple criteria decision-making (MCDM) methods, designed to evaluate the attractiveness of available alternatives, imply the active participation of experts whenever they are used in decision-aid systems. Experts participate in all stages of evaluation: selecting the set of criteria that should describe the evaluated process or alternative, estimating the level of importance of each criterion, and estimating the values of some criteria and sub-criteria. Social and economic processes are subject to the laws of statistics, which are described, and can be forecast, using probability theory. Weights of criteria, which reveal their levels of importance, can rarely be estimated with absolute precision; the uncertainty of an evaluation is characterised by a probability distribution. To elicit evaluations from experts, we therefore have to find either a distribution function or a density function. A statistical simulation method can be used to estimate experts' evaluations of the weights and/or values of criteria. Alternatively, the character of the related uncertainty can be estimated by the expert himself during the survey. The aim of this paper is to describe algorithms of expert evaluation with estimation of opinion uncertainty that have been applied in practice. In particular, a new algorithm is proposed in which an expert evaluates criteria by probability distributions.
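A hedged sketch of the statistical simulation idea mentioned above (not the authors' algorithm): each criterion's importance is assumed to be described by a normal distribution elicited from an expert, and Monte Carlo sampling gives the resulting distribution of the normalised weights. The criteria, means and spreads are hypothetical.

```python
# Monte Carlo sketch of weight elicitation under assumed normal expert opinions.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical expert input: (mean, standard deviation) per criterion.
criteria = {"cost": (0.40, 0.05), "quality": (0.35, 0.07), "delivery": (0.25, 0.04)}

samples = np.column_stack([
    np.clip(rng.normal(mu, sigma, size=10_000), 1e-6, None)   # keep draws positive
    for mu, sigma in criteria.values()
])
weights = samples / samples.sum(axis=1, keepdims=True)        # each draw sums to 1

for name, mean, sd in zip(criteria, weights.mean(axis=0), weights.std(axis=0)):
    print(f"{name}: weight {mean:.3f} (sd {sd:.3f})")
```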


Author(s):  
HAI-YEN HAU

Shafer’s theory of evidence has been used to deal with uncertainty in many artificial intelligence applications. In this paper, we show that in a hierarchically structured hypothesis space, any belief function whose focal elements are nodes in the hierarchy is a separable support function. We propose an algorithm that decomposes such a separable support function into simple support functions, and we show that the computational complexity of this decomposition algorithm is O(N²). Applications of the decomposition of separable support functions to the data fusion problem and to reasoning about the control problem are discussed.
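For illustration only (this is not the paper's O(N²) decomposition algorithm): a separable support function is, by definition, a Dempster combination of simple support functions. The sketch below combines two simple support functions whose focal elements are nested nodes of a small, hypothetical hierarchy.

```python
# Combining simple support functions with Dempster's rule yields a separable
# support function; the hierarchy and masses below are hypothetical.
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule of combination for basic mass assignments over frozensets."""
    combined, conflict = {}, 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

theta = frozenset({"car", "truck", "bus"})          # root of the hierarchy
heavy = frozenset({"truck", "bus"})                 # an internal node

m1 = {heavy: 0.6, theta: 0.4}                       # simple support for {truck, bus}
m2 = {frozenset({"truck"}): 0.5, theta: 0.5}        # simple support for {truck}
print(dempster_combine(m1, m2))                     # a separable support function
```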


Author(s):  
Kelachi P. Enwere ◽  
Uchenna P. Ogoke

Aims: The study seeks to determine the relationships that exist among continuous probability distributions and to use interpolation techniques to estimate unavailable but desired values of a given probability distribution. Study Design: Statistical probability tables for the Normal, Student's t, Chi-squared, F and Gamma distributions were used to compare interpolated values with tabulated values. Charts and tables were used to represent the relationships among the five probability distributions. Methodology: The linear interpolation technique was employed to interpolate unavailable but desired values so as to obtain approximate values from the statistical tables. The data were analyzed by interpolating unavailable but desired values at the 95% level for the five continuous probability distributions. Results: Interpolated values are close to the exact values, and the difference between the exact value and the interpolated value is not pronounced. The tables and charts established show that relationships do exist among the Normal, Student's t, Chi-squared, F and Gamma distributions. Conclusion: Interpolation techniques can be applied to obtain unavailable but desired information in a data set. Thus, uncertainty found in a data set can be discovered, analyzed and interpreted to produce the desired results. Moreover, understanding how these probability distributions are related to each other can inform how best they can be used interchangeably by statisticians and other researchers who apply statistical methods in practical applications.
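A minimal sketch of the linear interpolation technique described above, using standard chi-squared table entries (not the study's data): the 0.95 quantile at 34 degrees of freedom, which many printed tables omit, is approximated from the tabulated entries at 30 and 40 degrees of freedom.

```python
# Linear interpolation between two tabulated critical values.

def linear_interpolate(x, x0, y0, x1, y1):
    """Estimate y at x on the straight line through (x0, y0) and (x1, y1)."""
    return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

# Standard chi-squared 0.95 quantiles: 43.773 at 30 df and 55.758 at 40 df.
approx = linear_interpolate(34, 30, 43.773, 40, 55.758)
print(round(approx, 3))   # 48.567, close to the exact value 48.602
```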


Author(s):  
V. S. Huzurbazar

Let f(x, αi) be the probability density function of a distribution depending on n parameters αi (i = 1, 2, …, n). Then, following Jeffreys (1), we shall say that the parameters αi are orthogonal if
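The displayed condition is cut off in this record; for reference, the standard orthogonality condition in Jeffreys' sense is the vanishing of the off-diagonal elements of the expected information matrix:

```latex
% Standard Jeffreys orthogonality condition (supplied here because the
% displayed equation is truncated in this record):
E\!\left[\frac{\partial^{2} \log f}{\partial \alpha_i \, \partial \alpha_j}\right] = 0
\qquad \text{for all } i \neq j .
```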


2011, Vol. 09 (supp01), pp. 39-47
Author(s):  
ALESSIA ALLEVI ◽  
MARIA BONDANI ◽  
ALESSANDRA ANDREONI

We present the experimental reconstruction of the Wigner function of some optical states. The method is based on direct intensity measurements by non-ideal photodetectors operated in the linear regime. The signal state is mixed at a beam-splitter with a set of coherent probes of known complex amplitudes and the probability distribution of the detected photons is measured. The Wigner function is given by a suitable sum of these probability distributions measured for different values of the probe. For comparison, the same data are analyzed to obtain the number distributions and the Wigner functions for photons.
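As a numerical sketch of the textbook relation underlying such reconstructions (the paper treats non-ideal detectors; this is the ideal-detector limit, checked on a hypothetical vacuum signal): the Wigner function at the probe amplitude is an alternating sum of the displaced photon-number probabilities.

```python
# Ideal-detector relation: W(alpha) = (2/pi) * sum_n (-1)^n p_n(alpha), where
# p_n(alpha) is the photon-number distribution of the displaced signal.
import numpy as np
from math import factorial

def wigner_point(photon_number_probs):
    """Wigner function value at the probe amplitude from the displaced p_n."""
    n = np.arange(len(photon_number_probs))
    return (2.0 / np.pi) * np.sum((-1.0) ** n * np.asarray(photon_number_probs))

# Check on a case with a known answer: a vacuum signal displaced by the probe
# has a Poissonian p_n with mean |alpha|^2, and W(alpha) = (2/pi) exp(-2|alpha|^2).
alpha_sq = 0.5
p_n = [np.exp(-alpha_sq) * alpha_sq**k / factorial(k) for k in range(60)]
print(wigner_point(p_n), (2 / np.pi) * np.exp(-2 * alpha_sq))   # the two agree
```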


2021, Vol. 5 (1), pp. 1-11
Author(s):  
Vitthal Anwat ◽  
Pramodkumar Hire ◽  
Uttam Pawar ◽  
Rajendra Gunjal

The Flood Frequency Analysis (FFA) method was introduced by Fuller in 1914 to understand the magnitude and frequency of floods. The present study was carried out using the two most widely accepted probability distributions for FFA, namely the Gumbel Extreme Value type I (GEVI) and the Log-Pearson type III (LP-III). The Kolmogorov-Smirnov (KS) and Anderson-Darling (AD) tests were used to select the most suitable probability distribution at sites in the Damanganga Basin. Moreover, discharges were estimated for various return periods using GEVI and LP-III. The recurrence interval of the largest peak flood on record (Qmax) is 107 years at Nanipalsan and 146 years at Ozarkhed as per LP-III. The Flood Frequency Curves (FFC) indicate that LP-III is the best-fitted probability distribution for FFA of the Damanganga Basin. Therefore, the discharges and return periods estimated by the LP-III probability distribution are more reliable and can be used for designing hydraulic structures.
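A hedged sketch of the workflow described above, with hypothetical annual peak discharges: fit a Gumbel (extreme value type I) distribution with scipy, check the fit with the Kolmogorov-Smirnov statistic, and estimate the discharge for a return period T from the quantile at probability 1 - 1/T.

```python
# Gumbel fit, KS goodness-of-fit check, and return-period quantiles.
import numpy as np
from scipy import stats

peaks = np.array([1450., 2100., 980., 3120., 1760., 2540., 1890., 4020.,
                  1230., 2875.])                    # m^3/s, hypothetical record

loc, scale = stats.gumbel_r.fit(peaks)              # maximum-likelihood fit
ks_stat, p_value = stats.kstest(peaks, "gumbel_r", args=(loc, scale))
print(f"KS statistic = {ks_stat:.3f}, p-value = {p_value:.3f}")

for T in (50, 100):                                 # return periods in years
    q = stats.gumbel_r.ppf(1 - 1 / T, loc=loc, scale=scale)
    print(f"{T}-year flood: about {q:.0f} m^3/s")
```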


Author(s):  
J. L. Cagney ◽  
S. S. Rao

Abstract The modeling of manufacturing errors in mechanisms is a significant task in validating practical designs. The use of probability distributions for errors can simulate manufacturing variations and real-world operations. This paper presents the mechanical error analysis of universal joint drivelines. Each error is simulated using a probability distribution, i.e., a design of the mechanism is created by assigning random values to the errors. Each design is then evaluated by comparing the output error with a limiting value, and the reliability of the universal joint is estimated. For this, a design is considered a failure whenever the output error exceeds the specified limit. In addition, the problem of synthesis, which involves the allocation of tolerances (errors) for minimum manufacturing cost without violating a specified accuracy requirement of the output, is also considered. Three probability distributions (normal, Weibull and beta) were used to simulate the random values of the errors. The similarity of the results given by the three distributions suggests that the use of the normal distribution would be acceptable for modeling the tolerances in most cases.
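A minimal Monte Carlo sketch of the reliability idea described above; the linearised error model, tolerances and accuracy limit are hypothetical, not the paper's universal-joint kinematics.

```python
# Reliability as the fraction of sampled designs whose output error is within limit.
import numpy as np

rng = np.random.default_rng(42)
N = 100_000

# Three error sources, each drawn from a normal tolerance distribution.
e1 = rng.normal(0.0, 0.02, N)
e2 = rng.normal(0.0, 0.015, N)
e3 = rng.normal(0.0, 0.01, N)

output_error = np.abs(1.2 * e1 - 0.8 * e2 + 1.5 * e3)   # assumed sensitivity coefficients
limit = 0.06                                            # specified accuracy requirement
reliability = np.mean(output_error <= limit)            # fraction of acceptable designs
print(f"Estimated reliability: {reliability:.3f}")
```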


2021, Vol. 118 (40), pp. e2025782118
Author(s):  
Wei-Chia Chen ◽  
Juannan Zhou ◽  
Jason M. Sheltzer ◽  
Justin B. Kinney ◽  
David M. McCandlish

Density estimation in sequence space is a fundamental problem in machine learning that is also of great importance in computational biology. Due to the discrete nature and large dimensionality of sequence space, how best to estimate such probability distributions from a sample of observed sequences remains unclear. One common strategy for addressing this problem is to estimate the probability distribution using maximum entropy (i.e., calculating point estimates for some set of correlations based on the observed sequences and predicting the probability distribution that is as uniform as possible while still matching these point estimates). Building on recent advances in Bayesian field-theoretic density estimation, we present a generalization of this maximum entropy approach that provides greater expressivity in regions of sequence space where data are plentiful while still maintaining a conservative maximum entropy character in regions of sequence space where data are sparse or absent. In particular, we define a family of priors for probability distributions over sequence space with a single hyperparameter that controls the expected magnitude of higher-order correlations. This family of priors then results in a corresponding one-dimensional family of maximum a posteriori estimates that interpolate smoothly between the maximum entropy estimate and the observed sample frequencies. To demonstrate the power of this method, we use it to explore the high-dimensional geometry of the distribution of 5′ splice sites found in the human genome and to understand patterns of chromosomal abnormalities across human cancers.
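As a toy illustration only (not the authors' Bayesian field-theoretic method), the sketch below shows a one-parameter family of estimates over a tiny sequence space that interpolates between the observed sample frequencies and a uniform, maximum-entropy-like estimate; the sample and the smoothing rule are hypothetical.

```python
# One hyperparameter `a` moves the estimate from raw sample frequencies (a -> 0)
# toward the uniform distribution over sequence space (a -> infinity).
from collections import Counter
from itertools import product

alphabet = "ACGT"
k = 2
observed = ["AC", "AC", "AG", "GT", "AC", "TT"]           # hypothetical sample

space = ["".join(p) for p in product(alphabet, repeat=k)]  # all length-k sequences
counts = Counter(observed)
n = len(observed)

def estimate(a):
    """Interpolated probability estimate for every sequence in the space."""
    return {s: (counts[s] + a) / (n + a * len(space)) for s in space}

for a in (0.0, 1.0, 100.0):
    p = estimate(a)
    print(f"a={a:>5}: P(AC)={p['AC']:.3f}  P(AA)={p['AA']:.3f}")
```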


2016, Vol. 11 (1), pp. 432-440
Author(s):  
M. T. Amin ◽  
M. Rizwan ◽  
A. A. Alazba

This study was designed to find the best-fit probability distribution of annual maximum rainfall based on a twenty-four-hour sample in the northern regions of Pakistan using four probability distributions: normal, log-normal, log-Pearson type-III and Gumbel max. Based on the scores of goodness of fit tests, the normal distribution was found to be the best-fit probability distribution at the Mardan rainfall gauging station. The log-Pearson type-III distribution was found to be the best-fit probability distribution at the rest of the rainfall gauging stations. The maximum values of expected rainfall were calculated using the best-fit probability distributions and can be used by design engineers in future research.
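A hedged sketch of the model-selection step described above, with hypothetical rainfall data: fit several candidate distributions available in scipy and rank them by the Kolmogorov-Smirnov statistic. Note that scipy's pearson3 stands in here for log-Pearson type III, which applies Pearson III to log-transformed data.

```python
# Rank candidate distributions for annual maximum 24-hour rainfall by KS statistic.
import numpy as np
from scipy import stats

rainfall = np.array([62., 48., 95., 71., 55., 120., 88., 67., 102., 76.])  # mm, hypothetical

candidates = {"normal": stats.norm, "log-normal": stats.lognorm,
              "Gumbel max": stats.gumbel_r, "Pearson III": stats.pearson3}

scores = {}
for name, dist in candidates.items():
    params = dist.fit(rainfall)                     # maximum-likelihood parameters
    ks_stat, _ = stats.kstest(rainfall, dist.cdf, args=params)
    scores[name] = ks_stat

for name, ks in sorted(scores.items(), key=lambda kv: kv[1]):   # best fit first
    print(f"{name:12s} KS = {ks:.3f}")
```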


2018, Vol. 23, pp. 00037
Author(s):  
Stanisław Węglarczyk

Kernel density estimation is a technique for estimating a probability density function that enables the user to analyse the studied probability distribution better than a traditional histogram does. Unlike the histogram, the kernel technique produces a smooth estimate of the pdf, uses the locations of all sample points, and suggests multimodality more convincingly. In two-dimensional applications, kernel estimation is even more advantageous, since a 2D histogram additionally requires the orientation of the 2D bins to be defined. Two concepts play a fundamental role in kernel estimation: the shape of the kernel function and the coefficient of smoothness, of which the latter is crucial to the method. Several real-life examples, for both univariate and bivariate applications, are shown.
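A short sketch of the univariate case described above: a Gaussian-kernel density estimate of a synthetic bimodal sample, where the coefficient of smoothness (the bandwidth h) controls how strongly the data are smoothed.

```python
# Gaussian kernel density estimate with an explicit bandwidth.
import numpy as np

rng = np.random.default_rng(1)
sample = np.concatenate([rng.normal(-2.0, 0.5, 150), rng.normal(1.5, 0.8, 100)])

def kde(x, data, h):
    """Gaussian kernel density estimate evaluated at points x with bandwidth h."""
    u = (x[:, None] - data[None, :]) / h
    return np.exp(-0.5 * u**2).sum(axis=1) / (len(data) * h * np.sqrt(2 * np.pi))

grid = np.linspace(-4.0, 4.0, 9)
for h in (0.2, 0.6):                       # under- vs. over-smoothing
    print(f"h={h}:", np.round(kde(grid, sample, h), 3))
```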

