combine probability
Recently Published Documents


TOTAL DOCUMENTS: 5 (FIVE YEARS: 4)

H-INDEX: 1 (FIVE YEARS: 0)

2021
Volume 17, Issue 3
Author(s):
Filippo Bonchi
Alexandra Silva
Ana Sokolova

Probabilistic automata (PA), also known as probabilistic nondeterministic labelled transition systems, combine probability and nondeterminism. They can be given different semantics, like strong bisimilarity, convex bisimilarity, or (more recently) distribution bisimilarity. The latter is based on the view of PA as transformers of probability distributions, also called belief states, and promotes distributions to first-class citizens. We give a coalgebraic account of distribution bisimilarity, and explain the genesis of the belief-state transformer from a PA. To do so, we make explicit the convex algebraic structure present in PA and identify belief-state transformers as transition systems whose state space carries a convex algebra. As a consequence of our abstract approach, we obtain a sound proof technique that we call bisimulation up-to convex hull. Comment: Full (extended) version of a CONCUR 2017 paper; minor revision of the LMCS submission.
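A minimal Python sketch of the belief-state view described above: the transitions of a PA are lifted to a map on probability distributions over states. The states, labels, and transition probabilities below are hypothetical, and nondeterminism is resolved by an explicit scheduler rather than by the paper's convex-algebraic machinery.

```python
# Transitions: state -> label -> list of successor distributions.
# The list captures nondeterminism; each distribution maps states to probabilities.
PA = {
    "s0": {"a": [{"s1": 0.5, "s2": 0.5}, {"s1": 1.0}]},
    "s1": {"a": [{"s1": 1.0}]},
    "s2": {"a": [{"s2": 1.0}]},
}

def step(belief, label, scheduler):
    """Push a belief state (dict state -> probability) through one labelled step.

    `scheduler` resolves nondeterminism by picking, for each state, an index
    into that state's list of successor distributions.  The result is the
    convex combination of the chosen distributions, weighted by the belief.
    """
    successor = {}
    for state, prob in belief.items():
        dist = PA[state][label][scheduler(state)]
        for target, p in dist.items():
            successor[target] = successor.get(target, 0.0) + prob * p
    return successor

# Start fully in s0 and always take the first available distribution.
print(step({"s0": 1.0}, "a", scheduler=lambda s: 0))
# -> {'s1': 0.5, 's2': 0.5}
```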


Author(s):  
Josimar Vasconcelos
Renato Cintra
Abraão Nascimento

In recent years, various probability models have been proposed for describing lifetime data. Increasing model flexibility is often sought as a means to better describe asymmetric and heavy-tailed distributions. Such extensions were pioneered by the beta-G family. However, efficient goodness-of-fit (GoF) measures for the beta-G distributions are still needed. In this paper, we combine probability weighted moments (PWMs) and the Mellin transform (MT) to furnish new qualitative and quantitative GoF tools for model selection within the beta-G class. We derive PWMs for the Fréchet and Kumaraswamy distributions, and we provide expressions for the MT and for the log-cumulants (LC) of the beta-Weibull, beta-Fréchet, beta-Kumaraswamy, and beta-log-logistic distributions. Subsequently, we construct LC diagrams and, based on Hotelling's $T^2$ statistic, derive confidence ellipses for the LCs. Finally, the proposed GoF measures are applied to five real data sets to demonstrate their applicability.
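As an illustration of the first ingredient, here is a small sketch of estimating probability weighted moments from a sample. It uses the standard order-statistics estimator of $\beta_r = E[X\,F(X)^r]$, not the paper's closed-form expressions for the beta-G family; the Weibull sample is purely illustrative.

```python
import numpy as np

def sample_pwm(x, r):
    """Standard sample estimate of the probability weighted moment
    beta_r = E[X * F(X)^r], built from the order statistics of the sample:
    b_r = (1/n) * sum_i [(i-1)(i-2)...(i-r) / ((n-1)(n-2)...(n-r))] * x_(i).
    """
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    weights = np.ones(n)
    for j in range(1, r + 1):
        weights *= (np.arange(1, n + 1) - j) / (n - j)
    return np.mean(weights * x)

# Example: beta_0 is the sample mean; beta_1 and beta_2 feed into
# L-moment / log-cumulant style diagnostics.
rng = np.random.default_rng(0)
sample = rng.weibull(1.5, size=1000)
print([round(sample_pwm(sample, r), 4) for r in (0, 1, 2)])
```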


2021
Author(s):
Anca Hanea
David Peter Wilkinson
Marissa McBride
Aidan Lyon
Don van Ravenzwaaij
...  

Experts are often asked to represent their uncertainty as a subjective probability. Structured protocols offer a transparent and systematic way to elicit and combine probability judgements from multiple experts. As part of this process, experts are asked to individually estimate a probability (e.g., of a future event), which then needs to be combined/aggregated into a final group prediction. The experts' judgements can be aggregated behaviourally (by striving for consensus) or mathematically (by using a mathematical rule to combine individual estimates). Mathematical rules (e.g., weighted linear combinations of judgements) provide an objective approach to aggregation. However, the choice of a rule is not straightforward, and the quality of the aggregated group probability judgement depends on it. The quality of an aggregation can be defined in terms of accuracy, calibration, and informativeness. These measures can be used to compare different aggregation approaches and help decide which aggregation produces the "best" final prediction. In the ideal case, individual experts' performance (as probability assessors) is scored, these scores are translated into performance-based weights, and a performance-based weighted aggregation is used. When this is not possible, though, several other aggregation methods, informed by measurable proxies for good performance, can be formulated and compared. We use several data sets to investigate the relative performance of multiple aggregation methods informed by previous experience and the available literature. Even though the accuracy, calibration, and informativeness of the majority of methods are very similar, two of the aggregation methods distinguish themselves as the best and worst.
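A small sketch of the mathematical route: a weighted linear combination of individual probability estimates, with weights derived from each expert's Brier score on past calibration questions. The data, the inverse-score weighting, and the function names are illustrative and are not the specific aggregation methods compared in the paper.

```python
import numpy as np

def brier_weights(past_judgements, outcomes):
    """Turn each expert's Brier score on past calibration questions into a
    normalised performance weight (lower Brier score -> larger weight)."""
    judgements = np.asarray(past_judgements, dtype=float)  # experts x questions
    outcomes = np.asarray(outcomes, dtype=float)           # 0/1 per question
    brier = np.mean((judgements - outcomes) ** 2, axis=1)
    raw = 1.0 / (brier + 1e-9)          # simple inverse-score weighting
    return raw / raw.sum()

def linear_pool(probabilities, weights=None):
    """Aggregate individual probability estimates by a (weighted) linear
    combination; equal weights if none are supplied."""
    probabilities = np.asarray(probabilities, dtype=float)
    if weights is None:
        weights = np.full(len(probabilities), 1.0 / len(probabilities))
    return float(np.dot(weights, probabilities))

# Three experts judged four past events (1 = the event occurred).
past = [[0.8, 0.2, 0.9, 0.6],
        [0.5, 0.5, 0.5, 0.5],
        [0.9, 0.1, 0.7, 0.8]]
happened = [1, 0, 1, 1]
w = brier_weights(past, happened)

# Their probabilities for a new event, aggregated two ways.
new_event = [0.7, 0.5, 0.85]
print("equal weights:      ", linear_pool(new_event))
print("performance weights:", linear_pool(new_event, w))
```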


Author(s):  
Palash Dutta

This chapter presents an approach that combines, within the same framework, probability distributions whose parameters (mean and standard deviation) are imprecise (fuzzy numbers) with fuzzy numbers (FNs) of various types and shapes. The amalgamation of probability distributions and fuzzy numbers is carried out through three algorithms, and human health risk assessment is performed using them. The chapter thus offers a way to perform human health risk assessment that is more effective because it can represent the uncertainties of the risk assessment model in their own fashion, and it assists scientists, environmentalists, and experts in performing human health risk assessment with more reliable outputs.
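One common way to realise such a combination (illustrative only, not the chapter's three algorithms) is alpha-cut propagation: at each membership level, the fuzzy parameters yield intervals, and the probability of interest becomes interval-valued. The sketch below assumes SciPy and a hypothetical normal exposure model with triangular fuzzy mean and standard deviation.

```python
import numpy as np
from scipy.stats import norm

def alpha_cut(tfn, alpha):
    """Interval (alpha-cut) of a triangular fuzzy number (a, b, c)."""
    a, b, c = tfn
    return a + alpha * (b - a), c - alpha * (c - b)

def fuzzy_exceedance(threshold, fuzzy_mean, fuzzy_sd, alphas=np.linspace(0, 1, 11)):
    """For each membership level alpha, bound P(X > threshold) when X is
    normal with fuzzy mean and fuzzy standard deviation; the result is an
    interval-valued (fuzzy) risk estimate."""
    bounds = []
    for alpha in alphas:
        lo_m, hi_m = alpha_cut(fuzzy_mean, alpha)
        lo_s, hi_s = alpha_cut(fuzzy_sd, alpha)
        # Evaluate at the corners of the parameter box; adequate here because
        # the exceedance probability is monotone in both parameters for a
        # threshold above the largest possible mean.
        probs = [norm.sf(threshold, loc=m, scale=s)
                 for m in (lo_m, hi_m) for s in (lo_s, hi_s)]
        bounds.append((alpha, min(probs), max(probs)))
    return bounds

# Hypothetical exposure model: dose ~ Normal(mean, sd) with fuzzy parameters.
for alpha, lo, hi in fuzzy_exceedance(2.0, fuzzy_mean=(0.8, 1.0, 1.2),
                                      fuzzy_sd=(0.4, 0.5, 0.6)):
    print(f"alpha={alpha:.1f}: P(dose > 2.0) in [{lo:.4f}, {hi:.4f}]")
```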


1997
Vol 5 (2)
pp. 123-141
Author(s):
Rafal Salustowicz
Jürgen Schmidhuber

Probabilistic incremental program evolution (PIPE) is a novel technique for automatic program synthesis. We combine probability vector coding of program instructions, population-based incremental learning, and tree-coded programs like those used in some variants of genetic programming (GP). PIPE iteratively generates successive populations of functional programs according to an adaptive probability distribution over all possible programs. In each iteration, it uses the best program to refine the distribution. Thus, it stochastically generates better and better programs. Since distribution refinements depend only on the best program of the current population, PIPE can evaluate program populations efficiently when the goal is to discover a program with minimal runtime. We compare PIPE to GP on a function regression problem and the 6-bit parity problem. We also use PIPE to solve tasks in partially observable mazes, where the best programs have minimal runtime.
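A much-simplified sketch of the PIPE update rule: the tree-coded programs and the probabilistic prototype tree are replaced here by a fixed-length instruction sequence with one probability vector per position, shifted toward the best program of each generation. The instruction set, task, and learning rate are illustrative, not from the paper.

```python
import random

INSTRUCTIONS = ["+1", "-1", "*2", "noop"]
LENGTH, POP, GENERATIONS, LR = 6, 30, 40, 0.2
TARGET = 10  # toy goal: transform 0 into 10

def run(program):
    x = 0
    for op in program:
        if op == "+1": x += 1
        elif op == "-1": x -= 1
        elif op == "*2": x *= 2
    return x

def fitness(program):
    return -abs(run(program) - TARGET)  # higher is better

# One probability vector per instruction position, initially uniform.
probs = [[1.0 / len(INSTRUCTIONS)] * len(INSTRUCTIONS) for _ in range(LENGTH)]

for _ in range(GENERATIONS):
    # Sample a population from the current adaptive distribution.
    population = [
        [random.choices(INSTRUCTIONS, weights=probs[i])[0] for i in range(LENGTH)]
        for _ in range(POP)
    ]
    best = max(population, key=fitness)
    # Refine the distribution using only the best program of this population.
    for i, op in enumerate(best):
        j = INSTRUCTIONS.index(op)
        probs[i] = [(1 - LR) * p + (LR if k == j else 0.0)
                    for k, p in enumerate(probs[i])]

print("best program:", best, "->", run(best))
```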

