Forecasting Using Information and Entropy Based on Belief Functions

Complexity ◽  
2020 ◽  
Vol 2020 ◽  
pp. 1-16
Author(s):  
Woraphon Yamaka ◽  
Songsak Sriboonchitta

This paper introduces an entropy-based belief function for the forecasting problem. Unlike the likelihood-based belief function, it does not require knowledge of the distribution underlying the prediction objective, which matters because the observed-data likelihood is often complex in practice. We therefore replace the likelihood function with entropy and propose an approach in which the belief function is built from the entropy function. As an illustration, the proposed method is compared with the likelihood-based belief function in simulation and empirical studies. The results show that our approach performs well under a wide array of simulated data models and distributions. The evidence indicates that the frequentist method produces much narrower prediction intervals, while our entropy-based method produces the widest. Nevertheless, the entropy-based belief function still yields acceptable prediction intervals, as the true values always lie within them.
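As a rough illustration of the likelihood-based construction the paper compares against, the sketch below builds a normalized likelihood contour for a normal mean and reads off the region where plausibility exceeds a threshold; the authors' entropy-based contour would replace the log-likelihood score, whose exact form is not given in the abstract. The data, model, and threshold are illustrative.

```python
# Hypothetical sketch: likelihood-based plausibility contour for a normal mean,
# marking the score slot where an entropy-based criterion would be substituted.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.normal(loc=2.0, scale=1.0, size=100)          # simulated data

grid = np.linspace(x.mean() - 1.0, x.mean() + 1.0, 401)
loglik = np.array([stats.norm.logpdf(x, loc=m, scale=x.std(ddof=1)).sum()
                   for m in grid])

# Likelihood-based contour: pl(theta) = L(theta) / sup L(theta)
pl = np.exp(loglik - loglik.max())

# Interval of parameter values whose plausibility exceeds alpha
alpha = 0.1
inside = grid[pl >= alpha]
print("plausibility interval for the mean:", inside.min(), inside.max())
```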

Author(s):  
Jianping Fan ◽  
Jing Wang ◽  
Meiqin Wu

The two-dimensional belief function (TDBF = (mA, mB)) uses an ordered pair of basic probability assignments to describe and process uncertain information. Here, mB encodes the support degree, non-support degree, and unmeasured reliability of mA, so it is richer and more reasonable than a traditional discount coefficient for expressing expert evaluations. However, relying only on expert assessments is one-sided; the interaction between the belief functions themselves must also be considered. A divergence between belief functions can measure how far two belief functions differ, and from it the support degree, non-support degree, and unmeasured reliability of the evidence can be calculated. Building on such a divergence measure, this paper proposes an extended two-dimensional belief function that resolves certain evidence-conflict problems, is more objective, and handles a class of problems that the original TDBF cannot. Numerical examples illustrate its effectiveness and rationality.
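The abstract does not specify which divergence is used, so the sketch below uses the Jousselme distance between mass functions as a stand-in for a belief-function divergence; the frame, focal sets, and masses are illustrative.

```python
# Hypothetical sketch: a distance between two mass functions on the frame {a, b, c},
# used here as a stand-in for the paper's divergence measure. Focal sets are frozensets.
import itertools
import numpy as np

frame = frozenset({"a", "b", "c"})
subsets = [frozenset(s) for r in range(1, 4) for s in itertools.combinations(frame, r)]

def jousselme_distance(m1, m2):
    """Jousselme et al. distance: sqrt(0.5 * (m1-m2)^T D (m1-m2)),
    with D[A, B] = |A & B| / |A | B|."""
    v1 = np.array([m1.get(A, 0.0) for A in subsets])
    v2 = np.array([m2.get(A, 0.0) for A in subsets])
    D = np.array([[len(A & B) / len(A | B) for B in subsets] for A in subsets])
    diff = v1 - v2
    return float(np.sqrt(0.5 * diff @ D @ diff))

mA = {frozenset({"a"}): 0.6, frozenset({"a", "b"}): 0.3, frame: 0.1}
mB = {frozenset({"b"}): 0.7, frame: 0.3}
print(jousselme_distance(mA, mB))
```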


2016 ◽  
Author(s):  
Kassian Kobert ◽  
Alexandros Stamatakis ◽  
Tomáš Flouri

The phylogenetic likelihood function is the major computational bottleneck in several applications of evolutionary biology, such as phylogenetic inference, species delimitation, model selection, and divergence time estimation. Given the alignment, a tree, and the evolutionary model parameters, the likelihood function computes the conditional likelihood vectors for every node of the tree. Vector entries for which all input data are identical result in redundant likelihood operations which, in turn, yield identical conditional values. Such operations can be omitted to improve run time and, with appropriate data structures, to reduce memory usage. We present a fast, novel method for identifying and omitting such redundant operations in phylogenetic likelihood calculations, and assess the performance improvement and memory savings attained by our method. Using empirical and simulated data sets, we show that a prototype implementation of our method yields up to 10-fold speedups and uses up to 78% less memory than one of the fastest and most highly tuned implementations of the phylogenetic likelihood function currently available. Our method is generic and can seamlessly be integrated into any phylogenetic likelihood implementation.
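A minimal sketch of the underlying idea, assuming a Jukes-Cantor model and a single parent node: identical child patterns hash to the same cache entry, so the conditional likelihood vector is computed only once. This is not the authors' implementation, only the memoization idea.

```python
# Hypothetical sketch: when two alignment sites induce identical conditional
# likelihood entries at a node's children, the parent entry is computed once and reused.
import numpy as np
from functools import lru_cache

STATES = 4  # A, C, G, T

def jc69_pmatrix(t):
    """Jukes-Cantor transition probability matrix for branch length t."""
    same = 0.25 + 0.75 * np.exp(-4.0 * t / 3.0)
    diff = 0.25 - 0.25 * np.exp(-4.0 * t / 3.0)
    return np.where(np.eye(STATES, dtype=bool), same, diff)

P_left, P_right = jc69_pmatrix(0.1), jc69_pmatrix(0.2)

@lru_cache(maxsize=None)
def parent_clv(left_clv, right_clv):
    """Conditional likelihood vector of the parent for one site,
    cached on the (hashable) child vectors so repeated patterns are free."""
    L = np.array(left_clv)
    R = np.array(right_clv)
    return tuple((P_left @ L) * (P_right @ R))

# Two sites with identical child patterns -> the second call is a cache hit.
tip_A = (1.0, 0.0, 0.0, 0.0)   # observed 'A'
tip_C = (0.0, 1.0, 0.0, 0.0)   # observed 'C'
print(parent_clv(tip_A, tip_C))
print(parent_clv(tip_A, tip_C))          # reused, no recomputation
print(parent_clv.cache_info())
```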


Author(s):  
Rajendra P. Srivastava ◽  
Mari W. Buche ◽  
Tom L. Roberts

The purpose of this chapter is to demonstrate the use of the evidential reasoning approach under the Dempster-Shafer (D-S) theory of belief functions to analyze revealed causal maps (RCM). Participants from information technology (IT) organizations provided the concepts to describe the target phenomenon of Job Satisfaction and identified the associations between the concepts. This chapter discusses the steps necessary to transform a causal map into an evidential diagram. The evidential diagram can then be analyzed using the belief-function technique with survey data, thereby extending the research from a discovery and explanation stage to testing and prediction. An example is provided to demonstrate these steps. This chapter also provides the basics of the Dempster-Shafer theory of belief functions and a step-by-step description of the propagation of beliefs in tree-like evidential diagrams.
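Since the chapter rests on combining and propagating beliefs, the following sketch shows Dempster's rule of combination for two mass functions; the frame and masses are illustrative, not taken from the chapter's survey data.

```python
# A minimal sketch of Dempster's rule of combination, the propagation primitive
# belief-function analysis builds on.
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions given as {frozenset: mass} dicts."""
    combined, conflict = {}, 0.0
    for (A, a), (B, b) in product(m1.items(), m2.items()):
        inter = A & B
        if inter:
            combined[inter] = combined.get(inter, 0.0) + a * b
        else:
            conflict += a * b
    if conflict >= 1.0:
        raise ValueError("totally conflicting evidence")
    return {A: v / (1.0 - conflict) for A, v in combined.items()}

# Illustrative evidence about job satisfaction being 'high' vs 'low' from two sources.
theta = frozenset({"high", "low"})
m_survey    = {frozenset({"high"}): 0.6, theta: 0.4}
m_interview = {frozenset({"high"}): 0.5, frozenset({"low"}): 0.2, theta: 0.3}
print(dempster_combine(m_survey, m_interview))
```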


2019 ◽  
Vol 286 (1906) ◽  
pp. 20190384 ◽  
Author(s):  
P.-L. Jan ◽  
L. Lehnen ◽  
A.-L. Besnard ◽  
G. Kerth ◽  
M. Biedermann ◽  
...  

The speed and dynamics of range expansions shape species distributions and community composition. Despite the critical impact of population growth rates on range expansion, they are neglected in existing empirical studies, which focus on the investigation of selected life-history traits. Here, we present an approach based on non-invasive genetic capture–mark–recapture data for estimating adult survival, fecundity and juvenile survival, which together determine population growth. We demonstrate the reliability of our method with simulated data, and use it to investigate life-history changes associated with range expansion in 35 colonies of the bat species Rhinolophus hipposideros. Comparing the demographic parameters inferred for the 19 colonies belonging to an expanding population with those inferred for the remaining 16 colonies from a non-expanding population reveals that range expansion is associated with higher net reproduction. Juvenile survival was the main driver of the observed increase in reproduction in this long-lived bat species with low per capita annual reproductive output. The higher average growth rate in the expanding population was not associated with a trade-off between increased reproduction and survival, suggesting that the observed increase in reproduction stems from higher resource acquisition in the expanding population. Environmental conditions in the novel habitat hence seem to have an important influence on range expansion dynamics, and warrant further investigation for the management of range expansion in both native and invasive species.
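As a toy illustration of how the three estimated quantities determine growth, the sketch below assembles a two-stage projection matrix and reports its dominant eigenvalue; the stage structure and parameter values are illustrative and not the paper's estimates.

```python
# Hypothetical sketch: adult survival, juvenile survival and fecundity combined
# into a population growth rate via a two-stage (juvenile, adult) projection matrix.
import numpy as np

def growth_rate(adult_survival, juvenile_survival, fecundity):
    """Dominant eigenvalue of a 2-stage projection matrix."""
    A = np.array([
        [0.0,               fecundity],        # juveniles produced per adult
        [juvenile_survival, adult_survival],   # recruitment and adult survival
    ])
    return float(np.max(np.real(np.linalg.eigvals(A))))

print(growth_rate(adult_survival=0.85, juvenile_survival=0.45, fecundity=0.5))
print(growth_rate(adult_survival=0.85, juvenile_survival=0.60, fecundity=0.5))  # higher juvenile survival
```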


2011 ◽  
Vol 19 (2) ◽  
pp. 188-204 ◽  
Author(s):  
Jong Hee Park

In this paper, I introduce changepoint models for binary and ordered time series data based on Chib's hidden Markov model. The extension of the changepoint model to a binary probit model is straightforward in a Bayesian setting. However, detecting parameter breaks from ordered regression models is difficult because ordered time series data often have clustering along the break points. To address this issue, I propose an estimation method that uses the linear regression likelihood function for the sampling of hidden states of the ordinal probit changepoint model. The marginal likelihood method is used to detect the number of hidden regimes. I evaluate the performance of the introduced methods using simulated data and apply the ordinal probit changepoint model to the study of Eichengreen, Watson, and Grossman on violations of the “rules of the game” of the gold standard by the Bank of England during the interwar period.
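A toy illustration of why a break is detectable in binary probit data; the paper's actual machinery (Chib's hidden Markov sampler, the linear-regression likelihood trick for the ordinal case, and marginal likelihoods) is not reproduced here, and all values are illustrative.

```python
# Hypothetical sketch: a binary time series with one intercept break, compared under
# a no-break and a one-break probit model by maximized log-likelihood.
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(1)
T, tau = 200, 120
beta = np.where(np.arange(T) < tau, -0.8, 0.8)          # intercept shifts at tau
y = (rng.normal(size=T) < beta).astype(int)              # probit data-generating process

def probit_loglik(b, y):
    p = stats.norm.cdf(b)
    return np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

def fit_segment(y):
    """Maximized log-likelihood of an intercept-only probit on one segment."""
    res = optimize.minimize_scalar(lambda b: -probit_loglik(b, y),
                                   bounds=(-5, 5), method="bounded")
    return -res.fun

ll_no_break = fit_segment(y)
ll_break = max(fit_segment(y[:t]) + fit_segment(y[t:]) for t in range(20, T - 20))
print("log-likelihood, no break:", ll_no_break)
print("log-likelihood, best single break:", ll_break)
```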


Author(s):  
PETER P. WAKKER

This paper shows that a "principle of complete ignorance" plays a central role in decisions based on Dempster belief functions. Such belief functions occur when, in a first stage, a random message is received and then, in a second stage, a true state of nature obtains. The uncertainty about the random message in the first stage is assumed to be probabilized, in agreement with the Bayesian principles. For the uncertainty in the second stage no probabilities are given. The Bayesian and belief function approaches part ways in the processing of the uncertainty in the second stage. The Bayesian approach requires that this uncertainty also be probabilized, which may require a resort to subjective information. Belief functions follow the principle of complete ignorance in the second stage, which permits strict adherence to objective inputs.
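A small worked example of the two-stage setup may help: message probabilities are known in the first stage, while the second stage only reports which states each message is compatible with, so belief and plausibility bracket the unprobabilized part. Names and numbers are illustrative.

```python
# Illustrative two-stage construction: stage one yields a random message with known
# probability, stage two only says which states are compatible with that message.
messages = {
    "m1": (0.5, frozenset({"s1"})),          # message m1: prob 0.5, points to state s1
    "m2": (0.3, frozenset({"s1", "s2"})),    # m2 compatible with s1 or s2 (no further info)
    "m3": (0.2, frozenset({"s2", "s3"})),
}

def belief(A):
    """Bel(A): total probability of messages whose compatible set lies inside A."""
    return sum(p for p, S in messages.values() if S <= A)

def plausibility(A):
    """Pl(A): total probability of messages whose compatible set meets A."""
    return sum(p for p, S in messages.values() if S & A)

A = frozenset({"s1", "s2"})
print(belief(A), plausibility(A))   # 0.8 and 1.0 here
```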


Author(s):  
Ivan Kramosil

The possibility of defining a binary operation on pairs of belief functions that is inverse, or dual, to the well-known Dempster combination rule, in the same sense in which subtraction is dual to addition on the real numbers, is an important problem both algebraically and from the application point of view. It offers a way to undo the modification of a belief function that results from combining it with other pieces of information later found to be unreliable. In the space of classical belief functions definable by set-valued (generalized) random variables on a probability space, this invertibility problem can be shown to be unsolvable except in trivial cases. However, when the notion of a belief function is generalized by replacing the probability space with a more general measurable space equipped with a signed measure, inverse belief functions can be defined for a large class of belief functions generalized in the corresponding way. The "dual" combination rule is then defined by applying the Dempster rule to the inverse belief functions.
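One way to see the obstruction numerically: the unnormalized (conjunctive) form of Dempster's rule multiplies commonality functions pointwise, so retracting evidence amounts to pointwise division, and the mass function recovered by Moebius inversion can have negative entries, which is exactly where signed measures enter. The example below is an illustrative sketch, not the paper's construction.

```python
# Hedged sketch: dividing commonality functions to "retract" evidence can yield
# signed masses, showing why inverses require generalized (signed) belief functions.
import itertools

frame = frozenset({"x", "y"})
subsets = [frozenset(s) for r in range(len(frame) + 1)
           for s in itertools.combinations(frame, r)]   # includes the empty set

def commonality(m):
    """Q(A) = sum of m(B) over all B containing A."""
    return {A: sum(v for B, v in m.items() if A <= B) for A in subsets}

def mass_from_commonality(Q):
    """Moebius inversion: m(A) = sum over B >= A of (-1)^|B\\A| * Q(B)."""
    return {A: sum((-1) ** len(B - A) * Q[B] for B in subsets if A <= B)
            for A in subsets}

m12 = {frame: 1.0}                                  # current (vacuous) belief function
m2  = {frozenset({"x"}): 0.5, frame: 0.5}           # evidence we try to retract
Q12, Q2 = commonality(m12), commonality(m2)
Q = {A: Q12[A] / Q2[A] for A in subsets}
print(mass_from_commonality(Q))   # the mass on frozenset({'x'}) comes out negative (-1.0)
```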


Author(s):  
Chunlai Zhou ◽  
Biao Qin ◽  
Xiaoyong Du

In this paper, we provide an axiomatic justification for decision making with belief functions by studying the belief-function counterpart of Savage's Theorem, where the state space is finite and the consequence set is a continuum [l, M] (l < M). We propose six axioms for a preference relation over acts and show that this axiomatization admits a definition of qualitative belief functions, comparing preferences over events, that guarantees the existence of a belief function on the state space. The key axioms are uniformity and an analogue of the independence axiom. The uniformity axiom ensures that all acts with the same maximal and minimal consequences are equivalent, while our independence axiom yields the existence of a utility function and implies the uniqueness of the belief function on the state space. Moreover, we prove, without the independence axiom, the neutrality theorem that two acts are indifferent whenever they generate the same belief functions over consequences. At the end of the paper, we compare our approach with other related decision theories for belief functions.
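As a hedged illustration of the neutrality property, the sketch below pushes a mass function on a finite state space forward through two acts that differ only on states inside one focal set; both induce the same belief function over consequences. The acts and numbers are illustrative.

```python
# Illustrative pushforward of a mass function through an act f: state -> consequence.
def induced_mass(m_states, act):
    """Mass over consequences induced by an act acting on each focal set."""
    m_cons = {}
    for A, v in m_states.items():
        image = frozenset(act[s] for s in A)
        m_cons[image] = m_cons.get(image, 0.0) + v
    return m_cons

m_states = {frozenset({"s1"}): 0.4, frozenset({"s2", "s3"}): 0.6}

f = {"s1": 1.0, "s2": 0.2, "s3": 0.8}   # act f: consequences drawn from [l, M] = [0, 1]
g = {"s1": 1.0, "s2": 0.8, "s3": 0.2}   # act g permutes consequences on {s2, s3}

print(induced_mass(m_states, f))   # {frozenset({1.0}): 0.4, frozenset({0.2, 0.8}): 0.6}
print(induced_mass(m_states, g))   # identical, so f and g should be indifferent
```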


2021 ◽  
Vol 2021 ◽  
pp. 1-27
Author(s):  
Awad A. Bakery ◽  
Wael Zakaria ◽  
OM Kalthum S. K. Mohamed

The generalized Gamma model has been applied in a variety of research fields, including reliability engineering and lifetime analysis. Its support, however, is unbounded, while in many applications the data occupy a bounded domain. This paper presents a new five-parameter bounded generalized Gamma model, which contains the four-parameter bounded Weibull model, the four-parameter bounded Gamma model, the three-parameter bounded generalized Gaussian model, the three-parameter bounded exponential model, and the two-parameter bounded Rayleigh model as special cases. Working on a bounded support allows great versatility in fitting various shapes of observed data. Numerous properties of the proposed distribution are derived, including explicit expressions for the moments, quantiles, mode, moment generating function, mean, variance, mean residual lifetime, entropies, skewness, kurtosis, hazard function, survival function, rth order statistic, and median distributions. The distribution has hazard rates that are monotonically increasing or decreasing, bathtub-shaped, or upside-down bathtub-shaped. We use the Newton-Raphson approach to approximate the model parameters that maximize the log-likelihood function; some of the parameters admit a closed iterative form. Six real data sets and six simulated data sets are used to demonstrate how the proposed model works in practice. We illustrate why the model is more stable and less affected by sample size. Additionally, the suggested model is very accurate for wavelet histogram fitting of images and sounds.
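The five-parameter model itself cannot be reconstructed from the abstract, but the sketch below shows the general recipe of maximum-likelihood fitting on a bounded support, using a Gamma density renormalized to [0, b] and a quasi-Newton optimizer in place of the authors' Newton-Raphson updates; all names and values are illustrative.

```python
# Hedged sketch: maximum-likelihood fit of a distribution with bounded support,
# here a Gamma truncated to [0, b] (not the paper's five-parameter model).
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(2)
b = 3.0                                   # known upper bound of the support
raw = rng.gamma(shape=2.0, scale=1.0, size=2000)
x = raw[raw < b]                          # illustrative bounded data

def neg_loglik(params):
    shape, scale = np.exp(params)         # keep parameters positive
    dist = stats.gamma(a=shape, scale=scale)
    # density renormalized to [0, b]: f(x) / F(b)
    return -(np.sum(dist.logpdf(x)) - x.size * np.log(dist.cdf(b)))

res = optimize.minimize(neg_loglik, x0=np.log([1.0, 1.0]), method="BFGS")
shape_hat, scale_hat = np.exp(res.x)
print(shape_hat, scale_hat)
```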


Author(s):  
Yinlam Chow ◽  
Brandon Cui ◽  
Moonkyung Ryu ◽  
Mohammad Ghavamzadeh

Model-based reinforcement learning (RL) algorithms allow us to combine model-generated data with those collected from interaction with the real system in order to alleviate the data efficiency problem in RL. However, designing such algorithms is often challenging because the bias in simulated data may overshadow the ease of data generation. A potential solution to this challenge is to jointly learn and improve model and policy using a universal objective function. In this paper, we leverage the connection between RL and probabilistic inference, and formulate such an objective function as a variational lower-bound of a log-likelihood. This allows us to use expectation maximization (EM) and iteratively fix a baseline policy and learn a variational distribution, consisting of a model and a policy (E-step), followed by improving the baseline policy given the learned variational distribution (M-step). We propose model-based and model-free policy iteration (actor-critic) style algorithms for the E-step and show how the variational distribution learned by them can be used to optimize the M-step in a fully model-based fashion. Our experiments on a number of continuous control tasks show that our model-based (E-step) algorithm, called variational model-based policy optimization (VMBPO), is more sample-efficient and robust to hyper-parameter tuning than its model-free (E-step) counterpart. Using the same control tasks, we also compare VMBPO with several state-of-the-art model-based and model-free RL algorithms and show its sample efficiency and performance.
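A toy structural sketch of the alternation the abstract describes, with stand-ins for everything: the E-step refits a one-parameter model from rollouts of the fixed baseline policy (the variational policy and the actual lower-bound objective are omitted), and the M-step improves the baseline entirely inside the learned model. The dynamics, reward, and line-search update are illustrative, not the authors' algorithm.

```python
# Toy sketch of an EM-style model-based loop: E-step fits a model from baseline
# rollouts, M-step improves the policy fully inside the learned model.
import numpy as np

rng = np.random.default_rng(3)

def true_dynamics(s, a):
    return 0.9 * s + a + 0.05 * rng.normal()

def reward(s, a):
    return -(s ** 2 + 0.1 * a ** 2)

baseline_policy = 0.0          # toy linear policy: a = k * s
model_coef = 0.5               # toy learned model: s' = c * s + a

for em_iter in range(50):
    # E-step: roll out the baseline policy and refit the model by least squares
    states, actions, next_states = [], [], []
    s = 1.0
    for _ in range(30):
        a = baseline_policy * s + 0.1 * rng.normal()
        s_next = true_dynamics(s, a)
        states.append(s); actions.append(a); next_states.append(s_next)
        s = s_next
    S, A, S1 = map(np.array, (states, actions, next_states))
    model_coef = np.sum((S1 - A) * S) / np.sum(S * S)

    # M-step: improve the baseline policy against the learned model (toy line search)
    candidates = baseline_policy + np.linspace(-0.2, 0.2, 21)
    def model_return(k, horizon=30):
        s, total = 1.0, 0.0
        for _ in range(horizon):
            a = k * s
            total += reward(s, a)
            s = model_coef * s + a
        return total
    baseline_policy = max(candidates, key=model_return)

print("learned model coefficient:", model_coef)
print("improved policy gain:", baseline_policy)
```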

