probability metrics
Recently Published Documents

TOTAL DOCUMENTS: 71 (five years: 12)
H-INDEX: 14 (five years: 0)

2021
Author(s): Shannon M Locke, Michael S Landy, Pascal Mamassian

Perceptual confidence is an important internal signal about the certainty of our decisions, and there is substantial debate about how it is computed. We highlight three types of confidence metric from the literature: observers use (1) the full probability distribution to compute probability correct (Probability metrics), (2) point estimates from the perceptual decision process to estimate uncertainty (Evidence-Strength metrics), or (3) heuristic confidence derived from stimulus-based cues to uncertainty (Heuristic metrics). These metrics are rarely tested against one another, so we examined models of all three types on a suprathreshold spatial discrimination task. Observers were shown a cloud of dots sampled from a dot-generating distribution and judged whether the mean of the distribution was left or right of centre. In addition to varying the horizontal position of the mean, there were two sensory uncertainty manipulations: the number of dots sampled and the spread of the generating distribution. After every two perceptual decisions, observers made a confidence forced-choice judgement of whether they were more confident in the first or second decision. Model results showed that observers were, on average, best fit by a Heuristic model that used dot cloud position, spread, and number of dots as cues. However, almost half of the observers were best fit by an Evidence-Strength model that computes confidence from the distance between the discrimination criterion and a point estimate, scaled according to sensory uncertainty. This signal-to-noise ratio model outperformed the standard unscaled distance-from-criterion model favoured by many researchers, suggesting that the simpler model may not be suitable for mixed-difficulty designs. An accidental repetition of some sessions also allowed for the measurement of confidence agreement for identical pairs of stimuli.
This N-pass analysis revealed that human observers were more consistent than their best-fitting model would predict, indicating there are still aspects of confidence that are not captured by our model. As such, we propose confidence agreement as a useful technique for computational studies of confidence. Taken together, these findings highlight the idiosyncratic nature of confidence computations for complex decision contexts and the need to consider different potential metrics and transformations in the confidence computation.
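The Evidence-Strength (signal-to-noise ratio) computation described above can be sketched as follows. This is a minimal illustration, not the authors' fitted model: the criterion, dot counts, and spreads are assumed values chosen only to show how the same mean offset yields different confidence under different sensory uncertainty.

```python
import numpy as np

rng = np.random.default_rng(0)

def snr_confidence(dots):
    # Distance of the sample mean from the criterion (0), scaled by the
    # standard error estimated from the dot cloud itself.
    m = dots.mean()
    se = dots.std(ddof=1) / np.sqrt(len(dots))
    decision = "right" if m > 0 else "left"
    return decision, abs(m) / se

# Two illustrative trials with the same mean offset but different
# uncertainty (number of dots and spread of the generating distribution):
easy = rng.normal(0.5, 1.0, size=40)   # many dots, low spread
hard = rng.normal(0.5, 3.0, size=10)   # few dots, high spread
_, c_easy = snr_confidence(easy)
_, c_hard = snr_confidence(hard)
# In a confidence forced choice, the trial with the larger value is chosen.
```

An unscaled distance-from-criterion model would drop the division by `se`, which is exactly the difference the study found to matter in mixed-difficulty designs.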


2021 · Vol 9 (3) · pp. 52-76
Author(s): S. Smirnov, D. Sotnikov

This paper proposes a method of comparing the prices of European options, based on the use of probability metrics, with respect to two models of price dynamics: Bachelier and Samuelson. In contrast to other studies on the subject, we consider two classes of options: European options with a Lipschitz continuous payout function and European options with a bounded payout function. For these classes, the following suitable probability metrics are chosen: the Fortet-Mourier metric, the total variation metric, and the Kolmogorov metric. It is proved that their computation can be reduced to computing the Lambert W function in the case of the Fortet-Mourier metric, and to solving a nonlinear equation in the other cases. A statistical estimation of the model parameters on the modern oil market gives the order of magnitude of the error, including the magnitude of the sensitivity of the option price to changes in volatility.
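One of the metrics above, the Kolmogorov metric, is simply the sup-norm distance between the two models' terminal-price CDFs, so it can be approximated on a grid. The sketch below does this for zero-drift Bachelier (normal) and Samuelson (lognormal) dynamics; the spot price, maturity, and volatilities are illustrative assumptions, not the paper's oil-market estimates.

```python
import numpy as np
from math import erf, sqrt, log

def norm_cdf(x, mu, sd):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + erf((x - mu) / (sd * sqrt(2.0))))

# Hypothetical parameters (not the paper's calibration):
S0, T = 70.0, 0.25          # spot price and maturity in years
sigma_ln = 0.30             # Samuelson (lognormal) volatility
sigma_b = sigma_ln * S0     # Bachelier volatility matched to it

s = sigma_ln * sqrt(T)      # std dev of the log-price under Samuelson
grid = np.linspace(S0 * 0.1, S0 * 3.0, 20001)

# Kolmogorov metric: sup over the grid of the CDF gap between the
# Bachelier terminal price (normal) and the zero-drift Samuelson
# terminal price (lognormal, mean log S0 - s^2/2).
gap = [abs(norm_cdf(x, S0, sigma_b * sqrt(T))
           - norm_cdf(log(x), log(S0) - s * s / 2.0, s))
       for x in grid]
kolmogorov = max(gap)
```

With the volatilities matched this way, the two distributions are close, so the metric is small; the paper's point is precisely to bound such gaps analytically rather than numerically.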


PLoS ONE · 2021 · Vol 16 (7) · pp. e0254546
Author(s): Bob Kapteijns, Florian Hintz

When estimating the influence of sentence complexity on reading, researchers typically opt for one of two main approaches: Measuring syntactic complexity (SC) or transitional probability (TP). Comparisons of the predictive power of both approaches have yielded mixed results. To address this inconsistency, we conducted a self-paced reading experiment. Participants read sentences of varying syntactic complexity. From two alternatives, we selected the set of SC and TP measures, respectively, that provided the best fit to the self-paced reading data. We then compared the contributions of the SC and TP measures to self-paced reading times when entered into the same model. Our results showed that while both measures explained significant portions of variance in reading times (over and above control variables: word/sentence length, word frequency and word position) when included in independent models, their contributions changed drastically when SC and TP were entered into the same model. Specifically, we only observed significant effects of TP. We conclude that in our experiment the control variables explained the bulk of variance. When comparing the small effects of SC and TP, the effects of TP appear to be more robust.
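The model-comparison logic above (each predictor in its own model, then both in a joint model, always over and above the controls) can be sketched with ordinary least squares. The data here are simulated stand-ins with assumed correlations, not the study's corpus; only the nesting structure of the comparison is the point.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
# Hypothetical simulated predictors: SC and TP both correlate with the
# control variable and with each other, as in naturalistic sentences.
length = rng.normal(size=n)                          # control variable
sc = 0.8 * length + rng.normal(size=n)               # syntactic complexity
tp = 0.5 * length + 0.4 * sc + rng.normal(size=n)    # transitional probability
rt = 2.0 * length + 0.3 * tp + rng.normal(size=n)    # reading times

def r_squared(predictors, y):
    # In-sample R^2 of an OLS fit with an intercept.
    X = np.column_stack([np.ones(len(y))] + predictors)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var()

r2_controls = r_squared([length], rt)
r2_sc = r_squared([length, sc], rt)         # SC over and above the control
r2_tp = r_squared([length, tp], rt)         # TP over and above the control
r2_joint = r_squared([length, sc, tp], rt)  # both in the same model
```

Because the predictors are correlated, each can look predictive alone yet contribute little once the other (or the controls) is in the model, which is the pattern the study reports.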


2021
Author(s): narayanarao narayanarao, A. Rajasekhar Reddy

Abstract In WMNs, existing channel-assignment methods do not jointly take into account interference, traffic load and bandwidth requirements at the time of channel assignment and bandwidth reservation, and priority-based bandwidth-reservation methods can starve the lowest-priority traffic. Hence, in this paper, we propose Joint Channel Assignment and Bandwidth Reservation using an Improved Firefly Algorithm (IFA) in WMN. Initially, the priority of each node is determined based on the channel usage, future interference and link congestion probability metrics. Bandwidth is then allocated in direct proportion to the node's priority and the total number of traffic flows pending at the requesting node. For channel assignment and path selection, the Improved Firefly Algorithm (IFA) is used. The objective function of IFA is determined in terms of link capacity, interference and flow conservation constraints, and the channels and path that minimize this objective function are selected by applying IFA. Simulation results show that the proposed technique reduces traffic congestion and enhances channel efficiency.
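The abstract does not specify the "improved" variant, so the sketch below is the standard firefly algorithm: brighter (lower-objective) fireflies attract dimmer ones, with attractiveness decaying in squared distance. The objective here is a hypothetical quadratic stand-in for the paper's capacity/interference/flow-conservation terms, and all parameter values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

def firefly_minimize(objective, dim, n_fireflies=15, n_iters=100,
                     beta0=1.0, gamma=0.01, alpha=0.2, bounds=(-5.0, 5.0)):
    # Standard firefly algorithm sketch (not the paper's IFA).
    # gamma is scaled down to suit the width of the search box.
    lo, hi = bounds
    pos = rng.uniform(lo, hi, size=(n_fireflies, dim))
    for _ in range(n_iters):
        vals = np.array([objective(p) for p in pos])
        for i in range(n_fireflies):
            for j in range(n_fireflies):
                if vals[j] < vals[i]:  # j is brighter: i moves toward j
                    r2 = np.sum((pos[i] - pos[j]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)
                    pos[i] += (beta * (pos[j] - pos[i])
                               + alpha * rng.uniform(-0.5, 0.5, dim))
                    pos[i] = np.clip(pos[i], lo, hi)
        alpha *= 0.97  # cool the random walk over iterations
    vals = np.array([objective(p) for p in pos])
    return pos[vals.argmin()], vals.min()

# Hypothetical stand-in for the paper's weighted objective:
best_x, best_f = firefly_minimize(lambda x: float(np.sum(x ** 2)), dim=3)
```

In the paper's setting, each position vector would encode a candidate channel assignment and path, and the objective would combine the link-capacity, interference and flow-conservation terms.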


2021 · Vol 2021 · pp. 1-8
Author(s): Sergey N. Smirnov

The main aim of this article is to show the role of structural stability in financial modelling; that is, a specific "no-arbitrage" property is unaffected by small perturbations of the model's dynamics. We prove that under the structural stability assumption, given a convex compact-valued multifunction, there exists a stochastic transition kernel whose supports coincide with this multifunction and which is strong Feller in the strict sense. We also demonstrate the preservation of structural stability under sufficiently small deviations of the transition kernels with respect to different probability metrics.


2021
Author(s): Bob Kapteijns, Florian Hintz

When estimating the influence of sentence complexity on reading, researchers typically opt for one of two main approaches: Measuring syntactic complexity (SC) or transitional probability (TP). Comparisons of the predictive power of both approaches have yielded mixed results. To address this inconsistency, we conducted a self-paced reading experiment. Participants read sentences of varying syntactic complexity. From two alternatives, we selected the set of SC and TP measures, respectively, that provided the best fit to the self-paced reading data. We then compared the contributions of the SC and TP measures to reading times when entered into the same model. Our results showed that both measures explained significant portions of variance in self-paced reading times. Thus, researchers aiming to measure sentence complexity should take both SC and TP into account. All of the analyses were conducted with and without control variables known to influence reading times (word/sentence length, word frequency and word position) to showcase how the effects of SC and TP change in the presence of the control variables.


Author(s): M. Hoffhues, W. Römisch, T. M. Surowiec

Abstract The vast majority of stochastic optimization problems require the approximation of the underlying probability measure, e.g., by sampling or using observations. It is therefore crucial to understand the dependence of the optimal value and optimal solutions on these approximations as the sample size increases or more data becomes available. Due to the weak convergence properties of sequences of probability measures, there is no guarantee that these quantities will exhibit favorable asymptotic properties. We consider a class of infinite-dimensional stochastic optimization problems inspired by recent work on PDE-constrained optimization as well as functional data analysis. For this class of problems, we provide both qualitative and quantitative stability results on the optimal value and optimal solutions. In both cases, we make use of the method of probability metrics. The optimal values are shown to be Lipschitz continuous with respect to a minimal information metric and consequently, under further regularity assumptions, with respect to certain Fortet-Mourier and Wasserstein metrics. We prove that even in the most favorable setting, the solutions are at best Hölder continuous with respect to changes in the underlying measure. The theoretical results are tested in the context of Monte Carlo approximation for a numerical example involving PDE-constrained optimization under uncertainty.
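The Wasserstein metric that appears in the stability bounds above is easy to illustrate in one dimension, where the 1-Wasserstein distance between two equal-size empirical measures reduces to the mean absolute difference of the sorted samples. The distributions below are assumed examples, not drawn from the paper's PDE-constrained setting.

```python
import numpy as np

def wasserstein1(x, y):
    # On the real line the optimal coupling between two equal-size
    # empirical measures is the monotone (sorted-to-sorted) one, so the
    # 1-Wasserstein distance is the mean absolute sorted difference.
    return float(np.mean(np.abs(np.sort(x) - np.sort(y))))

rng = np.random.default_rng(0)
n = 5000
sample_a = rng.normal(0.0, 1.0, n)   # reference measure, N(0, 1)
sample_b = rng.normal(0.5, 1.0, n)   # perturbed measure, N(0.5, 1)

d = wasserstein1(sample_a, sample_b)  # approximately the shift, 0.5
```

Lipschitz stability of the optimal value then means that replacing the true measure by a Monte Carlo approximation changes the optimal value by at most a constant times such a distance.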


2020 · Vol 376 (1817) · pp. 20190692
Author(s): Caitlin Mills, Andre Zamani, Rebecca White, Kalina Christoff

Thoughts that appear to come to us 'out of the blue' or 'out of nowhere' are a familiar aspect of mental experience. Such thoughts tend to elicit feelings of surprise and spontaneity. Although we are beginning to understand the neural processes that underlie the arising of such thoughts, little is known about what accounts for their peculiar phenomenology. Here, we focus on one central aspect of this phenomenology: the experience of surprise at their occurrence, as it relates to internal probabilistic predictions regarding mental states. We introduce a distinction between two phenomenologically different types of transitions in thought content: (i) abrupt transitions, which occur at surprising times but lead to unsurprising thought content, and (ii) wayward transitions, which occur at surprising times and also lead to surprising thought content. We examine these two types of transitions using a novel approach that combines probabilistic and predictive processing concepts and principles. We employ two different probability metrics, transition and occurrence probability, to characterize and differentiate between abrupt and wayward transitions. We close by discussing some potentially beneficial ways in which these two kinds of transitions in thought content may contribute to mental function, and how they may be implemented at the neural level. This article is part of the theme issue 'Offline perception: voluntary and spontaneous perceptual experiences without matching external stimulation'.
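The two metrics the authors distinguish can be estimated directly from a coded stream of thought states. The state labels and sequence below are purely illustrative assumptions, used only to show how a state can be unlikely overall (occurrence probability) versus unlikely given its immediate context (transition probability).

```python
from collections import Counter

# Hypothetical coded stream of thought states (labels are illustrative):
states = ["work", "work", "lunch", "work", "beach", "work", "lunch", "work"]

# Occurrence probability: how often a state appears at all.
occ_prob = {s: c / len(states) for s, c in Counter(states).items()}

# Transition probability: how likely state b is, given preceding state a.
pair_counts = Counter(zip(states, states[1:]))
from_counts = Counter(states[:-1])
trans_prob = {(a, b): c / from_counts[a] for (a, b), c in pair_counts.items()}

# "beach" is rare overall (low occurrence probability) and the jump
# work -> beach is unlikely given its context (low transition probability):
# a wayward transition in the authors' terms. work -> lunch is likelier,
# so a surprisingly timed work -> lunch switch would be merely abrupt.
```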

