Joint distributions of successes, failures and patterns in enumeration problems

2000 ◽  
Vol 32 (3) ◽  
pp. 866-884 ◽  
Author(s):  
S. Chadjiconstantinidis ◽  
D. L. Antzoulakos ◽  
M. V. Koutras

Let ε be a (single or composite) pattern defined over a sequence of Bernoulli trials. This article presents a unified approach to the study of the joint distribution of the number S_n of successes (and F_n of failures) and the number X_n of occurrences of ε in a fixed number of trials, as well as the joint distribution of the waiting time T_r until the rth occurrence of the pattern and the number S_{T_r} of successes (and F_{T_r} of failures) observed at that time. General formulae are developed for the joint probability mass functions and generating functions of (X_n, S_n) and (T_r, S_{T_r}) (and of (X_n, S_n, F_n) and (T_r, S_{T_r}, F_{T_r})) when X_n belongs to the family of Markov chain imbeddable variables of binomial type. Specializing to certain success run, scan and pattern problems, several well-known results are recovered as special cases of the general theory, along with some new results that have not previously appeared in the statistical literature.
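
To make the idea of computing a joint pmf such as that of (X_n, S_n) concrete, here is a minimal Python sketch for one special case: X_n counting non-overlapping success runs of length k. The state space and dynamic program below are illustrative choices, not the paper's general Markov-chain-imbedding machinery.

```python
from collections import defaultdict

def joint_pmf_runs(n, k, p):
    """Joint pmf of (X_n, S_n) where X_n counts non-overlapping success
    runs of length k in n Bernoulli(p) trials (illustrative DP only)."""
    q = 1.0 - p
    # state: (current partial run length, runs counted, successes so far) -> prob
    dist = {(0, 0, 0): 1.0}
    for _ in range(n):
        new = defaultdict(float)
        for (run, x, s), pr in dist.items():
            new[(0, x, s)] += pr * q                # failure resets the partial run
            if run + 1 == k:                        # a full run of length k completes
                new[(0, x + 1, s + 1)] += pr * p    # restart counting (non-overlapping)
            else:
                new[(run + 1, x, s + 1)] += pr * p
        dist = new
    pmf = defaultdict(float)
    for (run, x, s), pr in dist.items():            # marginalize out the partial run
        pmf[(x, s)] += pr
    return dict(pmf)

# sanity check for n = 2, k = 2, p = 0.5:
# P(X=1,S=2) = 1/4 (SS), P(X=0,S=1) = 1/2 (SF, FS), P(X=0,S=0) = 1/4 (FF)
print(joint_pmf_runs(2, 2, 0.5))
```

Summing the resulting pmf over s recovers the marginal distribution of X_n; the paper's generating-function framework handles composite patterns, scans, and the waiting-time pair (T_r, S_{T_r}) in the same unified way.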


Entropy ◽  
2021 ◽  
Vol 23 (7) ◽  
pp. 857
Author(s):  
Tommaso Boccato ◽  
Alberto Testolin ◽  
Marco Zorzi

One of the most rapidly advancing areas of deep learning research aims at creating models that learn to disentangle the latent factors of variation from a data distribution. However, modeling joint probability mass functions is usually prohibitive, which motivates the use of conditional models that assume some information is given as input. In the domain of numerical cognition, deep learning architectures have demonstrated that approximate numerosity representations can emerge in multi-layer networks that build latent representations of a set of images with a varying number of items. However, existing models have focused on tasks that require conditionally estimating numerosity information from a given image. Here, we focus on a set of much more challenging tasks, which require conditionally generating synthetic images containing a given number of items. We show that attention-based architectures operating at the pixel level can learn to produce well-formed images approximately containing a specific number of items, even when the target numerosity was not present in the training distribution.
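
As a rough illustration of what a conditional, attention-based pixel-level generator can look like, here is a toy PyTorch sketch. The attention pattern, layer sizes, and conditioning scheme are all assumptions for illustration; the abstract does not specify the authors' architecture.

```python
import torch
import torch.nn as nn

class ConditionalPixelGenerator(nn.Module):
    """Toy sketch: maps a target numerosity to a grid of pixel intensities."""
    def __init__(self, img_size=16, d_model=64, n_heads=4, max_n=10):
        super().__init__()
        self.img_size = img_size
        self.num_emb = nn.Embedding(max_n + 1, d_model)   # numerosity conditioning token
        self.pos_emb = nn.Parameter(torch.randn(img_size * img_size, d_model))
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.to_pixel = nn.Linear(d_model, 1)

    def forward(self, n):                       # n: (batch,) integer numerosities
        b = n.shape[0]
        cond = self.num_emb(n).unsqueeze(1)     # (b, 1, d) conditioning token
        tokens = self.pos_emb.unsqueeze(0).expand(b, -1, -1)  # one token per pixel
        out, _ = self.attn(tokens, cond, cond)  # pixels attend to the numerosity token
        logits = self.to_pixel(out).squeeze(-1)
        return torch.sigmoid(logits).view(b, self.img_size, self.img_size)

# usage: generate (untrained) images conditioned on numerosities 3 and 7
model = ConditionalPixelGenerator()
imgs = model(torch.tensor([3, 7]))              # shape (2, 16, 16)
```

Training such a model against images with known item counts (e.g., with a pixel-wise reconstruction or adversarial objective) is what would let the conditioning token control the generated numerosity.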


2007 ◽  
Vol 10 (04) ◽  
pp. 733-748 ◽  
Author(s):  
FRIEDEL EPPLE ◽  
SAM MORGAN ◽  
LUTZ SCHLOEGL

The pricing of exotic portfolio products, e.g. path-dependent CDO tranches, relies on the joint probability distribution of portfolio losses at different time horizons. We discuss a range of methods to construct this joint distribution in a way that is consistent with market prices of vanilla CDO tranches. As an example, we show how our loss-linking methods provide estimates for the breakeven spreads of forward-starting tranches.
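
For illustration, the following hedged Python/NumPy sketch estimates the expected loss of a forward-starting tranche from two marginal loss distributions. The Gaussian-copula link and the crude monotonicity fix are assumptions made here for the sake of a runnable example, not necessarily among the authors' loss-linking methods.

```python
import numpy as np
from scipy.stats import norm

def tranche_loss(L, a, d):
    # loss absorbed by a tranche with attachment a and detachment d
    return np.clip(L - a, 0.0, d - a)

def forward_tranche_el(losses1, pmf1, losses2, pmf2, a, d,
                       rho=0.9, n_mc=100_000, seed=0):
    """Toy expected loss accruing to a tranche between t1 and t2, linking
    the two marginal loss distributions with a Gaussian copula (corr. rho)."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal((n_mc, 2))
    u1 = norm.cdf(z[:, 0])
    u2 = norm.cdf(rho * z[:, 0] + np.sqrt(1.0 - rho**2) * z[:, 1])
    # invert the discrete marginal cdfs
    i1 = np.minimum(np.searchsorted(np.cumsum(pmf1), u1), len(losses1) - 1)
    i2 = np.minimum(np.searchsorted(np.cumsum(pmf2), u2), len(losses2) - 1)
    L1, L2 = np.asarray(losses1)[i1], np.asarray(losses2)[i2]
    L2 = np.maximum(L1, L2)  # crude fix: portfolio losses cannot decrease in time
    return float(np.mean(tranche_loss(L2, a, d) - tranche_loss(L1, a, d)))

# example: toy three-point loss distributions at t1 and t2, 3%-7% tranche
print(forward_tranche_el([0.0, 0.05, 0.10], [0.7, 0.2, 0.1],
                         [0.0, 0.05, 0.10], [0.5, 0.3, 0.2], 0.03, 0.07))
```

The choice of copula (or other linking method) is exactly the degree of freedom the paper studies: all choices reproduce the vanilla tranche prices at each horizon, but they imply different forward-starting spreads.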


2021 ◽  
Vol 15 (1) ◽  
pp. 408-433
Author(s):  
Margaux Dugardin ◽  
Werner Schindler ◽  
Sylvain Guilley

Extra-reductions occurring in Montgomery multiplications disclose side-channel information which can be exploited even in stringent contexts. In this article, we derive stochastic attacks to defeat Rivest-Shamir-Adleman (RSA) with Montgomery-ladder regular exponentiation coupled with base blinding. Namely, we leverage precharacterized multivariate probability mass functions of extra-reductions between the (multiplication, square) pair of one iteration of the RSA algorithm and those of the next iteration(s) to build a maximum-likelihood distinguisher. The efficiency of our attack (in terms of required traces) is more than double that of the state of the art. In addition to this result, we also apply our method to the case of regular exponentiation, base blinding, and modulus blinding. Quite surprisingly, modulus blinding does not make our attack impossible, even for large sizes of the modulus-randomizing element. At the cost of larger sample sizes, our attacks tolerate noisy measurements. Fortunately, effective countermeasures exist.
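
A stripped-down Python sketch of the maximum-likelihood step follows. The pmfs below are placeholders standing in for the precharacterized extra-reduction distributions (in practice obtained by profiling), and the real attack operates on multivariate patterns spanning consecutive iterations.

```python
import numpy as np

def ml_key_bit(observations, pmf_bit0, pmf_bit1):
    """Decide one key bit by maximum likelihood.
    observations: integer-coded extra-reduction patterns (e.g., whether the
    multiplication and the square of an iteration each triggered an
    extra-reduction), collected over many base-blinded executions.
    pmf_bit0 / pmf_bit1: precharacterized pmfs of those patterns under each
    key-bit hypothesis (placeholders here)."""
    ll0 = np.sum(np.log(pmf_bit0[observations]))  # log-likelihood under bit = 0
    ll1 = np.sum(np.log(pmf_bit1[observations]))  # log-likelihood under bit = 1
    return 0 if ll0 > ll1 else 1

# toy usage: 4 possible (multiplication, square) extra-reduction patterns
pmf0 = np.array([0.40, 0.25, 0.25, 0.10])
pmf1 = np.array([0.25, 0.25, 0.10, 0.40])
obs = np.random.default_rng(1).choice(4, size=500, p=pmf1)
print(ml_key_bit(obs, pmf0, pmf1))  # -> 1 with high probability
```

The number of traces needed for a reliable decision is governed by how far apart the two hypothesis pmfs are, which is why the paper measures attack efficiency in required traces.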


2012 ◽  
Vol 44 (3) ◽  
pp. 842-873 ◽  
Author(s):  
Zhiyi Chi

Nonnegative infinitely divisible (i.d.) random variables form an important class of random variables. However, when such a random variable is specified via a Lévy density with an infinite integral on (0, ∞), exact sampling is unknown except in some special cases. We present a method that can sample a rather wide range of such i.d. random variables. A basic result is that, for any nonnegative i.d. random variable X with an explicitly specified Lévy density, if its distribution conditional on X ≤ r can be sampled exactly, where r > 0 is any fixed number, then X can be sampled exactly using rejection sampling, without knowing an explicit expression for the density of X. We show that variations of this result can be used to sample various nonnegative i.d. random variables.
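
To illustrate just the rejection-sampling principle the result builds on, here is a generic textbook sketch in Python; the paper's actual construction for i.d. variables with non-integrable Lévy densities is considerably more involved and is not reproduced here.

```python
import math
import random

def rejection_sample(target_pdf, proposal_sample, proposal_pdf, M):
    """Draw one exact sample from target_pdf, given a proposal satisfying
    target_pdf(x) <= M * proposal_pdf(x) for all x. Each candidate x is
    accepted with probability target_pdf(x) / (M * proposal_pdf(x))."""
    while True:
        x = proposal_sample()
        if random.random() * M * proposal_pdf(x) <= target_pdf(x):
            return x

# example: sample a standard half-normal by rejection from Exp(1)
half_normal = lambda x: math.sqrt(2.0 / math.pi) * math.exp(-x * x / 2.0)
exp_pdf = lambda x: math.exp(-x)
exp_sample = lambda: random.expovariate(1.0)
M = math.sqrt(2.0 * math.e / math.pi)  # sup of half_normal/exp_pdf, attained at x = 1
print(rejection_sample(half_normal, exp_sample, exp_pdf, M))
```

The key feature exploited by the paper's result is that acceptance only requires evaluating a density ratio pointwise, so X can be sampled exactly even when no explicit expression for its density is available.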

