Uniform Probability Distribution
Recently Published Documents

Total documents: 27 (five years: 9)
H-index: 5 (five years: 0)

Author(s): Eddy Keming Chen, Roderich Tumulka

Abstract: Let $\mathscr{H}$ be a finite-dimensional complex Hilbert space and $\mathscr{D}$ the set of density matrices on $\mathscr{H}$, i.e., the positive operators with trace 1. Our goal in this note is to identify a probability measure $u$ on $\mathscr{D}$ that can be regarded as the uniform distribution over $\mathscr{D}$. We propose a measure on $\mathscr{D}$, argue that it can be so regarded, discuss its properties, and compute the joint distribution of the eigenvalues of a random density matrix distributed according to this measure.
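
A natural way to experiment with such measures is to sample from the Hilbert-Schmidt measure, i.e., the distribution of the partial trace of a Haar-random pure state on $\mathscr{H} \otimes \mathscr{H}$. Whether this coincides with the measure the authors propose cannot be read off the abstract, so the sketch below is one standard candidate for a "uniform" distribution on $\mathscr{D}$, not the paper's construction.

```python
import numpy as np

def random_density_matrix(d, rng=None):
    """Draw a d x d density matrix from the Hilbert-Schmidt measure:
    G G^dagger / tr(G G^dagger) for a complex Ginibre matrix G, which
    equals the partial trace of a Haar-random pure state on C^d (x) C^d."""
    if rng is None:
        rng = np.random.default_rng()
    g = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = g @ g.conj().T
    return rho / np.trace(rho).real

# Empirical joint distribution of the (sorted) eigenvalues for d = 3
samples = np.array([np.linalg.eigvalsh(random_density_matrix(3))
                    for _ in range(10_000)])
print("mean sorted eigenvalues:", samples.mean(axis=0))  # rows sum to 1
```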


2021. Author(s): Qinyuan Wu, Yong Deng, Neal Xiong

Abstract: The negation operation is important in intelligent information processing. In contrast to the existing arithmetic negation, an exponential negation is presented in this paper. The new negation can be seen as a kind of geometric negation. Some basic properties of the proposed negation are investigated, and we find that its fixed point is the uniform probability distribution. The proposed exponential negation is an entropy-increasing operation, and all probability distributions converge to the uniform distribution under repeated negation iterations. The convergence speed of the proposed negation is also faster than that of the existing negation, and the number of iterations needed for convergence is inversely proportional to the number of elements in the distribution. Some numerical examples illustrate the efficiency of the proposed negation.
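
The abstract does not give the negation's exact formula; a form consistent with its stated properties (fixed point at the uniform distribution, entropy increase, convergence under iteration) is $\bar{p}_i = e^{-p_i} / \sum_j e^{-p_j}$, used in the sketch below as an assumption rather than as the authors' definition.

```python
import numpy as np

def exp_negation(p):
    """One exponential-negation step: p_i -> exp(-p_i), renormalized.
    The uniform distribution is a fixed point: equal p_i stay equal."""
    q = np.exp(-np.asarray(p, dtype=float))
    return q / q.sum()

p = np.array([0.7, 0.2, 0.1])
for k in range(6):
    p = exp_negation(p)
    print(k + 1, p)  # iterates flatten toward (1/3, 1/3, 1/3)
```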


Synthese, 2021. Author(s): Rush T. Stewart

Abstract: Epistemic states of uncertainty play important roles in ethical and political theorizing. Theories that appeal to a "veil of ignorance," for example, analyze fairness or impartiality in terms of certain states of ignorance. It is important, then, to scrutinize proposed conceptions of ignorance and explore promising alternatives in such contexts. Here, I study Lerner's probabilistic egalitarian theorem in the setting of imprecise probabilities. Lerner's theorem assumes that a social planner tasked with distributing income to individuals in a population is "completely ignorant" about which utility functions belong to which individuals. Lerner models this ignorance with a certain uniform probability distribution, and shows that, under certain further assumptions, income should be equally distributed. Much of the criticism of the relevance of Lerner's result centers on the representation of ignorance involved. Imprecise probabilities provide a general framework for reasoning about various forms of uncertainty including, in particular, ignorance. To what extent can Lerner's conclusion be maintained in this setting?
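
As a toy check of the flavor of Lerner's result (not of the imprecise-probability generalization studied here), one can average total utility over all assignments of utility functions to individuals, which is how the uniform-ignorance model treats the planner's uncertainty; equal division then maximizes expected total utility for a fixed income total. The concave utility functions below are hypothetical choices for illustration only.

```python
import numpy as np
from itertools import permutations

# Hypothetical concave utility functions (illustration only).
utils = [np.log1p, np.sqrt, lambda x: 1 - np.exp(-x)]

def expected_total_utility(incomes):
    """Average total utility over all assignments of utility functions
    to individuals, i.e., over the planner's 'complete ignorance'."""
    vals = [sum(utils[j](incomes[i]) for i, j in enumerate(perm))
            for perm in permutations(range(len(incomes)))]
    return np.mean(vals)

print(expected_total_utility([3.0, 3.0, 3.0]))  # equal split of 9 units
print(expected_total_utility([6.0, 2.0, 1.0]))  # unequal split: strictly lower
```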


Electronics, 2021, Vol. 10 (11), p. 1281. Author(s): Leonardo Acho, Gisela Pujol-Vázquez, José Gibergans-Báguena

This paper presents a mathematical algorithm and an electronic device for studying soil resistivity. The system is based on introducing a time-varying electrical signal into the soil through two electrodes and then collecting the soil's electrical response. The proposed electronic system relies on a single-phase DC-to-AC converter followed by a transformer for the soil-to-circuit coupling. Using the maximum-likelihood statistical method, a mathematical algorithm was developed to estimate soil resistivity. The novelty of the numerical approach consists of modeling a set of random data from the voltmeters with a parametric uniform probability distribution function and then carrying out a parametric estimation for dataset analysis. Furthermore, to validate our contribution, a two-electrode laboratory experiment with soil was designed. Finally, according to the experimental outcomes, our electronic circuit and mathematical data-analysis approach were able to detect different soil resistivities.
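
The estimation step, fitting a parametric uniform distribution to voltmeter data by maximum likelihood, reduces to a classical fact: for Uniform(a, b) the likelihood $(b-a)^{-n}$ is maximized at the sample extremes. A minimal sketch, assuming that parameterization (the paper's exact model is not given in the abstract):

```python
import numpy as np

def uniform_mle(samples):
    """Maximum-likelihood estimates for Uniform(a, b): the likelihood
    (b - a)^(-n) on [a, b] is maximized by the sample minimum and maximum."""
    x = np.asarray(samples, dtype=float)
    return x.min(), x.max()

rng = np.random.default_rng(0)
readings = rng.uniform(2.0, 5.0, size=200)  # simulated voltmeter data
a_hat, b_hat = uniform_mle(readings)
print(a_hat, b_hat)  # close to the true interval (2.0, 5.0)
```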


2021. Author(s): Daniel Gilford

The 2020 Atlantic hurricane season broke records, including becoming the most active tropical cyclone season (30 named storms), having the latest-named category five hurricane (Iota), and recording the most hurricane landfalls (twelve) in US history. This extraordinary activity yields an unusually large set of observed tropical cyclone (TC) intensities for a single season, which may be studied with theoretical and statistical analyses, something that is typically untenable for an average season. A tool for analyzing these 2020 hurricane intensities is potential intensity (PI), the theoretical maximum speed limit of a tropical cyclone found by treating the storm as a thermal heat engine. From this thermodynamic perspective, was the 2020 hurricane season unprecedented? We explore this question using pyPI, a new Python package which rapidly and transparently calculates potential intensity given a set of environmental conditions (https://github.com/dgilford/tcpyPI). Using reanalysis data, we rank 2020 potential intensity among all previous hurricane seasons (in the satellite era) and consider what environmental conditions made 2020 unique. The high number of observed storms allows us to build on previous work and perform a statistical analysis that assesses the viability and value of potential intensity theory during the 2020 hurricane season. In particular, we calculate the normalized wind along the track of each storm (observed maximum intensity divided by potential intensity), which generally shows a uniform probability distribution function. The uniform shape of this distribution suggests that potential intensity theory is viable for seasonal intensity forecasting as long as storm counts are sufficiently high. In seasons with at least 25 storms, one may expect that ~10% of the most intense observed hurricanes will have observed maximum intensities within 10% of their along-track potential intensity. Finally, we discuss how this approach and software could be improved and adapted for operational applications, and we ask for feedback from the broader tropical cyclone community.
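
The along-track statistic described here (observed maximum intensity divided by potential intensity) and a test of its uniformity can be sketched independently of pyPI itself. The arrays below are hypothetical stand-ins for observed winds and pyPI output, and the Kolmogorov-Smirnov test is one plausible uniformity check, not necessarily the authors' method.

```python
import numpy as np
from scipy.stats import kstest

# Hypothetical stand-ins for observed lifetime-maximum winds (v_obs) and
# pyPI-derived along-track potential intensities (v_pi), both in m/s.
rng = np.random.default_rng(1)
v_pi = rng.uniform(50.0, 80.0, size=30)
v_obs = v_pi * rng.uniform(0.0, 1.0, size=30)  # uniform ratio by construction

normalized = v_obs / v_pi                 # "normalized wind" in [0, 1]
stat, p = kstest(normalized, "uniform")   # H0: normalized wind ~ Uniform(0, 1)
print(f"KS statistic = {stat:.3f}, p-value = {p:.3f}")
```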


2020, Vol. 30 (2), p. 513. Author(s): Francisco Montes, Ramón Sala

The supremacy of a few teams over the other participants is a common feature of the major European football leagues, and the Spanish First Division is no exception. To demonstrate this fact, functional data analysis is used to analyze the league classifications of the last ten seasons, 2002-03 to 2011-12. The use of these techniques is not the only feature that distinguishes this work from similar studies; another is the use of a non-uniform probability distribution over the three possible outcomes of a match, obtained from the results of the 3800 matches of the 10 seasons and taking into account the difference between the categories of the teams involved. A Monte Carlo test makes it possible to test the hypotheses of uniformity and non-uniformity in the results.
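
The Monte Carlo test can be illustrated by simulating seasons under a chosen outcome distribution and comparing observed statistics against the simulated null. The (home win, draw, away win) probabilities below are hypothetical placeholders, not the estimates obtained from the 3800 matches, and the sketch omits the team-category adjustment the paper uses.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_season(n_teams=20, p_outcomes=(0.48, 0.26, 0.26)):
    """Simulate one double round-robin season; p_outcomes are hypothetical
    (home win, draw, away win) probabilities, not the paper's estimates."""
    points = np.zeros(n_teams)
    for home in range(n_teams):
        for away in range(n_teams):
            if home == away:
                continue
            r = rng.choice(3, p=p_outcomes)
            if r == 0:            # home win
                points[home] += 3
            elif r == 1:          # draw
                points[home] += 1
                points[away] += 1
            else:                 # away win
                points[away] += 3
    return np.sort(points)[::-1]

# Null distribution of the champion's points total over simulated seasons
winners = [simulate_season()[0] for _ in range(1000)]
print(np.mean(winners), np.percentile(winners, 95))
```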


2020, pp. 22-26. Author(s): O. D. Kupko

The process of measuring the area of a circular diaphragm using a device that determines the coordinates of the diaphragm boundary is considered theoretically. The Monte Carlo method with a small number of realizations was used. The procedure for calculating the area is described in detail. We considered a circular aperture with a precisely known radius. On the circumference of the diaphragm, the coordinate measurement points were spaced 0.1, 0.3, 0.6, and π/2 radians apart. To simulate random deviations (uncertainties) in the coordinate measurements, random additives with a uniform probability distribution and a given standard deviation were used. For each case, the areas were calculated in accordance with the proposed procedure. The deviation of the calculated area from the true area is analyzed as a function of the number of measurement points and the standard deviation of the random additives. It is shown that the ratio of the relative standard deviation of the area to the relative standard deviation of the coordinates is approximately the same for each number of measurements. The dependence of this ratio on the number of measurements is determined. The results obtained are analyzed.
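
A reconstruction of the described simulation, under the assumption that the area is computed from the polygon through the measured boundary points (shoelace formula) and that uniform noise of standard deviation σ (half-width σ√3) is applied to both coordinates; the paper's exact area-calculation procedure may differ.

```python
import numpy as np

rng = np.random.default_rng(7)

def measured_area(radius=1.0, step=0.1, sigma=0.01, n_trials=1000):
    """Monte Carlo estimate of the polygon (shoelace) area from boundary
    points spaced `step` radians apart, with uniform coordinate noise of
    standard deviation `sigma` (uniform half-width sigma * sqrt(3))."""
    theta = np.arange(0.0, 2 * np.pi, step)
    hw = sigma * np.sqrt(3.0)
    areas = []
    for _ in range(n_trials):
        x = radius * np.cos(theta) + rng.uniform(-hw, hw, theta.size)
        y = radius * np.sin(theta) + rng.uniform(-hw, hw, theta.size)
        areas.append(0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1))))
    return np.mean(areas), np.std(areas)

for step in (0.1, 0.3, 0.6, np.pi / 2):
    mean_a, std_a = measured_area(step=step)
    print(f"step={step:.3f} rad: polygon area={mean_a:.4f} "
          f"(circle area {np.pi:.4f}), std={std_a:.4f}")
```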


Episteme, 2019, pp. 1-15. Author(s): Gerhard Schurz

Abstract: White (2015) proposes an a priori justification of the reliability of inductive prediction methods based on his thesis of induction-friendliness, which asserts that there are by far more induction-friendly event sequences than induction-unfriendly ones. In this paper I contrast White's thesis with the famous no free lunch (NFL) theorem. I explain two versions of this theorem: the strong NFL theorem, applying to binary predictions, and the weak NFL theorem, applying to real-valued predictions. I show that both versions refute the thesis of induction-friendliness. In the conclusion I argue that an a priori justification of the reliability of induction based on a uniform probability distribution over possible event sequences is impossible. In the outlook I consider two alternative approaches: (i) justification externalism and (ii) optimality justifications.
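
The concluding claim, that a uniform probability distribution over possible event sequences cannot ground the reliability of induction, has a simple finite illustration: under the uniform distribution over all length-n binary sequences, every prediction rule, inductive or not, has expected per-bit accuracy exactly 1/2. A brute-force check:

```python
from itertools import product

def success_rate(predict, n=10):
    """Average per-bit accuracy of `predict` (a function from a prefix
    tuple to a 0/1 guess) over ALL 2^n binary sequences of length n."""
    total, count = 0, 0
    for seq in product((0, 1), repeat=n):
        for t in range(n):
            total += predict(seq[:t]) == seq[t]
            count += 1
    return total / count

# Every rule scores exactly 0.5 under the uniform distribution:
print(success_rate(lambda prefix: 1))                            # always guess 1
print(success_rate(lambda prefix: prefix[-1] if prefix else 0))  # repeat last bit
```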


2018, Vol. 0 (0). Author(s): Mikhail Anokhin

Abstract: Let $\mathbb{G}_n$ be the subgroup of elements of odd order in the group $\mathbb{Z}^{\star}_n$, and let $\mathcal{U}(\mathbb{G}_n)$ be the uniform probability distribution on $\mathbb{G}_n$. In this paper, we establish a probabilistic polynomial-time reduction from finding a nontrivial divisor of a composite number $n$ to finding a nontrivial relation between $l$ elements chosen independently and uniformly at random from $\mathbb{G}_n$, where $l \geq 1$ is given in unary as part of the input. Assume that finding a nontrivial divisor of a random number in some set $N$ of composite numbers (for a given security parameter) is a computationally hard problem. Then, using the above-mentioned reduction, we prove that the family $((\mathbb{G}_n, \mathcal{U}(\mathbb{G}_n)) \mid n \in N)$ of computational abelian groups is weakly pseudo-free. The disadvantage of this result is that the probability ensemble $(\mathcal{U}(\mathbb{G}_n) \mid n \in N)$ is not polynomial-time samplable. To overcome this disadvantage, we construct a polynomial-time computable function $\nu \colon D \to N$ (where $D \subseteq \{0,1\}^{*}$) and a polynomial-time samplable probability ensemble $(\mathcal{G}_d \mid d \in D)$ (where $\mathcal{G}_d$ is a distribution on $\mathbb{G}_{\nu(d)}$ for each $d \in D$) such that the family $((\mathbb{G}_{\nu(d)}, \mathcal{G}_d) \mid d \in D)$ of computational abelian groups is weakly pseudo-free.
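
For intuition about the objects involved, $\mathbb{G}_n$ can be enumerated by brute force for small composite $n$; this illustrates the definition only and has nothing to do with the paper's polynomial-time reduction.

```python
from math import gcd

def element_order(a, n):
    """Multiplicative order of a modulo n (assumes gcd(a, n) == 1)."""
    k, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        k += 1
    return k

def G(n):
    """G_n: the subgroup of elements of odd order in (Z/nZ)*."""
    return sorted(a for a in range(1, n)
                  if gcd(a, n) == 1 and element_order(a, n) % 2 == 1)

print(G(91))  # for n = 91 = 7 * 13, G_n has nine elements
```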

