coin tossing
Recently Published Documents


TOTAL DOCUMENTS

201
(FIVE YEARS 21)

H-INDEX

20
(FIVE YEARS 2)

2021 ◽  
Author(s):  
Ola Hössjer ◽  
Daniel Andrés Díaz-Pachón ◽  
J. Sunil Rao

Philosophers frequently define knowledge as justified, true belief. In this paper we build a mathematical framework that makes it possible to define learning (increased degree of true belief) and knowledge of an agent in precise ways. This is achieved by phrasing belief in terms of epistemic probabilities, defined from Bayes' Rule. The degree of true belief is then quantified by means of active information $I^+$, that is, a comparison between the degree of belief of the agent and that of a completely ignorant person. Learning has occurred when either the agent's strength of belief in a true proposition has increased in comparison with the ignorant person ($I^+>0$), or the strength of belief in a false proposition has decreased ($I^+<0$). Knowledge additionally requires that learning occurs for the right reason, and in this context we introduce a framework of parallel worlds, of which one is true and the others are counterfactuals. We also generalize the framework of learning and knowledge acquisition to a sequential setting, where information and data are updated over time. The theory is illustrated with examples of coin tossing, historical events, future events, replication of studies, and causal inference.
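The active-information comparison in this abstract can be illustrated numerically. The sketch below assumes $I^+$ is the log-ratio (in bits) of the agent's epistemic probability to the ignorant person's uniform prior; the function name and the choice of bits are illustrative, not the paper's notation:

```python
import math

def active_information(p_agent: float, p_ignorant: float) -> float:
    """I+ = log2(p_agent / p_ignorant): bits of belief gained (or lost)
    relative to a maximally ignorant prior."""
    return math.log2(p_agent / p_ignorant)

# A fair coin has been tossed; an ignorant person assigns probability 1/2
# to the true proposition "the coin landed heads". An agent whose evidence
# raises that belief to 0.8 has learned:
print(round(active_information(0.8, 0.5), 3))   # 0.678 bits, I+ > 0

# Belief 0.3 in the same true proposition gives I+ < 0 (no learning):
print(round(active_information(0.3, 0.5), 3))   # -0.737
```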


2021 ◽  
Author(s):  
Hamidreza Amini Khorasgani ◽  
Hemanta K. Maji ◽  
Mingyuan Wang
Keyword(s):  

2021 ◽  
Author(s):  
Hamidreza Amini Khorasgani ◽  
Hemanta K. Maji ◽  
Himanshi Mehta ◽  
Mingyuan Wang
Keyword(s):  

2021 ◽  
Vol 51 (5) ◽  
pp. 317-328
Author(s):  
Yael Loewenstein

Abstract: Before a fair, indeterministic coin is tossed, Lucky, who is causally isolated from the coin-tossing mechanism, declines to bet on heads. The coin lands heads. The consensus is that the following counterfactual is true:

(M) If Lucky had bet heads, he would have won the bet.

It is also widely believed that to rule (M) true, any plausible semantics for counterfactuals must invoke causal independence. But if that's so, the hope of giving a reductive analysis of causation in terms of counterfactuals is undermined. Here I argue that there is compelling reason to question the assumption that (M) is true.


Author(s):  
S. Ethier ◽  
Jiyeon Lee

Parrondo’s coin-tossing games comprise two games, $A$ and $B$. The result of game $A$ is determined by the toss of a fair coin. The result of game $B$ is determined by the toss of a $p_0$-coin if capital is a multiple of $r$, and by the toss of a $p_1$-coin otherwise. In either game, the player wins one unit with heads and loses one unit with tails. Game $B$ is fair if $(1-p_0)(1-p_1)^{r-1}=p_0\,p_1^{r-1}$. In a previous paper we showed that, if the parameters of game $B$, namely $r$, $p_0$, and $p_1$, are allowed to be arbitrary, subject to the fairness constraint, and if the two (fair) games $A$ and $B$ are played in an arbitrary periodic sequence, then the rate of profit can not only be positive (the so-called Parrondo effect), but can also be arbitrarily close to 1 (i.e., 100%). Here we prove the same conclusion for a random sequence of the two games instead of a periodic one; that is, at each turn game $A$ is played with probability $\gamma$ and game $B$ is played otherwise, where $\gamma \in (0,1)$ is arbitrary.
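The randomly mixed game described here is easy to simulate. A minimal sketch, using the classical parameter choice $p_0=1/10$, $p_1=3/4$, $r=3$ (which satisfies the fairness constraint exactly) and $\gamma=1/2$; the function name and sample size are ours:

```python
import random

def parrondo_rate(gamma: float, p0: float, p1: float, r: int,
                  n_tosses: int = 200_000, seed: int = 1) -> float:
    """Estimate the average profit per turn when game A (a fair coin) is
    played with probability gamma and game B is played otherwise."""
    rng = random.Random(seed)
    capital = 0
    for _ in range(n_tosses):
        if rng.random() < gamma:        # game A: fair coin
            p = 0.5
        elif capital % r == 0:          # game B, capital a multiple of r
            p = p0
        else:                           # game B, otherwise
            p = p1
        capital += 1 if rng.random() < p else -1
    return capital / n_tosses

# Fairness constraint (1 - p0)(1 - p1)^(r-1) = p0 * p1^(r-1) holds:
p0, p1, r = 0.1, 0.75, 3
assert abs((1 - p0) * (1 - p1) ** (r - 1) - p0 * p1 ** (r - 1)) < 1e-12
print(parrondo_rate(0.5, p0, p1, r))    # positive rate: the Parrondo effect
```

Playing game $B$ alone (`gamma=0`) gives an estimated rate near zero, as the fairness constraint predicts, while the equal mixture of the two fair games is strictly winning.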


Entropy ◽  
2020 ◽  
Vol 23 (1) ◽  
pp. 44
Author(s):  
Hemanta K. Maji

Ben-Or and Linial, in a seminal work, introduced the full information model to study collective coin-tossing protocols. Collective coin-tossing is an elegant functionality providing uncluttered access to the primary bottlenecks to achieving security in a specific adversarial model. Additionally, the research outcomes for this versatile functionality have direct consequences on diverse topics in mathematics and computer science. This survey summarizes the current state of the art of coin-tossing protocols in the full information model and recent advances in the field. In particular, it elaborates on a new proof technique that identifies the minimum insecurity incurred by any coin-tossing protocol and, simultaneously, constructs the coin-tossing protocol achieving that insecurity bound. The combinatorial perspective on this new proof technique yields new coin-tossing protocols that are more secure than well-known existing ones, leading to new isoperimetric inequalities over product spaces. Furthermore, the proof technique's algebraic reimagination resolves several long-standing fundamental hardness-of-computation problems in cryptography. This survey presents one representative application of each of these two perspectives.
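For a concrete sense of the insecurity such protocols must incur, consider the textbook single-round protocol in the full information model: each of $n$ parties broadcasts a fair bit and the output is the majority. The sketch below computes the bias a single corrupt party can force; this is a standard illustration of the model, not the survey's new proof technique:

```python
from math import comb

def majority_bias(n: int) -> float:
    """Bias toward 1 of an n-party single-round majority protocol when one
    corrupt party always broadcasts 1 and the n-1 honest parties each
    broadcast a fair coin (n odd)."""
    assert n % 2 == 1
    honest = n - 1
    # Output is 1 iff at least (n+1)//2 ones appear; the adversary supplies
    # one of them, so (n-1)//2 honest ones suffice.
    threshold = (n - 1) // 2
    p_one = sum(comb(honest, k) for k in range(threshold, honest + 1)) / 2 ** honest
    return p_one - 0.5

print(round(majority_bias(101), 4))   # 0.0398: the bias shrinks like 1/sqrt(n)
```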


Games ◽  
2020 ◽  
Vol 12 (1) ◽  
pp. 1
Author(s):  
Edward Cartwright ◽  
Lian Xue ◽  
Charlotte Brown

We explore whether individuals are averse to telling a Pareto white lie—a lie that benefits both themselves and another. We first review and summarize the existing evidence on Pareto white lies. We find that the evidence is relatively limited and varied in its conclusions. We then present new experimental results obtained using a coin-tossing experiment. Results are provided for both the UK and China. We find evidence of willingness to tell a partial lie (i.e., inflating reports slightly) and high levels of aversion to telling a Pareto white lie that would maximize payoffs. We also find no significant difference between willingness to tell a Pareto white lie and a selfish black lie—a lie that harms another. We find marginal evidence of more lying in China than the UK, but the overall results in the UK and China are very similar.


2020 ◽  
Vol 20 (1) ◽  
Author(s):  
Zad Rafi ◽  
Sander Greenland

Abstract

Background: Researchers often misinterpret and misrepresent statistical outputs. This abuse has led to a large literature on modification or replacement of testing thresholds and P-values with confidence intervals, Bayes factors, and other devices. Because the core problems appear cognitive rather than statistical, we review some simple methods to aid researchers in interpreting statistical outputs. These methods emphasize logical and information concepts over probability, and thus may be more robust to common misinterpretations than are traditional descriptions.

Methods: We use the Shannon transform of the P-value $p$, also known as the binary surprisal or S-value $s=-\log_2(p)$, to provide a measure of the information supplied by the testing procedure, and to help calibrate intuitions against simple physical experiments like coin tossing. We also use tables or graphs of test statistics for alternative hypotheses, and interval estimates for different percentile levels, to thwart fallacies arising from arbitrary dichotomies. Finally, we reinterpret P-values and interval estimates in unconditional terms, which describe compatibility of data with the entire set of analysis assumptions. We illustrate these methods with a reanalysis of data from an existing record-based cohort study.

Conclusions: In line with other recent recommendations, we advise that teaching materials and research reports discuss P-values as measures of compatibility rather than significance, compute P-values for alternative hypotheses whenever they are computed for null hypotheses, and interpret interval estimates as showing values of high compatibility with data, rather than regions of confidence. Our recommendations emphasize cognitive devices for displaying the compatibility of the observed data with various hypotheses of interest, rather than focusing on single hypothesis tests or interval estimates. We believe these simple reforms are well worth the minor effort they require.
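The Shannon transform is simple to compute directly. A short sketch of the coin-tossing calibration the abstract describes: an S-value of $s$ bits is as surprising as seeing $s$ heads in a row from a fair coin.

```python
import math

def s_value(p: float) -> float:
    """Shannon transform (binary surprisal) of a P-value: s = -log2(p)."""
    return -math.log2(p)

for p in (0.5, 0.05, 0.005):
    print(p, round(s_value(p), 2))
# 0.5   -> 1.0  bit  (one head: unremarkable)
# 0.05  -> 4.32 bits (about four heads in a row)
# 0.005 -> 7.64 bits (about seven to eight heads in a row)
```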

