Probabilistic Polynomial-Time Semantics for a Protocol Security Logic

Author(s): Anupam Datta, Ante Derek, John C. Mitchell, Vitaly Shmatikov, Mathieu Turuani

2004, Vol 11 (15)
Author(s): Jesús Fernando Almansa

Two different approaches to general protocol security are proved equivalent. Concretely, we prove that security in the Universal Composability (UC) framework is equivalent to security in the probabilistic polynomial-time calculus ppc. Security is defined under active and adaptive adversaries with synchronous and authenticated communication. In detail, we define an encoding from machines in UC to processes in ppc and show that UC is fully abstract in ppc, i.e., we show the soundness and completeness of security in ppc with respect to UC. However, we restrict security in ppc to be quantified not over all possible contexts, but over those induced by UC environments under the encoding. This restriction does not unduly weaken security in ppc, since the threat and communication models we assume are meaningful in both practice and theory.
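
Written out, with an encoding ⟦·⟧ from UC machines to ppc processes (notation chosen here for illustration; the paper may use different symbols), the soundness-and-completeness claim reads:

\pi \text{ is UC-secure}
\;\Longleftrightarrow\;
\llbracket \pi \rrbracket \text{ is secure in ppc with respect to the contexts } \{\, \llbracket Z \rrbracket : Z \text{ a UC environment} \,\}.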


2015, Vol 241, pp. 114-141
Author(s): Ugo Dal Lago, Paolo Parisen Toldin

Quantum, 2018, Vol 2, pp. 106
Author(s): Tomoyuki Morimae, Yuki Takeuchi, Harumichi Nishimura

We introduce a simple sub-universal quantum computing model, which we call the Hadamard-classical circuit with one qubit (HC1Q) model. It consists of a classical reversible circuit sandwiched between two layers of Hadamard gates, and therefore it is in the second level of the Fourier hierarchy. We show that output probability distributions of the HC1Q model cannot be efficiently classically sampled within a multiplicative error unless the polynomial-time hierarchy collapses to the second level. The proof technique is different from those used for previous sub-universal models, such as IQP, Boson Sampling, and DQC1, and therefore the technique itself might be useful for finding other sub-universal models that are hard to classically simulate. We also study the classical verification of quantum computing in the second level of the Fourier hierarchy. To this end, we define a promise problem, which we call probability distribution distinguishability with maximum norm (PDD-Max): deciding whether the output probability distributions of two quantum circuits are far apart or close. We show that PDD-Max is BQP-complete, but if the two circuits are restricted to certain types in the second level of the Fourier hierarchy, such as the HC1Q model or the IQP model, PDD-Max has a Merlin-Arthur system with quantum polynomial-time Merlin and classical probabilistic polynomial-time Arthur.
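
As a rough illustration of the circuit shape described above, the following brute-force numpy sketch builds a tiny circuit of the form "Hadamard layer, classical reversible circuit, single-qubit Hadamard" and prints its output distribution. The choice of classical circuit, the number of qubits, and the placement of the final Hadamard on the first qubit are assumptions made here for illustration; the precise HC1Q definition is in the paper.

import numpy as np

n = 3                                   # number of qubits; tiny, so brute force is feasible
dim = 2 ** n
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

def kron_all(mats):
    """Kronecker product of a list of single-qubit operators."""
    out = np.array([[1.0]])
    for m in mats:
        out = np.kron(out, m)
    return out

# First layer: Hadamard on every qubit.
H_all = kron_all([H] * n)

# Classical reversible circuit: a CNOT-style bijection on basis states,
# mapping x to x with its last bit XORed by its first bit (an arbitrary choice).
def classical_circuit(x):
    bits = [(x >> (n - 1 - i)) & 1 for i in range(n)]
    bits[-1] ^= bits[0]
    return sum(b << (n - 1 - i) for i, b in enumerate(bits))

P = np.zeros((dim, dim))
for x in range(dim):
    P[classical_circuit(x), x] = 1.0    # permutation matrix sending |x> to |C(x)>

# Second layer: Hadamard on the first qubit only (an assumed placement).
H_first = kron_all([H] + [np.eye(2)] * (n - 1))

# Start in |0...0>, apply the three layers, read off the output distribution.
state = np.zeros(dim)
state[0] = 1.0
state = H_first @ (P @ (H_all @ state))
probs = np.abs(state) ** 2

for z in range(dim):
    print(format(z, f"0{n}b"), round(float(probs[z]), 4))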


2001, Vol 45, pp. 280-310
Author(s): J. Mitchell, A. Ramanathan, A. Scedrov, V. Teague

Author(s): Scott Aaronson

I study the class of problems efficiently solvable by a quantum computer, given the ability to ‘postselect’ on the outcomes of measurements. I prove that this class coincides with a classical complexity class called PP, or probabilistic polynomial-time. Using this result, I show that several simple changes to the axioms of quantum mechanics would let us solve PP-complete problems efficiently. The result also implies, as an easy corollary, a celebrated theorem of Beigel, Reingold and Spielman that PP is closed under intersection, as well as a generalization of that theorem due to Fortnow and Reingold. This illustrates that quantum computing can yield new and simpler proofs of major results about classical computation.
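
The headline identity, and the closure property it yields as a corollary (PostBQP is the paper's name for BQP with postselection), can be written as:

\mathrm{PostBQP} = \mathrm{PP},
\qquad\text{and consequently}\qquad
L_1, L_2 \in \mathrm{PP} \;\Longrightarrow\; L_1 \cap L_2 \in \mathrm{PP}.

The second statement follows because PostBQP is easily seen to be closed under intersection, so the equality transfers that closure to PP.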


2019, Vol 2019, pp. 1-13
Author(s): Baodong Qin

Lossy trapdoor functions (LTFs), introduced by Peikert and Waters in STOC'08, are functions that may operate in either an injective mode or a lossy mode. Given a function key, no (probabilistic) polynomial-time adversary can distinguish an injective key from a lossy key. This paper studies lossy trapdoor functions with tight security. First, we give a formal definition of tightly secure LTFs. Loosely speaking, a collection of LTFs is tightly secure if the advantage in distinguishing a tuple of injective keys from a tuple of lossy keys does not degrade with the number of function keys. Then, we show that tightly secure LTFs can be used to construct public-key encryption schemes with tight CPA security in a multiuser, multichallenge setting, and with tight CCA security in a multiuser, one-challenge setting. Finally, we present a construction of tightly secure LTFs from the decisional Diffie-Hellman assumption.
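
One way to write the multi-key distinguishing advantage this definition refers to (the symbols \mathcal{A}, ik_i, lk_i, and n are chosen here for illustration):

\mathbf{Adv}^{\mathrm{ltf}}_{\mathcal{A}}(n)
  \;=\;
  \Bigl|\,
    \Pr\bigl[\mathcal{A}(ik_1,\dots,ik_n)=1\bigr]
    - \Pr\bigl[\mathcal{A}(lk_1,\dots,lk_n)=1\bigr]
  \,\Bigr|,

where the ik_i are independently sampled injective keys and the lk_i lossy keys. Tight security then asks for a bound on this advantage that does not grow with n.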


2014, Vol 50, pp. 573-601
Author(s): A. Rey, J. Rothe

False-name manipulation refers to the question of whether a player in a weighted voting game can increase her power by splitting into several players and distributing her weight among these false identities. Relatedly, the beneficial merging problem asks whether a coalition of players can increase their power in a weighted voting game by merging their weights. For the problems of whether merging or splitting players in weighted voting games is beneficial in terms of the Shapley--Shubik and the normalized Banzhaf index, only NP-hardness lower bounds were known, leaving their exact complexity open. For the Shapley--Shubik and the probabilistic Banzhaf index, we raise these lower bounds to hardness for PP, "probabilistic polynomial time," a class considered to be far larger than NP. For both power indices, we provide matching upper bounds for beneficial merging and, whenever the new players' weights are given, also for beneficial splitting, thus resolving previous conjectures in the affirmative. Relatedly, we consider the beneficial annexation problem, asking whether a single player can increase her power by taking over other players' weights. It is known that annexation is never disadvantageous for the Shapley--Shubik index, and that beneficial annexation is NP-hard for the normalized Banzhaf index. We show that annexation is never disadvantageous for the probabilistic Banzhaf index either, and for both the Shapley--Shubik index and the probabilistic Banzhaf index we show that it is NP-complete to decide whether annexing another player is advantageous. Moreover, we propose a general framework for merging and splitting that can be applied to different classes and representations of games.
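
To make the two power indices concrete, here is a small brute-force sketch that computes the Shapley--Shubik and probabilistic Banzhaf indices of a weighted voting game and compares a player's power before and after splitting into two false identities. The quota, weights, and split are arbitrary toy choices; whether splitting is beneficial depends on the game.

from itertools import permutations, combinations
from math import factorial

def shapley_shubik(weights, quota):
    """Fraction of player orderings in which each player is pivotal."""
    n = len(weights)
    pivots = [0] * n
    for order in permutations(range(n)):
        total = 0
        for player in order:
            total += weights[player]
            if total >= quota:          # this player tips the coalition over the quota
                pivots[player] += 1
                break
    return [p / factorial(n) for p in pivots]

def probabilistic_banzhaf(weights, quota):
    """For each player i, the fraction of coalitions S not containing i
    with w(S) < quota <= w(S) + w_i, i.e. the probability that i is critical."""
    n = len(weights)
    index = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        critical = 0
        for r in range(len(others) + 1):
            for coalition in combinations(others, r):
                s = sum(weights[j] for j in coalition)
                if s < quota <= s + weights[i]:
                    critical += 1
        index.append(critical / 2 ** (n - 1))
    return index

# Toy game [q; w_1, w_2, w_3] = [5; 4, 3, 2].
quota, weights = 5, [4, 3, 2]
print("original game   SS:", shapley_shubik(weights, quota))
print("original game   PB:", probabilistic_banzhaf(weights, quota))

# False-name manipulation: the weight-4 player splits into two identities of weight 2 each.
split_weights = [2, 2, 3, 2]
ss = shapley_shubik(split_weights, quota)
pb = probabilistic_banzhaf(split_weights, quota)
print("after splitting, combined SS of the two identities:", ss[0] + ss[1])
print("after splitting, combined PB of the two identities:", pb[0] + pb[1])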

