Proof Technique: Recently Published Documents

Total documents: 94 (five years: 24)
H-index: 14 (five years: 2)

Author(s): Jeffrey M. Rabin, David Quarfoot

Abstract: The literature on proof by contradiction (PBC) is nearly unanimous in claiming that this proof technique is “more difficult” for students than direct proof, and offers multiple hypotheses as to why this might be the case. To examine this claim and to evaluate some of the hypotheses, we analyzed student work on proof construction problems from homework and examinations in a university “Introduction to Proof” course taught by one of the authors. We also conducted stimulated-recall interviews with student volunteers probing their thought processes while solving these problems, and their views about PBC in general. Our results suggest that the knowledge resources students bring to bear on proof problems, and how these resources are activated, explain more of their “difficulties” than does the logical structure of the proof technique, at least for this population of students.
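For orientation, a canonical instance of the technique under study (a standard textbook example, not drawn from the paper's data): to show that $\sqrt{2}$ is irrational, suppose toward a contradiction that $\sqrt{2} = p/q$ with $p$ and $q$ coprime integers. Then $p^2 = 2q^2$, so $p$ is even, say $p = 2k$; substituting gives $q^2 = 2k^2$, so $q$ is even as well, contradicting coprimality. The indirect structure, in which the desired conclusion never appears as an explicit subgoal, is one of the logical features the literature cites as a source of student difficulty.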


2021, Vol. 68 (5), pp. 1-43
Author(s): Mark Zhandry

Pseudorandom functions (PRFs) are one of the foundational concepts in theoretical computer science, with numerous applications in complexity theory and cryptography. In this work, we study the security of PRFs when evaluated on quantum superpositions of inputs. The classical techniques for arguing the security of PRFs do not carry over to this setting, even if the underlying building blocks are quantum resistant. We therefore develop a new proof technique to show that many of the classical PRF constructions remain secure when evaluated on superpositions.
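For context, here is a minimal sketch of the classical (pre-quantum) PRF indistinguishability game that such security arguments are built around. The Python below is illustrative only: HMAC-SHA256 stands in as a heuristic PRF candidate, and all names and parameters are ours rather than the paper's. Quantum security replaces the classical oracle queries with superposition queries, which is precisely where the standard hybrid arguments break down.

```python
import hmac, hashlib, secrets

def prf(key: bytes, x: bytes) -> bytes:
    # Heuristic PRF candidate: HMAC-SHA256 keyed by `key`.
    return hmac.new(key, x, hashlib.sha256).digest()

def prf_game(distinguisher, n_queries: int = 16) -> bool:
    """Classical PRF game: the distinguisher queries an oracle that is
    either prf(key, .) or a lazily sampled truly random function, then
    guesses which world it is in. Security = advantage over 1/2 is tiny."""
    real = secrets.randbits(1)
    key = secrets.token_bytes(32)
    table = {}  # lazy sampling of a truly random function

    def oracle(x: bytes) -> bytes:
        if real:
            return prf(key, x)
        if x not in table:
            table[x] = secrets.token_bytes(32)
        return table[x]

    return distinguisher(oracle, n_queries) == real

# A blind distinguisher wins with probability ~1/2, as expected:
wins = sum(prf_game(lambda oracle, q: secrets.randbits(1)) for _ in range(1000))
print(wins / 1000)
```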


Author(s): Andreas Bärmann, Oskar Schneider

Abstract: In the present work, we consider Zuckerberg’s method for geometric convex-hull proofs introduced in Zuckerberg (Oper Res Lett 44(5):625–629, 2016). Despite the great flexibility it offers in designing algorithmic proofs for the completeness of polyhedral descriptions, it has so far been only scarcely adopted in the literature. We suspect that this is partly due to the rather heavy algebraic framework its original statement entails. This is why we present a much more lightweight and accessible approach to Zuckerberg’s proof technique, building on ideas from Gupte et al. (Discrete Optim 36:100569, 2020). We introduce the concept of set characterizations to replace the set-theoretic expressions needed in the original version and to facilitate the construction of algorithmic proof schemes. Along with this, we develop several different strategies to conduct Zuckerberg-type convex-hull proofs. Very importantly, we also show that our concept allows for a significant extension of Zuckerberg’s proof technique. While the original method was only applicable to 0/1-polytopes, our extended framework allows us to treat arbitrary polyhedra and even general convex sets. We demonstrate this increase in expressive power by characterizing the convex hull of Boolean and bilinear functions over polytopal domains. All results are illustrated with indicative examples to underline the practical usefulness and wide applicability of our framework.
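As a concrete instance of the kind of statement such convex-hull proofs certify (the classical McCormick example, given here for orientation; the paper's own illustrations may differ): the convex hull of the graph of the Boolean product $z = xy$ over $\{0,1\}^2$ is cut out by the McCormick inequalities,

$$\operatorname{conv}\{(x, y, xy) : x, y \in \{0,1\}\} = \{(x, y, z) \in [0,1]^3 : z \ge 0,\ z \ge x + y - 1,\ z \le x,\ z \le y\},$$

and a convex-hull proof in the above sense certifies that every point satisfying the right-hand inequalities is a convex combination of the four integer points on the left.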


2021, Vol. 17, Issue 3
Author(s): Filippo Bonchi, Alexandra Silva, Ana Sokolova

Probabilistic automata (PA), also known as probabilistic nondeterministic labelled transition systems, combine probability and nondeterminism. They can be given different semantics, like strong bisimilarity, convex bisimilarity, or (more recently) distribution bisimilarity. The latter is based on the view of PA as transformers of probability distributions, also called belief states, and promotes distributions to first-class citizens. We give a coalgebraic account of distribution bisimilarity and explain the genesis of the belief-state transformer from a PA. To do so, we make explicit the convex algebraic structure present in PA and identify belief-state transformers as transition systems whose state space carries a convex algebra. As a consequence of our abstract approach, we can give a sound proof technique which we call bisimulation up-to convex hull.

Comment: Full (extended) version of a CONCUR 2017 paper; minor revision of the LMCS submission.
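A minimal illustrative sketch of the belief-state view (a toy Python encoding of our own, not the paper's coalgebraic formalization, and omitting the convex-closure step the paper makes precise): a belief state is a distribution over states, and an action maps it to the set of distributions obtained by resolving the nondeterminism in each support state and averaging.

```python
from itertools import product

# Hypothetical toy PA: trans[state][action] is a list of successor
# distributions, each a dict mapping state -> probability.
trans = {
    "s0": {"a": [{"s1": 0.5, "s2": 0.5}, {"s1": 1.0}]},
    "s1": {"a": [{"s1": 1.0}]},
    "s2": {"a": [{"s2": 1.0}]},
}

def step(belief, action):
    """Belief-state transformer: for every way of resolving the
    nondeterministic choice in each support state, push the belief
    state through and collect the successor distributions."""
    support = [s for s, p in belief.items() if p > 0]
    successors = []
    for pick in product(*(trans[s][action] for s in support)):
        out = {}
        for s, mu in zip(support, pick):
            for t, q in mu.items():
                out[t] = out.get(t, 0.0) + belief[s] * q
        successors.append(out)
    return successors

print(step({"s0": 1.0}, "a"))  # [{'s1': 0.5, 's2': 0.5}, {'s1': 1.0}]
```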


Author(s): Marco Castellani, Massimiliano Giuli

Abstract: An existence result for a generalized inequality over a possibly unbounded domain in a finite-dimensional space is established. The proof technique allows us to avoid any monotonicity assumption. We adapt a weak coercivity condition introduced in Castellani and Giuli (J Glob Optim 75:163–176, 2019) for a generalized game, which extends an older one proposed by Konnov and Dyabilkin (J Glob Optim 49:575–577, 2011) for equilibrium problems. Our main result encompasses and generalizes several existence results for equilibrium, quasiequilibrium and fixed-point problems.
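For orientation, the prototypical member of this family of problems is the classical (Ky Fan) equilibrium problem: given a nonempty set $C \subseteq \mathbb{R}^n$ and a bifunction $f : C \times C \to \mathbb{R}$,

$$\text{find } \bar{x} \in C \ \text{ such that } \ f(\bar{x}, y) \ge 0 \ \text{ for all } y \in C.$$

Existence is classically obtained when $C$ is compact or $f$ is monotone; a coercivity condition of the kind adapted here is what substitutes for compactness when $C$ is unbounded and monotonicity is dropped. (This formulation is given for context; the paper's generalized inequality relaxes it further.)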


Author(s): John Derrick, Simon Doherty, Brijesh Dongol, Gerhard Schellhorn, Heike Wehrheim

Abstract: Non-volatile memory (NVM), aka persistent memory, is a new memory paradigm that preserves its contents even after power loss. The expected ubiquity of NVM has stimulated interest in the design of persistent concurrent data structures, together with associated notions of correctness. In this paper, we present a formal proof technique for durable linearizability, which is a correctness criterion that extends linearizability to handle crashes and recovery in the context of NVM. Our proofs are based on refinement of Input/Output automata (IOA) representations of concurrent data structures. To this end, we develop a generic procedure for transforming any standard sequential data structure into a durable specification and prove that this transformation is both sound and complete. Since the durable specification only exhibits durably linearizable behaviours, it serves as the abstract specification in our refinement proof. We exemplify our technique on a recently proposed persistent-memory queue that builds on Michael and Scott’s lock-free queue. To support the proofs, we describe an automated translation procedure from code to IOA and a thread-local proof technique for verifying correctness of invariants.
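As a toy rendering of the generic construction described above (an illustrative Python sketch of our own, not the authors' IOA formalization): the durable specification keeps the object's state across crashes, while an operation in flight at a crash nondeterministically takes effect or is lost; both resolutions yield durably linearizable histories.

```python
import random
from collections import deque

class DurableQueueSpec:
    """Toy durable specification of a queue: state models NVM and
    survives crashes; a pending enqueue at crash time may or may not
    take effect (both outcomes are durably linearizable)."""
    def __init__(self):
        self.items = deque()   # persistent (NVM-resident) state
        self.pending = None    # operation invoked but not yet committed

    def begin_enqueue(self, x):
        self.pending = ("enq", x)

    def commit(self):
        if self.pending is not None:
            self.items.append(self.pending[1])
            self.pending = None

    def dequeue(self):
        return self.items.popleft() if self.items else None

    def crash(self):
        # The in-flight operation nondeterministically completes or vanishes.
        if self.pending is not None and random.random() < 0.5:
            self.commit()
        self.pending = None
```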


Author(s): Håvard Bakke Bjerkevik

Abstract: The algebraic stability theorem for persistence modules is a central result in the theory of stability for persistent homology. We introduce a new proof technique which we use to prove a stability theorem for $n$-dimensional rectangle decomposable persistence modules up to a constant $2n-1$ that generalizes the algebraic stability theorem, and give an example showing that the bound cannot be improved for $n=2$. We then apply the technique to prove stability for block decomposable modules, from which novel results for zigzag modules and Reeb graphs follow. These results are improvements on weaker bounds in previous work, and the bounds we obtain are optimal.
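In standard notation (stated here for orientation, with $\mathcal{B}(\cdot)$ the barcode and $d_B$ the bottleneck distance): if $M$ and $N$ are $\delta$-interleaved $n$-dimensional rectangle decomposable persistence modules, the paper's stability theorem gives

$$d_B\big(\mathcal{B}(M), \mathcal{B}(N)\big) \le (2n - 1)\,\delta,$$

which for $n = 1$ recovers the classical algebraic stability bound $d_B \le \delta$.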


Entropy, 2021, Vol. 23 (3), pp. 270
Author(s): Igal Sason

This paper studies the problem of upper bounding the number of independent sets in a graph, expressed in terms of its degree distribution. For bipartite regular graphs, Kahn (2001) established a tight upper bound using an information-theoretic approach, and he also conjectured an upper bound for general graphs. His conjectured bound was recently proved by Sah et al. (2019), using different techniques not involving information theory. The main contribution of this work is the extension of Kahn’s information-theoretic proof technique to handle irregular bipartite graphs. In particular, when the bipartite graph is regular on one side, but may be irregular on the other, the extended entropy-based proof technique yields the same bound as was conjectured by Kahn (2001) and proved by Sah et al. (2019).
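For reference, the two bounds in question read as follows (standard statements; notation ours). For a $d$-regular bipartite graph $G$ on $n$ vertices, Kahn's tight bound on the number $i(G)$ of independent sets is

$$i(G) \le \left(2^{d+1} - 1\right)^{n/(2d)},$$

and the conjectured bound for a general graph $G$ without isolated vertices, proved by Sah et al., is

$$i(G) \le \prod_{uv \in E(G)} i\big(K_{d_u, d_v}\big)^{1/(d_u d_v)} = \prod_{uv \in E(G)} \left(2^{d_u} + 2^{d_v} - 1\right)^{1/(d_u d_v)},$$

where $d_u$ denotes the degree of vertex $u$.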


Author(s): Sidi Mohamed Beillahi, Ahmed Bouajjani, Constantin Enea

Abstract: Concurrent accesses to databases are typically encapsulated in transactions in order to enable isolation from other concurrent computations and resilience to failures. Modern databases provide transactions with various semantics corresponding to different trade-offs between consistency and availability. Since a weaker consistency model provides better performance, an important issue is investigating the weakest level of consistency needed by a given program (to satisfy its specification). As a way of dealing with this issue, we investigate the problem of checking whether a given program has the same set of behaviors when replacing a consistency model with a weaker one. This property, known as robustness, generally implies that any specification of the program is preserved when weakening the consistency. We focus on the robustness problem for consistency models which are weaker than standard serializability, namely causal consistency, prefix consistency, and snapshot isolation. We show that checking robustness between these models is polynomial-time reducible to a state-reachability problem under serializability. We use this reduction to also derive a pragmatic proof technique based on Lipton’s reduction theory that allows us to prove programs robust. We have applied our techniques to several challenging applications drawn from the literature of distributed systems and databases.
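A standard illustration of the behaviours robustness checking must rule out is the write-skew anomaly, which snapshot isolation admits but serializability forbids; a program containing this pattern is not robust when serializability is weakened to snapshot isolation. The Python sketch below is our own toy rendering, not an example from the paper.

```python
# Write skew under snapshot isolation: both transactions read the same
# snapshot, each preserves the invariant x + y >= 0 in isolation, and
# their write sets are disjoint, so SI commits both.
snapshot = {"x": 50, "y": 50}

def t1(snap):                       # withdraw 100 from y if funds suffice
    if snap["x"] + snap["y"] >= 100:
        return ("y", snap["y"] - 100)

def t2(snap):                       # withdraw 100 from x if funds suffice
    if snap["x"] + snap["y"] >= 100:
        return ("x", snap["x"] - 100)

db = dict(snapshot)
for key, val in (t1(snapshot), t2(snapshot)):
    db[key] = val

print(db)  # {'x': -50, 'y': -50}: x + y = -100 < 0.
# No serial order of t1 and t2 can produce this state, since whichever
# transaction runs second would see insufficient funds and do nothing.
```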


Entropy, 2020, Vol. 23 (1), pp. 44
Author(s): Hemanta K. Maji

Ben-Or and Linial, in a seminal work, introduced the full information model to study collective coin-tossing protocols. Collective coin-tossing is an elegant functionality providing uncluttered access to the primary bottlenecks to achieving security in a specific adversarial model. Additionally, the research outcomes for this versatile functionality have direct consequences on diverse topics in mathematics and computer science. This survey summarizes the current state of the art of coin-tossing protocols in the full information model and recent advances in this field. In particular, it elaborates on a new proof technique that identifies the minimum insecurity incurred by any coin-tossing protocol and, simultaneously, constructs the coin-tossing protocol achieving that insecurity bound. The combinatorial perspective on this new proof technique yields new coin-tossing protocols that are more secure than well-known existing ones, leading to new isoperimetric inequalities over product spaces. Furthermore, the proof technique’s algebraic reimagination resolves several long-standing fundamental hardness-of-computation problems in cryptography. This survey presents one representative application of each of these two perspectives.
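To make the notion of protocol insecurity concrete, consider the textbook one-round majority protocol in the full information model (our illustrative choice; the survey's constructions are more refined): each of $n$ parties broadcasts a fair bit and the output is the majority. A single corrupted party biases the output by roughly $1/\sqrt{n}$, and the sketch below estimates that bias empirically.

```python
import random

def estimate_pr_one(n, corrupt, trials=20_000):
    """Pr[output = 1] for one-round n-party majority coin tossing.
    A corrupt last party always votes 1 (in a single round, seeing the
    other broadcasts first cannot improve on this for biasing to 1)."""
    ones = 0
    for _ in range(trials):
        bits = [random.randint(0, 1) for _ in range(n - 1)]
        bits.append(1 if corrupt else random.randint(0, 1))
        ones += 2 * sum(bits) > n   # strict majority (n odd)
    return ones / trials

for n in (11, 101):
    honest, attacked = estimate_pr_one(n, False), estimate_pr_one(n, True)
    print(n, honest, attacked)     # the gap shrinks roughly like 1/sqrt(n)
```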

