efficient verification
Recently Published Documents


TOTAL DOCUMENTS: 206 (FIVE YEARS: 55)

H-INDEX: 19 (FIVE YEARS: 5)

2022 · Vol 23 (2) · pp. 1-39
Author(s): Tzanis Anevlavis, Matthew Philippe, Daniel Neider, Paulo Tabuada

While most approaches in formal methods address system correctness, ensuring robustness has remained a challenge. In this article, we present and study the logic rLTL, which provides a means to formally reason about both correctness and robustness in system design. Furthermore, we identify a large fragment of rLTL for which the verification problem can be solved efficiently, i.e., verification can be done using an automaton, recognizing the behaviors described by the rLTL formula φ, of size at most O(3^{|φ|}), where |φ| is the length of φ. This result improves upon the previously known bound of O(5^{|φ|}) for rLTL verification and is closer to the LTL bound of O(2^{|φ|}). The usefulness of this fragment is demonstrated by a number of case studies showing its practical significance in terms of expressiveness, the ability to describe robustness, and the fine-grained information that rLTL brings to the process of system verification. Moreover, these advantages come at a low computational overhead with respect to LTL verification.
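To get a sense of the gap between these bounds, here is a quick back-of-the-envelope comparison (not from the article; the formula lengths are made-up example values):

```python
# Worst-case automaton sizes for the three bounds quoted above,
# evaluated at a few hypothetical formula lengths n = |phi|.
for n in (5, 10, 20):
    ltl, fragment, full_rltl = 2**n, 3**n, 5**n
    print(f"|phi| = {n:2d}: LTL 2^n = {ltl:,}  "
          f"rLTL fragment 3^n = {fragment:,}  "
          f"full rLTL 5^n = {full_rltl:,}")
```

Already at |φ| = 20, the fragment's bound is roughly 27,000 times smaller than the general rLTL bound (3^20 ≈ 3.5 × 10^9 versus 5^20 ≈ 9.5 × 10^13).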


Quantum · 2021 · Vol 5 · pp. 578
Author(s): Ulysse Chabaud, Frédéric Grosshans, Elham Kashefi, Damian Markham

The demonstration of quantum speedup, also known as quantum computational supremacy, that is, the ability of quantum computers to dramatically outperform their classical counterparts, is an important milestone in the field of quantum computing. While quantum speedup experiments are gradually escaping the regime of classical simulation, they still lack efficient verification protocols and rely on partial validation. Here we derive an efficient protocol for verifying, with single-mode Gaussian measurements, the output states of a large class of continuous-variable quantum circuits demonstrating quantum speedup, including Boson Sampling experiments, thus enabling a convincing demonstration of quantum speedup with photonic computing. Beyond the quantum speedup milestone, our results also enable the efficient and reliable certification of a large class of intractable continuous-variable multimode quantum states.


2021 · Vol 64 (11) · pp. 131-138
Author(s): Zhengfeng Ji, Anand Natarajan, Thomas Vidick, John Wright, Henry Yuen

Note from the Research Highlights Co-Chairs: A Research Highlights paper appearing in Communications is usually peer-reviewed prior to publication. The following paper is unusual in that it is still under review. However, the result has generated enormous excitement in the research community, and came strongly nominated by SIGACT, a nomination seconded by external reviewers.

The complexity class NP characterizes the collection of computational problems that have efficiently verifiable solutions. With the goal of classifying computational problems that seem to lie beyond NP, starting in the 1980s, complexity theorists have considered extensions of the notion of efficient verification that allow for the use of randomness (the class MA), interaction (the class IP), and the possibility to interact with multiple proofs, or provers (the class MIP). The study of these extensions led to the celebrated PCP theorem and its applications to hardness of approximation and the design of cryptographic protocols. In this work, we study a fourth modification to the notion of efficient verification that originates in the study of quantum entanglement. We prove the surprising result that every problem that is recursively enumerable, including the Halting problem, can be efficiently verified by a classical probabilistic polynomial-time verifier interacting with two all-powerful but noncommunicating provers sharing entanglement. The result resolves long-standing open problems in the foundations of quantum mechanics (Tsirelson's problem) and operator algebras (Connes' embedding problem).
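As a concrete, textbook-level illustration of how randomness extends efficient verification (this example is not from the paper): Freivalds' algorithm checks a claimed matrix product A·B = C in O(n²) time per round, whereas recomputing the product deterministically takes matrix-multiplication time.

```python
import random

def freivalds(A, B, C, rounds=20):
    """Probabilistically verify A @ B == C in O(n^2) time per round.

    Each round multiplies by a random 0/1 vector r and checks
    A(Br) == Cr. If C is wrong, a round catches it with probability
    >= 1/2, so the overall error probability is <= 2**-rounds.
    """
    n = len(A)
    for _ in range(rounds):
        r = [random.randint(0, 1) for _ in range(n)]
        Br = [sum(B[i][j] * r[j] for j in range(n)) for i in range(n)]
        ABr = [sum(A[i][j] * Br[j] for j in range(n)) for i in range(n)]
        Cr = [sum(C[i][j] * r[j] for j in range(n)) for i in range(n)]
        if ABr != Cr:
            return False  # definitely wrong
    return True  # correct with high probability

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
C = [[19, 22], [43, 50]]   # the correct product
print(freivalds(A, B, C))  # True: a correct product always passes
```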


2021
Author(s): Yao Hsiao, Dominic P. Mulligan, Nikos Nikoleris, Gustavo Petri, Caroline Trippel

2021 · Vol 5 (ICFP) · pp. 1-30
Author(s): Aymeric Fromherz, Aseem Rastogi, Nikhil Swamy, Sydney Gibson, Guido Martínez, ...

Steel is a language for developing and proving concurrent programs embedded in F⋆, a dependently typed programming language and proof assistant. Based on SteelCore, a concurrent separation logic (CSL) formalized in F⋆, our work focuses on exposing the proof rules of the logic in a form that enables programs and proofs to be effectively co-developed. Our main contributions include a new formulation of a Hoare logic of quintuples involving both separation logic and first-order logic, enabling efficient verification condition (VC) generation and proof discharge using a combination of tactics and SMT solving. We relate the VCs produced by our quintuple system to solving a system of associativity-commutativity (AC) unification constraints and develop tactics to (partially) solve these constraints using AC-matching modulo SMT-dischargeable equations. Our system is fully mechanized and implemented in F⋆. We evaluate it by developing several verified programs and libraries, including various sequential and concurrent linked data structures, proof libraries, and a library for 2-party session types. Our experience leads us to conclude that our system enables a mixture of automated and interactive proof, making it productive to build programs foundationally verified against a highly expressive, state-of-the-art CSL.
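To make the AC-matching idea concrete, here is a minimal sketch (in Python, not F⋆, and far simpler than Steel's actual tactics): because the separation-logic `star` connective is associative and commutative, matching two syntactic frames reduces to comparing the multisets of atoms obtained by flattening the `star` trees.

```python
from collections import Counter

# Toy model: resources joined by ('star', l, r) are AC, so frames like
#   (p * q) * r   and   r * (q * p)
# should match. Flattening the star tree to a multiset makes the
# AC-matching step a simple Counter comparison.

def flatten(term):
    """Flatten nested ('star', l, r) trees into a list of atoms."""
    if isinstance(term, tuple) and term[0] == 'star':
        return flatten(term[1]) + flatten(term[2])
    return [term]

def ac_equal(t1, t2):
    return Counter(flatten(t1)) == Counter(flatten(t2))

lhs = ('star', ('star', 'p', 'q'), 'r')
rhs = ('star', 'r', ('star', 'q', 'p'))
assert ac_equal(lhs, rhs)
```

Steel's setting is harder than this sketch because the atoms may contain unification variables and equalities dischargeable only by SMT, which is where the paper's tactics come in.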


2021 · Vol 7 (1)
Author(s): Ryan S. Bennink

Abstract: I present a method for estimating the fidelity F(μ, τ) between a preparable quantum state μ and a classically specified pure target state $\tau = |\tau\rangle\langle\tau|$, using simple quantum circuits and on-the-fly classical calculation (or lookup) of selected amplitudes of $|\tau\rangle$. The method is sample efficient for anticoncentrated states (including many states that are hard to simulate classically), with approximate cost $4\epsilon^{-2}(1-F)\,d\,p_{\mathrm{coll}}$, where ϵ is the desired precision of the estimate, d is the dimension of the Hilbert space, and $p_{\mathrm{coll}}$ is the collision probability of the target distribution. This scaling is exponentially better than that of any method based on classical sampling. I also present a more sophisticated version of the method that uses any efficiently preparable and well-characterized quantum state as an importance sampler to further reduce the number of copies of μ needed. Though some challenges remain, this work takes a significant step toward scalable verification of complex states produced by quantum processors.
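The quoted cost formula is easy to evaluate. Below is a quick illustration (all concrete values, including the $p_{\mathrm{coll}} \approx 2/d$ estimate for an anticoncentrated target, are assumptions for the sake of example, not values from the paper):

```python
# Sample-cost estimate for the scaling 4 * eps^-2 * (1 - F) * d * p_coll
# quoted above. The concrete numbers below are hypothetical.
def sample_cost(eps, F, d, p_coll):
    return 4 * eps**-2 * (1 - F) * d * p_coll

d = 2**20               # dimension of a 20-qubit Hilbert space
p_coll = 2 / d          # anticoncentrated target: p_coll scales like 1/d
print(sample_cost(eps=0.01, F=0.9, d=d, p_coll=p_coll))  # -> 8000.0
```

The role of the anticoncentration assumption is visible here: when $p_{\mathrm{coll}}$ scales like 1/d, the product $d\,p_{\mathrm{coll}}$ is a constant, so the number of copies needed does not grow with the size of the system.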


2021
Author(s): Daniel J Delbarre, Luis Santos, Habib Ganjgahi, Neil Horner, Aaron McCoy, ...

Large-scale neuroimaging datasets present unique challenges for automated processing pipelines. Motivated by a large-scale clinical trials dataset of Multiple Sclerosis (MS) with over 235,000 magnetic resonance imaging (MRI) scans, we consider the challenge of defacing: anonymisation to remove identifying features on the face and the ears. The defacing process must undergo quality control (QC) checks to ensure that the facial features have been adequately anonymised and that the brain tissue is left completely intact. Visual QC checks, particularly on a project of this scale, are time-consuming and can cause delays in preparing data for research. In this study, we have developed a convolutional neural network (CNN) that can assist with the QC of MRI defacing. Our CNN is able to distinguish between scans that are correctly defaced and three sub-types of failure, with high test accuracy (77%). By applying visualisation techniques, we are able to verify that the CNN uses the same anatomical features as human scorers when selecting classifications. Due to the sensitive nature of the data, strict thresholds are applied so that only classifications with high confidence are accepted, and scans that are passed by the CNN undergo a time-efficient verification check. Integration of the network into the anonymisation pipeline has led to nearly half of all scans being classified by the CNN, resulting in a considerable reduction in the amount of time needed for manual QC checks, while maintaining high QC standards to protect patient identities.
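The confidence-gated routing the authors describe can be sketched in a few lines (the threshold value and label names below are hypothetical, not taken from the study):

```python
# Sketch of confidence-gated QC routing: accept a CNN classification only
# when its softmax confidence clears a strict threshold; otherwise fall
# back to manual visual QC. Threshold and labels are made-up examples.
CONFIDENCE_THRESHOLD = 0.95

def route_scan(class_probs):
    """class_probs: dict mapping defacing-QC labels to CNN softmax scores."""
    label, confidence = max(class_probs.items(), key=lambda kv: kv[1])
    if confidence >= CONFIDENCE_THRESHOLD:
        return label      # accepted; passes still get a quick verification check
    return "manual_qc"    # low confidence: route to visual QC

print(route_scan({"pass": 0.98, "fail_face": 0.01,
                  "fail_ears": 0.005, "fail_brain": 0.005}))  # -> pass
```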

