Proof of Correctness: Recently Published Documents

Total documents: 80 (last five years: 13)
H-index: 11 (last five years: 1)

Author(s):  
Mihály Csaba Markót

Abstract. In this work, computer-assisted optimality proofs are given for the problems of finding the densest packings of 31, 32, and 33 non-overlapping equal circles in a square. In a 2005 study, a fully interval arithmetic based global optimization method was introduced for this problem class, solving the cases 28, 29, and 30. Until now, these were the largest problem instances solved on a computer. Using the techniques of that paper, the estimated solution time for the next three cases would have been 3–6 CPU months. In the present paper this former method is improved in both its local and global search phases. We discuss a new interval-based polygon representation of the core local method for eliminating suboptimal regions, which has a simpler implementation, an easier proof of correctness, and faster behaviour than the former one. Furthermore, a modified strategy is presented for the global phase of the search, including improved symmetry filtering and tile pattern matching. With the new method the cases $$n=31, 32, 33$$ have been solved in 26, 61, and 13 CPU hours, respectively, giving high-precision enclosures for all global optimizers and the optimum value. After factoring out the hardware and compiler improvements since the former study, the new proof technique is roughly 40–100 times faster than the previous one. In addition, the new implementation is suitable for solving the next few circle packing instances with similar computational effort.
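
The core elimination test in such interval methods can be illustrated with a short sketch. The Python fragment below (an illustrative toy with assumed names and data, not the paper's implementation) rejects a box of candidate circle-center positions whenever some pair of centers is provably closer, everywhere in the box, than the best packing found so far; a real proof would additionally need directed rounding.

```python
# Toy interval test for discarding sub-optimal regions in circle packing
# (point-packing formulation): a box of candidate center positions can be
# rejected if, for some pair of points, even the LARGEST possible distance
# within the box is below the best minimum pairwise distance found so far.
import math

class Interval:
    """Closed interval [lo, hi] with only the operations needed here."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __sub__(self, other):
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def sq(self):
        """Interval enclosure of x**2 for x in [lo, hi]."""
        cands = [self.lo * self.lo, self.hi * self.hi]
        lo = 0.0 if self.lo <= 0.0 <= self.hi else min(cands)
        return Interval(lo, max(cands))

def max_pair_distance(bx_i, by_i, bx_j, by_j):
    """Upper bound on the distance between any two points of the two boxes."""
    d2 = (bx_i - bx_j).sq().hi + (by_i - by_j).sq().hi
    return math.sqrt(d2)  # a rigorous proof would use outward/directed rounding

def can_discard(box, best_known):
    """box = list of (x-interval, y-interval) per center.  Discard the whole
    box if some pair of centers is provably closer than best_known everywhere."""
    n = len(box)
    for i in range(n):
        for j in range(i + 1, n):
            if max_pair_distance(box[i][0], box[i][1],
                                 box[j][0], box[j][1]) < best_known:
                return True
    return False

# Illustrative use: two centers forced into the same small corner region.
box = [(Interval(0.0, 0.1), Interval(0.0, 0.1)),
       (Interval(0.0, 0.1), Interval(0.0, 0.1))]
print(can_discard(box, best_known=0.3))   # True: this region cannot beat 0.3
```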


Author(s):  
Martin Bies,
Sebastian Posur

We provide explicit constructions for various ingredients of right exact monoidal structures on the category of finitely presented functors. As our main tool, we prove a multilinear version of the universal property of so-called Freyd categories, which in turn is used in the proof of correctness of our constructions. Furthermore, we compare our construction with the Day convolution of arbitrary additive functors. Day convolution always yields a closed monoidal structure on the category of all additive functors. In contrast, right exact monoidal structures for finitely presented functor categories are not necessarily closed. We provide a necessary criterion for being closed that relies on the underlying category having weak kernels and a so-called finitely presented prointernal hom structure. Our results are stated in a constructive way and thus serve as a unified approach for the implementation of tensor products in various contexts.
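
For orientation, the Day convolution mentioned above can be recalled in its textbook covariant form: for additive functors $$F, G : \mathcal{A} \to \mathrm{Ab}$$ on an additive monoidal category $$(\mathcal{A}, \otimes)$$, it is the coend

$$(F \otimes_{\mathrm{Day}} G)(a) \;=\; \int^{b,c \in \mathcal{A}} \mathcal{A}(b \otimes c, a) \otimes_{\mathbb{Z}} F(b) \otimes_{\mathbb{Z}} G(c).$$

This standard formula is quoted here only for context; the paper compares it against its right exact monoidal structures on categories of finitely presented functors.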


Author(s):  
Giles Reger,
David Rydeheard

Abstract. Parametric runtime verification is the process of verifying properties of execution traces of (data-carrying) events produced by a running system. This paper continues our work exploring the relationship between specification techniques for parametric runtime verification. Here we consider the correspondence between trace-slicing automata-based approaches and rule systems. The main contribution is a translation from quantified automata to rule systems, which has been implemented in Scala. This allows us to highlight the key differences in how the two formalisms handle data, an important step in our wider effort to understand the correspondence between different specification languages for parametric runtime verification. This paper extends a previous conference version with further examples, a proof of correctness, and an optimisation based on a notion of redundancy observed during the development of the translation.
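
The trace-slicing idea underlying the automata-based side can be sketched briefly. The following Python fragment (a simplified illustration with an assumed event alphabet and property, not the Scala translation described above) slices a trace of data-carrying events by their data value and checks each slice with a small propositional automaton.

```python
# Parametric trace slicing in miniature: events carrying data are grouped into
# per-binding slices, and each slice is checked by a propositional automaton.
# Property per resource r (an assumed example): open(r) must be followed by
# close(r) before the trace ends.
from collections import defaultdict

TRANSITIONS = {("start", "open"): "opened", ("opened", "close"): "closed"}

def slice_trace(trace):
    """Group (event, datum) pairs into one sub-trace per datum."""
    slices = defaultdict(list)
    for event, datum in trace:
        slices[datum].append(event)
    return slices

def check_slice(events):
    """Run the propositional automaton over a single slice."""
    state = "start"
    for e in events:
        state = TRANSITIONS.get((state, e), "error")
    return state == "closed"

trace = [("open", "f1"), ("open", "f2"), ("close", "f1")]
for datum, events in slice_trace(trace).items():
    print(datum, "ok" if check_slice(events) else "violates")
# f1 ok, f2 violates (never closed)
```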


Author(s):  
Oladayo Olufemi Olakanmi,
Adedamola Dada

In outsourcing computation models, weak devices (clients) increasingly rely on remote servers (workers) for data storage and computation. However, many of these servers are hackable or untrustworthy, which makes their computations questionable. Therefore, there is a need for clients to validate the correctness of the results of their outsourced computations and to ensure that servers learn nothing about their clients other than the outputs of the computation. In this work, an efficient privacy-preserving validation approach is developed which allows clients to store and outsource their computations to servers in a semi-honest model, such that the servers' computational results can be validated by the clients without re-computing them. The article employs a morphism approach that lets the client efficiently perform the proof of correctness of its outsourced computation without redoing the whole computation. A traceable pseudonym is employed by clients to enforce anonymity.
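
The flavour of such a validation, checking an outsourced result far more cheaply than recomputing it, can be conveyed with a standard textbook example. The sketch below uses Freivalds' randomized check for an outsourced matrix product; it is not the morphism-based scheme of the article, only an illustration of the same goal.

```python
# Freivalds' check: the client validates an outsourced matrix product
# A @ B == C in O(n^2) time per round instead of re-doing the O(n^3) product.
import numpy as np

def freivalds_check(A, B, C, rounds=20):
    n = C.shape[1]
    for _ in range(rounds):
        r = np.random.randint(0, 2, size=(n, 1))    # random 0/1 vector
        if not np.array_equal(A @ (B @ r), C @ r):  # two cheap matrix-vector products
            return False                            # result is certainly wrong
    return True                                     # wrong with prob. <= 2**-rounds

A = np.random.randint(0, 10, (50, 50))
B = np.random.randint(0, 10, (50, 50))
C = A @ B                          # honest "worker" result
print(freivalds_check(A, B, C))    # True
C[0, 0] += 1                       # tampered result
print(freivalds_check(A, B, C))    # False (with overwhelming probability)
```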


The quantum Fourier transform (QFT) plays a key role in many quantum algorithms, but existing QFT circuits are incomplete and lack proofs of correctness. Furthermore, it is difficult to apply the QFT to concrete information processing tasks. This chapter therefore first investigates quantum vision representation (QVR) and develops a model of QVR (MQVR). Then, four complete circuits of the QFT and the inverse QFT (IQFT) are designed, and their correctness is proved by formula derivation. Next, 2D QFT and 3D QFT based on QVR are proposed. Simulation results show that the proposed QFTs are valid and useful for processing quantum images and videos. In conclusion, this chapter develops a complete framework of QFT based on QVR and provides a feasible scheme for applying the QFT in quantum vision information processing.
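
As a numerical aside (not the chapter's circuit constructions), the action of the QFT on $$n$$ qubits is the unitary matrix with entries $$F_{jk} = \omega^{jk}/\sqrt{N}$$, where $$N = 2^{n}$$ and $$\omega = e^{2\pi i/N}$$. The short sketch below builds this matrix and checks its unitarity and its agreement with the classical discrete Fourier transform.

```python
# Numerical sanity check of the QFT as a matrix: unitarity, and agreement with
# the classical DFT (NumPy's inverse-FFT convention, up to normalization).
import numpy as np

def qft_matrix(n_qubits):
    N = 2 ** n_qubits
    j, k = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
    return np.exp(2j * np.pi * j * k / N) / np.sqrt(N)

F = qft_matrix(3)                                       # 8 x 8 unitary
print(np.allclose(F @ F.conj().T, np.eye(8)))           # True: F is unitary
x = np.random.rand(8) + 1j * np.random.rand(8)          # arbitrary state vector
print(np.allclose(F @ x, np.sqrt(8) * np.fft.ifft(x)))  # True: matches the DFT
```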


Author(s):  
Matthijs Vákár

Abstract. We show how to define forward- and reverse-mode automatic differentiation source-code transformations on a standard higher-order functional language. The transformations generate purely functional code, and they are principled in the sense that their definition arises from a categorical universal property. We give a semantic proof of correctness of the transformations. In their most elegant formulation, the transformations generate code with linear types. However, we demonstrate how the transformations can be implemented in a standard functional language without sacrificing correctness. To do so, we make use of abstract data types to represent the required linear types, e.g. through the use of a basic module system.
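
For readers unfamiliar with forward-mode AD, the runtime dual-number version gives a feel for what the source-code transformation computes. The sketch below is only this standard dual-number construction; it does not reproduce the paper's categorical transformations or their linear types.

```python
# Forward-mode AD with dual numbers: each value carries its derivative with
# respect to one chosen input, and arithmetic propagates both components.
import math

class Dual:
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot

    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)

    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val, self.val * o.dot + self.dot * o.val)

def sin(x):
    return Dual(math.sin(x.val), math.cos(x.val) * x.dot)

def f(x):
    return sin(x) * x + x            # f(x) = x*sin(x) + x

d = f(Dual(2.0, 1.0))                # seed the input's derivative with 1
print(d.val, d.dot)                  # f(2) and f'(2) = sin(2) + 2*cos(2) + 1
```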


Author(s):  
M. Brédif,
L. Caraffa,
M. Yirci,
P. Memari

Abstract. This paper deals with the distributed computation of Delaunay triangulations of massive point sets, mainly motivated by the needs of a scalable out-of-core surface reconstruction workflow for massive urban LIDAR datasets. Such data often correspond to a huge point cloud represented as a set of tiles of relatively homogeneous sizes. This is the input of our algorithm, which naturally partitions the data across multiple processing elements. The distributed computation and the communication between processing elements are orchestrated efficiently through a decentralized model to represent, manage, and locally construct the triangulation corresponding to each tile. Initially inspired by the star splaying approach, we review the Tile & Merge algorithm for computing distributed Delaunay triangulations on the cloud, provide a theoretical proof of correctness of this algorithm, and analyse the performance of our Spark implementation in terms of speedup and strong scaling on both synthetic and real-world datasets. An HPC implementation (e.g. using MPI), left for future work, would benefit from a more efficient message-passing paradigm but would lose the robustness and failure resilience of our Spark approach.
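
The local, per-tile step of such a pipeline can be sketched in a few lines. The fragment below (an illustrative simplification with an assumed regular tiling; it deliberately omits the inter-tile point exchange and merging that the Tile & Merge algorithm and its proof of correctness are actually about) partitions points into a grid of tiles and triangulates each tile independently with SciPy.

```python
# Per-tile Delaunay triangulation: partition 2D points into a regular grid of
# tiles, then triangulate each tile locally.  Merging neighbouring tiles into
# one consistent global triangulation is the hard part and is not shown here.
import numpy as np
from scipy.spatial import Delaunay

def split_into_tiles(points, n_tiles_per_axis=2):
    """Assign each point in [0, 1)^2 to a grid cell (a simplifying assumption)."""
    tiles = {}
    for p in points:
        key = tuple((p * n_tiles_per_axis).astype(int)
                    .clip(0, n_tiles_per_axis - 1))
        tiles.setdefault(key, []).append(p)
    return {k: np.array(v) for k, v in tiles.items()}

points = np.random.rand(1000, 2)                  # stand-in for one LIDAR tile set
for key, tile_points in split_into_tiles(points).items():
    tri = Delaunay(tile_points)                   # local, in-memory triangulation
    print(key, "triangles:", len(tri.simplices))
```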


Author(s):  
Dirk Beyer,
Sudeep Kanav

Abstract. Program verification is the problem, for a given program $$P$$ and a specification $$\phi$$, of constructing a proof of correctness for the statement “program $$P$$ satisfies specification $$\phi$$” ($$P \models \phi$$) or a proof of violation ($$P \not\models \phi$$). Usually, a correctness proof is based on inductive invariants, and a violation proof on a violating program trace. Verification engineers typically expect that a verification tool exports these proof artifacts. We propose to view the task of program verification as constructing a behavioral interface (represented e.g. by an automaton). We start with the interface $$I_{P}$$ of the program itself, which represents all traces of program executions. To prove correctness, we try to construct a more abstract interface $$I_{C}$$ of the program (an overapproximation) that satisfies the specification. This interface, if found, represents more traces than $$I_{P}$$, all of which are correct (satisfy the specification). Ultimately, we want a compact representation of the program behavior as a correctness interface $$I_{C}$$ in terms of inductive invariants. We can then extract a correctness witness, in a standard exchange format, out of such a correctness interface. Symmetrically, to prove violation, we try to construct a more concrete interface $$I_{V}$$ of the program (an underapproximation) that violates the specification. This interface, if found, represents fewer traces than $$I_{P}$$, all of which are feasible (can be executed). Ultimately, we want a compact representation of the program behavior as a violation interface $$I_{V}$$ in terms of a violating program trace. We can then extract a violation witness, in a standard exchange format, out of such a violation interface. This viewpoint exposes the duality of the two tasks of proving correctness and proving violation. It enables the decomposition of the verification process, and its tools, into (at least!) three components: interface synthesizers, refinement checkers, and specification checkers. We hope the reader finds this viewpoint useful, although the underlying ideas are not novel. We see it as a framework towards modular program verification.
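
The role of inductive invariants in a correctness interface can be made concrete with a tiny example. The sketch below (an illustrative assumption using the Z3 Python API, not the authors' tooling) checks that an invariant is inductive for a one-variable loop and that it implies a simple specification.

```python
# Inductive-invariant check for the loop  x := 0; while * do x := x + 2.
# The candidate invariant "x >= 0 and x is even" is shown to hold initially,
# to be preserved by the transition, and to imply the specification x != 1.
from z3 import Ints, Solver, And, Implies, Not, unsat

x, x1 = Ints("x x1")                      # current and next value of x
inv = lambda v: And(v >= 0, v % 2 == 0)   # candidate inductive invariant
init = x == 0
trans = x1 == x + 2
spec = x != 1

def valid(formula):
    """A formula is valid iff its negation is unsatisfiable."""
    s = Solver()
    s.add(Not(formula))
    return s.check() == unsat

print(valid(Implies(init, inv(x))))                  # base case holds
print(valid(Implies(And(inv(x), trans), inv(x1))))   # inductiveness holds
print(valid(Implies(inv(x), spec)))                  # invariant implies the spec
```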

