Verification of Programs with Mutual Recursion in the Pifagor Language

2018 ◽  
Vol 25 (4) ◽  
pp. 358-381
Author(s):  
Mariya S. Ushakov ◽  
Alexander I. Legalov

In this article, we consider the verification of programs with mutual recursion in the data-driven functional parallel language Pifagor. In this language a program can be represented as a data flow graph that has no control connections, only data dependencies. Under these conditions the process of formal verification can be simplified, since there is no need to analyse the resource conflicts present in systems with conventional architectures. The proof of program correctness is based on eliminating mutual recursion by program transformation. The universal method for eliminating the mutual recursion of an arbitrary number of functions consists in constructing a universal recursive function that simulates all the functions involved in the mutual recursion. A natural number is assigned to each function, and the universal recursive function takes as its arguments the number of the function to be simulated and the arguments of that function. In some cases of indirect recursion a simpler program transformation can be used, namely merging the code of the functions into a single function. To remove the mutual recursion of an arbitrary number of functions, it is suggested to construct the graph of all connected functions and transform it: first remove functions that are not connected with the target function, then merge functions with indirect recursion, and finally construct the universal recursive function. It is proved that in the Pifagor language such transformations as code merging and universal-recursive-function construction do not change the correctness of the initial program. An example of a partial correctness proof is given for a program that parses a simple arithmetic expression. We construct the graph of all connected functions and demonstrate both methods of proof: code merging and the universal recursive function.
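The universal-recursive-function construction described above can be sketched in a few lines. The following is a minimal illustration in Python (Pifagor itself is not widely available); the mutually recursive pair is_even/is_odd is a hypothetical example, not taken from the paper. Each function is assigned a number, and the universal function takes that number plus the original argument, so all recursion becomes direct self-recursion:

```python
def is_even(n):  # function number 0 in the mutual recursion
    return True if n == 0 else is_odd(n - 1)

def is_odd(n):   # function number 1 in the mutual recursion
    return False if n == 0 else is_even(n - 1)

def universal(f, n):
    """Simulates function number f: 0 -> is_even, 1 -> is_odd.

    The mutual recursion is eliminated: `universal` calls only itself,
    switching the simulated function by changing the first argument.
    """
    if f == 0:
        return True if n == 0 else universal(1, n - 1)
    else:
        return False if n == 0 else universal(0, n - 1)
```

A correctness proof for the transformed program then only has to reason about the single self-recursive function `universal`, which is the point of the transformation.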

2002 ◽  
Vol 9 (2) ◽  
Author(s):  
Lasse R. Nielsen

We build on Danvy and Nielsen's first-order program transformation into continuation-passing style (CPS) to present a new correctness proof of the converse transformation, i.e., a one-pass transformation from CPS back to direct style. Previously published proofs were based on, e.g., a one-pass higher-order CPS transformation, and were complicated by having to reason about higher-order functions. In contrast, this work is based on a one-pass CPS transformation that is both compositional and first-order, and therefore the proof simply proceeds by structural induction on syntax.
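To make the notions of "compositional" and "first-order" concrete, here is a sketch of the classic Plotkin call-by-value CPS transformation over first-order abstract syntax, written in Python. This is not the one-pass transformation of the paper (which additionally avoids the administrative redexes this naive version introduces), and the tuple encoding and fresh-name scheme are illustrative assumptions; but it shows the key property exploited in the proof: each case is built only from the transforms of its immediate subterms, so reasoning proceeds by structural induction on syntax.

```python
import itertools

_fresh = itertools.count()

def fresh(prefix):
    """Generate a fresh variable name (simplified; no capture checking)."""
    return f"{prefix}{next(_fresh)}"

def cps(term):
    """CBV CPS transform. term is ('var', x) | ('lam', x, body) | ('app', f, a)."""
    k = fresh('k')
    tag = term[0]
    if tag == 'var':
        # [x] = lam k. k x
        return ('lam', k, ('app', ('var', k), term))
    if tag == 'lam':
        # [lam x. M] = lam k. k (lam x. [M])
        _, x, body = term
        return ('lam', k, ('app', ('var', k), ('lam', x, cps(body))))
    if tag == 'app':
        # [M N] = lam k. [M] (lam m. [N] (lam n. m n k))
        _, f, a = term
        m, n = fresh('m'), fresh('n')
        return ('lam', k,
                ('app', cps(f),
                 ('lam', m,
                  ('app', cps(a),
                   ('lam', n,
                    ('app', ('app', ('var', m), ('var', n)), ('var', k)))))))
```

Because `cps` is an ordinary first-order function on tuples (no higher-order accumulators), a statement about `cps(term)` can be proved directly by induction on the shape of `term`.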


2019 ◽  
Vol 66 ◽  
pp. 151-196 ◽  
Author(s):  
Kirthevasan Kandasamy ◽  
Gautam Dasarathy ◽  
Junier Oliva ◽  
Jeff Schneider ◽  
Barnabás Póczos

In many scientific and engineering applications, we are tasked with maximising an expensive-to-evaluate black-box function f. Traditional settings for this problem assume the availability of only this single function. In many cases, however, cheap approximations to f may be obtainable; for example, the expensive real-world behaviour of a robot can be approximated by a cheap computer simulation. We can use these approximations to eliminate low-function-value regions cheaply, reserve the expensive evaluations of f for a small but promising region, and speedily identify the optimum. We formalise this task as a multi-fidelity bandit problem where the target function and its approximations are sampled from a Gaussian process. We develop MF-GP-UCB, a novel method based on upper-confidence-bound techniques. Our theoretical analysis demonstrates that it exhibits precisely the above behaviour and achieves better regret bounds than strategies which ignore multi-fidelity information. Empirically, MF-GP-UCB outperforms such naive strategies and other multi-fidelity methods on several synthetic and real experiments.
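The core multi-fidelity idea (cheap approximations prune the search space before expensive evaluations are spent) can be sketched without the Gaussian-process machinery. In the toy version below, the GP posterior upper confidence bound is replaced by the crude bound f_low(x) + zeta, where zeta is an assumed known bound on the bias of the cheap approximation; all names and parameters are illustrative assumptions, not the paper's algorithm.

```python
def multifidelity_maximise(f, f_low, candidates, zeta, budget):
    """Maximise f over candidates, pruning first with cheap f_low.

    Assumes |f(x) - f_low(x)| <= zeta for all x, so f_low(x) + zeta is an
    upper bound on f(x) and f_low(x) - zeta is a lower bound.
    """
    # Cheap pass: upper confidence bound on f(x) from the approximation.
    ucb = {x: f_low(x) + zeta for x in candidates}
    best_cheap = max(ucb.values())
    # A point survives only if its upper bound can still beat the best
    # lower bound, which is best_cheap - 2 * zeta.
    survivors = [x for x in candidates if ucb[x] >= best_cheap - 2 * zeta]
    # Expensive pass: spend the evaluation budget only on the survivors.
    evaluated = sorted(survivors, key=lambda x: ucb[x], reverse=True)[:budget]
    return max(evaluated, key=f)
```

MF-GP-UCB replaces the fixed bound zeta with posterior confidence bounds at each fidelity, which is what yields the regret guarantees mentioned above.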


Author(s):  
Alberto Pettorossi ◽  
Maurizio Proietti

Program transformation is a methodology for deriving correct and efficient programs from specifications. In this chapter, we will look at the so-called 'rules + strategies' approach, and we will report on the main techniques which have been introduced in the literature for that approach, in the case of logic programs. We will also present some examples of program transformation, and we hope that through those examples the reader may acquire some familiarity with the techniques we will describe.

The program transformation approach to the development of programs was first advocated in the case of functional languages by Burstall and Darlington [1977]. In that seminal paper the authors give a comprehensive account of some basic transformation techniques which they had already presented in [Darlington, 1972; Burstall and Darlington, 1975]. Similar techniques were also developed in the case of logic languages by Clark and Sickel [1977] and Hogger [1981], who investigated the use of predicate logic as a language for both program specification and program derivation.

In the transformation approach the task of writing a correct and efficient program is carried out in two phases. The first phase consists in writing an initial, possibly inefficient, program whose correctness can easily be shown; the second phase, possibly divided into various subphases, consists in transforming the initial program with the objective of deriving a new program which is more efficient. The separation of the correctness concern from the efficiency concern is one of the major advantages of the transformation methodology. Indeed, using this methodology one may avoid some difficulties often encountered in other approaches. One such difficulty, which may occur when following the stepwise refinement approach, is the design of the invariant assertions, which may be quite intricate, especially when developing very efficient programs.
The experience gained during the past two decades or so shows that the methodology of program transformation is very valuable and attractive, in particular for the task of programming ‘in the small’, that is, for writing single modules of large software systems.
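The two-phase development described above can be illustrated with a tiny sketch, here in Python rather than a logic language. The initial program is obviously correct but traverses the list twice; tupling, a classic Burstall-Darlington strategy, derives an equivalent single-pass version by unfolding the two recursive definitions and folding the pair of results back into one function. The example itself is a standard textbook one, not drawn from this chapter.

```python
def sum_list(xs):
    return 0 if not xs else xs[0] + sum_list(xs[1:])

def length(xs):
    return 0 if not xs else 1 + length(xs[1:])

def average_initial(xs):
    # Phase 1: clearly correct, but two traversals of xs.
    return sum_list(xs) / length(xs)

def sum_and_length(xs):
    # Phase 2: the tupled function computes both results in one traversal.
    # It is derived by unfolding sum_list and length on the same argument
    # and folding the pair of recursive calls back into a single call.
    if not xs:
        return (0, 0)
    s, n = sum_and_length(xs[1:])
    return (xs[0] + s, n + 1)

def average_transformed(xs):
    s, n = sum_and_length(xs)
    return s / n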


Entropy ◽  
2022 ◽  
Vol 24 (1) ◽  
pp. 110
Author(s):  
Onur Günlü

The problem of reliable function computation is extended by imposing privacy, secrecy, and storage constraints on a remote source whose noisy measurements are observed by multiple parties. The main additions to the classic function computation problem include (1) privacy leakage to an eavesdropper is measured with respect to the remote source rather than the transmitting terminals’ observed sequences; (2) the information leakage to a fusion center with respect to the remote source is considered a new privacy leakage metric; (3) the function computed is allowed to be a distorted version of the target function, which allows the storage rate to be reduced compared to a reliable function computation scenario, in addition to reducing secrecy and privacy leakages; (4) two transmitting node observations are used to compute a function. Inner and outer bounds on the rate regions are derived for lossless and lossy single-function computation with two transmitting nodes, which recover previous results in the literature. For special cases, including invertible and partially invertible functions, and degraded measurement channels, exact lossless and lossy rate regions are characterized, and one exact region is evaluated as an example scenario.
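The distinction made in point (1), measuring leakage against the remote source rather than the observed sequence, can be illustrated numerically with a toy model. The binary source, the measurement channel, and the public message below are all illustrative assumptions, not the paper's setting: X is a uniform binary remote source, a node observes X through a binary symmetric channel with crossover 0.1 and publishes its observation verbatim.

```python
from math import log2

def mutual_information(joint):
    """I(A;B) in bits from a joint pmf given as a dict {(a, b): p}."""
    pa, pb = {}, {}
    for (a, b), p in joint.items():
        pa[a] = pa.get(a, 0) + p
        pb[b] = pb.get(b, 0) + p
    return sum(p * log2(p / (pa[a] * pb[b]))
               for (a, b), p in joint.items() if p > 0)

# X uniform binary; Y = X through a BSC(0.1); the message W equals Y.
p_flip = 0.1
joint_xy = {(x, y): 0.5 * (p_flip if x != y else 1 - p_flip)
            for x in (0, 1) for y in (0, 1)}

# Leakage about the remote source: I(X; W) = I(X; Y) = 1 - h(0.1) ~ 0.53 bit.
leak_about_source = mutual_information(joint_xy)
# Leakage about the observed sequence: I(Y; W) = H(Y) = 1 bit.
leak_about_observation = 1.0
```

The gap between the two quantities (about half a bit here) is exactly why choosing the remote source as the reference changes the privacy leakage metric.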


Author(s):  
E. Rau ◽  
N. Karelin ◽  
V. Dukov ◽  
M. Kolomeytsev ◽  
S. Gavrikov ◽  
...  

There are different methods and devices for increasing the video-signal information in the SEM. For example, with the help of special purely electronic [1] and opto-electronic [2] systems, equipotential areas on the specimen surface in the SEM were obtained. This report generalizes a quantitative universal method for representing the spatial distribution of a specimen parameter under study by contour equal-signal lines. The method is based on the principle of comparing the information signal value with fixed levels. The image transformation system for obtaining equal-signal line maps was developed in two versions: 1) In the purely electronic system [3], to obtain quantitative equipotential information on the solid-state surface it is necessary to compare the signal U (see Fig. 1-a), which gives the potential distribution on the specimen surface along each scanning line, with fixed base-level signals εᵢ. The amplitude analyzer-comparator gives flare-spot video pulses at any fixed coordinate and any instant of time when the initial signal U is equal to one of the base-level signals εᵢ.
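The comparator principle described above can be sketched in software: a 2-D array of signal samples stands in for the scanned signal U, and a mark is produced wherever the signal crosses one of the fixed base levels εᵢ between adjacent samples on a scanning line, yielding a map of equal-signal (equipotential) contour lines. The array encoding and the crossing test are illustrative assumptions, not the hardware scheme of the report.

```python
def equal_signal_lines(u, levels):
    """Mark pixels where the signal crosses a base level along a scan line.

    u: list of rows (scanning lines) of signal samples.
    levels: fixed base-level values eps_i to compare against.
    Returns a set of (row, col) marks, the discrete equal-signal lines.
    """
    marks = set()
    for i, row in enumerate(u):
        for j in range(len(row) - 1):
            lo, hi = sorted((row[j], row[j + 1]))
            # The comparator fires when some base level lies between
            # neighbouring samples, i.e. the scan crossed that level.
            if any(lo <= eps <= hi for eps in levels):
                marks.add((i, j))
    return marks
```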


In this article, the author considers the problems of comprehensive algorithmization and systematization of approaches to optimizing the work plans (calendar plans) of construction organizations using various modern tools, including, for example, evolutionary algorithms for the "conscious" enumeration of candidate solutions of a target function over an array of possible constraints for a given nomenclature. Various typical schemes are given for modeling the distribution of labor resources between the objects of the production program, taking into account the array of source data. These data include the possibility of using the material and technical supply base (delivery, storage, packaging) as a temporary container for placing labor resources in the case of released capacity; the quantitative and qualification composition of the initial labor resource; the properties of the construction organization as a counterparty in the contract system with the customer of construction and installation works; etc. A conceptual algorithm is formed that is the basis of a software package for the operational harmonization of the production program (work plans) in accordance with the loading of production units, the released capacities of labor resources and other conditions stipulated by the model. The proposed algorithm is most convenient for a set of objects, which determines the relevance of its implementation in optimization models when planning the production programs of construction organizations that contain several objects distributed over a time scale.
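A minimal sketch of the evolutionary "conscious enumeration" mentioned above: a candidate plan assigns each of n crews to one of m objects, and a fitness function scores a plan against the target function and constraints. The encoding, the fitness function, and all parameters below are illustrative assumptions, not the article's model.

```python
import random

def evolve(fitness, n_crews, n_objects, pop_size=30, generations=50, seed=0):
    """Simple genetic algorithm over crew-to-object assignment plans."""
    rng = random.Random(seed)
    # Initial population: random assignments of crews to objects.
    pop = [[rng.randrange(n_objects) for _ in range(n_crews)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]            # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_crews)      # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.2:               # mutation
                child[rng.randrange(n_crews)] = rng.randrange(n_objects)
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

# Example target: spread 6 crews as evenly as possible over 3 objects.
def balance(plan):
    counts = [plan.count(o) for o in range(3)]
    return -sum((c - 2) ** 2 for c in counts)
```

In a realistic setting the fitness function would encode the source-data constraints listed above (supply-base capacity, crew qualifications, contract terms) as penalty terms.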


2016 ◽  
Vol 22 (98) ◽  
pp. 378-383
Author(s):  
Andrei O. Levchenko ◽  
Yurii A. Maksymenko ◽  
Yurii I. Ryndin ◽  
Igor A. Shumkov ◽  
...  
