On Sparsity Awareness in Distributed Computations

Author(s):
Keren Censor-Hillel ◽
Dean Leitersdorf ◽
Volodymyr Polosukhin


Author(s):
Eddy Fromentin ◽
Michel Raynal ◽
Vijay Garg ◽
Alex Tomlinson

2018 ◽  
Vol 25 (6) ◽  
pp. 589-606
Author(s):  
Marat M. Abbas ◽  
Vladimir A. Zakharov

Mathematical models of distributed computations based on the calculus of mobile processes (the π-calculus) are widely used for checking the information security properties of cryptographic protocols. Since the π-calculus is Turing-complete, this problem is undecidable in the general case. The study is therefore carried out only for special classes of π-calculus processes with restricted computational capabilities, for example, non-recursive processes (in which every run has bounded length), processes with a bounded number of parallel components, and so on. Even in these cases, however, the proposed checking procedures are time-consuming, and we believe this is due to the very nature of π-calculus processes. The goal of this paper is to show that even for the weakest model of a passive adversary, and for relatively simple protocols that use only the basic π-calculus operations, the problem of checking the information security properties of these protocols is co-NP-complete.
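As a concrete illustration of the process model the abstract refers to (the basic π-calculus operations, not the paper's checking procedure), here is a minimal Python sketch of process terms and the single communication reduction x⟨a⟩.P | x(y).Q → P | Q[a/y]. All class and function names here are our own, for illustration only.

```python
# Minimal pi-calculus terms and one communication step (illustrative only).
from dataclasses import dataclass

@dataclass(frozen=True)
class Nil:                      # the inert process 0
    pass

@dataclass(frozen=True)
class Send:                     # x<a>.P : send name a on channel x, then run P
    chan: str
    msg: str
    cont: "Proc"

@dataclass(frozen=True)
class Recv:                     # x(y).P : receive a name as y on channel x, then run P
    chan: str
    var: str
    cont: "Proc"

@dataclass(frozen=True)
class Par:                      # P | Q : parallel composition
    left: "Proc"
    right: "Proc"

Proc = Nil | Send | Recv | Par  # Python 3.10+ type alias

def subst(p: "Proc", var: str, name: str) -> "Proc":
    """Substitution p[name/var]; handles shadowing but not capture
    (sufficient for this closed example)."""
    if isinstance(p, Nil):
        return p
    if isinstance(p, Send):
        return Send(name if p.chan == var else p.chan,
                    name if p.msg == var else p.msg,
                    subst(p.cont, var, name))
    if isinstance(p, Recv):
        # the input binder shadows var, so stop substituting underneath it
        cont = p.cont if p.var == var else subst(p.cont, var, name)
        return Recv(name if p.chan == var else p.chan, p.var, cont)
    return Par(subst(p.left, var, name), subst(p.right, var, name))

def step(p: "Proc") -> "Proc | None":
    """One top-level reduction: x<a>.P | x(y).Q  ->  P | Q[a/y]."""
    if isinstance(p, Par):
        l, r = p.left, p.right
        if isinstance(l, Send) and isinstance(r, Recv) and l.chan == r.chan:
            return Par(l.cont, subst(r.cont, r.var, l.msg))
        if isinstance(l, Recv) and isinstance(r, Send) and l.chan == r.chan:
            return Par(subst(l.cont, l.var, r.msg), r.cont)
    return None

# x<secret>.0 | x(y).y<y>.0  reduces to  0 | secret<secret>.0
proc = Par(Send("x", "secret", Nil()),
           Recv("x", "y", Send("y", "y", Nil())))
print(step(proc))
```

Note how restricting the process class tames this model: with no recursion every reduction sequence is finite, and with a bounded number of parallel components the search space of interleavings is bounded, which is what makes the checking problem decidable at all for the classes the abstract mentions.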


2005 ◽  
Vol 13 (4) ◽  
pp. 277-298 ◽  
Author(s):  
Rob Pike ◽  
Sean Dorward ◽  
Robert Griesemer ◽  
Sean Quinlan

Very large data sets often have a flat but regular structure and span multiple disks and machines. Examples include telephone call records, network logs, and web document repositories. These large data sets are not amenable to study using traditional database techniques, if only because they can be too large to fit in a single relational database. On the other hand, many of the analyses done on them can be expressed using simple, easily distributed computations: filtering, aggregation, extraction of statistics, and so on. We present a system for automating such analyses. A filtering phase, in which a query is expressed using a new procedural programming language, emits data to an aggregation phase. Both phases are distributed over hundreds or even thousands of computers. The results are then collated and saved to a file. The design – including the separation into two phases, the form of the programming language, and the properties of the aggregators – exploits the parallelism inherent in having data and computation distributed across many machines.
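To make the two-phase pattern the abstract outlines concrete, here is a small self-contained Python sketch (a stand-in, not the authors' system or its actual language) in which each "machine" is simulated as an independently processed shard of the input: a filtering phase emits (key, value) pairs, and an order-independent aggregation phase sums them.

```python
# Toy sketch of the filter/aggregate pattern described above.
# In a real deployment records are sharded across many machines;
# here each "machine" is just a slice of the input processed independently.
from collections import Counter
from typing import Iterable, Iterator

# --- Phase 1: filtering. Runs once per record, emits (key, value) pairs. ---
def filter_phase(records: Iterable[str]) -> Iterator[tuple[str, int]]:
    for line in records:
        host, status, nbytes = line.split()
        if status.startswith("5"):          # keep only server errors
            yield (host, int(nbytes))       # emit to the aggregators

# --- Phase 2: aggregation. A per-key "sum" table. ---
def aggregate_phase(pairs: Iterable[tuple[str, int]]) -> Counter:
    totals: Counter = Counter()
    for key, value in pairs:
        totals[key] += value
    return totals

log = [
    "alpha 200 512",
    "beta  503 1024",
    "alpha 500 2048",
    "beta  404 128",
]

# Simulate two machines, each filtering and aggregating its own shard.
shard_results = [aggregate_phase(filter_phase(shard))
                 for shard in (log[:2], log[2:])]

# Collate the per-shard tables into the final result.
final = sum(shard_results, Counter())
print(dict(final))                          # {'beta': 1024, 'alpha': 2048}
```

Because the aggregator here is commutative and associative, the per-shard tables can be merged in any order without coordination; that property of the aggregators is exactly what the abstract credits with letting both phases scale across hundreds or thousands of machines.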

