distributed processes
Recently Published Documents


TOTAL DOCUMENTS: 214 (five years: 25)

H-INDEX: 23 (five years: 4)

2021 ◽  
Vol 52 (4) ◽  
pp. 76-77
Author(s):  
Ezio Bartocci ◽  
Michael A. Bender

With their 1983 PODC paper, Kanellakis and Smolka pioneered the development of efficient algorithms for deciding behavioral equivalence of concurrent and distributed processes, especially bisimulation equivalence. Bisimulation is the cornerstone of the process-algebraic approach to modeling and verifying concurrent and distributed systems. They also presented complexity results showing that certain behavioral equivalences are computationally intractable. Collectively, their results founded the subdiscipline of algorithmic process theory and established bridges between the European research community, whose focus at the time was on process theory, and the US community, which had a rich tradition in algorithm design and computational complexity but to whom process theory was largely unknown.
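The partition-refinement idea at the heart of such bisimulation-checking algorithms can be sketched in a few lines: start with all states in one block, and repeatedly split any block whose states can reach different blocks under some label, until the partition stabilizes. The labeled transition system used below is a hypothetical illustration, not an example from the paper.

```python
# Naive partition refinement for strong bisimulation on a labeled
# transition system. States in the same final block are bisimilar.

def bisimulation_classes(states, transitions):
    """transitions: dict mapping (state, label) -> set of successor states."""
    labels = {lbl for (_, lbl) in transitions}
    partition = [set(states)]  # start with one block: all states equivalent
    changed = True
    while changed:
        changed = False

        def signature(s):
            # For each label, record which blocks s can reach.
            sig = []
            for lbl in sorted(labels):
                succ = transitions.get((s, lbl), set())
                blocks = frozenset(i for i, b in enumerate(partition) if b & succ)
                sig.append((lbl, blocks))
            return tuple(sig)

        new_partition = []
        for block in partition:
            # Split the block by signature: states with different
            # signatures cannot be bisimilar.
            groups = {}
            for s in block:
                groups.setdefault(signature(s), set()).add(s)
            new_partition.extend(groups.values())
        if len(new_partition) != len(partition):
            changed = True  # refinement only splits, so a size change means progress
        partition = new_partition
    return partition
```

For example, two states that each loop on label `a` end up in the same block, while a deadlocked state is separated from them after one refinement round.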


2021 ◽  
Vol 5 (OOPSLA) ◽  
pp. 1-31
Author(s):  
Guy L. Steele Jr. ◽  
Sebastiano Vigna

In 2014, Steele, Lea, and Flood presented SplitMix, an object-oriented pseudorandom number generator (PRNG) that is quite fast (nine 64-bit arithmetic/logical operations per 64 bits generated) and also splittable. A conventional PRNG object provides a generate method that returns one pseudorandom value and updates the state of the PRNG; a splittable PRNG object also has a second operation, split, that replaces the original PRNG object with two (seemingly) independent PRNG objects, by creating and returning a new such object and updating the state of the original object. Splittable PRNG objects make it easy to organize the use of pseudorandom numbers in multithreaded programs structured using fork-join parallelism. This overall strategy still appears to be sound, but the specific arithmetic calculation used for generate in the SplitMix algorithm has some detectable weaknesses, and the period of any one generator is limited to 2^64. Here we present the LXM family of PRNG algorithms. The idea is an old one: combine the outputs of two independent PRNG algorithms, then (optionally) feed the result to a mixing function. An LXM algorithm uses a linear congruential subgenerator and an F2-linear subgenerator; the examples studied in this paper use a linear congruential generator (LCG) of period 2^16, 2^32, 2^64, or 2^128 with one of the multipliers recommended by L’Ecuyer or by Steele and Vigna, and an F2-linear xor-based generator (XBG) of the xoshiro family or xoroshiro family as described by Blackman and Vigna. For mixing functions we study the MurmurHash3 finalizer function; variants by David Stafford, Doug Lea, and degski; and the null (identity) mixing function. Like SplitMix, LXM provides both a generate operation and a split operation. Also like SplitMix, LXM requires no locking or other synchronization (other than the usual memory fence after instance initialization), and is suitable for use with SIMD instruction sets because it has no branches or loops.
We analyze the period and equidistribution properties of LXM generators, and present the results of thorough testing of specific members of this family, using the TestU01 and PractRand test suites, not only on single instances of the algorithm but also for collections of instances, used in parallel, ranging in size from 2 to 2^24. Single instances of LXM that include a strong mixing function appear to have no major weaknesses, and LXM is significantly more robust than SplitMix against accidental correlation in a multithreaded setting. We believe that LXM, like SplitMix, is suitable for “everyday” scientific and machine-learning applications (but not cryptographic applications), especially when concurrent threads or distributed processes are involved.
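The LXM structure described above can be sketched as follows: a 64-bit LCG subgenerator combined with an F2-linear xoroshiro128 subgenerator, with the MurmurHash3 64-bit finalizer as the mixing function. The LCG multiplier is one of the Steele-Vigna constants and the rotation parameters are the published xoroshiro128 ones, but this sketch shows only the generate path and is not the exact generator from the paper (in particular, the real generators carry an additive parameter used by split).

```python
# Sketch of an LXM-style generator: mix(LCG output + XBG output),
# then advance both subgenerators. Pure-Python, masked to 64 bits.

MASK64 = (1 << 64) - 1

def murmur3_mix64(z):
    """MurmurHash3 64-bit finalizer (bit-mixing function)."""
    z = ((z ^ (z >> 33)) * 0xFF51AFD7ED558CCD) & MASK64
    z = ((z ^ (z >> 33)) * 0xC4CEB9FE1A85EC53) & MASK64
    return z ^ (z >> 33)

def rotl64(x, k):
    return ((x << k) | (x >> (64 - k))) & MASK64

class LXMSketch:
    def __init__(self, lcg_state=1, x0=0x9E3779B97F4A7C15, x1=0xBF58476D1CE4E5B9):
        self.a = 0xD1342543DE82EF95  # LCG multiplier (Steele-Vigna constant)
        self.c = 1                   # LCG additive constant (must be odd)
        self.s = lcg_state & MASK64  # LCG state
        self.x0, self.x1 = x0, x1    # xoroshiro128 state (must not be all zero)

    def generate(self):
        # Combine the two subgenerator outputs, then mix.
        result = murmur3_mix64((self.s + self.x0) & MASK64)
        # Advance the LCG subgenerator.
        self.s = (self.a * self.s + self.c) & MASK64
        # Advance the xoroshiro128 subgenerator (linear engine).
        t = self.x1 ^ self.x0
        self.x0 = rotl64(self.x0, 24) ^ t ^ ((t << 16) & MASK64)
        self.x1 = rotl64(t, 37)
        return result
```

Because the combined state advances deterministically, two instances constructed with the same seeds produce identical streams, which is what makes reproducible fork-join usage possible.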


2021 ◽  
Vol 7 (3) ◽  
pp. 1-43
Author(s):  
Anas Daghistani ◽  
Walid G. Aref ◽  
Arif Ghafoor ◽  
Ahmed R. Mahmood

The proliferation of GPS-enabled devices has led to the development of numerous location-based services. These services need to process massive amounts of streamed spatial data in real time. The current scale of spatial data cannot be handled using centralized systems, which has led to the development of distributed spatial streaming systems. Existing systems use static spatial partitioning to distribute the workload. In contrast, real-time streamed spatial data follows non-uniform spatial distributions that continuously change over time. Distributed spatial streaming systems need to react to changes in the distribution of spatial data and queries. This article introduces SWARM, a lightweight adaptivity protocol that continuously monitors the data and query workloads across the distributed processes of the spatial data streaming system and redistributes and rebalances the workloads as soon as performance bottlenecks are detected. SWARM is able to handle multiple query-execution and data-persistence models. A distributed streaming system can directly use SWARM to adaptively rebalance the system’s workload among its machines with minimal changes to the original code of the underlying spatial application. Extensive experimental evaluation using real and synthetic datasets illustrates that, on average, SWARM achieves a 2x improvement in throughput over a static grid partitioning that is determined based on observing a limited history of the data and query workloads. Moreover, SWARM reduces execution latency by 4x on average compared with the static technique.
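The kind of monitor-and-rebalance step such a protocol performs can be illustrated with a hypothetical sketch: track the observed load of each spatial grid cell, and when one worker's total load exceeds the average by a threshold, migrate its hottest cells to the least-loaded worker. Cell and worker names here are illustrative placeholders; the real SWARM protocol additionally handles query routing and state migration.

```python
# Greedy load rebalancing over a grid partitioning (illustrative sketch).

def rebalance(assignment, cell_load, threshold=1.25):
    """assignment: dict worker -> set of cells; cell_load: dict cell -> load.
    Mutates assignment in place and returns the list of migrations."""
    def worker_load(w):
        return sum(cell_load[c] for c in assignment[w])

    avg = sum(cell_load.values()) / len(assignment)
    moves = []
    overloaded = [w for w in assignment if worker_load(w) > threshold * avg]
    for w in overloaded:
        # Move the hottest cells off the overloaded worker until it is
        # at or below the average load.
        for cell in sorted(assignment[w], key=lambda c: -cell_load[c]):
            if worker_load(w) <= avg:
                break
            target = min(assignment, key=worker_load)
            if target == w:
                break  # nowhere better to move the cell
            assignment[w].remove(cell)
            assignment[target].add(cell)
            moves.append((cell, w, target))
    return moves
```

In a streaming setting this step would run periodically against load counters refreshed from the live workload, rather than once over a static snapshot.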


Author(s):  
Paul Robert Griffin ◽  
Alan Megargel ◽  
Venky R. Shankararaman

A typical example of a distributed process is trade finance, where data and documents are transferred between multiple companies including importers, exporters, carriers, and banks. Blockchain is seen as a potential decentralized technology that can be used to automate such processes. However, there are also competing technologies, such as managed file transfers, messaging, and web APIs, that may be equally suitable for automating similar distributed processes. In this chapter, a decision framework is proposed to assist the solution architect in choosing the technology best suited to support decentralized control of a distributed business process involving multiple companies. The framework takes as input the different areas of concern, such as data, processing, governance, and technical concerns, together with the pros and cons of each technology in addressing these areas, and provides a method to analyze and highlight the best technology for the process in question. Two example processes, trade finance and price distribution, are used to show the application of the framework.
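The scoring step of such a framework might look like the following sketch: each candidate technology is rated per area of concern, the ratings are weighted by how much the process in question cares about each area, and the highest-scoring technology is recommended. All weights and ratings below are illustrative placeholders, not values from the chapter.

```python
# Weighted-score selection over areas of concern (illustrative sketch).

def best_technology(scores, weights):
    """scores: dict tech -> dict concern -> rating;
    weights: dict concern -> importance for this process."""
    def total(tech):
        return sum(weights[c] * r for c, r in scores[tech].items())
    return max(scores, key=total)

# Hypothetical ratings for a trade-finance-like process, where
# governance across multiple companies is weighted most heavily.
weights = {"data": 3, "processing": 2, "governance": 4, "technical": 1}
scores = {
    "blockchain":    {"data": 4, "processing": 3, "governance": 5, "technical": 2},
    "file transfer": {"data": 3, "processing": 2, "governance": 2, "technical": 4},
    "web API":       {"data": 4, "processing": 4, "governance": 3, "technical": 5},
}
```

Re-weighting the concerns for a different process (say, low-latency price distribution) can flip the recommendation, which is the point of keeping the weights as an explicit input.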


2020 ◽  
Vol 20 (3) ◽  
pp. 45-63
Author(s):  
Andranik S. Akopov ◽  
Levon A. Beklaryan ◽  
Armen L. Beklaryan

This work presents a novel approach to the design of a decision-making system for the cluster-based optimization of an evacuation process using a Parallel bi-objective Real-Coded Genetic Algorithm (P-RCGA). The algorithm is based on the dynamic interaction of distributed processes with individual characteristics that exchange the best potential decisions among themselves through a global population. This approach improves the HyperVolume performance metric (HV metric), which reflects the quality of the subset of Pareto-optimal solutions. The results of P-RCGA were compared with other well-known multi-objective genetic algorithms (e.g., ε-MOEA, NSGA-II, SPEA2). Moreover, P-RCGA was coupled, through the objective functions, with the developed simulation of the behavior of human agent-rescuers in an emergency, to optimize the main parameters of the evacuation process.
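The hypervolume metric mentioned above has a simple form in the two-objective case: for a minimization problem it is the area dominated by the Pareto front, measured against a reference point, so a larger value means a better front. The front and reference point below are illustrative, not data from the paper.

```python
# 2D hypervolume for a bi-objective minimization problem (sweep by f1).

def hypervolume_2d(front, ref):
    """front: list of (f1, f2) points (minimization); ref: reference point."""
    # Keep only points that strictly dominate the reference point.
    pts = sorted(p for p in front if p[0] < ref[0] and p[1] < ref[1])
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        if f2 < prev_f2:  # skip dominated points
            # Add the rectangular slab this point contributes.
            hv += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return hv
```

A comparison like the one in the paper then reduces to computing this value for the final fronts produced by each algorithm under the same reference point.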

