distributed implementation
Recently Published Documents


TOTAL DOCUMENTS: 239 (FIVE YEARS: 34)

H-INDEX: 17 (FIVE YEARS: 2)

Author(s):  
Nicholas Assimakis ◽  
Maria Adam ◽  
Christos Massouros

In this paper, a distributed implementation of the periodic steady state Kalman filter is proposed. The distributed algorithm has a parallel structure and can be implemented on processors working in parallel without idle time; the number of processors is equal to the model period. The resulting speedup is derived, as is the Finite Impulse Response (FIR) form of the periodic steady state Kalman filter.
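The abstract includes no code, but the FIR idea can be illustrated with a small sketch. For a steady-state filter with fixed gain K, unrolling the recursion x̂[k] = A x̂[k-1] + K z[k], with A = (I - K H) F, gives a finite sum over the most recent measurements (the truncation is justified because A is stable). In the periodic case described above there would be one such set of taps per phase of the period, which is what allows one processor per phase. The snippet below is only an illustrative sketch under these assumptions, not the authors' algorithm; F, H, K and the window length N are placeholders.

```python
# Hedged illustration (not the paper's code): FIR form of a steady-state
# Kalman filter. Each estimate depends only on a finite window of past
# measurements, so different time steps (or, in the periodic case, the
# phases of the period) can be computed by independent processors.
import numpy as np

def fir_coefficients(F, H, K, N):
    """Return the FIR taps C_i = A^i K, i = 0..N-1, with A = (I - K H) F."""
    A = (np.eye(F.shape[0]) - K @ H) @ F
    taps, Ai = [], np.eye(F.shape[0])
    for _ in range(N):
        taps.append(Ai @ K)
        Ai = Ai @ A
    return taps

def fir_estimate(taps, z_window):
    """Estimate x_hat[k] from the N most recent measurements z[k], z[k-1], ..."""
    return sum(C @ z for C, z in zip(taps, z_window))
```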


2021 ◽  
Author(s):  
Christian Borgs ◽  
Jennifer T. Chayes ◽  
Devavrat Shah ◽  
Christina Lee Yu

Matrix estimation or completion has served as a canonical mathematical model for recommendation systems. More recently, it has emerged as a fundamental building block for data analysis, as a first step to denoise observations and predict missing values. Since the dawn of e-commerce, similarity-based collaborative filtering has been used as a heuristic for matrix estimation. At its core, it encodes typical human behavior: you ask your friends to recommend what you may like or dislike. Algorithmically, friends are similar “rows” or “columns” of the underlying matrix. The traditional heuristic for computing similarities between rows has costly requirements on the density of observed entries. In “Iterative Collaborative Filtering for Sparse Matrix Estimation” by Christian Borgs, Jennifer T. Chayes, Devavrat Shah, and Christina Lee Yu, the authors introduce an algorithm that computes similarities in sparse datasets by comparing expanded local neighborhoods in the associated data graph: in effect, you ask friends of your friends to recommend what you may like or dislike. This work provides bounds on the maximum entry-wise error of the estimate for low-rank and approximately low-rank matrices, which is stronger than the aggregate mean squared error bounds found in classical works. The algorithm is also interpretable, scalable, and amenable to distributed implementation.
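As a point of reference, the traditional similarity-based heuristic the abstract contrasts with can be sketched in a few lines: similar rows vote on a missing entry, with similarity computed on commonly observed columns. The paper's contribution replaces this direct row-overlap comparison with comparisons of expanded (multi-hop) neighborhoods in the data graph, which is not reproduced here. The matrix M, boolean observation mask, and parameter k below are illustrative.

```python
# Hedged sketch of classical similarity-based collaborative filtering,
# the baseline heuristic described in the abstract (not the paper's algorithm).
import numpy as np

def estimate_entry(M, mask, u, i, k=10):
    """Estimate M[u, i] from the k rows most similar to row u on shared columns."""
    sims = []
    for v in range(M.shape[0]):
        if v == u or not mask[v, i]:
            continue
        shared = mask[u] & mask[v]          # columns observed by both rows
        if shared.sum() < 2:
            continue                        # too sparse for a direct comparison
        a, b = M[u, shared], M[v, shared]
        cos = float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
        sims.append((cos, v))
    sims.sort(reverse=True)
    top = sims[:k]
    num = sum(s * M[v, i] for s, v in top)  # similarity-weighted vote
    den = sum(abs(s) for s, _ in top)
    return num / den if den > 0 else np.nan
```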


2021 ◽  
Author(s):  
Bianca Wiesmayr ◽  
Alois Zoitl ◽  
Oscar Miguel-Escrig ◽  
Julio-Ariel Romero-Perez

2021 ◽  
pp. 003-015
Author(s):  
I.Z. Achour ◽  
A.Yu. Doroshenko

Despite the strengths of the neuroevolution of augmenting topologies (NEAT) method, such as its applicability in cases where the cost function formula and the neural network topology are difficult to determine, one of its main problems is slow convergence towards optimal results, especially in complex and challenging environments. This paper proposes a novel distributed implementation of the neuroevolution of augmenting topologies method which, given sufficient computational resources, drastically speeds up the search for an optimal neural network configuration. Batch genome evaluation was implemented to optimize the performance of the proposed solution and to ensure fair and even use of computational resources. Benchmarking of the proposed distributed implementation shows that the neural network evaluation process gains a manifold increase in efficiency on the demonstrated task and computational environment.
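A minimal sketch of the batch-evaluation idea, assuming a generic NEAT-style population: genomes are split into equal batches and each worker process evaluates one batch, spreading the fitness-evaluation load evenly. This is not the paper's implementation; evaluate_genome stands in for the task-specific fitness function.

```python
# Hedged sketch of batch genome evaluation for distributed neuroevolution
# (illustrative only; the fitness function is a placeholder).
from concurrent.futures import ProcessPoolExecutor

def evaluate_genome(genome):
    """Placeholder: build the network encoded by `genome` and return its fitness."""
    raise NotImplementedError

def evaluate_batch(batch):
    return [evaluate_genome(g) for g in batch]

def evaluate_population(genomes, num_workers):
    """Split the population into `num_workers` batches and evaluate them in parallel."""
    batches = [genomes[w::num_workers] for w in range(num_workers)]
    with ProcessPoolExecutor(max_workers=num_workers) as pool:
        results = list(pool.map(evaluate_batch, batches))
    # Reassemble fitnesses in the original genome order.
    fitnesses = [None] * len(genomes)
    for w, batch_fit in enumerate(results):
        for j, fit in enumerate(batch_fit):
            fitnesses[w + j * num_workers] = fit
    return fitnesses
```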


2021 ◽  
Author(s):  
Xu Yang ◽  
Jiaqi Yan ◽  
Yilin Mo ◽  
Keyou You

2021 ◽  
Vol 51 (1) ◽  
pp. 2-9
Author(s):  
Giuseppe Di Lena ◽  
Andrea Tomassilli ◽  
Damien Saucez ◽  
Frédéric Giroire ◽  
Thierry Turletti ◽  
...  

Networks have become complex systems that combine various concepts, techniques, and technologies. As a consequence, modelling or simulating them is now extremely complicated, and researchers massively resort to prototyping techniques. Mininet is the most popular tool for evaluating SDN propositions. Mininet makes it possible to emulate SDN networks on a single computer but shows its limitations with resource-intensive experiments, as the emulating host may become overloaded. To tackle this issue, we propose Distrinet, a distributed implementation of Mininet over multiple hosts, based on LXD/LXC, Ansible, and VXLAN tunnels. Distrinet uses the same API as Mininet, meaning that it is compatible with Mininet programs. It is generic and can deploy experiments on Linux clusters (e.g., Grid'5000), as well as on the Amazon EC2 cloud platform.
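For reference, a standard Mininet experiment script looks like the one below; per the abstract, a program written against this API should also run with Distrinet. The snippet is ordinary Mininet usage with an illustrative two-switch topology, not Distrinet-specific code.

```python
# Plain Mininet script: two hosts connected through two switches,
# with a connectivity check via pingAll().
from mininet.topo import Topo
from mininet.net import Mininet
from mininet.log import setLogLevel

class TwoSwitchTopo(Topo):
    def build(self):
        h1 = self.addHost('h1')
        h2 = self.addHost('h2')
        s1 = self.addSwitch('s1')
        s2 = self.addSwitch('s2')
        self.addLink(h1, s1)
        self.addLink(s1, s2)
        self.addLink(s2, h2)

if __name__ == '__main__':
    setLogLevel('info')
    net = Mininet(topo=TwoSwitchTopo())
    net.start()
    net.pingAll()   # verify all-to-all host connectivity
    net.stop()
```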


2021 ◽  
Vol 69 ◽  
pp. 5159-5174
Author(s):  
Martin Herrmann ◽  
Charlotte Hermann ◽  
Michael Buchholz
