On Task Assignment in Data Intensive Scalable Computing

Author(s):  
Giovanni Agosta ◽  
Gerardo Pelosi ◽  
Ettore Speziale

2020 ◽  
Author(s):  
Mario A. R. Dantas

This work presents an introduction to the Data Intensive Scalable Computing (DISC) approach. This paradigm represents a valuable effort to tackle the large amounts of data produced by many ordinary applications. Subjects such as the characterization of big data and storage approaches are covered, along with a brief comparison that highlights the differences between HPC and DISC.


2014 ◽  
Vol 926-930 ◽  
pp. 2807-2810
Author(s):  
Li Jun Liu

The concepts of the grid and grid computing arose from the need to share computing resources spread across different locations and to make convenient use of idle CPU and storage capacity. Data-intensive scientific and engineering applications (such as numerical simulation of seismic data, computational physics, computational mechanics, and weather forecasting) need to transfer huge amounts of data quickly and securely across wide-area distributed computing environments. How to transfer massive files efficiently, reliably, and securely in a grid environment is therefore a key research issue in grid computing. This paper designs and implements a dynamic task assignment algorithm and reports performance experiments with the system.
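The general idea of dynamic task assignment can be sketched as a greedy load-balancing loop that sends each incoming transfer task to the node with the lowest relative load. The function name, task sizes, and node capacities below are illustrative assumptions; this is a minimal sketch, not the algorithm designed in the paper:

```python
import heapq

def assign_tasks(task_sizes, node_capacities):
    """Greedily assign each task to the node with the lowest
    load-to-capacity ratio (illustrative sketch only)."""
    # Min-heap of (relative_load, node_index, absolute_load).
    heap = [(0.0, i, 0.0) for i in range(len(node_capacities))]
    heapq.heapify(heap)
    assignment = {}
    for task_id, size in enumerate(task_sizes):
        rel_load, node, load = heapq.heappop(heap)
        assignment[task_id] = node          # send task to least-loaded node
        load += size                        # account for the new work
        heapq.heappush(heap, (load / node_capacities[node], node, load))
    return assignment

# Three transfer tasks over two equal-capacity nodes:
# tasks 0 and 2 land on node 0, task 1 on node 1.
print(assign_tasks([10, 20, 30], [100, 100]))
```

Weighting by capacity rather than absolute load lets heterogeneous grid nodes receive proportionally more work, which is the usual motivation for dynamic (rather than static round-robin) assignment.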


2018 ◽  
Vol 19 (3) ◽  
pp. iii-iv
Author(s):  
Sasko Ristov

We are happy to present this special issue of the scientific journal Scalable Computing: Practice and Experience. For this special issue on Infrastructures and Algorithms for Scalable Computing (Volume 19, No. 3, June 2018), we selected four out of the nine submitted papers, each of which went through peer review according to the journal policy. All papers present novel results in the fields of distributed algorithms and infrastructures for scalable computing.

The first paper presents a novel approach for efficient data placement, which improves the performance of workflow execution in distributed datacenters. Its greedy heuristic algorithm, based on a network flow optimization framework, minimizes the total storage cost, including the effort to move and store the data from different source locations and dependencies.

The second paper evaluates the significance of different clustering techniques, namely k-means, Hierarchical Agglomerative Clustering, and Markov Clustering, in grouping-aware data placement for data-intensive applications with interest locality. The evaluation on Azure showed that a Markov Clustering-based data placement strategy improves local map execution and reduces execution time compared to Hadoop's Default Data Placement Strategy and the other evaluated clustering techniques. The effect is more pronounced for data-intensive applications that exhibit interest locality.

The third paper presents an experimental evaluation of OpenMP thread-mapping strategies in different hardware environments (the Intel Xeon Phi coprocessor and hybrid CPU-MIC platforms). The paper identifies the choice of thread affinity, number of threads, and execution mode that provides optimal performance of LU factorization.

In the fourth paper, the authors study the amount of memory occupied by sparse matrices split into same-size blocks. The paper considers and statistically evaluates four popular storage formats and combinations of them. The conclusion is that block-based storage formats may significantly reduce the memory footprints of sparse matrices arising from a wide range of application domains.

We use this opportunity to thank all contributors to this Special Issue: the authors who submitted the results of their latest research, and the reviewers for their valuable comments and suggestions for improvement. We would like to express our special gratitude to the Editor-in-Chief, Professor Dana Petcu, for her constant support throughout the preparation of this Special Issue.
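The memory trade-off behind block-based storage can be illustrated with a rough byte-count comparison between the classic CSR format and a fixed-size-block (BCSR-like) format. The helper functions, byte sizes, and block dimensions below are illustrative assumptions, not the four formats actually evaluated in the paper:

```python
def csr_bytes(nnz, n_rows, idx_bytes=4, val_bytes=8):
    """Approximate CSR footprint: values + column indices + row pointers."""
    return nnz * (val_bytes + idx_bytes) + (n_rows + 1) * idx_bytes

def bcsr_bytes(coords, n_rows, r=2, c=2, idx_bytes=4, val_bytes=8):
    """Approximate footprint with r x c blocks: every block touched by a
    nonzero stores all r*c values densely, plus one block-column index per
    block and a block-row pointer array."""
    blocks = {(i // r, j // c) for i, j in coords}
    n_block_rows = -(-n_rows // r)          # ceil(n_rows / r)
    return (len(blocks) * (r * c * val_bytes + idx_bytes)
            + (n_block_rows + 1) * idx_bytes)

# A dense 2x2 cluster of nonzeros fits in one block, so the blocked
# format pays one index for four values instead of four indices.
coords = [(0, 0), (0, 1), (1, 0), (1, 1)]
print(csr_bytes(len(coords), 2))    # CSR estimate in bytes
print(bcsr_bytes(coords, 2))        # blocked estimate in bytes
```

This captures why blocking helps when nonzeros cluster (index overhead is amortized across each block) but can hurt on scattered patterns, where partially filled blocks store explicit zeros.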

