Data-Intensive Applications
Recently Published Documents


TOTAL DOCUMENTS: 426 (FIVE YEARS: 85)
H-INDEX: 23 (FIVE YEARS: 5)

2021 ◽  
Author(s):  
Xinyu Chen ◽  
Hongshi Tan ◽  
Yao Chen ◽  
Bingsheng He ◽  
Weng-Fai Wong ◽  
...  

2021 ◽  
Vol 17 (4) ◽  
pp. 1-27
Author(s):  
Xiaojia Song ◽  
Tao Xie ◽  
Stephen Fischer

Existing near-data processing (NDP)-powered architectures have demonstrated their strength for some data-intensive applications. Data center servers, however, have to serve not only data-intensive but also compute-intensive applications. An in-depth understanding of the impact of NDP on various data center applications is still needed. For example, can a compute-intensive application also benefit from NDP? In addition, current NDP techniques focus on maximizing the data processing rate by utilizing all computing resources at all times. Is this “always running in full gear” strategy consistently beneficial for an application? To answer these questions, we first propose two reconfigurable NDP-powered servers called RANS (Reconfigurable ARM-based NDP Server) and RFNS (Reconfigurable FPGA-based NDP Server). Next, we implement a single-engine prototype for each of them based on a conventional data center and then evaluate their effectiveness. Experimental results measured from the two prototypes are then extrapolated to estimate the properties of the two full-size reconfigurable NDP servers. Finally, several new findings are presented. For example, we find that while RANS can only benefit data-intensive applications, RFNS can offer benefits for both data-intensive and compute-intensive applications. Moreover, we find that for certain applications the reconfigurability of RANS/RFNS can deliver noticeable energy efficiency without any performance degradation.
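
A toy model of the reconfigurability argument sketched below: once a shared resource (here, an assumed host link) saturates, keeping every NDP engine active adds power draw but no throughput. All numbers are illustrative assumptions, not measurements from RANS or RFNS.

```python
# Toy model: why powering down spare NDP engines can save energy without
# hurting performance. Numbers are illustrative assumptions only.
ENGINE_THROUGHPUT_GBPS = 2.0   # per-engine processing rate (assumed)
LINK_BANDWIDTH_GBPS = 6.0      # host link bandwidth cap (assumed)
ENGINE_POWER_W = 3.0           # power per active engine (assumed)

def effective_throughput(active_engines: int) -> float:
    """Throughput is capped by whichever saturates first: engines or the link."""
    return min(active_engines * ENGINE_THROUGHPUT_GBPS, LINK_BANDWIDTH_GBPS)

if __name__ == "__main__":
    for n in range(1, 9):
        gbps = effective_throughput(n)
        watts = n * ENGINE_POWER_W
        print(f"{n} engines: {gbps:.1f} GB/s at {watts:.0f} W")
    # Beyond 3 engines throughput stays at 6.0 GB/s while power keeps rising,
    # so a reconfigurable server can disable the extra engines at no cost.
```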


2021 ◽  
Vol 22 (4) ◽  
pp. 401-412
Author(s):  
Hrachya Astsatryan ◽  
Arthur Lalayan ◽  
Aram Kocharyan ◽  
Daniel Hagimont

The MapReduce framework manages Big Data sets by splitting large datasets into distributed blocks and processing them in parallel. Data compression and in-memory file systems are widely used in Big Data processing to reduce resource-intensive I/O operations and thereby improve the I/O rate. The article presents a performance-efficient, modular, and configurable decision-making service that relies on data compression and in-memory data storage indicators. The service consists of Recommendation and Prediction modules: it predicts the execution time of a given job based on these indicators and recommends the best configuration parameters to improve the performance of the Hadoop and Spark frameworks. Several CPU- and data-intensive applications and micro-benchmarks, including Log Analyzer, WordCount, and K-Means, have been evaluated to demonstrate the performance improvements.
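
A minimal sketch of such a prediction/recommendation service follows; the metric names, cost model, and candidate configurations are hypothetical illustrations under assumed values, not the authors' implementation.

```python
# Sketch of a prediction + recommendation service: predict job time for each
# candidate (codec, storage tier) pair, then recommend the cheapest one.
from __future__ import annotations
from dataclasses import dataclass
from itertools import product

@dataclass
class JobMetrics:
    input_size_gb: float   # size of the input dataset
    cpu_intensity: float   # 0.0 (pure I/O) .. 1.0 (pure compute)

# Candidate knobs: codec -> (compression ratio, decode cost in s/GB), all assumed
CODECS = {"none": (1.0, 0.0), "snappy": (0.65, 0.4), "zstd": (0.45, 1.2)}
# Storage tier -> assumed read rate in MB/s
STORAGE = {"disk": 120.0, "in-memory": 900.0}

def predict_execution_time(m: JobMetrics, codec: str, storage: str) -> float:
    """Prediction module: toy cost model of I/O time plus CPU and decode time."""
    ratio, decode_s_per_gb = CODECS[codec]
    io_time = (m.input_size_gb * 1024 * ratio) / STORAGE[storage]
    cpu_time = m.input_size_gb * (60 * m.cpu_intensity + decode_s_per_gb)
    return io_time + cpu_time

def recommend(m: JobMetrics) -> tuple[str, str, float]:
    """Recommendation module: pick the codec/storage pair with the lowest prediction."""
    codec, storage = min(product(CODECS, STORAGE),
                         key=lambda cfg: predict_execution_time(m, *cfg))
    return codec, storage, predict_execution_time(m, codec, storage)

if __name__ == "__main__":
    job = JobMetrics(input_size_gb=50, cpu_intensity=0.2)  # e.g. a WordCount-like job
    codec, storage, t = recommend(job)
    print(f"recommended codec={codec}, storage={storage}, predicted time={t:.0f}s")
```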


Information ◽  
2021 ◽  
Vol 12 (11) ◽  
pp. 481
Author(s):  
Barbara Pes

With the massive growth of data-intensive applications, the machine learning field has gained widespread popularity [...]


2021 ◽  
Vol 22 (2) ◽  
Author(s):  
Shiming Ma ◽  
Jichang Chen ◽  
Yang Zhang ◽  
Anand Shrivastava ◽  
Hari Mohan

For data-intensive applications, resource planning and scheduling have become an important part of smart cities. Cloud computing techniques are used for planning and scheduling resources in data-intensive applications. Conventional methodologies are adequate for resource allocation but do not provide time efficiency during task execution. This article presents an effective, time-prioritization-based smart resource management platform employing the Cuckoo Search based Optimized Resource Allocation (CSO-RA) methodology. The open-source JStorm platform is used for dynamic resource planning with big data analytics, and the experimental outcomes are assessed using various evaluation parameters. The proposed CSO-RA system is compared with existing optimization methodologies based on particle swarm optimization (PSO), ant colony optimization (ACO), and genetic algorithms (GA), and the viability of the proposed framework is established. The CSO-RA algorithm achieves 97% optimality and an overall resource deployment rate of 28%, which is considerably better than the conventional PSO, GA, and ACO algorithms. Feasible outcomes are obtained by applying the CSO-RA methodology to cloud-based, large-scale, data-intensive industrial applications.
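
Below is a minimal cuckoo-search sketch for a toy resource-allocation problem (assigning tasks to cloud nodes to minimize makespan); the solution encoding, cost function, and parameters are illustrative assumptions, not the CSO-RA implementation from the article.

```python
# Cuckoo search over task-to-node assignments: new solutions via heavy-tailed
# moves, greedy replacement, and abandonment of the worst nests.
import random

N_TASKS, N_NODES = 20, 5
TASK_LOAD = [random.uniform(1, 10) for _ in range(N_TASKS)]  # synthetic task sizes

def makespan(assignment):
    """Fitness: completion time of the most loaded node (lower is better)."""
    node_load = [0.0] * N_NODES
    for task, node in enumerate(assignment):
        node_load[node] += TASK_LOAD[task]
    return max(node_load)

def levy_move(assignment):
    """Generate a new solution by reassigning a heavy-tailed number of tasks."""
    new = list(assignment)
    n_moves = min(N_TASKS, max(1, int(random.paretovariate(1.5))))
    for task in random.sample(range(N_TASKS), n_moves):
        new[task] = random.randrange(N_NODES)
    return new

def cuckoo_search(n_nests=15, iterations=300, pa=0.25):
    nests = [[random.randrange(N_NODES) for _ in range(N_TASKS)]
             for _ in range(n_nests)]
    for _ in range(iterations):
        # A cuckoo lays an egg: perturb a random nest, replace a random nest if better
        cuckoo = levy_move(random.choice(nests))
        j = random.randrange(n_nests)
        if makespan(cuckoo) < makespan(nests[j]):
            nests[j] = cuckoo
        # Abandon a fraction pa of the worst nests and rebuild them at random
        nests.sort(key=makespan)
        for k in range(int((1 - pa) * n_nests), n_nests):
            nests[k] = [random.randrange(N_NODES) for _ in range(N_TASKS)]
    return min(nests, key=makespan)

if __name__ == "__main__":
    best = cuckoo_search()
    print("best assignment makespan:", round(makespan(best), 2))
```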


2021 ◽  
pp. 199-222
Author(s):  
Ahmad Alnafessah ◽  
Gabriele Russo Russo ◽  
Valeria Cardellini ◽  
Giuliano Casale ◽  
Francesco Lo Presti

2021 ◽  
Author(s):  
Hoang-Dung Do ◽  
Valerie Hayot-Sasson ◽  
Rafael Ferreira Da Silva ◽  
Christopher Steele ◽  
Henri Casanova ◽  
...  

2021 ◽  
Vol 14 (11) ◽  
pp. 2586-2598
Author(s):  
Chunwei Liu ◽  
Hao Jiang ◽  
John Paparrizos ◽  
Aaron J. Elmore

Modern data-intensive applications often generate large amounts of low-precision float data with a limited range of values. Despite the prevalence of such data, there is a lack of an effective solution to ingest, store, and analyze bounded, low-precision, numeric data. To address this gap, we propose Buff, a new compression technique that uses decomposed columnar storage and encoding methods to provide effective compression, fast ingestion, and high-speed in-situ adaptive query operators with SIMD support.
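
A sketch of the core idea behind compressing bounded, low-precision floats follows: quantize values from a declared range into fixed-point integers and store each byte as its own column plane, so a scan can touch only the bytes it needs. The layout, function names, and parameters are assumptions for illustration, not Buff's actual format.

```python
# Byte-decomposed fixed-point encoding of floats bounded by [lo, hi].
import math
import numpy as np

def encode(values: np.ndarray, lo: float, hi: float, precision: float = 0.01):
    """Return per-byte column planes for floats bounded by [lo, hi]."""
    levels = (hi - lo) / precision
    n_bytes = max(1, math.ceil(math.log2(levels + 1) / 8))  # bytes needed for the range
    scaled = np.round((values - lo) / precision).astype(np.uint64)
    # Byte-decompose, most-significant plane first
    planes = [((scaled >> (8 * (n_bytes - 1 - b))) & 0xFF).astype(np.uint8)
              for b in range(n_bytes)]
    return planes, n_bytes

def decode(planes, n_bytes: int, lo: float, precision: float = 0.01) -> np.ndarray:
    """Reassemble the byte planes into the quantized float values."""
    scaled = np.zeros(planes[0].shape, dtype=np.uint64)
    for b, plane in enumerate(planes):
        scaled |= plane.astype(np.uint64) << (8 * (n_bytes - 1 - b))
    return scaled.astype(np.float64) * precision + lo

if __name__ == "__main__":
    data = np.array([12.34, 56.78, 99.99, 0.01])
    planes, n = encode(data, lo=0.0, hi=100.0)
    print("bytes per value:", n)                    # 2 instead of 8 for float64
    print("round-trip:", decode(planes, n, lo=0.0))
```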

