Methods for Predicting Memory Usage of Big Data Computing Systems: A Comparison Study

2020 ◽ Vol 1631 ◽ pp. 012153
Author(s): Xiaofan Wu, Ying Li, Zhijian Cheng, Yuan Wang
2016 ◽ Vol 11 (2) ◽ pp. 252-264
Author(s): Weidong Qiu, Bozhong Liu, Can Ge, Lingzhi Xu, Xiaoming Tang, ...

CHANCE ◽ 2013 ◽ Vol 26 (2) ◽ pp. 28-32
Author(s): Nicole Lazar

Author(s): Luiz Angelo Steffenel, Manuele Kirsch Pinheiro, Lucas Vaz Peres, Damaris Kirsch Pinheiro

The exponential dissemination of proximity computing devices (smartphones, tablets, nanocomputers, etc.) raises important questions about how to transmit, store and analyze data in networks integrating those devices. New approaches such as edge computing aim at delegating part of the work to devices at the "edge" of the network. In this article, the focus is on the use of pervasive grids to implement edge computing and address these challenges, especially the strategies that ensure data proximity and context awareness, two factors that impact the performance of big data analyses in distributed systems. This article discusses the limitations of traditional big data computing platforms and introduces the principles of, and challenges in, implementing edge computing over pervasive grids. Finally, using CloudFIT, a distributed computing platform, the authors illustrate the deployment of a real geophysical application on a pervasive network.
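To make the data-proximity idea concrete, the following Python sketch shows one way a scheduler on a pervasive grid might rank candidate nodes by how much of a task's input data they already hold, using network latency as a tie-breaker. The Node and Task classes, the scoring rule and the node names are illustrative assumptions; this is not CloudFIT's actual scheduling API.

# Minimal sketch of data-proximity-aware task placement on a pervasive grid.
# The classes, node names and scoring rule are assumptions for illustration,
# not CloudFIT's real interface.
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    cached_blocks: set = field(default_factory=set)  # data blocks already stored locally
    latency_ms: float = 10.0                         # round-trip latency to the requesting device

@dataclass
class Task:
    task_id: str
    input_blocks: set                                # data blocks the task needs to read

def place_task(task: Task, nodes: list[Node]) -> Node:
    """Prefer the node that already holds most of the task's input data,
    breaking ties with network proximity (lower latency wins)."""
    def score(node: Node) -> tuple:
        locality = len(task.input_blocks & node.cached_blocks)
        return (locality, -node.latency_ms)
    return max(nodes, key=score)

# Example: a task reading blocks b1 and b2 lands on the edge node that holds them.
edge = Node("edge-tablet", cached_blocks={"b1", "b2"}, latency_ms=5.0)
cloud = Node("cloud-vm", cached_blocks=set(), latency_ms=80.0)
print(place_task(Task("t1", {"b1", "b2"}), [edge, cloud]).name)  # -> edge-tablet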


Author(s): Ewa Niewiadomska-Szynkiewicz, Michał P. Karpowicz

Progress in the life sciences, the physical sciences and technology depends on efficient data mining and modern computing technologies. The rapid growth of data-intensive domains requires continuous development of new solutions for network infrastructure, servers and storage in order to address Big Data-related problems. The development of software frameworks, including smart calculation, communication management, and data decomposition and allocation algorithms, is clearly one of the major technological challenges we face. Reducing energy consumption is another challenge arising from the development of efficient HPC infrastructures. This paper addresses the vital problem of energy-efficient high-performance distributed and parallel computing. An overview of recent technologies for Big Data processing is presented, with attention focused on the most popular middleware and software platforms. Various energy-saving approaches are presented and discussed as well.
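As an illustration of one widely used energy-saving approach, the sketch below applies the textbook DVFS (dynamic voltage and frequency scaling) model, with dynamic power roughly proportional to f^3 and runtime roughly proportional to work/f, to pick, per job, the lowest CPU frequency that still meets a deadline. The frequency levels and job parameters are assumptions made for the example, not values taken from the paper.

# Minimal DVFS sketch: choose the lowest frequency that meets a job's deadline,
# since lower frequency means lower energy under the approximation
# energy ~ (power ~ f**3) * (time = work / f) = work * f**2.
FREQ_LEVELS_GHZ = [1.2, 1.8, 2.4, 3.0]  # assumed available P-states

def pick_frequency(work_gcycles: float, deadline_s: float) -> float:
    """Return the lowest frequency (GHz) that finishes `work_gcycles` billion
    cycles within `deadline_s` seconds, or the highest level if none can."""
    for f in sorted(FREQ_LEVELS_GHZ):
        if work_gcycles / f <= deadline_s:
            return f
    return max(FREQ_LEVELS_GHZ)

def relative_energy(work_gcycles: float, freq_ghz: float) -> float:
    """Energy in arbitrary units: work * f**2 (cubic power times linear slowdown)."""
    return work_gcycles * freq_ghz ** 2

job_work, deadline = 6.0, 4.0            # 6 Gcycles of work, 4-second deadline
f = pick_frequency(job_work, deadline)   # -> 1.8 GHz
print(f, relative_energy(job_work, f), relative_energy(job_work, max(FREQ_LEVELS_GHZ)))
# Running at 1.8 GHz uses roughly 36% of the energy of running flat out at 3.0 GHz.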


2015 ◽ Vol 22 (6) ◽ pp. 1115-1119
Author(s): Saurabh Sinha, Jun Song, Richard Weinshilboum, Victor Jongeneel, Jiawei Han

We describe here the vision, motivations, and research plans of the National Institutes of Health Center for Excellence in Big Data Computing at the University of Illinois, Urbana-Champaign. The Center is organized around the construction of the "Knowledge Engine for Genomics" (KnowEnG), an E-science framework for genomics in which biomedical scientists will have access to powerful methods of data mining, network mining, and machine learning to extract knowledge from genomics data. Scientists will come to KnowEnG with their own data sets in the form of spreadsheets and ask KnowEnG to analyze them in light of a massive knowledge base of community data sets, called the "Knowledge Network", that will be at the heart of the system. The Center is undertaking discovery projects aimed at testing the utility of KnowEnG for transforming big data into knowledge. These projects span a broad range of biological enquiry, from pharmacogenomics (in collaboration with the Mayo Clinic) to the transcriptomics of human behavior.
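The sketch below illustrates, in generic Python/pandas terms, the kind of analysis the abstract describes: a user-supplied spreadsheet of gene scores is re-ranked using a prior network of gene-gene associations. It is not the KnowEnG API; the column names, the tiny in-line data and the neighbor-averaging step are assumptions chosen only to make the workflow concrete.

# Generic sketch of a "spreadsheet + knowledge network" analysis.
# NOT the KnowEnG API; data, columns and the blending rule are illustrative.
import pandas as pd

# User spreadsheet: one row per gene with an experimental score
# (in practice this would come from the uploaded file, e.g. via pd.read_csv).
scores = pd.DataFrame({"gene": ["TP53", "BRCA1", "EGFR"],
                       "score": [2.0, 0.5, 1.0]})

# Knowledge network: weighted gene-gene associations from community data sets
# (here a toy, one-directional edge list for brevity).
edges = pd.DataFrame({"gene_a": ["TP53", "TP53", "BRCA1"],
                      "gene_b": ["BRCA1", "EGFR", "EGFR"],
                      "weight": [0.9, 0.4, 0.7]})

def network_rerank(scores: pd.DataFrame, edges: pd.DataFrame) -> pd.DataFrame:
    """Blend each gene's own score with the weighted average score of its
    network neighbors, a simple network-guided re-ranking."""
    score_map = dict(zip(scores.gene, scores.score))
    edges = edges.assign(weighted=edges.gene_b.map(score_map) * edges.weight)
    grouped = edges.dropna(subset=["weighted"]).groupby("gene_a")
    neighbor_avg = (grouped["weighted"].sum() / grouped["weight"].sum()).rename("neighbor_avg")
    out = scores.merge(neighbor_avg, left_on="gene", right_index=True, how="left")
    out["neighbor_avg"] = out["neighbor_avg"].fillna(out["score"])
    out["blended"] = 0.5 * out["score"] + 0.5 * out["neighbor_avg"]
    return out.sort_values("blended", ascending=False)

print(network_rerank(scores, edges))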

