ClientNet Cluster an Alternative of Transferring Big Data Files by Use of Mobile Code

Author(s):  
Waseem Akhtar Mufti
Keyword(s):  
Big Data ◽  
Author(s):  
Mohamed Elsotouhy ◽  
Geetika Jain ◽  
Archana Shrivastava

The concept of big data (BD) has been coupled with disaster management to improve crisis response during pandemics and epidemics. BD has transformed every aspect of handling unorganized sets of data files and converting them into more structured information. The constant inflow of unstructured data exposes a research lacuna, especially during a pandemic. This study is an effort to develop a BD-based pandemic disaster management approach. The potential of BD text analytics for effective pandemic disaster management, via visualization, explanation, and data analysis, is immense. To capture how BD can be used for disaster management, we take a comprehensive rather than a fragmented view, applying a BD text analytics approach to examine the various relationships within disaster management theory. The study's findings indicate that it is essential to understand how past pandemic disasters were managed and to improve future crisis response using BD, since communities worldwide face great chaos and have little help in reaching a potential solution.


2020 ◽  
Vol 2020 ◽  
pp. 1-9 ◽  
Author(s):  
Hani Alquhayz ◽  
Mahdi Jemmali ◽  
Mohammad Mahmood Otoom

This paper concerns the fair distribution of several files of different sizes across several storage supports. Given several storage supports and different files, we seek a method that produces an appropriate backup, one that guarantees a fair distribution of the big data (files). Fairness relates to how the used space is distributed across the storage supports. The problem is to find a fair method that stores all files, each characterized by its size, on the available storage supports. We propose several fairness methods that seek to minimize the gap between the used spaces of all storage supports. Several algorithms are developed to solve the proposed problem, and an experimental study shows their performance.
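The gap-minimization objective described above can be sketched with a simple greedy heuristic: place each file, largest first, on the least-used support. This is only an illustrative baseline under that assumption, not one of the paper's proposed algorithms; the function names and file sizes are invented.

```python
def distribute(file_sizes, num_supports):
    """Greedily assign files (largest first) to the least-used support.

    Returns the used space per support and the list of sizes placed on each.
    """
    used = [0] * num_supports
    assignment = [[] for _ in range(num_supports)]
    for size in sorted(file_sizes, reverse=True):
        target = used.index(min(used))  # least-loaded support so far
        used[target] += size
        assignment[target].append(size)
    return used, assignment

def gap(used):
    """Fairness objective: gap between most- and least-used supports."""
    return max(used) - min(used)

used, layout = distribute([90, 70, 50, 40, 30, 20], 3)
print(used, gap(used))  # → [110, 100, 90] 20
```

A greedy placement like this is fast but not optimal in general; the exact gap-minimization problem is a balanced-partitioning variant and is NP-hard, which is why heuristic algorithms are compared experimentally.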


2019 ◽  
Vol 16 (9) ◽  
pp. 3824-3829
Author(s):  
Deepak Ahlawat ◽  
Deepali Gupta

Due to advancements in the technological world, there is a great surge in data, generated mainly by social websites, internet sites, etc. Large data files are combined to create a big data architecture, and managing data files at such volume is not easy, so modern techniques have been developed to handle bulk data. To arrange and utilize such big data, the Hadoop Distributed File System (HDFS) architecture was introduced with the Hadoop framework in the mid-2000s; it is used when traditional methods are insufficient to manage the data. In this paper, a novel clustering algorithm is implemented to manage large amounts of data. The concepts and frameworks of Big Data are studied, and a novel algorithm is developed using K-means with cosine-based similarity clustering. The developed clustering algorithm is evaluated using precision and recall, and the results obtained successfully address the big data management issue.
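The combination of K-means with cosine-based similarity can be sketched as follows, assuming unit-normalized vectors so that cosine similarity reduces to a dot product. This is an illustrative from-scratch version, not the authors' implementation; the data and parameter choices are invented.

```python
import math
import random

def normalize(v):
    """Scale a vector to unit length (zero vectors are left unscaled)."""
    n = math.sqrt(sum(x * x for x in v)) or 1.0
    return [x / n for x in v]

def cosine_sim(a, b):
    """Cosine similarity; equals the dot product for unit vectors."""
    return sum(x * y for x, y in zip(a, b))

def kmeans_cosine(points, k, iters=20, seed=0):
    """K-means that assigns points to the center with highest cosine similarity."""
    random.seed(seed)
    pts = [normalize(p) for p in points]
    centers = random.sample(pts, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in pts:
            best = max(range(k), key=lambda i: cosine_sim(p, centers[i]))
            clusters[best].append(p)
        for i, members in enumerate(clusters):
            if members:  # re-normalize the mean so centers stay unit-length
                mean = [sum(dim) / len(members) for dim in zip(*members)]
                centers[i] = normalize(mean)
    return [max(range(k), key=lambda i: cosine_sim(p, centers[i])) for p in pts]

labels = kmeans_cosine([[1, 0], [0.9, 0.1], [0, 1], [0.1, 0.9]], k=2)
print(labels)
```

Because cosine similarity ignores vector magnitude, this variant clusters by direction rather than by Euclidean distance, which suits sparse text-derived feature vectors.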


2019 ◽  
Vol 16 (9) ◽  
pp. 3849-3853
Author(s):  
Dar Masroof Amin ◽  
Atul Garg

The globalisation of the Internet is creating an enormous amount of data on servers; the data created during the last two years alone is equivalent to all the data created before it. This exponential growth is due to easy access to devices based on the Internet of Things. The information has become a source of predictive analysis of future events: the versatile use of computing devices creates data of diverse nature, and analysts predict future trends using data from their respective domains. Over time, the technology used to analyse the data has become a bottleneck, mainly because data is created much faster than it can be accessed. Various mining techniques are used to extract useful information. This research presents a detailed analysis of how data is used and perceived by various data mining algorithms: Naïve Bayes, Support Vector Machines, Linear Discriminant Analysis, Artificial Neural Networks, C4.5, C5.0, and K-Nearest Neighbour. The input to these algorithms is big data files, and the research focuses on how the existing algorithms interact with such files, using Twitter comments as the dataset.
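As one example of the listed algorithms applied to short texts such as tweets, a minimal multinomial Naïve Bayes classifier with Laplace smoothing can be sketched as follows. The documents and labels are invented for illustration; this is not the study's actual pipeline or data.

```python
import math
from collections import Counter, defaultdict

def train_nb(docs):
    """docs: list of (text, label). Returns class counts, word counts, vocabulary."""
    class_docs = defaultdict(int)
    word_counts = defaultdict(Counter)
    vocab = set()
    for text, label in docs:
        class_docs[label] += 1
        for w in text.lower().split():
            word_counts[label][w] += 1
            vocab.add(w)
    return class_docs, word_counts, vocab

def classify(text, class_docs, word_counts, vocab):
    """Pick the class maximizing log prior + smoothed log likelihoods."""
    total_docs = sum(class_docs.values())
    best, best_score = None, float("-inf")
    for label, n in class_docs.items():
        score = math.log(n / total_docs)
        denom = sum(word_counts[label].values()) + len(vocab)  # Laplace smoothing
        for w in text.lower().split():
            score += math.log((word_counts[label][w] + 1) / denom)
        if score > best_score:
            best, best_score = label, score
    return best

docs = [("great movie loved it", "pos"), ("awful boring film", "neg"),
        ("loved the acting", "pos"), ("boring and awful", "neg")]
model = train_nb(docs)
print(classify("loved this film", *model))  # → pos
```

Naïve Bayes is a common baseline for tweet classification precisely because its word-count model scales linearly with input size, which matters when the input is big data files.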


Author(s):  
Erico Correia Da Silva ◽  
Liria Matsumoto Sato ◽  
Edson Toshimi Midorikawa

Author(s):  
Deepak Saini ◽  
Jasmine Saini

In Cloud-based IoT systems, the major issue is handling data, because IoT delivers an abundance of data to the Cloud for computing. The cloud servers must compute over this big data, identify the relevant data, and make decisions accordingly. In the world of big data, it is a herculean task to manage the inflow, storage, and exploration of millions of data files and the volume of information coming from multiple systems. The growth of this information calls for good design principles, so that systems can leverage the different big data tools available in the market today. From the information-consumption standpoint, business users are exploring new insights from big data that can uncover potential business value. The data lake is a technology framework that helps solve this big data challenge.


2018 ◽  
Vol 1 (1) ◽  
pp. 205-210 ◽  
Author(s):  
Aliyu Musa ◽  
Matthias Dehmer ◽  
Olli Yli-Harja ◽  
Frank Emmert-Streib

We are living at a time that allows the generation of mass data in almost any field of science. For instance, in pharmacogenomics, there exist a number of big data repositories, e.g., the Library of Integrated Network-based Cellular Signatures (LINCS), that provide millions of measurements on the genomics level. However, to translate these data into meaningful information, the data need to be analyzable. The first step for such an analysis is the deliberate selection of subsets of raw data for studying dedicated research questions. Unfortunately, this is a non-trivial problem when millions of individual data files are available with an intricate connection structure induced by experimental dependencies. In this paper, we argue for the need to introduce such search capabilities for big genomics data repositories, with a specific discussion of LINCS. Specifically, we suggest the introduction of smart interfaces that exploit the connections among individual raw data files, which give rise to a network structure, through graph-based searches.
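The graph-based search the authors argue for can be illustrated with a breadth-first traversal over a file-dependency network. The graph below is hypothetical (the file names are invented, not actual LINCS identifiers); it only shows the idea of exploiting connections among raw data files.

```python
from collections import deque

# Hypothetical dependency graph: each key is a data file, each value lists
# the files derived from it through experimental dependencies.
deps = {
    "compound_A.gct": ["sig_A1.gct", "sig_A2.gct"],
    "sig_A1.gct": ["meta_A1.json"],
    "sig_A2.gct": [],
    "meta_A1.json": [],
    "compound_B.gct": ["sig_B1.gct"],
    "sig_B1.gct": [],
}

def connected_files(start):
    """Breadth-first search: every file reachable from a given raw file."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for nxt in deps.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

print(sorted(connected_files("compound_A.gct")))
# → ['compound_A.gct', 'meta_A1.json', 'sig_A1.gct', 'sig_A2.gct']
```

A smart repository interface would run such traversals server-side, letting a researcher select a connected subset of raw files in one query instead of resolving dependencies manually.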


Author(s):  
Alexey Noskov ◽  
A. Yair Grinberger ◽  
Nikolaos Papapesios ◽  
Adam Rousell ◽  
Rafael Troilo ◽  
...  

Many methods for intrinsic quality assessment of spatial data are based on the OpenStreetMap full-history dump. Typically, high-level analysis is conducted; few approaches take into account the low-level properties of data files. In this chapter, a low-level data-type analysis is introduced. It offers a novel framework for the overview of big data files and the assessment of full-history data provenance (lineage). The developed tools generate tables and charts that facilitate the comparison and analysis of datasets. The resulting data also helped develop a universal data model for optimally storing OpenStreetMap full-history data in the form of a relational database. Databases for several pilot sites were evaluated through two use cases. First, a number of intrinsic data quality indicators and related metrics were implemented. Second, a framework for the inventory of the spatial distribution of massive data uploads is discussed. Both use cases confirm the effectiveness of the proposed data-type analysis and the derived relational data model.


2018 ◽  
Vol 6 (6) ◽  
pp. 198-200
Author(s):  
M. T. Nafis ◽  
R. Biswas
Keyword(s):  
Big Data ◽  

2018 ◽  
Vol 22 (3) ◽  
pp. 819-828 ◽  
Author(s):  
Ali Shatnawi ◽  
Yathrip AlZahouri ◽  
Mohammed A. Shehab ◽  
Yaser Jararweh ◽  
Mahmoud Al-Ayyoub
Keyword(s):  
Big Data ◽  
