QoSComm: A Data Flow Allocation Strategy among SDN-Based Data Centers for IoT Big Data Analytics

2020 ◽  
Vol 10 (21) ◽  
pp. 7586
Author(s):  
Jose E. Lozano-Rizk ◽  
Juan I. Nieto-Hipolito ◽  
Raul Rivera-Rodriguez ◽  
Maria A. Cosio-Leon ◽  
Mabel Vazquez-Briseño ◽  
...  

When Internet of Things (IoT) big data analytics (BDA) needs to transfer data streams among software-defined network (SDN)-based distributed data centers, data flow forwarding in the communication network is typically handled by an SDN controller using a traditional shortest-path algorithm, or at best by considering only the applications' bandwidth requirements. This scheme can degrade BDA performance and lengthen job completion time because additional metrics along the data transfer path, such as end-to-end delay, jitter, and packet loss rate, are not considered. These metrics are quality of service (QoS) parameters in the communication network. This research proposes a solution called QoSComm, an SDN strategy to allocate QoS-based data flows for BDA running across distributed data centers to minimize their job completion time. QoSComm operates in two phases: (i) based on the current communication network conditions, it calculates the feasible paths for each data center using a multi-objective optimization method; (ii) it distributes the resulting paths among the data centers by dynamically configuring their OpenFlow switches (OFS). Simulation results show that QoSComm can improve BDA job completion time by an average of 18%.
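The multi-objective path selection in phase (i) can be illustrated with a small sketch. The weighting scheme, metric scales, and data structures below are assumptions for illustration only, not the exact optimization method used by QoSComm.

```python
# Hypothetical sketch of QoSComm phase (i): ranking candidate inter-data-center
# paths by a weighted multi-objective score over QoS metrics. The weights and
# scoring function are illustrative assumptions, not the paper's exact method.
from dataclasses import dataclass

@dataclass
class Path:
    hops: list            # OFS identifiers along the path
    bandwidth_mbps: float
    delay_ms: float
    jitter_ms: float
    loss_rate: float      # fraction of packets lost, 0.0-1.0

def qos_score(p: Path, w_bw=0.4, w_delay=0.3, w_jitter=0.15, w_loss=0.15) -> float:
    """Higher is better: reward bandwidth, penalise delay, jitter, and loss."""
    return (w_bw * p.bandwidth_mbps
            - w_delay * p.delay_ms
            - w_jitter * p.jitter_ms
            - w_loss * p.loss_rate * 1000)

def feasible_paths(candidates, min_bw, max_delay):
    """Keep only paths meeting the flow's QoS constraints, best score first."""
    ok = [p for p in candidates if p.bandwidth_mbps >= min_bw and p.delay_ms <= max_delay]
    return sorted(ok, key=qos_score, reverse=True)

if __name__ == "__main__":
    paths = [Path(["s1", "s3"], 900, 12, 1.5, 0.001),
             Path(["s1", "s2", "s3"], 600, 8, 0.9, 0.0005)]
    for p in feasible_paths(paths, min_bw=500, max_delay=20):
        print(p.hops, round(qos_score(p), 2))
```

In phase (ii), the top-ranked path per data-center pair would then be pushed to the OFS as flow rules by the SDN controller.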

2017 ◽  
Vol 2017 ◽  
pp. 1-16 ◽  
Author(s):  
Dillon Chrimes ◽  
Hamid Zamani

Big data analytics (BDA) is important for reducing healthcare costs. However, it faces many challenges in data aggregation, maintenance, integration, translation, analysis, and security/privacy. The study objective, to establish an interactive BDA platform with simulated patient data using open-source software technologies, was achieved by constructing a platform framework on the Hadoop Distributed File System (HDFS) with HBase (a key-value NoSQL database). Distributed data structures were generated from benchmarked hospital-specific metadata of nine billion patient records. At optimized iteration, HDFS ingestion of HFiles into HBase store files showed sustained availability over hundreds of iterations; however, completing MapReduce ingestion into HBase took a week for 10 TB and a month for three billion (30 TB) indexed patient records. Inconsistencies found in MapReduce limited the capacity to generate and replicate data efficiently. Apache Spark and Drill showed high performance and high usability for technical support but poor usability for clinical services. Modelling the hospital system's patient-centric data in HBase was challenging, as not all data profiles were fully integrated with the complex patient-to-hospital relationships. Nevertheless, we recommend HBase for securing patient data while querying entire hospital volumes through a simplified clinical-event model across clinical services.
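A minimal sketch of the patient-centric access pattern such a platform relies on: a composite HBase row key (patient identifier plus event timestamp) so that a single prefix scan retrieves one patient's clinical events. The table name, column family, and host below are hypothetical, and the example assumes a running HBase Thrift server reachable through the happybase client.

```python
# Sketch of a patient-centric HBase access pattern (assumed schema, not the
# study's exact table design). Requires an HBase Thrift server.
import happybase

conn = happybase.Connection("hbase-thrift-host")   # hypothetical host name
events = conn.table("patient_events")              # hypothetical table

# Write one simulated clinical event (all HBase keys/values are bytes).
row_key = b"patient-000123|2017-06-01T10:30:00"
events.put(row_key, {
    b"clinical:event_type": b"lab_result",
    b"clinical:code": b"HbA1c",
    b"clinical:value": b"6.8",
})

# Prefix scan: fetch every event for a single patient across the whole volume.
for key, data in events.scan(row_prefix=b"patient-000123|"):
    print(key, data[b"clinical:event_type"])
```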


2018 ◽  
Vol 7 (2.24) ◽  
pp. 92
Author(s):  
B V Ram Naresh Yadav ◽  
P Anjaiah

Big data analytics and cloud computing are two of the most important technologies in the current IT industry, and together they deliver effective outcomes for business organizations. However, big data analytics requires a huge amount of storage and computation resources. Storage cost grows massively with the volume of input data, and innovative algorithms are needed to reduce the cost of storing that data in specific data centers in the cloud. Cloud computing has emerged as a popular paradigm for hosting customer and enterprise data as well as many distributed applications. Cloud service providers (CSPs) store huge amounts of data and host numerous distributed applications at different costs; for example, Amazon offers storage services priced per terabyte per month, and each CSP has different service level agreements (SLAs) and storage offers. Customers want reliable SLAs, which increases cost because more replicas are required. CSPs attract users with cheap initial storage (put) operations, while get operations from the cloud become a hurdle and subsequently increase the cost. CSPs provide these services by maintaining multiple data centers at multiple locations throughout the world, each with distinct get/put latencies and unit costs for resource reservation and utilization. Choosing among the data centers of different CSPs therefore becomes tricky for cloud users running globally distributed applications such as online social networks. This raises two main challenges: first, allocating the data to different data centers so as to satisfy the service level objectives (SLOs), including latency; and second, reserving remote resources (i.e., memory) at minimal cost. In this paper, we derive a new model that minimizes cost while satisfying the SLOs using integer programming. Additionally, we propose an algorithm that stores the data in the data center with the lowest cost among the candidate data centers and computes the cost of put/get latencies. Our simulations show that the cost of resource reservation and utilization across different data centers is minimized.
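The integer-programming idea can be sketched as a small placement model: binary variables assign each data item to one data center, subject to a latency SLO, while minimizing storage cost. All costs, latencies, and the single-replica simplification are illustrative assumptions; the paper's actual model (including put/get costs and replication) is richer.

```python
# Illustrative data-placement ILP with PuLP; numbers and constraints are made up.
import pulp

items = ["d1", "d2", "d3"]                        # data items to place
dcs = ["us-east", "eu-west", "ap-south"]          # candidate data centers
storage_cost = {"us-east": 0.023, "eu-west": 0.025, "ap-south": 0.021}  # $/GB-month
get_latency = {"us-east": 40, "eu-west": 90, "ap-south": 140}           # ms to users
size_gb = {"d1": 500, "d2": 1200, "d3": 300}
latency_slo_ms = 100

prob = pulp.LpProblem("data_placement", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", [(i, j) for i in items for j in dcs], cat="Binary")

# Objective: total storage cost of the chosen placements.
prob += pulp.lpSum(size_gb[i] * storage_cost[j] * x[(i, j)] for i in items for j in dcs)

for i in items:
    # Each item is stored in exactly one data center (single replica for brevity).
    prob += pulp.lpSum(x[(i, j)] for j in dcs) == 1
    # SLO: an item may only be placed where the get latency meets the target.
    for j in dcs:
        if get_latency[j] > latency_slo_ms:
            prob += x[(i, j)] == 0

prob.solve(pulp.PULP_CBC_CMD(msg=0))
for i in items:
    for j in dcs:
        if pulp.value(x[(i, j)]) == 1:
            print(f"{i} -> {j}")
```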


Author(s):  
Dawn E. Holmes

‘Big data analytics’ argues that big data is only useful if we can extract useful information from it. It looks at some of the techniques used to discover useful information from big data, such as customer preferences or how fast an epidemic is spreading. Big data analytics is changing rapidly as the size of the datasets increases and classical statistics makes room for this new paradigm. An example of big data analytics is the algorithmic method called MapReduce, a distributed data processing system that forms part of the core functionality of the Hadoop Ecosystem. Amazon, Google, Facebook, and many others use Hadoop to store and process their data.
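A toy, in-process illustration of the MapReduce pattern described above: a map step emits key-value pairs, a shuffle groups them by key, and a reduce step aggregates them. Hadoop runs these phases distributed across a cluster; this single-machine sketch only shows the programming model.

```python
# Minimal word-count in the MapReduce style (runs locally, not on Hadoop).
from collections import defaultdict

def map_phase(document):
    # Emit (word, 1) for every word, as a mapper would.
    return [(word.lower(), 1) for word in document.split()]

def shuffle(pairs):
    # Group intermediate values by key (Hadoop does this between map and reduce).
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(grouped):
    # Sum the counts for each word.
    return {word: sum(counts) for word, counts in grouped.items()}

docs = ["big data analytics", "big data needs distributed processing"]
pairs = [pair for doc in docs for pair in map_phase(doc)]
print(reduce_phase(shuffle(pairs)))   # {'big': 2, 'data': 2, ...}
```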


2021 ◽  
Vol 8 (1) ◽  
Author(s):  
Mohammed Bergui ◽  
Said Najah ◽  
Nikola S. Nikolov

Abstract In the era of global-scale services, organisations produce huge volumes of data, often distributed across multiple data centres, separated by vast geographical distances. While cluster computing applications, such as MapReduce and Spark, have been widely deployed in data centres to support commercial applications and scientific research, they are not designed for running jobs across geo-distributed data centres. The necessity to utilise such infrastructure introduces new challenges in the data analytics process due to bandwidth limitations of the inter-data-centre communication. In this article, we discuss challenges and survey the latest geo-distributed big-data analytics frameworks and schedulers (based on MapReduce and Spark) with WAN-bandwidth awareness.
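One recurring idea in WAN-bandwidth-aware schedulers is to place reduce (or aggregation) tasks in the data centre that minimises the time to pull intermediate data over inter-data-centre links. The sketch below is a generic heuristic with invented sizes and bandwidths, not the algorithm of any specific surveyed framework.

```python
# Hypothetical WAN-bandwidth-aware reduce placement heuristic.
def transfer_time_s(data_gb_by_dc, target_dc, wan_bw_gbps):
    """Slowest inter-data-centre transfer dominates the shuffle into target_dc."""
    times = []
    for src, gb in data_gb_by_dc.items():
        if src == target_dc or gb == 0:
            continue
        times.append(gb * 8 / wan_bw_gbps[(src, target_dc)])
    return max(times, default=0.0)

def place_reduce(data_gb_by_dc, wan_bw_gbps):
    """Choose the data centre with the lowest estimated shuffle completion time."""
    return min(data_gb_by_dc, key=lambda dc: transfer_time_s(data_gb_by_dc, dc, wan_bw_gbps))

intermediate = {"us": 40, "eu": 5, "asia": 10}         # GB of map output per site
bandwidth = {("us", "eu"): 1.0, ("eu", "us"): 1.0,     # Gbps on each WAN link
             ("us", "asia"): 0.5, ("asia", "us"): 0.5,
             ("eu", "asia"): 0.4, ("asia", "eu"): 0.4}
print(place_reduce(intermediate, bandwidth))            # "us": most data is already there
```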


2018 ◽  
Vol 15 (3) ◽  
Author(s):  
Blagoj Ristevski ◽  
Ming Chen

Abstract This paper surveys big data, highlighting big data analytics in medicine and healthcare. The big data characteristics value, volume, velocity, variety, veracity, and variability are described. Big data analytics in medicine and healthcare covers the integration and analysis of large amounts of complex heterogeneous data, such as various omics data (genomics, epigenomics, transcriptomics, proteomics, metabolomics, interactomics, pharmacogenomics, diseasomics), biomedical data, and electronic health record data. We underline the challenging issues of big data privacy and security. With regard to these characteristics, some directions for using suitable and promising open-source distributed data-processing software platforms are given.
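As a minimal sketch of the kind of heterogeneous-data integration described above, the example below joins simulated electronic health records with omics results on a patient identifier using Apache Spark, one such open-source distributed data-processing platform. Column names and values are assumptions for illustration.

```python
# Joining simulated EHR and genomics records with PySpark (assumed schema).
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("ehr_omics_integration").getOrCreate()

ehr = spark.createDataFrame(
    [("p001", 54, "type2_diabetes"), ("p002", 61, "hypertension")],
    ["patient_id", "age", "diagnosis"])

genomics = spark.createDataFrame(
    [("p001", "TCF7L2", "rs7903146"), ("p002", "AGT", "rs699")],
    ["patient_id", "gene", "variant"])

# Distributed join keyed on the shared patient identifier.
ehr.join(genomics, on="patient_id", how="inner").show()
```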

