Blockchain-Based Continued Integrity Service for IoT Big Data Management: A Comprehensive Design

Electronics ◽  
2020 ◽  
Vol 9 (9) ◽  
pp. 1434
Author(s):  
Yustus Eko Oktian ◽  
Sang-Gon Lee ◽  
Byung-Gook Lee

The state-of-the-art centralized Internet of Things (IoT) data flow pipeline has started to age, as it cannot cope with the vast number of newly connected IoT devices. As a result, the community has begun the transition to a decentralized pipeline that encourages data and resource sharing. However, the move is not trivial. With many instances allocating data or services arbitrarily, how can we guarantee the correctness of the IoT data or processes that other parties offer? Furthermore, in case of dispute, how can the IoT data assist in determining which party is guilty of faulty behavior? Finally, the number of Service Level Agreements (SLAs) increases as sharing grows. The problem then becomes how to provide natural SLA generation and verification that can be automated, instead of going through a manual and tedious legalization process with a trusted third party. In this paper, we explore blockchain solutions to these issues and propose continued data integrity services for IoT big data management. Specifically, we design five integrity protocols across three phases of IoT operations: during the transmission of IoT data (data in transit), when the data are physically stored in the database (data at rest), and at the time of data processing (data in process). In each phase, we first lay out our motivations and survey the related blockchain solutions from the literature. We then use curated papers from our surveys as building blocks in designing the protocols. Using our proposal, the overall value of the IoT data and commands generated in the IoT system is augmented, as they are now tamper-proof, verifiable, non-repudiable, and more robust.
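The five protocols themselves are not reproduced at this level of abstraction. As a rough, hypothetical illustration of the data-at-rest idea, the sketch below keeps an append-only hash chain of IoT payload digests so that later tampering of a stored record is detectable; every name and structure here is illustrative, not the authors' actual protocol.

```python
import hashlib
import json

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

class IntegrityLedger:
    """Append-only hash chain of IoT payload digests (illustrative only)."""

    def __init__(self):
        self.entries = []

    def record(self, payload: dict) -> dict:
        # Link each entry to its predecessor so history cannot be rewritten silently.
        prev_hash = self.entries[-1]["entry_hash"] if self.entries else "0" * 64
        entry = {
            "payload_digest": sha256_hex(json.dumps(payload, sort_keys=True).encode()),
            "prev_hash": prev_hash,
        }
        entry["entry_hash"] = sha256_hex(json.dumps(entry, sort_keys=True).encode())
        self.entries.append(entry)
        return entry

    def verify(self, index: int, payload: dict) -> bool:
        # Recompute the digest of the claimed payload and compare with the chain.
        digest = sha256_hex(json.dumps(payload, sort_keys=True).encode())
        return self.entries[index]["payload_digest"] == digest

ledger = IntegrityLedger()
reading = {"device": "sensor-42", "temp_c": 21.5}
ledger.record(reading)
```

On a real blockchain the entry hashes would be anchored in blocks agreed by consensus; the chaining of `prev_hash` values is what makes retroactive modification of any single record evident to verifiers.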

Author(s):  
Mohd Farhan Md Fudzee ◽  
Jemal H. Abawajy

It is paramount to provide seamless and ubiquitous access to rich content available online to interested users via a wide range of devices with varied characteristics. Recently, a service-oriented content adaptation scheme has emerged to address this content-device mismatch problem. In this scheme, content adaptation functions are provided as services by third-party providers. Clients pay for the consumed services and thus demand service quality. As such, negotiating QoS offers, assuring negotiated QoS levels, and ensuring the accuracy of the adapted content version are essential. Any non-compliance should be handled and reported in real time. These issues elevate the management of the service level agreement (SLA) to an important problem. This chapter presents prior work, important challenges, and a framework for managing SLAs for a service-oriented content adaptation platform.


2021 ◽  
Vol 2021 ◽  
pp. 1-13
Author(s):  
Xiangli Chang ◽  
Hailang Cui

With the increasing popularity of Internet-based services, many of them hosted on cloud platforms, more powerful back-end storage systems are needed to support them. At present, it is very difficult or impossible for one distributed storage design to satisfy every requirement at once; research therefore focuses on trading off different characteristics to design distributed storage solutions for different usage scenarios. Economic big data has the basic requirements of high storage efficiency and fast retrieval speed, and the large number of small files and the diversity of file types make its storage and retrieval severely challenging. This paper is oriented to the application requirements of cross-modal analysis of economic big data. According to the sources and characteristics of economic big data, the data types are analyzed, and the database storage architecture and data storage structure of economic big data are designed. Taking into account the spatial, temporal, and semantic characteristics of economic big data, this paper proposes a unified coding method based on a multilevel spatiotemporal division strategy combining Geohash and Hilbert curves with spatiotemporal semantic constraints. A prototype system was constructed on MongoDB and, once the data storage management functions were realized, used to verify the performance of the proposed multilevel partition algorithm. For workload storage, a Wiener distributed storage scheme based on the principle of the Wiener filter is adopted: according to its periodicity, the workload is divided into distributed storage windows of a specific duration, and at the beginning of each window storage is allocated for the next window. Experiments and tests verify the proposed distributed storage strategy and show that the Wiener distributed storage solution can save platform resources and configuration costs while ensuring the Service Level Agreement (SLA).
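The unified coding method is not detailed in the abstract. The sketch below shows only the Geohash half of such a scheme, interleaving longitude/latitude bisection bits so that spatially close records share a key prefix; the Hilbert-curve component and the semantic constraints are omitted, and the function is an illustrative stand-in rather than the paper's algorithm.

```python
def geohash_bits(lat: float, lon: float, precision_bits: int = 30) -> int:
    """Interleave longitude/latitude bisection bits (even positions: lon,
    odd positions: lat), as in the standard Geohash construction."""
    lat_lo, lat_hi = -90.0, 90.0
    lon_lo, lon_hi = -180.0, 180.0
    bits = 0
    for i in range(precision_bits):
        if i % 2 == 0:  # even bit: refine the longitude interval
            mid = (lon_lo + lon_hi) / 2
            bit = 1 if lon >= mid else 0
            lon_lo, lon_hi = (mid, lon_hi) if bit else (lon_lo, mid)
        else:           # odd bit: refine the latitude interval
            mid = (lat_lo + lat_hi) / 2
            bit = 1 if lat >= mid else 0
            lat_lo, lat_hi = (mid, lat_hi) if bit else (lat_lo, mid)
        bits = (bits << 1) | bit
    return bits

# Nearby points agree on many leading bits; distant points diverge early.
paris_a = geohash_bits(48.8588, 2.2945)
paris_b = geohash_bits(48.8589, 2.2946)
sydney  = geohash_bits(-33.8688, 151.2093)
```

Stored as an indexed integer field in MongoDB, such keys let a range scan over a key interval retrieve spatially adjacent records together, which is the property a multilevel division strategy exploits.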


2019 ◽  
Vol 9 (17) ◽  
pp. 3602 ◽  
Author(s):  
Lei Hang ◽  
Do-Hyeun Kim

Recently, technology startups have leveraged the potential of blockchain-based technologies to govern institutional or interpersonal trust by enforcing signed treaties among individuals in a decentralized environment. However, it is hard to be convinced that blockchain technology can completely replace trust among trading partners in the sharing economy, as sharing services always operate in a highly dynamic environment. With the rapid expansion of the rental market, the sharing economy faces increasingly severe challenges in the form of regulatory uncertainty and concerns about abuses. This paper proposes an enhanced decentralized sharing economy service using the service level agreement (SLA), which documents the services the provider will furnish and defines the service standards the provider is obligated to meet. The SLA specifications are encoded as a smart contract, which facilitates multi-user collaboration and automates the process without the involvement of a third party. To demonstrate the usability of the proposed solution in the sharing economy, a notebook sharing case study is implemented using Hyperledger Fabric. The functionalities of the smart contract are tested using Hyperledger Composer. Moreover, the efficiency of the designed approach is demonstrated through a series of experimental tests using different performance metrics.
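On Hyperledger Fabric the SLA terms would be encoded as chaincode (typically in Go or JavaScript); the Python sketch below only illustrates the shape of the rule such a contract might evaluate, with hypothetical metric names and penalty units that are not taken from the paper.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SLATerm:
    metric: str        # e.g. "availability_pct" (hypothetical metric name)
    threshold: float   # minimum acceptable measured value
    penalty: float     # amount owed to the consumer if the threshold is missed

def evaluate_sla(terms, measured):
    """Total penalty owed for a reporting period. The rule is deterministic,
    so every peer executing the contract reaches the same verdict."""
    return sum(t.penalty for t in terms
               if measured.get(t.metric, 0.0) < t.threshold)

terms = [SLATerm("availability_pct", 99.0, 5.0),
         SLATerm("good_response_pct", 95.0, 2.0)]
owed = evaluate_sla(terms, {"availability_pct": 99.5, "good_response_pct": 90.0})
```

Determinism is the key design constraint: a smart contract that consulted an unshared clock or random source would produce divergent results across endorsing peers and fail validation.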


Algorithms ◽  
2020 ◽  
Vol 13 (3) ◽  
pp. 71 ◽  
Author(s):  
Athanasios Alexopoulos ◽  
Georgios Drakopoulos ◽  
Andreas Kanavos ◽  
Phivos Mylonas ◽  
Gerasimos Vonitsanos

At the dawn of the 10V, or big data, era, there are a considerable number of sources, such as smartphones, IoT devices, social media, smart city sensors, and the health care system, all of which constitute but a small portion of the data lakes feeding the entire big data ecosystem. This 10V data growth poses two primary challenges, namely storing and processing. Concerning the latter, new frameworks have been developed, including distributed platforms such as the Hadoop ecosystem. Classification is a major machine learning task typically executed on distributed platforms, and as a consequence many algorithmic techniques have been developed and tailored for these platforms. This article relies extensively, in two ways, on classifiers implemented in MLlib, the main machine learning library for the Hadoop ecosystem. First, a vast number of classifiers are applied to two datasets, namely Higgs and PAMAP. Second, a two-step classification is performed ab ovo on the same datasets: the singular value decomposition of the data matrix first determines a set of transformed attributes, which in turn drive the classifiers of MLlib. The twofold purpose of the proposed architecture is to reduce complexity while maintaining a similar, if not better, level of accuracy, recall, and F1. The intuition behind this approach stems from the engineering principle of breaking complex problems down into simpler, more manageable tasks. Experiments based on the same Spark cluster indicate that the proposed architecture outperforms the individual classifiers with respect to both complexity and the abovementioned metrics.
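A minimal NumPy analogue of the two-step pipeline, with a toy random dataset and a nearest-centroid rule standing in for Higgs/PAMAP and the MLlib classifiers: project the centered data matrix onto its top-k right singular vectors, then classify in the reduced space.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a Higgs/PAMAP-style table: n samples with d numeric attributes.
n, d, k = 200, 10, 3
X = rng.normal(size=(n, d))
y = (X @ rng.normal(size=d) > 0).astype(int)   # synthetic binary labels

# Step 1: truncated SVD of the centered data matrix yields k transformed attributes.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
X_reduced = Xc @ Vt[:k].T       # project onto the top-k right singular vectors

# Step 2: a downstream classifier consumes only the k reduced attributes.
def nearest_centroid_predict(Xtr, ytr, Xte):
    c0 = Xtr[ytr == 0].mean(axis=0)
    c1 = Xtr[ytr == 1].mean(axis=0)
    d0 = np.linalg.norm(Xte - c0, axis=1)
    d1 = np.linalg.norm(Xte - c1, axis=1)
    return (d1 < d0).astype(int)

pred = nearest_centroid_predict(X_reduced[:150], y[:150], X_reduced[150:])
accuracy = float((pred == y[150:]).mean())
```

The complexity reduction comes from the classifier seeing k attributes instead of d; in MLlib the same shape of pipeline would chain a distributed SVD with any of its classifier implementations.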


2021 ◽  
pp. 1-12
Author(s):  
Rajkumar Rajavel ◽  
Sathish Kumar Ravichandran ◽  
Partheeban Nagappan ◽  
Kanagachidambaresan Ramasubramanian Gobichettipalayam

Developing a Service Level Agreement (SLA) based negotiation framework in the cloud is a major and demanding issue. To provide personalized service access to consumers, a novel Automated Dynamic SLA Negotiation Framework (ADSLANF) is proposed that uses a dynamic SLA concept to negotiate service terms and conditions. Existing frameworks exploit a direct negotiation mechanism in which the provider and consumer talk to each other directly, which may not be applicable in the future owing to the increasing demand for broker-based models. The proposed ADSLANF achieves a much lower total negotiation time by delegating the complicated negotiation mechanism to a third-party broker agent. In addition, a novel game theory decision system suggests an optimal solution to the negotiating agent at the time of generating a proposal or counter-proposal. This optimal suggestion makes the negotiating party aware of the optimal acceptance range of the proposal and avoids negotiation break-off by quickly reaching an agreement.
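The game-theoretic decision system itself is not specified in the abstract. The generic sketch below illustrates the underlying "acceptance range" idea: intersect the two parties' acceptable ranges, and generate a counter-proposal by conceding a fraction of the remaining gap. The function names and the fixed-fraction concession rule are assumptions for illustration, not the ADSLANF design.

```python
def acceptance_overlap(consumer_range, provider_range):
    """Intersection of the two parties' acceptable ranges, or None if disjoint.
    A non-empty overlap means an agreement is reachable without break-off."""
    lo = max(consumer_range[0], provider_range[0])
    hi = min(consumer_range[1], provider_range[1])
    return (lo, hi) if lo <= hi else None

def counter_proposal(last_offer, target, concession=0.25):
    """Concede a fixed fraction of the remaining gap toward the target value."""
    return last_offer + concession * (target - last_offer)

# A consumer willing to pay 8-12 units meets a provider asking 10-15 units.
zone = acceptance_overlap((8.0, 12.0), (10.0, 15.0))   # agreement possible inside the zone
```

Telling the agent the overlap zone up front is what shortens negotiation: offers outside the zone can be skipped rather than exchanged round after round.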


2020 ◽  
Vol 12 (21) ◽  
pp. 9255
Author(s):  
Madhubala Ganesan ◽  
Ah-Lian Kor ◽  
Colin Pattinson ◽  
Eric Rondeau

Internet of Things (IoT) coupled with big data analytics is emerging as the core of smart and sustainable systems that bolster economic, environmental, and social sustainability. Cloud-based data centers provide the high-performance computing power to analyze voluminous IoT data and deliver invaluable insights that support decision making. However, the multifarious servers in data centers appear to be a black hole of superfluous energy consumption, contributing to 23% of the global carbon dioxide (CO2) emissions of the ICT (Information and Communication Technology) industry. IoT-related energy research focuses on low-power sensors and enhanced machine-to-machine communication performance. To date, cloud-based data centers still face energy-related challenges that are detrimental to the environment. Virtual machine (VM) consolidation is a well-known approach to achieving energy-efficient cloud infrastructures. Although several research works demonstrate positive results for VM consolidation in simulated environments, there is a gap in investigations on real, physical cloud infrastructure for big data workloads. This research work addresses that gap by conducting experiments on real, physical cloud infrastructure. The primary goal of setting up the infrastructure is to evaluate dynamic VM consolidation approaches that integrate algorithms from existing relevant research. An open-source VM consolidation framework, OpenStack NEAT, is adopted, and experiments are conducted on a multi-node OpenStack cloud with Apache Spark as the big data platform. Open-source OpenStack was deployed because it enables rapid innovation and boosts scalability as well as resource utilization. Additionally, this research work investigates performance based on service level agreement (SLA) metrics and the energy usage of compute hosts. Relevant results concerning the best-performing combination of algorithms are presented and discussed.
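The abstract does not list the SLA metrics used. The VM-consolidation literature that OpenStack NEAT builds on commonly combines SLATAH (SLA violation time per active host) with PDM (performance degradation due to migrations); the sketch below shows that combined metric as a plausible, but assumed, example rather than this paper's definition.

```python
def slatah(per_host):
    """SLA violation Time per Active Host.
    per_host: list of (active_seconds, overloaded_seconds) per compute host,
    where 'overloaded' means the host CPU was fully utilized."""
    return sum(over / active for active, over in per_host) / len(per_host)

def pdm(per_migration):
    """Performance Degradation due to Migrations.
    per_migration: list of (cpu_demand_mips, mips_shortfall_during_migration)."""
    return sum(short / demand for demand, short in per_migration) / len(per_migration)

def sla_violation(per_host, per_migration):
    # Combined SLAV metric: product of the two independent components.
    return slatah(per_host) * pdm(per_migration)
```

The product form lets an energy-aware scheduler be compared fairly: an aggressive consolidator may cut energy but raise both overload time and migration degradation, and SLAV captures the joint cost.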


2021 ◽  
Vol 2021 ◽  
pp. 1-6
Author(s):  
Jie Xu

The Narrow Band-Internet of Things (NB-IoT) is a narrowband radio technology developed for the Internet of Things that enables smoother and farther-reaching connectivity between IoT devices. Compared with traditional network technologies such as Bluetooth and Wi-Fi, its virtues are low cost, low energy consumption, high coverage, and extended battery life. To secure the balance of task execution latency across NB-IoT devices, this research work designs a handheld NB-IoT wireless communication device. Furthermore, realistic resource-sharing methods between multimedia and sensor data in NB-IoT wireless deployments are provided through an accurate analytical methodology. In addition, the technology for gathering big data from several scattered sources is considerably enhanced, in combination with advancements in big data processing methodologies. The proposed handheld terminal has a wide variety of commercial applications in intelligent manufacturing and smart parking. Simulation outcomes illustrate the benefits of the handheld terminal, which provides practical solutions for network optimization, improving market share and penetration rate.


Author(s):  
Shweta Kaushik ◽  
Charu Gandhi

Cloud computing has started a new era in the field of computing, allowing access to remote data or services anytime and anywhere. In today's competitive environment, the service dynamism, elasticity, and choices offered by this highly scalable technology are too attractive for enterprises to ignore. The scalability of cloud computing allows one to expand and contract resources. The owner's data are stored at a remote location, but the owner is usually afraid of sharing confidential data with the cloud service provider. If the service provider is not trusted, there is a chance that confidential data will leak to an external third party. Security and privacy of data require high consideration, which is addressed by storing the data in encrypted form. The data owner requires the service provider to be trustworthy enough to store its confidential data without any exposure. One outstanding solution for maintaining trust between the communicating parties could be a service level agreement between them.


Author(s):  
Richard T. Herschel

This paper examines big data and the opportunities it presents for improved business intelligence and decision making. Big data comes in multiple forms: structured, semi-structured, or unstructured. The opportunity it presents is that there is so much of it and it is readily available to organizations. Organizations use big data for business intelligence (BI); they can apply analytics in BI activities to assess big data and gain new insights and opportunities for decision making. The problem is that oftentimes the data is of poor quality and contains personal information. This paper explores these issues and examines the importance of effective data management in facilitating sound business intelligence. The Master Data Management methodology is reviewed, and the importance of management support in its deployment is emphasized. With the advent of new sources of big data from IoT devices, the need for even more management involvement is stressed to ensure that organizational BI yields sound decisions and that the use of data complies with new regulations.

