Implementation of Blockchain Consensus Algorithm on Embedded Architecture

2021 ◽  
Vol 2021 ◽  
pp. 1-11
Author(s):  
Tarek Frikha ◽  
Faten Chaabane ◽  
Nadhir Aouinti ◽  
Omar Cheikhrouhou ◽  
Nader Ben Amor ◽  
...  

The adoption of Internet of Things (IoT) technology across many applications, such as autonomous systems, communication, and healthcare, is driving the market’s growth at a positive rate. The emergence of advanced data analytics techniques such as blockchain for connected IoT devices has the potential to reduce cost and increase cloud platform adoption. Blockchain is a key technology for real-time IoT applications, providing trust in distributed robotic systems running on embedded hardware without the need for certification authorities. Blockchain IoT applications face many challenges, such as power consumption and execution time. These specific constraints have to be carefully considered alongside other constraints such as the number of nodes and data security. In this paper, a novel approach is discussed based on a hybrid HW/SW architecture and designed for Proof of Work (PoW) consensus, the most widely used consensus mechanism in blockchain. The proposed architecture is validated using the Ethereum blockchain with Keccak-256 and the field-programmable gate array (FPGA) ZedBoard development kit. This implementation shows a 338% improvement in execution time and a 255% reduction in power consumption compared to Nvidia Maxwell GPUs.
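
The PoW step accelerated in this work is, at its core, a brute-force search for a nonce whose Keccak-256 digest falls below a difficulty target. The following is a minimal software sketch of that loop, assuming the pycryptodome package for Keccak-256 (Ethereum's Keccak differs from NIST SHA3-256); the header bytes and target are illustrative and do not represent the paper's FPGA design.

```python
# Minimal Proof-of-Work sketch: search for a nonce whose Keccak-256 digest
# of (header || nonce) falls below a difficulty target.
# Assumes pycryptodome is installed; values are illustrative only.
from Crypto.Hash import keccak

def keccak256(data: bytes) -> bytes:
    h = keccak.new(digest_bits=256)
    h.update(data)
    return h.digest()

def mine(header: bytes, target: int, max_nonce: int = 2**32):
    for nonce in range(max_nonce):
        digest = keccak256(header + nonce.to_bytes(8, "big"))
        if int.from_bytes(digest, "big") < target:
            return nonce          # found a valid proof of work
    return None                   # exhausted the nonce space

if __name__ == "__main__":
    # Low difficulty so the loop terminates quickly on a CPU.
    target = 1 << 240
    print("valid nonce:", mine(b"example-block-header", target))
```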

2018 ◽  
Vol 38 (1) ◽  
pp. 121-129 ◽  
Author(s):  
Pablo Antonio Pico Valencia ◽  
Juan A. Holgado-Terriza ◽  
Deiver Herrera-Sánchez ◽  
José Luis Sampietro

Recently, the scientific community has shown special interest in integrating agent-oriented technology with Internet of Things (IoT) platforms. From this, a novel approach named the Internet of Agents (IoA) has emerged as an alternative that adds intelligence and autonomy to IoT devices and networks. This paper presents an analysis of the main benefits derived from the IoA approach, from a practical point of view, regarding the necessities that humans demand in their daily life and work, which can be addressed by IoT networks modeled as IoA infrastructures. Twenty-four case studies of the IoA approach in different domains (smart industry, smart city, and smart health and wellbeing) are presented in order to define the scope of these proposals in terms of intelligence and autonomy in contrast to their corresponding generic IoT applications.


2011 ◽  
Vol 2011 ◽  
pp. 1-25 ◽  
Author(s):  
R. Al-Haddad ◽  
R. Oreifej ◽  
R. A. Ashraf ◽  
R. F. DeMara

As reconfigurable devices' capacities and the complexity of applications that use them increase, the need for self-reliance of deployed systems becomes increasingly prominent. Organic computing paradigms have been proposed for fault-tolerant systems because they promote behaviors that allow complex digital systems to adapt and survive in demanding environments. In this paper, we develop a sustainable modular adaptive redundancy technique (SMART) composed of a two-layered organic system. The hardware layer is implemented on a Xilinx Virtex-4 Field Programmable Gate Array (FPGA) to provide self-repair using a novel approach called the reconfigurable adaptive redundancy system (RARS). The software layer supervises the organic activities on the FPGA and extends the self-healing capabilities through application-independent, intrinsic, and evolutionary repair techniques that leverage the benefits of dynamic partial reconfiguration (PR). SMART was evaluated using a Sobel edge-detection application and was shown to tolerate stressful sequences of injected transient and permanent faults while reducing dynamic power consumption by 30% compared to conventional triple modular redundancy (TMR) techniques, with nominal impact on the fault-tolerance capabilities. Moreover, PR is employed to keep the system online while under repair and to reduce repair time. Experiments have shown a 27.48% decrease in repair time when PR is employed compared to the full-bitstream configuration case.
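
For reference, the baseline TMR scheme that SMART is compared against reduces to a majority vote over three redundant module outputs. The sketch below illustrates that voting logic in software; it is a conceptual stand-in, not the paper's FPGA circuitry.

```python
# Conceptual triple modular redundancy (TMR) voter: run three redundant
# copies of a computation and majority-vote the results, masking a single
# faulty module. Illustrative only; the paper realizes redundancy in hardware.
from collections import Counter
from typing import Callable, Sequence

def tmr_vote(modules: Sequence[Callable[[int], int]], x: int) -> int:
    outputs = [m(x) for m in modules]
    value, count = Counter(outputs).most_common(1)[0]
    if count < 2:
        raise RuntimeError("no majority: more than one module is faulty")
    return value

if __name__ == "__main__":
    healthy = lambda x: x * x
    faulty = lambda x: x * x + 1          # stuck-at style error in one module
    print(tmr_vote([healthy, healthy, faulty], 5))   # -> 25
```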


Sensors ◽  
2019 ◽  
Vol 19 (4) ◽  
pp. 833 ◽  
Author(s):  
Ingook Jang ◽  
Donghun Lee ◽  
Jinchul Choi ◽  
Youngsung Son

The traditional Internet of Things (IoT) paradigm has evolved towards intelligent IoT applications which exploit knowledge produced by IoT devices using artificial intelligence techniques. Knowledge sharing between IoT devices is a challenging issue in this trend. In this paper, we propose a Knowledge of Things (KoT) framework which enables sharing self-taught knowledge between IoT devices that require similar or identical knowledge, without help from the cloud. The proposed KoT framework allows an IoT device to effectively produce, accumulate, and share its self-taught knowledge with other devices at the edge in its vicinity. This framework can alleviate behavioral repetition for users and computational redundancy in systems for intelligent IoT applications. To demonstrate the feasibility of the proposed concept, we examine a smart home case study and build a prototype of the KoT framework-based smart home system. Experimental results show that the proposed KoT framework reduces the response time of intelligent IoT devices from a user’s perspective and the power consumption for computation from a system’s perspective.
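
The abstract does not spell out the framework's interfaces, but its core idea (reuse a neighbor's self-taught knowledge before learning from scratch) can be sketched as below. The class and method names are hypothetical illustrations, not the paper's API.

```python
# Conceptual sketch of knowledge sharing at the edge: a device first checks
# its own knowledge, then asks nearby peers, and only self-teaches as a
# last resort. All names here are hypothetical.
class Device:
    def __init__(self, name: str):
        self.name = name
        self.knowledge = {}        # task -> learned policy/model identifier
        self.peers = []            # nearby devices

    def learn(self, task: str) -> str:
        policy = f"policy-for-{task}-learned-by-{self.name}"  # stand-in for training
        self.knowledge[task] = policy
        return policy

    def resolve(self, task: str) -> str:
        if task in self.knowledge:                  # 1. local knowledge
            return self.knowledge[task]
        for peer in self.peers:                     # 2. knowledge shared by a neighbor
            if task in peer.knowledge:
                self.knowledge[task] = peer.knowledge[task]
                return self.knowledge[task]
        return self.learn(task)                     # 3. self-teach as a fallback

if __name__ == "__main__":
    lamp, thermostat = Device("lamp"), Device("thermostat")
    lamp.peers, thermostat.peers = [thermostat], [lamp]
    thermostat.learn("dim-lights-at-night")
    print(lamp.resolve("dim-lights-at-night"))      # reused from peer, no retraining
```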


2021 ◽  
Author(s):  
Subin Narayanan ◽  
Dimitris Tsolkas ◽  
Nikos Passas ◽  
Andreas Höglund ◽  
Olof Liberg

The effective support of 5G Internet of Things (IoT) requires cellular service in deep coverage areas while providing long battery life for IoT devices that perform infrequent small data transmissions towards the base station. Relaying is a promising solution to extend coverage while meeting the battery life requirements of IoT devices. Considering this, we analyze the suitability of layer-3 relaying over the 3GPP Release 16 NR PC5 interface to support massive IoT applications. More precisely, we study the unicast connection establishment mechanism over the NR PC5 interface in a partial coverage scenario. Further, a set of optimizations to the Release 16 NR PC5 procedure to effectively support massive IoT applications is proposed and analyzed. The performance evaluation results, presented in terms of data success probability, device power consumption, and signaling overhead, quantify how effectively the Release 16 NR PC5 interface can support the requirements of IoT in the 5G and beyond era. The proposed sidelink small data transmission and frame-level access provide the largest overall gain and can reduce device power consumption by an average of 68% and signaling overhead by 15% while maintaining a data success probability of more than 90% in an IMT-2020 defined IoT traffic scenario.


2021 ◽  
Vol 11 (22) ◽  
pp. 11011
Author(s):  
Moin Uddin ◽  
Muhammad Muzammal ◽  
Muhammad Khurram Hameed ◽  
Ibrahim Tariq Javed ◽  
Bandar Alamri ◽  
...  

The Internet of Things is widely used in the current era to collect data from sensors and to perform specific tasks by processing those data according to application requirements. The collected data can be sent to a blockchain network to create secure and tamper-resistant records of transactions. The combination of blockchain with IoT has huge potential, as it can provide decentralized computation, storage, and exchange for IoT data. However, IoT applications require a low-latency consensus mechanism due to device constraints. In this paper, CBCIoT, a consensus algorithm for blockchain-based IoT applications, is proposed. The primary purpose of this algorithm is to improve scalability in terms of validation and verification rate. The algorithm is designed to be compatible with IoT devices, where a slight delay is acceptable. The simulation results show the proposed algorithm’s efficiency in terms of block generation time and transactions per second.
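
The two metrics reported for CBCIoT, block generation time and transactions per second, can be computed from a simple block log as sketched below; the block tuple layout used here is an illustrative assumption, not the CBCIoT data format.

```python
# Compute average block generation time and transactions per second from a
# block log. Each entry is an assumed (unix_timestamp, tx_count) tuple.
def throughput_metrics(blocks):
    """blocks: list of (timestamp, number of transactions), in order."""
    if len(blocks) < 2:
        raise ValueError("need at least two blocks")
    elapsed = blocks[-1][0] - blocks[0][0]
    intervals = [b[0] - a[0] for a, b in zip(blocks, blocks[1:])]
    avg_block_time = sum(intervals) / len(intervals)
    tps = sum(tx for _, tx in blocks[1:]) / elapsed
    return avg_block_time, tps

if __name__ == "__main__":
    log = [(0.0, 0), (2.1, 40), (4.0, 35), (6.2, 50)]
    block_time, tps = throughput_metrics(log)
    print(f"avg block time {block_time:.2f}s, {tps:.1f} tx/s")
```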


2022 ◽  
Vol 18 (2) ◽  
pp. 1-25
Author(s):  
Jing Li ◽  
Weifa Liang ◽  
Zichuan Xu ◽  
Xiaohua Jia ◽  
Wanlei Zhou

We are embracing an era of the Internet of Things (IoT). The latency introduced by unstable wireless networks, caused by the limited resources of IoT devices, seriously impacts users' quality of service, particularly the service delay they experience. Mobile Edge Computing (MEC) technology provides promising solutions for delay-sensitive IoT applications, where cloudlets (edge servers) are co-located with wireless access points in the proximity of IoT devices. The service response latency of IoT applications can be significantly shortened because their data processing can be performed in a local MEC network. Meanwhile, most IoT applications impose Service Function Chain (SFC) enforcement on their data transmission, where each data packet from the source gateway of an IoT device to the destination (a cloudlet) of the IoT application must pass through each Virtual Network Function (VNF) in the SFC in an MEC network. However, little attention has been paid to such service provisioning of multi-source IoT applications in an MEC network with SFC enforcement. In this article, we study service provisioning in an MEC network for multi-source IoT applications with SFC requirements, aiming to minimize the cost of such service provisioning, where each IoT application has multiple data streams from different sources to be uploaded to a location (cloudlet) in the MEC network for aggregation, processing, and storage purposes. To this end, we first formulate two novel optimization problems: the cost minimization problem of service provisioning for a single multi-source IoT application, and the service provisioning problem for a set of multi-source IoT applications, and show that both problems are NP-hard. Second, we propose a service provisioning framework in the MEC network for multi-source IoT applications that consists of uploading stream data from the multiple sources of an IoT application to the MEC network, data stream aggregation and routing through VNF instance placement and sharing, and workload balancing among cloudlets. Third, we devise an efficient algorithm for the cost minimization problem built upon the proposed service provisioning framework, and further extend the solution to the service provisioning problem for a set of multi-source IoT applications. We finally evaluate the performance of the proposed algorithms through experimental simulations. The simulation results demonstrate that the proposed algorithms are promising.
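
To give a feel for the problem shape, the sketch below shows a simple greedy heuristic that places each VNF of a chain on the cloudlet with the lowest incremental cost while favoring workload balance. This is not the authors' algorithm; the cost model, capacity value, and function names are illustrative assumptions.

```python
# Illustrative greedy heuristic for SFC placement: assign each VNF of the
# chain to the cloudlet with the lowest incremental cost, where cost grows
# with existing load to encourage workload balancing. Not the paper's algorithm.
def greedy_sfc_placement(chain, cloudlets, load, capacity):
    """chain: ordered VNF names; cloudlets: cloudlet -> unit cost;
    load: cloudlet -> VNF instances already placed; capacity: per-cloudlet limit."""
    placement = []
    for vnf in chain:
        feasible = [c for c in cloudlets if load[c] < capacity]
        if not feasible:
            raise RuntimeError("no cloudlet has spare capacity")
        best = min(feasible, key=lambda c: cloudlets[c] * (1 + load[c]))
        load[best] += 1
        placement.append((vnf, best))
    return placement

if __name__ == "__main__":
    chain = ["firewall", "ids", "aggregator"]
    cloudlets = {"c1": 1.0, "c2": 1.5}
    print(greedy_sfc_placement(chain, cloudlets, {"c1": 0, "c2": 0}, capacity=2))
```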


2021 ◽  
Vol 10 (1) ◽  
pp. 13
Author(s):  
Claudia Campolo ◽  
Giacomo Genovese ◽  
Antonio Iera ◽  
Antonella Molinaro

Several Internet of Things (IoT) applications are booming that rely on advanced artificial intelligence (AI) and, in particular, machine learning (ML) algorithms to assist users and make decisions on their behalf in a large variety of contexts, such as smart homes, smart cities, and smart factories. Although the traditional approach is to deploy such compute-intensive algorithms in the centralized cloud, the recent proliferation of low-cost, AI-powered microcontrollers and consumer devices paves the way for spreading intelligence pervasively along the cloud-to-things continuum. The take-off of this promising vision may be hindered by the resource constraints of IoT devices and by the heterogeneity of (mostly proprietary) AI-embedded software and hardware platforms. In this paper, we propose a solution for distributed AI deployment at the deep edge, which lays its foundation in the IoT virtualization concept. We design a virtualization layer, hosted at the network edge, that is in charge of the semantic description of AI-embedded IoT devices and can therefore expose as well as augment their cognitive capabilities in order to feed intelligent IoT applications. The proposal has been devised with the twofold aim of (i) relieving the pressure on constrained devices that are solicited by multiple parties interested in accessing their generated data and inferences, and (ii) targeting interoperability among AI-powered platforms. A Proof-of-Concept (PoC) is provided to showcase the viability and advantages of the proposed solution.
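
The kind of semantic description such a virtualization layer could expose for an AI-embedded device is sketched below. The field names, protocol, and values are hypothetical illustrations, not the descriptors used in the paper's Proof-of-Concept.

```python
# Hypothetical semantic descriptor that an edge virtualization layer could
# publish for an AI-embedded IoT device, advertising both its raw data
# streams and its on-device inference capability. Field names are illustrative.
import json

camera_descriptor = {
    "device_id": "cam-42",
    "data_streams": [{"name": "rgb_frames", "rate_hz": 15}],
    "inference_capabilities": [
        {"task": "person_detection", "model": "tinyml-cnn", "latency_ms": 120}
    ],
    "access": {"protocol": "mqtt", "topic": "edge/cam-42/inference"},
}

if __name__ == "__main__":
    print(json.dumps(camera_descriptor, indent=2))
```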


Author(s):  
Jaber Almutairi ◽  
Mohammad Aldossary

Recently, the number of Internet of Things (IoT) devices connected to the Internet has increased dramatically, as has the data produced by these devices. This calls for offloading IoT tasks to resource-rich nodes, such as Edge Computing and Cloud Computing, to relieve devices of heavy computation and storage. Although Edge Computing is a promising enabler for latency-sensitive issues, its deployment produces new challenges. Besides, different service architectures and offloading strategies have different impacts on the service time performance of IoT applications. Therefore, this paper presents a novel approach for task offloading in an Edge-Cloud system in order to minimize the overall service time for latency-sensitive applications. The approach adopts fuzzy logic algorithms, considering application characteristics (e.g., CPU demand, network demand, and delay sensitivity) as well as resource utilization and resource heterogeneity. A number of simulation experiments are conducted to compare the proposed approach with other related approaches; it was found to improve the overall service time for latency-sensitive applications and to utilize edge-cloud resources effectively. The results also show that different offloading decisions within the Edge-Cloud system can lead to varying service times depending on the computational resources and communication types involved.
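
A fuzzy-logic offloading decision of the flavor described above can be sketched as follows: normalized task and resource attributes are mapped through membership functions and combined by simple rules into an edge-versus-cloud choice. The membership functions and rules here are illustrative assumptions, not the authors' rule base.

```python
# Minimal fuzzy-logic style offloading sketch: score "edge" vs "cloud" from
# normalized attributes in [0, 1]. Membership functions and rules are illustrative.
def high(x: float) -> float:          # degree to which a value is "high"
    return max(0.0, min(1.0, (x - 0.3) / 0.4))

def low(x: float) -> float:           # degree to which a value is "low"
    return 1.0 - high(x)

def offload_decision(cpu_demand, net_demand, delay_sensitivity, edge_util):
    # Rule 1: delay-sensitive, light tasks with a free edge -> offload to edge
    edge_score = min(high(delay_sensitivity), low(cpu_demand), low(edge_util))
    # Rule 2: heavy or bandwidth-hungry tasks, or a busy edge -> offload to cloud
    cloud_score = max(high(cpu_demand), high(net_demand), high(edge_util))
    return "edge" if edge_score >= cloud_score else "cloud"

if __name__ == "__main__":
    print(offload_decision(cpu_demand=0.2, net_demand=0.3,
                           delay_sensitivity=0.9, edge_util=0.4))   # -> edge
    print(offload_decision(cpu_demand=0.8, net_demand=0.7,
                           delay_sensitivity=0.2, edge_util=0.9))   # -> cloud
```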

