Stateful Stream Processing Containerized as Microservice to Support Digital Twins in Fog Computing

Author(s):  
Ameer Basim Abdulameer Alaasam ◽  
Gleb Igorevich Radchenko ◽  
Andrei Nikolaevitch Tchernykh ◽  
José Luis González-Compeán

Digital twins of processes and devices use information from sensors to synchronize their state with their physical-world counterparts. The concept of stream computing enables effective processing of the events generated by such sensors. However, the need to track the state of each object instance makes it impossible to organize digital twin instances as stateless services. Another feature of digital twins is that several tasks built on them must respond to incoming events at near-real-time speed. In this case, the use of cloud computing becomes unacceptable due to high latency. Fog computing addresses this problem by moving some computational tasks closer to the data sources. One recent approach to building loosely coupled distributed systems is the microservice approach, which organizes a distributed system as a set of coherent, independent services that interact with each other using messages. Microservices are most often isolated using containers to avoid the high overhead of virtual machines. The main problem is that microservices and containers are stateless by nature, and container technology still does not fully support live container migration between physical hosts without data loss. This makes it challenging to ensure the uninterrupted operation of services in fog computing environments. Thus, an essential challenge is to create a containerized, stateful stream-processing microservice to support digital twins in the fog computing environment. Within the scope of this article, we study live stateful stream processing migration and how to redistribute computational activity across cloud and fog nodes using Kafka middleware and its Streams DSL API.
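The core idea above, externalizing an operator's state so a stateful stream processor can be stopped on one node and resumed on another, can be sketched in a few lines of Python. This is a hedged illustration only, not the authors' implementation: the `StatefulCounter` class, its changelog list, and the `restore` step are hypothetical stand-ins for Kafka Streams state stores backed by changelog topics.

```python
# Minimal sketch of stateful stream processing with migration via a changelog,
# in the spirit of Kafka Streams state stores (all names are hypothetical).

class StatefulCounter:
    """Keeps a per-sensor event count; every update is also appended to a
    changelog so another instance can rebuild the state after migration."""

    def __init__(self, changelog):
        self.state = {}             # local state store: sensor_id -> count
        self.changelog = changelog  # shared log (a Kafka topic in practice)

    def process(self, sensor_id):
        self.state[sensor_id] = self.state.get(sensor_id, 0) + 1
        self.changelog.append((sensor_id, self.state[sensor_id]))

    @classmethod
    def restore(cls, changelog):
        """Replay the changelog on a new (e.g. fog) node to resume processing."""
        instance = cls(changelog)
        for sensor_id, count in changelog:
            instance.state[sensor_id] = count
        return instance


# A cloud node processes some sensor events, then "migrates" to a fog node.
changelog = []
cloud = StatefulCounter(changelog)
for event in ["s1", "s2", "s1"]:
    cloud.process(event)

fog = StatefulCounter.restore(changelog)  # state survives the move
fog.process("s1")
print(fog.state)  # {'s1': 3, 's2': 1}
```

Because the state is reconstructed from the log rather than copied from the container, the stateless container itself never needs to be live-migrated, which is the property the abstract exploits.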

2021 ◽  
Vol 11 (22) ◽  
pp. 10996
Author(s):  
Jongbeom Lim

As Internet of Things (IoT) and Industrial Internet of Things (IIoT) devices become increasingly popular in the era of the Fourth Industrial Revolution, the orchestration and management of numerous fog devices encounter a scalability problem. In fog computing environments, cloud virtualization technology is widely used to support various types of computation. With virtualization technology, IoT and IIoT tasks can run on virtual machines or containers, which can migrate from one machine to another. However, efficient and scalable orchestration of migrations for mobile users and devices in fog computing environments is not an easy task: naïve or unmanaged migrations may impinge on the reliability of cloud tasks. In this paper, we propose a scalable fog computing orchestration mechanism for reliable cloud task scheduling. The proposed mechanism considers live migrations of virtual machines and containers across edge servers to reduce both cloud task failures and the time tasks remain suspended when a device is disconnected due to mobility. The performance evaluation shows that our proposed fog computing orchestration is scalable while preserving the reliability of cloud tasks.
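The mobility-aware migration decision described above can be sketched as follows. This is a hedged illustration under simplified assumptions, not the paper's mechanism: the 1-D positions, the `nearest_edge` heuristic, and all names are hypothetical; a real orchestrator would use network latency or topology rather than distance.

```python
# Hedged sketch of a mobility-aware live-migration decision for fog
# orchestration (all names and the 1-D distance model are hypothetical).

def nearest_edge(user_pos, edge_servers):
    """Pick the edge server closest to the user's current position."""
    return min(edge_servers, key=lambda s: abs(s["pos"] - user_pos))

def plan_migrations(tasks, edge_servers):
    """Return (task_id, src, dst) live-migration actions for tasks whose
    user has moved away from the server currently hosting them."""
    actions = []
    for task in tasks:
        target = nearest_edge(task["user_pos"], edge_servers)
        if target["name"] != task["host"]:
            actions.append((task["id"], task["host"], target["name"]))
    return actions

edges = [{"name": "edge-A", "pos": 0}, {"name": "edge-B", "pos": 10}]
tasks = [
    {"id": "vm-1", "host": "edge-A", "user_pos": 9},  # user moved toward B
    {"id": "ct-2", "host": "edge-B", "user_pos": 8},  # still closest to B
]
print(plan_migrations(tasks, edges))  # [('vm-1', 'edge-A', 'edge-B')]
```

Migrating `vm-1` before its user's connection to `edge-A` degrades is what keeps the task from failing or sitting suspended, which is the reliability goal the abstract states.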


2016 ◽  
Vol 3 (1) ◽  
pp. 36-48 ◽  
Author(s):  
Zhiwei Xu ◽  
Xuebin Chi ◽  
Nong Xiao

Abstract A high-performance computing environment, also known as a supercomputing environment, e-Science environment or cyberinfrastructure, is a crucial system that connects users’ applications to supercomputers, and provides usability, efficiency, sharing, and collaboration capabilities. This review presents important lessons drawn from China's nationwide efforts to build and use a high-performance computing environment over the past 20 years (1995–2015), including three observations and two open problems. We present evidence that such an environment helps to grow China's nationwide supercomputing ecosystem by orders of magnitude, where a loosely coupled architecture accommodates diversity. An important open problem is why technology for global networked supercomputing has not yet become as widespread as the Internet or Web. In the next 20 years, high-performance computing environments will need to provide zettaflops computing capability and 10 000 times better energy efficiency, and support seamless human-cyber-physical ternary computing.


2020 ◽  
Vol 46 (8) ◽  
pp. 511-525
Author(s):  
Ameer B. A. Alaasam ◽  
G. Radchenko ◽  
A. Tchernykh ◽  
J. L. González Compeán

2021 ◽  
Vol 8 (4) ◽  
pp. 82-88
Author(s):  
Alraddady et al.

The tremendous increase in IoT devices and the amount of data they produce makes processing at cloud data centers very expensive. Fog computing was therefore introduced by Cisco in 2012 as a decentralized computing environment that handles such a large volume of requests more efficiently. Fog computing is a distributed computing paradigm that brings data processing to the network periphery to reduce response time and increase quality of service. This paper considers the dependability challenges of such distributed and heterogeneous computing environments. Because fog computing is a new computing paradigm, several studies have been presented to tackle its challenges and issues; dependability specifically, however, has not received much attention. In this paper, we explore several solutions for increasing dependability in fog computing, such as fault tolerance techniques, placement policies, middleware, and data management mechanisms, aiming to help system designers choose the most appropriate solution.
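One of the dependability techniques the survey mentions, fault tolerance, can be sketched as a simple retry-with-failover across fog nodes. This is a hedged toy example, not any specific mechanism from the paper: the node list, the failure model, and the cloud fallback are all hypothetical.

```python
# Hedged sketch of retry-with-failover, one fault tolerance technique for
# fog computing dependability (node list and failure model are hypothetical).

def run_with_failover(task, nodes, max_attempts=3):
    """Try the task on each candidate fog node in order, moving on when a
    node fails; fall back to the cloud only when every fog node has failed."""
    for node in nodes[:max_attempts]:
        try:
            return node(task)
        except RuntimeError:
            continue  # node failed; try the next replica
    return f"cloud:{task}"  # last-resort fallback to the cloud data center


def failing(task):
    raise RuntimeError("node down")

def healthy(task):
    return f"fog:{task}"


print(run_with_failover("sense", [failing, healthy]))  # fog:sense
print(run_with_failover("sense", [failing, failing]))  # cloud:sense
```

The design choice here mirrors the fog rationale in the abstract: the task stays at the periphery as long as any fog replica is alive, and only pays the cloud's latency when dependability demands it.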

