ioFog: Prediction-based Fog Computing Architecture for Offline IoT

2021 ◽  
Author(s):  
Mehbub Alam ◽  
Nurzaman Ahmed ◽  
Rakesh Matam ◽  
Ferdous Ahmed Barbhuiya

Due to multi-hop, long-distance, and wireless backbone connectivity, provisioning critical and diverse services faces challenges in meeting low-latency and reliability requirements. This paper proposes ioFog, an offline fog architecture for achieving reliability and low latency over a large backbone network. Our solution uses a Markov chain-based task prediction model to meet dynamic service requirements with minimal dependency on the Internet. The proposed architecture employs a central Fog Controller (FC) to (i) provide a global status overview and (ii) predict upcoming tasks of fog nodes for intelligent offloading decisions. The FC also maintains the current status of existing fog nodes in terms of their processing and storage capabilities. Accordingly, it can schedule possible future offline computations and task allocations. ioFog considers the requirements of individual IoT applications and enables improved fog computing decisions. Compared to existing offline IoT solutions, ioFog improves service time significantly and improves the service delivery ratio by up to 23%.
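The abstract gives no implementation details, but the core idea of a Markov chain-based task predictor can be illustrated with a minimal sketch. The Python snippet below is an illustration only (the task labels, the counting-based transition estimates, and the `MarkovTaskPredictor` name are assumptions, not taken from ioFog): a controller counts observed task-to-task transitions and predicts the most likely next task so that resources can be allocated ahead of the request.

```python
from collections import defaultdict

class MarkovTaskPredictor:
    """Minimal first-order Markov chain over task types.

    Counts observed task-to-task transitions and predicts the most
    likely next task, which a controller could use to pre-allocate
    compute and storage before the request actually arrives.
    """

    def __init__(self):
        # transitions[prev_task][next_task] = observed count
        self.transitions = defaultdict(lambda: defaultdict(int))

    def observe(self, task_sequence):
        for prev, nxt in zip(task_sequence, task_sequence[1:]):
            self.transitions[prev][nxt] += 1

    def predict_next(self, current_task):
        candidates = self.transitions.get(current_task)
        if not candidates:
            return None  # no history for this task type yet
        total = sum(candidates.values())
        task, count = max(candidates.items(), key=lambda kv: kv[1])
        return task, count / total  # most probable next task + estimate


# Hypothetical usage with invented task labels:
predictor = MarkovTaskPredictor()
predictor.observe(["sense", "filter", "aggregate", "sense", "filter", "store"])
print(predictor.predict_next("filter"))  # e.g. ('aggregate', 0.5)
```

A real controller would presumably keep one such model per fog node and combine the prediction with the node's reported processing and storage status before committing an offloading decision.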


Author(s):  
Sejal Atit Bhavsar ◽  
Kirit J Modi

Fog computing is a paradigm that extends cloud computing services to the edge of the network. Fog computing provides data, storage, compute, and application services to end users. The distinguishing characteristic of fog computing is its proximity to end users. Application services are hosted on network edge devices such as routers and switches. The goal of fog computing is to improve efficiency and reduce the amount of data that needs to be transported to the cloud for analysis, processing, and storage. Due to the heterogeneous characteristics of fog computing, there are issues such as security, fault tolerance, and resource scheduling and allocation. To better understand fault tolerance, we highlight its basic concepts through the different fault tolerance techniques, i.e., reactive, proactive, and hybrid. In addition to fault tolerance, how to balance resource utilization and security in fog computing is also discussed. Furthermore, to overcome platform-level issues of fog computing, a hybrid fault tolerance model using resource management and security is presented.
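For readers unfamiliar with the terminology, the reactive/proactive split can be illustrated with a small, generic Python sketch (not taken from the chapter): reactive techniques act after a fault is detected, e.g., by retrying, while proactive techniques anticipate faults, e.g., by running redundant replicas; a hybrid scheme combines both.

```python
import time
from concurrent.futures import ThreadPoolExecutor, FIRST_COMPLETED, wait

def reactive_retry(task, attempts=3, delay=0.5):
    """Reactive fault tolerance: act only after a failure is observed,
    here by retrying the failed task a bounded number of times."""
    last_error = None
    for _ in range(attempts):
        try:
            return task()
        except Exception as err:  # in practice, catch specific fault types
            last_error = err
            time.sleep(delay)
    raise last_error

def proactive_replicate(task, n_replicas=2):
    """Proactive fault tolerance: anticipate failures by running
    redundant replicas and taking whichever result arrives first."""
    with ThreadPoolExecutor(max_workers=n_replicas) as pool:
        futures = [pool.submit(task) for _ in range(n_replicas)]
        done, _ = wait(futures, return_when=FIRST_COMPLETED)
        return done.pop().result()

# A hybrid scheme would wrap each replica in the reactive retry, e.g.:
# proactive_replicate(lambda: reactive_retry(some_flaky_fog_task))
```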


2017 ◽  
Vol 27 (03n04) ◽  
pp. 1750010 ◽  
Author(s):  
Amedeo Sapio ◽  
Mario Baldi ◽  
Fulvio Risso ◽  
Narendra Anand ◽  
Antonio Nucci

Traffic capture and analysis is key to many domains, including network management, security, and network forensics. Traditionally, it is performed by a dedicated device accessing traffic at a specific point within the network through a link tap or a port of a node mirroring packets. This approach is problematic because the dedicated device must be equipped with a large amount of computation and storage resources to store and analyze packets. Alternatively, in order to achieve scalability, analysis can be performed by a cluster of hosts. However, such a cluster is normally located remotely with respect to the observation point, hence requiring a large volume of captured traffic to be moved across the network. To address this problem, this paper presents an algorithm to distribute the task of capturing, processing, and storing packets traversing a network across multiple packet forwarding nodes (e.g., IP routers). Essentially, our solution allows individual nodes on the path of a flow to operate on subsets of packets of that flow in a completely distributed and decentralized manner. The algorithm ensures that each packet is processed by n nodes, where n can be set to 1 to minimize overhead or to a higher value to achieve redundancy. Nodes create a distributed index that enables efficient retrieval of the packets they store (e.g., for forensics applications). Finally, the basic principles of the presented solution can also be applied, with minimal changes, to the distributed execution of generic tasks on data flowing through a network of nodes with processing and storage capabilities. This has applications in various fields ranging from fog computing to microservice architectures and the Internet of Things.
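The abstract does not spell out how a node decides which packets to handle; one plausible decentralized realization (an illustration only, not the paper's actual algorithm) is to hash a packet identifier onto the ordered list of nodes on the flow's path, so that every node reaches the same assignment without coordination.

```python
import hashlib

def responsible_nodes(packet_id, path_nodes, n=1):
    """Pick the n nodes on a flow's path responsible for capturing and
    storing a given packet. Every node runs the same hash computation
    locally, so no coordination is needed.

    packet_id  -- bytes identifying the packet (e.g. 5-tuple plus IP ID)
    path_nodes -- ordered list of node identifiers on the flow's path
    n          -- replication factor (1 = minimal overhead, >1 = redundancy)
    """
    digest = int(hashlib.sha256(packet_id).hexdigest(), 16)
    start = digest % len(path_nodes)
    # take n consecutive nodes starting from the hashed position
    return [path_nodes[(start + i) % len(path_nodes)] for i in range(n)]


# Hypothetical usage: a router checks whether it should store this packet.
path = ["r1", "r2", "r3", "r4"]
if "r2" in responsible_nodes(b"10.0.0.1:443->10.0.0.9:51834#4242", path, n=2):
    pass  # capture and index the packet locally
```

With n = 1 each packet is stored exactly once along the path; a larger n trades storage for redundancy, matching the overhead/redundancy knob described in the abstract.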


2020 ◽  
Vol 3 (1) ◽  
pp. 75-105 ◽  
Author(s):  
Rahul Neware ◽  
Urmila Shrawankar

Fog computing extends cloud services to the edge of the network, bringing processing, communication, caching, and storage capacity closer to edge devices and end users and, in the process, aims at enhancing mobility, low latency, bandwidth efficiency, and security and privacy. This article takes an extensive and wide-ranging view of fog computing. It begins with the layered architecture of fog computing and its attributes. It then delineates key enabling technologies, such as communication and computing, and shows how these support fog deployments and applications. Following that, it shows how, despite being a feature-rich platform, fog computing remains susceptible to several security, privacy, and safety concerns that stem from its widely distributed and open architecture. Finally, some suggestions are advanced to address the security challenges discussed, so as to propel the further growth of fog computing.


2021 ◽  
Vol 4 (1) ◽  
pp. 1-17
Author(s):  
Hewan Shrestha ◽  
Puviyarai T. ◽  
Sana Sodanapalli ◽  
Chandramohan Dhasarathan

The emerging trend of the Internet of Things is a blessing for various industries around the world. The increasing amount of data generated by these devices makes proper data flow and computation over a conventional cloud architecture difficult. Fog computing is a strong alternative to cloud computing, as it supports computation on devices spread over a large, distributed geographical area. With applications in various domains including healthcare, logistics, design, marketing, manufacturing, and many more, fog computing is a great boon for the future. Evolving fog computing in various domains with different methods and techniques has shaped a clear future for it. The applicability of fog computing to vehicular communications and storage-as-a-service has made the term more popular these days. This work reviews possible fog computing-enabled applications and their future scope, and it prepares a basis for further research into fog computing domain-enabled services with low latency and minimal cost.


2019 ◽  
Vol 11 (11) ◽  
pp. 222 ◽  
Author(s):  
Marica Amadeo ◽  
Giuseppe Ruggeri ◽  
Claudia Campolo ◽  
Antonella Molinaro ◽  
Valeria Loscrí ◽  
...  

By offering low-latency and context-aware services, fog computing will have a peculiar role in the deployment of Internet of Things (IoT) applications for smart environments. Unlike the conventional remote cloud, for which consolidated architectures and deployment options exist, many design and implementation aspects remain open when considering the latest fog computing paradigm. In this paper, we focus on the problems of dynamically discovering the processing and storage resources distributed among fog nodes and, accordingly, orchestrating them for the provisioning of IoT services for smart environments. In particular, we show how these functionalities can be effectively supported by the revolutionary Named Data Networking (NDN) paradigm. Originally conceived to support named content delivery, NDN can be extended to request and provide named computation services, with NDN nodes acting as both content routers and in-network service executors. To substantiate our analysis, we present an NDN fog computing framework with a focus on a smart campus scenario, where the execution of IoT services is dynamically orchestrated and performed by NDN nodes in a distributed fashion. A simulation campaign in ndnSIM, the reference network simulator of the NDN research community, is also presented to assess the performance of our proposal against state-of-the-art solutions. Results confirm the superiority of the proposal in terms of service provisioning time, at the expense of a slightly higher amount of traffic exchanged among fog nodes.
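As a rough illustration of the naming idea (the name components and service table below are invented for this sketch and are not part of the authors' framework), a computation request can be expressed as a hierarchical NDN-style name that a fog node either satisfies locally or forwards toward a capable executor.

```python
def make_compute_interest(scope, service, *params):
    """Build an NDN-like Interest name for a named computation request."""
    return "/" + "/".join([scope, "exec", service, *map(str, params)])

# Table mapping named services to local handlers on a fog/NDN node
# (hypothetical service and result, for illustration only).
LOCAL_SERVICES = {
    "avg-temperature": lambda room: f"21.5C in {room}",
}

def handle_interest(name):
    """Serve the Interest locally if the named service is available;
    otherwise the node would forward it upstream (not modeled here)."""
    parts = name.strip("/").split("/")
    if len(parts) >= 3 and parts[1] == "exec" and parts[2] in LOCAL_SERVICES:
        service, args = parts[2], parts[3:]
        return LOCAL_SERVICES[service](*args)  # becomes a named Data packet
    return None

print(handle_interest(make_compute_interest("campus", "avg-temperature", "labA")))
```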


2021 ◽  
pp. 308-318
Author(s):  
Hadeel T. Rajab ◽  
Manal F. Younis

The Internet of Things (IoT) contributes to improving the quality of life, as it supports many applications, especially healthcare systems. Data generated by IoT devices is sent to Cloud Computing (CC) for processing and storage, despite the latency caused by the distance. With the rapid growth of IoT devices, the volume of data sent to the CC has been increasing, adding congestion on the cloud network to the latency problem. Fog Computing (FC) is used to solve these problems because of its proximity to IoT devices, filtering the data before it is sent to the CC. FC is a middle layer located between IoT devices and the CC layer. To handle the massive data generated by IoT devices at the FC, the Dynamic Weighted Round Robin (DWRR) algorithm, a load balancing (LB) algorithm, is applied to schedule and distribute data among fog servers by reading their CPU and memory values in order to improve system performance. The results show that the DWRR algorithm provides high throughput, reaching 3290 req/sec with 919 users. Much research is concerned with workload distribution using LB techniques without paying much attention to Fault Tolerance (FT), which ensures that the system continues to operate even when a fault occurs. Therefore, we propose a replication FT technique, primary-backup replication based on a dynamic checkpoint interval, for the FC. Checkpointing replicates new data from a primary server to a backup server dynamically by monitoring the CPU value of the primary fog server, so that a checkpoint occurs only when the CPU value exceeds 0.2, thereby reducing overhead. The results show that the execution time of the data filtering process on the FC with a dynamic checkpoint is lower than with a static checkpoint that is independent of the CPU status.
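A minimal Python sketch of the two mechanisms described above, under simplified assumptions (the field names, weight formula, and scaling factor are illustrative, not the paper's code): DWRR-style weights are derived from CPU and memory utilization, and a checkpoint to the backup server is triggered only when the primary's CPU value exceeds 0.2.

```python
def dwrr_weights(servers):
    """Derive scheduling weights from current CPU and memory utilization:
    lightly loaded fog servers receive proportionally more requests."""
    weights = {}
    for name, stats in servers.items():
        load = 0.5 * stats["cpu"] + 0.5 * stats["mem"]  # utilization in [0, 1]
        weights[name] = max(1, round(10 * (1.0 - load)))
    return weights

def should_checkpoint(primary_cpu, threshold=0.2):
    """Dynamic checkpoint policy from the abstract: replicate new data to
    the backup only when the primary's CPU value exceeds the threshold."""
    return primary_cpu > threshold

# Hypothetical server statistics:
servers = {"fog1": {"cpu": 0.3, "mem": 0.4}, "fog2": {"cpu": 0.7, "mem": 0.6}}
print(dwrr_weights(servers))    # {'fog1': 6, 'fog2': 4}
print(should_checkpoint(0.35))  # True -> replicate new data to the backup
```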


2020 ◽  
Vol 2020 ◽  
pp. 1-15 ◽  
Author(s):  
Jielin Jiang ◽  
Zheng Li ◽  
Yuan Tian ◽  
Najla Al-Nabhan

Cloud computing is widely used for its powerful and accessible computing and storage capacity. However, with the development of the Internet of Things (IoT), the distance between the cloud and terminal devices can no longer meet the new IoT requirements of low latency and real-time interaction. Fog computing has been proposed as a complement to the cloud that moves servers to the edge of the network, making it possible to process service requests of terminal devices locally. Although fog computing removes many obstacles to the development of IoT, its technology is still immature and many problems remain to be solved. In this paper, the concepts and characteristics of cloud and fog computing are introduced, followed by the comparison and collaboration between them. We summarize the main challenges IoT faces in new application requirements (e.g., low latency, network bandwidth constraints, resource constraints of devices, stability of service, and security) and analyze fog-based solutions. The remaining challenges and research directions of fog computing after integration into IoT systems are discussed. In addition, the key role that 5G-based fog computing may play in the fields of intelligent driving and tactile robots is envisioned.


Sensors ◽  
2020 ◽  
Vol 20 (22) ◽  
pp. 6441 ◽  
Author(s):  
Salam Hamdan ◽  
Moussa Ayyash ◽  
Sufyan Almajali

The rapid growth of Internet of Things (IoT) applications and their integration into our daily life tasks have led to a large number of IoT devices and enormous volumes of IoT-generated data. The resources of IoT devices are limited; therefore, processing and storing IoT data on these devices is inefficient. Traditional cloud-computing resources are used to partially handle some of the IoT resource-limitation issues; however, relying on resources in cloud centers leads to other issues, such as latency in time-critical IoT applications. Therefore, edge-cloud-computing technology has recently evolved, allowing data processing and storage at the edge of the network. This paper studies, in depth, edge-computing architectures for IoT (ECAs-IoT) and classifies them according to different factors such as data placement, orchestration services, security, and big data. The paper examines each architecture in depth and compares them according to various features. Additionally, ECAs-IoT are mapped to two existing IoT layered models, which helps in identifying the capabilities, features, and gaps of every architecture. Moreover, the paper presents the most important limitations of existing ECAs-IoT and recommends solutions to them. Furthermore, this survey details IoT applications in the edge-computing domain. Lastly, the paper recommends four different scenarios for the use of ECAs-IoT by IoT applications.

