Packet Capture and Analysis on MEDINA, A Massively Distributed Network Data Caching Platform

2017 ◽  
Vol 27 (03n04) ◽  
pp. 1750010 ◽  
Author(s):  
Amedeo Sapio ◽  
Mario Baldi ◽  
Fulvio Risso ◽  
Narendra Anand ◽  
Antonio Nucci

Traffic capture and analysis is key to many domains including network management, security and network forensics. Traditionally, it is performed by a dedicated device accessing traffic at a specific point within the network through a link tap or a mirroring port of a node. This approach is problematic because the dedicated device must be equipped with a large amount of computation and storage resources to store and analyze packets. Alternatively, in order to achieve scalability, analysis can be performed by a cluster of hosts. However, such a cluster is normally located far from the observation point, hence requiring a large volume of captured traffic to be moved across the network. To address this problem, this paper presents an algorithm to distribute the task of capturing, processing and storing packets traversing a network across multiple packet forwarding nodes (e.g., IP routers). Essentially, our solution allows individual nodes on the path of a flow to operate on subsets of packets of that flow in a completely distributed and decentralized manner. The algorithm ensures that each packet is processed by n nodes, where n can be set to 1 to minimize overhead or to a higher value to achieve redundancy. Nodes create a distributed index that enables efficient retrieval of the packets they store (e.g., for forensics applications). Finally, the basic principles of the presented solution can also be applied, with minimal changes, to the distributed execution of generic tasks on data flowing through a network of nodes with processing and storage capabilities. This has applications in various fields ranging from fog computing to microservice architectures and the Internet of Things.
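The abstract does not give the assignment rule, but the property it states (each packet processed by exactly n path nodes, with no coordination) can be sketched with a hash-based ownership test that every node evaluates independently; the identifier format, window scheme and function names below are illustrative assumptions, not the paper's algorithm:

```python
import hashlib

def responsible(packet_id: bytes, node_index: int, path_length: int, n: int = 1) -> bool:
    """Decide locally whether this node should capture a given packet.

    Each node on the path hashes an invariant packet identifier (e.g. the
    5-tuple plus IP ID) to a position on the path; the n nodes starting at
    that position capture the packet, giving n-fold redundancy without any
    inter-node coordination.
    """
    h = int.from_bytes(hashlib.sha256(packet_id).digest()[:8], "big")
    owner = h % path_length
    # Capture if this node's index falls in the window [owner, owner + n)
    return (node_index - owner) % path_length < n
```

With n = 1 exactly one node on the path captures each packet; raising n trades storage overhead for redundancy.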

Author(s):  
Sejal Atit Bhavsar ◽  
Kirit J Modi

Fog computing is a paradigm that extends cloud computing services to the edge of the network, providing data, storage, compute and application services to end users. The distinguishing characteristic of fog computing is its proximity to end users: application services are hosted on network edge devices such as routers and switches. The goal of fog computing is to improve efficiency and reduce the amount of data that needs to be transported to the cloud for analysis, processing and storage. Due to the heterogeneous nature of fog computing, several issues arise, such as security, fault tolerance, and resource scheduling and allocation. To better understand fault tolerance, we highlight its basic concepts by examining the main classes of fault tolerance techniques: reactive, proactive and hybrid. In addition to fault tolerance, we also discuss how to balance resource utilization and security in fog computing. Furthermore, to overcome platform-level issues of fog computing, we present a hybrid fault tolerance model based on resource management and security.
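The reactive/proactive/hybrid distinction can be illustrated with a small sketch (not the authors' model): a proactive step avoids replicas whose heartbeat is stale before dispatch, and a reactive step retries on another replica after a failure. The heartbeat freshness window and retry policy are illustrative assumptions:

```python
import time

def run_with_hybrid_ft(task, replicas, heartbeat, max_retries=2):
    """Hybrid fault tolerance sketch.

    Proactive: skip replicas whose last heartbeat is stale (fault avoidance
    before the task runs). Reactive: on failure, retry on another replica
    (fault recovery after it occurs).
    """
    now = time.time()
    # Proactive step: prefer replicas seen alive within the last 5 seconds
    healthy = [r for r in replicas if now - heartbeat.get(r, 0) < 5.0]
    candidates = healthy or replicas  # fall back to all if none look healthy
    for attempt in range(max_retries + 1):
        node = candidates[attempt % len(candidates)]
        try:
            return task(node)
        except Exception:
            continue  # Reactive step: try the next candidate replica
    raise RuntimeError("task failed on all candidate replicas")
```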


2019 ◽  
Vol 11 (11) ◽  
pp. 222 ◽  
Author(s):  
Marica Amadeo ◽  
Giuseppe Ruggeri ◽  
Claudia Campolo ◽  
Antonella Molinaro ◽  
Valeria Loscrí ◽  
...  

By offering low-latency and context-aware services, fog computing will have a pivotal role in the deployment of Internet of Things (IoT) applications for smart environments. Unlike the conventional remote cloud, for which consolidated architectures and deployment options exist, many design and implementation aspects remain open when considering the latest fog computing paradigm. In this paper, we focus on the problems of dynamically discovering the processing and storage resources distributed among fog nodes and, accordingly, orchestrating them for the provisioning of IoT services for smart environments. In particular, we show how these functionalities can be effectively supported by the revolutionary Named Data Networking (NDN) paradigm. Originally conceived to support named content delivery, NDN can be extended to request and provide named computation services, with NDN nodes acting as both content routers and in-network service executors. To substantiate our analysis, we present an NDN fog computing framework focused on a smart campus scenario, where the execution of IoT services is dynamically orchestrated and performed by NDN nodes in a distributed fashion. A simulation campaign in ndnSIM, the reference network simulator of the NDN research community, is also presented to assess the performance of our proposal against state-of-the-art solutions. Results confirm the superiority of the proposal in terms of service provisioning time, at the expense of a slightly higher volume of traffic exchanged among fog nodes.
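The idea of NDN nodes acting as both content routers and in-network executors can be sketched as follows; the Interest name layout (`/exec/<service>/<argument>`) and the execute-or-forward rule are illustrative assumptions, not the paper's framework:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class FogNode:
    """Toy NDN-style node that treats computations as named content.

    An Interest such as /exec/resize/img42 either triggers local execution
    (if the named service is installed and the node has spare capacity) or
    is forwarded toward the next hop, mirroring NDN name-based forwarding.
    """
    services: dict = field(default_factory=dict)   # service name -> callable
    capacity: int = 1
    next_hop: Optional["FogNode"] = None

    def on_interest(self, name: str):
        _, service, arg = name.strip("/").split("/", 2)
        if service in self.services and self.capacity > 0:
            return self.services[service](arg)        # execute in-network
        if self.next_hop is not None:
            return self.next_hop.on_interest(name)    # forward the Interest
        raise LookupError(f"no executor found for {name}")
```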


2021 ◽  
pp. 308-318
Author(s):  
Hadeel T. Rajab ◽  
Manal F. Younis

The Internet of Things (IoT) contributes to improving quality of life, as it supports many applications, especially healthcare systems. Data generated by IoT devices is sent to Cloud Computing (CC) for processing and storage, despite the latency caused by the distance. Because of the revolution in IoT devices, the volume of data sent to the CC has been increasing, adding growing congestion on the cloud network to the latency problem. Fog Computing (FC) is used to solve these problems because of its proximity to IoT devices, filtering the data that is sent on to the CC. FC is a middle layer located between the IoT devices and the CC layer. Due to the massive data generated by IoT devices at the FC, a Dynamic Weighted Round Robin (DWRR) algorithm was used: a load balancing (LB) algorithm that schedules and distributes data among fog servers by reading the CPU and memory values of these servers in order to improve system performance. The results proved that the DWRR algorithm provides high throughput, reaching 3290 req/sec at 919 users. Much research is concerned with workload distribution using LB techniques without paying much attention to Fault Tolerance (FT), which ensures that the system continues to operate even when a fault occurs. Therefore, we propose a replication FT technique, primary-backup replication based on a dynamic checkpoint interval on the FC. The checkpoint replicates new data from a primary server to a backup server dynamically by monitoring the CPU values of the primary fog server, so that a checkpoint occurs only when the CPU value is larger than 0.2, reducing overhead. The results showed that the execution time of the data filtering process on the FC with a dynamic checkpoint is less than the time spent with a static checkpoint that is independent of the CPU status.
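A minimal sketch of the two mechanisms described above: DWRR weights derived from each server's CPU and memory utilization, and the dynamic checkpoint trigger fired only above the 0.2 CPU threshold stated in the abstract. The weight formula itself is an illustrative assumption, not the paper's:

```python
def dwrr_weights(servers):
    """Dynamic Weighted Round Robin sketch: recompute each fog server's
    integer weight from its current CPU and memory utilization (0.0-1.0),
    so lightly loaded servers receive proportionally more requests."""
    weights = {}
    for name, (cpu, mem) in servers.items():
        free = (1.0 - cpu) + (1.0 - mem)              # combined free capacity
        weights[name] = max(1, round(10 * free / 2))  # integer weight >= 1
    return weights

def should_checkpoint(cpu: float, threshold: float = 0.2) -> bool:
    """Dynamic checkpoint trigger: replicate from primary to backup only
    when the primary's CPU utilization exceeds the threshold, reducing
    replication overhead compared to a fixed-interval (static) checkpoint."""
    return cpu > threshold
```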


2021 ◽  
Author(s):  
Mehbub Alam ◽  
Nurzaman Ahmed ◽  
Rakesh Matam ◽  
Ferdous Ahmed Barbhuiya

Due to multi-hop, long-distance, wireless backbone connectivity, provisioning critical and diverse services faces challenges in achieving low latency and reliability. This paper proposes ioFog, an offline fog architecture for achieving reliability and low latency in a large backbone network. Our solution uses a Markov chain-based task prediction model to serve dynamic service requirements with minimal dependency on the Internet. The proposed architecture employs a central Fog Controller (FC) to (i) provide a global status overview and (ii) predict upcoming tasks of fog nodes for intelligent offloading decisions. The FC also tracks the current status of the existing fog nodes in terms of their processing and storage capabilities; accordingly, it can schedule possible future offline computations and task allocations. ioFog considers the requirements of individual IoT applications and enables improved fog computing decisions. Compared to existing offline IoT solutions, ioFog improves service time significantly and increases the service delivery ratio by up to 23%.
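The Markov chain-based prediction can be sketched as a first-order transition-count model (an assumption about the paper's model, which is not detailed in the abstract): the controller records observed task-to-task transitions and predicts the most likely next task for offloading decisions:

```python
from collections import defaultdict

class TaskPredictor:
    """First-order Markov chain sketch of upcoming-task prediction.

    observe() records each task as it arrives, counting transitions from
    the previous task; predict_next() returns the task with the highest
    transition count out of the most recently observed one.
    """
    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))
        self.last = None

    def observe(self, task: str):
        if self.last is not None:
            self.counts[self.last][task] += 1
        self.last = task

    def predict_next(self):
        nxt = self.counts.get(self.last)
        if not nxt:
            return None                    # no history for this task yet
        return max(nxt, key=nxt.get)       # highest transition count wins
```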


2021 ◽  
Vol 1 (2) ◽  
pp. 60-70
Author(s):  
Hindreen Rashid Abdulqadir ◽  
Subhi R. M. Zeebaree ◽  
Hanan M. Shukur ◽  
Mohammed Mohammed Sadeeq ◽  
Baraa Wasfi Salim ◽  
...  

The exponential growth of Internet of Things (IoT) technology poses various challenges to the classic centralized cloud computing paradigm, including high latency, limited capacity, and network failure. Fog computing brings the cloud closer to IoT devices in order to overcome these problems: fog nodes process and store IoT data locally instead of sending it to the cloud, providing quicker responses and better efficiency in conjunction with the cloud. Fog computing should also be viewed as a safe approach to ensure that IoT delivers reliable and stable resources to multiple IoT customers. This article discusses the state of the art in cloud and fog computing and their convergence with IoT, stressing the advantages and complexities of deployment. It also concentrates on cloud and fog architecture and on new IoT technologies enhanced by utilizing the cloud and fog model. Finally, open topics are addressed, along with potential research recommendations for cloud storage, fog computing and IoT.


2018 ◽  
Vol 2 (1) ◽  
pp. 43
Author(s):  
Suwignyo Suwignyo ◽  
Abdul Rachim ◽  
Arizal Sapitri

Ice is water cooled below 0 °C and used as a complement in drinks. Ice can be found almost everywhere, including at the Wahid Hasyim Sempaja roadside. In a preliminary test, 5 ice cube samples were found to be contaminated by Escherichia coli. The purpose of this study was to determine the relationship between hygiene and sanitation and the presence of Escherichia coli in ice cubes from home industries at the Wahid Hasyim roadside, Samarinda. This research used a quantitative survey method. The population in this study was all of the sellers at the 2nd Wahid Hasyim roadside. The sample size was determined using the Krejcie and Morgan table, yielding 44 samples selected by cluster random sampling. The instruments were questionnaires, observation and laboratory tests. Data analysis was carried out univariately and bivariately (using the Fisher test, p = 0.05). The conclusion of this study is that there is a relation between choosing raw material (p = 0.03) and storing raw material (p = 0.03) and the presence of Escherichia coli. There was no relation between processing raw material into ice cubes and the presence of Escherichia coli (p = 0.15). The advice that can be given is that ice cube sellers should maintain hygiene and sanitation in the selection, processing and storage of ice cubes.

