A Novel Resource Management Framework for Fog Computing by Using Machine Learning Algorithm

Author(s):  
Shanthi Thangam Manukumar ◽  
Vijayalakshmi Muthuswamy

With the proliferation of edge and mobile devices, fast, authenticated network access has become essential. Fog computing is an efficient way to make edge and mobile devices smarter and faster and to improve quality of service (QoS): it supports resource provisioning, lowers response times, and offers a practical answer to mobile network traffic. This chapter proposes a method for fog resource management based on an efficient offloading mechanism. Offloading decisions are made with machine-learning prediction, using the K-nearest neighbors (KNN) algorithm to identify the nearest fog nodes to offload to. The proposed method reduces energy consumption and latency and improves QoS for edge, IoT, and mobile devices.
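The abstract does not give the authors' exact algorithm, but the core idea, rank fog nodes by proximity with KNN and offload to the best of the k nearest, can be sketched as follows. The node fields (`pos`, `load`) and the tie-break on load are assumptions for illustration.

```python
import math

def knn_offload(device_pos, fog_nodes, k=3):
    """Rank fog nodes by Euclidean distance to the device and
    return the least-loaded node among the k nearest (hypothetical
    tie-break; the chapter's actual criterion may differ)."""
    nearest = sorted(fog_nodes, key=lambda n: math.dist(device_pos, n["pos"]))[:k]
    return min(nearest, key=lambda n: n["load"])

fog_nodes = [
    {"id": "f1", "pos": (0.0, 0.0), "load": 0.9},
    {"id": "f2", "pos": (1.0, 1.0), "load": 0.2},
    {"id": "f3", "pos": (5.0, 5.0), "load": 0.1},
]
# f1 and f2 are the two nearest; f2 is less loaded, so it wins.
target = knn_offload((0.5, 0.5), fog_nodes, k=2)
```

In a real deployment the distance metric would be network latency rather than geometric distance, but the selection logic is the same.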

2021 ◽  
Vol 11 (3) ◽  
pp. 7273-7278
Author(s):  
M. Anwer ◽  
M. U. Farooq ◽  
S. M. Khan ◽  
W. Waseemullah

Many researchers have examined the risks that Internet of Things (IoT) devices pose to big companies and smart towns. Because of the high adoption of IoT, its heterogeneous character, inherent mobility, and standardization limitations, smart mechanisms are needed that can automatically detect suspicious activity on IoT devices connected to local networks. As the number of internet-connected IoT devices grows, so does the volume of network traffic, and attack detection through common methods and old data-processing techniques is now obsolete. Detecting attacks in the IoT, and detecting malicious traffic in its early stages, is therefore a very challenging problem. In this paper, a framework is recommended for the detection of malicious network traffic. The framework uses three popular classification-based detection methods, namely Support Vector Machine (SVM), Gradient Boosted Decision Trees (GBDT), and Random Forest (RF), with the RF supervised machine learning algorithm achieving far better accuracy (85.34%). The NSL-KDD dataset was used in the recommended framework, and performance was compared in terms of training time, prediction time, specificity, and accuracy.
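A three-classifier comparison of the kind the paper describes can be sketched with scikit-learn. The synthetic dataset below is a stand-in for NSL-KDD (which requires a separate download), and the hyperparameters are defaults, not the paper's settings.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic benign/malicious flows standing in for NSL-KDD features.
X, y = make_classification(n_samples=400, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

models = {
    "SVM": SVC(),
    "GBDT": GradientBoostingClassifier(random_state=0),
    "RF": RandomForestClassifier(random_state=0),
}
# Fit each model and compare held-out accuracy, as the framework does.
scores = {name: m.fit(X_tr, y_tr).score(X_te, y_te) for name, m in models.items()}
```

On the real NSL-KDD data one would also time `fit` and `predict` and compute specificity from the confusion matrix, per the paper's comparison criteria.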


Churn has a significant impact on mobile network operators and telecommunications service providers. Many studies on churn have been reported, but none can claim a universal tool for predicting it or a complete account of its causes. The purpose of this study is to derive the call-behavior factors of churning customers and to find ways to reduce the churn of target customers who exhibit these behaviors. To this end, the study applies decision trees and machine learning to churn prediction in telecom services. The analysis shows, first, that customers with total day minutes above 316.7 are more likely to churn: the higher the call volume, the higher the chance of churn. Second, among customers with total day minutes above 316.7, those with more than 8.5 customer service calls show a high likelihood of churn, pointing to churn among complaining customers. The overall accuracy is 91.4%: among customers predicted not to churn, 92.87% did not churn, and among customers predicted to churn, 78.4% did.
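The two splits the study reports (total day minutes > 316.7, then customer service calls > 8.5) amount to a two-level decision rule, which can be written out directly. The risk labels below are illustrative, not the study's.

```python
def churn_risk(total_day_minutes, service_calls):
    """Hand-coded version of the two decision-tree splits reported
    in the study. Labels ('low'/'medium'/'high') are hypothetical."""
    if total_day_minutes <= 316.7:
        return "low"            # below the first split: low churn risk
    if service_calls > 8.5:
        return "high"           # heavy caller who also complains often
    return "medium"             # heavy caller, few complaints

# A heavy caller with 9 service calls lands in the highest-risk leaf.
risk = churn_risk(total_day_minutes=400.0, service_calls=9)
```

In practice these thresholds would come from fitting a tree (e.g. scikit-learn's `DecisionTreeClassifier`) rather than being hard-coded; hard-coding them just makes the reported rule inspectable.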


2020 ◽  
Vol 11 (4) ◽  
pp. 17-30
Author(s):  
Shefali Varshney ◽  
Rajinder Sandhu ◽  
P. K. Gupta

Application placement in the fog environment is becoming one of the major challenges because of its distributed, hierarchical, and heterogeneous nature. User expectations and the diverse features of IoT devices further increase the complexity of placing applications in the fog computing environment. Proper placement of applications therefore plays an important role in improving the quality of experience (QoE) of end-users across system services. In this paper, the authors propose a service placement methodology for the fog computing environment. For better selection of application services, the Analytic Hierarchy Process (AHP) is used, which produces results in the form of ranks. The proposed technique is evaluated on a customized testbed using parameters such as CPU cycles, storage, maximum latency, processing speed, and network bandwidth. Experimental results show that the proposed methodology improves the efficiency of the fog network.
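AHP derives criterion weights from a pairwise-comparison matrix and then ranks alternatives by weighted score. A minimal sketch using the common geometric-mean approximation of the priority vector is below; the comparison matrix, node names, and per-criterion scores are invented for illustration, not taken from the paper's testbed.

```python
import math

def ahp_weights(pairwise):
    """Approximate AHP priority weights via the geometric-mean method:
    the weight of each criterion is the normalized geometric mean of
    its row in the pairwise-comparison matrix."""
    n = len(pairwise)
    gm = [math.prod(row) ** (1.0 / n) for row in pairwise]
    total = sum(gm)
    return [g / total for g in gm]

# Hypothetical 3-criterion comparison: CPU speed vs bandwidth vs latency.
pairwise = [
    [1,   3,   5],
    [1/3, 1,   3],
    [1/5, 1/3, 1],
]
w = ahp_weights(pairwise)

# Normalized per-criterion scores for two candidate fog nodes (made up).
nodes = {"fogA": [0.9, 0.5, 0.4], "fogB": [0.3, 0.8, 0.9]}
ranking = sorted(nodes, key=lambda n: -sum(wi * si for wi, si in zip(w, nodes[n])))
```

Because CPU speed dominates the comparison matrix, the CPU-strong node ranks first; changing the matrix reorders the placement, which is exactly the knob AHP gives the methodology.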


2020 ◽  
Vol 115 (3) ◽  
pp. 1839-1867
Author(s):  
Piotr Nawrocki ◽  
Bartlomiej Sniezynski

In this paper we present an original adaptive task scheduling system, which optimizes the energy consumption of mobile devices using machine learning mechanisms and context information. The system learns how to allocate resources appropriately: how to schedule services/tasks optimally between the device and the cloud, which is especially important in mobile systems. Decisions are made taking the context into account (e.g. network connection type, location, potential time and cost of executing the application or service). In this study, a supervised learning agent architecture and service selection algorithm are proposed to solve this problem. Adaptation is performed online, on a mobile device. Information about the context, task description, the decision made and its results such as power consumption are stored and constitute training data for a supervised learning algorithm, which updates the knowledge used to determine the optimal location for the execution of a given type of task. To verify the solution proposed, appropriate software has been developed and a series of experiments have been conducted. Results show that as a result of the experience gathered and the learning process performed, the decision module has become more efficient in assigning the task to either the mobile device or cloud resources.
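The learning loop the abstract describes, log (context, decision, energy cost) tuples, then pick the execution location with the lower predicted cost, can be sketched with a deliberately simple predictor (a per-context average). The class name, context encoding, and cost model are all assumptions; the paper's actual agent uses a supervised learning algorithm rather than this lookup.

```python
class OffloadAgent:
    """Minimal sketch of a context-aware device-vs-cloud scheduler:
    learn from logged (context, location, energy) samples and choose
    the location with the lower predicted energy cost."""

    def __init__(self):
        self.history = []  # list of (context, location, energy) tuples

    def record(self, context, location, energy):
        """Store one observed execution outcome as training data."""
        self.history.append((context, location, energy))

    def predict(self, context, location):
        """Predicted energy cost: mean of matching past samples."""
        costs = [e for c, l, e in self.history
                 if c == context and l == location]
        return sum(costs) / len(costs) if costs else float("inf")

    def decide(self, context):
        """Pick the cheaper execution location for this context."""
        return min(("device", "cloud"),
                   key=lambda loc: self.predict(context, loc))

agent = OffloadAgent()
agent.record(("wifi",), "cloud", 1.0)   # cheap to offload on Wi-Fi
agent.record(("wifi",), "device", 3.0)
agent.record(("lte",), "cloud", 5.0)    # expensive to offload on LTE
agent.record(("lte",), "device", 2.0)
```

After these four observations the agent offloads on Wi-Fi and computes locally on LTE, mirroring the adaptive behavior reported in the experiments.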


Information ◽  
2020 ◽  
Vol 11 (5) ◽  
pp. 279 ◽  
Author(s):  
Bambang Susilo ◽  
Riri Fitri Sari

The internet has become an inseparable part of human life, and the number of devices connected to it is increasing sharply. In particular, Internet of Things (IoT) devices have become part of everyday life. However, the challenges are growing, and their solutions are not well defined; more and more security challenges concerning the IoT are arising. Many methods have been developed to secure IoT networks, but many more can still be developed. One proposed way to improve IoT security is to use machine learning. This research discusses several machine-learning and deep-learning strategies, as well as standard datasets, for improving the security performance of the IoT. We developed an algorithm for detecting denial-of-service (DoS) attacks using a deep-learning algorithm. This research used the Python programming language with packages such as scikit-learn, TensorFlow, and Seaborn. We found that a deep-learning model can increase accuracy so that the mitigation of attacks on an IoT network is as effective as possible.
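A neural DoS detector of the general kind described can be sketched compactly; the version below uses scikit-learn's `MLPClassifier` instead of TensorFlow to stay self-contained, and trains on synthetic flow features (packet rate, byte rate) that stand in for a real attack dataset. Feature ranges and the architecture are illustrative assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
# Synthetic flow features [packets/sec, bytes/sec]: DoS flows are bursty.
normal = rng.normal([50, 500], [10, 100], size=(200, 2))
dos = rng.normal([500, 5000], [50, 500], size=(200, 2))
X = np.vstack([normal, dos])
y = np.array([0] * 200 + [1] * 200)  # 0 = benign, 1 = DoS

# Standardize features so the network trains reliably.
X = (X - X.mean(axis=0)) / X.std(axis=0)

clf = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=500, random_state=0)
clf.fit(X, y)
acc = clf.score(X, y)
```

With well-separated traffic profiles like these the small network saturates quickly; on real IoT traces one would hold out a test split and tune the architecture.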


Author(s):  
Md Mamunur Rashid ◽  
Joarder Kamruzzaman ◽  
Mohammad Mehedi Hassan ◽  
Tasadduq Imam ◽  
Steven Gordon

In recent years, the widespread deployment of the Internet of Things (IoT) applications has contributed to the development of smart cities. A smart city utilizes IoT-enabled technologies, communications and applications to maximize operational efficiency and enhance both the service providers’ quality of services and people’s wellbeing and quality of life. With the growth of smart city networks, however, comes the increased risk of cybersecurity threats and attacks. IoT devices within a smart city network are connected to sensors linked to large cloud servers and are exposed to malicious attacks and threats. Thus, it is important to devise approaches to prevent such attacks and protect IoT devices from failure. In this paper, we explore an attack and anomaly detection technique based on machine learning algorithms (LR, SVM, DT, RF, ANN and KNN) to defend against and mitigate IoT cybersecurity threats in a smart city. Contrary to existing works that have focused on single classifiers, we also explore ensemble methods such as bagging, boosting and stacking to enhance the performance of the detection system. Additionally, we consider an integration of feature selection, cross-validation and multi-class classification for the discussed domain, which has not been well considered in the existing literature. Experimental results with the recent attack dataset demonstrate that the proposed technique can effectively identify cyberattacks and the stacking ensemble model outperforms comparable models in terms of accuracy, precision, recall and F1-Score, implying the promise of stacking in this domain.
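The stacking ensemble the paper favors, base classifiers whose predictions feed a meta-learner, maps directly onto scikit-learn's `StackingClassifier`. The sketch below uses a synthetic multi-class dataset in place of the paper's attack dataset, and only two of the paper's six base learners, so it illustrates the mechanism rather than reproducing the results.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic 3-class data standing in for multi-class attack labels.
X, y = make_classification(n_samples=500, n_features=12, n_classes=3,
                           n_informative=6, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)

# Base learners (DT, RF) stacked under a logistic-regression meta-learner;
# cv=5 gives the internal cross-validation the paper also employs.
stack = StackingClassifier(
    estimators=[
        ("dt", DecisionTreeClassifier(random_state=1)),
        ("rf", RandomForestClassifier(n_estimators=50, random_state=1)),
    ],
    final_estimator=LogisticRegression(max_iter=500),
    cv=5,
)
stack.fit(X_tr, y_tr)
acc = stack.score(X_te, y_te)
```

Swapping in the remaining base learners (LR, SVM, ANN, KNN) is a matter of extending the `estimators` list; precision, recall, and F1 come from `sklearn.metrics.classification_report`.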


Sensors ◽  
2020 ◽  
Vol 20 (23) ◽  
pp. 6942
Author(s):  
Motahareh Mobasheri ◽  
Yangwoo Kim ◽  
Woongsup Kim

The term big data has emerged in network concepts since the Internet of Things (IoT) made data generation faster through various smart environments. In contrast, bandwidth improvement has been slower; therefore, it has become a bottleneck, creating the need to solve bandwidth constraints. Over time, due to smart environment extensions and the increasing number of IoT devices, the number of fog nodes has increased. In this study, we introduce fog fragment computing in contrast to conventional fog computing. We address bandwidth management using fog nodes and their cooperation to overcome the extra required bandwidth for IoT devices with emergencies and bandwidth limitations. We formulate the decision-making problem of the fog nodes using a reinforcement learning approach and develop a Q-learning algorithm to achieve efficient decisions by forcing the fog nodes to help each other under special conditions. To the best of our knowledge, there has been no research with this objective thus far. Therefore, we compare this study with another scenario that considers a single fog node to show that our new extended method performs considerably better.
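The decision the study formulates, should a fog node handle an emergency flow alone or borrow bandwidth from a neighbor, can be sketched as a tiny tabular Q-learning problem. The states, actions, and reward values below are a hypothetical toy model, not the paper's formulation; each episode is a single step, so no bootstrapped next-state term is needed.

```python
import random

def train_q(episodes=2000, alpha=0.5, eps=0.2, seed=0):
    """Toy tabular Q-learning for a fog node deciding whether to serve
    a flow alone or cooperate with a neighboring fog node. Rewards are
    a made-up model where cooperation pays off only under emergency load."""
    rng = random.Random(seed)
    actions = ("alone", "cooperate")
    q = {(s, a): 0.0 for s in ("normal", "emergency") for a in actions}
    reward = {("normal", "alone"): 1.0, ("normal", "cooperate"): 0.2,
              ("emergency", "alone"): -1.0, ("emergency", "cooperate"): 1.5}
    for _ in range(episodes):
        s = rng.choice(("normal", "emergency"))
        # Epsilon-greedy action selection.
        if rng.random() < eps:
            a = rng.choice(actions)
        else:
            a = max(actions, key=lambda x: q[(s, x)])
        # One-step episode: Q moves toward the immediate reward.
        q[(s, a)] += alpha * (reward[(s, a)] - q[(s, a)])
    return q

q = train_q()
best_emergency = max(("alone", "cooperate"), key=lambda a: q[("emergency", a)])
best_normal = max(("alone", "cooperate"), key=lambda a: q[("normal", a)])
```

After training, the greedy policy cooperates under emergencies and serves alone otherwise; the study's full problem adds real bandwidth budgets and multi-node coordination on top of this learning loop.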

