A Hybrid Machine Learning Method in detecting anomalies in IoT at the fog layer

Author(s):  
Believe Ayodele ◽  
Michaela Tromans Jones

Abstract With the rapid growth and utilization of IoT devices around the world, attacks on these devices are also increasing, posing a security and privacy issue for industry providers and end-users alike. A common way to detect anomalous behaviour is to analyze the network traffic and categorize the outcome into benign and malignant traffic. With network traffic volumes and attack sophistication increasing daily, there is a need for a state-of-the-art pattern recognition technique that can handle this ever-increasing and ever-changing traffic and can also improve over time as attacks become more sophisticated. This research paper proposes a hybrid model for anomaly detection at the IoT fog layer, using an ANN as a base model and several binary classifiers (serving as meta-classifiers) connected in series. The proposed model was tested and evaluated on a dataset of ‘x’ observations, demonstrating that such a model is both highly effective and efficient in detecting IoT network traffic anomalies.
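
As an illustration of the cascaded design described above, here is a minimal Python sketch, assuming synthetic placeholder data and arbitrarily chosen classifiers (an MLP base model followed by logistic regression and decision tree meta-stages); the paper's actual architecture, features, and dataset are not specified here.

```python
# Minimal sketch: an ANN base model whose output feeds a chain of binary
# meta-classifiers. Features and labels are synthetic placeholders.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))           # stand-in for flow/packet features
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # stand-in labels: 0 = benign, 1 = malignant

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Base model: a small ANN that scores each observation.
base = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=300, random_state=0)
base.fit(X_tr, y_tr)

# Meta-classifiers in series: each stage sees the original features plus the
# probabilities produced by the previous stages.
stages = [LogisticRegression(max_iter=1000), DecisionTreeClassifier(max_depth=5)]
train_meta = np.hstack([X_tr, base.predict_proba(X_tr)[:, [1]]])
test_meta = np.hstack([X_te, base.predict_proba(X_te)[:, [1]]])
for clf in stages:
    clf.fit(train_meta, y_tr)
    train_meta = np.hstack([train_meta, clf.predict_proba(train_meta)[:, [1]]])
    test_meta = np.hstack([test_meta, clf.predict_proba(test_meta)[:, [1]]])

# Final decision: threshold the probability appended by the last stage.
y_pred = (test_meta[:, -1] > 0.5).astype(int)
print("cascade accuracy:", (y_pred == y_te).mean())
```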

2020 ◽  
Author(s):  
Rodrigo Moreira ◽  
Larissa Rodrigues ◽  
Pedro Rosa ◽  
Flávio Silva

Network traffic classification improves network management and allows network services to be offered according to the kind of application. Future network architectures, particularly mobile networks, foresee intelligent mechanisms in their architectural frameworks to deliver application-aware network requirements. The capabilities of convolutional neural networks, widely exploited in several contexts, can also be used for network traffic classification. Thus, it is necessary to develop methods that transform packet content into a suitable input for CNN technologies. Hence, we implemented and evaluated Packet Vision, a method capable of building images from raw packet data, considering both header and payload. Our approach surpasses those found in the state-of-the-art by delivering security and privacy through the transformation of raw packet data into images. We built a dataset with four traffic classes and evaluated the performance of three CNN architectures: AlexNet, ResNet-18, and SqueezeNet. Experiments showcase the applicability and suitability of Packet Vision combined with CNNs as a promising approach that delivers outstanding performance in classifying network traffic.
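
A rough sketch of the packet-to-image idea, under the assumption that raw packet bytes (header plus payload) are padded or truncated to a fixed byte budget and reshaped into square grayscale images; the exact Packet Vision transformation may differ.

```python
# Sketch: map one packet's raw bytes to a fixed-size grayscale image a CNN
# can consume. The dummy packet below stands in for bytes read from a pcap.
import numpy as np

def packet_to_image(raw_bytes: bytes, side: int = 28) -> np.ndarray:
    """Pad or truncate raw packet bytes to side*side values, reshape to 2D."""
    buf = np.frombuffer(raw_bytes, dtype=np.uint8)
    flat = np.zeros(side * side, dtype=np.uint8)
    flat[: min(len(buf), side * side)] = buf[: side * side]
    return flat.reshape(side, side)

dummy_packet = bytes(range(256)) * 3   # placeholder for real capture data
img = packet_to_image(dummy_packet)
print(img.shape, img.dtype)            # (28, 28) uint8
```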


Sensors ◽  
2021 ◽  
Vol 21 (2) ◽  
pp. 359
Author(s):  
Houshyar Honar Pajooh ◽  
Mohammad Rashid ◽  
Fakhrul Alam ◽  
Serge Demidenko

Providing security and privacy to Internet of Things (IoT) networks while satisfying minimum performance requirements is an open research challenge. Blockchain technology, as a distributed and decentralized ledger, is a potential solution to tackle the limitations of current peer-to-peer IoT networks. This paper presents the development of an integrated IoT system implementing the permissioned blockchain Hyperledger Fabric (HLF) to secure edge computing devices by employing a local authentication process. In addition, the proposed model provides traceability for the data generated by the IoT devices. The presented solution also addresses the scalability challenges of IoT systems and the processing power and storage issues of IoT edge devices in the blockchain network. Smart-contract technology leverages a set of built-in queries to define the rules and conditions. The paper validates the performance of the proposed model with a practical implementation by measuring performance metrics such as transaction throughput and latency, resource consumption, and network use. The results show that the proposed platform with the HLF implementation is promising for securing resource-constrained IoT devices and is scalable for deployment in various IoT scenarios.
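
The traceability aspect can be illustrated conceptually with a hash-chained record store; note that this is only a toy stand-in in Python, not Hyperledger Fabric chaincode or its APIs.

```python
# Conceptual sketch (not HLF code): each IoT reading is hash-chained so any
# later tampering with a stored record is detectable.
import hashlib, json, time

class MiniLedger:
    def __init__(self):
        self.blocks = []  # append-only list of records

    def append(self, device_id: str, payload: dict) -> dict:
        prev_hash = self.blocks[-1]["hash"] if self.blocks else "0" * 64
        record = {"device": device_id, "payload": payload,
                  "ts": time.time(), "prev": prev_hash}
        record["hash"] = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()).hexdigest()
        self.blocks.append(record)
        return record

    def verify(self) -> bool:
        prev = "0" * 64
        for r in self.blocks:
            body = {k: v for k, v in r.items() if k != "hash"}
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if r["prev"] != prev or r["hash"] != expected:
                return False
            prev = r["hash"]
        return True

ledger = MiniLedger()
ledger.append("sensor-01", {"temp_c": 21.7})
ledger.append("sensor-02", {"temp_c": 19.4})
print("chain intact:", ledger.verify())
```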


Electronics ◽  
2021 ◽  
Vol 11 (1) ◽  
pp. 81
Author(s):  
Jorge Coelho ◽  
Luís Nogueira

Internet of Things (IoT) devices play a crucial role in the design of state-of-the-art infrastructures, with an increasing demand to support more complex services and applications. However, IoT devices are known for having limited computational capacities. Traditional approaches have offloaded applications to the cloud to ease the burden on end-user devices, at the expense of greater latency and increased network traffic. Our goal is to optimize the use of IoT devices, particularly those being underutilized. In this paper, we propose a pragmatic solution, built upon the Erlang programming language, that allows a group of IoT devices to collectively execute services, using their spare resources with minimal interference and achieving a level of performance that otherwise would not be met by individual execution.
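
The prototype in the paper is written in Erlang; purely as an illustration (and in Python, like the other sketches here), the following toy scheduler captures the idea of splitting a service into tasks and assigning each task to the device with the most spare capacity.

```python
# Toy scheduler: greedily place the largest tasks on the devices whose
# relative load is currently lowest, so underutilized devices contribute
# without being overloaded. Device names and costs are hypothetical.
import heapq

def schedule(tasks, devices):
    """tasks: list of (task_id, cost); devices: dict name -> spare capacity."""
    heap = [(0.0, name) for name in devices]   # (relative load, device)
    heapq.heapify(heap)
    assignment = {}
    for task_id, cost in sorted(tasks, key=lambda t: -t[1]):  # big tasks first
        load, name = heapq.heappop(heap)
        assignment[task_id] = name
        heapq.heappush(heap, (load + cost / devices[name], name))
    return assignment

tasks = [("t1", 4), ("t2", 2), ("t3", 1), ("t4", 3)]
devices = {"lamp": 1.0, "thermostat": 2.0, "camera": 4.0}
print(schedule(tasks, devices))
```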


2020 ◽  
Author(s):  
Faisal Hussain ◽  
Syed Ghazanfar Abbas ◽  
Muhammad Husnain ◽  
Ubaid U. Fayyaz ◽  
Farrukh Shahzad ◽  
...  

Abstract Network attacks are increasing in both frequency and intensity with the rapid growth of Internet of Things (IoT) devices. Recently, denial of service (DoS) and distributed denial of service (DDoS) attacks have been reported as the most frequent attacks in IoT networks. Traditional security solutions such as firewalls and intrusion detection systems are unable to detect complex DoS and DDoS attacks, since most of them filter normal and attack traffic based on static, predefined rules. However, these solutions can become reliable and effective when integrated with artificial intelligence (AI) based techniques. During the last few years, deep learning models, especially convolutional neural networks (CNNs), have achieved high significance due to their outstanding performance in the image processing field. The potential of these CNN models can be harnessed to efficiently detect complex DoS and DDoS attacks by converting the network traffic dataset into images. Therefore, in this work, we propose a methodology to convert network traffic data into image form and train a state-of-the-art CNN model, i.e., ResNet, on the converted data. The proposed methodology accomplished 99.99% accuracy in detecting DoS and DDoS attacks in the case of binary classification. Furthermore, it achieved 87% average precision in recognizing eleven types of DoS and DDoS attack patterns, which is 9% higher than the state-of-the-art.
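
A hedged sketch of the traffic-to-image plus ResNet pipeline, assuming each flow's feature vector is padded to a 32x32 single-channel image and a torchvision ResNet-18 is adapted to that input; the paper's preprocessing and hyperparameters may differ.

```python
# Sketch: pad flow feature vectors to 32*32 values, reshape into 1-channel
# "images", and feed them to a ResNet-18 with adapted input and output layers.
import torch
import torch.nn as nn
from torchvision.models import resnet18

def flows_to_images(features: torch.Tensor, side: int = 32) -> torch.Tensor:
    """features: (N, F) float tensor -> (N, 1, side, side) images."""
    n, f = features.shape
    padded = torch.zeros(n, side * side)
    padded[:, : min(f, side * side)] = features[:, : side * side]
    return padded.view(n, 1, side, side)

num_classes = 12  # e.g. benign plus eleven DoS/DDoS attack types (assumed)
model = resnet18(weights=None)  # recent torchvision API, untrained weights
model.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)
model.fc = nn.Linear(model.fc.in_features, num_classes)

# Dummy batch of 16 flows with 80 features each (placeholder for real data).
x = flows_to_images(torch.randn(16, 80))
logits = model(x)
print(logits.shape)  # torch.Size([16, 12])
```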


2019 ◽  
Vol 10 (6) ◽  
pp. 1382-1394
Author(s):  
R. Vijayalakshmi ◽  
V. K. Soma Sekhar Srinivas ◽  
E. Manjoolatha ◽  
G. Rajeswari ◽  
M. Sundaramurthy

2021 ◽  
pp. 1-16
Author(s):  
Ibtissem Gasmi ◽  
Mohamed Walid Azizi ◽  
Hassina Seridi-Bouchelaghem ◽  
Nabiha Azizi ◽  
Samir Brahim Belhaouari

A Context-Aware Recommender System (CARS) suggests more relevant services by adapting them to the user’s specific context situation. Nevertheless, using many contextual factors can increase data sparsity, while too few context parameters fail to introduce contextual effects into recommendations. Moreover, several CARSs are based on similarity algorithms, such as cosine and Pearson correlation coefficients, which are not very effective on sparse datasets. This paper presents a context-aware model that integrates contextual factors into the prediction process when there are insufficient co-rated items. The proposed algorithm uses Latent Dirichlet Allocation (LDA) to learn the latent interests of users from the textual descriptions of items. It then integrates both the explicit contextual factors and their degree of importance into the prediction process by introducing a weighting function, whose weights are learned and optimized with the particle swarm optimization (PSO) algorithm. The results on the MovieLens 1M dataset show that the proposed model can achieve an F-measure of 45.51% with a precision of 68.64%. Furthermore, the improvements in MAE and RMSE reach 41.63% and 39.69%, respectively, compared with state-of-the-art techniques.
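
Two pieces of the described pipeline can be sketched as follows, with toy item descriptions and hand-picked weights standing in for what the PSO step would actually optimize; the full recommender and its evaluation are not reproduced.

```python
# Sketch: (1) LDA topics learned from item descriptions, (2) a weighted
# prediction that blends a base rating estimate with contextual factors.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

item_descriptions = [
    "space adventure with aliens and starships",
    "romantic comedy set in paris",
    "gritty crime thriller in the city",
]
counts = CountVectorizer().fit_transform(item_descriptions)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
item_topics = lda.fit_transform(counts)    # latent interest profile per item

def predict(base_rating, context_factors, w):
    """Blend a base estimate with weighted explicit contextual factors."""
    return base_rating + float(np.dot(w, context_factors))

# Example: context = (weekend, at_home); w is what PSO would tune.
w = np.array([0.3, -0.1])
print(item_topics.shape, predict(3.8, np.array([1, 0]), w))
```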


2021 ◽  
Vol 11 (8) ◽  
pp. 3636
Author(s):  
Faria Zarin Subah ◽  
Kaushik Deb ◽  
Pranab Kumar Dhar ◽  
Takeshi Koshiba

Autism spectrum disorder (ASD) is a complex and degenerative neuro-developmental disorder. Most existing methods utilize functional magnetic resonance imaging (fMRI) to detect ASD with a very limited dataset, which provides high accuracy but results in poor generalization. To overcome this limitation and to enhance the performance of the automated autism diagnosis model, in this paper we propose an ASD detection model using functional connectivity features of resting-state fMRI data. Our proposed model utilizes two commonly used brain atlases, Craddock 200 (CC200) and Automated Anatomical Labelling (AAL), and two rarely used atlases, Bootstrap Analysis of Stable Clusters (BASC) and Power. A deep neural network (DNN) classifier is used to perform the classification task. Simulation results indicate that the proposed model outperforms state-of-the-art methods in terms of accuracy. The mean accuracy of the proposed model was 88%, whereas the mean accuracy of the state-of-the-art methods ranged from 67% to 85%. The sensitivity, F1-score, and area under the receiver operating characteristic curve (AUC) of the proposed model were 90%, 87%, and 96%, respectively. Comparative analysis of various scoring strategies shows the superiority of the BASC atlas over the other aforementioned atlases in classifying ASD versus control.
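
A minimal sketch of the connectivity-feature idea, using random placeholder time series and a small fully connected classifier; the paper's preprocessing, atlas extraction, and DNN architecture are not reproduced here.

```python
# Sketch: Pearson correlations between ROI time series are vectorized
# (upper triangle) and fed to a small fully connected network.
import numpy as np
from sklearn.neural_network import MLPClassifier

def connectivity_features(timeseries: np.ndarray) -> np.ndarray:
    """timeseries: (timepoints, n_rois) -> upper-triangle correlation vector."""
    corr = np.corrcoef(timeseries.T)
    iu = np.triu_indices_from(corr, k=1)
    return corr[iu]

rng = np.random.default_rng(0)
n_subjects, n_rois, n_timepoints = 60, 50, 120   # arbitrary toy sizes
X = np.stack([connectivity_features(rng.normal(size=(n_timepoints, n_rois)))
              for _ in range(n_subjects)])
y = rng.integers(0, 2, size=n_subjects)          # 0 = control, 1 = ASD (dummy)

dnn = MLPClassifier(hidden_layer_sizes=(256, 64), max_iter=500, random_state=0)
dnn.fit(X, y)
print("training accuracy:", dnn.score(X, y))
```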


2021 ◽  
Vol 7 (2) ◽  
pp. 245-246
Author(s):  
Weizhi Meng ◽  
Daniel Xiapu Luo ◽  
Chunhua Su ◽  
Debiao He ◽  
Marios Anagnostopoulos ◽  
...  

Author(s):  
Masoumeh Zareapoor ◽  
Jie Yang

Image-to-image translation aims to learn a mapping from a source domain to a target domain. However, three main challenges are associated with this problem and need to be dealt with: the lack of paired datasets, multimodality, and diversity. Convolutional neural networks (CNNs), despite their great performance in many computer vision tasks, fail to detect the hierarchy of spatial relationships between different parts of an object and thus do not form the ideal representative model we look for. This article presents a new variation of generative models that aims to remedy this problem. We use a trainable transformer, which explicitly allows the spatial manipulation of data during training. This differentiable module can be augmented into the convolutional layers of the generative model, and it allows the generated distributions to be freely altered for image-to-image translation. To reap the benefits of the proposed module in the generative model, our architecture incorporates a new loss function to facilitate effective end-to-end generative learning for image-to-image translation. The proposed model is evaluated through comprehensive experiments on image synthesis and image-to-image translation, along with comparisons with several state-of-the-art algorithms.
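
A hedged sketch of a differentiable spatial-transformer block that could sit between convolutional layers of a generator; the paper's actual generator, loss function, and training setup are not reproduced here.

```python
# Sketch: a localization head predicts an affine transform from the feature
# map, which is then applied with affine_grid / grid_sample (standard PyTorch
# spatial-transformer pattern).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialTransformer(nn.Module):
    """Predicts an affine transform from the feature map and applies it."""
    def __init__(self, channels: int):
        super().__init__()
        self.loc = nn.Sequential(
            nn.Conv2d(channels, 16, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(16, 6),
        )
        # Initialize to the identity transform so training starts stable.
        self.loc[-1].weight.data.zero_()
        self.loc[-1].bias.data.copy_(
            torch.tensor([1, 0, 0, 0, 1, 0], dtype=torch.float))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        theta = self.loc(x).view(-1, 2, 3)
        grid = F.affine_grid(theta, x.size(), align_corners=False)
        return F.grid_sample(x, grid, align_corners=False)

# Example: warp a batch of feature maps inside a (hypothetical) generator.
block = SpatialTransformer(channels=64)
feats = torch.randn(4, 64, 32, 32)
print(block(feats).shape)  # torch.Size([4, 64, 32, 32])
```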

