On the Performance of Machine Learning Models for Anomaly-Based Intelligent Intrusion Detection Systems for the Internet of Things

Author(s): Ghada Abdelmoumin, Danda B. Rawat, Abdul Rahman

2020, Vol 2020, pp. 1-9
Author(s): Poria Pirozmand, Mohsen Angoraj Ghafary, Safieh Siadat, Jiankang Ren

The Internet of Things is an emerging technology that integrates the Internet with physical smart objects. It is already used in many areas of human life, including education, agriculture, medicine, military and industrial processes, and trade. Connecting real-world objects to the Internet, however, can expose many day-to-day activities to security threats. Intrusion detection systems (IDS) are one of the security mechanisms available for this technology, and their value depends on detecting intrusions early and with high accuracy. In this research, game theory is used to improve the performance of intrusion detection systems. In the proposed method, the interaction between the attacker's infiltration strategy and the behavior of the intrusion detection system is modeled and analyzed as a two-player, non-cooperative dynamic game, and Nash equilibrium solutions are computed for the resulting subgames. In simulations performed with MATLAB, various parameters were examined using the definitions of game theory and Nash equilibrium to identify the settings that yielded the most accurate detection. The simulation results show that intrusion detection systems deployed in a cloud-fog-based Internet of Things can be highly effective at identifying attacks with minimal error in this network.
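
The paper's payoff structure is not given in the abstract, but the core idea of solving an attacker-versus-IDS game for a Nash equilibrium can be illustrated with a minimal sketch. The 2x2 static "inspection game" below, written in Python, uses illustrative payoff values and action names that are assumptions for this example rather than figures from the paper; the paper itself analyzes a dynamic game over subgames, so this only shows the equilibrium computation for a single static stage.

# Mixed-strategy Nash equilibrium of a 2x2 attacker-vs-IDS game.
# Payoff values are illustrative assumptions, not values from the paper.
import numpy as np

# Rows: attacker actions (attack, wait); columns: IDS actions (monitor, idle).
A = np.array([[-5.0, 3.0],    # attacker payoffs
              [ 0.0, 0.0]])
B = np.array([[ 3.0, -4.0],   # IDS payoffs (detection gain minus monitoring cost)
              [-1.0,  0.0]])

def mixed_equilibrium_2x2(A, B):
    """Indifference conditions for a 2x2 bimatrix game with no pure equilibrium.

    Returns (p, q): p = Pr[attacker plays row 0], q = Pr[IDS plays column 0].
    """
    # The attacker randomizes so the IDS is indifferent between its two columns.
    p = (B[1, 1] - B[1, 0]) / (B[0, 0] - B[0, 1] + B[1, 1] - B[1, 0])
    # The IDS randomizes so the attacker is indifferent between its two rows.
    q = (A[1, 1] - A[0, 1]) / (A[0, 0] - A[1, 0] + A[1, 1] - A[0, 1])
    return p, q

p, q = mixed_equilibrium_2x2(A, B)
print(f"Pr[attack]  = {p:.3f}")   # 0.125 with these payoffs
print(f"Pr[monitor] = {q:.3f}")   # 0.375 with these payoffs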


Cybersecurity, 2021, Vol 4 (1)
Author(s): Ansam Khraisat, Ammar Alazab

Abstract: The Internet of Things (IoT) has been evolving rapidly, extending its impact from everyday life to large industrial systems. Unfortunately, this has attracted the attention of cybercriminals, who have made the IoT a target of malicious activity and opened the door to attacks on end nodes. To this end, numerous IoT intrusion detection systems (IDS) have been proposed in the literature to tackle attacks on the IoT ecosystem; these can be broadly classified by detection technique, validation strategy, and deployment strategy. This survey presents a comprehensive review of contemporary IoT IDS and an overview of the techniques, deployment strategies, validation strategies, and datasets commonly used to build them. We also review how existing IoT IDS detect intrusive attacks and secure communications on the IoT. The paper further presents a classification of IoT attacks and discusses future research challenges in countering them to make the IoT more secure. This work helps IoT security researchers by uniting, contrasting, and compiling scattered research efforts. Consequently, we provide a unique IoT IDS taxonomy that sheds light on IoT IDS techniques, their advantages and disadvantages, the IoT attacks that exploit IoT communication systems, and the corresponding advanced IDS and detection capabilities needed to detect such attacks.
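
As a rough illustration of the three classification dimensions the survey mentions (detection technique, validation strategy, deployment strategy), the sketch below encodes them as a plain Python data structure. The specific category labels are common ones in the IDS literature and are assumptions here, not a reproduction of the paper's own taxonomy.

# Illustrative IoT IDS classification dimensions as a nested data structure.
# Category labels are assumptions drawn from common IDS terminology.
IOT_IDS_TAXONOMY = {
    "detection_technique": ["signature-based", "anomaly-based", "hybrid"],
    "validation_strategy": ["public dataset", "testbed/simulation", "theoretical analysis"],
    "deployment_strategy": ["centralized", "distributed", "hybrid"],
}

def classify(ids_description: dict) -> dict:
    """Map a described IDS onto the taxonomy, flagging unknown categories."""
    return {
        dim: (ids_description.get(dim)
              if ids_description.get(dim) in options else "unclassified")
        for dim, options in IOT_IDS_TAXONOMY.items()
    }

print(classify({"detection_technique": "anomaly-based",
                "validation_strategy": "public dataset",
                "deployment_strategy": "distributed"}))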


2020, Vol 12 (16), pp. 6434
Author(s): Corey Dunn, Nour Moustafa, Benjamin Turnbull

With the increasing popularity of Internet of Things (IoT) platforms, the cyber security of these platforms is a highly active area of research. One key technology underpinning smart IoT systems is machine learning, which classifies and predicts events from large-scale data in IoT networks. Machine learning is susceptible to cyber attacks, particularly data poisoning attacks that inject false data while machine learning models are being trained, degrading their performance. Developing trustworthy machine learning models that remain resilient and sustainable under data poisoning attacks in IoT networks is an ongoing research challenge. We studied the effects of data poisoning attacks on machine learning models, including the gradient boosting machine, random forest, naive Bayes, and feed-forward deep learning, to determine the extent to which the models can be trusted and considered reliable in real-world IoT settings. In the training phase, a label modification function is developed to manipulate legitimate input classes. The function is applied at data poisoning rates of 5%, 10%, 20%, and 30%, allowing the poisoned models to be compared and their performance degradation to be quantified. The machine learning models were evaluated using the ToN_IoT and UNSW NB-15 datasets, as they include a wide variety of recent legitimate traffic and attack vectors. The experimental results revealed that the models' performance, in terms of accuracy and detection rate, degrades when the number of normal training observations is not significantly larger than the amount of poisoned data. At data poisoning rates of 30% or greater, machine learning performance degrades significantly.
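
The label modification function described above can be approximated with a short experiment. The sketch below, assuming scikit-learn and NumPy are available, flips a fixed fraction of training labels and retrains a random forest at each poisoning rate; the synthetic dataset is a stand-in for ToN_IoT and UNSW NB-15, and the flipping logic is illustrative rather than the authors' exact implementation.

# Label-flipping data poisoning sketch (illustrative; synthetic data stands in
# for the ToN_IoT / UNSW NB-15 datasets used in the paper).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

def poison_labels(y, rate):
    """Flip the class label of a `rate` fraction of training samples."""
    y = y.copy()
    n_poison = int(rate * len(y))
    idx = rng.choice(len(y), size=n_poison, replace=False)
    y[idx] = 1 - y[idx]          # binary labels: 0 = normal, 1 = attack
    return y

X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for rate in (0.0, 0.05, 0.10, 0.20, 0.30):
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(X_tr, poison_labels(y_tr, rate))
    acc = accuracy_score(y_te, model.predict(X_te))
    print(f"poisoning rate {rate:>4.0%}: test accuracy {acc:.3f}")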


Sensors, 2020, Vol 20 (9), pp. 2533
Author(s): Massimo Merenda, Carlo Porcaro, Demetrio Iero

In a few years, the world will be populated by billions of connected devices placed in our homes, cities, vehicles, and industries. Devices with limited resources will interact with the surrounding environment and with users. Many of these devices will rely on machine learning models to decode the meaning and behavior behind sensor data, make accurate predictions, and take decisions. The bottleneck will be the sheer number of connected things, which could congest the network; hence the need to bring intelligence to the end devices themselves using machine learning algorithms. Deploying machine learning on such edge devices reduces network congestion by allowing computations to be performed close to the data sources. The aim of this work is to review the main techniques that enable machine learning models to execute on low-performance hardware within the Internet of Things paradigm, paving the way to the Internet of Conscious Things. We present a detailed review of the models, architectures, and requirements of solutions that implement edge machine learning on Internet of Things devices, with the main goals of defining the state of the art and envisioning future development requirements. Furthermore, we provide an example of an edge machine learning implementation on a microcontroller, commonly regarded as the machine learning "Hello World".
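
The "Hello World" referred to above is usually a tiny sine-wave regression model prepared for a microcontroller. The sketch below, assuming TensorFlow 2.x is installed, shows that kind of workflow: train a small Keras model, then convert it to a fully int8-quantized TensorFlow Lite flatbuffer that the TensorFlow Lite Micro interpreter can run on-device. Layer sizes and training settings are illustrative, not the paper's exact example.

# Edge ML "Hello World" sketch: train a tiny sine-approximation model and
# convert it to an int8-quantized TensorFlow Lite model for a microcontroller.
# Layer sizes and training settings are illustrative.
import numpy as np
import tensorflow as tf

# Synthetic training data: y = sin(x) on [0, 2*pi].
x = np.random.uniform(0, 2 * np.pi, 2000).astype(np.float32)
y = np.sin(x).astype(np.float32)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(1,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(x, y, epochs=100, batch_size=64, verbose=0)

# Full-integer quantization needs a representative dataset for calibration.
def representative_data():
    for i in range(100):
        yield [x[i:i + 1].reshape(1, 1)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8
tflite_model = converter.convert()

with open("sine_model.tflite", "wb") as f:
    f.write(tflite_model)
# The .tflite file can then be embedded as a C array (e.g. with `xxd -i`)
# and executed on-device with the TensorFlow Lite Micro interpreter.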

