Intelligent system for IoT botnet detection using SVM and PSO optimization

2021
pp. 68-84
Author(s):
Mahmoud A. Salam

Botnet attacks involving Internet of Things (IoT) devices have skyrocketed in recent years due to the proliferation of internet-connected IoT devices that can be readily infiltrated. Botnets are a common threat that exploit the absence of basic IoT security technologies and can perform several types of DDoS attacks. Existing IoT botnet detection methods still have issues, such as relying on labeled data, not being validated against newer botnets, and using very complex machine learning algorithms, making the development of new methods to detect compromised IoT devices urgent in order to reduce the negative implications of these IoT botnets. Due to the vast amount of normal data available, anomaly detection algorithms seem promising for identifying botnet attacks on the Internet of Things (IoT). For anomaly detection, the One-Class Support Vector Machine (ONE-SVM) is a strong method. Many aspects influence the classification outcomes of the ONE-SVM technique, such as the subset of features used to train the ONE-SVM model and the hyperparameters of the kernel. An evolutionary IoT botnet detection algorithm is described in this paper. The Particle Swarm Optimization (PSO) technique is used to tune the hyperparameters of the ONE-SVM to detect IoT botnet attacks launched from hacked IoT devices. A new version of a real benchmark dataset is used to evaluate the proposed method's performance using traditional anomaly detection evaluation measures. According to the test results, this technique exceeds all existing algorithms in terms of false positive rate, true positive rate, and G-mean for all IoT device categories. It also achieves the shortest detection time while significantly reducing the number of selected features.
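
A minimal sketch of the tuning loop this abstract describes: particle swarm optimization over the One-Class SVM hyperparameters (nu and the RBF kernel's gamma), with the G-mean on a labeled validation split as the fitness. The dataset, search bounds, and swarm settings are illustrative assumptions, not the authors' exact configuration.

```python
import numpy as np
from sklearn.svm import OneClassSVM
from sklearn.metrics import recall_score

rng = np.random.default_rng(0)

def g_mean(model, X_val, y_val):
    """G-mean of TPR and TNR; y_val uses 1 = normal traffic, -1 = botnet."""
    pred = model.predict(X_val)
    tpr = recall_score(y_val, pred, pos_label=1)
    tnr = recall_score(y_val, pred, pos_label=-1)
    return np.sqrt(tpr * tnr)

def pso_tune(X_train, X_val, y_val, n_particles=10, n_iter=20):
    # Each particle is a (nu, gamma) pair; bounds are illustrative guesses.
    low, high = np.array([0.01, 1e-3]), np.array([0.5, 1.0])
    pos = rng.uniform(low, high, size=(n_particles, 2))
    vel = np.zeros_like(pos)
    pbest, pbest_fit = pos.copy(), np.full(n_particles, -np.inf)
    gbest, gbest_fit = pos[0].copy(), -np.inf
    for _ in range(n_iter):
        for i, (nu, gamma) in enumerate(pos):
            # Train on normal data only, score on the labeled validation split.
            model = OneClassSVM(nu=nu, gamma=gamma).fit(X_train)
            fit = g_mean(model, X_val, y_val)
            if fit > pbest_fit[i]:
                pbest[i], pbest_fit[i] = pos[i].copy(), fit
            if fit > gbest_fit:
                gbest, gbest_fit = pos[i].copy(), fit
        # Standard PSO update: inertia plus cognitive and social attraction.
        r1, r2 = rng.random((2, n_particles, 1))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, low, high)
    return gbest, gbest_fit
```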

2021
Vol 30 (04)
pp. 2150020
Author(s):
Luke Holbrook
Miltiadis Alamaniotis

With the increase of cyber-attacks on millions of Internet of Things (IoT) devices, the poor network security measures on those devices are the main source of the problem. This article studies a number of the available machine learning algorithms for their effectiveness in detecting malware in consumer IoT devices. In particular, the Support Vector Machine (SVM), Random Forest, and Deep Neural Network (DNN) algorithms are benchmarked on a set of test data and compared as tools for safeguarding the deployment of IoT security. Test results on a set of four IoT devices showed that all three tested algorithms detect the network anomalies with high accuracy. However, the deep neural network provides the highest coefficient of determination R², and hence it is identified as the most precise among the tested algorithms concerning the security of IoT devices, based on the datasets studied.
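
A hedged sketch of the benchmark this abstract describes: the three algorithms trained on the same IoT traffic features and scored with the coefficient of determination R², the metric the article reports. The feature matrix X, binary labels y, and the MLP architecture standing in for the DNN are assumptions.

```python
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

def benchmark(X, y):
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=42)
    models = {
        "SVM": SVC(),
        "Random Forest": RandomForestClassifier(n_estimators=100),
        # An MLP stands in for the DNN; the article's architecture is not given here.
        "DNN": MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500),
    }
    for name, model in models.items():
        model.fit(X_tr, y_tr)
        print(f"{name}: R^2 = {r2_score(y_te, model.predict(X_te)):.4f}")
```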


Author(s):  
Mehedi Hasan Raj
A. N. M. Asifur Rahman
Umma Habiba Akter
Khayrun Nahar Riya
Anika Tasneem Nijhum
...  

Nowadays, the Internet of Things (IoT) is a familiar term because of its ever-increasing number of users. Statistics show that the number of IoT device users is increasing dramatically and will continue to grow. Because of this growth, security experts are now concerned about IoT security. In this research, we aim to improve the security of IoT devices, particularly against IoT botnets, by applying various machine learning (ML) techniques. In this paper, we set up an approach to detect IoT botnets using three one-class classifier ML algorithms: the one-class support vector machine (OCSVM), elliptic envelope (EE), and local outlier factor (LOF). Our method is a network flow-based botnet detection technique, and we use the input packet, protocol, source port, destination port, and time as the features of our algorithms. After a number of preprocessing steps, we feed the preprocessed data to our algorithms, which achieve good precision scores of approximately 77–99%. The one-class SVM achieves the best accuracy score, approximately 99% on every dataset; EE's accuracy score varies from 91% to 98%; and the LOF achieves the lowest accuracy scores, ranging from 77% to 99%. Our algorithms are cost-effective and provide good accuracy with short execution times.
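
A minimal sketch of the three one-class detectors named above, assuming the flow features from the abstract (input packet, protocol, source port, destination port, time) are already numerically encoded. The contamination values and the benign-only training split are assumptions.

```python
from sklearn.svm import OneClassSVM
from sklearn.covariance import EllipticEnvelope
from sklearn.neighbors import LocalOutlierFactor
from sklearn.metrics import precision_score

detectors = {
    "OCSVM": OneClassSVM(nu=0.05, gamma="scale"),
    "EE": EllipticEnvelope(contamination=0.05),
    # novelty=True lets LOF score flows it has not seen during fitting.
    "LOF": LocalOutlierFactor(novelty=True, contamination=0.05),
}

def evaluate(X_benign, X_test, y_test):
    """Fit on benign flows only; y_test uses 1 = benign, -1 = botnet."""
    for name, det in detectors.items():
        det.fit(X_benign)
        pred = det.predict(X_test)  # 1 = inlier, -1 = outlier
        print(name, "precision:", precision_score(y_test, pred, pos_label=-1))
```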


Author(s):  
Samar Amassmir
Said Tkatek
Otman Abdoun
Jaafar Abouchabaka

This paper compares three machine learning algorithms for a better intelligent irrigation system based on the internet of things (IoT) for different products. This work's major contribution is to identify the most accurate of the three machine learning algorithms: k-nearest neighbors (KNN), support vector machine (SVM), and artificial neural network (ANN). This is achieved by collecting irrigation data for a specific product, splitting it into training and test data, and then comparing the accuracy of the three algorithms. To evaluate the performance of our algorithms, we built a system of IoT devices. Temperature and humidity sensors installed in the field interact with an Arduino microcontroller. The Arduino is connected to a Raspberry Pi 3, which hosts the machine learning algorithm. The ANN algorithm turned out to be the most accurate for such an irrigation system, making it the best choice for an intelligent system that minimizes water loss for certain products.
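
A hedged sketch of the deployment side described above: the Raspberry Pi reads sensor values forwarded by the Arduino over serial and applies a previously trained ANN to decide whether to irrigate. The serial port, message format, and model file name are hypothetical.

```python
import joblib
import serial  # pyserial

# Hypothetical pre-trained scikit-learn MLPClassifier saved with joblib.
model = joblib.load("ann_irrigation.pkl")
port = serial.Serial("/dev/ttyACM0", 9600)

while True:
    line = port.readline().decode().strip()        # assumed format: "23.5,61.2"
    temperature, humidity = map(float, line.split(","))
    decision = model.predict([[temperature, humidity]])[0]
    print("irrigate" if decision == 1 else "skip")
```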


2019
Vol 23 (2)
pp. 1345-1360
Author(s):
Ahmad Alnafessah
Giuliano Casale

Late detection and manual resolution of performance anomalies in Cloud Computing and Big Data systems may lead to performance violations and financial penalties. Motivated by this issue, we propose an artificial neural network-based methodology for anomaly detection tailored to the Apache Spark in-memory processing platform. Apache Spark is widely adopted by industry because of its speed and generality; however, there is still a shortage of comprehensive performance anomaly detection methods applicable to this platform. We propose an artificial neural network-driven methodology to quickly sift through Spark log data and operating system monitoring metrics to accurately detect and classify anomalous behaviors based on the characteristics of the Spark resilient distributed datasets. The proposed method is evaluated against three popular machine learning algorithms (decision trees, nearest neighbor, and support vector machine), as well as against four variants that consider different monitoring datasets. The results show that our proposed method outperforms the other methods, typically achieving 98–99% F-scores and offering much greater accuracy than alternative techniques in detecting both the period in which anomalies occurred and their type.
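
A hedged sketch of the comparison the abstract reports: a neural network trained on Spark and operating-system monitoring metrics to classify anomaly types, scored by F-measure against the three baseline learners. The extraction of features X and anomaly labels y from Spark logs is assumed to have happened upstream.

```python
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def compare_f_scores(X, y):
    models = {
        "ANN": MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=500),
        "Decision Tree": DecisionTreeClassifier(),
        "Nearest Neighbor": KNeighborsClassifier(),
        "SVM": SVC(),
    }
    for name, model in models.items():
        scores = cross_val_score(model, X, y, cv=5, scoring="f1_macro")
        print(f"{name}: mean F-score = {scores.mean():.3f}")
```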


Sensors
2020
Vol 20 (16)
pp. 4501
Author(s):
Katherinne Shirley Huancayo Ramos
Marco Antonio Sotelo Monge
Jorge Maestre Vidal

Botnets are some of the most recurrent cyber-threats; they take advantage of the wide heterogeneity of endpoint devices at the Edge of emerging communication environments to enable the malicious enforcement of fraud and other adversarial tactics, including malware, data leaks and denial of service. There have been significant research advances in the development of accurate botnet detection methods underpinned by supervised analysis, but assessing the accuracy and performance of such detection methods requires a clear evaluation model in the pursuit of enforcing proper defensive strategies. In order to contribute to the mitigation of botnets, this paper introduces a novel evaluation scheme grounded in supervised machine learning algorithms that enables the detection and discrimination of different botnet families in real operational environments. The proposal relies on observing, understanding and inferring the behavior of each botnet family based on network indicators measured at flow level. The assumed evaluation methodology contemplates six phases that allow building a detection model against botnet-related malware distributed through the network, for which five supervised classifiers were instantiated for further comparison: Decision Tree, Random Forest, Gaussian Naive Bayes, Support Vector Machine and K-Neighbors. The experimental validation was performed on two public datasets of real botnet traffic: CIC-AWS-2018 and ISOT HTTP Botnet. Given the heterogeneity of the datasets, optimizing the analysis with the Grid Search algorithm improved the classification results of the instantiated algorithms. An exhaustive evaluation was carried out, demonstrating the adequacy of our proposal and showing that the Random Forest and Decision Tree models are the most suitable for detecting different botnet specimens among the chosen algorithms, exhibiting higher precision rates while analyzing a large number of samples with less processing time. The variety of testing scenarios was thoroughly assessed and reported to set baseline results for future benchmark analyses targeting flow-based behavioral patterns.
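
A minimal sketch of the tuning step the paper mentions: Grid Search applied to each of the five instantiated classifiers over flow-level features X with botnet-family labels y. The parameter grids are illustrative assumptions, not the paper's actual search space.

```python
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import GridSearchCV

search_space = {
    "Decision Tree": (DecisionTreeClassifier(), {"max_depth": [5, 10, None]}),
    "Random Forest": (RandomForestClassifier(), {"n_estimators": [50, 100, 200]}),
    "Gaussian NB": (GaussianNB(), {}),  # no hyperparameters tuned here
    "SVM": (SVC(), {"C": [0.1, 1, 10]}),
    "K-Neighbors": (KNeighborsClassifier(), {"n_neighbors": [3, 5, 7]}),
}

def tune_all(X, y):
    for name, (model, grid) in search_space.items():
        gs = GridSearchCV(model, grid, cv=5, scoring="precision_macro").fit(X, y)
        print(name, gs.best_params_, round(gs.best_score_, 4))
```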


2021
Vol 11 (12)
pp. 5713
Author(s):
Majda Wazzan
Daniyal Algazzawi
Omaima Bamasaq
Aiiad Albeshri
Li Cheng

The Internet of Things (IoT) is a promising technology that brings tremendous benefits if used optimally. At the same time, it has resulted in an increase in cybersecurity risks due to the lack of security for IoT devices. IoT botnets, for instance, have become a critical threat; however, systematic and comprehensive studies analyzing the importance of botnet detection methods in the IoT environment are limited. Thus, this study aimed to identify, assess and provide a thorough review of experimental works relevant to the detection of IoT botnets. To accomplish this goal, a systematic literature review (SLR), an effective method, was applied for gathering and critically reviewing research papers. This work employed three research questions on the detection methods used to detect IoT botnets, the botnet phases and the different malicious activity scenarios. The authors analyzed the nominated research and the key methods related to them. The detection methods were classified based on the techniques used, and the authors investigated the botnet phases during which detection is accomplished. This research procedure was used to create a source of foundational knowledge of IoT botnet detection methods. As a result of this study, the authors analyzed the current research gaps and suggested future research directions.


2020
Vol 23 (4)
pp. 274-284
Author(s):
Jingang Che
Lei Chen
Zi-Han Guo
Shuaiqun Wang
Aorigele

Background: Identification of drug-target interactions is essential in drug discovery, as it is beneficial for predicting unexpected therapeutic or adverse side effects of drugs. To date, several computational methods have been proposed to predict drug-target interactions because they are fast and low-cost compared with traditional wet experiments. Methods: In this study, we investigated this problem in a different way. According to KEGG, drugs were classified into several groups based on their target proteins. A multi-label classification model was presented to assign drugs to the correct target groups. To make full use of the known drug properties, five networks were constructed, each of which represented drug associations for one property. A powerful network embedding method, Mashup, was adopted to extract drug features from the above-mentioned networks, based on which several machine learning algorithms, including the RAndom k-labELsets (RAKEL) algorithm, the Label Powerset (LP) algorithm and the Support Vector Machine (SVM), were used to build the classification model. Results and Conclusion: Tenfold cross-validation yielded an accuracy of 0.839, an exact match of 0.816 and a Hamming loss of 0.037, indicating good performance of the model. The contribution of each network was also analyzed. Furthermore, the model with multiple networks was found to be superior to the one with a single network and to the classic model, indicating the superiority of the proposed model.
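
A hedged sketch of the multi-label step: assigning each drug to one or more target groups from a precomputed feature vector (e.g., a Mashup-style embedding), scored with the exact match and Hamming loss metrics the abstract reports. It assumes the scikit-multilearn library for the Label Powerset and RAkEL transformations; X is the drug feature matrix and Y a binary drug-by-group indicator matrix.

```python
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score, hamming_loss
from sklearn.model_selection import train_test_split
from skmultilearn.problem_transform import LabelPowerset
from skmultilearn.ensemble import RakelD

def evaluate_multilabel(X, Y):
    X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.1, random_state=0)
    classifiers = [
        ("LP + SVM", LabelPowerset(classifier=SVC())),
        ("RAkEL + SVM", RakelD(base_classifier=SVC(), labelset_size=3)),
    ]
    for name, clf in classifiers:
        clf.fit(X_tr, Y_tr)
        pred = clf.predict(X_te)
        # Exact match = subset accuracy; Hamming loss = per-label error rate.
        print(name, "exact match:", accuracy_score(Y_te, pred),
              "hamming loss:", hamming_loss(Y_te, pred))
```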


Transport
2020
Vol 35 (5)
pp. 462-473
Author(s):
Aleksandar Vorkapić
Radoslav Radonja
Karlo Babić
Sanda Martinčić-Ipšić

The aim of this article is to enhance performance monitoring of a two-stroke electronically controlled ship propulsion engine across its operating envelope. This is achieved by setting up a machine learning model capable of monitoring influential operating parameters and predicting fuel consumption. The model is tested with different machine learning algorithms, namely linear regression, multilayer perceptron, Support Vector Machines (SVM) and Random Forests (RF). Upon verification of the modelling framework and analysis of the results, the best algorithm is selected based on standard evaluation metrics, i.e. Root Mean Square Error (RMSE) and Relative Absolute Error (RAE). Experimental results show that, with an adequate combination and processing of relevant sensory data, the SVM exhibits the lowest RMSE (7.1032) and RAE (0.5313%). The RF achieves the lowest RMSE (22.6137) and RAE (3.8545%) in a setting where a minimal number of input variables is considered, i.e. cylinder indicated pressures and propulsion engine revolutions. Further, the article deals with the detection of anomalies in operating parameters, which enables the evaluation of the propulsion engine condition and the early identification of failures and deterioration. Such a time-dependent, self-adapting anomaly detection model can be used for comparison with the initial condition recorded during the test and sea run or after survey and docking. Finally, we propose a unified model structure incorporating the fuel consumption prediction and anomaly detection models into the on-board decision-making process regarding navigation and maintenance.
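
A hedged sketch of the evaluation above: fitting SVM and Random Forest regressors to engine sensor features and scoring them with RMSE and RAE. RAE (relative absolute error, against a mean-value baseline) is computed manually since scikit-learn does not provide it directly; the feature matrices are placeholders for the processed sensory data.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error

def rae(y_true, y_pred):
    """Relative absolute error as a percentage of a mean-value baseline."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return 100 * np.abs(y_true - y_pred).sum() / np.abs(y_true - y_true.mean()).sum()

def score_models(X_tr, y_tr, X_te, y_te):
    for name, model in [("SVM", SVR()), ("RF", RandomForestRegressor())]:
        pred = model.fit(X_tr, y_tr).predict(X_te)
        rmse = np.sqrt(mean_squared_error(y_te, pred))
        print(f"{name}: RMSE = {rmse:.4f}, RAE = {rae(y_te, pred):.4f}%")
```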


Sensors
2019
Vol 19 (20)
pp. 4536
Author(s):
Yan Zhong
Simon Fong
Shimin Hu
Raymond Wong
Weiwei Lin

The Internet of Things (IoT) and sensors are becoming increasingly popular, especially for monitoring large and ambient environments. Applications that embrace IoT and sensors often require mining the data feeds that are collected at frequent intervals for intelligence. Although such sensor data are massive, most of the data contents are identical and repetitive; for example, human traffic in a park at night. Most traditional classification algorithms were formulated decades ago and were not designed to handle such sensor data effectively. Hence, the performance of the learned model is often poor because of the small granularity in classification and the sporadic patterns in the data. To improve the quality of data mining from IoT data, a new pre-processing methodology based on subspace similarity detection is proposed. Our method can be well integrated with traditional data mining algorithms and anomaly detection methods, and it is flexible enough to handle similar kinds of sporadic sensor data that exist in many ambient sensing applications. The proposed methodology is evaluated through extensive experiments with a collection of classical data mining models, showing an improvement in the precision rate when the proposed method is used.
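
The abstract does not detail the subspace similarity detection method itself, so the following is only a generic illustration of the underlying idea: discarding sensor windows that are nearly identical to the previously kept window within a chosen feature subspace, so that repetitive readings do not dominate the training data. The similarity measure, threshold, and subspace choice are all assumptions.

```python
import numpy as np

def filter_repetitive_windows(windows, subspace_dims, threshold=0.99):
    """Keep a window only when its cosine similarity to the last kept window,
    restricted to subspace_dims, drops below the threshold."""
    kept = [windows[0]]
    for w in windows[1:]:
        a, b = kept[-1][subspace_dims], w[subspace_dims]
        sim = a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
        if sim < threshold:  # sufficiently novel window, keep it
            kept.append(w)
    return np.array(kept)
```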


2020
Vol 2020
pp. 1-14
Author(s):
Hasan Alkahtani
Theyazn H. H. Aldhyani
Mohammed Al-Yaari

Telecommunication has registered strong and rapid growth in the past decade. Accordingly, the monitoring of computers and networks has become too complicated for network administrators, and network security represents one of the most serious challenges faced by network security communities. Considering that e-banking, e-commerce, and business data are shared on computer networks, these data may face threats from intrusion. The purpose of this research is to propose a methodology that leads to a high level of sustainable protection against cyberattacks. In particular, an adaptive anomaly detection framework was developed using deep learning and machine learning algorithms to manage automatically configured application-level firewalls. Standard network datasets were used to evaluate the proposed model, which is designed to improve the cybersecurity system. A deep learning model based on the Long Short-Term Memory Recurrent Neural Network (LSTM-RNN) and machine learning algorithms, namely the Support Vector Machine (SVM) and K-Nearest Neighbor (K-NN), were implemented to classify Denial-of-Service (DoS) and Distributed Denial-of-Service (DDoS) attacks. The information gain method was applied to select the relevant features from the network datasets; these features were significant in improving the classification algorithms. The system was used to classify DoS and DDoS attacks in four standard datasets, namely KDD Cup'99, NSL-KDD, ISCX, and ICI-ID2017. The empirical results indicate that the deep learning model based on the LSTM-RNN algorithm obtained the highest accuracy, producing testing accuracy rates of 99.51% and 99.91% with respect to the KDD Cup'99, NSL-KDD, ISCX, and ICI-ID2017 datasets, respectively. A comparative analysis between the machine learning algorithms (SVM and K-NN) and the deep learning algorithm based on the LSTM-RNN model is presented. Finally, it is concluded that the LSTM-RNN model is efficient and effective in improving the cybersecurity system for anomaly-based cybersecurity detection.
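
A hedged sketch of the pipeline this abstract outlines: information-gain feature selection (mutual information here) followed by an LSTM-RNN binary classifier for attack versus normal traffic. Layer sizes, the number of selected features, and training settings are illustrative assumptions.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

def build_and_train(X, y, k=20, epochs=10):
    # Keep the k features with the highest mutual information with the label.
    X_sel = SelectKBest(mutual_info_classif, k=k).fit_transform(X, y)
    # An LSTM expects (samples, timesteps, features); treat each record as one step.
    X_seq = X_sel.reshape((X_sel.shape[0], 1, k))
    model = Sequential([
        LSTM(64, input_shape=(1, k)),
        Dense(1, activation="sigmoid"),  # binary output: attack vs. normal
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    model.fit(X_seq, np.asarray(y), epochs=epochs, batch_size=128, verbose=0)
    return model
```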

