Intrusion Detection System to Advance Internet of Things Infrastructure-Based Deep Learning Algorithms

Complexity ◽  
2021 ◽  
Vol 2021 ◽  
pp. 1-18
Author(s):  
Hasan Alkahtani ◽  
Theyazn H. H. Aldhyani

Smart grids, built on advanced information technology, have become favored intrusion targets because the Internet of Things (IoT) uses sensor devices to collect data from the smart grid environment. These data are sent to the cloud, a huge network of servers that provides services to different smart infrastructures, such as smart homes and smart buildings, which opens a large attack surface for destructive cyberattacks. The novelty of this research is the development of a robust framework for detecting intrusions in the IoT environment. The IoTID20 attack dataset, newly generated from an IoT infrastructure, was employed to develop the proposed system. In this framework, three advanced deep learning algorithms were applied to classify intrusions: a convolutional neural network (CNN), a long short-term memory (LSTM) network, and a hybrid CNN-LSTM model. To reduce the dimensionality of the network dataset and improve the proposed system, particle swarm optimization (PSO) was used to select relevant features, which were then processed by the deep learning algorithms. The experimental results showed that the proposed systems achieved the following accuracies: CNN = 96.60%, LSTM = 99.82%, and CNN-LSTM = 98.80%. The proposed framework attained the desired performance on a new, variable dataset, and the system will be deployed in our university's IoT environment. Comparative results between the proposed framework and existing systems showed that the proposed system enhances the security of the IoT environment against attacks more efficiently and effectively.
The experimental results confirmed that the proposed deep-learning-based intrusion detection framework can effectively detect real-world attacks and enhance the security of the IoT environment.
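The abstract does not publish the authors' PSO implementation, but the wrapper-style feature selection it describes can be sketched as a binary particle swarm in NumPy. Everything below is an illustrative assumption rather than the paper's code: the particle count, the inertia and acceleration constants, and especially the correlation-based fitness with a sparsity penalty (the paper would score masks with its deep classifiers instead).

```python
import numpy as np

def pso_feature_select(X, y, n_particles=10, n_iter=20, seed=0):
    """Binary PSO sketch: each particle is a 0/1 feature mask; fitness
    rewards features correlated with the label and penalizes mask size.
    (Illustrative fitness only; the paper scores features differently.)"""
    rng = np.random.default_rng(seed)
    n_feat = X.shape[1]

    def fitness(mask):
        if mask.sum() == 0:
            return -np.inf
        # mean absolute correlation of each selected feature with the label
        corr = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in np.where(mask)[0]])
        return np.mean(corr) - 0.01 * mask.sum()  # small sparsity penalty

    pos = (rng.random((n_particles, n_feat)) < 0.5).astype(float)  # binary positions
    vel = rng.normal(0.0, 1.0, (n_particles, n_feat))              # real-valued velocities
    pbest = pos.copy()
    pbest_fit = np.array([fitness(p) for p in pos])
    gbest = pbest[np.argmax(pbest_fit)].copy()

    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, n_feat))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        # sigmoid transfer function turns velocities into bit-flip probabilities
        pos = (rng.random((n_particles, n_feat)) < 1 / (1 + np.exp(-vel))).astype(float)
        fit = np.array([fitness(p) for p in pos])
        improved = fit > pbest_fit
        pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
        gbest = pbest[np.argmax(pbest_fit)].copy()
    return gbest.astype(bool)
```

On synthetic data where only one feature drives the label, the returned mask should retain that feature, illustrating how PSO prunes the irrelevant dimensions before the deep classifiers are trained.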

2021 ◽  
Author(s):  
Ashwini Bhaskar Abhale ◽  
S S Manivannan

Because of the ever-increasing number of Internet users, Internet security is becoming more essential. Many researchers have applied data mining methods to identify and detect attackers, but existing data mining techniques cannot provide a sufficient degree of detection precision. An intrusion detection system for wireless networks is therefore developed to ensure the security of data transmission. The network intrusion detection system (NIDS) uses a deep classification model to label network connections as benign or malicious. Deep convolutional neural network (DCNN), deep recurrent neural network (DRNN), deep long short-term memory (DLSTM), deep convolutional neural network long short-term memory (DCNN-LSTM), and deep gated recurrent unit (DGRU) models trained on NSL-KDD data records are proposed. The experiments were carried out for a total of 1000 epochs, achieving a model accuracy of more than 98 percent. We also found that accuracy increases as the number of layers in a model grows.
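As a concrete illustration of one of the recurrent building blocks listed above, here is a minimal NumPy sketch of a single GRU step. The weight shapes are assumptions for illustration, and biases are omitted for brevity; this is the textbook cell, not the paper's implementation:

```python
import numpy as np

def gru_cell(x, h, Wz, Uz, Wr, Ur, Wh, Uh):
    """One GRU step: update gate z, reset gate r, candidate state h~.
    Biases omitted for brevity."""
    sigmoid = lambda a: 1 / (1 + np.exp(-a))
    z = sigmoid(x @ Wz + h @ Uz)               # update gate
    r = sigmoid(x @ Wr + h @ Ur)               # reset gate
    h_tilde = np.tanh(x @ Wh + (r * h) @ Uh)   # candidate state
    return (1 - z) * h + z * h_tilde           # interpolate old and candidate state
```

Stacking several such layers (the "deep" in DGRU) simply feeds each layer's hidden-state sequence into the next layer as its input sequence.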


Sensors ◽  
2021 ◽  
Vol 21 (14) ◽  
pp. 4884
Author(s):  
Danish Javeed ◽  
Tianhan Gao ◽  
Muhammad Taimoor Khan ◽  
Ijaz Ahmad

The Internet of Things (IoT) has emerged as a new technological world connecting billions of devices. Despite its many benefits, the heterogeneous nature and extensive connectivity of the devices make them a target of cyberattacks that result in data breaches and financial loss. There is a severe need to secure the IoT environment from such attacks. In this paper, an SDN-enabled, deep-learning-driven framework is proposed for threat detection in an IoT environment. The state-of-the-art CUDA deep neural network gated recurrent unit (Cu-DNNGRU) and CUDA bidirectional long short-term memory (Cu-BLSTM) classifiers are adopted for effective threat detection. We performed 10-fold cross-validation to show the unbiasedness of the results. The up-to-date, publicly available CICIDS2018 dataset is used to train our hybrid model. The proposed scheme achieves an accuracy of 99.87% with a recall of 99.96%. Furthermore, we compare the proposed hybrid model with the CUDA gated recurrent unit long short-term memory (Cu-GRULSTM) and CUDA deep neural network long short-term memory (Cu-DNNLSTM) models, as well as with existing benchmark classifiers. Our proposed mechanism achieves impressive results in terms of accuracy, F1-score, precision, speed efficiency, and other evaluation metrics.
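The 10-fold cross-validation mentioned above partitions the shuffled dataset into ten folds and rotates which fold serves as the validation set. A minimal NumPy sketch of the index bookkeeping (the fold count and seed are illustrative; the paper's exact splitting procedure is not specified):

```python
import numpy as np

def kfold_indices(n_samples, k=10, seed=0):
    """Shuffle indices once, then yield (train, val) index splits for k folds."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    folds = np.array_split(idx, k)  # k nearly equal-sized folds
    for i in range(k):
        val = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train, val
```

Averaging a metric over the ten validation folds is what makes the reported accuracy and recall less dependent on a single lucky train/test split.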


2021 ◽  
Vol 8 (1) ◽  
Author(s):  
FatimaEzzahra Laghrissi ◽  
Samira Douzi ◽  
Khadija Douzi ◽  
Badr Hssina

An intrusion detection system (IDS) is a device or software application that monitors a network or a system for malicious activity, policy violations, or security breaches. IDSs protect networks (network-based IDS, NIDS) or hosts (host-based IDS, HIDS) and work by looking either for signatures of known attacks or for deviations from normal activity. Deep learning algorithms have proved their effectiveness in intrusion detection compared to other machine learning methods. In this paper, we implemented deep learning solutions for detecting attacks based on long short-term memory (LSTM) networks. Principal component analysis (PCA) and mutual information (MI) are used as dimensionality reduction and feature selection techniques, respectively. Our approach was tested on a benchmark dataset, KDD99, and the experimental outcomes show that the PCA-based models achieve the best accuracy for training and testing, in both binary and multiclass classification.
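The PCA step used above for dimensionality reduction can be sketched in a few lines of NumPy via the SVD of the mean-centered data matrix. This is a generic sketch, not the authors' pipeline (which also applies mutual information for feature selection, omitted here):

```python
import numpy as np

def pca_reduce(X, n_components):
    """Project X onto its top principal components.
    SVD of the centered data gives right singular vectors = principal axes,
    ordered by decreasing explained variance."""
    Xc = X - X.mean(axis=0)                       # center each feature
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T               # scores in the reduced space
```

The reduced matrix keeps the directions of greatest variance, so the first score column always has variance at least as large as the second, and so on; the LSTM is then trained on these compact representations instead of the raw KDD99 features.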


2021 ◽  
Vol 13 (10) ◽  
pp. 1953
Author(s):  
Seyed Majid Azimi ◽  
Maximilian Kraus ◽  
Reza Bahmanyar ◽  
Peter Reinartz

In this paper, we address various challenges in multi-pedestrian and vehicle tracking in high-resolution aerial imagery through an intensive evaluation of a number of traditional and deep-learning-based single- and multi-object tracking methods. We also describe our proposed deep-learning-based multi-object tracking method, AerialMPTNet, which fuses appearance, temporal, and graphical information using a Siamese neural network, a long short-term memory (LSTM) module, and a graph convolutional neural network (GCNN) module for more accurate and stable tracking. Moreover, we investigate the influence of squeeze-and-excitation layers and online hard example mining on the performance of AerialMPTNet. To the best of our knowledge, we are the first to use these two techniques for regression-based multi-object tracking. Additionally, we study and compare the L1 and Huber loss functions. In our experiments, we extensively evaluate AerialMPTNet on three aerial multi-object tracking datasets, namely the AerialMPT and KIT AIS pedestrian and vehicle datasets. Qualitative and quantitative results show that AerialMPTNet outperforms all previous methods on the pedestrian datasets and achieves competitive results on the vehicle dataset. The LSTM and GCNN modules enhance tracking performance, while squeeze-and-excitation and online hard example mining help significantly in some cases and degrade the results in others. According to the results, the L1 loss yields better results than the Huber loss in most scenarios. The presented results provide deep insight into the challenges and opportunities of the aerial multi-object tracking domain, paving the way for future research.
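The two regression losses compared above differ only in how they treat residuals: L1 is linear everywhere, while Huber is quadratic near zero and linear in the tails. A small NumPy sketch makes the contrast concrete (the transition point delta = 1.0 is an illustrative choice, not a value taken from the paper):

```python
import numpy as np

def l1_loss(r):
    """Absolute-error loss: linear in the residual everywhere."""
    return np.abs(r)

def huber_loss(r, delta=1.0):
    """Quadratic for |r| <= delta, linear beyond; the two branches meet
    smoothly at |r| = delta."""
    quad = 0.5 * r ** 2
    lin = delta * (np.abs(r) - 0.5 * delta)
    return np.where(np.abs(r) <= delta, quad, lin)
```

Because the Huber tails grow more slowly, it down-weights large residuals (e.g., occasional gross localization errors), whereas L1 penalizes them at full rate, which matches the paper's observation that the choice matters per scenario.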


PLoS ONE ◽  
2020 ◽  
Vol 15 (11) ◽  
pp. e0240663
Author(s):  
Beibei Ren

With the rapid development of big data and deep learning, breakthroughs have been made in phonetic and textual research, the two fundamental attributes of language. Language is an essential medium of information exchange in teaching activity. The aim is to promote the transformation of the training mode and content of the translation major and the application of the translation service industry in various fields. Building on previous research, the SCN-LSTM (skip convolutional network and long short-term memory) deep neural network translation model is constructed by training on a real dataset and the public PTB (Penn Treebank) dataset. The model's performance, translation quality, and adaptability in practical teaching are analyzed to provide a theoretical basis for the research and application of the SCN-LSTM translation model in English teaching. The results show that the model's capability for translation teaching is nearly double that of the traditional N-tuple translation model, and the fusion model performs much better than the single models in translation quality and teaching effect. Specifically, the accuracy of the SCN-LSTM translation model is 95.21%, its translation perplexity (degree of confusion) is 39.21% lower than that of the LSTM (long short-term memory) model, and its adaptability is 0.4 times that of the N-tuple model. With the highest level of satisfaction in the practical teaching evaluation, the SCN-LSTM translation model has achieved a favorable effect on translation teaching for the English major. In summary, the performance and quality of the translation model are improved significantly by learning the language characteristics in teachers' and students' translations, providing ideas for applying machine translation in professional translation teaching.
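The "degree of translation confusion" reported above is conventionally measured as perplexity: the exponential of the mean negative log-likelihood the model assigns to the reference tokens. A minimal sketch, assuming per-token model probabilities are available (the paper does not publish its evaluation code):

```python
import numpy as np

def perplexity(token_probs):
    """Perplexity = exp(mean negative log-likelihood) over the probabilities
    the model assigned to each reference token. Lower is better; a uniform
    model over V outcomes has perplexity exactly V."""
    p = np.asarray(token_probs, dtype=float)
    return float(np.exp(-np.mean(np.log(p))))
```

For example, a model that assigns probability 0.25 to every reference token has perplexity 4, i.e., it is as "confused" as a uniform choice among four alternatives; a 39.21% reduction means the model narrows that effective choice set substantially.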


2018 ◽  
Vol 10 (11) ◽  
pp. 113 ◽  
Author(s):  
Yue Li ◽  
Xutao Wang ◽  
Pengjian Xu

Text classification is important in natural language processing, as massive amounts of text containing valuable information need to be classified into different categories for further use. To better classify text, our paper builds a deep learning model that achieves better classification results on Chinese text than other researchers' models. After comparing different methods, long short-term memory (LSTM) and convolutional neural network (CNN) methods were selected as the deep learning methods for classifying Chinese text. LSTM is a special kind of recurrent neural network (RNN) that can process serialized information through its recurrent structure. By contrast, CNN has shown its ability to extract features from visual imagery. Therefore, two layers of LSTM and one layer of CNN were integrated into our new model: the BLSTM-C model (BLSTM stands for bidirectional long short-term memory, while C stands for CNN). The BLSTM was responsible for obtaining a sequence output based on past and future contexts, which was then input to the convolutional layer for feature extraction. In our experiments, the proposed BLSTM-C model was evaluated in several ways. The model exhibited remarkable performance in text classification, especially on Chinese texts.
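The convolutional stage of the BLSTM-C pipeline slides filters over the recurrent layer's sequence output and max-pools each filter over time, yielding one fixed-size feature vector per document regardless of length. A NumPy sketch of just that stage; the kernel shapes are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

def conv1d_max(H, W):
    """1-D convolution over a sequence of hidden states, then max-pool over time.
    H: (T, d) hidden states from the BLSTM (one d-dim vector per timestep).
    W: (k, d, f) kernel of width k producing f filters.
    Returns one value per filter: the strongest response over all windows."""
    T, d = H.shape
    k, _, f = W.shape
    # response of each filter at each window position t..t+k-1
    feats = np.stack([np.einsum('kd,kdf->f', H[t:t + k], W)
                      for t in range(T - k + 1)])
    return feats.max(axis=0)  # max-pool over time
```

The pooled vector then feeds a dense softmax layer for the final category decision; max-pooling is what makes the representation length-invariant.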


Water ◽  
2020 ◽  
Vol 12 (5) ◽  
pp. 1500 ◽  
Author(s):  
Halit Apaydin ◽  
Hajar Feizi ◽  
Mohammad Taghi Sattari ◽  
Muslume Sevba Colak ◽  
Shahaboddin Shamshirband ◽  
...  

Due to the stochastic nature and complexity of flow, as well as the existence of hydrological uncertainties, predicting streamflow into dam reservoirs, especially in semi-arid and arid areas, is essential for the optimal and timely use of surface water resources. In this research, daily streamflow into the Ermenek hydroelectric dam reservoir in Turkey is simulated using deep recurrent neural network (RNN) architectures, including bidirectional long short-term memory (Bi-LSTM), gated recurrent unit (GRU), long short-term memory (LSTM), and simple RNN models. For this purpose, daily observed flow data for the period 2012–2018 are used, and all models are coded in the Python programming language. Only lagged values of the streamflow time series are used as model inputs. Then, based on the correlation coefficient (CC), mean absolute error (MAE), root mean square error (RMSE), and Nash–Sutcliffe efficiency coefficient (NS), the results of the deep-learning architectures are compared with one another and with an artificial neural network (ANN) with two hidden layers. The results indicate that the deep-learning RNN methods are more accurate than the ANN. Among the deep-learning methods, LSTM has the best accuracy, simulating streamflow into the dam reservoir with 90% accuracy in the training stage and 87% in the testing stage, whereas the ANN reaches 86% and 85%, respectively. Considering that the Ermenek Dam is used for hydroelectric purposes and energy production, modeling inflow as realistically as possible may increase energy production and income by optimizing water management. Hence, even improvements of a few percentage points can be extremely useful. According to the results, deep-learning RNN methods can be used to estimate streamflow into the Ermenek Dam reservoir owing to their accuracy.
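Two of the evaluation metrics used above, the Nash–Sutcliffe efficiency and RMSE, are straightforward to compute from observed and simulated series; a minimal NumPy sketch:

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """NS = 1 - SSE / variance of observations about their mean.
    1 is a perfect fit; 0 means no better than predicting the observed mean."""
    obs = np.asarray(obs, dtype=float)
    sim = np.asarray(sim, dtype=float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def rmse(obs, sim):
    """Root mean square error between observed and simulated values."""
    obs = np.asarray(obs, dtype=float)
    sim = np.asarray(sim, dtype=float)
    return float(np.sqrt(np.mean((obs - sim) ** 2)))
```

A model that merely outputs the long-term mean inflow scores NS = 0, which is why NS is a stricter yardstick for streamflow models than raw error alone.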

