Social Multimedia Security and Suspicious Activity Detection in SDN using Hybrid Deep Learning Technique

Author(s):  
Dr. Joy Iong Zong Chen ◽  
Dr. Smys S.

Social multimedia traffic is growing exponentially with the increased usage and continuous development of multimedia-based services and applications. Secure data transmission underpins the Quality of Service (QoS), Quality of Information (QoI), scalability, reliability, and similar factors that are essential for social multimedia networks. To deliver actionable and timely insights that meet growing user demands, multimedia analytics is performed by means of a trust-based paradigm. In Software Defined Networks (SDN), capabilities such as energy-aware networking and runtime security facilitate efficient management and control of the network. In the social multimedia context, suspicious flow detection is performed by a hybrid deep-learning-based anomaly detection scheme in order to enhance SDN reliability. The overall process is divided into two modules: an anomaly detection module that detects abnormal activities using a gradient-descent-based support vector machine combined with an improved restricted Boltzmann machine, and an end-to-end data delivery module that satisfies the strict QoS requirements of SDN, namely low latency and high bandwidth. In social multimedia, data delivery and anomaly detection services are essential to improve the efficiency and effectiveness of the system. The proposed scheme is therefore evaluated experimentally on benchmark datasets as well as in real time. Malicious events such as confidential data collection, profile cloning, and identity theft are detected to analyse system performance, using the CMU-based insider threat dataset for large-scale analysis.
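The gradient-descent-trained SVM component mentioned above can be sketched as follows. This is a minimal illustration with synthetic "flow statistics", not the authors' implementation: `SGDClassifier` with hinge loss is a linear SVM trained by stochastic gradient descent, and the benign/suspicious feature distributions are assumptions.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
normal = rng.normal(0.0, 1.0, size=(200, 4))    # benign flow statistics (assumed)
abnormal = rng.normal(4.0, 1.0, size=(200, 4))  # suspicious flow statistics (assumed)
X = np.vstack([normal, abnormal])
y = np.array([0] * 200 + [1] * 200)

# hinge loss + SGD yields a linear SVM trained by gradient descent
clf = SGDClassifier(loss="hinge", max_iter=1000, random_state=0).fit(X, y)
```

On such well-separated synthetic clusters the learned hyperplane classifies the training flows almost perfectly; the restricted Boltzmann machine stage of the paper's hybrid scheme is omitted here.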

2017 ◽  
Vol 11 (01) ◽  
pp. 85-109 ◽  
Author(s):  
Samira Pouyanfar ◽  
Shu-Ching Chen

With the explosion of multimedia data, semantic event detection from videos has become a demanding and challenging topic. In addition, when the data have a skewed distribution, interesting event detection must also address the data imbalance problem. The recent proliferation of deep learning has made it an essential part of many Artificial Intelligence (AI) systems. To date, various deep learning architectures have been proposed for numerous applications such as Natural Language Processing (NLP) and image processing. Nonetheless, it remains impractical for a single model to work well across different applications. Hence, in this paper, a new ensemble deep learning framework is proposed that can be utilized in various scenarios and on various datasets. The proposed framework handles both the over-fitting issue and the information losses caused by single models. Moreover, it alleviates the imbalanced-data problem in real-world multimedia data. The framework comprises a suite of deep learning feature extractors integrated with an enhanced ensemble algorithm based on performance metrics for imbalanced data. The Support Vector Machine (SVM) classifier is utilized as the last layer of each deep learning component and as the weak learner in the ensemble module. The framework is evaluated on two large-scale and imbalanced video datasets (namely, disaster and TRECVID). Extensive experimental results illustrate the advantage and effectiveness of the proposed framework and demonstrate that it outperforms several well-known deep learning methods, as well as conventional features integrated with different classifiers.
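The core ensemble idea, SVM weak learners weighted by a performance metric on imbalanced data, can be sketched as below. The feature "views" stand in for the outputs of the deep feature extractors, and the F1-based weighting rule is an assumption, not the paper's exact algorithm.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# imbalanced two-class data standing in for the skewed video datasets
X, y = make_classification(n_samples=600, n_features=10, n_informative=8,
                           n_redundant=2, weights=[0.8, 0.2], random_state=0)
Xtr, Xval, ytr, yval = train_test_split(X, y, test_size=0.3, stratify=y,
                                        random_state=0)

# each feature "view" stands in for the output of one deep feature extractor
views = [slice(0, 4), slice(3, 7), slice(6, 10)]
learners, weights = [], []
for v in views:
    svm = SVC(probability=True, random_state=0).fit(Xtr[:, v], ytr)
    learners.append((v, svm))
    # weight each weak learner by a metric sensitive to the minority class
    weights.append(f1_score(yval, svm.predict(Xval[:, v])))

weights = np.array(weights) + 1e-6   # avoid an all-zero weight vector
weights = weights / weights.sum()
proba = sum(w * m.predict_proba(Xval[:, v]) for w, (v, m) in zip(weights, learners))
pred = proba.argmax(axis=1)
```

Weighting by F1 rather than accuracy keeps weak learners that handle the minority class well from being drowned out, which is the ensemble's answer to the imbalance problem.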


Sensors ◽  
2021 ◽  
Vol 21 (23) ◽  
pp. 8017
Author(s):  
Nurfazrina M. Zamry ◽  
Anazida Zainal ◽  
Murad A. Rassam ◽  
Eman H. Alkhammash ◽  
Fuad A. Ghaleb ◽  
...  

Wireless Sensor Networks (WSNs) have attracted significant research and development attention due to their applications in collecting data from fields such as smart cities, power grids, transportation systems, medical sectors, the military, and rural areas. Accurate and reliable measurements for insightful data analysis and decision-making are the ultimate goals of sensor networks in critical domains. However, the raw data collected by WSNs are usually unreliable and inaccurate due to the imperfect nature of WSNs. Identifying misbehaviours or anomalies in the network is important for the reliable and secure functioning of the network. However, due to resource constraints, a lightweight detection scheme is a major design challenge in sensor networks. This paper aims at designing and developing a lightweight anomaly detection scheme that reduces computational complexity and communication overhead and improves memory utilization while maintaining high accuracy. To achieve this aim, one-class learning and dimension reduction concepts were used in the design. The One-Class Support Vector Machine (OCSVM) with hyper-ellipsoid variance was used for anomaly detection due to its advantage in classifying unlabelled and multivariate data. Various One-Class Support Vector Machine formulations were investigated, and the Centred-Ellipsoid kernel was adopted in this study as the most effective among the studied formulations. To decrease the computational complexity and improve memory utilization, the dimensions of the data were reduced using the Candid Covariance-Free Incremental Principal Component Analysis (CCIPCA) algorithm. Extensive experiments were conducted to evaluate the proposed lightweight anomaly detection scheme.
Results in terms of detection accuracy, memory utilization, computational complexity, and communication overhead show that the proposed scheme is effective and efficient compared with a few existing schemes. The proposed anomaly detection scheme achieved accuracy higher than 98%, with O(nd) memory utilization and no communication overhead.
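The two pillars of the scheme, incremental dimension reduction followed by one-class learning on normal data only, can be sketched as follows. `IncrementalPCA` stands in for CCIPCA (which is not in scikit-learn), and the sensor readings are synthetic assumptions.

```python
import numpy as np
from sklearn.decomposition import IncrementalPCA
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(1)
normal = rng.normal(25.0, 1.0, size=(300, 8))    # normal sensor vectors (assumed)
outliers = rng.normal(60.0, 1.0, size=(20, 8))   # faulty/anomalous readings

# incremental PCA (stand-in for CCIPCA) reduces d = 8 to d = 3,
# cutting the memory and compute cost of the downstream one-class model
ipca = IncrementalPCA(n_components=3).fit(normal)
ocsvm = OneClassSVM(kernel="rbf", nu=0.05).fit(ipca.transform(normal))

pred_out = ocsvm.predict(ipca.transform(outliers))  # -1 marks anomalies
pred_norm = ocsvm.predict(ipca.transform(normal))   # +1 marks inliers
```

Training only on normal readings matches the WSN setting, where labelled anomalies are rarely available at the node.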


2021 ◽  
pp. 1-15
Author(s):  
Savaridassan Pankajashan ◽  
G. Maragatham ◽  
T. Kirthiga Devi

Anomaly-based detection is concerned with recognizing the uncommon: catching unusual activity and finding the strange action behind it. It has a wide range of critical applications, from banking security to the natural sciences, medical systems, and marketing. Anomaly-based detection implemented with Machine Learning techniques is, in effect, a form of artificial intelligence system. With the ever-expanding volume and new sorts of information, for example sensor data from an enormous number of IoT devices and network flow data from cloud computing, there is growing interest in handling more decisions automatically by means of AI and ML applications; in many of these applications, detection itself is the primary goal. In this paper, Machine Learning (ML) techniques, namely SVM and Isolation Forest classifiers, are experimented with, and as the Deep Learning (DL) technique, the proposed DA-LSTM (Deep Auto-Encoder LSTM) model is adopted for preprocessing of log data and anomaly-based detection to obtain better detection performance. An enhanced LSTM (Long Short-Term Memory) model, with its parameters optimized by a genetic algorithm (GA), is utilized to better recognize anomalies in log data that has been filtered by a Deep Auto-Encoder (DA). Deep neural network models are utilized to convert unstructured log information into training-ready features suitable for log classification in anomaly detection. These models are assessed using two benchmark datasets: the OpenStack logs and the CIDDS-001 intrusion detection OpenStack server dataset. The results show that the DA-LSTM model performs better than other notable ML techniques.
We further investigated the performance of the ML and DL models through well-known indicator measurements, specifically F-measure, Accuracy, Recall, and Precision. The experimental results show that the Isolation Forest and Support Vector Machine classifiers achieve roughly 81% and 79% accuracy, respectively, on the CIDDS-001 OpenStack server dataset, while the proposed DA-LSTM classifier achieves around 99.1% accuracy, improving on the familiar ML algorithms. Further, the DA-LSTM results on the OpenStack log datasets show better anomaly detection compared with other notable machine learning models.
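The auto-encoder filtering step can be sketched in isolation. A small MLP bottleneck regressor stands in for the paper's Deep Auto-Encoder (the LSTM and GA stages are omitted), and the featurized log vectors are synthetic assumptions: records whose reconstruction error exceeds a threshold learned from normal data are flagged as anomalous.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
normal = rng.normal(0.0, 1.0, size=(400, 12))    # featurized normal log lines
anomalies = rng.normal(6.0, 1.0, size=(20, 12))  # featurized anomalous lines

# a bottleneck regressor trained to reproduce its input acts as the auto-encoder
ae = MLPRegressor(hidden_layer_sizes=(4,), max_iter=2000,
                  random_state=0).fit(normal, normal)

def recon_error(X):
    # per-sample mean squared reconstruction error
    return np.mean((ae.predict(X) - X) ** 2, axis=1)

threshold = np.percentile(recon_error(normal), 99)  # tolerate ~1% false alarms
flags = recon_error(anomalies) > threshold
```

Because the bottleneck only learns to reconstruct the normal distribution, out-of-distribution log vectors reconstruct poorly and land above the threshold.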


2020 ◽  
Vol 2020 ◽  
pp. 1-12
Author(s):  
Chunbo Liu ◽  
Lanlan Pan ◽  
Zhaojun Gu ◽  
Jialiang Wang ◽  
Yitong Ren ◽  
...  

System logs record the system status and important events during operation in detail. Detecting anomalies in system logs is a common practice in modern large-scale distributed systems. Yet threshold-based classification models used for anomaly detection output only two values, normal or abnormal, and provide no estimate of the probability that the prediction is correct. In this paper, a statistical learning algorithm, the Venn-Abers predictor, is adopted to evaluate the confidence of prediction results in the field of system log anomaly detection. It can calculate the probability distribution of labels for a set of samples and thereby provide a quality assessment of predicted labels. Two Venn-Abers predictors, LR-VA and SVM-VA, were implemented based on Logistic Regression and Support Vector Machine, respectively. The differences among the algorithms were then exploited to build a multimodel fusion algorithm by Stacking, and a Venn-Abers predictor based on the Stacking algorithm, called Stacking-VA, was implemented. The performances of four types of algorithms (unimodel, Venn-Abers predictor based on a unimodel, multimodel, and Venn-Abers predictor based on a multimodel) are compared in terms of validity and accuracy. Experiments were carried out on a log dataset of the Hadoop Distributed File System (HDFS). For the comparative experiments on unimodels, the results show that the validities of LR-VA and SVM-VA are better than those of the two corresponding underlying models. Compared with the underlying models, the accuracy of the SVM-VA predictor is better than that of the LR-VA predictor, and, more significantly, the recall rate increases from 81% to 94%. In the experiments on multiple models, the algorithm based on Stacking multimodel fusion is significantly superior to the underlying classifiers.
The average accuracy of Stacking-VA is above 0.95, and its predictions are more stable than those of LR-VA and SVM-VA. The experimental results show that the Venn-Abers predictor is a flexible tool that can make accurate and valid probability predictions in the field of system log anomaly detection.
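The inductive Venn-Abers idea itself fits in a few lines: for a test object's score, the isotonic calibration is refitted twice, once assuming each possible label, yielding a probability interval [p0, p1] rather than a point estimate. The underlying scorer and data below are illustrative, not the paper's Stacking-VA pipeline.

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression
from sklearn.linear_model import LogisticRegression

def venn_abers(cal_scores, cal_labels, s):
    """Return [p0, p1], an interval bounding P(label = 1) for score s."""
    bounds = []
    for hypothetical in (0, 1):
        iso = IsotonicRegression(y_min=0.0, y_max=1.0, out_of_bounds="clip")
        # refit calibration with the test object assigned each possible label
        iso.fit(np.append(cal_scores, s), np.append(cal_labels, hypothetical))
        bounds.append(float(iso.predict([s])[0]))
    return bounds

rng = np.random.default_rng(3)
X = rng.normal(0.0, 1.0, size=(300, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)           # toy normal/abnormal labels
clf = LogisticRegression().fit(X[:200], y[:200])  # underlying scoring model
cal_scores = clf.decision_function(X[200:])       # calibration-set scores
p0, p1 = venn_abers(cal_scores, y[200:], clf.decision_function(X[:1])[0])
```

The width of [p0, p1] is what gives the quality assessment the abstract describes: a wide interval signals an unreliable prediction.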


2020 ◽  
Vol 2020 ◽  
pp. 1-14
Author(s):  
Hasan Alkahtani ◽  
Theyazn H. H. Aldhyani ◽  
Mohammed Al-Yaari

Telecommunication has registered strong and rapid growth in the past decade. Accordingly, monitoring computers and networks has become too complicated for network administrators alone, and network security represents one of the most serious challenges faced by network security communities. Given that e-banking, e-commerce, and business data are shared over computer networks, these data may be threatened by intrusion. The purpose of this research is to propose a methodology leading to a high level of sustainable protection against cyberattacks. In particular, an adaptive anomaly detection framework was developed using deep learning and machine learning algorithms to manage automatically configured application-level firewalls. Standard network datasets were used to evaluate the proposed model, which is designed to improve the cybersecurity system. Deep learning based on the Long Short-Term Memory Recurrent Neural Network (LSTM-RNN) and machine learning algorithms, namely Support Vector Machine (SVM) and K-Nearest Neighbor (K-NN), were implemented to classify Denial-of-Service (DoS) and Distributed Denial-of-Service (DDoS) attacks. The information gain method was applied to select the relevant features from the network datasets; these features were significant in improving the classification. The system was used to classify DoS and DDoS attacks in four standard datasets, namely KDD Cup'99, NSL-KDD, ISCX, and ICI-ID2017. The empirical results indicate that deep learning based on the LSTM-RNN algorithm obtained the highest accuracy, with testing accuracy rates of 99.51% and 99.91% across the KDD Cup'99, NSL-KDD, ISCX, and ICI-ID2017 datasets.
A comparative analysis between the machine learning algorithms, namely SVM and K-NN, and the deep learning algorithm based on the LSTM-RNN model is presented. It is concluded that the LSTM-RNN model is efficient and effective in improving the cybersecurity system for anomaly-based detection.
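The feature-selection step can be sketched as follows. Mutual information plays the role of the information gain criterion (the two coincide for discrete features), the synthetic "flow" features are assumptions, and a K-NN classifier stands in for the full SVM/K-NN/LSTM-RNN comparison.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(4)
y = rng.integers(0, 2, size=400)                   # 0 = benign, 1 = DoS/DDoS
X_noise = rng.normal(size=(400, 8))                # irrelevant flow features
X_signal = y[:, None] + rng.normal(0.0, 0.3, size=(400, 2))  # informative ones
X = np.hstack([X_noise, X_signal])                 # columns 8 and 9 carry signal

# mutual information acts as the information gain ranking criterion
selector = SelectKBest(mutual_info_classif, k=2).fit(X, y)
knn = KNeighborsClassifier().fit(selector.transform(X), y)
```

Dropping the eight noise columns before classification is exactly the effect the abstract credits to information gain: a smaller, more discriminative input for the classifiers.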


2016 ◽  
Vol 58 ◽  
pp. 121-134 ◽  
Author(s):  
Sarah M. Erfani ◽  
Sutharshan Rajasegarar ◽  
Shanika Karunasekera ◽  
Christopher Leckie

2015 ◽  
Vol 8 (3) ◽  
pp. 1055-1071 ◽  
Author(s):  
R. G. Sivira ◽  
H. Brogniez ◽  
C. Mallet ◽  
Y. Oussar

Abstract. A statistical method trained and optimized to retrieve seven-layer relative humidity (RH) profiles is presented and evaluated with measurements from radiosondes. The method makes use of the microwave payload of the Megha-Tropiques platform, namely the SAPHIR sounder and the MADRAS imager. The approach, based on a generalized additive model (GAM), embeds both the physical and statistical characteristics of the inverse problem in the training phase, and no explicit thermodynamical constraint, such as a temperature profile or an integrated water vapour content, is provided to the model at the retrieval stage. The model is built for cloud-free conditions in order to avoid scattering of the microwave radiation in the 18.7–183.31 GHz range covered by the payload. Two instrumental configurations are tested: a SAPHIR-MADRAS scheme and a SAPHIR-only scheme, the latter to cope with the halt of MADRAS data acquisition in January 2013 for technical reasons. A comparison to machine learning algorithms (artificial neural network and support-vector machine) shows equivalent performance over a large realistic set, with low errors (biases < 2.2%RH) and good scatter (correlations > 0.8) throughout the troposphere (150–900 hPa). A comparison to radiosonde measurements performed during the international field experiment CINDY/DYNAMO/AMIE (winter 2011–2012) confirms these results for the mid-tropospheric layers (correlations between 0.6 and 0.92), with an expected degradation of the quality of the estimates at the surface and top layers. Finally, a brief overview of the estimated large-scale RH field from Megha-Tropiques is presented and compared to ERA-Interim.


2021 ◽  
Author(s):  
Ali Moradi Vartouni ◽  
Matin Shokri ◽  
Mohammad Teshnehlab

Protecting websites and applications from cyber-threats is vital for any organization. A Web Application Firewall (WAF) prevents attacks from damaging applications: it provides web security by filtering and monitoring network traffic. A WAF based on anomaly detection can identify zero-day attacks. Deep learning is the state-of-the-art approach widely used to detect attacks in anomaly-based WAFs. Although deep learning has demonstrated excellent results on anomaly detection tasks in web requests, there is a trade-off between false-positive and missed-attack rates, a key problem in WAF systems. Anomaly detection methods also suffer from the difficulty of adjusting the threshold level that separates attack from normal traffic. In this paper, we first propose a model based on Deep Support Vector Data Description (Deep SVDD) and compare two feature extraction strategies, one-hot and bigram, on the raw requests. Second, to overcome the threshold challenge, we introduce a novel end-to-end algorithm, Auto-Threshold Deep SVDD (ATDSVDD), which determines an appropriate threshold during the learning process. Finally, we compare our model with other deep models on the CSIC-2010 and ECML/PKDD-2007 datasets. Results show that ATDSVDD on bigram feature data achieves better performance in terms of accuracy and generalization.
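The bigram feature extraction on raw requests can be sketched as below, paired with a plain centroid-and-radius anomaly score in the spirit of SVDD (a simple stand-in for the paper's Deep SVDD network, not their method). The toy requests are assumptions.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer

normal_requests = ["GET /index.html", "GET /home", "POST /login user=bob",
                   "GET /img/logo.png", "POST /login user=eve"]
attack_request = "GET /index.php?id=1' OR '1'='1"    # SQL-injection-style payload

vec = CountVectorizer(analyzer="char", ngram_range=(2, 2))  # character bigrams
X = vec.fit_transform(normal_requests).toarray().astype(float)

center = X.mean(axis=0)                              # hypersphere centre
radius = np.linalg.norm(X - center, axis=1).max()    # tightest enclosing radius

z = vec.transform([attack_request]).toarray()[0].astype(float)
score = np.linalg.norm(z - center)                   # distance-based anomaly score
is_anomalous = score > radius
```

The fixed `radius` here is precisely the threshold the abstract calls hard to set; ATDSVDD's contribution is learning that boundary jointly with the representation instead of fixing it after the fact.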


2021 ◽  
Vol 2021 ◽  
pp. 1-17
Author(s):  
Bin Li ◽  
Yuqing He

The synergy of computational logistics and deep learning provides a new methodology and solution for the operational decisions of container terminal handling systems (CTHS) at the strategic, tactical, and executive levels. First, the tactical operational complexity of container terminal logistics is discussed through computational logistics, and the liner handling volume (LHV) is shown to have an important influence on a series of terminal scheduling decision problems. Next, a feature-extraction-based lightweight convolutional and recurrent neural network adaptive computing model (FEB-LCR-ACM) is presented to predict the LHV by fusing multiple deep learning algorithms and mechanisms, in particular the feature extraction package tsfresh. A container-terminal-oriented logistics service scheduling decision support design paradigm is then put forward based on FEB-LCR-ACM. Finally, a typical large-scale container terminal in China is chosen to implement, execute, and evaluate FEB-LCR-ACM on its running log, around the indicator of LHV. Despite severe fluctuation of the LHV between 2 twenty-foot equivalent units (TEUs) and 4215 TEUs, when forecasting the LHV of 300 liners from five years of logs, the forecasting error is within 100 TEUs for almost 80% of cases; when predicting the operation of 350 ships from six years of logs, the deviation is within 100 TEUs for nearly 90%. These two deep learning results with FEB-LCR-ACM are far ahead of the forecasts produced by a classical machine learning algorithm similar to the Gaussian support vector machine. FEB-LCR-ACM thus achieves sufficiently good performance for LHV prediction with a lightweight deep learning architecture on typically small datasets, and it is expected to partially overcome the operational nonlinearity, dynamics, coupling, and complexity of CTHS.
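The extract-features-then-predict pattern can be sketched as below. A few handcrafted window statistics stand in for tsfresh's extracted features, the synthetic LHV series is an assumption, and an RBF-kernel SVR plays the role of the "Gaussian support vector machine" baseline mentioned in the abstract.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(5)
# synthetic liner handling volumes (TEUs) with seasonality and noise
lhv = 2000.0 + 500.0 * np.sin(np.arange(400) / 10.0) + rng.normal(0.0, 50.0, 400)

def window_features(w):
    # mean, spread, and linear trend: a tiny tsfresh-like feature set
    return [w.mean(), w.std(), np.polyfit(np.arange(len(w)), w, 1)[0]]

W = 20  # look-back window of past calls
X = np.array([window_features(lhv[i:i + W]) for i in range(len(lhv) - W)])
y = lhv[W:]                                      # next call's handling volume
svr = SVR(kernel="rbf", C=1000.0).fit(X[:300], y[:300])
err = np.abs(svr.predict(X[300:]) - y[300:])
```

FEB-LCR-ACM replaces both stages: learned convolutional/recurrent features instead of hand-picked statistics, and a neural predictor instead of the SVR.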

