Detecting Insider Threat from Behavioral Logs Based on Ensemble and Self-Supervised Learning

2021 ◽  
Vol 2021 ◽  
pp. 1-11
Author(s):  
Chunrui Zhang ◽  
Shen Wang ◽  
Dechen Zhan ◽  
Tingyue Yu ◽  
Tiangang Wang ◽  
...  

Recent studies have highlighted that insider threats are more destructive than external network threats. Despite extensive research, the spatial heterogeneity and sample imbalance of input features still limit the effectiveness of existing machine learning-based detection methods. To address this problem, we propose a supervised insider threat detection method based on ensemble learning and self-supervised learning. Moreover, we propose an entity representation method based on TF-IDF to improve detection performance. Experimental results show that the proposed method can effectively detect malicious sessions in the CERT4.2 and CERT6.2 datasets, achieving AUCs of 99.2% and 95.3%, respectively, in the best case.
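
A minimal sketch of the TF-IDF entity-representation idea described above, assuming each behavioral session is flattened into a "document" of entity tokens (files, URLs, devices) and fed to a generic ensemble classifier; the token format, toy sessions, and choice of random forest are illustrative assumptions, not the authors' exact pipeline.

```python
# Hypothetical sketch: TF-IDF over per-session entity tokens, scored by an
# ensemble classifier. Tokens and labels below are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline

# Each session is rendered as a "document" of the entities it touched
# (files, URLs, removable devices), one token per action.
sessions = [
    "file:budget.xlsx url:pastebin.com device:usb0",
    "file:report.docx url:intranet.corp file:report.docx",
]
labels = [1, 0]  # 1 = malicious session, 0 = benign

model = make_pipeline(
    TfidfVectorizer(token_pattern=r"\S+"),   # keep entity tokens intact
    RandomForestClassifier(n_estimators=200, random_state=0),
)
model.fit(sessions, labels)
print(model.predict_proba(["file:budget.xlsx device:usb0"]))
```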

2020 ◽  
Vol 10 (15) ◽  
pp. 5208
Author(s):  
Mohammed Nasser Al-Mhiqani ◽  
Rabiah Ahmad ◽  
Z. Zainal Abidin ◽  
Warusia Yassin ◽  
Aslinda Hassan ◽  
...  

Insider threat has become a widely acknowledged issue and one of the major challenges in cybersecurity. This phenomenon indicates that such threats require special detection systems, methods, and tools capable of accurate and fast detection of a malicious insider. Several studies on insider threat detection and related areas have been proposed, and various studies have aimed to deepen the conceptual understanding of insider threats. However, they suffer from notable limitations: a lack of real cases, biases in drawing conclusions, and the absence of a survey that examines insider threats from multiple perspectives and covers their theoretical, technical, and statistical aspects. This survey presents a taxonomy of contemporary insider types, access, level, motivation, insider profiling, affected security properties, and the methods attackers use to conduct attacks, together with a review of notable recent work on insider threat detection covering the analyzed behaviors, machine-learning techniques, datasets, detection methodologies, and evaluation metrics. Several real cases of insider threats have been analyzed to provide statistical information about insiders. In addition, the survey highlights the challenges faced by other researchers and provides recommendations to minimize these obstacles.


2019 ◽  
Vol 9 (19) ◽  
pp. 4018 ◽  
Author(s):  
Kim ◽  
Park ◽  
Kim ◽  
Cho ◽  
Kang

Insider threats are malicious activities by authorized users, such as theft of intellectual property or security information, fraud, and sabotage. Although the number of insider threats is much lower than that of external network attacks, insider threats can cause extensive damage. Because insiders are very familiar with an organization's systems, it is very difficult to detect their malicious behavior. Traditional insider-threat detection methods focus on rule-based approaches built by domain experts, but they are neither flexible nor robust. In this paper, we propose insider-threat detection methods based on user behavior modeling and anomaly detection algorithms. From user log data, we constructed three types of datasets: a user's daily activity summary, e-mail content topic distributions, and a user's weekly e-mail communication history. We then applied four anomaly detection algorithms and their combinations to detect malicious activities. Experimental results indicate that the proposed framework works well for imbalanced datasets in which there are only a few insider threats and no domain expert knowledge is provided.
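
As an illustration of combining several unsupervised anomaly detectors over per-user activity features, here is a hedged sketch using off-the-shelf scikit-learn detectors; the feature columns, synthetic data, and score-averaging rule are assumptions, not the paper's exact configuration.

```python
# Illustrative sketch: score daily-activity feature vectors with several
# anomaly detectors and average their normalized anomaly scores.
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.neighbors import LocalOutlierFactor
from sklearn.svm import OneClassSVM
from sklearn.preprocessing import StandardScaler, minmax_scale

rng = np.random.default_rng(0)
# Columns (assumed): logons, after-hours logons, emails sent, files to USB.
X_train = rng.poisson(lam=[20, 1, 30, 0], size=(500, 4)).astype(float)
X_test = np.array([[22, 0, 28, 0], [15, 9, 40, 12]], dtype=float)

scaler = StandardScaler().fit(X_train)
Xtr, Xte = scaler.transform(X_train), scaler.transform(X_test)

detectors = [
    IsolationForest(random_state=0).fit(Xtr),
    LocalOutlierFactor(novelty=True).fit(Xtr),
    OneClassSVM(nu=0.05).fit(Xtr),
]
# Negate score_samples so that higher = more anomalous, then rescale and average.
scores = np.mean([minmax_scale(-d.score_samples(Xte)) for d in detectors], axis=0)
print(scores)  # the second (after-hours, USB-heavy) day should score higher
```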


Entropy ◽  
2021 ◽  
Vol 23 (10) ◽  
pp. 1258
Author(s):  
Taher Al-Shehari ◽  
Rakan A. Alsowail

Insider threats are malicious acts that can be carried out by an authorized employee within an organization. They represent a major cybersecurity challenge for private and public organizations, as an insider attack can cause far more extensive damage to organization assets than external attacks. Most existing approaches in the field of insider threat focus on detecting general insider attack scenarios. However, insider attacks can be carried out in different ways, and among the most dangerous is a data leakage attack executed by a malicious insider before leaving an organization. This paper proposes a machine learning-based model for detecting such serious insider threat incidents. The proposed model addresses the possible bias of detection results that can arise from an inappropriate encoding process by employing feature scaling and one-hot encoding techniques. Furthermore, the class imbalance of the utilized dataset is addressed with the synthetic minority oversampling technique (SMOTE). Well-known machine learning algorithms are compared to identify the most accurate classifier for detecting data leakage events executed by malicious insiders during the sensitive period before they leave an organization. We provide a proof of concept for our model by applying it to the CMU-CERT Insider Threat Dataset and comparing its performance with the ground truth. The experimental results show that our model detects insider data leakage events with an AUC-ROC value of 0.99, outperforming the existing approaches validated on the same dataset. The proposed model provides effective methods to address possible bias and class imbalance issues, with the aim of devising an effective insider data leakage detection system.
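
A minimal sketch of the preprocessing chain the abstract describes (one-hot encoding, feature scaling, SMOTE oversampling, then a standard classifier), assuming imbalanced-learn is available; the toy columns, labels, and choice of random forest are invented for illustration only.

```python
# Hedged sketch: one-hot encode categorical features, scale numeric ones,
# oversample the minority class with SMOTE, then fit a classifier.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder, StandardScaler
from sklearn.ensemble import RandomForestClassifier
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline  # pipeline that accepts samplers

X = pd.DataFrame({
    "activity": ["email", "http", "device", "email"] * 25,
    "after_hours_events": [0, 1, 7, 0] * 25,
})
y = [0, 0, 1, 0] * 25  # rare "data leakage" label

pre = ColumnTransformer([
    ("cat", OneHotEncoder(handle_unknown="ignore"), ["activity"]),
    ("num", StandardScaler(), ["after_hours_events"]),
])
clf = Pipeline([
    ("prep", pre),
    ("smote", SMOTE(random_state=0)),          # oversample minority class
    ("model", RandomForestClassifier(random_state=0)),
])
clf.fit(X, y)
print(clf.predict(X.head(4)))
```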


Journal of Computers (電腦學刊) ◽  
2021 ◽  
Vol 32 (4) ◽  
pp. 201-210
Author(s):  
Zhenjiang Zhang ◽  
Yang Zhang


2019 ◽  
Vol 2019 ◽  
pp. 1-9 ◽  
Author(s):  
Xin Ma ◽  
Shize Guo ◽  
Wei Bai ◽  
Jun Chen ◽  
Shiming Xia ◽  
...  

The explosive growth of malware variants poses a continuously and deeply evolving challenge to information security. Traditional malware detection methods require substantial manual effort. Machine learning now plays an important role in malware classification and detection, but it is easily spoofed by malware that disguises itself as benign software through self-protection techniques, which leads to poor performance for existing machine learning-based techniques. In this paper, we analyze the local maliciousness of malware and implement an anti-interference detection framework based on API fragments, which uses an LSTM model to classify API fragments and employs ensemble learning to determine the final result for the entire API sequence. We present experimental results on the Ali-Tianchi contest API datasets. Comparisons with several common methods show that our local-maliciousness-based approach performs better, achieving a higher accuracy of 0.9734.
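
A hedged sketch of the fragment-level idea: cut an API-call sequence into fixed-length fragments, score each fragment with a small LSTM, and take a majority vote for the sequence label. The vocabulary size, fragment length, and network dimensions are guesses, and the model below is untrained and shown only for shape checking.

```python
# Illustrative fragment-level LSTM with majority voting over fragments.
import torch
import torch.nn as nn

VOCAB, EMBED, HIDDEN, FRAG = 100, 16, 32, 8  # assumed sizes

class FragmentLSTM(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, EMBED)
        self.lstm = nn.LSTM(EMBED, HIDDEN, batch_first=True)
        self.head = nn.Linear(HIDDEN, 2)    # benign vs. malicious fragment

    def forward(self, frags):               # frags: (n_fragments, FRAG)
        _, (h, _) = self.lstm(self.embed(frags))
        return self.head(h[-1])             # logits per fragment

def classify_sequence(model, api_ids):
    # Pad the API-id sequence and cut it into non-overlapping fragments.
    pad = (-len(api_ids)) % FRAG
    ids = torch.tensor(api_ids + [0] * pad).view(-1, FRAG)
    votes = model(ids).argmax(dim=1)        # per-fragment decisions
    return int(votes.float().mean() > 0.5)  # simple majority vote

model = FragmentLSTM()                      # untrained, for shape checking only
print(classify_sequence(model, [3, 17, 42, 5, 9, 11, 60, 8, 2, 7]))
```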


2020 ◽  
Vol 10 (2) ◽  
pp. 1-26
Author(s):  
Naghmeh Moradpoor Sheykhkanloo ◽  
Adam Hall

An insider threat can take many forms and fall under different categories, including the malicious insider, the careless/unaware/uneducated/naïve employee, and the third-party contractor. Machine learning techniques have been studied in the published literature as a promising solution for such threats. However, they can be biased and/or inaccurate when the associated dataset is heavily imbalanced. Therefore, this article addresses insider threat detection on an extremely imbalanced dataset, employing a popular balancing technique known as spread subsample. The results show that although balancing the dataset with this technique did not improve the performance metrics, it did reduce the time taken to build the model and the time taken to test it. Additionally, the authors found that running the chosen classifiers with parameters other than the defaults has an impact in both the balanced and imbalanced scenarios, but the impact is significantly stronger on the imbalanced dataset.
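
Spread subsample is a Weka filter; a rough Python analogue, assuming imbalanced-learn, is to undersample the majority class so that the majority-to-minority spread does not exceed a chosen bound (1:1 below). The toy data and the use of RandomUnderSampler are illustrative assumptions, not the article's exact setup.

```python
# Approximate analogue of Weka's SpreadSubsample: cap the class-distribution
# spread by randomly undersampling the majority class.
from collections import Counter
import numpy as np
from imblearn.under_sampling import RandomUnderSampler

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = np.array([1] * 10 + [0] * 990)           # extremely imbalanced labels

# sampling_strategy=1.0 keeps all minority samples and draws an equal number
# of majority samples, i.e. a spread of 1.
sampler = RandomUnderSampler(sampling_strategy=1.0, random_state=0)
X_bal, y_bal = sampler.fit_resample(X, y)
print(Counter(y), "->", Counter(y_bal))      # {0: 990, 1: 10} -> {0: 10, 1: 10}
```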


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Gang Li ◽  
Yongqiang Chen ◽  
Jian Zhou ◽  
Xuan Zheng ◽  
Xue Li

Purpose: Periodic inspection and maintenance are essential for effective pavement preservation. Cracks not only affect the appearance of the road and reduce its levelness, but also shorten the road's life. However, traditional road crack detection methods based on manual investigation and image processing are costly, inefficient, and unreliable. This research aims to replace the traditional road crack detection method and further improve detection performance.

Design/methodology/approach: In this paper, a crack detection method based on a matrix network fusing corner-based detection and a segmentation network is proposed to effectively identify cracks. The method combines ResNet-152 with the matrix network as the backbone to achieve feature reuse for cracks. The crack region is identified by corners, and a segmentation network is constructed to extract the crack. Finally, parameters such as crack length and width are calculated from the geometric characteristics of the cracks; the relative errors with respect to the actual values are 4.23% and 6.98%, respectively.

Findings: To improve the accuracy of crack detection, the model was optimized with the Adam algorithm, trained and tested on a mixture of two publicly available datasets, and compared with various methods. The results show that the detection performance of the proposed method is better than that of many strong algorithms, and its anti-interference ability is strong.

Originality/value: This paper proposes a new type of road crack detection method. Its detection performance is better than that of a variety of detection algorithms, and it has strong anti-interference ability, so it can replace traditional crack detection methods and meet engineering needs.
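
The length and width calculation from crack geometry could look roughly like the following, assuming a binary segmentation mask and a known pixel resolution; the skeleton-based formulas and the synthetic mask are a simplification, not the authors' exact measurement procedure.

```python
# Illustrative crack length/width estimation from a binary segmentation mask.
import numpy as np
from skimage.morphology import skeletonize

mask = np.zeros((60, 60), dtype=bool)
mask[10:50, 28:31] = True                # a synthetic 3-pixel-wide crack

pixel_mm = 0.5                           # assumed ground resolution (mm per pixel)
skeleton = skeletonize(mask)             # 1-pixel-wide centerline of the crack
length_px = skeleton.sum()               # crude length: number of centerline pixels
area_px = mask.sum()
mean_width_px = area_px / max(length_px, 1)

print(f"length ~ {length_px * pixel_mm:.1f} mm, "
      f"mean width ~ {mean_width_px * pixel_mm:.1f} mm")
```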


Author(s):  
Gerald Matthews ◽  
Lauren Reinerman-Jones ◽  
Ryan Wohleber ◽  
Eric Ortiz

Insider Threats (ITs) are hard to identify because of insiders' knowledge of the organization and their motivation to avoid detection. One approach to detecting ITs utilizes Active Indicators (AIs), stimuli that elicit a characteristic response from the insider. The present research implemented this approach within a simulation of financial investigative work. A sequence of AIs associated with accessing a locked file was introduced into an ongoing workflow. Participants allocated to an insider role accessed the file illicitly. Eye-tracking metrics were used to differentiate insiders from control participants performing a legitimate role. The data suggested that ITs may show responses indicative of strategic concealment of interest and emotional stress. Such findings may provide the basis for a cognitive engineering approach to IT detection.

