Systematic Review and Quantitative Comparison of Cyberattack Scenario Detection and Projection

Electronics ◽  
2020 ◽  
Vol 9 (10) ◽  
pp. 1722
Author(s):  
Ivan Kovačević ◽  
Stjepan Groš ◽  
Karlo Slovenec

Intrusion Detection Systems (IDSs) automatically analyze event logs and network traffic in order to detect malicious activity and policy violations. Because IDSs produce a large number of false positives and false negatives, and because the technical nature of their alerts requires extensive manual analysis, researchers have proposed approaches that automate alert analysis in order to detect large-scale attacks and predict the attacker's next steps. Unfortunately, many such approaches use unique datasets and success metrics, making comparison difficult. This survey provides an overview of the state of the art in detecting and projecting cyberattack scenarios, with a focus on evaluation and the corresponding metrics. Representative papers were collected using Google Scholar and Scopus searches. Mutually comparable success metrics are calculated and several comparison tables are provided. Our results show that commonly used metrics are saturated on popular datasets and cannot assess the practical usability of the approaches. In addition, approaches with knowledge bases require constant maintenance, while data mining and ML approaches depend on the quality of available datasets, which, at the time of writing, are not representative enough to provide general knowledge regarding attack scenarios. More emphasis therefore needs to be placed on researching the behavior of attackers.
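To illustrate how success metrics can be made mutually comparable across surveyed approaches, here is a minimal Python sketch that derives precision, recall, and F1 from raw confusion counts; the approach names and counts below are placeholders, not values from the survey.

```python
# Minimal sketch: deriving mutually comparable metrics (precision, recall, F1)
# from the confusion counts reported for each surveyed approach.
# The example counts below are placeholders, not values from the survey.

def comparable_metrics(tp: int, fp: int, fn: int) -> dict:
    """Compute precision, recall and F1 from raw confusion counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {"precision": precision, "recall": recall, "f1": f1}

# Hypothetical counts (TP, FP, FN) for two alert-correlation approaches.
for name, counts in {"approach_A": (90, 10, 30), "approach_B": (80, 5, 40)}.items():
    print(name, comparable_metrics(*counts))
```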

2021 ◽  
Vol 2 (3) ◽  
pp. 132-142
Author(s):  
Amer Abulmajeed Abdulrahman Alsameraee ◽  
Mahmood Khalel Ibrahem

Several Intrusion Detection Systems (IDSs) have been proposed in the current decade. Most intrusion detection datasets suffer from a class imbalance problem, which limits classifier performance on minority classes. This paper presents a novel class imbalance processing technique for large-scale multiclass datasets, referred to as BMCD. Our algorithm adapts the Synthetic Minority Over-Sampling Technique (SMOTE) to multiclass datasets to improve the detection rate of minority classes while ensuring efficiency. In this work we combined five individual CICIDS2017 datasets to create one multiclass dataset containing several types of attacks. To demonstrate the efficiency of our algorithm, several machine learning algorithms were applied to the combined dataset with and without the BMCD algorithm. The experimental results show that BMCD provides an effective solution to imbalanced intrusion detection and outperforms state-of-the-art intrusion detection methods.
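The abstract does not specify the internals of BMCD, so the following is only a hedged sketch of the general idea it builds on: applying SMOTE-style oversampling to a multiclass dataset (here with imbalanced-learn and a synthetic stand-in for CICIDS2017) and comparing classifier reports with and without oversampling.

```python
# Minimal sketch (not the BMCD algorithm itself): SMOTE oversampling on a
# multiclass dataset, then training a classifier on the balanced data.
# The random data below is a stand-in; CICIDS2017 CSVs would be loaded here.
import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

X = np.random.rand(3000, 20)
y = np.random.choice(5, 3000, p=[0.70, 0.15, 0.08, 0.05, 0.02])  # imbalanced classes
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# sampling_strategy="auto" oversamples every class except the majority class.
X_bal, y_bal = SMOTE(sampling_strategy="auto", random_state=0).fit_resample(X_tr, y_tr)

clf = RandomForestClassifier(random_state=0).fit(X_bal, y_bal)
print(classification_report(y_te, clf.predict(X_te)))
```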


2015 ◽  
Vol 821-823 ◽  
pp. 528-532 ◽  
Author(s):  
Dirk Lewke ◽  
Karl Otto Dohnke ◽  
Hans Ulrich Zühlke ◽  
Mercedes Cerezuela Barret ◽  
Martin Schellenberger ◽  
...  

One challenge for volume manufacturing of 4H-SiC devices is the state-of-the-art wafer dicing technology, mechanical blade dicing, which suffers from high tool wear and low feed rates. In this paper we discuss Thermal Laser Separation (TLS) as a novel dicing technology for large-scale production of SiC devices. We compare the latest TLS experimental data from fully processed 4H-SiC wafers with results obtained by mechanical dicing technology. Typical product-relevant features such as process control monitoring (PCM) structures and backside metallization, the quality of diced SiC devices, and productivity are considered. With feed rates up to two orders of magnitude higher than the state of the art, no tool wear, and high quality of the diced chips, TLS shows very promising potential to fulfill the demands of volume manufacturing of 4H-SiC devices.


2020 ◽  
Vol 36 (10) ◽  
pp. 3011-3017 ◽  
Author(s):  
Olga Mineeva ◽  
Mateo Rojas-Carulla ◽  
Ruth E Ley ◽  
Bernhard Schölkopf ◽  
Nicholas D Youngblut

Abstract
Motivation: Methodological advances in metagenome assembly are rapidly increasing the number of published metagenome assemblies. However, identifying misassemblies is challenging due to a lack of closely related reference genomes that can act as pseudo ground truth. Existing reference-free methods are no longer maintained, can make strong assumptions that may not hold across a diversity of research projects, and have not been validated on large-scale metagenome assemblies.
Results: We present DeepMAsED, a deep learning approach for identifying misassembled contigs without the need for reference genomes. Moreover, we provide an in silico pipeline for generating large-scale, realistic metagenome assemblies for comprehensive model training and testing. DeepMAsED accuracy substantially exceeds the state of the art when applied to large and complex metagenome assemblies. Our model estimates a 1% contig misassembly rate in two recent large-scale metagenome assembly publications.
Conclusions: DeepMAsED accurately identifies misassemblies in metagenome-assembled contigs from a broad diversity of bacteria and archaea without the need for reference genomes or strong modeling assumptions. Running DeepMAsED is straightforward, as is model re-training with our dataset generation pipeline. Therefore, DeepMAsED is a flexible misassembly classifier that can be applied to a wide range of metagenome assembly projects.
Availability and implementation: DeepMAsED is available from GitHub at https://github.com/leylabmpi/DeepMAsED.
Supplementary information: Supplementary data are available at Bioinformatics online.
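As an illustration only (not the published DeepMAsED architecture), the sketch below shows the general shape of a reference-free misassembly classifier: a small 1D CNN over fixed-length windows of assumed per-position contig features such as coverage and mismatch rate.

```python
# Illustrative sketch only: a small 1D CNN that classifies fixed-length
# windows of per-position contig features (e.g. coverage, mismatch rate)
# as correctly assembled vs. misassembled. This is NOT the published
# DeepMAsED architecture; see https://github.com/leylabmpi/DeepMAsED.
import torch
import torch.nn as nn

class ContigWindowClassifier(nn.Module):
    def __init__(self, n_features: int = 4, window: int = 512):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_features, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=7, padding=3), nn.ReLU(),
            nn.AdaptiveMaxPool1d(1),
        )
        self.head = nn.Linear(64, 1)  # logit: probability of misassembly

    def forward(self, x):             # x: (batch, n_features, window)
        return self.head(self.conv(x).squeeze(-1))

scores = torch.sigmoid(ContigWindowClassifier()(torch.randn(8, 4, 512)))
print(scores.shape)  # per-window misassembly probabilities
```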


2021 ◽  
Vol 10 (9) ◽  
pp. 144-147
Author(s):  
Huiling LI ◽  
Xuan SU ◽  
Shuaipeng ZHANG

Massive amounts of business process event logs are collected and stored by modern information systems. Process model discovery aims to discover a process model from such event logs; however, most existing approaches still suffer from low efficiency when facing large-scale event logs. Event log sampling techniques provide an effective way to improve the efficiency of process discovery, but existing techniques still cannot guarantee the quality of the mined model. Therefore, a sampling approach based on a set coverage algorithm, named the set coverage sampling approach, is proposed. The proposed sampling approach has been implemented in the open-source process mining toolkit ProM. Furthermore, experiments on a real event log, covering conformance checking and time performance analysis, show that the proposed event log sampling approach can greatly improve efficiency while ensuring the quality of the mined model.
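The abstract does not detail the set coverage criterion, so the following is a hedged sketch of one plausible reading: greedily selecting traces until every directly-follows pair observed in the log is covered.

```python
# Minimal sketch of set-cover-based trace sampling (an assumption about the
# approach; the paper's exact coverage criterion is not given in the abstract).
# Greedily pick traces until every directly-follows pair in the log is covered.

def directly_follows(trace):
    return {(a, b) for a, b in zip(trace, trace[1:])}

def set_cover_sample(log):
    universe = set().union(*(directly_follows(t) for t in log))
    uncovered, sample = set(universe), []
    while uncovered:
        best = max(log, key=lambda t: len(directly_follows(t) & uncovered))
        gain = directly_follows(best) & uncovered
        if not gain:
            break
        sample.append(best)
        uncovered -= gain
    return sample

log = [("a", "b", "c"), ("a", "c"), ("a", "b", "d", "c"), ("a", "b", "c")]
print(set_cover_sample(log))  # small sample that preserves all behaviour pairs
```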


Author(s):  
Vít Bukač ◽  
Vashek Matyáš

In this chapter, the reader explores both the founding ideas and the state-of-the-art research on host-based intrusion detection systems (HIDSs). HIDSs are categorized by their intrusion detection method. Each category is thoroughly investigated, and its limitations and benefits are discussed. Seminal research findings and ideas are presented and supplied with comments. Separate sections are devoted to protection against tampering and to the HIDS evasion techniques employed by attackers. Existing research trends are highlighted, and possible future directions are suggested.


Author(s):  
Kun Yuan ◽  
Qian Zhang ◽  
Chang Huang ◽  
Shiming Xiang ◽  
Chunhong Pan

Person Re-identification (ReID) is a challenging retrieval task that requires matching a person's image across non-overlapping camera views. The quality of this matching is largely determined by the robustness of the features used to describe the person. In this paper, we show the advantage of jointly utilizing multi-scale abstract information to learn powerful features over the full body and its parts. A scale normalization module is proposed to balance different scales through residual-based integration. To exploit the information hidden in non-rigid body parts, we propose an anchor-based method that captures local content by stacking convolutions of kernels with various aspect ratios, which focus on different spatial distributions. Finally, a well-defined framework is constructed for simultaneously learning the representations of both the full body and its parts. Extensive experiments conducted on current challenging large-scale person ReID datasets, including Market1501, CUHK03 and DukeMTMC, demonstrate that our proposed method achieves state-of-the-art results.
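As a hedged illustration of the anchor-based idea (not the paper's exact module), the sketch below stacks convolutions whose kernels have different aspect ratios and concatenates their responses, so that tall, wide, and square receptive fields attend to different spatial distributions.

```python
# Illustrative sketch (assumptions, not the paper's exact module): capturing
# part-level context by stacking convolutions whose kernels have different
# aspect ratios, then concatenating the responses.
import torch
import torch.nn as nn

class MultiAspectBlock(nn.Module):
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        # Tall, wide and square kernels focus on different spatial extents.
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, out_ch, kernel_size=(7, 3), padding=(3, 1)),
            nn.Conv2d(in_ch, out_ch, kernel_size=(3, 7), padding=(1, 3)),
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        ])

    def forward(self, x):
        return torch.cat([torch.relu(b(x)) for b in self.branches], dim=1)

feat = torch.randn(2, 256, 24, 8)             # backbone feature map of a person image
print(MultiAspectBlock(256, 64)(feat).shape)  # -> (2, 192, 24, 8)
```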


2017 ◽  
Vol 2017 ◽  
pp. 1-17
Author(s):  
Chunhua Li ◽  
Pengpeng Zhao ◽  
Victor S. Sheng ◽  
Xuefeng Xian ◽  
Jian Wu ◽  
...  

Machine-constructed knowledge bases often contain noisy and inaccurate facts. There is a significant body of work on developing automated algorithms for knowledge base refinement. Automated approaches improve the quality of knowledge bases but are far from perfect. In this paper, we leverage crowdsourcing to improve the quality of automatically extracted knowledge bases. As human labelling is costly, an important research challenge is how to use limited human resources to maximize the quality improvement of a knowledge base. To address this problem, we first introduce a concept of semantic constraints that can be used to detect potential errors and perform inference among candidate facts. Then, based on semantic constraints, we propose rank-based and graph-based algorithms for crowdsourced knowledge refining, which judiciously select the most beneficial candidate facts for crowdsourcing and prune unnecessary questions. Our experiments show that our method improves the quality of knowledge bases significantly and outperforms state-of-the-art automatic methods under a reasonable crowdsourcing cost.
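The paper's exact selection strategy is not given in the abstract; the sketch below illustrates one plausible rank-based scheme, scoring candidate facts by extractor uncertainty plus the number of semantic-constraint (here, functionality) violations and sending only the top-ranked facts to the crowd.

```python
# Minimal sketch (an illustration, not the paper's algorithm): rank candidate
# facts by how informative crowdsourcing them would be, approximated here by
# constraint violations plus extractor uncertainty.

def violations(fact, facts, functional_relations):
    """Count functionality violations: same subject and relation, different object."""
    s, r, o = fact
    return sum(1 for s2, r2, o2 in facts
               if r in functional_relations and s2 == s and r2 == r and o2 != o)

def rank_for_crowdsourcing(candidates, functional_relations, budget=2):
    facts = [f for f, _ in candidates]
    def score(item):
        fact, conf = item
        # High violation count and low confidence margin both make a fact
        # more worth asking the crowd about.
        return violations(fact, facts, functional_relations) - abs(conf - 0.5)
    ranked = sorted(candidates, key=score, reverse=True)
    return [fact for fact, _ in ranked[:budget]]

candidates = [(("Paris", "capital_of", "France"), 0.95),
              (("Paris", "capital_of", "Germany"), 0.55),
              (("Rhine", "flows_through", "Germany"), 0.90)]
print(rank_for_crowdsourcing(candidates, {"capital_of"}))  # the two conflicting facts
```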


Author(s):  
Yujin Yuan ◽  
Liyuan Liu ◽  
Siliang Tang ◽  
Zhongfei Zhang ◽  
Yueting Zhuang ◽  
...  

Distant supervision leverages knowledge bases to automatically label instances, allowing us to train a relation extractor without human annotations. However, the generated training data typically contain massive noise and may result in poor performance with vanilla supervised learning. In this paper, we propose to conduct multi-instance learning with a novel Cross-relation Cross-bag Selective Attention (C2SA), which leads to noise-robust training of a distantly supervised relation extractor. Specifically, we employ sentence-level selective attention to reduce the effect of noisy or mismatched sentences, while the correlation among relations is captured to improve the quality of the attention weights. Moreover, instead of treating all entity pairs equally, we pay more attention to entity pairs of higher quality, again using a selective attention mechanism. Experiments with two types of relation extractors demonstrate the superiority of the proposed approach over the state of the art, while further ablation studies verify our intuitions and demonstrate the effectiveness of the two proposed techniques.
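As a hedged sketch of the sentence-level selective attention component (the cross-relation, cross-bag weighting of C2SA is not reproduced here), the snippet below scores each sentence embedding in a bag against a relation query vector and forms the bag representation as the attention-weighted sum.

```python
# Minimal sketch of sentence-level selective attention over a bag (assumed
# form; not the full C2SA mechanism): each sentence embedding is scored
# against a relation query vector, and the bag representation is the
# attention-weighted sum, which down-weights noisy or mismatched sentences.
import torch
import torch.nn.functional as F

def selective_attention(sentence_embs, relation_query):
    """sentence_embs: (n_sentences, d); relation_query: (d,)"""
    scores = sentence_embs @ relation_query   # compatibility per sentence
    weights = F.softmax(scores, dim=0)        # attention over the bag
    return weights @ sentence_embs            # bag representation of shape (d,)

bag = torch.randn(5, 128)    # encoded sentences mentioning the same entity pair
query = torch.randn(128)     # embedding of the candidate relation
print(selective_attention(bag, query).shape)
```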

