New Insights into Approaches to Evaluating Intention and Path for Network Multistep Attacks

2018 ◽  
Vol 2018 ◽  
pp. 1-13 ◽  
Author(s):  
Hao Hu ◽  
Yuling Liu ◽  
Yingjie Yang ◽  
Hongqi Zhang ◽  
Yuchen Zhang

The attack graph (AG) is an abstraction technique that reveals the ways in which an attacker can leverage vulnerabilities in a given network to violate security policies. The analyses developed to extract security-relevant properties are referred to as AG-based security evaluations. In recent years, many evaluation approaches have been explored. However, they generally rest on the attacker "monotonicity" assumption, a limitation that calls for further improvement. To address this issue, a stochastic mathematical model, the absorbing Markov chain (AMC), is applied over the AG to provide two new insights: the expected success probability of attack intention (EAIP) and the expected attack path length (EAPL). Our evaluations identify the preferred target hosts for mitigation and the vulnerability-patching prioritization of intermediate hosts. Tests on the public datasets DARPA2000 and Defcon's CTF23 both verify that our evaluations are feasible and reliable.
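Stated as code, the two metrics follow directly from the standard theory of absorbing Markov chains: the fundamental matrix N = (I - Q)^-1 gives expected visit counts, N·1 gives the expected number of steps before absorption (EAPL), and N·R gives the absorption probabilities (EAIP). The sketch below is a minimal illustration with a toy three-host transition matrix, not the authors' implementation.

```python
# Minimal sketch (toy data, not the paper's code) of EAIP/EAPL from an
# absorbing Markov chain laid over an attack graph. Transient states are
# intermediate hosts; the single absorbing state is the attack goal.
import numpy as np

# Canonical form P = [[Q, R], [0, I]]; each row of [Q | R] sums to 1.
Q = np.array([[0.0, 0.6, 0.2],    # host0 -> host0/host1/host2
              [0.0, 0.0, 0.5],    # host1 -> host2
              [0.0, 0.0, 0.0]])   # host2 has no transient successors
R = np.array([[0.2],              # host0 -> goal (absorbing)
              [0.5],              # host1 -> goal
              [1.0]])             # host2 -> goal

N = np.linalg.inv(np.eye(Q.shape[0]) - Q)   # fundamental matrix
eapl = N @ np.ones(Q.shape[0])              # expected steps before absorption
eaip = N @ R                                # absorption (success) probabilities

print("EAPL per starting host:", eapl)
print("EAIP per starting host:", eaip.ravel())
```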

2021 ◽  
Vol 2132 (1) ◽  
pp. 012020
Author(s):  
Jinwei Yang ◽  
Yu Yang

Intrusion intent and path prediction are important for security administrators to gain insight into the possible threat behavior of attackers. Existing research has mainly focused on path prediction in ideal attack scenarios, yet the ideal attack path is not always the real path taken by an intruder. To predict network intrusion paths accurately and comprehensively, a multi-step attack path prediction method based on absorbing Markov chains is proposed. First, a node state-transition probability normalization algorithm is designed that exploits the no-aftereffect (memoryless) and absorbing properties of state transitions in an absorbing Markov chain, and it is proved that a complete attack graph can be mapped to an absorbing Markov chain. Economic indexes for protection cost and attack benefit are then constructed, together with a method for quantifying them, and an optimal security protection policy selection algorithm based on particle swarm optimization is proposed. Finally, the feasibility and effectiveness of the model in protection policy decision-making are verified experimentally; it can effectively reduce network security risks and provide more security protection guidance for timely response to network attack threats.
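As a rough illustration of the first step, the sketch below row-normalizes per-edge exploit scores so that every non-absorbing node's outgoing probabilities sum to 1, which is what allows the attack graph to be read as an absorbing Markov chain. The edge list and scores are invented for illustration; the paper's exact normalization algorithm may differ.

```python
# Hedged sketch: normalize raw per-edge exploit scores (e.g. CVSS-derived)
# into valid transition probabilities for an absorbing-Markov-chain reading
# of the attack graph. Not the paper's algorithm.
from collections import defaultdict

# (source, target, raw exploitability score) - illustrative values only
edges = [("A", "B", 0.8), ("A", "C", 0.4), ("B", "goal", 0.6), ("C", "goal", 0.9)]

outgoing = defaultdict(list)
for src, dst, score in edges:
    outgoing[src].append((dst, score))

transition = {}
for src, succ in outgoing.items():
    total = sum(score for _, score in succ)
    for dst, score in succ:
        transition[(src, dst)] = score / total   # row-normalize per source node

print(transition)   # outgoing probabilities of A, B, C each sum to 1
```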


Information ◽  
2019 ◽  
Vol 10 (2) ◽  
pp. 75 ◽  
Author(s):  
Yuan Ping ◽  
Baocang Wang ◽  
Shengli Tian ◽  
Jingxian Zhou ◽  
Hui Ma

By introducing an easy knapsack-type problem, a probabilistic knapsack-type public key cryptosystem (PKCHD) is proposed. It uses the Chinese remainder theorem to disguise the easy knapsack sequence. Hence, to recover the trapdoor information, an attacker has to solve at least two hard number-theoretic problems, namely the integer factorization and simultaneous Diophantine approximation problems. In PKCHD, the encryption function is nonlinear in the message vector. Under the re-linearization attack model, PKCHD achieves a high density and is secure against low-density subset-sum attacks, and the success probability for an attacker to recover the message vector with a single call to a lattice oracle is negligible. The infeasibility of other attacks on the proposed PKCHD is also investigated. Meanwhile, it can use the hardest knapsack vector as the public key if density is taken as the measure of a knapsack instance's hardness. Furthermore, PKCHD requires only quadratic bit operations, which confirms its efficiency in encrypting a message and deciphering a given ciphertext.
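For readers unfamiliar with CRT-based disguising, the sketch below shows the general idea: each public weight agrees with an easy (superincreasing) weight modulo a secret modulus and with a random value modulo a second modulus, so the easy structure is only visible to the trapdoor holder. This is a generic toy illustration, not the actual PKCHD key generation, and the moduli are far too small for any real use.

```python
# Toy illustration of disguising a superincreasing knapsack sequence with the
# Chinese remainder theorem (Python 3.8+ for pow(x, -1, m)).
import random

def crt_pair(r1, m1, r2, m2):
    """Combine x = r1 (mod m1) and x = r2 (mod m2) for coprime m1, m2."""
    inv = pow(m1, -1, m2)                  # modular inverse of m1 mod m2
    return (r1 + m1 * ((r2 - r1) * inv % m2)) % (m1 * m2)

easy = [2, 5, 11, 23, 48]                  # superincreasing (easy) sequence
p, q = 101, 997                            # coprime moduli (toy sizes)
public = [crt_pair(b, p, random.randrange(q), q) for b in easy]

# The trapdoor holder recovers the easy sequence by reducing modulo p;
# without p the public weights look unstructured.
assert [w % p for w in public] == easy
print(public)
```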


Author(s):  
Taye Girma Debelee ◽  
Abrham Gebreselasie ◽  
Friedhelm Schwenker ◽  
Mohammadreza Amirian ◽  
Dereje Yohannes

In this paper, a modified adaptive K-means (MAKM) method is proposed to extract the region of interest (ROI) from local and public datasets. The local image dataset was collected from Bethezata General Hospital (BGH) and the public dataset is from the Mammographic Image Analysis Society (MIAS). The same number of images is used for both datasets: 112 abnormal and 208 normal. Two texture feature sets (GLCM and Gabor) extracted from the ROIs and one set of CNN-based features are considered in the experiment. The CNN features are extracted using the pre-trained Inception-V3 model after simple preprocessing and cropping. The quality of the features is evaluated individually and after fusing them with one another, and five classifiers (SVM, KNN, MLP, RF, and NB) are used to measure the descriptive power of the features under cross-validation. The proposed approach was first evaluated on the local dataset and then applied to the public dataset. The results of the classifiers are measured using accuracy, sensitivity, specificity, kappa, computation time, and AUC. The experimental analysis using GLCM features from the two datasets indicates that GLCM features from the BGH dataset outperformed those of the MIAS dataset with all five classifiers. However, Gabor features from the two datasets scored the best results with two classifiers (SVM and MLP). For BGH and MIAS respectively, SVM scored accuracies of 99% and 97.46%, sensitivities of 99.48% and 96.26%, and specificities of 98.16% and 100%; MLP achieved accuracies of 97% and 87.64%, sensitivities of 97.40% and 96.65%, and specificities of 96.26% and 75.73%. The relatively best performance is achieved by fusing Gabor and CNN-based features with the MLP classifier. However, KNN, MLP, RF, and NB achieved almost 100% performance for GLCM texture features, and SVM scored an accuracy of 96.88%, a sensitivity of 97.14%, and a specificity of 96.36%. Compared to the other classifiers, NB required the least computation time in all experiments.
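A minimal sketch of the kind of pipeline described here, GLCM texture features scored with an SVM under cross-validation, is given below. It uses scikit-image's graycomatrix/graycoprops (named greycomatrix/greycoprops in older releases) and random toy images in place of the mammogram ROIs; none of it is the authors' code.

```python
# Hedged sketch: GLCM texture features from ROI crops, scored with an SVM
# under 5-fold cross-validation. Toy random images stand in for real ROIs.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def glcm_features(roi):
    """Contrast/homogeneity/energy/correlation from a grey-level co-occurrence matrix."""
    glcm = graycomatrix(roi, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ("contrast", "homogeneity", "energy", "correlation")
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

rng = np.random.default_rng(0)
rois = rng.integers(0, 256, size=(60, 64, 64), dtype=np.uint8)  # toy ROI crops
labels = rng.integers(0, 2, size=60)                            # abnormal / normal

X = np.array([glcm_features(r) for r in rois])
print(cross_val_score(SVC(kernel="rbf"), X, labels, cv=5).mean())
```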


Author(s):  
Somak Bhattacharya ◽  
Samresh Malhotra ◽  
S. K. Ghosh

As networks continue to grow in size and complexity, automatic assessment of security vulnerability becomes increasingly important. The typical means by which an attacker breaks into a network is through a series of exploits, where each exploit in the series satisfies the pre-condition for subsequent exploits, establishing a causal relationship among them. Such a series of exploits constitutes an attack path, and the set of all possible attack paths forms an attack graph. Attack graphs reveal the threat by enumerating all possible sequences of exploits that can be followed to compromise a given critical resource. The contribution of this chapter is to identify, in an automated fashion, the most probable attack path based on the attack surface measures of the individual hosts for a given network, and to identify the minimum possible network-securing options for a given attack graph. The identified network-securing options are exhaustive, and the proposed approach aims at detecting cycles in forward-reachable attack graphs. As a whole, the chapter deals with identification of the probable attack path and with risk mitigation, which may help improve the overall security of an enterprise network.
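One common way to realize the "most probable attack path" idea, maximizing the product of per-edge success probabilities by running a shortest-path search over -log(probability) edge weights, is sketched below with networkx. The graph, the probabilities, and the use of simple_cycles for cycle detection are illustrative assumptions, not the chapter's specific algorithm.

```python
# Hedged sketch: most probable attack path via shortest path on -log(p) weights.
import math
import networkx as nx

G = nx.DiGraph()
# (source, target, probability that the exploit on this edge succeeds) - toy values
for u, v, p in [("attacker", "web", 0.9), ("web", "db", 0.5),
                ("attacker", "mail", 0.6), ("mail", "db", 0.8)]:
    G.add_edge(u, v, weight=-math.log(p), prob=p)

path = nx.shortest_path(G, "attacker", "db", weight="weight")
prob = math.prod(G[u][v]["prob"] for u, v in zip(path, path[1:]))
print(path, prob)                 # ['attacker', 'mail', 'db'] 0.48
print(list(nx.simple_cycles(G)))  # cycle detection: empty for this toy graph
```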


2020 ◽  
Vol 10 (17) ◽  
pp. 5922 ◽  
Author(s):  
Yong Fang ◽  
Jian Gao ◽  
Zhonglin Liu ◽  
Cheng Huang

In the context of increasing cyber threats and attacks, monitoring and analyzing network security incidents in a timely and effective way is key to ensuring network infrastructure security. As one of the world's most popular social media sites, Twitter carries all kinds of user posts, from daily life to global news and political strategy. It can therefore aggregate a large number of network-security-related events promptly and provide a source of information flow about cyber threats. In this paper, to detect cyber threat events on Twitter, we present a multi-task learning approach that combines natural language processing with the Iterated Dilated Convolutional Neural Network (IDCNN) and Bidirectional Long Short-Term Memory (BiLSTM) to establish a highly accurate network model. Furthermore, we collect a network-threat-related Twitter database from the public datasets to verify our model's performance. The results show that the proposed model detects cyber threat events from tweets well and significantly outperforms several baselines.
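A much-simplified sketch of the BiLSTM half of such a model is shown below in Keras; the vocabulary size, sequence length, and layer widths are placeholder assumptions, and the paper's actual architecture additionally uses IDCNN layers and multi-task heads.

```python
# Hedged sketch: a BiLSTM tweet classifier for threat-event detection.
# Hyperparameters are placeholders, not the paper's settings.
from tensorflow.keras import Input, layers, models

VOCAB_SIZE, MAX_LEN = 20000, 64        # assumed preprocessing parameters

model = models.Sequential([
    Input(shape=(MAX_LEN,)),                 # padded token-id sequences
    layers.Embedding(VOCAB_SIZE, 128),
    layers.Bidirectional(layers.LSTM(64)),
    layers.Dropout(0.5),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),   # threat event vs. non-event
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
# model.fit(padded_token_ids, labels, validation_split=0.1, epochs=5)
```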


2020 ◽  
Vol 12 (22) ◽  
pp. 3818
Author(s):  
YuAn Wang ◽  
Liang Chen ◽  
Peng Wei ◽  
XiangChen Lu

Based on the Manhattan-world assumption, we propose a tightly coupled monocular visual-inertial odometry (VIO) system that combines structural features with point features and can run on a mobile phone in real time. The back-end optimization is based on the sliding-window method to improve computing efficiency. As Manhattan-world structure is abundant in man-made environments, structural features can encode the orthogonality and parallelism concealed in buildings to eliminate the accumulated rotation error. We define a structural feature as an orthogonal basis composed of three orthogonal vanishing points in the Manhattan world. Meanwhile, to extract structural features in real time on a mobile phone, we propose a fast structural feature extraction method based on the known vertical dominant direction. Our experiments on public datasets and a self-collected dataset show that our system is superior to most existing open-source systems, especially in situations where the images are texture-less, dark, or blurry.
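The "structural feature" can be pictured numerically as below: given the gravity-aligned vertical direction and one estimated horizontal vanishing direction, the remaining axis is completed by a cross product, yielding an orthonormal basis. The vectors are made-up values for illustration; this is not the paper's extraction pipeline.

```python
# Hedged sketch: complete an orthonormal Manhattan-world basis from a known
# vertical (gravity) direction and one noisy horizontal vanishing direction.
import numpy as np

vertical = np.array([0.02, 0.99, 0.05])          # from IMU gravity (assumed)
horizontal_est = np.array([0.95, 0.05, 0.30])    # noisy horizontal vanishing dir

v = vertical / np.linalg.norm(vertical)
# Project out the vertical component so the first horizontal axis is orthogonal.
h1 = horizontal_est - np.dot(horizontal_est, v) * v
h1 /= np.linalg.norm(h1)
h2 = np.cross(v, h1)                              # third axis completes the basis

R = np.column_stack([h1, h2, v])                  # structural feature: rotation matrix
print(np.allclose(R.T @ R, np.eye(3)))            # True: basis is orthonormal
```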


Complexity ◽  
2018 ◽  
Vol 2018 ◽  
pp. 1-8 ◽  
Author(s):  
Hanning Yuan ◽  
Yanni Han ◽  
Ning Cai ◽  
Wei An

Inspired by field theory in physics, in this paper we propose a novel backbone network compression algorithm based on topology potential. Taking into account network connectivity and backbone compression precision, the method is flexible and efficient for various network characteristics. Meanwhile, we define a metric named compression ratio to evaluate the performance of backbone networks, which provides an optimal extraction granularity based on the contributions of degree number and topology connectivity. We apply our method to the publicly available Internet AS network and Hep-th network, which are public datasets in the field of complex network analysis. Furthermore, we compare the obtained results using the precision-ratio and recall-ratio metrics. All these results show that our algorithm is superior to the compared methods. Moreover, we investigate the characteristics of the extracted backbone in terms of degree distribution and self-similarity. It is shown that the compressed backbone network preserves many properties of the original network, such as the power-law exponent.
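A hedged sketch of the topology-potential ranking that precedes backbone extraction is given below: each node accumulates a Gaussian-decayed contribution from every other node, with hop distance as the metric. The influence factor sigma, the use of the karate-club graph as a stand-in for the AS/Hep-th data, and the top-k cut are assumptions; the paper's exact formulation may differ.

```python
# Hedged sketch: rank nodes by topology potential (Gaussian decay over hop
# distance), then keep the top-ranked nodes as backbone candidates.
import math
import networkx as nx

def topology_potential(G, sigma=1.0):
    lengths = dict(nx.all_pairs_shortest_path_length(G))
    return {v: sum(math.exp(-(d / sigma) ** 2) for u, d in lengths[v].items() if u != v)
            for v in G}

G = nx.karate_club_graph()                        # stand-in for the AS / Hep-th graphs
potential = topology_potential(G)
backbone = sorted(potential, key=potential.get, reverse=True)[:10]  # top-ranked nodes
print(backbone)
```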


2015 ◽  
Vol 23 (3) ◽  
pp. 596-600 ◽  
Author(s):  
Taha A Kass-Hout ◽  
Zhiheng Xu ◽  
Matthew Mohebbi ◽  
Hans Nelsen ◽  
Adam Baker ◽  
...  

Objective The objective of openFDA is to facilitate access to and use of big, important Food and Drug Administration (FDA) public datasets by developers, researchers, and the public through harmonization of data across disparate FDA datasets provided via application programming interfaces (APIs). Materials and Methods Using cutting-edge technologies deployed on FDA's new public cloud computing infrastructure, openFDA provides open data for easier, faster (over 300 requests per second per process), and better access to FDA datasets; open-source code and documentation shared on GitHub for open community contributions of examples, apps, and ideas; and infrastructure that can be adopted for other public health big data challenges. Results Since its launch on June 2, 2014, openFDA has developed four APIs for drug and device adverse events, recall information for all FDA-regulated products, and drug labeling. There have been more than 20 million API calls (more than half from outside the United States), 6000 registered users, 20,000 connected Internet Protocol addresses, and dozens of new software (mobile or web) apps developed. A case study demonstrates the use of openFDA data to understand an apparent association of a drug with an adverse event. Conclusion With easier and faster access to these datasets, consumers worldwide can learn more about FDA-regulated products.
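For example, the drug adverse event endpoint can be queried directly over HTTPS; the snippet below uses the documented search/limit query parameters, though the specific field names should be checked against the openFDA API reference before use.

```python
# Hedged example of calling the openFDA drug adverse event API.
import requests

resp = requests.get(
    "https://api.fda.gov/drug/event.json",
    params={"search": 'patient.drug.medicinalproduct:"aspirin"', "limit": 1},
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["results"][0]["receivedate"])   # date the report was received
```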


2016 ◽  
Vol 34 (6) ◽  
pp. 724-739 ◽  
Author(s):  
Walter Castelnovo ◽  
Gianluca Misuraca ◽  
Alberto Savoldelli

Most of the definitions of a “smart city” make a direct or indirect reference to improving performance as one of the main objectives of initiatives to make cities “smarter”. Several evaluation approaches and models have been put forward in literature and practice to measure smart cities. However, they are often normative or limited to certain aspects of cities’ “smartness”, and a more comprehensive and holistic approach seems to be lacking. Thus, building on a review of the literature and practice in the field, this paper aims to discuss the importance of adopting a holistic approach to the assessment of smart city governance and policy decision making. It also proposes a performance assessment framework that overcomes the limitations of existing approaches and contributes to filling the current gap in the knowledge base in this domain. One of the innovative elements of the proposed framework is its holistic approach to policy evaluation. It is designed to address a smart city’s specificities and can benefit from the active participation of citizens in assessing the public value of policy decisions and their sustainability over time. We focus our attention on the performance measurement of codesign and coproduction by stakeholders and social innovation processes related to public value generation. More specifically, we are interested in the assessment of both the citizen centricity of smart city decision making and the processes by which public decisions are implemented, monitored, and evaluated as regards their capability to develop truly “blended” value services—that is, simultaneously socially inclusive, environmentally friendly, and economically sustainable.


2021 ◽  
Vol 33 (5) ◽  
pp. 83-104
Author(s):  
Aleksandr Igorevich Getman ◽  
Maxim Nikolaevich Goryunov ◽  
Andrey Georgievich Matskevich ◽  
Dmitry Aleksandrovich Rybolovlev

The paper discusses the issues of training models for detecting computer attacks using machine learning methods. The results of an analysis of publicly available training datasets and of tools for analyzing network traffic and extracting features of network sessions are presented in sequence. The drawbacks of existing tools and possible errors in the datasets formed with their help are noted. It is concluded that one must collect one's own training data when there are no guarantees of a public dataset's reliability, and that pre-trained models are of limited use in networks whose characteristics differ from those of the network in which the training traffic was collected. A practical approach to generating training data for computer attack detection models is proposed. The proposed solutions have been tested to evaluate the quality of model training on the collected data and the quality of attack detection under real network infrastructure conditions.
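As a minimal illustration of the final step, training a detection model on locally collected, labelled session features rather than on a public dataset, the sketch below fits a random-forest classifier on synthetic stand-in features. The feature dimensions and labels are invented; this is not the authors' toolchain.

```python
# Hedged sketch: fit and evaluate an attack-detection classifier on
# locally collected per-session features (synthetic stand-ins here).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 8))           # stand-in session features (duration, bytes, ...)
y = rng.integers(0, 2, size=1000)        # 0 = benign, 1 = attack

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
```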

