The vulnerability of the military SoS networks under different attack and defense strategies

Author(s):  
Wei-Xin Jin ◽  
Ping Song ◽  
Guo-Zhu Liu
2020 ◽  
Author(s):  
Tam Ngoc Nguyen

We propose a new scientific model that enables collecting evidence on, and explaining the motivations behind, people's malicious/ethical cyber behaviors. Existing models mainly focus on detecting already-committed actions and the associated response strategies, which is not proactive. That is why little has been done to prevent malicious behaviors early, despite the fact that issues like insider threats cost corporations billions of dollars annually and often take more than a year to detect. We address these problems with the following main contributions:
+ A better model for ethical/malicious behavioral analysis, with a strong focus on understanding people's motivations.
+ Research results on the ethical behaviors of more than 200 participants during the historic Covid-19 pandemic.
+ Novel attack and defense strategies based on the validated model and survey results.
+ Strategies for continuous model development and integration, utilizing the latest technologies such as natural language processing and machine learning.
We employed a mixed-mode research approach: integrating proven behavioral-science models, case studies of hackers, survey research, quantitative analysis, and qualitative analysis. For practical deployments, corporations may utilize our model to improve HR processes and research, prioritize plans based on the model's human behavioral metrics, better analyze existing or potential cyber insider threat cases, generate more defense tactics in information warfare, and so on.


Author(s):  
Ismail Melih Tas ◽  
Onur Ozbirecikli ◽  
Ugur Cagai ◽  
Erhan Taskin ◽  
Huseyin Tas

2020 ◽  
Vol 2020 ◽  
pp. 1-9 ◽  
Author(s):  
Lingyun Jiang ◽  
Kai Qiao ◽  
Ruoxi Qin ◽  
Linyuan Wang ◽  
Wanting Yu ◽  
...  

In deep-learning image classification, adversarial examples, inputs to which small-magnitude perturbations have been deliberately added, can mislead deep neural networks (DNNs) into incorrect results, which means DNNs are vulnerable to them. Different attack and defense strategies have been proposed to better study the mechanisms of deep learning. However, existing research addresses only one aspect, either attack or defense, so improvements in offensive and defensive performance remain separate, and it is difficult for the two to promote each other within the same framework. In this paper, we propose Cycle-Consistent Adversarial GAN (CycleAdvGAN) to generate adversarial examples; it can learn and approximate the distributions of both the original instances and the adversarial examples, in particular letting attackers and defenders confront each other and improve their abilities. For CycleAdvGAN, once the generators GA and GD are trained, GA can efficiently generate adversarial perturbations for any instance, improving the performance of existing attack methods, while GD can recover adversarial examples to clean instances, defending against existing attack methods. We apply CycleAdvGAN under semi-white-box and black-box settings on two public datasets, MNIST and CIFAR10. Through extensive experiments, we show that our method achieves state-of-the-art adversarial attack performance and also efficiently improves defense ability, realizing the integration of adversarial attack and defense. In addition, it improves the attack effect even when trained only on the adversarial dataset generated by any kind of adversarial attack.
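The cycle-consistency idea in the abstract above (GA perturbs a clean instance, GD maps an adversarial instance back, and each round trip should approximately recover its input) can be sketched numerically. The linear/tanh "generators" and the weight matrix `W` below are toy stand-ins of my own, not the paper's networks; only the structure of the cycle loss follows the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# Shared toy parameter; in the paper GA and GD are separate trained networks.
W = rng.normal(scale=0.1, size=(4, 4))

def g_a(x):
    """'Attacker' generator GA: adds a small input-dependent perturbation."""
    return x + 0.05 * np.tanh(x @ W)

def g_d(x):
    """'Defender' generator GD: tries to strip the perturbation off."""
    return x - 0.05 * np.tanh(x @ W)

def cycle_loss(x_clean, x_adv):
    """L1 cycle-consistency: GD(GA(x)) should be near x, and
    GA(GD(x_adv)) should be near x_adv."""
    forward = np.abs(g_d(g_a(x_clean)) - x_clean).mean()
    backward = np.abs(g_a(g_d(x_adv)) - x_adv).mean()
    return float(forward + backward)

x = rng.normal(size=(8, 4))        # batch of clean "instances"
x_adv = g_a(x)                     # their adversarial counterparts
loss = cycle_loss(x, x_adv)
print(loss)
```

Because the toy GA and GD are near-inverses by construction, the loss is small but nonzero; in the actual framework this term would be minimized jointly with the adversarial objectives so that attack and defense generators improve against each other.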


IEEE Access ◽  
2020 ◽  
Vol 8 ◽  
pp. 168994-169009
Author(s):  
Hamid Al-Hamadi ◽  
Ing-Ray Chen ◽  
Ding-Chau Wang ◽  
Meshal Almashan

2005 ◽  
Vol 94 (18) ◽  
Author(s):  
Lazaros K. Gallos ◽  
Reuven Cohen ◽  
Panos Argyrakis ◽  
Armin Bunde ◽  
Shlomo Havlin
