A Defense Strategy Selection Method Based on the Cyberspace Wargame Model

2021 ◽  
Vol 2021 ◽  
pp. 1-12
Author(s):  
Yuwen Zhu ◽  
Lei Yu ◽  
Houhua He ◽  
Yitong Meng

Network defenders constantly face the problem of how to make the most reasonable decisions with limited resources. The network attack-defense game model is an effective means of solving this problem. However, existing network attack-defense game models usually assume that defenders no longer change their defense strategies once they are deployed, whereas in an advanced network attack-defense confrontation, defenders usually redeploy defense strategies in response to different attack situations. Existing network attack-defense game models therefore struggle to describe the advanced network attack-defense process accurately. To address this challenge, this paper proposes a defense strategy selection method based on the network attack-defense wargame model. We model the advanced network attack-defense confrontation as a turn-based wargame in which both attackers and defenders can continuously adjust their strategies in response to the attack-defense posture, and we use the Monte Carlo tree search method to solve for the optimal defense strategy. Finally, a network example illustrates the effectiveness of the model and method in selecting the optimal defense strategy.
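The turn-based search described in this abstract can be illustrated with a generic Monte Carlo tree search over a toy attack-defense game. This is a minimal sketch, not the authors' model: the three-node network, the per-round payoff (+1 to the defender if the struck node was hardened that round, -1 otherwise), and the round count are illustrative assumptions.

```python
import math
import random

ROUNDS = 3
NODES = [0, 1, 2]  # illustrative three-node network

class State:
    """Toy turn-based wargame: each round the defender hardens one node,
    then the attacker strikes one node."""
    def __init__(self, round_=0, turn="defender", hardened=None, score=0):
        self.round, self.turn, self.hardened, self.score = round_, turn, hardened, score

    def terminal(self):
        return self.round >= ROUNDS

    def actions(self):
        return NODES

    def step(self, a):
        if self.turn == "defender":
            return State(self.round, "attacker", a, self.score)
        delta = 1 if a == self.hardened else -1  # defender payoff this round
        return State(self.round + 1, "defender", None, self.score + delta)

class Node:
    def __init__(self, state, parent=None, action=None):
        self.state, self.parent, self.action = state, parent, action
        self.children, self.visits, self.value = [], 0, 0.0
        self.untried = [] if state.terminal() else list(state.actions())

    def ucb_child(self, c=1.4):
        # UCB1: exploit average value plus an exploration bonus
        return max(self.children, key=lambda ch: ch.value / ch.visits
                   + c * math.sqrt(math.log(self.visits) / ch.visits))

def rollout(state):
    """Play both sides uniformly at random to the end of the game."""
    while not state.terminal():
        state = state.step(random.choice(state.actions()))
    return state.score

def mcts(root_state, iters=1000):
    root = Node(root_state)
    for _ in range(iters):
        node = root
        while not node.untried and node.children:     # selection
            node = node.ucb_child()
        if node.untried:                              # expansion
            a = node.untried.pop()
            node.children.append(Node(node.state.step(a), node, a))
            node = node.children[-1]
        result = rollout(node.state)                  # simulation
        while node:                                   # backpropagation
            node.visits += 1
            # store value from the perspective of the player who moved here
            mover = node.parent.state.turn if node.parent else "attacker"
            node.value += result if mover == "defender" else -result
            node = node.parent
    return max(root.children, key=lambda ch: ch.visits).action

print("recommended defense action:", mcts(State()))
```

Because both sides adjust per turn, the attacker's replies are part of the search tree itself, which is what distinguishes this setup from a one-shot game model.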

2019 ◽  
Vol 2019 ◽  
pp. 1-14
Author(s):  
Xiaohu Liu ◽  
Hengwei Zhang ◽  
Yuchen Zhang ◽  
Lulu Shao ◽  
Jihong Han

Most network security research based on signaling games assumes that either the attacker or the defender is the sender of the signal and the other party is the receiver, so the attack-defense process is commonly modeled and analyzed from the perspective of one-way signal transmission. To reflect the reality of two-way signal transmission in network attack-defense confrontation, we propose an active defense strategy selection method based on a two-way signaling game. In this paper, a two-way signaling game model is constructed to analyze the network attack and defense processes. Based on the solution of a perfect Bayesian equilibrium, a defense strategy selection algorithm is presented. The feasibility and effectiveness of the method are verified using examples from real-world applications. In addition, the mechanism of the deception signal is analyzed, and conclusions for guiding the selection of active defense strategies are provided.
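The belief update at the core of a perfect Bayesian equilibrium can be shown in a few lines. This sketch covers one receiving step only (the defender updating its belief about the attacker's type from an observed signal); the types, signals, priors, and likelihoods are illustrative assumptions, not values from the paper.

```python
from fractions import Fraction

def posterior(prior, likelihood, signal):
    """Bayes' rule: P(type | signal) ∝ P(type) * P(signal | type).

    prior:      {type: P(type)}
    likelihood: {type: {signal: P(signal | type)}}
    """
    joint = {t: prior[t] * likelihood[t][signal] for t in prior}
    total = sum(joint.values())
    return {t: p / total for t, p in joint.items()}

# Illustrative numbers: an "advanced" attacker probes more often than an
# "ordinary" one, so observing a probe shifts belief toward "advanced".
prior = {"advanced": Fraction(1, 3), "ordinary": Fraction(2, 3)}
likelihood = {"advanced": {"probe": Fraction(3, 4), "idle": Fraction(1, 4)},
              "ordinary": {"probe": Fraction(1, 4), "idle": Fraction(3, 4)}}

belief = posterior(prior, likelihood, "probe")
print(belief)  # belief in "advanced" rises from 1/3 to 3/5
```

In the two-way setting, the same update runs in both directions: each party is simultaneously a sender (possibly emitting deception signals) and a receiver revising its beliefs.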


IEEE Access ◽  
2019 ◽  
Vol 7 ◽  
pp. 19907-19921 ◽  
Author(s):  
Xiayang Chen ◽  
Xingtong Liu ◽  
Lei Zhang ◽  
Chaojing Tang

2021 ◽  
Vol 2021 ◽  
pp. 1-15
Author(s):  
Yanhua Liu ◽  
Hui Chen ◽  
Hao Zhang ◽  
Ximeng Liu

Evolutionary game theory is widely applied to network attack and defense. Existing network attack-defense analysis methods based on evolutionary games adopt the bounded rationality hypothesis. However, existing research ignores the fact that both sides of the game obtain more information about each other as the attack-defense game deepens, which may allow the attacker to crack a certain type of defense strategy and render it invalid. The failure of a defense strategy reduces the accuracy and guidance value of existing methods. To solve this problem, we propose a reward value learning mechanism (RLM). By analyzing information from previous games, RLM automatically rewards or penalizes the attack and defense reward values for the next stage, which reduces the probability of defense strategy failure. RLM is introduced into the dynamic network attack-defense process under incomplete information, and a multistage evolutionary game model with a learning mechanism is constructed. Based on this model, we design an optimal defense strategy selection algorithm. Experimental results demonstrate that the evolutionary game model with RLM achieves higher reward values and defense success rates than the evolutionary game model without RLM.
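The interaction between replicator dynamics and a history-based reward adjustment can be sketched briefly. This is loosely in the spirit of the RLM described above, not the paper's algorithm: the payoff numbers and the flat penalty applied to a "cracked" strategy are illustrative assumptions.

```python
def replicator_step(shares, payoffs, dt=0.1):
    """Discrete replicator update: x_i <- x_i + dt * x_i * (f_i - avg_f).
    Strategies earning above-average payoff grow in the population."""
    avg = sum(x * f for x, f in zip(shares, payoffs))
    new = [x + dt * x * (f - avg) for x, f in zip(shares, payoffs)]
    total = sum(new)
    return [x / total for x in new]  # renormalize to a distribution

def adjust_rewards(payoffs, cracked, penalty=0.5):
    """Penalize defense strategies the attacker has learned to counter,
    so the dynamics stop converging toward a strategy that no longer works."""
    return [f - penalty if i in cracked else f
            for i, f in enumerate(payoffs)]

# Three candidate defense strategies, initially equally represented.
shares = [1 / 3, 1 / 3, 1 / 3]
payoffs = [1.0, 1.2, 0.8]

# Game history shows the attacker has cracked strategy 1: its nominally
# highest payoff is penalized before the next evolutionary stage.
payoffs = adjust_rewards(payoffs, cracked={1})

for _ in range(50):
    shares = replicator_step(shares, payoffs)

print(shares)  # strategy 0 now dominates instead of the cracked strategy 1
```

Without the adjustment, the population would converge on strategy 1 even after it had become ineffective, which is exactly the failure mode the abstract attributes to learning-free evolutionary models.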


2011 ◽  
Vol 31 (3) ◽  
pp. 784-789 ◽  
Author(s):  
Chun-zi WANG ◽  
Guang-qiu HUANG

2021 ◽  
Vol 11 (3) ◽  
pp. 1291
Author(s):  
Bonwoo Gu ◽  
Yunsick Sung

Gomoku is a two-player board game that originated in ancient China. Gomoku AI has been developed in various ways, for example with genetic algorithms and tree search algorithms. Alpha-Gomoku, a Gomoku AI built on AlphaGo's algorithm, defines all possible situations on the Gomoku board using Monte Carlo tree search (MCTS) and minimizes the probability of learning other correct answers for duplicated board situations. However, accuracy drops in the tree search algorithm because the classification criteria are set manually. In this paper, we propose an improved reinforcement-learning-based high-level decision approach using convolutional neural networks (CNNs). The proposed algorithm expresses each state as a one-hot-encoded vector and determines the state of the Gomoku board by combining similar one-hot-encoded vectors. For cases where a stone chosen by the CNN has already been placed or cannot be placed, we suggest a method for selecting an alternative. We verify the proposed Gomoku AI method in GuPyEngine, a Python-based 3D simulation platform.
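The one-hot board representation mentioned in the abstract can be sketched as follows. This is an illustrative encoding under common conventions (0 = empty, 1 = black, 2 = white; one channel per cell value), not the paper's exact input format.

```python
def one_hot_board(board):
    """Map an N x N board of cell values in {0, 1, 2} to an N x N x 3
    nested list: channel k of a cell is 1 exactly when the cell holds k.
    Such per-channel planes are a typical CNN input for board games."""
    return [[[1 if cell == k else 0 for k in range(3)] for cell in row]
            for row in board]

# A tiny 2 x 2 example: one black stone at (0, 1), one white stone at (1, 0).
board = [[0, 1],
         [2, 0]]
encoded = one_hot_board(board)
print(encoded[0][1])  # black stone -> [0, 1, 0]
print(encoded[1][0])  # white stone -> [0, 0, 1]
```

Encoding each cell as a unit vector rather than an integer keeps the three cell states equidistant, so the network cannot infer a spurious ordering between empty, black, and white.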

