Adversarial attack and defense methods for neural network based state estimation in smart grid

Author(s): Jiwei Tian, Buhong Wang, Jing Li, Charalambos Konstantinou

Electronics, 2021, Vol. 10 (10), p. 1153
Author(s): Francesco Liberati, Emanuele Garone, Alessandro Di Giorgio

This paper presents a review of technical works in the field of cyber-physical attacks on the smart grid. The paper starts by discussing two reference mathematical frameworks proposed in the literature to model a smart grid under attack. Then, a review of cyber-physical attacks on the smart grid is presented, starting from works on false data injection attacks against state estimation. The aim is to present a systematic and quantitative discussion of the basic working principles of the attacks, also in terms of the underlying smart grid vulnerabilities and dynamical properties exploited by the attacks. The main contribution of the paper is to provide a unifying view, highlighting the fundamental aspects and the common working principles shared by the attack models, even when they target different subsystems of the smart grid.
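As background for the false data injection attacks mentioned in this abstract, the sketch below illustrates the classic stealthy-attack principle against least-squares state estimation: an attack vector of the form a = Hc leaves the bad-data residual unchanged while biasing the state estimate by c. This is a minimal toy illustration of the well-known principle, not the frameworks reviewed in the paper; the matrix sizes, noise level, and bias vector are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy DC state estimation model: z = H x + e, with m measurements and n states.
m, n = 8, 3
H = rng.normal(size=(m, n))     # measurement matrix (assumed known to the attacker)
x_true = rng.normal(size=n)
z = H @ x_true + 0.01 * rng.normal(size=m)

def estimate(z):
    # Least-squares state estimate: x_hat = argmin_x ||z - H x||.
    return np.linalg.lstsq(H, z, rcond=None)[0]

def residual(z):
    # Bad-data detection statistic: norm of the measurement residual.
    return np.linalg.norm(z - H @ estimate(z))

# Stealthy injection: pick any desired bias c and inject a = H c.
c = np.array([0.5, -0.2, 0.1])  # attacker's desired shift of the state estimate
z_attacked = z + H @ c

print(residual(z), residual(z_attacked))   # residuals are identical
print(estimate(z_attacked) - estimate(z))  # ~= c: estimate biased by c, undetected
```

Because z + Hc yields the estimate x̂ + c while the residual z - Hx̂ is unchanged, the injected bias passes a residual-based bad-data test undetected.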


Electronics, 2020, Vol. 10 (1), p. 52
Author(s): Richard Evan Sutanto, Sukho Lee

Several recent studies have shown that artificial intelligence (AI) systems can malfunction due to intentionally manipulated data arriving through normal channels. Such manipulated data are called adversarial examples. Adversarial examples pose a major threat to an AI-led society when an attacker uses them as a means to attack an AI system, which is called an adversarial attack. Therefore, major IT companies such as Google are now studying ways to build AI systems that are robust against adversarial attacks by developing effective defense methods. However, one reason it is difficult to establish an effective defense is that the defender rarely knows in advance which adversarial attack method the opponent will use. Therefore, in this paper, we propose a method to detect adversarial noise without knowledge of the kind of adversarial noise used by the attacker. To this end, we propose a blurring network that is trained only on normal images and also use it as the initial condition of a Deep Image Prior (DIP) network. This is in contrast to other neural network based detection methods, which require many adversarial noisy images to train the network. Experimental results indicate the validity of the proposed method.
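A rough PyTorch sketch of the two-stage idea described in this abstract: a small convolutional "blurring" network is first trained on clean images only, and its weights then initialize a short DIP-style fit to each test image, with the reconstruction residual used as the detection statistic. The architecture, the residual-based decision rule, and all hyperparameters are illustrative assumptions for exposition, not the authors' implementation.

```python
import copy

import torch
import torch.nn as nn

class BlurNet(nn.Module):
    """Hypothetical blurring network: a small conv net trained to reproduce
    clean images (this architecture is an assumption, not the paper's)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

def train_on_clean(model, clean_loader, epochs=5, lr=1e-3):
    # Stage 1: train on normal images only -- no adversarial examples needed.
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for x in clean_loader:  # each batch: (B, 3, H, W) clean images
            opt.zero_grad()
            loss_fn(model(x), x).backward()
            opt.step()
    return model

def is_adversarial(trained, x, steps=50, lr=1e-4, threshold=1e-3):
    # Stage 2 (DIP-style, simplified): start from the clean-trained weights
    # and briefly fit the test image x. Clean images are reconstructed with
    # low error quickly; adversarial noise resists the learned prior, leaving
    # a larger residual. The threshold is an assumption to be calibrated on
    # held-out clean data.
    model = copy.deepcopy(trained)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(steps):
        opt.zero_grad()
        loss = loss_fn(model(x), x)
        loss.backward()
        opt.step()
    return loss.item() > threshold
```

Note that a true DIP setup maps a fixed noise tensor to the image rather than the image to itself; the autoencoder-style fit above is a deliberate simplification of that idea.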


2021, Vol. 201, p. 107545
Author(s): Marcela A. da Silva, Thays Abreu, Carlos Roberto Santos-Júnior, Carlos R. Minussi

2021, Vol. 7, pp. 159-166
Author(s): Guangdou Zhang, Jian Li, Dongsheng Cai, Qi Huang, Weihao Hu
