BBAEG: Towards BERT-based Biomedical Adversarial Example Generation for Text Classification

Author(s):  
Ishani Mondal


Author(s):  
Chunlong Fan ◽  
Cailong Li ◽  
Jici Zhang ◽  
Yiping Teng ◽  
Jianzhong Qiao

Neural network technology has achieved good results in many tasks, such as image classification. However, adding carefully designed, imperceptible perturbations to an input example can produce an adversarial example that changes the network's output. For image classification problems, we derive low-dimensional attack-perturbation solutions for multidimensional linear classifiers and extend them to multidimensional nonlinear neural networks. Building on this, we design a new adversarial example generation algorithm that modifies a specified number of pixels. The algorithm follows a greedy iterative strategy, progressively determining the importance and attack range of individual pixels. Experiments demonstrate that the adversarial examples generated by the algorithm are of good quality, and the effects of the algorithm's key parameters are also analyzed.
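The greedy strategy described above can be sketched on the linear-classifier case the abstract starts from. This is a minimal illustration, not the paper's algorithm: the model, the margin-gradient importance score, and the fixed per-pixel step `eps` are all assumptions made for the sketch.

```python
import numpy as np

def greedy_pixel_attack(image, weights, true_label, k=5, eps=1.0):
    """Greedy few-pixel attack on a linear classifier (illustrative sketch).

    image:      flat array of pixel values in [0, 1]
    weights:    (num_classes, num_pixels) linear classifier weights
    true_label: correct class index
    k:          maximum number of pixels that may be modified
    eps:        perturbation magnitude applied to each chosen pixel
    """
    adv = image.copy()
    modified = set()
    for _ in range(k):
        scores = weights @ adv
        # Strongest rival class (highest score among the wrong classes).
        rival = int(np.argmax(np.delete(scores, true_label)))
        rival = rival if rival < true_label else rival + 1
        # For a linear model, the gradient of the true-vs-rival margin
        # w.r.t. the input is just the difference of the weight rows;
        # its magnitude serves as a per-pixel importance score.
        grad = weights[true_label] - weights[rival]
        order = np.argsort(-np.abs(grad))
        # Greedily pick the most important pixel not yet modified.
        pix = next(i for i in order if i not in modified)
        modified.add(pix)
        # Push the pixel in the direction that shrinks the margin,
        # keeping the value in the valid [0, 1] range.
        adv[pix] = np.clip(adv[pix] - eps * np.sign(grad[pix]), 0.0, 1.0)
        if np.argmax(weights @ adv) != true_label:
            break  # misclassification achieved
    return adv, sorted(modified)
```

For nonlinear networks, the weight-row difference would be replaced by the input gradient of the margin, but the greedy pick-and-perturb loop stays the same.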


2020 ◽  
Vol 34 (05) ◽  
pp. 8384-8391
Author(s):  
Hui Liu ◽  
Yongzheng Zhang ◽  
Yipeng Wang ◽  
Zheng Lin ◽  
Yige Chen

Text classification is a basic task in natural language processing, but small character-level perturbations in words can greatly degrade the effectiveness of text classification models; this is known as a character-level adversarial example attack. Defending against such attacks poses two main challenges: out-of-vocabulary words in the word embedding model, and the distribution difference between training and inference. Both challenges make character-level adversarial examples difficult to defend against. In this paper, we propose a framework that jointly uses character embeddings and adversarial stability training to overcome both challenges. Experimental results on five text classification datasets show that models based on our framework can effectively defend against character-level adversarial examples, withstanding 93.19% of gradient-based adversarial examples and 94.83% of natural adversarial examples, outperforming state-of-the-art defense models.
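The two ideas named above can be sketched in miniature. This is an illustrative sketch, not the authors' implementation: `char_embed` stands in for a learned character embedding (here, deterministic random vectors keyed by character, so out-of-vocabulary and misspelled words still get meaningful representations), and `stability_loss` is the kind of term adversarial stability training would add to the task loss to pull perturbed inputs toward their clean counterparts.

```python
import numpy as np

def char_embed(word, dim=16, seed=0):
    """Character-level word representation (illustrative sketch).

    Averages per-character vectors, so a word perturbed in one
    character stays close to the clean word's vector instead of
    collapsing to a single <UNK> embedding.
    """
    vec = np.zeros(dim)
    for ch in word:
        # Deterministic pseudo-random vector per character, standing
        # in for a trained character embedding table.
        rng = np.random.default_rng(seed + ord(ch))
        vec += rng.standard_normal(dim)
    return vec / max(len(word), 1)

def stability_loss(clean_words, perturbed_words):
    """Adversarial stability term: mean squared distance between the
    representations of clean words and their perturbed variants.
    During training this would be added to the classification loss so
    the model maps adversarial inputs near their clean counterparts."""
    diffs = [char_embed(c) - char_embed(p)
             for c, p in zip(clean_words, perturbed_words)]
    return float(np.mean([np.sum(d * d) for d in diffs]))
```

Because the representation is built from characters, a one-character perturbation such as "movie" → "m0vie" yields a small stability loss, while an unrelated string yields a large one; training against this term is what closes the train/inference distribution gap the abstract mentions.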

