Abstract
Neural networks have achieved remarkable results in image classification, but in deployment they are threatened by adversarial examples, which endanger the robustness of neural network classifiers. Programs or software built on neural network image classifiers must therefore undergo rigorous robustness testing before release and promotion, in order to effectively reduce losses and security risks. To test the robustness of neural network image classifiers comprehensively and to standardize the testing process, a variety of robustness test sets are constructed along two dimensions, generated content and interference intensity, and a robustness testing framework suited to neural network classifiers is proposed. The feasibility and effectiveness of the testing framework and method are verified by testing LeNet-5 and a model hardened by adversarial training.
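The interference-intensity dimension mentioned above can be illustrated with a minimal sketch: sweep a noise level over copies of a test set and record the classifier's accuracy at each level. Here `classify`, `images`, and `labels` are hypothetical stand-ins for a real model and dataset, not part of the paper's framework.

```python
import random

def robustness_sweep(classify, images, labels, intensities, seed=0):
    """Accuracy of `classify` on copies of `images` perturbed by additive
    Gaussian noise of growing intensity (a toy interference-intensity test).
    Pixel values are assumed to lie in [0, 1] and are clipped after noising.
    """
    results = {}
    for eps in intensities:
        rng = random.Random(seed)  # same noise pattern at every intensity
        correct = 0
        for img, label in zip(images, labels):
            noisy = [min(1.0, max(0.0, px + eps * rng.gauss(0, 1))) for px in img]
            if classify(noisy) == label:
                correct += 1
        results[eps] = correct / len(images)
    return results

# Toy demo: a linear "classifier" on synthetic two-pixel images.
rng = random.Random(42)
images = [[rng.random(), rng.random()] for _ in range(200)]
labels = [int(img[0] > img[1]) for img in images]
classify = lambda x: int(x[0] > x[1])

acc = robustness_sweep(classify, images, labels, [0.0, 0.1, 0.5])
```

A real test under this framework would replace the toy model with the classifier under test (e.g. LeNet-5) and the noise with each constructed test set; accuracy as a function of intensity then summarizes robustness.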