Robustness Testing Framework For Neural Network Image Classifier

2021, Vol. 2078 (1), pp. 012050
Author(s): Duo Li, Chaoqun Dong, Qianchao Liu

Abstract Neural networks have achieved remarkable results in image classification, but in deployment they are threatened by adversarial examples, which put the robustness of neural network classifiers at risk. Programs or software built on neural network image classifiers should therefore undergo rigorous robustness testing before release and promotion, in order to reduce losses and security risks. To test the robustness of neural network image classifiers comprehensively and to standardize the test process, a variety of robustness test sets are constructed along two axes, generated content and interference intensity, and a robustness testing framework suitable for neural network classifiers is proposed. The feasibility and effectiveness of the framework and method are verified by testing LeNet-5 and a model hardened by adversarial training.
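The abstract's idea of grading test sets by interference intensity can be sketched as follows. This is a minimal illustration, not the paper's implementation: the Gaussian-noise model, the intensity values, and the `classify` interface are all assumptions introduced here.

```python
import numpy as np

def build_noise_test_sets(images, intensities, seed=0):
    """Build one perturbed copy of the test set per interference intensity
    (here: Gaussian noise with standard deviation `sigma`, clipped to [0, 1])."""
    rng = np.random.default_rng(seed)
    sets = {}
    for sigma in intensities:
        noise = rng.normal(0.0, sigma, size=images.shape)
        sets[sigma] = np.clip(images + noise, 0.0, 1.0)
    return sets

def robustness_curve(classify, images, labels, intensities):
    """Accuracy of `classify` on each perturbed test set; a steeply falling
    curve indicates low robustness to this interference type."""
    curve = {}
    for sigma, perturbed in build_noise_test_sets(images, intensities).items():
        preds = classify(perturbed)
        curve[sigma] = float(np.mean(preds == labels))
    return curve
```

A framework in the paper's sense would repeat this over several interference types (noise, blur, occlusion, adversarial perturbations) and report the accuracy curve for each.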

Author(s): Chunlong Fan, Cailong Li, Jici Zhang, Yiping Teng, Jianzhong Qiao

Neural network technology has achieved good results in many tasks, such as image classification. However, adding carefully designed, imperceptible perturbations to certain inputs produces adversarial examples that change the network's output. For image classification problems, we derive low-dimensional attack perturbation solutions for multidimensional linear classifiers and extend them to multidimensional nonlinear neural networks. On this basis, a new adversarial example generation algorithm is designed that modifies only a specified number of pixels. The algorithm adopts a greedy iterative strategy, gradually determining the importance of pixels and the attack range at each iteration. Experiments demonstrate that the generated adversarial examples are of good quality, and the effects of the algorithm's key parameters are also analyzed.
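A greedy iterative pixel attack of the kind the abstract describes can be sketched like this. The sketch is an assumption-laden stand-in for the paper's algorithm: it scores pixel importance by trying each single-pixel change and keeping the one that most reduces the classification margin, rather than using the paper's derived perturbation solutions.

```python
import numpy as np

def margin(scores, true_label):
    """Score of the true class minus the best competing class;
    negative margin means the input is misclassified."""
    others = np.delete(scores, true_label)
    return scores[true_label] - others.max()

def greedy_pixel_attack(score_fn, image, true_label, budget, delta=1.0):
    """Greedily perturb up to `budget` pixels: at each step, apply the
    single-pixel change (+/- delta, clipped to [0, 1]) that most reduces
    the margin of the true class. Stops early on misclassification."""
    adv = image.astype(float)
    flat = adv.ravel()  # view: edits to `flat` modify `adv`
    touched = set()
    for _ in range(budget):
        base = margin(score_fn(adv), true_label)
        if base < 0:
            break  # already misclassified
        best = None  # (margin, pixel index, new value)
        for i in range(flat.size):
            if i in touched:
                continue
            old = flat[i]
            for candidate in (old + delta, old - delta):
                flat[i] = np.clip(candidate, 0.0, 1.0)
                m = margin(score_fn(adv), true_label)
                if best is None or m < best[0]:
                    best = (m, i, flat[i])
            flat[i] = old
        if best is None or best[0] >= base:
            break  # no single-pixel change helps any further
        _, i, value = best
        flat[i] = value
        touched.add(i)
    return adv, sorted(touched)
```

With a differentiable model one would rank pixels by gradient magnitude instead of exhaustive trial, which is where the paper's linear-classifier derivation comes in.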


2018, Vol. 23 (1), pp. 52-62
Author(s): Vadim V. Romanuke

Abstract The present paper considers the open problem of setting hyperparameters for convolutional neural networks aimed at image classification. Since selecting filter spatial extents for convolutional layers is a topical problem, it is solved approximately by accumulating statistics of network performance. The network architecture is based on experience with the MNIST database: an eight-layered architecture with four convolutional layers is nearly best suited to classifying small and medium-size images. Image databases are formed of grayscale images whose sizes range from 28 × 28 to 64 × 64 in steps of 2. Except for the filter spatial extents, the remaining hyperparameters of those eight layers are fixed, chosen scrupulously by rules of thumb. A sequence of possible filter spatial extents is generated for each image size, and the sets of four filter spatial extents producing the best performance are extracted. The extraction rule that selects the best filter spatial extents is formalized by two conditions. Primarily, the difference between the maximal and minimal extents must be as small as possible, and no unit filter spatial extent is recommended. The secondary condition is that the filter spatial extents should form a non-increasing set. Validation on the MNIST and CIFAR-10 databases justifies this solution, which can be extended to building convolutional neural network classifiers for colour and larger images.
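The two extraction conditions stated in the abstract (no unit extent, non-increasing order, minimal spread between the largest and smallest extents) can be written down directly. The candidate tuples below are illustrative values, not taken from the paper:

```python
def admissible(extents):
    """Check the extraction conditions for a set of four filter spatial
    extents, one per convolutional layer: no unit extent, and the
    extents must form a non-increasing sequence."""
    no_unit = all(e > 1 for e in extents)
    non_increasing = all(a >= b for a, b in zip(extents, extents[1:]))
    return no_unit and non_increasing

def select_extents(candidates):
    """Among admissible candidate sets, prefer the one with the smallest
    spread (difference between the maximal and minimal extents)."""
    admissible_sets = [c for c in candidates if admissible(c)]
    if not admissible_sets:
        return None
    return min(admissible_sets, key=lambda c: max(c) - min(c))
```

In practice each candidate set would first be filtered to those achieving the best measured classification performance; the rule above then breaks ties among them.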

