Short floating-point representation for convolutional neural network inference

2019 ◽  
Vol 16 (2) ◽  
pp. 20180909 ◽
Author(s):  
Hyeong-Ju Kang


2021 ◽
Vol 11 (11) ◽  
pp. 5235
Author(s):  
Nikita Andriyanov

The article studies convolutional neural network inference for image processing under visual attacks. Four attack types were considered: a simple attack, the addition of white Gaussian noise, an impulse perturbation of a single image pixel, and attacks that alter brightness values within a rectangular area. The MNIST and Kaggle Dogs vs. Cats datasets were used. Recognition accuracy was measured as a function of the number of attacked images and of the attack types included in training. The study was based on well-known convolutional neural network architectures for pattern recognition, VGG-16 and Inception_v3, and the dependence of recognition accuracy on the parameters of the visual attacks was obtained. Original methods were proposed to counter visual attacks. These methods are based on identifying classes that are "incomprehensible" to the recognizer and then correcting predictions via neural network inference on reduced-size images. Applying these methods yielded a 1.3-fold gain in the accuracy metric after an iteration that discards incomprehensible images, and a 4–5% reduction in uncertainty after an iteration that integrates the results of image analyses at reduced dimensions.
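The correction step described in the abstract (discarding uncertain, "incomprehensible" predictions and integrating classifier outputs over reduced-size copies of an image) can be sketched roughly as follows. The `predict` callable and the stride-based downscaling are placeholders for an actual CNN and image resizer, not the authors' implementation:

```python
import numpy as np

def multiscale_predict(predict, image, scales=(1.0, 0.5, 0.25), threshold=0.5):
    """Average class probabilities over downscaled copies of an image.

    `predict` is a hypothetical classifier returning a softmax probability
    vector; downscaling by array striding is a crude stand-in for proper
    resizing. Images whose averaged prediction stays below `threshold`
    are treated as "incomprehensible" and discarded (returned as None).
    """
    probs = []
    for s in scales:
        step = int(round(1 / s))
        probs.append(predict(image[::step, ::step]))
    avg = np.mean(probs, axis=0)
    if avg.max() < threshold:
        return None  # uncertain: discard rather than guess
    return int(avg.argmax())
```

The key design point mirrored here is that rejecting low-confidence inputs trades coverage for accuracy on the images that are kept.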


1999 ◽  
Vol 11 (4) ◽  
pp. 853-862 ◽  
Author(s):  
Nicol N. Schraudolph

Neural network simulations often spend a large proportion of their time computing exponential functions. Since the exponentiation routines of typical math libraries are rather slow, their replacement with a fast approximation can greatly reduce the overall computation time. This article describes how exponentiation can be approximated by manipulating the components of a standard (IEEE-754) floating-point representation. The approximation is as accurate as a lookup table with linear interpolation, but is significantly faster and more compact.
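As an illustration, the bit-level manipulation the abstract refers to can be reproduced in a few lines: a scaled input is written into the upper 32 bits of an IEEE-754 double, so the exponent field supplies the power-of-two scaling and the mantissa bits act as the linear interpolant. The constants follow Schraudolph's 1999 derivation; this is a sketch for intuition, not production code:

```python
import math
import struct

def fast_exp(x):
    """Approximate exp(x) by constructing the bits of an IEEE-754 double.

    a = 2**20 / ln(2) (about 1512775.4) scales x into the exponent field
    of the double's upper 32 bits; b = 1023 * 2**20 - 60801 supplies the
    exponent bias, with 60801 the correction Schraudolph derives to
    minimize RMS error. Relative error is a few percent over a wide range.
    """
    i = int(1512775.3951951856 * x + 1072632447) << 32
    return struct.unpack("<d", struct.pack("<q", i))[0]

print(fast_exp(1.0), math.exp(1.0))
```

Because the mantissa bits below the exponent field vary linearly with the input, the result interpolates linearly between adjacent powers of two, which is exactly the lookup-table-with-interpolation behavior the abstract describes.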


2020 ◽  
Vol 7 (2) ◽  
pp. 869-879 ◽  
Author(s):  
Bo Mei ◽  
Yinhao Xiao ◽  
Ruinian Li ◽  
Hong Li ◽  
Xiuzhen Cheng ◽  
...  
