A Comparative Study on Adversarial Noise Generation for Single Image Classification
2020 · Vol 16 (1) · pp. 75-87
Keyword(s): Use Case
With the rise of neural network-based classifiers, it is evident that these algorithms are here to stay. Yet despite the variety of architectures developed, these classifiers remain vulnerable to misclassification attacks. This article outlines a new noise-layer attack based on adversarial learning and compares the proposed method against established attack methodologies such as the Fast Gradient Sign Method, the Jacobian-based Saliency Map Algorithm, and DeepFool. The work compares these algorithms for the use case of single-image classification and provides a detailed analysis of how each algorithm performs relative to the others.
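To make the comparison concrete, the Fast Gradient Sign Method mentioned above perturbs an input along the sign of the loss gradient: x_adv = x + ε · sign(∇ₓ J(θ, x, y)). The sketch below is a minimal, hedged illustration using a linear softmax classifier with a hand-derived gradient; the weights, dimensions, and ε are arbitrary assumptions for demonstration, not the article's actual experimental setup.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax
    e = np.exp(z - z.max())
    return e / e.sum()

def fgsm_attack(W, b, x, y, epsilon):
    """FGSM on a linear softmax classifier (illustrative sketch).

    For cross-entropy loss with logits Wx + b, the gradient with
    respect to the input is W^T (softmax(Wx + b) - onehot(y)).
    """
    p = softmax(W @ x + b)
    p[y] -= 1.0                       # softmax(Wx + b) - onehot(y)
    grad = W.T @ p                    # dL/dx
    # Step by epsilon in the sign direction, keep pixels in [0, 1]
    return np.clip(x + epsilon * np.sign(grad), 0.0, 1.0)

# Toy demo with hypothetical weights and a 4-feature "image"
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 4))           # 3 classes, 4 input features
b = np.zeros(3)
x = rng.uniform(size=4)               # clean input in [0, 1]
y = 2                                 # true class index
x_adv = fgsm_attack(W, b, x, y, epsilon=0.1)
print(np.abs(x_adv - x).max())        # perturbation bounded by epsilon
```

JSMA and DeepFool differ mainly in how they choose the perturbation: JSMA greedily modifies the most salient input features, while DeepFool iteratively steps toward the nearest decision boundary; FGSM is the cheapest of the three, needing only a single gradient evaluation.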