The Effect of Class Definitions on the Transferability of Adversarial Attacks Against Forensic CNNs

2020 ◽  
Vol 2020 (4) ◽  
pp. 119-1-119-7
Author(s):  
Xinwei Zhao ◽  
Matthew C. Stamm

In recent years, convolutional neural networks (CNNs) have been widely used by researchers to perform forensic tasks such as image tampering detection. At the same time, adversarial attacks have been developed that are capable of fooling CNN-based classifiers. Understanding the transferability of adversarial attacks, i.e. an attack's ability to fool a different CNN than the one it was trained against, has important implications for designing CNNs that are resistant to attacks. While attacks on object recognition CNNs are believed to be transferable, recent work by Barni et al. has shown that attacks on forensic CNNs have difficulty transferring to other CNN architectures or to CNNs trained using different datasets. In this paper, we demonstrate that adversarial attacks on forensic CNNs are even less transferable than previously thought, failing to transfer even between virtually identical CNN architectures. We show that several common adversarial attacks against CNNs trained to identify image manipulation fail to transfer to CNNs whose only difference is in the class definitions (i.e. the same CNN architectures trained using the same data). We note that all formulations of class definitions contain the unaltered class. This has important implications for the future design of forensic CNNs that are robust to adversarial and anti-forensic attacks.
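As a rough illustration of the kind of experiment described here (not the authors' code), the sketch below crafts a fast gradient sign method (FGSM) adversarial image against one forensic CNN and checks whether it also changes the decision of a second CNN that differs only in its class definitions. The models, labels, and epsilon value are hypothetical placeholders.

```python
# Minimal sketch, assuming two trained PyTorch forensic CNNs (source_model, target_model)
# and a single image tensor of shape (1, C, H, W) with pixel values in [0, 1].
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, true_label, epsilon=0.02):
    """Generate an adversarial example with the fast gradient sign method."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Step in the direction that increases the loss, then clamp to the valid pixel range.
    return (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

def attack_transfers(source_model, target_model, image, source_label, target_label):
    """True if an attack built against source_model also flips target_model's decision."""
    adv = fgsm_attack(source_model, image, source_label)
    return target_model(adv).argmax(dim=1).item() != target_label.item()
```

Averaging `attack_transfers` over a test set would give a transfer success rate, the quantity whose collapse across differing class definitions is the paper's central observation.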

2017 ◽  
Vol 10 (28) ◽  
pp. 1351-1363 ◽  
Author(s):  
Paula C. Useche Murillo ◽  
Robinson Jimenez Moreno ◽  
Javier O. Pinzon Arenas

The following paper presents the development, operation, and comparison of two object recognition methods trained for the classification of surgical instrumentation, where a video sequence is used to capture scene information continuously in order to allow the selection of some of the instruments according to the needs of the doctor. The methods used were convolutional neural networks (CNNs) and Haar classifiers: the first was augmented with a preliminary element-detection stage, and the second was conditioned so that it could not only detect elements but also classify them. With the CNN, an accuracy of 96.4% was reached in the classification of the two categories of the first branch of the tree, while the Haar classifiers achieved 90% accuracy in the detection of one of the five instruments, whose classifier was the one that presented the best results.
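A minimal sketch of a detection-then-classification pipeline of the kind compared here, not the authors' implementation: an OpenCV Haar cascade proposes candidate regions in each video frame and a CNN classifies each crop. The cascade file, model file, class names, and input size are hypothetical stand-ins.

```python
# Illustrative sketch only; "scalpel_cascade.xml", "instrument_cnn.h5", CLASS_NAMES,
# and the 64x64 input size are assumptions, not artifacts from the paper.
import cv2
import numpy as np
from tensorflow.keras.models import load_model

cascade = cv2.CascadeClassifier("scalpel_cascade.xml")   # trained Haar detector (hypothetical)
cnn = load_model("instrument_cnn.h5")                     # trained CNN classifier (hypothetical)
CLASS_NAMES = ["scalpel", "scissors", "forceps", "clamp", "needle_holder"]

def classify_frame(frame):
    """Detect candidate regions with the Haar cascade, then classify each crop with the CNN."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    results = []
    for (x, y, w, h) in cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5):
        crop = cv2.resize(frame[y:y + h, x:x + w], (64, 64)).astype("float32") / 255.0
        probs = cnn.predict(crop[np.newaxis], verbose=0)[0]
        results.append(((x, y, w, h), CLASS_NAMES[int(np.argmax(probs))]))
    return results
```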


2020 ◽  
Author(s):  
Ziyue Xiang ◽  
Daniel Ernesto Acuna

Background: Scientific image tampering is a problem that affects not only authors but also the general perception of the research community. Although previous researchers have developed methods to identify tampering in natural images, these methods may not perform well in the scientific setting, as scientific images have different statistics, formats, quality, and intentions.
Methods: We propose a scientific-image-specific tampering detection method based on noise inconsistencies, which is capable of learning and generalizing to different fields of science. We train and test our method on a new dataset of manipulated western blot and microscopy imagery, which aims at emulating problematic images in science.
Results: With an average AUC score of 0.927 and an average F1 score of 0.770, we show that our method can robustly detect various types of image manipulation in different scenarios. It outperforms other existing general-purpose image tampering detection schemes.
Conclusions: The experimental results show that our method is capable of detecting manipulations in scientific images in a more reliable manner. We discuss applications beyond these two types of images and suggest next steps for making the detection of problematic images a systematic step in peer review and science in general.
Keywords: Scientific images; Digital image forensics; Noise inconsistency; Scientific image manipulation dataset
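To make the noise-inconsistency idea concrete, the sketch below is a crude, non-learned baseline (not the authors' method): estimate a noise residual with a median filter, measure the noise level per block, and flag blocks whose noise deviates strongly from the image-wide median. The block size and threshold are arbitrary assumptions.

```python
# Hypothetical baseline for noise-inconsistency localization; not the trained model from the paper.
import numpy as np
from scipy.ndimage import median_filter

def noise_inconsistency_map(gray_image, block=32, z_thresh=3.0):
    """Return a boolean map of blocks whose local noise level is anomalous."""
    img = gray_image.astype(np.float64)
    residual = img - median_filter(img, size=3)          # crude per-pixel noise estimate
    rows, cols = img.shape[0] // block, img.shape[1] // block
    noise = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            patch = residual[i * block:(i + 1) * block, j * block:(j + 1) * block]
            noise[i, j] = patch.std()                    # local noise level
    med = np.median(noise)
    mad = np.median(np.abs(noise - med)) + 1e-9          # robust spread estimate
    return np.abs(noise - med) / (1.4826 * mad) > z_thresh
```

Spliced or retouched regions often carry a different noise level than the surrounding camera or scanner noise, which is why inconsistent blocks are candidate tampering locations.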


2021 ◽  
Vol 18 (3) ◽  
pp. 172988142110105
Author(s):  
Jnana Sai Abhishek Varma Gokaraju ◽  
Weon Keun Song ◽  
Min-Ho Ka ◽  
Somyot Kaitwanidvilai

The study investigated object detection and classification based on both Doppler radar spectrograms and vision images using two deep convolutional neural networks. Kinematic models for a walking human and a bird flapping its wings were incorporated into MATLAB simulations to create the datasets. The dynamic simulator computed the final position of each ellipsoidal body segment at each sampling point, taking its rotational motion into account in addition to its bulk motion, so as to describe its specific motion naturally. The total motion induced a micro-Doppler effect and created a micro-Doppler signature that varied in response to changes in the input parameters, such as body segment size, velocity, and radar location. Identifying the micro-Doppler signatures of the radar signals returned from the target objects animated by the simulator required kinematic modeling based on a short-time Fourier transform analysis of the signals. Both You Only Look Once V3 and Inception V3 were used for the detection and classification of the objects, rendered in different red, green, and blue colors on black or white backgrounds. The results suggest that clear micro-Doppler signature image-based object recognition can be achieved in low-visibility conditions. This feasibility study demonstrated the potential of Doppler radar as a backup sensor for cameras in autonomous vehicle driving in darkness. This study presents the first successful application of animated kinematic models and their synchronized radar spectrograms to object recognition.
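As a hedged illustration of the spectrogram step only (the kinematic simulator and the YOLOv3 / Inception V3 detectors are outside this snippet), the sketch below builds a toy radar return whose phase contains a bulk Doppler term plus a periodic micro-motion term, then converts it into a micro-Doppler spectrogram with a short-time Fourier transform. All frequencies and STFT parameters are illustrative assumptions.

```python
# Toy micro-Doppler spectrogram; parameter values are assumptions, not the paper's settings.
import numpy as np
from scipy.signal import stft

fs = 2000.0                                      # pulse repetition frequency (Hz), assumed
t = np.arange(0, 2.0, 1.0 / fs)
f_bulk, f_micro, micro_rate = 200.0, 60.0, 2.0   # bulk Doppler, micro-Doppler swing, micro-motion rate
# Phase = bulk Doppler plus a periodic micro-Doppler modulation (e.g., limb or wing motion).
phase = 2 * np.pi * f_bulk * t + (f_micro / micro_rate) * np.sin(2 * np.pi * micro_rate * t)
signal = np.exp(1j * phase)

f, tt, Z = stft(signal, fs=fs, nperseg=128, noverlap=112, return_onesided=False)
spectrogram_db = 20 * np.log10(np.abs(Z) + 1e-12)   # image-like signature fed to a CNN classifier
```

The instantaneous frequency of this signal oscillates around the bulk Doppler line, which is exactly the kind of time-varying pattern the spectrogram-based CNNs are trained to recognize.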


IEEE Access ◽  
2019 ◽  
Vol 7 ◽  
pp. 43110-43136 ◽  
Author(s):  
Mingliang Gao ◽  
Jun Jiang ◽  
Guofeng Zou ◽  
Vijay John ◽  
Zheng Liu
