Scientific Image Tampering Detection Based On Noise Inconsistencies: A Method And Datasets

2020 ◽  
Author(s):  
Ziyue Xiang ◽  
Daniel Ernesto Acuna

Abstract
Background: Scientific image tampering is a problem that affects not only authors but also the general perception of the research community. Although previous researchers have developed methods to identify tampering in natural images, these methods may not perform well in the scientific setting, as scientific images differ in statistics, format, quality, and intent.
Methods: We propose a scientific-image-specific tampering detection method based on noise inconsistencies, which is capable of learning and generalizing to different fields of science. We train and test our method on a new dataset of manipulated western blot and microscopy imagery, which aims to emulate problematic images in science.
Results: With an average AUC score of 0.927 and an average F1 score of 0.770, our method robustly detects various types of image manipulation in different scenarios and outperforms existing general-purpose image tampering detection schemes.
Conclusions: The experimental results show that our method detects manipulations in scientific images more reliably. We discuss applications beyond these two types of images and suggest next steps for making the detection of problematic images a systematic part of peer review and of science in general.
Keywords: Scientific images; Digital image forensics; Noise inconsistency; Scientific image manipulation dataset
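The noise-inconsistency idea can be illustrated with a minimal sketch: estimate a noise residual by subtracting a smoothed version of the image, then flag blocks whose residual variance stands out from the rest. The box-filter "denoiser", the block size, and the threshold below are illustrative assumptions, not the authors' actual method.

```python
import numpy as np

def noise_residual(img, k=3):
    """Residual = image minus a box-filtered version (a crude denoiser stand-in)."""
    pad = k // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    h, w = img.shape
    smooth = np.zeros((h, w), dtype=float)
    for dy in range(k):
        for dx in range(k):
            smooth += padded[dy:dy + h, dx:dx + w]
    return img - smooth / (k * k)

def inconsistency_map(img, block=16, factor=5.0):
    """Flag blocks whose residual variance exceeds `factor` x the image-wide mean."""
    res = noise_residual(img)
    h, w = img.shape
    coords, variances = [], []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            coords.append((y, x))
            variances.append(res[y:y + block, x:x + block].var())
    thresh = factor * np.mean(variances)
    return [c for c, v in zip(coords, variances) if v > thresh]

# Synthetic demo: a clean gradient with a noisier "spliced" patch.
rng = np.random.default_rng(0)
img = np.tile(np.linspace(0, 255, 128), (128, 1))
img[32:64, 32:64] += rng.normal(0, 25, (32, 32))  # simulated splice with different noise
flagged = inconsistency_map(img)
```

A learned method such as the one proposed here would replace the hand-set threshold and the box filter with trained components, but the underlying signal, spatially inconsistent noise statistics, is the same.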

Entropy ◽  
2015 ◽  
Vol 17 (12) ◽  
pp. 7948-7966 ◽  
Author(s):  
Bo Zhao ◽  
Guihe Qin ◽  
Pingping Liu

2020 ◽  
Vol 2020 (4) ◽  
pp. 119-1-119-7
Author(s):  
Xinwei Zhao ◽  
Matthew C. Stamm

In recent years, convolutional neural networks (CNNs) have been widely used by researchers to perform forensic tasks such as image tampering detection. At the same time, adversarial attacks have been developed that are capable of fooling CNN-based classifiers. Understanding the transferability of adversarial attacks, i.e., an attack's ability to fool a different CNN than the one it was trained against, has important implications for designing CNNs that are resistant to attacks. While attacks on object-recognition CNNs are believed to be transferable, recent work by Barni et al. has shown that attacks on forensic CNNs have difficulty transferring to other CNN architectures or to CNNs trained using different datasets. In this paper, we demonstrate that adversarial attacks on forensic CNNs are even less transferable than previously thought, failing to transfer even between virtually identical CNN architectures. We show that several common adversarial attacks against CNNs trained to identify image manipulation fail to transfer to CNNs whose only difference is in the class definitions (i.e., the same CNN architectures trained using the same data). We note that all formulations of class definitions contain the unaltered class. This has important implications for the future design of forensic CNNs that are robust to adversarial and anti-forensic attacks.
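The transferability measurement described above can be sketched with the fast gradient sign method (FGSM) and two classifiers: craft adversarial examples against a source model, then measure accuracy on both the source and a separately trained target. Logistic-regression models stand in for the forensic CNNs here purely so the sketch is self-contained; being convex, they will typically show high transfer, so this illustrates only how transfer is measured, not the paper's non-transfer finding.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_logreg(X, y, lr=0.5, steps=300, seed=0):
    """Plain gradient-descent logistic regression (stand-in for a forensic CNN)."""
    r = np.random.default_rng(seed)
    w, b = r.normal(0, 0.1, X.shape[1]), 0.0
    for _ in range(steps):
        p = sigmoid(X @ w + b)
        w -= lr * (X.T @ (p - y)) / len(y)
        b -= lr * (p - y).mean()
    return w, b

def fgsm(X, y, w, b, eps):
    """FGSM: step each input by eps in the sign of the loss gradient."""
    p = sigmoid(X @ w + b)
    grad = (p - y)[:, None] * w[None, :]  # d(logistic loss)/dx
    return X + eps * np.sign(grad)

def acc(w, b, X, y):
    return float(((sigmoid(X @ w + b) > 0.5) == y).mean())

# Synthetic two-class "forensic feature" data.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-1, 0.5, (100, 8)), rng.normal(+1, 0.5, (100, 8))])
y = np.r_[np.zeros(100), np.ones(100)]

wA, bA = train_logreg(X, y, seed=0)  # source model (attacked)
wB, bB = train_logreg(X, y, seed=7)  # target model (transfer is measured here)

Xadv = fgsm(X, y, wA, bA, eps=1.5)
print(f"clean acc on source: {acc(wA, bA, X, y):.2f}")
print(f"adv acc on source:   {acc(wA, bA, Xadv, y):.2f}")  # attack fools the source
print(f"adv acc on target:   {acc(wB, bB, Xadv, y):.2f}")  # transfer measurement
```

In the paper's setting, the source and target would be CNNs differing only in class definitions, and the interesting result is that the target's accuracy stays high.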


2019 ◽  
Vol 10 (1) ◽  
pp. 54-63
Author(s):  
Shruti Singhania ◽  
Arju N.A ◽  
Raina Singh

Pictures are considered the most reliable form of media in journalism, research work, investigations, and intelligence reporting. With the rapid growth of ever-advancing technology and free applications on smartphones, the sharing and transfer of images is widespread, which requires authentication and reliability. Copy-move forgery is a common type of image tampering in which a part of an image is copied and pasted elsewhere within the same image. Such tampering occurs without leaving any obvious visual traces. In this study, an image tampering detection method was proposed that exploits a convolutional neural network (CNN) to extract discriminative features from images and detect whether an image has been forged. The results established that the optimal number of training epochs is 50 when using an AlexNet-based CNN for classification-based tampering detection, achieving 91% accuracy.
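Before CNN-based classifiers, copy-move forgery of the kind described above was commonly found by block matching: since a region is duplicated within the same image, identical blocks appearing at two sufficiently distant locations are strong evidence of tampering. The sketch below is this classical exact-match baseline, not the study's CNN; the block size and minimum-shift parameter are illustrative choices.

```python
from collections import defaultdict

import numpy as np

def copy_move_blocks(img, block=8, min_shift=16):
    """Find pairs of identical blocks far enough apart (exact-match baseline).

    Real detectors match lossy features (DCT, PCA) to survive compression;
    exact byte matching only works on uncompressed duplicates.
    """
    h, w = img.shape
    seen = defaultdict(list)
    matches = []
    for y in range(h - block + 1):
        for x in range(w - block + 1):
            key = img[y:y + block, x:x + block].tobytes()
            for (py, px) in seen[key]:
                if abs(y - py) + abs(x - px) >= min_shift:
                    matches.append(((py, px), (y, x)))
            seen[key].append((y, x))
    return matches

# Demo: duplicate a textured patch elsewhere in the same image.
rng = np.random.default_rng(2)
img = rng.integers(0, 256, (64, 64)).astype(np.uint8)
img[40:48, 40:48] = img[8:16, 8:16]  # the copy-move forgery
found = copy_move_blocks(img)
```

A CNN-based classifier such as the one in this study learns these duplication traces (and post-processing artifacts) directly from labeled forged/authentic examples instead of relying on exact matches.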


2021 ◽  
Vol 12 (2) ◽  
pp. 13-32
Author(s):  
Ali Ahmad Aminu ◽  
Nwojo Nnanna Agwu

Digital image tampering detection has been an active area of research in recent times due to the ease with which digital images can be modified to convey false or misleading information. To address this problem, several studies have proposed forensic algorithms for digital image tampering detection. While these approaches have shown remarkable improvement, most of them focus on detecting only a specific type of image tampering. Their limitation is that a new forensic method must be designed for each new manipulation technique that emerges. Consequently, there is a need to develop methods capable of detecting multiple tampering operations. In this paper, we propose a novel general-purpose image tampering detection scheme based on CNNs and the Local Optimal Oriented Pattern (LOOP), which is capable of detecting five types of image tampering in both binary and multiclass scenarios. Unlike existing deep learning techniques, which use constrained pre-processing layers to suppress the effect of image content in order to capture tampering traces, our method uses LOOP features, which can effectively subdue the effect of image content, thus allowing the proposed CNNs to capture the features needed to distinguish among different types of image tampering. Through a number of detailed experiments, our results demonstrate that the proposed general-purpose method achieves high detection accuracies in both individual and multiclass image tampering detection. A comparative analysis of our results with the existing state of the art reveals that the proposed model is more robust than most existing methods.
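LOOP belongs to the family of local binary descriptors: each pixel is encoded by thresholding its neighbours against the centre, which largely discards image content while keeping micro-texture traces. The sketch below computes plain LBP, a simpler relative of LOOP, as a stand-in; LOOP additionally assigns bit weights from Kirsch edge responses, which is omitted here.

```python
import numpy as np

def lbp_image(img):
    """8-neighbour local binary pattern codes for all interior pixels."""
    img = img.astype(int)
    c = img[1:-1, 1:-1]  # centre pixels
    # Fixed neighbour order; LOOP would instead order bits per pixel by
    # Kirsch edge-response strength.
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(offs):
        n = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= (n >= c).astype(int) << bit  # set bit when neighbour >= centre
    return code

def lbp_histogram(img, bins=256):
    """Normalized code histogram: the content-suppressed feature vector."""
    hist = np.bincount(lbp_image(img).ravel(), minlength=bins).astype(float)
    return hist / hist.sum()

demo = np.arange(25).reshape(5, 5)  # toy 5x5 gradient image
h = lbp_histogram(demo)
```

In the proposed scheme these descriptor maps, rather than raw pixels, are what the CNNs consume, playing the role that constrained pre-processing layers play in other deep forensic models.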

