Passive method for rescale detection using quadrature mirror filter based higher order statistical features

Author(s):  
Gajanan K. Birajdar ◽  
Vijay H. Mankar

High-resolution digital cameras and state-of-the-art image editing software have given rise to a large number of manipulated images that leave no visible traces of the manipulation they have undergone. Passive, or blind, forgery detection algorithms are used to determine an image's authenticity without relying on embedded information. In this paper, an algorithm is proposed that blindly detects the global rescaling operation using statistical models computed from a quadrature mirror filter (QMF) decomposition. A fuzzy entropy measure is employed to select the relevant features and discard unimportant ones, while an artificial neural network classifier is used for forgery detection. Experimental results are presented on grayscale and [Formula: see text]-component images of the UCID database to demonstrate the validity of the algorithm under different interpolation schemes. Results are also provided for the detection of rescaled images subjected to JPEG compression, arbitrary cropping, and additive white Gaussian noise. Further, results on the USC-SIPI database show the robustness of the algorithm to the choice of database.
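A minimal sketch of this kind of pipeline is shown below, assuming a wavelet decomposition (db4 via PyWavelets) as a stand-in for the paper's QMF filter bank and mutual-information ranking as a stand-in for the fuzzy-entropy feature selection; the function names, feature counts, and classifier settings are illustrative, not the authors' code.

```python
# Sketch: subband decomposition -> higher-order statistics per subband ->
# feature selection -> ANN classifier for rescale detection.
import numpy as np
import pywt
from scipy.stats import skew, kurtosis
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.neural_network import MLPClassifier

def subband_statistics(img, wavelet="db4", levels=3):
    """First four statistical moments of each wavelet subband."""
    coeffs = pywt.wavedec2(img.astype(np.float64), wavelet, level=levels)
    subbands = [coeffs[0]] + [b for triple in coeffs[1:] for b in triple]
    feats = []
    for band in subbands:
        band = band.ravel()
        feats.extend([band.mean(), band.var(), skew(band), kurtosis(band)])
    return np.array(feats)

def train_rescale_detector(images, labels, n_features=24):
    """labels: 1 = globally rescaled, 0 = original (illustrative setup)."""
    X = np.vstack([subband_statistics(im) for im in images])
    selector = SelectKBest(mutual_info_classif, k=n_features).fit(X, labels)
    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000)
    clf.fit(selector.transform(X), labels)
    return selector, clf
```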

2012 ◽  
Vol 4 (3) ◽  
pp. 20-32 ◽  
Author(s):  
Yongjian Hu ◽  
Chang-Tsun Li ◽  
Yufei Wang ◽  
Bei-bei Liu

Frame duplication is a common form of digital video forgery. State-of-the-art approaches to duplication detection usually suffer from a heavy computational load. In this paper, the authors propose a new algorithm that detects duplicated frames based on video sub-sequence fingerprints. The fingerprints are extracted from the DCT coefficients of the temporally informative representative images (TIRIs) of the sub-sequences. Compared with similar algorithms, this study focuses on improving the fingerprints that represent video sub-sequences and on introducing a simple metric for matching them. Experimental results show that the proposed algorithm overall outperforms three related duplication forgery detection algorithms in terms of computational efficiency, detection accuracy, and robustness against common video operations such as compression and brightness change.
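A hedged sketch of the fingerprinting idea follows: a TIRI is formed as a weighted temporal average of a sub-sequence, a few low-frequency 2-D DCT coefficients are binarised into a fingerprint, and duplicated sub-sequences are flagged by a simple Hamming-distance match. The weighting, block size, and threshold are illustrative choices, not the authors' exact parameters.

```python
import numpy as np
from scipy.fft import dctn

def tiri(frames, gamma=0.65):
    """Weighted temporal average of a sub-sequence (frames: T x H x W)."""
    weights = gamma ** np.arange(len(frames))[:, None, None]
    return (weights * frames).sum(axis=0) / weights.sum()

def fingerprint(sub_sequence, k=8):
    """Binary fingerprint from the k x k low-frequency DCT block of the TIRI."""
    coeffs = dctn(tiri(np.asarray(sub_sequence, dtype=np.float64)), norm="ortho")
    low = coeffs[:k, :k].ravel()[1:]          # drop the DC term
    return (low > np.median(low)).astype(np.uint8)

def is_duplicate(fp_a, fp_b, max_hamming=4):
    """Two sub-sequences are candidate duplicates if their fingerprints are close."""
    return np.count_nonzero(fp_a != fp_b) <= max_hamming
```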


2021 ◽  
Author(s):  
Jawad Khan

Given the number of image editing tools available online, image tampering has become easy to carry out, and the quality of these tools makes the tampering hard to detect with the naked eye. One such manipulation is copy-move tampering, in which a region of an image is copied and pasted elsewhere in the same image. We propose a method to detect it. First, the image is divided into blocks and each block is represented by its discrete cosine transform coefficients. Next, the dimensionality of these features is reduced using Gaussian RBF kernel PCA. Finally, a new iterative interest point detector is proposed, and the image is passed to a CNN that predicts whether it has been forged. Experimental results show that the algorithm achieves high accuracy, outperforming state-of-the-art methods.
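A minimal sketch of the pre-processing stages described above, assuming non-overlapping 8x8 blocks and scikit-learn's KernelPCA with an RBF kernel; the block size, number of components, and the downstream interest-point detector and CNN are assumptions, not the authors' exact configuration.

```python
import numpy as np
from scipy.fft import dctn
from sklearn.decomposition import KernelPCA

def block_dct_features(img, block=8):
    """2-D DCT of every non-overlapping block, flattened into feature vectors."""
    h, w = img.shape[0] // block * block, img.shape[1] // block * block
    feats = []
    for y in range(0, h, block):
        for x in range(0, w, block):
            patch = img[y:y + block, x:x + block].astype(np.float64)
            feats.append(dctn(patch, norm="ortho").ravel())
    return np.vstack(feats)

def reduced_block_features(img, n_components=16):
    """Gaussian (RBF) kernel PCA on the per-block DCT features."""
    feats = block_dct_features(img)
    kpca = KernelPCA(n_components=n_components, kernel="rbf", gamma=1e-3)
    return kpca.fit_transform(feats)   # fed to the interest-point / CNN stages
```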


2016 ◽  
Vol 8 (4) ◽  
pp. 14-25 ◽  
Author(s):  
Jie Zhao ◽  
Qiuzi Wang ◽  
Jichang Guo ◽  
Lin Gao ◽  
Fusheng Yang

With the popularity of sophisticated image editing tools such as Photoshop, it is becoming very difficult to discriminate between an authentic image and its manipulated version, which undermines the credibility of photographic images as definitive records of events. Passive image forgery detection, one main branch of image forensics, has been regarded as a promising research direction due to its versatility and universality. Automatic computer forgery uses intelligent algorithms to forge an image automatically; it is considerably more complex than copy-move forgery because the source of the duplicated region may be non-contiguous. In this paper, the authors provide a comprehensive overview of state-of-the-art passive detection methods for automatic computer forgery.


2018 ◽  
Vol 2018 ◽  
pp. 1-11 ◽  
Author(s):  
Yu Sun ◽  
Rongrong Ni ◽  
Yao Zhao

To reduce the high computational complexity of block-based copy-move forgery detection, we divide the image into a texture part and a smooth part and handle them separately. Keypoints are extracted and matched in the textured regions. In the smooth regions, instead of using all overlapping blocks, we use non-overlapping blocks as candidates. Clustering blocks with similar color into groups serves as a preprocessing step. To avoid mismatches due to misalignment, candidate blocks are updated by registration before being projected into the hash space. In this way, we reduce the computational complexity and improve the matching accuracy at the same time. Experimental results show that the proposed method outperforms state-of-the-art copy-move forgery detection algorithms and is robust against JPEG compression, rotation, and scaling.
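A hedged sketch of the two-branch strategy: the image is split into textured and smooth regions by a local-variance threshold, keypoints are detected only in textured regions, and non-overlapping smooth blocks are reduced to coarse hashes for candidate matching. ORB and the simple mean/variance hash are stand-ins; the paper's clustering and registration steps are omitted for brevity.

```python
import cv2
import numpy as np

def texture_mask(gray, block=16, var_thresh=100.0):
    """True where a block's intensity variance marks it as textured."""
    mask = np.zeros(gray.shape, dtype=bool)
    for y in range(0, gray.shape[0] - block + 1, block):
        for x in range(0, gray.shape[1] - block + 1, block):
            patch = gray[y:y + block, x:x + block]
            mask[y:y + block, x:x + block] = patch.var() > var_thresh
    return mask

def texture_keypoints(gray, mask):
    """Keypoints restricted to textured regions (ORB as an illustrative detector)."""
    orb = cv2.ORB_create()
    return orb.detectAndCompute(gray, mask.astype(np.uint8) * 255)

def smooth_block_hashes(gray, mask, block=16):
    """Coarse hash (quantised mean, variance) for each smooth candidate block."""
    hashes = {}
    for y in range(0, gray.shape[0] - block + 1, block):
        for x in range(0, gray.shape[1] - block + 1, block):
            if not mask[y:y + block, x:x + block].any():
                patch = gray[y:y + block, x:x + block].astype(np.float64)
                hashes[(y, x)] = (int(patch.mean()) // 8, int(patch.var()) // 16)
    return hashes
```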


2021 ◽  
Vol 11 (12) ◽  
pp. 5656
Author(s):  
Yufan Zeng ◽  
Jiashan Tang

Graph neural networks (GNNs) have been very successful at fraud detection tasks. GNN-based detection algorithms learn node embeddings by aggregating neighborhood information. Recently, the CAmouflage-REsistant GNN (CARE-GNN) was proposed; it achieves state-of-the-art results on fraud detection tasks by handling relation camouflage and feature camouflage. However, stacking multiple layers in the traditional hop-defined way leads to a rapid performance drop, and because a single-layer CARE-GNN cannot extract enough information to correct potential mistakes, performance relies heavily on that one layer. To avoid single-layer learning, in this paper we consider a multi-layer architecture that forms a complementary relationship with a residual structure. We propose an improved algorithm named Residual Layered CARE-GNN (RLC-GNN), which learns layer by layer and corrects mistakes progressively. We evaluate the proposed algorithm with three metrics: recall, AUC, and F1-score. Numerical experiments show improvements of up to 5.66%, 7.72%, and 9.09% in recall, AUC, and F1-score, respectively, on the Yelp dataset, and of up to 3.66%, 4.27%, and 3.25% in the same metrics on the Amazon dataset.
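A minimal sketch of the residual layered idea in PyTorch Geometric: each layer refines the previous layer's node embeddings and a skip connection carries them forward, so later layers can correct earlier mistakes instead of degrading with depth. GCNConv stands in for the CARE-GNN layer, and the relation- and feature-camouflage handling of the original model is not reproduced here; dimensions and layer count are illustrative.

```python
import torch
import torch.nn as nn
from torch_geometric.nn import GCNConv

class ResidualLayeredGNN(nn.Module):
    def __init__(self, in_dim, hidden_dim, num_layers=6, num_classes=2):
        super().__init__()
        self.input_proj = nn.Linear(in_dim, hidden_dim)
        self.layers = nn.ModuleList(
            [GCNConv(hidden_dim, hidden_dim) for _ in range(num_layers)]
        )
        self.classifier = nn.Linear(hidden_dim, num_classes)

    def forward(self, x, edge_index):
        h = torch.relu(self.input_proj(x))
        for conv in self.layers:
            # residual connection: each layer learns a correction to the
            # embeddings accumulated so far, layer by layer
            h = h + torch.relu(conv(h, edge_index))
        return self.classifier(h)
```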


2021 ◽  
Vol 39 (1B) ◽  
pp. 101-116
Author(s):  
Nada N. Kamal ◽  
Enas Tariq

Tilt correction is an essential step in a license plate recognition (LPR) system. The main goal of this article is to review the methods presented in the literature for correcting the different types of tilt that appear in digital images of license plates (LPs). This survey gives researchers an overview of the available tilt detection and correction algorithms, making it easier to decide which rotation detection and correction algorithm to implement when designing an LPR system, and whether to combine two or more existing algorithms or to create a new, more efficient one. Rather than reciting the models described in the literature as a narrative, this review organizes the tilt correction stage by its constituent steps: locating the plate corners, finding the tilt angle of the plate, and then correcting its horizontal, vertical, and sheared inclination. For each step, the review describes how the state-of-the-art literature handles it. Overall, line fitting, the Hough transform, and the Radon transform are found to be the most widely used methods for correcting the tilt of an LP.
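As an illustration of one commonly reviewed step, the sketch below estimates the dominant line orientation in a plate image with OpenCV's Hough transform and removes the horizontal tilt by rotating the plate back by that angle. The Canny and Hough thresholds are illustrative assumptions, and vertical and shear correction are not shown.

```python
import cv2
import numpy as np

def correct_horizontal_tilt(plate_bgr):
    gray = cv2.cvtColor(plate_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLines(edges, 1, np.pi / 180, 80)
    if lines is None:
        return plate_bgr                         # no dominant line found
    # theta is the angle of each line's normal; for a near-horizontal line the
    # plate tilt is approximately (theta - 90 degrees) in OpenCV's convention
    thetas = lines[:, 0, 1]
    near_horizontal = thetas[np.abs(thetas - np.pi / 2) < np.pi / 6]
    if near_horizontal.size == 0:
        return plate_bgr
    tilt_deg = np.degrees(np.median(near_horizontal) - np.pi / 2)
    h, w = gray.shape
    rot = cv2.getRotationMatrix2D((w / 2, h / 2), tilt_deg, 1.0)
    return cv2.warpAffine(plate_bgr, rot, (w, h), borderMode=cv2.BORDER_REPLICATE)
```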

