Analyzing Benford’s Law’s Powerful Applications in Image Forensics

2021 ◽  
Vol 11 (23) ◽  
pp. 11482
Author(s):  
Diana Crișan ◽  
Alexandru Irimia ◽  
Dan Gota ◽  
Liviu Miclea ◽  
Adela Puscasiu ◽  
...  

The Newcomb–Benford law states that in a set of natural numbers, the leading digit has a probability distribution that decays logarithmically. One of its major applications concerns the JPEG compression of images, a topic of great interest for domains such as image forensics. In this article, we study JPEG compression from the point of view of Benford’s law. The article focuses on ways to detect fraudulent images and JPEG quality factors. Moreover, using the image’s luminance channel and JPEG coefficients, we describe a technique for determining the quality factor with which a JPEG image was compressed. The algorithm’s results are described in considerably more depth in the article’s final sections. Furthermore, the proposed idea is applicable to any procedure that involves the analysis of digital images and in which image authenticity should be verified before the analysis begins.
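The logarithmic leading-digit distribution invoked here, P(d) = log10(1 + 1/d), can be checked in a few lines. This is a minimal sketch of the law itself, not the paper's JPEG-coefficient detector; the powers-of-2 sample is a standard illustrative choice, not from the article:

```python
import math
from collections import Counter

def benford_pmf():
    """First-digit probabilities under the Newcomb-Benford law:
    P(d) = log10(1 + 1/d) for d = 1..9."""
    return {d: math.log10(1 + 1 / d) for d in range(1, 10)}

def first_digit_freqs(values):
    """Empirical first-digit frequencies of a collection of nonzero numbers."""
    digits = [int(str(abs(v)).lstrip("0.")[0]) for v in values if v != 0]
    counts = Counter(digits)
    n = len(digits)
    return {d: counts.get(d, 0) / n for d in range(1, 10)}

# Powers of 2 are a classic Benford-conforming sequence: about 30.1%
# of them start with the digit 1, matching log10(2).
sample = [2 ** k for k in range(1, 200)]
pmf = benford_pmf()
emp = first_digit_freqs(sample)
```

A forensic test compares such empirical frequencies against the theoretical curve; a large deviation flags possible manipulation.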



2010 ◽  
Vol 2 (2) ◽  
pp. 1-20 ◽  
Author(s):  
Xi Zhao ◽  
Anthony T. S. Ho ◽  
Yun Q. Shi

In the past few years, semi-fragile watermarking has become increasingly important to verify the content of images and localise the tampered areas, while tolerating some non-malicious manipulations. In the literature, the majority of semi-fragile algorithms have applied a predetermined threshold to tolerate errors caused by JPEG compression. However, this predetermined threshold is typically fixed and cannot be easily adapted to different amounts of errors caused by unknown JPEG compression at different quality factors (QFs). In this paper, the authors analyse the relationship between QF and threshold, and propose the use of generalised Benford’s Law as an image forensics technique for semi-fragile watermarking. The results show an overall average QF correct detection rate of approximately 99%, when 5%, 20% and 30% of the pixels are subjected to image content tampering and compression using different QFs (ranging from 95 to 65). In addition, the authors applied different image enhancement techniques to these test images. The proposed image forensics method can adaptively adjust the threshold for images based on the estimated QF, improving accuracy rates in authenticating and localising the tampered regions for semi-fragile watermarking.
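The generalised Benford’s law the authors build on is commonly written, following Fu, Shi and Su’s work on JPEG DCT coefficients, as p(x) = N·log10(1 + 1/(s + x^q)) for first digits x = 1..9. A minimal sketch of that model; N, q and s are left as free parameters here rather than the per-quality-factor fitted values used in the paper:

```python
import math

def generalized_benford_pmf(N, q, s):
    """Generalised Benford distribution for first digits x = 1..9:
    p(x) = N * log10(1 + 1/(s + x**q)).
    N is a normalisation constant; q and s are model parameters that, in
    the JPEG-forensics literature, are fitted per quality factor."""
    return {x: N * math.log10(1 + 1 / (s + x ** q)) for x in range(1, 10)}

# With q = 1 and s = 0, the model reduces to the standard Benford law,
# and N = 1 normalises it exactly.
standard = generalized_benford_pmf(1.0, 1.0, 0.0)
```

Deviation of observed DCT-coefficient first digits from the fitted curve is what signals recompression at an unknown QF.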


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Zhifeng Wang ◽  
Chi Zuo ◽  
Chunyan Zeng

Purpose
Recently, double joint photographic experts group (JPEG) compression detection tasks have received much attention in the field of Web image forensics. Although several useful methods have been proposed for detecting double JPEG compression when the quantization matrices differ between the primary and secondary compression processes, detection remains difficult when the quantization matrices are the same. Moreover, existing methods for the different and the same quantization matrices are implemented independently of each other. This paper aims to build a new unified framework for detecting double JPEG compression.

Design/methodology/approach
First, the Y channel of JPEG images is cut into 8 × 8 nonoverlapping blocks, and two groups of features that characterize the artifacts caused by double JPEG compression with the same and with different quantization matrices are extracted from those blocks. Then, Riemannian manifold learning is applied for dimensionality reduction while preserving the local intrinsic structure of the features. Finally, a deep stacked autoencoder network with seven layers is designed to detect double JPEG compression.

Findings
Experimental results with different quality factors have shown that the proposed approach performs much better than the state-of-the-art approaches.

Practical implications
To verify the integrity and authenticity of Web images, research on double JPEG compression detection is receiving increasing attention.

Originality/value
This paper proposes a unified framework that detects double JPEG compression whether or not the quantization matrices are the same, which means this approach can be applied in more practical Web forensics tasks.
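The first pipeline stage, cutting the Y channel into 8 × 8 nonoverlapping blocks, can be sketched with NumPy. The edge handling (cropping partial blocks) is an assumption of this sketch, not something the abstract specifies:

```python
import numpy as np

def to_8x8_blocks(y_channel):
    """Cut a luminance (Y) channel into 8x8 nonoverlapping blocks.
    Edges that do not fill a whole block are cropped, which is one
    common convention."""
    h, w = y_channel.shape
    h8, w8 = h - h % 8, w - w % 8          # largest multiple-of-8 crop
    cropped = y_channel[:h8, :w8]
    # Reshape to (rows_of_blocks, 8, cols_of_blocks, 8), then bring the
    # two block-grid axes together before flattening the grid.
    blocks = cropped.reshape(h8 // 8, 8, w8 // 8, 8).swapaxes(1, 2)
    return blocks.reshape(-1, 8, 8)

# A 17x13 dummy image yields a 2x1 grid, i.e. 2 full blocks.
y = np.arange(17 * 13, dtype=np.float32).reshape(17, 13)
blocks = to_8x8_blocks(y)
```

Per-block features (and, in the paper, the manifold-learning and autoencoder stages) would then operate on the `(n, 8, 8)` array.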


1995 ◽  
Vol 85 (5) ◽  
pp. 1359-1372
Author(s):  
Hsi-Ping Liu

Abstract Because of its simple form, a bandlimited, four-parameter anelastic model that yields nearly constant midband Q for low-loss materials is often used for calculating synthetic seismograms. The four parameters used in the literature to characterize anelastic behavior are τ1, τ2, Qm, and MR in the relaxation-function approach (s1 = 1/τ1 and s2 = 1/τ2 are angular frequencies defining the bandwidth, MR is the relaxed modulus, and Qm is approximately the midband quality factor when Qm ≫ 1); or τ′1, τ′2, Q′m, and M′R in the creep-function approach (s′1 = 1/τ′1 and s′2 = 1/τ′2 are angular frequencies defining the bandwidth, and Q′m is approximately the midband quality factor when Q′m ≫ 1). In practice, it is often the case that, for a particular medium, the quality factor Q(ω0) and phase velocity c(ω0) at an angular frequency ω0 (s1 < ω0 < s2; s′1 < ω0 < s′2) are known from field measurements. If values are assigned to τ1 and τ2 (τ2 < τ1), or to τ′1 and τ′2 (τ′2 < τ′1), then the two remaining parameters, Qm and MR, or Q′m and M′R, can be obtained from Q(ω0). However, for highly attenuative media, e.g., Q(ω0) ≦ 5, Q(ω) can become highly skewed and negative at low frequencies (for the relaxation-function approach) or at high frequencies (for the creep-function approach) if this procedure is followed. A negative Q(ω) is unacceptable because it implies an increase in energy for waves propagating in a homogeneous and attenuative medium. This article shows that given (τ1, τ2, ω0) or (τ′1, τ′2, ω0), a lower limit of Q(ω0) exists for a bandlimited, four-parameter anelastic model. In the relaxation-function approach, the minimum permissible Q(ω0) is given by ln[(1 + ω0²τ1²)/(1 + ω0²τ2²)] / {2 arctan[ω0(τ1 − τ2)/(1 + ω0²τ1τ2)]}. In the creep-function approach, the minimum permissible Q(ω0) is given by {2 ln(τ′1/τ′2) − ln[(1 + ω0²τ′1²)/(1 + ω0²τ′2²)]} / {2 arctan[ω0(τ′1 − τ′2)/(1 + ω0²τ′1τ′2)]}.
The more general statement that, for a given set of relaxation mechanisms, a lower limit exists for Q(ω0) is also shown to hold. Because a nearly constant midband Q cannot be achieved for highly attenuative media using a four-parameter anelastic model, a bandlimited, six-parameter anelastic model that yields a nearly constant midband Q for such media is devised; an expression for the minimum permissible Q(ω0) is given. Six-parameter anelastic models with quality factors Q ∼ 5 and Q ∼ 16, constant to 6% over the frequency range 0.5 to 200 Hz, illustrate this result. In conformity with field observations that Q(ω) for near-surface earth materials is approximately constant over a wide frequency range, the bandlimited, six-parameter anelastic models are suitable for modeling wave propagation in highly attenuative media for bandlimited time functions in engineering and exploration seismology.
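The two closed-form lower limits quoted in the abstract transcribe directly into code. A sketch only: the relaxation times (in seconds) and the angular frequency ω0 (in rad/s) are assumed to satisfy τ2 < τ1 with ω0 inside the model passband, and the trial values below are illustrative, not from the article:

```python
import math

def min_q_relaxation(tau1, tau2, w0):
    """Minimum permissible Q(w0) for the bandlimited four-parameter
    model, relaxation-function approach (tau2 < tau1)."""
    num = math.log((1 + w0**2 * tau1**2) / (1 + w0**2 * tau2**2))
    den = 2 * math.atan(w0 * (tau1 - tau2) / (1 + w0**2 * tau1 * tau2))
    return num / den

def min_q_creep(tau1, tau2, w0):
    """Minimum permissible Q(w0), creep-function approach; tau1, tau2
    here play the role of the creep-function relaxation times."""
    num = 2 * math.log(tau1 / tau2) - math.log(
        (1 + w0**2 * tau1**2) / (1 + w0**2 * tau2**2))
    den = 2 * math.atan(w0 * (tau1 - tau2) / (1 + w0**2 * tau1 * tau2))
    return num / den

# Illustrative band: s1 = 1 rad/s, s2 = 1000 rad/s, measured at w0 = 10.
q_relax = min_q_relaxation(1.0, 0.001, 10.0)
q_creep = min_q_creep(1.0, 0.001, 10.0)
```

A requested Q(ω0) below these values would force Q(ω) negative somewhere in band, which is the physical inconsistency the article rules out.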


Author(s):  
S. A. Dobershtein ◽  
N. M. Zhilin ◽  
I. V. Veremeev

This paper presents research on methods for reducing the capacitance ratio of STW resonators without significant degradation of the quality factor, by using external inductors and topology changes: dividing the IDT into parts and connecting them in series. Calculated and experimental data are presented for 416 MHz and 766 MHz STW resonators with quality factors Q = 7000–7978. The capacitance ratio has been reduced from 1200 to 301.


Author(s):  
Yudistira Yudistira ◽  
Ahmad Subhan Yazid ◽  
Agung Fatwanto

FIFA 15 and Pro Evolution Soccer (PES) 15 are soccer games that are popular in Indonesia. Usability testing needs to be done to assess user interest in and satisfaction with both games and to provide a comparison between them. The framework used for testing is McCall’s, which combines an operability metric and a training metric to determine software quality. McCall’s was chosen because it has reliable and comprehensive quality factor indicators. The tests yielded operability levels of 76.81% ± 15.76% for PES 15 and 70.65% ± 20.73% for FIFA 15. The training-metric tests yielded 15.96 ± 21.74 seconds for PES 15 and 78.29 ± 25.73 seconds for FIFA 15. These data show that the usability of PES 15 is better than that of FIFA 15.


1985 ◽  
Vol 50 (2) ◽  
pp. 397-406 ◽  
Author(s):  
Franco Montagna ◽  
Andrea Sorbi

When dealing with axiomatic theories from a recursion-theoretic point of view, the notion of r.e. preordering naturally arises. We agree that an r.e. preorder is a pair 𝒫 = 〈P, ≤P〉 such that P is an r.e. subset of the set of natural numbers (denoted by ω), ≤P is a preordering on P, and the set {〈x, y〉 : x ≤P y} is r.e. Indeed, if 𝒯 is an axiomatic theory, the provable implication of 𝒯 yields a preordering on the class of (Gödel numbers of) formulas of 𝒯.

Of course, if ≤P is a preordering on P, then it yields an equivalence relation ~P on P, by simply letting x ~P y iff x ≤P y and y ≤P x. Hence, in the case P = ω, any preordering yields an equivalence relation on ω and consequently a numeration in the sense of [4]. It is also clear that any equivalence relation on ω (hence any numeration) can be regarded as a preordering on ω. In view of this connection, we sometimes apply to the theory of preorders some of the concepts from the theory of numerations (see also Eršov [6]).

Our main concern will be with applications of these concepts to logic, in particular as regards sufficiently strong axiomatic theories (essentially the ones in which recursive functions are representable). From this point of view it seems to be of some interest to study some remarkable prelattices and Boolean prealgebras which arise from such theories. It turns out that these structures enjoy some rather surprising lattice-theoretic and universal recursion-theoretic properties.

After making our main definitions in §1, we examine universal recursion-theoretic properties of some r.e. prelattices in §2.
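The construction of the equivalence ~P from a preorder ≤P (x ~P y iff x ≤P y and y ≤P x) can be illustrated on a finite toy preorder. A sketch only; the modular-arithmetic preorder below is purely illustrative and has nothing to do with the provability preorders studied in the paper:

```python
def equivalence_classes(elements, leq):
    """Given a preorder leq on elements, form the induced equivalence
    x ~ y iff leq(x, y) and leq(y, x), and return its classes.
    Transitivity of leq guarantees one representative per class suffices."""
    classes = []
    for x in elements:
        for cls in classes:
            rep = cls[0]
            if leq(x, rep) and leq(rep, x):
                cls.append(x)
                break
        else:
            classes.append([x])
    return classes

# Toy preorder on 0..8: x <= y iff (x mod 3) <= (y mod 3).
# The induced equivalence identifies numbers with equal residue mod 3.
classes = equivalence_classes(list(range(9)),
                              lambda x, y: x % 3 <= y % 3)
```

Replacing the lambda with "the theory proves x → y" (on Gödel numbers of formulas) gives exactly the provable-equivalence classes the abstract describes, though that relation is only r.e., not decidable.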

