Crime Prevention Technologies and Applications for Advancing Criminal Investigation
Latest Publications


TOTAL DOCUMENTS: 18 (five years: 0)
H-INDEX: 1 (five years: 0)
Published by IGI Global
ISBN: 9781466617582, 9781466617599

Author(s): Michael Davis, Alice Sedsman

Cloud computing has been heralded as a new era in the evolution of information and communications technologies. ICT giants have invested heavily in developing technologies and mega server facilities, which allow end users to access web-based software applications and store their data off-site. Businesses using cloud computing services will benefit from reduced operating costs as they cut back on ICT infrastructure and personnel. Individuals will no longer need to buy and install software and will have universal access to their data through any internet-ready device. Yet, hidden amongst the host of benefits are inherent legal risks. The global nature of cloud computing raises questions about privacy, security, confidentiality and access to data. Current terms of use do not adequately address the multitude of legal issues unique to cloud computing. In the face of this legal uncertainty, end users should be educated about the risks involved in entering the cloud.


Author(s): Jin Liu, Hefei Ling, Fuhao Zou, WeiQi Yan, Zhengding Lu

In this paper, the authors investigate the prospect of using multi-resolution histograms (MRH) in conjunction with digital image forensics, particularly in the detection of two kinds of copy-move manipulations, i.e., cloning and splicing. To the best of the authors’ knowledge, this is the first work that uses the same feature in both cloning and splicing forensics. The experimental results show the simplicity and efficiency of using MRH for the purpose of clone detection and splicing detection.
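The feature described above can be sketched simply: a multi-resolution histogram concatenates intensity histograms of a block at several successively downsampled resolutions, and near-duplicate blocks are flagged when their feature distance is small. The construction below is a generic illustration; the authors' exact parameters (resolution levels, bin counts, distance measure) may differ.

```python
import numpy as np

def multi_resolution_histogram(patch, levels=3, bins=16):
    """Concatenate normalised intensity histograms of a patch at several
    resolutions. A simplified sketch of the MRH idea."""
    feats = []
    current = patch.astype(float)
    for _ in range(levels):
        hist, _ = np.histogram(current, bins=bins, range=(0, 256))
        feats.append(hist / max(hist.sum(), 1))  # normalise to a distribution
        # halve resolution by 2x2 block averaging
        h, w = current.shape
        current = current[:h // 2 * 2, :w // 2 * 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    return np.concatenate(feats)

def block_distance(a, b):
    """L1 distance between MRH features; small values suggest cloned blocks."""
    return float(np.abs(multi_resolution_histogram(a) - multi_resolution_histogram(b)).sum())
```

In a clone-detection pass, overlapping blocks whose pairwise distance falls below a threshold would be reported as candidate copy-move pairs.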


Author(s): Xi Zhao, Anthony T.S. Ho, Yun Q. Shi

In the past few years, semi-fragile watermarking has become increasingly important to verify the content of images and localise the tampered areas, while tolerating some non-malicious manipulations. In the literature, the majority of semi-fragile algorithms have applied a predetermined threshold to tolerate errors caused by JPEG compression. However, this predetermined threshold is typically fixed and cannot be easily adapted to different amounts of errors caused by unknown JPEG compression at different quality factors (QFs). In this paper, the authors analyse the relationship between QF and threshold, and propose the use of generalised Benford’s Law as an image forensics technique for semi-fragile watermarking. The results show an overall average QF correct detection rate of approximately 99%, when 5%, 20% and 30% of the pixels are subjected to image content tampering and compression using different QFs (ranging from 95 to 65). In addition, the authors applied different image enhancement techniques to these test images. The proposed image forensics method can adaptively adjust the threshold for images based on the estimated QF, improving accuracy rates in authenticating and localising the tampered regions for semi-fragile watermarking.
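The generalised Benford's law used in this line of work models the first-digit distribution of block-DCT coefficient magnitudes as p(d) = N·log10(1 + 1/(s + d^q)), which reduces to classic Benford's law when N = 1, s = 0, q = 1. A minimal sketch follows; the fitted N, s, q values vary with JPEG quality factor and are not reproduced here.

```python
import numpy as np

def generalized_benford(d, N=1.0, s=0.0, q=1.0):
    """Generalised Benford probability for first digit d (1..9).
    Defaults give the classic Benford distribution; in QF estimation,
    N, s, q are fitted per JPEG quality factor."""
    return N * np.log10(1 + 1 / (s + d ** q))

def first_digit_distribution(coeffs):
    """Empirical first-digit distribution of non-zero coefficient magnitudes."""
    mags = np.abs(coeffs[coeffs != 0])
    first = (mags / 10 ** np.floor(np.log10(mags))).astype(int)
    return np.bincount(first, minlength=10)[1:10] / len(first)
```

Comparing the empirical distribution against the model curves for several candidate quality factors then yields an estimate of the unknown QF.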


Author(s): Kosta Haltis, Matthew J. Sorell, Russell Brinkworth

Biological vision systems are capable of discerning detail, as well as detecting objects and motion, in a wide range of highly variable lighting conditions that prove challenging for traditional cameras. In this paper, the authors describe the real-time implementation of a biological vision model using a high dynamic range video camera and a General Purpose Graphics Processing Unit. The effectiveness of this implementation is demonstrated in two surveillance applications: dynamic equalization of contrast for improved recognition of scene detail and the use of biologically-inspired motion processing for the detection of small or distant moving objects in a complex scene. A system based on this prototype could improve surveillance capability in any number of difficult situations.
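The dynamic-range compression stage of such models is often implemented as Naka-Rushton-style divisive normalisation, where each pixel is divided by its local adaptation level. The sketch below is a generic illustration of that principle, not the authors' exact photoreceptor model.

```python
import numpy as np

def box_mean(img, k):
    """Local mean over a k x k window, computed via 2-D cumulative sums."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    c = np.pad(p.cumsum(0).cumsum(1), ((1, 0), (1, 0)))
    h, w = img.shape
    return (c[k:k + h, k:k + w] - c[:h, k:k + w]
            - c[k:k + h, :w] + c[:h, :w]) / (k * k)

def photoreceptor_adapt(image, k=15, eps=1e-6):
    """Naka-Rushton-style adaptation: response = I / (I + local level).
    Bright and dark regions are both mapped into a similar output range,
    equalising contrast across the scene."""
    img = image.astype(float)
    return img / (img + box_mean(img, k) + eps)
```

Applied to a high dynamic range frame, this maps both a sunlit region and a deep shadow toward mid-range output values, preserving local detail in each.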


Author(s): Natthawut Samphaiboon, Matthew N. Dailey

Steganography, or communication through covert channels, is desirable when the mere existence of an encrypted message might cause suspicion or provide useful information to eavesdroppers. Text is effective for steganography due to its ubiquity; however, text communication channels do not necessarily provide sufficient redundancy for covert communication. In this paper, the authors propose a novel steganographic embedding scheme for Thai plain text documents that exploits redundancies in the way particular vowel, diacritical, and tonal symbols are composed in TIS-620, the standard Thai character set. This paper provides a Thai text stegosystem following a provably secure construction that guarantees covertness, privacy, and integrity of the hiddentext message under meaningful attacks against computational adversaries. In an experimental evaluation, the authors find that the message embedding scheme allows 203 bytes of embedded hiddentext message per 100KB of covertext on average, and that the document modifications are not readily noticed by observers. The stegosystem is thus a practical and effective secure system for covert communication over Thai plain text channels.
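The embedding idea can be sketched generically: wherever the covertext contains a symbol sequence with two renderings that display identically, the choice between them carries one hiddentext bit. The token names below are purely hypothetical placeholders; the actual interchangeable TIS-620 vowel and tone-mark sequences are identified in the paper.

```python
# Hypothetical token names standing in for the real interchangeable
# TIS-620 vowel/tone-mark byte sequences identified in the paper.
EQUIV = {"A1": ("A1", "A1'")}

def embed(tokens, bits):
    """Encode one bit at each token that has two equivalent renderings."""
    out, i = [], 0
    for t in tokens:
        base = t.rstrip("'")
        if base in EQUIV and i < len(bits):
            out.append(EQUIV[base][bits[i]])  # the chosen form carries the bit
            i += 1
        else:
            out.append(t)
    return out, i  # i = number of bits actually embedded

def extract(tokens):
    """Read the bits back from which equivalent form was chosen."""
    return [EQUIV[t.rstrip("'")].index(t)
            for t in tokens if t.rstrip("'") in EQUIV]
```

Because both forms render identically, the stegotext is visually indistinguishable from the covertext, while capacity is limited by how often the redundant sequences occur.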


Author(s): Roland Kwitt, Peter Meerwald, Andreas Uhl

In this paper, the authors adapt two blind detector structures for additive spread-spectrum image watermarking to the host signal characteristics of the Dual-Tree Complex Wavelet Transform (DT-CWT) domain coefficients. The research is motivated by the superior perceptual characteristics of the DT-CWT and its active use in watermarking. To improve the numerous existing watermarking schemes in which the host signal is modeled by a Gaussian distribution, the authors show that the Generalized Gaussian nature of Dual-Tree detail subband statistics can be exploited for better detector performance. This paper finds that the Rao detector is more practical than the likelihood-ratio test for their detection problem. The authors experimentally investigate the robustness of the proposed detectors under JPEG and JPEG2000 attacks and assess the perceptual quality of the watermarked images. The results demonstrate that their alterations allow significantly better blind watermark detection performance in the DT-CWT domain than the widely used linear-correlation detector. As only the detection side has to be modified, the proposed methods can be easily adopted in existing DT-CWT watermarking schemes.
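The contrast between the baseline linear-correlation detector and a detector that exploits generalized Gaussian host statistics can be sketched as follows. The Rao statistic below follows a commonly cited form for additive watermarks in generalized-Gaussian hosts with shape parameter c; the authors' exact derivation and normalisation may differ.

```python
import numpy as np

def linear_correlation(coeffs, watermark):
    """Classic linear-correlation detector (matched to Gaussian hosts)."""
    return float(np.mean(coeffs * watermark))

def rao_statistic(coeffs, watermark, c):
    """Rao test statistic for an additive spread-spectrum watermark in
    generalized-Gaussian host noise with shape parameter c. Under the
    no-watermark hypothesis it is asymptotically chi-square distributed,
    so a threshold can be set for a target false-alarm rate."""
    g = np.sign(coeffs) * np.abs(coeffs) ** (c - 1)
    num = np.sum(watermark * g) ** 2
    den = np.sum(watermark ** 2 * np.abs(coeffs) ** (2 * (c - 1)))
    return float(num / den)
```

With c = 2 the statistic reduces to a squared, normalised linear correlation; smaller c (heavier-tailed subband statistics, as in DT-CWT detail subbands) down-weights large-magnitude coefficients.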


Author(s): H. R. Chennamma, Lalitha Rangarajan

A digitally developed image is a viewable image (TIFF/JPEG) produced from a camera's sensor data (raw image) using computer software tools. Such images may be developed with a different colour space, a different demosaicing algorithm, or different post-processing parameter settings from those coded in the source camera. In this regard, the most reliable method of source camera identification is linking the given image to the camera's sensor. In this paper, the authors propose a novel approach for camera identification based on the sensor's readout noise. Readout noise is an important intrinsic characteristic of a digital imaging sensor (CCD or CMOS) and cannot be removed. The method quantitatively measures the readout noise of the sensor from an image using the mean-standard deviation plot. To evaluate the performance of the proposed approach, the authors tested it on images captured at two different exposure levels, using a dataset of 1200 images acquired from six different cameras of three different brands. The success of the proposed method is corroborated through experiments.
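A standard way to isolate readout noise from a mean versus standard-deviation (photon-transfer) relationship is to fit variance against mean signal: variance = read_noise^2 + gain * mean, so the intercept of the fit gives the readout-noise variance. The sketch below illustrates that idea; it is not the authors' exact estimation procedure.

```python
import numpy as np

def estimate_readout_noise(means, variances):
    """Estimate sensor readout noise from a photon-transfer curve.
    Fits var = read_noise**2 + gain * mean and returns the square root
    of the intercept (the signal-independent noise floor)."""
    slope, intercept = np.polyfit(means, variances, 1)
    return float(np.sqrt(max(intercept, 0.0)))
```

In practice the (mean, variance) pairs would come from many flat image patches at different exposure levels, which is why capturing at two exposures is useful.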


Author(s): Roberto Caldelli, Irene Amerini, Francesco Picchioni

Digital images are generated by different kinds of sensors, and understanding which kind of sensor acquired a given image can be crucial in many application scenarios where digital forensic techniques operate. In this paper, a new methodology is presented that establishes whether a digital photo has been taken by a photo-camera or scanned by a scanner. The specific geometrical features of the pattern noise introduced by the sensor are investigated through a DFT (Discrete Fourier Transform) analysis, and the origin of the digital content is assessed accordingly. Experimental results are provided to demonstrate the reliability of the proposed technique.
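The intuition behind camera/scanner discrimination is that a scanner uses a 1-D (linear) sensor, so its pattern noise repeats row by row along the scan direction, whereas a camera's 2-D sensor noise does not. The sketch below is a simplified stand-in for the paper's DFT-based analysis: it measures how strongly a noise residual repeats across rows.

```python
import numpy as np

def row_periodicity_score(noise):
    """Mean normalised correlation between each row of a noise residual
    and the row-averaged noise pattern. Scanner-like (row-repeating)
    noise scores near 1; camera-like 2-D noise scores near 0."""
    mean_row = noise.mean(axis=0)
    mean_row = mean_row - mean_row.mean()
    scores = []
    for row in noise:
        r = row - row.mean()
        denom = np.linalg.norm(r) * np.linalg.norm(mean_row)
        if denom > 0:
            scores.append(np.dot(r, mean_row) / denom)
    return float(np.mean(scores))
```

Thresholding such a score (or, as in the paper, inspecting the geometry of the noise spectrum) separates the two acquisition devices.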


Author(s): Gary Edmond

This article examines the standards governing the admission of new types of expert evidence. Based on the rules of evidence and procedure in Australia, it explains how judges have been largely uninterested in the reliability of expert opinion evidence. Focused on the use of CCTV images and covert sound recordings for the purposes of identification, but relevant to other forensic sciences, the article explains the need for interest in the reliability of incriminating expert opinion evidence. It also explains why many of the traditional trial safeguards may not be particularly useful for identifying or explaining problems and complexities with scientific and technical evidence. In closing, the article argues that those developing new types of evidence and new techniques, whether identification-based or derived from IT, camera or computer forensics, need to be able to explain why it is that the court can have confidence in any opinions expressed.


Author(s): Jonathan Weir, Raymond Lau, WeiQi Yan

In this paper, the authors reassemble an image that has been split into paper pieces, using duplication detection. Neighbouring pieces are connected by edge searching and matching, and pieces containing graphics or textures are matched using edge shape and the intersection between two adjacent pieces. The initial step is to mark the orientation of each piece and place the pieces with straight edges in their starting positions, determining the outline of the whole image. The remaining pieces are then fitted into their corresponding positions using edge information (shape, residual trace, and matching) after duplication or sub-duplication detection. In subsequent steps, patches with different edge shapes are found using edge duplication detection. As the number of remaining pieces decreases, the montage procedure becomes easier and faster.

