Emerging Digital Forensics Applications for Crime Detection, Prevention, and Security
Latest Publications


Total documents: 16 (five years: 0)
H-index: 2 (five years: 0)
Published by: IGI Global
ISBN: 9781466640061, 9781466640078

Author(s):  
Shunichi Ishihara

This study is one of the first likelihood-ratio-based forensic text comparison studies in forensic authorship analysis. Likelihood-ratio-based evaluation of scientific evidence has increasingly been adopted across forensic comparison disciplines, such as DNA, handwriting, fingerprints, footwear, voice recordings, etc., and it is widely accepted as the way to ensure maximum accountability and transparency in the evaluation process. Due to its convenience and low cost, the short message service (SMS) has been a very popular medium of communication for quite some time. Unfortunately, however, SMS messages are sometimes used for reprehensible purposes, e.g., communication between drug dealers and buyers, or in illicit acts such as extortion, fraud, scams, hoaxes, and false reports of terrorist threats. In this study, the author performs a likelihood-ratio-based forensic text comparison of SMS messages focusing on lexical features. The likelihood ratios (LRs) are calculated using Aitken and Lucy's (2004) multivariate kernel density procedure and then calibrated. The validity of the system is assessed based on the magnitude of the LRs using the log-likelihood-ratio cost (Cllr). The strength of the derived LRs is graphically presented in Tippett plots. The results of the current study are compared with those of previous studies.
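For readers unfamiliar with the log-likelihood-ratio cost, the sketch below shows how Cllr is conventionally computed from calibrated LRs for same-author and different-author comparisons. This is an illustrative Python example of the standard metric, not the author's code, and the LR values shown are hypothetical.

```python
import numpy as np

def cllr(same_author_lrs, diff_author_lrs):
    """Log-likelihood-ratio cost (Cllr) for a set of calibrated LRs.

    same_author_lrs: LRs from comparisons where both texts share an author
    (ideally large); diff_author_lrs: LRs from different-author comparisons
    (ideally small). Lower Cllr indicates a better-performing system.
    """
    ss = np.asarray(same_author_lrs, dtype=float)
    ds = np.asarray(diff_author_lrs, dtype=float)
    penalty_ss = np.mean(np.log2(1.0 + 1.0 / ss))  # penalises small same-author LRs
    penalty_ds = np.mean(np.log2(1.0 + ds))        # penalises large different-author LRs
    return 0.5 * (penalty_ss + penalty_ds)

# Hypothetical LR values purely for illustration
print(cllr([120.0, 35.0, 8.0], [0.02, 0.4, 1.5]))
```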


Author(s):  
Lynn Batten, Lei Pan, Nisar Khan

The need for an automated approach to forensic digital investigation has been recognized for some years, and several authors have developed frameworks in this direction. The aim of this paper is to assist the forensic investigator with the generation and testing of hypotheses in the analysis phase. In doing so, the authors present a new architecture which facilitates the move towards automation of the investigative process; this architecture draws together several important components of the literature on question-and-answer methodologies, including the concepts of 'pivot' words and sentence ranking. Their architecture is supported by a detailed case study demonstrating its practicality.
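As one way to picture how pivot-word-based sentence ranking might operate, the minimal sketch below scores sentences from a seized document by how often investigator-supplied pivot words occur in them. The scoring scheme (simple term-frequency overlap) and the sample text are our own assumptions for illustration; the paper's actual ranking function is not reproduced here.

```python
import re
from collections import Counter

def rank_sentences(text, pivot_words):
    """Rank sentences by overlap with investigator-supplied pivot words.

    Illustrative scoring only: count occurrences of pivot words per sentence
    and sort sentences by that count, highest first.
    """
    pivots = {w.lower() for w in pivot_words}
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    scored = []
    for s in sentences:
        tokens = Counter(re.findall(r"[a-z']+", s.lower()))
        score = sum(tokens[p] for p in pivots)
        scored.append((score, s))
    return sorted(scored, key=lambda pair: pair[0], reverse=True)

evidence = "Transfer the money tonight. Meet at the usual place. The weather is nice."
for score, sentence in rank_sentences(evidence, ["money", "transfer", "meet"]):
    print(score, sentence)
```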


Author(s):  
George Grispos, Tim Storer, William Bradley Glisson

Cloud computing is a rapidly evolving information technology (IT) phenomenon. Rather than procure, deploy, and manage a physical IT infrastructure to host their software applications, organizations are increasingly deploying their infrastructure into remote, virtualized environments, often hosted and managed by third parties. This development has significant implications for digital forensic investigators, equipment vendors, law enforcement, as well as corporate compliance and audit departments, amongst other organizations. Much of digital forensic practice assumes careful control and management of IT assets (particularly data storage) during the conduct of an investigation. This paper summarises the key aspects of cloud computing, analyses how established digital forensic procedures will be invalidated in this new environment, and identifies several new research challenges arising from this changing context.


Author(s):  
John Haggerty, Mark C. Casson, Sheryllynne Haggerty, Mark J. Taylor

The increasing use of social media, applications or platforms that allow users to interact online, ensures that this environment will provide a useful source of evidence for the forensics examiner. Current tools for the examination of digital evidence find this data problematic, as they are not designed for the collection and analysis of online data. Therefore, this paper presents a framework for the forensic analysis of user interaction with social media. In particular, it presents an inter-disciplinary approach for the quantitative analysis of user engagement to identify relational and temporal dimensions of evidence relevant to an investigation. This framework enables the analysis of large data sets from which a (much smaller) group of individuals of interest can be identified. In this way, it may be used to support the identification of individuals who might be 'instigators' of a criminal event orchestrated via social media, or to identify those who might be involved in 'peaks' of activity. In order to demonstrate the applicability of the framework, this paper applies it to a case study of actors posting to a social media Web site.
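To make the temporal dimension concrete, the following sketch bins timestamped posts into hourly windows, flags windows whose volume exceeds a threshold as 'peaks', and ranks the users active inside them. The data format and threshold are hypothetical choices for illustration; this is not the authors' implementation.

```python
from collections import Counter, defaultdict
from datetime import datetime

def find_activity_peaks(posts, threshold=10):
    """posts: iterable of (iso_timestamp, user) pairs -- hypothetical format.

    Returns, for each hourly window whose post count exceeds `threshold`,
    the window start time and the users ranked by post count within it.
    """
    windows = defaultdict(Counter)
    for timestamp, user in posts:
        hour = datetime.fromisoformat(timestamp).replace(minute=0, second=0, microsecond=0)
        windows[hour][user] += 1

    peaks = []
    for hour, users in sorted(windows.items()):
        if sum(users.values()) > threshold:
            peaks.append((hour, users.most_common()))
    return peaks

# Hypothetical posts purely for illustration
sample = [("2013-05-01T14:05:00", "actor_a"), ("2013-05-01T14:07:00", "actor_b")]
print(find_activity_peaks(sample, threshold=1))
```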


Author(s):  
T. Gidwani, M. J. Argano, W. Yan, F. Issa

Analytics has emerged as an important area of study because it helps prevent further incidents or risks after events have occurred; this is done by analysing computer events and deriving statistics from them. The purpose of this survey is to gather knowledge for the authors' own event knowledge database, which will record how unusual events behave and how they relate to other events. The algorithms reviewed in this paper inform the authors' future development: a knowledge database designed to work much like an internet search engine, in which events and their relationships can be searched. The research and algorithms have helped the authors decide on the technology they will use for the knowledge database.


Author(s):  
Konstantinos Vlachopoulos, Emmanouil Magkos, Vassileios Chrissikopoulos

With the advent of Information and Communication Technologies, the means of committing a crime, and crime itself, are constantly evolving. In addition, the boundaries between traditional crime and cybercrime are vague: a crime may not have a defined traditional or digital form, since digital and physical evidence may coexist in a crime scene. Furthermore, various items found in a crime scene may be worth examining as both physical and digital evidence, which the authors consider hybrid evidence. In this paper, a model for investigating such crime scenes with hybrid evidence is proposed. Their model unifies the procedures related to digital and physical evidence collection and examination, taking into consideration the unique characteristics of each form of evidence. The authors' model can also be applied in cases where only digital or only physical evidence exists in a crime scene.


Author(s):  
Lifang Yu, Yun Q. Shi, Yao Zhao, Rongrong Ni, Gang Cao

In this paper, the authors examine embedding efficiency, which directly influences security, the foremost concern in steganography. Embedding efficiency is defined as the number of random message bits embedded per embedding change. Recently, matrix embedding has gained extensive attention because of its outstanding performance in boosting the embedding efficiency of steganographic schemes. Firstly, the authors evaluate embedding change not only by the number of changed coefficients but also by the magnitude of the changes. Secondly, the embedding efficiency of matrix embedding with different radixes is formulated and investigated. The conclusion is drawn that ternary matrix embedding can achieve the highest embedding efficiency.
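As background for readers unfamiliar with matrix embedding, the sketch below shows the classic binary case using a Hamming parity-check matrix: p message bits are embedded into 2^p - 1 cover bits with at most one change, for an average embedding efficiency of p·2^p/(2^p - 1) bits per change. This is an illustrative Python example of the general technique, not the ternary scheme the paper analyses.

```python
import numpy as np

def hamming_matrix(p):
    """Parity-check matrix H (p x (2^p - 1)); columns are 1..2^p-1 in binary."""
    n = 2 ** p - 1
    return np.array([[(col >> bit) & 1 for col in range(1, n + 1)]
                     for bit in range(p)], dtype=np.uint8)

def embed(cover_bits, message_bits, H):
    """Embed p message bits into 2^p - 1 cover bits, flipping at most one bit."""
    stego = cover_bits.copy()
    syndrome = H.dot(stego) % 2
    diff = syndrome ^ message_bits
    if diff.any():
        # Flip the position whose H-column equals the syndrome difference
        position = int("".join(str(b) for b in diff[::-1]), 2) - 1
        stego[position] ^= 1
    return stego

def extract(stego_bits, H):
    return H.dot(stego_bits) % 2

p = 3
H = hamming_matrix(p)
cover = np.random.randint(0, 2, 2 ** p - 1).astype(np.uint8)
message = np.random.randint(0, 2, p).astype(np.uint8)
stego = embed(cover, message, H)
assert np.array_equal(extract(stego, H), message)
print("bits changed:", int(np.sum(cover != stego)))  # 0 or 1
```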


Author(s):  
Chang Wang, Jiangqun Ni, Chuntao Wang, Ruiyu Zhang

Minimizing the embedding impact is a practically feasible philosophy in the design of steganographic systems. The development of such a system can be formulated as the construction of a distortion profile reflecting the embedding impact and the design of a syndrome coding scheme based on a suitable code. The authors devise a new distortion profile that exploits both block complexity and the distortion effect due to flipping and rounding errors, and incorporate it into the framework of syndrome trellis coding (STC) to propose a new JPEG steganographic scheme. The STC provides multiple candidate solutions for embedding a message into a block of coefficients, while the constructed content-adaptive distortion profile guides the selection of the best solution with minimal distortion effect. The total embedding distortion is thus significantly reduced, leading to lower detectability by steganalysis. Extensive experimental results demonstrate that the proposed JPEG steganographic scheme greatly increases the secure embedding capacity against steganalysis and shows significant superiority over some existing JPEG steganographic approaches.
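The sketch below illustrates what a content-adaptive distortion profile of this general kind might look like: each quantised DCT coefficient is assigned a cost that is low in textured (high-complexity) blocks and low when the coefficient's rounding error is already close to the bin boundary. The cost formula is our own simplified assumption, not the authors' profile; it is only meant to show the shape of the input that STC-style coding consumes.

```python
import numpy as np

def distortion_profile(dct_blocks, raw_blocks, eps=1e-3):
    """Assign an embedding cost to every quantised DCT coefficient.

    dct_blocks: quantised (rounded) coefficients, shape (num_blocks, 8, 8)
    raw_blocks: the unrounded coefficients before rounding, same shape

    Simplified illustrative cost: changes are cheaper in high-variance blocks
    and for coefficients whose rounding error is near 0.5, because moving them
    to the neighbouring quantisation bin adds little extra distortion.
    """
    rounding_error = np.abs(raw_blocks - dct_blocks)                 # in [0, 0.5]
    block_complexity = np.var(dct_blocks, axis=(1, 2), keepdims=True)
    cost = (0.5 + eps - rounding_error) / (block_complexity + eps)
    return cost  # low cost = preferred embedding position for the coder

# Hypothetical 8x8 coefficient blocks purely for illustration
raw = np.random.randn(4, 8, 8) * 3.0
quantised = np.round(raw)
costs = distortion_profile(quantised, raw)
print(costs.shape, float(costs.min()), float(costs.max()))
```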


Author(s):  
Wei Sun, Zhe-Ming Lu, Fa-Xin Yu, Rong-Jun Shen

Audio fingerprinting is the process of obtaining a compact content-based signature that summarizes the essence of an audio clip. In general, existing audio fingerprinting schemes based on wavelet transforms are not robust against large linear speed changes. The authors present a novel framework for content-based audio retrieval built on an audio fingerprinting scheme that is robust against large linear speed changes. In the proposed scheme, an 8-level Daubechies wavelet decomposition is adopted to extract time-frequency features, and two fingerprint extraction algorithms are designed. The experimental results of the study are discussed further in the article.
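To illustrate the kind of time-frequency features such a scheme can draw on, the sketch below uses the PyWavelets library to perform an 8-level Daubechies decomposition of one audio frame and summarises each sub-band by its log-energy. The energy summary is an assumed, simplified feature; the paper's two fingerprint extraction algorithms are not reproduced here.

```python
import numpy as np
import pywt  # PyWavelets

def subband_energy_features(frame, wavelet="db4", level=8):
    """8-level Daubechies decomposition of one audio frame.

    Returns the log-energy of the approximation band and of each of the
    eight detail bands -- a simplified stand-in for fingerprint features.
    """
    coeffs = pywt.wavedec(frame, wavelet, level=level)
    return np.array([np.log1p(np.sum(c.astype(float) ** 2)) for c in coeffs])

# Hypothetical frame: 4096 samples of a synthetic 440 Hz tone at 8 kHz
t = np.arange(4096) / 8000.0
frame = np.sin(2 * np.pi * 440.0 * t)
print(subband_energy_features(frame))  # 9 values: approximation + 8 detail bands
```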


Author(s):  
Fei Peng, Juan Liu, Min Long

To address the identification of natural images (NI) and computer-generated graphics (CG), a novel method based on hybrid features is proposed. Because their image acquisition pipelines differ, natural images and computer-generated graphics differ in their statistical, visual, and noise characteristics. Firstly, the mean, variance, kurtosis, skewness, and median of the grayscale-image histograms in the spatial and wavelet domains are selected as statistical features. Secondly, the fractal dimensions of the grayscale image and its wavelet sub-bands are extracted as visual features. Thirdly, to compensate for the weakness of the photo response non-uniformity noise (PRNU) obtained from a wavelet-based de-noising filter, a Gaussian high-pass filter is applied to the image before PRNU extraction, and the physical features are calculated from the enhanced PRNU. For identification, a support vector machine (SVM) classifier is used in the experiments, and an average classification accuracy of 94.29% is achieved, with 97.3% for computer-generated graphics and 91.28% for natural images. Analysis and discussion show that the method is suitable for the identification of natural images and computer-generated graphics and achieves better identification accuracy than existing methods while using fewer feature dimensions.
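The statistical part of the feature set is straightforward to reproduce. The sketch below computes the five spatial-domain histogram statistics for a grayscale image with SciPy and feeds such feature vectors to a scikit-learn SVM; the wavelet-domain, fractal, and PRNU features described above are omitted, and the training data and labels are placeholders, so this is an illustration of the pipeline rather than the authors' code.

```python
import numpy as np
from scipy.stats import kurtosis, skew
from sklearn.svm import SVC

def histogram_statistics(gray_image):
    """Mean, variance, kurtosis, skewness, and median of a grayscale image's
    intensity distribution (spatial domain only)."""
    pixels = gray_image.astype(float).ravel()
    return np.array([pixels.mean(), pixels.var(),
                     kurtosis(pixels), skew(pixels), np.median(pixels)])

# Hypothetical training set: 10 "natural" and 10 "computer-generated" images
rng = np.random.default_rng(0)
images = [rng.integers(0, 256, (64, 64)) for _ in range(20)]
X = np.vstack([histogram_statistics(img) for img in images])
y = np.array([1] * 10 + [0] * 10)   # placeholder labels (1 = NI, 0 = CG)

clf = SVC(kernel="rbf").fit(X, y)
print(clf.predict(X[:2]))
```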

