Digital Forensics and Forensic Investigations
Latest Publications


TOTAL DOCUMENTS

35
(FIVE YEARS 35)

H-INDEX

0
(FIVE YEARS 0)

Published By IGI Global

9781799830252, 9781799830269

Author(s):  
Yasen Aizezi ◽  
Anwar Jamal ◽  
Ruxianguli Abudurexiti ◽  
Mutalipu Muming

This paper discusses the use of mutual information (MI) and support vector machines (SVMs) for Uyghur web text classification within the digital forensics process: automatic classification and identification, plus the conversion and preprocessing of plain text based on the encoding features of existing Uyghur web documents. It first introduces the preparatory work for Uyghur web text encoding. To filter non-Uyghur characters and stop words from web texts, we propose a Multi-feature Space Normalized Mutual Information (M-FNMI) algorithm, which replaces the MI between a single feature and a category with the MI between an input feature combination and a category, so as to extract more accurate feature words. Finally, we classify the features with an SVM. Experimental results show that this scheme achieves high classification precision and can provide a criterion for purpose-specific digital forensics.
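The mutual-information scoring at the heart of such feature selection can be sketched in a few lines. This is an illustrative toy, not the authors' M-FNMI algorithm: the corpus, class labels, and words are invented, and the real method scores feature combinations rather than single words.

```python
import math
from collections import Counter

def mutual_information(docs, labels, feature):
    """MI between the presence/absence of `feature` and the class label."""
    n = len(docs)
    joint = Counter()
    for doc, label in zip(docs, labels):
        joint[(feature in doc, label)] += 1
    f_marg, c_marg = Counter(), Counter()
    for (f, c), cnt in joint.items():
        f_marg[f] += cnt
        c_marg[c] += cnt
    mi = 0.0
    for (f, c), cnt in joint.items():
        p_fc = cnt / n
        mi += p_fc * math.log(p_fc / ((f_marg[f] / n) * (c_marg[c] / n)))
    return mi

# Toy corpus: "crime" perfectly separates the classes, "web" is uninformative.
docs = [{"crime", "web"}, {"crime", "text"}, {"news", "web"}, {"news", "text"}]
labels = ["forensic", "forensic", "general", "general"]
scores = {w: mutual_information(docs, labels, w) for w in ["crime", "web"]}
```

In a full pipeline, words ranked highest by such a score would form the feature vector handed to the SVM classifier.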


Author(s):  
Poonkodi Mariappan ◽  
Padhmavathi B. ◽  
Talluri Srinivasa Teja

Digital forensics, as the name suggests, is primarily associated in the popular mind with the investigation of crime. In the contemporary world, however, it has evolved into an essential set of tools spanning everything from data acquisition to legal action. Three stages are involved in digital forensics: acquisition, analysis, and reporting. The Digital Forensic Research Workshop (DFRWS) defined digital forensics as the "use of scientifically derived and proven methods toward the identification, collection, analysis, interpretation, documentation and presentation of digital evidence derived from digital sources for the purpose of facilitating or furthering the reconstruction of events found to be criminal". The hard problem in digital forensics is that the acquired data must be cleaned and made intelligible for human reading. To address this, a number of tools exist whose application may be repeated until relevant data is obtained.


Author(s):  
Nourhene Ellouze ◽  
Slim Rekhis ◽  
Noureddine Boudriga

Healthcare applications are increasingly used because of the safety and convenience they bring to patients and healthcare professionals. Nevertheless, the use of weak authentication techniques and vulnerable communication protocols leaves these applications exposed to specific classes of security attacks and e-crimes. Because these applications handle sensitive information and implement complex, critical features, such attacks threaten the privacy, the safety, and even the lives of the people who use them. This chapter focuses on the postmortem investigation of crimes against healthcare applications. After classifying crimes targeting healthcare applications, it discusses the requirements for the design of an appropriate postmortem investigation system. The chapter also provides a literature review of proposals for investigating crimes in healthcare applications, together with a discussion of advanced issues.


Author(s):  
Mohammed S. Gadelrab ◽  
Ali A. Ghorbani

New computing and networking technologies have not only changed the way traditional crimes are committed but also introduced entirely new "cyber" crimes. Cybercrime investigation and forensics is a relatively new field that can benefit from the methods and tools of its traditional predecessor. This chapter explains the problem of cybercriminal profiling and why it differs from ordinary criminal profiling. It provides an overview of the problem and current approaches, combined with a suggested solution. It also discusses serious challenges that must be addressed to produce reliable results, and finally presents some ideas for future work.


Author(s):  
Ruxin Wang ◽  
Wei Lu ◽  
Jixian Li ◽  
Shijun Xiang ◽  
Xianfeng Zhao ◽  
...  

Image splicing detection is of fundamental importance in digital forensics and has therefore attracted increasing attention recently. In this article, a color image splicing detection approach is proposed based on Markov transition probabilities of quaternion component separation in the quaternion discrete cosine transform (QDCT) and quaternion wavelet transform (QWT) domains. First, intra-block and inter-block Markov features between block QDCT coefficients are obtained from the real part and the three imaginary parts of the QDCT coefficients, respectively. Then, additional Markov features are extracted from the luminance (Y) channel in the quaternion wavelet transform domain to characterize the positional dependency among quaternion wavelet sub-band coefficients. Finally, an ensemble classifier (EC) is used to separate spliced from authentic color images. Experimental results demonstrate that the proposed approach outperforms several state-of-the-art methods.
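The Markov transition-probability feature construction can be illustrated on a single real-valued channel. This is a simplified sketch under assumed conventions (one horizontal difference direction, threshold T = 3, a random coefficient array standing in for block QDCT coefficients); the paper's actual features span four quaternion components, multiple directions, and both the QDCT and QWT domains.

```python
import numpy as np

def markov_features(coeffs, T=3):
    """Transition-probability matrix of thresholded horizontal differences
    between neighbouring coefficients (one direction only, for illustration)."""
    diff = coeffs[:, :-1] - coeffs[:, 1:]      # horizontal difference array
    diff = np.clip(diff, -T, T)                # threshold to [-T, T]
    src, dst = diff[:, :-1].ravel(), diff[:, 1:].ravel()
    P = np.zeros((2 * T + 1, 2 * T + 1))
    for i, j in zip(src + T, dst + T):         # shift to non-negative indices
        P[i, j] += 1
    row = P.sum(axis=1, keepdims=True)
    # each non-empty row becomes a conditional distribution P(next | current)
    return np.divide(P, row, out=np.zeros_like(P), where=row > 0)

rng = np.random.default_rng(0)
coeffs = rng.integers(-5, 6, size=(8, 8))      # stand-in coefficient block
P = markov_features(coeffs)
```

The flattened matrix P (here 49 values) would be one slice of the feature vector fed to the ensemble classifier.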


Author(s):  
Guorui Sheng ◽  
Tiegang Gao

Seam-carving is widely used for content-aware image resizing. To cope with digital image forgery caused by seam-carving, a new detection algorithm based on Benford's law is presented. The algorithm uses the probabilities of the first digits of quantized DCT coefficients from individual AC modes to detect seam-carved images. Experimental results show that the proposed method outperforms the method based on traditional Markov features and other existing methods.
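The first-digit statistic behind this kind of detector can be sketched as follows. The coefficient list is an invented stand-in for the per-AC-mode quantized DCT coefficients the paper works with; the point is only that the empirical distribution can be compared against the Benford reference.

```python
import math
from collections import Counter

def first_digit_distribution(coeffs):
    """Empirical distribution of the first digits of non-zero coefficients."""
    digits = [int(str(abs(c))[0]) for c in coeffs if c != 0]
    counts = Counter(digits)
    n = len(digits)
    return [counts.get(d, 0) / n for d in range(1, 10)]

def benford(d):
    """Benford's law: P(first digit = d) = log10(1 + 1/d)."""
    return math.log10(1 + 1 / d)

# Untouched compressed images tend to follow Benford's law; resizing by
# seam-carving perturbs the distribution, and the deviation from the
# reference below can serve as a detection feature.
expected = [benford(d) for d in range(1, 10)]
dist = first_digit_distribution([12, -3, 100, 0])   # toy coefficient list
```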


Author(s):  
Nikolaos Serketzis ◽  
Vasilios Katos ◽  
Christos Ilioudis ◽  
Dimitrios Baltatzis ◽  
George J Pangalos

In this article, a digital forensic readiness (DFR) framework is proposed, focusing on the prioritization, triaging, and selection of Indicators of Compromise (IoCs) to be used when investigating security incidents. A core component of the framework is the contextualization of the IoCs to the underlying organization, which can be achieved with clustering and classification algorithms and a local IoC database.
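The idea of contextualizing feed IoCs against a local database can be sketched with a toy triage score. Everything here is illustrative and not the framework's actual algorithm: the `IoC` record, the local sighting counts, and the weighting rule are invented to show the principle that locally observed indicators are promoted in the triage order.

```python
from dataclasses import dataclass

@dataclass
class IoC:
    value: str          # e.g. an IP address, domain, or file hash
    ioc_type: str       # "ip", "domain", "hash", ...
    feed_score: float   # confidence reported by the external feed, 0..1

# Hypothetical local IoC database: indicators seen in this organization's
# past incidents, mapped to their number of sightings.
local_db = {"203.0.113.7": 4, "bad.example.org": 1}

def triage(iocs, local_db, weight=0.5):
    """Rank IoCs by feed confidence plus a bonus for local context."""
    def score(ioc):
        sightings = local_db.get(ioc.value, 0)
        return ioc.feed_score + weight * min(sightings, 5) / 5
    return sorted(iocs, key=score, reverse=True)

feed = [IoC("198.51.100.9", "ip", 0.9),
        IoC("203.0.113.7", "ip", 0.6),
        IoC("bad.example.org", "domain", 0.3)]
ranked = triage(feed, local_db)
```

Note how the locally sighted indicator outranks the one with the higher raw feed score; in the framework this contextual signal would come from clustering and classification over the local database rather than a fixed bonus.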


Author(s):  
Sani M. Abdullahi ◽  
Hongxia Wang ◽  
Asad Malik

Fingerprint minutiae are the unique representation of fingerprint image feature points as ridge terminations and bifurcations. A hash signature generated from these feature points therefore meets the desired properties of a robust hash signature and is well suited to fingerprint image content authentication. This article proposes a novel minutiae- and shape-context-based fingerprint image hashing scheme. Fingerprint minutiae points are extracted together with their orientations and descriptors, then embedded into shape-context-based descriptors to generate a unique, compact, and robust hash signature. The robustness of the proposed scheme is evaluated under content-preserving attacks, including noise addition, blurring, and geometric distortion, and efficient results are achieved. A series of performance comparisons between the proposed scheme and other state-of-the-art schemes also shows the approach to be robust and secure, yielding better results.
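How a hash can tolerate small content-preserving perturbations while still changing for different content can be shown with a quantize-then-hash toy. This is a gross simplification of the paper's scheme: it omits the shape-context descriptors entirely, and the grid size and angle binning are illustrative assumptions.

```python
import hashlib

def minutiae_hash(minutiae, grid=16, angle_bins=8):
    """Quantize (x, y, angle_degrees) minutiae into coarse cells and hash
    the sorted cell set, so perturbations that stay within a cell leave
    the hash unchanged."""
    cells = sorted({(x // grid, y // grid, int(theta) * angle_bins // 360)
                    for x, y, theta in minutiae})
    data = ",".join(f"{a}:{b}:{c}" for a, b, c in cells)
    return hashlib.sha256(data.encode()).hexdigest()

ref = [(40, 52, 90), (120, 33, 270)]
noisy = [(41, 53, 95), (121, 34, 272)]   # small perturbation, same cells
```

A real scheme replaces the brittle cell boundaries with robust descriptors (here, shape context) so that robustness degrades gracefully rather than abruptly at cell edges.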


Author(s):  
Ning Wang

Because existing methods cannot express, share, and reuse digital evidence review information in a unified manner, a digital evidence review elements knowledge base model based on ontology is presented. First, given the multi-source, heterogeneous character of digital evidence review knowledge, classification and extraction are performed. Second, following the principles of ontology construction, the digital evidence review elements knowledge base model, which comprises a domain ontology, an application ontology, and an atomic ontology, is established. Finally, the model can effectively acquire digital evidence review knowledge by analyzing review scenarios.


Author(s):  
Xinchao Huang ◽  
Zihan Liu ◽  
Wei Lu ◽  
Hongmei Liu ◽  
Shijun Xiang

Detecting digital audio forgeries is a significant research focus in the field of audio forensics. In this article, the authors focus on a particular form of digital audio forgery, copy-move, and propose a fast and effective method to detect doctored audio. First, the input audio is segmented into syllables by voice activity detection and syllable detection. Second, points in the frequency domain are selected as features by applying the discrete Fourier transform (DFT) to each audio segment. The segments are then sorted by these features, producing a sorted list of audio segments. Finally, each segment is compared only with a few adjacent segments in the sorted list, which reduces the time complexity. Comparisons with other state-of-the-art methods show that the proposed method can verify the authenticity of the input audio and locate forged positions quickly and effectively.
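The sort-then-compare-adjacent strategy can be sketched as follows, assuming the syllable segments have already been extracted by the earlier steps. The segment length, the use of full DFT magnitudes as features, and the match threshold are illustrative assumptions, not the paper's exact parameters.

```python
import numpy as np

def detect_copy_move(segments, threshold=1e-6):
    """Flag pairs of near-identical segments by sorting their DFT magnitude
    features and comparing only adjacent entries in the sorted order,
    avoiding the quadratic all-pairs comparison."""
    feats = [np.abs(np.fft.rfft(s)) for s in segments]
    order = sorted(range(len(feats)), key=lambda i: tuple(feats[i]))
    pairs = []
    for a, b in zip(order, order[1:]):
        if np.linalg.norm(feats[a] - feats[b]) < threshold:
            pairs.append(tuple(sorted((a, b))))
    return pairs

rng = np.random.default_rng(1)
segs = [rng.standard_normal(64) for _ in range(5)]
segs.append(segs[2].copy())      # simulate copy-move: segment 5 duplicates 2
matches = detect_copy_move(segs)
```

Because identical feature vectors always land next to each other after sorting, each segment only needs to be checked against its neighbours, which is what brings the cost down from comparing all pairs.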

