Integrity and Authenticity of Digital Images by Digital Forensic Analysis of Metadata

Author(s):  
Lizbardo Orellano Benancio ◽  
Ricardo Muñoz Canales ◽  
Paolo Rodriguez Leon ◽  
Enrique Lee Huamaní

Abstract—During various court hearings, the thesis that every authentic digital file carries precise metadata recording its creation date was questioned. This raised the problem of whether the metadata of a digital image file, whose tags record the creation date assigned by the recording device, are accurate and reliable. For this reason, the forensic analysis carried out in this work records the metadata of five digital image files from known sources and details their characteristics; it also records the metadata of those same images after manipulation with image-editing software, and compares the two sets to show which tags were modified in their content. Finally, HASH codes obtained with the SHA-256 algorithm are presented, for digital assurance, for both the edited and the original files; comparing them reveals the changes in content at the binary level.

Keywords—Crime; Cybercrime; Digital Image; HASH; Metadata
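The SHA-256 comparison step described above can be sketched with Python's standard `hashlib`; the byte strings below are stand-ins for real image files, not data from the study:

```python
import hashlib

def sha256_of_bytes(data: bytes) -> str:
    """Return the SHA-256 digest of a byte string as hex."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical example: the "original" and "edited" contents differ by a
# single byte, yet the digests differ completely (avalanche effect).
original = b"\xff\xd8\xff\xe0" + b"\x00" * 100   # stand-in for a JPEG file
edited   = b"\xff\xd8\xff\xe0" + b"\x00" * 99 + b"\x01"

h_orig = sha256_of_bytes(original)
h_edit = sha256_of_bytes(edited)
print(h_orig == h_edit)  # False: any binary-level change alters the hash
```

In practice the digests would be computed over the actual file bytes read from disk; matching digests give strong assurance that the file is bit-for-bit unchanged.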

Author(s):  
Shashidhar T. M. ◽  
K. B. Ramesh

Digital image forensics is becoming significantly more popular owing to the increasing use of images as a medium of information propagation. However, owing to the availability of various image-editing tools and software, threats to image content security are also increasing. A review of the existing approaches for identifying traces or artifacts shows that there is large scope for optimization to further enhance processing. Therefore, this paper presents a novel framework that performs cost-effective optimization of a digital forensic technique, with the aim of accurately localizing the tampered area as well as offering the capability to mitigate attacks of various forms. The study outcome shows that the proposed system offers significantly better results than existing systems, proving that minor novelty in design attributes can yield better accuracy as well as resilience toward all potential image threats.


Sensors ◽  
2018 ◽  
Vol 18 (10) ◽  
pp. 3372 ◽  
Author(s):  
Esteban Armas Vega ◽  
Ana Sandoval Orozco ◽  
Luis García Villalba ◽  
Julio Hernandez-Castro

In the last few years, the world has witnessed ground-breaking growth in the use of digital images and their applications in modern society. In addition, image-editing applications have trivialized the modification of digital photos, and this compromises the authenticity and veracity of a digital image. These applications allow the content of an image to be tampered with without leaving visible traces. Added to this, the ease of distributing information through the Internet has led society to accept everything it sees as true without questioning its integrity. This paper proposes a digital image authentication technique that combines the analysis of local texture patterns with the discrete wavelet transform and the discrete cosine transform to extract features from each block of an image. Subsequently, it uses a support vector machine to create a model that allows verification of the authenticity of the image. Experiments were performed with falsified images from public databases widely used in the literature, demonstrating the efficiency of the proposed method.
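The block-wise DCT feature extraction that such a pipeline relies on can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the 8×8 block size, the orthonormal DCT-II construction, and the choice of keeping the four lowest-frequency coefficients per block are all assumptions, and the local-texture and wavelet stages are omitted:

```python
import numpy as np

def dct_matrix(n: int) -> np.ndarray:
    """Orthonormal DCT-II basis matrix of size n x n."""
    k = np.arange(n)[:, None]
    j = np.arange(n)[None, :]
    m = np.cos(np.pi * (2 * j + 1) * k / (2 * n)) * np.sqrt(2 / n)
    m[0] /= np.sqrt(2)
    return m

def block_dct_features(img: np.ndarray, block: int = 8) -> np.ndarray:
    """Split a grayscale image into non-overlapping blocks and keep the
    four lowest-frequency 2-D DCT coefficients of each block."""
    C = dct_matrix(block)
    feats = []
    for i in range(0, img.shape[0] - block + 1, block):
        for j in range(0, img.shape[1] - block + 1, block):
            coefs = C @ img[i:i + block, j:j + block].astype(float) @ C.T
            feats.append(coefs[:2, :2].ravel())
    return np.concatenate(feats)

# A feature vector like this would then be fed to an SVM classifier
# (e.g. sklearn.svm.SVC) trained on genuine vs. tampered examples.
rng = np.random.default_rng(0)
print(block_dct_features(rng.random((16, 16))).shape)  # (16,)
```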


Leonardo ◽  
2001 ◽  
Vol 34 (2) ◽  
pp. 141-145 ◽  
Author(s):  
Johanna Drucker

Digital media gain their cultural authority in part because of the perception that they function on mathematical principles. The relationship between digital images and their encoded files, and in other cases, between digital images and the algorithms that generate them as display, lends itself to a conviction that the image and the file are mutually interchangeable. This relationship posits a connection of identicality between the file and the image according to which the mathematical basis and the image seem to share similar claims to truth. Since the history of images within Western culture is fraught with charges of deception and illusion, the question arises whether the ontological condition of the digital image, its very existence and identity, challenges this tradition. Or, by contrast, does the material instantiation of images, in their display or output, challenge the truth claims of the mathematically based digital file?


2021 ◽  
Vol 2 (2) ◽  
pp. 61-66
Author(s):  
Elizabeth Kovacs

This paper examines the concept that legitimate autographic identity may be granted to digital images created as non-fungible tokens (NFTs). The blockchain technology coded permanently into minted NFTs keeps track of the legitimacy of authorship and ownership, preventing duplication and removing them from the realm of allographic art. Questions arise as to what 'legitimacy' and 'ownership' even look like for a digital image, which is so easily reproduced and circulated. The main question that must be answered is whether the backend coding of a digital file is sufficient to alter its ontology into a token of one-of-a-kind autographic work, or whether only what is visible to the viewer of the image matters for its replicability and allographic ontological nature.


Author(s):  
D. P. Gangwar ◽  
Anju Pathania

This work presents a robust analysis of digital images to detect signs of modification, morphing, or editing by using properties such as the image's Exif metadata, thumbnail, camera traces, image markers, Huffman tables and markers, and compression signatures. The details of the whole methodology and the findings are described in the present work. The main advantage of the methodology is that the whole analysis was done using software and tools that are freely available as open source.
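One of the properties listed above, the image markers, can be inspected with the standard library alone. The sketch below walks a JPEG's marker stream; it is an illustrative assumption about how such an analysis might start (the Exif metadata itself lives inside the APP1 segment, and editors that re-save a file often rewrite or drop these segments), and the `fake` byte string is a hypothetical stand-in for a real file:

```python
import struct

def list_jpeg_markers(data: bytes):
    """Walk the JPEG marker stream and return (marker, length) pairs.
    An Exif-carrying file normally stores its metadata in an APP1
    (0xFFE1) segment near the start of the file."""
    assert data[:2] == b"\xff\xd8", "not a JPEG (missing SOI marker)"
    markers, pos = [], 2
    while pos + 4 <= len(data):
        if data[pos] != 0xFF:
            break
        marker = data[pos + 1]
        if marker == 0xDA:          # SOS: compressed scan data follows
            markers.append(("SOS", None))
            break
        (length,) = struct.unpack(">H", data[pos + 2:pos + 4])
        markers.append((f"APP{marker - 0xE0}" if 0xE0 <= marker <= 0xEF
                        else hex(marker), length))
        pos += 2 + length
    return markers

# Minimal synthetic JPEG header: SOI followed by an APP1 ("Exif") segment
fake = b"\xff\xd8" + b"\xff\xe1" + struct.pack(">H", 8) + b"Exif\x00\x00"
print(list_jpeg_markers(fake))  # [('APP1', 8)]
```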


2011 ◽  
Vol 2 (1) ◽  
Author(s):  
Vina Chovan Epifania ◽  
Eko Sediyono

Abstract. Image File Searching Based on Color Dominance. One characteristic of an image that can be used in the image-searching process is its color composition. Color is a trait that is easily perceived by humans in a picture. Using color as a search parameter can make it easier to find images stored in computer memory. Color images have RGB values that can be computed and converted into the HSL color space model. The HSL model is convenient because its values can be expressed as percentages, so the pixels of an image can be grouped and named; this yields the dominant values of the colors contained in an image. With these values, an image search can be performed quickly, simply by supplying them to an image-file retrieval system. This article discusses the use of the HSL color space model to facilitate searching for a digital image in a digital image data warehouse. Tests of the implemented application show that searching is faster when using colors specified by the user. An obstacle that remains is that, when searching with the 15 available basic colors and a 33% color-dominance threshold, the sought image was sometimes not found. This is because the most dominant color in most images has a dominance value below 33%.

Keywords: RGB, HSL, image searching
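The dominant-color idea can be sketched with Python's standard library alone. The 12 coarse hue buckets (standing in for the paper's 15 named basic colors), the flat pixel-list input, and the toy image are all illustrative assumptions; the 33% threshold behaviour matches the limitation the abstract reports:

```python
import colorsys

def dominant_hsl_bucket(pixels, threshold=0.33):
    """Convert RGB pixels (0-255) to HSL hue buckets and report the most
    frequent bucket and its share; returns None when no bucket reaches
    the dominance threshold."""
    buckets = {}
    for r, g, b in pixels:
        h, l, s = colorsys.rgb_to_hls(r / 255, g / 255, b / 255)
        bucket = int(h * 12)  # 12 coarse hue bins (illustrative)
        buckets[bucket] = buckets.get(bucket, 0) + 1
    bucket, count = max(buckets.items(), key=lambda kv: kv[1])
    share = count / len(pixels)
    return (bucket, share) if share >= threshold else None

# A mostly-red toy image: the red bucket dominates with a 75% share
pixels = [(255, 0, 0)] * 6 + [(0, 255, 0), (0, 0, 255)]
print(dominant_hsl_bucket(pixels))  # (0, 0.75)
```

When every hue's share falls below the threshold the function returns None, which mirrors the "image not found" case described in the abstract.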


Author(s):  
Lemcia Hutajulu ◽  
Hery Sunandar ◽  
Imam Saputra

Cryptography is used to protect the contents of information from anyone except those who have the authority or the secret key to decode it. Along with the development of technology and computers, computer crime has also increased, especially image manipulation. There are many ways in which people manipulate images to the detriment of others. The originality of a digital image is its authenticity in terms of colors, shapes, objects, and information, without the slightest change by another party. Nowadays many digital images circulating on the Internet have been manipulated, and images have even been used for material fraud in competitions, so a method is needed that can detect whether an image is genuine or fake. In this study, the authors used the MD4 and SHA-384 methods to detect the originality of digital images; with these methods, an image of doubtful authenticity can be determined to be authentic or fake.

Keywords: Originality, Image, MD4, SHA-384
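The hash-based originality check can be sketched with Python's `hashlib`. Only the SHA-384 half is shown: MD4 is a legacy algorithm, and `hashlib.new("md4")` works only where the underlying OpenSSL build still ships it. The byte strings are stand-ins for real image files:

```python
import hashlib

def image_fingerprint(data: bytes) -> str:
    """SHA-384 digest of a file's bytes; any pixel-level edit changes it."""
    return hashlib.sha384(data).hexdigest()

# Hypothetical one-byte edit: the fingerprints no longer match, so the
# second file cannot be the original.
original = b"fake-image-bytes"
tampered = b"fake-image-bytez"
print(image_fingerprint(original) == image_fingerprint(tampered))  # False
```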


2021 ◽  
Vol 10 (2) ◽  
pp. 85
Author(s):  
Juan Reinoso-Gordo ◽  
Antonio Gámiz-Gordo ◽  
Pedro Barrero-Ortega

Suitable graphic documentation is essential to ascertain and conserve architectural heritage. For the first time, accurate digital images are provided of a 16th-century wooden ceiling, composed of geometric interlacing patterns, in the Pinelo Palace in Seville. Today, this ceiling suffers from significant deformation. Although there are many publications on the digital documentation of architectural heritage, no graphic studies on this type of deformed ceilings have been presented. This study starts by providing data on the palace history concerning the design of geometric interlacing patterns in carpentry according to the 1633 book by López de Arenas, and on the ceiling consolidation in the 20th century. Images were then obtained using two complementary procedures: from a 3D laser scanner, which offers metric data on deformations; and from photogrammetry, which facilitates the visualisation of details. In this way, this type of heritage is documented in an innovative graphic approach, which is essential for its conservation and/or restoration with scientific foundations and also to disseminate a reliable digital image of the most beautiful ceiling of this Renaissance palace in southern Europe.


Data ◽  
2021 ◽  
Vol 6 (8) ◽  
pp. 87
Author(s):  
Sara Ferreira ◽  
Mário Antunes ◽  
Manuel E. Correia

Deepfake and manipulated digital photos and videos are being increasingly used in a myriad of cybercrimes. Ransomware, the dissemination of fake news, and digital kidnapping-related crimes are the most recurrent, with tampered multimedia content as the primary disseminating vehicle. Digital forensic analysis tools are widely used in criminal investigations to automate the identification of digital evidence in seized electronic equipment. The number of files to be processed and the complexity of the crimes under analysis have highlighted the need for efficient digital forensics techniques grounded in state-of-the-art technologies. Machine Learning (ML) researchers have been challenged to apply techniques and methods to improve the automatic detection of manipulated multimedia content. However, the implementation of such methods has not yet been massively incorporated into digital forensic tools, mostly due to the lack of realistic and well-structured datasets of photos and videos. The diversity and richness of the datasets are crucial to benchmark ML models and to evaluate their suitability for real-world digital forensics applications. An example is the development of third-party modules for the widely used Autopsy digital forensic application. This paper presents a dataset obtained by extracting a set of simple features from genuine and manipulated photos and videos that are part of state-of-the-art existing datasets. The resulting dataset is balanced, and each entry comprises a label and a vector of numeric values corresponding to the features extracted through a Discrete Fourier Transform (DFT). The dataset is available in a GitHub repository, and the total number of photos and video frames is 40,588 and 12,400, respectively.
The dataset was validated and benchmarked with deep learning Convolutional Neural Network (CNN) and Support Vector Machine (SVM) methods; however, a plethora of other existing methods can be applied. Overall, the results show a better F1-score for CNN than for SVM, for both photo and video processing. CNN achieved an F1-score of 0.9968 and 0.8415 for photos and videos, respectively. Regarding SVM, the results obtained with 5-fold cross-validation are 0.9953 and 0.7955, respectively, for photos and videos. A set of methods written in Python is available to researchers, namely to preprocess the original photo and video files, extract their features, and build the training and testing sets. Additional methods are also available to convert the original PKL files into CSV and TXT, which gives ML researchers more flexibility to use the dataset with existing ML frameworks and tools.
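A DFT-based feature vector of the general kind the abstract describes can be sketched with NumPy. The radially averaged magnitude spectrum below is an illustrative assumption, not the dataset's actual feature definition, and the 8-bin count and random input are arbitrary:

```python
import numpy as np

def dft_features(img: np.ndarray, n: int = 8) -> np.ndarray:
    """Radially averaged magnitude spectrum of a grayscale image:
    a fixed-length numeric vector derived from the 2-D DFT."""
    mag = np.abs(np.fft.fftshift(np.fft.fft2(img)))
    h, w = img.shape
    yy, xx = np.ogrid[:h, :w]
    r = np.hypot(yy - h // 2, xx - w // 2)   # distance from the DC term
    edges = np.linspace(0, r.max() + 1e-9, n + 1)
    return np.array([mag[(r >= lo) & (r < hi)].mean()
                     for lo, hi in zip(edges[:-1], edges[1:])])

rng = np.random.default_rng(1)
feats = dft_features(rng.random((32, 32)))
print(feats.shape)  # (8,)
```

Each image or video frame would contribute one such vector plus a genuine/manipulated label, which is the entry layout the abstract describes.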


2020 ◽  
Vol 30 (1) ◽  
pp. 240-257
Author(s):  
Akula Suneetha ◽  
E. Srinivasa Reddy

Abstract In the data collection phase, digital images are captured using sensors that are often contaminated by noise (an undesired random signal). In digital image processing, enhancing image quality and reducing noise is a central task. Image denoising should effectively preserve image edges while smoothing flat regions. Several adaptive filters (median filter, Gaussian filter, fuzzy filter, etc.) have been used to improve the smoothness of digital images, but these filters fail to preserve image edges while removing noise. In this paper, a modified fuzzy set filter is proposed to eliminate noise and restore the digital image. Usually, in a fuzzy set filter, sixteen fuzzy rules are generated to find the noisy pixels in the digital image. In the modified fuzzy set filter, a set of twenty-four fuzzy rules is generated, using four additional pixel locations, for determining the noisy pixels. The additional eight fuzzy rules ease the process of deciding whether an image pixel requires averaging or not. In this scenario, the input digital images were collected from an underwater fish-photography dataset. The efficiency of the modified fuzzy set filter was evaluated at varying degrees of Gaussian noise (0.01, 0.03, and 0.1 noise levels). For performance evaluation, Structural Similarity (SSIM), Mean Structural Similarity (MSSIM), Mean Square Error (MSE), Normalized Mean Square Error (NMSE), Universal Image Quality Index (UIQI), Peak Signal-to-Noise Ratio (PSNR), and Visual Information Fidelity (VIF) were used. The experimental results showed that the modified fuzzy set filter improved the PSNR value by up to 2-3 dB, MSSIM by up to 0.12-0.03, and NMSE by up to 0.38-0.1 compared with traditional filtering techniques.
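The fuzzy rule set itself is not reproduced here; as a minimal sketch, the following shows the PSNR metric from the evaluation list and the classical 3x3 median baseline the proposed filter is compared against. The flat toy image with a single salt pixel is an illustrative assumption:

```python
import numpy as np

def psnr(ref: np.ndarray, test: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB between a reference image and a
    noisy/denoised one (one of the metrics listed in the abstract)."""
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(peak ** 2 / mse)

def median_filter3(img: np.ndarray) -> np.ndarray:
    """3x3 median filter with edge padding: the classical adaptive
    baseline that the modified fuzzy set filter is compared against."""
    padded = np.pad(img, 1, mode="edge")
    stack = [padded[i:i + img.shape[0], j:j + img.shape[1]]
             for i in range(3) for j in range(3)]
    return np.median(np.stack(stack), axis=0)

# Salt noise on a flat image: the median filter removes the isolated outlier
img = np.full((8, 8), 100.0)
noisy = img.copy()
noisy[4, 4] = 255.0
print(psnr(img, median_filter3(noisy)))  # inf (the outlier is removed)
```

On edges the median filter blurs detail, which is exactly the weakness the abstract says the modified fuzzy set filter addresses.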

