Image Forensic Tool (IFT)

2021 ◽  
Vol 13 (6) ◽  
pp. 1-15
Author(s):  
Digambar Pawar ◽  
Mayank Gajpal

Images nowadays are often used as authenticated proof of cyber-crimes, and images that are not genuine can mislead a court of law. Fast, dynamically growing technology casts doubt on the integrity of images. Tampering mostly refers to adding or removing important features from an image without leaving any obvious trace. Digital signatures were once used to preserve integrity, but various tools are now available to tamper with digital signatures as well. Even state-of-the-art work in tamper detection places various restrictions on the type of inputs and the type of tampering detected. In this paper, the researchers propose a prototype model in the form of a tool that retrieves all the image files from given digital evidence and detects tampering in those images. Different tampering detection algorithms are used for different types of tampering. The proposed prototype detects whether tampering has been done and classifies the image files into groups based on the type of tampering.
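
The abstract does not name the individual detection algorithms, but one classic tampering family it alludes to, adding or duplicating features without obvious trace, can be illustrated with a minimal copy-move detector: hash small pixel blocks and flag any block content that appears at two locations. This is only a sketch of the idea (real detectors use larger blocks and filter trivially uniform regions); the function name and block size are illustrative assumptions.

```python
import hashlib

def find_duplicate_blocks(pixels, block=2):
    """Flag possible copy-move tampering by hashing small pixel blocks
    and reporting block contents that appear at more than one location.
    `pixels` is a 2D list of grayscale values."""
    h, w = len(pixels), len(pixels[0])
    seen = {}          # block hash -> first (row, col) seen
    duplicates = []    # (first_pos, second_pos) pairs
    for r in range(h - block + 1):
        for c in range(w - block + 1):
            patch = tuple(
                pixels[r + dr][c + dc]
                for dr in range(block) for dc in range(block)
            )
            key = hashlib.sha256(repr(patch).encode()).hexdigest()
            if key in seen:
                duplicates.append((seen[key], (r, c)))
            else:
                seen[key] = (r, c)
    return duplicates
```

A tool like the proposed prototype would run such per-type checks over every image recovered from the evidence and group images by which check fires.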

2021 ◽  
Vol 11 (12) ◽  
pp. 5656
Author(s):  
Yufan Zeng ◽  
Jiashan Tang

Graph neural networks (GNNs) have been very successful at fraud detection tasks. GNN-based detection algorithms learn node embeddings by aggregating neighboring information. Recently, the CAmouflage-REsistant GNN (CARE-GNN) was proposed; it achieves state-of-the-art results on fraud detection tasks by dealing with relation camouflages and feature camouflages. However, stacking multiple layers in the traditional hop-defined way leads to a rapid performance drop, and since a single-layer CARE-GNN cannot extract enough information to fix potential mistakes, performance relies heavily on that one layer. To avoid single-layer learning, in this paper we consider a multi-layer architecture that forms a complementary relationship with a residual structure. We propose an improved algorithm named Residual Layered CARE-GNN (RLC-GNN), which learns layer by layer progressively and corrects mistakes continuously. We choose three metrics—recall, AUC, and F1-score—to evaluate the proposed algorithm. Numerical experiments show improvements of up to 5.66%, 7.72%, and 9.09% in recall, AUC, and F1-score, respectively, on the Yelp dataset, and of up to 3.66%, 4.27%, and 3.25% in the same metrics on the Amazon dataset.
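
The abstract does not give RLC-GNN's layer internals, but the residual layer-by-layer idea can be shown with a toy numeric sketch: each "layer" adds a correction to the previous layer's output rather than replacing it, so later layers only have to fix the mistakes that remain. The correction functions below are illustrative stand-ins, not the paper's GNN layers.

```python
def residual_layers(x, corrections):
    """Apply 'layers' sequentially; each layer adds a residual
    correction to the previous output (identity + correction), so the
    stack refines progressively instead of relearning from scratch."""
    out = x
    history = [out]
    for f in corrections:
        out = out + f(out)   # residual connection
        history.append(out)
    return out, history
```

With a target of 10 and each toy layer correcting half the remaining error, the output approaches the target monotonically, which is the complementary behaviour the residual structure is meant to provide.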


Sensors ◽  
2021 ◽  
Vol 21 (5) ◽  
pp. 1820
Author(s):  
Xiaotao Shao ◽  
Qing Wang ◽  
Wei Yang ◽  
Yun Chen ◽  
Yi Xie ◽  
...  

Existing pedestrian detection algorithms cannot effectively extract the features of heavily occluded targets, which results in lower detection accuracy. To handle heavy occlusion in crowds, we propose a multi-scale feature pyramid network based on ResNet (MFPN) that enhances the features of occluded targets and improves detection accuracy. MFPN comprises two modules: a double feature pyramid network (FPN) integrated with ResNet (DFR) and a repulsion loss of minimum (RLM). The double FPN improves the architecture to further enhance the semantic information and contours of occluded pedestrians, providing a new way to extract features of occluded targets; the features extracted by our network are more separated and clearer, especially for heavily occluded pedestrians. Repulsion loss is introduced to improve the loss function, keeping predicted boxes away from the ground truths of unrelated targets. In experiments on the public CrowdHuman dataset we obtain 90.96% AP, the best reported performance and a 5.16% AP gain over the FPN-ResNet50 baseline. Compared with state-of-the-art works, our method boosts the performance of the pedestrian detection system.
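
The abstract does not spell out the RLM term, but the standard repulsion-loss idea it builds on penalizes a predicted box by its overlap with ground truths other than its own target, usually measured as intersection over the ground-truth area (IoG). The sketch below, with hypothetical function names, shows that core computation; the paper's exact "loss of minimum" weighting is an assumption left out here.

```python
def iog(pred, gt):
    """Intersection over ground-truth area between (x1, y1, x2, y2) boxes."""
    ix = max(0, min(pred[2], gt[2]) - max(pred[0], gt[0]))
    iy = max(0, min(pred[3], gt[3]) - max(pred[1], gt[1]))
    gt_area = (gt[2] - gt[0]) * (gt[3] - gt[1])
    return ix * iy / gt_area if gt_area else 0.0

def repulsion_term(pred, target_gt, all_gts):
    """Penalty pushing a predicted box away from unrelated ground
    truths: the larger its overlap with a non-target GT, the larger
    the loss contribution."""
    others = [g for g in all_gts if g != target_gt]
    return max((iog(pred, g) for g in others), default=0.0)
```

In a crowd, a prediction drifting from its own pedestrian toward a neighbour raises this term, which is what keeps boxes on heavily occluded targets separated.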


2021 ◽  
Vol 39 (1B) ◽  
pp. 101-116
Author(s):  
Nada N. Kamal ◽  
Enas Tariq

Tilt correction is an essential step in a license plate recognition (LPR) system. The main goal of this article is to review the methods presented in the literature for correcting the different types of tilt that appear in digital images of license plates (LPs). This theoretical survey gives researchers an overview of the available tilt detection and correction algorithms, simplifying the choice of which rotation detection and correction algorithms to implement when designing an LPR system, and whether to combine two or more existing algorithms or create a new, more efficient one. Rather than reciting the models described in the literature as a narrative, this review shows how the tilt correction stage divides into its constituent steps: locating the plate corners, finding the tilt angle of the plate, and then correcting its horizontal, vertical, and sheared inclination. For each step, the review clarifies how the state-of-the-art literature handles it individually. Overall, line fitting, the Hough transform, and the Radon transform are the most commonly used methods for correcting the tilt of an LP.
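
Of the three common methods named, line fitting is the simplest to show concretely: fit a least-squares line through feature points (for instance, character baseline points on the plate), take its angle as the tilt angle, and rotate by the negative of that angle to correct. This is a generic sketch of that step, not any one surveyed paper's method.

```python
import math

def tilt_angle(points):
    """Estimate horizontal tilt from (x, y) feature points by
    least-squares line fitting; returns the fitted line's angle in
    degrees."""
    n = len(points)
    sx = sum(p[0] for p in points)
    sy = sum(p[1] for p in points)
    sxx = sum(p[0] * p[0] for p in points)
    sxy = sum(p[0] * p[1] for p in points)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return math.degrees(math.atan(slope))

def rotate(point, angle_deg):
    """Rotate a point about the origin; rotating every pixel by
    -tilt_angle(...) undoes the detected horizontal tilt."""
    a = math.radians(angle_deg)
    x, y = point
    return (x * math.cos(a) - y * math.sin(a),
            x * math.sin(a) + y * math.cos(a))
```

Hough and Radon transform variants replace the fitting step with a vote over candidate line angles, which is more robust to outlier points at the cost of more computation.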


2021 ◽  
Vol 11 (23) ◽  
pp. 11241
Author(s):  
Ling Li ◽  
Fei Xue ◽  
Dong Liang ◽  
Xiaofei Chen

Concealed object detection in terahertz imaging is an urgent need for public security and counter-terrorism. So far, there has been no public terahertz imaging dataset for evaluating object detection algorithms. This paper provides a public dataset for evaluating multi-object detection algorithms in active terahertz imaging. Due to high sample similarity and poor imaging quality, object detection on this dataset is much more difficult than on the public object detection datasets commonly used in computer vision. Since the traditional hard example mining approach is designed for two-stage detectors and cannot be directly applied to one-stage detectors, this paper designs an image-based Hard Example Mining (HEM) scheme based on RetinaNet. Several state-of-the-art detectors, including YOLOv3, YOLOv4, FRCN-OHEM, and RetinaNet, are evaluated on this dataset. Experimental results show that RetinaNet achieves the best mAP and that HEM further enhances the model's performance. The parameters affecting the detection metrics of individual images are summarized and analyzed in the experiments.
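
The abstract does not detail the image-based HEM scheme, but the general shape of image-level hard example mining is simple: rank whole training images by their most recent loss and oversample the hardest fraction in the next epoch. The sketch below is a hedged illustration of that pattern, with an assumed loss dictionary and selection fraction.

```python
def mine_hard_examples(image_losses, fraction=0.25):
    """Image-level hard example mining: rank training images by loss
    and return the hardest fraction for extra sampling next epoch.
    `image_losses` maps image name -> last recorded loss."""
    ranked = sorted(image_losses.items(), key=lambda kv: kv[1], reverse=True)
    k = max(1, int(len(ranked) * fraction))
    return [name for name, _ in ranked[:k]]
```

Operating on whole images rather than region proposals is what makes such a scheme compatible with a one-stage detector like RetinaNet, which has no proposal stage for traditional OHEM to filter.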


Author(s):  
Chengbo Ai ◽  
Shi Qiu ◽  
Guiyang Xu

During the past two decades, subway systems have become one of the most dominant infrastructural developments in China, advancing at an unprecedented pace and scale: more than 60 metro lines in 25 cities have been completed, transporting more than 70 million passengers daily. Operating these systems safely and efficiently is a continuously pressing demand from both the management companies and the public. Many automated or semi-automated methods for extracting critical components of rail track systems, e.g. rails, fasteners, and sleepers, have significantly improved the productivity of routine inspection. However, the unique challenges posed by subway systems have kept these methods from successful deployment: illumination in the underground environment is extremely low, and the additional artificial lighting often produces extremely uneven illumination. In this study, a generalized local illumination adaptation model using an anisotropic heat equation is proposed to dynamically adjust rail track images acquired under extremely low and uneven illumination. An integration flow is then proposed to seamlessly incorporate the model into state-of-the-art automated fastener detection algorithms. The results show that the proposed local illumination adaptation model significantly improves the performance of the tested state-of-the-art fastener detection algorithms on images collected in environments with extremely low and uneven illumination, such as subway systems.
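
The study's exact heat-equation formulation is not given in the abstract; the classic anisotropic diffusion scheme of Perona and Malik conveys the idea, so the sketch below should be read as that standard scheme, not the authors' model. Intensity diffuses like heat across smooth regions, evening out uneven illumination, while the conductance g(|grad|) = exp(-(|grad|/kappa)^2) shrinks near strong edges so structures such as fastener contours are preserved.

```python
import math

def anisotropic_diffusion(img, iterations=10, kappa=30.0, lam=0.2):
    """Perona-Malik diffusion on a 2D list of intensities: each cell
    moves toward its 4-neighbours, weighted by an edge-stopping
    conductance, so flat regions smooth while edges survive."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for _ in range(iterations):
        nxt = [row[:] for row in out]
        for r in range(h):
            for c in range(w):
                total = 0.0
                for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    rr, cc = r + dr, c + dc
                    if 0 <= rr < h and 0 <= cc < w:
                        grad = out[rr][cc] - out[r][c]
                        g = math.exp(-(grad / kappa) ** 2)  # edge-stopping
                        total += g * grad
                nxt[r][c] = out[r][c] + lam * total
        out = nxt
    return out
```

With a large kappa the scheme behaves like the plain heat equation and smooths everything; a small kappa locks sharp transitions in place, which is the locality the proposed adaptation model relies on.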


2014 ◽  
Vol 6 (3) ◽  
pp. 30-46
Author(s):  
Jia-Hong Li ◽  
Tzung-Her Chen ◽  
Wei-Bin Lee

Image authentication must be able to verify both the origin and the integrity of digital images, and prior research has made efforts toward this goal. In this paper, we reveal a new type of malicious alteration, which we call the "Tattooing Attack". It can successfully alter a protected image if a collision can be found between the authentication bits of the altered image and those of the original watermarked image. To make our point, we choose Chang et al.'s watermarking-based image authentication scheme for tamper detection as an example. We analyze why the attack succeeds and then delineate the conditions that make it possible. Since the result applies generally to other schemes, we evaluate such schemes to examine the soundness of these conditions. Finally, a solution is provided for all tamper detection schemes that suffer from the Tattooing Attack.
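
The attack hinges on finding a collision in the authentication bits, and the feasibility of that depends on how few bits there are. The toy below (not Chang et al.'s actual scheme; the bit derivation here is a hypothetical stand-in) derives n authentication bits from a hash and brute-forces a different input with the same bits, showing why short authentication codes invite this kind of attack.

```python
import hashlib
import itertools

def auth_bits(data, n_bits=8):
    """Toy authentication code: the top n bits of a SHA-256 digest.
    Real schemes derive such bits from image features; keeping them
    short is what makes collisions cheap to find."""
    digest = hashlib.sha256(data).digest()
    return int.from_bytes(digest, "big") >> (len(digest) * 8 - n_bits)

def find_collision(original, n_bits=8):
    """Brute-force a different message with identical authentication
    bits, mimicking the collision a Tattooing Attack needs."""
    target = auth_bits(original, n_bits)
    for i in itertools.count():
        candidate = b"forged-%d" % i
        if candidate != original and auth_bits(candidate, n_bits) == target:
            return candidate
```

With 8 authentication bits a collision is expected after roughly 256 attempts; each added bit doubles the attacker's work, which is one direction a countermeasure can take.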


2017 ◽  
Vol 9 (4) ◽  
pp. 40-47
Author(s):  
Zhi Jun Liu

In the early stages of a digital cyber-crime investigation, digital evidence is inadequate, decentralized, and fragmented. This paper presents a cyber-crime investigation model based on case characteristics, intended to help determine the investigation's orientation and reduce the investigation area. First, the collected digital evidence is purified and filtered, and event sets are classified and acquired. Second, a method of imperfect induction is applied to analyze the event sets and construct one or more premises; combined with the case characteristics extracted from the legal requirements, an inference and its reliability are given. Finally, through a case analysis of a network pyramid-sales scheme, initial practice shows that the model is feasible and has reference value for cyber-crime investigation.
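
The paper's concrete rules are not given in the abstract, so the following is only a loose sketch of the two steps it names, with hypothetical helpers: grouping filtered evidence items into event sets by case-characteristic keywords, and scoring a premise from imperfect induction by the fraction of observed events that support it.

```python
def classify_events(evidence, keywords):
    """Group filtered evidence items into event sets keyed by which
    case-characteristic keyword each item mentions."""
    events = {k: [] for k in keywords}
    for item in evidence:
        for k in keywords:
            if k in item:
                events[k].append(item)
    return events

def premise_reliability(event_set, supporting):
    """Imperfect induction: estimate a premise's reliability as the
    fraction of observed events that support it."""
    return len([e for e in event_set if e in supporting]) / len(event_set)
```

A reliability score attached to each premise lets investigators rank which orientation to pursue first, which is the "reduce the investigation area" effect the model aims for.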


Author(s):  
Gajanan K. Birajdar ◽  
Vijay H. Mankar

High-resolution digital cameras and state-of-the-art image editing tools have given rise to large numbers of manipulated images that leave no trace of having been manipulated. Passive, or blind, forgery detection algorithms are used to determine an image's authenticity. In this paper, an algorithm is proposed that blindly detects global rescaling operations using statistical models computed from a quadrature mirror filter (QMF) decomposition. A fuzzy entropy measure is employed to choose the relevant features and remove unimportant ones, while an artificial neural network classifier is used for forgery detection. Experimental results are presented on grayscale and [Formula: see text]-component images of the UCID database to prove the validity of the algorithm under different interpolation schemes. Results are provided for the detection of rescaled images with JPEG compression, arbitrary cropping, and white Gaussian noise addition. Further, results on the USC-SIPI database demonstrate the robustness of the algorithm to the choice of database.
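
The abstract does not state which fuzzy entropy definition the paper uses; the common form sums -[u ln u + (1-u) ln(1-u)] over membership degrees u in [0, 1], and features whose memberships are least ambiguous (lowest entropy) are kept. The sketch below illustrates selection under that assumed definition, with illustrative feature names.

```python
import math

def fuzzy_entropy(memberships):
    """Fuzzy entropy of one feature: sum over samples of
    -[u*ln(u) + (1-u)*ln(1-u)], where u is the sample's membership
    degree in [0, 1]; 0 and 1 contribute nothing (fully crisp)."""
    total = 0.0
    for u in memberships:
        for p in (u, 1.0 - u):
            if 0.0 < p < 1.0:
                total -= p * math.log(p)
    return total

def select_features(feature_memberships, keep):
    """Keep the `keep` features with the lowest fuzzy entropy, i.e.
    the least ambiguous (most relevant) ones; input is a list of
    (name, membership_values) pairs."""
    ranked = sorted(feature_memberships, key=lambda kv: fuzzy_entropy(kv[1]))
    return [name for name, _ in ranked[:keep]]
```

In the proposed pipeline, the surviving QMF-derived features would then feed the neural network classifier.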


2020 ◽  
Vol 2020 ◽  
pp. 1-13
Author(s):  
Jingzhi Lin ◽  
Zhenxing Qian ◽  
Zichi Wang ◽  
Xinpeng Zhang ◽  
Guorui Feng

This paper proposes a new steganography method for hiding data in dynamic GIF (Graphics Interchange Format) images. Within the STC framework, we propose a new cost-assignment algorithm that exploits the characteristics of dynamic GIF images, including the image palette and the correlation between frames, as well as a payload allocation algorithm for the different frames. First, we reorder the palette of the GIF image to reduce the modifications to pixel values caused by modifying index values. Since different modifications to index values have different impacts on pixel values, we assign small embedding costs to the elements with less impact on pixel values. Small embedding costs are also assigned to elements in regions where the interframe changes are large. Finally, we calculate an appropriate payload for each frame using the embedding probability obtained from the proposed distortion function. Experimental results show that the proposed method has better security performance than state-of-the-art works.
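
The palette-reordering and cost-assignment steps can be sketched concretely; the greedy nearest-neighbour ordering below is an assumption standing in for the paper's unstated reordering rule. After reordering, neighbouring indices map to similar colours, so flipping an index by one changes the pixel colour little, and the embedding cost of an element can be taken as the colour distance to its closer palette neighbour.

```python
def _dist(a, b):
    """Squared RGB distance between two palette colours."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def reorder_palette(palette):
    """Greedy reordering: start from the first colour and repeatedly
    append the nearest remaining one, so +/-1 index changes disturb
    pixel values as little as possible."""
    remaining = list(palette[1:])
    ordered = [palette[0]]
    while remaining:
        nearest = min(remaining, key=lambda c: _dist(ordered[-1], c))
        remaining.remove(nearest)
        ordered.append(nearest)
    return ordered

def embedding_cost(ordered_palette, index):
    """Cost of flipping an index by +/-1: the colour distance to the
    closer neighbour in the reordered palette."""
    here = ordered_palette[index]
    costs = []
    if index > 0:
        costs.append(_dist(here, ordered_palette[index - 1]))
    if index < len(ordered_palette) - 1:
        costs.append(_dist(here, ordered_palette[index + 1]))
    return min(costs)
```

In the full method these per-element costs, further lowered in high-change interframe regions, feed the STC encoder, which concentrates embedding where distortion is cheapest.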


Electronics ◽  
2020 ◽  
Vol 9 (6) ◽  
pp. 1017 ◽  
Author(s):  
Abdulmohsen Almalawi ◽  
Adil Fahad ◽  
Zahir Tari ◽  
Asif Irshad Khan ◽  
Nouf Alzahrani ◽  
...  

Supervisory control and data acquisition (SCADA) systems monitor and supervise our daily infrastructure systems and industrial processes, so the security of the information systems of critical infrastructures cannot be overstated. The effectiveness of unsupervised anomaly detection approaches is sensitive to parameter choices, especially when the boundaries between normal and abnormal behaviours are not clearly distinguishable. Current approaches to anomaly detection for SCADA therefore rest on the assumptions by which anomalies are defined, and these assumptions are controlled by a parameter choice. This paper proposes an add-on anomaly threshold technique that identifies the observations whose anomaly scores are extreme and deviate significantly from the others; such observations are assumed to be "abnormal". Observations whose anomaly scores are significantly distant from the "abnormal" ones are assumed to be "normal". Ensemble-based supervised learning is then proposed to find a global and efficient anomaly threshold using the information from both the "normal" and "abnormal" behaviours. The proposed technique can be used with any unsupervised anomaly detection approach to mitigate parameter sensitivity and improve the performance of SCADA unsupervised anomaly detection. Experimental results confirm that the proposed technique achieves a significant improvement over the state of the art for two unsupervised anomaly detection algorithms.
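
The abstract does not define "extreme" or "significantly distant", so the sketch below substitutes a standard robust rule, median plus k median absolute deviations, as a stand-in for the paper's add-on thresholding: scores far from the bulk are labelled "abnormal", the rest "normal", and those labels could then seed the ensemble-based supervised step.

```python
import statistics

def label_extremes(scores, k=3.0):
    """Label anomaly scores by robust deviation from the bulk: a score
    more than k median-absolute-deviations from the median is
    "abnormal", otherwise "normal". This is an illustrative proxy for
    the paper's add-on anomaly threshold."""
    med = statistics.median(scores)
    mad = statistics.median(abs(s - med) for s in scores) or 1e-9
    return ["abnormal" if abs(s - med) / mad > k else "normal"
            for s in scores]
```

Because the rule operates only on the scores an unsupervised detector emits, it can wrap any such detector without touching its parameters, which is the add-on property the paper emphasizes.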

