Forged Copy-Move Recognition Using Convolutional Neural Network

2021
Vol 24 (1)
pp. 45-56
Author(s):
Ayat Fadhel Homady Sewan
Mohammed Sahib Mahdi Altaei

Owing to increasingly powerful image editing tools and the falling cost of digital cameras and smartphones, digital images are subject to many forms of manipulation. Image credibility therefore becomes questionable, particularly when images carry evidential weight, such as news reports or insurance claims in court. Image forensic methods test the integrity of images by applying the various technical approaches set out in the literature. The present work addresses one important module of this research: recognizing the forged part in copy-move forgery images. Two datasets commonly adopted in this field, MICC-F2000 and CoMoFoD, are used. The module is concerned with deciding which of the two already-detected regions is the source portion and which is the target; once the original region is recognized, the other is labelled as the forged (tampered) part. The proposed module uses the BusterNet arrangement of three neural networks, each based on a Convolutional Neural Network (CNN) that extracts the most important image features during training. The first and second networks work in parallel to detect and identify the tampered areas and output them as two masks, while the last network, a classifier, takes these two masks and decides which of the two detected regions is the source. The achieved recognition results reached an F-score of about 98.98%, even when the forged area is rotated, scaled, or both. The recognition rate also remained at 98% on images not used in the training phase, indicating that the proposed module is reliable.
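As a rough, hypothetical illustration (not the authors' BusterNet architecture), the sketch below shows the role of the third network: a small CNN that takes crops of the two detected regions and outputs which one is the source. The crop size, layer widths, and class convention are all assumptions.

```python
import torch
import torch.nn as nn

class SourceTargetClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        # Two RGB crops are stacked along the channel axis (3 + 3 = 6 channels).
        self.features = nn.Sequential(
            nn.Conv2d(6, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # Single sigmoid output: probability that the *first* crop is the source region.
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(64, 1), nn.Sigmoid())

    def forward(self, crop_a, crop_b):
        x = torch.cat([crop_a, crop_b], dim=1)
        return self.head(self.features(x))

# Usage: two 64x64 crops cut out around the masks produced by the detection branches.
model = SourceTargetClassifier()
p_first_is_source = model(torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64))
```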

2020
Vol 3 (1)
pp. 491-500
Author(s):
Matin Ghaziani
Erhan İlhan Konukseven
Ahmet Buğra Koku

Road detection from satellite images can be treated as a classification process in which pixels are divided into road and non-road classes. In this research, an automatic road extraction method is proposed that uses an artificial neural network (ANN), automatic information extraction from satellite images, and self-adjustment of the hidden layer. Parameters of non-urban road networks are also extracted from satellite images using a histogram-based binary image segmentation technique. The segmentation method relies on a global threshold obtained from a statistical analysis of a number of sample satellite images and their ground truths. The thresholding method is based on two observations: first, points corresponding to non-asphalt roads are brighter than other areas in non-urban images; second, in an aerial image the area covered by roads is only a small fraction of the total pixels, and road pixels are generally concentrated at the very bright end of the greyscale histogram. In this method, possible road pixels are first selected by the proposed segmentation method. Then several features, including colour, gradient, and entropy, are computed for each pixel of the source image. Finally, these features are used as the input of the artificial neural network. The results show that the accuracy of the proposed road extraction method is around 80%.
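A minimal sketch of the two-stage idea described above, under assumed parameter values: a global threshold keeps only the brightest pixels as road candidates, and simple per-pixel features (colour and gradient; the entropy cue is omitted for brevity) are fed to an ANN. The 5% bright fraction, the tile, and the labels are placeholders, not the paper's data.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def candidate_road_mask(gray, bright_fraction=0.05):
    """Keep only the brightest pixels: roads occupy a small, bright fraction
    of a non-urban scene, so the threshold sits near the top of the histogram."""
    threshold = np.quantile(gray, 1.0 - bright_fraction)
    return gray >= threshold

def pixel_features(rgb):
    """Per-pixel feature vector: the three colour channels plus a gradient magnitude."""
    gray = rgb.mean(axis=2)
    gy, gx = np.gradient(gray)
    grad = np.sqrt(gx ** 2 + gy ** 2)
    return np.dstack([rgb, grad]).reshape(-1, 4)

# Placeholder tile and labels; a real ground-truth road mask would be used instead.
rgb = np.random.rand(64, 64, 3)
labels = candidate_road_mask(rgb.mean(axis=2)).reshape(-1).astype(int)
ann = MLPClassifier(hidden_layer_sizes=(16,), max_iter=300)
ann.fit(pixel_features(rgb), labels)
road_map = ann.predict(pixel_features(rgb)).reshape(64, 64)
```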


Author(s):  
Shashidhar T. M.
K. B. Ramesh

Digital image forensics is becoming increasingly popular owing to the growing use of images as a medium of information propagation. However, with the wide availability of image editing tools and software, threats to image content security are also increasing. A review of the existing approaches for identifying traces or artifacts shows that there is considerable scope for optimization to further enhance the processing. Therefore, this paper presents a novel framework that performs cost-effective optimization of a digital forensic technique, with the aim of accurately localizing the tampered area while also offering the capability to mitigate attacks of various forms. The study outcome shows that the proposed system performs significantly better than existing systems, demonstrating that a minor novelty in a design attribute can yield a considerable improvement in both accuracy and resilience to all potential image threats.


Author(s):  
Rosalia Arum Kumalasanti

Humans are social beings who depend on social interaction, and communication is one of the bridges that connect social relations between people. Communication can be delivered in two ways, verbal or nonverbal. Handwriting is an example of nonverbal communication using paper and writing utensils. Each individual's writing has its own uniqueness, so handwriting often becomes a characteristic of its author: people who recognize the writing will easily guess whose handwriting it is. However, handwriting is also abused by irresponsible people in the form of handwriting falsification, which often occurs in the workplace or even in the field of education. This is one of the driving factors for creating a reliable system for tracing a person's handwriting back to its owner. This study discusses the identification of a person's handwriting based on ownership. The output of this research is the ID of the author together with an accuracy figure, expressed as a percentage, describing the reliability of the system. The results of this study are expected to benefit all parties by helping to minimize plagiarism. The handwriting identification system consists of two main processes, namely a training phase and a testing phase. In the training phase, the handwritten image is subjected to several processes, namely thresholding and wavelet transformation, and is then trained using a backpropagation artificial neural network. The testing phase follows the same processing steps, but at the end of the process the image data stored during training is compared with a query image. The backpropagation ANN works optimally when it is trained with input data of a fixed size and with an appropriate learning rate, parameters, and number of nodes in the network. The proposed method is expected to work optimally and produce an accurate identification rate, helping to minimize handwriting falsification.
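A minimal sketch, with assumed parameter choices, of the training-phase pipeline described above: binarise the handwriting image, take one level of a 2-D wavelet transform, and feed the approximation coefficients to a backpropagation network. The threshold value, wavelet family, network size, and the placeholder samples are assumptions, not the paper's settings.

```python
import numpy as np
import pywt
from sklearn.neural_network import MLPClassifier

def handwriting_features(gray, threshold=128):
    """Global threshold, then one level of a 2-D Haar wavelet transform."""
    binary = (gray > threshold).astype(float)
    cA, (cH, cV, cD) = pywt.dwt2(binary, 'haar')
    return cA.ravel()  # approximation coefficients as the feature vector

# Placeholder data: five 64x64 samples for each of four writer IDs.
X = np.stack([handwriting_features(np.random.randint(0, 256, (64, 64))) for _ in range(20)])
y = np.repeat(np.arange(4), 5)
net = MLPClassifier(hidden_layer_sizes=(32,), learning_rate_init=0.01, max_iter=500)
net.fit(X, y)
predicted_writer_id = net.predict(X[:1])
```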


2016
Vol 713
pp. 10-13
Author(s):
A. de Luca
Zahra Sharif Khodaei
Francesco Caputo

The aim of this paper is to understand the effects of the damage criteria modelling on the training phase (performed by means of Finite Element simulations) of an artificial neural network (ANN) trained to locate impacts on a CFRP laminate. The developed FE models have also been used to investigate the intra-laminar damage mode, which, among the different modes, has the greatest effect on the residual strength of the panel.
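A minimal, hypothetical sketch of the general idea of training an ANN on FE-simulated data for impact localization: features derived from the simulated sensor responses of many impact cases are regressed onto the impact coordinates. The feature count, network size, and data below are placeholders, not the authors' model.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Placeholder training set: each row stands in for a feature vector extracted from the
# simulated sensor responses of one FE impact case; the target is the (x, y) impact position.
rng = np.random.default_rng(0)
sensor_features = rng.random((200, 8))   # 200 simulated impacts, 8 assumed sensor features
impact_xy = rng.random((200, 2))         # impact coordinates on the panel

locator = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=1000)
locator.fit(sensor_features, impact_xy)
predicted_xy = locator.predict(sensor_features[:1])
```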


IEEE Access
2019
Vol 7
pp. 83589-83599
Author(s):
Pingping Cao
Wenzhu Zhao
Sheng Liu
Li Shi
Hongwen Gao

2015
Vol 671
pp. 385-390
Author(s):
Sen Lin Yuan
Kai Lu
Yue Qi Zhong

In order to separate wool from cashmere efficiently, an identification method based on texture analysis is proposed in this paper. Microscopic images captured by a CCD digital camera were preprocessed to obtain texture images. Improved Tamura texture features were employed to analyse the final texture images and obtain the texture parameters. Using a large number of samples, a mathematical model was built with a neural network. Experimental results indicate that texture analysis is a feasible method for identifying cashmere and wool.
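A minimal sketch, with assumed patch sizes and placeholder labels, of the classification idea: compute texture descriptors from the preprocessed fibre images and train a neural network on them. Only the standard Tamura contrast feature is shown; coarseness and directionality would be added in the same way, and none of this reproduces the paper's improved features.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def tamura_contrast(gray):
    """Tamura contrast: standard deviation normalised by the fourth root of kurtosis."""
    g = gray.astype(float)
    var = g.var()
    kurtosis = ((g - g.mean()) ** 4).mean() / (var ** 2 + 1e-12)
    return np.sqrt(var) / (kurtosis ** 0.25 + 1e-12)

# Placeholder microscope patches: label 0 = wool, 1 = cashmere.
patches = [np.random.rand(64, 64) for _ in range(40)]
X = np.array([[tamura_contrast(p)] for p in patches])
y = np.array([0, 1] * 20)
model = MLPClassifier(hidden_layer_sizes=(8,), max_iter=500).fit(X, y)
fibre_type = model.predict(X[:1])
```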


2015
Vol 8 (3)
pp. 1055-1071
Author(s):
R. G. Sivira
H. Brogniez
C. Mallet
Y. Oussar

Abstract. A statistical method trained and optimized to retrieve seven-layer relative humidity (RH) profiles is presented and evaluated with measurements from radiosondes. The method makes use of the microwave payload of the Megha-Tropiques platform, namely the SAPHIR sounder and the MADRAS imager. The approach, based on a generalized additive model (GAM), embeds both the physical and statistical characteristics of the inverse problem in the training phase, and no explicit thermodynamic constraint, such as a temperature profile or an integrated water vapor content, is provided to the model at the retrieval stage. The model is built for cloud-free conditions in order to avoid cases of scattering of the microwave radiation in the 18.7–183.31 GHz range covered by the payload. Two instrumental configurations are tested: a SAPHIR-MADRAS scheme and a SAPHIR-only scheme, the latter to deal with the halt of MADRAS data acquisition in January 2013 for technical reasons. A comparison to machine learning algorithms (an artificial neural network and a support-vector machine) shows equivalent performance over a large realistic set, with low errors (biases < 2.2%RH) and scatters (correlations > 0.8) throughout the troposphere (150–900 hPa). A comparison to radiosonde measurements performed during the international field experiment CINDY/DYNAMO/AMIE (winter 2011–2012) confirms these results for the mid-tropospheric layers (correlations between 0.6 and 0.92), with an expected degradation of the quality of the estimates at the surface and top layers. Finally, a brief overview of the estimated large-scale RH field from Megha-Tropiques is presented and compared to ERA-Interim.
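A minimal sketch of the retrieval idea under stated assumptions: one additive model maps channel brightness temperatures to a single RH layer (one such model per layer), with no thermodynamic constraint as input. It uses the pygam library as a stand-in GAM implementation; the six-channel predictor set, the smoothing terms, and the synthetic data are assumptions, not the authors' configuration.

```python
import numpy as np
from pygam import LinearGAM, s

# Placeholder data: six brightness-temperature channels as predictors and one
# relative-humidity layer as the target (a separate GAM would be fitted per layer).
rng = np.random.default_rng(1)
brightness_temps = rng.normal(250.0, 10.0, size=(500, 6))
rh_layer = rng.uniform(5.0, 95.0, size=500)

# Additive model: one smooth term per channel, no explicit temperature or water-vapour input.
gam = LinearGAM(s(0) + s(1) + s(2) + s(3) + s(4) + s(5)).fit(brightness_temps, rh_layer)
rh_estimate = gam.predict(brightness_temps[:1])
```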


2020
Vol 9 (3)
pp. 988-995
Author(s):
Muayed S. AL-Huseiny
Noor Khudhair Abbas
Ahmed S. Sajit

Arrhythmia is a prime indicator of serious heart problems, and it is therefore essential that it be detected properly for early-stage treatment. This article presents an approach for diagnosing cardiac disorders via the recognition of 17 types of arrhythmia. The proposed approach builds a two-dimensional convolutional neural network (2D-CNN) trained on images of electrocardiograph (ECG) signals collected from the MIT-BIH database. The ECGs are first converted into images. This step serves two purposes: first, a CNN is best suited to classifying image data, which reduces preprocessing; second, most ECG recordings are still produced on thermal paper, which can simply be captured as an image. Next, the 2D-CNN is trained and validated. Test results show that the proposed method achieves a classification accuracy of 96.67% with an error of 0.004%. In addition to the superior accuracy compared with the previous literature, this approach offers reduced processing time and complexity outside the training phase, and, by working with images, it offers a high degree of versatility and can be integrated as a utility within other applications or wearables.
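A minimal sketch (not the authors' exact network) of the conversion-and-classification idea: rasterise a 1-D ECG segment into a 2-D image and pass it to a small 2-D CNN with 17 outputs, one per arrhythmia class. The image size, layer widths, and the synthetic segment are assumptions.

```python
import numpy as np
import torch
import torch.nn as nn

def ecg_to_image(signal, size=64):
    """Rasterise a 1-D ECG segment into a size x size image (one trace pixel per column)."""
    resampled = np.interp(np.linspace(0, len(signal) - 1, size),
                          np.arange(len(signal)), signal)
    rows = ((resampled - resampled.min()) / (np.ptp(resampled) + 1e-9) * (size - 1)).astype(int)
    img = np.zeros((size, size), dtype=np.float32)
    img[size - 1 - rows, np.arange(size)] = 1.0
    return img

# Small 2-D CNN with 17 outputs, one per arrhythmia class.
cnn = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(), nn.Linear(32, 17),
)
segment = np.sin(np.linspace(0, 20, 360))                    # placeholder ECG segment
image = torch.from_numpy(ecg_to_image(segment))[None, None]  # shape (1, 1, 64, 64)
class_logits = cnn(image)
```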


Satellite images are important for developing and protecting environmental resources and can be used for flood detection. In this work, before-flooding and after-flooding satellite images are segmented and their features extracted by integrating deep LRNN and CNN networks to achieve high accuracy. Learning with the LRNN and CNN must capture the features of flooded regions sufficiently well, since this directly influences the effectiveness of flood relief. The CNN and LRNN data consist of two sets, a training set and a testing set. The before-flooding and after-flooding satellite images are divided into data patches that are extracted and segmented during the training and testing phases. All patches are trained by the LRNN so that regions where changes occur, or any misdetected flooded regions, are extracted accurately and without delay. The proposed method achieves a flood region detection accuracy of 99%.
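A minimal sketch, under loose assumptions, of the patch-based change-detection idea: co-registered before/after images are cut into aligned patch pairs and a small network classifies each pair as flooded or not. A plain CNN stands in for the LRNN+CNN combination, and the patch size, images, and network are placeholders rather than the paper's design.

```python
import numpy as np
import torch
import torch.nn as nn

def extract_patch_pairs(before, after, patch=16):
    """Cut co-registered before/after images into aligned patch pairs."""
    pairs = []
    for r in range(0, before.shape[0] - patch + 1, patch):
        for c in range(0, before.shape[1] - patch + 1, patch):
            pairs.append(np.stack([before[r:r + patch, c:c + patch],
                                   after[r:r + patch, c:c + patch]]))
    return torch.tensor(np.stack(pairs), dtype=torch.float32)  # (N, 2, patch, patch)

# Small CNN over the stacked before/after channels: flooded vs. not flooded per patch.
classifier = nn.Sequential(
    nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(), nn.Linear(16, 2),
)
before = np.random.rand(64, 64)   # placeholder pre-flood image
after = np.random.rand(64, 64)    # placeholder post-flood image
patch_logits = classifier(extract_patch_pairs(before, after))
```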


2021
Vol 7 (2)
pp. 57-74
Author(s):
Lamyaa Gamal EL-Deen Taha
A. I. Ramzi
A. Syarawi
A. Bekheet

Until recently, the most accurate digital surface models were obtained from airborne LiDAR. With the development of a new generation of large-format digital photogrammetric aerial cameras, a fully digital photogrammetric workflow has become possible. Digital airborne images are sources for elevation extraction and orthophoto generation. This research is concerned with the generation of digital surface models and orthophotos as applications of high-resolution images. The following steps were performed. Benchmark data from LiDAR and a digital aerial camera were used. First, image orientation and aerial triangulation (AT) were carried out. Then an automatic digital surface model (DSM) was generated from the digital aerial camera images. Third, a true digital orthophoto was generated from the digital aerial camera, and an orthoimage was also generated using the LiDAR DSM. The Leica Photogrammetry Suite (LPS) module of Erdas Imagine 2014 was used for processing, and the resulting orthoimages from both techniques were mosaicked. The results show that the automatic DSM produced from the digital aerial camera has much denser photogrammetric 3D point clouds than the LiDAR 3D point clouds, and that the true orthoimage produced from the second approach is better than that produced from the first. Five approaches were then tested for classifying the best orthorectified image mosaic, using subpixel-based (neural network) and pixel-based (minimum distance and maximum likelihood) classifiers. Multiple cues were extracted, such as texture (entropy and mean), the digital elevation model, the digital surface model, the normalized digital surface model (nDSM), and the intensity image, and the contribution of each individual cue to the classification was evaluated. The best cue integration was found to be intensity (pan) + nDSM + entropy, followed by intensity (pan) + nDSM + mean, then intensity + mean + entropy, then the DSM image with two texture measures (mean and entropy), followed by the colour image. Integration with height data increases the accuracy, as does integration with the entropy texture. Across the resulting fifteen classification cases, the maximum likelihood classifier performed best, followed by minimum distance and then the neural network classifier. We attribute this to the fine resolution of the digital camera image; the subpixel (neural network) classifier is not suitable for classifying aerial digital camera images.
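A minimal sketch of one of the pixel-based classifiers tested above, the minimum-distance rule: the cues are stacked into a feature image and each pixel is assigned to the class whose mean cue vector is nearest. The cue stack, class means, and class names below are placeholders, not the study's data.

```python
import numpy as np

def minimum_distance_classify(cue_stack, class_means):
    """Assign each pixel to the class whose mean cue vector is nearest (Euclidean distance)."""
    h, w, b = cue_stack.shape
    pixels = cue_stack.reshape(-1, b)
    dists = np.linalg.norm(pixels[:, None, :] - class_means[None, :, :], axis=2)
    return dists.argmin(axis=1).reshape(h, w)

# Placeholder cue stack mimicking the best combination: intensity (pan) + nDSM + entropy.
cue_stack = np.dstack([np.random.rand(32, 32) for _ in range(3)])
class_means = np.array([[0.3, 0.1, 0.4],    # e.g. ground
                        [0.7, 0.8, 0.6]])   # e.g. elevated objects
label_map = minimum_distance_classify(cue_stack, class_means)
```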

