Deep Convolutional Neural Network with ResNet-50 Learning Algorithm for Copy-Move Forgery Detection

Author(s):  
Vaishali Sharma ◽  
Neetu Singh
Cancers ◽  
2021 ◽  
Vol 13 (4) ◽  
pp. 652 ◽  
Author(s):  
Carlo Augusto Mallio ◽  
Andrea Napolitano ◽  
Gennaro Castiello ◽  
Francesco Maria Giordano ◽  
Pasquale D'Alessio ◽  
...  

Background: Coronavirus disease 2019 (COVID-19) pneumonia and immune checkpoint inhibitor (ICI) therapy-related pneumonitis share common features. The aim of this study was to determine, on chest computed tomography (CT) images, whether a deep convolutional neural network algorithm is able to solve the challenge of differential diagnosis between COVID-19 pneumonia and ICI therapy-related pneumonitis. Methods: We enrolled three groups: a pneumonia-free group (n = 30), a COVID-19 group (n = 34), and a group of patients with ICI therapy-related pneumonitis (n = 21). Computed tomography images were analyzed with an artificial intelligence (AI) algorithm based on a deep convolutional neural network structure. Statistical analysis included the Mann–Whitney U test (significance threshold at p < 0.05) and the receiver operating characteristic (ROC) curve. Results: The algorithm showed low specificity in distinguishing COVID-19 from ICI therapy-related pneumonitis (sensitivity 97.1%, specificity 14.3%, area under the curve (AUC) = 0.62). ICI therapy-related pneumonitis was identified by the AI when compared to pneumonia-free controls (sensitivity 85.7%, specificity 100%, AUC = 0.97). Conclusions: The deep learning algorithm was not able to distinguish between COVID-19 pneumonia and ICI therapy-related pneumonitis. Awareness must be increased among clinicians about the imaging similarities between COVID-19 and ICI therapy-related pneumonitis. ICI therapy-related pneumonitis can serve as a challenging population for cross-validation to test the robustness of AI models used to analyze interstitial pneumonias of variable etiology.
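As a rough illustration of the statistical evaluation described in this abstract (a Mann–Whitney U test plus an ROC analysis of per-examination AI scores), the Python sketch below uses scipy and scikit-learn; the score values, variable names, and group sizes are invented placeholders, not data from the study.

```python
# Hypothetical sketch of the evaluation described above: comparing AI scores
# for two groups with a Mann-Whitney U test and an ROC analysis.
# The scores and group sizes are placeholders, not the authors' data.
import numpy as np
from scipy.stats import mannwhitneyu
from sklearn.metrics import roc_auc_score, roc_curve

# Assumed: one AI-derived score per CT examination, higher = more "COVID-like".
covid_scores = np.array([0.91, 0.88, 0.95, 0.77, 0.90])   # COVID-19 group
ici_scores = np.array([0.85, 0.70, 0.92, 0.81, 0.66])     # ICI pneumonitis group

# Mann-Whitney U test (significance threshold p < 0.05, as in the study).
stat, p_value = mannwhitneyu(covid_scores, ici_scores, alternative="two-sided")

# ROC analysis: label COVID-19 as the positive class.
labels = np.concatenate([np.ones_like(covid_scores), np.zeros_like(ici_scores)])
scores = np.concatenate([covid_scores, ici_scores])
auc = roc_auc_score(labels, scores)
fpr, tpr, thresholds = roc_curve(labels, scores)

print(f"Mann-Whitney U p-value: {p_value:.3f}, AUC: {auc:.2f}")
```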


2019 ◽  
Vol 8 (2) ◽  
pp. 4605-4613

This Raspberry Pi single-board-computer-based cataract detection system uses a deep convolutional neural network through GoogLeNet transfer learning and a MATLAB digital image processing paradigm based on the Lens Opacities Classification System III, together with a Python application, to capture images of the eyes of cataract patients and detect the type of cataract without using dilating drops. Additionally, the system can determine the severity, grade, color or area, and hardness of the cataract. It can also display, save, search, and print the partial diagnosis that can be given to the patients. Descriptive quantitative research, the Waterfall System Development Life Cycle, and the Evolutionary Prototyping Model were used as the methodologies of this study. Cataract patients and ophthalmologists of one of the eye clinics in the City of Biñan, Laguna, as well as engineers and information technology professionals, tested the system and also served as respondents to the conducted survey. The obtained results indicated that the detection of cataract and its characteristics using the system was accurate and reliable, and showed a significant difference from the current eye examination for cataract. Overall, this would be a modern cataract detection system for all cataract patients.
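For readers unfamiliar with GoogLeNet transfer learning, the sketch below shows the general technique in PyTorch/torchvision rather than the MATLAB pipeline the study actually used; the dataset path, class count, and hyperparameters are assumptions for illustration only.

```python
# Illustrative sketch of GoogLeNet transfer learning for cataract-type
# classification (assumes a recent torchvision); dataset path, class count
# and hyperparameters are hypothetical, not the study's configuration.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

num_classes = 4  # assumed number of cataract categories

# Standard ImageNet preprocessing expected by the pretrained backbone.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

train_set = datasets.ImageFolder("data/cataract/train", transform=preprocess)
loader = torch.utils.data.DataLoader(train_set, batch_size=16, shuffle=True)

# Load GoogLeNet pretrained on ImageNet and replace the final classifier.
model = models.googlenet(weights=models.GoogLeNet_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, num_classes)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for images, targets in loader:
    optimizer.zero_grad()
    loss = criterion(model(images), targets)
    loss.backward()
    optimizer.step()
```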


2019 ◽  
Vol 17 (06) ◽  
pp. 1950039
Author(s):  
Bifang He ◽  
Jian Huang ◽  
Heng Chen

Plant-exclusive virus-derived small interfering RNAs (vsiRNAs) regulate various biological processes and are especially important in antiviral immunity. The identification of plant vsiRNAs is important for understanding the biogenesis and functional mechanisms of vsiRNAs and for further developing antiviral plants. In this study, we extracted plant vsiRNA sequences from the PVsiRNAdb database. We then utilized a deep convolutional neural network (CNN) to develop a deep learning algorithm, named PVsiRNAPred, for predicting plant vsiRNAs based on vsiRNA sequence composition. The key part of PVsiRNAPred is the CNN module, which automatically learns hierarchical representations of vsiRNA sequences related to vsiRNA profiles in plants. When evaluated on an independent testing dataset, the accuracy of the model was 65.70%, which was higher than those of five classifiers based on conventional machine learning methods. In addition, PVsiRNAPred obtained a sensitivity of 67.11%, a specificity of 64.26% and a Matthews correlation coefficient (MCC) of 0.31, and the area under the receiver operating characteristic (ROC) curve (AUC) of PVsiRNAPred was 0.71 in the independent test. A permutation test with 1000 shuffles resulted in a [Formula: see text] value [Formula: see text]. These results indicate that PVsiRNAPred has favorable generalization capability. We hope PVsiRNAPred, the first bioinformatics algorithm for predicting plant vsiRNAs, will enable efficient discovery of new vsiRNAs.
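The abstract does not give the network details, so the following is only a minimal sketch of the general idea behind a sequence-composition CNN: one-hot-encode short RNA sequences and apply 1D convolutions. The architecture, sequence length, and hyperparameters below are assumptions, not the published PVsiRNAPred model.

```python
# Minimal sketch of a 1D CNN over one-hot-encoded small-RNA sequences.
# This is NOT the published PVsiRNAPred architecture; all layer sizes,
# the maximum length, and the example sequences are assumptions.
import torch
import torch.nn as nn

ALPHABET = "ACGU"
MAX_LEN = 24  # assumed maximum vsiRNA length; shorter sequences are zero-padded

def one_hot(seq: str) -> torch.Tensor:
    """Encode an RNA sequence as a (4, MAX_LEN) one-hot tensor."""
    x = torch.zeros(len(ALPHABET), MAX_LEN)
    for i, base in enumerate(seq[:MAX_LEN]):
        if base in ALPHABET:
            x[ALPHABET.index(base), i] = 1.0
    return x

class SeqCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(4, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveMaxPool1d(1),
        )
        self.classifier = nn.Linear(64, 2)  # vsiRNA vs. non-vsiRNA

    def forward(self, x):
        return self.classifier(self.features(x).squeeze(-1))

model = SeqCNN()
batch = torch.stack([one_hot("UUGACAGAAGAUAGAGAGCAC"), one_hot("ACGUACGUACGUACGUACGU")])
logits = model(batch)
print(logits.shape)  # (2, 2): one score pair per sequence
```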


Diagnostics ◽  
2022 ◽  
Vol 12 (1) ◽  
pp. 181
Author(s):  
Anna Landsmann ◽  
Jann Wieler ◽  
Patryk Hejduk ◽  
Alexander Ciritsis ◽  
Karol Borkowski ◽  
...  

The aim of this study was to investigate the potential of a machine learning algorithm to accurately classify parenchymal density in spiral breast-CT (BCT), using a deep convolutional neural network (dCNN). In this retrospectively designed study, 634 examinations of 317 patients were included. After image selection and preparation, 5589 images from 634 different BCT examinations were sorted by a four-level density scale, ranging from A to D, using ACR BI-RADS-like criteria. Subsequently, four different dCNN models (differing in optimizer and spatial resolution) were trained (70% of the data), validated (20%) and tested on a “real-world” dataset (10%). Moreover, dCNN accuracy was compared to a human readout. The model with the lowest input resolution showed the highest overall performance, reaching an accuracy of 85.8% on the “real-world” dataset. The intra-class correlation between the dCNN and the two readers was almost perfect (0.92), and kappa values between both readers and the dCNN were substantial (0.71–0.76). Moreover, the diagnostic performance of the readers and the dCNN showed very good correspondence, with an AUC of 0.89. Artificial intelligence in the form of a dCNN can be used for standardized, observer-independent and reliable classification of parenchymal density in a BCT examination.
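As a small illustration of the reader-agreement statistics reported here, the snippet below computes accuracy and Cohen's kappa between a human readout and dCNN predictions on the four-level A–D density scale using scikit-learn; the label vectors are fabricated examples, not study data.

```python
# Hedged sketch of the agreement metrics mentioned above: accuracy and
# Cohen's kappa between a human reader and the dCNN on the four ACR-like
# density categories (A-D). The label arrays are invented for illustration.
from sklearn.metrics import accuracy_score, cohen_kappa_score

reader = ["A", "B", "B", "C", "D", "C", "B", "A", "D", "C"]   # human readout
dcnn   = ["A", "B", "C", "C", "D", "C", "B", "B", "D", "C"]   # dCNN prediction

print("accuracy:", accuracy_score(reader, dcnn))
print("kappa:   ", cohen_kappa_score(reader, dcnn))
```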


2020 ◽  
Vol 3 (2) ◽  
pp. 177-178
Author(s):  
John Jowil D. Orquia ◽  
El Jireh Bibangco

Manual fruit classification is the traditional way of classifying fruits. It is contact labor that is time-consuming and often results in lower productivity, inconsistency, and sometimes damage to the fruits (Prabha & Kumar, 2012). Thus, new technologies such as deep learning paved the way for a faster and more efficient method of fruit classification (Faridi & Aboonajmi, 2017). A deep convolutional neural network is a deep learning model that stacks several layers of neural networks to create a more complex model capable of solving complex problems. State-of-the-art pre-trained deep learning models such as AlexNet, GoogLeNet, and ResNet-50 are widely used; however, such models were not explicitly trained for fruit classification (Dyrmann, Karstoft, & Midtiby, 2016). The study aimed to create a new deep convolutional neural network and compare its performance to fine-tuned pre-trained models based on accuracy, precision, sensitivity, and specificity.
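The study compares a newly designed CNN against fine-tuned pre-trained models; the sketch below shows what fine-tuning one such baseline (ResNet-50) typically looks like in PyTorch, with the number of fruit classes, the data, and the single training step invented purely for illustration.

```python
# Illustrative sketch of fine-tuning a pre-trained ResNet-50 for fruit
# classification, the kind of baseline the study compares against.
# The class count and the random training batch are hypothetical.
import torch
import torch.nn as nn
from torchvision import models

num_fruit_classes = 10  # hypothetical number of fruit categories

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)

# Freeze the pre-trained backbone and train only a new classification head.
for param in model.parameters():
    param.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, num_fruit_classes)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One dummy training step on random tensors, just to show the loop shape.
images = torch.randn(8, 3, 224, 224)
targets = torch.randint(0, num_fruit_classes, (8,))
loss = criterion(model(images), targets)
loss.backward()
optimizer.step()
```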


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
V. Vinolin ◽  
M. Sucharitha

Purpose: With the advancements in photo editing software, it is possible to generate fake images, degrading the trust in digital images. Forged images, which appear like authentic images, can be created without leaving any visual clues about the alteration in the image. The image forensics field has introduced several forgery detection techniques, which effectively distinguish fake images from original ones, to restore the trust in digital images. Among the various kinds of forged images, spliced images involving human faces are particularly harmful. Hence, there is a need for a forgery detection approach to detect spliced images. Design/methodology/approach: This paper proposes a Taylor–rider optimization algorithm-based deep convolutional neural network (Taylor-ROA-based DeepCNN) for detecting spliced images. Initially, the human faces in the spliced images are detected using the Viola–Jones algorithm, from which the 3-dimensional (3D) shape of the face is established using a landmark-based 3D morphable model (L3DMM), which estimates the light coefficients. Then, distance measures, such as the Bhattacharyya, standardized Euclidean (Seuclidean), Euclidean, Hamming, Chebyshev and correlation-coefficient distances, are determined from the light coefficients of the faces. These form the feature vector for the proposed Taylor-ROA-based DeepCNN, which identifies the spliced images. Findings: Experimental analysis using the DSO-1, DSI-1, real and hybrid datasets reveals that the proposed approach achieved a maximal accuracy, true positive rate (TPR) and true negative rate (TNR) of 99%, 98.88% and 96.03%, respectively, on the DSO-1 dataset. In terms of accuracy, the proposed method achieved improvements of 24.49%, 8.92%, 6.72%, 4.17%, 0.25%, 0.13%, 0.06%, and 0.06% over the existing methods of Kee and Farid, shape from shading (SFS), random guess, Bo Peng et al., neural network, FOA-SVNN, CNN-based MBK, and Manoj Kumar et al., respectively. Originality/value: The Taylor-ROA is developed by integrating the Taylor series into the rider optimization algorithm (ROA) for optimally tuning the DeepCNN.
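Two building blocks of this pipeline are easy to sketch: Viola–Jones face detection (here via OpenCV's pre-trained Haar cascade) and the distance measures computed between light-coefficient vectors. The image path and the coefficient vectors below are placeholders, and the L3DMM lighting estimation itself is not reproduced.

```python
# Hedged sketch: Viola-Jones-style face detection with an OpenCV Haar cascade,
# followed by the distance measures named above, computed between two
# placeholder light-coefficient vectors. The L3DMM step is not reproduced.
import cv2
import numpy as np
from scipy.spatial import distance

# Viola-Jones face detection using a pre-trained Haar cascade shipped with OpenCV.
cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
image = cv2.imread("spliced_image.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical input image
faces = cascade.detectMultiScale(image, scaleFactor=1.1, minNeighbors=5)

# Placeholder light-coefficient vectors for two detected faces
# (in the paper these come from the landmark-based 3D morphable model).
light_a = np.random.rand(9)
light_b = np.random.rand(9)

# Bhattacharyya distance on the normalized coefficient "distributions" (not in scipy).
p, q = light_a / light_a.sum(), light_b / light_b.sum()
bhattacharyya = -np.log(np.sum(np.sqrt(p * q)))

features = [
    bhattacharyya,
    distance.seuclidean(light_a, light_b, np.var(np.vstack([light_a, light_b]), axis=0)),
    distance.euclidean(light_a, light_b),
    distance.hamming(light_a > 0.5, light_b > 0.5),  # Hamming on binarized coefficients
    distance.chebyshev(light_a, light_b),
    distance.correlation(light_a, light_b),
]
print(len(faces), features)  # this feature vector would feed the Taylor-ROA-based DeepCNN
```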

