Copy-move image forged information detection and localisation in digital images using deep convolutional network

2021 ◽  
pp. 016555152110500
Author(s):  
Tanzila Saba ◽  
Amjad Rehman ◽  
Tariq Sadad ◽  
Zahid Mehmood

Image tampering is one of the significant issues of the modern era. Powerful image-editing tools, advanced technology, and the widespread sharing of images on social media have raised questions about data integrity. The protection of images is currently uncertain and a serious concern, particularly when they are transferred over the Internet, so it is essential to detect anomalies in images using artificial intelligence techniques. The simplest form of image forgery is copy-move, in which part of an image is replicated elsewhere in the same image to hide unwanted content. Image processing with handcrafted features usually searches for duplicated patterns, which limits its use for large-scale data classification. Deep learning approaches, on the other hand, achieve promising results, but their performance depends on the training data and on fine-tuning of hyperparameters. We therefore propose a custom convolutional neural network (CNN) architecture alongside a pre-trained ResNet101 model used in a transfer learning approach. Both models are trained on five different datasets and evaluated in terms of accuracy, precision, recall, and F-score; the highest accuracy, 98.4%, is achieved on the Coverage dataset.

Database ◽  
2019 ◽  
Vol 2019 ◽  
Author(s):  
Tao Chen ◽  
Mingfen Wu ◽  
Hexi Li

Abstract The automatic extraction of meaningful relations from biomedical literature or clinical records is crucial in various biomedical applications. Most of the current deep learning approaches for medical relation extraction require large-scale training data to prevent overfitting of the training model. We propose using a pre-trained model and a fine-tuning technique to improve these approaches without additional time-consuming human labeling. Firstly, we show the architecture of Bidirectional Encoder Representations from Transformers (BERT), an approach for pre-training a model on large-scale unstructured text. We then combine BERT with a one-dimensional convolutional neural network (1d-CNN) to fine-tune the pre-trained model for relation extraction. Extensive experiments on three datasets, namely the BioCreative V chemical disease relation corpus, traditional Chinese medicine literature corpus and i2b2 2012 temporal relation challenge corpus, show that the proposed approach achieves state-of-the-art results (giving a relative improvement of 22.2, 7.77, and 38.5% in F1 score, respectively, compared with a traditional 1d-CNN classifier). The source code is available at https://github.com/chentao1999/MedicalRelationExtraction.
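The 1d-CNN head that this abstract places on top of BERT can be sketched in isolation: convolve over the token dimension of the encoder's hidden states, max-pool over the sequence, and classify. This is an illustrative sketch, not the authors' released code; the filter count, kernel size, and the five relation labels are assumptions, and a random tensor stands in for the BERT output so the example is self-contained.

```python
import torch
import torch.nn as nn

# 1d-CNN relation-classification head over BERT token embeddings
# (hidden size 768, sequence length 128 are BERT-base conventions).
class CNNRelationHead(nn.Module):
    def __init__(self, hidden=768, n_labels=5):
        super().__init__()
        # Conv1d expects (batch, channels, length): channels = hidden size.
        self.conv = nn.Conv1d(hidden, 128, kernel_size=3, padding=1)
        self.classifier = nn.Linear(128, n_labels)

    def forward(self, token_states):          # (batch, seq_len, hidden)
        x = token_states.transpose(1, 2)      # -> (batch, hidden, seq_len)
        x = torch.relu(self.conv(x))
        x = x.max(dim=2).values               # max-pool over the sequence
        return self.classifier(x)

# In the real pipeline these states come from a pre-trained BERT encoder;
# random tensors stand in here so the sketch runs without a checkpoint.
states = torch.randn(4, 128, 768)
logits = CNNRelationHead()(states)
```

During fine-tuning, both the BERT encoder and this head are updated end-to-end, which is what distinguishes the approach from training the 1d-CNN classifier alone.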


2017 ◽  
Author(s):  
Mario Valerio Giuffrida ◽  
Hanno Scharr ◽  
Sotirios A Tsaftaris

Abstract In recent years, there has been increasing interest in image-based plant phenotyping, applying state-of-the-art machine learning approaches to challenging problems such as leaf segmentation (a multi-instance problem) and counting. Most of these algorithms need labelled data to learn a model for the task at hand. Despite the recent release of a few plant phenotyping datasets, large annotated plant image datasets for training deep learning algorithms are lacking. One common approach to alleviating the lack of training data is dataset augmentation. Herein, we propose an alternative to dataset augmentation for plant phenotyping: creating artificial images of plants using generative neural networks. We propose the Arabidopsis Rosette Image Generator (through) Adversarial Network (ARIGAN), a deep convolutional network able to generate synthetic rosette-shaped plants, inspired by DC-GAN (a recent adversarial network model using convolutional layers). Specifically, we trained the network on subsets A1, A2, and A4 of the CVPPP 2017 LCC dataset, which contain Arabidopsis thaliana plants. We show that our model generates realistic 128 × 128 colour images of plants. We condition the network on leaf count, so that it can generate plants with a given number of leaves, suitable, among other uses, for training regression-based models. We propose a new Ax dataset of artificial plant images produced by ARIGAN. We evaluate this dataset with a state-of-the-art leaf counting algorithm, showing that the testing error is reduced when Ax is used as part of the training data.
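The conditioning mechanism described here, generating plants with a given number of leaves, can be sketched with a small DC-GAN-style generator: embed the leaf count, concatenate it with the noise vector, and upsample with transposed convolutions. This is a toy sketch, not ARIGAN itself; the layer sizes, the 16-dimensional count embedding, and the 16 × 16 output (versus the paper's 128 × 128) are illustrative assumptions.

```python
import torch
import torch.nn as nn

# DC-GAN-style generator conditioned on a discrete leaf count.
class ConditionalGenerator(nn.Module):
    def __init__(self, z_dim=100, n_counts=10):
        super().__init__()
        self.embed = nn.Embedding(n_counts, 16)   # leaf-count conditioning
        self.net = nn.Sequential(
            nn.ConvTranspose2d(z_dim + 16, 64, 4, 1, 0),  # 1x1 -> 4x4
            nn.BatchNorm2d(64), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, 2, 1),          # 4x4 -> 8x8
            nn.BatchNorm2d(32), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, 2, 1),           # 8x8 -> 16x16
            nn.Tanh(),                                    # pixels in [-1, 1]
        )

    def forward(self, z, leaf_count):
        c = self.embed(leaf_count)                   # (batch, 16)
        x = torch.cat([z, c], dim=1)[..., None, None]  # to (batch, C, 1, 1)
        return self.net(x)

z = torch.randn(2, 100)
images = ConditionalGenerator()(z, torch.tensor([3, 7]))  # 3- and 7-leaf plants
```

Because the count is an explicit input, the trained generator can be asked for plants with a specified leaf number, which is what makes the synthetic Ax images usable as labelled data for counting models.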


In the recent past, deep learning models [1] have predominantly been used in object detection algorithms because of their accurate image recognition capability. These models extract features from input images and videos [2] to identify the objects present in them. Applications of these models include image processing, video analysis, speech recognition, biomedical image analysis, biometric recognition, iris recognition, national security, cyber security, natural language processing [3], weather forecasting, renewable energy generation scheduling, etc. These models are built on the convolutional neural network (CNN) [3], which consists of several layers of artificial neurons. The accuracy of deep learning models [1] depends on parameters such as the learning rate, training batch size, validation batch size, activation function, and drop-out rate; these are known as hyper-parameters. Object detection accuracy depends on the selected hyper-parameter values, so finding the best values is a challenging task. Fine-tuning is the process of selecting suitable hyper-parameter values to improve object detection accuracy. An inappropriate hyper-parameter value leads to over-fitting or under-fitting. Over-fitting occurs when the model fits the training data too closely and learns its noise, resulting in inaccurate object detection; under-fitting occurs when the model fails to capture the trend of the data, leading to larger errors on both training and testing data. In this paper, a balance between over-fitting and under-fitting is sought by varying the learning rate of four deep learning models: VGG16, VGG19, InceptionV3, and Xception.
The best learning-rate zone for each model, in terms of maximum object detection accuracy, is identified. A dataset of 70 object classes is used, and prediction accuracy is analyzed by changing the learning rate while keeping the remaining hyper-parameters constant. The paper thus concentrates on the impact of the learning rate on accuracy and identifies an optimum accuracy zone for object detection.
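The experimental protocol above, varying only the learning rate while holding every other hyper-parameter fixed, can be sketched with a toy sweep. A small linear classifier and synthetic data stand in for the VGG/Inception/Xception models and the 70-class dataset; the three candidate rates and the epoch count are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Learning-rate sweep: same model, same data, same initialisation,
# only the learning rate changes between runs.
torch.manual_seed(0)
X = torch.randn(64, 10)
y = torch.randint(0, 3, (64,))

def final_loss(lr, epochs=50):
    torch.manual_seed(0)                       # identical init for each rate
    model = nn.Linear(10, 3)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(X), y)
        loss.backward()
        opt.step()
    return loss.item()

# Too small a rate barely moves the weights; too large a rate can
# oscillate. The "best zone" lies between the extremes.
results = {lr: final_loss(lr) for lr in (1e-4, 1e-2, 1.0)}
```

Re-seeding before each run is what makes the comparison fair: every run starts from the same weights, so differences in final loss are attributable to the learning rate alone.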


Author(s):  
C. Qiu ◽  
P. Gamba ◽  
M. Schmitt ◽  
X. X. Zhu

Abstract. Man-made impervious surfaces, which indicate the human footprint on Earth, are an environmental concern because they trigger a chain of events that modifies urban air and water resources. To better map man-made impervious surfaces in any region of interest (ROI), we propose a framework for learning to map impervious areas in arbitrary ROIs from Sentinel-2 images with noisy reference data, using a pre-trained fully convolutional network (FCN). The FCN is first trained with reference data available only for Europe, yet it provides reasonable mapping results even in areas outside Europe. The proposed framework, which aims to improve on these preliminary predictions for a specific ROI, consists of two steps: pre-processing of the noisy training data and model fine-tuning with robust loss functions. The framework is validated over four test areas on different continents, with a measurable improvement over several baseline results. We show that a better impervious-surface map can be achieved through simple fine-tuning with noisy training data, and that label updating through robust loss functions further enhances performance. In addition, analysis and comparison of the mapping results against the baselines highlight that the improvement comes mainly from a decreased omission error. This study can also provide insights for similar tasks, such as large-scale land cover/land use classification when accurate reference data are not available for training.
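One family of robust loss functions suitable for the noisy-label fine-tuning described here is the bootstrapped ("soft bootstrap") cross-entropy, which blends the noisy reference label with the model's own prediction so that confidently contradicted labels contribute less. This is a generic sketch of the technique, not necessarily the loss the authors used; the mixing weight beta = 0.8 and the two-class (impervious vs. not) setting are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

# Soft-bootstrap cross-entropy for training with noisy labels:
# target = beta * noisy one-hot label + (1 - beta) * model prediction.
def soft_bootstrap_loss(logits, noisy_labels, beta=0.8):
    log_probs = F.log_softmax(logits, dim=1)
    probs = log_probs.exp().detach()                  # model's own belief
    one_hot = F.one_hot(noisy_labels, logits.size(1)).float()
    target = beta * one_hot + (1.0 - beta) * probs    # blended soft target
    return -(target * log_probs).sum(dim=1).mean()

logits = torch.randn(8, 2, requires_grad=True)        # per-pixel class scores
labels = torch.randint(0, 2, (8,))                    # noisy reference labels
loss = soft_bootstrap_loss(logits, labels)
loss.backward()                                       # gradients flow to logits
```

Detaching the model's prediction in the target is what prevents the trivial solution of the network matching itself; the reference label still anchors the training signal.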


2012 ◽  
Vol 82 (3) ◽  
pp. 216-222 ◽  
Author(s):  
Venkatesh Iyengar ◽  
Ibrahim Elmadfa

The food safety security (FSS) concept is perceived as an early warning system for minimizing food safety (FS) breaches, and it functions in conjunction with existing FS measures. Essentially, the function of FS and FSS measures can be visualized in two parts: (i) the FS preventive measures as actions taken at the stem level, and (ii) the FSS interventions as actions taken at the root level, to enhance the impact of the implemented safety steps. In practice, along with FS, FSS also draws its support from (i) legislative directives and regulatory measures for enforcing verifiable, timely, and effective compliance; (ii) measurement systems in place for sustained quality assurance; and (iii) shared responsibility to ensure cohesion among all the stakeholders, namely policy makers, regulators, food producers, processors and distributors, and consumers. However, the functional framework of FSS differs from that of FS by way of: (i) retooling the vulnerable segments of the preventive features of existing FS measures; (ii) fine-tuning response systems to efficiently preempt the FS breaches; (iii) building a long-term nutrient and toxicant surveillance network based on validated measurement systems functioning in real time; (iv) focusing on crisp, clear, and correct communication that resonates among all the stakeholders; and (v) developing inter-disciplinary human resources to meet ever-increasing FS challenges. Important determinants of FSS include: (i) strengthening international dialogue for refining regulatory reforms and addressing emerging risks; (ii) developing innovative and strategic action points for intervention [in addition to Hazard Analysis and Critical Control Points (HACCP) procedures]; and (iii) introducing additional science-based tools such as metrology-based measurement systems.


2020 ◽  
Vol 2 (5) ◽  
pp. 115-119
Author(s):  
M. V. SAVINA ◽  
A. A. STEPANOV ◽  
I.A. STEPANOV ◽  
...  

The article highlights the problems of the impact of "digitalization" of society on the formation and transformation of human capital, and above all, the development of new competencies, knowledge and skills. The main components of human capital in the modern era, the features of the formal and informal educational process are clarified and disclosed. The necessity of minimizing the precariat class is proved. The main directions of qualitative improvement of human capital adequate to the challenges of the digital age and globalization are defined.


2019 ◽  
Vol 11 (3) ◽  
pp. 284 ◽  
Author(s):  
Linglin Zeng ◽  
Shun Hu ◽  
Daxiang Xiang ◽  
Xiang Zhang ◽  
Deren Li ◽  
...  

Soil moisture mapping at a regional scale is commonplace since these data are required in many applications, such as hydrological and agricultural analyses. The use of remotely sensed data for the estimation of deep soil moisture at a regional scale has received far less emphasis. The objective of this study was to map the 500-m, 8-day average and daily soil moisture at different soil depths in Oklahoma from remotely sensed and ground-measured data using the random forest (RF) method, a machine-learning approach. To investigate the estimation accuracy of the RF method at both a spatial and a temporal scale, two independent soil moisture estimation experiments were conducted using data from 2010 to 2014: a year-to-year experiment (with a root mean square error (RMSE) ranging from 0.038 to 0.050 m3/m3) and a station-to-station experiment (with an RMSE ranging from 0.044 to 0.057 m3/m3). The data requirements, importance factors, and spatial and temporal variations in estimation accuracy were then discussed based on results obtained with training data selected by iterated random sampling. The highly accurate estimations of both the surface and the deep soil moisture for the study area reveal the potential of RF methods for mapping soil moisture at a regional scale, especially given the high heterogeneity of land-cover types and topography in the study area.
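The core estimation step, fitting a random forest regressor on co-located predictors and station measurements and scoring it with RMSE, can be sketched with scikit-learn. The synthetic features below merely stand in for the remotely sensed and ground-measured inputs, and the hyper-parameters and train/test split are illustrative assumptions, not the paper's configuration.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error

# Synthetic stand-in for predictors (e.g. reflectance bands, surface
# temperature, vegetation indices) and soil-moisture targets in m3/m3.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))
y = 0.25 + 0.05 * X[:, 0] - 0.03 * X[:, 1] + rng.normal(0, 0.01, 500)

X_train, X_test = X[:400], X[400:]
y_train, y_test = y[:400], y[400:]

rf = RandomForestRegressor(n_estimators=100, random_state=0)
rf.fit(X_train, y_train)

# RMSE in m3/m3, matching the error metric reported above.
rmse = mean_squared_error(y_test, rf.predict(X_test)) ** 0.5
```

A station-to-station experiment corresponds to splitting by station rather than by row, so that test stations are entirely unseen during training; the fitted forest's `feature_importances_` attribute gives the importance ranking the study discusses.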


Diagnostics ◽  
2021 ◽  
Vol 11 (6) ◽  
pp. 1052
Author(s):  
Leang Sim Nguon ◽  
Kangwon Seo ◽  
Jung-Hyun Lim ◽  
Tae-Jun Song ◽  
Sung-Hyun Cho ◽  
...  

Mucinous cystic neoplasms (MCN) and serous cystic neoplasms (SCN) account for a large portion of solitary pancreatic cystic neoplasms (PCN). In this study we implemented a convolutional neural network (CNN) model using ResNet50 to differentiate between MCN and SCN. The training data were collected retrospectively from 59 MCN and 49 SCN patients from two different hospitals. Data augmentation was used to enhance the size and quality of the training datasets. A fine-tuning training approach was used, adopting the pre-trained model via transfer learning while training selected layers. Testing of the network was conducted by varying the endoscopic ultrasonography (EUS) image sizes and positions to evaluate the network's differentiation performance. The proposed network model achieved up to 82.75% accuracy and a 0.88 (95% CI: 0.817–0.930) area under the curve (AUC) score. The performance of the implemented deep learning networks in decision-making using only EUS images is comparable to that of traditional manual decision-making using EUS images along with supporting clinical information. Gradient-weighted class activation mapping (Grad-CAM) confirmed that the network model learned the features from the cyst region accurately. This study demonstrates the feasibility of diagnosing MCN and SCN using a deep learning network model. Further improvement using more datasets is needed.
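The Grad-CAM check mentioned here, verifying that the network attends to the cyst region, weights the last convolutional feature maps by the gradient of the class score and keeps only the positive evidence. Below is a minimal sketch on a toy CNN rather than the ResNet50 used in the study; the layer sizes and the two-class (MCN vs. SCN) head are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Toy convolutional feature extractor and classification head.
conv = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU())
head = nn.Linear(8, 2)                       # MCN vs. SCN scores

x = torch.randn(1, 3, 32, 32)                # dummy EUS image
feats = conv(x)                              # (1, 8, 32, 32) feature maps
feats.retain_grad()                          # keep gradients at the conv maps
score = head(feats.mean(dim=(2, 3)))[0, 1]   # class score after global pooling
score.backward()                             # gradients of score w.r.t. feats

# Grad-CAM: channel weights = spatially averaged gradients; the map is
# the ReLU of the weighted sum of feature maps.
weights = feats.grad.mean(dim=(2, 3))        # (1, 8) channel importance
cam = torch.relu((weights[:, :, None, None] * feats).sum(dim=1))
```

The resulting `cam` is a coarse heatmap at feature-map resolution; upsampling it to the input size and overlaying it on the EUS image is what lets clinicians check that the highlighted region coincides with the cyst.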


Polymers ◽  
2021 ◽  
Vol 13 (11) ◽  
pp. 1865
Author(s):  
Rida Tajau ◽  
Rosiah Rohani ◽  
Mohd Sofian Alias ◽  
Nurul Huda Mudri ◽  
Khairul Azhar Abdul Halim ◽  
...  

In countries rich in oil palm, bio-based acrylates and polyols produced from palm oil are among the most prominent raw materials for developing new and advanced natural polymeric materials via radiation techniques, including coating resins, nanoparticles, scaffolds, nanocomposites, and lithography, for different branches of industry. The hydrocarbon chains, carbon-carbon double bonds, and ester bonds in palm oil open up the possibility of fine-tuning its structure in the development of novel materials. Cross-linking, reversible addition-fragmentation chain transfer (RAFT) polymerization, grafting, and degradation are among the radiation-induced mechanisms triggered by gamma, electron-beam, ultraviolet, or laser irradiation sources. These radiation techniques are widely used in the development of polymeric materials because they are among the most versatile, inexpensive, easy, and effective methods. This review therefore summarizes and emphasizes several recent studies reporting emerging radiation processing technologies for the production of radiation-curable palm oil-based polymeric materials with a promising future in certain industrial and biomedical applications. It also discusses the rich potential of these biopolymeric materials for advanced technology applications.

