A Fast Robustness Quantification Method for Evaluating Typical Deep Learning Models by Generally Image Processing

Author(s):  
Haocong Li ◽  
Yunjia Cheng ◽  
Wei Ren ◽  
Tianqing Zhu


Plants ◽  
2021 ◽  
Vol 10 (10) ◽  
pp. 1977
Author(s):  
Arturo Yee-Rendon ◽  
Irineo Torres-Pacheco ◽  
Angelica Sarahy Trujillo-Lopez ◽  
Karen Paola Romero-Bringas ◽  
Jesus Roberto Millan-Almaraz

Recently, deep-learning techniques have become the foundation for many breakthroughs in the automated identification of plant diseases. In the agricultural sector, many recent computer-vision approaches rely on deep-learning models. Here, a novel predictive analytics methodology is presented for identifying Tobacco Mosaic Virus (TMV) and Pepper Huasteco Yellow Vein Virus (PHYVV) visual symptoms on Jalapeño pepper (Capsicum annuum L.) leaves using image-processing and deep-learning classification models. The proposed image-processing approach is based on the Normalized Red-Blue Vegetation Index (NRBVI) and Normalized Green-Blue Vegetation Index (NGBVI) as new RGB-based vegetation indices, and their Jet-palette-colored versions, NRBVI-Jet and NGBVI-Jet, as pre-processing algorithms. Furthermore, four standard pre-trained deep-learning architectures, Visual Geometry Group-16 (VGG-16), Xception, Inception v3, and MobileNet v2, were implemented for classification purposes. The objective of this methodology was to find the most accurate combination of vegetation-index pre-processing algorithms and pre-trained deep-learning classification models. Transfer learning was applied to fine-tune the pre-trained deep-learning models, and data augmentation was applied to prevent the models from overfitting. The performance of the models was evaluated on test data using Top-1 accuracy, precision, recall, and F1-score. The results showed that the best model was an Xception-based model using the NGBVI dataset, which reached an average Top-1 test accuracy of 98.3%. A complete analysis of the different vegetation-index representations using models based on deep-learning architectures is presented, along with a study of the learning curves of these deep-learning models during the training phase.
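As a concrete illustration of the vegetation-index pre-processing step, the sketch below computes RGB-based indices of this kind and recolors them with the Jet palette. The abstract does not state the exact NRBVI and NGBVI formulas, so the normalized-difference forms (R - B)/(R + B) and (G - B)/(G + B) are assumed here, and the function names are illustrative.

```python
# Illustrative sketch only: the abstract does not give the exact NRBVI/NGBVI formulas,
# so a normalized-difference form analogous to other RGB indices is assumed here.
import numpy as np
import matplotlib.cm as cm

def vegetation_indices(rgb):
    """rgb: float array of shape (H, W, 3) scaled to [0, 1]."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    eps = 1e-8                              # avoid division by zero
    nrbvi = (r - b) / (r + b + eps)         # assumed Normalized Red-Blue Vegetation Index
    ngbvi = (g - b) / (g + b + eps)         # assumed Normalized Green-Blue Vegetation Index
    return nrbvi, ngbvi

def jet_colored(index):
    """Map a [-1, 1] index image to an RGB image using the Jet colormap."""
    scaled = (index + 1.0) / 2.0            # rescale to [0, 1] for the colormap
    return cm.jet(scaled)[..., :3]          # drop the alpha channel
```

The Jet-colored images would then be fed to the pre-trained classification networks in place of the raw RGB leaves.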


2019 ◽  
Vol 8 (2S11) ◽  
pp. 3700-3705

The extraordinary research in the field of unsupervised machine learning has led the non-technical media to expect to see Robot Lords overthrowing humans in the near future. Whatever the media exaggeration may be, the results of recent advances in Deep Learning research are so striking that it has become very difficult to differentiate between man-made content and computer-made content. This paper tries to establish a ground for new researchers by surveying different real-time applications of Deep Learning. It is not a complete study of all applications of Deep Learning; rather, it focuses on some of the most highly researched themes and popular applications in domains such as Image Processing, Sound/Speech Processing, and Video Processing.


2020 ◽  
Author(s):  
Andrew Shepley ◽  
Greg Falzon ◽  
Paul Meek ◽  
Paul Kwan

Abstract: A time-consuming challenge faced by camera trap practitioners all over the world is the extraction of meaningful data from images to inform ecological management. The primary methods of image processing used by practitioners include manual analysis and citizen science. An increasingly popular alternative is automated image classification software. However, most automated solutions are not sufficiently robust to be deployed on a large scale. Key challenges include limited access to images for each species and a lack of location invariance when transferring models between sites. This prevents optimal use of ecological data and results in significant expenditure of time and resources to annotate and retrain deep learning models.

In this study, we aimed to (a) assess the value of publicly available non-iconic FlickR images in the training of deep learning models for camera trap object detection, (b) develop an out-of-the-box, location-invariant automated camera trap image processing solution for ecologists using deep transfer learning, and (c) explore the use of small subsets of camera trap images in the optimisation of a FlickR-trained deep learning model for high-precision ecological object detection.

We collected and annotated a dataset of images of "pigs" (Sus scrofa and Phacochoerus africanus) from the consumer image-sharing website FlickR. These images were used to achieve transfer learning with a RetinaNet model on the task of object detection. We compared the performance of this model to that of models trained on combinations of camera trap images obtained from five different projects, each characterised by a different geographical region. Furthermore, we explored optimisation of the FlickR model via infusion of small subsets of camera trap images to increase robustness on difficult images.

In most cases, the mean Average Precision (mAP) of the FlickR-trained model when tested on out-of-sample camera trap sites (67.21-91.92%) was significantly higher than the mAP achieved by models trained on only one geographical location (4.42-90.8%) and rivalled the mAP of models trained on mixed camera trap datasets (68.96-92.75%). The infusion of camera trap images into the FlickR training further improved AP by 5.10-22.32% to 83.60-97.02%.

Ecology researchers can use FlickR images in the training of automated deep learning solutions for camera trap image processing to significantly reduce time and resource expenditure by allowing the development of location-invariant, highly robust out-of-the-box solutions. This would allow AI technologies to be deployed on a large scale in ecological applications.
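As a rough illustration of the transfer-learning workflow described above, the sketch below loads a COCO-pretrained RetinaNet and fine-tunes it for a single "pig" class. It uses torchvision; the study's actual implementation, hyperparameters, and training schedule are not specified in the abstract, so all of these choices are assumptions.

```python
# Hypothetical sketch of the transfer-learning setup, using torchvision's RetinaNet.
import torch
from torchvision.models.detection import retinanet_resnet50_fpn
from torchvision.models.detection.retinanet import RetinaNetClassificationHead

def build_pig_detector(num_classes: int = 2):
    """Load a COCO-pretrained RetinaNet and swap in a fresh head for background + 'pig'."""
    model = retinanet_resnet50_fpn(weights="DEFAULT")
    in_channels = model.backbone.out_channels
    num_anchors = model.head.classification_head.num_anchors
    model.head.classification_head = RetinaNetClassificationHead(
        in_channels, num_anchors, num_classes
    )
    return model

def fine_tune(model, loader, epochs: int = 10, lr: float = 1e-4):
    """One plausible fine-tuning loop: train on FlickR images first, then 'infuse' a small
    camera-trap subset by calling the same loop again with that second loader."""
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model.to(device).train()
    optimiser = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    for _ in range(epochs):
        for images, targets in loader:   # targets: list of dicts with 'boxes' and 'labels'
            images = [img.to(device) for img in images]
            targets = [{k: v.to(device) for k, v in t.items()} for t in targets]
            losses = model(images, targets)      # classification + box regression losses
            loss = sum(losses.values())
            optimiser.zero_grad()
            loss.backward()
            optimiser.step()
    return model
```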


Author(s):  
Chandra Pal Kushwah

Image segmentation for applications such as scene understanding, medical image analysis, robotic vision, video tracking, augmented reality, and image compression is a key subject of image processing and image evaluation. Semantic segmentation is an integral aspect of image comprehension and is essential for image processing tasks, and it remains a complex problem in computer vision applications. Many techniques have been developed to tackle the issue, in fields ranging from self-driving cars, human-computer interaction, robotics, and medical science to agriculture. In a short period, satellite imagery can provide a great deal of large-scale knowledge about the earth's surface, saving time. With the growth and development of satellite image sensors, the resolution of recorded objects has improved alongside advanced image processing techniques. To improve the performance of deep learning models in a broad range of vision applications, important work has recently been carried out to evaluate deep learning approaches to image segmentation. This paper provides a detailed overview of image segmentation and describes its techniques: region-based, edge-based, feature-based, threshold-based, and model-based. It also covers semantic segmentation, satellite imagery, and deep learning and its techniques, such as DNN, CNN, RNN, RBM, and so on. Among these, the CNN is one of the most efficient deep learning techniques and can be used with the U-Net model in further work, as sketched below.
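Since the paper points to CNNs combined with the U-Net model as the direction for further work, a minimal U-Net-style network is sketched here. PyTorch is assumed as the framework, and the depth and channel widths are illustrative only, not values taken from the paper.

```python
# Minimal U-Net sketch: one encoder level, one bottleneck, one decoder level with a skip connection.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    def __init__(self, in_ch=3, num_classes=2):
        super().__init__()
        self.enc = conv_block(in_ch, 32)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(32, 64)
        self.up = nn.ConvTranspose2d(64, 32, kernel_size=2, stride=2)
        self.dec = conv_block(64, 32)          # 64 = 32 (upsampled) + 32 (skip connection)
        self.head = nn.Conv2d(32, num_classes, kernel_size=1)

    def forward(self, x):
        e = self.enc(x)                        # encoder features kept for the skip connection
        b = self.bottleneck(self.pool(e))
        d = self.dec(torch.cat([self.up(b), e], dim=1))
        return self.head(d)                    # per-pixel class logits

logits = TinyUNet()(torch.randn(1, 3, 128, 128))   # -> shape (1, 2, 128, 128)
```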


Author(s):  
Gerardo Cazzato ◽  
Anjali Oak ◽  
Asim Mustafa Khan ◽  
Jayesh

Aims: The aim of the study is to justify the need for deep learning predictive models in obtaining molecular phenotypes of overall cancer survival. Study Design: The study is based on secondary qualitative data analysis through a systematic review. Methodology: A qualitative study was conducted to analyse the necessity of deep learning. It also addresses the need for deep learning models to obtain imaging of cancer cells. In the study, a detailed discussion of deep learning has been made. The analysis of the primary sources was obtained by evaluating the quality of the resources in the study. The study also comprises a thematic analysis that highlights the benefits of deep learning. The study is based on the analysis of 14 primary research-based articles, selected from 112 quantitative articles, and the structuring of a systematic review from the collected data. Results: The morphological and physiological changes that occur in cancerous cells have been clearly evaluated in the research. The results signify that predictions of cancer survival can be made by implementing deep learning. Technological advancements in the medical field can thus be improved with the help of the deep learning process. The study notes the advancements of deep learning models that are helpful in predicting cancer progression to determine survival rates. Conclusion: Deep learning is a process that is considered to be a subset of artificial intelligence. Deep learning programmes are meant to be performed for complex learning models. Although deep learning and image processing are distinct concepts, artificial intelligence brings the two together to ensure better performance in image processing. The need for deep learning models has become pervasive, and it helps to build a strong ground for cancer survival prediction.


2020 ◽  
Author(s):  
Dean Sumner ◽  
Jiazhen He ◽  
Amol Thakkar ◽  
Ola Engkvist ◽  
Esben Jannik Bjerrum

SMILES randomization, a form of data augmentation, has previously been shown to increase the performance of deep learning models compared to non-augmented baselines. Here, we propose a novel data augmentation method we call "Levenshtein augmentation", which considers local SMILES sub-sequence similarity between reactants and their respective products when creating training pairs. The performance of Levenshtein augmentation was tested using two state-of-the-art models: transformer and sequence-to-sequence based recurrent neural networks with attention. Levenshtein augmentation demonstrated increased performance over non-augmented data and data augmented with conventional SMILES randomization when used for training baseline models. Furthermore, Levenshtein augmentation seemingly results in what we define as attentional gain: an enhancement in the pattern recognition capabilities of the underlying network for molecular motifs.
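The exact Levenshtein augmentation procedure is not given in the abstract, but its two ingredients, SMILES randomization and reactant/product sub-sequence similarity, can be sketched as follows. RDKit is assumed for randomization, difflib's SequenceMatcher ratio stands in for a Levenshtein-style similarity score, and the selection heuristic at the end is purely illustrative.

```python
# Illustrative only: conventional SMILES randomization (the baseline augmentation mentioned
# above) plus a crude reactant/product sub-sequence similarity score.
from difflib import SequenceMatcher
from rdkit import Chem

def random_smiles(smiles: str, n: int = 5):
    """Generate n randomized (non-canonical) SMILES strings for one molecule."""
    mol = Chem.MolFromSmiles(smiles)
    return [Chem.MolToSmiles(mol, canonical=False, doRandom=True) for _ in range(n)]

def subsequence_similarity(reactant: str, product: str) -> float:
    """Ratio of matching SMILES sub-sequences between a reactant and its product string."""
    return SequenceMatcher(None, reactant, product).ratio()

# Hypothetical heuristic: for a training pair, keep the randomized reactant SMILES that
# best preserves local sub-sequences shared with the product string.
reactant, product = "CCO", "CC(=O)O"       # toy ethanol -> acetic acid example
best = max(random_smiles(reactant), key=lambda s: subsequence_similarity(s, product))
```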


2019 ◽  
Author(s):  
Mohammad Rezaei ◽  
Yanjun Li ◽  
Xiaolin Li ◽  
Chenglong Li

Introduction: The ability to discriminate among ligands binding to the same protein target in terms of their relative binding affinity lies at the heart of structure-based drug design. Any improvement in the accuracy and reliability of binding affinity prediction methods decreases the discrepancy between experimental and computational results.
Objectives: The primary objectives were to find the most relevant features affecting binding affinity prediction, to minimise manual feature engineering, and to improve the reliability of binding affinity prediction using efficient deep learning models by tuning the model hyperparameters.
Methods: The binding site of each target protein was represented as a grid box around its bound ligand. Both binary and distance-dependent occupancies were examined for how an atom affects its neighbor voxels in this grid. A combination of different features, including ANOLEA, ligand elements, and Arpeggio atom types, was used to represent the input. An efficient convolutional neural network (CNN) architecture, DeepAtom, was developed, trained, and tested on the PDBbind v2016 dataset. Additionally, an extended benchmark dataset was compiled to train and evaluate the models.
Results: The best DeepAtom model showed improved accuracy in binding affinity prediction on the PDBbind core subset (Pearson's R = 0.83) and outperforms recent state-of-the-art models in this field. In addition, when the DeepAtom model was trained on our proposed benchmark dataset, it yielded a higher correlation compared to the baseline, which confirms the value of our model.
Conclusions: The promising results for the predicted binding affinities are expected to pave the way for embedding deep learning models in virtual screening and rational drug design.
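A minimal sketch of the voxelization step described under Methods is given below: atoms are placed in a cubic grid around the ligand and voxels are filled with either binary or distance-dependent occupancy. The grid size, resolution, and decay function are assumptions for illustration, not DeepAtom's actual settings.

```python
# Sketch of the grid-box voxelization idea: binary vs. distance-dependent atom occupancy.
import numpy as np

def voxelize(coords, origin, grid=24, res=1.0, mode="binary", radius=1.5):
    """coords: (N, 3) atom coordinates; origin: corner of the grid box, shape (3,)."""
    box = np.zeros((grid, grid, grid), dtype=np.float32)
    ax = [origin[i] + (np.arange(grid) + 0.5) * res for i in range(3)]   # voxel centers per axis
    gx, gy, gz = np.meshgrid(ax[0], ax[1], ax[2], indexing="ij")
    voxel_centers = np.stack([gx, gy, gz], axis=-1)                      # (grid, grid, grid, 3)
    for atom in coords:
        d = np.linalg.norm(voxel_centers - atom, axis=-1)   # distance of every voxel to this atom
        if mode == "binary":
            occ = (d <= radius).astype(np.float32)           # hard 0/1 occupancy
        else:
            occ = np.exp(-(d / radius) ** 2)                 # smooth, distance-dependent occupancy
        box = np.maximum(box, occ)                           # keep the strongest contribution
    return box

ligand_atoms = np.random.rand(20, 3) * 10                    # toy coordinates (angstroms)
grid_binary = voxelize(ligand_atoms, origin=np.zeros(3), mode="binary")
```

In the DeepAtom setting, one such occupancy channel would be built per feature type (e.g., per Arpeggio atom type) and stacked as the CNN input.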


2020 ◽  
Author(s):  
Saeed Nosratabadi ◽  
Amir Mosavi ◽  
Puhong Duan ◽  
Pedram Ghamisi ◽  
Ferdinand Filip ◽  
...  

This paper provides a state-of-the-art investigation of advances in data science in emerging economic applications. The analysis covers novel data science methods in four classes: deep learning models, hybrid deep learning models, hybrid machine learning models, and ensemble models. Application domains include a wide and diverse range of economics research, from the stock market, marketing, and e-commerce to corporate banking and cryptocurrency. The PRISMA method, a systematic literature review methodology, was used to ensure the quality of the survey. The findings reveal that the trends follow the advancement of hybrid models, which, based on the accuracy metric, outperform other learning algorithms. It is further expected that the trends will converge toward the advancement of sophisticated hybrid deep learning models.

