Fine-tuning convolutional neural network with transfer learning for semantic segmentation of ground-level oilseed rape images in a field with high weed pressure

2019 ◽  
Vol 167 ◽  
pp. 105091 ◽  
Author(s):  
Alwaseela Abdalla ◽  
Haiyan Cen ◽  
Liang Wan ◽  
Reem Rashid ◽  
Haiyong Weng ◽  
...


2021 ◽ 
pp. 1-10
Author(s):  
Gayatri Pattnaik ◽  
Vimal K. Shrivastava ◽  
K. Parvathi

Pests are a major threat to the economic growth of a country. Applying pesticides is the easiest way to control pest infestations; however, excessive use of pesticides is hazardous to the environment. Recent advances in deep learning have paved the way for early detection and improved classification of pests in tomato plants, which will benefit farmers. This paper presents a comprehensive analysis of 11 state-of-the-art deep convolutional neural network (CNN) models under three configurations: transfer learning, fine-tuning, and scratch learning. Training in transfer learning and fine-tuning starts from pre-trained weights, whereas scratch learning starts from random weights. In addition, data augmentation was explored to improve performance. Our dataset consists of 859 tomato pest images from 10 categories. The results demonstrate that the highest classification accuracy, 94.87%, was achieved by the DenseNet201 model in the transfer learning configuration with data augmentation.
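
As a rough illustration of the three training configurations described in this abstract, the sketch below sets up a DenseNet201 classifier in tf.keras. The image size, classifier head, optimizer settings, and augmentation operations are illustrative assumptions, not the paper's exact setup.

```python
# Sketch of the three configurations (transfer learning, fine-tuning,
# scratch learning) for a DenseNet201 pest classifier, assuming TensorFlow 2.x.
import tensorflow as tf

NUM_CLASSES = 10          # 10 tomato pest categories
INPUT_SHAPE = (224, 224, 3)

def build_model(mode="transfer"):
    """mode: 'transfer' (frozen pre-trained base), 'finetune'
    (pre-trained base, all layers trainable), or 'scratch' (random init)."""
    weights = None if mode == "scratch" else "imagenet"
    base = tf.keras.applications.DenseNet201(
        include_top=False, weights=weights, input_shape=INPUT_SHAPE)
    base.trainable = (mode != "transfer")   # freeze the base only for transfer learning

    model = tf.keras.Sequential([
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dropout(0.3),
        tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
                  loss="categorical_crossentropy", metrics=["accuracy"])
    return model

# On-the-fly data augmentation (flips, rotations, zoom), as explored in the paper.
augment = tf.keras.Sequential([
    tf.keras.layers.RandomFlip("horizontal"),
    tf.keras.layers.RandomRotation(0.1),
    tf.keras.layers.RandomZoom(0.1),
])
```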


2021 ◽  
Vol 7 ◽  
pp. e560
Author(s):  
Ethan Ocasio ◽  
Tim Q. Duong

Background While there is no cure for Alzheimer’s disease (AD), early diagnosis and accurate prognosis of AD may enable or encourage lifestyle changes, neurocognitive enrichment, and interventions to slow the rate of cognitive decline. The goal of our study was to develop and evaluate a novel deep learning algorithm to predict mild cognitive impairment (MCI) to AD conversion at three years after diagnosis using longitudinal and whole-brain 3D MRI. Methods This retrospective study consisted of 320 normal cognition (NC), 554 MCI, and 237 AD patients. Longitudinal data included T1-weighted 3D MRI obtained at the initial presentation with a diagnosis of MCI and at 12-month follow-up. Whole-brain 3D MRI volumes were used without a priori segmentation of regional structural volumes or cortical thicknesses. MRIs of the AD and NC cohorts were used to train a deep learning classification model to obtain weights to be applied via transfer learning for prediction of MCI patient conversion to AD at three years post-diagnosis. Two transfer learning methods (zero-shot and fine-tuning) were evaluated. Three different convolutional neural network (CNN) architectures (sequential, residual bottleneck, and wide residual) were compared. Data were split into 75% for training and 25% for testing, with 4-fold cross validation. Prediction accuracy was evaluated using balanced accuracy. Heatmaps were generated. Results The sequential convolutional approach yielded slightly better performance than the residual-based architectures, the zero-shot transfer learning approach yielded better performance than fine-tuning, and the CNN using longitudinal data performed better than the CNN using a single-timepoint MRI in predicting MCI conversion to AD. The best CNN model for predicting MCI conversion to AD at three years after diagnosis yielded a balanced accuracy of 0.793. Heatmaps of the prediction model showed the regions most relevant to the network, including the lateral ventricles, periventricular white matter, and cortical gray matter. Conclusions This is the first convolutional neural network model using longitudinal and whole-brain 3D MRIs, without extracting regional brain volumes or cortical thicknesses, to predict future MCI to AD conversion at three years after diagnosis. This approach could enable early identification of patients who are likely to progress to AD and thus may lead to better management of the disease.
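
To make the longitudinal setup and the two transfer strategies concrete, here is a minimal tf.keras sketch of a sequential 3D CNN that takes two co-registered timepoints stacked as channels. The input shape, layer sizes, and training details are assumptions for illustration only, not the architecture reported in the paper.

```python
# Minimal sequential 3D CNN for two-timepoint (longitudinal) whole-brain MRI.
import tensorflow as tf

INPUT_SHAPE = (96, 112, 96, 2)   # (depth, height, width, 2 timepoints)

def build_3d_cnn():
    return tf.keras.Sequential([
        tf.keras.Input(shape=INPUT_SHAPE),
        tf.keras.layers.Conv3D(16, 3, activation="relu", padding="same"),
        tf.keras.layers.MaxPooling3D(2),
        tf.keras.layers.Conv3D(32, 3, activation="relu", padding="same"),
        tf.keras.layers.MaxPooling3D(2),
        tf.keras.layers.Conv3D(64, 3, activation="relu", padding="same"),
        tf.keras.layers.GlobalAveragePooling3D(),
        tf.keras.layers.Dense(1, activation="sigmoid"),   # AD vs NC
    ])

# Stage 1: train on AD vs NC scans to obtain the transferable weights.
source_model = build_3d_cnn()
source_model.compile(optimizer="adam", loss="binary_crossentropy",
                     metrics=["accuracy"])
# source_model.fit(ad_nc_volumes, ad_nc_labels, epochs=...)

# Stage 2a: zero-shot transfer -- apply the trained AD/NC classifier directly
# to MCI scans and read its output as a conversion score (no re-training).
# Stage 2b: fine-tuning -- continue training the same network on the MCI
# conversion labels, typically with a reduced learning rate.
```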


2021 ◽  
Vol 8 (1) ◽  
pp. 29
Author(s):  
Sandra Pozzer ◽  
Marcos Paulo Vieira de Souza ◽  
Bata Hena ◽  
Reza Khoshkbary Rezayiye ◽  
Setayesh Hesam ◽  
...  

This study investigates the semantic segmentation of common concrete defects using different imaging modalities. One pre-trained Convolutional Neural Network (CNN) model was trained via transfer learning and tested to detect concrete defect indications, such as cracks, spalling, and internal voids. The model’s performance was compared across datasets of visible, thermal, and fused images. The data were collected from four different concrete structures, and the datasets were built using four infrared cameras with different sensitivities and resolutions, with imaging campaigns conducted during autumn, summer, and winter. Although specific defects can be detected in monomodal images, the results demonstrate that a larger number of defect classes can be accurately detected using multimodal fused images with the same viewpoint and resolution as the single-sensor images.
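
The following sketch only illustrates one simple way to feed fused visible + thermal data to a per-pixel (semantic segmentation) network: channel-wise concatenation of co-registered images. The fusion strategy, network, and class list used in the study may differ; the tiny encoder-decoder, image size, and class names here are assumptions.

```python
# Channel-wise fusion of co-registered visible and thermal images, followed
# by a toy encoder-decoder to show the per-pixel output layout (TensorFlow 2.x).
import numpy as np
import tensorflow as tf

H, W = 256, 256
NUM_CLASSES = 4          # e.g. background, crack, spalling, internal void

visible = np.random.rand(1, H, W, 3).astype("float32")   # RGB image
thermal = np.random.rand(1, H, W, 1).astype("float32")   # thermal image
fused = np.concatenate([visible, thermal], axis=-1)       # shape (1, H, W, 4)

inputs = tf.keras.Input(shape=(H, W, 4))
x = tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu")(inputs)
x = tf.keras.layers.MaxPooling2D(2)(x)
x = tf.keras.layers.Conv2D(64, 3, padding="same", activation="relu")(x)
x = tf.keras.layers.UpSampling2D(2)(x)
outputs = tf.keras.layers.Conv2D(NUM_CLASSES, 1, activation="softmax")(x)
model = tf.keras.Model(inputs, outputs)    # per-pixel class probabilities

print(model(fused).shape)                  # (1, 256, 256, 4)
```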


2021 ◽  
Vol 5 (2) ◽  
pp. 81-91
Author(s):  
Elok Iedfitra Haksoro ◽  
Abas Setiawan

Not all mushrooms are edible, because some are poisonous. Edible and poisonous mushrooms can be distinguished by their morphological characteristics, such as shape, color, and texture. However, some poisonous mushrooms have morphological features very similar to those of edible mushrooms, which can lead to misidentification. This work aims to recognize edible or poisonous mushrooms using a deep learning approach, specifically Convolutional Neural Networks. Because the training process would take a long time, transfer learning was applied to accelerate it. Transfer learning uses an existing model as the base model of our neural network, transferring information from a related domain. Four base models were used: MobileNets, MobileNetV2, ResNet50, and VGG19. Each base model was subjected to several experimental scenarios, such as setting different learning rates for the pre-training and fine-tuning stages. The results show that a Convolutional Neural Network with transfer learning can recognize edible or poisonous mushrooms with more than 86% accuracy. Moreover, the best accuracy, 92.19%, was obtained with the MobileNetV2 base model using a learning rate of 0.00001 in the pre-training stage and 0.0001 in the fine-tuning stage.
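
A minimal tf.keras sketch of the two-stage scheme described here (pre-training a new head on a frozen MobileNetV2 base, then fine-tuning) is shown below. The learning rates follow the abstract; the image size, head layers, and unfreezing strategy are illustrative assumptions.

```python
# Two-stage transfer learning with a MobileNetV2 base (TensorFlow 2.x).
import tensorflow as tf

base = tf.keras.applications.MobileNetV2(
    include_top=False, weights="imagenet", input_shape=(224, 224, 3))

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(2, activation="softmax"),   # edible vs poisonous
])

# Stage 1: pre-training -- freeze the base, train only the new head.
base.trainable = False
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.00001),
              loss="categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=...)

# Stage 2: fine-tuning -- unfreeze the base and continue training.
base.trainable = True
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.0001),
              loss="categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=...)
```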


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Emre Kiyak ◽  
Gulay Unal

Purpose The paper addresses a tracking algorithm based on deep learning; four deep learning tracking models were developed and compared with each other to prevent collisions and to achieve target tracking in autonomous aircraft. Design/methodology/approach First, detection methods were used to locate the visual target, and then the tracking methods were examined. Four models were developed: deep convolutional neural networks (DCNN), deep convolutional neural networks with fine-tuning (DCNNFN), transfer learning with deep convolutional neural networks (TLDCNN) and fine-tuning deep convolutional neural networks with transfer learning (FNDCNNTL). Findings The training of DCNN took 9 min 33 s, and its accuracy was 84%. For DCNNFN, the training time was 4 min 26 s and the accuracy was 91%. The training of TLDCNN took 34 min 49 s and the accuracy was 95%. With FNDCNNTL, the training time was 34 min 33 s and the accuracy was nearly 100%. Originality/value Compared with results in the literature, which range from 89.4% to 95.6%, FNDCNNTL achieved better results.
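
To clarify how the four configurations named in this abstract might differ in practice, here is a compact tf.keras sketch. The backbone (VGG16), input size, and classifier head are assumptions not taken from the paper, and the mapping of each acronym to a weight-initialization/freezing scheme is a plausible reading of the naming, not a confirmed description of the authors' models.

```python
# One possible reading of DCNN / DCNNFN / TLDCNN / FNDCNNTL in tf.keras.
import tensorflow as tf

def build(pretrained, freeze_base, num_classes=2):
    base = tf.keras.applications.VGG16(
        include_top=False,
        weights="imagenet" if pretrained else None,
        input_shape=(224, 224, 3))
    base.trainable = not freeze_base
    return tf.keras.Sequential([
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])

dcnn     = build(pretrained=False, freeze_base=False)  # trained from scratch
dcnnfn   = build(pretrained=True,  freeze_base=False)  # fine-tune all layers
tldcnn   = build(pretrained=True,  freeze_base=True)   # transfer learning only
fndcnntl = build(pretrained=True,  freeze_base=True)   # transfer learning first,
# then unfreeze the base (fndcnntl.layers[0].trainable = True) and fine-tune.
```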


2022 ◽  
Vol 23 (1) ◽  
Author(s):  
Tulika Kakati ◽  
Dhruba K. Bhattacharyya ◽  
Jugal K. Kalita ◽  
Trina M. Norden-Krichmar

Abstract Background A limitation of traditional differential expression analysis on small datasets involves the possibility of false positives and false negatives due to sample variation. Considering the recent advances in deep learning (DL) based models, we wanted to expand the state-of-the-art in disease biomarker prediction from RNA-seq data using DL. However, application of DL to RNA-seq data is challenging due to the absence of appropriate labels and smaller sample sizes compared to the number of genes. Deep learning coupled with transfer learning can improve prediction performance on novel data by incorporating patterns learned from other related data. With the emergence of new disease datasets, biomarker prediction would be facilitated by having a generalized model that can transfer the knowledge of trained feature maps to the new dataset. To the best of our knowledge, there is no Convolutional Neural Network (CNN)-based model coupled with transfer learning to predict the significant upregulating (UR) and downregulating (DR) genes from both trained and untrained datasets. Results We implemented a CNN model, DEGnext, to predict UR and DR genes from gene expression data obtained from The Cancer Genome Atlas database. DEGnext uses biologically validated data along with logarithmic fold change values to classify differentially expressed genes (DEGs) as UR and DR genes. We applied transfer learning to our model to leverage the knowledge of trained feature maps to untrained cancer datasets. DEGnext’s results were competitive (ROC scores between 88% and 99%) with those of five traditional machine learning methods: Decision Tree, K-Nearest Neighbors, Random Forest, Support Vector Machine, and XGBoost. DEGnext was robust and effective in terms of transferring learned feature maps to facilitate classification of unseen datasets. Additionally, we validated that the predicted DEGs from DEGnext were mapped to significant Gene Ontology terms and pathways related to cancer. Conclusions DEGnext can classify DEGs into UR and DR genes from RNA-seq cancer datasets with high performance. This type of analysis, using biologically relevant fine-tuning data, may aid in the exploration of potential biomarkers and can be adapted for other disease datasets.
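
The sketch below illustrates only the general idea of transferring learned convolutional feature maps from one RNA-seq dataset to another; it is not the DEGnext implementation. The per-gene input layout, layer sizes, and three output classes (UR, DR, non-significant) are illustrative assumptions.

```python
# Generic transfer of learned conv feature maps between gene-expression
# datasets (TensorFlow 2.x): train on a source cancer dataset, freeze the
# convolutional layers, then fine-tune the rest on a target dataset.
import tensorflow as tf

N_SAMPLES = 64    # expression values per gene (one per patient sample)

def build():
    return tf.keras.Sequential([
        tf.keras.Input(shape=(N_SAMPLES, 1)),
        tf.keras.layers.Conv1D(16, 5, activation="relu", name="conv1"),
        tf.keras.layers.MaxPooling1D(2),
        tf.keras.layers.Conv1D(32, 5, activation="relu", name="conv2"),
        tf.keras.layers.GlobalMaxPooling1D(),
        tf.keras.layers.Dense(3, activation="softmax"),  # UR / DR / neither
    ])

source = build()
source.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# source.fit(source_genes, source_labels, epochs=...)

target = build()
target.set_weights(source.get_weights())        # transfer all learned weights
for layer in target.layers:
    if layer.name.startswith("conv"):
        layer.trainable = False                 # keep the learned feature maps fixed
target.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# target.fit(target_genes, target_labels, epochs=...)   # fine-tune the rest
```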


2021 ◽  
Vol 12 (25) ◽  
pp. 73
Author(s):  
Francesca Matrone ◽  
Massimo Martini

The growing availability of three-dimensional (3D) data, such as point clouds, coming from Light Detection and Ranging (LiDAR), Mobile Mapping Systems (MMSs) or Unmanned Aerial Vehicles (UAVs), provides the opportunity to rapidly generate 3D models to support the restoration, conservation, and safeguarding activities of cultural heritage (CH). The so-called scan-to-BIM process can, in fact, benefit from such data, and they can themselves be a source for further analyses or activities on the archaeological and built heritage. There are several ways to exploit this type of data, such as Historic Building Information Modelling (HBIM), mesh creation, rasterisation, classification, and semantic segmentation. The latter, referring to point clouds, is a trending topic not only in the CH domain but also in other fields like autonomous navigation, medicine or retail. Precisely in these sectors, the task of semantic segmentation has been mainly exploited and developed with artificial intelligence techniques. In particular, machine learning (ML) algorithms, and their deep learning (DL) subset, are increasingly applied and have established a solid state of the art in the last half-decade. However, applications of DL techniques on heritage point clouds are still scarce; therefore, we propose to tackle this framework within the built heritage field. Starting from some previous tests with the Dynamic Graph Convolutional Neural Network (DGCNN), in this contribution close attention is paid to: i) the investigation of fine-tuned models, used as a transfer learning technique; ii) the combination of external classifiers, such as Random Forest (RF), with the artificial neural network; and iii) the evaluation of the data augmentation results for the domain-specific ArCH dataset. Finally, after taking into account the main advantages and criticalities, considerations are made on the possibility of benefiting from this methodology also for non-programming or domain experts.

Highlights:

- Semantic segmentation of built heritage point clouds through deep neural networks can provide performances comparable to those of more consolidated state-of-the-art ML classifiers.
- Transfer learning approaches, such as fine-tuning, can considerably reduce computational time for CH domain-specific datasets, as well as improve metrics for some challenging categories (i.e. windows or mouldings).
- Data augmentation techniques do not significantly improve overall performances.
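
As a rough illustration of point (ii) above, combining deep features with an external Random Forest classifier, the sketch below feeds per-point embeddings to scikit-learn's RandomForestClassifier. The DGCNN backbone is represented only by a placeholder feature array; the feature dimensionality, class count, and split are assumptions, not the ArCH setup.

```python
# Random Forest on per-point deep features (placeholder for DGCNN embeddings).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report

N_POINTS, N_FEATURES, N_CLASSES = 10_000, 64, 10   # illustrative sizes

# Per-point embeddings that would come from the (fine-tuned) neural backbone.
deep_features = np.random.rand(N_POINTS, N_FEATURES)
labels = np.random.randint(0, N_CLASSES, size=N_POINTS)

split = int(0.8 * N_POINTS)
rf = RandomForestClassifier(n_estimators=100, n_jobs=-1, random_state=0)
rf.fit(deep_features[:split], labels[:split])       # RF trained on deep features
print(classification_report(labels[split:], rf.predict(deep_features[split:])))
```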

