Sequential transfer learning based on hierarchical clustering for improved performance in deep learning based food segmentation

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Mia S. N. Siemon ◽  
A. S. M. Shihavuddin ◽  
Gitte Ravn-Haren

Accurately segmenting foods in optical images is a challenging task, yet it is becoming possible with the help of recent advances in Deep Learning based solutions. Automated identification of food items opens up possibilities for useful applications such as nutrition intake monitoring. Given the large variation in food choices, Deep Learning based solutions still struggle to reach human-level accuracy. In this work, we propose a novel Sequential Transfer Learning method using Hierarchical Clustering. This approach simulates a step-by-step problem-solving framework based on clustering of similar types of foods. The proposed approach provides up to a 6% gain in accuracy compared with traditional network training and yields a more robust model that performs better on challenging unseen cases. As an application, the approach is also tested on segmenting foods in Danish schoolchildren's meals for dietary intake monitoring.
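
The coarse-to-fine idea can be sketched roughly as follows: cluster the food classes by similarity, then train the segmentation network stage by stage, from a few coarse clusters down to individual classes, carrying weights forward at each stage. The class names, feature vectors, and tiny encoder-decoder head below are hypothetical placeholders, not the authors' implementation.

```python
# A minimal sketch of sequential transfer learning guided by hierarchical clustering.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
import tensorflow as tf

# Hypothetical per-class descriptors (e.g., mean colour/texture features).
class_names = ["rice", "pasta", "broccoli", "peas", "chicken", "beef"]
class_features = np.random.rand(len(class_names), 16)

# Build a class hierarchy: coarse groups first, finer splits later.
tree = linkage(class_features, method="ward")
stages = [fcluster(tree, t=k, criterion="maxclust") for k in (2, 4, len(class_names))]

def build_head(n_out, base=None):
    """Tiny segmentation head; reuses encoder weights from the previous stage."""
    inp = tf.keras.Input((64, 64, 3))
    x = tf.keras.layers.Conv2D(16, 3, padding="same", activation="relu", name="enc")(inp)
    out = tf.keras.layers.Conv2D(n_out, 1, activation="softmax")(x)
    model = tf.keras.Model(inp, out)
    if base is not None:
        model.get_layer("enc").set_weights(base.get_layer("enc").get_weights())
    return model

model = None
for labels in stages:                       # train from coarse clusters to full classes
    model = build_head(n_out=labels.max(), base=model)
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
    # model.fit(images, relabel_masks(masks, labels), epochs=...)  # per-stage training (hypothetical data/helper)
```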

Diagnostics ◽  
2021 ◽  
Vol 11 (11) ◽  
pp. 2109
Author(s):  
Skandha S. Sanagala ◽  
Andrew Nicolaides ◽  
Suneet K. Gupta ◽  
Vijaya K. Koppula ◽  
Luca Saba ◽  
...  

Background and Purpose: Only 1–2% of internal carotid artery asymptomatic plaques are unstable as a result of >80% stenosis. Thus, unnecessary efforts can be saved if these plaques can be characterized and classified into symptomatic and asymptomatic using non-invasive B-mode ultrasound. Earlier plaque tissue characterization (PTC) methods were machine learning (ML)-based and used hand-crafted features, which yielded lower accuracy and reliability. The present study shows the role of transfer learning (TL)-based deep learning models for PTC. Methods: As pretrained weights were used in the supercomputer framework, we hypothesize that transfer learning (TL) provides improved performance compared with deep learning (DL). We applied 11 kinds of artificial intelligence (AI) models; 10 of them were augmented and optimized using TL approaches—a class of Atheromatic™ 2.0 TL (AtheroPoint™, Roseville, CA, USA) that consisted of (i–ii) Visual Geometric Group-16, 19 (VGG16, 19); (iii) Inception V3 (IV3); (iv–v) DenseNet121, 169; (vi) XceptionNet; (vii) ResNet50; (viii) MobileNet; (ix) AlexNet; and (x) SqueezeNet—plus one DL-based model, (xi) SuriNet, derived from UNet. We benchmarked the 11 AI models against our earlier deep convolutional neural network (DCNN) model. Results: The best performing TL model was MobileNet, with an accuracy and area-under-the-curve (AUC) pair of 96.10 ± 3% and 0.961 (p < 0.0001), respectively. In DL, DCNN was comparable to SuriNet, with accuracies of 95.66% and 92.7 ± 5.66%, and AUCs of 0.956 (p < 0.0001) and 0.927 (p < 0.0001), respectively. We validated the performance of the AI architectures with established biomarkers such as greyscale median (GSM), fractal dimension (FD), higher-order spectra (HOS), and visual heatmaps. We benchmarked against the previously developed Atheromatic™ 1.0 ML system and showed an improvement of 12.9%. Conclusions: TL is a powerful AI tool for PTC of plaques into symptomatic and asymptomatic classes.
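
The transfer-learning setup described above can be sketched, for the best-performing backbone (MobileNet), as an ImageNet-pretrained feature extractor with a new binary head for symptomatic vs. asymptomatic plaque. The input size, dropout rate, and training details below are assumptions, not the Atheromatic™ 2.0 configuration.

```python
# A minimal sketch of TL-based binary plaque classification with MobileNet.
import tensorflow as tf

base = tf.keras.applications.MobileNet(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False                      # keep pretrained ImageNet features frozen

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # probability of symptomatic plaque
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="binary_crossentropy",
              metrics=["accuracy", tf.keras.metrics.AUC()])
# model.fit(ultrasound_patches, labels, validation_split=0.1, epochs=...)  # hypothetical data
```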


Diagnostics ◽  
2020 ◽  
Vol 10 (6) ◽  
pp. 417 ◽  
Author(s):  
Mohammad Farukh Hashmi ◽  
Satyarth Katiyar ◽  
Avinash G Keskar ◽  
Neeraj Dhanraj Bokde ◽  
Zong Woo Geem

Pneumonia causes the death of around 700,000 children every year and affects 7% of the global population. Chest X-rays are primarily used for the diagnosis of this disease. However, examining chest X-rays is a challenging task even for a trained radiologist, and there is a need to improve diagnostic accuracy. In this work, an efficient model for the detection of pneumonia, trained on digital chest X-ray images, is proposed to aid radiologists in their decision-making process. A novel approach based on a weighted classifier is introduced, which combines the weighted predictions of state-of-the-art deep learning models such as ResNet18, Xception, InceptionV3, DenseNet121, and MobileNetV3 in an optimal way. This is a supervised learning approach in which the network prediction depends on the quality of the dataset used. Transfer learning is used to fine-tune the deep learning models to obtain higher training and validation accuracy. Partial data augmentation techniques are employed to enlarge the training dataset in a balanced way. The proposed weighted classifier outperforms all the individual models. Finally, the model is evaluated not only in terms of test accuracy but also in terms of AUC score. The final weighted classifier achieves a test accuracy of 98.43% and an AUC score of 99.76 on unseen data from the Guangzhou Women and Children’s Medical Center pneumonia dataset. Hence, the proposed model can be used for quick diagnosis of pneumonia and can aid radiologists in the diagnostic process.
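
A weighted classifier of this kind can be sketched as a convex combination of the individual models' predicted probabilities, with the weights chosen to maximize validation accuracy. The dummy prediction arrays and the Nelder-Mead weight search below are illustrative; the paper's own weight-optimisation procedure may differ.

```python
# A minimal sketch of a weighted ensemble over several fine-tuned models.
import numpy as np
from scipy.optimize import minimize

# Hypothetical validation-set pneumonia probabilities from five models
# (e.g., ResNet18, Xception, InceptionV3, DenseNet121, MobileNetV3).
rng = np.random.default_rng(0)
val_preds = [rng.random((200, 1)) for _ in range(5)]
val_labels = rng.integers(0, 2, size=(200, 1))

def ensemble(weights, preds):
    w = np.clip(weights, 0, None)
    w = w / w.sum()                          # normalize to a convex combination
    return sum(wi * p for wi, p in zip(w, preds))

def neg_accuracy(weights):
    pred = ensemble(weights, val_preds) > 0.5
    return -np.mean(pred == val_labels)

best = minimize(neg_accuracy, x0=np.ones(5) / 5, method="Nelder-Mead")
w_opt = np.clip(best.x, 0, None)
print("optimal weights:", w_opt / w_opt.sum())
```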


2020 ◽  
Vol 135 ◽  
pp. 244-248 ◽  
Author(s):  
Liansheng Wang ◽  
Yudi Jiao ◽  
Ying Qiao ◽  
Nianyin Zeng ◽  
Rongshan Yu

2021 ◽  
Author(s):  
Lidia Cleetus ◽  
Raji Sukumar ◽  
Hemalatha N

In this paper, a detection tool has been built for detecting and identifying crop diseases and pests at their earliest stage. For this, various deep learning architectures were experimented with to see which would help build a more accurate and efficient detection model. The architectures used in this study were a plain Convolutional Neural Network (CNN), VGG16, InceptionV3, and Xception. VGG16, InceptionV3, and Xception are pre-trained models based on the CNN architecture and follow the concept of transfer learning. Transfer learning is a technique that reuses what a model has previously learned on a base dataset and applies it to the dataset at hand; it is an efficient technique that gives rapid results and improved performance. Two plant datasets were used here, one for diseases and one for pests. The results of the algorithms were then compared; the most successful was the Xception model, which obtained 82.89 for disease and 77.9 for pests.
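
The transfer-learning recipe can be sketched with the best-performing backbone (Xception): first train only a new classification head on top of frozen ImageNet features, then fine-tune the whole network at a low learning rate. The number of classes, image size, and training schedule below are assumed placeholders, not the study's exact settings.

```python
# A minimal sketch of two-phase transfer learning with Xception.
import tensorflow as tf

NUM_CLASSES = 10  # assumed number of disease/pest categories

base = tf.keras.applications.Xception(
    weights="imagenet", include_top=False, input_shape=(299, 299, 3))
base.trainable = False                      # phase 1: reuse ImageNet features as-is

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=...)   # train only the new head

# Phase 2: unfreeze the backbone and fine-tune everything at a low learning rate.
base.trainable = True
model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=...)
```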


2021 ◽  
Vol 9 (12) ◽  
pp. 1408
Author(s):  
Liqian Wang ◽  
Shuzhen Fan ◽  
Yunxia Liu ◽  
Yongfu Li ◽  
Cheng Fei ◽  
...  

The ocean connects all continents and is an important space for human activities. Ship detection with electro-optical images has shown great potential due to the abundant imaging spectrum and, hence, strongly supports human activities in the ocean. A suitable imaging spectrum yields effective images in complex marine environments, which is the prerequisite for ship detection. This paper provides an overview of ship detection methods with electro-optical images in marine environments. Ship detection methods with sea–sky backgrounds include traditional and deep learning methods. Traditional ship detection methods comprise the following steps: preprocessing, sea–sky line (SSL) detection, region of interest (ROI) extraction, and identification. The use of deep learning is promising for ship detection; however, it requires a large amount of labeled data to build a robust model, and its targeted optimization for ship detection in marine environments is not yet sufficient.
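
The traditional pipeline outlined above (preprocessing, SSL detection, ROI extraction, identification) can be sketched as follows. The Gaussian/Canny/Hough choices, thresholds, and the band around the sea–sky line are illustrative assumptions, not a specific published method.

```python
# A simplified sketch of a traditional sea-sky-background ship detector.
import cv2
import numpy as np

def detect_ship_rois(image_bgr):
    # 1. Preprocessing: denoise and convert to greyscale.
    gray = cv2.cvtColor(cv2.GaussianBlur(image_bgr, (5, 5), 0), cv2.COLOR_BGR2GRAY)

    # 2. Sea-sky line detection: take the dominant near-horizontal line in the edge map.
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=100,
                            minLineLength=gray.shape[1] // 2, maxLineGap=20)
    ssl_y = gray.shape[0] // 2 if lines is None else int(np.median(lines[:, 0, [1, 3]]))

    # 3. ROI extraction: bright blobs in a band around the sea-sky line.
    band = gray[max(ssl_y - 40, 0): ssl_y + 80, :]
    _, mask = cv2.threshold(band, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    # 4. Identification would then classify each candidate box (not shown here).
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 30]

# boxes = detect_ship_rois(cv2.imread("scene.jpg"))   # hypothetical image path
```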


2022 ◽  
Vol 14 (2) ◽  
pp. 355
Author(s):  
Zhen Cheng ◽  
Guanying Huo ◽  
Haisen Li

Recognition and classification of underwater targets in side-scan sonar (SSS) images is a big challenge, owing to the strong speckle noise caused by seabed reverberation, which makes it difficult to extract discriminating, noise-free features of a target. Moreover, unlike classification of optical images, which can use a large dataset to train the classifier, classification of SSS images usually has to exploit a very small training dataset, which may cause classifier overfitting. Compared with traditional feature extraction methods using descriptors—such as Haar, SIFT, and LBP—deep learning-based methods are more powerful in capturing discriminating features. After pre-training on a large optical dataset, e.g., ImageNet, direct fine-tuning improves sonar image classification with a small SSS image dataset. However, because optical and sonar images have different statistical characteristics, transfer learning methods—e.g., fine-tuning—lack cross-domain adaptability and therefore cannot achieve very satisfactory results. In this paper, a multi-domain collaborative transfer learning (MDCTL) method with a multi-scale repeated attention mechanism (MSRAM) is proposed for improving the accuracy of underwater sonar image classification. In the MDCTL method, low-level characteristic similarity between SSS images and synthetic aperture radar (SAR) images, and high-level representation similarity between SSS images and optical images, are used together to enhance the feature extraction ability of the deep learning model. By using different characteristics of multi-domain data to efficiently capture useful features for sonar image classification, MDCTL offers a new way of transfer learning. MSRAM effectively combines multi-scale features so that the proposed model pays more attention to the shape details of the target while excluding noise. Classification experiments show that, when using multi-domain datasets, the proposed method is more stable, with an overall accuracy of 99.21%, an improvement of 4.54% over the fine-tuned VGG19. Results from diverse visualization methods also demonstrate that, with MDCTL and MSRAM, the method is more powerful in feature representation.
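
A rough illustration of the staged multi-domain idea: adapt the low-level layers on SAR imagery (whose speckle statistics are closer to sonar), keep the ImageNet-derived high-level representation, and then fine-tune the whole network on the small SSS set. This is only a simplified reading of the concept; it is not the authors' MDCTL/MSRAM implementation, and the class counts and learning rates are assumptions.

```python
# A simplified sketch of staged multi-domain transfer for SSS classification.
import tensorflow as tf

base = tf.keras.applications.VGG19(weights="imagenet", include_top=False,
                                   input_shape=(224, 224, 3))
head = tf.keras.Sequential([tf.keras.layers.GlobalAveragePooling2D(),
                            tf.keras.layers.Dense(4, activation="softmax")])  # assumed 4 target classes
model = tf.keras.Sequential([base, head])

# Stage 1: only the first two convolutional blocks learn from SAR images,
# moving low-level speckle/texture filters closer to sonar statistics.
for layer in base.layers:
    layer.trainable = layer.name.startswith(("block1", "block2"))
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="sparse_categorical_crossentropy")
# model.fit(sar_images, sar_labels, epochs=...)        # hypothetical SAR data

# Stage 2: unfreeze everything and fine-tune on the small SSS dataset.
base.trainable = True
model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
              loss="sparse_categorical_crossentropy")
# model.fit(sss_images, sss_labels, epochs=...)        # hypothetical SSS data
```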


Author(s):  
Zinah Mohsin Arkah ◽  
Dalya S. Al-Dulaimi ◽  
Ahlam R. Khekan

Skin cancer is an example of the most dangerous diseases. Early diagnosis of skin cancer can save many people’s lives. Manual classification methods are time-consuming and costly. Deep learning has been proposed for the automated classification of skin cancer. Although deep learning has shown impressive performance in several medical imaging tasks, it requires a large number of images to perform well. The skin cancer classification task suffers from a shortage of data for deep learning because annotation is expensive and requires experts. One of the most widely used solutions is transfer learning with models pre-trained on the ImageNet dataset. However, the features learned by pre-trained models differ from skin cancer image features. To this end, we introduce a novel transfer learning approach: we first train ImageNet pre-trained models (VGG, GoogleNet, and ResNet50) on a large number of unlabelled skin cancer images, and then train them on a small number of labelled skin images. Our experimental results show that the proposed method is effective: ResNet50 achieves an accuracy of 84% when trained directly on the small labelled set and 93.7% when trained with the proposed approach.
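
One plausible reading of this two-step scheme is sketched below with a self-supervised rotation-prediction pretext task on the unlabelled skin images, followed by supervised fine-tuning on the labelled set; the paper may use a different unsupervised objective, and the helper, class count, and hyperparameters are hypothetical.

```python
# A sketch of unlabelled pretraining followed by supervised fine-tuning.
import tensorflow as tf

backbone = tf.keras.applications.ResNet50(weights="imagenet", include_top=False,
                                          pooling="avg", input_shape=(224, 224, 3))

# Step 1: adapt ImageNet features to skin images without labels by predicting
# which of four rotations (0/90/180/270 degrees) was applied to each image.
pretext = tf.keras.Sequential([backbone, tf.keras.layers.Dense(4, activation="softmax")])
pretext.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
                loss="sparse_categorical_crossentropy")
# rotated, rot_labels = make_rotations(unlabelled_images)   # hypothetical helper
# pretext.fit(rotated, rot_labels, epochs=...)

# Step 2: fine-tune the adapted backbone on the small labelled set.
classifier = tf.keras.Sequential([backbone,
                                  tf.keras.layers.Dense(2, activation="softmax")])  # benign vs. malignant (assumed)
classifier.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
                   loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# classifier.fit(labelled_images, labels, epochs=...)
```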


2020 ◽  
Author(s):  
Pathikkumar Patel ◽  
Bhargav Lad ◽  
Jinan Fiaidhi

During the last few years, RNN models have been used extensively and have proven well suited to sequence and text data. RNNs have achieved state-of-the-art performance in several applications such as text classification, sequence-to-sequence modelling, and time series forecasting. In this article we review different Machine Learning and Deep Learning based approaches for text data and look at the results obtained with these methods. This work also explores the use of transfer learning in NLP and how it affects model performance on a specific application: sentiment analysis.
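
Transfer learning for sentiment analysis, in the spirit of the review above, can be sketched as fine-tuning a pretrained language model as a binary sentiment classifier. The model name, toy texts, and hyperparameters below are illustrative choices, not those evaluated in the article.

```python
# A minimal sketch of fine-tuning a pretrained transformer for sentiment analysis.
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = TFAutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2)          # positive vs. negative

texts = ["the plot was gripping", "a dull and forgettable film"]   # toy examples
labels = tf.constant([1, 0])
enc = tokenizer(texts, padding=True, truncation=True, return_tensors="tf")

model.compile(optimizer=tf.keras.optimizers.Adam(2e-5),
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))
model.fit(dict(enc), labels, epochs=1)                # fine-tune the pretrained weights
```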

