Pneumonia Identification with Self-supervised Learning and Transfer Learning

Author(s):  
Yuting Long
Electronics ◽  
2021 ◽  
Vol 10 (15) ◽  
pp. 1807
Author(s):  
Sascha Grollmisch ◽  
Estefanía Cano

Including unlabeled data in the training process of neural networks using Semi-Supervised Learning (SSL) has shown impressive results in the image domain, where state-of-the-art results were obtained with only a fraction of the labeled data. A commonality of recent SSL methods is that they rely strongly on the augmentation of unannotated data, which remains largely unexplored for audio data. In this work, SSL using the state-of-the-art FixMatch approach is evaluated on three audio classification tasks, covering music, industrial sounds, and acoustic scenes. The performance of FixMatch is compared to Convolutional Neural Networks (CNNs) trained from scratch, Transfer Learning, and SSL using the Mean Teacher approach. Additionally, a simple yet effective approach for selecting suitable augmentation methods for FixMatch is introduced. FixMatch with the proposed modifications consistently outperformed Mean Teacher and the CNNs trained from scratch. For the industrial sounds and music datasets, the CNN baseline performance obtained with the full dataset was reached with less than 5% of the initial training data, demonstrating the potential of recent SSL methods for audio data. Transfer Learning outperformed FixMatch only on the most challenging dataset, acoustic scene classification, showing that there is still room for improvement.
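As a rough illustration of the FixMatch mechanism referenced above, the sketch below shows the unlabeled-data loss in PyTorch: a weakly augmented view is pseudo-labeled when the model is sufficiently confident, and the model is then trained to predict that pseudo-label on a strongly augmented view. The model, augmentation pipeline, and confidence threshold are placeholders, not the paper's implementation.

```python
# Minimal sketch of the FixMatch loss on unlabeled audio examples.
import torch
import torch.nn.functional as F

def fixmatch_unlabeled_loss(model, x_weak, x_strong, threshold=0.95):
    """Pseudo-label weakly augmented inputs, train on strongly augmented ones."""
    with torch.no_grad():
        probs = F.softmax(model(x_weak), dim=1)        # predictions on the weak view
        confidence, pseudo_labels = probs.max(dim=1)   # per-sample confidence
        mask = (confidence >= threshold).float()       # keep only confident samples

    logits_strong = model(x_strong)                    # predictions on the strong view
    per_sample = F.cross_entropy(logits_strong, pseudo_labels, reduction="none")
    return (per_sample * mask).mean()                  # masked consistency loss
```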


2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Sebastian Otálora ◽  
Niccolò Marini ◽  
Henning Müller ◽  
Manfredo Atzori

Background One challenge in training deep convolutional neural network (CNN) models with whole slide images (WSIs) is providing the large number of costly, manually annotated image regions required. Strategies to alleviate the scarcity of annotated data include transfer learning, data augmentation, and training the models with less expensive image-level annotations (weakly-supervised learning). However, it is not clear how to combine transfer learning in a CNN model when different data sources are available for training, or how to leverage the combination of large amounts of weakly annotated images with a set of local region annotations. This paper aims to evaluate CNN training strategies based on transfer learning that leverage the combination of weak and strong annotations in heterogeneous data sources. The trade-off between classification performance and annotation effort is explored by evaluating a CNN that learns from strong labels (region annotations) and is later fine-tuned on a dataset with less expensive weak (image-level) labels. Results As expected, the model performance on strongly annotated data steadily increases with the percentage of strong annotations used, reaching a performance comparable to pathologists (κ = 0.691 ± 0.02). Nevertheless, the performance drops sharply when the model is applied to the WSI classification scenario (κ = 0.307 ± 0.133), and remains lower regardless of the number of annotations used. The model performance increases when fine-tuning the model for the task of Gleason scoring with the weak WSI labels (κ = 0.528 ± 0.05). Conclusion Combining weak and strong supervision improves over strong supervision alone in the classification of Gleason patterns using tissue microarrays (TMA) and WSI regions. Our results suggest effective strategies for training CNN models that combine small amounts of annotated data with heterogeneous data sources. In the controlled TMA scenario, performance increases with the number of annotations used to train the model. Nevertheless, the performance is hindered when the trained TMA model is applied directly to the more challenging WSI classification problem. This demonstrates that a good pre-trained model for prostate cancer TMA image classification may lead to the best downstream model if fine-tuned on the WSI target dataset. The source code for reproducing the experiments in the paper is available at: https://github.com/ilmaro8/Digital_Pathology_Transfer_Learning
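A hedged sketch of the two-stage strategy described above: a CNN trained on strongly annotated regions is reused and fine-tuned with weak, slide-level Gleason labels. The checkpoint path, class count, and optimizer settings below are illustrative assumptions rather than details from the paper's repository.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_GLEASON_CLASSES = 4  # assumption for illustration

# Stage 1 result: a ResNet trained on strong region (patch) annotations.
model = models.resnet34(weights=None)
model.fc = nn.Linear(model.fc.in_features, NUM_GLEASON_CLASSES)
model.load_state_dict(torch.load("strongly_supervised_patches.pt"))  # hypothetical checkpoint

# Stage 2: fine-tune on weakly labeled WSI data with a low learning rate
# so the strongly supervised features are largely preserved.
optimizer = torch.optim.SGD(model.parameters(), lr=1e-4, momentum=0.9)
criterion = nn.CrossEntropyLoss()

def finetune_step(weak_batch, weak_labels):
    optimizer.zero_grad()
    loss = criterion(model(weak_batch), weak_labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```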


Images generated from a variety of sources today can be difficult to compare or analyze further because of differences in how they are segmented. These inconsistencies introduce errors that make traditional supervised learning approaches less effective, since such approaches require large quantities of labelled training data that closely mirror the target data. This paper therefore proposes transfer learning as an alternative technique for image diagnosis, so that efficiency and accuracy can be achieved. Transfer learning handles the mismatch between the training data and the target data as well as sensitivity to outliers, and thereby yields better predictions than traditional methods in a range of settings. The analysis discusses three transfer classifiers that can be trained with only small volumes of training data, contrasting them with the traditional approach, which requires large amounts of training data whose attributes differ only slightly from the target. The three classifiers were compared with one another and with the traditional method on a common everyday application. Recurring problems such as outlier sensitivity were also examined and mitigated. The results show that, when only a small amount of representative training data is available, transfer learning outperforms conventional supervised learning approaches and reduces stratification errors to a great extent.
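Since the abstract does not name the three transfer classifiers, the following is only a generic sketch of transfer learning with a small labeled target set: a network pre-trained on a large source dataset is frozen as a feature extractor, and a lightweight classifier head is fitted on the few target labels available.

```python
import torch.nn as nn
from torchvision import models

# Backbone pre-trained on a large source dataset (ImageNet weights as a stand-in).
backbone = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
for p in backbone.parameters():
    p.requires_grad = False                               # freeze source-domain features

backbone.fc = nn.Linear(backbone.fc.in_features, 5)       # small target task, e.g. 5 classes
# Only backbone.fc.parameters() are optimized on the small target dataset.
```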


2011 ◽  
Vol 09 (04) ◽  
pp. 521-540 ◽  
Author(s):  
REIJI TERAMOTO ◽  
TSUYOSHI KATO

In the drug discovery process, the metabolic fate of drugs is crucially important for preventing drug–drug interactions. Therefore, P450 isozyme selectivity prediction is an important task for screening drugs with appropriate metabolism profiles. Recently, large-scale activity data for five P450 isozymes (CYP1A2, CYP2C9, CYP3A4, CYP2D6, and CYP2C19) have been obtained using quantitative high-throughput screening with a bioluminescence assay. Although some isozymes share similar selectivities, conventional supervised learning algorithms independently learn a prediction model for each P450 isozyme. They are therefore unable to exploit the other P450 isozymes' activity data to improve the predictive performance for each isozyme's selectivity. To address this issue, we apply transfer learning that uses the activity data of the other isozymes to learn a prediction model from multiple P450 isozymes. Using the large-scale selectivity dataset for the five P450 isozymes, we evaluate the model's predictive performance. Experimental results show that, overall, our algorithm outperforms conventional supervised learning algorithms such as the support vector machine (SVM), weighted k-nearest neighbor classifier, bagging, AdaBoost, and latent semantic indexing (LSI). Moreover, our results show that the predictive performance of our algorithm improves by exploiting the activity data of multiple P450 isozymes in the learning process. Our algorithm can be an effective tool for P450 selectivity prediction of new chemical entities using multiple P450 isozyme activity data.
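As one hedged illustration of sharing information across isozymes, the sketch below uses a shared representation with a separate output head per P450 isozyme, so that activity data from all five isozymes shape the shared features. This is a generic multi-task stand-in, not the algorithm proposed in the paper; the descriptor size is an assumption.

```python
import torch.nn as nn

ISOZYMES = ["CYP1A2", "CYP2C9", "CYP3A4", "CYP2D6", "CYP2C19"]
N_DESCRIPTORS = 1024  # assumed size of the compound fingerprint

class SharedIsozymeModel(nn.Module):
    def __init__(self):
        super().__init__()
        # Representation shared across all five isozymes.
        self.shared = nn.Sequential(nn.Linear(N_DESCRIPTORS, 256), nn.ReLU())
        # One output head per isozyme (active / inactive logits).
        self.heads = nn.ModuleDict({iso: nn.Linear(256, 2) for iso in ISOZYMES})

    def forward(self, x, isozyme):
        return self.heads[isozyme](self.shared(x))
```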


2020 ◽  
Author(s):  
Xingyi Yang ◽  
Xuehai He ◽  
Yuxiao Liang ◽  
Yue Yang ◽  
Shanghang Zhang ◽  
...  

Pretraining has become a standard technique in computer vision and natural language processing, and it usually improves performance substantially. Previously, the dominant pretraining method was transfer learning (TL), which uses labeled data to learn a good representation network. Recently, a new pretraining approach -- self-supervised learning (SSL) -- has demonstrated promising results on a wide range of applications. SSL does not require annotated labels; it is conducted purely on the input data by solving auxiliary tasks defined on the data examples themselves. Currently reported results show that SSL outperforms TL in some applications and the other way around in others. There has not been a clear understanding of which properties of data and tasks make one approach outperform the other. Without an informed guideline, ML researchers have to try both methods to find out empirically which one is better, which is usually time-consuming. In this work, we aim to address this problem. We perform a comprehensive comparative study between SSL and TL regarding which one works better under different properties of data and tasks, including the domain difference between source and target tasks, the amount of pretraining data, class imbalance in the source data, and the usage of target data for additional pretraining. The insights distilled from our comparative study can help ML researchers decide which method to use based on the properties of their applications.
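A minimal sketch of the comparison protocol described above: the same backbone is initialized from either a supervised (TL) checkpoint or a self-supervised (SSL) checkpoint and then fine-tuned identically, so any gap in target accuracy can be attributed to the pretraining method. The checkpoint paths are placeholders.

```python
import torch
from torchvision import models

def build_pretrained(kind):
    """Return a ResNet-50 initialized from a TL or an SSL checkpoint (paths are placeholders)."""
    model = models.resnet50(weights=None)
    if kind == "transfer_learning":
        ckpt = "supervised_imagenet.pt"   # pretraining with labeled source data (TL)
    elif kind == "self_supervised":
        ckpt = "simclr_imagenet.pt"       # label-free pretraining (SSL), e.g. SimCLR-style
    else:
        raise ValueError(kind)
    model.load_state_dict(torch.load(ckpt), strict=False)
    return model

# Both models then get the identical fine-tuning schedule on the target task,
# so any difference in accuracy reflects the pretraining method alone.
tl_model = build_pretrained("transfer_learning")
ssl_model = build_pretrained("self_supervised")
```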


2021 ◽  
Author(s):  
Ruhollah Taghizadeh-Mehrjardi ◽  
Razieh Sheikhpour ◽  
Norair Toomanian ◽  
Thomas Scholten

The most critical issue in the application of digital soil mapping is its limited transferability. Modelling soil properties for regions where no or only sparse soil information is available is highly uncertain when using low-cost geo-spatial environmental covariates alone. To overcome this drawback, transfer learning has been introduced in different environmental sciences, including soil science. The general idea behind extrapolating soil information with transfer learning is that the target area is similar to the reference area, e.g. in terms of soil-forming factors, so that the same machine learning rules can be applied. So far, supervised machine learning has been used to transfer soil information from reference areas to target areas with very similar environmental characteristics. Hence, it is unclear how machine learning performs for target regions with different environmental characteristics. Semi-supervised learning is an approach to machine learning that combines a small amount of labeled data (the reference area) with a large amount of unlabeled data (the target area) during training. In this study, we explored whether semi-supervised learning could improve the transferability of digital soil mapping relative to supervised learning methods. Soil data for two arid regions and the associated environmental covariates were obtained. Semi-supervised and supervised learning models were trained on the data from the reference area and then tested on the data from the target area. The results of this study indicate the higher power of semi-supervised learning for transferring soil information from one area to another in comparison to the supervised learning method.
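A hedged sketch of the semi-supervised setup described above, using scikit-learn's self-training wrapper: covariates from the reference area carry soil labels, while covariates from the target area enter the fit unlabeled (marked with -1). The variable names, placeholder data, and base learner are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.semi_supervised import SelfTrainingClassifier

# X_ref, y_ref: environmental covariates and soil classes in the reference area
# X_target: covariates in the target area, with no soil observations available
X_ref, y_ref = np.random.rand(200, 10), np.random.randint(0, 4, 200)   # placeholder data
X_target = np.random.rand(500, 10)                                      # placeholder data

X_all = np.vstack([X_ref, X_target])
y_all = np.concatenate([y_ref, -np.ones(len(X_target), dtype=int)])     # -1 marks unlabeled

model = SelfTrainingClassifier(RandomForestClassifier(n_estimators=200))
model.fit(X_all, y_all)                       # labeled + unlabeled samples in one fit
target_soil_classes = model.predict(X_target)
```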


2021 ◽  
Vol 12 (5) ◽  
pp. 1-19
Author(s):  
Xingjian Li ◽  
Haoyi Xiong ◽  
Zeyu Chen ◽  
Jun Huan ◽  
Cheng-Zhong Xu ◽  
...  

Ensemble learning is a widely used technique to train deep convolutional neural networks (CNNs) for improved robustness and accuracy. While existing algorithms usually first train multiple diversified networks and then assemble them into an aggregated classifier, we propose a novel learning paradigm, namely, “In-Network Ensemble” (INE), that incorporates the diversity of multiple models by training a single deep neural network. Specifically, INE segments the outputs of the CNN into multiple independent classifiers, where each classifier is further fine-tuned for better accuracy through a so-called diversified knowledge distillation process. We then aggregate the fine-tuned independent classifiers using an Averaging-and-Softmax operator to obtain the final ensemble classifier. Note that in the supervised learning setting, INE starts CNN training from random initialization, while under the transfer learning setting it can also start from a pre-trained model to incorporate the knowledge learned from additional datasets. Extensive experiments have been conducted using eight large-scale real-world datasets, including CIFAR, ImageNet, and Stanford Cars, among others, as well as common deep network architectures such as VGG, ResNet, and Wide ResNet. We have evaluated the method under two tasks: supervised learning and transfer learning. The results show that INE outperforms the state-of-the-art algorithms for deep ensemble learning with improved accuracy.
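A hedged sketch of the In-Network Ensemble idea as stated in the abstract: a single backbone emits one wide output vector, which is segmented into several independent classifier heads whose outputs are aggregated with an averaging-and-softmax step. The backbone, head count, and class count are assumptions, and the diversified knowledge distillation stage is omitted.

```python
import torch
import torch.nn.functional as F
from torchvision import models

NUM_CLASSES, NUM_HEADS = 10, 4   # illustrative values, e.g. CIFAR-10 with 4 in-network heads

# A single network whose last layer is wide enough to hold NUM_HEADS classifiers.
model = models.resnet18(weights=None)
model.fc = torch.nn.Linear(model.fc.in_features, NUM_CLASSES * NUM_HEADS)

def ensemble_predict(x):
    out = model(x)                                  # (batch, NUM_CLASSES * NUM_HEADS)
    heads = out.view(-1, NUM_HEADS, NUM_CLASSES)    # segment into independent classifiers
    return F.softmax(heads.mean(dim=1), dim=1)      # averaging-and-softmax aggregation
```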


2021 ◽  
Vol 11 (7) ◽  
pp. 3043
Author(s):  
Sungho Shin ◽  
Jongwon Kim ◽  
Yeonguk Yu ◽  
Seongju Lee ◽  
Kyoobin Lee

We propose the implementation of transfer learning from natural images to audio-based images using self-supervised learning schemes. Through self-supervised learning, convolutional neural networks (CNNs) can learn a general representation of natural images without labels. In this study, a convolutional neural network was pre-trained with natural images (ImageNet) via self-supervised learning; subsequently, it was fine-tuned on the target audio samples. Pre-training with the self-supervised learning scheme significantly improved sound classification performance when validated on the following benchmarks: ESC-50, UrbanSound8k, and GTZAN. The network pre-trained via self-supervised learning achieved a level of accuracy similar to networks pre-trained with a supervised method that requires labels. Therefore, we demonstrate that transfer learning from natural images contributes to improvements in audio-related tasks, and that self-supervised learning with natural images is an adequate pre-training scheme in terms of simplicity and effectiveness.
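A hedged sketch of the pipeline described above: a ResNet pre-trained on ImageNet with a self-supervised method is fine-tuned on log-mel spectrograms of the target audio, treated as images. The checkpoint path, number of classes, and spectrogram settings are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torchaudio
from torchvision import models

NUM_CLASSES = 50  # e.g. ESC-50

model = models.resnet50(weights=None)
model.load_state_dict(torch.load("ssl_imagenet_resnet50.pt"), strict=False)  # hypothetical SSL checkpoint
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)  # new audio classification head

to_mel = torchaudio.transforms.MelSpectrogram(sample_rate=44100, n_mels=128)

def audio_to_image(waveform):
    """Turn a mono waveform into a 3-channel log-mel 'image' for the CNN."""
    mel = torch.log(to_mel(waveform) + 1e-6)   # (1, n_mels, time)
    return mel.repeat(3, 1, 1)                 # replicate channels to match RGB input
```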

