A Data-Efficient Approach for Automated Classification of OCT Images Using Generative Adversarial Network

2020 ◽  
Vol 4 (1) ◽  
pp. 1-4
Author(s):  
Vineeta Das ◽  
Samarendra Dandapat ◽  
Prabin Kumar Bora
Information ◽  
2021 ◽  
Vol 12 (6) ◽  
pp. 249
Author(s):  
Xin Jin ◽  
Yuanwen Zou ◽  
Zhongbing Huang

The cell cycle is an important process in cellular life. In recent years, some image processing methods have been developed to determine the cell cycle stages of individual cells. However, in most of these methods, cells have to be segmented and their features extracted, and during feature extraction some important information may be lost, resulting in lower classification accuracy. We therefore used a deep learning method to retain all cell features. To address the insufficient number and imbalanced distribution of original images, we used the Wasserstein generative adversarial network with gradient penalty (WGAN-GP) for data augmentation. At the same time, a residual network (ResNet), one of the most widely used deep learning classification networks, was used for image classification. Our method classified cell cycle images more effectively, reaching an accuracy of 83.88%, an increase of 4.48% over the 79.40% accuracy of previous experiments. Another dataset was used to verify the model, and accuracy increased by 12.52% compared with previous results. The results show that our cell cycle image classification system based on WGAN-GP and ResNet is useful for the classification of imbalanced images. Moreover, our method could potentially address the low classification accuracy in biomedical images caused by an insufficient number and imbalanced distribution of original images.
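As a rough illustration of the augmentation approach described above, the sketch below shows the gradient penalty term that distinguishes WGAN-GP from a standard WGAN, in PyTorch. The critic network, image shapes, and the penalty weight `lambda_gp` are placeholders, not the authors' exact settings.

```python
# Minimal WGAN-GP gradient penalty sketch (assumed 4-D image batches).
import torch

def gradient_penalty(critic, real, fake, lambda_gp=10.0):
    """Penalize the critic's gradient norm on samples interpolated between real and fake images."""
    batch_size = real.size(0)
    # One random interpolation coefficient per sample
    eps = torch.rand(batch_size, 1, 1, 1, device=real.device)
    interpolated = (eps * real + (1.0 - eps) * fake).requires_grad_(True)
    scores = critic(interpolated)
    grads = torch.autograd.grad(outputs=scores, inputs=interpolated,
                                grad_outputs=torch.ones_like(scores),
                                create_graph=True, retain_graph=True)[0]
    grads = grads.view(batch_size, -1)
    # (||grad||_2 - 1)^2, averaged over the batch
    return lambda_gp * ((grads.norm(2, dim=1) - 1.0) ** 2).mean()
```

The penalty is added to the critic loss at each training step; the augmented images produced by the trained generator would then feed the ResNet classifier.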


Author(s):  
Cara Murphy ◽  
John Kerekes

The classification of trace chemical residues through active spectroscopic sensing is challenging due to the lack of physics-based models that can accurately predict spectra. To overcome this challenge, we leveraged the field of domain adaptation to translate data from the simulated to the measured domain for training a classifier. We developed the first 1D conditional generative adversarial network (GAN) to perform spectrum-to-spectrum translation of reflectance signatures. We applied the 1D conditional GAN to a library of simulated spectra and quantified the improvement in classification accuracy on real data using the translated spectra for training the classifier. Using the GAN-translated library, the average classification accuracy increased from 0.622 to 0.723 on real chemical reflectance data, including data from chemicals not included in the GAN training set.
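A minimal sketch of what a 1D spectrum-to-spectrum generator could look like is given below, assuming fixed-length reflectance spectra. The layer sizes, residual formulation, and class name are illustrative assumptions, not the authors' architecture.

```python
# Illustrative 1-D generator: maps a simulated spectrum to the measured domain.
import torch
import torch.nn as nn

class SpectrumTranslator(nn.Module):
    """Translate a (batch, 1, length) simulated spectrum to a same-length output spectrum."""
    def __init__(self, channels=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, channels, kernel_size=9, padding=4),
            nn.ReLU(inplace=True),
            nn.Conv1d(channels, channels, kernel_size=9, padding=4),
            nn.ReLU(inplace=True),
            nn.Conv1d(channels, 1, kernel_size=9, padding=4),
        )

    def forward(self, simulated):
        # Predict a correction to the simulated spectrum (residual formulation)
        return simulated + self.net(simulated)

# Example: translate a batch of 8 simulated spectra of length 256
translated = SpectrumTranslator()(torch.randn(8, 1, 256))
```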


2020 ◽  
Vol 13 (6) ◽  
pp. 2949-2964
Author(s):  
Jussi Leinonen ◽  
Alexis Berne

Abstract. The increasing availability of sensors imaging cloud and precipitation particles, like the Multi-Angle Snowflake Camera (MASC), has resulted in datasets comprising millions of images of falling snowflakes. Automated classification is required for effective analysis of such large datasets. While supervised classification methods have been developed for this purpose in recent years, their ability to generalize is limited by the representativeness of their labeled training datasets, which are affected by the subjective judgment of the expert and require significant manual effort to derive. An alternative is unsupervised classification, which seeks to divide a dataset into distinct classes without expert-provided labels. In this paper, we introduce an unsupervised classification scheme based on a generative adversarial network (GAN) that learns to extract the key features from the snowflake images. Each image is then associated with a distribution of points in the feature space, and these distributions are used as the basis of K-medoids classification and hierarchical clustering. We found that the classification scheme is able to separate the dataset into distinct classes, each characterized by a particular size, shape and texture of the snowflake image, providing signatures of the microphysical properties of the snowflakes. This finding is supported by a comparison of the results to an existing supervised scheme. Although training the GAN is computationally intensive, the classification process proceeds directly from images to classes with minimal human intervention and therefore can be repeated for other MASC datasets with minor manual effort. As the algorithm is not specific to snowflakes, we also expect this approach to be relevant to other applications.
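The clustering stage of such a pipeline can be sketched as follows. This is a simplified stand-in that clusters one GAN-derived feature vector per image with hierarchical clustering, rather than the per-image distributions and K-medoids step described in the paper; the feature dimension and class count are placeholders.

```python
# Hierarchical clustering of GAN-derived feature vectors (illustrative only).
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def cluster_features(features, n_classes=6):
    """Group feature vectors (n_images x n_features) into n_classes clusters."""
    tree = linkage(features, method="ward")               # agglomerative clustering
    return fcluster(tree, t=n_classes, criterion="maxclust")

labels = cluster_features(np.random.rand(1000, 32))       # stand-in for real GAN features
```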


2021 ◽  
Vol 11 (5) ◽  
pp. 1334-1340
Author(s):  
K. Gokul Kannan ◽  
T. R. Ganesh Babu

The Generative Adversarial Network (GAN) is a neural network architecture widely used in computer vision applications such as super-resolution image generation, art creation, and image-to-image translation. A conventional GAN consists of two sub-models: a generative model and a discriminative model. The former generates new samples through an unsupervised learning task, and the latter classifies them as real or fake. Though GANs are most commonly used for training generative models, they can also be used to develop a classifier. The main objective here is to extend the effectiveness of the GAN to semi-supervised learning, i.e., the classification of fundus images to diagnose glaucoma. The discriminator of the conventional GAN is improved via transfer learning to predict n + 1 classes by training it for both supervised classification (n classes) and unsupervised classification (real or fake). Both models share all feature extraction layers and differ only in their output layers, so any update to one model affects both. Results show that the semi-supervised GAN performs better than a standalone Convolutional Neural Network (CNN) model.
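A minimal PyTorch sketch of this kind of shared-backbone discriminator is shown below: one feature extractor feeding a supervised head (the n disease classes) and an unsupervised head (real vs. fake), so gradients from either task update the shared layers. The layer sizes and class count are assumptions for illustration.

```python
# Semi-supervised GAN discriminator sketch: shared features, two output heads.
import torch.nn as nn

class SemiSupervisedDiscriminator(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(                       # shared by both tasks
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.supervised_head = nn.Linear(64, n_classes)      # fundus image classes
        self.unsupervised_head = nn.Linear(64, 1)            # real-vs-fake logit

    def forward(self, x):
        h = self.features(x)
        return self.supervised_head(h), self.unsupervised_head(h)
```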


Author(s):  
Saba Saleem ◽  
Javeria Amin ◽  
Muhammad Sharif ◽  
Muhammad Almas Anjum ◽  
Muhammad Iqbal ◽  
...  

Abstract. White blood cells (WBCs) are a portion of the immune system which fights against germs. Leukemia is the most common blood cancer and may lead to death. It occurs due to the production of a large number of immature WBCs in the bone marrow that destroy healthy cells. To limit the severity of this disease, it is necessary to identify the shapes of immature cells at an early stage, which ultimately reduces the mortality rate of patients. Recently, different segmentation and classification methods based on deep-learning (DL) models have been presented, but they still have some limitations. This research proposes a modified DL approach for the accurate segmentation of leukocytes and their classification. The proposed technique includes two core steps: preprocessing-based classification and segmentation. In preprocessing, synthetic images are generated using a generative adversarial network (GAN) and normalized by color transformation. The optimal deep features are extracted from each blood smear image using the pretrained deep models DarkNet-53 and ShuffleNet. More informative features are selected by principal component analysis (PCA) and fused serially for classification. Morphological operations based on color thresholding, combined with a deep semantic method, are used for leukemia segmentation of the classified cells. The classification accuracy achieved on the ALL-IDB and LISC datasets is 100% and 99.70%, respectively, for the classification of leukocytes, i.e., blast, no blast, basophils, neutrophils, eosinophils, lymphocytes, and monocytes, while semantic segmentation achieved 99.10% average and 98.60% global accuracy. The proposed method achieves outstanding outcomes compared with the latest existing research works.
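The PCA selection and serial fusion step can be sketched as below, assuming the deep features have already been extracted per image from the two pretrained networks. The component counts and feature dimensions are placeholders, not the authors' values.

```python
# PCA reduction of two deep-feature sets followed by serial (concatenation) fusion.
import numpy as np
from sklearn.decomposition import PCA

def fuse_features(darknet_feats, shufflenet_feats, n_components=256):
    """Reduce each feature matrix (n_images x n_features) with PCA, then concatenate."""
    reduced_a = PCA(n_components=n_components).fit_transform(darknet_feats)
    reduced_b = PCA(n_components=n_components).fit_transform(shufflenet_feats)
    return np.concatenate([reduced_a, reduced_b], axis=1)   # n_images x 2*n_components

fused = fuse_features(np.random.rand(500, 1024), np.random.rand(500, 544))
```

The fused matrix would then be passed to the downstream classifier.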


2021 ◽  
Vol 8 ◽  
Author(s):  
Umesh C. Sharma ◽  
Kanhao Zhao ◽  
Kyle Mentkowski ◽  
Swati D. Sonkawade ◽  
Badri Karthikeyan ◽  
...  

Contrast-enhanced cardiac magnetic resonance imaging (MRI) is routinely used to determine myocardial scar burden and make therapeutic decisions for coronary revascularization. Currently, there are no optimized deep-learning algorithms for the automated classification of scarred vs. normal myocardium. We report a modified Generative Adversarial Network (GAN) augmentation method to improve the binary classification of myocardial scar using both pre-clinical and clinical approaches. For the initial training of the MobileNetV2 platform, we used images generated from high-field (9.4 T) cardiac MRI of a mouse model of acute myocardial infarction (MI). Once the system showed 100% accuracy for the classification of acute MI in mice, we tested the translational significance of this approach in 91 patients with an ischemic myocardial scar and 31 control subjects without evidence of myocardial scarring. To obtain a comparable augmentation dataset, we rotated scar images 8 times and control images 72 times, generating a total of 6,684 scar images and 7,451 control images. In humans, the use of Progressive Growing GAN (PGGAN)-based augmentation showed 93% classification accuracy, far superior to conventional automated modules. The use of other attention modules in our CNN further improved the classification accuracy by up to 5%. These data are of high translational significance and warrant larger multicenter studies in the future to validate the clinical implications.
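For the classifier stage, a hedged sketch of a MobileNetV2 backbone adapted to the two-class scar-vs-normal problem is shown below, using torchvision (the `weights` argument follows recent torchvision releases). The weight initialization and head replacement are assumptions; the GAN and rotation augmentation pipeline is omitted.

```python
# MobileNetV2 with a 2-class head for scar vs. normal myocardium (illustrative).
import torch.nn as nn
from torchvision import models

def build_scar_classifier():
    """Load an ImageNet-pretrained MobileNetV2 and replace its 1000-class head."""
    model = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.DEFAULT)
    model.classifier[1] = nn.Linear(model.last_channel, 2)  # scar / normal
    return model
```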

