A Hybrid Capsule Network for Pneumonia Detection Using Image Augmentation Based on Generative Adversarial Network

2021 ◽  
Vol 38 (3) ◽  
pp. 619-627
Author(s):  
Kazim Firildak ◽  
Muhammed Fatih Talu

Pneumonia, characterized by inflammation of the air sacs in one or both lungs, is usually detected by examining chest X-ray images. This paper investigates classification models that can distinguish between normal and pneumonia images. Pretrained networks such as AlexNet and GoogLeNet are deep architectures widely adopted to solve many classification problems; they can be adapted to a target dataset through transfer learning and employed to classify new data. However, these classical architectures are not accurate enough for the diagnosis of pneumonia. Therefore, this paper designs a capsule network with high discrimination capability and trains it on Kaggle's online pneumonia dataset, which contains chest X-ray images of many adults and children. The original dataset consists of 1,583 normal images and 4,273 pneumonia images. Two data augmentation approaches were then applied to the dataset, and their effects on classification accuracy were compared in detail. The model parameters were optimized through five different experiments. The results show that the highest classification accuracy (93.91%, even on small images) was achieved by the capsule network coupled with data augmentation by a generative adversarial network (GAN), using optimized parameters. This network outperformed the classical strategies.
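As a rough illustration of the kind of classical augmentation the GAN-based approach is compared against (the specific operations below are illustrative assumptions, not taken from the paper), basic flips and rotations can be sketched as:

```python
# Minimal sketch of classical image augmentation on a tiny grayscale
# image stored as nested lists; real pipelines operate on arrays/tensors.

def rotate90(img):
    """Rotate a 2D image 90 degrees clockwise."""
    return [list(row) for row in zip(*img[::-1])]

def hflip(img):
    """Mirror a 2D image left to right."""
    return [row[::-1] for row in img]

img = [[1, 2],
       [3, 4]]
rotated = rotate90(img)   # [[3, 1], [4, 2]]
flipped = hflip(img)      # [[2, 1], [4, 3]]
```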

Information ◽  
2021 ◽  
Vol 12 (6) ◽  
pp. 249
Author(s):  
Xin Jin ◽  
Yuanwen Zou ◽  
Zhongbing Huang

The cell cycle is an important process in cellular life. In recent years, image processing methods have been developed to determine the cell cycle stage of individual cells. However, most of these methods require cells to be segmented and their features to be extracted, and during feature extraction some important information may be lost, lowering classification accuracy. Thus, we used a deep learning method that retains all cell features. To address the insufficient number and imbalanced distribution of original images, we used the Wasserstein generative adversarial network with gradient penalty (WGAN-GP) for data augmentation. At the same time, a residual network (ResNet), one of the most widely used deep learning classification networks, was used for image classification. Our method classified cell cycle images more effectively, reaching an accuracy of 83.88%; compared with the 79.40% of previous experiments, this is an increase of 4.48 percentage points. On another dataset used to verify the model, accuracy increased by 12.52 percentage points over previous results. The results showed that our cell cycle image classification system based on WGAN-GP and ResNet is useful for classifying imbalanced images, and could help address the low classification accuracy in biomedical imaging caused by insufficient and imbalanced original data.
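The WGAN-GP objective mentioned above adds a penalty that pushes the critic's input-gradient norm toward 1 at points interpolated between real and generated samples. A toy sketch with a linear critic f(x) = w·x, whose input gradient is simply w, so the penalty has a closed form (real implementations differentiate through the critic with autograd; nothing here is from the paper):

```python
import math
import random

def gradient_penalty(w, lam=10.0):
    """lam * (||grad_x f(x)||_2 - 1)^2 for a linear critic f(x) = w . x."""
    grad_norm = math.sqrt(sum(wi * wi for wi in w))
    return lam * (grad_norm - 1.0) ** 2

def interpolate(real, fake, rng=random):
    """Sample a point on the segment between a real and a generated sample."""
    eps = rng.random()
    return [eps * r + (1 - eps) * f for r, f in zip(real, fake)]

penalty_ok = gradient_penalty([0.6, 0.8])   # ||w|| = 1 -> 0.0 (1-Lipschitz)
penalty_bad = gradient_penalty([3.0, 4.0])  # ||w|| = 5 -> 10 * (5-1)^2 = 160.0
```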


2020 ◽  
Vol 11 ◽  
Author(s):  
Luning Bi ◽  
Guiping Hu

Traditionally, plant disease recognition has mainly been done visually by humans; it is often biased, time-consuming, and laborious. Machine learning methods based on plant leaf images have been proposed to improve the recognition process, and convolutional neural networks (CNNs) have been adopted and proven very effective. Despite the good classification accuracy achieved by CNNs, the issue of limited training data remains: the training dataset is often small because data collection and annotation require significant effort, so CNN methods tend to overfit. In this paper, the Wasserstein generative adversarial network with gradient penalty (WGAN-GP) is combined with label smoothing regularization (LSR) to improve prediction accuracy and address overfitting under limited training data. Experiments show that the proposed WGAN-GP-enhanced classification method improves the overall classification accuracy of plant diseases by 24.4%, compared to 20.2% using classic data augmentation and 22% using synthetic samples without LSR.
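Label smoothing regularization, as used above, replaces the hard one-hot training target with a mixture of the one-hot vector and a uniform distribution over classes. A minimal sketch of the common Szegedy-style formulation (epsilon here is an illustrative value, not the paper's setting):

```python
def smooth_labels(one_hot, eps=0.25):
    """LSR: mix the one-hot target with a uniform distribution over classes,
    so the true class gets 1 - eps + eps/K and every class gets eps/K."""
    k = len(one_hot)
    return [(1 - eps) * y + eps / k for y in one_hot]

targets = smooth_labels([0, 1, 0, 0], eps=0.25)
# true class: 0.75 + 0.0625 = 0.8125; the other classes: 0.0625 each
```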


Agronomy ◽  
2021 ◽  
Vol 11 (8) ◽  
pp. 1500
Author(s):  
Chenglong Wang ◽  
Zhifeng Xiao

The performance of fruit surface defect detection is easily affected by factors such as noisy backgrounds and foliage occlusion. In this study, we choose lychee as the fruit type to investigate its surface quality. Lychees are hard to preserve and have to be stored at low temperatures to stay fresh; in addition, their surface is subject to scratches and cracks during harvesting and processing. To explore the feasibility of automating defective-surface detection for lychees, we build a dataset with 3743 samples divided into three categories: mature, defects, and rot. The original dataset suffers from an imbalanced distribution. To address this, we adopt a transformer-based generative adversarial network (GAN) as a means of data augmentation that effectively enhances the original training set with more, and more diverse, samples to rebalance the three categories. In addition, we investigate three deep convolutional neural network (DCNN) models, SSD-MobileNet V2, Faster RCNN-ResNet50, and Faster RCNN-Inception-ResNet V2, trained under different settings for an extensive comparison study. The results show that all three models demonstrate consistent performance gains in mean average precision (mAP) with the application of GAN-based augmentation. The rebalanced dataset also reduces the inter-category discrepancy, allowing a DCNN model to be trained equally across categories. In addition, the qualitative results show that models trained under the augmented setting can better identify the critical regions and the object boundary, leading to gains in mAP. Lastly, we conclude that the most cost-effective model, SSD-MobileNet V2, offers a comparable mAP (91.81%) and a superior inference speed (102 FPS), making it suitable for real-time detection in industrial-level applications.
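Rebalancing with GAN-generated samples, as described above, amounts to topping each minority class up to the majority-class count. A sketch using a hypothetical per-class split of the 3743 samples (the actual class counts are not given in the abstract):

```python
def synthetic_quota(counts):
    """Number of GAN samples each class needs to match the largest class."""
    target = max(counts.values())
    return {c: target - n for c, n in counts.items()}

counts = {"mature": 2000, "defects": 1200, "rot": 543}  # hypothetical split
quota = synthetic_quota(counts)
# {'mature': 0, 'defects': 800, 'rot': 1457}
```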


Sensors ◽  
2021 ◽  
Vol 21 (24) ◽  
pp. 8219
Author(s):  
Amin Ul Haq ◽  
Jian Ping Li ◽  
Sultan Ahmad ◽  
Shakir Khan ◽  
Mohammed Ali Alshara ◽  
...  

COVID-19 is a contagious disease and a leading cause of death for a large number of people worldwide. The disease, caused by SARS-CoV-2, spreads very rapidly and quickly affects the human respiratory system. It is therefore necessary to diagnose the disease at an early stage for proper treatment, recovery, and control of its spread, and an automatic diagnosis system is essential for COVID-19 detection. For diagnosing COVID-19 from chest X-ray images, methods based on artificial intelligence techniques are more effective and can diagnose it correctly, whereas existing COVID-19 diagnosis methods suffer from insufficient accuracy. To handle this problem, we propose an efficient and accurate diagnosis model for COVID-19. In the proposed method, a two-dimensional convolutional neural network (2DCNN) is designed for COVID-19 recognition from chest X-ray images. Through transfer learning (TL), pre-trained ResNet-50 weights are transferred to the 2DCNN model to enhance its training, and the model is fine-tuned on chest X-ray data for the final multi-class COVID-19 diagnosis. In addition, a data augmentation transformation (rotation) is used to increase the dataset size for effective training of the R2DCNNMC model. The experimental results demonstrate that the proposed R2DCNNMC model obtained high accuracy: 98.12% classification accuracy on the CRD dataset and 99.45% on the CXI dataset, compared to baseline methods. This approach has high performance and could be used for COVID-19 diagnosis in e-healthcare systems.
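The transfer-learning step above copies pretrained backbone weights into the new model and leaves the task-specific head to be learned from scratch. A framework-free sketch using plain dicts of weight lists (layer names and shapes are illustrative assumptions; real code would operate on a deep learning framework's state dicts):

```python
def transfer_weights(pretrained, target):
    """Copy pretrained tensors into the target model wherever layer names
    and shapes match; newly added head layers keep their initialization."""
    for name, w in pretrained.items():
        if name in target and len(target[name]) == len(w):
            target[name] = list(w)
    return target

pretrained = {"conv1.weight": [0.2, -0.1, 0.5], "fc.weight": [1.0, 2.0]}
target = {"conv1.weight": [0.0, 0.0, 0.0], "fc.weight": [0.0, 0.0, 0.0, 0.0]}
target = transfer_weights(pretrained, target)
# conv1.weight is copied; fc.weight (new head, different shape) is untouched
```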


2021 ◽  
Author(s):  
Wenyu Zhang

Background: Cardiothoracic diseases are a serious threat to human health, and chest X-ray images have great reference value for their diagnosis and treatment. However, it is sometimes difficult even for professional doctors to accurately diagnose cardiothoracic diseases from chest X-ray images, and subjective differences between readers lead to different interpretations, which affects the judgment and treatment of disease. It is therefore very necessary to develop a high-precision neural network to recognize chest X-ray images of cardiothoracic diseases. Methods: In this work, we cross-transfer the information extracted by the residual block and by the adaptive structure to different levels, which avoids the reduction of the adaptive function by the residual structure and improves the recognition performance of the model. To evaluate the recognition ability of ACRnet, VGG16, InceptionV2, ResNet101, and CliqueNet are used for comparison. In addition, we use a deep convolutional generative adversarial network (DCGAN) to expand the original dataset. Results: We find that ACRnet has better recognition ability than the other networks in identifying cardiomegaly, emphysema, and normal cases. Moreover, ACRnet's recognition ability improves greatly after data expansion. Conclusions: The experimental results indicate that emphysema and cardiomegaly can be effectively identified by ACRnet, and that expanding the dataset with DCGAN can further improve the model's recognition ability.


Sensors ◽  
2021 ◽  
Vol 21 (21) ◽  
pp. 7294
Author(s):  
Hyunwoo Cho ◽  
Haesol Park ◽  
Ig-Jae Kim ◽  
Junghyun Cho

Customs inspection using X-ray imaging is a very promising application of modern pattern recognition technology. However, the lack of data and the renewal of tariff items make applying such technology difficult. In this paper, we present a data augmentation technique based on a new image-to-image translation method to deal with these difficulties. Unlike conventional methods that convert a semantic label image into a realistic image, the proposed method takes a texture map with a special modification as an additional input to a generative adversarial network, in order to reproduce domain-specific characteristics such as background clutter or sensor-specific noise patterns. The proposed method was validated by applying it to backscatter X-ray (BSX) vehicle data augmentation. The Fréchet inception distance (FID) of the result indicates that the visual quality of the translated images improved significantly over the baseline when the texture parameters were used. Additionally, in terms of data augmentation, experimental results on classification, segmentation, and detection show that using the translated image data along with the real data consistently improved the performance of the trained models. Our findings show that detailed depiction of texture in translated images is crucial for data augmentation. Considering the comparatively few studies that have examined customs inspection of container-scale goods such as cars, we believe this study will facilitate research on the automation of container screening and on the security of aviation and ports.
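The Fréchet inception distance used above compares the Gaussian statistics of real and generated feature distributions. A one-dimensional sketch of the underlying Fréchet distance formula (real FID uses multivariate means and covariances of Inception features; the numbers below are purely illustrative):

```python
import math

def fid_1d(mu1, var1, mu2, var2):
    """Frechet distance between two 1-D Gaussians:
    (mu1 - mu2)^2 + var1 + var2 - 2 * sqrt(var1 * var2)."""
    return (mu1 - mu2) ** 2 + var1 + var2 - 2.0 * math.sqrt(var1 * var2)

same = fid_1d(0.0, 1.0, 0.0, 1.0)      # identical distributions -> 0.0
shifted = fid_1d(1.0, 4.0, 0.0, 1.0)   # -> 1 + 4 + 1 - 2*2 = 2.0
```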


2020 ◽  
Vol 10 (23) ◽  
pp. 8415
Author(s):  
Jeongmin Lee ◽  
Younkyoung Yoon ◽  
Junseok Kwon

We propose a novel generative adversarial network for class-conditional data augmentation (GANDA) to mitigate data imbalance in image classification tasks. The proposed GANDA generates minority-class data by exploiting majority-class information to enhance the classification accuracy of minority classes. For stable GAN training, we introduce a new denoising-autoencoder initialization with explicit class conditioning in the latent space, which enables the generation of definite samples. The generated samples are visually realistic and of high resolution. Experimental results demonstrate that the proposed GANDA can considerably improve classification accuracy, especially on highly imbalanced versions of standard benchmark datasets (MNIST and CelebA). Our generated samples can easily be used to train conventional classifiers to enhance their classification accuracy.
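Explicit class conditioning in the latent space, as described above, is often realized by appending a one-hot class code to the noise vector fed to the generator. A minimal sketch (dimensions and the conditioning scheme here are illustrative, not the paper's exact construction):

```python
import random

def conditional_latent(z_dim, class_id, n_classes, rng=None):
    """Gaussian noise vector with a one-hot class code appended,
    as in class-conditional GAN generators."""
    rng = rng or random.Random(0)
    z = [rng.gauss(0.0, 1.0) for _ in range(z_dim)]
    one_hot = [1.0 if i == class_id else 0.0 for i in range(n_classes)]
    return z + one_hot

v = conditional_latent(8, class_id=2, n_classes=5)
# len(v) == 13; the last 5 entries encode the requested class
```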


2020 ◽  
Vol 13 (1) ◽  
pp. 8
Author(s):  
Sagar Kora Venu ◽  
Sridhar Ravula

Medical image datasets are usually imbalanced due to the high cost of obtaining the data and the time-consuming annotation. Training a deep neural network on such datasets to accurately classify the medical condition does not yield the desired results, as the models often over-fit to the majority class. Data augmentation is often performed on the training data to address this issue: position augmentation techniques such as scaling, cropping, flipping, padding, rotation, translation, and affine transformation, and color augmentation techniques such as brightness, contrast, saturation, and hue, are used to increase dataset size. Radiologists generally use chest X-rays for the diagnosis of pneumonia, but due to patient privacy concerns, access to such data is often protected. In this study, we performed data augmentation on a chest X-ray dataset to generate artificial images of the under-represented class through generative modeling, namely a Deep Convolutional Generative Adversarial Network (DCGAN). Starting from just 1,341 chest X-ray images labeled as Normal, this technique created artificial samples that retain the characteristics of the original data. Evaluating the model yielded a Fréchet Inception Distance (FID) score of 1.289. We further show the superior performance of a CNN classifier trained on the DCGAN-augmented dataset.
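A DCGAN generator upsamples a small spatial map to full image resolution through strided transposed convolutions, and the standard output-size formula lets one plan the stages. The 4→64 path below is a typical DCGAN layout, assumed for illustration rather than taken from this study:

```python
def deconv_out(size, kernel, stride, pad):
    """Output spatial size of a transposed convolution:
    (size - 1) * stride - 2 * pad + kernel."""
    return (size - 1) * stride - 2 * pad + kernel

sizes = [4]
for _ in range(4):  # typical DCGAN path: 4 -> 8 -> 16 -> 32 -> 64
    sizes.append(deconv_out(sizes[-1], kernel=4, stride=2, pad=1))
```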


Diagnostics ◽  
2021 ◽  
Vol 11 (5) ◽  
pp. 895
Author(s):  
Yash Karbhari ◽  
Arpan Basu ◽  
Zong-Woo Geem ◽  
Gi-Tae Han ◽  
Ram Sarkar

COVID-19 is a disease caused by the SARS-CoV-2 virus, which spreads when a person comes into contact with an affected individual, mainly through drops of saliva or nasal discharge. Most affected people have mild symptoms, while some develop acute respiratory distress syndrome (ARDS), which damages organs such as the lungs and heart. Chest X-rays (CXRs) have been widely used to identify abnormalities that help in detecting COVID-19, and have also served as an initial screening procedure for individuals highly suspected of being infected. However, the availability of radiographic CXRs is still scarce, which can limit the performance of deep learning (DL) based approaches for COVID-19 detection. To overcome these limitations, in this work we developed an Auxiliary Classifier Generative Adversarial Network (ACGAN) to generate CXRs, each belonging to one of two classes: COVID-19 positive or normal. To ensure the quality of the synthetic images, we experimented on the obtained images with recent Convolutional Neural Networks (CNNs) to detect COVID-19 in the CXRs; we fine-tuned the models and achieved more than 98% accuracy. After that, we performed feature selection using the Harmony Search (HS) algorithm, which reduces the number of features while retaining classification accuracy. We further release a GAN-generated dataset consisting of 500 COVID-19 radiographic images.
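Wrapper-style feature selection like the Harmony Search step above scores candidate binary feature masks and keeps the best-scoring subset. As a much-simplified stand-in (plain random search over masks on a toy objective; this is not the HS algorithm itself, and the objective is invented for illustration):

```python
import random

def select_features(score_fn, n_features, iters=300, seed=1):
    """Random search over binary feature masks; keep the best-scoring one."""
    rng = random.Random(seed)
    best_mask, best_score = None, float("-inf")
    for _ in range(iters):
        mask = [rng.random() < 0.5 for _ in range(n_features)]
        score = score_fn(mask)
        if score > best_score:
            best_mask, best_score = mask, score
    return best_mask

# Toy objective: features 0 and 3 carry signal; each selected feature costs 0.1.
def score(mask):
    hits = sum(1 for i, on in enumerate(mask) if on and i in {0, 3})
    return hits - 0.1 * sum(mask)

best = select_features(score, n_features=6)  # should keep features 0 and 3
```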


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Saman Motamed ◽  
Patrik Rogalla ◽  
Farzad Khalvati

COVID-19 spread across the globe at an immense rate and left healthcare systems unable to diagnose and test patients at the needed rate. Studies have shown promising results for detecting COVID-19, as distinct from viral and bacterial pneumonia, in chest X-rays. Automating COVID-19 testing with medical images can speed up the testing of patients where health care systems lack sufficient numbers of reverse-transcription polymerase chain reaction tests. Supervised deep learning models such as convolutional neural networks need enough labeled data for all classes to correctly learn the detection task, but gathering labeled data is cumbersome and requires time and resources that could further strain health care systems and radiologists at the early stages of a pandemic such as COVID-19. In this study, we propose a randomized generative adversarial network (RANDGAN) that detects images of an unknown class (COVID-19) from known, labeled classes (Normal and Viral Pneumonia) without needing labels or training data from the unknown class. We used the largest publicly available COVID-19 chest X-ray dataset, COVIDx, which comprises Normal, Pneumonia, and COVID-19 images from multiple public databases. In this work, we use transfer learning to segment the lungs in the COVIDx dataset. Next, we show why segmentation of the region of interest (lungs) is vital to correctly learning the classification task, especially in datasets that contain images from different sources, as is the case for COVIDx. Finally, we show improved detection of COVID-19 cases using our generative model (RANDGAN) compared to conventional generative adversarial networks for anomaly detection in medical images, improving the area under the ROC curve from 0.71 to 0.77.
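The area under the ROC curve reported above (0.71 → 0.77) can be read as the probability that a random COVID-19 image receives a higher anomaly score than a random non-COVID image. A minimal sketch of this Mann-Whitney formulation on toy scores (unrelated to the paper's data):

```python
def roc_auc(pos_scores, neg_scores):
    """Mann-Whitney form of ROC AUC: fraction of (positive, negative)
    score pairs ranked correctly, counting ties as one half."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))

perfect = roc_auc([0.9, 0.8], [0.1, 0.2])  # every pair ordered correctly -> 1.0
chance = roc_auc([0.7, 0.2], [0.6, 0.3])   # half the pairs correct -> 0.5
```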

