Unsupervised Deep Anomaly Detection in Chest Radiographs

Author(s):  
Takahiro Nakao ◽  
Shouhei Hanaoka ◽  
Yukihiro Nomura ◽  
Masaki Murata ◽  
Tomomi Takenaga ◽  
...  

Abstract: The purposes of this study are to propose an unsupervised anomaly detection method based on a deep neural network (DNN) model, which requires only normal images for training, and to evaluate its performance with a large chest radiograph dataset. We used the auto-encoding generative adversarial network (α-GAN) framework, which is a combination of a GAN and a variational autoencoder, as a DNN model. A total of 29,684 frontal chest radiographs from the Radiological Society of North America Pneumonia Detection Challenge dataset were used for this study (16,880 male and 12,804 female patients; average age, 47.0 years). All these images were labeled as “Normal,” “No Opacity/Not Normal,” or “Opacity” by board-certified radiologists. About 70% (6,853/9,790) of the Normal images were randomly sampled as the training dataset, and the rest were randomly split into the validation and test datasets in a ratio of 1:2 (7,610 and 15,221). Our anomaly detection system could correctly visualize various lesions including a lung mass, cardiomegaly, pleural effusion, bilateral hilar lymphadenopathy, and even dextrocardia. Our system detected the abnormal images with an area under the receiver operating characteristic curve (AUROC) of 0.752. The AUROCs for the abnormal labels Opacity and No Opacity/Not Normal were 0.838 and 0.704, respectively. Our DNN-based unsupervised anomaly detection method could successfully detect various diseases or anomalies in chest radiographs by training with only the normal images.
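As a rough illustration of the reconstruction-based scoring idea behind this kind of method, the sketch below shows only the inference path: an encoder maps a radiograph to a latent code, a generator reconstructs the image, and the per-image reconstruction error is used as the anomaly score. The image size (64 × 64), latent dimension, and layer widths are assumptions for illustration; the actual α-GAN framework additionally trains an image discriminator and a code discriminator, which are omitted here.

```python
# Minimal sketch of reconstruction-based anomaly scoring (illustrative only;
# not the authors' implementation). Assumes 1-channel 64x64 inputs.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, latent_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),   # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),  # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2), # 16 -> 8
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Generator(nn.Module):
    def __init__(self, latent_dim=128):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 128 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),  # 8 -> 16
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),   # 16 -> 32
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Sigmoid(), # 32 -> 64
        )

    def forward(self, z):
        h = self.fc(z).view(-1, 128, 8, 8)
        return self.net(h)

def anomaly_score(x, encoder, generator):
    """Per-image reconstruction error; large values suggest the image lies
    outside the 'normal' distribution the model was trained on."""
    with torch.no_grad():
        x_hat = generator(encoder(x))
    return torch.mean((x - x_hat) ** 2, dim=(1, 2, 3))

if __name__ == "__main__":
    enc, gen = Encoder(), Generator()
    x = torch.rand(4, 1, 64, 64)  # stand-in for normalized chest radiographs
    print(anomaly_score(x, enc, gen))
```

In practice the encoder and generator would first be trained on the Normal images only, so that abnormal findings reconstruct poorly and receive high scores.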


Diagnostics ◽  
2020 ◽  
Vol 10 (7) ◽  
pp. 456 ◽  
Author(s):  
Tomoyuki Fujioka ◽  
Kazunori Kubota ◽  
Mio Mori ◽  
Yuka Kikuchi ◽  
Leona Katsuta ◽  
...  

We aimed to use generative adversarial network (GAN)-based anomaly detection to diagnose images of normal tissue, benign masses, or malignant masses on breast ultrasound. We retrospectively collected 531 normal breast ultrasound images from 69 patients. Data augmentation was performed, and 6,372 (531 × 12) images were available for training. Efficient GAN-based anomaly detection was used to construct a computational model that detects anomalous lesions in images and quantifies their abnormality as an anomaly score. Images of 51 normal tissues, 48 benign masses, and 72 malignant masses were used as the test data. The sensitivity, specificity, and area under the receiver operating characteristic curve (AUC) of this anomaly detection model were calculated. Malignant masses had significantly higher anomaly scores than benign masses (p < 0.001), and benign masses had significantly higher scores than normal tissues (p < 0.001). Our anomaly detection model had high sensitivities, specificities, and AUC values for distinguishing normal tissues from benign and malignant masses, with even greater values for distinguishing normal tissues from malignant masses. GAN-based anomaly detection shows high performance for the detection and diagnosis of anomalous lesions in breast ultrasound images.
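Efficient GAN-based anomaly detection, on which this study builds, typically combines a reconstruction term with a discriminator feature-matching term in its anomaly score. The sketch below shows one plausible form of that score; the L1 distances, the weighting parameter lam, and the use of an intermediate discriminator feature map are illustrative assumptions, not values reported in the paper.

```python
# Illustrative anomaly score in the style of efficient GAN-based anomaly
# detection. Networks are passed in as callables; weights and distances
# are assumptions for illustration.
import torch

def egbad_anomaly_score(x, encoder, generator, discriminator_features, lam=0.9):
    """
    x: batch of images, shape (B, C, H, W)
    encoder, generator: trained on normal images only (E: image -> z, G: z -> image)
    discriminator_features: maps an image batch to an intermediate feature map
    lam: trade-off between reconstruction and feature-matching terms
    """
    with torch.no_grad():
        x_hat = generator(encoder(x))
        # Reconstruction (generator) term: L1 distance in pixel space
        l_g = torch.mean(torch.abs(x - x_hat), dim=(1, 2, 3))
        # Discriminator term: distance between features of real and reconstructed images
        f_real = discriminator_features(x)
        f_fake = discriminator_features(x_hat)
        l_d = torch.mean(torch.abs(f_real - f_fake), dim=tuple(range(1, f_real.dim())))
    return lam * l_g + (1.0 - lam) * l_d

if __name__ == "__main__":
    # Smoke test with identity stand-ins for the trained networks
    x = torch.rand(3, 1, 64, 64)
    ident = lambda t: t
    print(egbad_anomaly_score(x, ident, ident, ident))
```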



2021 ◽  
Vol 116 ◽  
pp. 107969
Author(s):  
Dongyue Chen ◽  
Lingyi Yue ◽  
Xingya Chang ◽  
Ming Xu ◽  
Tong Jia


IEEE Access ◽  
2021 ◽  
pp. 1-1
Author(s):  
Chen Xie ◽  
Kecheng Yang ◽  
Anni Wang ◽  
Chunxu Chen ◽  
Wei Li


2020 ◽  
Vol 10 (15) ◽  
pp. 5032
Author(s):  
Xiaochang Wu ◽  
Xiaolin Tian

Medical image segmentation is a classic challenging problem. Segmenting the structures of interest in cardiac medical images is a basic task for cardiac image diagnosis and guided surgery, and the quality of cardiac segmentation directly affects subsequent medical applications. Generative adversarial networks have achieved outstanding success in image segmentation compared with classic neural networks, in part by mitigating the oversegmentation problem. Cardiac X-ray images, however, are prone to weak edges and artifacts. This paper proposes an adaptive generative adversarial network for cardiac segmentation that improves the segmentation accuracy of generative adversarial networks on X-ray images. The adaptive generative adversarial network consists of three parts: a feature extractor, a discriminator, and a selector. In this method, multiple generators are trained within the feature extractor, the discriminator scores the features of different dimensions, and the selector chooses the appropriate features and adjusts the network for the next iteration. With the help of the discriminator, the method uses multi-network joint feature extraction to achieve network adaptivity, combining features of multiple dimensions in joint training to enhance the network's generalization ability. The results of cardiac segmentation experiments on X-ray chest radiographs show that this method achieves higher segmentation accuracy and less overfitting than other methods, and the proposed network is more stable.
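The selector mechanism described in this abstract can be pictured as follows: several candidate generators produce segmentation proposals, the discriminator scores them, and the highest-scoring proposal is kept for the next iteration. The sketch below illustrates only that control flow with placeholder networks; it is not the architecture or scoring rule reported in the paper.

```python
# Minimal sketch of a discriminator-guided selector over multiple generators
# (placeholder networks; illustrative control flow only).
import torch
import torch.nn as nn

def select_best_proposal(x, generators, discriminator):
    """Return (index, proposal) of the generator whose output the discriminator
    rates highest, averaged over the batch."""
    best_idx, best_score, best_out = -1, float("-inf"), None
    with torch.no_grad():
        for i, g in enumerate(generators):
            seg = g(x)                          # candidate segmentation map
            score = discriminator(seg).mean()   # mean discriminator score for the batch
            if score.item() > best_score:
                best_idx, best_score, best_out = i, score.item(), seg
    return best_idx, best_out

if __name__ == "__main__":
    # Toy stand-ins: 1-channel 64x64 images, 3 candidate generators
    gens = [nn.Sequential(nn.Conv2d(1, 1, 3, padding=1), nn.Sigmoid()) for _ in range(3)]
    disc = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 1))
    x = torch.rand(2, 1, 64, 64)
    idx, seg = select_best_proposal(x, gens, disc)
    print(f"selected generator {idx}, proposal shape {tuple(seg.shape)}")
```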


