Auto-GAN: Self-Supervised Collaborative Learning for Medical Image Synthesis

2020 ◽  
Vol 34 (07) ◽  
pp. 10486-10493
Author(s):  
Bing Cao ◽  
Han Zhang ◽  
Nannan Wang ◽  
Xinbo Gao ◽  
Dinggang Shen

In various clinical scenarios, medical images are crucial for disease diagnosis and treatment. Different imaging modalities provide complementary information and jointly help doctors make accurate clinical decisions. However, due to clinical and practical restrictions, certain imaging modalities may be unavailable or incomplete. To impute missing data with adequate clinical accuracy, we propose a self-supervised collaborative learning framework that synthesizes missing modalities for medical images. The proposed method comprehensively exploits all available information correlated with the target modality from multi-source-modality images, generating any missing modality with a single model. Unlike existing methods, we introduce an auto-encoder network as a novel self-supervised constraint, which provides target-modality-specific information to guide generator training. In addition, we design a modality mask vector as the target modality label. Experiments on multiple medical image databases demonstrate the strong generalization ability and specificity of our method compared with other state-of-the-art approaches.
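As a rough illustration of the two ideas the abstract highlights, the sketch below (PyTorch, with purely hypothetical layer sizes and names, not the authors' implementation) shows how a modality mask vector can be fed to the generator alongside the available modalities, and how an auto-encoder trained on the target modality can supply a feature-level self-supervised constraint.

```python
# Illustrative sketch only: modality mask vector + auto-encoder feature constraint.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Maps available modalities plus a target-modality mask to the missing modality."""
    def __init__(self, n_modalities=4, base=32):
        super().__init__()
        # Input: all modalities stacked (missing ones zero-filled) plus the mask
        # broadcast as extra channels, so the model knows which modality to produce.
        self.net = nn.Sequential(
            nn.Conv2d(n_modalities * 2, base, 3, padding=1), nn.ReLU(),
            nn.Conv2d(base, base, 3, padding=1), nn.ReLU(),
            nn.Conv2d(base, 1, 3, padding=1),
        )

    def forward(self, images, mask):
        # images: (B, n_modalities, H, W); mask: (B, n_modalities) one-hot target label
        mask_maps = mask[:, :, None, None].expand(-1, -1, *images.shape[-2:])
        return self.net(torch.cat([images, mask_maps], dim=1))

class AutoEncoder(nn.Module):
    """Trained on the target modality; its features act as a self-supervised constraint."""
    def __init__(self, base=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Conv2d(1, base, 3, padding=1), nn.ReLU())
        self.decoder = nn.Sequential(nn.Conv2d(base, 1, 3, padding=1))

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

# Hypothetical training step: pixel reconstruction loss plus a feature-level term
# pulling generator outputs toward the auto-encoder's target-modality features.
gen, ae = Generator(), AutoEncoder()
images = torch.randn(2, 4, 64, 64)            # toy multi-modal input batch
mask = torch.eye(4)[torch.tensor([1, 2])]     # which modality each sample is missing
target = torch.randn(2, 1, 64, 64)            # ground-truth missing modality
fake = gen(images, mask)
_, z_real = ae(target)
_, z_fake = ae(fake)
loss = nn.functional.l1_loss(fake, target) + nn.functional.l1_loss(z_fake, z_real)
loss.backward()
```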

2018 ◽  
Vol 7 (3.33) ◽  
pp. 115 ◽  
Author(s):  
Myung Jae Lim ◽  
Da Eun Kim ◽  
Dong Kun Chung ◽  
Hoon Lim ◽  
Young Man Kwon

Breast cancer is a deadly disease that has killed many people all over the world, but it can often be fully cured when detected early. To enable early detection, it is very important to classify accurately whether a lesion is breast cancer or not. Recently, deep learning approaches applied to medical images, such as histopathologic images of breast cancer, have shown higher accuracy and efficiency than conventional methods. In this paper, breast cancer histopathological images that are difficult to distinguish were analyzed visually, and a convolutional neural network (CNN), a deep learning architecture specialized for images, was used to classify whether a sample shows breast cancer or not. Among CNN architectures, VGG16 and InceptionV3 were used, and transfer learning was applied for their effective use. The data used in this paper is the BreakHis breast cancer histopathological image dataset, which distinguishes benign from malignant samples. In the two-class classification task, InceptionV3 achieved 98% accuracy. This deep learning approach is expected to support the development of disease diagnosis through medical images.
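A minimal transfer-learning sketch in the spirit of the described setup is shown below, assuming TensorFlow/Keras with an ImageNet-pretrained InceptionV3 and a new binary head for benign vs. malignant; the input size, optimizer, and commented-out training call are illustrative assumptions, not the paper's exact configuration.

```python
# Illustrative transfer-learning setup: frozen InceptionV3 features + new binary head.
import tensorflow as tf
from tensorflow.keras.applications import InceptionV3

base = InceptionV3(weights="imagenet", include_top=False, input_shape=(299, 299, 3))
base.trainable = False  # freeze the pretrained convolutional features

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # benign vs. malignant
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)  # train_ds: BreakHis patches
```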


2021 ◽  
Vol 1 (1) ◽  
Author(s):  
Xiang Li ◽  
Yuchen Jiang ◽  
Juan J. Rodriguez-Andina ◽  
Hao Luo ◽  
Shen Yin ◽  
...  

Abstract Deep learning techniques have promoted the rise of artificial intelligence (AI) and performed well in computer vision. Medical image analysis is an important application of deep learning, which is expected to greatly reduce the workload of doctors, contributing to more sustainable health systems. However, most current AI methods for medical image analysis are based on supervised learning, which requires a large amount of annotated data. The number of medical images available is usually small, and acquiring medical image annotations is an expensive process. The generative adversarial network (GAN), an unsupervised method that has become very popular in recent years, can model the distribution of real data and generate realistic approximations of it. GAN opens exciting new avenues for medical image generation, expanding the number of medical images available to deep learning methods. Generated data can alleviate the problems of insufficient data and imbalanced data categories. Adversarial training is another contribution of GAN to medical imaging and has been applied to many tasks, such as classification, segmentation, and detection. This paper investigates the research status of GAN in medical imaging and analyzes several GAN methods commonly applied in this area. The study addresses GAN applications both for medical image synthesis and for adversarial learning in other medical image tasks. Open challenges and future research directions are also discussed.
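For readers unfamiliar with adversarial training, the following toy PyTorch loop sketches the basic mechanism the survey builds on: a generator maps noise to samples, and a discriminator is trained to separate real from generated data while the generator learns to fool it. Network sizes and the data batch are placeholders, not taken from any surveyed method.

```python
# Minimal single-step adversarial training demo on flattened toy "images".
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 28 * 28), nn.Tanh())
D = nn.Sequential(nn.Linear(28 * 28, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real = torch.randn(32, 28 * 28)  # stand-in for a batch of real (flattened) images

# Discriminator step: push real toward label 1, generated toward label 0.
fake = G(torch.randn(32, 64)).detach()
loss_d = bce(D(real), torch.ones(32, 1)) + bce(D(fake), torch.zeros(32, 1))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# Generator step: fool the discriminator into predicting 1 for generated samples.
fake = G(torch.randn(32, 64))
loss_g = bce(D(fake), torch.ones(32, 1))
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```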


Author(s):  
Huan Yang ◽  
Pengjiang Qian

Medical images have always occupied a very important position in modern medical diagnosis and are standard tools for doctors carrying out clinical diagnosis. Nowadays, however, most clinical diagnosis relies on doctors' professional knowledge and personal experience, which can easily be affected by many factors. To reduce the diagnostic errors caused by subjective differences between humans and to improve the accuracy and reliability of diagnostic results, a practical and reliable approach is to use artificial intelligence technology for computer-aided diagnosis (CAD). With the help of powerful computer storage capabilities and advanced artificial intelligence algorithms, CAD can make up for the shortcomings of traditional manual diagnosis and realize efficient, intelligent diagnosis. This paper reviews GAN-based medical image synthesis methods, introduces the basic architecture and important improvements of GAN, lists some representative application examples, and concludes with a summary and discussion.
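As a concrete, hedged example of the kind of objective commonly used in GAN-based image synthesis (in the spirit of pix2pix-style models, not any specific method reviewed here), the generator is typically trained with an adversarial term plus a weighted pixel-wise reconstruction term; all names and the weight below are illustrative assumptions.

```python
# Illustrative synthesis objective: adversarial term + lambda * L1 reconstruction term.
import torch
import torch.nn.functional as F

def synthesis_generator_loss(d_fake_logits, fake, target, lam=100.0):
    # d_fake_logits: discriminator outputs on the generated image (logits)
    adv = F.binary_cross_entropy_with_logits(d_fake_logits,
                                             torch.ones_like(d_fake_logits))
    rec = F.l1_loss(fake, target)  # keeps the synthesized image close to ground truth
    return adv + lam * rec

# toy usage with random tensors standing in for discriminator logits and images
loss = synthesis_generator_loss(torch.randn(8, 1),
                                torch.rand(8, 1, 64, 64),
                                torch.rand(8, 1, 64, 64))
```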


Author(s):  
Yin Xu ◽  
Yan Li ◽  
Byeong-Seok Shin

Abstract With recent advances in deep learning research, generative models have achieved remarkable results and play an increasingly important role in current industrial applications. At the same time, technologies derived from generative methods, such as style transfer and image synthesis, are widely discussed by researchers. In this work, we treat generative methods as a possible solution to medical image augmentation. We propose a context-aware generative framework that can successfully change the gray scale of CT scans with almost no semantic loss. By producing target images with a specific style/distribution and adding them to the training set, we greatly increased the robustness of the segmentation model. In addition, we improved pixel-level segmentation accuracy by 2–4% over the original U-Net for spine segmentation. Lastly, we compared the images generated by networks using different feature extractors (VGG, ResNet and DenseNet) and analyzed their style-transfer performance in detail.
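The feature-extractor comparison can be illustrated with a small perceptual/style-loss sketch, assuming PyTorch and torchvision; it is not the authors' framework, and swapping the VGG backbone for ResNet or DenseNet would only change the feature extractor.

```python
# Illustrative Gram-matrix style loss using a frozen, pretrained VGG feature extractor.
import torch
import torch.nn as nn
from torchvision import models

vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features[:16].eval()
for p in vgg.parameters():
    p.requires_grad_(False)

def gram(feat):
    # Gram matrix of feature maps: channel-by-channel correlations, normalized.
    b, c, h, w = feat.shape
    f = feat.reshape(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def style_loss(generated, reference):
    # Expects 3-channel inputs in [0, 1]; grayscale CT slices can be repeated to 3 channels.
    return nn.functional.mse_loss(gram(vgg(generated)), gram(vgg(reference)))

loss = style_loss(torch.rand(2, 3, 224, 224), torch.rand(2, 3, 224, 224))
```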


Diagnostics ◽  
2021 ◽  
Vol 11 (8) ◽  
pp. 1384
Author(s):  
Yin Dai ◽  
Yifan Gao ◽  
Fayu Liu

Over the past decade, convolutional neural networks (CNNs) have shown very competitive performance in medical image analysis tasks such as disease classification, tumor segmentation, and lesion detection. CNNs have great advantages in extracting local features of images; however, due to the locality of the convolution operation, they cannot handle long-range relationships well. Recently, transformers have been applied to computer vision and have achieved remarkable success on large-scale datasets. Compared with natural images, multi-modal medical images have explicit and important long-range dependencies, and effective multi-modal fusion strategies can greatly improve the performance of deep models. This prompts us to study transformer-based structures and apply them to multi-modal medical images. Existing transformer-based network architectures require large-scale datasets to achieve good performance, but medical imaging datasets are relatively small, which makes it difficult to apply pure transformers to medical image analysis. Therefore, we propose TransMed for multi-modal medical image classification. TransMed combines the advantages of CNNs and transformers to efficiently extract low-level image features and establish long-range dependencies between modalities. We evaluated our model on two datasets: parotid gland tumor classification and knee injury classification. Combining our contributions, we achieve improvements of 10.1% and 1.9% in average accuracy, respectively, outperforming other state-of-the-art CNN-based models. The results of the proposed method are promising and have tremendous potential for application to a large number of medical image analysis tasks. To the best of our knowledge, this is the first work to apply transformers to multi-modal medical image classification.
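The general CNN-plus-transformer pattern described here can be sketched as follows (a hedged PyTorch illustration with made-up layer sizes, not the TransMed architecture): a small CNN produces patch embeddings per modality, and a transformer encoder models long-range, cross-modal dependencies before classification.

```python
# Illustrative hybrid: CNN patch embeddings per modality + transformer encoder over all tokens.
import torch
import torch.nn as nn

class HybridClassifier(nn.Module):
    def __init__(self, n_classes=2, dim=256):
        super().__init__()
        self.cnn = nn.Sequential(                      # low-level feature extractor
            nn.Conv2d(1, 64, 7, stride=4, padding=3), nn.ReLU(),
            nn.Conv2d(64, dim, 3, stride=2, padding=1), nn.ReLU(),
        )
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=4)
        self.head = nn.Linear(dim, n_classes)

    def forward(self, modalities):
        # modalities: list of (B, 1, H, W) tensors, one per imaging modality
        tokens = []
        for x in modalities:
            f = self.cnn(x)                              # (B, dim, h, w)
            tokens.append(f.flatten(2).transpose(1, 2))  # (B, h*w, dim) patch tokens
        seq = torch.cat(tokens, dim=1)                   # concatenate tokens across modalities
        seq = self.transformer(seq)                      # long-range, cross-modal attention
        return self.head(seq.mean(dim=1))                # pooled classification logits

logits = HybridClassifier()(
    [torch.randn(2, 1, 128, 128), torch.randn(2, 1, 128, 128)])  # e.g. two MRI sequences
```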


2021 ◽  
Vol 66 (Special Issue) ◽  
pp. 38-38
Author(s):  
Sorana D. Bolboacă ◽  
Adriana Elena Bulboacă ◽
...  

"The Clinical Decision Support (CDS), a form of artificial intelligence (AI), consider physician expertise and cognitive function along with patient’s data as the input and case-specific medical decision as an output. The improvements in physician’s performances when using a CDS ranges from 13% to 68%. The AI applications are of large interest nowadays, and a lot of effort is also put in the development of IT applications in healthcare. Medical decision support systems for non-medical staff users (MDSS-NMSF) as phone applications are nowadays available on the market. A MDSS-NMSF app is generally not accompanied by a scientific evaluation of the performances, even if they are freely available or not. Two clinical scenarios were created, and Doctor31 retrieved the diagnosis decisions. First scenario: man, 29 years old, and three symptoms: dysphagia, weight loss (normal body mass index), and tiredness. Second scenario: women, 47 years old with L5-S1 disk herniation, abnormal anti-TPO antibodies, lower back pain (burning sensations), constipation, and tiredness. The outcome possible effects and implications, as well as vulnerabilities induced on the used, are highlighted and discussed. "


2007 ◽  
Vol 07 (04) ◽  
pp. 663-687 ◽  
Author(s):  
ASHISH KHARE ◽  
UMA SHANKER TIWARY

Wavelet-based denoising is an effective way to improve the quality of images. Various methods have been proposed for denoising using the real-valued wavelet transform. Complex-valued wavelets exist but are rarely used. The complex wavelet transform provides phase information and is shift invariant in nature. In medical image denoising, both removal of phase incoherency and maintenance of phase coherency are needed. This paper is an attempt to explore and apply the complex Daubechies wavelet transform to medical image denoising. We have proposed a method to compute a complex threshold that does not depend on any assumed model of noise; in this sense it is a "universal" method. The proposed complex-domain shrinkage function depends on the mean, variance and median of the wavelet coefficients. To test the effectiveness of the proposed method, we computed the input and output SNR and PSNR of various types of medical images. The method gives an improvement for additive Gaussian, speckle and salt-and-pepper noise, as well as for mixtures of these noise types, over a range of noisy images with 15 dB to 30 dB noise levels, and outperforms other methods based on the real-valued wavelet transform. The application of the proposed method to ultrasound, X-ray and MRI images is demonstrated in the experiments.
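A simplified sketch of statistics-driven wavelet shrinkage is given below, assuming the PyWavelets library; it uses a real-valued Daubechies wavelet and an illustrative threshold built from each band's median and standard deviation, whereas the paper's method operates on complex Daubechies coefficients with its own complex-domain shrinkage formula.

```python
# Simplified wavelet-shrinkage denoising (real-valued db4, illustrative threshold).
import numpy as np
import pywt

def denoise(image, wavelet="db4", level=3):
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    new_coeffs = [coeffs[0]]                      # keep the approximation band untouched
    for detail in coeffs[1:]:                     # (horizontal, vertical, diagonal) bands
        shrunk = []
        for band in detail:
            # Data-driven threshold from the band's own statistics, in the spirit of the
            # paper's model-free "universal" threshold; the exact formula differs.
            t = np.abs(np.median(band)) + np.std(band)
            shrunk.append(pywt.threshold(band, t, mode="soft"))
        new_coeffs.append(tuple(shrunk))
    return pywt.waverec2(new_coeffs, wavelet)

noisy = np.random.rand(256, 256)                  # stand-in for a noisy ultrasound/X-ray/MRI slice
clean = denoise(noisy)
```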

