Multi-Task Learning for Medical Image Inpainting Based on Organ Boundary Awareness

2021 ◽  
Vol 11 (9) ◽  
pp. 4247
Author(s):  
Minh-Trieu Tran ◽  
Soo-Hyung Kim ◽  
Hyung-Jeong Yang ◽  
Guee-Sang Lee

Distorted medical images can significantly hamper diagnosis, notably in the analysis of Computed Tomography (CT) images and in organ segmentation. Improving the accuracy of diagnostic imagery by reconstructing damaged regions is therefore important, and these issues have recently been studied extensively in the field of medical image inpainting. Inpainting techniques are emerging in medical image analysis because local deformations are common in medical modalities, caused by factors such as metallic implants, foreign objects, or specular reflections during image capture. Completing such missing or distorted regions is important for downstream tasks such as segmentation or classification. In this paper, a novel framework for medical image inpainting of CT images is presented, using a multi-task learning model that learns the shape and structure of the organs of interest. This is accomplished by training the prediction of edges and organ boundaries simultaneously with the inpainting task, whereas state-of-the-art methods focus only on the inpainted area without considering the global structure of the target organ. As a result, our model reproduces medical images with sharp contours and exact organ locations, generating more realistic and believable images than other approaches. In quantitative evaluation, the proposed method achieved the best results reported in the literature so far: a PSNR of 43.44 dB and SSIM of 0.9818 for square-shaped regions, and a PSNR of 38.06 dB and SSIM of 0.9746 for arbitrary-shaped regions. By learning the detailed structure of organs, the proposed model generates sharp and clear inpainted images, demonstrating the promise of the approach for medical image analysis, where the completion of missing or distorted regions remains a challenging task.
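The PSNR and SSIM figures quoted above follow the standard definitions. As an illustrative sketch (not the authors' evaluation code), PSNR and a simplified single-window SSIM can be computed with NumPy as follows; the function names are mine, and the global-statistics SSIM omits the usual sliding window:

```python
import numpy as np

def psnr(original, restored, max_val=255.0):
    """Peak signal-to-noise ratio in dB between two same-shaped images."""
    mse = np.mean((original.astype(np.float64) - restored.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)

def ssim_global(x, y, max_val=255.0):
    """Simplified SSIM using global image statistics (no sliding window)."""
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    c1, c2 = (0.01 * max_val) ** 2, (0.03 * max_val) ** 2  # stabilizing constants
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / ((mx**2 + my**2 + c1) * (vx + vy + c2))
```

A perfect reconstruction gives infinite PSNR and an SSIM of 1.0; the reported 43.44 dB / 0.9818 indicate reconstructions very close to the ground truth.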

Diagnostics ◽  
2021 ◽  
Vol 11 (8) ◽  
pp. 1384
Author(s):  
Yin Dai ◽  
Yifan Gao ◽  
Fayu Liu

Over the past decade, convolutional neural networks (CNNs) have shown very competitive performance in medical image analysis tasks such as disease classification, tumor segmentation, and lesion detection. CNNs have great advantages in extracting local features of images; however, due to the locality of the convolution operation, they cannot model long-range relationships well. Recently, transformers have been applied to computer vision and have achieved remarkable success on large-scale datasets. Compared with natural images, multi-modal medical images have explicit and important long-range dependencies, and effective multi-modal fusion strategies can greatly improve the performance of deep models. This prompts us to study transformer-based structures and apply them to multi-modal medical images. Existing transformer-based network architectures require large-scale datasets to achieve good performance, but medical imaging datasets are relatively small, which makes it difficult to apply pure transformers to medical image analysis. Therefore, we propose TransMed for multi-modal medical image classification. TransMed combines the advantages of CNNs and transformers to efficiently extract low-level image features and establish long-range dependencies between modalities. We evaluated our model on two datasets, parotid gland tumor classification and knee injury classification. Combining our contributions, we achieve improvements of 10.1% and 1.9% in average accuracy, respectively, outperforming other state-of-the-art CNN-based models. The results of the proposed method are promising and have great potential to be applied to a wide range of medical image analysis tasks. To the best of our knowledge, this is the first work to apply transformers to multi-modal medical image classification.
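The long-range, cross-modality dependencies described above are modeled by transformer self-attention. Below is a minimal NumPy sketch of scaled dot-product attention applied to a toy sequence of patch features from two modalities; the modality names, shapes, and setup are my own assumptions for illustration, not TransMed's actual architecture:

```python
import numpy as np

def scaled_dot_product_attention(q, k, v):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d)) V."""
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)                 # (n, n) pairwise similarities
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

# Toy setup: patch features from two modalities concatenated into one sequence,
# so attention can relate any patch to any other, across modalities.
rng = np.random.default_rng(0)
t1_patches = rng.normal(size=(8, 16))   # e.g. 8 patches of T1-weighted features
t2_patches = rng.normal(size=(8, 16))   # e.g. 8 patches of T2-weighted features
tokens = np.vstack([t1_patches, t2_patches])  # (16, 16) token sequence

out = scaled_dot_product_attention(tokens, tokens, tokens)
print(out.shape)  # (16, 16): one attended feature vector per patch
```

Because every token attends to every other, the receptive field is global in a single layer, in contrast to the local receptive field of a convolution.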


2008 ◽  
Vol 32 (6) ◽  
pp. 513-520 ◽  
Author(s):  
Ludvík Tesař ◽  
Akinobu Shimizu ◽  
Daniel Smutek ◽  
Hidefumi Kobatake ◽  
Shigeru Nawano

2017 ◽  
Vol 23 (2) ◽  
pp. 271-278
Author(s):  
Shoichiro Takao ◽  
Sayaka Kondo ◽  
Junji Ueno ◽  
Tadashi Kondo

2020 ◽  
Vol 237 (12) ◽  
pp. 1438-1441
Author(s):  
Soenke Langner ◽  
Ebba Beller ◽  
Felix Streckenbach

Abstract Medical images play an important role in ophthalmology and radiology. Medical image analysis has greatly benefited from the application of “deep learning” techniques in clinical and experimental radiology. Clinical applications and their relevance for radiological imaging in ophthalmology are presented.


2019 ◽  
Vol 1 (01) ◽  
pp. 39-50
Author(s):  
Pasumponpandian A

Image inpainting is the method of restoring the damaged or missing parts of images, an essential preprocessing step in medical image analysis for disease diagnosis. Because traditional inpainting approaches are often ineffective, this paper proposes a hybrid inpainting technique combining EdgeConnect, PatchMatch, and the deep image prior to improve the quality and resolution of images. The proposed method is tested on a number of images gathered from the web to demonstrate the competence of the proposed inpainting technique.
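For context, the simplest classical baseline that the hybrid method improves upon is diffusion-based inpainting, which fills the hole by repeatedly averaging neighboring pixels. The sketch below is that baseline only, not the paper's EdgeConnect/PatchMatch/deep-image-prior hybrid; the function name and iteration count are mine:

```python
import numpy as np

def diffusion_inpaint(image, mask, iters=200):
    """Fill masked pixels (mask == True) by iterated 4-neighbour averaging.

    A classical diffusion baseline: smooth, but blurs texture and edges,
    which is why edge-aware and patch-based methods were developed."""
    out = image.astype(np.float64).copy()
    out[mask] = out[~mask].mean()  # initialise holes with the mean of known pixels
    for _ in range(iters):
        padded = np.pad(out, 1, mode="edge")
        avg = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
               padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0
        out[mask] = avg[mask]      # update only the missing region
    return out
```

Diffusion propagates smooth intensity into the hole but cannot hallucinate structure, which motivates combining edge prediction (EdgeConnect), exemplar copying (PatchMatch), and learned priors (deep image prior).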


Medical image analysis can be used to develop image retrieval systems that provide access to image databases using extracted features. Content-Based Image Retrieval (CBIR) retrieves similar images from image databases. Over the last few years, medical image collections have grown and are increasingly used for medical image analysis. Here, we propose medical image retrieval using two-dimensional Principal Component Analysis (2DPCA). For extracting medical image features, 2DPCA has the advantage that it evaluates an accurate covariance matrix that is much smaller, and it also requires less time to find eigenvectors. Medical image reconstruction is performed with increasing numbers of 2DPCA components, and the results show that reconstruction accuracy improves as the number of principal components increases. Retrieval is performed in the transformed image space by calculating the Euclidean Distance (ED) between the 2DPCA features of unknown images and those of database images. A minimum distance classifier, which is a simple classifier, is used for retrieval. Simulation results for different medical images show increased retrieval accuracy. Further, segmentation of the retrieved medical images is obtained using the k-means clustering algorithm.
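The 2DPCA retrieval pipeline described above can be sketched as follows, using the standard 2DPCA formulation (image covariance over rows, eigenvectors of a width-by-width matrix) followed by minimum-distance retrieval. This is my illustrative sketch with assumed function names, not the paper's implementation:

```python
import numpy as np

def fit_2dpca(images, n_components=2):
    """2DPCA: top eigenvectors of G = E[(A - mean)^T (A - mean)].

    G is only (width x width), much smaller than the covariance matrix
    classical PCA would build from flattened images."""
    stack = np.stack([im.astype(np.float64) for im in images])
    mean = stack.mean(axis=0)
    centered = stack - mean
    g = sum(c.T @ c for c in centered) / len(images)
    vals, vecs = np.linalg.eigh(g)           # eigenvalues in ascending order
    return vecs[:, ::-1][:, :n_components]   # keep the top components

def retrieve(query, database, proj):
    """Minimum-distance classifier: index of the nearest database image
    by Euclidean distance between projected 2DPCA feature matrices."""
    qf = query.astype(np.float64) @ proj
    dists = [np.linalg.norm(qf - db.astype(np.float64) @ proj) for db in database]
    return int(np.argmin(dists))
```

A query identical to a stored image has distance zero to it, so the minimum-distance classifier returns its index; segmentation of the retrieved image would then proceed separately (k-means in the paper).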


2021 ◽  
Vol 1 (1) ◽  
Author(s):  
Xiang Li ◽  
Yuchen Jiang ◽  
Juan J. Rodriguez-Andina ◽  
Hao Luo ◽  
Shen Yin ◽  
...  

Abstract Deep learning techniques have promoted the rise of artificial intelligence (AI) and performed well in computer vision. Medical image analysis is an important application of deep learning, which is expected to greatly reduce the workload of doctors, contributing to more sustainable health systems. However, most current AI methods for medical image analysis are based on supervised learning, which requires a lot of annotated data. The number of medical images available is usually small, and the acquisition of medical image annotations is an expensive process. The generative adversarial network (GAN), an unsupervised method that has become very popular in recent years, can simulate the distribution of real data and reconstruct approximately real data. GANs open exciting new avenues for medical image generation, expanding the number of medical images available for deep learning methods. Generated data can mitigate the problem of insufficient data or imbalanced data categories. Adversarial training is another contribution of GANs to medical imaging that has been applied to many tasks, such as classification, segmentation, and detection. This paper investigates the research status of GANs in medical imaging and analyzes several GAN methods commonly applied in this area. The study addresses GAN applications both for medical image synthesis and for adversarial learning in other medical image tasks. Open challenges and future research directions are also discussed.


2020 ◽  
Vol 64 (2) ◽  
pp. 20508-1-20508-12 ◽  
Author(s):  
Getao Du ◽  
Xu Cao ◽  
Jimin Liang ◽  
Xueli Chen ◽  
Yonghua Zhan

Abstract Medical image analysis is performed by analyzing images obtained by medical imaging systems to solve clinical problems. The purpose is to extract effective information and improve the level of clinical diagnosis. In recent years, automatic segmentation based on deep learning (DL) methods has been widely used, where a neural network automatically learns image features, in sharp contrast with traditional hand-crafted feature methods. U-net is one of the most important semantic segmentation frameworks based on convolutional neural networks (CNNs). It is widely used in the medical image analysis domain for lesion segmentation, anatomical segmentation, and classification. The advantage of this network framework is that it can accurately segment the desired target, process and evaluate medical images effectively and objectively, and help improve the accuracy of diagnosis from medical images. Therefore, this article presents a literature review of medical image segmentation based on U-net, focusing on the successful segmentation experience of U-net for different lesion regions in six medical imaging systems. Along with the latest advances in DL, this article introduces methods for combining the original U-net architecture with deep learning and for improving the U-net network.
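U-net's defining feature is its encoder-decoder shape with skip connections that fuse same-resolution encoder features into the decoder. The NumPy sketch below illustrates that data flow only (convolutions are omitted, and the channel mixing is a stand-in for convolution); all names and shapes are my own:

```python
import numpy as np

def pool2(x):
    """2x2 max pooling (downsample), assuming even spatial dims."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

def upsample2(x):
    """Nearest-neighbour 2x upsampling."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

def unet_skeleton(x):
    """Encoder-decoder with skip connections, U-net style.

    The decoder combines upsampled coarse features with same-resolution
    encoder features; convolutions are omitted for clarity."""
    e1 = x                      # encoder level 1, full resolution
    e2 = pool2(e1)              # encoder level 2, half resolution
    bottleneck = pool2(e2)      # coarsest features
    d2 = upsample2(bottleneck)  # decoder level 2
    d2 = np.stack([d2, e2])     # skip connection: stack along channel axis
    d1 = upsample2(d2.mean(axis=0))  # decoder level 1 (channel mix as conv stand-in)
    return np.stack([d1, e1])   # final skip connection at full resolution

x = np.arange(64, dtype=np.float64).reshape(8, 8)
out = unet_skeleton(x)
print(out.shape)  # (2, 8, 8): fused decoder + encoder features at full resolution
```

The skip connections are what let U-net recover fine boundary detail lost by pooling, which is why it performs well on lesion and anatomical segmentation.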


Author(s):  
Khalid Raza ◽  
Nripendra Kumar Singh

Background: Interpretation of medical images for the diagnosis and treatment of complex diseases from high-dimensional and heterogeneous data remains a key challenge in transforming healthcare. In the last few years, both supervised and unsupervised deep learning have achieved promising results in the area of medical image analysis. Several reviews of supervised deep learning have been published, but hardly any rigorous review of unsupervised deep learning for medical image analysis is available. Objectives: The objective of this review is to systematically present various unsupervised deep learning models, tools, and benchmark datasets applied to medical image analysis. The discussed models include autoencoders and their variants, restricted Boltzmann machines (RBMs), deep belief networks (DBNs), deep Boltzmann machines (DBMs), and generative adversarial networks (GANs). Further, future research opportunities and challenges of unsupervised deep learning techniques for medical image analysis are also discussed. Conclusion: Currently, interpretation of medical images for diagnostic purposes is usually performed by human experts, who may increasingly be supported by computer-aided diagnosis, thanks to advances in machine learning techniques, including deep learning, and the availability of cheap computing infrastructure through cloud computing. Both supervised and unsupervised machine learning approaches are widely applied in medical image analysis, each with certain pros and cons. Since human supervision is not always available, and may be inadequate or biased, unsupervised learning algorithms hold great promise for biomedical image analysis.
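The simplest of the unsupervised models listed above is the autoencoder, which learns to compress and reconstruct inputs without labels. Below is a hedged, minimal sketch of a linear autoencoder trained by gradient descent on synthetic feature vectors; the data, shapes, and hyperparameters are my own illustrative choices:

```python
import numpy as np

# Tiny linear autoencoder: encode 8-D inputs to a 3-D code, then reconstruct.
# Training minimises reconstruction error -- no labels are required.
rng = np.random.default_rng(0)
x = rng.normal(size=(200, 8))                 # unlabeled "feature" vectors
w_enc = rng.normal(scale=0.3, size=(8, 3))    # encoder weights: 8 -> 3
w_dec = rng.normal(scale=0.3, size=(3, 8))    # decoder weights: 3 -> 8

lr = 0.05
for _ in range(2000):
    code = x @ w_enc
    recon = code @ w_dec
    err = recon - x                           # reconstruction error
    # gradients of mean squared error w.r.t. both weight matrices
    grad_dec = code.T @ err / len(x)
    grad_enc = x.T @ (err @ w_dec.T) / len(x)
    w_dec -= lr * grad_dec
    w_enc -= lr * grad_enc

final_mse = np.mean((x @ w_enc @ w_dec - x) ** 2)
print(final_mse)  # substantially below the input variance of ~1.0
```

A linear autoencoder of this form converges to the subspace spanned by the top principal components of the data; nonlinear variants, RBMs, DBNs, and GANs build on the same label-free training idea.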

