Medical image processing with contextual style transfer

Author(s):  
Yin Xu ◽  
Yan Li ◽  
Byeong-Seok Shin

Abstract With recent advances in deep learning research, generative models have made remarkable progress and play an increasingly important role in industrial applications. At the same time, technologies derived from generative methods, such as style transfer and image synthesis, are widely discussed among researchers. In this work, we treat generative methods as a possible solution to medical image augmentation. We propose a context-aware generative framework that can change the grey scale of CT scans with almost no semantic loss. By producing target images with a specific style/distribution and adding the generated images to the training set, we greatly increase the robustness of the segmentation model. We also improve pixel-level spine-segmentation accuracy by 2–4% over the original U-Net. Lastly, we compare the images generated with different feature extractors (VGG, ResNet and DenseNet) and analyse their style-transfer performance in detail.
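The comparison of VGG, ResNet and DenseNet feature extractors suggests the usual Gram-matrix formulation of style loss, where feature maps from a pretrained network are compared across layers. A minimal NumPy sketch of that loss (not the authors' code; the feature shapes and layer count are illustrative):

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of one (channels, height, width) feature map."""
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return f @ f.T / (c * h * w)

def style_loss(gen_feats, style_feats):
    """Mean squared difference between Gram matrices, summed over
    the layers of whichever feature extractor produced the maps."""
    return sum(
        float(np.mean((gram_matrix(g) - gram_matrix(s)) ** 2))
        for g, s in zip(gen_feats, style_feats)
    )

# Illustrative stand-ins for extractor activations at three layers.
rng = np.random.default_rng(0)
feats_a = [rng.standard_normal((8, 16, 16)) for _ in range(3)]
loss_same = style_loss(feats_a, feats_a)  # identical features -> 0.0
```

Swapping the extractor (VGG vs. ResNet vs. DenseNet) only changes which activations are fed to `style_loss`; the Gram-matrix comparison itself stays the same.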

Author(s):  
J. Magelin Mary ◽  
Chitra K. ◽  
Y. Arockia Suganthi

Image processing, in general, involves applying signal processing to an input image, for example to isolate its individual colour planes. It plays an important role in image analysis and computer vision. This paper compares the efficiency of two approaches to detecting breast cancer in medical image processing. The fundamental aim is to apply image mining to medical image handling using a grouping rule generated by a genetic algorithm. Using the extracted border, the border pixels are treated as population strings for a genetic algorithm (GA) and for Ant Colony Optimization (ACO), in order to find the optimum value among the border pixels. We also compare the cost of ACO and GA and attempt to determine which one gives the better solution for identifying an affected area in a medical image, based on computational time.
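The abstract treats border pixels as the GA's population strings. A minimal sketch of that idea, with a hypothetical fitness function (the real objective and operator choices are not given in the abstract):

```python
import random

def genetic_optimum(border_pixels, fitness, generations=50, pop_size=20, seed=1):
    """Evolve a population drawn from border-pixel intensities
    toward the highest-fitness intensity value."""
    rng = random.Random(seed)
    pop = [rng.choice(border_pixels) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]              # truncation selection
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = rng.sample(parents, 2)
            child = (a + b) // 2                    # arithmetic crossover
            if rng.random() < 0.2:                  # small random mutation
                child = max(0, min(255, child + rng.randint(-5, 5)))
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

# Hypothetical fitness: prefer intensities near a boundary threshold of 128.
border = [30, 90, 120, 126, 200, 250]
best = genetic_optimum(border, fitness=lambda v: -abs(v - 128))
```

An ACO variant would replace the crossover/mutation loop with pheromone-weighted sampling over the same border-pixel values, which is what makes the two approaches directly comparable in cost.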


Author(s):  
P. Salgado ◽  
T.-P. Azevedo Perdicoúlis

Medical imaging techniques are used to examine and determine the well-being of the foetus during pregnancy. Digital image processing (DIP) is essential to extract valuable information embedded in most biomedical signals. Afterwards, intelligent segmentation methods, based on classifier algorithms, must be applied to identify structures and relevant features in the data. The success of both steps is essential for helping doctors identify adverse health conditions from the medical images. To obtain simple and reliable DIP methods for foetal images in real time, at different gestational ages, careful pre-processing must be applied to the images. From this, data features are extracted that are meant to be used as input to the segmentation algorithms presented in this work. Due to the high dimension of the problems in question, aggregation of the data is also desirable. Segmentation of the images is done by revisiting the K-nn algorithm, a conventional nonparametric classifier. Besides its simplicity, its power to achieve high classification accuracy in medical applications has been demonstrated. In this work two versions of this algorithm are presented: (i) an enhancement of the standard version that aggregates the data a priori and (ii) an iterative version of the same method in which the training set (TS) is not static. The procedure is demonstrated in two experiments on images of different modalities: a magnetic resonance image and an ultrasound image. The results were assessed by comparison with the K-means clustering algorithm, a well-known and robust method for this type of task. Both versions showed results matching the validation method close to 100%, although the iterative version displays much higher reliability in the classification.
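The two K-nn variants described above can be sketched as follows: a plain k-nearest-neighbour vote, and an iterative version whose training set grows as pixels are labelled. The feature vectors and parameters here are illustrative, not the authors' setup:

```python
import numpy as np

def knn_classify(train_X, train_y, query, k=3):
    """Plain k-nearest-neighbour majority vote for one feature vector."""
    d = np.linalg.norm(train_X - query, axis=1)
    nearest = train_y[np.argsort(d)[:k]]
    vals, counts = np.unique(nearest, return_counts=True)
    return vals[np.argmax(counts)]

def iterative_knn(train_X, train_y, queries, k=3):
    """Iterative variant: each newly labelled sample is appended to the
    training set before the next query is classified (TS is not static)."""
    X, y = train_X.copy(), train_y.copy()
    labels = []
    for q in queries:
        lbl = knn_classify(X, y, q, k)
        labels.append(int(lbl))
        X = np.vstack([X, q])      # training set grows
        y = np.append(y, lbl)
    return labels

# Two toy pixel-feature clusters standing in for two tissue classes.
train_X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0],
                    [10.0, 10.0], [10.0, 11.0], [11.0, 10.0]])
train_y = np.array([0, 0, 0, 1, 1, 1])
labels = iterative_knn(train_X, train_y, np.array([[0.5, 0.5], [10.5, 10.5]]))
```

In practice the growing training set is what gives the iterative version its higher reliability: confidently classified pixels reinforce the class regions for later, more ambiguous pixels.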


2021 ◽  
Vol 7 (8) ◽  
pp. 124
Author(s):  
Kostas Marias

The role of medical image computing in oncology is growing stronger, not least due to the unprecedented advancement of computational AI techniques, providing a technological bridge between radiology and oncology, which could significantly accelerate the advancement of precision medicine throughout the cancer care continuum. Medical image processing has been an active field of research for more than three decades, focusing initially on traditional image analysis tasks such as registration, segmentation, fusion, and contrast optimization. However, with the advancement of model-based medical image processing, the field of imaging biomarker discovery has focused on transforming functional imaging data into meaningful biomarkers that are able to provide insight into a tumor’s pathophysiology. More recently, the advancement of high-performance computing, in conjunction with the availability of large medical imaging datasets, has enabled the deployment of sophisticated machine learning techniques in the context of radiomics and deep learning modeling. This paper reviews and discusses the evolving role of image analysis and processing through the lens of the abovementioned developments, which hold promise for accelerating precision oncology, in the sense of improved diagnosis, prognosis, and treatment planning of cancer.


2021 ◽  
Vol 69 ◽  
pp. 101960
Author(s):  
Israa Alnazer ◽  
Pascal Bourdon ◽  
Thierry Urruty ◽  
Omar Falou ◽  
Mohamad Khalil ◽  
...  

2020 ◽  
Vol 36 (Supplement_2) ◽  
pp. i610-i617
Author(s):  
Mohammad Lotfollahi ◽  
Mohsen Naghipourfar ◽  
Fabian J Theis ◽  
F Alexander Wolf

Abstract Motivation While generative models have shown great success in generating high-dimensional samples conditional on low-dimensional descriptors (stroke thickness in MNIST, hair color in CelebA, speaker identity in WaveNet), out-of-distribution generation poses fundamental problems due to the difficulty of learning a compact joint distribution across conditions. The canonical conditional variational autoencoder (CVAE), for instance, does not explicitly relate conditions during training and, hence, has no explicit incentive to learn such a compact representation. Results We overcome this limitation of the CVAE by matching distributions across conditions using maximum mean discrepancy in the decoder layer that follows the bottleneck. This introduces a strong regularization both for reconstructing samples within the same condition and for transforming samples across conditions, resulting in much improved generalization. As this amounts to solving a style-transfer problem, we refer to the model as transfer VAE (trVAE). Benchmarking trVAE on high-dimensional image and single-cell RNA-seq datasets, we demonstrate higher robustness and higher accuracy than existing approaches. We also show qualitatively improved predictions by tackling previously problematic minority classes and multiple conditions in the context of cellular perturbation response to treatment and disease based on high-dimensional single-cell gene expression data. For generic tasks, we improve Pearson correlations of high-dimensional estimated means and variances with their ground truths from 0.89 to 0.97 and 0.75 to 0.87, respectively. We further demonstrate that trVAE learns cell-type-specific responses after perturbation and improves the prediction of most cell-type-specific genes by 65%. Availability and implementation The trVAE implementation is available via github.com/theislab/trvae. The results of this article can be reproduced via github.com/theislab/trvae_reproducibility.
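The maximum mean discrepancy term that trVAE applies to the first decoder layer can be sketched with the standard biased MMD estimator; the RBF kernel and bandwidth below are illustrative choices, not necessarily the ones used in the paper:

```python
import numpy as np

def rbf_kernel(x, y, gamma=1.0):
    """RBF kernel matrix between the rows of x and the rows of y."""
    d2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def mmd2(x, y, gamma=1.0):
    """Squared maximum mean discrepancy between two samples
    (biased estimator). Near zero when x and y share a distribution."""
    return (rbf_kernel(x, x, gamma).mean()
            + rbf_kernel(y, y, gamma).mean()
            - 2.0 * rbf_kernel(x, y, gamma).mean())

# Stand-ins for decoder-layer activations under two conditions.
rng = np.random.default_rng(0)
h_cond_a = rng.standard_normal((50, 2))
h_cond_b = rng.standard_normal((50, 2)) + 3.0  # shifted distribution
```

During training this quantity, computed between the activations of different conditions, is added to the reconstruction loss, which is what pushes the decoder toward a shared, compact representation across conditions.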


2021 ◽  
Vol 82 ◽  
pp. 103755
Author(s):  
Shengyan Cai ◽  
Fangyuan Chai ◽  
Chunhuan Hu ◽  
Xue Han ◽  
Shuyu Liu
