Enhanced Magnetic Resonance Image Synthesis with Contrast-Aware Generative Adversarial Networks

2021 ◽  
Vol 7 (8) ◽  
pp. 133
Author(s):  
Jonas Denck ◽  
Jens Guehring ◽  
Andreas Maier ◽  
Eva Rothgang

A magnetic resonance imaging (MRI) exam typically consists of the acquisition of multiple MR pulse sequences, which are required for a reliable diagnosis. With the rise of generative deep learning models, approaches for the synthesis of MR images have been developed to either synthesize additional MR contrasts, generate synthetic data, or augment existing data for AI training. While current generative approaches allow only the synthesis of specific sets of MR contrasts, we developed a method to generate synthetic MR images with adjustable image contrast. To this end, we trained a generative adversarial network (GAN) with a separate auxiliary classifier (AC) network to generate synthetic MR knee images conditioned on various acquisition parameters (repetition time, echo time, and image orientation). The AC determined the repetition time with a mean absolute error (MAE) of 239.6 ms, the echo time with an MAE of 1.6 ms, and the image orientation with an accuracy of 100%; it can therefore properly condition the generator network during training. Moreover, in a visual Turing test, two experts mislabeled 40.5% of real and synthetic MR images, demonstrating that the image quality of the generated synthetic MR images is comparable to that of real MR images. This work can support radiologists and technologists during the parameterization of MR sequences by previewing the resulting MR contrast, can serve as a valuable tool for radiology training, and can be used for customized data generation to support AI training.
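The conditioning scheme described above can be sketched as a latent vector that concatenates random noise with the acquisition parameters. This is a minimal illustration; the vector layout, normalization ranges, and function names are assumptions, not the paper's implementation:

```python
import numpy as np

def make_generator_input(noise_dim, tr_ms, te_ms, orientation, n_orientations=3, rng=None):
    """Build a conditioned latent vector: random noise concatenated with
    normalized acquisition parameters (TR, TE) and a one-hot orientation code.
    Normalization ranges are illustrative typical-protocol values."""
    rng = rng if rng is not None else np.random.default_rng(0)
    z = rng.standard_normal(noise_dim)
    # Scale TR/TE to roughly [0, 1] so they are commensurate with the noise.
    cond = np.array([tr_ms / 5000.0, te_ms / 100.0])
    one_hot = np.zeros(n_orientations)
    one_hot[orientation] = 1.0
    return np.concatenate([z, cond, one_hot])

vec = make_generator_input(noise_dim=128, tr_ms=2500.0, te_ms=30.0, orientation=1)
```

During training, the auxiliary classifier would regress TR/TE and classify orientation from the generated image, providing the conditioning signal to the generator.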

2021 ◽  
Vol 11 (4) ◽  
pp. 1380
Author(s):  
Yingbo Zhou ◽  
Pengcheng Zhao ◽  
Weiqin Tong ◽  
Yongxin Zhu

While Generative Adversarial Networks (GANs) have shown promising performance in image generation, they suffer from numerous issues such as mode collapse and training instability. To stabilize GAN training and improve image synthesis quality and diversity, we propose a simple yet effective approach, Contrastive Distance Learning GAN (CDL-GAN), in this paper. Specifically, we add Consistent Contrastive Distance (CoCD) and Characteristic Contrastive Distance (ChCD) into a principled framework to improve GAN performance. The CoCD explicitly maximizes the ratio of the distance between generated images to the increment between noise vectors to strengthen image feature learning for the generator. The ChCD measures the sampling distance of the encoded images in Euler space to boost feature representations for the discriminator. We model the framework by employing a Siamese Network as a module in GANs without any modification to the backbone. Both qualitative and quantitative experiments conducted on three public datasets demonstrate the effectiveness of our method.
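The CoCD term described above, a ratio of image-space distance to noise-space increment, can be sketched as follows. The exact loss formulation is not given in the abstract, so this is an assumed minimal form:

```python
import numpy as np

def consistent_contrastive_distance(g_z1, g_z2, z1, z2, eps=1e-8):
    """Ratio of the distance between two generated images to the distance
    between their noise vectors. Maximizing this ratio encourages the
    generator to map distinct noise vectors to distinct images, which
    counters mode collapse. The epsilon guards against division by zero."""
    num = np.linalg.norm(np.asarray(g_z1) - np.asarray(g_z2))
    den = np.linalg.norm(np.asarray(z1) - np.asarray(z2)) + eps
    return num / den
```

If two nearby noise vectors produce nearly identical images (a collapsing generator), this ratio shrinks toward zero, so maximizing it pushes the generator toward diverse outputs.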


2010 ◽  
Vol 51 (3) ◽  
pp. 309-315 ◽  
Author(s):  
Raili Raininko ◽  
Peter Mattsson

Background: Age- and sex-related changes of metabolites in healthy adult brains have been examined with different 1H magnetic resonance spectroscopy (MRS) methods in varying populations, and with differing results. A long repetition time and short echo time technique reduces quantification errors due to T1 and T2 relaxation effects and makes it possible to measure metabolites with short T2 relaxation times. Purpose: To examine the effect of age on the metabolite concentrations measured by 1H MRS in normal supraventricular white matter using a long repetition time (TR) and a short echo time (TE). Material and Methods: Supraventricular white matter of 57 healthy subjects (25 women, 32 men), aged 13 to 72 years, was examined with single-voxel MRS at 1.5T using a TR of 6000 ms and a TE of 22 ms. Tissue water was used as a reference in quantification. Results: Myoinositol increased slightly and total N-acetyl aspartate (NAA) decreased slightly with increasing age. The glutamine/glutamate complex (Glx) showed U-shaped age dependence, with the highest concentrations in the youngest and oldest subjects. No significant age dependence was found in total choline and total creatine. No gender differences were found. Macromolecule/lipid (ML) fractions were reliably measurable in only 36/57 or even fewer subjects and showed very large deviations. Conclusion: The concentrations of several metabolites in cerebral supraventricular white matter are age dependent on 1H MRS, even in young and middle-aged people, and age dependency can be nonlinear. Each 1H MRS study of the brain should therefore take age into account, whereas sex does not appear to be so important. The use of macromolecule and lipid evaluations is compromised by less successful quantification and large variations in healthy people.
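The rationale for the long-TR/short-TE protocol follows from the standard spin-echo signal model, S = S0 · (1 − exp(−TR/T1)) · exp(−TE/T2): with TR = 6000 ms and TE = 22 ms, both relaxation-weighting factors approach 1, so the measured signal tracks concentration with little T1/T2 bias. The T1/T2 values below are illustrative order-of-magnitude figures, not values from the paper:

```python
import numpy as np

def relative_signal(tr_ms, te_ms, t1_ms, t2_ms):
    """Relaxation weighting of the spin-echo signal relative to S0:
    (1 - exp(-TR/T1)) * exp(-TE/T2). Values near 1 mean quantification
    is nearly free of T1/T2 bias."""
    return (1.0 - np.exp(-tr_ms / t1_ms)) * np.exp(-te_ms / t2_ms)

# Study protocol (TR = 6000 ms, TE = 22 ms) vs. a short-TR/long-TE protocol,
# with assumed metabolite relaxation times T1 = 1400 ms, T2 = 300 ms.
w_study = relative_signal(6000, 22, t1_ms=1400, t2_ms=300)
w_short = relative_signal(500, 100, t1_ms=1400, t2_ms=300)
```

Here `w_study` comes out above 0.9 while `w_short` falls well below it, which is precisely the quantification-error reduction the abstract refers to.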


2020 ◽  
Vol 2020 ◽  
pp. 1-10
Author(s):  
Linyan Li ◽  
Yu Sun ◽  
Fuyuan Hu ◽  
Tao Zhou ◽  
Xuefeng Xi ◽  
...  

In this paper, we propose an Attentional Concatenation Generative Adversarial Network (ACGAN) aimed at generating 1024 × 1024 high-resolution images. First, we propose a multilevel cascade structure for text-to-image synthesis. During training, we gradually add new layers and, at the same time, use the results and word vectors from the previous layer as inputs to the next layer to generate high-resolution images with photo-realistic details. Second, the deep attentional multimodal similarity model is introduced into the network, and we match word vectors with images in a common semantic space to compute a fine-grained matching loss for training the generator. In this way, we can attend to fine-grained, word-level information in the semantics. Finally, a measure of diversity is added to the discriminator, which enables the generator to obtain more diverse gradient directions and improves the diversity of generated samples. The experimental results show that the inception scores of the proposed model on the CUB and Oxford-102 datasets reached 4.48 and 4.16, improvements of 2.75% and 6.42% over Attentional Generative Adversarial Networks (AttenGAN). The ACGAN model is more effective for text-generated images, and the resulting images are closer to real images.
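The fine-grained matching idea, scoring word vectors against image features in a shared semantic space, can be sketched with cosine similarity. The shapes and the max-then-mean pooling here are illustrative assumptions, not the paper's exact attentional similarity formulation:

```python
import numpy as np

def cosine_match(word_vecs, region_feats):
    """Word-region matching score: cosine similarity between each word
    vector (num_words, d) and each image-region feature (num_regions, d),
    taking the best-matching region per word and averaging over words."""
    w = word_vecs / np.linalg.norm(word_vecs, axis=1, keepdims=True)
    r = region_feats / np.linalg.norm(region_feats, axis=1, keepdims=True)
    sim = w @ r.T                  # (num_words, num_regions) cosine similarities
    return sim.max(axis=1).mean()  # best region per word, averaged
```

A matching loss built on such a score rewards the generator when every word in the caption finds a well-aligned region in the generated image.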


2020 ◽  
Vol 2020 ◽  
pp. 1-12
Author(s):  
Hyunhee Lee ◽  
Jaechoon Jo ◽  
Heuiseok Lim

Due to institutional and privacy issues, medical imaging research is confronted with serious data scarcity. Image synthesis using generative adversarial networks provides a generic solution to the lack of medical imaging data. We synthesize high-quality brain tumor-segmented MR images, a process that consists of two tasks: synthesis and segmentation. We performed experiments with two different generative networks, the first using the ResNet model, which has significant advantages for style transfer, and the second using the U-Net model, one of the most powerful models for segmentation. We compare the performance of each model and propose a more robust model for synthesizing brain tumor-segmented MR images. Although ResNet produced better-quality images than did U-Net for the same samples, it used a great deal of memory and took much longer to train. U-Net, meanwhile, segmented the brain tumors more accurately than did ResNet.


2021 ◽  
Vol 11 ◽  
Author(s):  
Denis Yoo ◽  
Yuni Annette Choi ◽  
C. J. Rah ◽  
Eric Lee ◽  
Jing Cai ◽  
...  

In this study, the signal enhancement ratio of low-field magnetic resonance (MR) images was investigated using a deep learning-based algorithm. Unpaired image sets (0.06 Tesla and 1.5 Tesla MR images for different patients) were used in this study, following a three-step workflow. In the first step, deformable registration of a 1.5 Tesla MR image onto a 0.06 Tesla MR image was performed to ensure that the shapes of the unpaired set matched. In the second step, a cyclic-generative adversarial network (GAN) was used to generate a synthetic MR image of the original 0.06 Tesla MR image based on the deformed or original 1.5 Tesla MR image. Finally, an enhanced 0.06 Tesla MR image could be generated using the conventional-GAN with the deformed or synthetic MR image. The results from the optimized flow and enhanced MR images showed significant signal enhancement of the anatomical view, especially in the nasal septum, inferior nasal concha, nasopharyngeal fossa, and eye lens. The signal enhancement ratio, signal-to-noise ratio (SNR), and correlation factor between the original and enhanced MR images were analyzed to evaluate image quality. A combined method using conventional- and cyclic-GANs is a promising approach for generating enhanced MR images from low-magnetic-field MR.
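The evaluation metrics mentioned above can be sketched as follows. The paper does not specify its exact SNR definition, so the region-based formula and masks here are illustrative assumptions:

```python
import numpy as np

def snr(image, signal_mask, noise_mask):
    """Simple SNR estimate: mean intensity in a signal region of interest
    divided by the standard deviation in a background (noise) region."""
    return image[signal_mask].mean() / image[noise_mask].std()

def correlation_factor(a, b):
    """Pearson correlation between two images (flattened), e.g. the
    original and enhanced 0.06 Tesla images."""
    return np.corrcoef(np.ravel(a), np.ravel(b))[0, 1]
```

A high correlation factor indicates the enhancement preserved anatomy, while the SNR before/after quantifies the actual signal gain.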


2019 ◽  
Author(s):  
Wei Wang ◽  
Mingang Wang ◽  
Xiaofen Wu ◽  
Xie Ding ◽  
Xuexiang Cao ◽  
...  

Abstract Background: Automatic and detailed segmentation of the prostate using magnetic resonance imaging (MRI) plays an essential role in prostate imaging diagnosis. However, the complexity of the prostate gland hampers accurate segmentation from other tissues. Thus, we propose the automatic prostate segmentation method SegDGAN, which is based on a classic generative adversarial network (GAN) model. Methods: The proposed method comprises a fully convolutional generation network of densely connected blocks and a critic network with multi-scale feature extraction. In these computations, the objective function is optimized using mean absolute error and the Dice coefficient, leading to improved accuracy of segmentation results and correspondence with the ground truth. The common and similar medical image segmentation networks U-Net, fully convolutional network, and SegAN were selected for qualitative and quantitative comparisons with SegDGAN using a 220-patient dataset and the publicly available dataset PROMISE12. The commonly used segmentation evaluation metrics Dice similarity coefficient (DSC), volumetric overlap error (VOE), average surface distance (ASD), and Hausdorff distance (HD) were also used to compare the accuracy of segmentation between these methods. Results: SegDGAN achieved the highest DSC value of 91.66%, the lowest VOE value of 23.47%, and the lowest ASD value of 0.46 mm with the clinical dataset. In addition, the highest DSC value of 88.69%, the lowest VOE value of 23.47%, the lowest ASD value of 0.83 mm, and the lowest HD value of 11.40 mm were achieved with the PROMISE12 dataset. Conclusions: Our experimental results show that the SegDGAN model outperforms other segmentation methods. Keywords: Automatic segmentation, Generative adversarial networks, Magnetic resonance imaging, Prostate
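The overlap metrics reported above have standard definitions that can be stated directly, a minimal sketch on binary masks (the surface-distance metrics ASD and HD require mesh or contour extraction and are omitted):

```python
import numpy as np

def dice(pred, gt):
    """Dice similarity coefficient: 2|A ∩ B| / (|A| + |B|), in [0, 1]."""
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def voe(pred, gt):
    """Volumetric overlap error: 1 - |A ∩ B| / |A ∪ B| (often reported in %)."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return 1.0 - inter / union
```

Note that DSC and VOE move in opposite directions: a perfect segmentation gives DSC = 1 and VOE = 0.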


2021 ◽  
Author(s):  
Mengting Liu ◽  
Piyush Maiti ◽  
Sophia Thomopoulos ◽  
Alyssa Zhu ◽  
Yaqiong Chai ◽  
...  

Abstract Large data initiatives and high-powered brain imaging analyses require the pooling of MR images acquired across multiple scanners, often using different protocols. Prospective cross-site harmonization often involves the use of a phantom or traveling subjects. However, as more datasets become publicly available, there is a growing need for retrospective harmonization, pooling data from sites not originally coordinated together. Several retrospective harmonization techniques have shown promise in removing cross-site image variation. However, most unsupervised methods cannot distinguish between image-acquisition-based variability and cross-site population variability, so they require that datasets contain subjects or patient groups with similar clinical or demographic information. To overcome this limitation, we consider cross-site MRI image harmonization as a style transfer problem rather than a domain transfer problem. Using a fully unsupervised deep-learning framework based on a generative adversarial network (GAN), we show that MR images can be harmonized by directly inserting the style information encoded from a reference image, without knowing their site/scanner labels a priori. We trained our model using data from five large-scale multi-site datasets with varied demographics. Results demonstrated that our style-encoding model can successfully harmonize MR images and match intensity profiles without relying on traveling subjects. This model also avoids the need to control for clinical, diagnostic, or demographic information. Moreover, we further demonstrated that if we included sufficiently diverse images in the training set, our method successfully harmonized MR images collected from unseen scanners and protocols, suggesting a promising novel tool for ongoing collaborative studies.
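The core idea, injecting a reference image's style into a target image without site labels, can be illustrated with a deliberately minimal stand-in for the learned style encoder: transferring first- and second-order intensity statistics. This is not the paper's GAN, only a sketch of the style-transfer framing:

```python
import numpy as np

def harmonize_to_reference(image, reference):
    """Match the intensity profile of `image` to that of `reference` by
    transferring mean and standard deviation. A learned style-encoding GAN
    generalizes this idea to richer, spatially aware style representations."""
    mu_x, sd_x = image.mean(), image.std()
    mu_r, sd_r = reference.mean(), reference.std()
    return (image - mu_x) / sd_x * sd_r + mu_r
```

After this transform the target image has exactly the reference's global mean and standard deviation, which is the crude analogue of "matching intensity profiles" across scanners.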


Author(s):  
Chaudhary Sarimurrab, Ankita Kesari Naman and Sudha Narang

Generative models have gained considerable attention in the field of unsupervised learning via a new and practical framework called Generative Adversarial Networks (GANs), due to their outstanding data generation capability. Many GAN models have been proposed, and several practical applications have emerged in various domains of computer vision and machine learning. Despite GANs' excellent success, there are still obstacles to stable training. In this work, we aim to generate human faces from unlabelled data with the help of Deep Convolutional Generative Adversarial Networks. The applications for generating faces are vast in the fields of image processing, entertainment, and other such industries. Our resulting model successfully generates human faces from the given unlabelled data and random noise.


2005 ◽  
Vol 102 (Special_Supplement) ◽  
pp. 8-13 ◽  
Author(s):  
Josef Novotny ◽  
Josef Vymazal ◽  
Josef Novotny ◽  
Daniela Tlachacova ◽  
Michal Schmitt ◽  
...  

Object. The authors sought to compare the accuracy of stereotactic target imaging using the Siemens 1T EXPERT and 1.5T SYMPHONY magnetic resonance (MR) units. Methods. A water-filled cylindrical Perspex phantom with axial and coronal inserts containing grids of glass rods was fixed in the Leksell stereotactic frame and subjected to MR imaging in the Siemens 1T EXPERT and Siemens 1.5T SYMPHONY units. Identical sequences were used for each unit. The images were transferred to the GammaPlan treatment planning system. Deviations between stereotactic coordinates based on MR images and the estimated real geometrical positions given by the construction of the phantom insert were evaluated for each study. The deviations were further investigated as a function of the MR unit used, the MR sequence, the image orientation, and the spatial position of measured points in the investigated volume. Conclusions. Larger distortions were observed with the SYMPHONY 1.5T unit than with the EXPERT 1T unit. Typical average distortion in the EXPERT 1T was not more than 0.6 mm and 0.9 mm for axial and coronal images, respectively. Typical mean distortion for the SYMPHONY 1.5T was not more than 1 mm and 1.3 mm for axial and coronal images, respectively. The image sequence affected the distortions in both units. Coronal T2-weighted spin-echo images performed in subthalamic imaging produced the largest distortions: 2.6 mm and 3 mm in the EXPERT 1T and SYMPHONY 1.5T, respectively. Larger distortions were observed in coronal slices than in axial slices in both units, and this effect was more pronounced in the SYMPHONY 1.5T. Noncentrally located slice positions in the investigated volume of the phantom were associated with larger distortions.
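The distortion evaluation, deviation between image-derived coordinates and the known grid positions of the phantom insert, amounts to per-point Euclidean deviations summarized over the grid. A simplified sketch of that computation (the study's full protocol also stratifies by sequence, orientation, and slice position):

```python
import numpy as np

def distortion(measured, true):
    """Per-point geometric distortion: Euclidean deviation (in mm) between
    stereotactic coordinates measured on the MR image and the known phantom
    grid positions. Both arrays have shape (n_points, n_dims).
    Returns (mean deviation, max deviation)."""
    dev = np.linalg.norm(measured - true, axis=1)
    return dev.mean(), dev.max()
```

Reporting the mean per image series is what yields the 0.6-1.3 mm "typical average distortion" figures quoted above.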


2020 ◽  
Author(s):  
Belén Vega-Márquez ◽  
Cristina Rubio-Escudero ◽  
Isabel Nepomuceno-Chamorro

Abstract The generation of synthetic data is becoming a fundamental task in the daily life of any organization due to the new data protection laws that are emerging. Because of the rise in the use of Artificial Intelligence, one of the most recent proposals to address this problem is the use of Generative Adversarial Networks (GANs). These networks have demonstrated a great capacity to create synthetic data with very good performance. The goal of synthetic data generation is to create data that will perform similarly to the original dataset for many analysis tasks, such as classification. A limitation of standard GANs in classification problems is that they do not take class labels into account when generating new data; the label is treated like any other attribute. This research work has focused on the creation of new synthetic data from datasets with different characteristics with a Conditional Generative Adversarial Network (CGAN). CGANs are an extension of GANs in which the class label is taken into account when the new data is generated. The performance of our results has been measured in two different ways: first, by comparing the results obtained with classification algorithms, both on the original datasets and on the generated data; second, by checking that the correlation between the original data and the generated data is minimal.
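The CGAN extension described above, feeding the class label to the generator rather than treating it as just another attribute, can be sketched as a conditioned input vector. This is a minimal illustration; the paper's architecture details are not reproduced here:

```python
import numpy as np

def cgan_generator_input(noise, label, n_classes):
    """Conditional-GAN generator input: the class label, one-hot encoded,
    is concatenated with the noise vector, so every generated sample is
    explicitly tied to a class. The discriminator receives the same label
    alongside the (real or fake) sample."""
    one_hot = np.zeros(n_classes)
    one_hot[label] = 1.0
    return np.concatenate([np.asarray(noise), one_hot])
```

Because the label is part of the input rather than a generated attribute, the trained generator can be asked for samples of a specific class on demand, which is what makes the synthetic data usable for downstream classification.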

