Synthesizing VERDICT maps from standard diffusion mp-MRI data using GANs

2021 ◽  
Author(s):  
Eleni Chiou ◽  
Vanya Valindria ◽  
Francesco Giganti ◽  
Shonit Punwani ◽  
Iasonas Kokkinos ◽  
...  

Abstract
Purpose: VERDICT maps have shown promising results in clinical settings, discriminating normal from malignant tissue and identifying specific Gleason grades non-invasively. However, the quantitative estimation of VERDICT maps requires a specific diffusion-weighted imaging (DWI) acquisition. In this study we investigate the feasibility of synthesizing VERDICT maps from the DWI data of multi-parametric (mp)-MRI, which is widely used in clinical practice for prostate cancer diagnosis.
Methods: We use data from 67 patients who underwent both mp-MRI and VERDICT MRI. We compute the ground-truth VERDICT maps from VERDICT MRI and propose a generative adversarial network (GAN)-based approach to synthesize VERDICT maps from mp-MRI DWI data. We use correlation analysis and mean squared error to quantitatively evaluate the quality of the synthetic VERDICT maps against the real ones.
Results: Quantitative results show that the mean values of tumour areas in the synthetic and the real VERDICT maps were strongly correlated, while qualitative results indicate that our method can generate realistic VERDICT maps from the DWI of mp-MRI acquisitions.
Conclusion: Realistic VERDICT maps can be generated using DWI from standard mp-MRI. The synthetic maps preserve important quantitative information, enabling the exploitation of VERDICT MRI for precise prostate cancer characterization with a single mp-MRI acquisition.
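
The paper does not spell out its network configuration, but the task is image-to-image map synthesis with a GAN, so a pix2pix-style conditional training step gives the flavour. The sketch below is a minimal, illustrative assumption (toy layers, channel counts, and loss weights are not from the paper).

```python
# Minimal sketch (PyTorch) of a pix2pix-style conditional GAN step for map
# synthesis; architecture and loss weights are illustrative assumptions,
# not the paper's exact configuration.
import torch
import torch.nn as nn

G = nn.Sequential(  # toy generator: mp-MRI DWI channels -> one VERDICT map
    nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 1, 3, padding=1))
D = nn.Sequential(  # toy discriminator on (input, map) pairs
    nn.Conv2d(4, 64, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(64, 1, 3, padding=1))
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()

dwi, real_map = torch.randn(4, 3, 128, 128), torch.randn(4, 1, 128, 128)

# --- discriminator step: real pairs vs. synthesized pairs ---
fake_map = G(dwi).detach()
d_real = D(torch.cat([dwi, real_map], dim=1))
d_fake = D(torch.cat([dwi, fake_map], dim=1))
loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# --- generator step: fool D while staying close to the real map ---
fake_map = G(dwi)
d_fake = D(torch.cat([dwi, fake_map], dim=1))
loss_g = bce(d_fake, torch.ones_like(d_fake)) + 100.0 * l1(fake_map, real_map)
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```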

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Ji Eun Park ◽  
Dain Eun ◽  
Ho Sung Kim ◽  
Da Hyun Lee ◽  
Ryoung Woo Jang ◽  
...  

Abstract
A generative adversarial network (GAN) creates synthetic images to increase data quantity, but whether a GAN ensures meaningful morphologic variation is still unknown. We investigated whether GAN-based synthetic images provide sufficient morphologic variation to improve molecular-based prediction, using isocitrate dehydrogenase (IDH)-mutant glioblastoma, a rare disease, as the test case. The GAN was initially trained on 500 normal brains and 110 IDH-mutant high-grade astrocytomas, and paired contrast-enhanced T1-weighted and FLAIR MRI data were generated. Diagnostic models were developed from real IDH-wild-type cases (n = 80) combined with real IDH-mutant glioblastomas (n = 38), with synthetic IDH-mutant glioblastomas, or with both real and synthetic IDH-mutant glioblastomas (augmented model). Turing tests indicated that the synthetic data appeared realistic (classification rate of 55%). In both the real and synthetic data, a more frontal or insular location (odds ratio [OR] 1.34 vs. 1.52; P = 0.04) and distinct non-enhancing tumor margins (OR 2.68 vs. 3.88; P < 0.001) were significant predictors of IDH mutation. In an independent validation set, diagnostic accuracy was higher for the augmented model (90.9% [40/44] and 93.2% [41/44] for the two readers, respectively) than for the real model (84.1% [37/44] and 86.4% [38/44], respectively). The GAN-based synthetic images yield morphologically variable, realistic-seeming IDH-mutant glioblastomas. GANs will be useful for creating realistic training sets in terms of morphologic variation and quality, thereby improving diagnostic performance in clinical models.
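
As an illustration of how the three training configurations (real-only, synthetic-only, augmented) can be assembled, the sketch below uses placeholder datasets; the dataset classes, tensor shapes, and sample counts are stand-ins, not the study's actual pipeline.

```python
# Illustrative sketch (PyTorch): building the three training sets compared in
# the study (real-only, synthetic-only, augmented). The datasets below are
# hypothetical placeholders, not the paper's data loaders.
from torch.utils.data import ConcatDataset, DataLoader, TensorDataset
import torch

def make_set(n, label):
    # stand-in for paired CE-T1 + FLAIR images with an IDH label
    return TensorDataset(torch.randn(n, 2, 128, 128), torch.full((n,), label))

real_wildtype = make_set(80, 0)   # real IDH-wild-type cases
real_mutant   = make_set(38, 1)   # real IDH-mutant cases
synth_mutant  = make_set(38, 1)   # GAN-synthesized IDH-mutant cases

real_model_data      = ConcatDataset([real_wildtype, real_mutant])
synthetic_model_data = ConcatDataset([real_wildtype, synth_mutant])
augmented_model_data = ConcatDataset([real_wildtype, real_mutant, synth_mutant])

loader = DataLoader(augmented_model_data, batch_size=8, shuffle=True)
```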


2021 ◽  
Author(s):  
Tham Vo

Abstract
In the abstractive summarization task, most proposed models adopt a deep recurrent neural network (RNN)-based encoder-decoder architecture to learn and generate a meaningful summary for a given input document. However, recent RNN-based models often capture high-frequency, repetitive phrases in long documents during training, which leads to trivial and generic summaries. Moreover, the lack of thorough analysis of the sequential and long-range dependency relationships between words in different contexts while learning the textual representation also makes the generated summaries unnatural and incoherent. To deal with these challenges, in this paper we propose a novel semantic-enhanced generative adversarial network (GAN)-based approach for the abstractive text summarization task, called SGAN4AbSum. We use an adversarial training strategy for our text summarization model in which the generator and discriminator are trained simultaneously to handle summary generation and to distinguish the generated summary from the ground-truth one. The input of the generator is the joint rich-semantic and global-structural latent representation of the training documents, obtained by applying a combined BERT and graph convolutional network (GCN) textual embedding mechanism. Extensive experiments on benchmark datasets demonstrate the effectiveness of our proposed SGAN4AbSum, which achieves competitive ROUGE scores compared with state-of-the-art abstractive text summarization baselines.
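
To make the combined BERT + GCN representation concrete, here is a minimal sketch of fusing contextual token embeddings with one graph-convolution pass over a word graph. The single GCN layer, the self-loop-only adjacency, and the fusion by concatenation are illustrative assumptions, not the paper's exact design.

```python
# Minimal sketch (PyTorch) of a joint semantic (BERT) + structural (GCN)
# token representation; all sizes and the graph are placeholders.
import torch
import torch.nn as nn

n_words, d_bert, d_gcn = 40, 768, 128
bert_emb = torch.randn(n_words, d_bert)   # contextual token embeddings (e.g. from BERT)
adj = torch.eye(n_words)                  # word-graph adjacency (self-loops only here)

# one graph-convolution layer: H' = ReLU(D^-1/2 A D^-1/2 H W)
deg_inv_sqrt = adj.sum(dim=1).clamp(min=1).pow(-0.5)
a_norm = deg_inv_sqrt.unsqueeze(1) * adj * deg_inv_sqrt.unsqueeze(0)
w_gcn = nn.Linear(d_bert, d_gcn, bias=False)
gcn_emb = torch.relu(a_norm @ w_gcn(bert_emb))   # structural token features

# fuse the semantic and structural views for the generator input
joint_repr = torch.cat([bert_emb, gcn_emb], dim=-1)
print(joint_repr.shape)   # torch.Size([40, 896])
```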


Mathematics ◽  
2019 ◽  
Vol 7 (10) ◽  
pp. 883 ◽  
Author(s):  
Shuyu Li ◽  
Sejun Jang ◽  
Yunsick Sung

In traditional music composition, the composer has specialized knowledge of music and combines emotion and creative experience to create music. As computer technology has evolved, various music-related technologies have been developed. Creating new music requires a considerable amount of time; therefore, a system is needed that can automatically compose music from input music. This study proposes a novel melody composition method that enhances the original generative adversarial network (GAN) model to operate on individual bars. Two discriminators form the enhanced GAN model: a long short-term memory (LSTM) model that ensures correlation between bars, and a convolutional neural network (CNN) model that ensures rationality of the bar structure. Experiments were conducted using bar encoding and the enhanced GAN model to compose new melodies and evaluate their quality. In the evaluation, the TF-IDF algorithm was used to calculate the structural differences between four types of musical instrument digital interface (MIDI) files (a randomly composed melody, a melody composed by the original GAN, a melody composed by the proposed method, and the real melody). Using TF-IDF, the structure of the melody composed by the proposed method and the structure of the traditional melody were each compared with the structure of the real melody. The experimental results showed that the melody composed by the proposed method was more similar to the real melody structure than the traditional melody was, with a structural difference of only 8%.
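
The dual-discriminator idea can be sketched as below: an LSTM discriminator judges the coherence of the bar sequence while a CNN discriminator judges each bar's internal structure, and the generator's adversarial loss combines both. Piano-roll shapes, layer sizes, and the loss combination are assumptions for illustration only.

```python
# Illustrative sketch (PyTorch) of a dual-discriminator (LSTM + CNN) GAN loss.
import torch
import torch.nn as nn

n_bars, steps_per_bar, pitches = 4, 16, 128
fake_bars = torch.rand(8, n_bars, steps_per_bar * pitches)   # generator output (bar sequences)

class SeqDiscriminator(nn.Module):        # LSTM over the sequence of bars
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(steps_per_bar * pitches, 256, batch_first=True)
        self.out = nn.Linear(256, 1)
    def forward(self, bars):
        _, (h, _) = self.lstm(bars)
        return self.out(h[-1])

class BarDiscriminator(nn.Module):        # CNN over each bar's piano-roll
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Flatten(), nn.Linear(16 * (steps_per_bar // 2) * (pitches // 2), 1))
    def forward(self, bars):
        x = bars.reshape(-1, 1, steps_per_bar, pitches)   # one "image" per bar
        return self.net(x)

d_seq, d_bar, bce = SeqDiscriminator(), BarDiscriminator(), nn.BCEWithLogitsLoss()
s_seq, s_bar = d_seq(fake_bars), d_bar(fake_bars)
# generator adversarial loss combines both discriminators' judgements
loss_g = bce(s_seq, torch.ones_like(s_seq)) + bce(s_bar, torch.ones_like(s_bar))
```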


Sensors ◽  
2019 ◽  
Vol 19 (13) ◽  
pp. 2919 ◽  
Author(s):  
Wangyong He ◽  
Zhongzhao Xie ◽  
Yongbo Li ◽  
Xinmei Wang ◽  
Wendi Cai

Hand pose estimation is a critical technology of computer vision and human-computer interaction. Deep-learning methods require a considerable amount of labeled training data. This paper aims to generate depth hand images: given a ground-truth 3D hand pose, the developed method generates the corresponding depth hand image. Specifically, the ground truth is a 3D hand pose encoding the hand structure, while the synthesized image has the same size as the training images and a similar visual appearance to the training set. The developed method, inspired by progress in generative adversarial networks (GANs) and image-style transfer, models the latent statistical relationship between the ground-truth hand pose and the corresponding depth hand image. The images synthesized using the developed method are demonstrated to be effective for enhancing performance. On public hand pose datasets (NYU, MSRA, ICVL), comprehensive experiments show that the developed method outperforms existing works.
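
A pose-conditioned generator of this kind can be sketched as a small decoder that maps flattened joint coordinates to a depth map. The joint count, image size, and deconvolutional stack below are illustrative assumptions, not the paper's architecture.

```python
# Hedged sketch (PyTorch) of a pose-conditioned depth-image generator.
import torch
import torch.nn as nn

n_joints, img_size = 21, 64
pose = torch.randn(8, n_joints * 3)            # (x, y, z) per joint, flattened

generator = nn.Sequential(
    nn.Linear(n_joints * 3, 256 * 4 * 4), nn.ReLU(),
    nn.Unflatten(1, (256, 4, 4)),
    nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1), nn.ReLU(),   # 8x8
    nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),    # 16x16
    nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),     # 32x32
    nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Tanh())      # 64x64 depth map

depth = generator(pose)
print(depth.shape)   # torch.Size([8, 1, 64, 64])
```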


2019 ◽  
Vol 1 (2) ◽  
pp. 99-120 ◽  
Author(s):  
Tongtao Zhang ◽  
Heng Ji ◽  
Avirup Sil

We propose a new framework for entity and event extraction based on generative adversarial imitation learning, an inverse reinforcement learning method using a generative adversarial network (GAN). We assume that instances and labels vary in difficulty, so the gains and penalties (rewards) are expected to be diverse. We utilize discriminators to estimate proper rewards according to the difference between the labels committed by the ground truth (expert) and the extractor (agent). Our experiments demonstrate that the proposed framework outperforms state-of-the-art methods.
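
The core mechanism, using a discriminator as a reward estimator, can be sketched as follows. The encoding size and the reward transform are illustrative assumptions in the general style of generative adversarial imitation learning, not the paper's exact formulation.

```python
# Minimal sketch (PyTorch): a discriminator-derived reward for an extractor's
# labeling decisions, in the spirit of GAIL.
import torch
import torch.nn as nn

state_action = torch.randn(16, 300)     # encoded (instance, predicted-label) pairs
discriminator = nn.Sequential(nn.Linear(300, 128), nn.ReLU(), nn.Linear(128, 1))

# probability that the pair comes from the expert (ground-truth annotations)
p_expert = torch.sigmoid(discriminator(state_action))

# GAIL-style reward: higher when the agent's decision resembles the expert's
reward = -torch.log(1.0 - p_expert + 1e-8)
print(reward.shape)   # torch.Size([16, 1])
```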


2020 ◽  
Author(s):  
Mingwu Jin ◽  
Yang Pan ◽  
Shunrong Zhang ◽  
Yue Deng

Because of the limited coverage of receiver stations, current measurements of Total Electron Content (TEC) by ground-based GNSS receivers are incomplete, with large data gaps. The processing needed to obtain complete TEC maps for space science research is time consuming and requires the collaboration of five International GNSS Service (IGS) Ionosphere Associate Analysis Centers (IAACs), which use different data processing and gap-filling algorithms and consolidate their results into the final IGS completed TEC maps. In this work, we developed a Deep Convolutional Generative Adversarial Network and Poisson blending model (DCGAN-PB) to learn the IGS completion process for automatic completion of TEC maps. Using 10-fold cross-validation on 20 years of IGS TEC data, DCGAN-PB achieves an average root mean squared error (RMSE) of about 4 absolute TEC units (TECu) for high solar activity years and around 2 TECu for low solar activity years, roughly a 50% reduction in RMSE for recovered TEC values compared to two conventional single-image inpainting methods. The developed DCGAN-PB model can lead to an efficient automatic completion tool for TEC maps.
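
The map-completion idea can be sketched as masked inpainting: a generator fills the unobserved cells, and the output is blended with the observed values so real measurements are kept. The grid size, toy network, and the simple alpha blend (standing in for Poisson blending) are illustrative assumptions.

```python
# Illustrative sketch (PyTorch) of GAN-style gap filling for a TEC map.
import torch
import torch.nn as nn

tec = torch.rand(1, 1, 71, 73)                  # a TEC map on a lat/lon grid (size is illustrative)
mask = (torch.rand_like(tec) > 0.4).float()     # 1 = observed by GNSS receivers, 0 = gap

generator = nn.Sequential(                      # toy network completing the map
    nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1))

filled = generator(torch.cat([tec * mask, mask], dim=1))
completed = mask * tec + (1 - mask) * filled    # keep observations, fill only the gaps
print(completed.shape)                          # torch.Size([1, 1, 71, 73])
```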


2021 ◽  
Author(s):  
Ziyu Li ◽  
Qiyuan Tian ◽  
Chanon Ngamsombat ◽  
Samuel Cartmell ◽  
John Conklin ◽  
...  

Purpose: To improve the signal-to-noise ratio (SNR) of highly accelerated volumetric MRI while preserving realistic textures using a generative adversarial network (GAN).
Methods: A hybrid GAN for denoising entitled "HDnGAN", with a 3D generator and a 2D discriminator, was proposed to denoise 3D T2-weighted fluid-attenuated inversion recovery (FLAIR) images acquired in 2.75 minutes (R = 3×2) using wave-controlled aliasing in parallel imaging (Wave-CAIPI). HDnGAN was trained on data from 25 multiple sclerosis patients by minimizing a combined mean squared error and adversarial loss with adjustable weight λ. Results were evaluated on eight separate patients by comparison to standard T2-SPACE FLAIR images acquired in 7.25 minutes (R = 2×2), using mean absolute error (MAE), peak SNR (PSNR), the structural similarity index (SSIM), and VGG perceptual loss, and by two neuroradiologists using a five-point score for gray-white matter contrast, sharpness, SNR, lesion conspicuity, and overall quality.
Results: HDnGAN (λ = 0) produced the lowest MAE and the highest PSNR and SSIM. HDnGAN (λ = 10⁻³) produced the lowest VGG loss. In the reader study, HDnGAN (λ = 10⁻³) significantly improved the gray-white contrast and SNR of Wave-CAIPI images and outperformed BM4D and HDnGAN (λ = 0) in image sharpness. The overall quality score from HDnGAN (λ = 10⁻³) was significantly higher than those from Wave-CAIPI, BM4D, and HDnGAN (λ = 0), with no significant difference compared to the standard images.
Conclusion: HDnGAN concurrently benefits from the improved image synthesis performance of 3D convolution and from the increased number of samples available for training the 2D discriminator on limited data. HDnGAN generates images with high SNR and realistic textures, similar to those acquired with longer scan times and preferred by neuroradiologists.
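
The generator objective, fidelity plus an adversarial term weighted by λ, can be sketched as below. The tiny stand-in networks and the slice extraction are illustrative assumptions; only the combined MSE + λ·adversarial loss structure is taken from the abstract.

```python
# Minimal sketch (PyTorch) of a combined MSE + adversarial generator loss with
# a 3D generator and a 2D discriminator on slices.
import torch
import torch.nn as nn

lam = 1e-3                                         # adversarial weight (lambda)
noisy = torch.randn(2, 1, 16, 64, 64)              # fast, noisy 3D FLAIR volumes
clean = torch.randn(2, 1, 16, 64, 64)              # standard (target) volumes

gen_3d = nn.Conv3d(1, 1, 3, padding=1)             # stand-in 3D generator
disc_2d = nn.Sequential(nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
                        nn.Flatten(), nn.LazyLinear(1))   # stand-in 2D discriminator
mse, bce = nn.MSELoss(), nn.BCEWithLogitsLoss()

denoised = gen_3d(noisy)
slices = denoised.permute(0, 2, 1, 3, 4).reshape(-1, 1, 64, 64)  # feed slices to the 2D critic
d_fake = disc_2d(slices)

# generator loss: fidelity (MSE) + realism (adversarial), weighted by lambda
loss_g = mse(denoised, clean) + lam * bce(d_fake, torch.ones_like(d_fake))
```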


Author(s):  
Y. Xun ◽  
W. Q. Yu

Abstract. As one of the important sources of meteorological information, satellite nephograms play an increasingly important role in the detection and forecasting of disastrous weather. Timely predictions of cloud movement and transformation can enhance the practicability of satellite nephograms. Based on the generative adversarial network in unsupervised learning, we propose a prediction model for time-series nephograms, which constructs an accurate internal representation of cloud evolution and realizes nephogram prediction for the next several hours. We improve the traditional generative adversarial network by constructing the generator and discriminator with multi-scale convolution networks: after a scale transform, convolutions at different scales operate in parallel and their features are then merged. This structure addresses the long-term dependence problem of the traditional network while considering both global and detailed features. Then, according to the network structure and the practical application, we define a new loss function combined with the adversarial loss function to accelerate model convergence and sharpen predictions, further preserving their effectiveness. Our method needs no stacked mathematical calculations or manual operations, greatly enhancing feasibility and efficiency. The results show that this model can reasonably describe the basic characteristics and evolution trend of cloud clusters, and the predicted nephograms have very high similarity to the ground-truth nephograms.
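
A multi-scale convolution block of the kind described, parallel branches with different kernel sizes whose features are merged, can be sketched as follows. Branch widths and kernel sizes are illustrative assumptions.

```python
# Hedged sketch (PyTorch) of a multi-scale convolution block for frame features.
import torch
import torch.nn as nn

class MultiScaleBlock(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, out_ch, k, padding=k // 2) for k in (3, 5, 7)])
        self.merge = nn.Conv2d(3 * out_ch, out_ch, 1)   # fuse the parallel features
    def forward(self, x):
        feats = [torch.relu(b(x)) for b in self.branches]
        return self.merge(torch.cat(feats, dim=1))

frames = torch.randn(2, 4, 128, 128)      # 4 past nephogram frames as channels
block = MultiScaleBlock(4, 32)
print(block(frames).shape)                # torch.Size([2, 32, 128, 128])
```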


2021 ◽  
Author(s):  
Jiali Wang ◽  
Zhengchun Liu ◽  
Ian Foster ◽  
Won Chang ◽  
Rajkumar Kettimuthu ◽  
...  

Abstract. This study develops a neural network-based approach for emulating high-resolution modeled precipitation data with comparable statistical properties but at greatly reduced computational cost. The key idea is to use a combination of low- and high-resolution simulations to train a neural network to map from the former to the latter. Specifically, we define two types of CNNs, one that stacks variables directly and one that encodes each variable before stacking, and we train each CNN type both with a conventional loss function, such as mean squared error (MSE), and with a conditional generative adversarial network (CGAN), for a total of four CNN variants. We compare the four new CNN-derived high-resolution precipitation results with precipitation generated from the original high-resolution simulations, a bilinear interpolator, and the state-of-the-art CNN-based super-resolution (SR) technique. Results show that the SR technique produces results similar to those of the bilinear interpolator, with smoother spatial and temporal distributions and smaller data variability and extremes than the high-resolution simulations. While the new CNNs trained with MSE generate better results over some regions than the interpolator and SR technique do, their predictions are still not as close to the ground truth. The CNNs trained with the CGAN generate more realistic and physically reasonable results, better capturing not only data variability in time and space but also extremes such as intense and long-lasting storms. The proposed CNN-based downscaling approach can downscale precipitation from 50 km to 12 km in 14 minutes for 30 years once the network is trained (training takes 4 hours using 1 GPU), whereas conventional dynamical downscaling would take 1 month using 600 CPU cores to generate simulations at 12 km resolution over the contiguous United States.
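
The two input-handling strategies (stacking variables directly versus encoding each variable before stacking) can be sketched as below. The variable count, grid sizes, upsampling factor, and layer choices are illustrative assumptions, not the paper's configuration.

```python
# Illustrative sketch (PyTorch) of the two CNN input strategies for downscaling.
import torch
import torch.nn as nn

n_vars, lr_size = 3, 32                         # e.g. several low-resolution model variables
low_res = torch.randn(4, n_vars, lr_size, lr_size)

# (a) stack variables directly as channels, then upsample ~4x (50 km -> 12 km)
direct_cnn = nn.Sequential(
    nn.Conv2d(n_vars, 64, 3, padding=1), nn.ReLU(),
    nn.Upsample(scale_factor=4, mode="bilinear", align_corners=False),
    nn.Conv2d(64, 1, 3, padding=1))

# (b) encode each variable separately, then stack the encodings
per_var_encoders = nn.ModuleList(
    [nn.Conv2d(1, 16, 3, padding=1) for _ in range(n_vars)])
head = nn.Sequential(
    nn.Upsample(scale_factor=4, mode="bilinear", align_corners=False),
    nn.Conv2d(16 * n_vars, 1, 3, padding=1))

encoded = torch.cat([enc(low_res[:, i:i + 1]) for i, enc in enumerate(per_var_encoders)], dim=1)
print(direct_cnn(low_res).shape, head(encoded).shape)   # both torch.Size([4, 1, 128, 128])
```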


2020 ◽  
Vol 10 (23) ◽  
pp. 8725
Author(s):  
Ssu-Han Chen ◽  
Chih-Hsiang Kang ◽  
Der-Baau Perng

This research used deep learning methods to develop a set of algorithms to detect die particle defects. A generative adversarial network (GAN) generated natural and realistic images, which improved the ability of you only look once version 3 (YOLOv3) to detect die defects. Defects were then measured based on the bounding boxes predicted by YOLOv3, which potentially provides criteria for die quality sorting. The pseudo-defective images generated by the GAN from real defective images were used as part of the training image set. The results obtained after training with the combination of real and pseudo-defective images were 7.33% higher in testing average precision (AP) and more accurate by one decimal place in testing coordinate error than after training with the real images alone. The GAN enhances the diversity of defects, which somewhat improves the versatility of YOLOv3. In summary, the method combining the GAN and YOLOv3 employed in this study creates a feature-free algorithm that requires neither a massive collection of defective samples nor additional annotation of pseudo defects. The proposed method is feasible and advantageous for cases involving various kinds of die patterns.
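
One way to picture the augmentation scheme is a detector training list that mixes real defective images with GAN-generated pseudo-defective variants reusing the source images' bounding-box annotations. File names, the fake generation call, and the annotation format below are hypothetical placeholders.

```python
# Illustrative sketch (Python) of mixing real and GAN-generated pseudo-defect
# samples into one detection training set; everything here is a placeholder.
import random

real_samples = [("real_die_001.png", [(34, 50, 60, 82)]),     # (image, [x1, y1, x2, y2] boxes)
                ("real_die_002.png", [(12, 14, 40, 45)])]

def generate_pseudo(sample, n_variants=3):
    """Stand-in for the GAN call: each variant keeps the source image's defect boxes."""
    image, boxes = sample
    return [(f"gan_{image}_{k}.png", boxes) for k in range(n_variants)]

pseudo_samples = [s for real in real_samples for s in generate_pseudo(real)]
train_set = real_samples + pseudo_samples       # combined set fed to YOLOv3 training
random.shuffle(train_set)
print(len(train_set))                           # 2 real + 6 pseudo = 8
```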

