Synthesizing high‐resolution MRI using parallel cycle‐consistent generative adversarial networks for fast MR imaging

2021 ◽  
Author(s):  
Huiqiao Xie ◽  
Yang Lei ◽  
Tonghe Wang ◽  
Justin Roper ◽  
Anees H. Dhabaan ◽  
...  
2021 ◽  
Vol 12 (5) ◽  
pp. 439-448
Author(s):  
Edward Collier ◽  
Supratik Mukhopadhyay ◽  
Kate Duffy ◽  
Sangram Ganguly ◽  
Geri Madanguit ◽  
...  

2020 ◽  
Vol 2020 ◽  
pp. 1-10
Author(s):  
Linyan Li ◽  
Yu Sun ◽  
Fuyuan Hu ◽  
Tao Zhou ◽  
Xuefeng Xi ◽  
...  

In this paper, we propose an Attentional Concatenation Generative Adversarial Network (ACGAN) aimed at generating 1024 × 1024 high-resolution images. First, we propose a multilevel cascade structure for text-to-image synthesis. During training, we gradually add new layers and use the outputs and word vectors from the previous layer as inputs to the next layer, generating high-resolution images with photo-realistic details. Second, the deep attentional multimodal similarity model is introduced into the network: word vectors are matched with images in a common semantic space to compute a fine-grained matching loss for training the generator, so the network attends to fine-grained, word-level semantic information. Finally, a diversity measure is added to the discriminator, which provides the generator with more diverse gradient directions and improves the diversity of the generated samples. Experimental results show that the proposed model reaches inception scores of 4.48 on CUB and 4.16 on Oxford-102, improvements of 2.75% and 6.42% over the Attentional Generative Adversarial Network (AttnGAN). The ACGAN model performs better on text-to-image generation, producing images that are closer to real images.
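
The fine-grained matching loss described above can be pictured as a word-region attention score combined with a batch-wise contrastive objective. The following PyTorch sketch is only an illustration of that idea under assumed tensor shapes, temperature values, and function names; it is not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def matching_scores(word_emb, region_feat, gamma1=5.0, gamma2=5.0):
    """Caption-image matching scores for every (caption, image) pair in a batch.

    word_emb:    (B, T, D) word embeddings from the text encoder
    region_feat: (B, R, D) region features from the image encoder
    Both are expected to be L2-normalised along the last dimension.
    """
    B = word_emb.size(0)
    scores = word_emb.new_zeros(B, B)
    for i in range(B):                                                # caption i
        for j in range(B):                                            # image j
            sim = word_emb[i] @ region_feat[j].t()                    # (T, R) word-region similarity
            attn = F.softmax(gamma1 * sim, dim=-1)                    # attention over regions per word
            context = attn @ region_feat[j]                           # (T, D) attended region context
            rel = F.cosine_similarity(word_emb[i], context, dim=-1)   # (T,) word-context relevance
            scores[i, j] = torch.logsumexp(gamma2 * rel, dim=0) / gamma2
    return scores

def fine_grained_matching_loss(word_emb, region_feat):
    """Contrastive loss over the batch: matched caption/image pairs lie on the diagonal."""
    scores = matching_scores(F.normalize(word_emb, dim=-1),
                             F.normalize(region_feat, dim=-1))
    labels = torch.arange(scores.size(0), device=scores.device)
    return F.cross_entropy(scores, labels) + F.cross_entropy(scores.t(), labels)
```

During generator training, a loss of this form penalises generated images whose region features do not match the word embeddings of their captions, which is what drives the word-level attention described in the abstract.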


Sensors ◽  
2020 ◽  
Vol 20 (22) ◽  
pp. 6673
Author(s):  
Lichuan Zou ◽  
Hong Zhang ◽  
Chao Wang ◽  
Fan Wu ◽  
Feng Gu

In high-resolution Synthetic Aperture Radar (SAR) ship detection, the number of available SAR samples strongly affects the performance of deep-learning-based algorithms. Targeting high-resolution ship detection with small sample sets, this paper proposes a detection method that combines an improved sample generation network, the Multiscale Wasserstein Auxiliary Classifier Generative Adversarial Network (MW-ACGAN), with the Yolo v3 detection network. Firstly, the multi-scale Wasserstein distance and a gradient penalty loss are used to improve the original Auxiliary Classifier Generative Adversarial Network (ACGAN), so that the improved network can stably generate high-resolution SAR ship images. Secondly, a multi-scale loss term and corresponding multi-scale image output layers are added to the network, so that SAR ship images can be generated at multiple scales. Then, the original ship data set and the generated data are combined into a composite data set to train the Yolo v3 target detection network, addressing the low detection accuracy obtained with small data sets. Experimental results on Gaofen-3 (GF-3) 3 m SAR data show that the MW-ACGAN network can generate multi-scale, multi-class ship slices, and that a ResNet18 classifier assigns them higher confidence than slices generated by the original ACGAN, with an average score of 0.91. The detection results show that the Yolo v3 model trained on the composite data set reaches an accuracy of 94%, far better than the model trained only on the original SAR data set. These results show that the proposed method makes the best use of the original data set and improves ship detection accuracy.
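
The Wasserstein-with-gradient-penalty ingredient mentioned above can be illustrated with a short, generic WGAN-GP sketch in PyTorch. The critic interface and the lambda_gp weighting are assumptions for illustration, not the authors' code.

```python
import torch

def gradient_penalty(critic, real_imgs, fake_imgs):
    """WGAN-GP term: penalise the deviation of the critic's gradient norm from 1
    on random interpolations between real and generated samples."""
    batch_size = real_imgs.size(0)
    eps = torch.rand(batch_size, 1, 1, 1, device=real_imgs.device)
    interpolated = (eps * real_imgs + (1.0 - eps) * fake_imgs).requires_grad_(True)

    # the critic is assumed to return one scalar score per sample
    critic_out = critic(interpolated)
    grads = torch.autograd.grad(outputs=critic_out.sum(),
                                inputs=interpolated,
                                create_graph=True)[0]
    grad_norm = grads.reshape(batch_size, -1).norm(2, dim=1)
    return ((grad_norm - 1.0) ** 2).mean()

# Critic loss with the penalty (lambda_gp is an assumed hyperparameter):
#   d_loss = critic(fake_imgs).mean() - critic(real_imgs).mean() \
#            + lambda_gp * gradient_penalty(critic, real_imgs, fake_imgs)
```

In the multi-scale setting described above, a term of this kind would be evaluated at each image output scale and summed, with the ACGAN's auxiliary classification loss added on top.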


2011 ◽  
Vol 21 (12) ◽  
pp. 2575-2583 ◽  
Author(s):  
Chiara Gaudino ◽  
Raluca Cosgarea ◽  
Sabine Heiland ◽  
Réka Csernus ◽  
Bruno Beomonte Zobel ◽  
...  

Author(s):  
Alejandro Güemes ◽  
Carlos Sanmiguel Vila ◽  
Stefano Discetti

A data-driven approach to reconstruct high-resolution flow fields is presented. The method builds on recent advances in Super-Resolution Generative Adversarial Networks (SRGANs) to enhance the resolution of Particle Image Velocimetry (PIV). The proposed approach exploits the availability of incomplete projections onto high-resolution fields obtained from the same set of images processed by standard PIV. Such incomplete projections are provided by sparse particle-based measurements such as super-resolution particle tracking velocimetry. Consequently, in contrast to other works, the method does not need a dual set of low/high-resolution images and can be applied directly to a single set of raw images for both training and estimation. This data-enhanced particle approach is assessed on two datasets generated from direct numerical simulations: a fluidic pinball and a turbulent channel flow. The results show that this data-driven method is able to enhance the resolution of PIV measurements even in complex flows, without the need for a separate high-resolution experiment for training.
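
The "incomplete projection" used as network input can be thought of as sparse particle-tracking velocities binned onto the target high-resolution grid, with empty bins left untouched. The NumPy sketch below shows one plausible form of that pre-processing step; the grid size, domain, and function names are assumptions for illustration rather than the authors' pipeline.

```python
import numpy as np

def bin_sparse_velocities(x, y, u, v, nx=256, ny=256, domain=((0.0, 1.0), (0.0, 1.0))):
    """Average scattered particle samples (x, y, u, v) onto an ny-by-nx grid.

    Returns a (2, ny, nx) velocity field (zero where no particle fell) and a
    (ny, nx) boolean occupancy mask marking the bins that contain data."""
    (x0, x1), (y0, y1) = domain
    ix = np.clip(((x - x0) / (x1 - x0) * nx).astype(int), 0, nx - 1)
    iy = np.clip(((y - y0) / (y1 - y0) * ny).astype(int), 0, ny - 1)

    field = np.zeros((2, ny, nx))
    count = np.zeros((ny, nx))
    np.add.at(field[0], (iy, ix), u)   # accumulate streamwise velocity samples
    np.add.at(field[1], (iy, ix), v)   # accumulate wall-normal velocity samples
    np.add.at(count, (iy, ix), 1.0)    # samples per bin

    mask = count > 0
    field[:, mask] /= count[mask]      # bin-wise average where data exist
    return field, mask
```

The resulting gappy velocity field (and, optionally, the occupancy mask) would then be fed to the SRGAN-style generator, whose task is to fill in the missing high-resolution velocity information.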


2019 ◽  
Vol 21 (11) ◽  
pp. 2726-2737 ◽  
Author(s):  
Yong Guo ◽  
Qi Chen ◽  
Jian Chen ◽  
Qingyao Wu ◽  
Qinfeng Shi ◽  
...  
