Reconstruction of cardiovascular black-blood T2-weighted image by deep learning algorithm: A comparison with intensity filter

2021 ◽  
Vol 10 (9) ◽  
pp. 205846012110447
Author(s):  
Ryo Ogawa ◽  
Tomoyuki Kido ◽  
Masashi Nakamura ◽  
Atsushi Nozaki ◽  
R Marc Lebel ◽  
...  

Background Deep learning–based methods have been used to denoise magnetic resonance images. Purpose The purpose of this study was to evaluate a deep learning reconstruction (DL Recon) of cardiovascular black-blood T2-weighted images and to compare it with intensity-filtered images. Material and Methods Forty-five DL Recon images were compared with the corresponding intensity-filtered and original images. For quantitative image analysis, the signal-to-noise ratio (SNR) of the septum, the contrast ratio (CR) of the septum to the lumen, and the sharpness of the endocardial border were calculated for each image. For qualitative image quality assessment, a 4-point subjective score was assigned to each image (1 = poor, 2 = fair, 3 = good, 4 = excellent). Results The SNR and CR were significantly higher in the DL Recon images than in the intensity-filtered and original images (p < .05 for each). Sharpness of the endocardial border was significantly higher in the DL Recon and intensity-filtered images than in the original images (p < .05 for each). Image quality of the DL Recon images was significantly better than that of the intensity-filtered and original images (p < .001 for each). Conclusions DL Recon reduced image noise while improving image contrast and sharpness in the cardiovascular black-blood T2-weighted sequence.
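The quantitative metrics above are standard region-of-interest (ROI) measurements. A minimal sketch of how they might be computed is given below; the exact ROI placement, the noise region, and the sharpness metric used by the authors are not specified in the abstract, so the definitions here (SNR as mean septal signal over background standard deviation, CR as a ratio of mean intensities, sharpness as the maximum gradient across the endocardial border) are assumptions.

```python
import numpy as np

def snr(septum_roi: np.ndarray, noise_roi: np.ndarray) -> float:
    """SNR: mean septal signal over the standard deviation of a
    background/noise region (assumed definition)."""
    return float(septum_roi.mean() / noise_roi.std())

def contrast_ratio(septum_roi: np.ndarray, lumen_roi: np.ndarray) -> float:
    """Contrast ratio of septum to ventricular lumen, taken here as a
    simple ratio of mean signal intensities (assumed definition)."""
    return float(septum_roi.mean() / lumen_roi.mean())

def edge_sharpness(border_profile: np.ndarray) -> float:
    """Sharpness of the endocardial border, estimated as the maximum
    intensity gradient along a 1D profile crossing the border (a common
    surrogate; the paper's exact metric may differ)."""
    return float(np.abs(np.gradient(border_profile.astype(float))).max())
```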

2019 ◽  
Vol 257 (8) ◽  
pp. 1641-1648 ◽  
Author(s):  
J. L. Lauermann ◽  
M. Treder ◽  
M. Alnawaiseh ◽  
C. R. Clemens ◽  
N. Eter ◽  
...  

2021 ◽  
Author(s):  
Ganesh M. Balasubramaniam ◽  
Netanel Biton ◽  
Shlomi Arnon

Abstract Reconstructing objects behind scattering media is a challenging problem with applications in biomedical imaging, non-destructive testing, computer-assisted surgery, and autonomous vehicular systems. The main challenge in such systems is the multiple scattering of photons in the angular and spatial domains, which results in a blurred image. Previous works have tried to improve reconstruction ability using deep learning algorithms, with some success. We enhance these methods by illuminating the setup with several modes of vortex beams, obtaining a series of time-gated images corresponding to each mode. The images are accurately reconstructed by a deep learning algorithm that analyzes the patterns captured by the camera. This study shows that using vortex beams instead of Gaussian beams improves the deep learning algorithm's image reconstruction, in terms of peak signal-to-noise ratio (PSNR), by ~2.5 dB for weakly scattering media and ~1 dB for strongly scattering media.
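For reference, PSNR is computed from the mean squared error (MSE) between the ground-truth and reconstructed images. The sketch below is a generic implementation, not the authors' code, and the trailing comment shows what a ~2.5 dB gain implies for the MSE.

```python
import numpy as np

def psnr(reference: np.ndarray, reconstruction: np.ndarray,
         data_range: float = 1.0) -> float:
    """Peak signal-to-noise ratio (dB) between a ground-truth image and
    its reconstruction, for images scaled to [0, data_range]."""
    mse = np.mean((reference.astype(float) - reconstruction.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(data_range ** 2 / mse)

# A ~2.5 dB PSNR gain corresponds to roughly a 44% reduction in MSE:
# 10 * log10(mse_gaussian / mse_vortex) = 2.5  =>  mse_vortex ≈ 0.56 * mse_gaussian.
```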


Sensors ◽  
2021 ◽  
Vol 21 (3) ◽  
pp. 964
Author(s):  
Shihong Wang ◽  
Jiayi Guo ◽  
Yueting Zhang ◽  
Yuxin Hu ◽  
Chibiao Ding ◽  
...  

Synthetic aperture radar tomography (TomoSAR) is an important 3D mapping method. Traditional TomoSAR requires a large number of observation orbits; however, this requirement is hard to meet. On the one hand, this is due to funding constraints; on the other hand, because the target scene changes over time and each observation orbit takes considerable time, only a few orbits can be acquired within a narrow time window. When the number of observation orbits is insufficient, the signal-to-noise ratio (SNR), peak-to-sidelobe ratio (PSR), and resolution of the 3D reconstruction deteriorate severely, which seriously limits the practical application of TomoSAR. To solve this problem, we propose using a deep learning network to improve the resolution and SNR of 3D reconstructions obtained from very few observation orbits by learning the prior distribution of targets. We use all available orbits to reconstruct a high-resolution target, and only very few (around three) orbits to reconstruct a low-resolution input. The low-resolution and high-resolution 3D voxel-grid pairs are used to train a 3D super-resolution (SR) convolutional neural network (CNN) model, much like an ordinary 2D image SR task. Experiments on the Civilian Vehicle Radar dataset show that the proposed deep learning algorithm effectively improves the reconstruction both qualitatively and quantitatively. In addition, the model shows good generalization to targets not included in the training set.
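The abstract describes training a 3D SR CNN on (low-resolution, high-resolution) voxel-grid pairs, analogous to 2D image super-resolution. The PyTorch sketch below illustrates that setup with a deliberately small, hypothetical residual network; the paper's actual architecture, layer counts, and hyperparameters are not given in the abstract.

```python
import torch
import torch.nn as nn

class VoxelSRNet(nn.Module):
    """Minimal 3D super-resolution CNN sketch (hypothetical architecture).
    Maps a low-resolution TomoSAR voxel grid, reconstructed from ~3 orbits,
    toward the high-resolution grid reconstructed from all available orbits."""

    def __init__(self, channels: int = 32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(1, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(channels, 1, kernel_size=3, padding=1),
        )

    def forward(self, x):
        # Residual learning: predict a correction to the low-resolution input.
        return x + self.body(x)

# Training-loop sketch on (low-res, high-res) voxel-grid pairs.
model = VoxelSRNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
low_res = torch.rand(4, 1, 32, 32, 32)   # placeholder input batch
high_res = torch.rand(4, 1, 32, 32, 32)  # placeholder target batch
for _ in range(10):
    optimizer.zero_grad()
    loss = loss_fn(model(low_res), high_res)
    loss.backward()
    optimizer.step()
```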


2021 ◽  
Vol 13 (9) ◽  
pp. 1779
Author(s):  
Xiaoyan Yin ◽  
Zhiqun Hu ◽  
Jiafeng Zheng ◽  
Boyong Li ◽  
Yuanyuan Zuo

Radar beam blockage is an important error source that affects the quality of weather radar data. An echo-filling network (EFnet) based on a deep learning algorithm is proposed to correct the echo intensity in the occluded area of the Nanjing S-band new-generation weather radar (CINRAD/SA). The training dataset is constructed from labels, which are the echo intensities at the 0.5° elevation in the unblocked area, and from input features, which are the intensities in a cube spanning multiple elevations and gates corresponding to the location of the bottom labels. Two loss functions are applied to compile the network: one is the common mean square error (MSE), and the other is a self-defined loss function that increases the weight of strong echoes. Considering that the radar beam broadens with distance and height, the 0.5° elevation scan is divided into six range bands of 25 km each to train separate models. The models are evaluated by three indicators: explained variance (EVar), mean absolute error (MAE), and correlation coefficient (CC). Two cases are demonstrated to compare the effect of the echo-filling model with the different loss functions. The results suggest that EFnet can effectively correct the echo reflectivity and improve data quality in the occluded area, with better results for strong echoes when the self-defined loss function is used.
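The phrasing "compile the network" suggests a Keras-style workflow, so the sketch below shows how a self-defined loss that up-weights strong echoes might look in tf.keras. The threshold and weight values, and the use of tf.where, are illustrative assumptions rather than the authors' implementation.

```python
import tensorflow as tf

def weighted_mse(strong_echo_threshold: float = 35.0,
                 strong_weight: float = 5.0):
    """Hypothetical self-defined loss: ordinary MSE, except that gates whose
    true reflectivity (dBZ) exceeds a threshold are weighted more heavily,
    so errors on strong echoes are penalized more. Threshold and weight
    values here are illustrative only."""
    def loss(y_true, y_pred):
        weights = tf.where(y_true > strong_echo_threshold, strong_weight, 1.0)
        return tf.reduce_mean(weights * tf.square(y_true - y_pred))
    return loss

# Usage sketch (the EFnet architecture itself is not shown in the abstract):
# model.compile(optimizer="adam", loss=weighted_mse())   # self-defined loss
# model.compile(optimizer="adam", loss="mse")            # common MSE baseline
```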

