Generation of quantification maps and weighted images from synthetic magnetic resonance imaging using deep learning network

Author(s):  
Yawen Liu ◽  
Haijun Niu ◽  
Pengling Ren ◽  
Jialiang Ren ◽  
Xuan Wei ◽  
...  

Abstract: Objective: In synthetic MRI, quantification maps and weighted images are generated through complex fitting equations, which prolongs image generation. The objective of this study was to evaluate the feasibility of a deep learning method for fast reconstruction of synthetic MRI. Approach: A total of 44 healthy subjects were recruited and randomly divided into a training set (30 subjects) and a testing set (14 subjects). A multiple-dynamic, multiple-echo (MDME) sequence was used to acquire synthetic MRI images. Quantification maps (T1, T2, and proton density (PD) maps) and weighted images (T1W, T2W, and T2W FLAIR) were created with MAGiC software and used as the ground truth for the deep learning (DL) model. An improved multichannel U-Net was trained to generate the quantification maps and weighted images from the raw synthetic MRI data (8 module images). Quantitative evaluation was performed on the quantification maps; both quantitative metrics and qualitative evaluation were used for the weighted images. Nonparametric Wilcoxon signed-rank tests were used for statistical comparison. Main results: Quantitative evaluation showed small errors between the generated quantification maps and the reference maps. For the weighted images, no significant difference in overall image quality or SNR was identified between DL images and synthetic images. Notably, the DL T2W images achieved improved image contrast, and fewer artifacts were present on DL T2W FLAIR images than on the corresponding synthetic images. Significance: The DL algorithm provides a promising method for image generation in synthetic MRI, in which every step of the calculation can be optimized and accelerated, thereby simplifying the synthetic MRI workflow.
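The fitting-equation step this abstract refers to can be sketched with the textbook spin-echo signal model, S = PD · (1 − exp(−TR/T1)) · exp(−TE/T2). This is a simplified stand-in, not MAGiC's actual (proprietary) fitting; the toy tissue values below are invented for illustration:

```python
import math

def synth_signal(pd, t1, t2, tr, te):
    """Textbook spin-echo model: S = PD * (1 - exp(-TR/T1)) * exp(-TE/T2).
    T1, T2, TR, TE in milliseconds; PD in arbitrary units."""
    return pd * (1.0 - math.exp(-tr / t1)) * math.exp(-te / t2)

def synthesize_weighted(pd_map, t1_map, t2_map, tr, te):
    """Generate a weighted image voxel-by-voxel from quantitative maps."""
    return [[synth_signal(pd, t1, t2, tr, te)
             for pd, t1, t2 in zip(pr, t1r, t2r)]
            for pr, t1r, t2r in zip(pd_map, t1_map, t2_map)]

# Toy 1x2 maps: a grey-matter-like voxel vs a CSF-like voxel (values invented).
pd_map = [[0.8, 1.0]]
t1_map = [[1200.0, 4000.0]]
t2_map = [[90.0, 2000.0]]

t1w = synthesize_weighted(pd_map, t1_map, t2_map, tr=500.0, te=10.0)   # short TR/TE
t2w = synthesize_weighted(pd_map, t1_map, t2_map, tr=4000.0, te=100.0) # long TR/TE
```

With short TR/TE the short-T1 tissue comes out brighter than the fluid (T1 weighting); with long TR/TE the long-T2 fluid dominates (T2 weighting), which is the contrast behaviour the DL model learns to reproduce directly from the module images.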

2020 ◽  
pp. 135245852092136 ◽  
Author(s):  
Ivan Coronado ◽  
Refaat E Gabr ◽  
Ponnada A Narayana

Objective: The aim of this study was to assess the performance of deep learning convolutional neural networks (CNNs) in segmenting gadolinium-enhancing lesions using a large cohort of multiple sclerosis (MS) patients. Methods: A three-dimensional (3D) CNN model was trained for segmentation of gadolinium-enhancing lesions using multispectral magnetic resonance imaging (MRI) data from 1006 relapsing–remitting MS patients. Network performance was evaluated for three combinations of multispectral MRI inputs: (U5) fluid-attenuated inversion recovery (FLAIR), T2-weighted, proton density-weighted, and pre- and post-contrast T1-weighted images; (U2) pre- and post-contrast T1-weighted images; and (U1) only post-contrast T1-weighted images. Segmentation performance was evaluated using the Dice similarity coefficient (DSC) and lesion-wise true-positive (TPR) and false-positive (FPR) rates. Performance was also evaluated as a function of enhancing lesion volume. Results: The DSC/TPR/FPR values averaged over all enhancing lesion sizes were 0.77/0.90/0.23 with the U5 model. For the largest enhancement volumes (>500 mm³), these values were 0.81/0.97/0.04. For U2, the average DSC/TPR/FPR values were 0.72/0.86/0.31; comparable performance was observed with U1. For all input combinations, network performance degraded with decreasing enhancement size. Conclusion: Excellent segmentation of enhancing lesions was observed for enhancement volumes ≥70 mm³. The best performance was achieved when the input included all five multispectral image sets.
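The Dice similarity coefficient used above is a straightforward overlap measure between a predicted and a reference binary mask; a minimal sketch (toy masks, not study data):

```python
def dice_coefficient(pred, truth):
    """Dice similarity coefficient between two binary masks (flat 0/1 lists):
    DSC = 2 * |P intersect T| / (|P| + |T|). Both masks empty -> 1.0."""
    inter = sum(p & t for p, t in zip(pred, truth))
    size = sum(pred) + sum(truth)
    return 2.0 * inter / size if size else 1.0

# Toy 8-voxel masks: 2 of 3 lesion voxels detected, plus 1 false positive.
truth = [1, 1, 1, 0, 0, 0, 0, 0]
pred  = [1, 1, 0, 1, 0, 0, 0, 0]
dice_coefficient(pred, truth)  # -> 2*2 / (3+3) = 0.666...
```

Because the numerator counts only overlapping voxels, a few missed or spurious voxels cost proportionally more on small lesions, which is consistent with the degradation at small enhancement sizes reported above.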


Diagnostics ◽  
2020 ◽  
Vol 10 (9) ◽  
pp. 713 ◽  
Author(s):  
Roberta Marozzo ◽  
Valentina Pegoraro ◽  
Corrado Angelini

Becker muscular dystrophy (BMD) is an X-linked recessive disorder caused by dystrophin gene mutations. The phenotype and evolution of this muscle disorder are clinically extremely variable. In recent years, circulating biomarkers have acquired remarkable importance as noninvasive biological indicators of prognosis and for monitoring muscle disease progression, especially when combined with muscle MRI. We investigated the levels of circulating microRNAs (myo-miRNAs and inflammatory miRNAs) and of the proteins follistatin (FSTN) and myostatin (GDF-8) and compared the results with clinical and radiological imaging data. In eight BMD patients, including two cases with evolving lower-extremity weakness treated with deflazacort, we evaluated the expression levels of 4 myo-miRNAs (miR-1, miR-206, miR-133a, and miR-133b), 3 inflammatory miRNAs (miR-146b, miR-155, and miR-221), and the FSTN and GDF-8 proteins. In the two treated cases, muscle MRI showed pronounced posterior thigh and leg fibrofatty replacement, graded with the Mercuri score. The muscle-specific miR-206 was increased in all patients, and the inflammatory miR-221 and miR-146b were variably elevated. A significant difference in myostatin expression was observed between steroid-treated and untreated patients. This study suggests that microRNA and myostatin protein levels could be used to better understand the progression and management of the disease.


Author(s):  
Qinglin Meng ◽  
Mengqi Liu ◽  
Weiwei Deng ◽  
Ke Chen ◽  
Botao Wang ◽  
...  

Background: The calcium-suppressed (CaSupp) technique, applied to spectral-based images, has been used to observe bone marrow edema by removing calcium components from the image. Objective: This study aimed to evaluate the knee articular cartilage using the CaSupp technique in dual-layer detector computed tomography (DLCT). Methods: Twenty-eight healthy participants and two patients with osteoarthritis were enrolled and underwent DLCT and magnetic resonance imaging (MRI) examinations. CaSupp images were reconstructed from spectral-based images using a calcium suppression algorithm and were overlaid on conventional CT images for visual evaluation. The morphology of the knee cartilage was evaluated, and the thickness of the articular cartilage was measured on sagittal proton density–weighted and CaSupp images in the patellofemoral compartment. Results: No abnormal signal or density, cartilage defect, or subjacent bone ulceration was observed in the lateral and medial femorotibial compartments or the patellofemoral compartment on MRI or CaSupp images for the 48 normal knee joints. In the three knee joints with osteoarthritis, CaSupp images clearly identified cartilage thinning, defects, subjacent bone marrow edema, and edema of the infrapatellar fat pad, in the same way as MRI. A significant difference was found in the mean thickness of the patellar cartilage between MRI and CaSupp images, whereas the femoral cartilage showed no significant thickness difference between the two modalities over all 48 knee joints. Conclusion: The present study demonstrated that CaSupp images can effectively be used for visual and quantitative assessment of knee cartilage.


Endoscopy ◽  
2020 ◽  
Author(s):  
Alanna Ebigbo ◽  
Robert Mendel ◽  
Tobias Rückert ◽  
Laurin Schuster ◽  
Andreas Probst ◽  
...  

Background and aims: The accurate differentiation between T1a and T1b Barrett's cancer has both therapeutic and prognostic implications but is challenging even for experienced physicians. We trained an artificial intelligence (AI) system based on deep artificial neural networks (deep learning) to differentiate between T1a and T1b Barrett's cancer on white-light images. Methods: Endoscopic images from three tertiary care centres in Germany were collected retrospectively. A deep learning system was trained and tested using the principles of cross-validation. A total of 230 white-light endoscopic images (108 T1a and 122 T1b) were evaluated with the AI system. For comparison, the images were also classified by experts specialized in the endoscopic diagnosis and treatment of Barrett's cancer. Results: The sensitivity, specificity, F1 score, and accuracy of the AI system in differentiating between T1a and T1b cancer lesions were 0.77, 0.64, 0.73, and 0.71, respectively. There was no statistically significant difference between the performance of the AI system and that of the human experts, who achieved a sensitivity, specificity, F1 score, and accuracy of 0.63, 0.78, 0.67, and 0.70, respectively. Conclusion: This pilot study demonstrates the first multicenter application of an AI-based system for predicting submucosal invasion in endoscopic images of Barrett's cancer. The AI system scored on par with international experts in the field, but more work is necessary to improve the system and to apply it to video sequences and real-life settings. Nevertheless, the correct prediction of submucosal invasion in Barrett's cancer remains challenging for both experts and AI.
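The four metrics reported above derive from a 2×2 confusion matrix with T1b (submucosal invasion) as the positive class. A minimal sketch; the counts below are an illustrative reconstruction chosen to be consistent with the reported class sizes (108 T1a, 122 T1b) and approximate AI metrics, NOT the study's actual confusion matrix:

```python
def classification_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, F1 score, and accuracy from confusion-matrix
    counts, treating T1b (submucosal invasion) as the positive class."""
    sens = tp / (tp + fn)                      # recall on T1b
    spec = tn / (tn + fp)                      # recall on T1a
    prec = tp / (tp + fp)
    f1 = 2 * prec * sens / (prec + sens)       # harmonic mean of precision/recall
    acc = (tp + tn) / (tp + fp + tn + fn)
    return sens, spec, f1, acc

# Hypothetical counts: 94 of 122 T1b detected, 39 of 108 T1a misclassified.
sens, spec, f1, acc = classification_metrics(tp=94, fp=39, tn=69, fn=28)
```

Note that F1 ignores true negatives entirely, so with unbalanced specificity and sensitivity (as here for both AI and experts) it can rank the two readers differently than accuracy does.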


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Song-Quan Ong ◽  
Hamdan Ahmad ◽  
Gomesh Nair ◽  
Pradeep Isawasan ◽  
Abdul Hafiz Ab Majid

Abstract: The classification of Aedes aegypti (Linnaeus) and Aedes albopictus (Skuse) by humans remains challenging. We propose a highly accessible method to develop a deep learning (DL) model and implement it for mosquito image classification using hardware that could regulate the development process. In particular, we constructed a dataset of 4120 images of Aedes mosquitoes that were more than 12 days old, by which age their common distinguishing morphological features had disappeared, and we illustrate how to set up supervised deep convolutional neural networks (DCNNs) with hyperparameter adjustment. The model was first deployed externally, in real time, on three different generations of mosquitoes, and its accuracy was compared with human expert performance. Our results showed that both the learning rate and the number of epochs significantly affected accuracy, and the best-performing hyperparameters achieved an accuracy of more than 98% in classifying mosquitoes, showing no significant difference from human-level performance. We demonstrated the feasibility of constructing a model with a DCNN and deploying it externally on mosquitoes in real time.
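The hyperparameter adjustment over learning rate and epochs described above can be sketched as a plain grid search. The `toy_train_and_eval` scoring function below is a hypothetical stand-in for real DCNN training (invented shape: peaked near lr = 1e-3, favouring more epochs), used only so the search loop is runnable:

```python
import itertools
import math

def grid_search(train_and_eval, learning_rates, epoch_counts):
    """Evaluate every (learning_rate, epochs) pair with the supplied callable
    and return the best pair together with its validation accuracy."""
    best_params, best_acc = None, -1.0
    for lr, ep in itertools.product(learning_rates, epoch_counts):
        acc = train_and_eval(lr, ep)
        if acc > best_acc:
            best_params, best_acc = (lr, ep), acc
    return best_params, best_acc

def toy_train_and_eval(lr, epochs):
    """Hypothetical proxy for DCNN validation accuracy (NOT real training)."""
    lr_term = max(0.0, 1.0 - abs(math.log10(lr) + 3) * 0.2)
    return lr_term * min(1.0, epochs / 30)

params, acc = grid_search(toy_train_and_eval, [1e-2, 1e-3, 1e-4], [10, 30])
```

In practice each call to `train_and_eval` would train the DCNN from the same initialization and report held-out accuracy; the surrounding search logic is unchanged.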


2021 ◽  
Vol 202 ◽  
pp. 105958
Author(s):  
Antón Cid-Mejías ◽  
Raúl Alonso-Calvo ◽  
Helena Gavilán ◽  
José Crespo ◽  
Víctor Maojo

2021 ◽  
Vol 10 (1) ◽  
Author(s):  
Xinyang Li ◽  
Guoxun Zhang ◽  
Hui Qiao ◽  
Feng Bao ◽  
Yue Deng ◽  
...  

Abstract: The development of deep learning and open access to a substantial collection of imaging data together provide a potential solution for computational image transformation, which is gradually changing the landscape of optical imaging and biomedical research. However, current implementations of deep learning usually operate in a supervised manner, and their reliance on laborious and error-prone data annotation procedures remains a barrier to more general applicability. Here, we propose an unsupervised image transformation to facilitate the utilization of deep learning for optical microscopy, even in some cases in which supervised models cannot be applied. Through the introduction of a saliency constraint, the unsupervised model, named Unsupervised content-preserving Transformation for Optical Microscopy (UTOM), can learn the mapping between two image domains without requiring paired training data while avoiding distortions of the image content. UTOM shows promising performance in a wide range of biomedical image transformation tasks, including in silico histological staining, fluorescence image restoration, and virtual fluorescence labeling. Quantitative evaluations reveal that UTOM achieves stable and high-fidelity image transformations across different imaging conditions and modalities. We anticipate that our framework will encourage a paradigm shift in training neural networks and enable more applications of artificial intelligence in biomedical imaging.
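The saliency constraint can be conveyed with a conceptual sketch: binarize the source image and its generated counterpart into foreground masks and penalize disagreement, so a content-preserving transformation incurs no penalty. The actual UTOM constraint is a differentiable loss trained end-to-end inside the network; this non-differentiable toy version (invented thresholds and pixel values) only illustrates the idea:

```python
def saliency_mask(image, threshold):
    """Binarize a flat list of pixel intensities into a foreground mask."""
    return [1 if v > threshold else 0 for v in image]

def saliency_penalty(src, gen, thr_src, thr_gen):
    """Fraction of pixels whose foreground/background status differs between
    the source image and its generated counterpart; 0.0 means the spatial
    layout of the content is perfectly preserved."""
    m_src = saliency_mask(src, thr_src)
    m_gen = saliency_mask(gen, thr_gen)
    return sum(a != b for a, b in zip(m_src, m_gen)) / len(m_src)

src  = [0.9, 0.8, 0.1, 0.0]   # two foreground, two background pixels
good = [0.7, 0.6, 0.2, 0.1]   # restyled but content-preserving output
bad  = [0.1, 0.6, 0.9, 0.1]   # foreground has moved: content distorted

saliency_penalty(src, good, 0.5, 0.5)  # -> 0.0
saliency_penalty(src, bad, 0.5, 0.5)   # -> 0.5
```

Separate thresholds for the two domains matter because an unpaired style transfer may legitimately change the intensity distribution while keeping the content layout fixed.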


2021 ◽  
Vol 210 ◽  
pp. 106371
Author(s):  
Elisa Moya-Sáez ◽  
Óscar Peña-Nogales ◽  
Rodrigo de Luis-García ◽  
Carlos Alberola-López

Author(s):  
Pranoy Ghosh ◽  
Krithika M Pai ◽  
Manohara Pai M M ◽  
Ujjwal Verma ◽  
Frederic Rivet ◽  
...  
