Deep Learning Based Imaging Data Completion for Improved Brain Disease Diagnosis

Author(s):  
Rongjian Li ◽  
Wenlu Zhang ◽  
Heung-Il Suk ◽  
Li Wang ◽  
Jiang Li ◽  
...  

IEEE Access ◽  
2021 ◽  
Vol 9 ◽  
pp. 37622-37655
Author(s):  
Protima Khan ◽  
Md. Fazlul Kader ◽  
S. M. Riazul Islam ◽  
Aisha B. Rahman ◽  
Md. Shahriar Kamal ◽  
...  

2021 ◽  
Vol 15 ◽  
Author(s):  
Po-Jui Lu ◽  
Muhamed Barakovic ◽  
Matthias Weigel ◽  
Reza Rahmanzadeh ◽  
Riccardo Galbusera ◽  
...  

Conventional magnetic resonance imaging (cMRI) in multiple sclerosis (MS) patients provides measures of focal brain damage and activity, which are fundamental for disease diagnosis, prognosis, and the evaluation of response to therapy. However, cMRI is insensitive to damage to the microenvironment of the brain tissue and to the heterogeneity of MS lesions. In contrast, the damaged tissue can be characterized by mathematical models applied to multi-shell diffusion imaging data, which measure water diffusion in different tissue compartments. In this work, we obtained 12 diffusion measures from eight diffusion models, and we applied a deep-learning attention-based convolutional neural network (CNN), GAMER-MRI, to select, by means of its attention weights, the measures that best discriminate between MS lesions and perilesional tissue. Furthermore, we provided clinical and biological validation of the chosen metrics, and of their most discriminative combinations, by correlating their respective mean values in MS patients with the corresponding Expanded Disability Status Scale (EDSS) and the serum level of neurofilament light chain (sNfL), which are measures of disability and neuroaxonal damage. Our results show that the neurite density index from neurite orientation dispersion and density imaging (NODDI), the measures of the intra-axonal and isotropic compartments from the microstructural Bayesian approach, and the measure of the intra-axonal compartment from spherical mean technique NODDI were the most discriminating (attention weights of 0.12, 0.12, 0.15, and 0.13, respectively). In addition, the combination of the neurite density index from NODDI and the measures of the intra-axonal and isotropic compartments from the microstructural Bayesian approach exhibited a stronger correlation with EDSS and sNfL than the individual measures. This work demonstrates that the proposed method could help select the microstructural measures that are most discriminative of focal tissue damage, and that these measures may be combined into a single contrast to achieve stronger correlations with clinical disability and neuroaxonal damage.
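For readers who want a concrete picture of how attention weights can rank input measures, the following is a minimal sketch in the spirit of the approach described above; it is not the authors' GAMER-MRI implementation, and the patch size, layer widths, shared per-measure encoder, and two-class head are illustrative assumptions.

```python
# Hypothetical sketch: attention-weighted pooling over per-measure features
# for lesion vs. perilesional-tissue classification (not the GAMER-MRI code).
import torch
import torch.nn as nn

class MeasureAttentionClassifier(nn.Module):
    def __init__(self, n_measures=12, feat_dim=16):
        super().__init__()
        # Small 3D encoder shared across measures (simplifying assumption).
        self.encoder = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
            nn.Linear(8, feat_dim), nn.ReLU(),
        )
        self.gate = nn.Linear(feat_dim, 1)        # one attention score per measure
        self.classifier = nn.Linear(feat_dim, 2)  # lesion vs. perilesional tissue

    def forward(self, x):
        # x: (batch, n_measures, D, H, W), one channel per diffusion measure
        feats = torch.stack(
            [self.encoder(x[:, i:i + 1]) for i in range(x.shape[1])], dim=1
        )                                                          # (batch, measures, feat_dim)
        attn = torch.softmax(self.gate(feats).squeeze(-1), dim=1)  # (batch, measures)
        pooled = (attn.unsqueeze(-1) * feats).sum(dim=1)           # attention-weighted pooling
        return self.classifier(pooled), attn

model = MeasureAttentionClassifier()
patches = torch.randn(4, 12, 16, 16, 16)       # 4 patches, 12 diffusion measures
logits, attention_weights = model(patches)
print(logits.shape, attention_weights.shape)   # torch.Size([4, 2]) torch.Size([4, 12])
```

Averaging `attention_weights` over a validation set would give one ranking score per measure, analogous to the per-measure weights reported above.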


Sensors ◽  
2020 ◽  
Vol 20 (18) ◽  
pp. 5097 ◽  
Author(s):  
Satya P. Singh ◽  
Lipo Wang ◽  
Sukrit Gupta ◽  
Haveesh Goli ◽  
Parasuraman Padmanabhan ◽  
...  

The rapid advancements in machine learning, graphics processing technologies, and the availability of medical imaging data have led to a rapid increase in the use of deep learning models in the medical domain. This growth was accelerated by advances in convolutional neural network (CNN) based architectures, which the medical imaging community adopted to assist clinicians in disease diagnosis. Since the grand success of AlexNet in 2012, CNNs have been increasingly used in medical image analysis to improve the efficiency of human clinicians. In recent years, three-dimensional (3D) CNNs have been employed for the analysis of medical images. In this paper, we trace the history of how the 3D CNN was developed from its machine learning roots, give a brief mathematical description of 3D CNNs, and describe the preprocessing steps required for medical images before they are fed to 3D CNNs. We review the significant research in the field of 3D medical image analysis using 3D CNNs (and their variants) in different medical areas such as classification, segmentation, detection, and localization. We conclude by discussing the challenges associated with the use of 3D CNNs in the medical imaging domain (and with deep learning models in general) and possible future trends in the field.
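As a point of reference for the architectures surveyed, the sketch below shows a minimal, generic 3D CNN classifier with z-score intensity normalization standing in for the preprocessing step; the input shape, layer sizes, and two-class head are assumptions rather than a model from any cited study.

```python
# Minimal, generic 3D CNN classifier for preprocessed medical volumes
# (illustrative only; not a model from any specific paper).
import torch
import torch.nn as nn

def preprocess(volume, eps=1e-8):
    """Simple z-score intensity normalization of a volume tensor."""
    return (volume - volume.mean()) / (volume.std() + eps)

class Simple3DCNN(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.AdaptiveAvgPool3d(1),
        )
        self.head = nn.Linear(32, n_classes)

    def forward(self, x):                      # x: (batch, 1, D, H, W)
        return self.head(self.features(x).flatten(1))

volumes = preprocess(torch.randn(2, 1, 64, 64, 64))   # two toy 64^3 volumes
print(Simple3DCNN()(volumes).shape)                    # torch.Size([2, 2])
```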


Author(s):  
Bo Ji ◽  
Wenlu Zhang ◽  
Rongjian Li ◽  
Hao Ji

Biomedical image analysis has become critically important to public health and welfare. However, analyzing biomedical images is time-consuming and labor-intensive and has long been performed manually by highly trained human experts. As a result, there has been increasing interest in applying machine learning to automate biomedical image analysis. Recent progress in deep learning research has catalyzed the development of machine learning methods that learn discriminative features from data with minimal human intervention. Many deep learning models have been designed and have achieved superior performance in various data analysis applications. This chapter starts with the basics of deep learning models and some practical strategies for handling biomedical imaging applications with limited data. After that, case studies of deep feature extraction for gene expression pattern image annotation, imaging data completion for brain disease diagnosis, and segmentation of infant brain tissue images are discussed to demonstrate the effectiveness of deep learning in biomedical image analysis.
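One of the limited-data strategies alluded to above, reusing a pretrained backbone as a fixed feature extractor and training only a small classification head, can be sketched as follows; the ResNet-18 backbone, two-class head, and hyperparameters are illustrative choices, not the chapter's specific setup.

```python
# Hypothetical transfer-learning sketch: freeze a backbone, train a new head.
import torch
import torch.nn as nn
from torchvision import models

# In practice the backbone would be loaded with pretrained (e.g. ImageNet)
# weights; it is left randomly initialized here to keep the sketch offline.
backbone = models.resnet18()
for p in backbone.parameters():
    p.requires_grad = False                              # freeze the feature extractor
backbone.fc = nn.Linear(backbone.fc.in_features, 2)      # new trainable head

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
images = torch.randn(4, 3, 224, 224)                     # toy image batch
labels = torch.tensor([0, 1, 0, 1])

loss = nn.CrossEntropyLoss()(backbone(images), labels)
loss.backward()                                          # gradients reach only the head
optimizer.step()
print(float(loss))
```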


2020 ◽  
Vol 15 ◽  
Author(s):  
Deeksha Saxena ◽  
Mohammed Haris Siddiqui ◽  
Rajnish Kumar

Background: Deep learning (DL) is an artificial neural network-driven framework with multiple levels of representation, in which non-linear modules are combined so that the representation is transformed from a lower level to a progressively more abstract one. Though DL is used widely in almost every field, it has brought a particular breakthrough in the biological sciences, where it is used in disease diagnosis and clinical trials. DL can be combined with classical machine learning, but the two are also used individually. DL is often a better choice than classical machine learning because it does not require an intermediate feature-extraction step and works well with larger datasets. DL is currently one of the most discussed approaches among scientists and researchers for diagnosing and solving various biological problems. However, deep learning models still need refinement and experimental validation to become more productive. Objective: To review the available DL models and datasets that are used in disease diagnosis. Methods: Available DL models and their applications in disease diagnosis were reviewed, discussed, and tabulated. Types of datasets and some of the popular disease-related data sources for DL were highlighted. Results: We analyzed the frequently used DL methods and data types and discussed some of the recent deep learning models used for solving different biological problems. Conclusion: The review presents useful insights into DL methods, data types, and the selection of DL models for disease diagnosis.


2021 ◽  
Vol 49 (1) ◽  
pp. 030006052098284
Author(s):  
Tingting Qiao ◽  
Simin Liu ◽  
Zhijun Cui ◽  
Xiaqing Yu ◽  
Haidong Cai ◽  
...  

Objective: To construct deep learning (DL) models to improve the accuracy and efficiency of thyroid disease diagnosis by thyroid scintigraphy. Methods: We constructed DL models with AlexNet, VGGNet, and ResNet. The models were trained separately with transfer learning. We measured each model’s performance with six indicators: recall, precision, negative predictive value (NPV), specificity, accuracy, and F1-score. We also compared the diagnostic performance of first- and third-year nuclear medicine (NM) residents with assistance from the best-performing DL-based model. The Kappa coefficient and average classification time of each model were compared with those of two NM residents. Results: The recall, precision, NPV, specificity, accuracy, and F1-score of the three models ranged from 73.33% to 97.00%. The Kappa coefficient of all three models was >0.710. All models performed better than the first-year NM resident but not as well as the third-year NM resident in terms of diagnostic ability. However, the ResNet model provided “diagnostic assistance” to the NM residents. The models provided results at speeds 400 to 600 times faster than the NM residents. Conclusion: DL-based models perform well in diagnostic assessment by thyroid scintigraphy. These models may serve as tools for NM residents in the diagnosis of Graves’ disease and subacute thyroiditis.
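The six indicators used in this study follow directly from a binary confusion matrix; the short sketch below computes them from made-up counts purely for illustration.

```python
# Illustrative computation of the six reported indicators from a binary
# confusion matrix (counts are invented, not the study's data).
def binary_metrics(tp, fp, tn, fn):
    recall = tp / (tp + fn)                     # sensitivity
    precision = tp / (tp + fp)                  # positive predictive value
    npv = tn / (tn + fn)                        # negative predictive value
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return {"recall": recall, "precision": precision, "NPV": npv,
            "specificity": specificity, "accuracy": accuracy, "F1": f1}

print(binary_metrics(tp=85, fp=10, tn=90, fn=15))
```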


Author(s):  
Jinyuan Dang ◽  
Hu Li ◽  
Kai Niu ◽  
Zhiyuan Xu ◽  
Jianhao Lin ◽  
...  

2021 ◽  
Vol 10 (1) ◽  
Author(s):  
Xinyang Li ◽  
Guoxun Zhang ◽  
Hui Qiao ◽  
Feng Bao ◽  
Yue Deng ◽  
...  

The development of deep learning and open access to a substantial collection of imaging data together provide a potential solution for computational image transformation, which is gradually changing the landscape of optical imaging and biomedical research. However, current implementations of deep learning usually operate in a supervised manner, and their reliance on laborious and error-prone data annotation procedures remains a barrier to more general applicability. Here, we propose an unsupervised image transformation to facilitate the utilization of deep learning for optical microscopy, even in some cases in which supervised models cannot be applied. Through the introduction of a saliency constraint, the unsupervised model, named Unsupervised content-preserving Transformation for Optical Microscopy (UTOM), can learn the mapping between two image domains without requiring paired training data while avoiding distortions of the image content. UTOM shows promising performance in a wide range of biomedical image transformation tasks, including in silico histological staining, fluorescence image restoration, and virtual fluorescence labeling. Quantitative evaluations reveal that UTOM achieves stable and high-fidelity image transformations across different imaging conditions and modalities. We anticipate that our framework will encourage a paradigm shift in training neural networks and enable more applications of artificial intelligence in biomedical imaging.
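The saliency constraint can be read as requiring the foreground mask of an input image to agree with that of its translated counterpart, which discourages content distortion in unpaired training; the sketch below illustrates that idea with a fixed soft threshold and an L1 penalty, both simplifying assumptions rather than the UTOM implementation.

```python
# Hypothetical saliency-constraint term for unpaired image translation
# (illustrative; not the UTOM code).
import torch

def saliency_mask(img, threshold=0.5, sharpness=50.0):
    # Soft foreground mask; a real pipeline might use an adaptive threshold.
    return torch.sigmoid((img - threshold) * sharpness)

def saliency_constraint_loss(source, translated):
    # Penalize disagreement between the two foreground masks.
    return torch.mean(torch.abs(saliency_mask(source) - saliency_mask(translated)))

x = torch.rand(1, 1, 128, 128)        # source-domain image in [0, 1]
g_x = x.clamp(0, 1)                   # stand-in for a generator output G(x)
print(float(saliency_constraint_loss(x, g_x)))   # ~0 when content is preserved
```

In an unpaired translation setup, such a term would typically be added to the adversarial and cycle-consistency losses with a weighting coefficient.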

