Deep learning based quantification of the accelerated brain aging rate in glioma patients after radiotherapy

Author(s):  
Selena I. Huisman ◽  
Arthur T.J. van der Boog ◽  
Fia Cialdella ◽  
Joost J.C. Verhoeff ◽  
Szabolcs David

Background and purpose. Changes in healthy-appearing brain tissue after radiotherapy have previously been observed; however, they remain difficult to quantify. Due to these changes, patients undergoing radiotherapy may have a higher risk of cognitive decline, leading to a reduced quality of life. The observed tissue atrophy is similar to the effects of normal aging in healthy individuals. We propose a new way to quantify tissue changes after cranial RT as accelerated brain aging using the BrainAGE framework. Materials and methods. BrainAGE was applied to longitudinal MRI scans of 32 glioma patients who underwent radiotherapy. Using a pre-trained deep learning model, brain age was estimated for each patient's pre-radiotherapy planning and follow-up MRI scans to quantify the changes occurring in the brain over time. Saliency maps were extracted from the model to spatially identify which areas of the brain the deep learning model weighs most heavily when predicting age. The predicted ages from the deep learning model were used in a linear mixed-effects model to quantify aging and aging rates in patients after radiotherapy. Results. The linear mixed-effects model yielded an accelerated aging rate of 2.78 years per year, a significant increase over the normal aging rate of 1 (p < 0.05, confidence interval (CI) = 2.54-3.02). Furthermore, the saliency maps highlighted numerous anatomically well-defined areas, e.g. Heschl's gyrus, that the model deemed important for brain age prediction. Conclusion. We found that patients undergoing radiotherapy are affected by significant radiation-induced accelerated aging, with several anatomically well-defined areas contributing to this aging. The estimated brain age could provide a method for quantifying quality of life post-radiotherapy.
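The reported aging rate can be recovered from predicted brain ages with a simple linear fit. Below is a minimal numpy sketch on invented data (patient count and the 2.78 rate are taken from the abstract; scan times, baselines, and noise level are assumptions), using pooled least squares after centring on each patient's baseline as a rough stand-in for the paper's linear mixed-effects model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: for each patient, time since the pre-RT planning
# scan (years) and the model-predicted brain age at each scan.
n_patients, n_scans = 32, 4
t = np.tile(np.linspace(0.0, 3.0, n_scans), n_patients)    # years post-RT
baseline = rng.uniform(40, 70, n_patients).repeat(n_scans)  # brain age at t=0
rate = 2.78                                                 # accelerated aging rate
brain_age = baseline + rate * t + rng.normal(0, 0.1, t.size)

# Pooled OLS on (brain age - baseline) vs. time recovers the aging rate;
# the paper fits a mixed-effects model with per-patient random effects,
# which this baseline-centring step only approximates.
slope = np.polyfit(t, brain_age - baseline, 1)[0]
print(round(slope, 2))
```

A slope near 2.78 years of brain age per calendar year reproduces the abstract's headline figure under these assumptions.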

2021 ◽  
Vol 12 (1) ◽  
Author(s):  
Shu-Hui Wang ◽  
Xin-Jun Han ◽  
Jing Du ◽  
Zhen-Chang Wang ◽  
Chunwang Yuan ◽  
...  

Abstract Background The imaging features of focal liver lesions (FLLs) are diverse and complex. Diagnosing FLLs with imaging alone remains challenging. We developed and validated an interpretable deep learning model for the classification of seven categories of FLLs on multisequence MRI and compared the differential diagnosis between the proposed model and radiologists. Methods In all, 557 lesions examined by multisequence MRI were utilised in this retrospective study and divided into training–validation (n = 444) and test (n = 113) datasets. The area under the receiver operating characteristic curve (AUC) was calculated to evaluate the performance of the model. The accuracy and confusion matrix of the model and individual radiologists were compared. Saliency maps were generated to highlight the activation region based on the model perspective. Results The AUC of the two- and seven-way classifications of the model were 0.969 (95% CI 0.944–0.994) and from 0.919 (95% CI 0.857–0.980) to 0.999 (95% CI 0.996–1.000), respectively. The model accuracy (79.6%) of the seven-way classification was higher than that of the radiology residents (66.4%, p = 0.035) and general radiologists (73.5%, p = 0.346) but lower than that of the academic radiologists (85.4%, p = 0.291). Confusion matrices showed the sources of diagnostic errors for the model and individual radiologists for each disease. Saliency maps detected the activation regions associated with each predicted class. Conclusion This interpretable deep learning model showed high diagnostic performance in the differentiation of FLLs on multisequence MRI. The analysis principle contributing to the predictions can be explained via saliency maps.
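The two evaluation tools named in this abstract, AUC and the confusion matrix, can be sketched in a few lines of numpy (the labels and scores below are toy values for illustration, not study data):

```python
import numpy as np

def auc(y_true, scores):
    """ROC AUC via the Mann-Whitney statistic: the probability that a
    randomly chosen positive is scored above a randomly chosen negative."""
    y_true, scores = np.asarray(y_true), np.asarray(scores)
    pos, neg = scores[y_true == 1], scores[y_true == 0]
    wins = (pos[:, None] > neg[None, :]).sum() \
         + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))

def confusion_matrix(y_true, y_pred, n_classes):
    """cm[i, j] counts lesions of true class i predicted as class j."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

print(auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))   # 0.75
cm = confusion_matrix([0, 1, 2, 2], [0, 2, 2, 2], n_classes=3)
print(cm)
```

Reading the rows of the confusion matrix per disease is what lets the authors localize the sources of diagnostic error for the model and for each radiologist.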


2020 ◽  
Vol 39 (10) ◽  
pp. 734-741
Author(s):  
Sébastien Guillon ◽  
Frédéric Joncour ◽  
Pierre-Emmanuel Barrallon ◽  
Laurent Castanié

We propose new metrics to measure the performance of a deep learning model applied to seismic interpretation tasks such as fault and horizon extraction. Faults and horizons are thin geologic boundaries (1 pixel thick on the image) for which a small prediction error could lead to inappropriately large variations in common metrics (precision, recall, and intersection over union). Through two examples, we show how classical metrics could fail to indicate the true quality of fault or horizon extraction. Measuring the accuracy of reconstruction of thin objects or boundaries requires introducing a tolerance distance between ground truth and prediction images to manage the uncertainties inherent in their delineation. We therefore adapt our metrics by introducing a tolerance function and illustrate their ability to manage uncertainties in seismic interpretation. We compare classical and new metrics through different examples and demonstrate the robustness of our metrics. Finally, we show on a 3D West African data set how our metrics are used to tune an optimal deep learning model.
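The failure mode described above, and the tolerance-based fix, can be illustrated with a small numpy sketch (a hypothetical simplification of the paper's tolerance function, applied to pixel coordinate sets rather than full seismic images):

```python
import numpy as np

def tolerant_precision_recall(gt, pred, tol):
    """Precision/recall for thin objects: a predicted pixel counts as
    correct if it lies within `tol` pixels of some ground-truth pixel,
    and a ground-truth pixel is recalled if a prediction lies within
    `tol` of it."""
    gt, pred = np.asarray(gt, float), np.asarray(pred, float)
    d = np.linalg.norm(gt[:, None, :] - pred[None, :, :], axis=-1)
    precision = (d.min(axis=0) <= tol).mean()  # predictions near some GT pixel
    recall = (d.min(axis=1) <= tol).mean()     # GT pixels near a prediction
    return precision, recall

# A horizon traced exactly 1 pixel below the ground truth: strict (tol=0)
# metrics collapse to zero, while a 1-pixel tolerance restores full credit.
gt = [(0, y) for y in range(10)]
pred = [(1, y) for y in range(10)]
print(tolerant_precision_recall(gt, pred, tol=0))   # (0.0, 0.0)
print(tolerant_precision_recall(gt, pred, tol=1))   # (1.0, 1.0)
```

The contrast between the two calls is the paper's core argument: for 1-pixel-thick boundaries, classical pixel-exact metrics penalize a geologically acceptable pick as a total miss.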


2021 ◽  
Vol 10 (1) ◽  
Author(s):  
Xu Zhao ◽  
Ke Liao ◽  
Wei Wang ◽  
Junmei Xu ◽  
Lingzhong Meng

Abstract Background Intraoperative physiological monitoring generates a large quantity of time-series data that might be associated with postoperative outcomes. Using a deep learning model based on intraoperative time-series monitoring data to predict postoperative quality of recovery has not been previously reported. Methods Perioperative data from female patients having laparoscopic hysterectomy were prospectively collected. Deep learning, logistic regression, support vector machine, and random forest models were trained using different datasets and evaluated by 5-fold cross-validation. The quality of recovery on postoperative day 1 was assessed using the Quality of Recovery-15 scale. The quality of recovery was dichotomized as satisfactory if the score was ≥122 and unsatisfactory if <122. Models’ discrimination was estimated using the area under the receiver operating characteristics curve (AUROC). Models’ calibration was visualized using the calibration plot and appraised by the Brier score. The SHapley Additive exPlanation (SHAP) approach was used to characterize different input features’ contributions. Results Data from 699 patients were used for modeling. When using preoperative data only, all four models exhibited poor performance (AUROC ranging from 0.65 to 0.68). The inclusion of the intraoperative intervention and/or monitoring data improved the performance of the deep learning, logistic regression, and random forest models but not the support vector machine model. The AUROC of the deep learning model based on the intraoperative monitoring data only was 0.77 (95% CI, 0.72–0.81), which was comparable to that based on the intraoperative intervention data only (AUROC, 0.79; 95% CI, 0.75–0.82) and to that based on the preoperative, intraoperative intervention, and monitoring data combined (AUROC, 0.81; 95% CI, 0.78–0.83).
In contrast, when using the intraoperative monitoring data only, the logistic regression model had an AUROC of 0.72 (95% CI, 0.68–0.77), and the random forest model had an AUROC of 0.74 (95% CI, 0.73–0.76). The Brier score of the deep learning model based on the intraoperative monitoring data was 0.177, which was lower than that of other models. Conclusions Deep learning based on intraoperative time-series monitoring data can predict post-hysterectomy quality of recovery. The use of intraoperative monitoring data for outcome prediction warrants further investigation. Trial registration This trial (Identifier: NCT03641625) was registered at ClinicalTrials.gov by the principal investigator, Lingzhong Meng, on August 22, 2018.
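The dichotomization at a QoR-15 score of 122 and the Brier score used for calibration are both one-liners; the sketch below uses hypothetical scores and predicted probabilities (only the 122 cut-off comes from the abstract):

```python
import numpy as np

def brier_score(y_true, p_pred):
    """Mean squared difference between the predicted probability of a
    satisfactory recovery and the observed 0/1 outcome (lower is better)."""
    y_true, p_pred = np.asarray(y_true, float), np.asarray(p_pred, float)
    return np.mean((p_pred - y_true) ** 2)

# QoR-15 scores dichotomized at the study's cut-off:
# satisfactory (1) if the score is >= 122, unsatisfactory (0) otherwise.
qor15 = np.array([130, 118, 125, 100])
y = (qor15 >= 122).astype(int)           # -> [1, 0, 1, 0]

p = np.array([0.9, 0.2, 0.8, 0.1])       # hypothetical model probabilities
print(brier_score(y, p))                 # 0.025
```

A lower Brier score, as reported for the deep learning model (0.177), means the predicted probabilities track the observed outcomes more closely.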


2021 ◽  
Vol 2021 ◽  
pp. 1-16
Author(s):  
Monika Sethi ◽  
Sachin Ahuja ◽  
Shalli Rani ◽  
Puneet Bawa ◽  
Atef Zaguia

Alzheimer’s disease (AD) is one of the most important causes of mortality in elderly people, and diagnosing the disease in its early stages with traditional manual procedures is often challenging. The successful application of machine learning (ML) techniques has demonstrated their effectiveness and reliability as one of the better options for an early diagnosis of AD. However, the heterogeneous dimensions and composition of the disease data have undoubtedly made diagnostics more difficult, necessitating an appropriate choice of model. Therefore, in this paper, four different 2D and 3D convolutional neural network (CNN) frameworks based on Bayesian search optimization are proposed to develop an optimized deep learning model for predicting the early onset of AD through binary and ternary classification of magnetic resonance imaging (MRI) scans. Hyperparameters such as the learning rate, optimizer, and number of hidden units are set and adjusted to boost the performance of the deep learning model. Bayesian optimization offers an advantage throughout the experiments: each probe of the hyperparameter space yields not only a result but also information about which configurations are most promising to try next. In this way, the series of experiments needed to explore the space can be substantially reduced. Finally, combining the Bayesian approach with long short-term memory (LSTM)-based augmentation found better model settings in fewer iterations, with relative improvements (RI) of 7.03%, 12.19%, 10.80%, and 11.99% over the four systems tuned by conventional manual hyperparameter selection, in which hyperparameters that look appealing from past experience are chosen by hand.
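The relative improvement (RI) figures quoted above have a simple definition; the sketch below shows the computation with one pair of hypothetical accuracies (the 0.82 baseline and 0.8777 optimized value are invented, chosen only to land near the first reported RI):

```python
def relative_improvement(optimized, baseline):
    """Relative improvement (%) of an optimized model's accuracy over a
    manually tuned baseline: 100 * (optimized - baseline) / baseline."""
    return 100.0 * (optimized - baseline) / baseline

# Hypothetical accuracies for one of the four CNN frameworks.
baseline, optimized = 0.82, 0.8777
print(round(relative_improvement(optimized, baseline), 2))  # 7.04
```

Applied to each of the four frameworks in turn, this is how the 7.03%, 12.19%, 10.80%, and 11.99% figures would be derived from the Bayesian-optimized and manually tuned accuracies.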


Food is one of the basic needs of human beings. With the population rising enormously, feeding it has become ever more important. However, plants are widely affected by various types of diseases; without proper care, these reduce the quality and quantity of food products and ultimately crop productivity. Early detection of plant disease is therefore essential, but manual monitoring of crops is hard for farmers: it takes considerable processing time, demands a huge amount of work, is expensive, and requires expert knowledge. Automatic detection of plant diseases helps farmers monitor large fields easily, and our approach of using convolutional neural networks provides a chance to discover diseases at a very early stage. Plant diseases can be detected automatically with classical image processing and machine learning models, but their accuracy is low, and early detection remains a major challenge. Building on modern advances in deep learning, in our project we implemented a convolutional neural network (CNN) comprising several layers that automatically detect and classify the diseases present in plants. High classification accuracy and fast processing are the main advantages of our approach. After training the model on color, grayscale, and segmented datasets, our deep learning model is capable of classifying a large number of different diseases; our project reports the name of the disease the plant has together with its confidence level and also provides remedies for the corresponding disease.
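Reporting "the name of the disease with its confidence level" typically means taking a softmax over the CNN's final-layer outputs; a minimal sketch (the class names and logit values below are hypothetical, not from any dataset mentioned above):

```python
import numpy as np

CLASSES = ["healthy", "early_blight", "late_blight"]  # hypothetical labels

def predict_disease(logits):
    """Softmax over the CNN's output logits gives a probability per
    disease class; report the top class and its confidence."""
    z = np.exp(logits - np.max(logits))   # shift for numerical stability
    probs = z / z.sum()
    i = int(np.argmax(probs))
    return CLASSES[i], float(probs[i])

name, conf = predict_disease(np.array([0.2, 2.9, 0.4]))
print(name, round(conf, 2))   # early_blight 0.87
```

The confidence is only as trustworthy as the model's calibration, which is one reason training on color, grayscale, and segmented variants of the data is worth comparing.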


Author(s):  
Ozal Yildirim ◽  
Ulas Baloglu ◽  
U Acharya

Sleep disorder is a symptom of many neurological diseases that may significantly affect the quality of daily life. Traditional methods are time-consuming and involve the manual scoring of polysomnogram (PSG) signals obtained in a laboratory environment. However, the automated monitoring of sleep stages can help detect neurological disorders accurately as well. In this study, a flexible deep learning model is proposed using raw PSG signals. A one-dimensional convolutional neural network (1D-CNN) is developed using electroencephalogram (EEG) and electrooculogram (EOG) signals for the classification of sleep stages. The performance of the system is evaluated using two public databases (sleep-edf and sleep-edfx). The developed model yielded the highest accuracies of 98.06%, 94.64%, 92.36%, 91.22%, and 91.00% for two to six sleep classes, respectively, using the sleep-edf database. Further, the proposed model obtained the highest accuracies of 97.62%, 94.34%, 92.33%, 90.98%, and 89.54%, respectively for the same two to six sleep classes using the sleep-edfx dataset. The developed deep learning model is ready for clinical usage, and can be tested with big PSG data.
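The basic building block of the 1D-CNN applied to raw EEG/EOG epochs is a one-dimensional convolution; a numpy sketch of that operation on a short synthetic signal (the filter and signal values are illustrative, not from the sleep-edf data):

```python
import numpy as np

def conv1d(signal, kernel, stride=1):
    """Valid-mode 1D convolution (cross-correlation, as in CNN layers):
    slide the kernel along the signal and take dot products."""
    k = len(kernel)
    n = (len(signal) - k) // stride + 1
    return np.array([np.dot(signal[i * stride : i * stride + k], kernel)
                     for i in range(n)])

# A 3-tap moving-average filter over a short synthetic EEG snippet.
eeg = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
out = conv1d(eeg, np.ones(3) / 3)
print(out)   # [1. 2. 3.]
```

A real 1D-CNN stacks many such filters with learned kernels, nonlinearities, and pooling, but each channel of each layer reduces to this operation.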


2021 ◽  
Author(s):  
Simon M. Hofmann ◽  
Frauke Beyer ◽  
Sebastian Lapuschkin ◽  
Markus Loeffler ◽ 
Klaus-Robert Mueller ◽  
...  

Brain-age (BA) estimates based on deep learning are increasingly used as a neuroimaging biomarker for brain health; however, the underlying neural features have remained unclear. We combined ensembles of convolutional neural networks with Layer-wise Relevance Propagation (LRP) to detect which brain features contribute to BA. Trained on magnetic resonance imaging (MRI) data of a population-based study (n=2637, 18-82 years), our models estimated age accurately based on single and multiple modalities, regionally restricted and whole-brain images (mean absolute errors 3.38-5.07 years). We find that BA estimates capture aging through both small- and large-scale changes, revealing gross enlargements of ventricles and subarachnoid spaces, as well as lesions, iron accumulations, and atrophies that appear throughout the brain. Divergence from expected aging reflected cardiovascular risk factors, and accelerated aging was more pronounced in the frontal lobe. Applying LRP, our study demonstrates how superior deep learning models detect brain-aging in healthy and at-risk individuals throughout adulthood.
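LRP redistributes a network's output relevance backwards, layer by layer, in proportion to each input's contribution. A toy numpy sketch of the epsilon rule for a single dense layer (an illustration of the principle, not the authors' full pipeline; the activations and weights are invented):

```python
import numpy as np

def lrp_linear(a, W, R_out, eps=1e-6):
    """LRP epsilon rule for one dense layer: distribute the output
    relevance R_out back to the inputs in proportion to each input's
    contribution a_j * w_jk to the pre-activation z_k."""
    z = a @ W                              # pre-activations, shape (k,)
    s = R_out / (z + eps * np.sign(z))     # stabilized relevance ratios
    return a * (W @ s)                     # input relevances, shape (j,)

a = np.array([1.0, 2.0])                   # input activations
W = np.array([[0.5, -1.0],
              [1.0,  1.5]])                # weights (inputs x outputs)
R = lrp_linear(a, W, R_out=np.array([1.0, 0.0]))
print(R, R.sum())                          # relevance is (nearly) conserved
```

The conservation property visible here (input relevances summing to the output relevance) is what makes the resulting voxel-wise heatmaps interpretable as a decomposition of the brain-age prediction.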


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Zamir Merali ◽  
Justin Z. Wang ◽  
Jetan H. Badhiwala ◽  
Christopher D. Witiw ◽  
Jefferson R. Wilson ◽  
...  

Abstract Magnetic Resonance Imaging (MRI) evidence of spinal cord compression plays a central role in the diagnosis of degenerative cervical myelopathy (DCM). There is growing recognition that deep learning models may assist in addressing the increasing volume of medical imaging data and provide initial interpretation of images gathered in a primary-care setting. We aimed to develop and validate a deep learning model for detection of cervical spinal cord compression in MRI scans. Patients undergoing surgery for DCM as a part of the AO Spine CSM-NA or CSM-I prospective cohort studies were included in our study. Patients were divided into a training/validation or holdout dataset. Images were labelled by two specialist physicians. We trained a deep convolutional neural network using images from the training/validation dataset and assessed model performance on the holdout dataset. The training/validation cohort included 201 patients with 6588 images and the holdout dataset included 88 patients with 2991 images. On the holdout dataset the deep learning model achieved an overall AUC of 0.94, sensitivity of 0.88, specificity of 0.89, and f1-score of 0.82. This model could improve the efficiency and objectivity of the interpretation of cervical spine MRI scans.
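The holdout metrics reported above all derive from a binary confusion matrix; a small sketch of the computation (the counts below are hypothetical, chosen so sensitivity and specificity land near the reported 0.88 and 0.89, and are not the study's actual image counts):

```python
def binary_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, and f1-score from confusion-matrix
    counts of a binary detector (here: compression vs. no compression)."""
    sensitivity = tp / (tp + fn)                 # recall on positives
    specificity = tn / (tn + fp)                 # recall on negatives
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return sensitivity, specificity, f1

sens, spec, f1 = binary_metrics(tp=88, fp=11, fn=12, tn=89)
print(round(sens, 2), round(spec, 2), round(f1, 2))
```

Note that the f1-score also depends on class balance through precision, which is why it can differ from sensitivity and specificity on the same holdout set.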


2018 ◽  
Vol 56 (9) ◽  
pp. 110-117 ◽  
Author(s):  
Manuel Lopez-Martin ◽  
Belen Carro ◽  
Jaime Lloret ◽  
Santiago Egea ◽  
Antonio Sanchez-Esguevillas
