Automated Generation of Cerebral Blood Flow Maps Using Deep Learning and Multiple Delay Arterial Spin-Labelled MRI

2021 ◽  
Author(s):  
Nicholas J Luciw ◽  
Zahra Shirzadi ◽  
Sandra E Black ◽  
Maged J Goubran ◽  
Bradley J MacIntosh

The purpose of this work was to develop and evaluate a deep learning approach for estimating cerebral blood flow (CBF) and arterial transit time (ATT) from multiple post-label delay (PLD) arterial spin-labelled (ASL) MRI. Six-PLD ASL MRI was acquired on a 1.5T or 3T system in 99 older males and females with and without cognitive impairment. We trained and compared two network architectures: a standard feed-forward convolutional neural network (CNN) and a U-Net. Mean absolute error (MAE) was evaluated between model estimates and ground truth obtained through conventional processing. The best-performing model was re-trained on inputs with missing PLDs to investigate generalizability to different PLD schedules. Relative to the CNN, the U-Net yielded lower MAE on training data. On test data, the U-Net MAE was 8.4±1.4 ml/100g/min for CBF and 0.22±0.09 s for ATT. Model uncertainty, estimated with Monte Carlo dropout, was associated with model error. Network estimates remained stable when tested on inputs with up to three missing PLD images. Mean processing times were: U-Net pipeline = 10.77 s; ground truth pipeline = 10 min 41 s. These results suggest that hemodynamic parameter estimation from 1.5T and 3T multi-PLD ASL MRI is feasible and fast with a deep learning image-generation approach.
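
As an illustration of the uncertainty estimation mentioned above, the sketch below shows how Monte Carlo dropout can be applied at inference time: dropout is kept active and repeated forward passes are averaged, with the per-voxel standard deviation serving as an uncertainty map. This is a minimal PyTorch example, not the authors' code; the toy TinyASLNet model, layer sizes, and dropout rate are assumptions standing in for the U-Net described in the abstract.

```python
# Minimal sketch (not the authors' code): Monte Carlo dropout at inference time,
# assuming a PyTorch model that maps multi-PLD ASL images to CBF/ATT maps.
import torch
import torch.nn as nn

class TinyASLNet(nn.Module):
    """Toy stand-in for the U-Net: 6 PLD channels in, 2 channels out (CBF, ATT)."""
    def __init__(self, n_plds: int = 6, p_drop: float = 0.2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(n_plds, 32, 3, padding=1), nn.ReLU(),
            nn.Dropout2d(p_drop),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Dropout2d(p_drop),
            nn.Conv2d(32, 2, 1),  # channel 0: CBF, channel 1: ATT
        )

    def forward(self, x):
        return self.net(x)

def mc_dropout_predict(model, x, n_samples: int = 20):
    """Keep dropout active at test time and average repeated forward passes."""
    model.train()  # enables dropout; in practice freeze batch-norm layers if present
    with torch.no_grad():
        samples = torch.stack([model(x) for _ in range(n_samples)])
    return samples.mean(dim=0), samples.std(dim=0)  # estimate and voxel-wise uncertainty

if __name__ == "__main__":
    model = TinyASLNet()
    asl = torch.randn(1, 6, 64, 64)           # one slice, six PLD images (toy data)
    mean_map, uncertainty = mc_dropout_predict(model, asl)
    print(mean_map.shape, uncertainty.shape)  # (1, 2, 64, 64) each
```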

eLife ◽  
2020 ◽  
Vol 9 ◽  
Author(s):  
Dennis Segebarth ◽  
Matthias Griebel ◽  
Nikolai Stein ◽  
Cora R von Collenberg ◽  
Corinna Martin ◽  
...  

Bioimage analysis of fluorescent labels is widely used in the life sciences. Recent advances in deep learning (DL) allow automating time-consuming manual image analysis processes based on annotated training data. However, manual annotation of fluorescent features with a low signal-to-noise ratio is somewhat subjective. Training DL models on such subjective annotations can be unstable and may yield biased models. In turn, these models may be unable to reliably detect biological effects. An analysis pipeline that integrates data annotation, ground truth estimation, and model training can mitigate this risk. To evaluate this integrated process, we compared different DL-based analysis approaches. With data from two model organisms (mice, zebrafish) and five laboratories, we show that ground truth estimation from multiple human annotators helps to establish objectivity in fluorescent feature annotations. Furthermore, ensembles of multiple models trained on the estimated ground truth establish reliability and validity. Our research provides guidelines for reproducible DL-based bioimage analyses.
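
A minimal sketch of the two ideas highlighted above: fusing annotations from several human annotators into an estimated ground truth, and fusing predictions from an ensemble of trained models. Simple pixel-wise majority voting is used here as a stand-in for the consensus method; the function names and thresholds are assumptions, not the published pipeline.

```python
# Minimal sketch (assumption, not the authors' pipeline): fuse binary annotations
# from several human annotators into an estimated ground truth by majority vote,
# then combine predictions from an ensemble of trained models the same way.
import numpy as np

def estimate_ground_truth(annotations: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """annotations: (n_annotators, H, W) binary masks -> (H, W) consensus mask."""
    return (annotations.mean(axis=0) >= threshold).astype(np.uint8)

def ensemble_prediction(probability_maps: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """probability_maps: (n_models, H, W) sigmoid outputs -> (H, W) binary prediction."""
    return (probability_maps.mean(axis=0) >= threshold).astype(np.uint8)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    annotators = (rng.random((5, 128, 128)) > 0.7).astype(np.uint8)  # toy annotations
    consensus = estimate_ground_truth(annotators)
    model_probs = rng.random((4, 128, 128))                          # toy model outputs
    fused = ensemble_prediction(model_probs)
    print(consensus.shape, fused.shape)
```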


2019 ◽  
Vol 38 (11) ◽  
pp. 872a1-872a9 ◽  
Author(s):  
Mauricio Araya-Polo ◽  
Stuart Farris ◽  
Manuel Florez

Exploration seismic data are heavily manipulated before human interpreters are able to extract meaningful information regarding subsurface structures. This manipulation adds modeling and human biases and is limited by methodological shortcomings. Alternatively, using seismic data directly is becoming possible thanks to deep learning (DL) techniques. A DL-based workflow is introduced that uses analog velocity models and realistic raw seismic waveforms as input and produces subsurface velocity models as output. When insufficient data are used for training, DL algorithms tend to overfit or fail. Gathering large amounts of labeled and standardized seismic data is not straightforward. This shortage of quality data is addressed by building a generative adversarial network (GAN) to augment the original training dataset, which is then used by DL-driven seismic tomography as input. The DL tomographic operator predicts velocity models with high statistical and structural accuracy after being trained with GAN-generated velocity models. Beyond the field of exploration geophysics, the use of machine learning in earth science is challenged by the lack of labeled data or properly interpreted ground truth, since we seldom know what truly exists beneath the earth's surface. The unsupervised approach (using GANs to generate labeled data) illustrates a way to mitigate this problem and opens geology, geophysics, and planetary sciences to more DL applications.
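
The sketch below illustrates the data-augmentation step in general terms: a trained GAN generator is sampled to produce additional velocity models, which would then be paired with forward-modelled waveforms before training the tomography network. The toy generator architecture, latent dimension, and grid size are assumptions for illustration, not the network used in the study.

```python
# Minimal sketch (assumption): once a GAN generator has been trained on velocity
# models, draw synthetic models from it to enlarge the supervised training set
# for the tomography network.
import torch
import torch.nn as nn

class VelocityGenerator(nn.Module):
    """Toy GAN generator: latent vector -> 2-D velocity model (e.g., 64 x 64 grid)."""
    def __init__(self, latent_dim: int = 100, grid: int = 64):
        super().__init__()
        self.grid = grid
        self.fc = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, grid * grid), nn.Sigmoid(),  # normalized velocities in [0, 1]
        )

    def forward(self, z):
        return self.fc(z).view(-1, 1, self.grid, self.grid)

def augment_training_set(generator, n_models: int, latent_dim: int = 100):
    """Sample synthetic velocity models from the generator."""
    with torch.no_grad():
        z = torch.randn(n_models, latent_dim)
        synthetic_models = generator(z)
    # In the workflow described above, each synthetic model would be passed through
    # a seismic forward-modelling code to produce the matching raw waveforms.
    return synthetic_models

if __name__ == "__main__":
    gen = VelocityGenerator()
    models = augment_training_set(gen, n_models=8)
    print(models.shape)  # (8, 1, 64, 64)
```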


2017 ◽  
Vol 37 (9) ◽  
pp. 3184-3192 ◽  
Author(s):  
Henri JMM Mutsaerts ◽  
Jan Petr ◽  
Lena Václavů ◽  
Jan W van Dalen ◽  
Andrew D Robertson ◽  
...  

Macro-vascular artifacts are a common arterial spin labeling (ASL) finding in populations with prolonged arterial transit time (ATT) and result in vascular regions with spuriously increased cerebral blood flow (CBF) and tissue regions with spuriously decreased CBF. This study investigates whether there is an association between the spatial signal distribution of a single post-label delay ASL CBF image and ATT. In 186 elderly participants with hypertension (46% male, 77.4 ± 2.5 years), we evaluated the association between the spatial coefficient of variation (CoV) of a CBF image and ATT. The spatial CoV and ATT metrics were subsequently evaluated with respect to their associations with age and sex – two demographics known to influence perfusion. Bland–Altman plots showed that spatial CoV predicted ATT with a maximum relative error of 7.6%. Spatial CoV was associated with age (β = 0.163, p = 0.028) and sex (β = −0.204, p = 0.004). The spatial distribution of the ASL signal on a standard CBF image can be used to infer between-participant ATT differences. In the absence of ATT mapping, the spatial CoV may be useful for the clinical interpretation of ASL in patients with cerebrovascular pathology that leads to prolonged transit of the ASL signal to tissue.
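
A minimal sketch of how a spatial coefficient of variation can be computed from a single CBF image, assuming it is simply the standard deviation divided by the mean of the voxel values inside a brain or gray-matter mask, expressed as a percentage; the exact masking and preprocessing used in the study may differ.

```python
# Minimal sketch (assumption): spatial CoV of a CBF image within a binary mask.
import numpy as np

def spatial_cov(cbf_image: np.ndarray, mask: np.ndarray) -> float:
    """Return the spatial CoV (%) of CBF values within a binary mask."""
    values = cbf_image[mask > 0]
    return 100.0 * values.std() / values.mean()

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    cbf = rng.normal(loc=50.0, scale=12.0, size=(64, 64, 40))  # toy CBF map, ml/100g/min
    gm_mask = rng.random((64, 64, 40)) > 0.5                   # toy gray-matter mask
    print(f"spatial CoV = {spatial_cov(cbf, gm_mask):.1f}%")
```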


2021 ◽  
Author(s):  
Nataliya Rokhmanova ◽  
Katherine J. Kuchenbecker ◽  
Peter B. Shull ◽  
Reed Ferber ◽  
Eni Halilaj

Knee osteoarthritis is a progressive disease mediated by high joint loads. Foot progression angle modifications that reduce the knee adduction moment (KAM), a surrogate of knee loading, have demonstrated efficacy in alleviating pain and improving function. Although changes to the foot progression angle are beneficial overall, KAM reductions are not consistent across patients. Moreover, customized interventions are time-consuming and require instrumentation not commonly available in the clinic. We present a model that uses minimal clinical data to predict the extent of first-peak KAM reduction after toe-in gait retraining. For such a model to generalize, the training data must be large and variable. Given the lack of large public datasets that contain different gaits for the same patient, we generated this dataset synthetically. Insights learned from ground-truth datasets with both baseline and toe-in gait trials (N=12) enabled the creation of a large (N=138) synthetic dataset for training the predictive model. On a test set of data collected by a separate research group (N=15), the first-peak KAM reduction was predicted with a mean absolute error of 0.134% body weight * height (%BW*HT). This error is smaller than the test set's subject-average standard deviation of the first-peak KAM during baseline walking (0.306 %BW*HT). This work demonstrates the feasibility of training predictive models with synthetic data and may provide clinicians with a streamlined pathway to identify a patient-specific gait retraining outcome without requiring gait lab instrumentation.
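
A minimal sketch of the train-on-synthetic, test-on-external idea described above, using scikit-learn: a regressor is fit on a synthetic feature table sized like the study's training set and evaluated with mean absolute error on a smaller external test set. The feature columns, regressor choice, and toy target are assumptions, not the authors' model.

```python
# Minimal sketch (assumption, not the authors' model): fit a regressor on synthetic
# training data and report mean absolute error on a held-out external test set.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(42)

# Hypothetical feature columns (e.g., anthropometrics and simple gait measures).
X_synthetic = rng.random((138, 6))                                  # N=138 synthetic gaits
y_synthetic = 0.5 * X_synthetic[:, 0] + rng.normal(0, 0.05, 138)    # toy KAM reduction (%BW*HT)

X_external = rng.random((15, 6))                                    # N=15 external subjects
y_external = 0.5 * X_external[:, 0] + rng.normal(0, 0.05, 15)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_synthetic, y_synthetic)

mae = mean_absolute_error(y_external, model.predict(X_external))
print(f"first-peak KAM reduction MAE: {mae:.3f} %BW*HT")
```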


2019 ◽  
Vol 82 (5) ◽  
pp. 496-502 ◽  
Author(s):  
Patrick Luckett ◽  
Robert H. Paul ◽  
Jaimie Navid ◽  
Sarah A. Cooley ◽  
Julie K. Wisch ◽  
...  

2020 ◽  
Vol 36 (12) ◽  
pp. 3863-3870
Author(s):  
Mischa Schwendy ◽  
Ronald E Unger ◽  
Sapun H Parekh

Motivation: The use of deep learning for quantitative image analysis is increasing exponentially. However, training accurate, widely deployable deep learning algorithms requires a plethora of annotated (ground truth) data. Image collections must contain not only thousands of images to provide sufficient example objects (i.e. cells), but also an adequate degree of image heterogeneity.
Results: We present a new dataset, EVICAN (Expert Visual Cell Annotation), comprising partially annotated grayscale images of 30 different cell lines from multiple microscopes, contrast mechanisms and magnifications that is readily usable as training data for computer vision applications. With 4600 images and ~26,000 segmented cells, our collection offers an unparalleled heterogeneous training dataset for cell biology deep learning application development.
Availability and implementation: The dataset is freely available (https://edmond.mpdl.mpg.de/imeji/collection/l45s16atmi6Aa4sI?q=). Using a Mask R-CNN implementation, we demonstrate automated segmentation of cells and nuclei from brightfield images with a mean average precision of 61.6% at a Jaccard index above 0.5.
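
A minimal sketch of running an instance-segmentation model of the kind mentioned above, assuming a recent torchvision (>= 0.13) Mask R-CNN configured for background plus the two EVICAN classes (cell, nucleus); in practice the model would be fine-tuned on the EVICAN training split before inference on brightfield images.

```python
# Minimal sketch (assumption): torchvision Mask R-CNN for cell/nucleus segmentation.
import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn

# num_classes = background + cell + nucleus; weights left untrained for this sketch.
model = maskrcnn_resnet50_fpn(weights=None, weights_backbone=None, num_classes=3)
model.eval()

# One toy grayscale image replicated to three channels, values in [0, 1].
image = torch.rand(1, 512, 512).repeat(3, 1, 1)

with torch.no_grad():
    predictions = model([image])              # list with one dict per input image

pred = predictions[0]
keep = pred["scores"] > 0.5                   # simple confidence filter
print("detections above 0.5:", int(keep.sum()))
print("mask tensor shape:", pred["masks"].shape)  # (n_detections, 1, H, W)
```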

