Diagnosing osteoarthritis from T2 maps using deep learning: an analysis of the entire Osteoarthritis Initiative baseline cohort

2019 · Vol 27 (7) · pp. 1002-1010
Author(s):  
V. Pedoia ◽  
J. Lee ◽  
B. Norman ◽  
T.M. Link ◽  
S. Majumdar
2020 · Vol 79 (Suppl 1) · pp. 41.2-42
Author(s):  
C. F. Kuo ◽  
K. Zheng ◽  
S. Miao ◽  
L. Lu ◽  
C. I. Hsieh ◽  
...  

Background: Osteoarthritis is a degenerative disorder characterized by the radiographic features of asymmetric loss of joint space, subchondral sclerosis, and osteophyte formation. Conventional plain films are essential for detecting structural changes in osteoarthritis. Recent evidence suggests that fractal- and entropy-based bone texture parameters may improve the prediction of radiographic osteoarthritis.[1] In contrast to fixed texture features, deep learning models allow comprehensive extraction and recognition of texture features relevant to osteoarthritis.

Objectives: To assess the predictive value of deep learning-extracted bone texture features in the detection of radiographic osteoarthritis.

Methods: We used data from the Osteoarthritis Initiative, a longitudinal study in which 4,796 patients were followed up and assessed for osteoarthritis. A training set of 25,978 images from 3,086 patients was used to develop the texture model. The BoneFinder software[2] was used to segment the distal femur and proximal tibia. We used the Deep Texture Encoding Network (Deep-TEN)[3] to encode bone texture features into a vector, which was fed to a 5-way linear classifier assigning Kellgren and Lawrence (KL) grades for osteoarthritis classification. For comparison, we also developed an 18-layer Residual Network (ResNet18), which captures bone contours as well as texture. Spearman's correlation coefficient was used to assess the correlation between predicted and reference KL grades. We also tested the performance of the models in identifying osteoarthritis (KL grade ≥ 2).

Results: We obtained 6,490 knee radiographs from 446 female and 326 male patients who were not in the training set to validate the performance of the models. The distribution of KL grades in the training and testing sets is shown in Table 1. Spearman's correlation coefficient was 0.60 for the Deep-TEN model and 0.67 for the ResNet18 model. Table 2 shows the performance of the models in detecting osteoarthritis. The positive predictive value of Deep-TEN and ResNet18 classification for OA was 81.37% and 87.46%, respectively.

Table 1. Distribution of KL grades in the training and testing sets.

               KL 0            KL 1           KL 2           KL 3           KL 4        Total
Training set   10,893 (41.9%)  4,582 (18.7%)  6,114 (23.5%)  3,320 (12.8%)  799 (3.1%)  25,978
Testing set    2,472 (38.1%)   1,353 (20.8%)  1,696 (26.1%)  775 (11.9%)    194 (3.0%)  6,490

Table 2. Performance metrics of the Deep-TEN and ResNet18 models for detecting osteoarthritis.

                            Deep-TEN                          ResNet18
Sensitivity                 62.29% (95% CI, 60.42%–64.13%)    59.14% (95% CI, 57.24%–61.01%)
Specificity                 90.07% (95% CI, 89.07%–91.00%)    94.09% (95% CI, 93.30%–94.82%)
Positive predictive value   81.37% (95% CI, 79.81%–82.84%)    87.46% (95% CI, 85.96%–88.82%)
Negative predictive value   77.42% (95% CI, 77.64%–79.65%)    76.77% (95% CI, 75.93%–77.59%)

Conclusion: This study demonstrates that the bone texture model detects radiographic osteoarthritis reasonably well, with performance similar to that of the bone contour model.

References:
[1] Bertalan Z, Ljuhar R, Norman B, et al. Combining fractal- and entropy-based bone texture analysis for the prediction of osteoarthritis: data from the Multicenter Osteoarthritis Study (MOST). Osteoarthritis Cartilage 2018;26:S49.
[2] Lindner C, Wang CW, Huang CT, et al. Fully automatic system for accurate localisation and analysis of cephalometric landmarks in lateral cephalograms. Sci Rep 2016;6:33581.
[3] Zhang H, Xue J, Dana K. Deep TEN: Texture Encoding Network. IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2017:708-17.

Disclosure of Interests: None declared.
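As a minimal sketch of how the kind of metrics reported in Table 2 can be derived from a binary confusion matrix (osteoarthritis defined as KL grade ≥ 2), the snippet below computes sensitivity, specificity, PPV, and NPV with normal-approximation (Wald) 95% confidence intervals. The counts used are hypothetical round numbers for illustration, not the study's data, and the exact CI method used by the authors is not stated in the abstract.

```python
import math

def metric_with_ci(successes, total, z=1.96):
    """Proportion with a normal-approximation (Wald) 95% confidence interval."""
    p = successes / total
    half = z * math.sqrt(p * (1 - p) / total)
    return p, max(0.0, p - half), min(1.0, p + half)

def screening_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, PPV, and NPV for a binary OA classifier
    (positive = KL grade >= 2), each with its 95% CI."""
    return {
        "sensitivity": metric_with_ci(tp, tp + fn),  # true positives / all diseased
        "specificity": metric_with_ci(tn, tn + fp),  # true negatives / all healthy
        "ppv": metric_with_ci(tp, tp + fp),          # precision of a positive call
        "npv": metric_with_ci(tn, tn + fn),          # precision of a negative call
    }

# Hypothetical counts for illustration only (not from the abstract).
m = screening_metrics(tp=80, fp=20, tn=90, fn=10)
print({k: round(v[0], 4) for k, v in m.items()})
```

Note that a Wald interval is only one convention; Wilson or Clopper-Pearson intervals are common alternatives for proportions near 0 or 1.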


Radiology · 2020 · Vol 296 (3) · pp. 584-593
Author(s):  
Kevin Leung ◽  
Bofei Zhang ◽  
Jimin Tan ◽  
Yiqiu Shen ◽  
Krzysztof J. Geras ◽  
...  

2021 · Vol 29 · pp. S328-S329
Author(s):  
J.K. Schachinger ◽  
S. Maschek ◽  
A. Wisser ◽  
D. Fürst ◽  
A.S. Chaudhari ◽  
...  

Author(s):  
Alexander Tack ◽  
Alexey Shestakov ◽  
David Lüdke ◽  
Stefan Zachow

We present a novel and computationally efficient method for the detection of meniscal tears in Magnetic Resonance Imaging (MRI) data. Our method is based on a Convolutional Neural Network (CNN) that operates on complete 3D MRI scans. Our approach detects the presence of meniscal tears in three anatomical sub-regions (anterior horn, body, posterior horn) for both the Medial Meniscus (MM) and the Lateral Meniscus (LM) individually. For optimal performance of our method, we investigate how to preprocess the MRI data and how to train the CNN such that only relevant information within a Region of Interest (RoI) of the data volume is taken into account for meniscal tear detection. We propose meniscal tear detection combined with a bounding box regressor in a multi-task deep learning framework to let the CNN implicitly consider the corresponding RoIs of the menisci. We evaluate the accuracy of our CNN-based meniscal tear detection approach on 2,399 Double Echo Steady-State (DESS) MRI scans from the Osteoarthritis Initiative database. In addition, to show that our method is capable of generalizing to other MRI sequences, we also adapt our model to Intermediate-Weighted Turbo Spin-Echo (IW TSE) MRI scans. To judge the quality of our approaches, Receiver Operating Characteristic (ROC) curves and Area Under the Curve (AUC) values are evaluated for both MRI sequences. For the detection of tears in DESS MRI, our method reaches AUC values of 0.94, 0.93, 0.93 (anterior horn, body, posterior horn) in MM and 0.96, 0.94, 0.91 in LM. For the detection of tears in IW TSE MRI data, our method yields AUC values of 0.84, 0.88, 0.86 in MM and 0.95, 0.91, 0.90 in LM. In conclusion, the presented method achieves high accuracy for detecting meniscal tears in both DESS and IW TSE MRI data. Furthermore, our method can be easily trained and applied to other MRI sequences.
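The per-region AUC values reported above summarize how well the tear-probability scores separate torn from intact menisci. As a self-contained illustration (not the authors' evaluation code), the sketch below computes AUC via the Mann-Whitney formulation: the probability that a randomly chosen positive case scores higher than a randomly chosen negative one, with ties counted as one half. The labels and scores are toy values.

```python
def roc_auc(labels, scores):
    """AUC via the Mann-Whitney U statistic.

    Equivalent to the area under the ROC curve: the fraction of
    (positive, negative) pairs where the positive case scores higher,
    counting ties as 0.5.
    """
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))

# Toy tear probabilities for one meniscal sub-region (hypothetical values).
labels = [1, 1, 0, 0, 1, 0]
scores = [0.9, 0.8, 0.3, 0.4, 0.6, 0.6]
print(roc_auc(labels, scores))
```

The O(P·N) pairwise loop is fine for illustration; rank-based implementations (e.g. scikit-learn's `roc_auc_score`) are preferred at scale.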


2020 ◽  
Author(s):  
Edward J Peake ◽  
Raphael Chevasson ◽  
Stefan Pszczolkowski ◽  
Dorothee P Auer ◽  
Christoph Arthofer

Abstract

Purpose: To evaluate the performance of an ensemble learning approach for fully automated cartilage segmentation on knee magnetic resonance images of patients with osteoarthritis.

Materials and Methods: This retrospective study of 88 participants with knee osteoarthritis used three-dimensional (3D) double echo steady state (DESS) MR imaging volumes with manual segmentations for six different cartilage compartments (data available from the Osteoarthritis Initiative). We propose ensemble learning to boost the sensitivity of our deep learning method by combining predictions from two models: a U-Net for the segmentation of two labels (cartilage vs. background) and a multi-label U-Net for specific cartilage compartments. Segmentation accuracy is evaluated using the Dice coefficient, while volumetric measures and Bland-Altman plots provide complementary information when assessing segmentation results.

Results: Our model showed excellent accuracy for all six cartilage locations: femoral 0.88, medial tibial 0.84, lateral tibial 0.88, patellar 0.85, medial meniscal 0.85, and lateral meniscal 0.90. The average volume correlation was 0.988, overestimating volume by 9% ± 14% over all compartments. Simple post-processing creates a single 3D connected component per compartment, resulting in higher anatomical face validity.

Conclusion: Our model produces automated segmentations with high Dice coefficients when compared to expert manual annotations and recovers labels missing from the manual annotations, while also creating smoother, more realistic boundaries that avoid the slice-discontinuity artifacts present in the manual annotations.

Key Results:
- Combining a 2-label U-Net (cartilage vs. background) with a multi-class U-Net for cartilage compartment segmentation boosts the accuracy of our deep learning model, leading to the recovery of missing annotations in the manual dataset.
- Automatically generated segmentations have high Dice coefficients (0.85-0.90) and reduce the inter-slice discontinuity artifact caused by slice-wise delineation.
- Model refinement yields more anatomically plausible segmentations in which each cartilage label is composed of only a single 3D region of interest.
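The Dice coefficient used to evaluate the segmentations has a simple set formulation over voxel indices. The sketch below shows that formulation, plus one *possible* ensemble rule (a union of the two models' foreground predictions, which raises sensitivity); the abstract does not state the authors' exact combination rule, so the union is an assumption for illustration, and the toy "masks" are hypothetical.

```python
def dice(a, b):
    """Dice similarity coefficient between two voxel index sets:
    2|A ∩ B| / (|A| + |B|). Returns 1.0 when both masks are empty
    (a common convention)."""
    if not a and not b:
        return 1.0
    return 2 * len(a & b) / (len(a) + len(b))

def union_ensemble(binary_fg, multi_fg):
    """Hypothetical ensemble rule (an assumption, not the paper's exact
    method): a voxel counts as cartilage if either the 2-label model or
    the multi-label model predicts it, which can only raise sensitivity."""
    return binary_fg | multi_fg

# Toy voxel-index "masks" for one cartilage compartment (hypothetical).
manual = {1, 2, 3, 4, 5}
auto = {2, 3, 4, 5, 6}
print(dice(manual, auto))  # 2*4 / (5+5) = 0.8
```

In practice the masks would be 3D boolean arrays rather than sets, and the single-connected-component post-processing step would keep only the largest 3D component per compartment label.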


Author(s):  
Stellan Ohlsson