Towards Accurate Detection of Axial Spondyloarthritis by Using Deep Learning to Capture Sacroiliac Joints on Plain Radiographs.
Abstract

Background: Well-informed decisions about how best to treat patients with axial spondyloarthritis (SpA) regularly include an evaluation of the sacroiliac joints (SIJ) on plain radiographs. However, grading radiographic findings correctly has proven to be a considerable challenge for expert readers as well as for state-of-the-art convolutional neural networks (CNNs). A method that reduces the image information to its clinically relevant core would undoubtedly lead to more accurate results. We therefore trained a CNN solely to detect SIJs on radiographs and evaluated its potential as a preprocessing pipeline in the automated classification of SpA.

Materials and Methods: We employed a CNN of the RetinaNet architecture, trained on a total of 423 plain radiographs of the SIJs. Images were taken from two completely independent datasets: training and tuning were performed on image data from the Patients With Axial Spondyloarthritis (PROOF) study, and testing was carried out on images from the German Spondyloarthritis Inception Cohort (GESPIC). Performance was evaluated by manual review and by standard object detection metrics from PASCAL VOC and Microsoft COCO.

Results: The CNN produced excellent results in detecting SIJs on both the tuning dataset (n = 106) and the holdout dataset (n = 140). Object detection metrics for the tuning data were AP = 0.996 and mAP = 0.538; values for the independent holdout data were AP = 0.981 and mAP = 0.515.

Conclusions: The developed CNN was highly accurate in detecting SIJs on radiographs. Such a model could increase the reliability of deep learning-based algorithms in detecting and grading SpA.
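Both reported metrics rest on the intersection over union (IoU) between a predicted bounding box and its ground-truth box: PASCAL VOC AP counts a detection as correct at IoU ≥ 0.5, while COCO mAP averages precision over IoU thresholds from 0.5 to 0.95. The sketch below shows the underlying IoU computation; the function name and the (x1, y1, x2, y2) box convention are illustrative choices, not taken from the study's code.

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    # Coordinates of the intersection rectangle.
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    # Clamp to zero: non-overlapping boxes have no intersection area.
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

Under the PASCAL criterion, a predicted SIJ box with IoU ≥ 0.5 against the annotated joint would count as a true positive; COCO's stricter averaged thresholds explain why the reported mAP values (≈0.5) sit well below the AP values (≈0.98).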