Towards Accurate Detection of Axial Spondyloarthritis by Using Deep Learning to Capture Sacroiliac Joints on Plain Radiographs.

Author(s):  
Nils Friedrich Grauhan ◽  
Keno Kyrill Bressem ◽  
Yves Nicolas Manzoni ◽  
Lisa Christine Adams ◽  
Valeria Rios-Rodriguez ◽  
...  

Abstract
Background: Well-informed decisions about how best to treat patients with axial spondyloarthritis (SpA) regularly include an evaluation of the sacroiliac joints (SIJ) on plain radiographs. However, grading radiographic findings correctly has proven to be a considerable challenge for expert readers as well as for state-of-the-art convolutional neural networks (CNNs). A method that reduces the image information to its clinically relevant core could therefore yield more accurate results. We therefore trained a CNN solely to detect SIJs on radiographs and evaluated its potential as a preprocessing pipeline for the automated classification of SpA.
Materials and Methods: We employed a CNN of the RetinaNet architecture, trained on a total of 423 plain radiographs of the sacroiliac joints. Images were taken from two completely independent datasets: training and tuning were performed on image data from the Patients With Axial Spondyloarthritis (PROOF) study, and testing was executed using images from the German Spondyloarthritis Inception Cohort (GESPIC). Performance was evaluated by manual review and by standard object detection metrics from PASCAL VOC and Microsoft COCO.
Results: The CNN produced excellent results in detecting SIJs on both the tuning dataset (n = 106) and the holdout dataset (n = 140). Object detection metrics for the tuning data were AP = 0.996 and mAP = 0.538; values for the independent holdout data were AP = 0.981 and mAP = 0.515.
Conclusions: The developed CNN was highly accurate in detecting SIJs on radiographs. Such a model could increase the reliability of deep learning-based algorithms in detecting and grading SpA.
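The detect-then-crop preprocessing the abstract describes can be sketched in a few lines. The example below is a minimal, hypothetical illustration of using a RetinaNet detector (the architecture named above, here via torchvision) to localize the SIJ region and crop it for downstream grading; the checkpoint name, file paths, and two-class label set are assumptions for illustration, not the study's actual artifacts.

```python
# Hedged sketch: locate the sacroiliac-joint region with a RetinaNet detector
# and crop it as the "clinically relevant core" before further classification.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

model = torchvision.models.detection.retinanet_resnet50_fpn(
    weights=None, num_classes=2  # background + "SIJ" (assumed label set)
)
model.load_state_dict(torch.load("sij_retinanet.pt"))  # hypothetical checkpoint
model.eval()

image = Image.open("pelvis_radiograph.png").convert("RGB")  # hypothetical input
with torch.no_grad():
    pred = model([to_tensor(image)])[0]

# Keep the highest-confidence detection and crop it for downstream grading.
if len(pred["scores"]) > 0:
    best = pred["scores"].argmax()
    x1, y1, x2, y2 = pred["boxes"][best].round().int().tolist()
    image.crop((x1, y1, x2, y2)).save("sij_crop.png")
```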

Sensors ◽  
2021 ◽  
Vol 21 (8) ◽  
pp. 2611
Author(s):  
Andrew Shepley ◽  
Greg Falzon ◽  
Christopher Lawson ◽  
Paul Meek ◽  
Paul Kwan

Image data is one of the primary sources of ecological data used in biodiversity conservation and management worldwide. However, classifying and interpreting large numbers of images is time- and resource-intensive, particularly in the context of camera trapping. Deep learning models have been used to achieve this task but are often not suited to specific applications due to their inability to generalise to new environments and their inconsistent performance. Models need to be developed for specific species cohorts and environments, but the technical skills required to achieve this are a key barrier to the accessibility of this technology to ecologists. Thus, there is a strong need to democratise access to deep learning technologies by providing an easy-to-use software application that allows non-technical users to train custom object detectors. U-Infuse addresses this issue by providing ecologists with the ability to train customised models using publicly available images and/or their own images, without specific technical expertise. Auto-annotation and annotation-editing functionalities minimise the burden of manually annotating and pre-processing large numbers of images. U-Infuse is a free and open-source software solution that supports both multiclass and single-class training and object detection, allowing ecologists to access deep learning technologies usually available only to computer scientists, on their own device, customised for their application, without sharing intellectual property or sensitive data. It provides ecological practitioners with the ability to (i) easily achieve object detection within a user-friendly GUI, generating a species distribution report and other useful statistics, (ii) custom-train deep learning models using publicly available and custom training data, and (iii) achieve supervised auto-annotation of images for further training, with the benefit of editing annotations to ensure quality datasets. Broad adoption of U-Infuse by ecological practitioners will improve ecological image analysis and processing by allowing significantly more image data to be processed with minimal expenditure of time and resources, particularly for camera trap images. Ease of training and the use of transfer learning mean that domain-specific models can be trained rapidly and frequently updated without the need for computer science expertise or data sharing, protecting intellectual property and privacy.
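The supervised auto-annotation workflow described above can be approximated with generic tools. The sketch below is not U-Infuse's internals: it simply runs a pretrained detector over unlabelled camera-trap images and writes candidate boxes to a CSV for a human to edit, which is the general idea behind auto-annotation. The folder name, score threshold, and CSV layout are illustrative assumptions.

```python
# Hedged sketch of auto-annotation: propose boxes with a pretrained detector,
# export them for human review/editing rather than treating them as ground truth.
import csv, glob
import torch, torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

detector = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
detector.eval()

with open("candidate_annotations.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["image", "x1", "y1", "x2", "y2", "label", "score"])
    for path in glob.glob("camera_trap_images/*.jpg"):  # hypothetical folder
        img = to_tensor(Image.open(path).convert("RGB"))
        with torch.no_grad():
            pred = detector([img])[0]
        for box, label, score in zip(pred["boxes"], pred["labels"], pred["scores"]):
            if score >= 0.5:  # assumed confidence cut-off for candidate boxes
                writer.writerow([path, *box.round().int().tolist(),
                                 int(label), float(score)])
```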


Diagnostics ◽  
2021 ◽  
Vol 11 (7) ◽  
pp. 1156
Author(s):  
Kang Hee Lee ◽  
Sang Tae Choi ◽  
Guen Young Lee ◽  
You Jung Ha ◽  
Sang-Il Choi

Axial spondyloarthritis (axSpA) is a chronic inflammatory disease of the sacroiliac joints. In this study, we develop a method for detecting bone marrow edema in magnetic resonance (MR) images of the sacroiliac joints using a deep-learning network. A total of 815 MR images of the sacroiliac joints were obtained from 60 patients diagnosed with axSpA and 19 healthy subjects. Gadolinium-enhanced, fat-suppressed T1-weighted oblique coronal images were used for deep learning. Active sacroiliitis was defined as bone marrow edema, and the following steps were performed: setting the region of interest (ROI) and normalizing it to a size suitable for input to a deep-learning network; determining bone marrow edema with a convolutional-neural-network-based deep-learning network for individual MR images; and determining sacroiliitis at the subject level based on the classification results of the individual MR images. About 70% of the patients and healthy subjects were randomly selected for the training dataset, and the remaining 30% formed the test dataset. This process was repeated five times to calculate the average classification rate over the five folds. The gradient-weighted class activation mapping (Grad-CAM) method was used to validate the classification results. In the performance analysis of the ResNet18-based classification network for individual MR images, use of the ROI showed excellent detection performance for bone marrow edema, with 93.55 ± 2.19% accuracy, 92.87 ± 1.27% recall, and 94.69 ± 3.03% precision. The overall performance was further improved by applying a median filter to incorporate context information across adjacent images. Finally, active sacroiliitis was diagnosed in individual subjects with 96.06 ± 2.83% accuracy, 100% recall, and 94.84 ± 3.73% precision. This pilot study of diagnosing bone marrow edema by deep learning on MR images suggests that MR analysis using deep learning can be a useful complementary tool for clinicians in diagnosing bone marrow edema.
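The two-stage decision rule the abstract outlines (slice-level classification, then median filtering across slices before the subject-level call) is easy to sketch. Below is a minimal, assumed implementation: the checkpoint name, decision threshold, filter width, and the subject-level rule are illustrative, not the study's exact parameters.

```python
# Hedged sketch: ResNet18 classifies each MR slice's ROI, a median filter
# smooths the per-slice scores across neighbouring slices (context), and the
# subject is called "active sacroiliitis" if enough smoothed slices are positive.
import torch
import torch.nn as nn
import torchvision
from scipy.ndimage import median_filter

model = torchvision.models.resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, 2)  # edema vs. normal
model.load_state_dict(torch.load("bme_resnet18.pt"))  # hypothetical checkpoint
model.eval()

def subject_is_active(roi_batch: torch.Tensor, min_positive_slices: int = 1) -> bool:
    """roi_batch: (n_slices, 3, H, W) normalized ROI crops for one subject."""
    with torch.no_grad():
        probs = model(roi_batch).softmax(dim=1)[:, 1].numpy()
    smoothed = median_filter(probs, size=3)  # context across adjacent slices
    return int((smoothed > 0.5).sum()) >= min_positive_slices
```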


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Song-Quan Ong ◽  
Hamdan Ahmad ◽  
Gomesh Nair ◽  
Pradeep Isawasan ◽  
Abdul Hafiz Ab Majid

Abstract
Classification of Aedes aegypti (Linnaeus) and Aedes albopictus (Skuse) by humans remains challenging. We proposed a highly accessible method to develop a deep learning (DL) model and implement the model for mosquito image classification by using hardware that could regulate the development process. In particular, we constructed a dataset of 4120 images of Aedes mosquitoes that were older than 12 days, an age at which the common distinguishing morphological features have disappeared, and we illustrated how to set up supervised deep convolutional neural networks (DCNNs) with hyperparameter adjustment. The model was then deployed externally, in real time, on three different generations of mosquitoes, and its accuracy was compared with human expert performance. Our results showed that both the learning rate and the number of epochs significantly affected the accuracy, and the best-performing hyperparameters achieved an accuracy of more than 98% at classifying mosquitoes, which showed no significant difference from human-level performance. We demonstrated the feasibility of constructing a model with a DCNN and deploying it externally on mosquitoes in real time.
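Since the abstract singles out learning rate and epoch count as the decisive hyperparameters, a minimal grid search over those two knobs illustrates the adjustment step. The backbone, grid values, and data loaders below are assumptions, not the study's exact setup.

```python
# Hedged sketch: fine-tune a pretrained CNN for two-class mosquito images while
# sweeping learning rate and epochs, keeping the best validation accuracy.
import itertools
import torch, torchvision
import torch.nn as nn

def train_and_score(lr, epochs, train_loader, val_loader, device="cpu"):
    model = torchvision.models.resnet50(weights="DEFAULT")
    model.fc = nn.Linear(model.fc.in_features, 2)  # Ae. aegypti vs. Ae. albopictus
    model.to(device)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        model.train()
        for x, y in train_loader:
            opt.zero_grad()
            loss_fn(model(x.to(device)), y.to(device)).backward()
            opt.step()
    model.eval()
    correct = total = 0
    with torch.no_grad():
        for x, y in val_loader:
            correct += (model(x.to(device)).argmax(1) == y.to(device)).sum().item()
            total += y.numel()
    return correct / total

def grid_search(train_loader, val_loader):
    # Assumed grid; the paper reports these two hyperparameters mattered most.
    best = None
    for lr, epochs in itertools.product([1e-2, 1e-3, 1e-4], [10, 20, 40]):
        acc = train_and_score(lr, epochs, train_loader, val_loader)
        if best is None or acc > best[0]:
            best = (acc, lr, epochs)
    return best  # (accuracy, learning rate, epochs)
```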


2006 ◽  
Vol 14 (7S_Part_19) ◽  
pp. P1067-P1068
Author(s):  
Pradeep Anand Ravindranath ◽  
Rema Raman ◽  
Tiffany W. Chow ◽  
Michael S. Rafii ◽  
Paul S. Aisen ◽  
...  

2020 ◽  
Author(s):  
Na Yao ◽  
Fuchuan Ni ◽  
Ziyan Wang ◽  
Jun Luo ◽  
Wing-Kin Sung ◽  
...  

Abstract
Background: Peach diseases can cause severe yield reduction and decreased quality in peach production, so rapid and accurate detection and identification of peach diseases is of great importance. Deep learning has been applied to detect peach diseases from imaging data. However, peach disease image data are difficult to collect and the samples are imbalanced, and popular deep networks perform poorly under these conditions.
Results: This paper proposes an improved Xception network, named L2MXception, which integrates a regularization term built from the L2-norm and the mean. On the peach disease image dataset collected, results from seven mainstream deep learning models were compared in detail, and an improved loss function integrating the L2-norm and mean regularization terms (L2M Loss) was evaluated. Experiments showed that the Xception model with L2M Loss outperformed the current best method for peach disease prediction. Compared with the original Xception model, the validation accuracy of L2MXception reached 93.85%, an increase of 28.48%.
Conclusions: The proposed L2MXception network may have great potential for early identification of peach diseases.
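The abstract names the ingredients of L2M Loss (cross-entropy plus L2-norm and mean regularization) but not the exact formula, so the sketch below is one plausible reading for illustration only; the combination of terms and the coefficients are assumptions.

```python
# Hedged sketch of an "L2M"-style loss: cross-entropy plus regularization
# terms built from the L2-norm and the mean of the model's parameters.
# The exact formulation in the paper may differ.
import torch
import torch.nn as nn

class L2MLoss(nn.Module):
    def __init__(self, model: nn.Module, l2_coef: float = 1e-4, mean_coef: float = 1e-4):
        super().__init__()
        self.model, self.l2_coef, self.mean_coef = model, l2_coef, mean_coef
        self.ce = nn.CrossEntropyLoss()

    def forward(self, logits: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
        loss = self.ce(logits, targets)
        for p in self.model.parameters():
            loss = loss + self.l2_coef * p.pow(2).sum()    # L2-norm term
            loss = loss + self.mean_coef * p.mean().abs()  # mean term (assumed form)
        return loss
```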


2012 ◽  
Vol 52 (No. 4) ◽  
pp. 181-187 ◽  
Author(s):  
F. Hájek

This paper describes the automated classification of tree species composition from Ikonos 4-meter imagery using an object-oriented approach. The image was acquired over a planted forest area containing various forest types (coniferous, broadleaved, mixed) in the Krušné hory Mts., Czech Republic. To enlarge the class signature space, additional channels were calculated by low-pass filtering, IHS transformation, and Haralick texture measures. Using these layers, image segmentation and classification were conducted on several levels to create a hierarchical image-object network: the higher level separated the image into smaller parts according to stand maturity and structure, while the lower (detailed) level assigned individual tree clusters to classes for the main forest species. Classification accuracy was assessed by comparing the automated technique with the field inventory using the kappa coefficient. The study aimed to create a rule base transferable to other datasets; moreover, it evaluates the appropriate scale of commonly available image data and its utilisation in forestry management.
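The Haralick texture channels mentioned above are derived from gray-level co-occurrence matrices (GLCMs). A minimal per-pixel version is sketched below using scikit-image; the window size, distances, angles, and chosen property are illustrative assumptions, and the study's actual object-oriented workflow computed such measures within its segmentation software.

```python
# Hedged sketch: build one Haralick-style texture layer from a grayscale image
# by computing a GLCM in a sliding window and reducing it to a single property.
# Note: the naive double loop is slow and is for illustration only.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def haralick_channel(gray: np.ndarray, win: int = 15, prop: str = "homogeneity") -> np.ndarray:
    """gray: 2D uint8 image; returns a per-pixel texture layer of the same shape."""
    pad = win // 2
    padded = np.pad(gray, pad, mode="reflect")
    out = np.zeros(gray.shape, dtype=float)
    for i in range(gray.shape[0]):
        for j in range(gray.shape[1]):
            patch = padded[i:i + win, j:j + win]
            glcm = graycomatrix(patch, distances=[1],
                                angles=[0, np.pi / 2],
                                levels=256, symmetric=True, normed=True)
            out[i, j] = graycoprops(glcm, prop).mean()
    return out
```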


2016 ◽  
Vol 76 (2) ◽  
pp. 392-398 ◽  
Author(s):  
Pauline A C Bakker ◽  
Rosaline van den Berg ◽  
Gregory Lenczner ◽  
Fabrice Thévenin ◽  
Monique Reijnierse ◽  
...  

Abstract
Objectives: To investigate the utility of adding structural lesions seen on MRI of the sacroiliac joints to the imaging criterion of the Assessment of SpondyloArthritis international Society (ASAS) axial SpondyloArthritis (axSpA) criteria, and the utility of replacing radiographic sacroiliitis with structural lesions on MRI.
Methods: Two well-calibrated readers scored MRI STIR images (inflammation, MRI-SI), MRI T1-weighted images (structural lesions, MRI-SI-s) and radiographs of the sacroiliac joints (X-SI) of patients in the DEvenir des Spondyloarthrites Indifférenciées Récentes (DESIR) cohort (inflammatory back pain: ≥3 months, <3 years, age <50). A third reader adjudicated MRI-SI and X-SI discrepancies. Previously proposed cut-offs for a positive MRI-SI-s were used (based on <5% prevalence among no-SpA patients): erosions (E) ≥3, fatty lesions (FL) ≥3, or E/FL ≥5. Patients were classified according to the ASAS axSpA criteria using the various definitions of MRI-SI-s.
Results: Of the 582 patients included in this analysis, 418 fulfilled the ASAS axSpA criteria, of whom 127 were modified New York (mNY) positive, and 134 and 75 were MRI-SI-s positive (E/FL ≥5) for readers 1 and 2, respectively. Agreement between mNY and MRI-SI-s (E/FL ≥5) was moderate (reader 1: κ = 0.39; reader 2: κ = 0.44). Using the E/FL ≥5 cut-off instead of mNY left the classification unchanged for 478 (82.1%) and 469 (80.6%) patients for readers 1 and 2, respectively. Twelve (reader 1) or ten (reader 2) patients would not have been classified as axSpA if only MRI-SI-s had been performed (in the scenario of replacement of mNY), while three (reader 1) or six (reader 2) patients would have been additionally classified as axSpA in both scenarios (replacement of mNY and addition of MRI-SI-s). Similar results were seen for the other cut-offs (E ≥3, FL ≥3).
Conclusions: Structural lesions on MRI can be used reliably either as an addition to or as a substitute for radiographs in the ASAS axSpA classification of patients in our cohort of patients with short symptom duration.
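The cut-off logic the study tests is simple enough to state as code. The sketch below encodes the three positivity rules quoted in the Methods (E ≥3, FL ≥3, E/FL ≥5); the function signature and rule labels are assumptions for illustration, while the lesion counts themselves come from the trained readers.

```python
# Hedged sketch: apply the study's proposed MRI-SI-s positivity cut-offs,
# chosen for <5% prevalence among no-SpA patients.
def mri_sis_positive(erosions: int, fatty_lesions: int,
                     rule: str = "E/FL>=5") -> bool:
    if rule == "E>=3":
        return erosions >= 3
    if rule == "FL>=3":
        return fatty_lesions >= 3
    if rule == "E/FL>=5":
        return erosions + fatty_lesions >= 5
    raise ValueError(f"unknown rule: {rule}")

# Example: a patient scored with 2 erosions and 4 fatty lesions.
print(mri_sis_positive(2, 4))  # True under the combined E/FL >= 5 cut-off
print(mri_sis_positive(2, 4, rule="E>=3"))  # False under erosions alone
```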

