Deep Learning for Accurate Segmentation of Venous Thrombus from Black-Blood Magnetic Resonance Images: A Multicenter Study

2021 ◽  
Vol 2021 ◽  
pp. 1-11
Author(s):  
Chuanqi Sun ◽  
Xiangyu Xiong ◽  
Tianjing Zhang ◽  
Xiuhong Guan ◽  
Huan Mao ◽  
...  

Objective. Deep vein thrombosis (DVT) is the third most common cardiovascular disease, and accurate segmentation of venous thrombus from black-blood magnetic resonance (MR) images can provide additional information for personalized DVT treatment planning. Therefore, a deep learning network is proposed to automatically segment venous thrombus with high accuracy and reliability. Methods. To train, test, and externally test the developed network, images of 110 subjects were obtained from three different centers using two different black-blood MR techniques (i.e., DANTE-SPACE and DANTE-FLASH). Two experienced radiologists manually contoured each venous thrombus, followed by re-editing, to create the ground truth. A 5-fold cross-validation strategy was applied for training and testing. Segmentation performance was measured at the pixel and vessel-segment levels. At the pixel level, the Dice similarity coefficient (DSC), average Hausdorff distance (AHD), and absolute volume difference (AVD) of the segmented thrombus were calculated. At the vessel-segment level, sensitivity (SE), specificity (SP), accuracy (ACC), and positive and negative predictive values (PPV and NPV) were used. Results. The proposed network generates segmentation results in good agreement with the ground truth. At the pixel level, the network achieves excellent results on the testing set and the two external testing sets: DSC values of 0.76, 0.76, and 0.73; AHD (mm) values of 4.11, 6.45, and 6.49; and AVD values of 0.16, 0.18, and 0.22. At the vessel-segment level, SE values are 0.95, 0.93, and 0.81; SP values are 0.97, 0.92, and 0.97; ACC values are 0.96, 0.94, and 0.95; PPV values are 0.97, 0.82, and 0.96; and NPV values are 0.97, 0.96, and 0.94. Conclusions. The proposed deep learning network is effective and stable for fully automatic segmentation of venous thrombus on black-blood MR images.
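The pixel-level metrics above (DSC and AVD) are computed directly on binary masks. A minimal sketch of the standard definitions (not the authors' code), using NumPy and toy 2D masks:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient (DSC) between two binary masks."""
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum())

def absolute_volume_difference(pred: np.ndarray, truth: np.ndarray) -> float:
    """AVD: |V_pred - V_truth| / V_truth, with volumes counted in voxels."""
    return abs(int(pred.sum()) - int(truth.sum())) / truth.sum()

# Toy masks: 4 of pred's 5 foreground pixels overlap truth's 5 pixels.
pred  = np.array([[1, 1, 1, 0], [1, 1, 0, 0]], dtype=bool)
truth = np.array([[1, 1, 1, 0], [1, 0, 1, 0]], dtype=bool)
print(dice_coefficient(pred, truth))            # 0.8
print(absolute_volume_difference(pred, truth))  # 0.0
```

The same definitions extend unchanged to 3D volumes; only the mask shapes differ.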

Diagnostics ◽  
2021 ◽  
Vol 11 (7) ◽  
pp. 1156
Author(s):  
Kang Hee Lee ◽  
Sang Tae Choi ◽  
Guen Young Lee ◽  
You Jung Ha ◽  
Sang-Il Choi

Axial spondyloarthritis (axSpA) is a chronic inflammatory disease of the sacroiliac joints. In this study, we develop a method for detecting bone marrow edema in magnetic resonance (MR) images of the sacroiliac joints using a deep-learning network. A total of 815 MR images of the sacroiliac joints were obtained from 60 patients diagnosed with axSpA and 19 healthy subjects. Gadolinium-enhanced, fat-suppressed T1-weighted oblique coronal images were used for deep learning. Active sacroiliitis was defined as bone marrow edema, and the following steps were performed: setting the region of interest (ROI) and normalizing it to a size suitable for input to the deep-learning network, determining bone marrow edema in individual MR images with a convolutional-neural-network-based classifier, and determining sacroiliitis at the subject level from the classification results of the individual MR images. About 70% of the patients and healthy subjects were randomly selected for the training dataset, and the remaining 30% formed the test dataset. This process was repeated five times to calculate the average classification rate over the five folds. The gradient-weighted class activation mapping (Grad-CAM) method was used to validate the classification results. In the performance analysis of the ResNet18-based classification network for individual MR images, use of the ROI yielded excellent detection of bone marrow edema, with 93.55 ± 2.19% accuracy, 92.87 ± 1.27% recall, and 94.69 ± 3.03% precision. The overall performance was further improved by applying a median filter to incorporate context information. Finally, active sacroiliitis was diagnosed in individual subjects with 96.06 ± 2.83% accuracy, 100% recall, and 94.84 ± 3.73% precision.
This is a pilot study on diagnosing bone marrow edema by deep learning based on MR images, and the results suggest that MR analysis using deep learning can be a useful complementary tool for clinicians diagnosing bone marrow edema.
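The median-filter step described above can be sketched as smoothing the sequence of per-slice 0/1 predictions before the subject-level decision. A minimal illustration (window size 3 is an assumption; the abstract does not state it), with precision and recall computed on the smoothed labels:

```python
from statistics import median

def smooth_labels(labels, window=3):
    """Median-filter per-slice 0/1 predictions so that an isolated
    misclassification is overridden by its neighbours (edge slices
    are left unchanged)."""
    half = window // 2
    out = list(labels)
    for i in range(half, len(labels) - half):
        out[i] = int(median(labels[i - half:i + half + 1]))
    return out

def precision_recall(pred, truth):
    """Precision and recall over binary per-slice labels."""
    tp = sum(1 for p, t in zip(pred, truth) if p and t)
    fp = sum(1 for p, t in zip(pred, truth) if p and not t)
    fn = sum(1 for p, t in zip(pred, truth) if not p and t)
    return tp / (tp + fp), tp / (tp + fn)

raw   = [0, 0, 1, 0, 0, 1, 1, 1, 0]  # one isolated false positive at index 2
truth = [0, 0, 0, 0, 0, 1, 1, 1, 0]
smooth = smooth_labels(raw)
print(smooth)                           # [0, 0, 0, 0, 0, 1, 1, 1, 0]
print(precision_recall(smooth, truth))  # (1.0, 1.0)
```

The filter exploits the anatomical continuity of edema across adjacent slices: a lesion is expected to span several consecutive images, while a single positive slice in a negative run is likely noise.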


Electronics ◽  
2020 ◽  
Vol 9 (8) ◽  
pp. 1199
Author(s):  
Michelle Bardis ◽  
Roozbeh Houshyar ◽  
Chanon Chantaduly ◽  
Alexander Ushinsky ◽  
Justin Glavis-Bloom ◽  
...  

(1) Background: The effectiveness of deep learning artificial intelligence depends on data availability, and large volumes of data are often required to train an algorithm effectively. However, few studies have explored the minimum number of images needed for optimal algorithmic performance. (2) Methods: This institutional review board (IRB)-approved retrospective review included patients who received prostate magnetic resonance imaging (MRI) between September 2014 and August 2018 and an MRI fusion transrectal biopsy. T2-weighted images were manually segmented by a board-certified abdominal radiologist. A deep learning network was trained on the segmented images with the following case numbers: 8, 16, 24, 32, 40, 80, 120, 160, 200, 240, 280, and 320. (3) Results: The network's performance was assessed with the Dice score, which measures the overlap between the radiologist's segmentations and the deep-learning-generated segmentations and ranges from 0 (no overlap) to 1 (perfect overlap). The algorithm's Dice score started at 0.424 with 8 cases and improved to 0.858 with 160 cases; with 320 cases it reached 0.867. (4) Conclusions: Our deep learning network for prostate segmentation produced the highest overall Dice score with 320 training cases. Performance improved notably from training sizes of 8 to 120, then plateaued, with minimal improvement at training sizes above 160. Other studies utilizing comparable network architectures may encounter similar plateaus, suggesting that suitable results may be obtainable with small datasets.
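The plateau the authors describe is visible in the per-case gains. A small sketch using only the Dice values stated in the abstract (intermediate case counts are omitted because they are not reported there):

```python
# Dice scores reported in the abstract, keyed by number of training cases.
dice_by_cases = {8: 0.424, 160: 0.858, 320: 0.867}

cases = sorted(dice_by_cases)
for lo, hi in zip(cases, cases[1:]):
    gain = dice_by_cases[hi] - dice_by_cases[lo]
    print(f"{lo:>3} -> {hi:>3} cases: Dice +{gain:.3f} "
          f"({gain / (hi - lo):.5f} per added case)")
```

Going from 8 to 160 cases buys roughly fifty times more Dice per added case than going from 160 to 320, which is the quantitative content of the "plateau above 160" conclusion.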


2021 ◽  
Vol 15 ◽  
Author(s):  
Evan Fletcher ◽  
Charles DeCarli ◽  
Audrey P. Fan ◽  
Alexander Knaack

Deep learning implementations using convolutional neural nets have recently demonstrated promise in many areas of medical imaging. In this article, we lay out the methods by which we have achieved consistently high-quality, high-throughput computation of intracranial segmentation from whole-head magnetic resonance images, an essential but typically time-consuming bottleneck in brain image analysis. We refer to this output as "production-level" because it is suitable for routine use in processing pipelines. Training and testing with an extremely large archive of structural images, our segmentation algorithm performs uniformly well across a wide variety of separate national imaging cohorts, giving Dice metric scores exceeding those of other recent deep learning brain extractions. We describe the components involved in achieving this performance, including the size, variety, and quality of the ground truth and an appropriate neural net architecture. We demonstrate the crucial role of appropriately large and varied datasets, suggesting a less prominent role for algorithm development beyond a threshold of capability.
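The abstract does not spell out the postprocessing, but a common cleanup step for brain-extraction masks in production pipelines is to keep only the largest connected component of the network's output, since the intracranial volume is a single connected region. A minimal 2D sketch of that step (an assumption for illustration, not the authors' pipeline):

```python
from collections import deque
import numpy as np

def largest_component(mask: np.ndarray) -> np.ndarray:
    """Keep only the largest 4-connected foreground component of a binary mask."""
    seen = np.zeros_like(mask, dtype=bool)
    best = np.zeros_like(mask, dtype=bool)
    h, w = mask.shape
    for sy in range(h):
        for sx in range(w):
            if mask[sy, sx] and not seen[sy, sx]:
                # Breadth-first flood fill collects one component.
                comp, q = [], deque([(sy, sx)])
                seen[sy, sx] = True
                while q:
                    y, x = q.popleft()
                    comp.append((y, x))
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            q.append((ny, nx))
                if len(comp) > best.sum():
                    best[:] = False
                    for y, x in comp:
                        best[y, x] = True
    return best

# A 3-pixel "brain" component and a 2-pixel spurious island:
mask = np.array([[1, 1, 0, 0],
                 [1, 0, 0, 1],
                 [0, 0, 0, 1],
                 [0, 0, 0, 0]], dtype=bool)
clean = largest_component(mask)
print(int(clean.sum()))  # 3
```

In practice this is done in 3D with 6- or 26-connectivity (e.g., via `scipy.ndimage.label`); the 2D version above is just the smallest runnable illustration.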


2020 ◽  
Vol 12 (15) ◽  
pp. 2345 ◽  
Author(s):  
Ahram Song ◽  
Yongil Kim ◽  
Youkyung Han

Object-based image analysis (OBIA) outperforms pixel-based analysis for change detection (CD) in very high-resolution (VHR) remote sensing images. Although the effectiveness of deep learning approaches has recently been demonstrated, few studies have investigated combining OBIA with deep learning for CD. Previously proposed methods use object information obtained in the preprocessing and postprocessing phases of deep learning. In general, they assign each object the dominant or most frequent label among all the pixels inside it, without any quantitative criterion for integrating the deep learning network and the object information. In this study, we developed an object-based CD method for VHR satellite images that uses a deep learning network to quantify the uncertainty associated with each object and effectively detect changes in an area without ground truth data. The proposed method defines the uncertainty associated with an object and comprises two main phases. Initially, CD objects were generated by unsupervised CD methods, and these objects were used to train the CD network, which comprises three-dimensional convolutional layers and convolutional long short-term memory layers. After training, the CD objects were updated according to their uncertainty level, and the updated objects were used as training data for the next round of the CD network. This process was repeated until the entire area was classified into two classes, change and no-change, at the object level, or until a defined number of epochs was reached. Experiments on two different VHR satellite images confirmed that the proposed method outperformed traditional CD approaches. The method was less affected by salt-and-pepper noise and could effectively extract changed regions at the object level without ground truth data.
Furthermore, by effectively combining the deep learning technique with object information, the proposed method offers the advantages of unsupervised CD methods together with those of a postprocessed CD network.
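One natural way to realize the per-object uncertainty the method relies on is the fraction of pixels inside an object that disagree with the object's dominant change/no-change label; unanimous objects score 0 and are accepted as training labels for the next round. A hedged sketch (the exact uncertainty measure and threshold are illustrative, not taken from the paper):

```python
def object_uncertainty(pixel_labels):
    """Fraction of an object's pixels that disagree with its dominant
    (change=1 / no-change=0) label; 0 means a unanimous object."""
    ones = sum(pixel_labels)
    dominant = 1 if ones >= len(pixel_labels) - ones else 0
    agree = ones if dominant else len(pixel_labels) - ones
    return 1.0 - agree / len(pixel_labels)

# Objects whose uncertainty is below a threshold are kept as training
# labels for the next round; the rest are re-classified later.
objects = {"A": [1, 1, 1, 1], "B": [1, 0, 1, 0], "C": [0, 0, 0, 1]}
confident = {name for name, px in objects.items()
             if object_uncertainty(px) <= 0.25}
print(sorted(confident))  # ['A', 'C']
```

Iterating this selection shrinks the set of uncertain objects each round, which is exactly the stopping condition the abstract describes (all objects labeled, or a fixed number of epochs reached).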


2021 ◽  
pp. 20210185
Author(s):  
Michihito Nozawa ◽  
Hirokazu Ito ◽  
Yoshiko Ariji ◽  
Motoki Fukuda ◽  
Chinami Igarashi ◽  
...  

Objectives: The aims of the present study were to construct a deep learning model for automatic segmentation of the temporomandibular joint (TMJ) disc on magnetic resonance (MR) images and to evaluate its performance using internal and external test data. Methods: In total, 1200 MR images of closed and open mouth positions in patients with temporomandibular disorder (TMD) were collected from two hospitals (Hospitals A and B). The training and validation data comprised 1000 images from Hospital A, which were used to create the segmentation model. Performance was evaluated using 200 images from Hospital A (internal validity test) and 200 images from Hospital B (external validity test). Results: Although recall (sensitivity) was lower on the Hospital B data than on the Hospital A data, both were above 80%. Precision (positive predictive value) for the position of anterior disc displacement was lower on the Hospital A test data. According to the intra-articular TMD classification, the proportion of accurately assigned TMJs was higher for images from Hospital A than for images from Hospital B. Conclusion: The segmentation deep learning model created in this study may be useful for identifying disc positions on MR images.


2021 ◽  
Vol 8 ◽  
Author(s):  
Anika Biercher ◽  
Sebastian Meller ◽  
Jakob Wendt ◽  
Norman Caspari ◽  
Johannes Schmidt-Mosig ◽  
...  

Deep-learning-based convolutional neural networks (CNNs) are the state-of-the-art machine learning technique for medical image data. They can process large amounts of data and learn image features directly from the raw data. Based on their training, these networks are ultimately able to classify unknown data and make predictions. Magnetic resonance imaging (MRI) is the imaging modality of choice for many spinal cord disorders. Proper interpretation requires time and expertise from radiologists, so there is great interest in using artificial intelligence to interpret and diagnose medical imaging data more quickly. In this study, a CNN was trained and tested using thoracolumbar MR images from 500 dogs. T1- and T2-weighted MR images in sagittal and transverse planes were used. The network was trained with unremarkable images as well as with images showing the following spinal cord pathologies: intervertebral disc extrusion (IVDE), intervertebral disc protrusion (IVDP), fibrocartilaginous embolism (FCE)/acute non-compressive nucleus pulposus extrusion (ANNPE), syringomyelia, and neoplasia. In total, 2,693 MR images from 375 dogs were used for network training, and the network was tested on 7,695 MR images from 125 dogs. The network performed best at detecting IVDPs on sagittal T1-weighted images, with a sensitivity of 100% and a specificity of 95.1%. It also performed very well at detecting IVDEs, especially on sagittal T2-weighted images, with a sensitivity of 90.8% and a specificity of 98.98%. The network detected FCEs and ANNPEs with a sensitivity of 62.22% and a specificity of 97.90% on sagittal T2-weighted images, and with a sensitivity of 91% and a specificity of 90% on transverse T2-weighted images. In detecting neoplasms and syringomyelia, the CNN did not perform well, either because of insufficient training data or because the network had trouble differentiating hyperintensities on T2-weighted images and thus made incorrect predictions.
This study has shown that it is possible to train a CNN to recognize and differentiate various spinal cord pathologies on canine MR images. CNNs therefore have great potential to act as a "second eye" for imagers in the future, providing a faster focus on the altered image area and thus improving workflow in radiology.
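The sensitivity and specificity figures above come from standard confusion-matrix arithmetic. A minimal sketch (the counts are hypothetical, chosen only to reproduce figures of the same order as the abstract; the per-class confusion matrices are not given there):

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts for a class detected at ~91% sensitivity / ~90% specificity,
# as reported for FCE/ANNPE on transverse T2-weighted images:
se, sp = sensitivity_specificity(tp=91, fn=9, tn=90, fp=10)
print(f"sensitivity {se:.0%}, specificity {sp:.0%}")  # sensitivity 91%, specificity 90%
```

Sensitivity measures how many true lesions the network finds; specificity measures how many unremarkable images it correctly leaves alone — the trade-off that determines a screening tool's usefulness as a "second eye".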

