Development and Evaluation of a Deep Learning Algorithm for Rib Segmentation and Fracture Detection from Multi-Center Chest CT Images

2021
pp. e200248
Author(s):  
Mingxiang Wu
Zhizhong Chai
Guangwu Qian
Huangjing Lin
Qiong Wang
...  
Author(s):  
Chuansheng Zheng
Xianbo Deng
Qiang Fu
Qiang Zhou
Jiapei Feng
...  

Abstract Accurate and rapid diagnosis of COVID-19 suspected cases plays a crucial role in timely quarantine and medical treatment. Developing a deep learning-based model for automatic COVID-19 detection on chest CT is helpful to counter the outbreak of SARS-CoV-2. A weakly-supervised deep learning-based software system was developed using 3D CT volumes to detect COVID-19. For each patient, the lung region was segmented using a pre-trained UNet; then the segmented 3D lung region was fed into a 3D deep neural network to predict the probability of COVID-19 infection. 499 CT volumes collected from Dec. 13, 2019, to Jan. 23, 2020, were used for training, and 131 CT volumes collected from Jan. 24, 2020, to Feb. 6, 2020, were used for testing. The deep learning algorithm obtained 0.959 ROC AUC and 0.976 PR AUC. There was an operating point with 0.907 sensitivity and 0.911 specificity on the ROC curve. When using a probability threshold of 0.5 to classify COVID-positive and COVID-negative, the algorithm obtained an accuracy of 0.901, a positive predictive value of 0.840 and a very high negative predictive value of 0.982. The algorithm took only 1.93 seconds to process a single patient's CT volume using a dedicated GPU. Our weakly-supervised deep learning model can accurately predict the COVID-19 infection probability in chest CT volumes without the need for annotating the lesions for training. The easily trained and high-performance deep learning algorithm provides a fast way to identify COVID-19 patients, which is beneficial to control the outbreak of SARS-CoV-2. The developed deep learning software is available at https://github.com/sydney0zq/covid-19-detection.
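The two-stage pipeline described in this abstract (a pre-trained UNet for lung segmentation followed by a 3D network for classification) can be sketched roughly as below. This is a minimal, hypothetical illustration in PyTorch; the stand-in classifier and the lung_unet argument are assumptions and do not reproduce the authors' released models or weights.

```python
# Minimal sketch of the segment-then-classify inference pipeline (assumed
# structure; not the authors' released implementation).
import torch
import torch.nn as nn

class Tiny3DClassifier(nn.Module):
    """Stand-in for the paper's 3D deep network that outputs P(COVID-19)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.head = nn.Linear(32, 1)

    def forward(self, x):                       # x: (N, 1, D, H, W)
        h = self.features(x).flatten(1)
        return torch.sigmoid(self.head(h))      # infection probability

def predict_covid_probability(ct_volume, lung_unet, classifier):
    """Segment the lung with a pre-trained UNet, then classify the masked volume."""
    with torch.no_grad():
        lung_mask = (lung_unet(ct_volume) > 0.5).float()  # binary lung mask
        masked = ct_volume * lung_mask                    # keep only the lung region
        return classifier(masked).item()
```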


2021
Vol 11 (1)
Author(s):  
Liding Yao
Xiaojun Guan
Xiaowei Song
Yanbin Tan
Chun Wang
...  

Abstract Rib fracture detection is time-consuming and demanding work for radiologists. This study aimed to introduce a novel rib fracture detection system based on deep learning which can help radiologists diagnose rib fractures in chest computed tomography (CT) images conveniently and accurately. A total of 1707 patients from a single center were included in this study. We developed a novel rib fracture detection system on chest CT using a three-step algorithm. According to the examination time, 1507, 100 and 100 patients were allocated to the training set, the validation set and the testing set, respectively. Free-response ROC (FROC) analysis was performed to evaluate the sensitivity and false-positive rate of the deep learning algorithm. Precision, recall, F1-score, negative predictive value (NPV) and detection and diagnosis time were selected as evaluation metrics to compare the diagnostic efficiency of this system with that of radiologists. The radiologist-only study was used as a benchmark, and the radiologist-model collaboration study was evaluated to assess the model's clinical applicability. A total of 50,170,399 blocks (fracture blocks, 91,574; normal blocks, 50,078,825) were labelled for training. The F1-score of the Rib Fracture Detection System was 0.890, and the precision, recall and NPV values were 0.869, 0.913 and 0.969, respectively. By interacting with this detection system, the F1-scores of the junior and the experienced radiologists improved from 0.796 to 0.925 and from 0.889 to 0.970, respectively; the recall scores increased from 0.693 to 0.920 and from 0.853 to 0.972, respectively. On average, the diagnosis time of radiologists assisted by this detection system was reduced by 65.3 s. The constructed Rib Fracture Detection System has performance comparable to that of the experienced radiologist and can automatically detect rib fractures in the clinical setting with high efficacy, which could reduce diagnosis time and radiologists' workload in clinical practice.
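For clarity, the block-level metrics reported above (precision, recall, F1-score, NPV) follow the standard confusion-matrix definitions. A minimal Python sketch is shown below; the counts passed in are purely illustrative, not values from the study.

```python
# Standard confusion-matrix metrics; the counts below are placeholders for
# illustration only, not the study's per-class confusion matrix.
def block_metrics(tp, fp, tn, fn):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)              # sensitivity
    f1 = 2 * precision * recall / (precision + recall)
    npv = tn / (tn + fn)
    return {"precision": precision, "recall": recall, "f1": f1, "npv": npv}

print(block_metrics(tp=913, fp=138, tn=4000, fn=87))  # illustrative counts
```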


Author(s):  
Shuai Wang
Bo Kang
Jinlu Ma
Xianjun Zeng
Mingming Xiao
...  

Abstract Objective The outbreak of Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2) has caused more than 26 million cases of coronavirus disease (COVID-19) in the world so far. To control the spread of the disease, screening large numbers of suspected cases for appropriate quarantine and treatment is a priority. Pathogenic laboratory testing is typically the gold standard, but it suffers from a significant false-negative rate, adding to the urgent need for alternative diagnostic methods to combat the disease. Based on COVID-19 radiographic changes in CT images, this study hypothesized that artificial intelligence methods might be able to extract specific graphical features of COVID-19 and provide a clinical diagnosis ahead of the pathogenic test, thus saving critical time for disease control. Methods We collected 1065 CT images of pathogen-confirmed COVID-19 cases along with those previously diagnosed with typical viral pneumonia. We modified the Inception transfer-learning model to establish the algorithm, followed by internal and external validation. Results The internal validation achieved a total accuracy of 89.5% with a specificity of 0.88 and a sensitivity of 0.87. The external testing dataset showed a total accuracy of 79.3% with a specificity of 0.83 and a sensitivity of 0.67. In addition, of 54 COVID-19 images whose first two nucleic acid test results were negative, 46 were predicted as COVID-19 positive by the algorithm, with an accuracy of 85.2%. Conclusion These results demonstrate the proof-of-principle for using artificial intelligence to extract radiological features for timely and accurate COVID-19 diagnosis. Key Points • The study evaluated the diagnostic performance of a deep learning algorithm using CT images to screen for COVID-19 during the influenza season. • As a screening method, our model achieved a relatively high sensitivity on internal and external CT image datasets. • The model was used to distinguish between COVID-19 and other typical viral pneumonia, both of which have quite similar radiologic characteristics.
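The transfer-learning setup described here (an Inception backbone fine-tuned for a binary COVID-19 vs. typical viral pneumonia decision) is commonly built along the following lines. This is a hedged sketch using TensorFlow/Keras with ImageNet weights as assumptions; the authors' specific modifications to the Inception model are not reproduced.

```python
# Minimal Inception-based transfer-learning sketch (assumed setup, not the
# paper's exact architecture or training schedule).
import tensorflow as tf

base = tf.keras.applications.InceptionV3(
    weights="imagenet", include_top=False, input_shape=(299, 299, 3))
base.trainable = False  # freeze the pre-trained convolutional features

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.5),
    # single sigmoid output: COVID-19 vs. typical viral pneumonia
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["accuracy", tf.keras.metrics.AUC()])
```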


2020
Vol 10 (1)
Author(s):  
Ji Young Lee
Jong Soo Kim
Tae Yoon Kim
Young Soo Kim

Abstract A novel deep-learning algorithm for artificial neural networks (ANNs), completely different from the back-propagation method, was developed in a previous study. The purpose of this study was to assess the feasibility of using the algorithm for the detection of intracranial haemorrhage (ICH) and the classification of its subtypes, without employing a convolutional neural network (CNN). For the detection of ICH with the summation of all the computed tomography (CT) images for each case, the area under the ROC curve (AUC) was 0.859, and the sensitivity and specificity were 78.0% and 80.0%, respectively. Regarding ICH localisation, CT images were divided into 10 subdivisions based on the intracranial height. With the 41–50% subdivision, the best diagnostic performance for detecting ICH was obtained, with an AUC of 0.903, a sensitivity of 82.5%, and a specificity of 84.1%. For the classification of ICH into subtypes, the accuracy for subarachnoid haemorrhage (SAH) was notably high at 91.7%. This study revealed that our approach can greatly reduce the ICH diagnosis time in an actual emergency situation with fairly good diagnostic performance.
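The slice-summation input and the height-based subdivisions described above can be illustrated with a short preprocessing and evaluation sketch. This is a hypothetical helper in Python (NumPy and scikit-learn are assumptions); the study's non-back-propagation ANN itself is not shown.

```python
# Illustrative preprocessing (summing slices within a fractional height band)
# and ROC evaluation; names, thresholds and data below are placeholders.
import numpy as np
from sklearn.metrics import roc_auc_score

def summed_subdivision(ct_volume, lo=0.4, hi=0.5):
    """Sum the axial slices falling within a fraction of the intracranial
    height, e.g. the 41-50% subdivision reported to perform best."""
    n_slices = ct_volume.shape[0]              # volume shape: (slices, H, W)
    start, stop = int(lo * n_slices), int(hi * n_slices)
    return ct_volume[start:stop].sum(axis=0)   # single summed 2D image

# Per-case evaluation with placeholder labels and scores:
labels = np.array([0, 1, 1, 0, 1])             # ground-truth ICH status
scores = np.array([0.2, 0.8, 0.6, 0.3, 0.9])   # ANN output probabilities
print(roc_auc_score(labels, scores))
```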


2021
Vol 2021
pp. 1-7
Author(s):  
Xiaojie Fan
Xiaoyu Zhang
Zibo Zhang
Yifang Jiang

In this study, a deep learning algorithm was applied to energy/spectral computed tomography (CT) for spinal metastasis from lung cancer. A dilated convolutional U-Net (DC-U-Net) model was first proposed and used to segment the energy/spectral CT images of patients with spinal metastasis from lung cancer. Subsequently, energy/spectral CT images at different energy levels were collected for comparison of the signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR). It was found that the learning rate of the model decreased exponentially as the number of training iterations increased, with the lung contour segmented out of the image. Under 40–65 keV, the CT value of bone metastasis from lung cancer decreased with increasing energy, consistent with the average rank sum test result. The SNR and CNR values were the highest under 60 keV. The detection rate of the deep learning algorithm below 60 keV was 81.41%, and that of professional doctors was 77.56%. The detection rate of the deep learning algorithm below 140 keV was 66.03%, and that of professional doctors was 64.74%. In conclusion, the DC-U-Net model demonstrates better segmentation effects than the convolutional neural network (CNN), with the lung contour segmented. Further, a higher energy level leads to worse segmentation effects on the energy/spectral CT image.
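As a rough illustration of what the "dilated convolutional" part of a DC-U-Net refers to, the sketch below shows a PyTorch encoder block in which dilation enlarges the receptive field without extra downsampling. This is an assumed, generic block, not the authors' exact architecture.

```python
# Generic dilated-convolution block of the kind a DC-U-Net encoder might use
# (assumed structure; not the study's actual model).
import torch
import torch.nn as nn

class DilatedConvBlock(nn.Module):
    """Two 3x3 convolutions with dilation; padding matches the dilation rate
    so the spatial size of the CT slice is preserved."""
    def __init__(self, in_ch, out_ch, dilation=2):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=dilation, dilation=dilation),
            nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=dilation, dilation=dilation),
            nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.block(x)

# A 512x512 single-channel CT slice keeps its spatial size:
x = torch.randn(1, 1, 512, 512)
print(DilatedConvBlock(1, 32)(x).shape)   # torch.Size([1, 32, 512, 512])
```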

