A fully automated rib fracture detection system on chest CT images and its impact on radiologist performance

Author(s): Xiang Hong Meng, Di Jia Wu, Zhi Wang, Xin Long Ma, Xiao Man Dong, ...

2021, Vol 11 (1)
Author(s): Liding Yao, Xiaojun Guan, Xiaowei Song, Yanbin Tan, Chun Wang, ...

Abstract: Rib fracture detection is time-consuming and demanding work for radiologists. This study aimed to introduce a novel rib fracture detection system based on deep learning that can help radiologists diagnose rib fractures on chest computed tomography (CT) images conveniently and accurately. A total of 1707 patients from a single center were included in this study. We developed a novel rib fracture detection system on chest CT using a three-step algorithm. According to examination time, 1507, 100, and 100 patients were allocated to the training, validation, and testing sets, respectively. Free-response ROC (FROC) analysis was performed to evaluate the sensitivity and false-positive rate of the deep learning algorithm. Precision, recall, F1-score, negative predictive value (NPV), and detection and diagnosis time were selected as evaluation metrics to compare the diagnostic efficiency of this system with that of radiologists. The radiologist-only study was used as a benchmark, and the radiologist-model collaboration study was evaluated to assess the model's clinical applicability. A total of 50,170,399 blocks (fracture blocks, 91,574; normal blocks, 50,078,825) were labelled for training. The F1-score of the Rib Fracture Detection System was 0.890, and its precision, recall, and NPV were 0.869, 0.913, and 0.969, respectively. By interacting with this detection system, the F1-scores of the junior and the experienced radiologists improved from 0.796 to 0.925 and from 0.889 to 0.970, respectively, and their recall scores increased from 0.693 to 0.920 and from 0.853 to 0.972, respectively. On average, the diagnosis time of radiologists assisted by this detection system was reduced by 65.3 s. The constructed Rib Fracture Detection System performs comparably to an experienced radiologist and can automatically detect rib fractures in the clinical setting with high efficacy, which could reduce diagnosis time and radiologists' workload in clinical practice.
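The abstract compares the system and the radiologists using precision, recall, F1-score, and NPV at the block level. As a minimal sketch of how these metrics are computed from a confusion matrix, the snippet below defines each one; the counts in the usage example are hypothetical placeholders, not the study's data.

```python
# Evaluation metrics named in the abstract: precision, recall (sensitivity),
# F1-score, and negative predictive value (NPV), computed from raw counts.

def precision(tp: int, fp: int) -> float:
    return tp / (tp + fp)

def recall(tp: int, fn: int) -> float:          # a.k.a. sensitivity
    return tp / (tp + fn)

def f1_score(tp: int, fp: int, fn: int) -> float:
    p, r = precision(tp, fp), recall(tp, fn)
    return 2 * p * r / (p + r)

def npv(tn: int, fn: int) -> float:             # negative predictive value
    return tn / (tn + fn)

if __name__ == "__main__":
    # Hypothetical block-level counts, for illustration only.
    tp, fp, fn, tn = 900, 130, 85, 2700
    print(f"precision={precision(tp, fp):.3f}  recall={recall(tp, fn):.3f}  "
          f"F1={f1_score(tp, fp, fn):.3f}  NPV={npv(tn, fn):.3f}")
```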


2021, pp. 028418512110438
Author(s): Xiang Liu, Dijia Wu, Huihui Xie, Yufeng Xu, Lin Liu, ...

Background: The detection of rib fractures (RFs) on computed tomography (CT) images is time-consuming and susceptible to missed diagnoses. An automated artificial intelligence (AI) detection system may help improve the diagnostic efficiency of junior radiologists. Purpose: To compare the diagnostic performance of junior radiologists with and without AI software for RF detection on chest CT images. Materials and methods: Six junior radiologists from three institutions interpreted 393 CT images of patients with acute chest trauma, with and without AI software. The CT images were randomly split into two sets at each institution, with each set assigned to a different radiologist. First, the detection of all fractures (AFs), including displaced fractures (DFs), non-displaced fractures, and buckle fractures, was analyzed. Next, the DFs were selected for analysis. The sensitivity and specificity of the radiologist-only and radiologist-AI groups at the patient level were set as primary endpoints; secondary endpoints were defined at the rib and lesion levels. Results: Regarding AFs, the sensitivity difference between the radiologist-AI group and the radiologist-only group was significant at all three levels (patient level: 26.20%; rib level: 22.18%; lesion level: 23.74%; P < 0.001). Regarding DFs, the sensitivity difference was 16.67%, 14.19%, and 16.16% at the patient, rib, and lesion levels, respectively (P < 0.001). No significant difference was found in specificity between the two groups for AFs and DFs at the patient and rib levels (P > 0.05). Conclusion: AI software improved the sensitivity of RF detection on CT images for junior radiologists and reduced the reading time by approximately 1 min per patient without decreasing specificity.
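This study reports sensitivity at three nested granularities (patient, rib, lesion). As an illustrative sketch of how per-lesion detections can be aggregated to the rib and patient levels before computing sensitivity, the snippet below uses hypothetical data structures and entries; it is not the study's evaluation code.

```python
# Aggregating lesion-level detections to rib and patient level for sensitivity.
from typing import Set, Tuple

Lesion = Tuple[str, int, int]   # (patient_id, rib_index, lesion_index) -- assumed keying

def sensitivity(truth: Set, detected: Set) -> float:
    """True-positive rate over the ground-truth keys at a given level."""
    tp = len(truth & detected)
    fn = len(truth - detected)
    return tp / (tp + fn)

def to_rib_level(lesions: Set[Lesion]) -> Set[Tuple[str, int]]:
    # A rib counts as positive if any of its lesions is positive.
    return {(patient, rib) for patient, rib, _ in lesions}

def to_patient_level(lesions: Set[Lesion]) -> Set[str]:
    # A patient counts as positive if any rib fracture is present/detected.
    return {patient for patient, _, _ in lesions}

# Hypothetical ground truth and reader detections.
truth = {("P001", 4, 0), ("P001", 5, 0), ("P002", 7, 0)}
found = {("P001", 4, 0), ("P002", 7, 0)}

print("lesion-level :", sensitivity(truth, found))
print("rib-level    :", sensitivity(to_rib_level(truth), to_rib_level(found)))
print("patient-level:", sensitivity(to_patient_level(truth), to_patient_level(found)))
```

Because a single detected lesion marks the whole rib or patient as positive, sensitivity typically rises as the analysis moves from the lesion level to the patient level, which is why the reported differences vary across levels.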


Healthcare, 2022, Vol 10 (1), pp. 166
Author(s): Mohamed Mouhafid, Mokhtar Salah, Chi Yue, Kewen Xia

The novel coronavirus disease (COVID-19) has endangered human health and life since 2019. Timely quarantine, diagnosis, and treatment of infected people are the most necessary and important tasks. The most widely used method of detecting COVID-19 is reverse transcription polymerase chain reaction (RT-PCR). Along with RT-PCR, computed tomography (CT) has become a vital technique in diagnosing and managing COVID-19 patients. COVID-19 produces a number of radiological signatures that can be easily recognized on chest CT. These signatures must be analyzed by radiologists, which is, however, an error-prone and time-consuming process. Deep Learning-based methods can be used to perform automatic chest CT analysis, which may shorten the analysis time. The aim of this study is to design a robust and rapid medical recognition system that identifies positive cases in chest CT images using three Ensemble Learning-based models. There are several techniques in Deep Learning for developing a detection system; in this paper, we employed Transfer Learning, which applies the knowledge obtained from a pre-trained Convolutional Neural Network (CNN) to a different but related task. To ensure the robustness of the proposed system for identifying positive cases in chest CT images, we used two Ensemble Learning methods, namely Stacking and Weighted Average Ensemble (WAE), to combine the predictions of three fine-tuned base learners (VGG19, ResNet50, and DenseNet201). For Stacking, we explored two-level and three-level stacking. The three resulting Ensemble Learning-based models were trained on two chest CT datasets. A variety of common evaluation measures (accuracy, recall, precision, and F1-score) were used to perform a comparative analysis of each method. The experimental results show that the WAE method provides the most reliable performance, achieving a high recall value, which is desirable in medical applications because failing to identify a truly infected patient poses a greater risk.
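A Weighted Average Ensemble simply blends the softmax outputs of the base learners with per-model weights. The sketch below shows one possible arrangement with the three backbones named in the abstract (VGG19, ResNet50, DenseNet201); the classification head, input size, class count, and weight values are assumptions rather than the authors' configuration, and per-backbone preprocessing is omitted for brevity.

```python
# Hedged sketch of a Weighted Average Ensemble (WAE) over three fine-tuned
# ImageNet backbones, in the spirit of the abstract. Not the published code.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models, applications

NUM_CLASSES = 2                  # COVID-19 positive vs. negative (assumed)
INPUT_SHAPE = (224, 224, 3)

def build_base_learner(backbone_fn) -> tf.keras.Model:
    """Transfer learning: frozen ImageNet backbone + a small new head."""
    backbone = backbone_fn(weights="imagenet", include_top=False,
                           input_shape=INPUT_SHAPE)
    backbone.trainable = False                    # train only the new head
    x = layers.GlobalAveragePooling2D()(backbone.output)
    x = layers.Dense(128, activation="relu")(x)
    out = layers.Dense(NUM_CLASSES, activation="softmax")(x)
    return models.Model(backbone.input, out)

base_learners = [build_base_learner(fn) for fn in
                 (applications.VGG19, applications.ResNet50, applications.DenseNet201)]

def weighted_average_predict(learners, x, weights):
    """Blend softmax outputs with per-model weights (weights should sum to 1)."""
    probs = [w * m.predict(x, verbose=0) for m, w in zip(learners, weights)]
    return np.sum(probs, axis=0)

# Example call: the weights would normally be tuned on a validation set.
x_batch = np.random.rand(4, *INPUT_SHAPE).astype("float32")
ensemble_probs = weighted_average_predict(base_learners, x_batch, weights=[0.3, 0.4, 0.3])
print(ensemble_probs.argmax(axis=1))
```

Stacking differs from WAE in that the base learners' predictions are fed as features to a trainable meta-learner instead of being combined with fixed weights.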


2004
Author(s): Takeshi Hara, Akira Yamamoto, Xiangrong Zhou, Shingo Iwano, Shigeki Itoh, ...

2020
Author(s): Jinseok Lee

BACKGROUND The coronavirus disease (COVID-19) has spread explosively worldwide since the beginning of 2020. According to a multinational consensus statement from the Fleischner Society, computed tomography (CT) can be used as a relevant screening tool owing to its higher sensitivity for detecting early pneumonic changes. However, physicians are extremely busy fighting COVID-19 in this era of worldwide crisis. Thus, it is crucial to accelerate the development of an artificial intelligence (AI) diagnostic tool to support physicians. OBJECTIVE We aimed to rapidly develop an AI technique to diagnose COVID-19 pneumonia and differentiate it from non-COVID-19 pneumonia and non-pneumonia diseases on CT. METHODS A simple 2D deep learning framework, named the fast-track COVID-19 classification network (FCONet), was developed to diagnose COVID-19 pneumonia from a single chest CT image. FCONet was developed by transfer learning, using one of four state-of-the-art pre-trained deep learning models (VGG16, ResNet50, InceptionV3, or Xception) as a backbone. For training and testing of FCONet, we collected 3,993 chest CT images of patients with COVID-19 pneumonia, other pneumonia, and non-pneumonia diseases from Wonkwang University Hospital, Chonnam National University Hospital, and the Italian Society of Medical and Interventional Radiology public database. These CT images were split into a training set and a testing set at a ratio of 8:2. On the test dataset, the diagnostic performance for COVID-19 pneumonia was compared among the four pre-trained FCONet models. In addition, we tested the FCONet models on an additional external testing dataset extracted from the embedded low-quality chest CT images of COVID-19 pneumonia in recently published papers. RESULTS Of the four pre-trained FCONet models, the ResNet50-based model showed excellent diagnostic performance (sensitivity 99.58%, specificity 100%, and accuracy 99.87%) and outperformed the other three pre-trained models on the testing dataset. On the additional external test dataset of low-quality CT images, the detection accuracy of the ResNet50 model was the highest (96.97%), followed by Xception, InceptionV3, and VGG16 (90.71%, 89.38%, and 87.12%, respectively). CONCLUSIONS FCONet, a simple 2D deep learning framework based on a single chest CT image, provides excellent diagnostic performance in detecting COVID-19 pneumonia. Based on our testing dataset, the ResNet50-based FCONet might be the best model, as it outperformed the other FCONet models based on VGG16, Xception, and InceptionV3.
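The FCONet design is a single pre-trained backbone with a new three-class head (COVID-19 pneumonia, other pneumonia, non-pneumonia) trained on 2D CT slices. The sketch below shows one way to parameterize the backbone choice across the four candidates named in the abstract; the head layers, input shape, optimizer, and the commented-out 8:2 split are assumptions, not the published FCONet implementation.

```python
# Hedged sketch of an FCONet-style transfer-learning classifier with a
# swappable backbone (VGG16 / ResNet50 / InceptionV3 / Xception).
import tensorflow as tf
from tensorflow.keras import layers, models, applications

BACKBONES = {
    "VGG16": applications.VGG16,
    "ResNet50": applications.ResNet50,
    "InceptionV3": applications.InceptionV3,
    "Xception": applications.Xception,
}

def build_fconet_like(backbone_name: str, input_shape=(256, 256, 3)) -> tf.keras.Model:
    """Pre-trained backbone + new 3-class head for single-slice classification."""
    base = BACKBONES[backbone_name](weights="imagenet", include_top=False,
                                    input_shape=input_shape)
    x = layers.GlobalAveragePooling2D()(base.output)
    x = layers.Dropout(0.5)(x)
    out = layers.Dense(3, activation="softmax")(x)   # three diagnostic classes
    model = models.Model(base.input, out)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# The abstract's 8:2 split could be reproduced with scikit-learn, e.g.:
# from sklearn.model_selection import train_test_split
# x_train, x_test, y_train, y_test = train_test_split(
#     images, labels, test_size=0.2, stratify=labels)

model = build_fconet_like("ResNet50")
model.summary()
```

Comparing the four backbones then amounts to building one such model per entry in `BACKBONES`, training each on the same split, and evaluating sensitivity, specificity, and accuracy on the held-out test set.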

