Detection Accuracy and Latency of Colorectal Lesions with Computer-Aided Detection System Based on Low-Bias Evaluation

Diagnostics ◽  
2021 ◽  
Vol 11 (10) ◽  
pp. 1922
Author(s):  
Hiroaki Matsui ◽  
Shunsuke Kamba ◽  
Hideka Horiuchi ◽  
Sho Takahashi ◽  
Masako Nishikawa ◽  
...  

We developed a computer-aided detection (CADe) system to detect and localize colorectal lesions by modifying You-Only-Look-Once version 3 (YOLO v3) and evaluated its performance in two different settings. The test dataset was obtained from 20 randomly selected patients who underwent endoscopic resection for 69 colorectal lesions at the Jikei University Hospital between June 2017 and February 2018. First, we evaluated the diagnostic performance using still images extracted randomly and automatically from video recordings of the entire endoscopic procedure at 5 s intervals, without eliminating poor-quality images. Second, the latency of lesion detection by the CADe system, measured from the initial appearance of each lesion, was investigated by reviewing the videos. A total of 6531 images, including 662 images with a lesion, were studied in the image-based analysis. The AUC, sensitivity, specificity, positive predictive value, negative predictive value, and accuracy were 0.983, 94.6%, 95.2%, 68.8%, 99.4%, and 95.1%, respectively. The median time to detect colorectal lesions in the lesion-based analysis was 0.67 s. In conclusion, we demonstrated that our CADe system based on YOLO v3 could accurately and almost instantaneously detect colorectal lesions in a test dataset derived from full-procedure videos, mitigating operator selection bias.
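
A minimal sketch of the image-based evaluation described above: sample still frames from an endoscopy video every 5 s with OpenCV and tally image-level sensitivity and specificity from per-frame predictions. The function names (sample_frames, image_level_metrics) and the assumption that a separate detector supplies a 0/1 lesion flag per frame are illustrative, not the authors' implementation.

    import cv2

    def sample_frames(video_path, interval_s=5.0):
        """Yield (timestamp_s, frame) pairs extracted at fixed time intervals."""
        cap = cv2.VideoCapture(video_path)
        fps = cap.get(cv2.CAP_PROP_FPS) or 30.0   # fall back if FPS metadata is missing
        step = max(1, int(round(fps * interval_s)))
        idx = 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            if idx % step == 0:
                yield idx / fps, frame
            idx += 1
        cap.release()

    def image_level_metrics(predictions, labels):
        """predictions, labels: per-frame 0/1 flags for 'lesion present'."""
        tp = sum(1 for p, l in zip(predictions, labels) if p and l)
        tn = sum(1 for p, l in zip(predictions, labels) if not p and not l)
        fp = sum(1 for p, l in zip(predictions, labels) if p and not l)
        fn = sum(1 for p, l in zip(predictions, labels) if not p and l)
        sensitivity = tp / (tp + fn) if tp + fn else 0.0
        specificity = tn / (tn + fp) if tn + fp else 0.0
        return sensitivity, specificity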

2020 ◽  
Author(s):  
Jinseok Lee

BACKGROUND: The coronavirus disease (COVID-19) has spread explosively worldwide since the beginning of 2020. According to a multinational consensus statement from the Fleischner Society, computed tomography (CT) can be used as a relevant screening tool owing to its higher sensitivity for detecting early pneumonic changes. However, physicians are extremely busy fighting COVID-19 in this era of worldwide crisis. Thus, it is crucial to accelerate the development of an artificial intelligence (AI) diagnostic tool to support physicians.
OBJECTIVE: We aimed to rapidly develop an AI technique to diagnose COVID-19 pneumonia and differentiate it from non-COVID pneumonia and non-pneumonia diseases on CT.
METHODS: A simple 2D deep learning framework, named fast-track COVID-19 classification network (FCONet), was developed to diagnose COVID-19 pneumonia from a single chest CT image. FCONet was developed by transfer learning, using one of four state-of-the-art pre-trained deep learning models (VGG16, ResNet50, InceptionV3, or Xception) as a backbone. For training and testing of FCONet, we collected 3,993 chest CT images of patients with COVID-19 pneumonia, other pneumonia, and non-pneumonia diseases from Wonkwang University Hospital, Chonnam National University Hospital, and the Italian Society of Medical and Interventional Radiology public database. These CT images were split into training and testing sets at a ratio of 8:2. On the test dataset, the diagnostic performance for COVID-19 pneumonia was compared among the four pre-trained FCONet models. In addition, we tested the FCONet models on an external test dataset of low-quality chest CT images of COVID-19 pneumonia embedded in recently published papers.
RESULTS: Of the four pre-trained FCONet models, ResNet50 showed excellent diagnostic performance (sensitivity 99.58%, specificity 100%, and accuracy 99.87%) and outperformed the other three pre-trained models on the test dataset. On the additional external test dataset of low-quality CT images, the detection accuracy of the ResNet50 model was the highest (96.97%), followed by Xception, InceptionV3, and VGG16 (90.71%, 89.38%, and 87.12%, respectively).
CONCLUSIONS: FCONet, a simple 2D deep learning framework based on a single chest CT image, provides excellent diagnostic performance in detecting COVID-19 pneumonia. Based on our test dataset, the ResNet50-based FCONet might be the best model, as it outperformed the FCONet models based on VGG16, Xception, and InceptionV3.
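
A minimal sketch of the transfer-learning setup described above, assuming a Keras/TensorFlow environment: an ImageNet-pretrained backbone (ResNet50, VGG16, InceptionV3, or Xception) with a small classification head for the three classes (COVID-19 pneumonia, other pneumonia, non-pneumonia). The input size, head layers, and optimizer settings are assumptions, not the published FCONet configuration.

    import tensorflow as tf
    from tensorflow.keras import layers, models

    def build_fconet_like(backbone_name="ResNet50",
                          input_shape=(256, 256, 3), n_classes=3):
        """Pretrained backbone plus a small dense head for 3-class CT classification."""
        backbone_cls = getattr(tf.keras.applications, backbone_name)
        backbone = backbone_cls(include_top=False, weights="imagenet",
                                input_shape=input_shape, pooling="avg")
        x = layers.Dense(256, activation="relu")(backbone.output)
        x = layers.Dropout(0.5)(x)
        out = layers.Dense(n_classes, activation="softmax")(x)
        model = models.Model(backbone.input, out)
        model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
                      loss="categorical_crossentropy", metrics=["accuracy"])
        return model

Swapping backbone_name to "VGG16", "InceptionV3", or "Xception" reproduces the four-way comparison in spirit, since all four models are exposed under tf.keras.applications.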


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Mu Sook Lee ◽  
Yong Soo Kim ◽  
Minki Kim ◽  
Muhammad Usman ◽  
Shi Sub Byon ◽  
...  

We examined the feasibility of explainable computer-aided detection of cardiomegaly in routine clinical practice using segmentation-based methods. Overall, 793 retrospectively acquired posterior–anterior (PA) chest X-ray images (CXRs) of 793 patients were used to train deep learning (DL) models for lung and heart segmentation. The training dataset included PA CXRs from two public datasets and in-house PA CXRs. Two fully automated segmentation-based methods using state-of-the-art DL models for lung and heart segmentation were developed. The diagnostic performance was assessed and the reliability of the automatic cardiothoracic ratio (CTR) calculation was determined using the mean absolute error and paired t-test. The effects of thoracic pathological conditions on performance were assessed using subgroup analysis. One thousand PA CXRs of 1000 patients (480 men, 520 women; mean age 63 ± 23 years) were included. The CTR values derived from the DL models and diagnostic performance exhibited excellent agreement with reference standards for the whole test dataset. Performance of segmentation-based methods differed based on thoracic conditions. When tested using CXRs with lesions obscuring heart borders, the performance was lower than that for other thoracic pathological findings. Thus, segmentation-based methods using DL could detect cardiomegaly; however, the feasibility of computer-aided detection of cardiomegaly without human intervention was limited.
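
A minimal sketch of an automatic cardiothoracic ratio (CTR) computation from predicted lung and heart segmentation masks on a PA chest X-ray. The widest-horizontal-extent definition and the 0.5 cardiomegaly cut-off are common conventions assumed here; the study's exact post-processing is not described in the abstract.

    import numpy as np

    def widest_extent(mask):
        """Leftmost and rightmost occupied columns of a 2D binary mask."""
        cols = np.where(mask.any(axis=0))[0]
        return int(cols.min()), int(cols.max())

    def cardiothoracic_ratio(heart_mask, lung_mask):
        """CTR = maximal horizontal cardiac width / maximal thoracic width."""
        h_left, h_right = widest_extent(heart_mask)
        l_left, l_right = widest_extent(lung_mask)
        return (h_right - h_left) / (l_right - l_left)

    def is_cardiomegaly(heart_mask, lung_mask, threshold=0.5):
        return cardiothoracic_ratio(heart_mask, lung_mask) > threshold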


Author(s):  
Yongfeng Gao ◽  
Jiaxing Tan ◽  
Zhengrong Liang ◽  
Lihong Li ◽  
Yumei Huo

Computer-aided detection (CADe) of pulmonary nodules plays an important role in assisting radiologists' diagnosis and alleviating the interpretation burden for lung cancer. Current CADe systems, which aim to simulate radiologists' examination procedure, are built upon computed tomography (CT) images, with feature extraction for detection and diagnosis. The CT image viewed by radiologists is reconstructed from the sinogram, the original raw data acquired by the CT scanner. In this work, different from conventional image-based CADe systems, we propose a novel sinogram-based CADe system in which the full projection information is used to explore additional effective features of nodules in the sinogram domain. Facing the challenges of limited research on this concept and unknown effective features in the sinogram domain, we design a new CADe system that utilizes the self-learning power of a convolutional neural network to learn and extract effective features from the sinogram. The proposed system was validated on 208 patient cases from the publicly available online Lung Image Database Consortium database, with each case having at least one juxtapleural nodule annotation. Experimental results demonstrated that our proposed method achieved an area under the receiver operating characteristic curve (AUC) of 0.91 based on the sinogram alone, compared with 0.89 based on the CT image alone. Moreover, combining the sinogram and the CT image further improved the AUC to 0.92. This study indicates that pulmonary nodule detection in the sinogram domain is feasible with deep learning.
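
A minimal sketch of the score-level combination suggested by the last result: fuse a sinogram-domain CNN score with a CT-image-domain CNN score for each nodule candidate and compare AUCs. The equal weighting and the use of scikit-learn's roc_auc_score are assumptions; the paper's fusion scheme is not specified in the abstract.

    import numpy as np
    from sklearn.metrics import roc_auc_score

    def fused_auc(sino_scores, ct_scores, labels, w_sino=0.5):
        """AUCs for sinogram-only, CT-only, and weighted-average fused scores."""
        sino_scores = np.asarray(sino_scores, dtype=float)
        ct_scores = np.asarray(ct_scores, dtype=float)
        fused = w_sino * sino_scores + (1.0 - w_sino) * ct_scores
        return {
            "auc_sinogram": roc_auc_score(labels, sino_scores),
            "auc_ct": roc_auc_score(labels, ct_scores),
            "auc_fused": roc_auc_score(labels, fused),
        }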


Radiology ◽  
2005 ◽  
Vol 235 (2) ◽  
pp. 385-390 ◽  
Author(s):  
Jay A. Baker ◽  
Eric L. Rosen ◽  
Michele M. Crockett ◽  
Joseph Y. Lo

Author(s):  
Ammar Chaudhry ◽  
William H. Moore

Purpose: The radiographic diagnosis of lung nodules is associated with low sensitivity and specificity. Computer-aided detection (CAD) systems have been shown to have higher accuracy in the detection of lung nodules. The purpose of this study was to assess the effect on sensitivity and specificity when a CAD system is used to review chest radiographs in a real-time setting. Methods: Sixty-three patients, including 24 controls, who had chest radiographs and CT within three months were included in this study. Three radiologists were presented chest radiographs without CAD and were asked to mark all lung nodules. The radiologists were then shown the CAD region-of-interest (ROI) marks and asked to agree or disagree with each mark. All marks were correlated with the CT studies. Results: The mean sensitivity of the three radiologists without CAD was 16.1%, which showed a statistically significant improvement to 22.5% with CAD. The mean specificity of the three radiologists was 52.5% without CAD and decreased to 48.1% with CAD. There was no significant change in the positive predictive value or negative predictive value. Conclusion: The addition of a CAD system to chest radiography interpretation significantly improves the detection of lung nodules without significantly affecting specificity, suggesting that CAD would improve the overall detection of lung nodules.
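
A minimal sketch of how the paired per-nodule reader results described above could be compared without and with CAD using McNemar's test (statsmodels). The contingency-table construction and exact-test choice are illustrative assumptions; the abstract does not state the study's statistical procedure.

    from statsmodels.stats.contingency_tables import mcnemar

    def mcnemar_pvalue(no_cad, with_cad):
        """no_cad, with_cad: per-nodule 0/1 detection flags for one reader."""
        both      = sum(1 for a, b in zip(no_cad, with_cad) if a and b)
        only_no   = sum(1 for a, b in zip(no_cad, with_cad) if a and not b)
        only_with = sum(1 for a, b in zip(no_cad, with_cad) if not a and b)
        neither   = sum(1 for a, b in zip(no_cad, with_cad) if not a and not b)
        table = [[both, only_no],
                 [only_with, neither]]
        return mcnemar(table, exact=True).pvalue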


Diagnostics ◽  
2021 ◽  
Vol 11 (8) ◽  
pp. 1498
Author(s):  
I-Ling Chen ◽  
Yen-Jen Wang ◽  
Chang-Cheng Chang ◽  
Yu-Hung Wu ◽  
Chih-Wei Lu ◽  
...  

Dark skin-type individuals have a greater tendency to develop pigmentary disorders, among which melasma is especially refractory to treatment and often recurs. Objective measurement of melanin helps evaluate the treatment response of pigmentary disorders; however, naked-eye evaluation is subject to weariness and bias. We used cellular-resolution full-field optical coherence tomography (FF-OCT) to assess melanin features of melasma lesions and perilesional skin on the cheeks of eight Asian patients. A computer-aided detection (CADe) system is proposed to mark and quantify melanin. The system combines spatial compounding-based denoising convolutional neural networks (SC-DnCNN) with image processing techniques to extract various types of melanin features, including area, distribution, intensity, and shape. Comparing images of lesional and perilesional skin, statistically significant differences were found for a distribution-based feature of confetti melanin without layering, two distribution-based features of confetti melanin in the stratum spinosum, and a distribution-based feature of grain melanin at the dermal–epidermal junction (p = 0.0402, 0.0032, 0.0312, and 0.0426, respectively). FF-OCT enables real-time observation of melanin features, and the CADe system with SC-DnCNN is a precise and objective tool for interpreting the area, distribution, intensity, and shape of melanin on FF-OCT images.
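
A minimal sketch of the kind of melanin quantification described above: threshold a denoised FF-OCT slice, label connected bright regions with scikit-image, and summarize area, intensity, and shape per region. The fixed intensity threshold and the chosen region properties are illustrative assumptions, not the paper's exact SC-DnCNN pipeline.

    import numpy as np
    from skimage import measure

    def melanin_features(denoised_slice, threshold=0.6):
        """denoised_slice: 2D float image in [0, 1] after denoising."""
        mask = denoised_slice > threshold
        labeled = measure.label(mask)
        feats = []
        for region in measure.regionprops(labeled, intensity_image=denoised_slice):
            feats.append({
                "area": region.area,                      # cluster size
                "mean_intensity": region.mean_intensity,  # brightness within cluster
                "eccentricity": region.eccentricity,      # confetti vs. elongated grain
                "centroid": region.centroid,              # position for distribution stats
            })
        return feats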

