A large-scale evaluation of automatic pulmonary nodule detection in chest CT using local image features and k-nearest-neighbour classification

2009 ◽  
Vol 13 (5) ◽  
pp. 757-770 ◽  
Author(s):  
K. Murphy ◽  
B. van Ginneken ◽  
A.M.R. Schilham ◽  
B.J. de Hoop ◽  
H.A. Gietema ◽  
...

2003 ◽  
Vol 44 (3) ◽  
pp. 252-257 ◽  
Author(s):  
D.-Y. Kim ◽  
J.-H. Kim ◽  
S.-M. Noh ◽  
J.-W. Park

2013 ◽  
Vol 37 (2) ◽  
pp. 334-341 ◽  
Author(s):  
Teresa Chapman ◽  
Jonathan O. Swanson ◽  
Grace S. Phillips ◽  
Marguerite T. Parisi ◽  
Adam M. Alessio

Sensors ◽  
2020 ◽  
Vol 20 (15) ◽  
pp. 4301 ◽  
Author(s):  
Jianning Chi ◽  
Shuang Zhang ◽  
Xiaosheng Yu ◽  
Chengdong Wu ◽  
Yang Jiang

Pulmonary nodule detection in chest computed tomography (CT) is of great significance for the early diagnosis of lung cancer, and many computer-assisted detection methods have accordingly been proposed. However, these methods still struggle to provide convincing results because nodules are easily confused with calcifications, vessels, or other benign lumps. In this paper, we propose a novel deep convolutional neural network (DCNN) framework for detecting pulmonary nodules in chest CT images. The framework consists of three cascaded networks. First, a U-Net integrating an inception structure and dense skip connections is proposed to segment the lung parenchyma from the chest CT image. The inception structure replaces the first convolution layer to extract features over multiple receptive fields, while the dense skip connections reuse these features and propagate them through the network. Second, a modified U-Net in which all convolution layers are replaced by dilated convolutions is proposed to detect "suspicious nodules" in the image; dilated convolution enlarges the receptive field and improves the network's ability to learn global image context. Third, a modified U-Net adopting multi-scale pooling and multi-resolution convolution connections is proposed to identify the true pulmonary nodules among the candidate regions. During detection, the output of each step serves as the input to the next, following a coarse-to-fine process. Moreover, focal loss, perceptual loss, and Dice loss are combined to replace the cross-entropy loss, addressing the imbalanced distribution of positive and negative samples. We evaluate our method on two public datasets. Experimental results show that the proposed method outperforms state-of-the-art methods in accuracy, sensitivity, and specificity.
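
As a rough illustration of the loss design described above, the following is a minimal PyTorch sketch combining a focal term and a Dice term (the perceptual term is omitted because it requires a pretrained feature network; the class name, weights, and gamma value are illustrative assumptions, not values from the paper):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CombinedSegLoss(nn.Module):
    """Sketch of a focal + Dice loss for imbalanced nodule segmentation.
    Hypothetical weights/gamma; the paper additionally uses a perceptual
    term that is not reproduced here."""
    def __init__(self, gamma=2.0, w_focal=1.0, w_dice=1.0, eps=1e-6):
        super().__init__()
        self.gamma, self.w_focal, self.w_dice, self.eps = gamma, w_focal, w_dice, eps

    def forward(self, logits, target):
        # logits, target: (N, 1, H, W); target is a binary mask.
        target = target.float()
        prob = torch.sigmoid(logits)
        # Focal term: down-weights easy background voxels to counter the
        # heavy positive/negative imbalance of nodule masks.
        bce = F.binary_cross_entropy_with_logits(logits, target, reduction="none")
        p_t = prob * target + (1 - prob) * (1 - target)
        focal = ((1 - p_t) ** self.gamma * bce).mean()
        # Dice term: overlap-based, largely insensitive to class imbalance.
        inter = (prob * target).sum(dim=(1, 2, 3))
        union = prob.sum(dim=(1, 2, 3)) + target.sum(dim=(1, 2, 3))
        dice = 1 - ((2 * inter + self.eps) / (union + self.eps)).mean()
        return self.w_focal * focal + self.w_dice * dice
```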


IEEE Access ◽  
2019 ◽  
Vol 7 ◽  
pp. 46033-46044 ◽  
Author(s):  
Jun Wang ◽  
Jiawei Wang ◽  
Yaofeng Wen ◽  
Hongbing Lu ◽  
Tianye Niu ◽  
...  

Author(s):  
C Franck ◽  
A Snoeckx ◽  
M Spinhoven ◽  
H El Addouli ◽  
S Nicolay ◽  
...  

This study's aim was to assess whether deep learning image reconstruction (DLIR) techniques are non-inferior to ASIR-V for the clinical task of pulmonary nodule detection in chest computed tomography. Up to six (range 3–6, mean 4.2) artificial lung nodules (diameter: 3, 5, 8 mm; density: −800, −630, +100 HU) were inserted at different locations in the Kyoto Kagaku Lungman phantom. In total, 16 configurations (10 abnormal, 6 normal) were scanned at 7.6, 3, 1.6 and 0.38 mGy CTDIvol (0, 60, 80 and 95% dose reduction, respectively). Images were reconstructed using 50% ASIR-V and a deep learning-based algorithm at low (DL-L), medium (DL-M) and high (DL-H) strength. Four chest radiologists evaluated 256 series by locating and scoring nodules on a five-point scale. No statistically significant difference was found among the reconstruction algorithms (p = 0.987; AUC averaged across readers: 0.555, 0.561, 0.557, 0.558 for ASIR-V, DL-L, DL-M and DL-H, respectively).
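
The abstract does not state the exact figure of merit; as a simplified stand-in, the sketch below computes a per-reader ROC AUC from five-point confidence scores and averages across readers, using synthetic data (a localization-aware metric such as JAFROC would be closer to the actual reader study):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Synthetic stand-in data: 256 series, four readers, five-point scores.
rng = np.random.default_rng(42)
truth = rng.integers(0, 2, size=256)  # 1 = nodule actually present
scores = {f"reader_{r}": np.clip(truth + rng.integers(1, 6, size=256), 1, 5)
          for r in range(1, 5)}

# Per-reader ROC AUC over the ordinal confidence scores, then the
# across-reader average of the kind reported in the abstract.
aucs = {r: roc_auc_score(truth, s) for r, s in scores.items()}
print(aucs)
print("average across readers:", np.mean(list(aucs.values())))
```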


Sensors ◽  
2019 ◽  
Vol 19 (2) ◽  
pp. 291 ◽  
Author(s):  
Hamdi Sahloul ◽  
Shouhei Shirafuji ◽  
Jun Ota

Local image features are invariant to in-plane rotations and robust to minor viewpoint changes. However, current detectors and descriptors for local image features fail to accommodate out-of-plane rotations larger than 25°–30°. Invariance to such viewpoint changes is essential for numerous applications, including wide-baseline matching, 6D pose estimation, and object reconstruction. In this study, we present a general embedding that wraps a detector/descriptor pair in order to increase viewpoint invariance by exploiting input depth maps. The proposed embedding locates smooth surfaces within the input RGB-D images and projects them into a viewpoint-invariant representation, enabling the detection and description of more viewpoint-invariant features. Our embedding can be used with different detector/descriptor combinations, according to the desired application. Using synthetic and real-world objects, we evaluated the viewpoint invariance of various detectors and descriptors, both standalone and embedded. While standalone local image features fail to accommodate average viewpoint changes beyond 33.3°, our proposed embedding boosted the viewpoint invariance to different levels depending on the scene geometry: objects with distinct surface discontinuities were on average invariant up to 52.8°, and the overall average across all evaluated datasets was 45.4°. Similarly, out of a total of 140 combinations involving 20 local image features and various objects with distinct surface discontinuities, only a single standalone local image feature exceeded the goal of 60° viewpoint difference, in just two combinations, compared with 19 different local image features succeeding in 73 combinations when wrapped in the proposed embedding. Furthermore, the proposed approach operates robustly in the presence of input depth noise, including that of low-cost commodity depth sensors and noise levels well beyond them.
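
A minimal sketch of the general idea, assuming a roughly planar smooth surface: estimate the surface normal from the depth map, virtually rotate the camera to a fronto-parallel view via the induced homography, and run an off-the-shelf detector/descriptor (ORB here) on the rectified image. The helper and its interface are hypothetical illustrations, not the paper's API:

```python
import cv2
import numpy as np

def rectify_and_detect(bgr, normal, K, detector=None):
    """Hypothetical helper: warp a smooth, roughly planar surface to a
    fronto-parallel view before feature extraction, removing the
    out-of-plane rotation that breaks standard local features.

    bgr    : input color image
    normal : unit surface normal in camera coordinates (from depth)
    K      : 3x3 camera intrinsic matrix
    """
    if detector is None:
        detector = cv2.ORB_create()
    normal = np.asarray(normal, dtype=float)
    if normal[2] < 0:
        normal = -normal  # keep the normal in the +z hemisphere
    # Rotation aligning the surface normal with the optical axis
    # (Rodrigues-style closed form; sign conventions simplified).
    z = np.array([0.0, 0.0, 1.0])
    v = np.cross(normal, z)
    c = float(np.dot(normal, z))
    vx = np.array([[0, -v[2], v[1]],
                   [v[2], 0, -v[0]],
                   [-v[1], v[0], 0]])
    R = np.eye(3) + vx + vx @ vx / (1.0 + c)
    # Homography induced by a pure camera rotation: H = K R K^-1.
    H = K @ R @ np.linalg.inv(K)
    warped = cv2.warpPerspective(bgr, H, (bgr.shape[1], bgr.shape[0]))
    gray = cv2.cvtColor(warped, cv2.COLOR_BGR2GRAY)
    keypoints, descriptors = detector.detectAndCompute(gray, None)
    return keypoints, descriptors, H
```

Any detector/descriptor pair exposing OpenCV's `detectAndCompute` interface can be dropped in for ORB, mirroring the paper's claim that the embedding wraps arbitrary detector/descriptor combinations.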

