Feature based analyses of lung nodules from computed tomography (CT) images

Author(s):  
Md. Anwar Hussain ◽  
Lakshipriya Gogoi
2009 ◽  
Vol 56 (7) ◽  
pp. 1810-1820 ◽  
Author(s):  
Xujiong Ye ◽  
Xinyu Lin ◽  
J. Dehmeshki ◽  
G. Slabaugh ◽  
G. Beddoe

Mathematics ◽  
2021 ◽  
Vol 9 (13) ◽  
pp. 1457
Author(s):  
Muazzam Maqsood ◽  
Sadaf Yasmin ◽  
Irfan Mehmood ◽  
Maryam Bukhari ◽  
Mucheol Kim

An atypical growth of cells within tissue is commonly known as a nodular entity. Lung nodule segmentation from computed tomography (CT) images is therefore crucial for early lung cancer diagnosis. One issue that complicates the segmentation of lung nodules is homogeneous nodular variants: the resemblance among nodules, and between nodules and neighboring regions, is very challenging to deal with. Here, we propose an end-to-end U-Net-based segmentation framework named DA-Net for efficient lung nodule segmentation. The method extracts rich features by integrating compact, densely linked convolutional blocks with atrous (dilated) convolution blocks, which broaden the receptive field of the filters without loss of resolution or coverage. We first extract lung ROI images from the whole CT scan slices using standard image-processing operations and k-means clustering; this reduces the model's search space to the lungs, where the nodules are present, instead of the entire CT slice. The proposed model was evaluated on the LIDC-IDRI dataset, where DA-Net performed well, achieving a Dice score of 81% and an IoU score of 71.6%.
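The k-means ROI step is a standard preprocessing trick. Below is a minimal sketch, not the authors' code, assuming scikit-learn and SciPy, of clustering slice intensities into two groups so that the darker cluster yields a lung mask:

```python
import numpy as np
from scipy import ndimage
from sklearn.cluster import KMeans

def lung_roi_mask(ct_slice: np.ndarray) -> np.ndarray:
    """Binary lung mask for a single 2-D CT slice (intensity/HU values)."""
    pixels = ct_slice.reshape(-1, 1).astype(np.float32)
    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(pixels)
    # The cluster with the lower mean intensity corresponds to air/lung.
    lung_label = int(np.argmin(km.cluster_centers_.ravel()))
    mask = (km.labels_ == lung_label).reshape(ct_slice.shape)
    # Morphological clean-up: remove speckle, then fill holes so vessels
    # inside the lungs remain part of the ROI.
    mask = ndimage.binary_opening(mask, structure=np.ones((3, 3)))
    mask = ndimage.binary_fill_holes(mask)
    # In practice the background air touching the image border would also
    # be removed (e.g., by discarding border-connected components).
    return mask
```

The resulting mask restricts the segmentation network's input to the lung fields, which is the search-space reduction the abstract describes.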


2021 ◽  
Vol 17 (4) ◽  
pp. 1-16
Author(s):  
Xiaowei Xu ◽  
Jiawei Zhang ◽  
Jinglan Liu ◽  
Yukun Ding ◽  
Tianchen Wang ◽  
...  

As one of the most commonly ordered imaging tests, the computed tomography (CT) scan comes with inevitable radiation exposure that increases patients' cancer risk. However, CT image quality is directly related to radiation dose, so it is desirable to obtain high-quality CT images with as little dose as possible. CT image denoising aims to obtain high-dose-like, high-quality CT images (domain Y) from low-dose, low-quality CT images (domain X). This can be treated as an image-to-image translation task, where the goal is to learn the transform between a source domain X (noisy images) and a target domain Y (clean images). Recently, the cycle-consistent adversarial denoising network (CCADN) achieved state-of-the-art results by enforcing a cycle-consistency loss without the need for paired training data, which is hard to collect because of patients' interests and cardiac motion. However, out of concern for patients' privacy and data security, protocols typically require clinics to perform medical image-processing tasks, including CT image denoising, locally (i.e., edge denoising). The network models therefore need to achieve high performance under various computation resource constraints, including memory and performance. Our detailed analysis of CCADN raises a number of interesting questions that point to potential ways to improve its performance further using the same or even fewer computation resources. For example, if the noise is large, leading to a significant difference between domain X and domain Y, can we bridge X and Y with an intermediate domain Z such that both the denoising process between X and Z and that between Z and Y are easier to learn? As such intermediate domains lead to multiple cycles, how do we best enforce cycle-consistency? Driven by these questions, we propose a multi-cycle-consistent adversarial network (MCCAN) that builds intermediate domains and enforces both local and global cycle-consistency for edge denoising of CT images. The global cycle-consistency couples all generators together to model the whole denoising process, whereas the local cycle-consistency imposes effective supervision on the process between adjacent domains. Experiments show that both local and global cycle-consistency are important to the success of MCCAN, which outperforms CCADN in denoising quality with slightly lower computation resource consumption.
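A minimal sketch, assuming PyTorch and illustrative generator names rather than the authors' implementation, of how local and global cycle-consistency terms could be formed with a single intermediate domain Z:

```python
import torch
import torch.nn as nn

l1 = nn.L1Loss()

def cycle_losses(x, g_xz, g_zy, g_yz, g_zx):
    """Local and global cycle-consistency terms for X -> Z -> Y.

    g_xz, g_zy map toward the clean domain; g_yz, g_zx map back.
    """
    z = g_xz(x)                       # X -> Z: partially denoised
    y = g_zy(z)                       # Z -> Y: fully denoised
    # Local cycles supervise each pair of adjacent domains separately.
    local = l1(g_zx(z), x) + l1(g_yz(y), z)
    # The global cycle couples all generators: X -> Z -> Y -> Z -> X.
    global_term = l1(g_zx(g_yz(y)), x)
    return local, global_term

# Toy check with identity generators, just to exercise the shapes:
ident = nn.Identity()
x = torch.randn(1, 1, 64, 64)
local, global_term = cycle_losses(x, ident, ident, ident, ident)
```

The design intuition matches the abstract: the local terms give each adjacent pair of domains direct supervision, while the global term constrains the composition of all generators at once.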


1992 ◽  
Vol 11 (4) ◽  
pp. 546-553 ◽  
Author(s):  
S. Rathee ◽  
Z.J. Koles ◽  
T.R. Overton

1987 ◽  
Vol 28 (1) ◽  
pp. 25-30 ◽  
Author(s):  
K. Wadin ◽  
L. Thomander ◽  
H. Wilbrand

The reproducibility of the labyrinthine portion of the facial canal on computed tomography was investigated in 22 patients with Bell's palsy. The CT images were compared with those obtained from 18 temporal bone specimens. Measurements of the diameters of different parts of the facial canal were made on these images and also microscopically in plastic casts of the temporal bone specimens. No marked difference was found between the dimensions of the labyrinthine portion of the facial canal in the involved and the healthy temporal bones of the patients, nor did these differ from the dimensions in the specimens. CT of the slender, curved labyrinthine portion was found to be of doubtful value for metric estimation of small differences in width, and the anatomic variations of the canal made evaluation more difficult. CT with a slice thickness of 2 mm was of no value for assessment of this part of the canal. Measurement of the diameters of the labyrinthine portion on CT images is therefore an inappropriate and unreliable method for clinical purposes.


2021 ◽  
Author(s):  
Khalid Labib Alsamadony ◽  
Ertugrul Umut Yildirim ◽  
Guenther Glatz ◽  
Umair bin Waheed ◽  
Sherif M. Hanafy

Computed tomography (CT) is an important tool for characterizing rock samples, allowing quantification of physical properties in 3D and 4D. The accuracy of a property delineated from CT data is strongly correlated with CT image quality. In general, high-quality, lower-noise CT images mandate greater exposure times. With increasing exposure time, however, more wear is put on the X-ray tube and longer cooldown periods are required, inevitably limiting the temporal resolution of the particular phenomena under investigation. In this work, we propose a deep convolutional neural network (DCNN) based approach to improve the quality of images collected during reduced-exposure-time scans. First, we convolve long-exposure-time images from a medical CT scanner with a blur kernel to mimic the degradation caused by reduced-exposure-time scanning. Subsequently, utilizing the high- and low-quality scan stacks, we train a DCNN. The trained network enables us to restore any low-quality scan for which a high-quality reference is not available. Furthermore, we investigate several factors affecting DCNN performance, such as the number of training images, transfer-learning strategies, and loss functions. The results indicate that the number of training images is an important factor, since the predictive capability of the DCNN improves as the number of training images increases. We illustrate, however, that the requirement for a large training dataset can be reduced by exploiting transfer learning. In addition, training the DCNN with mean squared error (MSE) as the loss function outperforms both mean absolute error (MAE) and peak signal-to-noise ratio (PSNR) loss functions with respect to image quality metrics. The presented approach enables the prediction of high-quality images from low-exposure CT images. Consequently, this allows for continued scanning without the need for the X-ray tube to cool down, thereby maximizing temporal resolution. This is of particular value for any core-flood experiment seeking to capture the underlying dynamics.
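A minimal sketch of the degradation step described above, assuming NumPy/SciPy; the Gaussian kernel shape and noise level are assumptions, since the abstract does not specify the blur kernel:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def degrade(hq_slice: np.ndarray, sigma: float = 1.5,
            noise_std: float = 0.02) -> np.ndarray:
    """Blur a high-quality slice and add noise to mimic short-exposure scans."""
    low = gaussian_filter(hq_slice.astype(np.float32), sigma=sigma)
    low += np.random.normal(0.0, noise_std, size=low.shape).astype(np.float32)
    return low

# Training pairs would then be (degrade(hq), hq), with the DCNN fit under
# an MSE loss, the best-performing loss function reported in the abstract.
```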


2014 ◽  
Vol 24 (6) ◽  
pp. 3179-3186 ◽  
Author(s):  
Tong Jia ◽  
Hao Zhang ◽  
Haixiu Meng
