Latent traits of lung tissue patterns in former smokers derived by dual channel deep learning in computed tomography images

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Frank Li ◽  
Jiwoong Choi ◽  
Chunrui Zou ◽  
John D. Newell ◽  
Alejandro P. Comellas ◽  
...  

Abstract: Chronic obstructive pulmonary disease (COPD) is a heterogeneous disease, and the traditional variables extracted from computed tomography (CT) images may not be sufficient to describe all the topological features of lung tissues in COPD patients. We employed an unsupervised three-dimensional (3D) convolutional autoencoder (CAE)-feature constructor (FC) deep learning network to learn from CT data and derive tissue pattern-clusters jointly. We then applied exploratory factor analysis (EFA) to discover the unobserved latent traits (factors) among the pattern-clusters. CT images at total lung capacity (TLC) and residual volume (RV) of 541 former smokers and 59 healthy non-smokers from the cohort of the SubPopulations and Intermediate Outcome Measures in COPD Study (SPIROMICS) were analyzed. TLC and RV images were registered to calculate Jacobian determinant values for all voxels in the TLC images. 3D regions of interest (ROIs) with two data channels, CT intensity and Jacobian value, were randomly extracted from training images and fed to the 3D CAE-FC model. Eighty pattern-clusters and seven factors were identified. Factor scores computed for individual subjects were able to predict spirometry-measured pulmonary function. Two factors, which correlated with various emphysema subtypes, parametric response mapping (PRM) metrics, airway variants, and the airway-tree-to-lung-volume ratio, discriminated patients across all severity stages. Our findings suggest the potential of developing factor-based surrogate markers for new COPD phenotypes.
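The per-voxel Jacobian determinant used above as the second data channel measures local volume change under the TLC-to-RV registration transform. A minimal pure-Python sketch (the callable-transform interface and function name are illustrative assumptions, not the paper's implementation) computes it by central finite differences:

```python
def jacobian_determinant(phi, p, h=1e-5):
    """Determinant of the Jacobian of a 3D transform phi at point p.

    phi: callable mapping a 3D point [x, y, z] to its transformed
    coordinates (e.g. a registration deformation sampled at a voxel).
    Derivatives are estimated by central differences with step h.
    """
    J = []
    for j in range(3):
        fwd = list(p); fwd[j] += h
        bwd = list(p); bwd[j] -= h
        # column j of the Jacobian: d(phi) / d(x_j)
        J.append([(a - b) / (2 * h) for a, b in zip(phi(fwd), phi(bwd))])
    # det(A) == det(A^T), so cofactor expansion works on columns too
    return (J[0][0] * (J[1][1] * J[2][2] - J[1][2] * J[2][1])
          - J[0][1] * (J[1][0] * J[2][2] - J[1][2] * J[2][0])
          + J[0][2] * (J[1][0] * J[2][1] - J[1][1] * J[2][0]))

def scale(q):
    """A uniform 10% expansion: its determinant is 1.1**3 everywhere."""
    return [1.1 * q[0], 1.1 * q[1], 1.1 * q[2]]
```

A determinant above 1 indicates local expansion from RV to TLC, below 1 local contraction, which is why it complements raw CT intensity as a tissue descriptor.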

Cancers ◽  
2021 ◽  
Vol 14 (1) ◽  
pp. 40
Author(s):  
Gyu Sang Yoo ◽  
Huan Minh Luu ◽  
Heejung Kim ◽  
Won Park ◽  
Hongryull Pyo ◽  
...  

We aimed to evaluate and compare the quality of synthetic computed tomography (sCT) images generated by various deep-learning methods in volumetric modulated arc therapy (VMAT) planning for prostate cancer. Simulation computed tomography (CT) and T2-weighted simulation magnetic resonance images from 113 patients were used for sCT generation by three deep-learning approaches: generative adversarial network (GAN), cycle-consistent GAN (CycGAN), and reference-guided CycGAN (RgGAN), a new model which further adjusts the sCTs generated by CycGAN using available paired images. VMAT plans on the original simulation CT images were recalculated on the sCTs and the dosimetric differences were evaluated. For soft tissue, a significant difference in mean Hounsfield units (HUs) between the original CT images and the sCTs was observed only for GAN (p = 0.03). The mean relative dose differences for planning target volumes and organs at risk were within 2% among the sCTs from the three deep-learning approaches. The differences in the dosimetric parameters D98% and D95% from the original CT were lowest for the sCTs from RgGAN. In conclusion, HU conservation for soft tissue was poorest for GAN, and the sCTs generated by RgGAN tended to show the best dosimetric conservation of D98% and D95% among the three methods.
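The comparison metrics in this abstract reduce to simple arithmetic; a small illustrative sketch (function names and the mask convention are ours, not the authors') of the mean HU difference over a tissue mask and the relative dose difference for a parameter such as D98%:

```python
def mean_hu_difference(ct_hu, sct_hu, mask):
    """Mean HU difference between original CT and synthetic CT,
    averaged over a tissue mask (True = voxel belongs to the tissue)."""
    diffs = [s - c for c, s, m in zip(ct_hu, sct_hu, mask) if m]
    return sum(diffs) / len(diffs)

def relative_dose_difference(d_ct, d_sct):
    """Relative dose difference (%) for a dosimetric parameter
    (e.g. D98% or D95%), taking the original-CT plan as reference."""
    return 100.0 * (d_sct - d_ct) / d_ct
```

For example, a D98% of 59.4 Gy on an sCT against 60.0 Gy on the original CT is a -1% relative difference, within the 2% band reported above.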


2021 ◽  
pp. 1063293X2110214
Author(s):  
RT Subhalakshmi ◽  
S Appavu alias Balamurugan ◽  
S Sasikala

Recently, the COVID-19 pandemic has spread drastically, while only a limited quantity of rapid testing kits is available. Therefore, automated COVID-19 diagnosis models are essential to identify the presence of the disease from radiological images. Earlier studies focused on developing Artificial Intelligence (AI) techniques for COVID-19 diagnosis using X-ray images. This paper aims to develop a Deep Learning Based MultiModal Fusion technique, called DLMMF, for COVID-19 diagnosis and classification from Computed Tomography (CT) images. The proposed DLMMF model operates in three main stages: Wiener Filtering (WF) based pre-processing, feature extraction, and classification. The model fuses deep features extracted by the VGG16 and Inception v4 models. Finally, a Gaussian Naïve Bayes (GNB) classifier is applied to identify and classify the test CT images into distinct class labels. The experimental validation of the DLMMF model uses the open-source COVID-CT dataset, which comprises a total of 760 CT images. The experimental outcome showed superior performance, with a maximum sensitivity of 96.53%, specificity of 95.81%, accuracy of 96.81%, and F-score of 96.73%.
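The fusion-then-GNB stage described above can be sketched from scratch. The following toy Python is an illustration of the technique, not the authors' code (which would typically use library extractors and classifiers): deep-feature vectors from two backbones are concatenated, and classification is by per-class Gaussian log-likelihood.

```python
import math

def fuse(features_a, features_b):
    """Late fusion by concatenating two deep-feature vectors
    (e.g. one from VGG16, one from Inception v4)."""
    return list(features_a) + list(features_b)

class GaussianNB:
    """Minimal Gaussian Naive Bayes: per-class feature means and
    variances fitted from data, prediction by maximum log-posterior."""

    def fit(self, X, y):
        self.classes = sorted(set(y))
        self.stats = {}
        for c in self.classes:
            rows = [x for x, label in zip(X, y) if label == c]
            means = [sum(col) / len(rows) for col in zip(*rows)]
            # floor the variance to avoid division by zero
            vars_ = [max(sum((v - m) ** 2 for v in col) / len(rows), 1e-9)
                     for col, m in zip(zip(*rows), means)]
            self.stats[c] = (means, vars_, len(rows) / len(X))
        return self

    def predict(self, x):
        def log_posterior(c):
            means, vars_, prior = self.stats[c]
            ll = math.log(prior)
            for v, m, s2 in zip(x, means, vars_):
                ll += -0.5 * math.log(2 * math.pi * s2) - (v - m) ** 2 / (2 * s2)
            return ll
        return max(self.classes, key=log_posterior)

# Toy fused features: class 0 clusters near 0, class 1 near 1.
X = [fuse([0.1, 0.2], [0.1]), fuse([0.0, 0.1], [0.2]),
     fuse([0.9, 1.0], [0.8]), fuse([1.1, 0.9], [1.0])]
model = GaussianNB().fit(X, [0, 0, 1, 1])
```

The naive-independence assumption keeps the classifier cheap, which suits a pipeline whose cost is dominated by the two CNN feature extractors.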


2021 ◽  
Vol 17 (4) ◽  
pp. 1-16
Author(s):  
Xiaowei Xu ◽  
Jiawei Zhang ◽  
Jinglan Liu ◽  
Yukun Ding ◽  
Tianchen Wang ◽  
...  

As one of the most commonly ordered imaging tests, the computed tomography (CT) scan comes with inevitable radiation exposure that increases cancer risk to patients. However, CT image quality is directly related to radiation dose, and thus it is desirable to obtain high-quality CT images with as little dose as possible. CT image denoising tries to obtain high-dose-like high-quality CT images (domain Y) from low-dose low-quality CT images (domain X), which can be treated as an image-to-image translation task where the goal is to learn the transform between a source domain X (noisy images) and a target domain Y (clean images). Recently, the cycle-consistent adversarial denoising network (CCADN) has achieved state-of-the-art results by enforcing a cycle-consistency loss without the need for paired training data, since paired data are hard to collect due to patients' interests and cardiac motion. However, out of concern for patients' privacy and data security, protocols typically require clinics to perform medical image processing tasks, including CT image denoising, locally (i.e., edge denoising). Therefore, the network models need to achieve high performance under various computation resource constraints, including memory and performance. Our detailed analysis of CCADN raises a number of interesting questions that point to potential ways to further improve its performance using the same or even fewer computation resources. For example, if the noise is large, leading to a significant difference between domain X and domain Y, can we bridge X and Y with an intermediate domain Z such that both the denoising process between X and Z and that between Z and Y are easier to learn? As such intermediate domains lead to multiple cycles, how do we best enforce cycle-consistency?
Driven by these questions, we propose a multi-cycle-consistent adversarial network (MCCAN) that builds intermediate domains and enforces both local and global cycle-consistency for edge denoising of CT images. The global cycle-consistency couples all generators together to model the whole denoising process, whereas the local cycle-consistency imposes effective supervision on the process between adjacent domains. Experiments show that both local and global cycle-consistency are important for the success of MCCAN, which outperforms CCADN in denoising quality with slightly lower computation resource consumption.
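The local and global cycle-consistency terms can be sketched as follows. This toy Python treats images as flat lists and generators as callables; the generator names and the L1 formulation are illustrative assumptions, not the paper's exact losses:

```python
def l1(a, b):
    """Mean absolute difference between two images (as flat lists)."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def multi_cycle_consistency(x, g_xz, g_zy, g_yz, g_zx):
    """Cycle losses for a denoising chain X -> Z -> Y with one
    intermediate domain Z (generator names are illustrative).

    Local cycles supervise adjacent domain pairs; the global cycle
    couples all generators to model the full denoising round trip.
    """
    z = g_xz(x)
    local_xz = l1(g_zx(z), x)                         # X -> Z -> X
    local_zy = l1(g_yz(g_zy(z)), z)                   # Z -> Y -> Z
    global_cycle = l1(g_zx(g_yz(g_zy(g_xz(x)))), x)   # X -> Z -> Y -> Z -> X
    return local_xz + local_zy + global_cycle

def identity(img):
    """Perfectly consistent generators reduce every cycle loss to zero."""
    return img
```

With ideal generators the round trips reproduce the input exactly and the total loss vanishes; during training, minimizing these terms pushes the generator chain toward that behavior without paired data.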


Author(s):  
José Denes Lima Araújo ◽  
Luana Batista da Cruz ◽  
João Otávio Bandeira Diniz ◽  
Jonnison Lima Ferreira ◽  
Aristófanes Corrêa Silva ◽  
...  

2021 ◽  
Author(s):  
Hoon Ko ◽  
Jimi Huh ◽  
Kyung Won Kim ◽  
Heewon Chung ◽  
Yousun Ko ◽  
...  

BACKGROUND Detection and quantification of intra-abdominal free fluid (i.e., ascites) on computed tomography (CT) are essential for identifying emergent or urgent conditions in patients. In an emergency department, automatic detection and quantification of ascites would be beneficial. OBJECTIVE We aimed to develop an artificial intelligence (AI) algorithm for the simultaneous automatic detection and quantification of ascites using a single deep learning model (DLM). METHODS 2D deep learning models (DLMs) based on a deep residual U-Net, U-Net, bi-directional U-Net, and recurrent residual U-Net were developed to segment areas of ascites on abdominopelvic CT. Based on the segmentation results, the DLMs detected ascites by classifying CT images into ascites and non-ascites images. The AI algorithms were trained using 6,337 CT images from 160 subjects (80 with ascites and 80 without) and tested using 1,635 CT images from 40 subjects (20 with ascites and 20 without). The performance of the AI algorithms was evaluated for diagnostic accuracy of ascites detection and for segmentation accuracy of ascites areas. Of these DLMs, we proposed the AI algorithm with the best performance. RESULTS Segmentation accuracy was highest for the deep residual U-Net, with a mean intersection over union (mIoU) value of 0.87, followed by the U-Net, bi-directional U-Net, and recurrent residual U-Net (mIoU values of 0.80, 0.77, and 0.67, respectively). Detection accuracy was also highest for the deep residual U-Net (0.96), followed by the U-Net, bi-directional U-Net, and recurrent residual U-Net (0.90, 0.88, and 0.82, respectively). The deep residual U-Net also achieved high sensitivity (0.96) and high specificity (0.96). CONCLUSIONS We propose the deep residual U-Net-based AI algorithm for automatic detection and quantification of ascites on abdominopelvic CT scans, which provides excellent performance.
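The two evaluation steps above, segmentation scored by mIoU and detection derived from the segmentation output, can be illustrated with a small sketch (binary masks as flat 0/1 lists; the one-pixel detection threshold is our assumption, not the paper's stated rule):

```python
def iou(pred, truth):
    """Intersection over union of two binary masks (flat lists of 0/1)."""
    inter = sum(p and t for p, t in zip(pred, truth))
    union = sum(p or t for p, t in zip(pred, truth))
    return inter / union if union else 1.0

def mean_iou(preds, truths):
    """mIoU: the per-image IoU averaged over the test set."""
    return sum(iou(p, t) for p, t in zip(preds, truths)) / len(preds)

def detect_ascites(pred_mask, min_pixels=1):
    """Classify a slice as an ascites image when the segmentation
    predicts at least `min_pixels` ascites pixels."""
    return sum(pred_mask) >= min_pixels
```

Deriving the detection label directly from the segmentation mask is what lets a single model report both the presence of ascites and its extent.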


2013 ◽  
Vol 58 (6) ◽  
pp. 1531-1535 ◽  
Author(s):  
Ayaka Sakuma ◽  
Hisako Saitoh ◽  
Yoichi Suzuki ◽  
Yohsuke Makino ◽  
Go Inokuchi ◽  
...  

2020 ◽  
Author(s):  
Yodit Abebe Ayalew ◽  
Kinde Anlay Fante ◽  
Mohammed Aliy

Abstract Background: Liver cancer is the sixth most common cancer worldwide. According to WHO data from 2017, liver cancer deaths in Ethiopia reached 1,040 (0.16%) of all cancer deaths. Hepatocellular carcinoma (HCC), the primary liver cancer, causes the death of around 700,000 people each year worldwide, making it the third leading cause of cancer death. HCC occurs due to cirrhosis and hepatitis B or C viruses. Liver cancer is mostly diagnosed with a computed tomography (CT) scan, but detecting the tumor in a CT image is difficult since tumors have intensities similar to nearby tissues and may have a different appearance depending on their type, state, and equipment settings. Nowadays, deep learning methods are used for segmentation of the liver and its tumors from CT images and are more efficient than traditional methods, but they are computationally expensive and need many labeled samples for training, which are difficult to obtain for biomedical images. Results: A deep learning-based segmentation algorithm was employed for liver and tumor segmentation from abdominal CT images. Three separate UNet models were used: one for liver segmentation, and the other two for tumor segmentation from the segmented liver and directly from the abdominal CT image. A Dice score of 0.96 was obtained for liver segmentation, and Dice scores of 0.74 and 0.63 were obtained for segmentation of the tumor from the liver and from the abdominal CT image, respectively. Conclusion: The research improves liver tumor segmentation, which will help physicians in the diagnosis and detection of liver tumors and in designing a treatment plan for the patient. For patients, it increases the chance of getting treatment and decreases the mortality rate due to liver cancer.
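The Dice score used to evaluate all three UNet models, and the cascade step of restricting tumor predictions to the previously segmented liver, can be sketched as follows (flat binary masks; helper names are illustrative, not from the paper):

```python
def dice(pred, truth):
    """Dice similarity coefficient of two binary masks (flat 0/1 lists):
    twice the overlap divided by the total foreground in both masks."""
    inter = sum(p and t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2 * inter / total if total else 1.0

def restrict_to_liver(tumor_pred, liver_mask):
    """Cascade step: keep tumor predictions only inside the
    segmented liver, discarding false positives elsewhere."""
    return [t and l for t, l in zip(tumor_pred, liver_mask)]
```

Masking the tumor prediction by the liver output is one plausible reason the cascaded model (Dice 0.74) outperforms direct tumor segmentation from the full abdominal image (Dice 0.63): tissue outside the liver cannot contribute false positives.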

