Deep convolutional neural network for reduction of contrast-enhanced region on CT images

2019 · Vol 60 (5) · pp. 586-594
Author(s): Iori Sumida, Taiki Magome, Hideki Kitamori, Indra J Das, Hajime Yamaguchi, et al.

Abstract This study aims to produce non-contrast computed tomography (CT) images from contrast-enhanced CT images using a deep convolutional neural network (CNN). Twenty-nine patients were selected, and CT images were acquired both without and with a contrast enhancement medium. The transverse images were divided into 64 × 64 pixel patches, yielding 14,723 patch pairs in total from the non-contrast and contrast-enhanced CT images. The proposed CNN model comprises five two-dimensional (2D) convolution layers with one shortcut path. For comparison, a U-net model comprising five 2D convolution layers interleaved with pooling and unpooling layers was used. Training was performed on 24 patients, and the remaining 5 patients were used to test the trained models. For quantitative evaluation, 50 regions of interest (ROIs) were selected on the reference contrast-enhanced images of the test data, and the mean pixel value of each ROI was calculated. The mean pixel values of the ROIs at the same locations on the reference non-contrast images and the predicted non-contrast images were then calculated and compared. The difference in mean pixel value between the reference contrast-enhanced images and the predicted non-contrast images was significant (P < 0.0001) for both models. When the reference non-contrast images were compared with the predicted non-contrast images, significant pixel-value differences (P < 0.0001) remained with the U-net model, whereas no significant difference was found with the proposed CNN model. The proposed CNN model therefore reduced the contrast-enhanced region satisfactorily.
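As an illustration of the architecture described above (five 2D convolution layers with one shortcut path, operating on 64 × 64 patches), a minimal PyTorch sketch might look like the following; the channel widths, kernel sizes, and activations are assumptions, not the authors' published configuration.

```python
import torch
import torch.nn as nn

class ContrastReductionCNN(nn.Module):
    """Sketch of a five-layer 2D CNN with one shortcut path that maps
    64x64 contrast-enhanced CT patches to non-contrast patches.
    Channel widths and kernel sizes are illustrative assumptions."""

    def __init__(self):
        super().__init__()
        self.conv1 = nn.Sequential(nn.Conv2d(1, 64, 3, padding=1), nn.ReLU())
        self.conv2 = nn.Sequential(nn.Conv2d(64, 64, 3, padding=1), nn.ReLU())
        self.conv3 = nn.Sequential(nn.Conv2d(64, 64, 3, padding=1), nn.ReLU())
        self.conv4 = nn.Sequential(nn.Conv2d(64, 64, 3, padding=1), nn.ReLU())
        self.conv5 = nn.Conv2d(64, 1, 3, padding=1)  # back to one CT channel

    def forward(self, x):
        h = self.conv4(self.conv3(self.conv2(self.conv1(x))))
        # The single shortcut path is read here as a residual connection:
        # the network predicts a correction added to the enhanced input.
        return self.conv5(h) + x

patch = torch.randn(1, 1, 64, 64)           # one 64x64 enhanced CT patch
predicted = ContrastReductionCNN()(patch)   # predicted non-contrast patch
```

Reading the shortcut as a residual connection keeps the network focused on predicting the contrast enhancement itself rather than re-synthesizing the whole image, which is one plausible reason such a model can compete with a pooling-based U-net on this task.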

2020 · Vol 7
Author(s): Hayden Gunraj, Linda Wang, Alexander Wong

The coronavirus disease 2019 (COVID-19) pandemic continues to have a tremendous impact on patients and healthcare systems around the world. In the fight against this novel disease, there is a pressing need for rapid and effective screening tools to identify patients infected with COVID-19. To this end, CT imaging has been proposed as a key screening method to complement RT-PCR testing, particularly when patients undergo routine CT scans for non-COVID-19-related reasons, when worsening respiratory status or developing complications require expedited care, or when patients are suspected to be COVID-19-positive despite negative RT-PCR test results. Early studies on CT-based screening have reported abnormalities in chest CT images which are characteristic of COVID-19 infection, but these abnormalities may be difficult to distinguish from those caused by other lung conditions. Motivated by this, in this study we introduce COVIDNet-CT, a deep convolutional neural network architecture tailored for the detection of COVID-19 cases from chest CT images via a machine-driven design exploration approach. Additionally, we introduce COVIDx-CT, a benchmark CT image dataset derived from CT imaging data collected by the China National Center for Bioinformation, comprising 104,009 images across 1,489 patient cases. Furthermore, in the interest of reliability and transparency, we leverage an explainability-driven performance validation strategy to investigate the decision-making behavior of COVIDNet-CT, ensuring that COVIDNet-CT makes predictions based on relevant indicators in CT images. Both COVIDNet-CT and the COVIDx-CT dataset are publicly available in an open-source, open-access manner as part of the COVID-Net initiative. While COVIDNet-CT is not yet a production-ready screening solution, we hope that releasing the model and dataset will encourage researchers, clinicians, and citizen data scientists alike to leverage and build upon them.
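The abstract does not name the explainability method behind the validation strategy; as one hypothetical illustration of this kind of check, a Grad-CAM-style saliency map over a trained CT classifier could be computed as in the PyTorch sketch below, where model and target_layer are stand-ins rather than COVIDNet-CT internals.

```python
import torch
import torch.nn.functional as F

def grad_cam(model, target_layer, image, class_idx):
    """Grad-CAM saliency: highlights the CT regions that drive the
    model's score for class_idx. `model` and `target_layer` are
    hypothetical stand-ins for a trained COVID-19 CT classifier."""
    activations, gradients = [], []
    fwd = target_layer.register_forward_hook(
        lambda m, inp, out: activations.append(out))
    bwd = target_layer.register_full_backward_hook(
        lambda m, gin, gout: gradients.append(gout[0]))
    try:
        logits = model(image)              # image: (1, C, H, W)
        logits[0, class_idx].backward()    # gradient of the class score
    finally:
        fwd.remove()
        bwd.remove()
    # Weight each feature map by its spatially averaged gradient.
    weights = gradients[0].mean(dim=(2, 3), keepdim=True)
    cam = F.relu((weights * activations[0]).sum(dim=1, keepdim=True))
    return F.interpolate(cam, size=image.shape[2:], mode="bilinear",
                         align_corners=False)
```

Inspecting such maps for attention on lung abnormalities, rather than on irrelevant regions such as the scanner table or embedded text annotations, is one way to confirm that a model's predictions rest on relevant indicators.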


2019 · Vol 8 (2) · pp. 4605-4613

This study presents a Raspberry Pi single-board-computer-based cataract detection system that uses a deep convolutional neural network, built through GoogLeNet transfer learning and MATLAB digital image processing and graded according to the Lens Opacities Classification System III, together with a Python application. The system captures an image of a patient's eyes and detects the type of cataract without the use of dilating drops. It can also determine the severity, grade, color or area, and hardness of the cataract, and it can display, save, search, and print the partial diagnosis made for the patient. Descriptive quantitative research, the Waterfall System Development Life Cycle, and the Evolutionary Prototyping Model were used as the methodologies of this study. Cataract patients and ophthalmologists of an eye clinic in the City of Biñan, Laguna, as well as engineers and information technology professionals, tested the system and served as respondents to the survey. The results indicated that the system's detection of cataract and its characteristics was accurate and reliable, with a significant difference from the current eye examination for cataract. Overall, this system offers a modern approach to cataract detection for cataract patients.
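The study performed GoogLeNet transfer learning in MATLAB; an equivalent sketch in Python using torchvision conveys the same idea, with the number of cataract classes below being an assumption for illustration.

```python
import torch.nn as nn
import torchvision

# Load GoogLeNet pretrained on ImageNet and swap in a classifier head
# sized for the cataract types; the class count is an assumption.
NUM_CATARACT_CLASSES = 4

model = torchvision.models.googlenet(
    weights=torchvision.models.GoogLeNet_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False   # freeze the pretrained backbone
model.fc = nn.Linear(model.fc.in_features, NUM_CATARACT_CLASSES)
# Only the new head is trained, so a modest set of labeled eye images
# is enough to adapt the network, which suits a Raspberry Pi workflow.
```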


Author(s): Abdul Haseeb Wani, Yassar Shiekh, Najeeb Tallal Ahangar

Background: The gold standard for pulmonary artery pressure measurement is right heart catheterization, but its invasive nature precludes routine use. An increase in the calibre of the main pulmonary arterial trunk is a strong indicator of underlying pulmonary arterial hypertension, and MDCT can accurately measure the diameter of the main pulmonary artery. The objectives of this study were to establish normative values for main pulmonary artery calibre using contrast-enhanced CT, to ascertain any significant difference in calibre between the sexes, and to assess the correlation between age and main pulmonary artery diameter. Methods: Contrast-enhanced CT images of 462 subjects were analysed on a PACS workstation monitor; the widest diameter perpendicular to the long axis of the main pulmonary artery, as seen on the reformatted axial image, was measured with an electronic caliper tool at the level of the main pulmonary artery bifurcation. Results: The mean main pulmonary artery diameter was 22.54 ± 2.19 mm in females and 23.34 ± 3.06 mm in males; the difference between the sexes was statistically significant (p < 0.05). The correlation coefficient between age and mean main pulmonary artery diameter across the whole sample was 0.1006, which was not statistically significant. Conclusions: There is a statistically significant difference in mean main pulmonary artery calibre between males and females, with no strong correlation between age and calibre. Further studies are warranted to elucidate the complex interaction between main pulmonary artery diameter and sex, age, and body mass index.
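The two reported statistics, the sex difference in diameter and the age correlation, can be reproduced in outline with scipy; the arrays below are synthetic stand-ins drawn to match the reported means and standard deviations, and the per-sex sample sizes are assumptions since only the 462-subject total is given.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic stand-ins matching the reported summary statistics (mm);
# an equal male/female split of the 462 subjects is assumed.
male_mpa = rng.normal(23.34, 3.06, 231)
female_mpa = rng.normal(22.54, 2.19, 231)
ages = rng.uniform(20, 80, 462)
all_mpa = np.concatenate([male_mpa, female_mpa])

# Welch's two-sample t-test for the sex difference in MPA diameter.
t_stat, p_sex = stats.ttest_ind(male_mpa, female_mpa, equal_var=False)

# Pearson correlation of age vs. diameter (the study reported r = 0.1006).
r_age, p_age = stats.pearsonr(ages, all_mpa)

print(f"sex difference: t = {t_stat:.2f}, p = {p_sex:.4f}")
print(f"age correlation: r = {r_age:.3f}, p = {p_age:.4f}")
```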


2021 · Vol 6 (1) · pp. 1-3
Author(s): Hayden Gunraj, Linda Wang, Alexander Wong

The COVID-19 pandemic continues to have a tremendous impact on patients and healthcare systems around the world. To combat this disease, there is a need for effective screening tools to identify patients infected with COVID-19, and to this end CT imaging has been proposed as a key screening method to complement RT-PCR testing. Early studies have reported abnormalities in chest CT images which are characteristic of COVID-19 infection, but these abnormalities may be difficult to distinguish from abnormalities caused by other lung conditions. Motivated by this, we introduce COVIDNet-CT, a deep convolutional neural network architecture tailored for detection of COVID-19 cases from chest CT images. We also introduce COVIDx-CT, a CT image dataset comprising 104,009 images across 1,489 patient cases. Finally, we leverage explainability to investigate the decision-making behaviour of COVIDNet-CT and ensure that COVIDNet-CT makes predictions based on relevant indicators in CT images.


2020
Author(s): Jianfeng Sui, Liugang Gao, Haijiao Shang, Chunying Li, Zhengda Lu, et al.

Abstract Objective: The aim of this study was to generate virtual noncontrast (VNC) computed tomography (CT) images from intravenous contrast-enhanced CT using a U-net convolutional neural network (CNN), and to compare enhanced, VNC, and noncontrast CT in proton dose calculation. Methods: Thirty groups of CT images from patients who received both enhanced and noncontrast CT were selected, and the enhanced and noncontrast scans were registered. Twenty groups of CT images were chosen as the training set: enhanced CT images were used as the input and the corresponding noncontrast CT images as the output to train the U-net. The remaining 10 groups were used as the test set, for which VNC images were generated by the trained U-net. The same proton radiotherapy plan for esophageal cancer was designed on each of the three image sets, proton dose distributions were calculated on the enhanced, VNC, and noncontrast CT, and the relative dose differences of enhanced and VNC CT against noncontrast CT were analyzed. Results: The mean absolute error (MAE) of the CT values between enhanced and noncontrast CT was 32.3 ± 2.6 HU, whereas the MAE between VNC and noncontrast CT was 6.7 ± 1.3 HU. The mean CT values of the enhanced CT in the great vessels, heart, lung, liver, and spinal cord were significantly higher than those of noncontrast CT, with differences of 97, 83, 42, 40, and 10 HU, respectively; the mean values of the VNC CT showed no significant difference from noncontrast CT. The differences among enhanced, VNC, and noncontrast CT in the average relative proton dose for the clinical target volume (CTV), heart, great vessels, and lung were also investigated: the average relative proton doses of the enhanced CT for these organs were significantly lower than those of noncontrast CT, with the largest difference observed in the great vessels and relatively small differences in the other organs. The γ-passing rates of the enhanced and VNC CT were calculated with a 2% dose difference and 2 mm distance-to-agreement criterion, and the mean γ-passing rate of VNC CT was significantly higher than that of enhanced CT (p < 0.05). Conclusions: Proton radiotherapy planning based on enhanced CT increases the range error and thereby introduces proton dose calculation errors. A U-net-based technique for generating VNC CT from enhanced CT was therefore proposed; the proton dose calculated on VNC CT images was essentially consistent with that calculated on noncontrast CT.
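The HU-level comparison above (mean absolute error between co-registered volumes) can be sketched with numpy as follows; the volumes here are synthetic placeholders, not the study's data.

```python
import numpy as np

def mae_hu(ct_a, ct_b, body_mask=None):
    """Mean absolute error of CT numbers (HU) between two co-registered
    volumes, e.g., VNC vs. noncontrast CT. An optional body_mask limits
    the comparison to voxels inside the patient outline."""
    diff = np.abs(ct_a.astype(np.float64) - ct_b.astype(np.float64))
    return diff[body_mask].mean() if body_mask is not None else diff.mean()

# Synthetic registered volumes for illustration; the study reported an
# MAE of 32.3 HU (enhanced vs. noncontrast) and 6.7 HU (VNC vs. noncontrast).
rng = np.random.default_rng(0)
noncontrast = rng.normal(0.0, 100.0, (32, 128, 128))
vnc = noncontrast + rng.normal(0.0, 8.0, noncontrast.shape)
print(f"MAE = {mae_hu(vnc, noncontrast):.1f} HU")
```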


2021 · Vol 20 (1)
Author(s): Hongmei Yuan, Minglei Yang, Shan Qian, Wenxin Wang, Xiaotian Jia, et al.

Abstract Background Image registration is an essential step in the automated interpretation of brain computed tomography (CT) images of patients with acute cerebrovascular disease (ACVD). However, performing brain CT registration accurately and rapidly remains greatly challenging due to large intersubject anatomical variations, the low resolution of soft tissues, and heavy computation costs. To this end, HSCN-Net, a hybrid supervised convolutional neural network, was developed for precise and fast brain CT registration. Methods HSCN-Net used a simulator to generate synthetic deformation fields as supervision for each reference–moving image pair, addressing the lack of gold-standard deformations. The simulator was designed to generate multiscale affine and elastic deformation fields to overcome the registration challenge posed by large intersubject anatomical deformation. Finally, HSCN-Net adopted a hybrid loss function combining deformation-field and image-similarity terms to improve registration accuracy and generalization capability. In this work, 101 CT images of patients were collected for model construction (57), evaluation (14), and testing (30), and HSCN-Net was compared with the classical Demons and VoxelMorph models. Model performance was assessed comprehensively by qualitative analysis, through visual evaluation of critical brain tissues, and by quantitative analysis, through the endpoint error (EPE) between the predicted and gold-standard sparse deformation vectors, the image normalized mutual information (NMI), and the Dice coefficient of the middle cerebral artery (MCA) blood supply area. Results HSCN-Net and Demons showed better visual spatial matching than VoxelMorph, and HSCN-Net handled smooth and large intersubject deformations more capably than Demons. The mean EPE of HSCN-Net (3.29 mm) was lower than that of Demons (3.47 mm) and VoxelMorph (5.12 mm); the mean Dice of HSCN-Net (0.96) was higher than that of Demons (0.90) and VoxelMorph (0.87); and the mean NMI of HSCN-Net (0.83) was slightly lower than that of Demons (0.84) but higher than that of VoxelMorph (0.81). Moreover, the mean registration time of HSCN-Net (17.86 s) was shorter than that of VoxelMorph (18.53 s) and Demons (147.21 s). Conclusion The proposed HSCN-Net achieves accurate and rapid intersubject brain CT registration.
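The quantitative metrics named above, EPE over sparse deformation vectors and the Dice coefficient of the MCA supply area, can be sketched in numpy as follows; the landmark arrays and masks are illustrative placeholders, not the study's data.

```python
import numpy as np

def endpoint_error(pred_vecs, gold_vecs):
    """Mean endpoint error (mm) between predicted and gold-standard
    sparse deformation vectors, each of shape (N, 3)."""
    return np.linalg.norm(pred_vecs - gold_vecs, axis=1).mean()

def dice(mask_a, mask_b):
    """Dice coefficient of two binary masks, e.g., the warped and
    reference MCA blood-supply areas."""
    intersection = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * intersection / (mask_a.sum() + mask_b.sum())

# Illustrative placeholders: 50 landmark displacements and a square mask.
rng = np.random.default_rng(0)
gold = rng.normal(0.0, 5.0, (50, 3))
pred = gold + rng.normal(0.0, 1.0, gold.shape)
mask = np.zeros((64, 64), dtype=bool)
mask[20:40, 20:40] = True
print(f"EPE  = {endpoint_error(pred, gold):.2f} mm")
print(f"Dice = {dice(mask, np.roll(mask, 2, axis=0)):.2f}")
```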

