Localization and labeling of cervical vertebral bones in the micro-CT images of rabbit fetuses using a 3D deep convolutional neural network

Author(s):  
Antong Chen ◽  
Dahai Xue ◽  
Tosha Shah ◽  
Catherine D. G. Hines ◽  
Alexa Gleason ◽  
...


2020 ◽  
Vol 7 ◽  
Author(s):  
Hayden Gunraj ◽  
Linda Wang ◽  
Alexander Wong

The coronavirus disease 2019 (COVID-19) pandemic continues to have a tremendous impact on patients and healthcare systems around the world. In the fight against this novel disease, there is a pressing need for rapid and effective screening tools to identify patients infected with COVID-19. To this end, CT imaging has been proposed as a key screening method that may complement RT-PCR testing, particularly in situations where patients undergo routine CT scans for non-COVID-19-related reasons, show worsening respiratory status or develop complications that require expedited care, or are suspected to be COVID-19-positive despite negative RT-PCR test results. Early studies on CT-based screening have reported abnormalities in chest CT images that are characteristic of COVID-19 infection, but these abnormalities may be difficult to distinguish from those caused by other lung conditions. Motivated by this, in this study we introduce COVIDNet-CT, a deep convolutional neural network architecture tailored for the detection of COVID-19 cases from chest CT images via a machine-driven design exploration approach. Additionally, we introduce COVIDx-CT, a benchmark CT image dataset derived from CT imaging data collected by the China National Center for Bioinformation, comprising 104,009 images across 1,489 patient cases. Furthermore, in the interest of reliability and transparency, we leverage an explainability-driven performance validation strategy to investigate the decision-making behavior of COVIDNet-CT and to ensure that it makes predictions based on relevant indicators in CT images. Both COVIDNet-CT and the COVIDx-CT dataset are available to the general public in an open-source, open-access manner as part of the COVID-Net initiative.
While COVIDNet-CT is not yet a production-ready screening solution, we hope that releasing the model and dataset will encourage researchers, clinicians, and citizen data scientists alike to leverage and build upon them.
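The classification task described above, mapping a chest CT slice to a class decision, can be illustrated with a deliberately tiny sketch. This is not the COVIDNet-CT architecture (which was produced by machine-driven design exploration); it is a minimal NumPy forward pass with one convolution layer, global average pooling, and a softmax over three hypothetical classes (normal / pneumonia / COVID-19), using random, untrained parameters for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d_relu(image, kernels):
    """Valid-mode 2D convolution (one feature map per kernel), then ReLU."""
    k = kernels.shape[-1]
    h, w = image.shape[0] - k + 1, image.shape[1] - k + 1
    out = np.empty((kernels.shape[0], h, w))
    for f, kern in enumerate(kernels):
        for i in range(h):
            for j in range(w):
                out[f, i, j] = np.sum(image[i:i + k, j:j + k] * kern)
    return np.maximum(out, 0.0)

def classify(ct_slice, kernels, weights):
    """Convolution -> global average pooling -> linear layer -> softmax."""
    feats = conv2d_relu(ct_slice, kernels).mean(axis=(1, 2))  # one value per filter
    logits = weights @ feats
    e = np.exp(logits - logits.max())                          # stable softmax
    return e / e.sum()

# Toy 16x16 "CT slice" and random parameters (illustration only).
ct_slice = rng.standard_normal((16, 16))
kernels = rng.standard_normal((4, 3, 3))   # 4 filters of size 3x3
weights = rng.standard_normal((3, 4))      # 3 classes: normal / pneumonia / COVID-19
probs = classify(ct_slice, kernels, weights)
```

A real screening model would stack many such layers with learned weights; the sketch only shows the shape of the computation, ending in a probability distribution over the three classes.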


2021 ◽  
Vol 6 (1) ◽  
pp. 1-3
Author(s):  
Hayden Gunraj ◽  
Linda Wang ◽  
Alexander Wong

The COVID-19 pandemic continues to have a tremendous impact on patients and healthcare systems around the world. To combat this disease, there is a need for effective screening tools to identify patients infected with COVID-19, and to this end CT imaging has been proposed as a key screening method to complement RT-PCR testing. Early studies have reported abnormalities in chest CT images which are characteristic of COVID-19 infection, but these abnormalities may be difficult to distinguish from abnormalities caused by other lung conditions. Motivated by this, we introduce COVIDNet-CT, a deep convolutional neural network architecture tailored for detection of COVID-19 cases from chest CT images. We also introduce COVIDx-CT, a CT image dataset comprising 104,009 images across 1,489 patient cases. Finally, we leverage explainability to investigate the decision-making behaviour of COVIDNet-CT and ensure that COVIDNet-CT makes predictions based on relevant indicators in CT images.


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Raabid Hussain ◽  
Alain Lalande ◽  
Kibrom Berihu Girum ◽  
Caroline Guigou ◽  
Alexis Bozorg Grayeli

Abstract Temporal bone CT scan is a prerequisite for most surgical procedures concerning the ear, such as cochlear implantation. 3D visualization of inner-ear structures is crucial for diagnostic and surgical preplanning purposes. Since clinical CT scans are acquired at relatively low resolutions, improved performance can be achieved by registering patient-specific CT images to a high-resolution inner-ear model built from accurate 3D segmentations based on micro-CT of human temporal bone specimens. This paper presents a convolutional neural network framework for human inner-ear segmentation from micro-CT images, which can be used to build such a model from an extensive database. The proposed approach employs an auto-context-based cascaded 2D U-net architecture with 3D connected-component refinement to segment the cochlear scalae, semicircular canals, and the vestibule. The system was developed and evaluated on a dataset of 17 micro-CT volumes from the public Hear-EU dataset. A Dice coefficient of 0.90 and a Hausdorff distance of 0.74 mm were obtained. The system yielded precise and fast automatic inner-ear segmentations.
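The two evaluation metrics reported above can be computed directly from binary segmentation masks. The sketch below is a minimal NumPy illustration, not the authors' evaluation code: Dice overlap as 2|A∩B| / (|A| + |B|), and a brute-force symmetric Hausdorff distance over foreground voxel coordinates. The `spacing` parameter is a hypothetical stand-in for converting voxel indices to millimetres.

```python
import numpy as np

def dice_coefficient(a, b):
    """Dice overlap between two binary masks: 2|A∩B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def hausdorff_distance(a, b, spacing=1.0):
    """Symmetric Hausdorff distance between foreground voxels (brute force;
    fine for small masks, quadratic in the number of foreground points)."""
    pa = np.argwhere(a) * spacing
    pb = np.argwhere(b) * spacing
    d = np.linalg.norm(pa[:, None, :] - pb[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())

# Two overlapping 2D masks as a small stand-in for 3D segmentations.
a = np.zeros((8, 8), int); a[2:6, 2:6] = 1
b = np.zeros((8, 8), int); b[3:7, 3:7] = 1
dice = dice_coefficient(a, b)       # 2*9 / (16+16) = 0.5625
hd = hausdorff_distance(a, b)       # sqrt(2): corner voxels disagree diagonally
```

In practice one would use an optimized implementation (e.g. distance transforms) for full micro-CT volumes; the brute-force version above only shows the metric definitions.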


2019 ◽  
Vol 60 (5) ◽  
pp. 586-594 ◽  
Author(s):  
Iori Sumida ◽  
Taiki Magome ◽  
Hideki Kitamori ◽  
Indra J Das ◽  
Hajime Yamaguchi ◽  
...  

Abstract This study aims to produce non-contrast computed tomography (CT) images from contrast-enhanced images using a deep convolutional neural network (CNN). Twenty-nine patients were selected, and CT images were acquired both without and with a contrast-enhancement medium. The transverse images were divided into 64 × 64-pixel patches, yielding 14,723 patches in total for the paired non-contrast and contrast-enhanced CT images. The proposed CNN model comprises five two-dimensional (2D) convolution layers with one shortcut path. For comparison, a U-net model comprising five 2D convolution layers interleaved with pooling and unpooling layers was used. Training was performed on 24 patients, and the remaining 5 patients were used for testing the trained models. For quantitative evaluation, 50 regions of interest (ROIs) were selected on the reference contrast-enhanced images of the test data, and the mean pixel value of each ROI was calculated. The mean pixel values of the ROIs at the same locations on the reference non-contrast image and the predicted non-contrast image were then calculated and compared. In the quantitative analysis, the difference in mean pixel value between the reference contrast-enhanced image and the predicted non-contrast image was significant (P < 0.0001) for both models. Significant differences (P < 0.0001) were also found between the reference and predicted non-contrast images with the U-net model; in contrast, there was no significant difference with the proposed CNN model. Using the proposed CNN model, the contrast-enhanced regions were satisfactorily suppressed.
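The patch-based setup described above, where transverse slices are tiled into 64 × 64 patches and evaluation compares mean pixel values over matched ROIs, can be sketched as follows. This is an illustrative reconstruction, not the authors' code; `extract_patches` and `roi_mean` are hypothetical helper names, and the square-ROI shape is an assumption.

```python
import numpy as np

def extract_patches(image, size=64):
    """Tile a transverse slice into non-overlapping size x size patches,
    discarding any partial patches at the borders."""
    h, w = image.shape
    return [image[i:i + size, j:j + size]
            for i in range(0, h - size + 1, size)
            for j in range(0, w - size + 1, size)]

def roi_mean(image, center, radius):
    """Mean pixel value inside a square ROI centered at (row, col)."""
    r, c = center
    return float(image[r - radius:r + radius + 1, c - radius:c + radius + 1].mean())

# A 512x512 slice tiles into 8 x 8 = 64 patches of 64x64.
slice_hu = np.zeros((512, 512))
patches = extract_patches(slice_hu)

# Evaluation idea: compare the same ROI on the reference non-contrast image
# and on the predicted non-contrast image, e.g. via a paired statistical test.
ref_mean = roi_mean(slice_hu, center=(100, 100), radius=5)
```

The same `roi_mean` helper would be applied to both the reference and the predicted non-contrast images at identical locations, and the resulting pairs of means compared for significance.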

