Fully Automated Segmentation of Globes for Volume Quantification in CT Images of Orbits using Deep Learning

2020 ◽  
Vol 41 (6) ◽  
pp. 1061-1069
Author(s):  
L. Umapathy ◽  
B. Winegar ◽  
L. MacKinnon ◽  
M. Hill ◽  
M.I. Altbach ◽  
...  
2021 ◽  
Author(s):  
Evropi Toulkeridou ◽  
Carlos Enrique Gutierrez ◽  
Daniel Baum ◽  
Kenji Doya ◽  
Evan P Economo

Three-dimensional (3D) imaging, such as micro-computed tomography (micro-CT), is increasingly used by organismal biologists for precise and comprehensive anatomical characterization. However, the segmentation of anatomical structures remains a bottleneck in research, often requiring tedious manual work. Here, we propose a pipeline for the fully automated segmentation of anatomical structures in micro-CT images utilizing state-of-the-art deep learning methods, selecting the ant brain as a test case. We implemented the U-Net architecture for 2D image segmentation as our convolutional neural network (CNN), combined with pixel-island detection. For training and validation of the network, we assembled a dataset of semi-manually segmented brain images from 94 ant species. The trained network predicted the brain area in ant images quickly and accurately; its performance on validation sets showed good agreement between prediction and target, scoring 80% Intersection over Union (IoU) and 90% Dice coefficient (F1). While manual segmentation usually takes many hours per brain, the trained network takes only a few minutes. Furthermore, our network generalizes to segmenting the whole neural system in full-body scans, and works in tests on distantly related and morphologically divergent insects (e.g., fruit flies). These results suggest that methods like the one presented here apply generally across diverse taxa. Our method makes the construction of segmented maps and the morphological quantification of different species more efficient and scalable to large datasets, a step toward a big-data approach to organismal anatomy.
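As a minimal sketch (not the authors' code), the IoU and Dice (F1) scores reported above can be computed from a predicted and a target binary mask as follows; the function name is ours:

```python
import numpy as np

def iou_and_dice(pred: np.ndarray, target: np.ndarray) -> tuple[float, float]:
    """Return (IoU, Dice) for two binary masks of equal shape."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    total = pred.sum() + target.sum()
    iou = intersection / union if union else 1.0      # two empty masks agree
    dice = 2 * intersection / total if total else 1.0
    return float(iou), float(dice)

# Toy example: a prediction covering 80% of a square target region.
target = np.zeros((100, 100), dtype=bool)
target[20:80, 20:80] = True
pred = np.zeros_like(target)
pred[20:80, 20:68] = True
print(iou_and_dice(pred, target))  # IoU = 0.80, Dice ~= 0.89
```

The toy example also illustrates why the two numbers differ: Dice weights the intersection twice, so a mask scoring 80% IoU lands near 90% Dice, consistent with the figures above.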


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Mahmood Nazari ◽  
Luis David Jiménez-Franco ◽  
Michael Schroeder ◽  
Andreas Kluge ◽  
Marcus Bronzel ◽  
...  

Abstract Purpose In this work, we address image segmentation in the scope of dosimetry using deep learning and make three main contributions: (a) to extend and optimize the architecture of an existing convolutional neural network (CNN) in order to obtain a fast, robust and accurate computed tomography (CT)-based organ segmentation method for kidneys and livers; (b) to train the CNN with an inhomogeneous set of CT scans and validate the CNN for daily dosimetry; and (c) to evaluate dosimetry results obtained using automated organ segmentation in comparison with manual segmentation done by two independent experts. Methods We adapted a high-performing deep learning approach that uses CT images to delineate organ boundaries with sufficiently high accuracy and adequate processing time. The segmented organs were subsequently used as binary masks for further convolution with a point spread function to retrieve the activity values from quantitatively reconstructed SPECT images for "volumetric"/3D dosimetry. The resulting activities were used to perform dosimetry calculations with the kidneys as source organs. Results The computational expense of the algorithm was low enough for daily clinical routine, required minimal pre-processing, and performed with acceptable accuracy: a Dice coefficient of 93% for liver segmentation and 94% for kidney segmentation. In addition, kidney self-absorbed doses calculated using automated segmentation differed by 7% from dosimetry performed by two medical physicists in 8 patients. Conclusion The proposed approach may accelerate volumetric dosimetry of kidneys in molecular radiotherapy with 177Lu-labelled radiopharmaceuticals such as 177Lu-DOTATOC. However, even though a fully automated segmentation methodology based on CT images accelerates organ segmentation and performs with high accuracy, it does not remove the need for supervision and corrections by experts, mostly due to misalignments in the co-registration between SPECT and CT images. Trial registration EudraCT, 2016-001897-13. Registered 26.04.2016, www.clinicaltrialsregister.eu/ctr-search/search?query=2016-001897-13.
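An illustrative sketch of the mask-and-PSF step described in the Methods: the CT-derived binary organ mask is blurred with a point spread function and used to weight the reconstructed SPECT counts. A Gaussian PSF is assumed here purely for illustration; the paper does not specify its exact form, and all names below are ours:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def organ_activity(spect: np.ndarray, mask: np.ndarray,
                   psf_sigma_vox: float) -> float:
    """Sum SPECT counts weighted by the PSF-blurred organ mask."""
    blurred_mask = gaussian_filter(mask.astype(float), sigma=psf_sigma_vox)
    return float((spect * blurred_mask).sum())

# Toy volumes: Poisson counts and a cubic stand-in for a kidney mask.
rng = np.random.default_rng(0)
spect = rng.poisson(5.0, size=(64, 64, 64)).astype(float)
mask = np.zeros((64, 64, 64))
mask[20:40, 20:40, 20:40] = 1.0
print(organ_activity(spect, mask, psf_sigma_vox=2.0))
```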


2020 ◽  
Vol 41 (Supplement_2) ◽  
Author(s):  
A.C Chandrashekar ◽  
A.H Handa ◽  
N.S Shivakumar ◽  
P.L Lapolla ◽  
V.G Grau ◽  
...  

Abstract Background Existing methods to reconstruct vascular structures from a computed tomography (CT) angiogram rely on injection of intravenous contrast to enhance the radio-density within the vessel lumen. Pathological changes present within the blood lumen, vessel wall or a combination of both prevent accurate 3D reconstruction. In the example of aortic aneurysmal (AAA) disease, a blood clot or thrombus adherent to the aortic wall within the expanding aneurysmal sac is present in 95% of cases. These deformations prevent the automatic extraction of vital clinically relevant information by current methods. Objectives In this study, we utilised deep learning segmentation methods to establish a high-throughput and automated segmentation pipeline for pathological blood vessels (e.g., aortic aneurysm) in CT images acquired with or without the use of a contrast agent. Methods Twenty-six patients with paired non-contrast and contrast-enhanced CT images were randomly selected from an ethically-approved ongoing study (Ethics Ref 13/SC/0250), manually annotated and used for model training and evaluation (13/13). Data augmentation methods were implemented to diversify the training data set in a ratio of 10:1. We utilised a 3D U-Net with attention gating for both the aortic region-of-interest (ROI) detection and segmentation tasks. Trained architectures were evaluated using the DICE similarity score. Results Inter- and intra-observer analysis supports the accuracy of the manual segmentations used for model training (intra-class correlation coefficient, "ICC" = 0.995 and 1.00, respectively; P<0.001 for both). The performance of our attention-based U-Net (DICE score: 94.8±0.5%) in extracting both the inner lumen and the outer wall of the aortic aneurysm from CT angiograms (CTA) was compared against a generic 3D U-Net (DICE score: 89.5±0.6%) and displayed superior results (p<0.01). Fig 1A depicts the implementation of this network architecture within the aortic segmentation pipeline (automated ROI detection and aortic segmentation). This pipeline has allowed accurate and efficient extraction of the entire aortic volume from both contrast-enhanced CTA (DICE score: 95.3±0.6%) and non-contrast CT (DICE score: 93.2±0.7%) images. Fig 1B illustrates the model output alongside the labelled ground truth segmentation for the pathological aneurysmal region; only minor differences are visually discernible (coloured boxes). Conclusion We developed a novel automated pipeline for high resolution reconstruction of blood vessels using deep learning approaches. This pipeline enables automatic extraction of morphologic features of blood vessels and can be applied for research and potentially for clinical use. Figure 1: Automated Segmentation of Blood Vessels. Funding Acknowledgement Type of funding source: Foundation. Main funding source(s): University of Oxford Medical Research Fund, John Fell Fund
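For readers unfamiliar with attention gating, the sketch below shows an additive attention gate of the kind popularized for attention U-Nets (Oktay et al., 2018): encoder skip features are re-weighted by a sigmoid attention map computed jointly from the skip and a coarser gating signal. The paper's exact architecture is not reproduced here; channel sizes and names are illustrative assumptions:

```python
import torch
import torch.nn as nn

class AttentionGate3D(nn.Module):
    """Additive attention gate for a 3D U-Net skip connection (sketch)."""
    def __init__(self, skip_ch: int, gate_ch: int, inter_ch: int):
        super().__init__()
        self.w_x = nn.Conv3d(skip_ch, inter_ch, kernel_size=1)  # skip features
        self.w_g = nn.Conv3d(gate_ch, inter_ch, kernel_size=1)  # gating signal
        self.psi = nn.Conv3d(inter_ch, 1, kernel_size=1)        # attention map
        self.relu = nn.ReLU(inplace=True)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x: torch.Tensor, g: torch.Tensor) -> torch.Tensor:
        # x: encoder skip connection; g: decoder features already resampled
        # to x's spatial size. The sigmoid map suppresses irrelevant regions.
        attn = self.psi(self.relu(self.w_x(x) + self.w_g(g)))
        return x * self.sigmoid(attn)

x = torch.randn(1, 32, 16, 32, 32)  # skip features
g = torch.randn(1, 64, 16, 32, 32)  # gating features at the same resolution
out = AttentionGate3D(skip_ch=32, gate_ch=64, inter_ch=16)(x, g)
print(out.shape)  # torch.Size([1, 32, 16, 32, 32])
```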


2020 ◽  
Author(s):  
Jinseok Lee

BACKGROUND The coronavirus disease (COVID-19) has explosively spread worldwide since the beginning of 2020. According to a multinational consensus statement from the Fleischner Society, computed tomography (CT) can be used as a relevant screening tool owing to its higher sensitivity for detecting early pneumonic changes. However, physicians are extremely busy fighting COVID-19 in this era of worldwide crisis. Thus, it is crucial to accelerate the development of an artificial intelligence (AI) diagnostic tool to support physicians. OBJECTIVE We aimed to quickly develop an AI technique to diagnose COVID-19 pneumonia and differentiate it from non-COVID pneumonia and non-pneumonia diseases on CT. METHODS A simple 2D deep learning framework, named fast-track COVID-19 classification network (FCONet), was developed to diagnose COVID-19 pneumonia based on a single chest CT image. FCONet was developed by transfer learning, using one of four state-of-the-art pre-trained deep learning models (VGG16, ResNet50, InceptionV3, or Xception) as a backbone. For training and testing of FCONet, we collected 3,993 chest CT images of patients with COVID-19 pneumonia, other pneumonia, and non-pneumonia diseases from Wonkwang University Hospital, Chonnam National University Hospital, and the Italian Society of Medical and Interventional Radiology public database. These CT images were split into a training and a testing set at a ratio of 8:2. On the test dataset, the diagnostic performance for COVID-19 pneumonia was compared among the four pre-trained FCONet models. In addition, we tested the FCONet models on an additional external testing dataset extracted from the embedded low-quality chest CT images of COVID-19 pneumonia in recently published papers. RESULTS Of the four pre-trained models of FCONet, ResNet50 showed excellent diagnostic performance (sensitivity 99.58%, specificity 100%, and accuracy 99.87%) and outperformed the other three pre-trained models on the testing dataset. On the additional external test dataset of low-quality CT images, the detection accuracy of the ResNet50 model was the highest (96.97%), followed by Xception, InceptionV3, and VGG16 (90.71%, 89.38%, and 87.12%, respectively). CONCLUSIONS FCONet, a simple 2D deep learning framework based on a single chest CT image, provides excellent diagnostic performance in detecting COVID-19 pneumonia. Based on our testing dataset, the ResNet50-based FCONet might be the best model, as it outperformed the other FCONet models based on VGG16, Xception, and InceptionV3.
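A hedged sketch of the transfer-learning setup described above: a pre-trained ResNet50 backbone with a new classification head for the three classes (COVID-19 pneumonia, other pneumonia, non-pneumonia). The input size, head layout, and training settings are assumptions for illustration, not the published FCONet configuration:

```python
import tensorflow as tf

# Pre-trained ImageNet backbone without its original classification head.
base = tf.keras.applications.ResNet50(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # freeze backbone features for the initial phase

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(3, activation="softmax"),  # 3 diagnostic classes
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```

A common refinement after the head converges is to unfreeze the top backbone layers and fine-tune at a lower learning rate; whether FCONet does this is not stated in the abstract.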


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Jared Hamwood ◽  
Beat Schmutz ◽  
Michael J. Collins ◽  
Mark C. Allenby ◽  
David Alonso-Caneiro

Abstract This paper proposes a fully automatic method to segment the inner boundary of the bony orbit in two different image modalities: magnetic resonance imaging (MRI) and computed tomography (CT). The method, based on a deep learning architecture, uses two fully convolutional neural networks in series followed by a graph-search method to generate a boundary for the orbit. When compared to human performance for segmentation of both CT and MRI data, the proposed method achieves high Dice coefficients on both orbit and background, with scores of 0.813 and 0.975 in CT images and 0.930 and 0.995 in MRI images, showing a high degree of agreement with a manual segmentation by a human expert. Given the volumetric characteristics of these imaging modalities and the complexity and time-consuming nature of the segmentation of the orbital region in the human skull, it is often impractical to manually segment these images. Thus, the proposed method provides a valid clinical and research tool that performs similarly to the human observer.
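To make the graph-search step concrete, here is a minimal dynamic-programming sketch of one common formulation: find the minimum-cost left-to-right connected path through a per-pixel boundary cost map (e.g., 1 minus the CNN's boundary probability). This illustrates the idea only; the paper's actual graph construction may differ, and all names are ours:

```python
import numpy as np

def min_cost_boundary(cost: np.ndarray) -> np.ndarray:
    """Return one row index per column tracing the cheapest connected path."""
    rows, cols = cost.shape
    acc = cost.copy()
    # Forward pass: accumulate the cheapest cost reaching each pixel,
    # allowing moves to the same, upper, or lower neighbouring row.
    for c in range(1, cols):
        for r in range(rows):
            lo, hi = max(0, r - 1), min(rows, r + 2)
            acc[r, c] += acc[lo:hi, c - 1].min()
    # Backtrack from the cheapest endpoint in the last column.
    path = np.empty(cols, dtype=int)
    path[-1] = int(acc[:, -1].argmin())
    for c in range(cols - 2, -1, -1):
        r = path[c + 1]
        lo, hi = max(0, r - 1), min(rows, r + 2)
        path[c] = lo + int(acc[lo:hi, c].argmin())
    return path

cost = np.random.rand(50, 80)
cost[25, :] = 0.0                    # plant an obvious boundary
print(min_cost_boundary(cost)[:10])  # path should settle on row 25
```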


Energies ◽  
2021 ◽  
Vol 14 (15) ◽  
pp. 4595
Author(s):  
Parisa Asadi ◽  
Lauren E. Beckingham

X-ray CT imaging provides a 3D view of a sample and is a powerful tool for investigating the internal features of porous rock. Reliable phase segmentation in these images is highly necessary but, like any other digital rock imaging technique, is time-consuming, labor-intensive, and subjective. Combining 3D X-ray CT imaging with machine learning methods that can simultaneously consider several extracted features in addition to color attenuation is a promising and powerful method for reliable phase segmentation. Machine learning-based phase segmentation of X-ray CT images enables faster data collection and interpretation than traditional methods. This study investigates the performance of several filtering techniques with three machine learning methods and a deep learning method to assess the potential for reliable feature extraction and pixel-level phase segmentation of X-ray CT images. Features were first extracted from images using well-known filters and from the second convolutional layer of the pre-trained VGG16 architecture. Then, K-means clustering, Random Forest, and Feed Forward Artificial Neural Network methods, as well as the modified U-Net model, were applied to the extracted input features. The models' performances were then compared and contrasted to determine the influence of the machine learning method and input features on reliable phase segmentation. The results showed that considering more feature dimensions yields promising results, with all classification algorithms achieving high accuracy, ranging from 0.87 to 0.94. Feature-based Random Forest demonstrated the best performance among the machine learning models, with an accuracy of 0.88 for Mancos and 0.94 for Marcellus. The U-Net model with a linear combination of focal and Dice loss also performed well, with an accuracy of 0.91 and 0.93 for Mancos and Marcellus, respectively. In general, considering more features provided promising and reliable segmentation results that are valuable for analyzing the composition of dense samples, such as shales, which are significant unconventional reservoirs in oil recovery.
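A hedged sketch of the feature-based pixel classification described above: build a small bank of well-known filters and train a Random Forest on labelled pixels. The particular filters, their parameters, and the toy data are illustrative assumptions, not the study's exact feature set:

```python
import numpy as np
from skimage import filters
from sklearn.ensemble import RandomForestClassifier

def filter_features(img: np.ndarray) -> np.ndarray:
    """Stack per-pixel features: raw intensity, blurs, edges, curvature."""
    feats = [
        img,
        filters.gaussian(img, sigma=1),   # fine-scale smoothing
        filters.gaussian(img, sigma=4),   # coarse-scale smoothing
        filters.sobel(img),               # edge strength
        filters.laplace(img),             # second-derivative response
    ]
    return np.stack([f.ravel() for f in feats], axis=1)

rng = np.random.default_rng(1)
img = rng.random((128, 128))               # stand-in for a CT slice
labels = (img > 0.5).astype(int).ravel()   # stand-in phase labels

X = filter_features(img)
clf = RandomForestClassifier(n_estimators=100, n_jobs=-1).fit(X, labels)
pred = clf.predict(X).reshape(img.shape)   # per-pixel phase map
print((pred.ravel() == labels).mean())
```

The same pipeline extends naturally to deep features: the per-pixel rows of X can be augmented with activation maps taken from an early VGG16 convolutional layer, as the study does.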


Sensors ◽  
2021 ◽  
Vol 21 (6) ◽  
pp. 1952
Author(s):  
May Phu Paing ◽  
Supan Tungjitkusolmun ◽  
Toan Huy Bui ◽  
Sarinporn Visitsattapongse ◽  
Chuchart Pintavirooj

Automated segmentation methods are critical for early detection, prompt actions, and immediate treatments in reducing disability and death risks of brain infarction. This paper aims to develop a fully automated method to segment infarct lesions from T1-weighted brain scans. As a key novelty, the proposed method combines variational mode decomposition and deep learning-based segmentation to take advantage of both methods and provide better results. There are three main technical contributions in this paper. First, variational mode decomposition is applied as a pre-processing step to discriminate the infarct lesions from unwanted non-infarct tissues. Second, an overlapping-patch strategy is proposed to reduce the workload of the deep-learning-based segmentation task. Finally, a three-dimensional U-Net model is developed to perform patch-wise segmentation of infarct lesions. A total of 239 brain scans from a public dataset are utilized to develop and evaluate the proposed method. Empirical results reveal that the proposed automated segmentation can provide promising performance, with an average Dice similarity coefficient (DSC) of 0.6684, intersection over union (IoU) of 0.5022, and average symmetric surface distance (ASSD) of 0.3932.
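The overlapping-patch strategy can be sketched as follows: tile the volume into fixed-size patches with a stride smaller than the patch size, run patch-wise inference, and average predictions where patches overlap. Patch size, stride, and the stand-in `model` below are assumptions for illustration, not the paper's settings:

```python
import numpy as np

def patchwise_predict(vol, model, patch=(64, 64, 64), stride=(32, 32, 32)):
    """Tile a 3D volume, predict per patch, average overlapping outputs."""
    out = np.zeros(vol.shape, dtype=float)
    weight = np.zeros(vol.shape, dtype=float)
    zs, ys, xs = [range(0, max(s - p, 0) + 1, st)
                  for s, p, st in zip(vol.shape, patch, stride)]
    for z in zs:
        for y in ys:
            for x in xs:
                sl = (slice(z, z + patch[0]),
                      slice(y, y + patch[1]),
                      slice(x, x + patch[2]))
                out[sl] += model(vol[sl])  # per-patch probability map
                weight[sl] += 1.0
    return out / np.maximum(weight, 1.0)   # average overlapping predictions

# Stand-in "model": a simple threshold; replace with the 3D U-Net.
vol = np.random.rand(128, 128, 128)
prob = patchwise_predict(vol, model=lambda p: (p > 0.5).astype(float))
print(prob.shape, prob.min(), prob.max())
```

Averaging overlapping predictions smooths seams at patch borders, which is the usual motivation for choosing a stride below the patch size.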

