Deep learning for automatic segmentation of thigh and leg muscles

Author(s):  
Abramo Agosti ◽  
Enea Shaqiri ◽  
Matteo Paoletti ◽  
Francesca Solazzo ◽  
Niels Bergsland ◽  
...  

Abstract Objective In this study we address the automatic segmentation of selected thigh and leg muscles through a supervised deep learning approach. Material and methods The application of quantitative imaging in neuromuscular diseases requires regions of interest (ROIs) drawn on muscles in order to extract quantitative parameters. Manual drawing of ROIs is still considered the gold standard in clinical studies, with no clear and universally accepted standardized segmentation procedure. Several automatic methods, based mainly on machine learning and deep learning algorithms, have recently been proposed to discriminate between skeletal muscle, bone, and subcutaneous and intermuscular adipose tissue. We developed a supervised deep learning approach based on a unified framework for ROI segmentation. Results The proposed network generates segmentation maps with high accuracy, with Dice scores ranging from 0.89 to 0.95 with respect to manually segmented "ground truth" labelled images, and shows high average performance in both mild and severe cases of disease involvement (i.e. extent of fatty replacement). Discussion The presented results are promising and potentially translatable to different skeletal muscle groups and to other MRI sequences with different contrast and resolution.
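The Dice score used above to compare an automatic mask against a manually drawn ROI can be sketched as follows; this is an illustrative numpy implementation with toy masks, not code from the paper.

```python
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary segmentation masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return float(2.0 * intersection / denom)

# Toy 4x4 masks standing in for one muscle ROI on a single slice
truth = np.array([[0, 1, 1, 0],
                  [0, 1, 1, 0],
                  [0, 1, 1, 0],
                  [0, 0, 0, 0]], dtype=bool)
pred = np.array([[0, 1, 1, 0],
                 [0, 1, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 0]], dtype=bool)
print(round(dice_score(pred, truth), 3))  # → 0.909
```

A per-muscle average of this quantity over the test set is what the reported 0.89–0.95 range refers to.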

2019 ◽  
Vol 1 (Supplement_1) ◽  
pp. i20-i21
Author(s):  
Min Zhang ◽  
Geoffrey Young ◽  
Huai Chen ◽  
Lei Qin ◽  
Xinhua Cao ◽  
...  

Abstract BACKGROUND AND OBJECTIVE: Brain metastases account for one-fourth of all cancer metastases seen in clinics. Magnetic resonance imaging (MRI) is widely used for detecting brain metastases, and accurate detection is critical for designing radiotherapy, monitoring progression or response to therapy, and assessing prognosis. However, finding metastases on brain MRI is very challenging, as many metastases are small and manifest as objects of weak contrast on the images. In this work we present a deep learning approach integrated with a classification scheme to detect cancer metastases to the brain on MRI. MATERIALS AND METHODS: We retrospectively collected 101 patients with metastases, amounting to 1535 metastases on 10192 image slices across a total of 336 scans from our PACS, and manually marked the lesions on T1-weighted contrast-enhanced MRI as the ground truth. We then randomly separated the cases into training, validation, and test sets for developing and optimizing the deep learning neural network. We designed a 2-step computer-aided detection (CAD) pipeline, first applying a fast region-based convolutional neural network (R-CNN) to sequentially process each slice of an axial brain MRI and find abnormal hyper-intensities that may correspond to brain metastases and, second, applying a random under-sampling boosting (RUSBoost) classifier to reduce the false positive detections. RESULTS: The computational pipeline was tested on real brain images. A sensitivity of 97.28% and a false positive rate of 36.25 per scan were achieved using the proposed method. CONCLUSION: Our results demonstrate that the deep learning-based method can detect metastases in very challenging cases and can serve as a CAD tool to help radiologists interpret brain MRIs in a time-constrained environment.
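The two figures of merit reported above are straightforward to compute from lesion-level counts. A minimal sketch follows; the counts are hypothetical values chosen only to match the scale of the study (1535 lesions, 336 scans), not numbers taken from the paper.

```python
def detection_summary(true_positives: int, false_negatives: int,
                      false_positives: int, n_scans: int):
    """Per-lesion sensitivity and false positives per scan for a CAD pipeline."""
    sensitivity = true_positives / (true_positives + false_negatives)
    fp_per_scan = false_positives / n_scans
    return sensitivity, fp_per_scan

# Hypothetical counts on the order of the reported test set
sens, fp = detection_summary(true_positives=1493, false_negatives=42,
                             false_positives=12180, n_scans=336)
```

With these illustrative counts, sensitivity comes out near 97% and false positives near 36 per scan, matching the order of magnitude the abstract reports.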


2021 ◽  
Vol 10 (16) ◽  
pp. 3589
Author(s):  
Yuhei Iwasa ◽  
Takuji Iwashita ◽  
Yuji Takeuchi ◽  
Hironao Ichikawa ◽  
Naoki Mita ◽  
...  

Background: Contrast-enhanced endoscopic ultrasound (CE-EUS) is useful for the differentiation of pancreatic tumors. Using deep learning for the segmentation and classification of pancreatic tumors might further improve the diagnostic capability of CE-EUS. Aims: The aim of this study was to evaluate the capability of deep learning for the automatic segmentation of pancreatic tumors on CE-EUS video images and possible factors affecting the automatic segmentation. Methods: This retrospective study included 100 patients who underwent CE-EUS for pancreatic tumors. The CE-EUS video images were converted from the originals into 90-second segments at six frames per second. Manual segmentation of pancreatic tumors from B-mode images was performed as the ground truth. Automatic segmentation was performed using U-Net with 100 epochs and evaluated with 4-fold cross-validation. The degree of respiratory movement (RM) and tumor boundary (TB) clarity were each graded on a three-degree scale per patient and evaluated as possible factors affecting segmentation. The concordance rate was calculated as the intersection over union (IoU). Results: The median IoU over all cases was 0.77. The median IoUs in TB-1 (boundary clear all around), TB-2, and TB-3 (unclear for more than half the boundary) were 0.80, 0.76, and 0.69, respectively. The IoU for TB-1 was significantly higher than that for TB-3 (p < 0.01). However, there was no significant difference between the degrees of RM. Conclusion: Automatic segmentation of pancreatic tumors using U-Net on CE-EUS video images showed a decent concordance rate. The concordance rate was lowered by an unclear TB but was not affected by RM.
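The IoU concordance rate, and its median per tumor-boundary grade, can be sketched as below. This is an illustrative numpy version with hypothetical per-patient IoUs chosen to mirror the reported group medians; it is not the study's code.

```python
import numpy as np

def iou(pred: np.ndarray, truth: np.ndarray) -> float:
    """Intersection over union of two binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    union = np.logical_or(pred, truth).sum()
    if union == 0:
        return 1.0  # both masks empty
    return float(np.logical_and(pred, truth).sum() / union)

def median_iou_by_group(ious, groups):
    """Median IoU per category, e.g. tumor-boundary degree TB-1..TB-3."""
    return {g: float(np.median([v for v, gg in zip(ious, groups) if gg == g]))
            for g in set(groups)}

# Hypothetical per-patient IoUs tagged with a boundary-clarity grade
ious = [0.81, 0.79, 0.77, 0.75, 0.70, 0.68]
groups = ["TB-1", "TB-1", "TB-2", "TB-2", "TB-3", "TB-3"]
medians = median_iou_by_group(ious, groups)
```

Grouping the per-patient IoUs this way is what allows the TB-1 vs. TB-3 comparison in the Results.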


2019 ◽  
Vol 13 (1) ◽  
pp. 120-126
Author(s):  
K. Bhavanishankar ◽  
M. V. Sudhamani

Objective: Lung cancer is proving to be one of the deadliest diseases haunting mankind in recent years. Timely detection of lung nodules would surely enhance the survival rate. This paper focuses on the classification of candidate lung nodules into nodules/non-nodules in a CT scan of the patient, using a deep learning approach based on an autoencoder. Investigation/Methodology: Candidate lung nodule patches obtained from lung segmentation are taken as input to the autoencoder model. Ground truth data from the LIDC repository is prepared and submitted to the autoencoder training module. After a series of experiments, a 4-layer stacked autoencoder was chosen. The model is trained on over 600 LIDC cases and tested on the remaining data sets. Results: The classification results are evaluated with respect to performance measures such as sensitivity, specificity, and accuracy, and compared with other related works; the proposed approach was found to be better by 6.2% in accuracy. Conclusion: In this paper, a deep learning approach based on an autoencoder has been used for the classification of candidate lung nodules into nodules/non-nodules. The performance of the proposed approach was evaluated with respect to sensitivity, specificity, and accuracy, with obtained values of 82.6%, 91.3%, and 87.0%, respectively. Compared with existing related works, this is an improvement of 6.2% in accuracy.
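The three performance measures quoted above derive from a binary confusion matrix. A minimal sketch, with hypothetical counts chosen so that the rates match the reported 82.6% / 91.3% / 87.0% on a balanced toy test set:

```python
def classification_metrics(tp: int, tn: int, fp: int, fn: int):
    """Sensitivity, specificity, and accuracy from a binary confusion matrix
    (positives = nodules, negatives = non-nodules)."""
    sensitivity = tp / (tp + fn)           # fraction of true nodules caught
    specificity = tn / (tn + fp)           # fraction of non-nodules rejected
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    return sensitivity, specificity, accuracy

# Hypothetical balanced test set: 1000 nodule and 1000 non-nodule patches
sens, spec, acc = classification_metrics(tp=826, fn=174, tn=913, fp=87)
```

Note that on a balanced set accuracy is simply the mean of sensitivity and specificity, which is why the illustrative counts land at (0.826 + 0.913) / 2 ≈ 0.87.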


2021 ◽  
Vol 12 (1) ◽  
Author(s):  
Fan Yang ◽  
Xin Weng ◽  
Yuehong Miao ◽  
Yuhui Wu ◽  
Hong Xie ◽  
...  

Abstract Background Segmentation of the ulna and radius is a crucial step for the measurement of bone mineral density (BMD) in dual-energy X-ray imaging of patients suspected of having osteoporosis. Purpose This work aimed to propose a deep learning approach for accurate automatic segmentation of the ulna and radius in dual-energy X-ray imaging. Methods and materials We developed a deep learning model with residual blocks (ResBlocks) for segmentation of the ulna and radius. Three hundred and sixty subjects were included in the study, and five-fold cross-validation was used to evaluate the performance of the proposed network. The Dice coefficient and Jaccard index were calculated to evaluate the segmentation results. Results The proposed network model achieved better segmentation performance than previous deep learning-based methods for automatic segmentation of the ulna and radius. The evaluation results suggested that the average Dice coefficients of the ulna and radius were 0.9835 and 0.9874, with average Jaccard indexes of 0.9680 and 0.9751, respectively. Conclusion The deep learning-based method developed in this study improved the segmentation performance of the ulna and radius in dual-energy X-ray imaging.
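The two overlap metrics reported here are tightly coupled: on any single image pair they satisfy J = D / (2 − D). A small numpy sketch (illustrative, not the study's code) computing both and checking the identity:

```python
import numpy as np

def dice_and_jaccard(pred: np.ndarray, truth: np.ndarray):
    """Dice coefficient and Jaccard index for two binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    total = pred.sum() + truth.sum()
    dice = float(2.0 * inter / total) if total else 1.0
    jaccard = float(inter / union) if union else 1.0
    return dice, jaccard

def jaccard_from_dice(d: float) -> float:
    """Per-image identity relating the two overlap metrics: J = D / (2 - D)."""
    return d / (2.0 - d)

# Toy masks standing in for a bone segmentation on one image
truth = np.array([[1, 1, 0], [1, 0, 0]])
pred = np.array([[1, 0, 0], [1, 1, 0]])
d, j = dice_and_jaccard(pred, truth)
```

The identity holds exactly per image; averages of Dice and Jaccard over a test set (as reported in the abstract) are related only approximately, since the mean of a ratio is not the ratio of means.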


2019 ◽  
Author(s):  
Nikolaos-Kosmas Chlis ◽  
Angelos Karlas ◽  
Nikolina-Alexia Fasoula ◽  
Michael Kallmayer ◽  
Hans-Henning Eckstein ◽  
...  

Abstract Multispectral Optoacoustic Tomography (MSOT) resolves oxy- (HbO2) and deoxy-hemoglobin (Hb) to perform vascular imaging. MSOT suffers from gradual signal attenuation with depth due to light-tissue interactions, an effect that hinders precise manual segmentation of vessels. Furthermore, vascular assessment requires functional tests, which last several minutes and result in the recording of thousands of images. Here, we introduce a deep learning approach with a sparse UNET (S-UNET) for automatic vascular segmentation in MSOT images, avoiding rigorous and time-consuming manual segmentation. We evaluated the S-UNET on a test set of 33 images, achieving a median Dice score of 0.88. Beyond its high segmentation performance, our method based its decision on two wavelengths with physical meaning for the task at hand: 850 nm (peak absorption of oxy-hemoglobin) and 810 nm (isosbestic point of oxy- and deoxy-hemoglobin). Thus, our approach achieves precise data-driven vascular segmentation for automated vascular assessment and may boost MSOT further towards its clinical translation.
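The sparsity-driven wavelength selection described above can be sketched as ranking input channels by the magnitude of their learned (sparsified) input weights and keeping the top few. The weights and wavelength set below are entirely hypothetical, chosen only to illustrate the mechanism by which 810 nm and 850 nm would emerge as the informative channels.

```python
import numpy as np

def select_wavelengths(weights: np.ndarray, wavelengths: list, k: int = 2):
    """Rank input wavelengths by the L2 norm of their first-layer weight
    vectors (driven toward zero by a sparsity penalty) and keep the k
    most informative channels."""
    norms = np.linalg.norm(weights, axis=1)          # one norm per wavelength
    top = np.argsort(norms)[::-1][:k]                # indices of largest norms
    return [wavelengths[i] for i in sorted(top)]

# Hypothetical sparsified weights for five MSOT wavelengths;
# the 810 nm and 850 nm rows carry nearly all the weight mass
wavelengths = [700, 760, 810, 850, 900]
weights = np.array([[0.01, 0.00],
                    [0.02, 0.01],
                    [0.90, 0.40],
                    [0.80, 0.55],
                    [0.03, 0.02]])
chosen = select_wavelengths(weights, wavelengths)    # → [810, 850]
```

In the paper the sparsity is imposed during training; the post-hoc ranking here is only a simplified stand-in for that mechanism.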


2020 ◽  
Vol 34 (04) ◽  
pp. 6454-6461 ◽  
Author(s):  
Ming-Kun Xie ◽  
Sheng-Jun Huang

Partial multi-label learning (PML) deals with problems where each instance is assigned a candidate label set containing multiple relevant labels and some noisy labels. Recent studies usually solve PML problems with a disambiguation strategy, which recovers ground-truth labels from the candidate label set by simply assuming that the noisy labels are generated randomly. In real applications, however, noisy labels are usually caused by ambiguous content of the example. Based on this observation, we propose a partial multi-label learning approach that simultaneously recovers the ground-truth information and identifies the noisy labels. The two objectives are formalized in a unified framework with trace norm and ℓ1 norm regularizers. Under the supervision of the observed noise-corrupted label matrix, the multi-label classifier and noisy-label identifier are jointly optimized by incorporating label correlation exploitation and a feature-induced noise model. Extensive experiments on synthetic as well as real-world data sets validate the effectiveness of the proposed approach.
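Optimizing an objective with trace norm and ℓ1 regularizers typically proceeds via the two proximal operators below: singular-value thresholding encourages a low-rank (label-correlated) ground-truth matrix, while soft thresholding isolates a sparse noisy-label component. This is a generic numpy sketch of those operators, not the paper's actual solver.

```python
import numpy as np

def soft_threshold(M: np.ndarray, tau: float) -> np.ndarray:
    """Proximal operator of the l1 norm: shrinks entries toward zero,
    yielding the sparse noisy-label component."""
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def svd_threshold(M: np.ndarray, tau: float) -> np.ndarray:
    """Proximal operator of the trace norm: soft-thresholds the singular
    values, yielding a low-rank ground-truth label estimate."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

# Soft thresholding kills small (plausibly noisy) entries
sparse_part = soft_threshold(np.array([3.0, -0.5, 1.0]), 1.0)

# Singular-value thresholding reduces rank: diag(3, 1) -> rank 1 at tau = 2
low_rank = svd_threshold(np.diag([3.0, 1.0]), 2.0)
```

Alternating these two steps against the observed noise-corrupted label matrix is the standard pattern for jointly recovering a low-rank term and a sparse term.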

