Semi-Supervised Segmentation of Renal Pathology: An Alternative to Manual Segmentation and Input to Deep Learning Training

Author(s):  
Adrienne Kline ◽  
Hyun Jae Chung ◽  
Waleed Rahmani ◽  
Justin Chun
2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Andreas M. Weng ◽  
Julius F. Heidenreich ◽  
Corona Metz ◽  
Simon Veldhoen ◽  
Thorsten A. Bley ◽  
...  

Abstract Background Functional lung MRI techniques are usually associated with time-consuming post-processing, where manual lung segmentation represents the most cumbersome part. The aim of this study was to investigate whether deep learning-based segmentation of lung images scanned with a fast UTE sequence exploiting the stack-of-spirals trajectory can provide sufficient accuracy for the calculation of functional parameters. Methods In this study, lung images were acquired in 20 patients suffering from cystic fibrosis (CF) and 33 healthy volunteers, by a fast UTE sequence with a stack-of-spirals trajectory and a minimum echo time of 0.05 ms. A convolutional neural network was then trained for semantic lung segmentation using 17,713 2D coronal slices, each paired with a label obtained from manual segmentation. Subsequently, the network was applied to 4920 independent 2D test images and results were compared to a manual segmentation using the Sørensen–Dice similarity coefficient (DSC) and the Hausdorff distance (HD). Obtained lung volumes and fractional ventilation values calculated from both segmentations were compared using Pearson's correlation coefficient and Bland–Altman analysis. To investigate generalizability to patients outside the CF collective, in particular to those exhibiting larger consolidations inside the lung, the network was additionally applied to UTE images from four patients with pneumonia and one with lung cancer. Results The overall DSC for lung tissue was 0.967 ± 0.076 (mean ± standard deviation) and the HD was 4.1 ± 4.4 mm. Lung volumes derived from manual and deep learning-based segmentations, as well as values for fractional ventilation, exhibited a high overall correlation (Pearson's correlation coefficient = 0.99 and 1.00). For the additional cohort with unseen pathologies/consolidations, the mean DSC was 0.930 ± 0.083, the HD was 12.9 ± 16.2 mm, and the mean difference in lung volume was 0.032 ± 0.048 L.
Conclusions Deep learning-based image segmentation in stack-of-spirals based lung MRI allows for accurate estimation of lung volumes and fractional ventilation values and promises to replace the time-consuming step of manual image segmentation in the future.
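The agreement metrics used throughout these abstracts, the Sørensen–Dice similarity coefficient and the Hausdorff distance, can be computed directly from a pair of binary masks. The following is a minimal brute-force numpy sketch for illustration, not the authors' implementation:

```python
import numpy as np

def dice_coefficient(a, b):
    """Sørensen–Dice similarity between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    total = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / total if total else 1.0

def hausdorff_distance(a, b):
    """Symmetric Hausdorff distance (in pixels) between the
    foreground points of two binary masks."""
    pa, pb = np.argwhere(a), np.argwhere(b)
    d = np.linalg.norm(pa[:, None, :] - pb[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())
```

For example, two 4×4 squares offset by one row overlap in 12 of their 16 pixels each, giving a DSC of 2·12/32 = 0.75 and an HD of 1 pixel.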


Author(s):  
R. Murugan

Segmentation of retinal structures is recognized as a key component in the analysis of both ophthalmological and cardiovascular disease. Segmentation of the components of retinal images, namely the vessels, optic disc, and macula, contributes to the diagnostic outcome. However, manual segmentation of these structures is tedious and time-consuming work, and it also requires professional skill. This chapter proposes a supervised method to segment blood vessels using deep learning. More specifically, the proposed approach applies a fully convolutional network, which is commonly used to perform semantic segmentation tasks, together with transfer learning. The convolutional neural network has become a powerful tool for several computer vision tasks. Recently, medical image analysis groups around the world have been rapidly entering this field, applying convolutional neural networks and other deep learning methodologies to a wide variety of applications, with remarkable results emerging constantly.


2021 ◽  
Vol 11 ◽  
Author(s):  
Xi Liu ◽  
Kai-Wen Li ◽  
Ruijie Yang ◽  
Li-Sheng Geng

Lung cancer is the leading cause of cancer-related mortality for males and females. Radiation therapy (RT) is one of the primary treatment modalities for lung cancer. While delivering the prescribed dose to tumor targets, it is essential to spare the tissues near the targets—the so-called organs-at-risk (OARs). Optimal RT planning benefits from accurate segmentation of the gross tumor volume and the surrounding OARs. Manual segmentation is a time-consuming and tedious task for radiation oncologists, so it is crucial to develop automatic image segmentation to relieve them of this contouring work. Currently, the atlas-based automatic segmentation technique is commonly used in clinical routine. However, this technique depends heavily on the similarity between the atlas and the image being segmented. With significant advances made in computer vision, deep learning, as a branch of artificial intelligence, is attracting increasing attention in automatic medical image segmentation. In this article, we reviewed deep learning-based automatic segmentation techniques related to lung cancer and compared them with the atlas-based technique. At present, auto-segmentation of OARs with relatively large volumes, such as the lungs and heart, outperforms that of small-volume organs such as the esophagus. The average Dice similarity coefficients (DSCs) of the lung, heart, and liver are over 0.9, and the best DSC for the spinal cord reaches 0.9. However, the DSC of the esophagus ranges between 0.71 and 0.87, with inconsistent performance. For the gross tumor volume, the average DSC is below 0.8. Although deep learning-based automatic segmentation shows significant superiority over manual segmentation in many respects, various issues still need to be solved. We discussed the potential issues in deep learning-based automatic segmentation, including low contrast, dataset size, consensus guidelines, and network design, as well as clinical limitations and future research directions.


2019 ◽  
Author(s):  
Nikolaos-Kosmas Chlis ◽  
Angelos Karlas ◽  
Nikolina-Alexia Fasoula ◽  
Michael Kallmayer ◽  
Hans-Henning Eckstein ◽  
...  

Abstract Multispectral Optoacoustic Tomography (MSOT) resolves oxy- (HbO2) and deoxy-hemoglobin (Hb) to perform vascular imaging. MSOT suffers from gradual signal attenuation with depth due to light-tissue interactions: an effect that hinders the precise manual segmentation of vessels. Furthermore, vascular assessment requires functional tests, which last several minutes and result in recording thousands of images. Here, we introduce a deep learning approach with a sparse UNET (S-UNET) for automatic vascular segmentation in MSOT images to avoid the rigorous and time-consuming manual segmentation. We evaluated the S-UNET on a test set of 33 images, achieving a median Dice score of 0.88. Apart from high segmentation performance, our method based its decision on two wavelengths with physical meaning for the task at hand: 850 nm (peak absorption of oxy-hemoglobin) and 810 nm (isosbestic point of oxy- and deoxy-hemoglobin). Thus, our approach achieves precise data-driven vascular segmentation for automated vascular assessment and may boost MSOT further towards its clinical translation.
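The sparsity mechanism behind the S-UNET, learning per-wavelength weights under an L1 penalty so that only informative spectral channels survive, can be illustrated with a toy lasso regression. The ISTA solver below is a hedged sketch of the channel-selection idea only, not the authors' network; the data, penalty, and step-size values are made up:

```python
import numpy as np

def select_channels_l1(X, y, lam=0.05, lr=0.01, steps=3000):
    """Toy L1-penalized channel weighting via ISTA (proximal
    gradient descent): channels that do not help predict the
    target are shrunk exactly to zero."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)   # least-squares gradient
        w = w - lr * grad                   # gradient step
        w = np.sign(w) * np.maximum(np.abs(w) - lr * lam, 0.0)  # soft-threshold
    return w
```

Given synthetic "pixels" whose target depends only on two of eight channels, the recovered weight vector is nonzero essentially only at those two channels, mirroring how the S-UNET singled out 850 nm and 810 nm.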


2022 ◽  
pp. 256-273
Author(s):  
Devidas Tulshiram Kushnure ◽  
Sanjay Nilkanth Talbar

Liver segmentation is instrumental for decision making in the medical realm for the diagnosis and treatment planning of hepatic diseases. However, manual segmentation of hundreds of CT images is tedious for medical experts; it hampers segmentation accuracy and makes the result reliant on the operator's judgment. This chapter presents a deep learning-based modified multi-scale UNet++ (M2UNet++) approach for automatic liver segmentation. The multi-scale features were modified channel-wise using adaptive feature recalibration to improve the representation of the high-level semantic information of the skip pathways, improving segmentation performance with less computational overhead. The experimental results proved the model's efficacy on the publicly available 3DIRCADb dataset, which offers significant complexity and variation. The model achieved a Dice coefficient of 97.28%, an improvement of 7.64% over UNet and 2.24% over UNet++. The quantitative result analysis shows that the M2UNet++ model outperforms the state-of-the-art methods proposed for liver segmentation.
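The channel-wise "adaptive feature recalibration" described here follows the general squeeze-and-excitation pattern: pool each channel to a scalar, pass the pooled vector through a small bottleneck, and rescale the channels by the resulting gates. A minimal numpy sketch of that pattern (the weights `w1` and `w2` are illustrative placeholders, not M2UNet++ parameters):

```python
import numpy as np

def channel_recalibrate(feat, w1, w2):
    """Squeeze-and-excitation-style recalibration of a (C, H, W)
    feature map."""
    s = feat.mean(axis=(1, 2))                  # squeeze: one scalar per channel
    z = np.maximum(w1 @ s, 0.0)                 # bottleneck + ReLU
    gates = 1.0 / (1.0 + np.exp(-(w2 @ z)))     # per-channel gates in (0, 1)
    return feat * gates[:, None, None]          # rescale channels
```

Because every gate lies in (0, 1), recalibration attenuates uninformative channels while learned weights keep the useful channels' gates near 1.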


2020 ◽  
Vol 11 (1) ◽  
Author(s):  
Jie Ding ◽  
Peng Cao ◽  
Hing-Chiu Chang ◽  
Yuan Gao ◽  
Sophelia Hoi Shan Chan ◽  
...  

Abstract Background Time-efficient and accurate whole volume thigh muscle segmentation is a major challenge in moving from qualitative assessment of thigh muscle MRI to more quantitative methods. This study developed an automated whole thigh muscle segmentation method using deep learning for reproducible fat fraction quantification on fat–water decomposition MRI. Results This study was performed using a public reference database (Dataset 1, 25 scans) and a local clinical dataset (Dataset 2, 21 scans). A U-net was trained using 23 scans (16 from Dataset 1, seven from Dataset 2) to automatically segment four functional muscle groups: quadriceps femoris, sartorius, gracilis and hamstring. The segmentation accuracy was evaluated on an independent testing set (3 × 3 repeated scans in Dataset 1 and four scans in Dataset 2). The average Dice coefficients between manual and automated segmentation were > 0.85. The average percent difference (absolute) in volume was 7.57%, and the average difference (absolute) in mean fat fraction (meanFF) was 0.17%. The reproducibility in meanFF was calculated using intraclass correlation coefficients (ICCs) for the repeated scans, and automated segmentation produced overall higher ICCs than manual segmentation (0.921 vs. 0.902). A preliminary quantitative analysis was performed using two-sample t test to detect possible differences in meanFF between 14 normal and 14 abnormal (with fat infiltration) thighs in Dataset 2 using automated segmentation, and significantly higher meanFF was detected in abnormal thighs. Conclusions This automated thigh muscle segmentation exhibits excellent accuracy and higher reproducibility in fat fraction estimation compared to manual segmentation, which can be further used for quantifying fat infiltration in thigh muscles.
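The reproducibility figures quoted above (ICCs of 0.921 vs. 0.902) can be computed for any subjects-by-repeats matrix of meanFF values. Below is a numpy sketch of ICC(2,1), the two-way random-effects, absolute-agreement, single-measurement variant; the abstract does not state which ICC form was used, so that choice is an assumption:

```python
import numpy as np

def icc_2_1(Y):
    """ICC(2,1): two-way random effects, absolute agreement,
    single measurement, for an (n subjects x k repeats) matrix."""
    n, k = Y.shape
    grand = Y.mean()
    ss_rows = k * ((Y.mean(axis=1) - grand) ** 2).sum()    # between subjects
    ss_cols = n * ((Y.mean(axis=0) - grand) ** 2).sum()    # between repeats
    ss_err = ((Y - grand) ** 2).sum() - ss_rows - ss_cols  # residual
    msr = ss_rows / (n - 1)
    msc = ss_cols / (k - 1)
    mse = ss_err / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
```

Perfectly repeated measurements give an ICC of 1; a systematic offset between repeats lowers it, since absolute agreement (unlike consistency) penalizes such bias.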


2021 ◽  
Author(s):  
Luca L. Weishaupt ◽  
Jose Torres ◽  
Sophie Camilleri-Broët ◽  
Roni F. Rayes ◽  
Jonathan D. Spicer ◽  
...  

Abstract The goal of this study was (i) to use artificial intelligence to automate the traditionally labor-intensive process of manual segmentation of tumor regions in pathology slides performed by a pathologist and (ii) to validate the use of a deep learning architecture. Automation will reduce the human error involved in the manual process, increase efficiency, and result in more accurate and reproducible segmentation. This advancement will alleviate the bottleneck in clinical and research workflows caused by a lack of pathologist time. Our application is patient-specific microdosimetry and radiobiological modeling, which builds on the contoured pathology slides. A deep neural network named UNet was used to segment tumor regions in pathology core biopsies of lung tissue with adenocarcinoma stained using hematoxylin and eosin. A pathologist manually contoured the tumor regions in 56 images with binary masks for training. To overcome memory limitations, overlapping and non-overlapping patch extraction with various patch sizes, as well as image downsampling, were investigated individually. Data augmentation was used to reduce overfitting and artificially create more data for training. Using this deep learning approach, the UNet achieved an accuracy of 0.91±0.06, specificity of 0.90±0.08, sensitivity of 0.92±0.07, and precision of 0.8±0.1. The F1/Dice score was 0.85±0.07, with a segmentation time of 3.24±0.03 seconds per image, thus achieving a 370±3 times increase in efficiency over manual segmentation, which took 20 minutes per image on average. In some cases, the neural network correctly delineated the tumor's stroma from its epithelial component in regions that the pathologist had classified entirely as tumor. The UNet architecture can segment images with a level of efficiency and accuracy that makes it suitable for tumor segmentation of histopathological images in fields such as radiotherapy dosimetry, specifically in the subfield of microdosimetry.
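The overlapping and non-overlapping patch extraction investigated above amounts to sliding a window across the slide image: a stride smaller than the patch size gives overlap, while a stride equal to it gives plain tiling. A minimal sketch (patch and stride values are illustrative, not the study's settings):

```python
import numpy as np

def extract_patches(img, patch, stride):
    """Extract square patches from a 2D image, top-left to
    bottom-right; stride < patch produces overlapping patches."""
    h, w = img.shape
    patches, coords = [], []
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            patches.append(img[y:y + patch, x:x + patch])
            coords.append((y, x))
    return np.stack(patches), coords
```

On an 8×8 image, 4×4 patches with stride 2 overlap and yield 9 patches, while stride 4 tiles the image into 4; the coordinates allow predictions on the patches to be stitched back into a full-size mask.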


Manual segmentation of brain tumors for malignancy prognosis, across the massive number of MRI images produced in routine medical practice, is a hard and frustrating task. There is therefore a need for automated brain tumor image segmentation. The degree of precision required for clinical purposes is generally unknown and cannot easily be quantified, even by expert physicians. This is an interesting point that has only sparsely been addressed in the literature, yet it remains truly relevant. Additionally, automated storage of medical images is an essential need nowadays: to carry out rapid analysis and prognosis, there is an imperative need for automated image storage. Hence, this paper focuses on the development of a new algorithm called "EasyGet" for automatic data storage and retrieval using the Hadoop architecture.


2020 ◽  
Vol 39 (3) ◽  
pp. 161-167
Author(s):  
Francisco José Fumero Batista ◽  
Tinguaro Diaz-Aleman ◽  
Jose Sigut ◽  
Silvia Alayon ◽  
Rafael Arnay ◽  
...  

The first version of the Retinal IMage database for Optic Nerve Evaluation (RIM-ONE) was published in 2011. This was followed by two more, turning it into one of the most cited public retinography databases for evaluating glaucoma. Although it was initially intended to be a database with reference images for segmenting the optic disc, in recent years we have observed that its use has been more oriented toward training and testing deep learning models. The recent REFUGE challenge laid out some criteria that a set of images of these characteristics must satisfy to be used as a standard reference for validating deep learning methods that rely on the use of these data. This, combined with the certain confusion and even improper use observed in some cases of the three versions published, led us to consider revising and combining them into a new, publicly available version called RIM-ONE DL (RIM-ONE for Deep Learning). This paper describes this set of images, consisting of 313 retinographies from normal subjects and 172 retinographies from patients with glaucoma. All of these images have been assessed by two experts and include a manual segmentation of the disc and cup. It also describes an evaluation benchmark with different models of well-known convolutional neural networks.


2021 ◽  
Vol 2021 ◽  
pp. 1-7
Author(s):  
Xueling Wang ◽  
Xianmin Meng ◽  
Shu Yan

This paper aimed to study the adoption of a deep learning (DL) algorithm for the segmentation of oral lesions in cone-beam computed tomography (CBCT) images. Ninety patients with oral lesions were taken as research subjects and were grouped into blank, control, and experimental groups, whose images were processed by the manual segmentation method, a threshold segmentation algorithm, and a fully convolutional neural network (FCNN) DL algorithm, respectively. The effects of the different methods on oral lesion CBCT image recognition and segmentation were then analyzed. The results showed no substantial difference in the number of patients with different types of oral lesions among the three groups (P > 0.05). The accuracy of lesion segmentation in the experimental group was as high as 98.3%, while those of the blank and control groups were 78.4% and 62.1%, respectively. The accuracy of segmentation of CBCT images in the blank and control groups was considerably inferior to that of the experimental group (P < 0.05). The segmentation effect on the lesion and the lesion model in the experimental and control groups was evidently superior to that of the blank group (P < 0.05). In short, the image segmentation accuracy of the FCNN DL method was better than that of the traditional manual segmentation and threshold segmentation algorithms. Applying the DL segmentation algorithm to CBCT images of oral lesions can accurately identify and segment the lesions.

