Use of U-Net Convolutional Neural Networks for Automated Segmentation of Fecal Material for Objective Evaluation of Bowel Preparation Quality in Colonoscopy (Preprint)

2021
Author(s):  
Yen-Po Wang ◽  
Ying-Chun Jheng ◽  
Kuang-Yi Sung ◽  
Hung-En Lin ◽  
I-Fang Hsin ◽  
...  

BACKGROUND Adequate bowel cleansing is important for a complete examination of the colon mucosa during colonoscopy. Current bowel cleansing evaluation scales are subjective, with wide variation in consistency among physicians and a low reporting rate. Artificial intelligence (AI) has been increasingly used in endoscopy. OBJECTIVE We aimed to use machine learning to develop a fully automatic segmentation method that marks the fecal residue-coated mucosa for objective evaluation of the adequacy of colon preparation. METHODS Colonoscopy videos were retrieved from a video data cohort and converted to qualified images, which were randomly divided into training, validation, and verification datasets. The fecal residue was manually segmented by skilled technicians. A deep learning model based on the U-Net convolutional network architecture was developed to perform automatic segmentation. The performance of the automatic segmentation was evaluated by its overlap with the manual segmentation. RESULTS A total of 10,118 qualified images from 119 videos were captured and labelled manually. The model averaged 0.3634 seconds to segment one image automatically. The model's output overlapped the manual segmentation by 94.7% ± 0.67%, with an intersection over union (IoU) of 0.607 ± 0.17. The area predicted by our AI model correlated well with the area measured manually (r = 0.915, p < 0.001). The AI system can be applied in real time to qualitatively and quantitatively display the mucosa covered by fecal residue. CONCLUSIONS We used machine learning to establish a fully automatic segmentation method to rapidly and accurately mark the fecal residue-coated mucosa for objective evaluation of colon preparation.
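As a rough illustration of the overlap metrics reported in this abstract, the sketch below compares a predicted binary mask with a manual one and computes the overlap fraction and intersection over union; the NumPy implementation, function name, and toy masks are assumptions for illustration, not the authors' code.

```python
import numpy as np

def overlap_metrics(pred_mask: np.ndarray, manual_mask: np.ndarray):
    """Compare a predicted binary mask with a manually labelled one.

    Returns the fraction of the manually marked area covered by the
    prediction and the intersection-over-union (IoU) score; both masks
    must be boolean-like arrays of equal shape.
    """
    pred = pred_mask.astype(bool)
    manual = manual_mask.astype(bool)
    intersection = np.logical_and(pred, manual).sum()
    union = np.logical_or(pred, manual).sum()
    overlap = intersection / manual.sum() if manual.sum() else 0.0
    iou = intersection / union if union else 0.0
    return overlap, iou

# Toy 4x4 masks (hypothetical data, not taken from the study).
pred = np.array([[0, 1, 1, 0],
                 [0, 1, 1, 0],
                 [0, 0, 1, 0],
                 [0, 0, 0, 0]])
manual = np.array([[0, 1, 1, 0],
                   [0, 1, 1, 1],
                   [0, 0, 0, 0],
                   [0, 0, 0, 0]])
print(overlap_metrics(pred, manual))  # -> (0.8, 0.666...)
```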


2019
Vol 34 (5)
pp. 1437-1451
Author(s):  
Amy McGovern ◽  
Christopher D. Karstens ◽  
Travis Smith ◽  
Ryan Lagerquist

Abstract Real-time prediction of storm longevity is a critical challenge for National Weather Service (NWS) forecasters. These predictions can guide forecasters when they issue warnings and implicitly inform them about the potential severity of a storm. This paper presents a machine-learning (ML) system that was used for real-time prediction of storm longevity in the Probabilistic Hazard Information (PHI) tool, making it a Research-to-Operations (R2O) project. Currently, PHI provides forecasters with real-time storm variables and severity predictions from the ProbSevere system, but these predictions do not include storm longevity. We specifically designed our system to be tested in PHI during the 2016 and 2017 Hazardous Weather Testbed (HWT) experiments, which provide a quasi-operational naturalistic environment. We considered three ML methods that prior work has shown to be strong predictors for many weather prediction tasks: elastic nets, random forests, and gradient-boosted regression trees. We present experiments comparing the three ML methods with different types of input data, discuss trade-offs between forecast quality and requirements for real-time deployment, and present both subjective (human-based) and objective evaluations of real-time deployment in the HWT. Results demonstrate that the ML system has lower error than human forecasters, which suggests that it could be used to guide future storm-based warnings, enabling forecasters to focus on other aspects of the warning system.
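A minimal sketch of the kind of model comparison described above, using scikit-learn implementations of the three regressor families; the synthetic feature matrix, the target (storm lifetime in minutes), and all hyperparameters are placeholders rather than the study's actual data or settings.

```python
import numpy as np
from sklearn.linear_model import ElasticNet
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

# Hypothetical training data: rows are storms, columns are predictor variables
# (e.g. radar and near-storm environment statistics); y is storm lifetime in minutes.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))
y = 30 + 5 * X[:, 0] - 3 * X[:, 1] + rng.normal(scale=2, size=500)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

models = {
    "elastic net": ElasticNet(alpha=0.1),
    "random forest": RandomForestRegressor(n_estimators=200, random_state=0),
    "gradient boosting": GradientBoostingRegressor(random_state=0),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    mae = mean_absolute_error(y_test, model.predict(X_test))
    print(f"{name}: MAE = {mae:.2f} min")
```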



Symmetry
2021
Vol 13 (8)
pp. 1454
Author(s):  
Hanxi Li ◽  
Wenyu Zhu ◽  
Haiqiang Jin ◽  
Yong Ma

The conventional green screen keying method requires user interaction to guide the whole process and usually assumes a well-controlled illumination environment. In the era of "we-media", millions of short videos are shared online every day, and most of them are produced by amateurs in relatively poor conditions. As a result, a fully automatic, real-time, and illumination-robust keying method would be very helpful and commercially promising in this era. In this paper, we propose a linear model guided by deep learning prediction to solve this problem. The simple yet effective algorithm inherits the robustness of deep-learning-based segmentation methods as well as the high matting quality of energy-minimization-based matting algorithms. Furthermore, thanks to the introduction of linear models, the proposed minimization problem is much less complex, and thus real-time green screen keying is achieved. In our experiments, the algorithm achieved keying performance comparable to manual keying software and deep-learning-based methods while beating other shallow matting algorithms in terms of accuracy. As for matting speed and robustness, which are critical for a practical matting system, the proposed method significantly outperformed all compared methods and showed superiority over all off-the-shelf approaches.
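The abstract does not give the exact formulation of the linear model, so the sketch below only illustrates the general idea of refining a deep-segmentation trimap into a soft alpha matte with a single global linear color model fitted by regularized least squares; the function name and the global (rather than locally fitted) model are simplifying assumptions, not the paper's method.

```python
import numpy as np

def linear_alpha_refine(image: np.ndarray, trimap: np.ndarray, reg: float = 1e-3):
    """Refine a coarse trimap into a soft alpha matte with a single global
    linear model alpha ~ w . [R, G, B, 1].

    `image` is HxWx3 float in [0, 1]; `trimap` is HxW with 1 = certain
    foreground, 0 = certain background, 0.5 = unknown (e.g. the uncertain
    band of a deep segmentation network's output).
    """
    h, w, _ = image.shape
    feats = np.concatenate([image.reshape(-1, 3), np.ones((h * w, 1))], axis=1)
    labels = trimap.reshape(-1)
    known = labels != 0.5

    # Ridge-regularized least squares on the pixels the segmenter is sure about.
    A, b = feats[known], labels[known]
    w_vec = np.linalg.solve(A.T @ A + reg * np.eye(4), A.T @ b)

    alpha = np.clip(feats @ w_vec, 0.0, 1.0).reshape(h, w)
    # Keep the confident regions exactly as the segmenter predicted them.
    alpha[trimap == 1] = 1.0
    alpha[trimap == 0] = 0.0
    return alpha
```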



2016
Vol 6 (1)
Author(s):  
Raphael Meier ◽  
Urspeter Knecht ◽  
Tina Loosli ◽  
Stefan Bauer ◽  
Johannes Slotboom ◽  
...  


Author(s):  
Muthalakshmi Murugesan ◽  
Dhanasekaran Ragavan

Background: Accurate detection of tumors in Magnetic Resonance Images (MRIs) is a critical and demanding task in medical image processing, due to the varying shape and structure of the brain. Different segmentation approaches, such as manual, semi-automatic, and fully automatic, have therefore been developed in traditional works. Among them, fully automatic segmentation techniques are increasingly used by medical experts for efficient disease diagnosis, but they suffer from over-segmentation, increased complexity, and time consumption. Objective: In order to solve these problems, this paper aims to develop an efficient segmentation and classification system by incorporating novel image processing techniques. Methods: Here, the Distribution based Adaptive Median Filtering (DMAF) technique is employed for preprocessing the image. Then, skull removal is performed to extract the tumor portion from the filtered image. Further, the Neighborhood Differential Edge Detection (NDED) technique is implemented to cluster the tumor-affected pixels, and the tumor is segmented using the Intensity Variation Pattern Analysis (IVPA) technique. Finally, normal and abnormal images are classified using the Weighted Machine Learning (WML) technique. Results: During experiments, the results of the existing and proposed segmentation and classification techniques were evaluated based on different performance measures. To prove the superiority of the proposed technique, it was compared with the existing techniques. Conclusion: From the analysis, it is observed that the proposed IVPA-WML techniques provide better results than the existing techniques.
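DMAF, NDED, IVPA, and WML are bespoke techniques whose details are not given in the abstract; the sketch below therefore uses standard SciPy operations (median filtering, connected-component masking, Sobel edges) purely to illustrate the order of the preprocessing stages, not the paper's actual algorithms.

```python
import numpy as np
from scipy import ndimage

def preprocess_mri(slice_2d: np.ndarray) -> np.ndarray:
    """Illustrative stand-ins for the pipeline stages named in the abstract:
    noise filtering, crude skull removal, then edge-based highlighting of
    candidate tumor pixels."""
    # Stage 1: noise suppression (stand-in for the adaptive median filter).
    filtered = ndimage.median_filter(slice_2d, size=3)

    # Stage 2: crude skull removal; keep the largest bright connected region
    # and erode its rim (a stand-in for a proper brain-extraction tool).
    mask = filtered > filtered.mean()
    labels, n = ndimage.label(mask)
    if n:
        largest = labels == (np.argmax(np.bincount(labels.ravel())[1:]) + 1)
        brain = filtered * ndimage.binary_erosion(largest, iterations=3)
    else:
        brain = filtered

    # Stage 3: edge response highlighting intensity discontinuities
    # (stand-in for the neighborhood differential edge detection step).
    return ndimage.sobel(brain.astype(float))
```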



2019
Vol 17 (1)
Author(s):  
Bin Ye ◽  
Kangping Liu ◽  
Siting Cao ◽  
Padmaja Sankaridurg ◽  
Wayne Li ◽  
...  

Abstract Background Wearable smart watches provide large amounts of real-time data on the environmental state of their users and are useful for determining risk factors for the onset and progression of myopia. We aim to evaluate the efficacy of a machine learning algorithm in differentiating indoor and outdoor locations from data collected by smart watches. Methods Real-time data on luminance, ultraviolet light levels, and number of steps were obtained with smart watches for dataset A (12 adults across 8 scenes), with true locations recorded manually. 70% of the data was used as the training set, and a support vector machine (SVM) algorithm was trained on these variables to create a classification system. Data recorded manually by the adults served as the reference. The algorithm was used to predict the location of the remaining 30% of dataset A. Accuracy was defined as the number of correct predictions divided by the total number of predictions. Similarly, data were collected for dataset B (172 children from 3 schools), with 12 supervisors recording the true locations; data collected by the supervisors served as the reference. The SVM model trained on dataset A was used to predict the locations in dataset B for validation. Finally, we predicted the locations in dataset B using an SVM model trained on dataset B itself. We repeated these three predictions with a traditional univariate threshold segmentation method. Results In both datasets, SVM outperformed the univariate threshold segmentation method. In dataset A, the accuracy and AUC of SVM were 99.55% and 0.99, compared with 95.11% and 0.95 for the univariate threshold segmentation (p < 0.01). In validation, the accuracy and AUC of SVM were 82.67% and 0.90, compared with 80.88% and 0.85 for the univariate threshold segmentation method (p < 0.01). In dataset B, the accuracy and AUC of SVM were 92.43% and 0.96, compared with 80.88% and 0.85 for the univariate threshold segmentation (p < 0.01). Conclusions The machine learning algorithm allows for discrimination of outdoor versus indoor environments with high accuracy and provides an opportunity to study and determine the role of environmental risk factors in the onset and progression of myopia. The accuracy of the machine learning algorithm could be improved if the model is trained with the dataset itself.
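A minimal sketch of the classification setup described above, assuming scikit-learn and synthetic luminance/UV/step features in place of the real smart-watch records; the 70/30 split and the accuracy/AUC metrics follow the abstract, while everything else is illustrative.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, roc_auc_score

# Hypothetical smart-watch records: luminance (lux), UV index, steps per epoch;
# label 1 = outdoor, 0 = indoor. Real data would come from dataset A.
rng = np.random.default_rng(1)
outdoor = np.column_stack([rng.normal(20000, 8000, 300), rng.normal(3, 1, 300), rng.poisson(40, 300)])
indoor = np.column_stack([rng.normal(400, 200, 300), rng.normal(0.1, 0.05, 300), rng.poisson(15, 300)])
X = np.vstack([outdoor, indoor])
y = np.concatenate([np.ones(300), np.zeros(300)])

# 70/30 split as in the abstract; features are scaled because they
# differ by orders of magnitude.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.7, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
clf.fit(X_tr, y_tr)

print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```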



2020
Vol 62 (12)
pp. 1637-1648
Author(s):  
Karin Gau ◽  
Charlotte S. M. Schmidt ◽  
Horst Urbach ◽  
Josef Zentner ◽  
Andreas Schulze-Bonhage ◽  
...  

Abstract Purpose Precise segmentation of brain lesions is essential for neurological research. Specifically, resection volume estimates can aid in the assessment of residual postoperative tissue, e.g. following surgery for glioma. Furthermore, behavioral lesion-symptom mapping in epilepsy relies on accurate delineation of surgical lesions. We sought to determine whether semi- and fully automatic segmentation methods can be applied to resected brain areas and which approach provides the most accurate and cost-efficient results. Methods We compared a semi-automatic (ITK-SNAP) with a fully automatic (lesion_GNB) method for segmentation of resected brain areas in terms of accuracy, with manual segmentation serving as the reference. Additionally, we evaluated the processing times of all three methods. We used T1-weighted MRI data of epilepsy patients (n = 27; 11 male; mean age 39 years, range 16–69) who underwent temporal lobe resections (17 left). Results The semi-automatic approach yielded superior accuracy (p < 0.001), with a median Dice similarity coefficient (mDSC) of 0.78 and a median average Hausdorff distance (maHD) of 0.44, compared with the fully automatic approach (mDSC 0.58, maHD 1.32). There was no significant difference between the median percent volume differences of the two approaches (p > 0.05). Manual segmentation required more human input (30.41 min/subject) and therefore incurred significantly higher costs than the semi-automatic (3.27 min/subject) or fully automatic approach (labor and cost approaching zero). Conclusion Semi-automatic segmentation offers the most accurate results in resected brain areas with a moderate amount of human input, thus representing a viable alternative to manual segmentation, especially for studies with large patient cohorts.
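For readers unfamiliar with the two evaluation metrics, the sketch below computes a Dice similarity coefficient and a simplified symmetric average surface distance between two binary masks using NumPy and SciPy; it is a generic illustration under those assumptions, not the evaluation code used in the study.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def dice_coefficient(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def average_hausdorff(a: np.ndarray, b: np.ndarray) -> float:
    """Simplified average Hausdorff-style distance between two non-empty
    binary masks, in voxels: for each foreground voxel of one mask take the
    distance to the nearest foreground voxel of the other, average, then
    average the two directions."""
    a, b = a.astype(bool), b.astype(bool)
    dist_to_b = distance_transform_edt(~b)  # distance of every voxel to mask b
    dist_to_a = distance_transform_edt(~a)  # distance of every voxel to mask a
    return 0.5 * (dist_to_b[a].mean() + dist_to_a[b].mean())
```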



2016
Vol 22 (3)
pp. 497-506
Author(s):  
Jindřich Soukup ◽  
Petr Císař ◽  
Filip Šroubek

Abstract Biocompatibility testing of new materials is often performed in vitro by measuring the growth rate of mammalian cancer cells in time-lapse images acquired by phase contrast microscopes. The growth rate is measured by tracking cell coverage, which requires an accurate automatic segmentation method. However, cancer cells have irregular shapes that change over time, the mottled background pattern is partially visible through the cells and the images contain artifacts such as halos. We developed a novel algorithm for cell segmentation that copes with the mentioned challenges. It is based on temporal differences of consecutive images and a combination of thresholding, blurring, and morphological operations. We tested the algorithm on images of four cell types acquired by two different microscopes, evaluated the precision of segmentation against manual segmentation performed by a human operator, and finally provided comparison with other freely available methods. We propose a new, fully automated method for measuring the cell growth rate based on fitting a coverage curve with the Verhulst population model. The algorithm is fast and shows accuracy comparable with manual segmentation. Most notably it can correctly separate live from dead cells.
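As an illustration of the final step, the sketch below fits a Verhulst (logistic) model to a synthetic coverage-versus-time curve with SciPy; the data, parameter values, and function names are assumptions for illustration, not the authors' measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def verhulst(t, K, r, t0):
    """Verhulst (logistic) population model: cell coverage as a function of time."""
    return K / (1.0 + np.exp(-r * (t - t0)))

# Hypothetical cell-coverage measurements (fraction of image area) per frame;
# in practice these would come from the segmentation step described above.
t = np.arange(0, 48, 2.0)  # hours
noise = np.random.default_rng(2).normal(0, 0.01, t.size)
coverage = verhulst(t, K=0.8, r=0.25, t0=20) + noise

# Fit the growth curve and report the carrying capacity and growth rate.
(K, r, t0), _ = curve_fit(verhulst, t, coverage, p0=(1.0, 0.1, 10.0))
print(f"carrying capacity K={K:.2f}, growth rate r={r:.3f} per hour")
```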


