Su1614 Artificial Intelligence (AI) in Endoscopy – Deep Learning for Optical Biopsy of Colorectal Polyps in Real-Time on Unaltered Endoscopic Videos

2017 · Vol 85 (5) · pp. AB364–AB365
Author(s):  
Michael F. Byrne ◽  
Nicolas Chapados ◽  
Florian Soudan ◽  
Clemens Oertel ◽  
Milagros L. Linares Pérez ◽  
...  
2019 · Vol 89 (6) · pp. AB89
Author(s):  
Yuichi Mori ◽  
Shinei Kudo ◽  
Masashi Misawa ◽  
Shinichi Kataoka ◽  
Kenichi Takeda ◽  
...  

2018 · Vol 87 (6) · pp. AB475
Author(s):  
Michael F. Byrne ◽  
Florian Soudan ◽  
Milagros Henkel ◽  
Clemens Oertel ◽  
Nicolas Chapados ◽  
...  

2021 · Vol 14 · pp. 263177452199062
Author(s):  
Benjamin Gutierrez Becker ◽  
Filippo Arcadu ◽  
Andreas Thalhammer ◽  
Citlalli Gamez Serna ◽  
Owen Feehan ◽  
...  

Introduction: The Mayo Clinic Endoscopic Subscore is a commonly used grading system for assessing the severity of ulcerative colitis. Correctly grading colonoscopies with the Mayo Clinic Endoscopic Subscore is a challenging task, with suboptimal interrater and intrarater agreement observed even among experienced, well-trained experts. In recent years, several machine learning algorithms have been proposed to improve the standardization and reproducibility of Mayo Clinic Endoscopic Subscore grading. Methods: Here we propose an end-to-end, fully automated system based on deep learning that predicts a binary version of the Mayo Clinic Endoscopic Subscore directly from raw colonoscopy videos. Unlike previous studies, the proposed method mimics the assessment performed in practice by a gastroenterologist: it traverses the whole colonoscopy video, identifies visually informative regions, and computes an overall Mayo Clinic Endoscopic Subscore. The proposed deep learning-based system was trained and deployed on raw colonoscopies using Mayo Clinic Endoscopic Subscore ground truth provided only at the colon-section level, without manually selecting the frames that drive the severity scoring of ulcerative colitis. Results and Conclusion: Our evaluation on 1672 endoscopic videos from a multisite data set drawn from the etrolizumab Phase II Eucalyptus and Phase III Hickory and Laurel clinical trials shows that the proposed methodology grades endoscopic videos with a high degree of accuracy and robustness (Area Under the Receiver Operating Characteristic Curve = 0.84 for Mayo Clinic Endoscopic Subscore ⩾ 1, 0.85 for ⩾ 2 and 0.85 for ⩾ 3) while requiring a reduced amount of manual annotation.
Plain language summary: Artificial intelligence can be used to automatically assess full endoscopic videos and estimate the severity of ulcerative colitis. In this work, we present an artificial intelligence algorithm for the automatic grading of ulcerative colitis in full endoscopic videos. Our artificial intelligence models were trained and evaluated on a large and diverse set of colonoscopy videos obtained from concluded clinical trials. We demonstrate not only that artificial intelligence is able to accurately grade full endoscopic videos, but also that using diverse data sets obtained from multiple sites is critical to train robust AI models that could potentially be deployed on real-world data.
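The video-level grading described above (traverse the video, keep only visually informative regions, and compute one binary subscore) can be sketched as a simple aggregation step. This is an illustrative assumption, not the authors' implementation: the function name, the top-k quality filter, and the 0.5 threshold are hypothetical, and the per-frame probabilities would in practice come from a trained convolutional network.

```python
import numpy as np

def video_mces_binary(frame_probs, frame_quality, top_k=16, threshold=0.5):
    """Aggregate per-frame severity probabilities into one binary
    Mayo Clinic Endoscopic Subscore (MCES) label for a whole video.

    frame_probs   : per-frame P(MCES >= cutoff), e.g. from a CNN
    frame_quality : per-frame informativeness score (blur/occlusion
                    filter), same length as frame_probs
    """
    frame_probs = np.asarray(frame_probs, dtype=float)
    frame_quality = np.asarray(frame_quality, dtype=float)
    # keep only the most informative frames, mimicking how a reader
    # skips blurry or water-filled segments of the withdrawal
    keep = np.argsort(frame_quality)[-top_k:]
    video_score = frame_probs[keep].mean()
    return int(video_score >= threshold), video_score
```

Aggregating only over high-quality frames is what lets section-level ground truth suffice: no individual frame ever needs a manual severity label.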


Author(s):  
Seonho Kim ◽  
Jungjoon Kim ◽  
Hong-Woo Chun

Interest in health-medical information analysis based on artificial intelligence, and on deep learning techniques in particular, has recently been increasing. Most research in this field has focused on discovering new knowledge for predicting and diagnosing disease by revealing relations between diseases and various features of the data. These features are extracted by analyzing clinical pathology data, such as EHRs (electronic health records), and academic literature with data analysis, natural language processing, and related techniques. However, more research and interest are still needed in applying the latest artificial intelligence-based analysis techniques to bio-signal data, that is, continuous physiological records such as EEG (electroencephalography) and ECG (electrocardiogram). Unlike other types of data, applying deep learning to bio-signal data, which takes the form of real-valued time series, raises many issues that must be resolved in preprocessing, learning, and analysis: feature selection is left entirely to the model, the learned representations are black boxes, effective features are difficult to recognize and identify, computational complexity is high, and so on. In this paper, to address these issues, we propose an encoding-based Wave2vec time series classifier model that combines signal processing with deep learning-based natural language processing techniques. To demonstrate its advantages, we report the results of three experiments conducted on the University of California Irvine (UCI) EEG data, a real-world benchmark bio-signal dataset. The bio-signals, which are real-valued time series in the form of waves, are first encoded into a sequence of symbols, or into a sequence of wavelet patterns that are then mapped to symbols; the proposed model then vectorizes the symbols by learning the sequence with deep learning-based natural language processing.
A model for each class can then be constructed by learning from the vectorized wavelet patterns and the training data, and the implemented models can be used for disease prediction and diagnosis by classifying new data. The proposed method improves data readability and makes feature selection and the learning process more intuitive by converting real-valued time series into sequences of symbols; it also facilitates easy recognition and identification of influential patterns. Furthermore, because the encoding step simplifies the data and drastically reduces computational complexity without degrading analysis performance, real-time analysis of large data volumes, which is essential for real-time diagnostic systems, becomes feasible.
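The two-stage idea, encode a real-valued signal into symbols, then learn dense vectors for those symbols from their sequence context, can be sketched as follows. This is a minimal illustration under stated assumptions: the SAX-style equal-probability binning, the co-occurrence-plus-SVD embedding (standing in for the word2vec-style training the paper describes), and all function names are hypothetical, not the Wave2vec implementation.

```python
import numpy as np

def symbolize(signal, alphabet="abcd"):
    """Encode a real-valued time series as a symbol string (SAX-style):
    z-normalize, then bin into equal-probability regions of the
    standard normal, one symbol per bin."""
    x = (np.asarray(signal, dtype=float) - np.mean(signal))
    x = x / (np.std(signal) + 1e-8)
    edges = np.array([-0.67, 0.0, 0.67])   # breakpoints for 4 symbols
    return "".join(alphabet[i] for i in np.searchsorted(edges, x))

def embed_symbols(seq, window=2, dim=2):
    """Learn a dense vector per symbol from its sequence context.
    Co-occurrence counts + truncated SVD stand in here for the
    deep-learning NLP step (word2vec-style) used in the paper."""
    vocab = sorted(set(seq))
    pos = {s: i for i, s in enumerate(vocab)}
    C = np.zeros((len(vocab), len(vocab)))
    for i, s in enumerate(seq):
        for j in range(max(0, i - window), min(len(seq), i + window + 1)):
            if j != i:
                C[pos[s], pos[seq[j]]] += 1
    U, S, _ = np.linalg.svd(C)
    return vocab, U[:, :dim] * S[:dim]   # one dim-sized vector per symbol
```

Once each symbol (or wavelet pattern) has a vector, a per-class classifier can be trained on the vectorized sequences, which is the classification stage the abstract describes.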


2020 · Vol 12 (21) · pp. 9177
Author(s):  
Vishal Mandal ◽  
Abdul Rashid Mussah ◽  
Peng Jin ◽  
Yaw Adu-Gyamfi

Manual traffic surveillance can be a daunting task, as Traffic Management Centers operate a myriad of cameras installed across a network. Injecting some level of automation could lighten the workload of the human operators performing manual surveillance and support proactive decisions that reduce the impact of incidents and recurring congestion on roadways. This article presents a novel approach to automatically monitoring real-time traffic footage using deep convolutional neural networks and a stand-alone graphical user interface. The authors describe results obtained while developing models that serve as an integrated framework for an artificial intelligence-enabled traffic monitoring system. The proposed system deploys several state-of-the-art deep learning algorithms to automate different traffic monitoring needs. Taking advantage of a large database of annotated video surveillance data, deep learning-based models are trained to detect queues, track stationary vehicles, and tabulate vehicle counts. A pixel-level segmentation approach is applied to detect traffic queues and predict their severity. Real-time object detection algorithms coupled with different tracking systems are deployed to automatically detect stranded vehicles and perform vehicular counts. At each stage of development, experimental results are presented to demonstrate the effectiveness of the proposed system. Overall, the results demonstrate that the proposed framework performs satisfactorily under varied conditions without being greatly affected by environmental hazards such as blurry camera views, low illumination, rain, or snow.
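The two downstream computations described above, queue severity from a pixel-level segmentation mask and vehicle counts from detector-plus-tracker output, can be sketched in a few lines. The thresholds, the virtual counting line, and both function names are illustrative assumptions; the paper's actual models (segmentation and detection networks) would produce the mask and the track histories consumed here.

```python
import numpy as np

def queue_severity(road_mask, thresholds=(0.15, 0.4)):
    """Map a binary queue-segmentation mask over a road region to a
    coarse severity level based on the queued pixel fraction."""
    frac = float(np.mean(road_mask))        # fraction of pixels flagged as queued
    if frac >= thresholds[1]:
        return "severe"
    if frac >= thresholds[0]:
        return "moderate"
    return "light"

def count_line_crossings(tracks, line_y):
    """tracks: {track_id: [(x, y), ...]} centroid histories produced by a
    detector + tracker; a vehicle is counted when its centroid path
    spans the virtual counting line at y = line_y."""
    count = 0
    for path in tracks.values():
        ys = [p[1] for p in path]
        if min(ys) < line_y <= max(ys):     # path crosses the line
            count += 1
    return count
```

Counting by track rather than by per-frame detection is what prevents the same vehicle from being tallied once per frame.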


2019 · Vol 156 (6) · pp. S-48–S-49
Author(s):  
Nicolas Guizard ◽  
Sina Hamidi Ghalehjegh ◽  
Milagros Henkel ◽  
Liqiang Ding ◽  
Neal C. Shahidi ◽  
...  

2021 · Vol 14 · pp. 263177452110146
Author(s):  
Nasim Parsa ◽  
Michael F. Byrne

Colonoscopy remains the gold standard exam for colorectal cancer screening due to its ability to detect and resect pre-cancerous lesions in the colon. However, its performance is greatly operator dependent. Studies have shown that up to one-quarter of colorectal polyps can be missed on a single colonoscopy, leading to high rates of interval colorectal cancer. In addition, the American Society for Gastrointestinal Endoscopy has proposed the “resect-and-discard” and “diagnose-and-leave” strategies for diminutive colorectal polyps to reduce the costs of unnecessary polyp resection and pathology evaluation. However, the performance of optical biopsy has been suboptimal in community practice. With recent improvements in machine-learning techniques, artificial intelligence-assisted computer-aided detection and diagnosis have been increasingly utilized by endoscopists. Applying computer-aided detection to real-time colonoscopy has been shown to increase the adenoma detection rate while decreasing withdrawal time, and to improve endoscopists’ optical biopsy accuracy while reducing the time needed to make a diagnosis. These are promising steps toward standardization and improvement of colonoscopy quality, and toward implementation of the “resect-and-discard” and “diagnose-and-leave” strategies. Yet, issues such as real-world applications and regulatory approval need to be addressed before artificial intelligence models can be successfully implemented in clinical practice. In this review, we summarize the recent literature on the application of artificial intelligence for detection and characterization of colorectal polyps, and review the limitations of existing artificial intelligence technologies and future directions for this field.
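How a computer-aided diagnosis (CADx) output might feed the "resect-and-discard" / "diagnose-and-leave" strategies can be sketched as a confidence-gated decision rule. This is purely illustrative: the 0.9 confidence cutoff, the function name, and the three-way outcome are hypothetical assumptions, not clinical thresholds or any system described in the review.

```python
def optical_biopsy_decision(p_adenoma, confident=0.9):
    """Confidence-gated decision for a diminutive polyp, given a CADx
    probability that the polyp is adenomatous. Low-confidence cases
    defer to formal histopathology rather than acting on the model."""
    if p_adenoma >= confident:
        return "resect-and-discard"      # high confidence: adenomatous
    if p_adenoma <= 1.0 - confident:
        return "diagnose-and-leave"      # high confidence: non-adenomatous
    return "send-to-pathology"           # uncertain: keep the specimen
```

Gating on confidence mirrors the review's point that AI-assisted optical biopsy must be reliable enough before pathology evaluation can safely be skipped.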


2022 · Vol 22 (1) · pp. 1–20
Author(s):  
Di Zhang ◽  
Feng Xu ◽  
Chi-Man Pun ◽  
Yang Yang ◽  
Rushi Lan ◽  
...  

Artificial intelligence, including deep learning and 3D reconstruction methods, is changing people's daily lives. Unmanned aerial vehicles, which can move freely in the air and avoid harsh ground conditions, are now commonly adopted as tools for 3D reconstruction. A traditional drone-based 3D reconstruction mission usually consists of two steps: image collection and offline post-processing. This raises two problems: it is uncertain whether all parts of the target object have been covered, and the post-processing is tedious and time-consuming. Inspired by modern deep learning methods, we build a telexistence drone system with an onboard deep learning computation module and a wireless data transmission module that performs incremental real-time dense reconstruction of urban scenes by itself. Two technical contributions are proposed to solve the preceding issues. First, we combine a popular depth-fusion surface reconstruction framework with a visual-inertial odometry estimator that integrates the inertial measurement unit, allowing robust camera tracking as well as high-accuracy online 3D scanning. Second, the capability of real-time 3D reconstruction enables a new rendering technique that visualizes the reconstructed geometry of the target as navigation guidance in the head-mounted display (HMD). This turns the traditional path-planning-based modeling process into an interactive one, leading to a higher level of scan completeness. Experiments in a simulation system and on our real prototype demonstrate the improved quality of 3D models produced with our artificial intelligence-leveraged drone system.
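The incremental depth-fusion step at the core of such a pipeline can be sketched as a truncated signed distance function (TSDF) running average in the KinectFusion style, which is the family the "popular depth fusion surface reconstruction framework" belongs to. The function name, the truncation distance, and the simplified per-voxel interface are illustrative assumptions; a real system projects voxels through the VIO-estimated camera pose into the depth image first.

```python
import numpy as np

def tsdf_update(tsdf, weights, depth_at_voxel, voxel_depth, trunc=0.1):
    """One incremental fusion step: blend the signed distance observed
    in the current depth frame into per-voxel running averages.

    depth_at_voxel : measured depth along each voxel's camera ray
    voxel_depth    : each voxel's depth in the current camera frame
    """
    sdf = depth_at_voxel - voxel_depth           # signed distance along ray
    d = np.clip(sdf / trunc, -1.0, 1.0)          # truncate to [-1, 1]
    valid = sdf > -trunc                          # skip voxels far behind the surface
    w_new = weights + valid
    tsdf = np.where(valid,
                    (tsdf * weights + d) / np.maximum(w_new, 1),
                    tsdf)
    return tsdf, w_new
```

Because each frame only updates this running average, the surface estimate is available at any time, which is what makes the in-HMD navigation guidance and interactive scanning described above possible.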


2020 · Vol 158 (6) · pp. S-369
Author(s):  
Eladio Rodriguez-Diaz ◽  
Gyorgy Baffy ◽  
Wai-Kit Lo ◽  
Hiroshi Mashimo ◽  
Gitanjali Vidyarthi ◽  
...  
