Identification of Barrett's esophagus in endoscopic images using deep learning

2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Wen Pan ◽  
Xujia Li ◽  
Weijia Wang ◽  
Linjing Zhou ◽  
Jiali Wu ◽  
...  

Abstract Background: To develop a deep learning method to identify the extent of Barrett's esophagus (BE) in endoscopic images. Methods: 443 endoscopic images from 187 patients with BE were included in this study. The gastroesophageal junction (GEJ) and squamous-columnar junction (SCJ) of BE were manually annotated in the endoscopic images by experts. Fully convolutional networks (FCNs) were developed to automatically identify the BE extent; the networks were trained and evaluated on two separate image sets. Segmentation performance was evaluated by intersection over union (IoU). Results: The deep learning method performed well in the automated identification of BE in endoscopic images, with IoU values of 0.56 (GEJ) and 0.82 (SCJ), respectively. Conclusions: The deep learning algorithm shows promising concordance with manual human assessment in segmenting the BE extent in endoscopic images. This automated recognition method can help clinicians locate and delineate BE during endoscopic examinations.
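The IoU metric reported above can be computed directly from binary masks; the sketch below is not from the paper, just a minimal NumPy illustration of comparing a predicted segmentation mask against an expert annotation.

```python
import numpy as np

def iou(pred: np.ndarray, target: np.ndarray) -> float:
    """Intersection over union of two binary segmentation masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    union = np.logical_or(pred, target).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return float(np.logical_and(pred, target).sum() / union)
```

An IoU of 1.0 means the predicted region exactly matches the annotation; 0.56 (as for the GEJ) means the overlap covers a little over half of the combined area.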


2018 ◽  
Author(s):  
Sebastien Villon ◽  
David Mouillot ◽  
Marc Chaumont ◽  
Emily S Darling ◽  
Gérard Subsol ◽  
...  

Identifying and counting individual fish in videos is crucial for cost-effective monitoring of marine biodiversity, but it remains a difficult and time-consuming task. In this paper, we present a method to assist the automated identification of fish species in underwater images, and we compare our algorithm's performance with human ability in terms of speed and accuracy. We first tested the performance of a convolutional neural network trained on different photographic databases, accounting for different post-processing decision rules, to identify 20 fish species. We then compared the species-identification performance of our best model with human performance on a test database of 1,197 pictures representing nine species. The best network was the one trained on 900,000 pictures of whole fish, of fish parts, and of the environment (e.g. reef bottom or water). Its rate of correct fish identification was 94.9%, greater than the rate of correct identifications by humans (89.3%). The network was also able to identify fish partially hidden behind corals or behind other fish, and it was more effective than human identification on small or blurry pictures, while humans were better at recognizing fish in unusual positions (e.g. a twisted body). On average, each identification by our best algorithm on common hardware took 0.06 seconds. Deep learning methods can thus perform efficient fish identification on underwater pictures, paving the way to new video-based protocols for monitoring fish biodiversity cheaply and effectively.
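A post-processing decision rule of the kind mentioned above can be as simple as a confidence threshold on the network's softmax output; the sketch below is a hypothetical illustration (the species labels and threshold are placeholders, not from the paper).

```python
import numpy as np

# Placeholder class labels, not the 20 species used in the study.
SPECIES = ["Chromis viridis", "Dascyllus aruanus", "Pomacentrus moluccensis"]

def decide(probs: np.ndarray, threshold: float = 0.5) -> str:
    """Accept the top softmax class only if its confidence clears the
    threshold; otherwise defer the image to a human annotator."""
    top = int(np.argmax(probs))
    return SPECIES[top] if probs[top] >= threshold else "defer_to_human"
```

Deferring low-confidence images to humans is one way to combine the network's speed with human skill on unusual poses.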


2020 ◽  
Author(s):  
Yong Tang ◽  
Xinpei Chen ◽  
Weijia Wang ◽  
Jiali Wu ◽  
Yingjun Zheng ◽  
...  

Abstract Background: To develop and validate a deep learning method to automatically segment the peri-ampullary (PA) region in magnetic resonance imaging (MRI) images. Methods: A group of patients with or without periampullary carcinoma (PAC) was included. The PA regions were manually annotated in MRI images by experts. Patients were randomly divided into a training set and a validation set. A deep learning method to automatically segment the PA region in MRI images was developed using the training set, and its segmentation performance was evaluated on the validation set. Results: The deep learning algorithm accurately segmented the PA regions in both T1 and T2 MRI images, with intersection over union (IoU) values of 0.67 (T1) and 0.68 (T2), respectively. Conclusions: The deep learning algorithm shows promising concordance with manual human assessment in segmenting the PA region in MRI images. This automated, non-invasive method can help clinicians identify and locate the PA region on preoperative MRI scans.
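Splitting by patient rather than by image, as described above, keeps all images from one patient on the same side of the train/validation divide and avoids leakage. A minimal sketch (the function name, fraction, and seed are illustrative, not the authors' code):

```python
import random

def split_by_patient(patient_ids, val_fraction=0.3, seed=0):
    """Shuffle unique patient IDs and split them into training and
    validation sets, so no patient's images appear in both sets."""
    ids = sorted(set(patient_ids))
    random.Random(seed).shuffle(ids)
    n_val = max(1, round(len(ids) * val_fraction))
    return set(ids[n_val:]), set(ids[:n_val])
```

Images are then assigned to a set by looking up their patient ID, never split individually.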


2021 ◽  
Vol 944 (1) ◽  
pp. 012015
Author(s):  
N A Lestari ◽  
I Jaya ◽  
M Iqbal

Abstract Seagrasses are angiosperms that live in shallow marine waters and estuaries. The method commonly used to identify seagrass is Seagrass-Watch, which involves sampling seagrass or carrying a seagrass identification book. Technological developments in the era of the Industrial Revolution 4.0 have made it possible to identify seagrass automatically. This research aims to apply a deep learning algorithm to detect seagrass recorded by underwater cameras. Identification of the seagrass species Enhalus acoroides was carried out using a deep learning method with the mask region-based convolutional neural network (Mask R-CNN) algorithm. The research procedure comprised data collection, labeling, training, model testing, and calculation of the seagrass area. This study used 6,000 epochs, and the resulting model value was approximately 1.2. The precision, i.e., the model's ability to classify objects correctly, reached 98.19%; the recall, i.e., the model's ability to find all positive objects, was 95.04% in system testing; and the F1 score was 96.58%. The results showed that the Mask R-CNN algorithm could detect and segment the seagrass Enhalus acoroides.
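The precision, recall, and F1 figures above follow directly from the counts of true positives, false positives, and false negatives; the sketch below shows the standard definitions, not the authors' evaluation code.

```python
def precision_recall_f1(tp: int, fp: int, fn: int):
    """Standard detection metrics from raw counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```

F1 is the harmonic mean of precision and recall, which is why the reported 96.58% sits between the 98.19% precision and 95.04% recall.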

