A Deep Learning algorithm for accurate and fast identification of coral reef fishes in underwater videos

2018 ◽  
Author(s):  
Sebastien Villon ◽  
David Mouillot ◽  
Marc Chaumont ◽  
Emily S Darling ◽  
Gérard Subsol ◽  
...  

Identifying and counting individual fish in videos is crucial for cost-effectively monitoring marine biodiversity, but it remains a difficult and time-consuming task. In this paper, we present a method to assist the automated identification of fish species in underwater images, and we compare our algorithm's performance to human ability in terms of speed and accuracy. We first tested the performance of a convolutional neural network trained on different photographic databases, while accounting for different post-processing decision rules, to identify 20 fish species. Finally, we compared the species-identification performance of our best model with human performance on a test database of 1197 pictures representing nine species. The best network was the one trained with 900,000 pictures of whole fish and of their parts and environment (e.g. reef bottom or water). Its rate of correct fish identification was 94.9%, greater than the rate of correct identification by humans (89.3%). The network was also able to identify fish partially hidden behind corals or behind other fish, and was more effective than humans on the smallest or blurriest pictures, while humans were better at recognizing fish in unusual positions (e.g. twisted body). On average, each identification by our best algorithm on common hardware took 0.06 seconds. Deep learning methods can thus perform efficient fish identification in underwater pictures, paving the way to new video-based protocols for monitoring fish biodiversity cheaply and effectively.
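A minimal sketch (not the authors' code) of the kind of post-processing decision rule the abstract mentions: a species label is accepted only when the network's top softmax score clears a confidence threshold, and low-confidence cases are deferred. The species names and the 0.9 threshold are illustrative assumptions.

```python
# Post-processing decision rule on CNN softmax outputs: accept the top
# species only when its score is high enough; otherwise return None so a
# human annotator can decide. Threshold and species list are illustrative.

def classify_with_threshold(softmax_scores, species, threshold=0.9):
    """Return the predicted species, or None when confidence is too low."""
    best = max(range(len(softmax_scores)), key=lambda i: softmax_scores[i])
    if softmax_scores[best] >= threshold:
        return species[best]
    return None  # defer to a human annotator

species = ["Chromis viridis", "Dascyllus aruanus", "Zebrasoma scopas"]
print(classify_with_threshold([0.02, 0.95, 0.03], species))  # confident case
print(classify_with_threshold([0.40, 0.35, 0.25], species))  # rejected case
```

Such a rule trades a lower identification rate for fewer wrong labels, which matters when automated counts feed biodiversity estimates.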


Diagnostics ◽  
2021 ◽  
Vol 11 (2) ◽  
pp. 250
Author(s):  
Yejin Jeon ◽  
Kyeorye Lee ◽  
Leonard Sunwoo ◽  
Dongjun Choi ◽  
Dong Yul Oh ◽  
...  

Accurate interpretation of the Waters' and Caldwell view radiographs used for sinusitis screening is challenging. We therefore developed a deep learning algorithm for diagnosing frontal, ethmoid, and maxillary sinusitis on both Waters' and Caldwell views. The training and validation set (n = 1403, 34.3% sinusitis) and the test set (n = 132, 29.5% sinusitis) were selected by temporal separation. The algorithm simultaneously detects and classifies each paranasal sinus using both Waters' and Caldwell views without manual cropping, and single- and multi-view models were compared. One-sided DeLong's tests were used to compare AUCs, and the Obuchowski–Rockette model was used to pool the radiologists' AUCs. Our proposed algorithm satisfactorily diagnosed frontal, ethmoid, and maxillary sinusitis on both views (area under the curve (AUC), 0.71 (95% confidence interval, 0.62–0.80), 0.78 (0.72–0.85), and 0.88 (0.84–0.92), respectively) and yielded a higher AUC than the radiologists for ethmoid and maxillary sinusitis (p = 0.012 and 0.013, respectively). The multi-view model also exhibited a higher AUC than the single Waters' view model for maxillary sinusitis (p = 0.038). Our algorithm thus showed diagnostic performance comparable to radiologists and enhances the value of radiography as a first-line imaging modality in assessing multiple sinusitis.
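The AUC figures above can be read as a rank statistic: the probability that a randomly chosen diseased case receives a higher score than a randomly chosen healthy one. A minimal sketch with made-up scores (the study itself used DeLong's test and the Obuchowski–Rockette model on real readings):

```python
# AUC computed as the Mann-Whitney U statistic: the fraction of
# positive/negative pairs in which the positive case is scored higher
# (ties count half). Labels and scores below are illustrative only.

def auc(labels, scores):
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 1, 0, 0, 0]          # 1 = sinusitis, 0 = normal
scores = [0.9, 0.8, 0.4, 0.5, 0.3, 0.2]  # model confidence per case
print(auc(labels, scores))           # 8 of 9 pairs ranked correctly
```

An AUC of 0.88, as reported for maxillary sinusitis, means the model ranks a diseased sinus above a healthy one 88% of the time.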


2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Wen Pan ◽  
Xujia Li ◽  
Weijia Wang ◽  
Linjing Zhou ◽  
Jiali Wu ◽  
...  

Abstract Background Development of a deep learning method to identify the extent of Barrett's esophagus (BE) in endoscopic images. Methods 443 endoscopic images from 187 patients with BE were included in this study. The gastroesophageal junction (GEJ) and squamocolumnar junction (SCJ) of BE were manually annotated in the endoscopic images by experts. Fully convolutional networks (FCN) were developed to automatically identify the extent of BE in endoscopic images. The networks were trained and evaluated on two separate image sets. Segmentation performance was evaluated by intersection over union (IOU). Results The deep learning method proved satisfactory for the automated identification of BE in endoscopic images, with IOU values of 0.56 (GEJ) and 0.82 (SCJ), respectively. Conclusions The deep learning algorithm is promising, showing good concordance with manual human assessment in segmenting the extent of BE in endoscopic images. This automated recognition method helps clinicians locate and recognize the extent of BE during endoscopic examinations.
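The IOU metric used above compares a predicted region with the expert annotation: the overlap area divided by the combined area, so 1.0 is a perfect match and 0 is no overlap. A minimal sketch on tiny flattened binary masks (the masks are illustrative, not study data):

```python
# Intersection over union (IOU) for binary segmentation masks, where
# 1 marks pixels inside the annotated region. Real masks are 2-D; a
# flattened toy example is enough to show the arithmetic.

def iou(mask_a, mask_b):
    inter = sum(a and b for a, b in zip(mask_a, mask_b))
    union = sum(a or b for a, b in zip(mask_a, mask_b))
    return inter / union if union else 1.0  # two empty masks agree fully

pred  = [0, 1, 1, 1, 0, 0]  # pixels the network labeled as BE
truth = [0, 0, 1, 1, 1, 0]  # pixels the expert labeled as BE
print(iou(pred, truth))     # 2 shared pixels over 4 in the union -> 0.5
```

By this measure, the reported 0.82 for the SCJ indicates substantially tighter agreement with the experts than the 0.56 for the GEJ.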


2019 ◽  
Vol 56 (5) ◽  
pp. 1404-1410 ◽  
Author(s):  
Ali Khalighifar ◽  
Ed Komp ◽  
Janine M Ramsey ◽  
Rodrigo Gurgel-Gonçalves ◽  
A Townsend Peterson

Abstract Vector-borne Chagas disease is endemic to the Americas and imposes significant economic and social burdens on public health. In a previous contribution, we presented an automated identification system able to discriminate among 12 Mexican and 39 Brazilian triatomine (Hemiptera: Reduviidae) species from digital images. To explore the same data more deeply using machine-learning approaches, hoping for improvements in classification, we employed TensorFlow, an open-source software platform, to implement a deep learning algorithm. We trained the algorithm on 405 images of Mexican triatomine species and 1,584 images of Brazilian triatomine species. Our system achieved correct identification rates of 83.0% across all Mexican species and 86.7% across all Brazilian species, an improvement over the comparable rates from statistical classifiers (80.3% and 83.9%, respectively). Incorporating distributional information to reduce the number of candidate species in analyses improved identification rates to 95.8% for Mexican species and 98.9% for Brazilian species. Given the 'taxonomic impediment' and the difficulty of providing the entomological expertise necessary to control such diseases, automating the identification process offers a potential partial solution to crucial challenges.
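A minimal sketch (not the authors' code) of how distributional information can reduce the candidate list as described above: classifier scores for species not recorded in the specimen's locality are zeroed before picking the winner. The species names, scores, and occurrence set are illustrative assumptions.

```python
# Restricting a classifier's candidate species by known geographic range:
# scores of species absent from the locality are zeroed and the rest are
# renormalized, so only locally plausible species can win the argmax.

def restrict_by_range(scores, species, present):
    """Zero out scores of species absent from the locality, renormalize."""
    masked = [s if sp in present else 0.0 for s, sp in zip(scores, species)]
    total = sum(masked)
    return [m / total for m in masked] if total else masked

species = ["T. dimidiata", "T. barberi", "T. pallidipennis"]
scores = [0.45, 0.40, 0.15]                     # raw classifier output
present = {"T. dimidiata", "T. pallidipennis"}  # recorded in this state
adjusted = restrict_by_range(scores, species, present)
print(species[adjusted.index(max(adjusted))])   # range rules out T. barberi
```

Shrinking the label set this way is consistent with the jump in accuracy the abstract reports, since fewer plausible species means fewer ways to be wrong.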


Sensors ◽  
2021 ◽  
Vol 21 (16) ◽  
pp. 5598
Author(s):  
Jiaqi Li ◽  
Xuefeng Zhao ◽  
Guangyi Zhou ◽  
Mingyuan Zhang ◽  
Dongfang Li ◽  
...  

With the rapid development of deep learning, computer vision has helped solve a variety of problems in engineering construction. However, very few computer vision-based approaches have been proposed for evaluating work productivity. Therefore, taking a super high-rise project as a research case and using the object information detected by a deep learning algorithm, a computer vision-based method for evaluating the productivity of assembling reinforcement is proposed. First, a detector based on CenterNet, with DLA-34 as the backbone, is established to accurately distinguish the various entities involved in assembling reinforcement; it reaches an mAP of 0.9682, and detection of a single image takes as little as 0.076 s. Second, the trained detector is applied to video frames, yielding images with detection boxes and files with coordinates. The positional relationship between the detected work objects and the detected workers determines how many workers (N) participated in the task, and the time (T) taken to perform the process is obtained from the change in the work object's coordinates. Finally, productivity is evaluated from N and T. The authors use four actual construction videos for validation, and the results show that the productivity evaluation is generally consistent with actual conditions. The contribution of this research to construction management is twofold: on the one hand, without affecting the normal behavior of workers, a connection between individual workers and the work object is established and work productivity evaluation is realized; on the other hand, the proposed method has a positive effect on improving the efficiency of construction management.
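A minimal sketch (not the authors' code) of the evaluation step described above: worker detections whose centers fall near the work object count toward N, and T is derived from the span of frames over which the object's coordinates change. All coordinates, the distance threshold, and the frame rate are illustrative assumptions.

```python
# From detector outputs to the productivity inputs N and T:
#   N = workers whose box centers lie near the work object,
#   T = elapsed time over which the work object's position changed.

def workers_on_task(worker_centers, object_center, max_dist=100.0):
    """Count workers within max_dist (pixels) of the work object center."""
    ox, oy = object_center
    return sum(1 for x, y in worker_centers
               if ((x - ox) ** 2 + (y - oy) ** 2) ** 0.5 <= max_dist)

def task_duration(first_move_frame, last_move_frame, fps=25):
    """Elapsed time T in seconds between first and last object movement."""
    return (last_move_frame - first_move_frame) / fps

# Two workers near the rebar cage, one far away elsewhere on the floor.
n = workers_on_task([(120, 90), (140, 110), (400, 300)], (130, 100))
t = task_duration(250, 1500)
print(n, t)  # productivity is then judged from N workers over T seconds
```

Productivity can then be expressed, for example, as work objects completed per worker-second (1 task over N × T), which is what makes the measure comparable across crews.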

