Deep Learning for Automated Detection and Identification of Migrating American Eel Anguilla rostrata from Imaging Sonar Data

2021
Vol 13 (14)
pp. 2671
Author(s):  
Xiaoqin Zang ◽  
Tianzhixi Yin ◽  
Zhangshuan Hou ◽  
Robert P. Mueller ◽  
Zhiqun Daniel Deng ◽  
...  

Adult American eels (Anguilla rostrata) are vulnerable to hydropower turbine mortality during outmigration from growth habitat in inland waters to the ocean where they spawn. Imaging sonar is a reliable and proven technology for monitoring fish passage and migration; however, there is no efficient automated method for eel detection. We designed a deep learning model for automated detection of adult American eels from sonar data. The method employs a convolutional neural network (CNN) to distinguish between images of eels and non-eel objects. Prior to image classification with the CNN, background subtraction and wavelet denoising were applied to enhance the sonar images. The CNN model was first trained and tested on data obtained from a laboratory experiment, which yielded overall accuracies of >98% for image-based classification. Then, the model was trained and tested on field data obtained near the Iroquois Dam on the St. Lawrence River; the accuracy achieved was commensurate with that of human experts.
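As a rough illustration of the preprocessing step described above, the sketch below applies per-pixel temporal-median background subtraction followed by wavelet soft-thresholding (via PyWavelets) to a stack of sonar frames. The function name, wavelet choice, decomposition level, and thresholding rule are illustrative assumptions, not the paper's stated parameters.

```python
# Minimal sketch, assuming a (T, H, W) stack of sonar echo intensities.
import numpy as np
import pywt

def preprocess_sonar_frames(frames: np.ndarray, wavelet: str = "db4") -> np.ndarray:
    frames = frames.astype(float)
    # Background subtraction: treat the per-pixel temporal median as the
    # static background and remove it, keeping only moving targets.
    background = np.median(frames, axis=0)
    foreground = np.clip(frames - background, 0.0, None)

    denoised = np.empty_like(foreground)
    for t, frame in enumerate(foreground):
        # Wavelet denoising: decompose, soft-threshold the detail
        # coefficients, then reconstruct. The universal threshold
        # sigma * sqrt(2 * log N) is one common rule; the paper may differ.
        coeffs = pywt.wavedec2(frame, wavelet, level=2)
        sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745  # noise estimate (MAD)
        thresh = sigma * np.sqrt(2.0 * np.log(frame.size))
        coeffs = [coeffs[0]] + [
            tuple(pywt.threshold(c, thresh, mode="soft") for c in detail)
            for detail in coeffs[1:]
        ]
        # waverec2 may pad by a pixel; crop back to the original shape.
        denoised[t] = pywt.waverec2(coeffs, wavelet)[: frame.shape[0], : frame.shape[1]]
    return denoised
```

The cleaned frames would then be cropped into eel/non-eel candidate images and fed to the CNN classifier.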

Author(s):  
Mohamed Estai ◽  
Marc Tennant ◽  
Dieter Gebauer ◽  
Andrew Brostek ◽  
Janardhan Vignarajan ◽  
...  

Objective: This study aimed to evaluate an automated system for detecting and classifying permanent teeth on orthopantomogram (OPG) images using convolutional neural networks (CNNs). Methods: In total, 591 digital OPGs were collected from patients older than 18 years. Three qualified dentists individually labelled the teeth on the images to generate the ground truth annotations. A three-step procedure relying on CNNs was proposed for automated detection and classification of teeth. First, U-Net, a type of CNN, performed preliminary segmentation of tooth regions, detecting regions of interest (ROIs) on the panoramic images. Second, Faster R-CNN, an advanced object detection architecture, identified each tooth within the ROIs determined by the U-Net. Third, a VGG-16 architecture classified each tooth into one of 32 categories and assigned a tooth number. A total of 17,135 teeth cropped from the 591 radiographs were used to train and validate the tooth detection and tooth numbering modules; 90% of the OPG images were used for training and the remaining 10% for validation, with 10-fold cross-validation performed to measure performance. The intersection over union (IoU), F1 score, precision, and recall (i.e., sensitivity) were used as metrics to evaluate the resultant CNNs. Results: The ROI detection module achieved an IoU of 0.70. The tooth detection module achieved a recall of 0.99 and a precision of 0.99. The tooth numbering module had a recall, precision, and F1 score of 0.98. Conclusion: The resultant automated method achieved high performance for tooth detection and numbering from OPG images. Deep learning can be helpful for the automatic filing of dental charts in general dentistry and forensic medicine.
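For reference, the evaluation metrics quoted in the Results reduce to a few lines of code. The sketch below assumes an (x1, y1, x2, y2) corner convention for bounding boxes, which the abstract does not specify.

```python
# IoU for box-overlap quality (ROI module); precision/recall/F1 from
# true-positive, false-positive, and false-negative counts.
def iou(box_a, box_b):
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Overlap rectangle; width/height clamp to zero when boxes are disjoint.
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

def precision_recall_f1(tp, fp, fn):
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0  # recall == sensitivity
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```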


2021
Vol Publish Ahead of Print
Author(s):  
Yi-Chu Li ◽  
Hung-Hsun Chen ◽  
Henry Horng-Shing Lu ◽  
Hung-Ta Hondar Wu ◽  
Ming-Chau Chang ◽  
...  

2020
Vol 133
pp. 210-216
Author(s):  
K. Shankar ◽  
Abdul Rahaman Wahab Sait ◽  
Deepak Gupta ◽  
S.K. Lakshmanaprabu ◽  
Ashish Khanna ◽  
...  

2018
Vol 113 (Supplement)
pp. S173-S174
Author(s):  
LinJie Guo ◽  
ChunCheng Wu ◽  
Xiao Xiao ◽  
Zhiwei Zhang ◽  
Weimin Pan ◽  
...  

Author(s):  
Hua Zhang ◽  
Jiajie Mo ◽  
Han Jiang ◽  
Zhuyun Li ◽  
Wenhan Hu ◽  
...  

2022
Vol 8
Author(s):  
Vishnu Kandimalla ◽  
Matt Richard ◽  
Frank Smith ◽  
Jean Quirion ◽  
Luis Torgo ◽  
...  

The Ocean Aware project, led by Innovasea and funded through Canada's Ocean Supercluster, is developing a fish passage observation platform to monitor fish without the use of traditional tags. This provides an alternative to standard tracking technology, such as acoustic telemetry, which is often not appropriate for tracking at-risk fish species protected by legislation. Instead, the observation platform uses a combination of sensors, including acoustic devices, visual and active sonar, and optical cameras. This will enable more in-depth scientific research and better support regulatory monitoring of at-risk fish species in fish passages or marine energy sites. Analysis of these data requires a robust and accurate method to automatically detect, count, and classify fish by species in real time using both sonar and optical cameras. To meet this need, we developed and tested an automated real-time deep learning framework combining state-of-the-art convolutional neural networks and Kalman filters. First, we showed that an adaptation of the widely used YOLO machine learning model can accurately detect and classify eight species of fish from a public high-resolution DIDSON imaging sonar dataset captured from the Ocqueoc River in Michigan, USA. Although there has been extensive research on identifying particular fish, such as eel vs. non-eel and seal vs. fish, to our knowledge this is the first successful application of deep learning for classifying multiple fish species with high-resolution imaging sonar. Second, we integrated the Norfair object tracking framework to track and count fish using a public video dataset captured by optical cameras from the Wells Dam fish ladder on the Columbia River in Washington State, USA. Our results demonstrate that deep learning models can indeed be used to detect fish, classify their species, and track them using both high-resolution imaging sonar and underwater video from a fish ladder. This work is a first step toward a fully implemented system that can accurately detect, classify, and generate insights about fish in a wide variety of fish passage environments and conditions, with data collected from multiple types of sensors.
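A minimal sketch of the detect-track-count loop described above, pairing an off-the-shelf YOLO detector (via the ultralytics package, as a stand-in for the project's adapted model) with the Norfair tracker, which smooths and associates detections across frames using Kalman filters. The weights file "fish_yolo.pt" and the video path are placeholders, not the project's actual artifacts.

```python
import cv2
import numpy as np
from norfair import Detection, Tracker
from ultralytics import YOLO

def centroid_distance(detection, tracked_object):
    # Match new detections to existing tracks by centroid distance.
    return np.linalg.norm(detection.points - tracked_object.estimate)

model = YOLO("fish_yolo.pt")  # placeholder: a YOLO model trained on fish classes
tracker = Tracker(distance_function=centroid_distance, distance_threshold=50)
counted_ids = set()

cap = cv2.VideoCapture("fish_ladder.mp4")  # placeholder video source
while True:
    ok, frame = cap.read()
    if not ok:
        break
    boxes = model(frame, verbose=False)[0].boxes
    detections = []
    for (x1, y1, x2, y2), cls in zip(boxes.xyxy.tolist(), boxes.cls.tolist()):
        center = np.array([[(x1 + x2) / 2, (y1 + y2) / 2]])
        detections.append(Detection(points=center, label=int(cls)))
    # Norfair's Kalman-filter tracks assign persistent ids across frames,
    # so each fish contributes one id to the count rather than one per frame.
    for obj in tracker.update(detections=detections):
        counted_ids.add(obj.id)
cap.release()
print(f"fish counted: {len(counted_ids)}")
```

In practice the per-track species label would be decided by voting over the per-frame class predictions, and short-lived tracks would be filtered before counting.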

