Improved accuracy for automated counting of a fish in baited underwater videos for stock assessment

2021 ◽  
Author(s):  
RM Connolly ◽  
DV Fairclough ◽  
EL Jinks ◽  
EM Ditria ◽  
G Jackson ◽  
...  

Abstract
The ongoing need to sustainably manage fishery resources necessitates fishery-independent monitoring of the status of fish stocks. Camera systems, particularly baited remote underwater video stations (BRUVS), are a widely-used and repeatable method for monitoring relative abundance, required for building stock assessment models. The potential for BRUVS-based monitoring is restricted, however, by the substantial costs of manual data extraction from videos. Computer vision, in particular deep learning models, are increasingly being used to automatically detect and count fish at low abundances in videos. One of the advantages of BRUVS is that bait attractants help to reliably detect species in relatively short deployments (e.g. 1 hr). The high abundances of fish attracted to BRUVS, however, make computer vision more difficult, because fish often occlude other fish. We build upon existing deep learning methods for identifying and counting a target fisheries species across a wide range of fish abundances. Using BRUVS imagery targeting a recovering fishery species, Australian snapper (Chrysophrys auratus), we tested combinations of three further mathematical steps likely to generate accurate, efficient automation: 1) varying confidence thresholds (CTs), 2) on/off use of sequential non-maximum suppression (Seq-NMS), and 3) statistical correction equations. Output from the deep learning model was accurate at very low abundances of snapper; at higher abundances, however, the model over-predicted counts by as much as 50%. The procedure providing the most accurate counts across all fish abundances, with counts either correct or within 1 to 2 of manual counts (R2 = 93.4%), used Seq-NMS, a 55% confidence threshold, and a cubic polynomial corrective equation. The optimised modelling provides an automated procedure offering an effective and efficient method for accurately identifying and counting snapper in BRUV footage.
Further testing is required to ensure that automated counts of snapper remain accurate in the survey region over time, and to determine the applicability to other regions within the distributional range of this species. For monitoring stocks of fishery species more generally, the specific equations will differ but the procedure demonstrated here would help to increase the usefulness of BRUVS, while decreasing costs.

2021 ◽  
Vol 8 ◽  
Author(s):  
Rod M. Connolly ◽  
David V. Fairclough ◽  
Eric L. Jinks ◽  
Ellen M. Ditria ◽  
Gary Jackson ◽  
...  

The ongoing need to sustainably manage fishery resources can benefit from fishery-independent monitoring of fish stocks. Camera systems, particularly baited remote underwater video system (BRUVS), are a widely used and repeatable method for monitoring relative abundance, required for building stock assessment models. The potential for BRUVS-based monitoring is restricted, however, by the substantial costs of manual data extraction from videos. Computer vision, in particular deep learning (DL) models, are increasingly being used to automatically detect and count fish at low abundances in videos. One of the advantages of BRUVS is that bait attractants help to reliably detect species in relatively short deployments (e.g., 1 h). The high abundances of fish attracted to BRUVS, however, make computer vision more difficult, because fish often obscure other fish. We build upon existing DL methods for identifying and counting a target fisheries species across a wide range of fish abundances. Using BRUVS imagery targeting a recovering fishery species, Australasian snapper (Chrysophrys auratus), we tested combinations of three further mathematical steps likely to generate accurate, efficient automation: (1) varying confidence thresholds (CTs), (2) on/off use of sequential non-maximum suppression (Seq-NMS), and (3) statistical correction equations. Output from the DL model was more accurate at low abundances of snapper than at higher abundances (>15 fish per frame) where the model over-predicted counts by as much as 50%. The procedure providing the most accurate counts across all fish abundances, with counts either correct or within 1–2 of manual counts (R2 = 88%), used Seq-NMS, a 45% CT, and a cubic polynomial corrective equation. The optimised modelling provides an automated procedure offering an effective and efficient method for accurately identifying and counting snapper in the BRUV footage on which it was tested. 
Additional evaluation will be required to test and refine the procedure so that automated counts of snapper are accurate in the survey region over time, and to determine the applicability to other regions within the distributional range of this species. For monitoring stocks of fishery species more generally, the specific equations will differ but the procedure demonstrated here could help to increase the usefulness of BRUVS.
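The confidence-threshold filtering and polynomial correction steps described above can be sketched as follows. This is a minimal illustration, not the study's implementation: the detection scores and the polynomial coefficients are hypothetical placeholders, whereas the paper used a 45% CT together with a cubic equation fitted to its own manual-count data.

```python
def filter_detections(detections, ct=0.45):
    """Keep only detections at or above the confidence threshold (CT)."""
    return [d for d in detections if d["score"] >= ct]

def correct_count(raw_count, coeffs):
    """Apply a cubic polynomial correction to a raw per-frame count.

    coeffs = (a, b, c, d) for a*x^3 + b*x^2 + c*x + d; these values are
    illustrative, not the fitted coefficients from the paper.
    """
    a, b, c, d = coeffs
    corrected = a * raw_count**3 + b * raw_count**2 + c * raw_count + d
    return max(0, round(corrected))

# Example: raw model output for one frame (scores are made up).
frame = [{"label": "snapper", "score": s} for s in (0.92, 0.61, 0.44, 0.30)]
kept = filter_detections(frame, ct=0.45)           # 2 detections survive
count = correct_count(len(kept), (0.0, 0.0, 0.9, 0.0))  # mild downward correction
```

The correction matters most at high abundances, where the raw model over-predicts; a cubic fit lets the correction stay near-identity for small counts while shrinking large ones.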


2021 ◽  
Author(s):  
Yigit Gunduc

Transformers have gained huge attention since they were first introduced and have a wide range of applications. Transformers have started to take over all areas of deep learning, and the Vision Transformers paper also proved that they can be used for computer vision tasks. In this paper, we utilized a vision-transformer-based custom-designed model, tensor-to-image, for image-to-image translation. With the help of self-attention, our model was able to generalize and apply to different problems without a single modification.
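The self-attention operation the abstract credits for this generality can be sketched as scaled dot-product attention over a sequence of token vectors (for vision transformers, the tokens are image patches). This is a generic sketch with random placeholder weights, not the tensor-to-image model itself.

```python
import numpy as np

def self_attention(x, w_q, w_k, w_v):
    """Scaled dot-product self-attention.

    x: (n, d) token matrix; w_q/w_k/w_v: (d, d) projection matrices.
    The weights here are random placeholders for illustration.
    """
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.T / np.sqrt(x.shape[1])          # (n, n) pairwise affinities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ v                              # each token mixes all others

rng = np.random.default_rng(0)
n, d = 4, 8                                # 4 tokens (e.g. patches), 8 dims
x = rng.standard_normal((n, d))
w = [rng.standard_normal((d, d)) for _ in range(3)]
out = self_attention(x, *w)                # same shape as the input: (4, 8)
```

Because every token attends to every other token, the same layer works regardless of how the tokens were produced, which is the property that lets one architecture cover different translation problems.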



2020 ◽  
Author(s):  
Cedar Warman ◽  
Christopher M. Sullivan ◽  
Justin Preece ◽  
Michaela E. Buchanan ◽  
Zuzana Vejlupkova ◽  
...  

Abstract
High-throughput phenotyping systems are powerful, dramatically changing our ability to document, measure, and detect biological phenomena. Here, we describe a cost-effective combination of a custom-built imaging platform and deep-learning-based computer vision pipeline. A minimal version of the maize ear scanner was built with low-cost and readily available parts. The scanner rotates a maize ear while a cellphone or digital camera captures a video of the surface of the ear. Videos are then digitally flattened into two-dimensional ear projections. Segregating GFP and anthocyanin kernel phenotypes are clearly distinguishable in ear projections, and can be manually annotated using image analysis software. Increased throughput was attained by designing and implementing an automated kernel counting system using transfer learning and a deep learning object detection model. The computer vision model was able to rapidly assess over 390,000 kernels, identifying male-specific transmission defects across a wide range of GFP-marked mutant alleles. This includes a previously undescribed defect putatively associated with mutation of Zm00001d002824, a gene predicted to encode a vacuolar processing enzyme (VPE). We show that by using this system, the quantification of transmission data and other ear phenotypes can be accelerated and scaled to generate large datasets for robust analyses.

One sentence summary
A maize ear phenotyping system built from commonly available parts creates images of the surface of ears and identifies kernel phenotypes with a deep-learning-based computer vision pipeline.


2021 ◽  
Vol 2089 (1) ◽  
pp. 012079
Author(s):  
Makkena Brahmaiah ◽  
Srinivasa Rao Madala ◽  
Ch Mastan Chowdary

Abstract
As crime rates rise at large events and in isolated places, security is always a top concern in every field. A wide range of issues can be addressed with computer vision, including anomaly detection and monitoring. Intelligence monitoring is becoming more dependent on video surveillance systems that can recognise and analyse scene and anomaly occurrences. Using SSD and Faster R-CNN techniques, this paper provides automated gun (or weapon) identification. The proposed approach uses two different kinds of datasets; unlike the first dataset, the second comprises images that have been manually labelled. The trade-off between speed and precision in real-world situations, however, determines whether each method will be useful.
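Both SSD and Faster R-CNN are evaluated by matching predicted boxes to labelled ground-truth boxes via intersection-over-union (IoU), so a minimal IoU helper is the common denominator of the comparison. A generic sketch, with boxes given as (x1, y1, x2, y2) corners:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

# Example: a predicted weapon box vs. a manually tagged ground-truth box.
overlap = iou((0, 0, 10, 10), (5, 0, 15, 10))   # half-overlapping boxes
```

A prediction typically counts as correct when its IoU with a ground-truth box exceeds a threshold such as 0.5; the speed/precision trade-off the abstract mentions is then measured over many such matches.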


Author(s):  
Toke T. Høye ◽  
Johanna Ärje ◽  
Kim Bjerge ◽  
Oskar L. P. Hansen ◽  
Alexandros Iosifidis ◽  
...  

Abstract
Most animal species on Earth are insects, and recent reports suggest that their abundance is in drastic decline. Although these reports come from a wide range of insect taxa and regions, the evidence to assess the extent of the phenomenon is still sparse. Insect populations are challenging to study and most monitoring methods are labour intensive and inefficient. Advances in computer vision and deep learning provide potential new solutions to this global challenge. Cameras and other sensors can effectively, continuously, and non-invasively perform entomological observations throughout diurnal and seasonal cycles. The physical appearance of specimens can also be captured by automated imaging in the lab. When trained on these data, deep learning models can provide estimates of insect abundance, biomass, and diversity. Further, deep learning models can quantify variation in phenotypic traits, behaviour, and interactions. Here, we connect recent developments in deep learning and computer vision to the urgent demand for more cost-efficient monitoring of insects and other invertebrates. We present examples of sensor-based monitoring of insects. We show how deep learning tools can be applied to the big data outputs to derive ecological information and discuss the challenges that lie ahead for the implementation of such solutions in entomology. We identify four focal areas, which will facilitate this transformation: 1) Validation of image-based taxonomic identification, 2) generation of sufficient training data, 3) development of public, curated reference databases, and 4) solutions to integrate deep learning and molecular tools.

Significance statement
Insect populations are challenging to study, but computer vision and deep learning provide opportunities for continuous and non-invasive monitoring of biodiversity around the clock and over entire seasons. These tools can also facilitate the processing of samples in a laboratory setting.
Automated imaging in particular can provide an effective way of identifying and counting specimens to measure abundance. We present examples of sensors and devices of relevance to entomology and show how deep learning tools can convert the big data streams into ecological information. We discuss the challenges that lie ahead and identify four focal areas to make deep learning and computer vision game changers for entomology.


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Yingqi Gu ◽  
Akshay Zalkikar ◽  
Mingming Liu ◽  
Lara Kelly ◽  
Amy Hall ◽  
...  

Abstract
Clinical studies from WHO have demonstrated that only 50–70% of patients adhere properly to prescribed drug therapy. Such adherence failure can impact therapeutic efficacy for the patients in question and compromises data quality around the population-level efficacy of the drug for the indications targeted. In this study, we applied various ensemble learning and deep learning models to predict medication adherence among patients. Our contribution to this endeavour involves targeting the problem of adherence prediction for a particularly challenging class of patients who self-administer injectable medication at home. Our prediction pipeline, based on event history, comprises a connected sharps bin which aims to help patients better manage their condition and improve outcomes. In other words, the efficiency of interventions can be significantly improved by prioritizing the patients who are most likely to be non-adherent. The collected data comprising a rich event feature set may be exploited for the purposes of predicting the status of the next adherence state for individual patients. This paper reports on how this concept can be realized through an investigation using a wide range of ensemble learning and deep learning models on a real-world dataset collected from such a system. The dataset investigated comprises 342,174 historic injection disposal records collected over the course of more than 5 years. A comprehensive comparison of different models is given in this paper. Moreover, we demonstrate that the selected best performer, long short-term memory (LSTM), generalizes well by deploying it in a true future testing dataset. The proposed end-to-end pipeline is capable of predicting patient failure in adhering to their therapeutic regimen with 77.35% accuracy (specificity: 78.28%, sensitivity: 76.42%, precision: 77.87%, F1 score: 0.7714, ROC AUC: 0.8390).
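All of the reported figures except ROC AUC derive directly from the model's confusion matrix, and the reported precision and sensitivity indeed reproduce the reported F1 score. A minimal sketch of those definitions (the confusion-matrix counts in the example are hypothetical, not the study's):

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

def classification_metrics(tp, fp, tn, fn):
    """Binary-classification metrics from confusion-matrix counts."""
    precision = tp / (tp + fp)
    sensitivity = tp / (tp + fn)            # recall: non-adherence cases caught
    return {
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
        "precision": precision,
        "sensitivity": sensitivity,
        "specificity": tn / (tn + fp),      # adherent patients correctly passed
        "f1": f1_score(precision, sensitivity),
    }

# Cross-check: the paper's precision (77.87%) and sensitivity (76.42%)
# reproduce its F1 score of 0.7714.
f1 = round(f1_score(0.7787, 0.7642), 4)
```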


2021 ◽  
Vol 2021 (1) ◽  
pp. 11-15
Author(s):  
Marco Leonardi ◽  
Paolo Napoletano ◽  
Alessandro Rozza ◽  
Raimondo Schettini

Automatic assessment of image aesthetics is a challenging task for the computer vision community that has a wide range of applications. The most promising state-of-the-art approaches are based on deep learning methods that jointly predict aesthetics-related attributes and aesthetics score. In this article, we propose a method that learns the aesthetics score on the basis of the prediction of aesthetics-related attributes. To this end, we extract a multi-level spatially pooled (MLSP) features set from a pretrained ImageNet network and then these features are used to train a Multi-Layer Perceptron (MLP) to predict image aesthetics-related attributes. A Support Vector Regression machine (SVR) is finally used to estimate the image aesthetics score starting from the aesthetics-related attributes. Experimental results on the "Aesthetics with Attributes Database" (AADB) demonstrate the effectiveness of our approach that outperforms the state of the art by about 5.5% in terms of Spearman's Rank-order Correlation Coefficient (SROCC).
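The evaluation metric used here, SROCC, compares the ranking of predicted aesthetics scores with the ranking of ground-truth scores rather than their absolute values. A minimal sketch (no tie handling, which production implementations do provide):

```python
def srocc(xs, ys):
    """Spearman's rank-order correlation coefficient, assuming no tied values."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order):
            r[i] = rank
        return r
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n**2 - 1))

# Perfectly monotonic predictions give SROCC = 1.0 regardless of scale.
rho = srocc([0.1, 0.4, 0.35, 0.8], [1, 3, 2, 7])
```

Rank-based evaluation suits aesthetics prediction because human scores are only consistently ordinal: the model need only order images correctly, not match raw score values.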


Author(s):  
Bobburi Taralathasri ◽  
Dammati Vidya Sri ◽  
Gadidammalla Narendra Kumar ◽  
Annam Subbarao ◽  
Palli R Krishna Prasad

Major, wide-ranging applications such as driverless cars, robots, and image surveillance have become famous in computer vision. Computer vision, the core of all those applications, is responsible for image detection and has become more popular worldwide. "Object Detection System using Deep Learning Technique" detects objects efficiently based on the YOLO algorithm, applying the algorithm to image data to detect objects.
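YOLO-style detectors emit many overlapping candidate boxes per object, so a non-maximum suppression (NMS) post-processing step keeps only the highest-scoring box among near-duplicates. A generic greedy sketch, not tied to any particular YOLO implementation:

```python
def iou(a, b):
    """Intersection-over-union of (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union

def nms(boxes, scores, iou_thr=0.5):
    """Greedy non-maximum suppression; returns indices of kept boxes."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        # Keep a box only if it does not heavily overlap any already-kept box.
        if all(iou(boxes[i], boxes[j]) <= iou_thr for j in keep):
            keep.append(i)
    return keep

# Two near-duplicate detections of one object, plus one distinct object.
boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (50, 50, 60, 60)]
kept = nms(boxes, [0.9, 0.8, 0.7])
```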


2021 ◽  
Vol 118 (2) ◽  
pp. e2002545117
Author(s):  
Toke T. Høye ◽  
Johanna Ärje ◽  
Kim Bjerge ◽  
Oskar L. P. Hansen ◽  
Alexandros Iosifidis ◽  
...  

Most animal species on Earth are insects, and recent reports suggest that their abundance is in drastic decline. Although these reports come from a wide range of insect taxa and regions, the evidence to assess the extent of the phenomenon is sparse. Insect populations are challenging to study, and most monitoring methods are labor intensive and inefficient. Advances in computer vision and deep learning provide potential new solutions to this global challenge. Cameras and other sensors can effectively, continuously, and noninvasively perform entomological observations throughout diurnal and seasonal cycles. The physical appearance of specimens can also be captured by automated imaging in the laboratory. When trained on these data, deep learning models can provide estimates of insect abundance, biomass, and diversity. Further, deep learning models can quantify variation in phenotypic traits, behavior, and interactions. Here, we connect recent developments in deep learning and computer vision to the urgent demand for more cost-efficient monitoring of insects and other invertebrates. We present examples of sensor-based monitoring of insects. We show how deep learning tools can be applied to exceptionally large datasets to derive ecological information and discuss the challenges that lie ahead for the implementation of such solutions in entomology. We identify four focal areas, which will facilitate this transformation: 1) validation of image-based taxonomic identification; 2) generation of sufficient training data; 3) development of public, curated reference databases; and 4) solutions to integrate deep learning and molecular tools.

