Improved Accuracy for Automated Counting of a Fish in Baited Underwater Videos for Stock Assessment

2021 ◽  
Vol 8 ◽  
Author(s):  
Rod M. Connolly ◽  
David V. Fairclough ◽  
Eric L. Jinks ◽  
Ellen M. Ditria ◽  
Gary Jackson ◽  
...  

The ongoing need to sustainably manage fishery resources can benefit from fishery-independent monitoring of fish stocks. Camera systems, particularly baited remote underwater video systems (BRUVS), are a widely used and repeatable method for monitoring the relative abundance required for building stock assessment models. The potential for BRUVS-based monitoring is restricted, however, by the substantial cost of manual data extraction from videos. Computer vision, in particular deep learning (DL), is increasingly being used to automatically detect and count fish at low abundances in videos. One of the advantages of BRUVS is that bait attractants help to reliably detect species in relatively short deployments (e.g., 1 h). The high abundances of fish attracted to BRUVS, however, make computer vision more difficult, because fish often obscure other fish. We build upon existing DL methods for identifying and counting a target fisheries species across a wide range of fish abundances. Using BRUVS imagery targeting a recovering fishery species, Australasian snapper (Chrysophrys auratus), we tested combinations of three further mathematical steps likely to generate accurate, efficient automation: (1) varying confidence thresholds (CTs), (2) on/off use of sequential non-maximum suppression (Seq-NMS), and (3) statistical correction equations. Output from the DL model was more accurate at low abundances of snapper than at higher abundances (>15 fish per frame), where the model over-predicted counts by as much as 50%. The procedure providing the most accurate counts across all fish abundances, with counts either correct or within 1–2 of manual counts (R2 = 88%), used Seq-NMS, a 45% CT, and a cubic polynomial corrective equation. The optimised modelling provides an automated procedure offering an effective and efficient method for accurately identifying and counting snapper in the BRUV footage on which it was tested.
Additional evaluation will be required to test and refine the procedure so that automated counts of snapper are accurate in the survey region over time, and to determine the applicability to other regions within the distributional range of this species. For monitoring stocks of fishery species more generally, the specific equations will differ but the procedure demonstrated here could help to increase the usefulness of BRUVS.
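The correction step described above can be sketched as follows. This is a minimal illustration, not the paper's procedure: the calibration data and fitted coefficients below are invented for demonstration, and the real workflow would fit the cubic against manual counts from annotated BRUVS frames after Seq-NMS and confidence-threshold filtering.

```python
import numpy as np

# Hypothetical calibration data: raw detector counts vs. manual counts.
# At higher abundances the detector over-predicts, as the abstract describes.
raw_counts = np.array([0, 2, 5, 8, 12, 15, 20, 25, 30], dtype=float)
manual_counts = np.array([0, 2, 5, 8, 11, 13, 15, 18, 20], dtype=float)

# Fit a cubic polynomial mapping raw (over-predicted) counts to corrected counts.
coeffs = np.polyfit(raw_counts, manual_counts, deg=3)
correct = np.poly1d(coeffs)

def corrected_count(raw):
    """Apply the cubic correction and round to a whole fish count."""
    return max(0, round(float(correct(raw))))
```

Applying `corrected_count` to each frame's raw detection count pulls the high-abundance over-predictions back toward the manual counts while leaving low-abundance counts nearly unchanged.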

2021 ◽  
Author(s):  
RM Connolly ◽  
DV Fairclough ◽  
EL Jinks ◽  
EM Ditria ◽  
G Jackson ◽  
...  

Abstract: The ongoing need to sustainably manage fishery resources necessitates fishery-independent monitoring of the status of fish stocks. Camera systems, particularly baited remote underwater video stations (BRUVS), are a widely used and repeatable method for monitoring the relative abundance required for building stock assessment models. The potential for BRUVS-based monitoring is restricted, however, by the substantial costs of manual data extraction from videos. Computer vision, in particular deep learning, is increasingly being used to automatically detect and count fish at low abundances in videos. One of the advantages of BRUVS is that bait attractants help to reliably detect species in relatively short deployments (e.g., 1 hr). The high abundances of fish attracted to BRUVS, however, make computer vision more difficult, because fish often occlude other fish. We build upon existing deep learning methods for identifying and counting a target fisheries species across a wide range of fish abundances. Using BRUVS imagery targeting a recovering fishery species, Australasian snapper (Chrysophrys auratus), we tested combinations of three further mathematical steps likely to generate accurate, efficient automation: 1) varying confidence thresholds (CTs), 2) on/off use of sequential non-maximum suppression (Seq-NMS), and 3) statistical correction equations. Output from the deep learning model was accurate at very low abundances of snapper; at higher abundances, however, the model over-predicted counts by as much as 50%. The procedure providing the most accurate counts across all fish abundances, with counts either correct or within 1 to 2 of manual counts (R2 = 93.4%), used Seq-NMS, a 55% confidence threshold, and a cubic polynomial corrective equation. The optimised modelling provides an automated procedure offering an effective and efficient method for accurately identifying and counting snapper in BRUV footage.
Further testing is required to ensure that automated counts of snapper remain accurate in the survey region over time, and to determine the applicability to other regions within the distributional range of this species. For monitoring stocks of fishery species more generally, the specific equations will differ but the procedure demonstrated here would help to increase the usefulness of BRUVS, while decreasing costs.


2021 ◽  
Vol 22 (1) ◽  
Author(s):  
Eleanor F. Miller ◽  
Andrea Manica

Abstract
Background: Today an unprecedented amount of genetic sequence data is stored in publicly available repositories. For decades now, mitochondrial DNA (mtDNA) has been the workhorse of genetic studies, and as a result there is a large volume of mtDNA data available in these repositories for a wide range of species. Indeed, whilst whole genome sequencing is an exciting prospect for the future, for most non-model organisms classical markers such as mtDNA remain widely used. By compiling existing data from multiple original studies, it is possible to build powerful new datasets capable of exploring many questions in ecology, evolution and conservation biology. One key question that these data can help inform is what happened in a species' demographic past. However, compiling data in this manner is not trivial: there are many complexities associated with data extraction, data quality and data handling.
Results: Here we present the mtDNAcombine package, a collection of tools developed to manage some of the major decisions associated with handling multi-study sequence data, with a particular focus on preparing sequence data for Bayesian skyline plot demographic reconstructions.
Conclusions: There is now more genetic information available than ever before, and large meta-datasets offer great opportunities to explore new and exciting avenues of research. However, compiling multi-study datasets remains a technically challenging prospect. The mtDNAcombine package provides a pipeline to streamline the process of downloading, curating, and analysing sequence data, guiding the process of compiling datasets from the online database GenBank.


Author(s):  
Oluwaseun Adeyeye ◽  
Ali Aldalbahi ◽  
Jawad Raza ◽  
Zurni Omar ◽  
Mostafizur Rahaman ◽  
...  

Abstract: The processes of diffusion and reaction play essential roles in numerous system dynamics. Consequently, the solutions of reaction–diffusion equations have gained much attention, not only because such equations occur in many fields of science but also because their solutions carry important properties and information. However, despite the wide range of numerical methods explored for approximating solutions, the adoption of block methods is yet to be investigated. Hence, this article introduces a new two-step third–fourth-derivative block method as a numerical approach to solve the reaction–diffusion equation. To ensure improved accuracy, the method introduces nonlinearity into the solution of the linear model through the presence of higher derivatives. The method obtained accurate solutions for the model at varying values of the dimensionless diffusion parameter and saturation parameter. Furthermore, the solutions are in good agreement with previous solutions by existing authors.
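For orientation, the kind of problem the abstract describes can be approximated with a much simpler baseline than the paper's block method. The sketch below uses an explicit finite-difference scheme for a saturable reaction–diffusion model; the model form, parameters, and boundary conditions are illustrative assumptions, and the two-step third–fourth-derivative block method itself is not reproduced here.

```python
import numpy as np

# Baseline sketch only: explicit finite differences for a saturable
# reaction-diffusion model u_t = D*u_xx - u/(1 + k*u) on [0, L] with
# zero Dirichlet boundaries. D is a diffusion parameter and k a
# saturation parameter, by analogy with the abstract's terminology.
def solve_rd(D=0.1, k=1.0, nx=51, nt=2000, L=1.0, T=0.5):
    dx = L / (nx - 1)
    dt = T / nt
    assert D * dt / dx**2 < 0.5, "explicit scheme stability condition"
    u = np.ones(nx)          # initial condition u(x, 0) = 1
    u[0] = u[-1] = 0.0       # Dirichlet boundaries
    for _ in range(nt):
        lap = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
        u[1:-1] += dt * (D * lap - u[1:-1] / (1.0 + k * u[1:-1]))
        u[0] = u[-1] = 0.0
    return u
```

A higher-order block method aims at the same solution profile but with better accuracy per step than this first-order-in-time baseline.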


2021 ◽  
Vol 13 (2) ◽  
pp. 723
Author(s):  
Antti Kurvinen ◽  
Arto Saari ◽  
Juhani Heljo ◽  
Eero Nippala

It is widely agreed that the dynamics of building stocks are relatively poorly known, even though this is recognized as an important research topic. A better understanding of building stock dynamics and future development is crucial, e.g., for sustainable management of the built environment, as various analyses require long-term projections of building stock development. Recognizing the uncertainty involved in long-term modeling, we propose a transparent, calculation-based QuantiSTOCK model for modeling building stock development. Our approach not only provides a tangible tool for understanding development when selected assumptions are valid but also, most importantly, allows for studying the sensitivity of results to alternative developments of the key variables. This relatively simple modeling approach therefore provides fruitful grounds for understanding the impact of different key variables, which is needed to facilitate meaningful debate on different housing, land use, and environment-related policies. The QuantiSTOCK model may be extended in numerous ways and lays the groundwork for modeling the future development of building stocks. The presented model may be used in a wide range of analyses, ranging from assessing housing demand at the regional level to providing input for defining sustainable pathways towards climate targets. Due to the availability of high-quality data, the Finnish building stock provided a great test arena for the model development.
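The core of any calculation-based stock model is a simple stock-flow balance. The sketch below is a generic illustration of that idea, not the QuantiSTOCK model itself: all parameter names and values are hypothetical, and QuantiSTOCK disaggregates by building type, age cohort, and region in ways this toy omits.

```python
# Illustrative stock-flow sketch (not the QuantiSTOCK model):
# next year's stock = current stock + new construction - demolition,
# with demolition taken as a fixed fraction of the existing stock.
def project_stock(initial_stock, construction_per_year, demolition_rate, years):
    """Return the year-by-year stock trajectory, including year 0."""
    stock = float(initial_stock)
    trajectory = [stock]
    for _ in range(years):
        stock = stock + construction_per_year - demolition_rate * stock
        trajectory.append(stock)
    return trajectory
```

Sensitivity analysis of the kind the abstract emphasises then amounts to re-running `project_stock` over a grid of construction and demolition assumptions and comparing trajectories.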


2014 ◽  
Vol 72 (1) ◽  
pp. 111-116 ◽  
Author(s):  
M. Dickey-Collas ◽  
N. T. Hintzen ◽  
R. D. M. Nash ◽  
P-J. Schön ◽  
M. R. Payne

Abstract: The accessibility of databases of global or regional stock assessment outputs is leading to an increase in meta-analyses of the dynamics of fish stocks. In most of these analyses, each time-series is generally assumed to be directly comparable. However, the stock assessment approach employed, and the associated modelling assumptions, can have an important influence on the characteristics of each time-series. We explore this idea by investigating recruitment time-series with three different recruitment parameterizations: a stock–recruitment model, a random-walk time-series model, and non-parametric "free" estimation of recruitment. We show that recruitment time-series are sensitive to model assumptions; this can affect management reference points and the perceived variability in recruitment, and thus undermine meta-analyses. The assumption that recruitment time-series in databases are directly comparable is therefore not consistent across or within species and stocks. Caution is required, as the characteristics of time-series of stock dynamics may be determined by the model used to generate them rather than by underlying ecological phenomena. This is especially true when information about cohort abundance is noisy or lacking.
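The contrast between the first two parameterizations can be made concrete. The sketch below uses a Beverton-Holt curve as a representative stock-recruitment model (the paper does not specify which functional form; that choice, and all parameter values, are assumptions for illustration) against a log-scale random walk, where recruitment drifts independently of stock size.

```python
import numpy as np

# Illustrative contrast between two recruitment parameterisations.
# Parameter values are arbitrary, for demonstration only.
rng = np.random.default_rng(0)

def beverton_holt(ssb, a=4.0, b=0.5):
    """Stock-recruitment model: recruitment is a saturating function
    of spawning stock biomass (asymptote a/b)."""
    return a * ssb / (1.0 + b * ssb)

def random_walk_recruitment(n_years, r0=5.0, sigma=0.3):
    """Random-walk model: log-recruitment drifts year to year,
    with no link to stock size."""
    log_r = np.log(r0) + np.cumsum(rng.normal(0.0, sigma, n_years))
    return np.exp(log_r)
```

Fitting each parameterization to the same catch-at-age data can produce time-series with quite different variability, which is the comparability problem the abstract warns about.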


2021 ◽  
Vol 13 (11) ◽  
pp. 6018
Author(s):  
Theo Lynn ◽  
Pierangelo Rosati ◽  
Antonia Egli ◽  
Stelios Krinidis ◽  
Komninos Angelakoglou ◽  
...  

The building stock accounts for a significant portion of worldwide energy consumption and greenhouse gas emissions. While the majority of the existing building stock has poor energy performance, deep renovation efforts are stymied by a wide range of human, technological, organisational and external environment factors across the value chain. A key challenge is properly integrating appropriate human resources, materials, fabrication, information and automation systems, and knowledge management to achieve the required outcomes and meet the relevant regulatory standards, while satisfying a wide range of stakeholders with differing, often conflicting, motivations. RINNO is a Horizon 2020 project that aims to deliver a set of processes that, working together, provide a system, repository, marketplace and enabling workflow process for managing deep renovation projects from inception to implementation. This paper presents a roadmap for an open renovation platform for managing and delivering deep renovation projects for residential buildings, based on seven design principles. We illustrate a preliminary stepwise framework for applying the platform across the full lifecycle of a deep renovation project. Based on this work, RINNO will develop a new open renovation software platform that will be implemented and evaluated at four pilot sites with varying construction, regulatory, market and climate contexts.


2003 ◽  
Vol 125 (3) ◽  
pp. 319-324 ◽  
Author(s):  
C. B. Coetzer ◽  
J. A. Visser

This paper introduces a compact model to predict the interfin velocity and the resulting pressure drop across a longitudinal fin heat sink with tip bypass. The compact model is based on results obtained from a comprehensive CFD study of the behavior of both laminar and turbulent flow in longitudinal fin heat sinks with tip bypass. The new compact flow prediction model is critically compared to existing compact models as well as to the results obtained from the CFD simulations. The results indicate that the new compact model improves the accuracy of the predicted pressure drop by at least 4.5% over the wide range of heat sink geometries and Reynolds numbers simulated. The improved accuracy of the velocity distribution between the fins also increases the accuracy of the heat transfer coefficients calculated for the heat sinks.


2020 ◽  
Author(s):  
Johannes H. Uhl ◽  
Stefan Leyk ◽  
Caitlin M. McShane ◽  
Anna E. Braswell ◽  
Dylan S. Connor ◽  
...  

Abstract. The collection, processing and analysis of remote sensing data since the early 1970s has rapidly improved our understanding of change on the Earth's surface. While satellite-based Earth observation has proven to be of vast scientific value, these data are typically confined to recent decades of observation and often lack important thematic detail. Here, we advance in this arena by constructing new spatially explicit settlement data for the United States that extend back to the early nineteenth century and are consistently enumerated at fine spatial and temporal granularity (i.e., 250 m spatial and 5-year temporal resolution). We create these time series using a large, novel building stock database to extract and map retrospective, fine-grained spatial distributions of built-up properties in the conterminous United States from 1810 to 2015. From our data extraction, we analyse and publish a series of gridded geospatial datasets that enable novel retrospective historical analysis of the built environment at unprecedented spatial and temporal resolution. The datasets are available at https://dataverse.harvard.edu/dataverse/hisdacus (Uhl and Leyk, 2020a, b, c, d).


1970 ◽  
Vol 27 (4) ◽  
pp. 737-742 ◽  
Author(s):  
R. G. Dowd ◽  
E. Bakken ◽  
O. Nakken

Two sonic methods for estimation of abundance of fish stocks, the echo integrator and the digital counter methods, were compared on single and schooling fish in the Lofoten area of Norway during March 1969. Good correlation was obtained between the two systems for both situations, but the slopes of the regressions of integrated values on the digital counter differed significantly between low and high density fish concentrations. This suggests that the two systems treated the echo information differently, but nevertheless maintained a linear relation between themselves over a wide range of counts.
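The slope comparison described above can be sketched numerically. The data below are entirely synthetic (the 1969 Lofoten measurements are not reproduced here): two density regimes are simulated with different gains, and a separate least-squares regression of integrator output on counter output is fitted to each.

```python
import numpy as np

# Synthetic illustration of comparing regression slopes between
# low- and high-density fish concentrations. All values invented.
def fit_slope(counter, integrator):
    """Least-squares slope and intercept of integrator on counter."""
    slope, intercept = np.polyfit(counter, integrator, deg=1)
    return slope, intercept

rng = np.random.default_rng(1)
low_counts = rng.uniform(1, 50, 30)        # low-density digital counts
high_counts = rng.uniform(200, 1000, 30)   # high-density digital counts
# Different gains per regime mimic the differing slopes the study found.
low_integrated = 2.0 * low_counts + rng.normal(0, 2, 30)
high_integrated = 1.4 * high_counts + rng.normal(0, 20, 30)

slope_low, _ = fit_slope(low_counts, low_integrated)
slope_high, _ = fit_slope(high_counts, high_integrated)
```

Within each regime the relation stays linear, but the differing slopes indicate that the two systems weight echo information differently as density changes, matching the study's conclusion.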


2022 ◽  
Vol 31 (2) ◽  
pp. 1-32
Author(s):  
Luca Ardito ◽  
Andrea Bottino ◽  
Riccardo Coppola ◽  
Fabrizio Lamberti ◽  
Francesco Manigrasso ◽  
...  

In automated Visual GUI Testing (VGT) for Android devices, the available tools often suffer from low robustness to mobile fragmentation, leading to incorrect results when running the same tests on different devices. To mitigate these issues, we evaluate two feature-matching-based approaches for widget detection in VGT scripts, which use, respectively, the complete full-screen snapshot of the application (Fullscreen) and the cropped images of its widgets (Cropped) as visual locators to match on emulated devices. Our analysis includes validating the portability of different feature-based visual locators over various apps and devices and evaluating their robustness in terms of cross-device portability and correctly executed interactions. We assessed our results through a comparison with two state-of-the-art tools, EyeAutomate and Sikuli. Despite a limited increase in computational burden, our Fullscreen approach outperformed the state-of-the-art tools in terms of correctly identified locators across a wide range of devices and led to a 30% increase in passing tests. Our work shows that the dependability of VGT tools can be improved by bridging the testing and computer vision communities. This connection enables the design of algorithms targeted to domain-specific needs, and thus inherently more usable and robust.
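The widget-locator idea can be illustrated with a deliberately simplified stand-in: locating a cropped widget image inside a full-screen capture by normalised cross-correlation. This is not the paper's method; the feature-matching approaches it evaluates use proper local feature descriptors (e.g. SIFT/ORB-style), which tolerate the scaling and rendering differences across devices that plain template matching cannot.

```python
import numpy as np

# Toy widget locator: exhaustive normalised cross-correlation of a
# cropped widget against every position in a grayscale screen capture.
def locate_widget(screen, widget):
    """Return (row, col) of the best match of `widget` inside `screen`."""
    wh, ww = widget.shape
    w_norm = (widget - widget.mean()) / (widget.std() + 1e-9)
    best, best_pos = -np.inf, (0, 0)
    for r in range(screen.shape[0] - wh + 1):
        for c in range(screen.shape[1] - ww + 1):
            patch = screen[r:r + wh, c:c + ww]
            p_norm = (patch - patch.mean()) / (patch.std() + 1e-9)
            score = float((w_norm * p_norm).mean())
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos
```

A VGT script would then drive a tap or assertion at the returned coordinates; robustness across devices comes from replacing this brittle pixel correlation with feature matching.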

