Identifying animal species in camera trap images using deep learning and citizen science

2018 ◽  
Vol 10 (1) ◽  
pp. 80-91 ◽  
Author(s):  
Marco Willi ◽  
Ross T. Pitman ◽  
Anabelle W. Cardoso ◽  
Christina Locke ◽  
Alexandra Swanson ◽  
...  
2019 ◽  
Author(s):  
Hayder Yousif

[ACCESS RESTRICTED TO THE UNIVERSITY OF MISSOURI AT REQUEST OF AUTHOR.] Camera traps are a popular tool for sampling animal populations because they are noninvasive, detect a variety of species, and can record many thousands of animal detections per deployment. Cameras are typically set to take bursts of multiple images for each detection and are deployed in arrays of dozens or hundreds of sites, often resulting in millions of images per study. Converting such large image collections into animal detection records is a daunting task, made worse by situations that generate copious empty pictures from false triggers (e.g., camera malfunction or moving vegetation) or pictures of humans. We offer the first widely available computer vision tool for processing camera trap images. Our results show that the tool is accurate and yields substantial time savings when processing large image datasets, thus improving our ability to monitor wildlife at large scales with camera traps.

In this dissertation, we develop new image/video processing and computer vision algorithms for efficient and accurate object detection and sequence-level classification from natural-scene camera-trap images. This work addresses the following five major tasks:

(1) Human-animal detection. We develop a fast and accurate scheme for human-animal detection from highly cluttered camera-trap images using joint background modeling and deep learning classification. Specifically, we first develop an effective background modeling and subtraction scheme to generate region proposals for the foreground objects. We then develop a cross-frame image patch verification step to reduce the number of foreground object proposals. Finally, we perform a complexity-accuracy analysis of deep convolutional neural networks (DCNNs) to develop a fast deep learning classification scheme that sorts these region proposals into three categories: human, animal, and background patches. The optimized DCNN maintains a high level of accuracy while reducing computational complexity by a factor of 14. Our experimental results demonstrate that the proposed method outperforms existing methods on the camera-trap dataset.

(2) Object segmentation from natural scenes. We first design and train a fast DCNN for animal-human-background object classification, which analyzes the input image to generate multi-layer feature maps representing the responses of different image regions to the animal-human-background classifier. From these feature maps, we construct a so-called deep objectness graph for accurate animal-human object segmentation with graph cut. The segmented object regions from each image in the sequence are then verified and fused in the temporal domain using background modeling. Our experimental results demonstrate that the proposed method outperforms existing state-of-the-art methods on the camera-trap dataset with highly cluttered natural scenes.

(3) DCNN-domain background modeling. We replace the background model with a new, more efficient deep learning-based model. The input frames are segmented into regions through the deep objectness graph, and the region boundaries of the input frames are multiplied by each other to obtain the regions of movement patches. We construct the background representation using the temporal information of the co-located patches. We propose to fuse the subtraction and foreground/background pixel classification of two representations: (a) chromaticity and (b) deep pixel information.
(4) Sequence-level object classification. We propose a new method for sequence-level video recognition, with application to animal species recognition from camera-trap images. First, using background modeling and cross-frame patch verification, we develop a scheme to generate candidate object regions, or object proposals, in the spatiotemporal domain. Second, we develop a dynamic programming optimization approach to identify the best temporal subset of object proposals. Third, we aggregate and fuse the features of these selected object proposals for efficient sequence-level animal species classification.
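The two-stage shape of this pipeline (background subtraction proposes foreground patches, a compact DCNN labels them, and per-frame results are fused into a sequence-level decision) can be sketched briefly. The following is a minimal illustration under assumed components, OpenCV's MOG2 background subtractor and a torchvision MobileNetV2, not the author's actual models, weights, or thresholds; the per-frame max-confidence selection rule here is a simplification standing in for the dissertation's dynamic-programming proposal selection.

```python
import cv2
import numpy as np
import torch
import torch.nn.functional as F
from torchvision import models, transforms

# Stage 1: background modeling/subtraction generates foreground region proposals.
bg_model = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=32)

# Stage 2: a compact 3-way patch classifier (human / animal / background).
net = models.mobilenet_v2(num_classes=3)  # assume fine-tuned weights are loaded here
net.eval()
to_tensor = transforms.Compose([
    transforms.ToPILImage(), transforms.Resize((224, 224)), transforms.ToTensor()])

def frame_proposals(frame_bgr, min_area=900):
    """Classify background-subtraction proposals in one frame."""
    mask = bg_model.apply(frame_bgr)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    out = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if w * h < min_area:               # drop tiny false triggers (vegetation etc.)
            continue
        patch = cv2.cvtColor(frame_bgr[y:y + h, x:x + w], cv2.COLOR_BGR2RGB)
        with torch.no_grad():
            probs = F.softmax(net(to_tensor(patch).unsqueeze(0)), dim=1)[0]
        out.append(((x, y, w, h), probs.numpy()))
    return out

def sequence_label(frames):
    """Fuse per-frame proposal scores into one sequence-level decision."""
    kept = []
    for f in frames:
        props = frame_proposals(f)
        if props:                          # keep the most confident proposal per frame
            kept.append(max(props, key=lambda p: p[1].max())[1])
    if not kept:
        return "empty"                     # no foreground found: likely a false trigger
    fused = np.mean(kept, axis=0)          # simple average fusion across the burst
    return ["human", "animal", "background"][int(np.argmax(fused))]
```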


Drones ◽  
2021 ◽  
Vol 5 (1) ◽  
pp. 6
Author(s):  
Apostolos Papakonstantinou ◽  
Marios Batsaris ◽  
Spyros Spondylidis ◽  
Konstantinos Topouzelis

Marine litter (ML) accumulation in the coastal zone has been recognized as a major problem of our time, as it can dramatically affect the environment, marine ecosystems, and coastal communities. Existing monitoring methods fail to capture the spatiotemporal changes and dynamics of ML concentrations. Recent work has shown that unmanned aerial systems (UAS), combined with computer vision methods, provide a feasible alternative for ML monitoring. In this context, we propose a citizen science UAS data acquisition and annotation protocol combined with deep learning techniques for the automatic detection and mapping of ML concentrations in the coastal zone. Five convolutional neural networks (CNNs) were trained to classify UAS image tiles into two classes: (a) litter and (b) no litter. Testing the CNNs' ability to generalize to an unseen dataset, we found that the VGG19 CNN returned an overall accuracy of 77.6% and an F-score of 77.42%. ML density maps were created from the automated classification results and compared with those produced by a manual screening classification, demonstrating our approach's geographical transferability to new, unseen beaches. Although ML recognition remains a challenging task, this study provides evidence of the feasibility of a citizen science UAS-based monitoring method, combined with deep learning techniques, for quantifying the ML load in the coastal zone using density maps.
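For a sense of the classification setup, a binary VGG19 tile classifier can be assembled in Keras in a few lines. This is a hedged sketch: the input size, frozen backbone, and dense head are assumptions for illustration, not the authors' published configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG19

# Pretrained VGG19 backbone, frozen for transfer learning on UAS image tiles.
base = VGG19(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.3),
    layers.Dense(1, activation="sigmoid"),  # P(litter) for each tile
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(train_tiles, validation_data=val_tiles, epochs=10)  # tf.data pipelines assumed
```

Per-tile probabilities from such a model can then be rasterized back onto the beach orthomosaic to produce the kind of ML density map the study compares against manual screening.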


2018 ◽  
Vol 10 (4) ◽  
pp. 585-590 ◽  
Author(s):  
Michael A. Tabak ◽  
Mohammad S. Norouzzadeh ◽  
David W. Wolfson ◽  
Steven J. Sweeney ◽  
Kurt C. Vercauteren ◽  
...  

2020 ◽  
Author(s):  
Thel Lucie ◽  
Chamaillé-Jammes Simon ◽  
Keurinck Léa ◽  
Catala Maxime ◽  
Packer Craig ◽  
...  

Ecologists increasingly rely on camera trap data to estimate a wide range of biological parameters, such as occupancy, population abundance, or activity patterns. Because of the huge amount of data collected, the assistance of non-scientists is often sought, but an assessment of the data quality is a prerequisite to its use.

We tested whether citizen science data from one of the largest citizen science projects, Snapshot Serengeti, could be used to study breeding phenology, an important life-history trait. In particular, we tested whether the presence of juveniles (less than one or 12 month(s) old) of three ungulate species in the Serengeti: topi Damaliscus jimela, kongoni Alcelaphus buselaphus, and Grant's gazelle Nanger granti, could be reliably detected by "naive" volunteers vs. trained observers. We expected a positive correlation between the proportion of volunteers identifying juveniles and their effective presence within photographs, as assessed by the trained observers.

We first checked the agreement between the trained observers for age classes and species and found it to be good (Fleiss' κ > 0.61 for juveniles of less than one and 12 month(s) old), suggesting that morphological criteria can be used successfully to determine age. The relationship between the proportion of volunteers detecting juveniles less than a month old and their actual presence plateaued at 0.45 for Grant's gazelle, and reached 0.70 for topi and 0.56 for kongoni. The same relationships were much stronger for juveniles younger than 12 months, to the point that their presence was detected perfectly by volunteers for topi and kongoni.

Volunteers' classifications allow a rough, moderately accurate, but quick sorting of photograph sequences with/without juveniles. Obtaining accurate data, however, appears more difficult. We discuss the limitations of using citizen science camera trap data to study breeding phenology, and options to improve the detection of juveniles, such as adding aging criteria to the online citizen science platforms or using machine learning.
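The agreement statistic used above, Fleiss' κ, generalizes Cohen's κ to more than two raters and is available off the shelf in statsmodels. A minimal sketch follows; the ratings table is toy data, not the study's annotations.

```python
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# Rows = photograph sequences, columns = trained observers; values are age-class
# codes (0 = no juvenile, 1 = juvenile < 1 month, 2 = juvenile < 12 months).
ratings = np.array([
    [1, 1, 1],
    [0, 0, 1],
    [2, 2, 2],
    [0, 0, 0],
    [1, 2, 1],
])
table, _ = aggregate_raters(ratings)   # per-subject counts for each category
print(f"Fleiss' kappa = {fleiss_kappa(table):.2f}")
```

Values above roughly 0.6 are conventionally read as substantial agreement, which is why κ > 0.61 justifies treating the trained observers' consensus as a reference for evaluating the volunteers.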


Author(s):  
Herman Njoroge Chege

Point 1: Deep learning algorithms are revolutionizing how hypothesis generation, pattern recognition, and prediction occur in the sciences. In the life sciences, particularly biology and its subfields, the use of deep learning is slowly but steadily increasing. However, prototyping and developing tools for practical applications remain the domain of experienced coders. Furthermore, many tools can be quite costly and difficult to put together without expertise in artificial intelligence (AI) computing.

Point 2: We built a biological species classifier that leverages existing open-source tools and libraries. We designed the corresponding tutorial for users with basic skills in Python and a small but well-curated image dataset. We included annotated code in the form of a Jupyter Notebook that can be adapted to any image dataset, ranging from satellite images and animals to bacteria. The prototype is publicly available and can be adapted for citizen science as well as other applications not envisioned in this paper.

Point 3: We illustrate our approach with a case study of 219 images of three seastar species. We show that with minimal parameter tuning of the AI pipeline we can create a classifier with high accuracy. We include additional approaches for understanding misclassified images and for curating the dataset to increase accuracy.

Point 4: The power of AI approaches is becoming increasingly accessible. We can now readily build and prototype species classifiers that can have a great impact on research requiring species identification and other types of image analysis. Such tools have implications for citizen science, biodiversity monitoring, and a wide range of ecological applications.
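The tutorial itself should be consulted for the authors' exact pipeline, but the core recipe it describes (a frozen pretrained backbone plus a small softmax head, trained on a few hundred images arranged one folder per species) can be sketched as below. The folder name, backbone choice, and hyperparameters are illustrative assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Hypothetical layout: seastars/<species_name>/*.jpg, one subfolder per class.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "seastars/", image_size=(160, 160), batch_size=16,
    validation_split=0.2, subset="training", seed=42)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "seastars/", image_size=(160, 160), batch_size=16,
    validation_split=0.2, subset="validation", seed=42)

base = tf.keras.applications.MobileNetV2(
    input_shape=(160, 160, 3), include_top=False, weights="imagenet")
base.trainable = False                          # freeze the pretrained backbone

model = models.Sequential([
    layers.Rescaling(1.0 / 127.5, offset=-1),   # MobileNetV2 expects inputs in [-1, 1]
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(3, activation="softmax"),      # three seastar species
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=15)
```

Inspecting the validation images the model gets wrong (e.g., by filtering predictions that disagree with labels) is one simple way to carry out the error analysis and dataset curation mentioned in Point 3.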


PLoS ONE ◽  
2019 ◽  
Vol 14 (6) ◽  
pp. e0218086 ◽  
Author(s):  
Daniel Langenkämper ◽  
Erik Simon-Lledó ◽  
Brett Hosking ◽  
Daniel O. B. Jones ◽  
Tim W. Nattkemper

2021 ◽  
Vol 6 (1) ◽  
Author(s):  
Monica Lasky ◽  
Arielle Parsons ◽  
Stephanie Schuttler ◽  
Alexandra Mash ◽  
Lincoln Larson ◽  
...  
