FoxMask: a new automated tool for animal detection in camera trap images

2019 ◽  
Author(s):  
Eric Devost ◽  
Sandra Lai ◽  
Nicolas Casajus ◽  
Dominique Berteaux

SUMMARY

Camera traps now represent a reliable, efficient and cost-effective technique to monitor wildlife and collect biological data in the field. However, efficiently extracting information from the massive amount of images generated is often extremely time-consuming and may now represent the most rate-limiting step in camera trap studies.

To help overcome this challenge, we developed FoxMask, a new tool performing the automatic detection of animal presence in short sequences of camera trap images. FoxMask uses background estimation and foreground segmentation algorithms to detect the presence of moving objects (most likely, animals) on images.

We analyzed a sample dataset from camera traps used to monitor activity on arctic fox Vulpes lagopus dens to test the parameter settings and the performance of the algorithm. The shape and color of arctic foxes and their background at snowmelt and during the summer growing season were highly variable, thus offering challenging testing conditions. We compared the automated animal detection performed by FoxMask to a manual review of the image series.

The performance analysis indicated that the proportion of images correctly classified by FoxMask as containing an animal or not was very high (> 90%). FoxMask is thus highly efficient at reducing the workload by eliminating most false triggers (images without an animal). We provide parameter recommendations to facilitate usage and we present the cases where the algorithm performs less efficiently to stimulate further development.

FoxMask is an easy-to-use tool freely available to ecologists performing camera trap data extraction. By minimizing analytical time, computer-assisted image analysis will allow collection of increased sample sizes and testing of new biological questions.
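The core idea named above, background estimation followed by foreground segmentation, can be illustrated in a few lines. The following numpy-only toy is a minimal sketch of the general technique, not FoxMask's actual implementation; the function name, deviation threshold and minimum-area cutoff are all illustrative assumptions:

```python
import numpy as np

def detect_moving_objects(frames, threshold=30, min_area_frac=0.01):
    """Flag frames of a short sequence that contain a moving object.

    frames: sequence of same-shape grayscale images (2-D uint8 arrays).
    The per-pixel median across the sequence serves as the background
    estimate; pixels deviating from it by more than `threshold` are
    treated as foreground, and a frame is flagged when the foreground
    covers more than `min_area_frac` of the image (to suppress noise).
    """
    stack = np.stack([f.astype(np.int16) for f in frames])
    background = np.median(stack, axis=0)      # background estimation
    flags = []
    for frame in stack:
        foreground = np.abs(frame - background) > threshold  # segmentation
        flags.append(bool(foreground.mean() > min_area_frac))
    return flags
```

A sequence of mostly empty frames with one frame containing a bright blob would have only that frame flagged.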

2020 ◽  
Author(s):  
Lucie Thel ◽  
Simon Chamaillé-Jammes ◽  
Léa Keurinck ◽  
Maxime Catala ◽  
Craig Packer ◽  
...  

Abstract

Ecologists increasingly rely on camera trap data to estimate a wide range of biological parameters such as occupancy, population abundance or activity patterns. Because of the huge amount of data collected, the assistance of non-scientists is often sought, but an assessment of the data quality is a prerequisite to their use.

We tested whether citizen science data from one of the largest citizen science projects - Snapshot Serengeti - could be used to study breeding phenology, an important life-history trait. In particular, we tested whether the presence of juveniles (less than one or 12 months old) of three ungulate species in the Serengeti - topi Damaliscus jimela, kongoni Alcelaphus buselaphus and Grant’s gazelle Nanger granti - could be reliably detected by the “naive” volunteers vs. trained observers. We expected a positive correlation between the proportion of volunteers identifying juveniles and their effective presence within photographs, as assessed by the trained observers.

We first checked the agreement between the trained observers for age classes and species and found good agreement between them (Fleiss’ κ > 0.61 for juveniles of less than one and 12 month(s) old), suggesting that morphological criteria can be used successfully to determine age. The relationship between the proportion of volunteers detecting juveniles less than a month old and their actual presence plateaued at 0.45 for Grant’s gazelle and reached 0.70 for topi and 0.56 for kongoni. The same relationships were however much stronger for juveniles younger than 12 months, to the point that their presence was perfectly detected by volunteers for topi and kongoni.

Volunteers’ classification allows a rough, moderately accurate, but quick sorting of photograph sequences with/without juveniles. Obtaining accurate data, however, appears more difficult. We discuss the limitations of using citizen science camera trap data to study breeding phenology, and options to improve the detection of juveniles, such as the addition of aging criteria on online citizen science platforms, or the use of machine learning.
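Fleiss' κ, the agreement statistic reported above for the trained observers, can be computed directly from a subjects × categories count matrix. A minimal sketch (not the authors' code; input layout is the standard one where each row sums to the common number of raters):

```python
import numpy as np

def fleiss_kappa(counts):
    """Fleiss' kappa for inter-rater agreement.

    counts: (n_subjects, n_categories) array; counts[i, j] is the
    number of raters who assigned subject i to category j. Every
    subject must be rated by the same number of raters.
    """
    counts = np.asarray(counts, dtype=float)
    n_raters = counts.sum(axis=1)[0]
    # per-subject observed agreement
    p_i = (np.square(counts).sum(axis=1) - n_raters) / (n_raters * (n_raters - 1))
    p_bar = p_i.mean()                        # mean observed agreement
    p_j = counts.sum(axis=0) / counts.sum()   # category proportions
    p_e = np.square(p_j).sum()                # expected chance agreement
    return (p_bar - p_e) / (1 - p_e)
```

Perfect agreement among raters yields κ = 1; values above roughly 0.61, as in the study, are conventionally read as substantial agreement.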


2019 ◽  
Author(s):  
Hayder Yousif

[ACCESS RESTRICTED TO THE UNIVERSITY OF MISSOURI AT REQUEST OF AUTHOR.] Camera traps are a popular tool to sample animal populations because they are noninvasive, detect a variety of species, and can record many thousands of animal detections per deployment. Cameras are typically set to take bursts of multiple images for each detection, and are deployed in arrays of dozens or hundreds of sites, often resulting in millions of images per study. The task of converting images to animal detection records from such large image collections is daunting, and made worse by situations that generate copious empty pictures from false triggers (e.g. camera malfunction or moving vegetation) or pictures of humans. We offer the first widely available computer vision tool for processing camera trap images. Our results show that the tool is accurate and results in substantial time savings for processing large image datasets, thus improving our ability to monitor wildlife across large scales with camera traps.

In this dissertation, we have developed new image/video processing and computer vision algorithms for efficient and accurate object detection and sequence-level classification from natural-scene camera-trap images. This work addresses the following major tasks:

(1) Human-animal detection. We develop a fast and accurate scheme for human-animal detection from highly cluttered camera-trap images using joint background modeling and deep learning classification. Specifically, we first develop an effective background modeling and subtraction scheme to generate region proposals for the foreground objects. We then develop a cross-frame image patch verification to reduce the number of foreground object proposals. Finally, we perform a complexity-accuracy analysis of deep convolutional neural networks (DCNNs) to develop a fast deep learning classification scheme that classifies these region proposals into three categories: humans, animals, and background patches. The optimized DCNN is able to maintain a high level of accuracy while reducing the computational complexity by 14 times. Our experimental results demonstrate that the proposed method outperforms existing methods on the camera-trap dataset.

(2) Object segmentation from natural scenes. We first design and train a fast DCNN for animal-human-background object classification, which is used to analyze the input image to generate multi-layer feature maps representing the responses of different image regions to the animal-human-background classifier. From these feature maps, we construct the so-called deep objectness graph for accurate animal-human object segmentation with graph cut. The segmented object regions from each image in the sequence are then verified and fused in the temporal domain using background modeling. Our experimental results demonstrate that our proposed method outperforms existing state-of-the-art methods on the camera-trap dataset with highly cluttered natural scenes.

(3) DCNN-domain background modeling. We replace the background model with a new, more efficient deep-learning-based model. The input frames are segmented into regions through the deep objectness graph, and the region boundaries of the input frames are then multiplied by each other to obtain the regions of movement patches. We construct the background representation using the temporal information of the co-located patches. We propose to fuse the subtraction and foreground/background pixel classification of two representations: (a) chromaticity and (b) deep pixel information.

(4) Sequence-level object classification. We propose a new method for sequence-level video recognition with application to animal species recognition from camera trap images. First, using background modeling and cross-frame patch verification, we develop a scheme to generate candidate object regions or object proposals in the spatiotemporal domain. Second, we develop a dynamic programming optimization approach to identify the best temporal subset of object proposals. Third, we aggregate and fuse the features of these selected object proposals for efficient sequence-level animal species classification.
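To illustrate what sequence-level classification buys over per-frame decisions, the simplest late-fusion scheme averages per-frame class scores before picking a label. This toy (a generic illustration, not the dissertation's dynamic-programming proposal selection) shows the idea:

```python
import numpy as np

def sequence_level_label(frame_scores):
    """Aggregate per-frame class scores into one sequence-level label.

    frame_scores: (n_frames, n_classes) array of per-frame softmax
    outputs (e.g. from a classifier applied to object proposals).
    Scores are averaged over the sequence before taking the argmax,
    so a species seen clearly in only a few frames can still
    dominate the sequence decision.
    """
    scores = np.asarray(frame_scores, dtype=float)
    fused = scores.mean(axis=0)   # late fusion across frames
    return int(np.argmax(fused))
```

For example, a three-frame burst where the first frame weakly favors class 0 but the other two strongly favor class 1 resolves to class 1 at the sequence level.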


2016 ◽  
Vol 38 (1) ◽  
pp. 44 ◽  
Author(s):  
Paul D. Meek ◽  
Karl Vernes

Camera trapping is increasingly recognised as a survey tool akin to conventional small mammal survey methods such as Elliott trapping. While there are many cost and resource advantages of using camera traps, their adoption should not compromise scientific rigour. Rodents are a common element of most small mammal surveys. In 2010 we deployed camera traps to measure whether the endangered Hastings River mouse (Pseudomys oralis) could be detected and identified with an acceptable level of precision by camera traps when similar-looking sympatric small mammals were present. A comparison of three camera trap models revealed that camera traps can detect a wide range of small mammals, although white flash colour photography was necessary to capture characteristic features of morphology. However, the accurate identification of some small mammals, including P. oralis, was problematic; we conclude therefore that camera traps alone are not appropriate for P. oralis surveys, even though they might at times successfully detect them. We discuss the need for refinement of the methodology, further testing of camera trap technology, and the development of computer-assisted techniques to overcome problems associated with accurate species identification.


2021 ◽  
Author(s):  
Christophe Bonenfant ◽  
Ken Stratford ◽  
Stephanie Periquet

Camera traps are a versatile and widely adopted tool to collect biological data in wildlife conservation and management. While estimating population abundance from camera-trap data is the primary goal of many projects, which population estimator is suitable for such data still needs to be investigated. We took advantage of a 21-day camera-trap monitoring study of giraffes at Ongava Game Reserve, Namibia, to compare capture-recapture (CR), saturation-curve and N-mixture estimators of population abundance. A marked variation in the detection probability of giraffes was observed over time and between individuals. Giraffes were also less likely to be detected after they had been seen at a waterhole equipped with cameras (visit frequency of f = 0.25). We estimated population size at 119 giraffes (CV = 0.10) with the best CR estimator. All other estimators we applied were biased by ca. −20% to more than +80%, because they did not account for the main sources of heterogeneity in detection probability. We found that N-mixture estimators were much less forgiving of modelling choices than CR estimators. Double counts were problematic for N-mixture models, challenging the use of raw counts at waterholes to monitor giraffe abundance.
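For readers unfamiliar with CR estimators, the simplest two-session version is the Lincoln-Petersen estimator; the sketch below implements its bias-corrected (Chapman) form. This is shown purely for illustration: the study used richer CR models that account for heterogeneity in detection probability, which this simple estimator ignores.

```python
def chapman_estimate(n1, n2, m2):
    """Chapman's bias-corrected Lincoln-Petersen estimator.

    n1: individuals identified in the first capture session
    n2: individuals identified in the second capture session
    m2: individuals seen in both sessions (the "recaptures")
    Returns the estimated population size, assuming a closed
    population and equal catchability.
    """
    return (n1 + 1) * (n2 + 1) / (m2 + 1) - 1
```

With few recaptures relative to the session counts, the estimate grows quickly, which is why unmodelled detection heterogeneity (as reported above) can bias abundance estimates so strongly.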


2018 ◽  
Vol 115 (25) ◽  
pp. E5716-E5725 ◽  
Author(s):  
Mohammad Sadegh Norouzzadeh ◽  
Anh Nguyen ◽  
Margaret Kosmala ◽  
Alexandra Swanson ◽  
Meredith S. Palmer ◽  
...  

Having accurate, detailed, and up-to-date information about the location and behavior of animals in the wild would improve our ability to study and conserve ecosystems. We investigate the ability to automatically, accurately, and inexpensively collect such data, which could help catalyze the transformation of many fields of ecology, wildlife biology, zoology, conservation biology, and animal behavior into “big data” sciences. Motion-sensor “camera traps” enable collecting wildlife pictures inexpensively, unobtrusively, and frequently. However, extracting information from these pictures remains an expensive, time-consuming, manual task. We demonstrate that such information can be automatically extracted by deep learning, a cutting-edge type of artificial intelligence. We train deep convolutional neural networks to identify, count, and describe the behaviors of 48 species in the 3.2 million-image Snapshot Serengeti dataset. Our deep neural networks automatically identify animals with >93.8% accuracy, and we expect that number to improve rapidly in years to come. More importantly, if our system classifies only the images it is confident about, it can automate animal identification for 99.3% of the data while still performing at the same 96.6% accuracy as that of crowdsourced teams of human volunteers, saving >8.4 y (i.e., >17,000 h at 40 h/wk) of human labeling effort on this 3.2 million-image dataset. Those efficiency gains highlight the importance of using deep neural networks to automate data extraction from camera-trap images, reducing a roadblock for this widely used technology. Our results suggest that deep learning could enable the inexpensive, unobtrusive, high-volume, and even real-time collection of a wealth of information about vast numbers of animals in the wild.
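The confidence-based triage described above, where only high-confidence predictions are automated and the remainder go to human volunteers, can be sketched as follows. The function name and threshold value are illustrative assumptions, not the paper's actual pipeline:

```python
import numpy as np

def triage_by_confidence(probs, threshold=0.95):
    """Split model predictions into auto-labelled and human-review sets.

    probs: (n_images, n_classes) array of softmax outputs. Images
    whose top-class probability reaches `threshold` are labelled
    automatically; the rest are flagged for manual review.
    Returns (labels, review): labels[i] is the predicted class,
    or -1 where review[i] is True.
    """
    probs = np.asarray(probs, dtype=float)
    top = probs.max(axis=1)            # top-class confidence per image
    labels = probs.argmax(axis=1)
    review = top < threshold           # low confidence -> human review
    labels = np.where(review, -1, labels)
    return labels, review
```

Raising the threshold trades a smaller automated fraction for higher accuracy on the automated subset, which is exactly the trade-off the 99.3%/96.6% figures describe.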


2007 ◽  
Vol 6 (1) ◽  
pp. 81-95
Author(s):  
Surendra Varma ◽  
André Pittet ◽  
H. S. Jamadagni

To evaluate the application of camera-trap technology in population dynamics studies of the Asian elephant, indigenously designed, cost-effective, infrared-triggered camera-traps were used. Usability of pictures was defined based on quality, clarity and positioning of the subject. With 99 pictures of 330 elephants, 20 sequences were obtained and 44 distinct individuals were identified. It was found that 38.6% were adult females, 4.5% adult males, 13.6% sub-adult females, 6.8% sub-adult males, 20.4% juvenile females and 13.6% calves, while juvenile males were poorly represented (2%). These results were strikingly similar to those of other systematic and long-term studies.


2020 ◽  
Author(s):  
Michael A. Tabak ◽  
Mohammad S. Norouzzadeh ◽  
David W. Wolfson ◽  
Erica J. Newton ◽  
Raoul K. Boughton ◽  
...  

Abstract

Motion-activated wildlife cameras (or “camera traps”) are frequently used to remotely and non-invasively observe animals. The vast number of images collected from camera trap projects has prompted some biologists to employ machine learning algorithms to automatically recognize species in these images, or at least filter out images that do not contain animals. These approaches are often limited by model transferability, as a model trained to recognize species from one location might not work as well for the same species in different locations. Furthermore, these methods often require advanced computational skills, making them inaccessible to many biologists.

We used 3 million camera trap images from 18 studies in 10 states across the United States of America to train two deep neural networks: one that recognizes 58 species, the “species model,” and one that determines whether an image is empty or contains an animal, the “empty-animal model.”

Our species model and empty-animal model had accuracies of 96.8% and 97.3%, respectively. Furthermore, the models performed well on some out-of-sample datasets, as the species model had 91% accuracy on species from Canada (accuracy range 36–91% across all out-of-sample datasets) and the empty-animal model achieved an accuracy of 91–94% on out-of-sample datasets from different continents.

Our software addresses some of the limitations of using machine learning to classify images from camera traps. By including many species from several locations, our species model is potentially applicable to many camera trap studies in North America. We also found that our empty-animal model can facilitate removal of images without animals globally.
We provide the trained models in an R package (MLWIC2: Machine Learning for Wildlife Image Classification in R), which contains Shiny Applications that allow scientists with minimal programming experience to use trained models and train new models in six neural network architectures with varying depths.


2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Katie O’Hearn ◽  
Cameron MacDonald ◽  
Anne Tsampalieros ◽  
Leo Kadota ◽  
Ryan Sandarage ◽  
...  

Abstract

Background: Standard practice for conducting systematic reviews (SRs) is time consuming and involves the study team screening hundreds or thousands of citations. As the volume of medical literature grows, the citation set sizes and corresponding screening efforts increase. While larger team sizes and alternate screening methods have the potential to reduce workload and decrease SR completion times, it is unknown whether investigators adapt team size or methods in response to citation set sizes. Using a cross-sectional design, we sought to understand how citation set size impacts (1) the total number of authors or individuals contributing to screening and (2) screening methods.

Methods: MEDLINE was searched in April 2019 for SRs on any health topic. A total of 1880 unique publications were identified and sorted into five citation set size categories (after deduplication): < 1,000, 1,001–2,500, 2,501–5,000, 5,001–10,000, and > 10,000. A random sample of 259 SRs was selected (~ 50 per category) for data extraction and analysis.

Results: With the exception of the pairwise t test comparing the under-1,000 and over-10,000 categories (median 5 vs. 6, p = 0.049), no statistically significant relationship was evident between author number and citation set size. While visual inspection was suggestive, statistical testing did not consistently identify a relationship between citation set size and the number of screeners (title-abstract, full text) or data extractors. However, logistic regression identified that investigators were significantly more likely to deviate from gold-standard screening methods (i.e., independent duplicate screening) with larger citation sets. For every doubling of citation set size, the odds of using gold-standard screening decreased by 15% and 20% at title-abstract and full-text review, respectively. Finally, few SRs reported using crowdsourcing (n = 2) or computer-assisted screening (n = 1).

Conclusions: Large citation set sizes present a challenge to SR teams, especially when faced with time-sensitive health policy questions. Our study suggests that with increasing citation set size, authors are less likely to adhere to gold-standard screening methods. It is possible that adjunct screening methods, such as crowdsourcing (large team) and computer-assisted technologies, may provide a viable solution for authors to complete their SRs in a timely manner.
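The reported per-doubling odds ratios are easy to apply: with an odds ratio of 0.85 per doubling (the 15% decrease at title-abstract review), going from 1,000 to 8,000 citations is three doublings and multiplies the odds of gold-standard screening by 0.85³ ≈ 0.61. A one-line helper, shown purely for illustration (the baseline odds value is hypothetical):

```python
def odds_after_doublings(base_odds, odds_ratio_per_doubling, k):
    """Odds of using gold-standard screening after k doublings of
    citation set size, given a constant per-doubling odds ratio
    (e.g. 0.85 for the reported 15% decrease at title-abstract
    review, 0.80 for the 20% decrease at full-text review)."""
    return base_odds * odds_ratio_per_doubling ** k
```

Note this multiplies odds, not probabilities; converting back to a probability requires p = odds / (1 + odds).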


Breathe ◽  
2016 ◽  
Vol 12 (2) ◽  
pp. 113-119 ◽  
Author(s):  
Phyllis Murphie ◽  
Nick Hex ◽  
Jo Setters ◽  
Stuart Little

“Non-delivery” home oxygen technologies that allow self-filling of ambulatory oxygen cylinders are emerging. They can offer a relatively unlimited supply of ambulatory oxygen to suitably assessed people who require long-term oxygen therapy (LTOT), provided they can use these systems safely and effectively. This allows users to be self-sufficient and facilitates longer periods of time away from home. The evolution and evidence base of this technology is reported, along with the experience of a national service review in Scotland (UK). Given that domiciliary oxygen services represent a significant cost to healthcare providers globally, these systems offer potential cost savings, are appealing to remote and rural regions due to the avoidance of cylinder delivery, and have a lower environmental impact due to reduced fossil fuel consumption and subsequently reduced carbon emissions. Evidence is emerging that self-fill/non-delivery oxygen systems can meet the ambulatory oxygen needs of many patients using LTOT and can have a positive impact on quality of life, increase time spent away from home and offer significant financial savings to healthcare providers.

Educational aims
- Provide an update for oxygen prescribers on options for home oxygen provision.
- Provide an update on the evidence base for available self-fill oxygen technologies.
- Provide an update for healthcare commissioners on the potential cost-effectiveness and environmental benefits of increased utilisation of self-fill oxygen systems.


1990 ◽  
Vol 258 (1) ◽  
pp. R274-R280 ◽  
Author(s):  
H. W. Reinhardt ◽  
U. Palm ◽  
R. Mohnhaupt ◽  
K. Dannenberg ◽  
W. Boemke

A computerized system is described, combining automatic collection of urine at short intervals (minutes) over long periods (days) with recordings of body temperature, MABP, and heart rate in chronically instrumented conscious dogs. During the studies the dogs are housed in metabolic cages. Indwelling catheters and electrical wires are connected to a specially designed swivel and directed out of the cage to the next room. Infusions, blood sampling, and monitoring can be performed from this room without disturbing the dogs. Three examples of recordings are given. In one of these examples, the sodium excretion patterns on 5 consecutive days under continuous saline infusion in one dog are evaluated. Urine was collected every 20 min. Sodium excretion showed cyclic variations. Fourier analysis exhibited 18-h periods and 4- to 8-h periods. The described system makes coherent time-series analysis possible for a variety of simultaneously recorded physiological variables and may thus acquire considerable importance for integrative physiology.
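The Fourier analysis mentioned above amounts to locating spectral peaks in an evenly sampled excretion series. A minimal numpy sketch that recovers a known period from 20-min samples (run here on synthetic data, not the dogs' recordings):

```python
import numpy as np

def dominant_period_hours(series, sample_minutes=20):
    """Return the period (in hours) of the strongest non-DC
    frequency component of an evenly sampled series, via the FFT."""
    series = np.asarray(series, dtype=float)
    # magnitude spectrum of the mean-removed series
    spectrum = np.abs(np.fft.rfft(series - series.mean()))
    # frequency axis in cycles per hour
    freqs = np.fft.rfftfreq(len(series), d=sample_minutes / 60.0)
    k = 1 + np.argmax(spectrum[1:])    # skip the DC bin
    return 1.0 / freqs[k]
```

Fed a synthetic 90-h record of 20-min samples containing a pure 18-h cycle, the function returns 18.0; on real data one would inspect several peaks (e.g. the 18-h and 4- to 8-h components reported) rather than only the largest.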

