AnimalFinder: A semi-automated system for animal detection in time-lapse camera trap images

2016 ◽  
Vol 36 ◽  
pp. 145-151 ◽  
Author(s):  
Jennifer L. Price Tack ◽  
Brian S. West ◽  
Conor P. McGowan ◽  
Stephen S. Ditchkoff ◽  
Stanley J. Reeves ◽  
...  
2019 ◽  
Author(s):  
Hayder Yousif

[ACCESS RESTRICTED TO THE UNIVERSITY OF MISSOURI AT REQUEST OF AUTHOR.] Camera traps are a popular tool to sample animal populations because they are noninvasive, detect a variety of species, and can record many thousands of animal detections per deployment. Cameras are typically set to take bursts of multiple images for each detection, and are deployed in arrays of dozens or hundreds of sites, often resulting in millions of images per study. The task of converting images to animal detection records from such large image collections is daunting, and made worse by situations that generate copious empty pictures from false triggers (e.g. camera malfunction or moving vegetation) or pictures of humans. We offer the first widely available computer vision tool for processing camera trap images. Our results show that the tool is accurate and results in substantial time savings for processing large image datasets, thus improving our ability to monitor wildlife across large scales with camera traps.

In this dissertation, we have developed new image/video processing and computer vision algorithms for efficient and accurate object detection and sequence-level classification from natural-scene camera-trap images. This work addresses the following five major tasks:

(1) Human-animal detection. We develop a fast and accurate scheme for human-animal detection from highly cluttered camera-trap images using joint background modeling and deep learning classification. Specifically, we first develop an effective background modeling and subtraction scheme to generate region proposals for the foreground objects. We then develop a cross-frame image patch verification step to reduce the number of foreground object proposals. Finally, we perform a complexity-accuracy analysis of deep convolutional neural networks (DCNNs) to develop a fast deep learning classification scheme that classifies these region proposals into three categories: human, animal, and background patches. The optimized DCNN is able to maintain a high level of accuracy while reducing the computational complexity by a factor of 14. Our experimental results demonstrate that the proposed method outperforms existing methods on the camera-trap dataset.

(2) Object segmentation from natural scenes. We first design and train a fast DCNN for animal-human-background object classification, which is used to analyze the input image and generate multi-layer feature maps representing the responses of different image regions to the animal-human-background classifier. From these feature maps, we construct the so-called deep objectness graph for accurate animal-human object segmentation with graph cut. The segmented object regions from each image in the sequence are then verified and fused in the temporal domain using background modeling. Our experimental results demonstrate that our proposed method outperforms existing state-of-the-art methods on the camera-trap dataset with highly cluttered natural scenes.

(3) DCNN-domain background modeling. We replace the background model with a new, more efficient deep-learning-based model. The input frames are segmented into regions through the deep objectness graph, and the region boundaries of the input frames are then multiplied by each other to obtain the movement-patch regions. We construct the background representation using the temporal information of the co-located patches. We propose to fuse the subtraction and foreground/background pixel classification of two representations: (a) chromaticity and (b) deep pixel information.
(4) Sequence-level object classification. We propose a new method for sequence-level video recognition with application to animal species recognition from camera-trap images. First, using background modeling and cross-frame patch verification, we develop a scheme to generate candidate object regions, or object proposals, in the spatiotemporal domain. Second, we develop a dynamic programming optimization approach to identify the best temporal subset of object proposals. Third, we aggregate and fuse the features of these selected object proposals for efficient sequence-level animal species classification.
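The region-proposal-then-classify pipeline at the heart of task (1) can be summarized in a few lines of code. The sketch below is a minimal illustration only, assuming OpenCV's stock MOG2 background subtractor and a generic three-class PyTorch model; the dissertation's own background model and cross-frame verification are more elaborate, and names like `propose_regions` are invented for illustration.

```python
import cv2
import numpy as np
import torch
import torch.nn.functional as F

CLASSES = ["human", "animal", "background"]  # the three patch categories in the text

def propose_regions(frames, min_area=400):
    """Generate foreground bounding boxes via background modeling and subtraction."""
    subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=False)
    kernel = np.ones((3, 3), np.uint8)
    proposals = []  # (frame_index, (x, y, w, h))
    for i, frame in enumerate(frames):
        mask = subtractor.apply(frame)
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)  # suppress speckle noise
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        for c in contours:
            if cv2.contourArea(c) >= min_area:
                proposals.append((i, cv2.boundingRect(c)))
    return proposals

def classify_patches(frames, proposals, model, device="cpu"):
    """Score each proposed patch with a DCNN as human, animal, or background."""
    model.eval()
    labeled = []
    with torch.no_grad():
        for i, (x, y, w, h) in proposals:
            patch = cv2.resize(frames[i][y:y + h, x:x + w], (224, 224))
            tensor = torch.from_numpy(patch).permute(2, 0, 1).float() / 255.0
            probs = F.softmax(model(tensor.unsqueeze(0).to(device)), dim=1)[0]
            labeled.append((i, (x, y, w, h), CLASSES[int(probs.argmax())]))
    return labeled
```

The two-stage structure mirrors the text: cheap motion cues prune the image to a handful of candidate patches, so the expensive DCNN only runs on a small fraction of each frame.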


2019 ◽  
Author(s):  
Eric Devost ◽  
Sandra Lai ◽  
Nicolas Casajus ◽  
Dominique Berteaux

SUMMARY

Camera traps now represent a reliable, efficient and cost-effective technique to monitor wildlife and collect biological data in the field. However, efficiently extracting information from the massive amount of images generated is often extremely time-consuming and may now represent the most rate-limiting step in camera trap studies.

To help overcome this challenge, we developed FoxMask, a new tool performing the automatic detection of animal presence in short sequences of camera trap images. FoxMask uses background estimation and foreground segmentation algorithms to detect the presence of moving objects (most likely, animals) in images.

We analyzed a sample dataset from camera traps used to monitor activity on arctic fox Vulpes lagopus dens to test the parameter settings and the performance of the algorithm. The shape and color of arctic foxes and their backgrounds at snowmelt and during the summer growing season were highly variable, thus offering challenging testing conditions. We compared the automated animal detection performed by FoxMask to a manual review of the image series.

The performance analysis indicated that the proportion of images correctly classified by FoxMask as containing an animal or not was very high (> 90%). FoxMask is thus highly efficient at reducing the workload by eliminating most false triggers (images without an animal). We provide parameter recommendations to facilitate usage, and we present the cases where the algorithm performs less efficiently to stimulate further development.

FoxMask is an easy-to-use tool freely available to ecologists performing camera trap data extraction. By minimizing analytical time, computer-assisted image analysis will allow collection of increased sample sizes and testing of new biological questions.
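As a rough illustration of the background-estimation and foreground-segmentation idea FoxMask applies to short bursts, consider a median-background sketch: the per-pixel median of a burst approximates the static scene, and pixels that deviate strongly in any one frame are flagged as a moving object. This is a simplified stand-in under stated assumptions (static camera, grayscale frames), not FoxMask's actual algorithm, and the thresholds are illustrative.

```python
import numpy as np

def flag_animal(sequence, diff_thresh=25, min_foreground_px=500):
    """sequence: list of grayscale frames (2-D uint8 arrays) from one trigger event."""
    stack = np.stack(sequence).astype(np.int16)
    background = np.median(stack, axis=0)  # per-pixel estimate of the static scene
    flags = []
    for frame in stack:
        foreground = np.abs(frame - background) > diff_thresh
        flags.append(int(foreground.sum()) >= min_foreground_px)
    return flags  # one True/False "animal present" decision per image
```

A burst in which every frame stays below the foreground-pixel threshold would be classified as a false trigger and dropped, which is exactly the workload reduction described above.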


PeerJ ◽  
2020 ◽  
Vol 8 ◽  
pp. e9681 ◽  
Author(s):  
Akira Yoshioka ◽  
Akira Shimizu ◽  
Hiroyuki Oguma ◽  
Nao Kumada ◽  
Keita Fukasawa ◽  
...  

Although dragonflies are excellent environmental indicators for monitoring terrestrial water ecosystems, automatic monitoring techniques using digital tools are limited. We designed a novel camera trapping system with an original dragonfly detector based on the hypothesis that perching dragonflies can be automatically detected using inexpensive and energy-saving photosensors built into a perch-like structure. A trial version of the camera trap was developed and evaluated in a case study targeting red dragonflies (Sympetrum spp.) in Japan. During an approximately 2-month period, the detector successfully detected Sympetrum dragonflies while consuming extremely little power (less than 5 mW). Furthermore, a short-term field experiment using time-lapse cameras for validation at three locations indicated that the detection accuracy was sufficient for practical applications. The number of false-positive detections ranged from 17 to 51 over an approximately 2-day period. The detection sensitivities were 0.67 and 1.0 at the two locations where a time-lapse camera confirmed that Sympetrum dragonflies perched on the trap more than once. However, the correspondence between the detection frequency of the camera trap and the abundance of Sympetrum dragonflies determined by parallel field observations was low when dragonfly density was relatively high. Although our camera trap and its application to the quantitative monitoring of dragonflies leave room for improvement, the detector's low cost and low power consumption make it a promising tool.
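For clarity, the validation metrics quoted above reduce to simple ratios: sensitivity is the fraction of camera-confirmed perching events that the photosensor detector registered, and the false-positive count is detections with no dragonfly visible in the validation images. The counts in this sketch are illustrative placeholders, not the study's data.

```python
def sensitivity(detected_events: int, confirmed_events: int) -> float:
    """Fraction of camera-confirmed perching events caught by the detector."""
    return detected_events / confirmed_events

# e.g. a hypothetical site where the time-lapse camera confirmed 3 perching
# events and the detector registered 2 of them:
print(round(sensitivity(2, 3), 2))  # 0.67, the form of the lower reported value
```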


Animals ◽  
2019 ◽  
Vol 9 (6) ◽  
pp. 388 ◽  
Author(s):  
D. J. Welbourne ◽  
A. W. Claridge ◽  
D. J. Paull ◽  
F. Ford

Camera-traps are used widely around the world to census a range of vertebrate fauna, particularly mammals but also other groups including birds, as well as snakes and lizards (squamates). In an attempt to improve the reliability of camera-traps for censusing squamates, we examined whether programming options involving time-lapse capture of images increased detections. This was compared to detections by camera-traps set to trigger by the standard passive infrared sensor setting (PIR), and camera-traps set to take images using time-lapse in combination with PIR. We also examined the effect of camera-trap focal length on the ability to tell different species of small squamate apart. In a series of side-by-side field comparisons, camera-traps programmed to take images at standard intervals, as well as through routine triggering of the PIR, captured more images of squamates than camera-traps using the PIR sensor setting alone or time-lapse alone. Similarly, camera-traps with their lens focal length set at closer distances improved our ability to discriminate species of small squamates. With these minor alterations to camera-trap programming and hardware, the quantity and quality of squamate detections were markedly better. These gains provide a platform for exploring other aspects of camera-trapping for squamates that might lead to even greater survey advances, bridging the gap in knowledge of this otherwise poorly known faunal group.
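The three programming options being compared can be expressed as a single capture decision rule, sketched below. This is schematic only: real camera-traps expose these settings through the manufacturer's firmware menus rather than code, and the 300-second interval and all names are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TrapConfig:
    use_pir: bool                        # fire on passive infrared (PIR) events
    timelapse_interval_s: Optional[int]  # None disables time-lapse capture

def should_capture(cfg: TrapConfig, seconds_since_last_shot: int,
                   pir_triggered: bool) -> bool:
    """Capture an image if the PIR fired and/or the time-lapse interval elapsed."""
    timelapse_due = (cfg.timelapse_interval_s is not None and
                     seconds_since_last_shot >= cfg.timelapse_interval_s)
    return (cfg.use_pir and pir_triggered) or timelapse_due

# The three settings compared in the study (interval value is arbitrary here):
pir_only   = TrapConfig(use_pir=True,  timelapse_interval_s=None)
lapse_only = TrapConfig(use_pir=False, timelapse_interval_s=300)
combined   = TrapConfig(use_pir=True,  timelapse_interval_s=300)
```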


2018 ◽  
Vol 45 (8) ◽  
pp. 706 ◽  
Author(s):  
Helen R. Morgan ◽  
Guy Ballard ◽  
Peter J. S. Fleming ◽  
Nick Reid ◽  
Remy Van der Ven ◽  
...  

Context. When measuring grazing impacts of vertebrates, the density of animals and time spent foraging are important. Traditionally, dung pellet counts are used to index macropod grazing density, and a direct relationship between herbivore density and foraging impact is assumed. However, rarely are pellet deposition rates measured or compared with camera-trap indices.

Aims. The aims were to pilot an efficient and reliable camera-trapping method for monitoring macropod grazing density and activity patterns, and to contrast pellet counts with macropod counts from camera trapping, for estimating macropod grazing density.

Methods. Camera traps were deployed on stratified plots in a fenced enclosure containing a captive macropod population, and the experiment was repeated in the same season in the following year after population reduction. Camera-based macropod counts were compared with pellet counts, and pellet deposition rates were estimated using both datasets. Macropod frequency was estimated, activity patterns were developed, and the variability between resting and grazing plots and the two estimates of macropod density was investigated.

Key Results. Camera-trap grazing density indices initially correlated well with pellet count indices (r² = 0.86), but were less reliable between years. Site stratification enabled a significant relationship to be identified between camera-trap counts and pellet counts in grazing plots. Camera-trap indices were consistent for estimating grazing density in both surveys but were not useful for estimating absolute abundance in this study.

Conclusions. Camera trapping was efficient and reliable for estimating macropod activity patterns. Although significant, the relationship between pellet count indices and macropod grazing density based on camera-trapping indices was not strong; this was due to variability in macropod pellet deposition rates over different years. Time-lapse camera imagery has potential for simultaneously assessing herbivore foraging activity budgets with grazing densities and vegetation change. Further work is required to refine the use of camera-trapping indices for estimation of absolute abundance.

Implications. Time-lapse camera trapping and site-stratified sampling allow concurrent assessment of grazing density and grazing behaviour at plot and landscape scales.
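The reported relationship between the two indices (r² = 0.86 in the first survey) is an ordinary least-squares fit between plot-level camera-trap counts and pellet counts. The sketch below, assuming NumPy and SciPy, shows the form of that calculation; the count values are made up for illustration, not the study's data.

```python
import numpy as np
from scipy.stats import linregress

camera_counts = np.array([12, 30, 7, 45, 22, 16])    # hypothetical plot-level indices
pellet_counts = np.array([40, 95, 25, 150, 70, 60])  # hypothetical plot-level indices

fit = linregress(camera_counts, pellet_counts)
print(f"r^2 = {fit.rvalue**2:.2f}, slope = {fit.slope:.2f}, p = {fit.pvalue:.3f}")
```

Repeating the same fit on the second year's data is what reveals the between-year instability the authors attribute to varying pellet deposition rates.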


Author(s):  
Gyanendra K. Verma ◽  
Pragya Gupta

Monitoring wild animals has become easier thanks to camera-trap networks, a technique for exploring wildlife using cameras that are triggered automatically by the presence of animals, yielding large volumes of multimedia data. Wild animal detection has been an active research field for the last several decades. In this paper, we propose a wild animal detection system to monitor wildlife and detect wild animals from highly cluttered natural images. The data acquired from the camera-trap network comprise highly cluttered scenes, which pose a challenge for the detection of wild animals, bringing about low recognition rates and high false discovery rates. To deal with this issue, we utilize a camera-trap database that provides candidate regions obtained via multilevel graph cut in the spatiotemporal domain. The regions are used in a verification stage that recognizes whether animals are present in a scene. Features are extracted from these cluttered images using deep convolutional neural networks (CNNs). We have implemented the system using two prominent CNN models, namely VGGNet and ResNet, on a standard camera-trap database. Finally, the CNN features are fed to some of the best-in-class machine learning techniques for classification. Our outcomes demonstrate that the proposed system is superior to existing systems reported in the literature.
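The feature-extraction-plus-classifier design described here is straightforward to sketch: a pretrained backbone yields one feature vector per candidate region, and a conventional classifier makes the final decision. In this hedged sketch, ResNet-18 stands in for the VGGNet and ResNet models used in the paper, an SVM stands in for the unspecified "best-in-class" classifiers, and data preparation is omitted; it assumes a recent torchvision and scikit-learn.

```python
import torch
import torchvision.models as models
from sklearn.svm import SVC

# Pretrained backbone with its classification head removed.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()  # network now outputs 512-D feature vectors
backbone.eval()

def extract_features(batch):  # batch: (N, 3, 224, 224) float tensor of image patches
    with torch.no_grad():
        return backbone(batch).numpy()

# train_patches / train_labels / test_patches assumed prepared elsewhere:
# features = extract_features(train_patches)
# clf = SVC(kernel="rbf").fit(features, train_labels)
# predictions = clf.predict(extract_features(test_patches))
```

Freezing the backbone and training only the downstream classifier keeps the pipeline cheap to train, which is why this design is common when labeled camera-trap data are scarce.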

