Mating behavior of Eastern Spotted Skunk Spilogale putorius Linnaeus, 1758 (Mammalia: Carnivora: Mephitidae) revealed by camera trap in Texas, USA

2021
Vol 13 (6)
pp. 18660-18662
Author(s):
Alexandra C. Avrin
Charles E. Pekins
Maximilian L. Allen

The Eastern Spotted Skunk Spilogale putorius is an understudied, Vulnerable small carnivore. Here we report a novel camera trap record of Eastern Spotted Skunks mating in central Texas. This detection adds to the limited natural history knowledge of the species and highlights the utility of camera traps for documenting rarely observed behaviors.

PLoS ONE
2021
Vol 16 (3)
pp. e0247536
Author(s):
Bart J. Harmsen
Nicola Saville
Rebecca J. Foster

Population assessments of wide-ranging, cryptic, terrestrial mammals rely on camera trap surveys. While camera trapping is a powerful method of detecting presence, it is difficult to distinguish rarity from a low detection rate. The margay (Leopardus wiedii) is an example of a species considered rare based on its low detection rates across its range. Although margays have a wide distribution, detection rates with camera traps are universally low; consequently, the species is listed as Near Threatened. Our 12-year camera trap study of margays in protected broadleaf forest in Belize suggests that while margays have a low detection rate, they do not seem to be rare; rather, they are difficult to detect with camera traps. We detected a maximum of 187 individuals, all with few or no recaptures over the years (mean = 2.0 captures/individual ± SD 2.1), with two-thirds of individuals detected only once. The few individuals that were recaptured across years exhibited long tenures of up to 9 years and were at least 10 years old at their final detection. We detected multiple individuals of both sexes at the same locations during the same survey, suggesting overlapping ranges with non-exclusive territories and providing further evidence of a high-density population. By studying the sparse annual datasets across multiple years, we found evidence of an abundant margay population in the forest of the Cockscomb Basin, which might have been deemed low-density and rare if studied only in the short term. We encourage more long-term camera trap studies to assess the population status of semi-arboreal carnivore species that have hitherto been considered rare based on low detection rates.
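The headline summary statistics here (mean captures per individual and the share of single-detection individuals) can be reproduced from a vector of per-individual capture counts. A minimal sketch, assuming a hypothetical capture-history vector rather than the authors' data:

```python
# Minimal sketch: summarizing per-individual capture counts from a
# camera-trap survey (hypothetical counts, not the study's dataset).
import statistics

# captures[i] = number of detections of individual i over the study
captures = [1, 1, 1, 1, 1, 1, 1, 2, 3, 8]  # illustrative values only

mean_caps = statistics.mean(captures)
sd_caps = statistics.stdev(captures)
prop_single = sum(1 for c in captures if c == 1) / len(captures)

print(f"mean = {mean_caps:.1f} captures/individual (SD {sd_caps:.1f})")
print(f"detected only once: {prop_single:.0%}")
```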


2019
Author(s):
Sadoune Ait Kaci Azzou
Liam Singer
Thierry Aebischer
Madleina Caduff
Beat Wolf
...  

Summary: Camera traps and acoustic recording devices are essential tools to quantify the distribution, abundance and behavior of mobile species. Varying detection probabilities among device locations must be accounted for when analyzing such data, which is generally done using occupancy models. We introduce a Bayesian Time-dependent Observation Model for Camera Trap data (Tomcat), suited to estimate relative event densities in space and time. Tomcat makes it possible to learn about the environmental requirements and daily activity patterns of species while accounting for imperfect detection. It further implements a sparse model that deals well with a large number of potentially highly correlated environmental variables. By integrating both spatial and temporal information, we extend the notion of the overlap coefficient between species to time and space to study niche partitioning. We illustrate the power of Tomcat through an application to camera trap data of eight sympatrically occurring duiker Cephalophinae species in the savanna-rainforest ecotone in the Central African Republic, and show that most species pairs show little overlap. Exceptions are those for which one species is very rare, likely as a result of direct competition.
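The temporal form of the overlap coefficient mentioned above is commonly estimated as the area under the minimum of two species' activity densities. A minimal histogram-based sketch, with made-up detection times standing in for real camera trap records (Tomcat itself fits a Bayesian model, which this does not reproduce):

```python
# Sketch of a temporal overlap coefficient between two species:
# Delta = integral over the 24 h cycle of min(f1, f2), estimated here
# with simple histograms. Detection times are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
times_a = rng.normal(6, 1.5, 200) % 24   # species A: dawn-active (hypothetical)
times_b = rng.normal(20, 2.0, 150) % 24  # species B: evening-active (hypothetical)

bins = np.linspace(0, 24, 49)  # half-hour bins
f_a, _ = np.histogram(times_a, bins=bins, density=True)
f_b, _ = np.histogram(times_b, bins=bins, density=True)

width = bins[1] - bins[0]
delta = np.sum(np.minimum(f_a, f_b)) * width  # lies in [0, 1]
print(f"estimated overlap coefficient: {delta:.2f}")
```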


2020
Author(s):
Thel Lucie
Chamaillé-Jammes Simon
Keurinck Léa
Catala Maxime
Packer Craig
...  

Abstract: Ecologists increasingly rely on camera trap data to estimate a wide range of biological parameters such as occupancy, population abundance or activity patterns. Because of the huge amount of data collected, the assistance of non-scientists is often sought, but an assessment of the data quality is a prerequisite to their use.

We tested whether citizen science data from one of the largest citizen science projects, Snapshot Serengeti, could be used to study breeding phenology, an important life-history trait. In particular, we tested whether the presence of juveniles (less than one or 12 months old) of three ungulate species in the Serengeti (topi Damaliscus jimela, kongoni Alcelaphus buselaphus and Grant's gazelle Nanger granti) could be reliably detected by "naive" volunteers compared with trained observers. We expected a positive correlation between the proportion of volunteers identifying juveniles and their effective presence within photographs, as assessed by the trained observers.

We first checked the agreement between the trained observers for age classes and species and found a good agreement between them (Fleiss' κ > 0.61 for juveniles of less than one and 12 month(s) old), suggesting that morphological criteria can be used successfully to determine age. The relationship between the proportion of volunteers detecting juveniles less than a month old and their actual presence plateaued at 0.45 for Grant's gazelle and reached 0.70 for topi and 0.56 for kongoni. The same relationships were, however, much stronger for juveniles younger than 12 months, to the point that their presence was perfectly detected by volunteers for topi and kongoni.

Volunteers' classification allows a rough, moderately accurate, but quick sorting of photograph sequences with/without juveniles. Obtaining accurate data, however, appears more difficult. We discuss the limitations of using citizen science camera trap data to study breeding phenology, and options to improve the detection of juveniles, such as the addition of aging criteria on online citizen science platforms, or the use of machine learning.
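Fleiss' κ, used above to quantify agreement among trained observers, compares observed per-photo agreement with the agreement expected by chance. A minimal sketch with an invented rating matrix (the category coding is illustrative, not the study's exact scheme):

```python
# Minimal sketch of Fleiss' kappa for inter-observer agreement.
# counts[i, j] = number of observers assigning photo i to category j;
# the matrix below is invented for illustration.
import numpy as np

# categories: 0 = no juvenile, 1 = juvenile < 1 month, 2 = juvenile < 12 months
counts = np.array([
    [3, 0, 0],
    [0, 3, 0],
    [1, 2, 0],
    [0, 0, 3],
    [0, 1, 2],
])

def fleiss_kappa(counts):
    n = counts.sum(axis=1)[0]                    # raters per photo (assumed constant)
    p_j = counts.sum(axis=0) / counts.sum()      # overall category proportions
    P_i = (np.sum(counts**2, axis=1) - n) / (n * (n - 1))  # per-photo agreement
    P_bar, P_e = P_i.mean(), np.sum(p_j**2)      # observed vs. chance agreement
    return (P_bar - P_e) / (1 - P_e)

print(f"Fleiss' kappa = {fleiss_kappa(counts):.2f}")
```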


2020
Vol 20 (4)
Author(s):
Paula Ribeiro Prist
Guilherme S. T. Garbino
Fernanda Delborgo Abra
Thais Pagotto
Osnir Ormon Giacon

Abstract The water opossum (Chironectes minimus) is a semi-aquatic mammal that is infrequently sampled in Atlantic rainforest areas in Brazil. Here we report on new records of C. minimus in the state of São Paulo, southeastern Brazil, and comment on its behavior and ecology. We placed nine camera traps in culverts and cattle boxes under a highway, between 2017 and 2019. From a total of 6,750 camera-trap-days, we obtained 16 records of C. minimus (0.24 records/100 camera-trap-days) in two cameras placed in culverts over streams. Most of the records were made between May and August, in the dry season and in the first six hours after sunset. The new records are from a highly degraded area with some riparian forests. The records lie approximately 30 km away from the nearest protected area where the species is known to occur. We suggest that C. minimus has some tolerance to degraded habitats, as long as the water bodies and riparian forests are minimally preserved. The new records presented here also fill a distribution gap in western São Paulo state.
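The reported detection rate follows directly from the survey effort; a quick check of the arithmetic:

```python
# The rate reported in the abstract, computed from the stated effort.
records, effort = 16, 6750          # detections, camera-trap-days
rate = records / effort * 100       # records per 100 camera-trap-days
print(f"{rate:.2f} records/100 camera-trap-days")  # 0.24
```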


2018
Vol 40 (1)
pp. 118
Author(s):
Bronwyn A. Fancourt
Mark Sweaney
Don B. Fletcher

Camera traps are being used increasingly for wildlife management and research. When choosing camera models, practitioners often consider camera trigger speed to be one of the most important factors to maximise species detections. However, factors such as detection zone will also influence detection probability. As part of a rabbit eradication program, we performed a pilot study to compare rabbit (Oryctolagus cuniculus) detections using the Reconyx PC900 (faster trigger speed, narrower detection zone) and the Ltl Acorn Ltl-5310A (slower trigger speed, wider detection zone). Contrary to our predictions, the slower-trigger-speed cameras detected rabbits more than twice as often as the faster-trigger-speed cameras, suggesting that the wider detection zone more than compensated for the relatively slower trigger time. We recommend context-specific field trials to ensure cameras are appropriate for the required purpose. Missed detections could lead to incorrect inferences and potentially misdirected management actions.
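The trade-off the authors describe can be made concrete with a toy model: an animal is photographed only if its transit time through the detection zone exceeds the camera's trigger delay. All numbers below are purely illustrative, not the specifications of either camera model:

```python
# Toy model of the trigger-speed vs. detection-zone trade-off:
# a camera records an animal only if the animal's transit time across
# the detection zone exceeds the trigger delay. Numbers are hypothetical.
def detects(path_m, trigger_s, speed_m_s):
    """True if time spent in the detection zone exceeds the trigger delay."""
    return path_m / speed_m_s > trigger_s

rabbit_speed = 4.0  # m/s, a hypothetical bolting rabbit
print(detects(path_m=0.5, trigger_s=0.2, speed_m_s=rabbit_speed))  # False: fast trigger, narrow zone
print(detects(path_m=4.0, trigger_s=0.8, speed_m_s=rabbit_speed))  # True: slow trigger, wide zone
```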


2019
Author(s):
Hayder Yousif

[ACCESS RESTRICTED TO THE UNIVERSITY OF MISSOURI AT REQUEST OF AUTHOR.] Camera traps are a popular tool to sample animal populations because they are noninvasive, detect a variety of species, and can record many thousands of animal detections per deployment. Cameras are typically set to take bursts of multiple images for each detection, and are deployed in arrays of dozens or hundreds of sites, often resulting in millions of images per study. The task of converting images to animal detection records from such large image collections is daunting, and made worse by situations that generate copious empty pictures from false triggers (e.g. camera malfunction or moving vegetation) or pictures of humans. We offer the first widely available computer vision tool for processing camera trap images. Our results show that the tool is accurate and results in substantial time savings for processing large image datasets, thus improving our ability to monitor wildlife across large scales with camera traps. In this dissertation, we have developed new image/video processing and computer vision algorithms for efficient and accurate object detection and sequence-level classification from natural scene camera-trap images. This work addresses the following five major tasks:

(1) Human-animal detection. We develop a fast and accurate scheme for human-animal detection from highly cluttered camera-trap images using joint background modeling and deep learning classification. Specifically, we first develop an effective background modeling and subtraction scheme to generate region proposals for the foreground objects. We then develop a cross-frame image patch verification to reduce the number of foreground object proposals. Finally, we perform a complexity-accuracy analysis of deep convolutional neural networks (DCNNs) to develop a fast deep learning classification scheme that sorts these region proposals into three categories: human, animal, and background patches. The optimized DCNN maintains a high level of accuracy while reducing the computational complexity by a factor of 14. Our experimental results demonstrate that the proposed method outperforms existing methods on the camera-trap dataset.

(2) Object segmentation from natural scenes. We first design and train a fast DCNN for animal-human-background object classification, which is used to analyze the input image and generate multi-layer feature maps representing the responses of different image regions to the animal-human-background classifier. From these feature maps, we construct the so-called deep objectness graph for accurate animal-human object segmentation with graph cut. The segmented object regions from each image in the sequence are then verified and fused in the temporal domain using background modeling. Our experimental results demonstrate that our proposed method outperforms existing state-of-the-art methods on the camera-trap dataset with highly cluttered natural scenes.

(3) DCNN-domain background modeling. We replace the background model with a new, more efficient deep-learning-based model. The input frames are segmented into regions through the deep objectness graph, and the region boundaries of the input frames are multiplied by each other to obtain the regions of movement patches. We construct the background representation using the temporal information of the co-located patches. We propose to fuse the subtraction and foreground/background pixel classification of two representations: (a) chromaticity and (b) deep pixel information.

(4) Sequence-level object classification. We propose a new method for sequence-level video recognition with application to animal species recognition from camera trap images. First, using background modeling and cross-frame patch verification, we develop a scheme to generate candidate object regions, or object proposals, in the spatiotemporal domain. Second, we develop a dynamic programming optimization approach to identify the best temporal subset of object proposals. Third, we aggregate and fuse the features of these selected object proposals for efficient sequence-level animal species classification.
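As one illustration of the proposal-generation stage described in task (1), the sketch below pairs background subtraction with contour extraction to produce candidate foreground regions. It uses OpenCV's stock MOG2 subtractor as a stand-in for the dissertation's custom background model; the thresholds and file paths are assumptions:

```python
# Sketch of the proposal-generation stage: background subtraction
# followed by contour extraction to yield candidate foreground regions
# for a downstream classifier. MOG2 stands in for the custom model.
import cv2

subtractor = cv2.createBackgroundSubtractorMOG2(history=50, varThreshold=25)

def region_proposals(frames, min_area=500):
    """Yield bounding boxes of foreground blobs for each frame in a sequence."""
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    for frame in frames:
        mask = subtractor.apply(frame)                       # foreground mask
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)  # remove speckle
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        boxes = [cv2.boundingRect(c) for c in contours
                 if cv2.contourArea(c) >= min_area]
        yield boxes  # (x, y, w, h) proposals to pass to the DCNN classifier

# Example usage (hypothetical paths):
# frames = [cv2.imread(p) for p in sorted(glob.glob("sequence/*.jpg"))]
# for boxes in region_proposals(frames): ...
```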


2020
Vol 47 (4)
pp. 326
Author(s):
Harry A. Moore
Jacob L. Champney
Judy A. Dunlop
Leonie E. Valentine
Dale G. Nimmo

Abstract. Context: Estimating animal abundance often relies on being able to identify individuals; however, this can be challenging, especially when applied to large animals that are difficult to trap and handle. Camera traps have provided a non-invasive alternative by using natural markings to individually identify animals within image data. Although camera traps have been used to individually identify mammals, they are yet to be widely applied to other taxa, such as reptiles. Aims: We assessed the capacity of camera traps to provide images that allow for individual identification of the world's fourth-largest lizard species, the perentie (Varanus giganteus), and demonstrate other basic morphological and behavioural data that can be gleaned from camera-trap images. Methods: Vertically orientated cameras were deployed at 115 sites across a 10,000 km2 area in north-western Australia for an average of 216 days. We used spot patterning located on the dorsal surface of perenties to identify individuals from camera-trap imagery, with the assistance of freely available spot ID software. We also measured snout-to-vent length (SVL) by using image-analysis software, and collected image time-stamp data to analyse temporal activity patterns. Results: Ninety-two individuals were identified, and individuals were recorded moving distances of up to 1,975 m. Confidence in identification accuracy was generally high (91%), and estimated SVL measurements varied by an average of 6.7% (min = 1.8%, max = 21.3%) of individual SVL averages. Larger perenties (SVL of >45 cm) were detected mostly between dawn and noon, and in the late afternoon and early evening, whereas small perenties (SVL of <30 cm) were rarely recorded in the evening. Conclusions: Camera traps can be used to individually identify large reptiles with unique markings, and can also provide data on movement, morphology and temporal activity. Accounting for uneven substrates under cameras could improve the accuracy of morphological estimates. Given that camera traps struggle to detect small, nocturnal reptiles, further research is required to examine whether cameras miss smaller individuals in the late afternoon and evening. Implications: Camera traps are increasingly being used to monitor reptile species. The ability to individually identify animals provides another tool for herpetological research worldwide.
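Because the cameras were vertically orientated at a roughly fixed height above the ground, an image measurement can be converted to a real-world length with a single calibration factor. A minimal sketch with hypothetical numbers (the study's actual image-analysis workflow is not detailed here):

```python
# Sketch of converting a pixel measurement to snout-to-vent length (SVL)
# for a downward-facing camera at a fixed height: a calibration object
# photographed on the ground gives cm-per-pixel. Values are hypothetical.
ref_length_cm = 30.0       # known length of the calibration object
ref_length_px = 412.0      # its measured length in the image
cm_per_px = ref_length_cm / ref_length_px

svl_px = 618.0             # snout-to-vent distance measured in the image
svl_cm = svl_px * cm_per_px
print(f"estimated SVL = {svl_cm:.1f} cm")  # 45.0 cm
```

Uneven substrate changes the animal-to-camera distance, and with it the effective cm-per-pixel factor, which is consistent with the authors' note that accounting for substrate could improve morphological estimates.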


2020
Vol 47 (4)
pp. 338
Author(s):
Bracy W. Heinlein
Rachael E. Urbanek
Colleen Olfenbuttel
Casey G. Dukes

Abstract. Context: Camera traps paired with baits and scented lures can be used to monitor mesocarnivore populations, but not all attractants are equally effective. Several studies have investigated the efficacy of different attractants in luring mesocarnivores to camera traps; fewer studies have examined the effect of human scent at camera traps. Aims: We sought to determine the effects of human scent, four attractants, and the interaction between attractants and human scent in luring mesocarnivores to camera traps. Methods: We compared the success of synthetic fermented egg (SFE), fatty acid scent (FAS) tablets, castor oil, and sardines against a control of no attractant in luring mesocarnivores to camera traps. We deployed each attractant and the control either with no regard to masking human scent or while attempting to restrict human scent, for a total of 10 treatments, and replicated treatments eight to nine times in two different phases. We investigated whether: (1) any attractants increased the probability of capturing a mesocarnivore at a camera trap; (2) not masking human scent affected the probability of capturing a mesocarnivore at a camera trap; and (3) any attractants increased the probability of repeat detections at a given camera trap. We also analysed the behaviour (i.e. speed and distance to attractant) of each mesocarnivore in relation to the attractants. Key results: Sardines improved capture success compared with the control treatments, whereas SFE, castor oil, and FAS tablets had no effect when all mesocarnivores were included in the analyses. Masking human scent did not affect detection rates in the multispecies analyses. Individually, the detection of some species depended on the interactions between masking (or not masking) human scent and some attractants. Conclusions: Sardines were the most effective broad-based attractant for mesocarnivores. Mesocarnivores approached traps baited with sardines more slowly, which increases the chance of capturing an image of the animal. Implications: Human scent may not need to be masked when deploying camera traps for multispecies mesocarnivore studies, but researchers should be aware that individual species respond differently to attractants and may have higher capture success with species-specific attractants.
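A simple way to test whether capture success differs among attractants is a chi-square test on capture/no-capture counts per treatment. The sketch below uses invented counts and is not the authors' analysis, which also considered repeat detections and species-specific responses:

```python
# Sketch: chi-square test of capture success across attractant treatments.
# Counts are invented for illustration only.
import numpy as np
from scipy.stats import chi2_contingency

#                  captures, non-captures per treatment
table = np.array([
    [22, 18],   # sardines
    [12, 28],   # SFE
    [11, 29],   # castor oil
    [10, 30],   # FAS tablets
    [ 9, 31],   # control (no attractant)
])

chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, df = {dof}, p = {p:.3f}")
```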


2017
Vol 23 (3)
pp. 302
Author(s):
Paul D. Meek
Jason Wishart

Camera traps provide a novel and quasi-covert method of gathering information on animal behaviour that may otherwise remain undetected without sophisticated and expensive filming equipment. In a rangelands pest management project at Mt Hope in the central west of New South Wales, Australia, we recorded foxes seemingly hunting kangaroos on three occasions. While we did not record direct instances of predation, our observations provide camera trap photographic evidence suggesting that foxes will attempt to tackle mammals above the critical weight range, including large macropod species such as western grey kangaroos.

