The Role of Citizen Science and Deep Learning in Camera Trapping

2021 ◽  
Vol 13 (18) ◽  
pp. 10287
Author(s):  
Matyáš Adam ◽  
Pavel Tomášek ◽  
Jiří Lehejček ◽  
Jakub Trojan ◽  
Tomáš Jůnek

Camera traps are increasingly one of the fundamental pillars of environmental monitoring and management. Even outside the scientific community, thousands of camera traps in the hands of citizens may offer valuable data on terrestrial vertebrate fauna, bycatch data in particular, when collected according to already established standards. This provides a promising setting for Citizen Science initiatives. Here, we suggest a possible pathway for isolated observations to be aggregated into a single database that respects the existing standards (with a proposed extension). Our approach aims to offer a new perspective and to take stock of recent progress in engaging the enthusiasm of citizen scientists and in incorporating machine learning into image classification in camera trap research. This approach (combining machine learning and input from citizen scientists) may significantly streamline the processing of camera trap data while simultaneously raising public environmental awareness. We have therefore developed a conceptual framework and analytical concept for a web-based camera trap database that incorporates these aspects: it combines the roles of expert and citizen evaluations, specifies how a neural network is trained, and adds a taxon complexity index. This initiative could serve scientists and the general public alike, as well as assisting public authorities in setting spatially and temporally well-targeted conservation policies.
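The abstract does not specify how expert evaluations, citizen evaluations, and the taxon complexity index are actually combined; the following is a minimal sketch of one plausible aggregation rule, with all names and thresholds as hypothetical illustrations rather than the authors' specification.

```python
# A minimal sketch of combining citizen votes, expert overrides, and a taxon
# complexity index into one accepted classification. The 60% baseline and the
# [1, 2] complexity scale are hypothetical, not the authors' design.
from collections import Counter

def aggregate_classification(citizen_votes, expert_label=None, complexity=1.0):
    """Return an accepted label, or None if further review is needed.

    citizen_votes: species labels proposed by citizen scientists
    expert_label:  optional label from a trained expert (takes precedence)
    complexity:    taxon complexity index in [1, 2]; harder taxa demand
                   stronger citizen consensus before auto-acceptance
    """
    if expert_label is not None:          # expert evaluation always wins
        return expert_label
    if not citizen_votes:
        return None
    label, count = Counter(citizen_votes).most_common(1)[0]
    required = min(0.95, 0.6 * complexity)  # consensus scaled by complexity
    return label if count / len(citizen_votes) >= required else None

# Example: an easy taxon is auto-accepted; a difficult one awaits an expert.
print(aggregate_classification(["red fox"] * 7 + ["dog"] * 3, complexity=1.0))  # red fox
print(aggregate_classification(["red fox"] * 7 + ["dog"] * 3, complexity=1.5))  # None
```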

2021 ◽  
Vol 8 (2) ◽  
pp. 54-75
Author(s):  
Meredith S. Palmer ◽  
Sarah E. Huebner ◽  
Marco Willi ◽  
Lucy Fortson ◽  
Craig Packer

Camera traps - remote cameras that capture images of passing wildlife - have become a ubiquitous tool in ecology and conservation. Systematic camera trap surveys generate ‘Big Data’ across broad spatial and temporal scales, providing valuable information on environmental and anthropogenic factors affecting vulnerable wildlife populations. However, the sheer number of images amassed can quickly outpace researchers’ ability to manually extract data from these images (e.g., species identities, counts, and behaviors) in timeframes useful for making scientifically guided conservation and management decisions. Here, we present ‘Snapshot Safari’ as a case study for merging citizen science and machine learning to rapidly generate highly accurate ecological Big Data from camera trap surveys. Snapshot Safari is a collaborative cross-continental research and conservation effort with 1,500+ cameras deployed at over 40 protected areas in eastern and southern Africa, generating millions of images per year. As one of the first and largest-scale camera trapping initiatives, Snapshot Safari spearheaded innovative developments in citizen science and machine learning. We highlight the advances made and discuss the issues that arose using each of these methods to annotate camera trap data. We end by describing how we combined human and machine classification methods (‘Crowd AI’) to create an efficient integrated data pipeline. Ultimately, by using a feedback loop in which humans validate machine learning predictions and machine learning algorithms are iteratively retrained on new human classifications, we can capitalize on the strengths of both methods of classification while mitigating the weaknesses. Using Crowd AI to quickly and accurately ‘unlock’ ecological Big Data for use in science and conservation is revolutionizing the way we take on critical environmental issues in the Anthropocene era.
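The abstract describes the ‘Crowd AI’ feedback loop only at a high level; the sketch below illustrates that loop in Python. Every function name and the confidence threshold are hypothetical placeholders, not Snapshot Safari's actual components.

```python
# A minimal sketch of the human/machine feedback loop ("Crowd AI") described
# above. model, ask_volunteers, and retrain are hypothetical stand-ins for a
# real classifier, a volunteer interface, and a training routine.

def crowd_ai_pipeline(images, model, ask_volunteers, retrain, threshold=0.9):
    accepted, needs_review = {}, []

    # 1. Machine pass: keep confident predictions, queue the rest for humans.
    for img in images:
        label, confidence = model.predict(img)
        if confidence >= threshold:
            accepted[img] = label
        else:
            needs_review.append(img)

    # 2. Human pass: volunteers classify the images the model was unsure
    #    about, returning a dict of {image: consensus_label}.
    human_labels = ask_volunteers(needs_review)
    accepted.update(human_labels)

    # 3. Feedback: retrain on the new human classifications, so the next
    #    batch of images needs less volunteer effort.
    model = retrain(model, human_labels)
    return accepted, model
```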


2020 ◽  
Author(s):  
Lucie Thel ◽  
Simon Chamaillé-Jammes ◽  
Léa Keurinck ◽  
Maxime Catala ◽  
Craig Packer ◽  
...  

Abstract
Ecologists increasingly rely on camera trap data to estimate a wide range of biological parameters such as occupancy, population abundance or activity patterns. Because of the huge amount of data collected, the assistance of non-scientists is often sought, but an assessment of the data quality is a prerequisite to their use.
We tested whether citizen science data from one of the largest citizen science projects - Snapshot Serengeti - could be used to study breeding phenology, an important life-history trait. In particular, we tested whether the presence of juveniles (less than one or 12 months old) of three ungulate species in the Serengeti - topi Damaliscus jimela, kongoni Alcelaphus buselaphus and Grant’s gazelle Nanger granti - could be reliably detected by “naive” volunteers vs. trained observers. We expected a positive correlation between the proportion of volunteers identifying juveniles and their effective presence within photographs, as assessed by the trained observers.
We first checked the agreement between the trained observers for age classes and species and found it to be good (Fleiss’ κ > 0.61 for juveniles of less than one and 12 month(s) old), suggesting that morphological criteria can be used successfully to determine age. The relationship between the proportion of volunteers detecting juveniles less than a month old and their actual presence plateaued at 0.45 for Grant’s gazelle and reached 0.70 for topi and 0.56 for kongoni. The same relationships were much stronger for juveniles younger than 12 months, to the point that their presence was perfectly detected by volunteers for topi and kongoni.
Volunteers’ classifications allow a rough, moderately accurate, but quick sorting of photograph sequences with/without juveniles. Obtaining accurate data, however, appears more difficult. We discuss the limitations of using citizen science camera trap data to study breeding phenology, and options to improve the detection of juveniles, such as adding aging criteria to online citizen science platforms, or the use of machine learning.
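Fleiss’ κ, the agreement statistic quoted above, is computed from a subjects-by-categories count table. The short sketch below shows the standard calculation; the example ratings are made-up illustration data, not the study’s.

```python
# Fleiss' kappa for inter-rater agreement, computed from an N x k table where
# entry [i][j] counts how many raters assigned subject i to category j.
import numpy as np

def fleiss_kappa(table):
    table = np.asarray(table, dtype=float)
    n_subjects, _ = table.shape
    n_raters = table[0].sum()            # raters per subject (assumed constant)
    p_j = table.sum(axis=0) / (n_subjects * n_raters)  # category proportions
    P_i = (np.square(table).sum(axis=1) - n_raters) / (n_raters * (n_raters - 1))
    P_bar, P_e = P_i.mean(), np.square(p_j).sum()
    return (P_bar - P_e) / (1 - P_e)

# Three raters age-classing five photo sequences as juvenile / adult / unsure
# (made-up counts): kappa comes out around 0.52, i.e. moderate agreement.
ratings = [[3, 0, 0], [2, 1, 0], [0, 3, 0], [0, 2, 1], [3, 0, 0]]
print(round(fleiss_kappa(ratings), 2))
```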


Author(s):  
Sara Beery ◽  
Dan Morris ◽  
Siyu Yang ◽  
Marcel Simon ◽  
Arash Norouzzadeh ◽  
...  

Camera traps are heat- or motion-activated cameras placed in the wild to monitor and investigate animal populations and behavior. They are used to locate threatened species, identify important habitats, monitor sites of interest, and analyze wildlife activity patterns. At present, the time required to manually review images severely limits productivity. Additionally, ~70% of camera trap images are empty, due to a high rate of false triggers. Previous work has shown good results on automated species classification in camera trap data (Norouzzadeh et al. 2018), but further analysis has shown that these results do not generalize to new cameras or new geographic regions (Beery et al. 2018). Additionally, these models will fail to recognize any species they were not trained on. In theory, it is possible to re-train an existing model in order to add missing species, but in practice, this is quite difficult and requires just as much machine learning expertise as training models from scratch. Consequently, very few organizations have successfully deployed machine learning tools for accelerating camera trap image annotation. We propose a different approach to applying machine learning to camera trap projects, combining a generalizable detector with project-specific classifiers. We have trained an animal detector that is able to find and localize (but not identify) animals, even species not seen during training, in diverse ecosystems worldwide. See Fig. 1 for examples of the detector run over camera trap data covering a diverse set of regions and species, unseen at training time. By first finding and localizing animals, we are able to drastically reduce the time spent filtering empty images and dramatically simplify the process of training species classifiers, because we can crop images to individual animals (and thus classifiers need only worry about animal pixels, not background pixels). With this detector model as a powerful new tool, we have established a modular pipeline for onboarding new organizations and building project-specific image processing systems. We break our pipeline into four stages:
1. Data ingestion: First, we transfer images to the cloud, either by uploading to a drop point or by mailing an external hard drive. Data comes in a variety of formats; we convert each data set to the COCO Camera Traps format, i.e. we create a JavaScript Object Notation (JSON) file that encodes the annotations and the image locations within the organization’s file structure (see the illustrative sketch following this abstract).
2. Animal detection: We next run our (generic) animal detector on all the images to locate animals. We have developed an infrastructure for efficiently running this detector on millions of images, dividing the load over multiple nodes. We find that a single detector works for a broad range of regions and species. If the detection results (as validated by the organization) are not sufficiently accurate, it is possible to collect annotations for a small set of their images and fine-tune the detector. Typically, these annotations would be fed back into a new version of the general detector, improving results for subsequent projects.
3. Species classification: Using species labels provided by the organization, we train a (project-specific) classifier on the cropped-out animals.
4. Applying the system to new data: We use the general detector and the project-specific classifier to power tools facilitating accelerated verification and image review, e.g. visualizing the detections, selecting images for review based on model confidence, etc.
The aim of this presentation is to present a new approach to structuring camera trap projects, and to formalize discussion around the steps that are required to successfully apply machine learning to camera trap images. The work we present is available at http://github.com/microsoft/cameratraps, and we welcome new collaborating organizations.
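As referenced in stage 1 above, a COCO Camera Traps database is a single JSON file linking images to annotations. The sketch below builds a minimal illustrative example; the field names follow the format’s public documentation in outline, but real datasets carry additional fields (sequence IDs, timestamps, bounding boxes), so treat this as an assumption-laden outline rather than the authoritative schema.

```python
# A minimal, illustrative COCO Camera Traps file: one JSON object tying
# images to category annotations. Real datasets include more fields
# (seq_id, datetime, bounding boxes); this is a sketch, not the full spec.
import json

db = {
    "info": {"version": "1.0", "description": "Example camera trap dataset"},
    "categories": [
        {"id": 0, "name": "empty"},        # id 0 is conventionally "empty"
        {"id": 1, "name": "warthog"},
    ],
    "images": [
        {"id": "site01_img0001", "file_name": "site01/img0001.jpg",
         "width": 2048, "height": 1536, "location": "site01"},
    ],
    "annotations": [
        {"id": "ann0001", "image_id": "site01_img0001", "category_id": 1},
    ],
}

with open("example_coco_camera_traps.json", "w") as f:
    json.dump(db, f, indent=2)
```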


Animals ◽  
2020 ◽  
Vol 10 (1) ◽  
pp. 132 ◽  
Author(s):  
Siân E. Green ◽  
Jonathan P. Rees ◽  
Philip A. Stephens ◽  
Russell A. Hill ◽  
Anthony J. Giordano

Camera trapping has become an increasingly reliable and mainstream tool for surveying a diversity of wildlife species. Concurrent with this has been an increasing effort to involve the wider public in the research process, in an approach known as ‘citizen science’. To date, millions of people have contributed to research across a wide variety of disciplines as a result. Although their value for public engagement was recognised early on, camera traps were initially ill-suited for citizen science. As camera trap technology has evolved, cameras have become more user-friendly, and the enormous quantities of data they now collect have led researchers to seek assistance in classifying footage. This has made camera trap research a prime candidate for citizen science, as reflected by the large number of camera trap projects now integrating public participation. Researchers are also turning to Artificial Intelligence (AI) to assist with classification of footage. Although this rapidly advancing field is already proving a useful tool, accuracy is variable, and AI does not provide the social and engagement benefits associated with citizen science approaches. We propose, as a solution, more efforts to combine citizen science with AI to improve classification accuracy and efficiency while maintaining public involvement.


2018 ◽  
Author(s):  
Michael A. Tabak ◽  
Mohammad S. Norouzzadeh ◽  
David W. Wolfson ◽  
Steven J. Sweeney ◽  
Kurt C. VerCauteren ◽  
...  

Abstract
1. Motion-activated cameras (“camera traps”) are increasingly used in ecological and management studies for remotely observing wildlife and have been regarded as among the most powerful tools for wildlife research. However, studies involving camera traps result in millions of images that need to be analyzed, typically by visually observing each image, in order to extract data that can be used in ecological analyses.
2. We trained machine learning models using convolutional neural networks with the ResNet-18 architecture and 3,367,383 images to automatically classify wildlife species from camera trap images obtained from five states across the United States. We tested our model on an independent subset of images not seen during training from the United States and on an out-of-sample (or “out-of-distribution” in the machine learning literature) dataset of ungulate images from Canada. We also tested the ability of our model to distinguish empty images from those with animals in another out-of-sample dataset from Tanzania, containing a faunal community that was novel to the model.
3. The trained model classified approximately 2,000 images per minute on a laptop computer with 16 gigabytes of RAM. It achieved 98% accuracy at identifying species in the United States, the highest accuracy of such a model to date. Out-of-sample validation from Canada achieved 82% accuracy, and the model correctly identified 94% of images containing an animal in the dataset from Tanzania. We provide an R package (Machine Learning for Wildlife Image Classification; MLWIC) that allows users to A) implement the trained model presented here and B) train their own model using classified images of wildlife from their studies.
4. The use of machine learning to rapidly and accurately classify wildlife in camera trap images can facilitate non-invasive sampling designs in ecological studies by reducing the burden of manually analyzing images. We present an R package making these methods accessible to ecologists. We discuss the implications of this technology for ecology and considerations that should be addressed in future implementations of these methods.
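The authors distribute their trained model through the MLWIC R package; as an illustration of the underlying technique only (not the authors’ code), here is a minimal PyTorch sketch of fine-tuning a ResNet-18 classifier for species labels. The class count and the dummy batch are hypothetical stand-ins for real labeled camera trap data.

```python
# A minimal PyTorch sketch of the general technique described above: training
# a ResNet-18 convolutional network to classify species in camera trap images.
# NUM_SPECIES and the random batch are hypothetical stand-ins for real data.
import torch
import torch.nn as nn
from torchvision import models

NUM_SPECIES = 28                                   # hypothetical class count

model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, NUM_SPECIES)  # new output head

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)

# One training step on a dummy batch (8 RGB images, 224x224, random labels);
# in practice this loop runs over millions of labeled camera trap images.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, NUM_SPECIES, (8,))

model.train()
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(f"training loss: {loss.item():.3f}")
```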


Author(s):  
Evan Amber ◽  
Gregory J. Lipps Jr. ◽  
William E. Peterman

Traditional surveys for small mammals and herpetofauna require intensive field effort because these taxa are often difficult to detect. Field surveys are further hampered by dynamic environmental conditions and dense vegetative cover, which are both attributes of biodiverse wet meadow ecosystems. Camera traps may be a solution, but commonly used passive infrared game cameras face difficulties photographing herpetofauna and small mammals. The Adapted-Hunt Drift Fence Technique (AHDriFT) is a camera trap and drift fence system designed to overcome traditional limitations, but it has not been extensively evaluated. We deployed 15 Y-shaped AHDriFT arrays (three cameras per array) in northern Ohio wet meadows from March 10 to October 5, 2019. Equipment for each array cost approximately US$1,570. Construction and deployment of each array took about three hours, with field servicing requiring 15 minutes per array. Arrays proved durable under wind, ice, snow, flooding and heat. Processing two weeks of images from 45 cameras took an average of about 13 person-hours. We obtained 9,018 unique-capture events of 41 vertebrate species comprising 5 amphibians, 13 reptiles (11 snakes), 16 mammals and 7 birds. We imaged animals across size classes ranging from invertebrates to weasels. We assessed detection efficacy against expected biodiversity baselines: we determined expected snake communities from three years of traditional surveys, and likely small mammal and amphibian diversity from prior observations, species ranges, and habitat requirements. We cumulatively detected all amphibians and 92% of snakes and small mammals that we expected to be present. We also imaged four mammal and two snake species where they had not previously been observed. However, capture consistency varied by taxa and species, and low-mobility species or species at low densities may not be detected. In its current design, AHDriFT proved to be effective for terrestrial vertebrate biodiversity surveying.


2020 ◽  
Author(s):  
Michael A. Tabak ◽  
Mohammad S. Norouzzadeh ◽  
David W. Wolfson ◽  
Erica J. Newton ◽  
Raoul K. Boughton ◽  
...  

Abstract
Motion-activated wildlife cameras (or “camera traps”) are frequently used to remotely and non-invasively observe animals. The vast number of images collected from camera trap projects has prompted some biologists to employ machine learning algorithms to automatically recognize species in these images, or at least to filter out images that do not contain animals. These approaches are often limited by model transferability, as a model trained to recognize species from one location might not work as well for the same species in different locations. Furthermore, these methods often require advanced computational skills, making them inaccessible to many biologists.
We used 3 million camera trap images from 18 studies in 10 states across the United States of America to train two deep neural networks: one that recognizes 58 species (the “species model”) and one that determines if an image is empty or contains an animal (the “empty-animal model”).
Our species model and empty-animal model had accuracies of 96.8% and 97.3%, respectively. Furthermore, the models performed well on some out-of-sample datasets: the species model had 91% accuracy on species from Canada (accuracy range 36-91% across all out-of-sample datasets), and the empty-animal model achieved an accuracy of 91-94% on out-of-sample datasets from different continents.
Our software addresses some of the limitations of using machine learning to classify images from camera traps. By including many species from several locations, our species model is potentially applicable to many camera trap studies in North America. We also found that our empty-animal model can facilitate removal of images without animals globally. We provide the trained models in an R package (MLWIC2: Machine Learning for Wildlife Image Classification in R), which contains Shiny applications that allow scientists with minimal programming experience to use trained models and train new models in six neural network architectures with varying depths.
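The two-model workflow described above can be pictured as a simple two-stage pipeline: the empty-animal model first discards empty frames, then the species model labels the remainder. The sketch below illustrates this logic; both predict functions are hypothetical placeholders, not the MLWIC2 package’s API.

```python
# A minimal sketch of the two-stage pipeline described above. The models are
# represented as plain callables standing in for the trained networks.
from typing import Callable, Iterable

def classify_images(paths: Iterable[str],
                    empty_animal_model: Callable[[str], float],
                    species_model: Callable[[str], str],
                    animal_threshold: float = 0.5) -> dict:
    """Return {path: species or 'empty'} using the two-stage pipeline."""
    results = {}
    for path in paths:
        # Stage 1: probability that the image contains an animal.
        if empty_animal_model(path) < animal_threshold:
            results[path] = "empty"        # filtered out, never classified
        else:
            # Stage 2: species label for images that pass the filter.
            results[path] = species_model(path)
    return results
```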



2020 ◽  
Vol 15 (2) ◽  
Author(s):  
Antung Deddy Radiansyah

Gaps in biodiversity conservation management between Conservation Areas, which are the responsibility of the central government, and areas outside Conservation Areas designated as Essential Ecosystem Areas (EEA), which fall under the authority of regional governments, have caused various spatial conflicts between wildlife/wild plants and land management activities. Several obstacles faced by local governments in exercising their authority to manage EEA have kept the number and extent of EEA designated by local governments low. At present, only 703,000 ha have been designated out of the 67 million ha indicated as EEA. This study aims to review biodiversity conservation policies of local governments and company perceptions of implementing conservation policies, and to formulate strategies for optimizing the role of local governments. The results show that no legal umbrella has yet been established for the implementation of Law No. 23/2014 concerning the conservation of important ecosystems in the regions. This regulatory vacuum leaves local governments in a dilemma over continuing various conservation programs. Using a SWOT analysis of the internal and external strategic environment of the Environment and Forestry Service of Bengkulu Province, as well as an analysis of company perceptions of conservation policy regulations, this study formulates a “survival strategy” of collaboration between the central government, local governments, and the private sector to optimize the role of local governments in establishing EEA in the regions.
Keywords: management gaps, Essential Ecosystem Areas (EEA), Conservation Areas, SWOT analysis, perception analysis


2017 ◽  
Vol 1 (1) ◽  
pp. 44-49
Author(s):  
Nur Azizah ◽  
Dedeh Supriyanti ◽  
Siti Fairuz Aminah Mustapha ◽  
Holly Yang

In a company, the flow of incoming and outgoing money must be grounded in profit-generating goals. The success of financial management within a company can be gauged from its ability to manage finances and exploit every available opportunity, with the aim of controlling the company's cash flow and generating profits in line with expectations. With a web-based online accounting system (version 2.0), companies can easily manage money flowing in and out of the company's cash. The system is user-friendly, with navigation that makes it easy for financial managers to use. It covers the creation of a company's cash and corporate bank accounts on the system, deletion or archiving of cash accounts, and features for creating transfer invoices and receiving and sending money. The system is thus effective and efficient for managing corporate cash receipts and disbursements.
Keywords: online accounting system, financial management, cash and bank
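The abstract describes the bookkeeping operations (cash accounts, bank accounts, transfers, receiving and sending money) only as features; the sketch below models them as a minimal data structure. All class and method names are hypothetical illustrations, not the product's API.

```python
# A minimal sketch of the cash/bank bookkeeping described above: accounts
# plus receive, send, and transfer operations. Names are hypothetical.
class CashAccount:
    def __init__(self, name, balance=0.0):
        self.name, self.balance = name, balance

    def receive(self, amount):               # money coming into the company
        self.balance += amount

    def send(self, amount):                  # money paid out of the company
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount

def transfer(src: CashAccount, dst: CashAccount, amount: float):
    """Move money between two accounts, e.g. corporate bank to petty cash."""
    src.send(amount)
    dst.receive(amount)

cash = CashAccount("petty cash", 1_000.0)
bank = CashAccount("corporate bank", 10_000.0)
transfer(bank, cash, 500.0)
print(cash.balance, bank.balance)            # 1500.0 9500.0
```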

