Animal Species Recognition Using Deep Learning

Author(s):  
Mai Ibraheam ◽  
Fayez Gebali ◽  
Kin Fun Li ◽  
Leonard Sielecki
Author(s):  
Deepthi K

Animal watching is a common hobby, but identifying species usually requires the assistance of reference books. To give animal watchers a handy tool for admiring the beauty of animals, we developed a deep learning platform that assists users in recognizing animal species through an app named the Imagenet of Animals (IoA). A convolutional neural network (CNN) learned from animal images to localize prominent features in them. First, we generated bounded regions of interest from the shapes and colors of the object granularities and subsequently balanced the distribution of animal species. Then, a skip-connection method was used to linearly combine the outputs of the previous and current layers to improve feature extraction. Finally, we applied the softmax function to obtain a probability distribution over animal features. The learned parameters were then used to identify pictures uploaded by mobile users. The proposed CNN model with skip connections achieved a higher accuracy of 99.00% on the training images, compared with 93.98% for a plain CNN and 89.00% for an SVM. On the test dataset, the average sensitivity, specificity, and accuracy were 93.79%, 96.11%, and 95.37%, respectively.
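
The abstract gives no implementation details, but the skip-connection idea (linearly combining the outputs of the previous and current layers, followed by a softmax over classes) can be sketched as follows. This is a minimal illustration assuming PyTorch; the channel sizes, depth, and class count are placeholders, not the authors' architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SkipBlock(nn.Module):
    """Conv block whose output is linearly combined with its input,
    in the spirit of the skip-connection scheme the abstract describes."""
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.bn = nn.BatchNorm2d(channels)

    def forward(self, x):
        # Linear combination of previous-layer output (x) and current-layer output
        return F.relu(self.bn(self.conv(x)) + x)

class SkipCNN(nn.Module):
    def __init__(self, num_classes):
        super().__init__()
        self.stem = nn.Conv2d(3, 32, kernel_size=3, padding=1)
        self.blocks = nn.Sequential(SkipBlock(32), SkipBlock(32))
        self.head = nn.Linear(32, num_classes)

    def forward(self, x):
        x = F.relu(self.stem(x))
        x = self.blocks(x)
        x = F.adaptive_avg_pool2d(x, 1).flatten(1)
        # Softmax yields a probability distribution over species
        return F.softmax(self.head(x), dim=1)
```

In practice one would return the raw logits during training and use nn.CrossEntropyLoss, which applies the softmax internally; the explicit softmax here mirrors the probability distribution described in the abstract.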


2019 ◽  
Author(s):  
Hayder Yousif

[ACCESS RESTRICTED TO THE UNIVERSITY OF MISSOURI AT REQUEST OF AUTHOR.] Camera traps are a popular tool to sample animal populations because they are noninvasive, detect a variety of species, and can record many thousands of animal detections per deployment. Cameras are typically set to take bursts of multiple images for each detection, and are deployed in arrays of dozens or hundreds of sites, often resulting in millions of images per study. The task of converting images to animal detection records from such large image collections is daunting, and made worse by situations that generate copious empty pictures from false triggers (e.g. camera malfunction or moving vegetation) or pictures of humans. We offer the first widely available computer vision tool for processing camera trap images. Our results show that the tool is accurate and results in substantial time savings for processing large image datasets, thus improving our ability to monitor wildlife across large scales with camera traps. In this dissertation, we have developed new image/video processing and computer vision algorithms for efficient and accurate object detection and sequence-level classification from natural-scene camera-trap images. This work addresses the following five major tasks:

(1) Human-animal detection. We developed a fast and accurate scheme for human-animal detection from highly cluttered camera-trap images using joint background modeling and deep learning classification. Specifically, we first developed an effective background modeling and subtraction scheme to generate region proposals for the foreground objects. We then developed a cross-frame image patch verification to reduce the number of foreground object proposals. Finally, we performed a complexity-accuracy analysis of deep convolutional neural networks (DCNN) to develop a fast deep learning classification scheme that classifies these region proposals into three categories: humans, animals, and background patches. The optimized DCNN is able to maintain a high level of accuracy while reducing the computational complexity by 14 times. Our experimental results demonstrate that the proposed method outperforms existing methods on the camera-trap dataset.

(2) Object segmentation from natural scenes. We first designed and trained a fast DCNN for animal-human-background object classification, which is used to analyze the input image and generate multi-layer feature maps representing the responses of different image regions to the animal-human-background classifier. From these feature maps, we construct the so-called deep objectness graph for accurate animal-human object segmentation with graph cut. The segmented object regions from each image in the sequence are then verified and fused in the temporal domain using background modeling. Our experimental results demonstrate that our proposed method outperforms existing state-of-the-art methods on the camera-trap dataset with highly cluttered natural scenes.

(3) DCNN-domain background modeling. We replaced the background model with a new, more efficient deep learning-based model. The input frames are segmented into regions through the deep objectness graph, and the region boundaries of the input frames are then multiplied by each other to obtain the regions of movement patches. We construct the background representation using the temporal information of the co-located patches. We propose to fuse the subtraction and foreground/background pixel classification of two representations: a) chromaticity and b) deep pixel information.

(4) Sequence-level object classification. We proposed a new method for sequence-level video recognition with application to animal species recognition from camera-trap images. First, using background modeling and cross-frame patch verification, we developed a scheme to generate candidate object regions, or object proposals, in the spatiotemporal domain. Second, we developed a dynamic programming optimization approach to identify the best temporal subset of object proposals (a sketch of this step is given below). Third, we aggregated and fused the features of these selected object proposals for efficient sequence-level animal species classification.
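
The dissertation's exact optimization objective is not given in this abstract, but the dynamic programming step of task (4), selecting a temporally consistent subset of object proposals, can be sketched roughly as follows. This sketch assumes one proposal is kept per frame (each frame having at least one) and trades detection score against temporal IoU consistency; the function names and scoring rule are illustrative, not the author's.

```python
import numpy as np

def iou(a, b):
    # Boxes as (x1, y1, x2, y2)
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def select_proposals(frames):
    """frames: list over time; each item is a list of (box, score) proposals.
    Returns one proposal index per frame, maximizing the summed detection
    scores plus an IoU smoothness bonus between consecutive frames."""
    n = len(frames)
    dp = [[s for _, s in frames[0]]]   # best path value ending at each proposal
    back = []                          # backpointers for path recovery
    for t in range(1, n):
        cur, bk = [], []
        for box, score in frames[t]:
            # Best predecessor: previous DP value plus IoU smoothness bonus
            vals = [dp[t - 1][j] + iou(frames[t - 1][j][0], box)
                    for j in range(len(frames[t - 1]))]
            j = int(np.argmax(vals))
            cur.append(score + vals[j])
            bk.append(j)
        dp.append(cur)
        back.append(bk)
    # Backtrack the best path from the last frame
    path = [int(np.argmax(dp[-1]))]
    for t in range(n - 1, 0, -1):
        path.append(back[t - 1][path[-1]])
    return path[::-1]
```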


2021 ◽  
Vol 15 ◽  
Author(s):  
XiaoLe Liu ◽  
Si-yang Yu ◽  
Nico A. Flierman ◽  
Sebastián Loyola ◽  
Maarten Kamermans ◽  
...  

Animal pose estimation tools based on deep learning have greatly improved animal behaviour quantification. These tools perform pose estimation on individual video frames, but do not account for variability of animal body shape in their prediction and evaluation. Here, we introduce a novel multi-frame animal pose estimation framework, referred to as OptiFlex. This framework integrates a flexible base model (i.e., FlexibleBaseline), which accounts for variability in animal body shape, with an OpticalFlow model that incorporates temporal context from nearby video frames. Pose estimation can be optimised using multi-view information to leverage all four dimensions (3D space and time). We evaluate FlexibleBaseline using datasets of four different lab animal species (mouse, fruit fly, zebrafish, and monkey) and introduce an intuitive evaluation metric—adjusted percentage of correct key points (aPCK). Our analyses show that OptiFlex provides prediction accuracy that outperforms current deep learning based tools, highlighting its potential for studying a wide range of behaviours across different animal species.
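
The precise definition of aPCK is in the paper rather than this abstract; a plausible minimal sketch, assuming the pixel threshold is adjusted by a per-image animal-size measure (e.g. a bounding-box diagonal) to account for body-shape variability, looks like this:

```python
import numpy as np

def apck(pred, gt, scale, thresh=0.15):
    """Adjusted percentage of correct keypoints (sketch).
    pred, gt: (N, K, 2) arrays of predicted/ground-truth keypoint coordinates;
    scale: (N,) per-image animal size used to adjust the pixel threshold.
    NOTE: the exact adjustment used in the OptiFlex paper may differ."""
    dist = np.linalg.norm(pred - gt, axis=-1)    # (N, K) pixel errors
    correct = dist <= thresh * scale[:, None]    # size-adjusted threshold
    return correct.mean() * 100.0
```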


2018 ◽  
Vol 2 ◽  
pp. e25268 ◽  
Author(s):  
Maarten Schermer ◽  
Laurens Hogeweg

Volunteers, researchers and citizen scientists are important contributors to observation and monitoring databases. Their contributions become part of a global digital data pool that forms the basis for important and powerful tools for conservation, research, education and policy. With the data contributed by citizen scientists also come concerns about data completeness and quality. For data generated by citizen scientists, taxonomic bias effects, where certain species (groups) are underrepresented in observations, are even stronger than for professionally collected data. Identification tools that help citizen scientists access more difficult, underrepresented groups can help to close this gap.

We are exploring the possibilities of using artificial intelligence for automatic species identification as a tool to support the registration of field observations. Our aim is to offer nature enthusiasts the possibility of automatically identifying species based on photos they have taken as part of an observation. Furthermore, by allowing them to register these identifications as part of the observation, we aim to enhance the completeness and quality of the observation database. We will demonstrate the use of automatic species recognition as part of the observation registration process, using a recognition model based on deep learning techniques.

We investigated automatic species recognition using deep learning models trained with observation data from the popular website Observation.org (https://observation.org/). At Observation.org, data quality is ensured by a review process in which experts validate all observations. Using the pictures and corresponding validated metadata from their database, models were developed covering several species groups. These techniques were based on earlier work that culminated in ObsIdentify, a free offline mobile app for identifying species based on pictures taken in the field. The models are also made available as an API web service, which allows identification by offering a photo through common HTTP communication, essentially like uploading it through a webpage. This web service was implemented in the observation entry workflows of Observation.org. By providing an automatically generated taxonomic identification with each image, we expect to stimulate existing citizen scientists to generate more numerous and more biodiverse observations, and we hope to motivate new citizen scientists to start contributing.

Additionally, we investigated the use of image recognition for identifying species in the photo other than the primary subject, for example the host plant in photos of insects. The Observation.org database contains many such photos that are associated with a single species observation, while additional species are also present in the photo but remain unidentified. Combining object detection, to detect individual specimens, with species recognition models opens up the possibility of automatically identifying and counting these species, enhancing the quality of the observations. In the presentation we will present the initial results of this application of deep learning technology, and discuss the possibilities and challenges.
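
The abstract describes the web service only as accepting a photo over common HTTP; a generic client sketch in Python might look as follows. The endpoint URL and JSON fields below are hypothetical placeholders, not the actual Observation.org API.

```python
import requests

# Hypothetical endpoint and response schema: the abstract only states that a
# photo is offered over plain HTTP, "like uploading it through a webpage".
API_URL = "https://example.org/identify"  # placeholder, not the real service

with open("observation.jpg", "rb") as f:
    resp = requests.post(API_URL, files={"image": f}, timeout=30)
resp.raise_for_status()

for match in resp.json().get("predictions", []):  # assumed response fields
    print(match["species"], match["probability"])
```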


Author(s):  
Ahmad Heidary-Sharifabad ◽  
Mohsen Sardari Zarchi ◽  
Sima Emadi ◽  
Gholamreza Zarei

The Chenopodiaceae species are ecologically and economically important and play a significant role in biodiversity around the world. Biodiversity protection is critical for the survival and sustainability of each ecosystem, and since recognizing plant species in their natural habitats is the first step in plant diversity protection, automatic species classification in the wild would greatly help species analysis and, consequently, biodiversity protection on Earth. Computer vision approaches can be used for automatic species analysis. Modern computer vision approaches are based on deep learning techniques, and a standard dataset is essential for training a deep learning model. Hence, the main goal of this research is to provide a standard dataset of Chenopodiaceae images. This dataset, called ACHENY, contains 27030 images of 30 Chenopodiaceae species in their natural habitats. The other goal of this study is to investigate the applicability of the ACHENY dataset using deep learning models. Therefore, two novel deep learning models based on ACHENY are introduced: first, a lightweight deep model that is trained from scratch and is designed to be agile and fast; second, a model based on the EfficientNet-B1 architecture, which is pre-trained on ImageNet and fine-tuned on ACHENY. The experimental results show that the two proposed models can perform fine-grained recognition of Chenopodiaceae species with promising accuracy. To evaluate our models, their performance was compared with the well-known VGG-16 model after fine-tuning it on ACHENY. Both VGG-16 and our first model achieved about 80% accuracy, while VGG-16 is about 16× larger than the first model. Our second model reaches about 90% accuracy and outperforms the other models; its parameter count is about 5× that of the first model but still about one-third of the VGG-16 parameters.
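
As a sketch of the second model's setup (EfficientNet-B1 pre-trained on ImageNet, fine-tuned on the 30 ACHENY classes) using torchvision; the freezing strategy and hyperparameters are assumptions, not the authors' published recipe:

```python
import torch
import torch.nn as nn
from torchvision import models

# Load EfficientNet-B1 pre-trained on ImageNet (torchvision >= 0.13)
model = models.efficientnet_b1(
    weights=models.EfficientNet_B1_Weights.IMAGENET1K_V1)

# Replace the classification head for the 30 ACHENY species
num_classes = 30
model.classifier[1] = nn.Linear(model.classifier[1].in_features, num_classes)

# Assumed strategy: freeze the backbone and fine-tune only the new head first
for p in model.features.parameters():
    p.requires_grad = False

optimizer = torch.optim.Adam(
    filter(lambda p: p.requires_grad, model.parameters()), lr=1e-3)
criterion = nn.CrossEntropyLoss()
```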


F1000Research ◽  
2016 ◽  
Vol 4 ◽  
pp. 115
Author(s):  
Wladimir J. Alonso

Because the ability to hide in plain sight provides a major selective advantage to both prey and predator species, the emergence of the striking colouration of some animal species (such as many coral reef fish) represents an evolutionary conundrum that remains unsolved to date. Here I propose a framework by which conspicuous colours can emerge when the selective pressures for camouflage are relaxed, either (1) because camouflage is not essential under specific prey/predator conditions, or (2) because it is impossible to reduce the signal-to-background noise in the environment. The first case is found among non-predator species that possess effective defences against predators (hence a “Carefree World”), such as the strong beaks of macaws and the flight abilities of hummingbirds. The second case is found in diurnal mobile fish of coral reef communities, which swim in clear waters against highly contrasting and unpredictable backgrounds (hence a “Hyper-Visible World”). In those contexts, the selective pressures that are usually secondary to camouflage (such as sexual, warning, species recognition or territorial display) are free to drive the evolution of brilliant and diverse colouration. This theoretical framework can also be useful for studying the conditions that allow for conspicuousness in other sensory contexts (acoustic, chemical, electrical, etc.).


2020 ◽  
Vol 11 (3) ◽  
pp. 144
Author(s):  
Yonatan Adiwinata ◽  
Akane Sasaoka ◽  
I Putu Agung Bayupati ◽  
Oka Sudana

Fish species conservation has a big impact on the balance of natural ecosystems, and efficient technology for identifying fish species could support it. Recent related research has classified fish species using deep learning, mostly with convolutional neural networks (CNN). This research experimented with a deep learning-based object detection method, Faster R-CNN, which makes it possible to recognize the species of fish in an image without additional image preprocessing. The aim was to compare the performance of Faster R-CNN against other object detection methods, such as SSD, in fish species detection. The dataset used was the QUT Fish Dataset. Faster R-CNN reached an accuracy of 80.4%, far above the 49.2% accuracy of the Single Shot Detector (SSD) model.
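
For reference, a Faster R-CNN detector of the kind the paper evaluates can be instantiated with torchvision as sketched below; the species count is a placeholder, and this is not the authors' exact configuration.

```python
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

# Pre-trained Faster R-CNN with a ResNet-50 FPN backbone
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")

# Replace the box-predictor head: one class per fish species plus background.
# The species count below is an assumed placeholder, not taken from the paper.
num_classes = 1 + 10
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

# After fine-tuning, inference takes a list of CHW float tensors in [0, 1]
# and returns per-image dicts of boxes, labels, and scores.
model.eval()
with torch.no_grad():
    predictions = model([torch.rand(3, 480, 640)])
print(predictions[0]["boxes"].shape, predictions[0]["labels"][:5])
```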

