Improving the Accuracy of Species Identification by Combining Deep Learning With Field Occurrence Records

2021 · Vol 9
Author(s):  
Jianqiang Sun ◽  
Ryo Futahashi ◽  
Takehiko Yamanaka

Citizen science is essential for nationwide ecological surveys of species distribution. Because the accuracy of the information collected by beginner participants is not guaranteed, it is important to develop automated systems to assist species identification. Deep learning techniques for image recognition have been successfully applied in many fields and may contribute to species identification. However, deep learning techniques have not been utilized in citizen science ecological surveys, because they require a large number of images, which are time-consuming and labor-intensive to collect. To address these issues, we propose a simple and effective strategy for constructing species identification systems using fewer images. As an example, we collected 4,571 images of 204 species of Japanese dragonflies and damselflies from open-access websites (i.e., web scraping) and scanned 4,005 images from books and specimens for species identification. In addition, we obtained field occurrence records (i.e., ranges of distribution) for all species of dragonflies and damselflies from the National Biodiversity Center, Japan. Using the images and records, we developed a species identification system for Japanese dragonflies and damselflies. We validated that combining web-scraped and scanned images improved the accuracy of the species identification system: the top-1 accuracy of the system was 0.324 when trained using only web-scraped images, whereas it improved to 0.546 when trained using both web-scraped and scanned images. In addition, combining the images with field occurrence records further improved the top-1 accuracy to 0.668. The top-3 accuracies under the three conditions were 0.565, 0.768, and 0.873, respectively. Thus, combining images with field occurrence records markedly improved the accuracy of the species identification system. The identification strategy proposed in this study can be applied to any group of organisms. Furthermore, it has the potential to strike a balance between continuously recruiting beginner participants and maintaining the data accuracy of citizen science.
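The abstract does not spell out the fusion rule, but one simple way to combine a classifier's softmax output with field occurrence records is to down-weight species not recorded in the observation's region and renormalize. A minimal sketch (the species names, probabilities, and hard-masking rule are illustrative assumptions, not the authors' exact method):

```python
def rerank_with_occurrence(probs, occurring_species, penalty=0.0):
    """Down-weight species not recorded in the observation's region,
    then renormalize so the adjusted scores sum to 1."""
    adjusted = {sp: p * (1.0 if sp in occurring_species else penalty)
                for sp, p in probs.items()}
    total = sum(adjusted.values()) or 1.0
    return {sp: p / total for sp, p in adjusted.items()}

# Hypothetical CNN softmax output for an image taken in a region where
# occurrence records list only two of the three candidate species.
probs = {"Sympetrum frequens": 0.35,
         "Anax parthenope": 0.45,       # not recorded in this region
         "Ischnura senegalensis": 0.20}
reranked = rerank_with_occurrence(probs, {"Sympetrum frequens",
                                          "Ischnura senegalensis"})
best = max(reranked, key=reranked.get)
```

With a hard mask (`penalty=0.0`), the out-of-range species is eliminated and the top-1 prediction flips to an in-range species; a softer penalty would merely down-rank it.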

Drones · 2021 · Vol 5 (1) · pp. 6
Author(s):  
Apostolos Papakonstantinou ◽  
Marios Batsaris ◽  
Spyros Spondylidis ◽  
Konstantinos Topouzelis

Marine litter (ML) accumulation in the coastal zone has been recognized as a major problem of our time, as it can dramatically affect the environment, marine ecosystems, and coastal communities. Existing monitoring methods fail to capture the spatiotemporal changes and dynamics of ML concentrations. Recent work showed that unmanned aerial systems (UAS), along with computer vision methods, provide a feasible alternative for ML monitoring. In this context, we propose a citizen science UAS data acquisition and annotation protocol combined with deep learning techniques for the automatic detection and mapping of ML concentrations in the coastal zone. Five convolutional neural networks (CNNs) were trained to classify UAS image tiles into two classes: (a) litter and (b) no litter. Testing the CNNs' ability to generalize to an unseen dataset, we found that the VGG19 CNN returned an overall accuracy of 77.6% and an F-score of 77.42%. ML density maps were created from the automated classification results and compared with those produced by a manual screening classification, demonstrating the approach's geographical transferability to new, unseen beaches. Although ML recognition remains a challenging task, this study provides evidence of the feasibility of a citizen science UAS-based monitoring method combined with deep learning techniques for quantifying the ML load in the coastal zone using density maps.
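The tile-and-classify pipeline behind such density maps can be sketched as follows; the CNN is stubbed out with a brightness threshold and the 64-pixel tile size is an illustrative assumption, not the study's actual configuration:

```python
import numpy as np

def litter_density_map(image, tile=64, classify=None):
    """Split a UAS image into square tiles, run a litter/no-litter
    classifier on each tile, and return a per-tile binary density grid."""
    if classify is None:
        # Stub classifier for illustration: bright tiles count as litter.
        classify = lambda t: t.mean() > 0.5
    h, w = image.shape[:2]
    rows, cols = h // tile, w // tile
    grid = np.zeros((rows, cols), dtype=int)
    for r in range(rows):
        for c in range(cols):
            t = image[r * tile:(r + 1) * tile, c * tile:(c + 1) * tile]
            grid[r, c] = int(classify(t))
    return grid

# A 128x128 synthetic "image" whose top-left quadrant is bright.
img = np.zeros((128, 128))
img[:64, :64] = 1.0
density = litter_density_map(img)  # 2x2 grid; only the top-left tile flags litter
```

In practice the stub would be replaced by the trained CNN's prediction on each tile, and the grid values aggregated over larger cells to yield litter counts per unit area.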


2018 · Vol 2 · pp. e25261
Author(s):  
Erick Mata-Montero ◽  
Dagoberto Arias-Aguilar ◽  
Geovanni Figueroa-Mata ◽  
Juan Carlos Valverde

The fast and accurate identification of forest species is critical to support their sustainable management, to combat illegal logging, and ultimately to conserve them. Traditionally, the anatomical identification of forest species is a manual process that requires a human expert with a high level of knowledge to observe and differentiate certain anatomical structures present in a wood sample (Wiedenhoeft (2011)). In recent years, deep learning techniques have drastically improved the state of the art in many areas such as speech recognition, visual object recognition, and image and music information retrieval, among others (LeCun et al. (2015)). In the context of the automatic identification of plants, these techniques have recently been applied with great success (Carranza-Rojas et al. (2017)), and mobile apps such as Pl@ntNet have even been developed to identify a species from images captured on-the-fly (Joly et al. (2014)). In contrast to conventional machine learning techniques, deep learning techniques extract and learn the relevant features by themselves from large datasets. One of the main limitations for the application of deep learning techniques to forest species identification is the lack of comprehensive datasets for the training and testing of convolutional neural network (CNN) models. For this work, we used a dataset developed at the Federal University of Parana (UFPR) in Curitiba, Brazil, that comprises 2,939 images in JPG format without compression and a resolution of 3,264 × 2,448 pixels. It includes 41 different forest species of the Brazilian flora that were cataloged by the Laboratory of Wood Anatomy at UFPR (Paula Filho et al. (2014)). Due to the lack of comprehensive datasets worldwide, this has become a benchmark dataset in previous research (Paula Filho et al. (2014), Hafemann et al. (2014)). In this work, we propose and demonstrate the power of deep CNNs to identify forest species based on macroscopic images.
We use a pre-trained model built from ResNet50 with weights pre-trained on ImageNet. We apply fine-tuning by first truncating the top (softmax) layer of the pre-trained network and replacing it with a new softmax layer. We then retrain the model on the dataset of macroscopic images of Brazilian flora species used in Hafemann et al. (2014) and Paula Filho et al. (2014). Using the proposed model, we achieve a top-1 accuracy of 98%, which is better than the 95.77% reported in Hafemann et al. (2014) using the same dataset. In addition, our result is slightly better than the 97.77% reported in Paula Filho et al. (2014), which was obtained by combining several conventional computer vision techniques.


According to the World Health Organization, diseases such as malaria and dengue account for almost one million deaths every year. The carrier mosquitoes of a particular disease are specific to it, and they spread the disease throughout a region by reproducing there. With advancements in machine learning and computer vision technologies, the mosquito species present in a region can be detected easily and swiftly using recordings of their wing movements. The wingbeat of each mosquito species is distinctive, making this a reliable method of identification. Once such solutions are deployed on mosquito traps, a region can be alerted if, for example, an Aedes aegypti mosquito is found; this species is widely known to carry the Zika virus. The identification of such carrier species can also help in detecting the spread of mosquito-borne diseases in the surveyed region. In this paper, we review various techniques that show promising results in the identification of mosquito species. The trained models can be deployed on constrained devices to build a cost-effective and efficient mosquito species identification system.
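Wingbeat-based identification typically starts from the frequency content of the recording. A minimal sketch of extracting the dominant wingbeat frequency with an FFT (the 600 Hz synthetic tone and the sampling rate are illustrative assumptions, not values from the paper):

```python
import numpy as np

def dominant_frequency(signal, sample_rate):
    """Return the strongest frequency component (Hz) of a recording."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    return freqs[np.argmax(spectrum)]

# Synthetic 1-second recording: a 600 Hz tone standing in for a wingbeat.
rate = 8000
t = np.arange(rate) / rate
recording = np.sin(2 * np.pi * 600 * t)
freq = dominant_frequency(recording, rate)
```

A classifier would then map such spectral features (or full spectrograms) to species labels; the point of the sketch is only that the discriminative signal lives in the frequency domain, which is what makes wingbeat sensing cheap enough for constrained trap hardware.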


2017 · Vol 1 · pp. e20569
Author(s):  
Ignacio Heredia ◽  
Lara Lloret ◽  
Jesús Marco ◽  
Francisco Pando

Face recognition plays a vital role in security applications. In recent years, researchers have focused on pose, illumination, and related challenges in face recognition. Traditional methods of face recognition rely on approaches such as OpenCV's Fisherfaces, which analyze facial expressions and attributes. The deep learning method used in the proposed system is a convolutional neural network (CNN). The proposed work includes the following modules: (1) face detection, (2) gender recognition, and (3) age prediction. The results obtained from this work show that real-time age and gender detection using a CNN provides better accuracy than other existing approaches.
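One common way to realize modules (2) and (3) is a single CNN backbone with two task-specific heads, one classifying gender and one regressing age, fed with face crops from the detection module. A hypothetical toy sketch of that architecture, not the paper's actual network:

```python
import torch
import torch.nn as nn

class AgeGenderCNN(nn.Module):
    """Toy CNN with a shared backbone and two task-specific heads."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.gender_head = nn.Linear(32, 2)  # two-class gender logits
        self.age_head = nn.Linear(32, 1)     # age as a regression target

    def forward(self, x):
        feats = self.backbone(x)
        return self.gender_head(feats), self.age_head(feats)

# One dummy 64x64 face crop, as the detection module might produce.
with torch.no_grad():
    gender_logits, age = AgeGenderCNN()(torch.zeros(1, 3, 64, 64))
```

Sharing the backbone lets both predictions reuse the same learned facial features, which is cheaper than training two separate networks.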

