Supporting citizen scientists with automatic species identification using deep learning image recognition models

2018 · Vol 2 · pp. e25268
Author(s):  
Maarten Schermer ◽  
Laurens Hogeweg

Volunteers, researchers and citizen scientists are important contributors to observation and monitoring databases. Their contributions become part of a global digital data pool that forms the basis for powerful tools in conservation, research, education and policy. With the data contributed by citizen scientists, however, come concerns about data completeness and quality. In data generated by citizen scientists, taxonomic bias effects, where certain species (groups) are underrepresented in observations, are even stronger than in professionally collected data. Identification tools that help citizen scientists access more difficult, underrepresented groups can help to close this gap. We are exploring the possibilities of using artificial intelligence for automatic species identification as a tool to support the registration of field observations. Our aim is to offer nature enthusiasts the possibility of automatically identifying species based on photos they have taken as part of an observation. Furthermore, by allowing them to register these identifications as part of the observation, we aim to enhance the completeness and quality of the observation database. We will demonstrate the use of automatic species recognition as part of the observation registration process, using a recognition model based on deep learning techniques. We investigated automatic species recognition using deep learning models trained with observation data from the popular website Observation.org (https://observation.org/), where data quality is ensured by expert review of all observations. Using the pictures and corresponding validated metadata from its database, models were developed covering several species groups. These techniques were based on earlier work that culminated in ObsIdentify, a free offline mobile app for identifying species from pictures taken in the field. The models are also made available as an API web service, which allows identification by submitting a photo over common HTTP communication, essentially like uploading it through a webpage. This web service was implemented in the observation entry workflows of Observation.org. By providing an automatically generated taxonomic identification with each image, we expect to stimulate existing citizen scientists to generate more, and more biodiverse, observations, and we hope to motivate new citizen scientists to start contributing. Additionally, we investigated the use of image recognition to identify species in a photo other than the primary subject, for example the host plant in photos of insects. The Observation.org database contains many such photos, each associated with a single species observation, in which additional species are present but unidentified. Combining object detection of individual specimens with species recognition models opens up the possibility of automatically identifying and counting these species, enhancing the quality of the observations. In the presentation we will present the initial results of this application of deep learning technology and discuss its possibilities and challenges.
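As a minimal sketch of the kind of HTTP-based identification call the abstract describes, the Python snippet below posts a photo to an identification web service. The endpoint URL, form-field name, and response schema are illustrative assumptions, not the documented API.

```python
import requests

# Hypothetical endpoint standing in for the identification web service;
# the real API's URL and response format are not documented in the abstract.
API_URL = "https://example.org/api/identify"

def identify_photo(image_path: str) -> list[dict]:
    """Upload a field photo and return ranked candidate identifications."""
    with open(image_path, "rb") as f:
        response = requests.post(API_URL, files={"image": f}, timeout=30)
    response.raise_for_status()
    # Assumed response shape: [{"taxon": ..., "probability": ...}, ...]
    return response.json()["predictions"]

if __name__ == "__main__":
    for candidate in identify_photo("observation_photo.jpg")[:5]:
        print(f'{candidate["taxon"]}: {candidate["probability"]:.2%}')
```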

Author(s):  
Laurens Hogeweg ◽  
Maarten Schermer ◽  
Sander Pieterse ◽  
Timo Roeke ◽  
Wilfred Gerritsen

The potential of citizen scientists to contribute information about species occurrences and other biodiversity questions is large, thanks to the ubiquitous presence of organisms and the approachable nature of the subject. Online platforms that collect observations of species from the public have existed for several years and have grown rapidly in recent times, partly due to the widespread availability of mobile phones. These online platforms, and many scientific studies as well, suffer from a taxonomic bias: the effect that certain species groups are overrepresented in the data (Troudet et al. 2017). One reason for this bias is that the accurate identification of species, by non-experts and experts alike, is limited by the sheer number of species that exist. Even in the geographically limited area of the Netherlands and Belgium, the number of species that are regularly observed is in the thousands, making it difficult or impossible for an individual to identify them all. Recent advances in image-based species identification powered by deep learning (Norouzzadeh et al. 2018) suggest a large potential for a new set of digital tools that can help the public (and experts) identify species automatically. The online observation platform Observation.org has collected over 93 million occurrences in the Netherlands and Belgium over the last 15 years. About 20% of these occurrences are supported by photographs, giving a rich database of 17 million photographs covering all major species groups (e.g., birds, mammals, plants, insects, fungi). Most of the observations with photos were validated by human experts at Observation.org, creating a unique database suitable for machine learning. Using this database, we have developed a deep learning-based species identification model covering 13,767 species, 1,530 species-groups, 734 subspecies and 117 hybrids. The model is made available to the public through a web service (https://identify.biodiversityanalysis.nl) and a set of mobile apps (ObsIdentify). In this talk we will discuss our technical approach for dealing with the large number of species in a deep learning model. We will evaluate the results in terms of performance for different species groups and what this could mean for addressing part of the taxonomic bias. We will also consider the limitations of (image-based) automated species identification and identify avenues to further improve it. We will illustrate how the web service and mobile apps are applied to support citizen scientists and the observation validation workflows at Observation.org. Finally, we will examine the potential of these methods to provide large-scale automated analysis of biodiversity data.
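As a rough illustration of a single model covering this many labels, the sketch below fine-tunes a stock pretrained CNN with one large softmax head spanning species, species-groups, subspecies and hybrids. The backbone choice and training details are assumptions; the abstract does not specify the architecture used.

```python
import torch
import torch.nn as nn
from torchvision import models

# One flat classification head over all label types; the counts come from
# the abstract, the single-head design is an illustrative assumption.
NUM_CLASSES = 13767 + 1530 + 734 + 117  # species + groups + subspecies + hybrids

backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
backbone.fc = nn.Linear(backbone.fc.in_features, NUM_CLASSES)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.AdamW(backbone.parameters(), lr=1e-4)

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One supervised step over a batch of expert-validated photos."""
    optimizer.zero_grad()
    loss = criterion(backbone(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```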


2021 · Vol 2021 · pp. 1-15
Author(s):  
Mingyuan Xin ◽  
Yong Wang

Deep learning algorithms have the advantages of clear structure and high accuracy in image recognition. Accurate identification of pests and diseases in crops can make pest control in farmland more targeted, which benefits agricultural production. This paper proposes a DCNN-G model based on deep learning fused with Google data analysis. The model was trained on 640 data samples and then tested on 5,000 test samples, with 80% used as the training set and 20% as the test set, and its accuracy was compared with that of a conventional recognition model. The results show that degrading a quality-level-1 image with the stated degradation parameters yields images at 9 quality levels. YOLO-v4, an improved YOLO network, was used to test and validate the images after quality-level classification. Observed subjectively by the human eye, images of different quality levels, especially of adjacent levels, are difficult to tell apart. Using the algorithm model proposed in this article, the recognition accuracy is 95%, much higher than the 84% of the basic DCNN model. Quality-level classification of crop pest and disease images can provide important prior information for understanding such images, and can also provide a scientific basis for testing the imaging capabilities of sensors and objectively evaluating image quality. Using convolutional neural networks to classify crop pest and disease image quality not only expands the application field of deep learning but also provides a new method for crop pest and disease image quality assessment.
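To make the quality-level idea concrete, the sketch below derives nine progressively degraded versions from one pristine (level-1) image, which could then serve as labeled training data for a quality classifier. The degradation type (Gaussian blur) and its parameters are placeholder assumptions; the paper's actual degradation parameters are not given here.

```python
from PIL import Image, ImageFilter

# Placeholder blur radii for quality levels 1..9; level 1 is undegraded.
BLUR_RADII = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0]

def make_quality_levels(src_path: str) -> list[tuple[int, Image.Image]]:
    """Return (quality_level, image) pairs derived from one source image."""
    original = Image.open(src_path).convert("RGB")
    return [
        (level, original.filter(ImageFilter.GaussianBlur(radius)))
        for level, radius in enumerate(BLUR_RADII, start=1)
    ]

if __name__ == "__main__":
    # "wheat_leaf.jpg" is a hypothetical example file.
    for level, img in make_quality_levels("wheat_leaf.jpg"):
        img.save(f"wheat_leaf_q{level}.jpg")
```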


2018 · Vol 2 · pp. e25917
Author(s):  
Maarten Schermer ◽  
Laurens Hogeweg ◽  
Max Caspers

The completeness and quality of the information in natural history museum collections is essential to support its use, for example in collection management. Currently, the accuracy of the taxonomic information largely depends on expert-provided metadata, such as species identifications. At present, an increase in the use of digitization techniques coincides with a dwindling number of taxonomic specialists, creating a growing backlog in specimen identifications. We are investigating the role of artificial intelligence for automatic species identification in supporting collection management. Among collection specimens awaiting identification, common species predominate, taking up a large amount of the experts' time with a relatively easy, repetitive task. One of our aims is therefore to use human expertise where it is most needed, for complex tasks, and to use properly validated computational methods for repetitive, less difficult identifications. To this end, we demonstrate the use of automatic species identification in digitization workflows, using deep learning-based image recognition. We investigated potential gains in the identification process of a large digitization project of papered Lepidoptera (>500,000 specimens). In this ongoing project, volunteers unpack, register and photograph the unmounted butterflies and repack them sustainably, still unmounted. Using only the individual images made by the volunteers, taxonomic experts identify the specimens. Because the speed of digitization currently exceeds that of identification, a growing backlog of yet-to-be-identified specimens has formed, limiting the speed of publication of this biodiversity information. The test case for image recognition concerns specimens of the families Papilionidae and Lycaenidae, mostly collected in Indonesia. By allowing the volunteers to provide an automatically generated identification with each image, we enable the taxonomic specialists to quickly validate the more easily identifiable specimens. This reduces their workload, allows them to focus on the more demanding specimens and increases the rate of specimen identification. We demonstrate how to combine computer and human decisions to ensure both high data-quality standards and a reduction of expert time.
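One simple way to combine computer and human decisions, as the abstract describes, is a confidence-based triage: confident automatic identifications go to a quick-validation queue, the rest straight to a specialist. The sketch below assumes a hypothetical record layout and threshold; the project's actual decision rule is not specified here.

```python
# Illustrative confidence threshold; the real cut-off would be tuned on
# expert-validated data per species group.
CONFIDENCE_THRESHOLD = 0.95

def triage(predictions: list[dict]) -> tuple[list[dict], list[dict]]:
    """Split specimen records into quick-validation and expert-review queues."""
    quick_validation, expert_review = [], []
    for record in predictions:
        # Assumed record shape: {"specimen_id": ..., "taxon": ..., "probability": ...}
        if record["probability"] >= CONFIDENCE_THRESHOLD:
            quick_validation.append(record)
        else:
            expert_review.append(record)
    return quick_validation, expert_review
```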


Author(s):  
Laurens Hogeweg ◽  
Theo Zeegers ◽  
Ioannis Katramados ◽  
Eelke Jongejans

Recent studies have shown a worrying decline in the quantity and diversity of insects at a number of locations in Europe (Hallmann et al. 2017) and elsewhere (Lister and Garcia 2018). Although the downward trend these studies show is clear, they are limited to certain insect groups and geographical locations. Most available studies (see the overview in Sánchez-Bayo and Wyckhuys 2019) were performed in nature reserves, leaving rural and urban areas largely understudied. Most are also based on long-term collaborative efforts of entomologists and volunteers performing labor-intensive repeat measurements, inherently limiting the number of locations that can be monitored. We propose a monitoring network for insects in the Netherlands, consisting of a large number of smart insect cameras spread across nature, rural and urban areas. The aim of the network is to provide continuous monitoring of different insect groups with little manual labor. In addition, we aimed to develop the cameras at a relatively low price point, so that they can be installed at many locations and encourage participation by citizen science enthusiasts. The cameras are made smart with image processing: image enhancement, insect detection and species identification are performed using deep learning-based algorithms. The cameras take a picture every 10 seconds of a screen measuring ca. 30×40 cm, capturing insects that have landed on it (Fig. 1). Several screen setups were evaluated. Vertical screens were used to attract flying insects, with different screen colors and lighting at night to attract night-flying insects such as moths. In addition, two horizontal screen orientations were used: (1) emulating pan traps to attract pollinator species (bees and hoverflies) and (2) capturing ground-based insects and arthropods such as beetles and spiders. Time sequences of images were analyzed semi-automatically, as follows. First, single insects are outlined and cropped using boxes in every captured image. The cropped insects in every image are then preliminarily identified using previously developed deep learning-based automatic species identification software, the Nature Identification API (https://identify.biodiversityanalysis.nl). Next, single insects are linked between consecutive images using a tracking algorithm based on screen position and the preliminary identifications. This step yields, for every individual insect, a linked series of outlines and preliminary identifications. Because the preliminary identifications of an individual insect can differ between captured images, they are combined into one identification using a fusing algorithm. The result is a series of tracks of individual insects with species identifications, which can subsequently be translated into an estimate of insect counts per species or species complex. Here we show the first set of results, acquired during the spring and summer of 2019. We will discuss practical experiences with setting up the cameras in the field, including the effectiveness of the different setups. We will also show the effectiveness of automatic species identification on the type of images acquired (see attached figure) and discuss to what extent individual species can be identified reliably. Finally, we will discuss the ecological information that can be extracted from the smart insect cameras.
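A minimal sketch of the fusing step follows: each tracked insect carries one preliminary identification per captured frame, and these are combined into a single label. Soft voting (summing per-frame probabilities) is one plausible fusing rule, used here as an assumption; the abstract does not specify the actual algorithm.

```python
from collections import defaultdict

def fuse_track(frame_predictions: list[dict[str, float]]) -> str:
    """Fuse per-frame {species: probability} dicts for one track into one label."""
    totals: dict[str, float] = defaultdict(float)
    for frame in frame_predictions:
        for species, prob in frame.items():
            totals[species] += prob
    return max(totals, key=totals.get)

# Example: three frames of one tracked individual (hypothetical values).
track = [
    {"Episyrphus balteatus": 0.60, "Eupeodes corollae": 0.40},
    {"Episyrphus balteatus": 0.70, "Eupeodes corollae": 0.30},
    {"Eupeodes corollae": 0.55, "Episyrphus balteatus": 0.45},
]
print(fuse_track(track))  # "Episyrphus balteatus" (total 1.75 vs 1.25)
```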


Author(s):  
Mario Lasseck

The detection and identification of individual species based on images or audio recordings has shown a significant performance increase over the last few years, thanks to recent advances in deep learning. Reliable automatic species recognition provides a promising tool for biodiversity monitoring, research and education. Image-based plant identification, for example, now comes close to the most advanced human expertise (Bonnet et al. 2018, Lasseck 2018a). Besides improved machine learning algorithms, neural network architectures, deep learning frameworks and computer hardware, a major reason for the gain in performance is the increasing abundance of biodiversity training data, either from observational networks and data providers like GBIF, Xeno-canto and iNaturalist, or from natural history museum collections like the Animal Sound Archive of the Museum für Naturkunde. However, in many cases this occurrence data is still insufficient for data-intensive deep learning approaches and is often unbalanced, with only a few examples for very rare species. To overcome these limitations, data augmentation can be used. This technique synthetically creates more training samples by applying various subtle random manipulations to the original data in a label-preserving way, without changing the content. In the talk, we will present augmentation methods for images and audio data. Their positive effect on identification performance will be evaluated on large-scale datasets from recent plant and bird identification (LifeCLEF 2017, 2018) and detection (DCASE 2018) challenges (Lasseck 2017, Lasseck 2018b, Lasseck 2018c).
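As an illustration of label-preserving augmentation of the kind described, the sketch below shows common random manipulations for images and a comparable trick for audio waveforms. The specific parameter values are placeholder assumptions, not those used in the cited challenge submissions.

```python
import torch
from torchvision import transforms

# Each epoch sees a slightly different random variant of the same photo;
# none of these operations changes the species label.
augment = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.7, 1.0)),
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(brightness=0.2, contrast=0.2, saturation=0.2),
    transforms.RandomRotation(degrees=10),
    transforms.ToTensor(),
])

def augment_waveform(waveform: torch.Tensor, max_shift: int = 4000) -> torch.Tensor:
    """Shift an audio waveform in time and add subtle noise (label-preserving)."""
    shift = int(torch.randint(-max_shift, max_shift + 1, (1,)))
    shifted = torch.roll(waveform, shifts=shift, dims=-1)
    return shifted + 0.005 * torch.randn_like(shifted)
```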


Author(s):  
Rosamaria Donnici ◽  
Antonio Coronato ◽  
Muddasar Naeem

The treatment process at home after hospitalization can be challenging for older adults and for people with physical or cognitive disabilities. Such patients can nowadays be supported by Autonomous and Intelligent Monitoring Systems (AIMSs), which can reach new levels of functionality thanks to technologies like Reinforcement Learning, Deep Learning and the Internet of Things. We present an AIMS that can assist impaired patients in taking medicines in accordance with their treatment plans. A demonstration of the AIMS via a mobile app shows promising results and suggests it can improve the quality of healthcare at home.


2020 · Vol 39 (4) · pp. 5699-5711
Author(s):  
Shirong Long ◽  
Xuekong Zhao

The smart teaching mode overcomes the shortcomings of traditional online and offline teaching, but it still has deficiencies in the real-time extraction of features of teachers and students. In view of this, this study uses particle swarm image recognition and deep learning technology to process intelligent classroom video, extracting classroom task features in real time and sending them to the teacher. To overcome the premature convergence of the standard particle swarm optimization (PSO) algorithm, an improved multi-swarm strategy is proposed: the algorithm is combined with useful attributes of other algorithms to increase particle diversity, enhance the particles' global search ability, and achieve effective feature extraction. The research indicates that the proposed method has practical effect and can provide a theoretical reference for subsequent related research.
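A minimal PSO sketch follows, illustrating the kind of diversity-preserving tweak described: when the swarm collapses around one point, part of it is randomly re-scattered to counter premature convergence. The re-scatter rule and all constants are illustrative assumptions, not the paper's exact multi-swarm strategy.

```python
import numpy as np

def pso(objective, dim=2, n_particles=30, iters=200, bounds=(-5.0, 5.0)):
    """Basic particle swarm optimization with a simple diversity safeguard."""
    lo, hi = bounds
    pos = np.random.uniform(lo, hi, (n_particles, dim))
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), np.apply_along_axis(objective, 1, pos)
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = np.random.rand(2, n_particles, dim)
        # Standard velocity update: inertia + cognitive + social terms.
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        vals = np.apply_along_axis(objective, 1, pos)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
        # Diversity safeguard: re-scatter half the swarm if it has collapsed.
        if pos.std() < 1e-3:
            idx = np.random.choice(n_particles, n_particles // 2, replace=False)
            pos[idx] = np.random.uniform(lo, hi, (len(idx), dim))
    return gbest

print(pso(lambda x: np.sum(x ** 2)))  # should approach the origin
```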


2020 · Vol 71 (7) · pp. 868-880
Author(s):  
Nguyen Hong-Quan ◽  
Nguyen Thuy-Binh ◽  
Tran Duc-Long ◽  
Le Thi-Lan

Along with the strong development of camera networks, video analysis systems have become more and more popular and have been applied in various practical applications. In this paper, we focus on the person re-identification (person ReID) task, a crucial step in video analysis systems. The purpose of person ReID is to associate multiple images of a given person moving through a non-overlapping camera network. Many efforts have been devoted to person ReID. However, most studies deal only with well-aligned bounding boxes, detected manually and considered perfect inputs for person ReID. In fact, when building a fully automated person ReID system, the quality of the two preceding steps, person detection and tracking, may strongly affect person ReID performance. The contributions of this paper are twofold. First, a unified framework for person ReID based on deep learning models is proposed, coupling a deep neural network for person detection with a deep learning-based tracking method. In addition, features extracted from an improved ResNet architecture are proposed for person representation to achieve higher ReID accuracy. Second, a self-built dataset is introduced and employed to evaluate all three steps of the fully automated person ReID framework.
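As a sketch of the ReID matching step, the snippet below embeds person crops (as produced by the detection and tracking stages) with a CNN and compares them by cosine similarity across cameras. A stock ResNet-50 stands in for the paper's improved ResNet architecture, whose details are not given here.

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Stock backbone as a stand-in; drop the classifier to expose the 2048-d
# pooled feature as the person embedding.
backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
backbone.fc = torch.nn.Identity()
backbone.eval()

@torch.no_grad()
def embed(crops: torch.Tensor) -> torch.Tensor:
    """crops: (N, 3, 256, 128) person images -> L2-normalized embeddings."""
    return F.normalize(backbone(crops), dim=1)

@torch.no_grad()
def match(query: torch.Tensor, gallery: torch.Tensor) -> torch.Tensor:
    """Return, for each query crop, the index of its best gallery match."""
    similarity = embed(query) @ embed(gallery).T  # cosine similarity matrix
    return similarity.argmax(dim=1)
```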


2020
Author(s):  
Saeed Nosratabadi ◽  
Amir Mosavi ◽  
Puhong Duan ◽  
Pedram Ghamisi ◽  
Ferdinand Filip ◽  
...  

This paper provides a state-of-the-art investigation of advances in data science in emerging economic applications. The analysis covers novel data science methods in four classes: deep learning models, hybrid deep learning models, hybrid machine learning, and ensemble models. Application domains include a wide and diverse range of economics research, from the stock market, marketing, and e-commerce to corporate banking and cryptocurrency. The PRISMA method, a systematic literature review methodology, was used to ensure the quality of the survey. The findings reveal that the trends follow the advancement of hybrid models, which, on the accuracy metric, outperform other learning algorithms. It is further expected that the trends will converge toward the advancement of sophisticated hybrid deep learning models.

