An Image is Worth a Thousand Species: Scaling high-resolution plant biodiversity prediction to biome-level using citizen science data and remote sensing imagery

Author(s):  
Lauren Gillespie ◽  
Megan Ruffley ◽  
Moisés Expósito-Alonso

Accurately mapping biodiversity at high resolution across ecosystems has historically been a difficult task. One major hurdle to accurate biodiversity modeling is the power-law relationship between the abundances of different species in an environment: a few species are relatively abundant while many are rare. This "commonness of rarity," confounded with differential detectability of species, can lead to misestimation of where a species lives. To overcome these confounding factors, many biodiversity models employ species distribution models (SDMs), which predict the full extent of a species' range from observations of where the species has been found, correlated with environmental variables. Most SDMs use bioclimatic environmental variables as the independent variables to predict a species' range, but these approaches often rely on biased pseudo-absence generation methods and model species using coarse-grained bioclimatic variables with a useful resolution floor of about 1 km per pixel. Here, we pair iNaturalist citizen science plant observations from the Global Biodiversity Information Facility with RGB-infrared aerial imagery from the National Agriculture Imagery Program to develop a deep convolutional neural network model that can predict the presence of nearly 2,500 plant species across California. We use a state-of-the-art multilabel image recognition model from the computer vision community, paired with a cutting-edge multilabel classification loss, which achieves accuracy comparable to or better than traditional SDMs, but at a resolution of 250 m (Ben-Baruch et al. 2020, Ridnik et al. 2020). Furthermore, this deep convolutional model accurately predicts species presence across multiple biomes of California and can be used to build a plant biodiversity map of the state at unparalleled resolution.
Given the widespread availability of citizen science observations and remote sensing imagery across the globe, this deep learning-enabled method could be deployed to automatically map biodiversity at large scales.
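The multilabel classification loss cited above (Ben-Baruch et al. 2020) is an asymmetric focal-style loss that keeps rare species (mostly-absent labels) from being drowned out by easy negatives. A minimal NumPy sketch follows; the gamma exponents, probability-shifting margin, and the toy three-species example are illustrative assumptions, not the authors' exact configuration.

```python
import numpy as np

def asymmetric_loss(probs, targets, gamma_pos=1.0, gamma_neg=4.0, margin=0.05):
    """Asymmetric multilabel loss: focal down-weighting with separate
    exponents for positives and negatives, plus a probability shift
    that zeroes out the contribution of very easy negatives."""
    probs = np.clip(probs, 1e-8, 1 - 1e-8)
    # Positive term: down-weight already-confident positives.
    pos = targets * (1 - probs) ** gamma_pos * np.log(probs)
    # Negative term: shift probabilities down so easy negatives vanish.
    p_shift = np.clip(probs - margin, 1e-8, 1.0)
    neg = (1 - targets) * p_shift ** gamma_neg * np.log(1 - p_shift)
    return -np.mean(pos + neg)

# Toy example: 3 species, only the first present at this location.
y = np.array([1.0, 0.0, 0.0])
p = np.array([0.9, 0.2, 0.05])
loss = asymmetric_loss(p, y)
```

With well-calibrated predictions like these, the loss is small but positive; raising `gamma_neg` further suppresses the gradient from confident absences.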

Author(s):  
Pierre Bonnet ◽  
Julien Champ ◽  
Hervé Goëau ◽  
Fabian-Robert Stöter ◽  
Benjamin Deneu ◽  
...  

Pl@ntNet is a scientific and citizen science platform based on artificial intelligence techniques that helps participants identify plants more easily with their smartphones. Identifying plant species is an important step for many scientific, educational and land management activities (for natural or cultivated spaces). This step, which is integrated into various biology training courses, is difficult to develop at large scale, even for professionals. The inability of a very large part of society to name plants limits positive interactions between humans and their environment, and thus reduces awareness of the contribution of plants to human well-being. It is therefore essential to attempt to solve this problem on a large geographical, taxonomic and sociological scale in order to develop more responsible and environmentally precautionary societies. In the framework of the European Cos4Cloud project (2019-2023), which involves several European citizen science platforms (such as iSpot in the UK, Natusfera in Spain and Artportalen in Sweden), Pl@ntNet is developing innovative digital services aimed at facilitating the integration of automated species identification into other citizen science portals and enabling researchers to use Pl@ntNet data and tools for their own research. The services implemented will be provided on the European Open Science Cloud (EOSC), to increase the capacity and interest of European scientists in implementing citizen science projects that contribute to the sustainable development goals identified by the United Nations. 
The tools currently available on the Pl@ntNet monitoring platform, as well as the data extraction methodologies that have enabled the publication of two large datasets on the Global Biodiversity Information Facility (GBIF) portal, will be presented. In particular, we will present our latest advances enabling: the adaptation of Pl@ntNet services to nature reserves or botanical gardens; the monitoring of target species in a given geographical area; and the prediction of species distributions at the national scale, based on high-resolution remote sensing imagery. The results of the LifeCLEF campaign, an annual international scientific and technological benchmark based on Pl@ntNet data, will be presented to compare the latest machine-learning techniques for automated species identification and species prediction based on geolocation. The fruit of this work, involving a large multidisciplinary team, illustrates the benefits of working at the frontier between the biological, digital and citizen sciences.


1994 ◽  
Vol 29 (1-2) ◽  
pp. 135-144 ◽  
Author(s):  
C. Deguchi ◽  
S. Sugio

This study aims to evaluate the applicability of satellite imagery for estimating the percentage of impervious area in urbanized regions. Two estimation methods are proposed and applied to a small urbanized watershed in Japan. The area is considered under two different subdivisions, i.e., 14 zones and 17 zones. Satellite imagery from LANDSAT-MSS (Multi-Spectral Scanner, 1984), MOS-MESSR (Multi-spectral Electronic Self-Scanning Radiometer, 1988) and SPOT-HRV (High Resolution Visible, 1988) is classified. The percentage of imperviousness in the 17 zones is estimated from these classification results, and the values are compared with those obtained from aerial photographs. The percent imperviousness derived from the satellite imagery agrees well with that derived from aerial photographs. The estimation errors are less than 10%, comparable to those obtained from aerial photographs.
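The zone-level estimate described above amounts to tallying impervious pixels per zone in the classified imagery. A minimal sketch, with a hypothetical 4x4 classified raster and zone map standing in for the real watershed data:

```python
import numpy as np

# Hypothetical classified raster: 1 = impervious, 0 = pervious.
classified = np.array([[1, 1, 0, 0],
                       [1, 0, 0, 0],
                       [1, 1, 1, 0],
                       [0, 0, 1, 1]])
# Hypothetical zone map assigning each pixel to a drainage zone.
zones = np.array([[1, 1, 2, 2],
                  [1, 1, 2, 2],
                  [3, 3, 4, 4],
                  [3, 3, 4, 4]])

def percent_impervious(classified, zones):
    """Percentage of impervious pixels in each zone of the zone map."""
    return {int(z): 100.0 * classified[zones == z].mean()
            for z in np.unique(zones)}

pct = percent_impervious(classified, zones)
```

The same per-zone masking applies unchanged whether the classification came from LANDSAT-MSS, MOS-MESSR, or SPOT-HRV pixels, as long as the zone map is co-registered with the raster.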


2021 ◽  
Vol 13 (15) ◽  
pp. 2862
Author(s):  
Yakun Xie ◽  
Dejun Feng ◽  
Sifan Xiong ◽  
Jun Zhu ◽  
Yangge Liu

Accurate building height estimation from remote sensing imagery is an important and challenging task. However, existing shadow-based building height estimation methods have large errors due to the complex environments in remote sensing imagery. In this paper, we propose a multi-scene building height estimation method based on shadows in high-resolution imagery. First, building shadows are classified and described by analyzing the features of building shadows in remote sensing imagery. Second, a variety of shadow-based building height estimation models are established for different scenes. In addition, a shadow regularization extraction method is proposed, which effectively solves the problem of mutually adhering shadows in dense building areas. Finally, we propose a shadow length calculation method that combines a fishnet with the Pauta criterion, so that the large errors caused by complex building shadow shapes can be avoided. Multi-scene areas are selected for experimental analysis to prove the validity of our method. The experimental results show that the accuracy rate of our method is as high as 96% within 2 m of absolute error. In addition, we compared our approach with existing methods, and the results show that the absolute error of our method is reduced by 1.24 m-3.76 m, achieving high-precision estimation of building height.
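The geometry underlying such shadow-based methods is elementary trigonometry, and the Pauta criterion is the classical 3-sigma outlier rule. A minimal sketch under the simplifying assumptions of flat ground and a known sun elevation angle (the measurement values below are illustrative, not from the paper):

```python
import math
import numpy as np

def building_height(shadow_len_m, sun_elev_deg):
    """Flat-ground shadow geometry: H = L * tan(sun elevation)."""
    return shadow_len_m * math.tan(math.radians(sun_elev_deg))

def pauta_filter(lengths):
    """Pauta (3-sigma) criterion: drop measurements more than three
    standard deviations from the mean before averaging shadow length."""
    x = np.asarray(lengths, dtype=float)
    mu, sigma = x.mean(), x.std()
    return x[np.abs(x - mu) <= 3 * sigma]

# A 20 m shadow at 45 degrees sun elevation implies a ~20 m building.
h = building_height(20.0, 45.0)

# Per-cell fishnet measurements with one gross error to be rejected.
lengths = [10.0] * 20 + [100.0]
kept = pauta_filter(lengths)
```

In the paper's pipeline the fishnet supplies many per-cell shadow length samples for one building; the 3-sigma filter then rejects gross errors from shadow adhesion or irregular outlines before the height model is applied.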
