Automatic Inundation Mapping Using Sentinel-2 Data Applicable to Both Camargue and Doñana Biosphere Reserves

2019 · Vol 11 (19) · pp. 2251
Author(s):  
Georgios A. Kordelas ◽  
Ioannis Manakos ◽  
Gaëtan Lefebvre ◽  
Brigitte Poulin

Flooding periodicity is crucial for biomass production and ecosystem functions in wetland areas. Local monitoring networks may be enriched by spaceborne derived products with a temporal resolution of a few days. Unsupervised computer vision techniques are preferred, since human interference and the use of training data can be kept to a minimum. Recently, a novel automatic local thresholding unsupervised methodology for separating inundated from non-inundated areas yielded successful results for the Doñana Biosphere Reserve. This study examines the applicability of that approach to the Camargue Biosphere Reserve and proposes alternatives to the original approach to enhance accuracy and applicability for both the Camargue and Doñana wetlands, in a quest for methods that can accurately serve the biomes of both protected areas. In particular, it examines alternative inputs for automatically estimating thresholds while applying various algorithms for estimating the splitting thresholds. Reference maps for Camargue are provided by local authorities, while inundation maps are generated using Sentinel-2 Band 8A (NIR) and Band 12 (SWIR-2). The alternative approaches examined led to high inundation mapping accuracy. In particular, for the Camargue study area and 39 different dates, the highest overall Kappa coefficient achieved by an alternative approach is 0.84, while for the Doñana Biosphere Reserve and the Doñana marshland (a subset of the Doñana Reserve), across 7 different dates, the highest values are 0.85 and 0.94, respectively. Moreover, some alternative approaches achieve high overall Kappa for all areas, i.e., 0.79 for Camargue, over 0.91 for the Doñana marshland, and over 0.82 for the Doñana Reserve. Additionally, this study identifies the alternative approaches that perform better when the study area is extensively covered by temporarily flooded and emergent vegetation areas (i.e., the Camargue Reserve and Doñana marshland) or when it contains a large percentage of dry areas (i.e., the Doñana Reserve). The development of credible automatic thresholding techniques that can be applied to different wetlands could lead to a higher degree of automation in map production, while enhancing service utilization by non-trained personnel.
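
A minimal sketch of one family of automatic thresholding used for inundation mapping: Otsu's method applied to a normalized difference of the Sentinel-2 bands mentioned above (Band 8A NIR and Band 12 SWIR-2). The index choice and this particular splitting algorithm are illustrative assumptions, not necessarily the exact procedure of the paper.

```python
import numpy as np
from skimage.filters import threshold_otsu

def inundation_mask(b8a: np.ndarray, b12: np.ndarray) -> np.ndarray:
    """Return a boolean mask that is True for pixels classified as inundated."""
    # Normalized difference of NIR and SWIR-2 reflectance: water absorbs SWIR
    # more strongly, so higher values suggest open water or flooded vegetation.
    index = (b8a - b12) / (b8a + b12 + 1e-9)
    threshold = threshold_otsu(index)   # unsupervised split of the index histogram
    return index > threshold
```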

2021 · Vol 13 (12) · pp. 2301
Author(s):  
Zander Venter ◽  
Markus Sydenham

Land cover maps are important tools for quantifying the human footprint on the environment and for facilitating reporting and accounting under international agreements addressing the Sustainable Development Goals. Widely used European land cover maps such as CORINE (Coordination of Information on the Environment) are produced at medium spatial resolutions (100 m) and rely on diverse data with complex workflows requiring significant institutional capacity. We present a 10 m resolution land cover map (ELC10) of Europe based on a satellite-driven machine learning workflow that is annually updatable. A random forest classification model was trained on 70K ground-truth points from the LUCAS (Land Use/Cover Area Frame Survey) dataset. Within the Google Earth Engine cloud computing environment, the ELC10 map can be generated from approx. 700 TB of Sentinel imagery within approx. 4 days from a single research user account. The map achieved an overall accuracy of 90% across eight land cover classes and could account for statistical unit land cover proportions within 3.9% (R2 = 0.83) of the actual value. These accuracies are higher than those of CORINE (100 m) and other 10 m land cover maps, including S2GLC and FROM-GLC10. Spectro-temporal metrics that capture the phenology of land cover classes were most important in producing high mapping accuracies. We found that the atmospheric correction of Sentinel-2 and the speckle filtering of Sentinel-1 imagery had a minimal effect on classification accuracy (<1%). However, combining optical and radar imagery increased accuracy by 3% compared to Sentinel-2 alone and by 10% compared to Sentinel-1 alone. The addition of auxiliary data (terrain, climate and night-time lights) increased accuracy by a further 2%. By using the centroid pixels from the LUCAS Copernicus module polygons we increased accuracy by <1%, revealing that random forests are robust against contaminated training data. Furthermore, the model requires very little training data to achieve moderate accuracies: the difference between 5K and 50K LUCAS points is only 3% (86 vs. 89%). This implies that significantly fewer resources are necessary for making in situ survey data (such as LUCAS) suitable for satellite-based land cover classification. At 10 m resolution, the ELC10 map can distinguish detailed landscape features like hedgerows and gardens, and therefore holds potential for areal statistics at the city borough level and for monitoring property-level environmental interventions (e.g., tree planting). Due to its reliance on purely satellite-based input data, the ELC10 map can be continuously updated independently of any country-specific geographic datasets.
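
A minimal sketch (scikit-learn) of the kind of workflow described above: training a random forest on spectro-temporal features extracted at LUCAS ground-truth points. The feature files, their contents and the class coding are illustrative assumptions, not the ELC10 implementation (which runs in Google Earth Engine).

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# X: one row per LUCAS point, columns are spectro-temporal metrics
# (e.g., per-band percentiles across a year of Sentinel-1/2 observations)
X = np.load("lucas_features.npy")   # hypothetical file, shape (n_points, n_features)
y = np.load("lucas_labels.npy")     # hypothetical file, integer codes for eight classes

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

model = RandomForestClassifier(n_estimators=500, n_jobs=-1, random_state=0)
model.fit(X_train, y_train)
print("Overall accuracy:", accuracy_score(y_test, model.predict(X_test)))
```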


Author(s):  
Y. A. Lumban-Gaol ◽  
K. A. Ohori ◽  
R. Y. Peters

Abstract. Satellite-Derived Bathymetry (SDB) has been used in many applications related to coastal management. SDB can efficiently fill data gaps in traditional echo-sounding measurements. However, it still requires numerous training data, which are not available in many areas. Furthermore, accuracy problems arise because a linear model cannot capture the non-linear relationship between reflectance and depth caused by bottom variations and noise. Convolutional Neural Networks (CNNs) offer the ability to capture both the connection between neighbouring pixels and this non-linear relationship, which makes them compelling for shallow-water depth extraction. We investigate the accuracy of different architectures using different window sizes and band combinations. We use Sentinel-2 Level 2A images to provide reflectance values, and Lidar and Multi Beam Echo Sounder (MBES) datasets are used as depth references to train and test the model. A set of Sentinel-2 and in-situ depth sub-image pairs is extracted to perform CNN training. The model is compared to the linear transform and applied to two other study areas. The resulting accuracy ranges from 1.3 m to 1.94 m, and the coefficient of determination reaches 0.94. The SDB model generated using a window size of 9x9 shows good agreement with the reference depths, especially in areas deeper than 15 m. The addition of both short-wave infrared bands to the four visible bands in training improves the overall accuracy of SDB. Applying the pre-trained model to other study areas provides similar results, depending on the water conditions.
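
A minimal sketch (PyTorch) of a CNN that regresses water depth from a 9x9 window of Sentinel-2 reflectance, in the spirit of the approach described above. The band count (four visible plus two SWIR) and the layer sizes are illustrative assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class DepthCNN(nn.Module):
    def __init__(self, n_bands: int = 6):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(n_bands, 32, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 9 * 9, 128), nn.ReLU(inplace=True),
            nn.Linear(128, 1),          # predicted depth in metres
        )

    def forward(self, x):               # x: (batch, bands, 9, 9) reflectance windows
        return self.head(self.features(x)).squeeze(1)

model = DepthCNN()
patches = torch.randn(8, 6, 9, 9)       # dummy reflectance windows
depths = model(patches)                  # dummy depth predictions, shape (8,)
```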


2011 · pp. 123-136
Author(s):  
Sean Eom

Chapter II introduced online cocitation count retrieval using Dialog Classic and citation index files. Certainly, Dialog Classic is an attractive alternative in that the user works with readily available bibliographic databases and retrieval software, and the majority of ACA research has used ISI databases and Dialog Classic to retrieve cocitation counts. However, this approach has well-known technical limitations, as discussed earlier; they include the issues of multiple authorship, name homographs, and synonyms. This chapter introduces an alternative approach: retrieving cocitation counts from custom databases through the system we have designed and implemented. Custom database and retrieval systems need time and investment to develop, but they can manage most of the technical limitations discussed. The book presents two other alternative approaches that can be used to retrieve cocitation counts in lieu of using ISI citation index files and Dialog Classic. This chapter introduces the FoxBASE approach to developing custom databases and the cocitation matrix generation system. The first part is concerned with the design of the databases. The second part describes the cocitation retrieval system. We also discuss how our system can eliminate or minimize the technical limitations of the Thomson ISI database and the Dialog Classic software system.
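
A minimal sketch of how cocitation counts can be derived directly from custom citation records rather than online retrieval: two authors are cocited each time they both appear in the reference list of the same citing document. The data structures and author names below are illustrative, not the chapter's database design.

```python
from collections import Counter
from itertools import combinations

# Each citing document is reduced to the set of cited first authors it references.
citing_docs = [
    {"Simon H.A.", "Keen P.G.W.", "Sprague R.H."},
    {"Simon H.A.", "Sprague R.H."},
    {"Keen P.G.W.", "Sprague R.H."},
]

cocitations = Counter()
for cited_authors in citing_docs:
    for a, b in combinations(sorted(cited_authors), 2):
        cocitations[(a, b)] += 1   # each symmetric pair counted once per citing document

print(cocitations[("Simon H.A.", "Sprague R.H.")])   # -> 2
```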


2019 · Vol 8 (3) · pp. 150
Author(s):  
Joongbin Lim ◽  
Kyoung-Min Kim ◽  
Ri Jin

Remote sensing (RS) has been used to monitor inaccessible regions and is considered a useful technique for deriving important environmental information about such areas, especially North Korea. In this study, we aim to develop a tree species classification model based on RS and machine learning techniques that can be applied to North Korea. Two study sites were chosen: the Korea National Arboretum (KNA) in South Korea and Mt. Baekdu (MTB; known as Mt. Changbai in Chinese) in China, located in the border area between North Korea and China; tree species classifications were examined in both regions. As a preliminary step in developing a classification algorithm applicable to North Korea, common coniferous species at both study sites, Korean pine (Pinus koraiensis) and Japanese larch (Larix kaempferi), were chosen as targets for investigation. Hyperion data have been used for tree species classification because of the abundant spectral information acquired across more than 200 spectral bands (i.e., hyperspectral satellite data). However, it is impossible to acquire recent Hyperion data because the satellite ceased operation in 2017. Recently, Sentinel-2 multispectral imagery has also been used for tree species classification, so it is necessary to compare these two kinds of satellite data to determine whether species can be classified reliably. Therefore, Hyperion and Sentinel-2 data were employed, along with machine learning techniques such as random forests (RFs) and support vector machines (SVMs), to classify tree species. Three questions were answered, showing that: (1) RF and SVM are well established for tree species classification with hyperspectral imagery; (2) Sentinel-2 data can be used to classify tree species with RF and SVM algorithms instead of Hyperion data; and (3) training data built at the KNA cannot be transferred directly to tree classification at MTB, where RF and SVM showed overall accuracies of only 0.60 and 0.51 and kappa values of 0.20 and 0.00, respectively. Moreover, combining training data from the KNA and MTB yielded high classification accuracies in both regions: RF and SVM exhibited accuracies of 0.99 and 0.97 and kappa values of 0.98 and 0.95, respectively.
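
A minimal sketch (scikit-learn) of the evaluation reported above: fitting an RF and an SVM classifier on spectral training data and reporting overall accuracy and Cohen's kappa on held-out samples. The file names, data shapes and hyperparameters are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score, cohen_kappa_score

# Hypothetical files: per-pixel spectra and species labels for train/test splits.
X_train, y_train = np.load("train_spectra.npy"), np.load("train_labels.npy")
X_test, y_test = np.load("test_spectra.npy"), np.load("test_labels.npy")

for name, clf in [("RF", RandomForestClassifier(n_estimators=300, random_state=0)),
                  ("SVM", SVC(kernel="rbf", C=10, gamma="scale"))]:
    clf.fit(X_train, y_train)
    pred = clf.predict(X_test)
    print(name,
          "overall accuracy:", round(accuracy_score(y_test, pred), 3),
          "kappa:", round(cohen_kappa_score(y_test, pred), 3))
```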


Author(s):  
M. Schwieder ◽  
M. Buddeberg ◽  
K. Kowalski ◽  
K. Pfoch ◽  
J. Bartsch ◽  
...  

Abstract. Grassland plays an important role in German agriculture. The interplay of ecological processes in grasslands secures important ecosystem functions and thus ultimately contributes to essential ecosystem services. To sustain, e.g., the provision of fodder or the filter function of soils, agricultural management needs to adapt to site-specific grassland characteristics. Spatially explicit information derived from remote sensing data has proven instrumental for achieving this. In this study, we analyze the potential of Sentinel-2 data for deriving grassland-relevant parameters. We compare two well-established methods for estimating aboveground biomass and leaf area index (LAI): first, a random forest regression, and second, the soil–leaf-canopy (SLC) radiative transfer model. Field data were recorded on a grassland area in Brandenburg in August 2019 and were used to train the empirical model and to validate both models. The results confirm that both methods are suitable for mapping the spatial distribution of LAI and for quantifying aboveground biomass. Uncertainties in the empirical model generally increased with higher biomass and LAI values, averaging a relative RMSE of 11% for dry biomass and 23% for LAI. Similar estimates were achieved using SLC, with a relative RMSE of 30% for LAI retrieval and 47% for dry biomass. The resulting maps from both approaches showed plausible spatial patterns of LAI and dry biomass. Despite variations in the value ranges of the two maps, the average estimates and spatial patterns of LAI and dry biomass were very similar. Based on the results of the two modeling approaches and the comparison with the validation data, we conclude that the relationship between Sentinel-2 spectra and grassland-relevant variables can be quantified to map their spatial distributions from space. Future research needs to investigate how similar approaches perform across different grassland types, seasons and grassland management regimes.
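
A minimal sketch of the relative RMSE metric quoted above: the RMSE between predicted and observed values expressed as a percentage of the mean observed value. This is one common definition; the exact normalization used in the study may differ.

```python
import numpy as np

def relative_rmse(observed: np.ndarray, predicted: np.ndarray) -> float:
    """RMSE as a percentage of the mean observed value."""
    rmse = np.sqrt(np.mean((predicted - observed) ** 2))
    return 100.0 * rmse / np.mean(observed)

# Example with dummy dry-biomass values (g/m^2)
obs = np.array([210.0, 340.0, 180.0, 260.0])
pred = np.array([230.0, 310.0, 200.0, 250.0])
print(f"relative RMSE: {relative_rmse(obs, pred):.1f}%")
```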


2019 · Vol 11 (2) · pp. 119
Author(s):  
Cheng-Chien Liu ◽  
Yu-Cheng Zhang ◽  
Pei-Yin Chen ◽  
Chien-Chih Lai ◽  
Yi-Hsin Chen ◽  
...  

Detecting changes in land use and land cover (LULC) from space has long been a main goal of satellite remote sensing (RS), yet the existing and available algorithms for cloud classification are not reliable enough to attain this goal in an automated fashion. Clouds produce very strong optical signals that dominate the results of change detection if they are not removed completely from imagery. As various architectures of deep learning (DL) have been proposed and advanced quickly, their potential in perceptual tasks has been widely accepted and successfully applied to many fields. Comprehensive surveys of DL in RS have been published, and the RS community has been encouraged to take a leading role in DL research. Based on deep residual learning, semantic image segmentation, and the concept of atrous convolution, we propose a new DL architecture, named CloudNet, with an enhanced capability of feature extraction for classifying cloud and haze in Sentinel-2 imagery, with the intention of supporting automatic change detection in LULC. To ensure the quality of the training dataset, scene classification maps of Taiwan processed by Sen2Cor were visually examined and edited, resulting in a total of 12,769 sub-images with a standard size of 224 × 224 pixels, cut from the Sen2Cor-corrected images and compiled into a training set. Data augmentation enabled CloudNet to achieve stable cirrus identification without extensive training data. Compared to the traditional method and other DL methods, CloudNet had higher accuracy in cloud and haze classification, as well as better performance in cirrus cloud recognition. CloudNet will be incorporated into the Open Access Satellite Image Service to facilitate change detection using Sentinel-2 imagery on a regular and automatic basis.
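
A minimal sketch (PyTorch) of the atrous (dilated) convolution concept that CloudNet builds on: parallel convolutions with different dilation rates enlarge the receptive field without downsampling. This illustrates only the concept; the channel counts and dilation rates are assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class AtrousBlock(nn.Module):
    def __init__(self, in_ch: int = 64, out_ch: int = 64, rates=(1, 2, 4)):
        super().__init__()
        # One 3x3 convolution per dilation rate; padding=dilation keeps the spatial size.
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=r, dilation=r) for r in rates
        ])
        self.fuse = nn.Conv2d(out_ch * len(rates), out_ch, kernel_size=1)

    def forward(self, x):
        features = [branch(x) for branch in self.branches]   # same grid, wider context
        return self.fuse(torch.cat(features, dim=1))

block = AtrousBlock()
out = block(torch.randn(1, 64, 224, 224))    # preserves the 224 x 224 grid
```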


2019
Author(s):  
Anthony Devlin ◽  
Courtney J. Mycroft-West ◽  
Marco Guerrini ◽  
Edwin A. Yates ◽  
Mark A. Skidmore

Abstract. The widely used anticoagulant pharmaceutical, heparin, is a polydisperse, heterogeneous polysaccharide. Heparin is one of the essential medicines defined by the World Health Organisation but, during 2007-2008, was the subject of adulteration. The intrinsic heterogeneity and variability of heparin make it a challenge to monitor its purity by conventional means. This has led to the adoption of alternative approaches for its analysis and quality control, some of which are based on multivariate analysis of 1H NMR spectra or exploit correlation techniques. Such NMR spectroscopy-based analyses, however, require costly and technically demanding NMR instrumentation. Here, an alternative approach based on attenuated total reflectance Fourier transform infrared spectroscopy (FTIR-ATR) combined with multivariate analysis is proposed. FTIR-ATR employs more affordable and easy-to-use technology and, when combined with multivariate analysis of the resultant spectra, readily differentiates between glycosaminoglycans of different types and between heparin samples of distinct animal origins, and enables the detection both of known heparin contaminants, such as over-sulphated chondroitin sulphate (OSCS), and of other alien sulphated polysaccharides in heparin samples, to a degree of sensitivity comparable to that achievable by NMR. The approach will permit rapid and cost-effective monitoring of pharmaceutical heparin at any stage of the production process and, in principle, the quality control of any heterogeneous or variable material.
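
A minimal sketch (scikit-learn) of one common multivariate step for this kind of analysis: principal component analysis of FTIR-ATR spectra to separate sample groups in a low-dimensional score space. The file names, labels and preprocessing are illustrative assumptions, not the authors' exact workflow.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

spectra = np.load("ftir_atr_spectra.npy")   # hypothetical file, shape (n_samples, n_wavenumbers)
labels = np.load("sample_labels.npy")       # hypothetical file, e.g. origin or contamination status

# Standardize each wavenumber, then project onto the first two principal components.
scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(spectra))

for label in np.unique(labels):
    print(label, scores[labels == label].mean(axis=0))   # class centroids in PC space
```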


2021 · Vol 13 (24) · pp. 5035
Author(s):  
Shahab Jozdani ◽  
Dongmei Chen ◽  
Wenjun Chen ◽  
Sylvain G. Leblanc ◽  
Julie Lovitt ◽  
...  

Illumination variations in non-atmospherically corrected high-resolution satellite (HRS) images acquired at different dates, times and locations pose a major challenge for large-area environmental mapping and monitoring. This problem is exacerbated when a classification model is trained on only one image (often with limited training data) but applied to other scenes without collecting additional samples from those new images. In this research, focusing on caribou lichen mapping, we evaluated the potential of conditional Generative Adversarial Networks (cGANs) for normalizing WorldView-2 (WV2) images of one area to a source WV2 image of another area on which a lichen detector model was trained. We considered an extreme case in which the classifier was not fine-tuned on the normalized images. We tested two main scenarios for normalizing four target WV2 images to a source 50 cm pansharpened WV2 image: (1) normalizing based only on the WV2 panchromatic band, and (2) normalizing based on the WV2 panchromatic band and Sentinel-2 surface reflectance (SR) imagery. Our experiments showed that normalizing based only on the WV2 panchromatic band already led to a significant improvement in lichen-detection accuracy compared with using the original pansharpened target images. However, conditioning the cGAN on both the WV2 panchromatic band and auxiliary information (in this case, Sentinel-2 SR imagery) further improved normalization and the subsequent classification results, because it adds a more invariant source of information. Using only the panchromatic band, F1-score values ranged from 54% to 88%; using the panchromatic band fused with SR imagery, F1-score values ranged from 75% to 91%.
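
A minimal sketch (PyTorch) of how a conditional GAN generator can be fed both the WV2 panchromatic band and co-registered Sentinel-2 SR bands, i.e., the conditioning idea in scenario (2). The band counts, layer sizes and names are illustrative assumptions, not the authors' network, and the adversarial training loop is omitted.

```python
import torch
import torch.nn as nn

class ConditionalGenerator(nn.Module):
    def __init__(self, pan_channels: int = 1, sr_channels: int = 4, out_channels: int = 4):
        super().__init__()
        in_channels = pan_channels + sr_channels   # condition on pan + SR together
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, out_channels, 3, padding=1), nn.Tanh(),   # normalized image
        )

    def forward(self, pan, sr):
        # sr is assumed to be resampled onto the pan grid before concatenation
        return self.net(torch.cat([pan, sr], dim=1))

gen = ConditionalGenerator()
pan = torch.randn(2, 1, 256, 256)   # WV2 panchromatic patches
sr = torch.randn(2, 4, 256, 256)    # upsampled Sentinel-2 SR patches
fake = gen(pan, sr)                  # patches normalized toward the source domain
```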


Author(s):  
M. L. R. Gonzaga ◽  
M. T. S. Wong ◽  
A. C. Blanco ◽  
J. A. Principe

Abstract. With the Philippines ranking as the third largest source of plastics that end up in the oceans, there is a need to further explore methodologies that can aid in removing plastic waste from the ocean. Manila Bay is a natural harbor in the Philippines that serves as a center of different economic activities. However, the bay is also threatened by plastic pollution due to increasing population and industrial activities. BASECO is one of the areas in Manila Bay where clean-up activities are focused, as this is where trash accumulates. Sentinel-2 images are provided free of charge by the European Commission's Copernicus Programme. Satellite images from June 2019 to May 2020 were inspected, and cloud-free images were downloaded. After downloading and pre-processing, spectral signatures of different types of plastic (shipping pouches, bubble wrap, styrofoam, PET bottles, sando bags and snack packaging), measured with a spectrometer during fieldwork by the Development of Integrated Mapping, Monitoring, and Analytical Network System for Manila Bay and Linked Environments (project MapABLE), were used in selecting training data. Indices from previous studies, such as the Normalized Difference Vegetation Index (NDVI), the Floating Debris Index (FDI) and the Plastic Index (PI), were then analyzed to further separate the classes used as training data. These training data served as input to two supervised classification methods, Naive Bayes and Mixture Tuned Matched Filtering (MTMF). Both methods were validated against reports and articles from Philippine agencies indicating the spots where trash frequently accumulates.
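
A minimal sketch of the indices mentioned above, computed from Sentinel-2 surface reflectance bands. The FDI and PI formulas follow common definitions in the floating-debris literature (e.g., Biermann et al. 2020; Themistocleous et al. 2020) and are assumptions here; they may differ in detail from the formulations used in this study.

```python
import numpy as np

def ndvi(b4_red, b8_nir):
    """Normalized Difference Vegetation Index."""
    return (b8_nir - b4_red) / (b8_nir + b4_red + 1e-9)

def fdi(b4_red, b6_re2, b8_nir, b11_swir1,
        lam_red=665.0, lam_nir=833.0, lam_swir1=1610.4):
    """Floating Debris Index: NIR reflectance minus a baseline interpolated
    between red-edge 2 (B6) and SWIR-1 (B11), per the common definition."""
    nir_baseline = b6_re2 + (b11_swir1 - b6_re2) * (lam_nir - lam_red) / (lam_swir1 - lam_red) * 10.0
    return b8_nir - nir_baseline

def pi(b4_red, b8_nir):
    """Plastic Index."""
    return b8_nir / (b8_nir + b4_red + 1e-9)
```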


2021 · Vol 13 (21) · pp. 4255
Author(s):  
Alina Ciocarlan ◽  
Andrei Stoian

Automatic ship detection provides an essential function for maritime domain awareness, for security and economic monitoring purposes. This work presents an approach for training a deep learning ship detector on Sentinel-2 multi-spectral images with few labeled examples. We design a network architecture for detecting ships with a backbone that can be pre-trained separately. Using self-supervised learning, an emerging unsupervised training procedure, we learn good features on Sentinel-2 images, without requiring labels, to initialize our network’s backbone. The full network is then fine-tuned to detect ships in challenging settings. We evaluated this approach against pre-training on ImageNet and against a classical image processing pipeline. We examined the impact of variations in the self-supervised learning step and show that, in the few-shot learning setting, self-supervised pre-training achieves better results than ImageNet pre-training. When enough training data are available, our self-supervised approach is as good as ImageNet pre-training. We conclude that a better design of the self-supervised task and bigger non-annotated dataset sizes can lead to surpassing ImageNet pre-training performance without any annotation costs.
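
A minimal sketch (PyTorch) of one widely used self-supervised objective, the contrastive NT-Xent loss, which trains a backbone to map two augmented views of the same unlabeled image patch to similar embeddings. This is an assumed illustration of self-supervised pre-training in general, not necessarily the pretext task used by the authors.

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.5) -> torch.Tensor:
    """Contrastive loss between embeddings of two augmented views of the same batch."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    z = torch.cat([z1, z2], dim=0)                         # (2N, d)
    sim = z @ z.t() / temperature                          # scaled cosine similarities
    n = z1.size(0)
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, float("-inf"))             # exclude self-similarity
    # The positive for view i in z1 is view i in z2, and vice versa.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)

# Example: embeddings from two augmentations of 16 unlabeled Sentinel-2 patches
loss = nt_xent_loss(torch.randn(16, 128), torch.randn(16, 128))
```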

