Integration of remote sensing data in national and European spatial data infrastructures – derivation of CORINE Land Cover data from the DLM-DE

2009 · Vol 2009 (2) · pp. 129–141
Author(s):  
Stephan Arnold
Author(s):  
Á. Barsi, Zs. Kugler, I. László, Gy. Szabó, H. M. Abdulmutalib

Technological developments in remote sensing (RS) during the past decade have contributed to a significant increase in the size of the data user community. Consequently, data quality issues in remote sensing have grown considerably in importance, particularly in the era of Big Earth data. Dozens of available sensors, hundreds of sophisticated data processing techniques, and countless software tools assist the processing of RS data and contribute to a major increase in applications and users. In past decades, the scientific and technological community of the spatial data environment focused on evaluating data quality elements computed for the point, line, and area geometries of vector and raster data. Stakeholders in data production commonly use standardised parameters to characterise the quality of their datasets. Yet their efforts to estimate quality have not reached the general end-user community, which runs heterogeneous applications and assumes that its spatial data are error-free and best fitted to the specification standards. Non-specialist, general users have very limited knowledge of how well spatial data meet their needs. These parameters, forming the external quality dimensions, imply that the same data system can be of different quality to different users. The large collection of observed information is uncertain to a degree that can undermine the reliability of applications.

Based on a prior paper by the authors (in cooperation within the Remote Sensing Data Quality working group of ISPRS), which established a taxonomy of the dimensions of data quality in the GIS and remote sensing domains, this paper focuses on measures of uncertainty across the remote sensing data lifecycle, with emphasis on land cover mapping issues.
In the paper we introduce how the quality of various combinations of data and procedures can be summarised and how services fit users' needs.

The paper gives a theoretical overview of the issue; selected practice-oriented approaches are evaluated as well, and finally widely used dimension metrics such as the Root Mean Squared Error (RMSE) and the confusion matrix are discussed. The authors present data quality features of well-defined and poorly defined objects. The central part of the study is land cover mapping, describing its accuracy management model and presenting relevance and uncertainty measures of its influencing quality dimensions. The theory is supported by a case study in which remote sensing technology is used to support the area-based agricultural subsidies of the European Union within the Hungarian administration.
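The two metrics named above are standard in this context: RMSE characterises positional accuracy of well-defined objects, while the confusion matrix characterises thematic accuracy of a classification such as a land cover map. A minimal sketch, with purely illustrative class labels and values (not taken from the paper):

```python
# Illustrative sketch of two widely used quality metrics: RMSE for
# positional accuracy and the confusion matrix for thematic accuracy.
from collections import Counter
import math

def rmse(predicted, reference):
    """Root Mean Squared Error between paired measurements."""
    n = len(predicted)
    return math.sqrt(sum((p - r) ** 2 for p, r in zip(predicted, reference)) / n)

def confusion_matrix(predicted, reference, labels):
    """Counts of (reference, predicted) label pairs; rows are reference classes."""
    counts = Counter(zip(reference, predicted))
    return [[counts[(r, p)] for p in labels] for r in labels]

def overall_accuracy(matrix):
    """Fraction of samples on the matrix diagonal (correctly classified)."""
    total = sum(sum(row) for row in matrix)
    return sum(matrix[i][i] for i in range(len(matrix))) / total

# Positional accuracy of a well-defined object (e.g. check-point coordinates, metres)
print(rmse([10.2, 9.8, 10.5], [10.0, 10.0, 10.0]))

# Thematic accuracy of a land cover classification (poorly defined objects)
ref  = ["forest", "forest", "water", "crop", "crop", "crop"]
pred = ["forest", "crop",   "water", "crop", "crop", "forest"]
m = confusion_matrix(pred, ref, ["forest", "water", "crop"])
print(overall_accuracy(m))  # 4 of 6 samples correct
```

Per-class producer's and user's accuracies follow from the same matrix by normalising its rows and columns, respectively.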


2021 · Vol 13 (21) · pp. 4483
Author(s):  
W. Gareth Rees, Jack Tomaney, Olga Tutubalina, Vasily Zharko, Sergey Bartalev

Growing stock volume (GSV) is a fundamental parameter of forests, closely related to the above-ground biomass and hence to carbon storage. Estimation of GSV at regional to global scales depends on the use of satellite remote sensing data, although accuracies are generally lower over the sparse boreal forest. This is especially true of boreal forest in Russia, for which knowledge of GSV is currently poor despite its global importance. Here we develop a new empirical method in which the primary remote sensing data source is a single summer Sentinel-2 MSI image, augmented by land-cover classification based on the same MSI image trained using MODIS-derived data. In our work the method is calibrated and validated using an extensive set of field measurements from two contrasting regions of the Russian arctic. Results show that GSV can be estimated with an RMS uncertainty of approximately 35–55%, comparable to other spaceborne estimates of low-GSV forest areas, with 70% spatial correspondence between our GSV maps and existing products derived from MODIS data. Our empirical approach requires somewhat laborious data collection when used for upscaling from field data, but could also be used to downscale global data.
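The calibration step of such an empirical method amounts to regressing field-measured GSV against a predictor derived from the satellite image, then quantifying the relative RMS uncertainty of the fit. The sketch below assumes a simple linear model on an NDVI-like predictor; the predictor choice and all numbers are illustrative assumptions, not the paper's actual model:

```python
# Hedged sketch of empirical GSV calibration: ordinary least squares fit of
# field-plot GSV against an illustrative Sentinel-2-derived predictor (NDVI),
# followed by the relative RMS uncertainty of the residuals.
import math

def fit_linear(x, y):
    """Ordinary least squares fit y ~ a*x + b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return a, my - a * mx

def relative_rms_error(x, y, a, b):
    """RMS of residuals, expressed as a fraction of the mean observed GSV."""
    n = len(x)
    rms = math.sqrt(sum((yi - (a * xi + b)) ** 2 for xi, yi in zip(x, y)) / n)
    return rms / (sum(y) / n)

# Illustrative field plots: (NDVI, measured GSV in m^3/ha) -- invented values
ndvi = [0.45, 0.52, 0.60, 0.68, 0.74, 0.80]
gsv  = [40.0, 55.0, 90.0, 110.0, 150.0, 170.0]

a, b = fit_linear(ndvi, gsv)
print(relative_rms_error(ndvi, gsv, a, b))
```

In practice the fit would be stratified by the land-cover class of each pixel, which is the role the MODIS-trained classification plays in the study above.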


2021
Author(s):  
Simon Jirka, Benedikt Gräler, Matthes Rieke, Christian Autermann

For many scientific domains such as hydrology, ocean sciences, geophysics and the social sciences, geospatial observations are an important source of information. Scientists conduct extensive measurement campaigns or operate comprehensive monitoring networks to collect data that helps to understand and to model current and past states of complex environments. The variety of data underpinning research stretches from in-situ observations to remote sensing data (e.g., from the European Copernicus programme) and contributes to rapidly growing volumes of geospatial data.

However, with the growing amount of available data, new challenges arise. Within our contribution, we will focus on two specific aspects. On the one hand, we will discuss the specific challenges that result from the large volumes of remote sensing data that have become available for answering scientific questions. For this purpose, we will share practical experiences with the use of cloud infrastructures such as the German platform CODE-DE and will discuss concepts that enable data processing close to the data stores. On the other hand, we will look into the question of interoperability in order to facilitate the integration and collaborative use of data from different sources. For this aspect, we will give special consideration to the currently emerging new generation of Open Geospatial Consortium (OGC) standards and will discuss how specifications such as the OGC API - Processes can help to provide flexible processing capabilities directly within cloud-based research data infrastructures.
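In OGC API - Processes (Part 1: Core), a client triggers server-side processing by POSTing a JSON execution request to `/processes/{processId}/execution`. A minimal sketch of assembling such a request body is shown below; the process id, input names, and values are hypothetical placeholders, not a real CODE-DE process:

```python
# Hedged sketch: the general shape of an execution request body in
# OGC API - Processes (Part 1: Core). All ids and input names below are
# hypothetical placeholders used for illustration only.
import json

def build_execute_request(inputs, outputs):
    """Assemble the JSON body POSTed to /processes/{processId}/execution."""
    return {"inputs": inputs, "outputs": outputs}

body = build_execute_request(
    inputs={
        "collection": "sentinel-2-l2a",           # hypothetical input name
        "bbox": [5.9, 47.3, 15.0, 55.0],          # area of interest (WGS 84)
        "datetime": "2020-06-01/2020-08-31",      # temporal filter
    },
    outputs={"result": {"format": {"mediaType": "image/tiff"}}},
)
print(json.dumps(body, indent=2))
```

Because the request is executed where the data resides, only the (typically small) result needs to leave the cloud infrastructure, which is the "processing close to the data stores" idea discussed above.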

