The Drought & Flood Mitigation Service in Uganda – First Results

Author(s):  
Hermen Westerbeeke ◽  
Deus Bamanya ◽  
George Gibson

Since 2017, the governments of Uganda and the United Kingdom have been taking an innovative approach to mitigating the impacts of drought and floods on Ugandan society through the DFMS Project. Recognising both that the only sustainable solution to this issue is continued capacity development in Uganda’s National Meteorological and Hydrological Services, and that it will take time for this capacity development to deliver results, the Drought & Flood Mitigation Service Project developed DFMS, bringing together meteorological, hydrological, and Earth observation information products and making these available to decision-makers in Uganda.

The DFMS Platform was designed and developed in cooperation between a group of UK organisations, which includes the Met Office and is led by the REA Group, and five Ugandan government agencies including UNMA, led by the Ministry of Water and Environment (MWE). In 2020 a 2.5-year Demonstration Phase began, in which UNMA, MWE, and the other agencies will trial DFMS and DFMS will be fine-tuned to their needs. We will present the first experiences with DFMS, including how it is being used for SDG monitoring, and will showcase the platform itself in what we hope will be a very interactive session.

DFMS is a suite of information products, and access only requires an Internet-connected device (e.g. PC, laptop, tablet, smartphone). Data and information are provided as maps or in graphs and tables, and several analysis tools allow for bespoke data processing and visualisation. Alarms can be tailored to indicate when observed or forecast parameters exceed user-defined thresholds. DFMS also comes with programmable interfaces, allowing it to be integrated with other automated systems. The DFMS Platform is built using open-source software, including Open Data Cube technology for storing and analysing Earth Observation data. It extensively uses (free) satellite remote sensing data, but also takes in data gathered in situ. Because the platform is scalable and replicable, DFMS can be extended with additional features (e.g. related to landslides or crop diseases) or rolled out in other countries in the region and beyond.
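
To illustrate the threshold-alarm idea described above, the sketch below polls a hypothetical forecast feed and flags values that exceed a user-defined limit. The endpoint URL, parameter names and JSON layout are placeholders for illustration only, not the actual DFMS interface.

```python
# Hypothetical sketch of a DFMS-style threshold alarm: poll a forecast
# feed and flag any values that exceed a user-defined threshold.
# The endpoint URL, parameter names and JSON layout are assumptions,
# not the actual DFMS API.
import requests

DFMS_ENDPOINT = "https://example.org/dfms/api/forecast"  # placeholder URL
RAINFALL_THRESHOLD_MM = 50.0                             # user-defined threshold

def check_rainfall_alarm(station_id: str) -> list[dict]:
    """Return forecast records whose rainfall exceeds the threshold."""
    response = requests.get(DFMS_ENDPOINT,
                            params={"station": station_id, "parameter": "rainfall"})
    response.raise_for_status()
    forecasts = response.json()["forecasts"]  # assumed response structure
    return [f for f in forecasts if f["value_mm"] > RAINFALL_THRESHOLD_MM]

if __name__ == "__main__":
    for alarm in check_rainfall_alarm("KAMPALA-01"):
        print(f"ALARM: {alarm['valid_time']} forecast {alarm['value_mm']} mm")
```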

Author(s):  
V. C. F. Gomes ◽  
F. M. Carlos ◽  
G. R. Queiroz ◽  
K. R. Ferreira ◽  
R. Santos

Abstract. Recently, several technologies have emerged to address the need to process and analyze large volumes of Earth Observation (EO) data. The concept of Earth Observation Data Cubes (EODC) has appeared in this context as a paradigm for technologies that aim to structure and facilitate the way users handle this type of data. Several projects have adopted this concept in developing their technologies, such as the Open Data Cube (ODC) framework and the Brazil Data Cube (BDC) platform, which provide open-source tools capable of managing, processing, analyzing, and disseminating EO data. This work presents an approach to integrating these technologies through the access and processing of data products from the BDC platform in the ODC framework. To this end, we developed a tool to automate the process of searching, converting, and indexing data between the two systems. In addition, four ODC functional modules have been customized to work with BDC data. The tool and the changes made to the ODC modules expand the potential for other initiatives to take advantage of the features available in the ODC.
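
Once BDC products have been indexed into an ODC instance, as the tool described here automates, they can be queried with the standard ODC Python API. The sketch below assumes such an index exists; the product and band names are illustrative, not the exact identifiers used by the integration tool.

```python
# Minimal sketch: loading a Brazil Data Cube product that has been indexed
# into an Open Data Cube instance, using the standard ODC Python API.
# The product and measurement names below are assumptions for illustration.
import datacube

dc = datacube.Datacube(app="bdc_odc_example")

cube = dc.load(
    product="CB4_64_16D_STK-1",        # assumed BDC data-cube product name
    measurements=["red", "nir"],        # assumed band names
    latitude=(-12.6, -12.4),
    longitude=(-45.6, -45.4),
    time=("2019-01-01", "2019-12-31"),
)

# Example analysis on the loaded cube: mean NDVI over the area and period
ndvi = (cube.nir - cube.red) / (cube.nir + cube.red)
print(float(ndvi.mean()))
```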


2021 ◽  
Author(s):  
Edzer Pebesma ◽  
Patrick Griffiths ◽  
Christian Briese ◽  
Alexander Jacob ◽  
Anze Skerlevaj ◽  
...  

The openEO API allows the analysis of large amounts of Earth Observation data using a high-level abstraction of data and processes. Rather than focusing on the management of virtual machines and millions of imagery files, it allows users to create jobs that take a spatio-temporal section of an image collection (such as Sentinel-2 L2A) and treat it as a data cube. Processes iterate or aggregate over pixels, spatial areas, spectral bands, or time series, while working at arbitrary spatial resolution. This pattern, pioneered by Google Earth Engine™ (GEE), lets the user focus on the science rather than on data management.

The openEO H2020 project (2017-2020) developed the API as well as an ecosystem of software around it, including clients (JavaScript, Python, R, QGIS, browser-based), back-ends that translate API calls into existing image analysis or GIS software or services (for Sentinel Hub, WCPS, Open Data Cube, GRASS GIS, GeoTrellis/GeoPySpark, and GEE), as well as a hub that allows querying and searching openEO providers for their capabilities and datasets. The project demonstrated this software in a number of use cases, where identical processing instructions were sent to different implementations, allowing comparison of the returned results.

A follow-up, ESA-funded project, “openEO Platform”, realizes the API and progresses the software ecosystem into operational services and applications that are accessible to everyone, that involve federated deployment (using the clouds managed by EODC, Terrascope, CreoDIAS and EuroDataCube), that will provide payment models (“pay per compute job”) conceived and implemented following the user community’s needs, and that will use the EOSC (European Open Science Cloud) marketplace for dissemination and authentication. A wide range of large-scale case studies will demonstrate the ability of openEO Platform to scale to large data volumes. The case studies to be addressed include on-demand ARD generation for SAR and multi-spectral data, agricultural demonstrators such as crop type and condition monitoring, forestry services such as near-real-time forest damage assessment and canopy cover mapping, environmental hazard monitoring of floods and air pollution, as well as security applications in terms of vessel detection in the Mediterranean Sea.

While the landscape of cloud-based EO platforms and services has matured and diversified over the past decade, we believe there are strong advantages for scientists and government agencies in adopting the openEO approach. Beyond the absence of vendor/platform lock-in or EULAs, we mention the abilities to (i) run arbitrary user code (e.g. written in R or Python) close to the data, (ii) carry out scientific computations on an entirely open-source software stack, (iii) integrate different platforms (e.g., different cloud providers offering different datasets), and (iv) help create and extend this software ecosystem. openEO uses the OpenAPI standard, aligns with modern OGC API standards, and uses STAC (SpatioTemporal Asset Catalog) to describe image collections and image tiles.
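
A brief sketch of the data-cube abstraction from the openEO Python client: connect to a back-end, select a spatio-temporal slice of a Sentinel-2 L2A collection, reduce the time dimension, and download the result. The back-end URL and collection identifier are examples; actual availability depends on the chosen provider.

```python
# Sketch of the openEO data-cube pattern using the Python client.
# The back-end URL and collection id are examples, not prescriptions.
import openeo

connection = openeo.connect("https://openeo.cloud").authenticate_oidc()

cube = connection.load_collection(
    "SENTINEL2_L2A",
    spatial_extent={"west": 7.5, "south": 46.0, "east": 7.7, "north": 46.2},
    temporal_extent=["2021-06-01", "2021-08-31"],
    bands=["B04", "B08"],
)

ndvi = cube.ndvi(nir="B08", red="B04")                    # band math as a cube process
composite = ndvi.reduce_dimension(dimension="t", reducer="max")  # temporal maximum
composite.download("ndvi_summer_max.tiff")
```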


Data ◽  
2019 ◽  
Vol 4 (3) ◽  
pp. 94 ◽  
Author(s):  
Steve Kopp ◽  
Peter Becker ◽  
Abhijit Doshi ◽  
Dawn J. Wright ◽  
Kaixi Zhang ◽  
...  

Earth observation imagery has traditionally been expensive and difficult to find and access, and has required specialized skills and software to transform it into actionable information. This has limited adoption by the broader science community. Changes in the cost of imagery and in computing technology over the last decade have enabled a new approach to organizing, analyzing, and sharing Earth observation imagery, broadly referred to as a data cube. The vision and promise of image data cubes is to lower these hurdles and expand the user community by making analysis ready data readily accessible and by providing modern approaches to analyze and visualize the data more easily, empowering a larger community of users to improve their knowledge of place and make better-informed decisions. Image data cubes are large collections of temporal, multivariate datasets typically consisting of analysis ready multispectral Earth observation data. Several flavors and variations of data cubes have emerged. To simplify access for end users, we developed a flexible approach that supports multiple data cube styles and references images in their existing structure and storage location, enabling fast access, visualization, and analysis from a wide variety of web and desktop applications. We provide here an overview of that approach and three case studies.


Author(s):  
Gregory Giuliani ◽  
Bruno Chatenoux ◽  
Thomas Piller ◽  
Frédéric Moser ◽  
Pierre Lacroix

2021 ◽  
pp. 49-61
Author(s):  
Miguel Ángel Esbrí

Abstract. In this chapter we present the concepts of remote sensing and Earth Observation and explain why several of their characteristics (volume, variety and velocity) lead us to consider Earth Observation as Big Data. Thereafter, we discuss the open data formats most commonly used to store and share the data. The main sources of Earth Observation data are also described, with particular focus on the constellation of Sentinel satellites, the Copernicus Hub and its six thematic services, as well as other private initiatives such as the five Copernicus-related Data and Information Access Services and Sentinel Hub. Next, we present an overview of representative software technologies for efficiently describing, storing, querying and accessing Earth Observation datasets. The chapter concludes with a summary of the Earth Observation datasets used in each DataBio pilot.


Data ◽  
2019 ◽  
Vol 4 (4) ◽  
pp. 143 ◽  
Author(s):  
Richard Lucas ◽  
Norman Mueller ◽  
Anders Siggins ◽  
Christopher Owers ◽  
Daniel Clewley ◽  
...  

This study establishes the use of the Earth Observation Data for Ecosystem Monitoring (EODESM) system to generate land cover and change classifications based on the United Nations Food and Agriculture Organisation (FAO) Land Cover Classification System (LCCS) and environmental variables (EVs) available within, or accessible from, Geoscience Australia’s (GA) Digital Earth Australia (DEA). Classifications representing the LCCS Level 3 taxonomy (eight categories representing (semi-)natural and/or cultivated/managed vegetation, or natural or artificial bare surfaces or water bodies) were generated for two time periods and across four test sites located in the Australian states of Queensland and New South Wales. This was achieved by progressively and hierarchically combining existing time-static layers relating to (a) the extent of artificial surfaces (urban, water) and agriculture and (b) annual summaries of EVs relating to the extent of vegetation (fractional cover) and water (hydroperiod, intertidal area, mangroves) generated through DEA. More detailed classifications that integrated information on, for example, forest structure (based on vegetation cover (%) and height (m); time-static for 2009) and hydroperiod (months) were subsequently produced for each time step. The overall accuracies of the land cover classifications were dependent upon those reported for the individual input layers, which ranged from 80% (for cultivated, urban and artificial water) to over 95% (for hydroperiod and fractional cover). The changes identified include mangrove dieback in the southeastern Gulf of Carpentaria, and reduced dam water levels with an associated expansion of vegetation in Lake Ross, Burdekin. The extent of detected changes corresponded with that observed using time series of RapidEye data (2014 to 2016; for the Gulf of Carpentaria) and Google Earth imagery (2009–2016; for Lake Ross). This use case demonstrates the capacity, and a conceptual framework, to implement EODESM within DEA and provides countries using the Open Data Cube (ODC) environment with the opportunity to routinely generate land cover maps from Landsat or Sentinel-1/2 data, at least annually, using a consistent and internationally recognised taxonomy.
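
One ingredient of the hierarchical combination described above is an annual vegetation-extent layer derived from fractional cover. The sketch below shows, under stated assumptions, how such a layer could be pulled from a DEA-style ODC instance and thresholded; the product and measurement names are illustrative, not the exact DEA identifiers.

```python
# Hedged sketch of one input to the hierarchical LCCS combination:
# load an annual fractional-cover summary from a DEA-style Open Data Cube
# instance and threshold it into a vegetated / not-vegetated layer.
# Product and measurement names are assumptions for illustration only.
import datacube

dc = datacube.Datacube(app="eodesm_example")

fc = dc.load(
    product="fc_percentile_albers_annual",   # assumed annual fractional-cover product
    measurements=["PV_PC_50"],               # assumed: median photosynthetic vegetation (%)
    latitude=(-19.4, -19.2),
    longitude=(146.6, 146.8),
    time="2015",
)

# LCCS Level 3 separates vegetated from non-vegetated surfaces;
# a simple cover threshold produces the binary input layer.
vegetated = fc.PV_PC_50 > 20
print(vegetated.sum().item(), "vegetated pixels in 2015")
```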


2019 ◽  
Vol 1 ◽  
pp. 1-2
Author(s):  
Kwangseob Kim ◽  
Kiwon Lee

Abstract. A data cube is a multi-dimensional stack of gridded datasets aligned for analysis. Open Data Cube (ODC) is an open-source platform for processing and managing geospatial information, designed from the viewpoint of web-based infrastructure. The platform handles large volumes of geo-spatial information with geo-rectified coordinates, and it has been applied by non-profit international organizations such as the Committee on Earth Observation Satellites (CEOS), which coordinates and manages space-borne missions internationally, and the Global Earth Observation System of Systems (GEOSS) in the Group on Earth Observations (GEO), an intergovernmental organization that aims to improve the applicability, accessibility and usability of Earth observations for the benefit of human society.

The building of Analysis Ready Data (ARD), meaning data prepared through radiometric calibration and geo-rectification, is a prerequisite for data cube utilization. The platform converts large-scale satellite image data into analytic information and provides functions for time-series analysis. Internationally, there has been an ever-increasing number of country-based data cube deployments built on freely available satellite images, including the Australia Data Cube, Vietnam Data Cube, Swiss Data Cube, and Colombia Data Cube, serving as computing environments for information distribution, sharing, and analysis.

However, there is as yet no program to register Korea Multi-Purpose Satellite (KOMPSAT) optical and radar images on this platform, so this study developed registration and ingestion scripts for KOMPSAT optical and radar image sets in ODC. Data ingestion is the process of obtaining and importing data for immediate use or storage in a database. An ingestion process is therefore required to add satellite data to the ODC platform, and the process can be divided into three main stages (Figure 1). First, the data type is defined in the YAML format. Then, the datasets are indexed to register their metadata. The final step is the data ingestion process, after which users can work directly with the datasets collected in ODC.

Figure 2 shows some of the Python module results for indexing datasets and the process of metadata generation. The metadata YAML required for indexing is, in many respects, most conveniently created through Python modules, which is why we added Python modules to create the metadata YAML. In particular, the KOMPSAT data ingestion process was designed so that all steps can be performed through a single module.

Using script modules for these steps, the functional accuracy was tested with actual satellite data. Color composite images using the RGB bands of KOMPSAT optical images were generated in the ODC environment (Figure 3). In this process, the GeoTIFF and netCDF image data formats are also supported.

In this study, points to consider when implementing ODC applications are also discussed. KOMPSAT data are commercial products, unlike the other freely accessible satellite images used in ODC applications. For a practical contribution to ODC-GEOSS, careful consideration of data policy is needed, because this work can serve as a reference model for other commercial satellite data in GEOSS.
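
The three stages above correspond to the standard ODC command-line workflow (product definition, dataset indexing, ingestion). A minimal sketch driving that workflow from Python is shown below; the YAML file names are placeholders, and the actual KOMPSAT product and ingestion definitions are specific to the study.

```python
# Sketch of the three-stage ODC workflow described above, driven from Python
# via the standard datacube command-line tools. The YAML file names are
# placeholders, not the study's actual KOMPSAT definitions.
import subprocess

def run(cmd):
    print("$", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 1. Define the product (data type) in YAML and register it.
run(["datacube", "product", "add", "kompsat3_product_def.yaml"])

# 2. Index the datasets: register each scene's metadata YAML.
run(["datacube", "dataset", "add", "scenes/K3_20200101_metadata.yaml"])

# 3. Ingest: re-tile and store the data for direct use in ODC.
run(["datacube", "ingest", "-c", "kompsat3_ingestion_config.yaml"])
```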


Author(s):  
Pham Vu Dong ◽  
Bui Quang Thanh ◽  
Nguyen Quoc Huy ◽  
Vo Hong Anh ◽  
Pham Van Manh

Cloud detection is an important task in optical remote sensing, and a prerequisite for reconstructing cloud-contaminated areas from multi-temporal satellite images. The rapid development of machine learning techniques, especially deep learning algorithms, now makes it possible to detect clouds over large areas in optical remote sensing data. In this study, we propose a deep-learning method called ODC-Cloud, built on convolutional blocks and integrated with the Open Data Cube (ODC) platform. The results show that the proposed model achieved an overall accuracy of 90% in detecting clouds in Landsat 8 OLI imagery and was successfully integrated with the ODC to perform multi-scale and multi-temporal analysis. This is a pioneering study in techniques for storing and analyzing big optical remote sensing data.
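
The abstract does not specify the ODC-Cloud architecture. Purely as an illustration of the kind of convolutional-block model described, the sketch below builds a minimal per-pixel cloud-masking network for Landsat 8 OLI patches; it is not the authors' network, and the patch size and band count are assumptions.

```python
# Illustrative only: a minimal convolutional network for per-pixel cloud
# masking of Landsat 8 OLI patches. This is NOT the ODC-Cloud architecture;
# it merely sketches the kind of convolutional-block model the study describes.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_cloud_masker(patch_size=128, n_bands=7):
    inputs = layers.Input(shape=(patch_size, patch_size, n_bands))
    x = layers.Conv2D(32, 3, padding="same", activation="relu")(inputs)
    x = layers.Conv2D(32, 3, padding="same", activation="relu")(x)
    x = layers.Conv2D(64, 3, padding="same", activation="relu")(x)
    # One probability per pixel: cloud vs. clear
    outputs = layers.Conv2D(1, 1, activation="sigmoid")(x)
    model = models.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_cloud_masker()
model.summary()
```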


2022 ◽  
Vol 14 (2) ◽  
pp. 351
Author(s):  
Fang Yuan ◽  
Marko Repse ◽  
Alex Leith ◽  
Ake Rosenqvist ◽  
Grega Milcinski ◽  
...  

Digital Earth Africa is now providing an operational Sentinel-1 normalized radar backscatter dataset for Africa. This is the first free and open continental-scale analysis ready dataset of its kind developed to be compliant with the CEOS Analysis Ready Data for Land (CARD4L) specification for normalized radar backscatter (NRB) products. A partnership with Sinergise, a European geospatial company and Earth observation data provider, has ensured that this dataset is produced efficiently on cloud infrastructure and can be sustained in the long term. The workflow applies radiometric terrain correction (RTC) to the Sentinel-1 ground range detected (GRD) product, using the Copernicus 30 m digital elevation model (DEM). The method has been used to generate data for a range of sites around the world and has been validated as producing good results. The dataset over Africa is made publicly available as an AWS public dataset and can be accessed through the Digital Earth Africa platform and its Open Data Cube API. We expect this dataset to support a wide range of applications, including natural resource monitoring, agriculture, and land cover mapping across Africa.
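
The dataset can be queried through a Digital Earth Africa-style Open Data Cube instance. A hedged sketch is shown below; the product name "s1_rtc", the band names and the query parameters are assumptions for illustration, and the exact identifiers should be taken from the Digital Earth Africa documentation.

```python
# Sketch of accessing the Sentinel-1 normalized radar backscatter dataset
# through a Digital Earth Africa-style Open Data Cube instance.
# Product, band and CRS choices below are assumptions for illustration.
import datacube

dc = datacube.Datacube(app="s1_rtc_example")

s1 = dc.load(
    product="s1_rtc",                     # assumed NRB product name
    measurements=["vv", "vh"],            # assumed polarisation bands
    latitude=(-1.35, -1.25),
    longitude=(36.7, 36.9),
    time=("2021-01-01", "2021-03-31"),
    output_crs="EPSG:6933",
    resolution=(-20, 20),
)

# Example: VH/VV ratio, often used in cropland and vegetation monitoring
ratio = s1.vh / s1.vv
print(ratio.median(dim="time").values.shape)
```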

