Effective, Low-cost Methods of Applying Computer Vision to Public Earth Observation Data

Author(s):  
Michael Evans ◽  
Taylor Minich

We have an unprecedented ability to analyze and map the Earth’s surface, as deep learning technologies are applied to an abundance of Earth observation systems collecting images of the planet daily. To realize the potential of these data to improve conservation outcomes, simple, free, and effective methods are needed that enable a wide variety of stakeholders to derive actionable insights from these tools. In this paper we demonstrate simple methods and workflows that use free, open computing resources to train well-studied convolutional neural networks and apply them to delineate objects of interest in publicly available Earth observation images. With limited training datasets (<1000 observations), we used Google Earth Engine and TensorFlow to process Sentinel-2 and National Agriculture Imagery Program data, and used these to train U-Net and DeepLab models that delineate ground-mounted solar arrays and parking lots in satellite imagery. The trained models achieved 81.5% intersection over union between predictions and ground-truth observations in validation images. These validation images were drawn from different times and places than the training data, indicating that the models can generalize beyond the data on which they were trained. The two case studies we present illustrate how these methods can be used to inform and improve the development of renewable energy in a manner consistent with wildlife conservation.
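The intersection-over-union figure reported above can be computed directly from binary prediction and ground-truth masks. A minimal sketch in NumPy (the function and the toy masks are illustrative, not taken from the paper):

```python
import numpy as np

def iou(pred: np.ndarray, truth: np.ndarray) -> float:
    """Intersection over union of two binary segmentation masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return float(intersection) / float(union) if union else 1.0

# Toy 4x4 masks: predicted solar-array pixels vs. ground truth
pred = np.array([[0, 1, 1, 0],
                 [0, 1, 1, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]])
truth = np.array([[0, 1, 1, 0],
                  [0, 1, 0, 0],
                  [0, 0, 0, 0],
                  [0, 0, 0, 0]])
print(iou(pred, truth))  # 3 shared pixels / 4 in the union = 0.75
```

A model-wide score such as the 81.5% reported here is typically the same ratio accumulated over all validation chips rather than averaged per image.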

Author(s):  
C. Fish ◽  
S. Slagowski ◽  
L. Dyrud ◽  
J. Fentzke ◽  
B. Hargis ◽  
...  

Until very recently, the commercialization of Earth observation systems has largely occurred in two ways: either through the detuning of government satellites or the repurposing of NASA (or other science) data for commercial use. However, the convergence of cloud computing and low-cost satellites is enabling Earth observation companies to tailor observation data to specific markets. Now, underserved constituencies, such as agriculture and energy, can tap into Earth observation data that is provided at a cadence, resolution and cost that can have a real impact on their bottom line. To connect with these markets, OmniEarth fuses data from a variety of sources, synthesizes it into useful and valuable business information, and delivers it to customers via web or mobile interfaces. The “secret sauce” is no longer about having the highest resolution imagery, but rather about using that imagery – in conjunction with a number of other sources – to solve complex problems that require timely and contextual information about our dynamic and changing planet. OmniEarth improves subscribers’ ability to visualize the world around them by enhancing their ability to see, analyze, and react to change in real time through a solutions-as-a-service platform.


2021 ◽  
Author(s):  
Edzer Pebesma ◽  
Patrick Griffiths ◽  
Christian Briese ◽  
Alexander Jacob ◽  
Anze Skerlevaj ◽  
...  

The openEO API allows the analysis of large amounts of Earth observation data using a high-level abstraction of data and processes. Rather than focusing on the management of virtual machines and millions of imagery files, it lets users create jobs that take a spatio-temporal section of an image collection (such as Sentinel L2A) and treat it as a data cube. Processes iterate or aggregate over pixels, spatial areas, spectral bands, or time series, while working at arbitrary spatial resolution. This pattern, pioneered by Google Earth Engine™ (GEE), lets the user focus on the science rather than on data management.

The openEO H2020 project (2017–2020) developed the API as well as an ecosystem of software around it, including clients (JavaScript, Python, R, QGIS, browser-based), back-ends that translate API calls into existing image analysis or GIS software or services (for Sentinel Hub, WCPS, Open Data Cube, GRASS GIS, GeoTrellis/GeoPySpark, and GEE), and a hub that allows querying and searching openEO providers for their capabilities and datasets. The project demonstrated this software in a number of use cases in which identical processing instructions were sent to different implementations, allowing comparison of the returned results.

A follow-up, ESA-funded project, “openEO Platform”, realizes the API and progresses the software ecosystem into operational services and applications that are accessible to everyone, that involve federated deployment (using the clouds managed by EODC, Terrascope, CreoDIAS and EuroDataCube), that will provide payment models (“pay per compute job”) conceived and implemented following user community needs, and that will use the EOSC (European Open Science Cloud) marketplace for dissemination and authentication. A wide range of large-scale case studies will demonstrate the ability of openEO Platform to scale to large data volumes. These case studies include on-demand ARD generation for SAR and multi-spectral data; agricultural demonstrators such as crop type and condition monitoring; forestry services such as near-real-time forest damage assessment and canopy cover mapping; environmental hazard monitoring of floods and air pollution; and security applications such as vessel detection in the Mediterranean Sea.

While the landscape of cloud-based EO platforms and services has matured and diversified over the past decade, we believe there are strong advantages for scientists and government agencies in adopting the openEO approach. Beyond the absence of vendor/platform lock-in or EULAs, we mention the abilities to (i) run arbitrary user code (e.g. written in R or Python) close to the data, (ii) carry out scientific computations on an entirely open-source software stack, (iii) integrate different platforms (e.g., different cloud providers offering different datasets), and (iv) help create and extend this software ecosystem. openEO uses the OpenAPI standard, aligns with modern OGC API standards, and uses STAC (the SpatioTemporal Asset Catalog) to describe image collections and image tiles.
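The data-cube pattern described in the abstract (iterating or aggregating over pixels, bands, and time rather than over files) can be illustrated without any platform at all. A minimal NumPy sketch of the two core reductions, with a toy `(time, band, y, x)` cube whose layout and values are purely illustrative:

```python
import numpy as np

# A toy data cube: 5 acquisition dates, 3 spectral bands, 4x4 pixels
rng = np.random.default_rng(0)
cube = rng.uniform(0.0, 1.0, size=(5, 3, 4, 4))

# "Reduce over time": a mean composite per band and pixel, the kind of
# operation openEO expresses as a reduction over the temporal dimension
composite = cube.mean(axis=0)        # shape (3, 4, 4)

# "Aggregate over space": one mean value per band per date, analogous
# to spatial aggregation over an area of interest (a time series)
per_date = cube.mean(axis=(2, 3))    # shape (5, 3)

print(composite.shape, per_date.shape)
```

In openEO the same logic is declared as a process graph and executed by whichever back-end holds the data; the point of the abstraction is that the user writes the reduction, not the file handling.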


Author(s):  
P. Rufin ◽  
A. Rabe ◽  
L. Nill ◽  
P. Hostert

Abstract. Earth observation analysis workflows commonly require mass processing of time series data, with data volumes easily exceeding terabyte magnitude, even for relatively small areas of interest. Cloud processing platforms such as Google Earth Engine (GEE) leverage accessibility to satellite image archives and thus facilitate time series analysis workflows. Instant visualization of time series data and integration with local data sources is, however, currently not implemented or requires coding customized scripts or applications. Here, we present the GEE Timeseries Explorer plugin which grants instant access to GEE from within QGIS. It seamlessly integrates the QGIS user interface with a compact widget for visualizing time series from any predefined or customized GEE image collection. Users can visualize time series profiles for a given coordinate as an interactive plot or visualize images with customized band rendering within the QGIS map canvas. The plugin is available through the QGIS plugin repository and detailed documentation is available online (https://geetimeseriesexplorer.readthedocs.io/).


2019 ◽  
Vol 3 ◽  
pp. 965
Author(s):  
Safran Yusri ◽  
Vincentius P. Siregar ◽  
Suharsono Suharsono

Long-term Earth observation data stored in Google Earth Engine (GEE) can be ingested and derived into biologically relevant environmental variables that can be used as predictors of a species’ niche. The aim of this research was to create a GEE script that generates biologically meaningful environmental variables from various Earth observation data and models for Indonesia. Elevation and bathymetry raster data from GEBCO were land-masked, and benthic terrain modelling was performed to obtain aspect, depth, curvature, and slope. HYCOM and MODIS Aqua datasets were filtered spatially (Indonesia and the surrounding region) and temporally (2002–2017), and reduced to biologically meaningful variables: the maximum, minimum, and mean. Water velocity vector (northward and eastward) data were also converted into scalar units. To fill data gaps, kriging was performed using Bayesian slope. Results show that water depth in Indonesia ranges from 0–6827 m, with slope ranging from 0–34.33°, aspect from 0–359.99°, and curvature from 0–0.94. Among variables representing water energy, mean sea surface elevation ranges from 0–0.85 m and mean scalar water velocity from 0–4 m/s. Mean surface salinity ranges from 20.09–35.32‰. Variables representing water quality include mean particulate organic carbon, which ranges from 25.31–953.47‰, and mean chlorophyll-a concentration, from 0.05–13.63‰. These data can be used as input for species distribution models or spatially explicit decision support systems such as Marxan for spatial planning and zonation in a Marine and Coastal Zone Management Plan.
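Two of the derivations above are simple enough to sketch outside GEE: converting northward/eastward velocity components to scalar speed, and computing slope from a depth grid by finite differences. A minimal NumPy sketch (grid values and cell size are illustrative; the paper performs these steps on HYCOM and GEBCO data in GEE):

```python
import numpy as np

# Vector-to-scalar: speed from eastward (u) and northward (v) components
u = np.array([0.3, -1.2, 0.0])   # m/s eastward
v = np.array([0.4,  0.5, 2.0])   # m/s northward
speed = np.hypot(u, v)           # scalar water velocity, m/s

# Slope (degrees) from a depth grid by central differences
depth = np.array([[0., 10., 20.],
                  [0., 10., 20.],
                  [0., 10., 20.]])   # metres, deepening eastward
cell = 10.0                          # grid spacing in metres
dz_dy, dz_dx = np.gradient(depth, cell)
slope_deg = np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))

print(speed)            # component pairs collapse to scalar speeds
print(slope_deg.max())  # uniform 10 m drop per 10 m cell -> 45 degrees
```

Aspect follows the same pattern, taking `arctan2` of the two gradient components instead of their magnitude.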


Author(s):  
Michael Evans ◽  
Taylor Minich ◽  
Rachel Soobitsky ◽  
Kumar Mainali

We have an unprecedented ability to map the Earth’s surface as deep learning technologies are applied to an abundance of high-frequency Earth observation data. Simple, free, and effective methods are needed to enable a variety of stakeholders to use these tools to improve scientific knowledge and decision making. Here we present a trained U-Net model that can map and delineate ground mounted solar arrays using publicly available Sentinel-2 imagery, and that requires minimal data pre-processing and no feature engineering. By using label overloading and image augmentation during training, the model is robust to temporal and spatial variation in imagery. The trained model achieved a precision and recall of 91.5% each and an intersection over union of 84.3% on independent validation data from two distinct geographies. This generalizability in space and time makes the model useful for repeatedly mapping solar arrays. We use this model to delineate all ground mounted solar arrays in North Carolina and the Chesapeake Bay watershed to illustrate how these methods can be used to quickly and easily produce accurate maps of solar infrastructure.
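The image-augmentation step mentioned above amounts to applying identical geometric transforms to each image chip and its label mask, so that labelled pixels stay aligned with the imagery. A minimal NumPy sketch (the function, array names, and toy chip are illustrative, not from the paper):

```python
import numpy as np

def augment(image: np.ndarray, mask: np.ndarray, k: int, flip: bool):
    """Rotate an (H, W, bands) image chip and its (H, W) label mask
    by k*90 degrees, optionally mirroring, keeping the pair aligned."""
    img = np.rot90(image, k, axes=(0, 1))
    msk = np.rot90(mask, k, axes=(0, 1))
    if flip:
        img = np.flip(img, axis=1)
        msk = np.flip(msk, axis=1)
    return img, msk

# Toy 2x2 single-band chip with a one-pixel "solar array" label
image = np.arange(4, dtype=float).reshape(2, 2, 1)
mask = np.array([[1, 0],
                 [0, 0]])
aug_img, aug_mask = augment(image, mask, k=1, flip=False)
# The labelled pixel moves with the imagery, so pairs stay consistent
print(aug_mask)  # [[0 0]
                 #  [1 0]]
```

Because rotations and flips change chip orientation but not spectral content, a model trained on augmented pairs becomes less sensitive to the arbitrary orientation of arrays in new scenes.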


2020 ◽  
Vol 12 (8) ◽  
pp. 1253 ◽  
Author(s):  
Vitor Gomes ◽  
Gilberto Queiroz ◽  
Karine Ferreira

In recent years, Earth observation (EO) satellites have generated large volumes of geospatial data that are freely available to society and researchers. This scenario challenges traditional spatial data infrastructures (SDI) to properly store, process, disseminate and analyze these big data sets. To meet these demands, novel technologies based on cloud computing and distributed systems have been proposed and developed, such as array database systems, MapReduce systems, and web services to access and process big Earth observation data. These technologies have now been integrated into cutting-edge platforms that support a new generation of SDI for big Earth observation data. This paper presents an overview of seven platforms for big Earth observation data management and analysis: Google Earth Engine (GEE), Sentinel Hub, Open Data Cube (ODC), System for Earth Observation Data Access, Processing and Analysis for Land Monitoring (SEPAL), openEO, JEODPP, and pipsCloud. We also compare these platforms according to criteria that represent capabilities of interest to the EO community.


GIS Business ◽  
2019 ◽  
Vol 12 (3) ◽  
pp. 12-14
Author(s):  
Eicher, A

Our goal is to establish Earth observation data in the business world.


Author(s):  
Tais Grippa ◽  
Stefanos Georganos ◽  
Sabine Vanhuysse ◽  
Moritz Lennert ◽  
Nicholus Mboga ◽  
...  
