THE POTENTIAL OF SENTINEL-1 DATA TO SUPPLEMENT HIGH RESOLUTION EARTH OBSERVATION DATA FOR MONITORING GREEN AREAS IN CITIES

Author(s):  
A. Iglseder ◽  
M. Bruggisser ◽  
A. Dostálová ◽  
N. Pfeifer ◽  
S. Schlaffer ◽  
...  

Abstract. Green areas play an important role within urban agglomerations due to their impact on local climate and their recreational function. For detailed monitoring, frameworks like the flora fauna habitat (FFH) classification scheme of the European Union’s Habitats Directive are broadly used. To date, FFH classifications are mostly expert-based. Within this study, a data-driven approach to FFH classification is tested. For two test areas in the municipality of Vienna, ALS point cloud data are used to derive predictor variables such as terrain features, vegetation structure and potential insolation, as well as reflection properties from full-waveform analysis, on a 1 m grid. In addition, Sentinel-1 C-band time series data are used to increase the temporal resolution of the predictive features and to add phenological characteristics. For two 1.3 × 1.3 km test tiles, random forest classifiers are trained using different combinations (ALS, SAR, ALS+SAR) of input features. In all model test runs, the combination of ALS and SAR input features led to the best prediction accuracies when applied to test data.
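The feature-combination experiment described above can be sketched with standard scikit-learn tooling. This is a minimal illustration only: the feature names, synthetic data, and binary class labels are assumptions for the sketch, not the study's actual ALS/SAR predictors or FFH classes.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
als = rng.normal(size=(n, 4))   # stand-ins for e.g. terrain / vegetation-structure features
sar = rng.normal(size=(n, 6))   # stand-ins for e.g. Sentinel-1 backscatter time series statistics
y = (als[:, 0] + sar[:, 0] > 0).astype(int)  # synthetic two-class "habitat" label

# Train one classifier per feature combination, as in the study's setup
scores = {}
for name, X in {"ALS": als, "SAR": sar, "ALS+SAR": np.hstack([als, sar])}.items():
    Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(Xtr, ytr)
    scores[name] = accuracy_score(yte, clf.predict(Xte))
print(scores)
```

With a label that genuinely depends on both feature groups, the combined ALS+SAR model tends to score highest, mirroring the study's finding.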

Author(s):  
P. Rufin ◽  
A. Rabe ◽  
L. Nill ◽  
P. Hostert

Abstract. Earth observation analysis workflows commonly require mass processing of time series data, with data volumes easily exceeding terabyte magnitude, even for relatively small areas of interest. Cloud processing platforms such as Google Earth Engine (GEE) leverage accessibility to satellite image archives and thus facilitate time series analysis workflows. Instant visualization of time series data and integration with local data sources is, however, currently not implemented or requires coding customized scripts or applications. Here, we present the GEE Timeseries Explorer plugin which grants instant access to GEE from within QGIS. It seamlessly integrates the QGIS user interface with a compact widget for visualizing time series from any predefined or customized GEE image collection. Users can visualize time series profiles for a given coordinate as an interactive plot or visualize images with customized band rendering within the QGIS map canvas. The plugin is available through the QGIS plugin repository and detailed documentation is available online (https://geetimeseriesexplorer.readthedocs.io/).


Author(s):  
Juniana Husna ◽  
Sanusi Sanusi

The Asian-Australian monsoon circulation causes the Indonesian region to experience climate variability that affects rainfall differently across Indonesia’s zones. Local climate conditions such as rainfall are commonly simulated using GCM time series data. This study models the statistical downscaling of GCM output, in the form of a 7×7 grid, using Support Vector Regression (SVR) for rainfall forecasting during drought in Bireuen Regency, Aceh. The model yields optimal results with the parameters C = 0.5, γ = 0.8, d = 1, and ε = 0.01. Computation times for training and testing are about 45 seconds for the linear kernel and about 2 minutes for the polynomial kernel. The correlation coefficient and RMSE between the GCM output and the observed data at the Gandapura weather station are 0.672 and 21.106, respectively; this RMSE is the lowest in the region, compared with 31.428 at the Juli station, although the Juli station has the highest correlation at 0.677. The polynomial kernel, in turn, yields a correlation coefficient of 0.577 and an RMSE of 29.895. In summary, the best SVR-downscaled GCM model is the one for the Gandapura weather station, as it has the lowest RMSE together with a high correlation.
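The downscaling setup can be sketched with scikit-learn's epsilon-SVR: the flattened 7×7 GCM grid serves as the 49 predictors and station rainfall as the target. This is a hedged sketch on synthetic data, not the authors' pipeline; the reported γ and d apply to the polynomial kernel, while this sketch uses the linear kernel with the reported C and ε.

```python
import numpy as np
from sklearn.metrics import mean_squared_error
from sklearn.svm import SVR

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 49))  # 7x7 GCM grid flattened into 49 predictors per time step
y = X @ rng.normal(size=49) + rng.normal(scale=0.5, size=200)  # synthetic station rainfall

# Fit epsilon-SVR with the parameters reported in the abstract (C=0.5, epsilon=0.01)
svr = SVR(kernel="linear", C=0.5, epsilon=0.01).fit(X[:150], y[:150])
pred = svr.predict(X[150:])

# Evaluate with the abstract's two metrics: RMSE and correlation coefficient
rmse = mean_squared_error(y[150:], pred) ** 0.5
r = np.corrcoef(y[150:], pred)[0, 1]
print(round(rmse, 3), round(r, 3))
```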


2020 ◽  
Vol 496 (1) ◽  
pp. 629-637
Author(s):  
Ce Yu ◽  
Kun Li ◽  
Shanjiang Tang ◽  
Chao Sun ◽  
Bin Ma ◽  
...  

ABSTRACT Time series data of celestial objects are commonly used to study valuable and unexpected objects such as extrasolar planets and supernovae in time-domain astronomy. Due to the rapid growth of data volumes, traditional manual methods are becoming infeasible for continuously analysing accumulated observation data. To meet such demands, we designed and implemented a special tool named AstroCatR that can efficiently and flexibly reconstruct time series data from large-scale astronomical catalogues. AstroCatR can load original catalogue data from Flexible Image Transport System (FITS) files or databases, match each item to determine which object it belongs to, and finally produce time series data sets. To support high-performance parallel processing of large-scale data sets, AstroCatR uses an extract-transform-load (ETL) pre-processing module to create sky zone files and balance the workload. The matching module uses an overlapped indexing method and an in-memory reference table to improve accuracy and performance. The output of AstroCatR can be stored in CSV files or transformed into other formats as needed. At the same time, the module-based software architecture ensures the flexibility and scalability of AstroCatR. We evaluated AstroCatR with actual observation data from the three Antarctic Survey Telescopes (AST3). The experiments demonstrate that AstroCatR can efficiently and flexibly reconstruct all time series data by setting relevant parameters and configuration files. Furthermore, the tool is approximately 3× faster at matching massive catalogues than methods using relational database management systems.
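The zone-based matching idea can be illustrated with a toy sketch (AstroCatR's actual ETL and overlapped-indexing implementation differs, and the zone height and match radius here are illustrative): detections are binned into declination zones, and each detection is compared against reference objects in its own and neighbouring zones to build per-object time series. Spherical geometry and RA wrap-around are ignored for brevity.

```python
import math
from collections import defaultdict

ZONE_HEIGHT = 0.5     # degrees of declination per sky zone (illustrative)
RADIUS = 1.0 / 3600   # ~1 arcsec match radius (illustrative)

def zone_of(dec):
    """Map a declination to an integer zone index."""
    return int(math.floor((dec + 90.0) / ZONE_HEIGHT))

def build_index(reference):
    """Group reference objects (id, ra, dec) by declination zone."""
    index = defaultdict(list)
    for obj_id, ra, dec in reference:
        index[zone_of(dec)].append((obj_id, ra, dec))
    return index

def match(detections, index):
    """Assign each detection (t, ra, dec, mag) to a reference object."""
    series = defaultdict(list)
    for t, ra, dec, mag in detections:
        z = zone_of(dec)
        for zz in (z - 1, z, z + 1):  # overlapped lookup: check neighbouring zones too
            for obj_id, r0, d0 in index.get(zz, []):
                if abs(ra - r0) < RADIUS and abs(dec - d0) < RADIUS:
                    series[obj_id].append((t, mag))
    return series

ref = [("star1", 10.0, -30.0)]
dets = [(1.0, 10.0, -30.0, 15.2), (2.0, 10.00001, -30.00001, 15.3)]
series = match(dets, build_index(ref))
print(dict(series))
```

Checking the neighbouring zones avoids missing matches for objects sitting right on a zone boundary, which is the purpose of the overlapped indexing the abstract mentions.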


2016 ◽  
Vol 55 (10) ◽  
pp. 2165-2180 ◽  
Author(s):  
Takeshi Watanabe ◽  
Takahiro Takamatsu ◽  
Takashi Y. Nakajima

Abstract. Variation in surface solar irradiance is investigated using ground-based observation data. The solar irradiance analyzed in this paper is scaled by the solar irradiance at the top of the atmosphere and is thus dimensionless. Three metrics are used to evaluate the variation in solar irradiance: the mean, standard deviation, and sample entropy. Sample entropy is a value representing the complexity of time series data, but it is not often used for investigation of solar irradiance. In analyses of solar irradiance, sample entropy represents the manner of its fluctuation; large sample entropy corresponds to rapid fluctuation and a high ramp rate, and small sample entropy suggests weak or slow fluctuations. The three metrics are used to cluster 47 ground-based observation stations in Japan into groups with similar features of variation in surface solar irradiance. This new approach clarifies regional features of variation in solar irradiance. The results of this study can be applied to renewable-energy engineering.
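Sample entropy, SampEn(m, r) = -ln(A/B), compares the number B of template pairs of length m that match within tolerance r against the number A of matching pairs of length m + 1; the more regular the series, the closer A/B is to 1 and the smaller the entropy. A plain-Python sketch on a synthetic series (the paper's m and r choices are not stated here, so m = 2 and r = 0.2 are illustrative defaults):

```python
import math

def sample_entropy(x, m=2, r=0.2):
    """SampEn(m, r) of a 1-D sequence, using Chebyshev distance between templates."""
    n = len(x)

    def count(mm):
        # Count pairs of length-mm templates within tolerance r (self-matches excluded)
        c = 0
        for i in range(n - mm):
            for j in range(i + 1, n - mm):
                if max(abs(x[i + k] - x[j + k]) for k in range(mm)) <= r:
                    c += 1
        return c

    B, A = count(m), count(m + 1)
    return -math.log(A / B) if A and B else float("inf")

regular = [0.0, 1.0] * 20          # perfectly periodic series
print(sample_entropy(regular))     # near zero: low complexity
```

A noisy, rapidly fluctuating irradiance series would yield a much larger value, which is exactly how the paper uses the metric to separate stations by ramp behaviour.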


2019 ◽  
Author(s):  
Girish L

Network and cloud data centers generate large volumes of data every second, which can be collected as time series data. A time series is a sequence of values taken at successive, equally spaced points in time: the values of a metric recorded at a fixed interval over some period form a time series. Such data can be collected from system metrics like CPU, memory, and disk utilization. The TICK Stack is an acronym for a platform of open-source tools built to make collection, storage, graphing, and alerting on time series data easy. As data collectors, the authors use both Telegraf and collectd; for storing and analyzing the data, the time series database InfluxDB; and for plotting and visualization, Chronograf along with Grafana. Kapacitor is used for alert refinement: once system metric usage exceeds a specified threshold, an alert is generated and sent to the system administrator.
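The collector-to-alert flow can be sketched in a few lines. This is a hedged illustration: the measurement name, tags, and 90% threshold are hypothetical, and the real Telegraf and Kapacitor components are configured declaratively rather than coded. The first function emits InfluxDB line protocol, the wire format the collectors write to the database.

```python
def to_line_protocol(measurement, tags, fields, ts_ns):
    """Serialise one metric sample to InfluxDB line protocol:
    measurement,tag=... field=... timestamp"""
    tag_str = ",".join(f"{k}={v}" for k, v in tags.items())
    field_str = ",".join(f"{k}={v}" for k, v in fields.items())
    return f"{measurement},{tag_str} {field_str} {ts_ns}"

def check_threshold(value, limit=90.0):
    """Kapacitor-style rule: alert when usage exceeds the threshold."""
    return f"ALERT: usage {value}% exceeds {limit}%" if value > limit else "ok"

line = to_line_protocol("cpu", {"host": "node1"}, {"usage": 95.5},
                        ts_ns=1_600_000_000_000_000_000)
print(line)                    # cpu,host=node1 usage=95.5 1600000000000000000
print(check_threshold(95.5))   # ALERT: usage 95.5% exceeds 90.0%
```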


Author(s):  
Sibo Cheng ◽  
Mingming Qiu

Abstract. Data assimilation techniques are widely used to predict complex dynamical systems with uncertainties, based on time-series observation data. Error covariance matrix modeling is an important element in data assimilation algorithms and can considerably impact forecasting accuracy. The estimation of these covariances, which usually relies on empirical assumptions and physical constraints, is often imprecise and computationally expensive, especially for systems of large dimension. In this work, we propose a data-driven approach based on long short-term memory (LSTM) recurrent neural networks (RNN) to improve both the accuracy and the efficiency of observation covariance specification in data assimilation for dynamical systems. Learning the covariance matrix from observed/simulated time-series data, the proposed approach does not require any knowledge or assumption about the prior error distribution, unlike classical posterior tuning methods. We compared the novel approach with two state-of-the-art covariance tuning algorithms, namely DI01 and D05, first in a Lorenz dynamical system and then in a 2D shallow-water twin-experiment framework with different covariance parameterizations using ensemble assimilation. The novel method shows significant advantages in observation covariance specification, assimilation accuracy, and computational efficiency.


2021 ◽  
Author(s):  
Eberhard Voit ◽  
Jacob Davis ◽  
Daniel Olivenca

Abstract. For close to a century, Lotka-Volterra (LV) models have been used to investigate interactions among populations of different species. For a few species, these investigations are straightforward. However, with the arrival of large and complex microbiomes, unprecedentedly rich data have become available and await analysis. In particular, these data require us to ask which microbial populations of a mixed community affect other populations, whether these influences are activating or inhibiting, and how the interactions change over time. Here we present two new inference strategies for interaction parameters that are based on a new algebraic LV inference (ALVI) method. One strategy uses different survivor profiles of communities grown under similar conditions, while the other pertains to time series data. In addition, we address the question of whether observation data are compliant with the LV structure or require a richer modeling format.
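The algebraic flavour of LV inference from time series can be illustrated as follows (a sketch under simplifying assumptions, not the authors' ALVI algorithm): for the generalised LV model dx_i/dt = x_i (r_i + Σ_j a_ij x_j), the per-capita growth rate (dx_i/dt)/x_i is linear in the abundances, so the growth rates r and interaction matrix A can be recovered from a trajectory by least squares.

```python
import numpy as np

# Ground-truth two-species generalised LV system (synthetic)
r_true = np.array([1.0, 0.5])
A_true = np.array([[-1.0, -0.5],
                   [-0.3, -0.8]])

# Simulate a trajectory with small forward-Euler steps
dt, steps = 0.001, 5000
x = np.array([0.5, 0.5])
traj = [x.copy()]
for _ in range(steps):
    x = x + dt * x * (r_true + A_true @ x)
    traj.append(x.copy())
traj = np.array(traj)

# Finite-difference per-capita growth rates: (dx/dt)/x ≈ r + A x
growth = (traj[1:] - traj[:-1]) / (dt * traj[:-1])     # shape (steps, 2)
X = np.hstack([np.ones((steps, 1)), traj[:-1]])        # design matrix [1, x1, x2]
coef, *_ = np.linalg.lstsq(X, growth, rcond=None)      # column i = [r_i, a_i1, a_i2]
r_est, A_est = coef[0], coef[1:].T
print(np.round(r_est, 2), np.round(A_est, 2))
```

With noise-free data the regression recovers r and A essentially exactly; with real microbiome data, noise and sampling gaps make this step far harder, which is what motivates more robust inference strategies like those in the paper.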


2019 ◽  
Vol 23 (12) ◽  
pp. 5089-5110 ◽  
Author(s):  
Frederik Kratzert ◽  
Daniel Klotz ◽  
Guy Shalev ◽  
Günter Klambauer ◽  
Sepp Hochreiter ◽  
...  

Abstract. Regional rainfall–runoff modeling is an old but still mostly outstanding problem in the hydrological sciences. The problem currently is that traditional hydrological models degrade significantly in performance when calibrated for multiple basins together instead of for a single basin alone. In this paper, we propose a novel, data-driven approach using Long Short-Term Memory networks (LSTMs) and demonstrate that under a “big data” paradigm, this is not necessarily the case. By training a single LSTM model on 531 basins from the CAMELS dataset using meteorological time series data and static catchment attributes, we were able to significantly improve performance compared to a set of several different hydrological benchmark models. Our proposed approach not only significantly outperforms hydrological models that were calibrated regionally, but also achieves better performance than hydrological models that were calibrated for each basin individually. Furthermore, we propose an adaptation to the standard LSTM architecture, which we call an Entity-Aware-LSTM (EA-LSTM), that allows for learning catchment similarities as a feature layer in a deep learning model. We show that these learned catchment similarities correspond well to what we would expect from prior hydrological understanding.


2020 ◽  
Author(s):  
Daniel Nüst ◽  
Eike H. Jürrens ◽  
Benedikt Gräler ◽  
Simon Jirka

<p>Time series data of in-situ measurements are the key to many environmental studies. The first challenge in any analysis typically arises when the data needs to be imported into the analysis framework. Standardisation is one way to lower this burden. Unfortunately, the relevant interoperability standards can be challenging for non-IT experts unless they are dealt with behind the scenes of a client application. One standard providing access to environmental time series data is the Sensor Observation Service (SOS) specification published by the Open Geospatial Consortium (OGC). SOS instances are currently used in a broad range of applications such as hydrology, air quality monitoring, and ocean sciences. Data sets provided via an SOS interface can be found around the globe, from Europe to New Zealand.</p><p>The R package sos4R (Nüst et al., 2011) is an extension package for the R environment for statistical computing and visualization, which has been demonstrated as a powerful tool for conducting and communicating geospatial research (cf. Pebesma et al., 2012). sos4R comprises a client that can connect to an SOS server. Users can query data from SOS instances using simple R function calls. It provides a convenience layer for R users to integrate observation data from data access servers compliant with the SOS standard without any knowledge of the underlying technical standards. To further improve usability for non-SOS experts, a recent update to sos4R includes a set of wrapper functions which remove complexity and technical language specific to OGC specifications. This update also features specific consideration of the OGC SOS 2.0 Hydrology Profile and thereby opens up a new scientific domain.</p><p>In our presentation we illustrate use cases and examples building upon sos4R to ease access to time series data in an R and Shiny context. We demonstrate how the abstraction provided by the client library makes sensor observation data accessible, and further show how sos4R allows the seamless integration of distributed observation data, i.e., across organisational boundaries, into transparent and reproducible data analysis workflows.</p><p><strong>References</strong></p><p>Nüst D., Stasch C., Pebesma E. (2011) Connecting R to the Sensor Web. In: Geertman S., Reinhardt W., Toppen F. (eds) Advancing Geoinformation Science for a Changing World. Lecture Notes in Geoinformation and Cartography, Springer.</p><p>Pebesma, E., Nüst, D., & Bivand, R. (2012). The R software environment in reproducible geoscientific research. Eos, Transactions American Geophysical Union, 93(16), 163–163.</p>

