Efficient Gaussian Process-Based Modelling and Prediction of Image Time Series

Author(s):  
Marco Lorenzi ◽  
Gabriel Ziegler ◽  
Daniel C. Alexander ◽  
Sebastien Ourselin
Energies ◽  
2021 ◽  
Vol 14 (15) ◽  
pp. 4392
Author(s):  
Jia Zhou ◽  
Hany Abdel-Khalik ◽  
Paul Talbot ◽  
Cristian Rabiti

This manuscript develops a workflow, driven by data analytics algorithms, to support the optimization of the economic performance of an Integrated Energy System (IES). The goal is to determine the optimum mix of capacities from a set of different energy producers (e.g., nuclear, gas, wind, and solar). A stochastic optimizer based on Gaussian Process Modeling is employed, which requires numerous samples for its training. Each sample represents a time series describing the demand, load, or other operational and economic profiles for various types of energy producers. These samples are synthetically generated using a reduced-order modeling algorithm that reads a limited set of historical data, such as demand and load data from past years. Numerous data analysis methods are employed to construct the reduced-order models, including, for example, Auto Regressive Moving Average models, Fourier series decomposition, and a peak detection algorithm. All these algorithms are designed to detrend the data and extract features that can be employed to generate synthetic time histories preserving the statistical properties of the original limited historical data. The optimization cost function is based on an economic model that assesses the effective cost of energy using two figures of merit: the specific cash flow stream for each energy producer and the total Net Present Value. An initial guess for the optimal capacities is obtained using the screening curve method. The results of the Gaussian Process model-based optimization are assessed against an exhaustive Monte Carlo search, with the comparison indicating reasonable agreement. The workflow has been implemented inside Idaho National Laboratory's Risk Analysis Virtual Environment (RAVEN) framework.
The main contribution of this study addresses several challenges in current methods for optimizing energy portfolios in an IES: first, the feasibility of generating synthetic time series for periodic peak data; second, the computational burden of conventional stochastic optimization of the energy portfolio, associated with the need for repeated executions of system models; and third, the limited attention paid in previous studies to comparing the impact of the economic parameters. The proposed workflow can provide a scientifically defensible strategy to support decision-making in the electricity market and to help energy distributors develop a better understanding of the performance of integrated energy systems.
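The detrend-and-resample idea behind the synthetic-history generator can be illustrated with a minimal sketch. The snippet below is not the RAVEN implementation: the hourly demand signal is a toy, and an AR(1) fit stands in for the full ARMA model. It does, however, show the three steps the abstract describes: Fourier detrending, fitting the stochastic residual, and generating a synthetic history that preserves the statistics of the original data.

```python
import numpy as np

rng = np.random.default_rng(42)

# --- Toy "historical" hourly demand: 60 days of a daily cycle + AR(1) noise ---
t = np.arange(24 * 60)
true_cycle = 100 + 10 * np.sin(2 * np.pi * t / 24)
noise = np.zeros(len(t))
for i in range(1, len(t)):
    noise[i] = 0.8 * noise[i - 1] + rng.normal(0, 2.0)
history = true_cycle + noise

# --- Step 1: detrend with a truncated Fourier series (least squares) ---
X = np.column_stack(
    [np.ones(len(t))]
    + [fn(2 * np.pi * k * t / 24) for k in (1, 2, 3) for fn in (np.sin, np.cos)]
)
coef, *_ = np.linalg.lstsq(X, history, rcond=None)
trend = X @ coef
residual = history - trend

# --- Step 2: fit AR(1) to the residual (stand-in for a full ARMA fit) ---
phi = np.dot(residual[1:], residual[:-1]) / np.dot(residual[:-1], residual[:-1])
sigma = np.std(residual[1:] - phi * residual[:-1])

# --- Step 3: synthetic history = deterministic trend + resampled residual ---
synth_noise = np.zeros(len(t))
for i in range(1, len(t)):
    synth_noise[i] = phi * synth_noise[i - 1] + rng.normal(0, sigma)
synthetic = trend + synth_noise
```

Each call to Step 3 yields a new, statistically consistent sample, which is what a Gaussian-process-based optimizer can then consume in bulk for training.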


Author(s):  
Aidin Tamhidi ◽  
Nicolas Kuehn ◽  
S. Farid Ghahari ◽  
Arthur J. Rodgers ◽  
Monica D. Kohler ◽  
...  

ABSTRACT Ground-motion time series are essential input data in seismic analysis and performance assessment of the built environment. Because instruments to record free-field ground motions are generally sparse, methods are needed to estimate motions at locations with no available ground-motion recording instrumentation. In this study, given a set of observed motions, ground-motion time series at target sites are constructed using a Gaussian process regression (GPR) approach, which treats the real and imaginary parts of the Fourier spectrum as random Gaussian variables. Model training, verification, and applicability studies are carried out using the physics-based simulated ground motions of the 1906 Mw 7.9 San Francisco earthquake and Mw 7.0 Hayward fault scenario earthquake in northern California. The method’s performance is further evaluated using the 2019 Mw 7.1 Ridgecrest earthquake ground motions recorded by the Community Seismic Network stations located in southern California. These evaluations indicate that the trained GPR model is able to adequately estimate the ground-motion time series for frequency ranges that are pertinent for most earthquake engineering applications. The trained GPR model exhibits proper performance in predicting the long-period content of the ground motions as well as directivity pulses.
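As a rough illustration of the interpolation scheme described above (not the authors' code), the sketch below treats the real and imaginary Fourier coefficients of toy station recordings as functions of station position and interpolates each frequency bin at a target site with scikit-learn's `GaussianProcessRegressor`. The station layout, wavelet, and fixed kernel hyperparameters are all invented for the example; a real application would tune the kernel and use 2-D or 3-D site coordinates.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# Toy setup: 12 recording stations on a line, one unmonitored target site.
stations = np.sort(rng.uniform(0, 10, 12)).reshape(-1, 1)
target = np.array([[5.0]])

# Synthetic "recordings": a wavelet whose amplitude and arrival time vary
# smoothly with position (purely illustrative, not a physical simulation).
t = np.linspace(0, 10, 256)
def motion(x):
    return (1 + 0.05 * x) * np.exp(-(t - 4 - 0.02 * x) ** 2) * np.sin(2 * np.pi * 1.5 * t)
records = np.array([motion(x[0]) for x in stations])

# GPR per frequency bin: the real and imaginary Fourier coefficients are
# each treated as a Gaussian variable over space (fixed kernel for brevity).
spectra = np.fft.rfft(records, axis=1)
kernel = 1.0 * RBF(length_scale=2.0) + WhiteKernel(noise_level=1e-6)
pred_spectrum = np.zeros(spectra.shape[1], dtype=complex)
for k in range(spectra.shape[1]):
    re = GaussianProcessRegressor(kernel=kernel, optimizer=None).fit(
        stations, spectra[:, k].real).predict(target)[0]
    im = GaussianProcessRegressor(kernel=kernel, optimizer=None).fit(
        stations, spectra[:, k].imag).predict(target)[0]
    pred_spectrum[k] = re + 1j * im

# Inverse FFT gives the estimated ground-motion time series at the target site.
pred_motion = np.fft.irfft(pred_spectrum, n=len(t))
```

Because the inverse transform recombines all bins, errors concentrated in poorly constrained high-frequency bins degrade the reconstruction there first, consistent with the abstract's point that the estimates are most reliable in the frequency ranges relevant to engineering applications.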


Author(s):  
Puneet Agarwal ◽  
William Walker ◽  
Kenneth Bhalla

The most probable maximum (MPM) is the extreme-value statistic commonly used in the offshore industry. The extreme values of vessel motions, structural responses, and environmental conditions are often expressed using the MPM. For a Gaussian process, the MPM is a function of the root-mean-square value and the zero-crossing rate of the process. Accurate estimates of the MPM may be obtained in the frequency domain from the spectral moments of the known power spectral density. If the MPM is to be estimated from the time series of a random process, whether from measurements or from simulations, the time-series data should be of sufficiently long duration, sampled at an adequate rate, and consist of an ensemble of multiple realizations. This is not the case when measured data are recorded for an insufficient duration, or when one wants to make decisions (requiring an estimate of the MPM) in real time after observing the data for only a short duration. Sometimes the instrumentation system may not be properly designed to measure the dynamic vessel motions at a fine sampling rate, or it may be a legacy instrumentation system. The question then becomes whether the short-duration and/or undersampled data are useful at all, whether some useful information (i.e., an estimate of the MPM) can be extracted, and if so, what the accuracy and uncertainty of such estimates are. In this paper, a procedure is presented for estimating the MPM from the short-time maximum, i.e., the maximum value of a time series of short duration (say, 10 or 30 minutes). For this purpose, pitch data are simulated from the vessel RAOs (response amplitude operators). Factors to convert the short-time maxima to the MPM are computed for various non-exceedance levels. It is shown that the factors estimated from simulation can also be obtained from the theory of extremes of a Gaussian process.
Afterwards, estimation of the MPM from the short-time maxima is explored for an undersampled process. It is found that the undersampled data can still be useful, and factors to convert the short-time maxima to the MPM can be derived for an associated non-exceedance level. However, compared to the adequately sampled data, the factors for the undersampled data are less useful, since they depend on more variables and carry more uncertainty. While vessel pitch data were the focus of this paper, the results and conclusions are valid for any adequately sampled narrow-banded Gaussian process.
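The frequency-domain route to the MPM mentioned above can be sketched in a few lines. The spectrum, exposure duration, and non-exceedance levels below are illustrative choices, not values from the paper; the formulas are the standard extreme-value results for a narrow-banded Gaussian process.

```python
import numpy as np

# Illustrative narrow-banded PSD: a Gaussian bump centred at 0.1 Hz.
f = np.linspace(0.001, 1.0, 5000)            # frequency [Hz]
S = np.exp(-0.5 * ((f - 0.1) / 0.01) ** 2)   # one-sided PSD (arbitrary units)
df = f[1] - f[0]

# Spectral moments m_n = integral of f^n * S(f) df.
m0 = np.sum(S) * df
m2 = np.sum(f ** 2 * S) * df

sigma = np.sqrt(m0)        # RMS of the process
nu0 = np.sqrt(m2 / m0)     # mean zero up-crossing rate [Hz]

T = 3 * 3600.0             # 3-hour exposure [s]
N = nu0 * T                # expected number of up-crossings in T

# Most probable maximum: mode of the extreme-value distribution.
mpm = sigma * np.sqrt(2.0 * np.log(N))

# Extreme at a given non-exceedance level p: solve
# P(max < x) = exp(-N * exp(-x**2 / (2 * m0))) = p for x.
def extreme_at_level(p):
    return sigma * np.sqrt(2.0 * np.log(N / -np.log(p)))

x50, x90 = extreme_at_level(0.5), extreme_at_level(0.9)
```

The ratio `extreme_at_level(p) / mpm` plays the same role as the conversion factors discussed in the abstract: it relates a maximum observed at a given non-exceedance level to the MPM of the full exposure.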


2009 ◽  
Vol 20 (8) ◽  
pp. 887-896 ◽  
Author(s):  
Subhasish Mohanty ◽  
Santanu Das ◽  
Aditi Chattopadhyay ◽  
Pedro Peralta

2015 ◽  
Vol 27 (9) ◽  
pp. 1825-1856 ◽  
Author(s):  
Karthik C. Lakshmanan ◽  
Patrick T. Sadtler ◽  
Elizabeth C. Tyler-Kabara ◽  
Aaron P. Batista ◽  
Byron M. Yu

Noisy, high-dimensional time series observations can often be described by a set of low-dimensional latent variables. Commonly used methods to extract these latent variables typically assume instantaneous relationships between the latent and observed variables. In many physical systems, changes in the latent variables manifest as changes in the observed variables after time delays. Techniques that do not account for these delays can recover a larger number of latent variables than are present in the system, thereby making the latent representation more difficult to interpret. In this work, we introduce a novel probabilistic technique, time-delay gaussian-process factor analysis (TD-GPFA), that performs dimensionality reduction in the presence of a different time delay between each pair of latent and observed variables. We demonstrate how using a gaussian process to model the evolution of each latent variable allows us to tractably learn these delays over a continuous domain. Additionally, we show how TD-GPFA combines temporal smoothing and dimensionality reduction into a common probabilistic framework. We present an expectation/conditional maximization either (ECME) algorithm to learn the model parameters. Our simulations demonstrate that when time delays are present, TD-GPFA is able to correctly identify these delays and recover the latent space. We then applied TD-GPFA to the activity of tens of neurons recorded simultaneously in the macaque motor cortex during a reaching task. TD-GPFA is able to better describe the neural activity using a more parsimonious latent space than GPFA, a method that has been used to interpret motor cortex data but does not account for time delays. More broadly, TD-GPFA can help to unravel the mechanisms underlying high-dimensional time series data by taking into account physical delays in the system.
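A toy numpy experiment (not TD-GPFA itself) makes the delay problem concrete: a single smooth latent observed on two channels with a relative delay looks two-dimensional to an instantaneous method, whereas allowing a per-channel lag recovers both the delay and the one-dimensional structure. The signal construction and the 25-sample delay below are arbitrary choices for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(1)

# One smooth latent: box-filtered white noise (a crude stand-in for a GP draw).
n = 2000
z = np.convolve(rng.normal(size=n + 200), np.ones(50) / 50, mode="same")[100:100 + n]

# Two observed channels driven by the SAME latent, one with a time delay.
delay = 25
y1 = z[delay:] + 0.01 * rng.normal(size=n - delay)
y2 = z[:-delay] + 0.01 * rng.normal(size=n - delay)
Y = np.column_stack([y1, y2])

# An instantaneous method (here, PCA on the covariance) needs TWO components
# to explain the data, even though only one latent drives the system.
Yc = Y - Y.mean(axis=0)
eigvals = np.linalg.eigvalsh(np.cov(Yc.T))[::-1]
explained = eigvals / eigvals.sum()

# Allowing a relative lag: the lag that maximizes cross-correlation recovers
# the delay, after which a single latent suffices.
def lagged_corr(a, b, lag):
    """Correlation of a[t] with b[t + lag]."""
    if lag >= 0:
        return np.corrcoef(a[: len(a) - lag], b[lag:])[0, 1]
    return np.corrcoef(a[-lag:], b[: len(b) + lag])[0, 1]

lags = np.arange(-60, 61)
xcorr = [lagged_corr(y1, y2, lag) for lag in lags]
best = lags[int(np.argmax(xcorr))]
```

TD-GPFA generalizes this idea far beyond a pairwise lag search: the Gaussian-process prior on each latent lets the delays be learned jointly over a continuous domain for every latent-observation pair.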

