Extracting the scaling exponents of a self-affine, non-Gaussian process from a finite-length time series

2006 ◽  
Vol 74 (5) ◽  
Author(s):  
K. Kiyani ◽  
S. C. Chapman ◽  
B. Hnat

2018 ◽  
Vol 2018 ◽  
pp. 1-5 ◽  
Author(s):  
Keqiang Dong ◽  
Linan Long

The complexity-entropy causality plane, a powerful tool for discriminating Gaussian from non-Gaussian processes, has recently been introduced to describe the complexity of time series. We propose to use this method to distinguish the climb, cruise, and decline stages of an aeroengine. Our empirical results demonstrate that this statistical physics approach is useful. Further, a return-interval-based complexity-entropy causality plane is introduced to describe the complexity of aeroengine fuel-flow time series. The results indicate that the cruise process has the lowest complexity and the decline process the highest.
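The construction behind the complexity-entropy causality plane is compact enough to sketch. The Python snippet below computes the Bandt-Pompe ordinal-pattern distribution of a series and places it on the plane via its normalized permutation entropy H and Jensen-Shannon statistical complexity C; the embedding dimension d = 4 and delay tau = 1 are illustrative defaults, not parameters taken from the paper.

```python
import itertools
import math
from collections import Counter

import numpy as np

def ordinal_probabilities(x, d=4, tau=1):
    """Bandt-Pompe ordinal-pattern distribution (embedding dimension d, delay tau)."""
    patterns = [tuple(np.argsort(x[i:i + d * tau:tau]).tolist())
                for i in range(len(x) - (d - 1) * tau)]
    counts = Counter(patterns)
    p = np.array([counts.get(perm, 0)
                  for perm in itertools.permutations(range(d))], dtype=float)
    return p / p.sum()

def complexity_entropy(x, d=4, tau=1):
    """Coordinates on the plane: normalized permutation entropy H and complexity C."""
    p = ordinal_probabilities(x, d, tau)
    n = len(p)  # d! possible ordinal patterns

    def shannon(q):
        q = q[q > 0]
        return -np.sum(q * np.log(q))

    H = shannon(p) / math.log(n)                 # normalized Shannon entropy
    u = np.full(n, 1.0 / n)                      # uniform reference distribution
    js = shannon(0.5 * (p + u)) - 0.5 * shannon(p) - 0.5 * shannon(u)
    # Maximum attainable Jensen-Shannon divergence, used to normalize C into [0, 1]
    js_max = math.log(2 * n) - (n + 1) / (2 * n) * math.log(n + 1) - 0.5 * math.log(n)
    return H, (js / js_max) * H                  # C = Q_JS(p, u) * H

# White noise should land at high entropy / low complexity on the plane.
rng = np.random.default_rng(0)
print(complexity_entropy(rng.standard_normal(10_000)))
```

A stochastic process such as fractional Brownian motion would fall at intermediate entropy and higher complexity, which is what makes the plane useful for separating stages with different dynamics.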


2021 ◽  
pp. 1-13
Author(s):  
Haitao Liu ◽  
Yew-Soon Ong ◽  
Ziwei Yu ◽  
Jianfei Cai ◽  
Xiaobo Shen

Energies ◽  
2021 ◽  
Vol 14 (15) ◽  
pp. 4392
Author(s):  
Jia Zhou ◽  
Hany Abdel-Khalik ◽  
Paul Talbot ◽  
Cristian Rabiti

This manuscript develops a workflow, driven by data analytics algorithms, to support the optimization of the economic performance of an Integrated Energy System (IES). The goal is to determine the optimum mix of capacities from a set of different energy producers (e.g., nuclear, gas, wind, and solar). A stochastic optimizer based on Gaussian Process Modeling is employed, which requires numerous samples for its training. Each sample represents a time series describing the demand, load, or other operational and economic profiles for various types of energy producers. These samples are synthetically generated using a reduced order modeling algorithm that reads a limited set of historical data, such as demand and load data from past years. Numerous data analysis methods are employed to construct the reduced order models, including, for example, the Auto Regressive Moving Average, Fourier series decomposition, and the peak detection algorithm. All these algorithms are designed to detrend the data and extract features that can be employed to generate synthetic time histories preserving the statistical properties of the original limited historical data. The optimization cost function is based on an economic model that assesses the effective cost of energy using two figures of merit: the specific cash flow stream for each energy producer and the total Net Present Value. An initial guess for the optimal capacities is obtained using the screening curve method. The Gaussian Process model-based optimization is assessed against an exhaustive Monte Carlo search, which indicates that the optimization results are reasonable. The workflow has been implemented inside the Idaho National Laboratory's Risk Analysis and Virtual Environment (RAVEN) framework. The main contribution of this study is to address several challenges in current methods for optimizing energy portfolios in IES: first, the feasibility of generating synthetic time series of the periodic peak data; second, the computational burden of conventional stochastic optimization of the energy portfolio, associated with the need for repeated executions of system models; and third, the inadequacy of previous studies in comparing the impact of economic parameters. The proposed workflow can provide a scientifically defensible strategy to support decision-making in the electricity market and to help energy distributors develop a better understanding of the performance of integrated energy systems.
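As a rough illustration of the synthetic-history step of the workflow (a sketch, not the RAVEN implementation), the Python snippet below detrends a demand signal with a least-squares Fourier series, fits an ARMA model to the residual, and resamples it to produce surrogate histories. The function names, the assumed periods, and the ARMA order are choices made for this example only.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

def fourier_trend(y, periods):
    """Least-squares Fourier trend: one sin/cos pair per assumed period."""
    t = np.arange(len(y), dtype=float)
    cols = [np.ones_like(t)]
    for p in periods:
        cols += [np.sin(2 * np.pi * t / p), np.cos(2 * np.pi * t / p)]
    X = np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return X @ beta

def synthetic_histories(y, periods, order=(2, 0, 1), n_samples=5):
    """Detrend with a Fourier series, fit ARMA to the residual, resample, re-add trend."""
    trend = fourier_trend(y, periods)
    arma = ARIMA(y - trend, order=order).fit()
    return np.array([trend + arma.simulate(nsimulations=len(y))
                     for _ in range(n_samples)])

# Toy hourly "demand" signal with a daily (24 h) cycle plus noise.
rng = np.random.default_rng(1)
t = np.arange(2000)
demand = 100 + 10 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 2, len(t))
samples = synthetic_histories(demand, periods=(24,), n_samples=3)
print(samples.shape)  # (3, 2000)
```

Each surrogate keeps the deterministic periodic structure while redrawing the stochastic residual, which is the property the Gaussian Process optimizer needs from its training samples.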


2012 ◽  
Vol 16 (1) ◽  
pp. 29-42 ◽  
Author(s):  
M. Siena ◽  
A. Guadagnini ◽  
M. Riva ◽  
S. P. Neuman

Abstract. We use three methods to identify power-law scaling of multi-scale log air permeability data collected by Tidwell and Wilson on the faces of a laboratory-scale block of Topopah Spring tuff: the method of moments (M), Extended Self-Similarity (ESS) and a generalized version thereof (G-ESS). All three methods focus on q-th-order sample structure functions of absolute increments. Most such functions exhibit power-law scaling at best over a limited midrange of experimental separation scales, or lags, which are sometimes difficult to identify unambiguously by means of M. ESS and G-ESS extend this range in a way that renders power-law scaling easier to characterize. Our analysis confirms the superiority of ESS and G-ESS over M in identifying the scaling exponents, ξ(q), of corresponding structure functions of orders q, suggesting further that ESS is more reliable than G-ESS. The exponents vary in a nonlinear fashion with q, as is typical of real or apparent multifractals. Our estimates of the Hurst scaling coefficient increase with support scale, implying a reduction in roughness (anti-persistence) of the log permeability field with measurement volume. The finding by Tidwell and Wilson that log permeabilities associated with all tip sizes can be characterized by stationary variogram models, coupled with our findings that log permeability increments associated with the smallest tip size are approximately Gaussian and that increments associated with all tip sizes show nonlinear variations in ξ(q) with q, is consistent with a view of these data as a sample from a truncated version (tfBm) of self-affine fractional Brownian motion (fBm). Since in theory the scaling exponents, ξ(q), of tfBm vary linearly with q, we conclude that nonlinear scaling in our case is not an indication of multifractality but an artifact of sampling from tfBm. This allows us to explain theoretically how power-law scaling of our data, as well as of non-Gaussian heavy-tailed signals subordinated to tfBm, is extended by ESS. It further allows us to identify the functional form and estimate all parameters of the corresponding tfBm based on sample structure functions of first and second orders.
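To make the M versus ESS comparison concrete, here is a minimal Python sketch of both estimators: the method of moments fits log S_q(l) against log l, while ESS fits log S_q against log S_2 of the same record, which typically extends the usable scaling range. Using q = 2 as the ESS reference order is an illustrative choice (G-ESS generalizes the reference), and the lag range below is arbitrary.

```python
import numpy as np

def structure_functions(x, lags, q_values):
    """Sample structure functions S_q(l) = <|x(t + l) - x(t)|^q>."""
    S = np.empty((len(q_values), len(lags)))
    for j, lag in enumerate(lags):
        inc = np.abs(x[lag:] - x[:-lag])
        for i, q in enumerate(q_values):
            S[i, j] = np.mean(inc ** q)
    return S

def scaling_exponents(x, lags, q_values, ess=False):
    """M: slope of log S_q vs log(lag). ESS: slope of log S_q vs log S_2,
    which yields the relative exponent xi(q)/xi(2) over an extended range."""
    S = structure_functions(x, lags, q_values)
    ref = np.log(S[list(q_values).index(2)]) if ess else np.log(lags)
    return np.array([np.polyfit(ref, np.log(S[i]), 1)[0]
                     for i in range(len(q_values))])

# Brownian motion has xi(q) = q/2 and xi(2) = 1, so the ESS slopes for
# q = 1..4 should come out near [0.5, 1.0, 1.5, 2.0].
rng = np.random.default_rng(2)
x = np.cumsum(rng.standard_normal(100_000))
print(scaling_exponents(x, lags=np.arange(1, 200, 5), q_values=[1, 2, 3, 4], ess=True))
```

For a truly self-affine record, M and ESS should agree on ξ(q) wherever M finds a clean power law; the paper's point is that finite length and truncation degrade the M fit long before they degrade the ESS fit.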


Author(s):  
Aidin Tamhidi ◽  
Nicolas Kuehn ◽  
S. Farid Ghahari ◽  
Arthur J. Rodgers ◽  
Monica D. Kohler ◽  
...  

Abstract. Ground-motion time series are essential input data in seismic analysis and performance assessment of the built environment. Because instruments to record free-field ground motions are generally sparse, methods are needed to estimate motions at locations with no available ground-motion recording instrumentation. In this study, given a set of observed motions, ground-motion time series at target sites are constructed using a Gaussian process regression (GPR) approach, which treats the real and imaginary parts of the Fourier spectrum as random Gaussian variables. Model training, verification, and applicability studies are carried out using the physics-based simulated ground motions of the 1906 Mw 7.9 San Francisco earthquake and the Mw 7.0 Hayward fault scenario earthquake in northern California. The method's performance is further evaluated using the 2019 Mw 7.1 Ridgecrest earthquake ground motions recorded by the Community Seismic Network stations located in southern California. These evaluations indicate that the trained GPR model adequately estimates ground-motion time series over the frequency ranges pertinent to most earthquake engineering applications, and that it performs well in predicting the long-period content of the ground motions as well as directivity pulses.
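A skeletal version of the regression step might look as follows (a toy stand-in, not the trained model from the study): for each frequency bin, the real and imaginary Fourier coefficients observed at the stations are treated as Gaussian-process outputs over station coordinates, and the prediction at the target site is inverse-transformed back to a time series. The RBF kernel, its length scale, and the noise level are assumptions made for illustration.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def predict_motion(station_xy, records, target_xy):
    """Per frequency bin, regress the real and imaginary Fourier coefficients on
    station coordinates, then inverse-transform the target-site prediction."""
    spectra = np.fft.rfft(records, axis=1)       # shape (n_stations, n_freq)
    target_spec = np.empty(spectra.shape[1], dtype=complex)
    kernel = 1.0 * RBF(length_scale=5.0) + WhiteKernel(noise_level=1e-3)
    for k in range(spectra.shape[1]):            # a real use would cap the band
        parts = []
        for y in (spectra[:, k].real, spectra[:, k].imag):
            gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
            gp.fit(station_xy, y)
            parts.append(gp.predict(target_xy.reshape(1, -1))[0])
        target_spec[k] = parts[0] + 1j * parts[1]
    return np.fft.irfft(target_spec, n=records.shape[1])

# Toy example: 20 stations with 512-sample records; predict at the centroid.
rng = np.random.default_rng(3)
xy = rng.uniform(0, 50, size=(20, 2))            # station coordinates, km
recs = rng.standard_normal((20, 512))
print(predict_motion(xy, recs, xy.mean(axis=0)).shape)  # (512,)
```

Fitting an independent GP per frequency bin keeps the regression low-dimensional; the cost is that any coupling between nearby frequencies is ignored, which is one reason the approach works best in the band-limited ranges the abstract mentions.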

