An Invariance Property of Predictors in Kernel-Induced Hypothesis Spaces

2006 ◽  
Vol 18 (4) ◽  
pp. 749-759 ◽  
Author(s):  
Nicola Ancona ◽  
Sebastiano Stramaglia

We consider kernel-based learning methods for regression and analyze what happens to the risk minimizer when new variables, statistically independent of the input and target variables, are added to the set of input variables. This problem arises, for example, in the detection of causal relations between two time series. We find that the risk minimizer remains unchanged if we constrain the risk minimization to hypothesis spaces induced by suitable kernel functions. We show that not all kernel-induced hypothesis spaces enjoy this property. We present sufficient conditions ensuring that the risk minimizer does not change, and show that they hold for inhomogeneous polynomial and Gaussian radial basis function kernels. We also provide examples of kernel-induced hypothesis spaces whose risk minimizer changes when independent variables are added as input.
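A minimal numpy sketch of one structural fact behind this invariance for the Gaussian RBF kernel: the kernel evaluated on the augmented input [x, z] factorizes into the kernel on the original input times the kernel on the added variable. The specific points and bandwidth below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def rbf_kernel(a, b, sigma=1.0):
    """Gaussian RBF kernel k(a, b) = exp(-||a - b||^2 / (2 sigma^2))."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(np.exp(-np.sum((a - b) ** 2) / (2.0 * sigma ** 2)))

# Original inputs x, x' and extra variables z, z' (independent of the target).
x, xp = np.array([0.3, -1.2]), np.array([0.7, 0.4])
z, zp = np.array([2.0]), np.array([-0.5])

# Kernel on the augmented inputs [x, z] and [x', z'] ...
k_joint = rbf_kernel(np.concatenate([x, z]), np.concatenate([xp, zp]))
# ... equals the product of the kernels on the original and added coordinates.
k_factored = rbf_kernel(x, xp) * rbf_kernel(z, zp)
```

This product structure is what lets the contribution of independent added variables separate from the part of the predictor that depends on the original inputs.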

Author(s):  
Huageng Luo ◽  
Liping Wang ◽  
Don Beeson ◽  
Gene Wiggs

In spite of exponential growth in computing power, the enormous computational cost of complex, large-scale engineering design problems makes it impractical to rely exclusively on high-fidelity simulation codes. There has therefore been increasing interest in fast-executing meta-models that alleviate the computational cost of slow, expensive simulation models, especially for optimization and probabilistic design. However, many state-of-the-art meta-modeling techniques, such as Radial Basis Functions (RBF), Gaussian Processes (GP), and Kriging, make good predictions only when interpolating; their extrapolation ability is limited because the models are mathematically constructed for interpolation. Although Multivariate Adaptive Regression Splines (MARS) and Artificial Neural Networks (ANN) have been tried on extrapolation (forecasting) problems, the results do not always meet accuracy requirements. The autoregressive moving-average (ARMA) model is a popular tool for time-series modeling and forecasting, and has been widely used in engineering applications in which all inputs and outputs are time dependent. Many researchers have tried to extend time-series ARMA modeling to so-called spatial ARMA or time-space ARMA modeling, but time-space ARMA modeling requires extensive computation for grid-data generation as well as for model building, particularly in high-dimensional problems. In this paper, a pseudo-ARMA approach is proposed to strengthen meta-model extrapolation capability. Each input is randomly sampled at a given mean value and distribution range to form a pseudo-time series, and the output variables are evaluated from these inputs to form output pseudo-time series. The pseudo-ARMA model is then built from the pseudo input and output time series.
Using the constructed pseudo-ARMA model and new input variables generated with extended distribution parameters, such as distribution means and ranges, the output variables can be evaluated to achieve extrapolation. Several numerical examples demonstrate the proposed approach, and the results are compared with Radial Basis Function (RBF) meta-modeling for both interpolation and extrapolation.
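The sampling-and-fitting steps above can be sketched in numpy. This is not the authors' implementation: the "simulation" here is a cheap analytic stand-in, and the model is a pared-down AR(p) fit by least squares rather than full ARMA estimation; the lag order and sample counts are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical expensive simulation, stood in for by a cheap analytic function.
def simulation(x):
    return np.sin(x) + 0.1 * x ** 2

# Step 1: sample the input around a given mean/range to form a pseudo-time series.
mean, half_range, n = 1.0, 0.5, 200
x_series = rng.uniform(mean - half_range, mean + half_range, size=n)
y_series = simulation(x_series)  # output pseudo-time series

# Step 2: fit an AR(p) model y_t = c + sum_i a_i * y_{t-i} by least squares.
p = 3
lags = np.column_stack([y_series[p - i - 1 : n - i - 1] for i in range(p)])
design = np.column_stack([np.ones(n - p), lags])
coef, *_ = np.linalg.lstsq(design, y_series[p:], rcond=None)

# Step 3: one-step-ahead forecast from the last p observed outputs.
hist = y_series[-p:][::-1]  # most recent value first, matching lag order
forecast = coef[0] + coef[1:] @ hist
```

To extrapolate in the paper's sense, one would then regenerate `x_series` with shifted distribution means or widened ranges and evaluate the fitted model on the resulting pseudo series.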


2012 ◽  
Vol 182-183 ◽  
pp. 1358-1361
Author(s):  
Le Xiao ◽  
Min Peng Hu

Because grain-depot electricity consumption forms a nonlinear time series, this article introduces an electricity-demand prediction model based on a Radial Basis Function Neural Network, and builds and tests the model on the historical electricity consumption of a typical grain depot. Simulation results show that the model achieves good forecasting accuracy for grain-depot electricity consumption.
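A minimal sketch of an RBF-network forecaster of the kind described: a Gaussian hidden layer on fixed centers with linear output weights fitted by regularized least squares, applied to a lag-embedded series. The synthetic load series, lag order, and center count are illustrative assumptions, not the article's data or settings.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for monthly depot electricity consumption (seasonal + noise).
t = np.arange(60)
load = 100 + 20 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 2, t.size)

# Lag-embedding: predict load[t] from the previous `p` values.
p = 4
X = np.column_stack([load[p - i - 1 : -i - 1] for i in range(p)])
y = load[p:]

# RBF network: Gaussian hidden units on centers drawn from the training data.
centers = X[rng.choice(len(X), size=10, replace=False)]
width = np.median(np.linalg.norm(X[:, None] - centers[None], axis=2))

def hidden(Z):
    d2 = np.sum((Z[:, None] - centers[None]) ** 2, axis=2)
    return np.exp(-d2 / (2 * width ** 2))

# Linear output weights by regularized least squares.
H = hidden(X)
w = np.linalg.solve(H.T @ H + 1e-6 * np.eye(H.shape[1]), H.T @ y)

pred = hidden(X) @ w
rmse = np.sqrt(np.mean((pred - y) ** 2))
```

In practice the centers are often chosen by clustering and the width tuned by cross-validation; the median-distance heuristic here is just a simple default.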

