Chiller Load Forecasting Using Hyper-Gaussian Nets

Energies ◽  
2021 ◽  
Vol 14 (12) ◽  
pp. 3479
Author(s):  
Manuel R. Arahal ◽  
Manuel G. Ortega ◽  
Manuel G. Satué

Energy load forecasting for optimization of chiller operation is a topic that has received increasing attention in recent years. From an engineering perspective, the methodology for designing and deploying a forecasting system for chiller operation should take into account several issues regarding prediction horizon, available data, selection of variables, model selection and adaptation. In this paper these issues are examined to develop a neural forecaster. The method combines previous ideas such as basis expansions and local models. In particular, hyper-Gaussians are proposed to provide spatial support (in input space) to models that can use auto-regressive, exogenous and past-error variables, thus constituting a particular case of NARMAX modelling. Tests using real data from several locations around the world show the performance of the proposal with respect to these objectives and allow a comparison with other approaches.
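As a rough illustration of the kind of model the abstract describes, the sketch below blends local linear ARX-style models, each weighted by a hyper-Gaussian basis function (a Gaussian-like bump with exponent p > 2, giving a flatter top and sharper decay). The exponent choice, the normalization and all names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def hyper_gaussian(x, center, width, p=4):
    """Hyper-Gaussian activation: like a Gaussian but with exponent p > 2,
    so it is flatter near the center and decays more sharply. It provides
    spatial support (in input space) for one local model."""
    r = np.linalg.norm(x - center) / width
    return np.exp(-r**p)

def local_model_forecast(x, centers, widths, thetas, p=4):
    """Blend local linear (ARX-style) models, each weighted by its
    hyper-Gaussian support. x is the regressor vector, which could hold
    lagged loads, exogenous inputs and past errors (NARMAX-style)."""
    phi = np.array([hyper_gaussian(x, c, w, p) for c, w in zip(centers, widths)])
    phi /= phi.sum() + 1e-12          # normalized partition of unity
    xa = np.concatenate(([1.0], x))   # affine local model: theta @ [1, x]
    local = np.array([th @ xa for th in thetas])
    return float(phi @ local)
```

At a center, that center's model dominates; between centers, the forecast interpolates smoothly among the neighboring local models.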

Energies ◽  
2021 ◽  
Vol 14 (21) ◽  
pp. 7128
Author(s):  
Leonard Burg ◽  
Gonca Gürses-Tran ◽  
Reinhard Madlener ◽  
Antonello Monti

Power system operators are confronted with a multitude of new forecasting tasks to ensure constant security of supply despite the decreasing number of fully controllable energy producers. With this paper, we aim to facilitate the selection of suitable forecasting approaches for the load forecasting problem. First, we provide a classification of load forecasting cases in two dimensions: temporal and hierarchical. Then, we identify typical features and models for forecasting and compare their applicability in a structured manner for six previously defined cases. These models are compared against real data in terms of their computational effort and accuracy during development and testing. From this comparative analysis, we derive a generic guide for the selection of the best prediction models and features per case.
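The resulting guide can be pictured as a lookup over the two classification dimensions. None of the case labels or model names below come from the paper; they are placeholders showing the shape of such a guide, not its actual recommendations.

```python
# Hypothetical sketch: index forecasting cases by their (temporal,
# hierarchical) classification and map each case to candidate model
# families. The paper's six cases and its model rankings would replace
# these placeholder entries.
CASES = {
    ("short-term", "system level"):  ["placeholder model A", "placeholder model B"],
    ("short-term", "grid level"):    ["placeholder model C"],
    ("long-term",  "system level"):  ["placeholder model D"],
}

def candidate_models(temporal, hierarchical):
    """Return the candidate model families for a forecasting case,
    or an empty list if the case is not covered by the guide."""
    return CASES.get((temporal, hierarchical), [])
```

The point of the structure is that a practitioner classifies the task first, then reads off a short list of models worth benchmarking.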


2015 ◽  
Vol 26 (2) ◽  
pp. 997-1020
Author(s):  
Marcelo Azevedo Costa ◽  
Thiago de Souza Rodrigues ◽  
André Gabriel FC da Costa ◽  
René Natowicz ◽  
Antônio Pádua Braga

This work proposes a sequential methodology for selecting variables in classification problems in which the number of predictors is much larger than the sample size. The methodology includes a Monte Carlo permutation procedure that conditionally tests the null hypothesis of no association between the outcomes and the available predictors. To improve computational efficiency, we propose a new parametric distribution, the Truncated and Zero-Inflated Gumbel distribution. The final application is to find compact classification models with improved performance for genomic data. Results using real data sets show that the proposed methodology selects compact models with optimized classification performance.
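The core Monte Carlo permutation idea can be sketched as follows. The test statistic here (absolute Pearson correlation) and the sequential marginal screening are illustrative choices; the paper additionally fits its Truncated and Zero-Inflated Gumbel distribution to the permutation null to avoid running the full permutation loop for every predictor.

```python
import numpy as np

def permutation_pvalue(x, y, n_perm=1000, rng=None):
    """Monte Carlo permutation test of association between one predictor x
    and the outcome y. Shuffling y breaks any real association, so the
    observed statistic is compared against the shuffled null."""
    rng = rng if isinstance(rng, np.random.Generator) else np.random.default_rng(rng)
    obs = abs(np.corrcoef(x, y)[0, 1])
    hits = sum(abs(np.corrcoef(x, rng.permutation(y))[0, 1]) >= obs
               for _ in range(n_perm))
    return (hits + 1) / (n_perm + 1)  # add-one correction avoids p = 0

def select_variables(X, y, alpha=0.05, n_perm=200):
    """Keep the predictors whose permutation p-value falls below alpha."""
    return [j for j in range(X.shape[1])
            if permutation_pvalue(X[:, j], y, n_perm) < alpha]
```

For large p this loop is exactly what the parametric Gumbel approximation is meant to shortcut: a few permutations suffice to fit the null distribution, from which far smaller p-values can be read off.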


Energies ◽  
2021 ◽  
Vol 14 (2) ◽  
pp. 345
Author(s):  
Janusz Sowinski

Forecasting of daily loads is crucial for Distribution System Operators (DSO). Contemporary short-term load forecasting (STLF) models are well recognized and described in numerous articles. One such model is the Adaptive Neuro-Fuzzy Inference System (ANFIS), which requires a large set of historical data. A well-recognized issue both for the ANFIS and other daily load forecasting models is the selection of exogenous variables. This article attempts to verify the statement that an appropriate selection of exogenous variables of the ANFIS model affects the accuracy of the forecasts obtained ex post. This proposal is something of a return to the roots of the Polish econometrics school and the use of the Hellwig method to select exogenous variables of the ANFIS model. In this context, it is also worth asking whether the use of the Hellwig method in conjunction with the ANFIS model makes it possible to investigate the significance of weather variables for the daily load profile in an energy company. The functioning of the ANFIS model was tested for some consumers exhibiting high load randomness, located within the area supervised by the examined power company. Load curves featuring seasonal variability and weekly similarity are suitable for forecasting with the ANFIS model. The Hellwig method has been used to select exogenous variables in the ANFIS model. The optimal set of variables has been determined on the basis of the integral indicators of information capacity H. Including an additional variable, i.e., air temperature, has also been taken into consideration. Some results of ex post daily load forecasts are presented.
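Hellwig's method itself is simple to state: for a candidate subset S of predictors, each member j contributes an individual capacity h_j = r_{0j}² / (1 + Σ_{l∈S, l≠j} |r_{jl}|), where r_{0j} is its correlation with the dependent variable and r_{jl} its correlation with the other members; the integral capacity is H_S = Σ h_j, and the subset maximizing H is chosen. A minimal brute-force sketch (exponential in the number of candidates, so suitable only for small candidate sets such as a handful of weather variables):

```python
import numpy as np
from itertools import combinations

def hellwig(R0, R):
    """Hellwig's method of optimal predictor choice.
    R0: correlations of each candidate predictor with the dependent
        variable; R: correlation matrix among the candidate predictors.
    Returns the subset (as index tuple) with the largest integral
    information capacity H, together with that H."""
    m = len(R0)
    best_H, best_S = -1.0, ()
    for k in range(1, m + 1):
        for S in combinations(range(m), k):
            # individual capacity: strong link to the target, penalized
            # by redundancy with the other predictors in the subset
            H = sum(R0[j] ** 2 / (1 + sum(abs(R[j, l]) for l in S if l != j))
                    for j in S)
            if H > best_H:
                best_H, best_S = H, S
    return best_S, best_H
```

The denominator penalizes redundant predictors, which is exactly why a highly correlated pair (say, two temperature readings) tends to enter the model only once.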


2013 ◽  
Vol 2013 ◽  
pp. 1-11 ◽  
Author(s):  
Jia-Rou Liu ◽  
Po-Hsiu Kuo ◽  
Hung Hung

Large-p-small-n datasets are commonly encountered in modern biomedical studies. To detect the difference between two groups, conventional methods fail to apply due to the instability in estimating variances in the t-test and a high proportion of tied values in AUC (area under the receiver operating characteristic curve) estimates. The significance analysis of microarrays (SAM) may also not be satisfactory, since its performance is sensitive to the tuning parameter, whose selection is not straightforward. In this work, we propose a robust rerank approach to overcome the above-mentioned difficulties. In particular, we obtain a rank-based statistic for each feature based on the concept of “rank-over-variable.” Techniques of “random subset” and “rerank” are then iteratively applied to rank features, and the leading features are selected for further study. The proposed rerank approach is especially applicable to large-p-small-n datasets. Moreover, it is insensitive to the selection of tuning parameters, which is an appealing property for practical implementation. Simulation studies and real data analysis of pooling-based genome-wide association (GWA) studies demonstrate the usefulness of our method.
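The random-subset/rerank loop can be sketched as below. The specific rank-based statistic (gap between group mean ranks, ties ignored) and the aggregation rule are illustrative stand-ins for the paper's exact definitions.

```python
import numpy as np

def rank_statistic(x, labels):
    """Illustrative 'rank-over-variable' statistic: rank the pooled values
    of one feature and compare the mean rank of the two groups. A large
    gap suggests the feature separates the groups."""
    ranks = x.argsort().argsort() + 1  # ranks 1..n (ties ignored for brevity)
    return abs(ranks[labels == 0].mean() - ranks[labels == 1].mean())

def rerank_features(X, labels, n_iter=50, subset_frac=0.5, rng=None):
    """Random-subset rerank: repeatedly score every feature on a random
    subset of the samples, convert scores to ranks, and accumulate the
    ranks. Features with the best accumulated rank come out first, which
    makes the ordering robust to a few atypical samples."""
    rng = np.random.default_rng(rng)
    n, p = X.shape
    acc = np.zeros(p)
    for _ in range(n_iter):
        idx = rng.choice(n, size=max(2, int(subset_frac * n)), replace=False)
        stats = np.array([rank_statistic(X[idx, j], labels[idx])
                          for j in range(p)])
        acc += (-stats).argsort().argsort()  # rank 0 = strongest feature
    return acc.argsort()  # feature indices, strongest first
```

Because only ranks are accumulated, no variance estimate is needed and the procedure has no sensitive tuning parameter beyond the subset size.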


Complexity ◽  
2018 ◽  
Vol 2018 ◽  
pp. 1-16 ◽  
Author(s):  
Yiwen Zhang ◽  
Yuanyuan Zhou ◽  
Xing Guo ◽  
Jintao Wu ◽  
Qiang He ◽  
...  

The K-means algorithm is one of the ten classic algorithms in the area of data mining and has been studied by researchers in numerous fields for a long time. However, the value of the clustering number k in the K-means algorithm is not always easy to determine, and the selection of the initial centers is vulnerable to outliers. This paper proposes an improved K-means clustering algorithm called the covering K-means algorithm (C-K-means). The C-K-means algorithm can not only acquire efficient and accurate clustering results but also self-adaptively provide a reasonable number of clusters based on the data features. It includes two phases: the initialization by the covering algorithm (CA) and the Lloyd iteration of K-means. The first phase executes the CA, which self-organizes and recognizes the number of clusters k based on the similarities in the data; it requires neither the number of clusters to be prespecified nor the initial centers to be manually selected. Therefore, it has a “blind” feature, that is, k is not preselected. The second phase performs the Lloyd iteration based on the results of the first phase. The C-K-means algorithm combines the advantages of CA and K-means. Experiments are carried out on the Spark platform, and the results verify the good scalability of the C-K-means algorithm. This algorithm can effectively solve the problem of large-scale data clustering. Extensive experiments on real data sets show that the accuracy and efficiency of the C-K-means algorithm outperform those of the existing algorithms under both sequential and parallel conditions.
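The two-phase structure can be sketched as follows. The greedy covering step below is a minimal stand-in for the paper's CA (the real CA is more elaborate, and the covering radius is an assumed parameter here), but it shows how k can emerge from the data before a standard Lloyd iteration refines the centers.

```python
import numpy as np

def covering_init(X, radius):
    """Phase 1 (sketch of the covering idea): greedily pick an uncovered
    point as a center and mark everything within `radius` as covered.
    The number of centers that emerges plays the role of k, so k needs
    no prespecification."""
    uncovered = np.ones(len(X), dtype=bool)
    centers = []
    while uncovered.any():
        i = np.flatnonzero(uncovered)[0]
        centers.append(X[i])
        d = np.linalg.norm(X - X[i], axis=1)
        uncovered &= d > radius
    return np.array(centers)

def lloyd(X, centers, n_iter=100):
    """Phase 2: standard Lloyd iteration started from the covering-phase
    centers (assumes no cluster empties out, which the good starting
    centers make unlikely)."""
    for _ in range(n_iter):
        d = np.linalg.norm(X[:, None] - centers[None], axis=2)
        assign = d.argmin(axis=1)
        new = np.array([X[assign == c].mean(axis=0)
                        for c in range(len(centers))])
        if np.allclose(new, centers):
            break
        centers = new
    return centers, assign
```

Because phase 1 hands phase 2 both k and well-spread initial centers, the Lloyd iteration starts far from the pathological initializations that plain K-means can suffer from.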


Author(s):  
Willi Sauerbrei ◽  
Aris Perperoglou ◽  
Matthias Schmid ◽  
Michal Abrahamowicz ◽  
...  
