A Comparative Study of Genetic Algorithms and Gradient Methods for RM12 Turbofan Engine Diagnostics and Performance Estimation

Author(s):  
Tomas Grönstedt ◽
Markus Wallin

Recent work on gas turbine diagnostics based on optimisation techniques advocates two different approaches: 1) stochastic optimisation, including Genetic Algorithm techniques, for its robustness when optimising objective functions with many local optima, and 2) gradient-based methods, mainly for their computational efficiency. For smooth, single-optimum functions, gradient methods are known to provide superior numerical performance. This paper addresses the key issue for method selection, i.e. whether multiple local optima may occur when the optimisation approach is applied to real engine testing. Two performance test data sets for the RM12 low bypass ratio turbofan engine, powering the Swedish fighter Gripen, have been analysed. One data set was recorded during performance testing of a highly degraded engine that had been subjected to Accelerated Mission Testing (AMT) cycles corresponding to more than 4000 hours of run time. The other data set was recorded for a development engine with less than 200 hours of operation. The search for multiple optima was performed starting from more than 100 extreme points. Not a single case of multi-modality was encountered, i.e. one unique solution was consistently obtained for each of the two data sets. The RM12 engine cycle is typical of a modern fighter engine, implying that the obtained results can be transferred to, at least, most low bypass ratio turbofan engines. The paper goes on to describe the numerical difficulties that had to be resolved to obtain efficient and robust performance from the gradient solvers. Ill-conditioning and noise may, as illustrated on a model problem, introduce local optima with no correspondence in the gas turbine physics. Numerical methods exploiting the special problem structure represented by a non-linear least squares formulation are given special attention. Finally, a mixed norm allowing for both robustness and numerical efficiency is suggested.
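The non-linear least squares structure and the robustness benefit of a mixed norm can be illustrated with a small sketch. This is not the paper's RM12 model: the toy model function, the synthetic data, and the use of SciPy's Huber loss (quadratic near zero, linear in the tails) as a stand-in for the suggested mixed norm are all illustrative assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

# Toy "engine model": predicted measurement as a function of two
# health parameters (a hypothetical stand-in, not the RM12 model).
def model(theta, x):
    a, b = theta
    return a * np.exp(-b * x)

x = np.linspace(0.0, 2.0, 20)
y = model([2.0, 1.5], x)   # exact data for true parameters (2.0, 1.5)
y[5] += 3.0                # one gross outlier, e.g. a faulty sensor

def residuals(theta):
    return model(theta, x) - y

# A pure L2 fit is pulled toward the outlier ...
fit_l2 = least_squares(residuals, x0=[1.0, 1.0])
# ... while the Huber loss behaves like a mixed L2/L1 norm:
# quadratic for small residuals, linear for large ones.
fit_huber = least_squares(residuals, x0=[1.0, 1.0], loss='huber', f_scale=0.1)
print(fit_huber.x)
```

The `f_scale` parameter sets the residual size at which the loss switches from quadratic to linear, which is what trades numerical efficiency against robustness here.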

2005 ◽  
Vol 32 (5) ◽  
pp. 789-795 ◽  
Author(s):  
Jessica Manness ◽  
Jay Doering

Field performance testing of hydraulic turbines is undertaken to define the head-power-discharge relationship that identifies the peak operating point of the turbine. This relationship is essential for the efficient operation of a hydraulic turbine. Unfortunately, in some cases it is not feasible to field test turbines because of time, budgetary, or other constraints. Gordon (2001) proposed a method of predicting and (or) simulating the performance curve for several types of turbines. However, a limited data set was available for the development of his model for certain types of turbines. Moreover, his model did not include a precise method of developing performance curves for rerunnered turbines. Manitoba Hydro operates a large network of hydroelectric turbines, which are subject to periodic field performance testing. This provided a large data set with which to refine the model proposed by Gordon (2001). Furthermore, since these data include rerunnered units, they provide an opportunity to refine the modelled effects of rerunnering. Analysis shows that the accuracy of the refined model is within 2% of the performance test results for an "old" turbine, while for a newer turbine or a rerunnered turbine the error is within 1%. For both an old turbine and a rerunnered turbine, this indicates an accuracy improvement of 3% over the original method proposed by Gordon (2001).
Key words: hydraulic turbine, efficiency, simulation modeling


Geophysics ◽  
2017 ◽  
Vol 82 (1) ◽  
pp. G1-G21 ◽  
Author(s):  
William J. Titus ◽  
Sarah J. Titus ◽  
Joshua R. Davis

We apply a Bayesian Markov chain Monte Carlo formalism to the gravity inversion of a single localized 2D subsurface object. The object is modeled as a polygon described by five parameters: the number of vertices, a density contrast, a shape-limiting factor, and the width and depth of an encompassing container. We first constrain these parameters with an interactive forward model and explicit geologic information. Then, we generate an approximate probability distribution of polygons for a given set of parameter values. From these, we determine statistical distributions such as the variance between the observed and model fields, the area, the center of area, and the occupancy probability (the probability that a spatial point lies within the subsurface object). We introduce replica exchange to mitigate trapping in local optima and to compute model probabilities and their uncertainties. We apply our techniques to synthetic data sets and a natural data set collected across the Rio Grande Gorge Bridge in New Mexico. On the basis of our examples, we find that the occupancy probability is useful in visualizing the results, giving a “hazy” cross section of the object. We also find that the role of the container is important in making predictions about the subsurface object.
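Replica exchange (parallel tempering), used above to escape local optima, can be sketched on a toy bimodal target. The target density, temperature ladder, and proposal scales below are illustrative assumptions, not the paper's polygon posterior: chains at higher temperatures sample a flattened target and periodically swap states with colder chains.

```python
import numpy as np

rng = np.random.default_rng(0)

# Bimodal target (illustrative stand-in for a multimodal posterior):
# log p(x) up to a constant, with modes near -4 and +4.
def log_p(x):
    return np.logaddexp(-0.5 * (x - 4.0) ** 2, -0.5 * (x + 4.0) ** 2)

temps = [1.0, 4.0, 16.0]       # temperature ladder
chains = [0.0 for _ in temps]  # one walker per temperature
samples = []

for step in range(20000):
    # Metropolis update within each tempered chain (target p^(1/T)).
    for i, T in enumerate(temps):
        prop = chains[i] + rng.normal(scale=1.0 + T)
        if np.log(rng.random()) < (log_p(prop) - log_p(chains[i])) / T:
            chains[i] = prop
    # Replica exchange: propose swapping a random neighbouring pair.
    i = rng.integers(len(temps) - 1)
    d = (1.0 / temps[i] - 1.0 / temps[i + 1]) * (log_p(chains[i + 1]) - log_p(chains[i]))
    if np.log(rng.random()) < d:
        chains[i], chains[i + 1] = chains[i + 1], chains[i]
    samples.append(chains[0])  # keep only the cold chain

samples = np.array(samples[5000:])  # discard burn-in
print((samples > 0).mean())         # fraction in the positive mode
```

A single cold chain would almost never cross the gap between the modes; the hot chains cross it freely and the swap moves hand those crossings down the ladder.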


2013 ◽  
Vol 411-414 ◽  
pp. 1884-1893
Author(s):  
Yong Chun Cao ◽  
Ya Bin Shao ◽  
Shuang Liang Tian ◽  
Zheng Qi Cai

Because many GA-based clustering algorithms suffer from degeneracy and easily fall into local optima, a novel dynamic genetic algorithm for clustering problems (DGA) is proposed. The algorithm adopts variable-length coding to represent individuals and performs the parallel crossover operation within subpopulations of individuals of the same length, which allows DGA to explore the search space more effectively and to automatically obtain the proper number of clusters and the proper partition from a given data set. The algorithm also uses a dynamic crossover probability and an adaptive mutation probability, which prevent the dynamic clustering algorithm from getting stuck at a local optimum. Clustering results from experiments on three artificial data sets and two real-life data sets show that the DGA algorithm delivers better performance and higher accuracy on clustering problems.
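A minimal GA-for-clustering sketch follows. For brevity it uses a fixed number of clusters and fixed crossover/mutation rates; the paper's variable-length coding, dynamic crossover probability, and adaptive mutation are not reproduced, and the toy data and all parameter settings are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two well-separated blobs of 2-D points (toy data).
data = np.vstack([rng.normal(0.0, 0.3, (30, 2)),
                  rng.normal(5.0, 0.3, (30, 2))])

def sse(centers):
    # Fitness: within-cluster sum of squared errors (lower is better).
    d = np.linalg.norm(data[:, None, :] - centers[None, :, :], axis=2)
    return (d.min(axis=1) ** 2).sum()

k, pop_size = 2, 20
# Each individual encodes k cluster centers, seeded from data points.
pop = [data[rng.choice(len(data), k, replace=False)] for _ in range(pop_size)]

for gen in range(50):
    pop.sort(key=sse)
    elite = pop[: pop_size // 2]          # elitism: keep the better half
    children = []
    for _ in range(pop_size - len(elite)):
        a, b = rng.choice(len(elite), 2, replace=False)
        # Uniform crossover per center, then Gaussian mutation.
        child = np.where(rng.random((k, 1)) < 0.5, elite[a], elite[b])
        child = child + rng.normal(0.0, 0.1, child.shape)
        children.append(child)
    pop = elite + children

best = min(pop, key=sse)
print(round(sse(best), 1))
```

The elitism step guarantees the best individual is never lost, so the fitness of the best solution is non-increasing across generations.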


Nowadays, a huge amount of data is generated due to the growth of technology. Different tools are used to view this massive amount of data, and these tools contain different data mining techniques that can be applied to the obtained data sets. Classification is required to extract useful information or to predict outcomes from these enormous amounts of data, and different classification algorithms exist for this purpose. In this paper, we compare the Naive Bayes, K*, and random forest classification algorithms using the Weka tool. To analyze the performance of these three algorithms, we considered three data sets: diabetes, supermarket, and weather. The analysis is based on the confusion matrix and different performance measures such as RMSE, MAE, and ROC.
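Weka is a Java toolkit; a rough scikit-learn analogue of such a comparison might look like the sketch below. The k-nearest-neighbours classifier stands in for the Weka-specific K* (both are instance-based), and a bundled data set stands in for the diabetes/supermarket/weather sets; these substitutions are assumptions, not the paper's setup.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier

# Bundled stand-in data set (binary classification).
X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    "naive_bayes": GaussianNB(),
    "knn": KNeighborsClassifier(),   # instance-based stand-in for K*
    "random_forest": RandomForestClassifier(random_state=0),
}
aucs = {}
for name, m in models.items():
    m.fit(X_tr, y_tr)
    aucs[name] = roc_auc_score(y_te, m.predict_proba(X_te)[:, 1])
    # Confusion matrix: rows are true classes, columns predicted classes.
    print(name, confusion_matrix(y_te, m.predict(X_te)).tolist(),
          round(aucs[name], 3))
```

Held-out evaluation with a fixed `random_state` makes the comparison repeatable, which is the same role Weka's seeded cross-validation plays.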


2021 ◽  
pp. 1-26
Author(s):  
Richard C. Gerum ◽  
Achim Schilling

Up to now, modern machine learning (ML) has been based on approximating big data sets with high-dimensional functions, taking advantage of huge computational resources. We show that biologically inspired neuron models such as the leaky integrate-and-fire (LIF) neuron provide novel and efficient ways of information processing. They can be integrated into machine learning models and are a potential target for improving ML performance. To this end, we derived simple update rules for LIF units to numerically integrate the differential equations. We apply a surrogate gradient approach to train the LIF units via backpropagation. We demonstrate that tuning the leak term of the LIF neurons can be used to run the neurons in different operating modes, such as simple signal integrators or coincidence detectors. Furthermore, we show that the constant surrogate gradient, in combination with tuning the leak term of the LIF units, can be used to achieve the learning dynamics of more complex surrogate gradients. To prove the validity of our method, we applied it to established image data sets (the Oxford 102 flower data set, MNIST), implemented various network architectures, used several input data encodings, and demonstrated that the method is suitable to achieve state-of-the-art classification performance. We provide our method, as well as further surrogate gradient methods for training spiking neural networks via backpropagation, as an open-source KERAS package to make it available to the neuroscience and machine learning community. To increase the interpretability of the underlying effects, and thus take a small step toward opening the black box of machine learning, we provide interactive illustrations with the possibility of systematically monitoring the effects of parameter changes on the learning characteristics.
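The integrator-versus-coincidence-detector role of the leak term can be sketched with a minimal discrete-time LIF update. This is a generic textbook formulation with an assumed threshold of 1 and a hard reset, not the paper's exact derivation:

```python
# Discrete-time leaky integrate-and-fire update (one common form):
#   v[t] = leak * v[t-1] + I[t];  spike and reset when v crosses 1.
def lif(inputs, leak):
    v, spikes = 0.0, []
    for I in inputs:
        v = leak * v + I
        if v >= 1.0:
            spikes.append(1)
            v = 0.0          # hard reset after a spike
        else:
            spikes.append(0)
    return spikes

weak = [0.3] * 10
# leak near 1 -> integrator: weak inputs accumulate until threshold.
print(sum(lif(weak, leak=0.95)))
# leak near 0 -> coincidence detector: the same weak inputs never
# accumulate, because v saturates at I / (1 - leak) = 0.33 < 1.
print(sum(lif(weak, leak=0.1)))
```

The same subthreshold input stream thus produces spikes or silence depending only on the leak, which is the operating-mode effect the abstract describes.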


Author(s):  
M. Zwingenberg ◽  
F.-K. Benra ◽  
K. Werner

The performance data of most Siemens heavy-duty gas turbines which have been built in the last 20 years are stored in so-called typefiles. These typefiles contain the description of the thermodynamic operating behavior for each gas turbine type using several component maps, e.g., for the compressor, the turbine, and the combustion chamber. In addition to all available high-accuracy performance test results, modern IT technology enables the user to handle a tremendous volume of measured data via remote access. This allows the user to determine and to guarantee the performance of modifications and upgrades with sufficient precision, even for older gas turbine types. The method for automated generation of typefiles based on the entire volume of available data, and its corresponding Matlab® based software solution, are the focus of this contribution. Although this method offers a very promising source of data from various sites, the obtainable data sets usually do not cover the entire temperature and rotational speed range that is necessary to create a map suitable for all requisite operating conditions. Thus, theoretically based additional information, combined with special extrapolation methods, is necessary.


2017 ◽  
Vol 26 (1) ◽  
pp. 153-168 ◽  
Author(s):  
Vijay Kumar ◽  
Jitender Kumar Chhabra ◽  
Dinesh Kumar

The main problem of classical clustering techniques is that they are easily trapped in local optima. This paper attempts to solve this problem by proposing a grey wolf algorithm (GWA)-based clustering technique, called GWA clustering (GWAC). The search capability of GWA is used to find the optimal cluster centers in the given feature space, with an agent representation used to encode the cluster centers. The proposed GWAC technique is tested on both artificial and real-life data sets and compared to six well-known metaheuristic-based clustering techniques. The computational results are encouraging and demonstrate that GWAC provides better values in terms of precision, recall, G-measure, and intracluster distances. GWAC is further applied to a gene expression data set and its performance is compared to other techniques. Experimental results reveal the efficiency of GWAC over the other techniques.
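The GWAC search can be sketched with the standard grey wolf optimizer update applied to encoded cluster centers. The toy one-dimensional data, k = 2, the sorting of each agent's centers, and all parameter settings are illustrative assumptions; the paper's exact encoding and criterion may differ.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy 1-D data: two separated clusters; an agent encodes k = 2 centers.
data = np.concatenate([rng.normal(0.0, 0.3, 30), rng.normal(6.0, 0.3, 30)])

def fitness(agent):
    # Sum of squared distances to the nearest center (intracluster criterion).
    d = (data[:, None] - agent[None, :]) ** 2
    return d.min(axis=1).sum()

wolves = np.sort(rng.uniform(-2.0, 8.0, (15, 2)), axis=1)  # 15 search agents
best, best_f = None, np.inf
for t in range(100):
    fits = np.array([fitness(w) for w in wolves])
    order = np.argsort(fits)
    if fits[order[0]] < best_f:
        best, best_f = wolves[order[0]].copy(), fits[order[0]]
    alpha, beta, delta = wolves[order[:3]]   # the three best wolves lead
    a = 2.0 * (1.0 - t / 100.0)              # control parameter: 2 -> 0
    new = []
    for w in wolves:
        guided = []
        for leader in (alpha, beta, delta):
            A = a * (2.0 * rng.random(2) - 1.0)
            C = 2.0 * rng.random(2)
            guided.append(leader - A * np.abs(C * leader - w))
        # Keep each agent's centers sorted to remove label-swap symmetry.
        new.append(np.sort(np.mean(guided, axis=0)))
    wolves = np.array(new)

print(round(best_f, 2))
```

Early iterations (large `a`) allow steps away from the leaders, i.e. exploration; as `a` shrinks, the pack contracts onto the best-known centers, which is how GWA trades global search against refinement.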


Author(s):  
Vahid Noei Aghaei ◽  
Hiwa Khaledi ◽  
Mohsen Reza Soltani

Performance testing of gas turbine packages is becoming increasingly common to assure that the turbine output power and efficiency meet the expected values during the turbine life cycle. In conventional Performance Test Analysis (PTA), field measurements and calculations are carried out on the basis of standard codes to find the whole-engine performance parameters (i.e., power and efficiency) at test conditions and to compare them with the expected values. Recently, with the development of Gas Path Analysis (GPA) and diagnostic techniques for investigating the gas turbine health state, performance test capabilities can be improved by using these analyses to examine the measured test data further and to determine the deviation of gas turbine component health parameters from the “new and clean” health state during engine operation. By determining these deviations, the potential for engine improvement at the component level can be identified, and action-oriented recommendations are subsequently reported as guidelines for the overhaul. In the case of a performance test after the overhaul, the main result of applying GPA within PTA is verification of the overhaul's effectiveness. Using GPA in the cases studied in this paper indicates that the health state of engine components can be investigated from performance test data; as the main result, it is shown that by applying GPA it is possible to distinguish the effect of non-recoverable degradation from that of a poor overhaul on engine performance and, finally, to technically assess the effectiveness of the overhaul.


2021 ◽  
Vol 50 (2) ◽  
pp. 247-263
Author(s):  
Xuliang Duan ◽  
Bing Guo ◽  
Yan Shen ◽  
Yuncheng Shen ◽  
Xiangqian Dong ◽  
...  

Data currency is a temporal reference of data; it reflects the degree to which the data is current with the world it models. A currency rule is a formal rule extracted from the data set that reflects the currency order of the data tuples; it can be used both for data repairing and for currency quality evaluation. Building on research into data currency repairing, the basic form of the currency rule is extended, and parallel rule extraction and update algorithms are proposed to meet the requirement of running on dynamic data sets. In addition, four data currency quality evaluation models are proposed and verified by experiments. The performance tests show that the efficiency of the parallel algorithms is significantly improved and that the rules compliance mean (CM2) model based on the extended currency rule has the highest average precision. The extended currency rules not only improve efficiency and adaptability but also provide more valuable features for data quality evaluation.

