Machine learning-based digital twins reduce seasonal remapping in aeroderivative gas turbines

2021 ◽  
pp. 1-7
Author(s):  
Nick Petro ◽  
Felipe Lopez

Abstract Aeroderivative gas turbines have their combustion set points adjusted periodically in a process known as remapping. Even turbines that perform well after remapping may produce unacceptable behavior when external conditions change. This article introduces a digital twin that uses real-time measurements of combustor acoustics and emissions in a machine learning model that tracks recent operating conditions. The digital twin is leveraged by an optimizer that selects adjustments allowing the unit to keep combustor dynamics and emissions in compliance without seasonal remapping. Results from a pilot site demonstrate that the proposed approach can allow a GE LM6000PD unit to operate for ten months without seasonal remapping while adjusting to changes in ambient temperature (4–38 °C) and to different fuel compositions.
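The twin-plus-optimizer loop described above can be sketched in miniature: a data-driven surrogate maps operating conditions and a tunable set point to an emissions response, and a simple search picks the set point that keeps the prediction lowest. This is a hedged illustration, not the authors' code; the "pilot fuel split" variable, the synthetic response surface, and all coefficients are invented stand-ins.

```python
# Minimal sketch of a model-based remapping loop (illustrative only).
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Synthetic stand-in for recent operating data: ambient temperature [deg C]
# and a pilot fuel split set point; the NOx response below is invented.
T_amb = rng.uniform(4, 38, 500)
split = rng.uniform(0.0, 1.0, 500)
X = np.column_stack([T_amb, split])
nox = 20 + 0.3 * T_amb + 15 * (split - 0.4) ** 2 + rng.normal(0, 0.5, 500)

# The "digital twin": a regression model tracking recent conditions.
model = GradientBoostingRegressor().fit(X, nox)

def remap(t_amb):
    """Pick the set point with the lowest predicted NOx at this ambient temp."""
    cand = np.linspace(0.0, 1.0, 101)
    pred = model.predict(np.column_stack([np.full_like(cand, t_amb), cand]))
    return cand[np.argmin(pred)], pred.min()

best_split, best_nox = remap(30.0)
```

In practice the optimizer would also enforce combustor-dynamics limits as constraints rather than minimizing a single output.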

Author(s):  
Andrés L. Carrillo Peña ◽  
Jeffer S. Eugenio Barroso ◽  
Alberto A. Martínez Vesga ◽  
Sebastián Roa Prada ◽  
Victor A. Ardila Acuña

Abstract Centrifugal pumps are devices commonly used in countless industrial and residential applications, from water supply systems to oil and gas processing plants. These rotary hydraulic machines have a strong impact on industrial energy consumption worldwide, not only because of their vast numbers but also because of their continuous operation. Developing techniques to improve the efficiency of pumping systems therefore helps make communities and industrial activity more sustainable. The overall performance of these machines cannot be fully predicted by analytical procedures, owing to the complexity of the fluid flow phenomena that occur in their interior, so it is common practice to resort to alternate modeling techniques, such as computer-aided numerical analysis, which can predict the performance of a pump given its CAD model. However, the performance of an actual centrifugal pump may deviate from its ideal behavior due to multiple factors that alter the performance curves given by the manufacturers in the corresponding data sheets. The discrepancies between the real and simulated responses of centrifugal pumps call for better modeling and simulation techniques to support the design of more efficient pumping systems. Digital twins can bring the simulation environment closer to reality by replicating the behavior of the physical system in simulation with the support of experimental data. The digital twin of a multiple-pump system with serial and parallel configurations was developed, based on two identical industrial centrifugal pumps available in the laboratory. Experimental data were collected to calibrate the digital twin so that the simulated system can predict the response under changing operating conditions. The simulation environment was developed with the assistance of a commercial Computational Fluid Dynamics program.
After validating the behavior of the virtual components with respect to that of their actual counterparts, tests were carried out to predict the behavior of the pumping system under downstream disturbances that can affect the operating point of the overall pumping system and its corresponding efficiency. The digital twin made it possible to visualize how pumps connected in series or in parallel can be maneuvered toward higher-efficiency operating conditions in response to changes downstream in the pipeline.
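The series/parallel maneuvering described above follows standard pump-curve composition: at equal flow, heads add in series; at equal head, flows add in parallel. A minimal NumPy sketch under an assumed quadratic pump curve and system curve (H0, k, and the system coefficients are illustrative values, not the paper's data):

```python
import numpy as np

# Combined head-flow curves for two identical pumps (illustrative sketch).
H0, k = 40.0, 0.05           # shutoff head [m] and curve coefficient (assumed)
Q = np.linspace(0, 25, 251)  # flow rate [m^3/h]

H_single = H0 - k * Q**2           # one pump: H(Q) = H0 - k*Q^2
H_series = 2 * (H0 - k * Q**2)     # series: same flow, heads add
H_parallel = H0 - k * (Q / 2)**2   # parallel: same head, flow splits

# Assumed system curve (static lift + friction losses):
H_system = 5.0 + 0.08 * Q**2

def operating_point(H_pump):
    """Intersection of pump and system curves on the discretized grid."""
    i = np.argmin(np.abs(H_pump - H_system))
    return Q[i], H_system[i]

q_single, _ = operating_point(H_single)
q_parallel, _ = operating_point(H_parallel)
```

A downstream disturbance shifts the system curve, moving these intersection points; that shift is exactly what the calibrated twin is used to anticipate.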


Author(s):  
K. K. Botros ◽  
G. R. Price ◽  
G. Kibrya

A Predictive Emission Monitoring (PEM) model has been developed based on an optimized Neural Network (NN) architecture that takes 8 fundamental parameters as input variables. The model predicts both NO and NOx as output variables. The NN is initially trained using a combination of two data sets: a) measured data at various loads from an LM1600 gas turbine installed at one of the compressor stations on the TransCanada Transmission system in Alberta, Canada; b) data generated by a Computational Fluid Dynamics (CFD) model at different operating conditions covering the range of engine operating parameters spanned over one year. The CFD predictions of NOx employed the ‘flamelet’ model and a set of 8 reactions including the Zeldovich mechanism for thermal NOx, along with an empirical correlation for prompt NOx formation. A Multi-Layer Perceptron type Neural Network with two hidden layers was found to be the optimum architecture for predicting NO levels, with a maximum absolute error of around 7%, mean absolute error of 2.3%, and standard deviation of 1.97%. The model is easy to implement on the station PLC. A set of one year of data consisting of 2804 cases was submitted to the above optimized NN architecture, with ambient temperature varying from –29.9 °C to 35.7 °C and output power from 570 kW to 16.955 MW. This gave consistent contours of NO levels. As expected, the NN shows that NO increases with increasing power or ambient temperature.
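The architecture described (a multi-layer perceptron with two hidden layers over 8 inputs) can be sketched with scikit-learn. This is a hedged illustration under synthetic data, not the paper's trained PEM model: the layer widths, the linear stand-in response, and the noise level are all assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)

# Synthetic stand-in for 2804 cases of 8 input parameters (e.g. ambient
# temperature, power); the NO response here is invented, rising with
# the first two inputs as the abstract reports.
X = rng.uniform(-1, 1, (2804, 8))
y = 50 + 10 * X[:, 0] + 8 * X[:, 1] + rng.normal(0, 0.5, 2804)

# Two hidden layers, mirroring the optimum architecture found above;
# the widths (16, 8) are assumed, not taken from the paper.
pem = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0),
)
pem.fit(X, y)
mae = np.abs(pem.predict(X) - y).mean()
```

A model of this size reduces to small weight matrices, which is what makes deployment on a station PLC practical.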


2020 ◽  
Vol 143 (1) ◽  
Author(s):  
Jinlong Liu ◽  
Christopher Ulishney ◽  
Cosmin Emil Dumitrescu

Abstract Engine calibration requires detailed feedback information that can reflect the combustion process as the optimization objective. Indicated mean effective pressure (IMEP) is such an indicator, describing an engine’s capacity to do work under different combinations of control variables. In this context, it is of interest to find cost-effective solutions that reduce the number of experimental tests. This paper proposes a random forest machine learning model as a cost-effective tool for optimizing engine performance. Specifically, the model estimated IMEP for a natural gas spark-ignited engine obtained from a converted diesel engine. The goal was to develop an economical and robust tool that can help reduce the large number of experiments usually required throughout the design and development of internal combustion engines. The data used for building this correlative model came from engine experiments that varied the spark advance, fuel-air ratio, and engine speed; the inlet conditions and the coolant/oil temperature were maintained constant. As a result, the model inputs were the key engine operation variables that affect engine performance. The trained model was shown to predict the combustion-related feedback information with good accuracy (R² ≈ 0.9 and MSE ≈ 0). In addition, the model accurately reproduced the effect of control variables on IMEP, which would help narrow the choice of operating conditions for future designs of experiment. Overall, the machine learning approach presented here can open new opportunities for cost-efficient engine analysis and diagnostics work.
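A random-forest surrogate of this kind maps the three test variables to IMEP. The sketch below follows that setup but is not the authors' model: the ranges, the synthetic IMEP surface (with an interior optimum in spark timing and equivalence ratio), and the forest size are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)

# Synthetic stand-ins for the three varied inputs.
spark = rng.uniform(10, 40, 400)     # spark advance [deg BTDC]
phi = rng.uniform(0.7, 1.0, 400)     # fuel-air equivalence ratio
speed = rng.uniform(900, 1500, 400)  # engine speed [rpm]
X = np.column_stack([spark, phi, speed])

# Invented IMEP surface [bar] with an optimum near 25 deg / phi 0.9.
imep = (6 - 0.004 * (spark - 25) ** 2 - 8 * (phi - 0.9) ** 2
        + 1e-4 * speed + rng.normal(0, 0.05, 400))

Xtr, Xte, ytr, yte = train_test_split(X, imep, random_state=0)
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(Xtr, ytr)
r2 = r2_score(yte, rf.predict(Xte))
```

Once fitted, querying `rf.predict` over a grid of spark/phi values is far cheaper than running the corresponding dyno tests, which is the cost saving the abstract targets.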


Author(s):  
Samuel M. Hipple ◽  
Zachary T. Reinhart ◽  
Harry Bonilla-Alvarado ◽  
Paolo Pezzini ◽  
Kenneth Mark Bryden

Abstract With increasing regulation and the push for clean energy, the operation of power plants is becoming increasingly complex. This complexity, combined with the need to optimize performance at base load and off-design conditions, means that predicting power plant performance with computational modeling is more important than ever. However, traditional modeling approaches such as physics-based models do not capture the true performance of power plant critical components. The complexity of factors such as coupling, noise, and off-design operating conditions makes the performance of critical components such as turbomachinery difficult to model. In a complex system, such as a gas turbine power plant, this creates significant disparities between models and actual system performance that limit the detection of abnormal operations. This study compares machine learning tools for predicting gas turbine performance against traditional physics-based models. A long short-term memory (LSTM) model, a form of recurrent neural network, was trained using operational datasets from a 100 kW recuperated gas turbine power system designed for hybrid configuration. The LSTM turbine model was trained to predict shaft speed, outlet pressure, and outlet temperature. The performance of both the machine learning model and a physics-based model was compared against experimental data from the gas turbine system. Results show that the machine learning model has significant advantages in prediction accuracy and precision compared to a traditional physics-based model when fed facility data as an input. This advantage of predicting performance by machine learning models can be used to detect abnormal operations.
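The recurrence at the heart of an LSTM model like the one above can be shown in a few lines of plain NumPy: each time step combines the current sensor inputs with a hidden state and a cell state through gated updates. This is a generic single-cell forward pass with random stand-in weights, not the trained turbine model; the input and hidden sizes are arbitrary.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W, U, b):
    """One LSTM time step. W: (4H, D), U: (4H, H), b: (4H,)."""
    H = h.shape[0]
    z = W @ x + U @ h + b
    i = sigmoid(z[0:H])        # input gate
    f = sigmoid(z[H:2*H])      # forget gate
    o = sigmoid(z[2*H:3*H])    # output gate
    g = np.tanh(z[3*H:4*H])    # candidate cell state
    c_new = f * c + i * g      # gated cell-state update
    h_new = o * np.tanh(c_new)
    return h_new, c_new

rng = np.random.default_rng(3)
D, H = 3, 8                    # e.g. 3 sensor inputs -> 8 hidden units (assumed)
W = rng.normal(0, 0.1, (4 * H, D))
U = rng.normal(0, 0.1, (4 * H, H))
b = np.zeros(4 * H)

h, c = np.zeros(H), np.zeros(H)
for t in range(20):            # run over a 20-step input sequence
    x_t = rng.normal(0, 1, D)  # stand-in for facility sensor readings at step t
    h, c = lstm_step(x_t, h, c, W, U, b)
```

The hidden state `h` carries the recent operating history forward, which is what lets the model track transients that a steady-state physics model misses.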


Coatings ◽  
2021 ◽  
Vol 11 (4) ◽  
pp. 450
Author(s):  
Rahul Ramachandran

Degradation by wear and corrosion is frequently encountered in a variety of tribosystems, including materials and tools in forming operations. The combined effect of wear and corrosion, known as tribocorrosion, can result in accelerated material degradation. Interfacial conditions can affect this degradation. Tribocorrosion maps serve the purpose of identifying operating conditions at the interface for an acceptable rate of degradation. This paper proposes a machine learning-based approach to generate tribocorrosion maps, which can be used to predict tribosystem performance. Two tribocorrosion datasets from the published literature are used. The materials were chosen based on the wide availability of their tribocorrosion data in the literature. First, unsupervised machine learning is used to identify and label clusters in the tribocorrosion data. The identified clusters are then used to train a support vector classification model. The trained support vector machine is used to generate tribocorrosion maps. The generated maps are compared with those from the literature. The general approach can be applied to create tribocorrosion maps of materials widely used in material forming.
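The two-stage pipeline above (cluster, then classify, then sweep the classifier over the operating space) can be sketched as follows. This is a hedged illustration on synthetic data, not the paper's datasets: the two operating variables, the cluster locations, and the grid are invented.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

rng = np.random.default_rng(4)

# Synthetic stand-in for tribocorrosion data: two normalized operating
# variables (e.g. load and potential) forming two degradation regimes.
X = np.vstack([rng.normal([0.2, 0.3], 0.05, (60, 2)),
               rng.normal([0.7, 0.8], 0.05, (60, 2))])

# Step 1: unsupervised clustering labels the regimes.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# Step 2: a support vector classifier learns the regime boundary.
svm = SVC(kernel="rbf").fit(X, labels)

# Step 3: evaluating the classifier on a grid yields the map.
gx, gy = np.meshgrid(np.linspace(0, 1, 50), np.linspace(0, 1, 50))
grid = np.column_stack([gx.ravel(), gy.ravel()])
tribo_map = svm.predict(grid).reshape(50, 50)
```

Each cell of `tribo_map` assigns an operating condition to a degradation regime, which is exactly what a drawn tribocorrosion map encodes.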


2021 ◽  
Vol 73 (03) ◽  
pp. 34-37
Author(s):  
Judy Feder

The time needed to eliminate complications and accidents accounts for 20–25% of total well construction time, according to a 2020 SPE paper (SPE 200740). The same paper notes that digital twins have proven to be a key enabler in improving sustainability during well construction, shrinking the carbon footprint by reducing overall drilling time and bringing confidence to contactless advisory and collaboration. The paper also points out the potential application of digital twins to activities such as geothermal drilling. Advanced data analytics and machine learning (ML) can potentially reduce engineering hours by up to 70% during field development, according to Boston Consulting Group. Increased field automation, remote operations, sensor costs, digital twins, machine learning, and improved computational speed are responsible. It is no surprise, then, that digital twins are taking on a greater sense of urgency for operators, service companies, and drilling contractors working to improve asset and enterprise safety, productivity, and performance management. For 2021, digital twins appear among the oil and gas industry’s top 10 digital spending priorities. DNV GL said in its Technology Outlook 2030 that this could be the decade when cloud computing and advanced simulation see virtual system testing, virtual/augmented reality, and machine learning progressively merge into full digital twins that combine data analytics, real-time, and near-real-time data for installations, subsurface geology, and reservoirs to bring about significant advancements in upstream asset performance, safety, and profitability. The biggest challenges to these advancements, according to the firm, will be establishing confidence in the data and computational models that a digital twin uses and user organizations’ readiness to work with and evolve alongside the digital twin.
JPT looked at publications from inside and outside the upstream industry and at several recent SPE papers to get a snapshot of where the industry stands regarding uptake of digital twins in well construction and how the technology is affecting operations and outcomes.

Why Digital Twins

Gartner defines a digital twin as a digital representation of a real-world entity or system. “The implementation of a digital twin,” Gartner writes, “is an encapsulated software object or model that mirrors a unique physical object, process, organization, person or other abstraction.” Data from multiple digital twins can be aggregated for a composite view across several real-world entities and their related processes. In upstream oil and gas, digital twins focus on the well—and, ultimately, the field—and its life cycle. Unlike a digital simulation, which produces scenarios based on what could happen in the physical world but whose scenarios may not be actionable, a digital twin represents actual events from the physical world, making it possible to visualize and understand real-life scenarios and make better decisions. Digital well construction twins can pertain to single assets or processes and to the reservoir/subsurface or the surface. Ultimately, when process and asset sub-twins are connected, the result is an integrated digital twin of the entire asset or well. Massive sensor technology and the ability to store and handle huge amounts of data from the asset will enable the full digital twin to age throughout the life cycle of the asset, along with the asset itself (Fig. 1).


Author(s):  
Jinlong Liu ◽  
Christopher Ulishney ◽  
Cosmin E. Dumitrescu

Abstract Converting existing compression ignition engines to spark ignition is a promising approach to increase the application of natural gas in the heavy-duty transportation sector. However, the diesel-like environment dramatically affects engine performance and emissions. As a result, experimental tests are needed to investigate the characteristics of such converted engines. A machine learning model based on the bagged decision trees algorithm was established in this study to reduce the experimental cost and identify the operating conditions of special interest for analysis. Preliminary engine tests that changed spark timing, mixture equivalence ratio, and engine speed (three key engine operation variables) but maintained intake and boundary conditions were used as model input to train this correlative model. The model output was the indicated mean effective pressure, an engine parameter generally used to help locate high-efficiency regions at constant engine speed and fuel/air ratio. After training, the correlative model provided acceptable prediction performance except for a few outliers. Subsequently, a boosting ensemble learning approach was applied to help improve the model performance. The results showed that the boosted decision trees algorithm better described the combustion process inside the cylinder, at least for the operating conditions investigated in this study.
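The bagging-versus-boosting comparison above can be reproduced in miniature with scikit-learn's two standard ensembles over decision trees. This sketch uses an invented three-variable response, not the engine data; the ensemble sizes and the synthetic surface are assumptions.

```python
import numpy as np
from sklearn.ensemble import BaggingRegressor, GradientBoostingRegressor
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(5)

# Stand-in test matrix: normalized spark timing, equivalence ratio, speed.
X = rng.uniform(0, 1, (300, 3))
y = (np.sin(3 * X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.3 * X[:, 2]
     + rng.normal(0, 0.05, 300))

# Bagging: many deep trees fitted on bootstrap resamples, then averaged.
bagged = BaggingRegressor(DecisionTreeRegressor(), n_estimators=100,
                          random_state=0)
# Boosting: shallow trees fitted sequentially on the residuals.
boosted = GradientBoostingRegressor(n_estimators=100, random_state=0)

r2_bag = cross_val_score(bagged, X, y, cv=5).mean()
r2_boost = cross_val_score(boosted, X, y, cv=5).mean()
```

Which ensemble wins depends on the data; the sequential residual fitting of boosting is what the study credits for the improved description of in-cylinder combustion.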


Lubricants ◽  
2018 ◽  
Vol 6 (4) ◽  
pp. 108 ◽  
Author(s):  
Jakob Moder ◽  
Philipp Bergmann ◽  
Florian Grün

Hydrodynamic journal bearings are used in a wide range of machines, such as combustion engines, gas turbines, and wind turbines. For safe operation, awareness of the lubrication regime in which the bearing is currently operating is of great importance. In the current study, high-speed torque sensor signals, sampled at a frequency of 1000 Hz over a time range of 2.5 s and obtained on a journal bearing test rig under various operating conditions, are used to train machine learning models such as neural networks and logistic regression. Results indicate that a fast Fourier transform (FFT) of the high-speed torque signals enables accurate predictions of lubrication regimes. The trained models are analysed in order to identify distinctive frequencies for the respective lubrication regimes.
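The FFT-feature pipeline above can be sketched end to end: transform each torque window into a magnitude spectrum, train a classifier on the spectra, and read off the distinctive frequency. The sampling rate and window length match the abstract, but the signals are synthetic stand-ins (the injected 50 Hz component is an invented marker, not a measured bearing frequency).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(6)
fs, T = 1000, 2.5               # sampling rate [Hz], window length [s]
n = int(fs * T)
t = np.arange(n) / fs

def torque_signal(regime):
    """Synthetic stand-in: regime 1 adds an assumed 50 Hz component."""
    base = 0.1 * rng.normal(0, 1, n)
    if regime == 1:
        base += 0.5 * np.sin(2 * np.pi * 50 * t)
    return base

# One FFT magnitude spectrum per recorded window.
y = np.array([0] * 40 + [1] * 40)
Xf = np.abs(np.fft.rfft([torque_signal(r) for r in y], axis=1))

clf = LogisticRegression(max_iter=1000).fit(Xf, y)
acc = clf.score(Xf, y)

# Distinctive frequency: the spectral bin separating the regimes most.
peak_bin = int(np.argmax(Xf[y == 1].mean(axis=0) - Xf[y == 0].mean(axis=0)))
distinctive_hz = peak_bin * fs / n
```

With a 2.5 s window the frequency resolution is 1/T = 0.4 Hz, so each rfft bin maps to a physical frequency as in the last line.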


2021 ◽  
Vol 6 (2) ◽  
pp. 14
Author(s):  
Thalosang Tshireletso ◽  
Pilate Moyo ◽  
Matongo Kabani

A nonparametric machine learning model was used to study the behaviour of the variables of a concrete arch dam, the Roode Elsberg dam. The variables used were ambient temperature, water temperatures, and water level. Water temperature was measured using twelve thermometers, six on each flank of the dam. The thermometers were placed in pairs on different levels: avg6 (avg6-R and avg6-L) and avg5 (avg5-R and avg5-L) were on level 47.43 m, avg4 (avg4-R and avg4-L) and avg3 (avg3-R and avg3-L) were on level 43.62 m, and avg2 (avg2-R and avg2-L) and avg1 (avg1-R and avg1-L) were on level 26.23 m. Four neural networks and four random forests were cross-validated to determine their best-performing hyperparameters on the water temperature data. Quantile random forest was the best performer at mtry 7 (the number of variables randomly sampled as candidates at each split) and an RMSE (root mean square error) of 0.0015, so it was used for making predictions. The predictions were made using two cases of water level: recorded water level and full dam steady-state at Representative Concentration Pathway (RCP) 4.5 (hot and cold model) and RCP 8.5 (hot and cold model). Ambient temperature increased on average by 1.6 °C for the period 2012–2053 when using recorded water level; this led to increases in water temperature of 0.9 °C, 0.8 °C, and 0.4 °C for avg6-R, avg3-R, and avg1-R, respectively, for the period 2012–2053. The same average temperature increase led to average increases of 0.7 °C for avg6-R, 0.6 °C for avg3-R, and 0.3 °C for avg1-R for a full dam steady-state for the period 2012–2053.
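A quantile random forest returns a predictive distribution rather than a single value. scikit-learn has no built-in quantile forest, but the idea can be approximated by collecting per-tree predictions from an ordinary random forest and taking empirical quantiles, as sketched below. The dam variables, the linear stand-in response, and the forest settings are all illustrative assumptions, not the study's data or tuned hyperparameters.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(7)

# Illustrative stand-ins: ambient temperature [deg C] and water level [m]
# predicting water temperature at one thermometer (e.g. avg6-R).
T_amb = rng.uniform(5, 30, 500)
level = rng.uniform(20, 48, 500)
X = np.column_stack([T_amb, level])
y = 0.5 * T_amb - 0.05 * level + rng.normal(0, 0.3, 500)

# max_features plays the role of mtry (candidates per split).
rf = RandomForestRegressor(n_estimators=300, max_features=2,
                           random_state=0).fit(X, y)

def predict_quantiles(model, Xq, qs=(0.05, 0.5, 0.95)):
    """Approximate quantile-forest output from per-tree predictions."""
    per_tree = np.stack([tree.predict(Xq) for tree in model.estimators_])
    return np.quantile(per_tree, qs, axis=0)

lo, med, hi = predict_quantiles(rf, np.array([[20.0, 40.0]]))
```

The spread between the lower and upper quantiles gives an uncertainty band around each projected water temperature, which matters when extrapolating to 2053 climate scenarios.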


2021 ◽  
Vol 7 (1) ◽  
pp. 342
Author(s):  
Jia An ◽  
Chee Kai Chua ◽  
Vladimir Mironov

The application of machine learning (ML) in bioprinting has attracted considerable attention recently. Many have focused on the benefits and potential of ML, but a clear overview of how ML shapes the future of three-dimensional (3D) bioprinting is still lacking. Here, it is proposed that two missing links, Big Data and Digital Twin, are the key to articulating the vision of future 3D bioprinting. Creating training databases from Big Data curation and building digital twins of human organs with cellular resolution and properties are the most important and urgent challenges. With these missing links, it is envisioned that future 3D bioprinting will become more digital and in silico, and eventually strike a balance between virtual and physical experiments toward the most efficient utilization of bioprinting resources. Furthermore, the virtual component of bioprinting and biofabrication, namely digital bioprinting, will become a new growth point for the digital industry and information technology in the future.

