A new hybrid model based on Secondary Decomposition, Reinforcement Learning and SRU Network for Wind Turbine Gearbox Oil Temperature Forecasting

Measurement ◽  
2021 ◽  
pp. 109347
Author(s):  
Hui Liu ◽  
Chengqing Yu ◽  
Chengming Yu

Energies ◽  
2019 ◽  
Vol 12 (20) ◽  
pp. 3920
Author(s):  
Qiang Zhao ◽  
Kunkun Bao ◽  
Jia Wang ◽  
Yinghua Han ◽  
Jinkuan Wang

Condition monitoring can improve the reliability of wind turbines and thereby effectively reduce operation and maintenance costs. A temperature prediction model for wind turbine gearbox components is of great significance for monitoring the operating status of the gearbox. However, the complex operating conditions of wind turbines pose great challenges to predicting the temperature of gearbox components. In this study, an online hybrid model based on a long short-term memory (LSTM) neural network and adaptive error correction (LSTM-AEC) using single-variable data is proposed. In the proposed model, the LSTM algorithm, a deep learning approach well suited to time series, is applied for the preliminary temperature prediction; it has a strong ability to capture the non-stationary and non-linear characteristics of the gearbox component temperature series. To enhance the performance of the LSTM prediction model, an adaptive error correction model based on the variational mode decomposition (VMD) algorithm is developed; the VMD algorithm can effectively address the prediction difficulty caused by the non-stationary, high-frequency and chaotic characteristics of the error series. To apply the hybrid model in an online setting, a real-time rolling data decomposition process based on the VMD algorithm is proposed. To validate the effectiveness of the proposed hybrid model, several traditional models are introduced for comparative analysis. The experimental results show that the hybrid model achieves better prediction performance than the comparative models.
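The "predict, then correct the error" workflow described above can be sketched as follows. Note that this is a toy illustration of the data flow only: the paper uses an LSTM for the preliminary forecast and VMD to decompose the error series, whereas this sketch substitutes a persistence forecast and a simple moving-average split into a smooth mode and a high-frequency residual mode. All names and parameter values here are illustrative assumptions.

```python
import numpy as np

def preliminary_forecast(series):
    """One-step-ahead persistence forecast (toy stand-in for the LSTM)."""
    return series[:-1]  # the prediction for step t is the observation at t-1

def decompose_error(error, window=5):
    """Split the error series into a smooth mode and a high-frequency
    residual mode (toy stand-in for VMD's intrinsic mode functions)."""
    kernel = np.ones(window) / window
    smooth = np.convolve(error, kernel, mode="same")
    return smooth, error - smooth

def corrected_forecast(series, window=5):
    """Preliminary forecast plus a one-step-ahead prediction of its error."""
    prelim = preliminary_forecast(series)
    error = series[1:] - prelim
    smooth, _residual = decompose_error(error, window)
    # Predict the smooth error mode by persistence; treat the noisy
    # residual mode as unpredictable (an assumption of this sketch --
    # the paper predicts every decomposed mode).
    error_hat = np.zeros_like(error)
    error_hat[1:] = smooth[:-1]
    return prelim + error_hat

# Synthetic "oil temperature" series: slow oscillation plus sensor noise.
rng = np.random.default_rng(0)
t = np.linspace(0, 8 * np.pi, 400)
temp = 60 + 5 * np.sin(t) + rng.normal(0.0, 0.05, t.size)

rmse_plain = np.sqrt(np.mean((temp[1:] - preliminary_forecast(temp)) ** 2))
rmse_hybrid = np.sqrt(np.mean((temp[1:] - corrected_forecast(temp)) ** 2))
```

On this smooth synthetic series the correction step lowers the RMSE of the plain persistence forecast, which mirrors the role the adaptive error correction plays for the LSTM in the paper.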


2021 ◽  
Vol 8 ◽  
Author(s):  
Huan Zhao ◽  
Junhua Zhao ◽  
Ting Shu ◽  
Zibin Pan

Buildings account for a large proportion of total energy consumption in many countries, and almost half of that consumption is caused by Heating, Ventilation, and Air-Conditioning (HVAC) systems. Model predictive control of HVAC is a complex task due to the dynamics of the system and its environment, such as temperature and electricity price. Deep reinforcement learning (DRL) is a model-free method that uses a trial-and-error mechanism to learn the optimal policy. However, learning efficiency and learning cost are the main obstacles to applying DRL in practice. To overcome this problem, a hybrid-model-based DRL method is proposed for the HVAC control problem. First, a specific Markov decision process (MDP) is defined that accounts for energy cost, temperature violation, and action violation. Then the hybrid-model-based DRL method is developed, which utilizes both a knowledge-driven model and a data-driven model throughout the learning process. Finally, a protection mechanism and reward-adjustment methods are used to further reduce the learning cost. The proposed method is tested in a simulation environment using Australian Energy Market Operator (AEMO) electricity price data and New South Wales temperature data. Simulation results show that 1) the DRL method reduces energy cost while maintaining satisfactory temperatures compared with a short-term MPC method; and 2) the proposed method improves learning efficiency and reduces learning cost during learning compared with the model-free method.
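The MDP described above penalizes three things at each control step: energy cost, temperature-band violation, and infeasible (violating) actions. A minimal sketch of such a reward function is shown below; the weighting coefficients, comfort band, and function signature are illustrative assumptions, not values from the paper.

```python
def hvac_reward(power_kw, price, temp, action_ok,
                t_low=21.0, t_high=24.0,
                w_cost=1.0, w_temp=10.0, w_action=5.0):
    """Per-step reward: negative weighted sum of energy cost, comfort
    violation, and action violation. All weights and the comfort band
    [t_low, t_high] are hypothetical tuning parameters."""
    energy_cost = power_kw * price  # cost of electricity used this step
    # Degrees outside the comfort band (zero when inside it).
    temp_violation = max(t_low - temp, 0.0) + max(temp - t_high, 0.0)
    # Flat penalty when the chosen action violates an operating constraint.
    action_penalty = 0.0 if action_ok else 1.0
    return -(w_cost * energy_cost
             + w_temp * temp_violation
             + w_action * action_penalty)
```

For example, a feasible action drawing 2 kW at a price of 0.1 while the room sits at 22 °C yields a reward of −0.2 (pure energy cost), while overheating to 25 °C adds a comfort penalty of 10 × 1 °C on top of it. The large comfort and action weights encode the paper's idea that violations should dominate the learning signal, which is also what the protection mechanism exploits to keep early exploration cheap.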


2011 ◽  
Vol 44 (1) ◽  
pp. 7061-7066 ◽  
Author(s):  
Silvio Simani ◽  
Paolo Castaldi ◽  
Marcello Bonfè

Author(s):  
Qiu Yingning ◽  
Feng Yanhui ◽  
Yang Wenxian ◽  
Cao Mengnan ◽  
Wang Hao ◽  
...  

2022 ◽  
Vol 118 ◽  
pp. 102960
Author(s):  
Dianrui Wang ◽  
Yue Shen ◽  
Junhe Wan ◽  
Qixin Sha ◽  
Guangliang Li ◽  
...  

2018 ◽  
Vol 35 (1) ◽  
pp. 415-421 ◽  
Author(s):  
Ruiming Fang ◽  
Rongyan Shang ◽  
Shunhui Jiang ◽  
Changqing Peng ◽  
Zhijun Ye
