A self-learning circuit diagram for optimal water and energy management

Joule ◽  
2021 ◽  
Vol 5 (9) ◽  
pp. 2251-2253
Author(s):  
Ngoc T. Bui ◽  
Jeffrey J. Urban

2021 ◽  
pp. 307-327
Author(s):  
Hussein Joumaa ◽  
Khoder Jneid ◽  
Mireille Jacomino

Energies ◽  
2020 ◽  
Vol 13 (10) ◽  
pp. 2562
Author(s):  
Leehter Yao ◽  
Fazida Hanim Hashim ◽  
Chien-Chi Lai

A home energy management system (HEMS) was designed in this paper for a smart home that uses integrated energy resources such as power from the grid, solar power generated by photovoltaic (PV) panels, and power from an energy storage system (ESS). A fuzzy controller is proposed for the HEMS to optimally manage the integrated power of the smart home. The fuzzy controller regulates the AC power through the power rectifier in response to variations in the residential electric load, the solar power from the PV panels, the power of the ESS, and real-time electricity prices. A self-learning scheme is designed for the proposed fuzzy controller to adapt to short-term and seasonal climatic changes as well as residential load variations. A parsimonious parameterization scheme for both the antecedent and consequent parts of the fuzzy rule base keeps the self-learning scheme computationally efficient.
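The abstract does not give the controller's equations, but the structure it describes can be sketched as a zero-order Takagi-Sugeno fuzzy controller whose consequent singletons are tuned online by gradient descent on the control error. Everything below is an illustrative assumption, not the authors' implementation: the two inputs (normalized electricity price and net load), the 3x3 rule base, and the learning rate are all stand-ins.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

class SelfLearningFuzzyController:
    """Zero-order Takagi-Sugeno controller: two inputs on [0, 1]
    (hypothetical normalized price and net load), a 3x3 rule base,
    and consequent singletons adapted online (the self-learning part)."""

    def __init__(self, lr=0.1):
        # three linguistic labels (low, medium, high) per input
        self.sets = [(-0.5, 0.0, 0.5), (0.0, 0.5, 1.0), (0.5, 1.0, 1.5)]
        self.theta = np.zeros((3, 3))   # consequent singletons
        self.lr = lr

    def _firing(self, price, load):
        mp = np.array([tri(price, *s) for s in self.sets])
        ml = np.array([tri(load, *s) for s in self.sets])
        w = np.outer(mp, ml)            # rule firing strengths
        return w / (w.sum() + 1e-12)    # normalized

    def output(self, price, load):
        """Weighted-average defuzzification."""
        return float((self._firing(price, load) * self.theta).sum())

    def learn(self, price, load, error):
        """Self-learning step: shift each fired rule's consequent against
        the control error (gradient descent on 0.5 * error**2)."""
        self.theta -= self.lr * error * self._firing(price, load)

# Demo: adapt toward a desired command of -1.0 (say, "discharge the ESS")
# under high price (0.8) and high net load (0.9).
ctrl = SelfLearningFuzzyController()
for _ in range(300):
    err = ctrl.output(0.8, 0.9) - (-1.0)
    ctrl.learn(0.8, 0.9, err)
```

Because the normalized firing strengths sum to one, each update contracts the error, so the demo converges without tuning; the parsimony in the paper's scheme would come from keeping the rule base this small.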


2020 ◽  
Vol 44 (7) ◽  
pp. 5659-5674
Author(s):  
Hongqiang Guo ◽  
Fengrui Zhao ◽  
Hongliang Guo ◽  
Qinghu Cui ◽  
Erlei Du ◽  
...  

Energies ◽  
2018 ◽  
Vol 11 (10) ◽  
pp. 2575 ◽  
Author(s):  
Zhen Zhang ◽  
Cheng Ma ◽  
Rong Zhu

Multi-physical-field sensing and machine learning have drawn great attention in fields such as sensor networks, robotics, energy devices, smart buildings, and intelligent systems. In this paper, we present a novel, efficient method for thermal and energy management based on bimodal airflow-temperature sensing and reinforcement learning. The method expedites the exploration process through self-learning and adjusts the action policy solely through actuators interacting with the environment, requiring neither a model of the controlled object nor prior experience. In general, training a reinforcement learner requires a large number of data iterations, which takes a long time and is not suitable for real-time control. Here, we propose an approach that speeds up the learning process by indicating the direction of action adjustment. We adopt tailor-designed bimodal sensors to detect the airflow and temperature fields simultaneously, which provides comprehensive information for reinforcement learning. The proposed thermal and energy management incorporates bimodal parametric sensing into an improved actor-critic algorithm to realize self-learning control. Experiments on thermal and energy management in a multi-module integrated system validate the effectiveness of the proposed methodology, demonstrating high efficiency, fast response, and good robustness in various control scenarios. The methodology can be widely applied to thermal and energy management of diverse integrated systems.


Processes ◽  
2019 ◽  
Vol 7 (10) ◽  
pp. 672 ◽  
Author(s):  
Hongqiang Guo ◽  
Shangye Du ◽  
Fengrui Zhao ◽  
Qinghu Cui ◽  
Weilong Ren

Tabular Q-learning (QL) can be easily implemented in a controller to realize self-learning energy management control of a plug-in hybrid electric bus (PHEB). However, the "curse of dimensionality" is difficult to avoid, as the design space is huge. This paper proposes a QL-PMP algorithm (QL combined with the Pontryagin minimum principle (PMP)) to address the problem. The main novelty is that the difference between the feedback SOC (state of charge) and the reference SOC is exclusively designed as the state, yielding a limited state space with 50 rows and 25 columns. The off-line training process shows that this limited state space is reasonable and adequate for self-learning; the Hardware-in-the-Loop (HIL) simulation results show that the QL-PMP strategy can be implemented in a controller for real-time control and improves fuel economy by 20.42% on average compared to the charge depleting–charge sustaining (CDCS) strategy.
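One reading of the state design above can be sketched as follows, assuming (this mapping is an assumption, as is the surrogate plant) that the 50-by-25 table maps 50 discretized SOC-gap states to 25 candidate PMP co-state levels, with the chosen co-state then feeding the PMP minimization:

```python
import numpy as np

rng = np.random.default_rng(1)

# Sketch of the QL-PMP state design: the state is the discretized gap
# between feedback SOC and reference SOC; the action picks a candidate
# PMP co-state, so the Q-table has 50 rows and 25 columns.
N_STATES, N_ACTIONS = 50, 25
Q = np.zeros((N_STATES, N_ACTIONS))
alpha, gamma, eps = 0.1, 0.9, 0.1
costates = np.linspace(-1.0, 1.0, N_ACTIONS)   # candidate co-states

def soc_gap_bin(gap):
    """Discretize an SOC gap in [-0.25, 0.25] into 50 bins."""
    return int(np.clip((gap + 0.25) / 0.5 * N_STATES, 0, N_STATES - 1))

def plant(gap, costate):
    """Illustrative surrogate for the PHEB powertrain: a larger co-state
    discharges the battery faster, shrinking a positive SOC gap."""
    new_gap = float(np.clip(gap - 0.05 * costate + rng.normal(0, 0.005),
                            -0.25, 0.25))
    return new_gap, -abs(new_gap)    # reward: track the reference SOC

gap = 0.2
for _ in range(20000):
    s = soc_gap_bin(gap)
    a = int(rng.integers(N_ACTIONS)) if rng.random() < eps \
        else int(Q[s].argmax())
    gap, r = plant(gap, costates[a])
    s2 = soc_gap_bin(gap)
    Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])   # QL update
```

The point of the design is visible even in this toy: because the state is only the SOC gap, the table stays small enough for on-board learning, and near a zero gap the greedy action settles on a co-state near zero, i.e. "hold the reference".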


IEEE Access ◽  
2019 ◽  
Vol 7 ◽  
pp. 103153-103166 ◽  
Author(s):  
Hong-Qiang Guo ◽  
Guoliang Wei ◽  
Fengbo Wang ◽  
Chong Wang ◽  
Shangye Du
