Modelling energy systems of Vietnam with integration of renewable power sources

Author(s):  
A.V. Edelev ◽  
D.N. Karamov ◽  
I.A. Sidorov ◽  
D.V. Binh ◽  
N.H. Nam ◽  
...  

The paper addresses the large-scale penetration of renewable energy into the power system of Vietnam. The proposed approach formulates the optimization of operational decisions across different power generation technologies as a Markov decision process, combining a stochastic base model with a deterministic lookahead model: the base model applies stochastic search to optimize the operation of power sources, while the lookahead model captures hourly variations of renewable energy over a year. The approach helps find the optimal generation configuration under different market conditions.
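A minimal sketch of the kind of base-model/lookahead split described above: a stochastic search proposes candidate operational decisions, and each candidate is scored with a deterministic hourly lookahead over one year of renewable availability. All names and numbers (hourly_demand, hourly_renewables, fuel_cost, the capacity range) are illustrative assumptions, not the authors' model.

```python
import numpy as np

rng = np.random.default_rng(0)
HOURS = 8760
hourly_demand = 1000 + 200 * np.sin(np.arange(HOURS) * 2 * np.pi / 24)   # MW, synthetic
hourly_renewables = rng.uniform(0, 400, HOURS)                            # MW, synthetic

def lookahead_cost(thermal_capacity, fuel_cost=40.0, shortage_penalty=500.0):
    """Deterministic lookahead: dispatch thermal generation hour by hour for a year."""
    residual = np.clip(hourly_demand - hourly_renewables, 0, None)
    thermal = np.minimum(residual, thermal_capacity)
    shortage = residual - thermal
    return fuel_cost * thermal.sum() + shortage_penalty * shortage.sum()

# Stochastic search (random sampling) over the operational decision variable.
best_cap, best_cost = None, np.inf
for _ in range(200):
    cap = rng.uniform(0, 1500)                 # candidate thermal capacity, MW
    cost = lookahead_cost(cap)
    if cost < best_cost:
        best_cap, best_cost = cap, cost

print(f"best thermal capacity ~ {best_cap:.0f} MW, annual cost ~ {best_cost:.2e}")
```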

2010 ◽  
Vol 44-47 ◽  
pp. 3611-3615 ◽  
Author(s):  
Zhi Cong Zhang ◽  
Kai Shun Hu ◽  
Hui Yu Huang ◽  
Shuai Li ◽  
Shao Yong Zhao

Reinforcement learning (RL) is a state- or action-value-based machine learning method that approximately solves large-scale Markov Decision Processes (MDPs) or Semi-Markov Decision Processes (SMDPs). A multi-step RL algorithm called Sarsa(λ, k) is proposed as a compromise between Sarsa and Sarsa(λ): it is equivalent to Sarsa when k is 1 and equivalent to Sarsa(λ) when k is infinite. Sarsa(λ, k) adjusts its performance by setting the value of k. Two forms, forward-view Sarsa(λ, k) and backward-view Sarsa(λ, k), are constructed and proved equivalent in off-line updating.
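A minimal tabular sketch of backward-view Sarsa(λ), with the eligibility trace truncated to the last k visited state-action pairs to illustrate how a parameter k can interpolate between one-step Sarsa (k = 1) and full Sarsa(λ) (large k). The toy chain environment and all hyperparameters are illustrative assumptions, not the algorithm's published form.

```python
import numpy as np

n_states, n_actions = 10, 2
alpha, gamma, lam, eps, k = 0.1, 0.95, 0.9, 0.1, 5
Q = np.zeros((n_states, n_actions))
rng = np.random.default_rng(1)

def step(s, a):
    """Toy chain environment: action 1 moves right, action 0 moves left."""
    s_next = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
    reward = 1.0 if s_next == n_states - 1 else 0.0
    return s_next, reward, s_next == n_states - 1

def epsilon_greedy(s):
    return int(rng.integers(n_actions)) if rng.random() < eps else int(Q[s].argmax())

for episode in range(500):
    trace = []                                   # last k visited (state, action) pairs
    s, a = 0, epsilon_greedy(0)
    done = False
    while not done:
        s2, r, done = step(s, a)
        a2 = epsilon_greedy(s2)
        delta = r + gamma * Q[s2, a2] * (not done) - Q[s, a]
        trace.append((s, a))
        trace = trace[-k:]                       # keep only the last k steps (the "k")
        # decayed credit assignment over the truncated eligibility trace
        for i, (ts, ta) in enumerate(reversed(trace)):
            Q[ts, ta] += alpha * (gamma * lam) ** i * delta
        s, a = s2, a2
```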


2013 ◽  
Vol 30 (05) ◽  
pp. 1350014 ◽  
Author(s):  
ZHICONG ZHANG ◽  
WEIPING WANG ◽  
SHOUYAN ZHONG ◽  
KAISHUN HU

Reinforcement learning (RL) is a state- or action-value-based machine learning method that solves large-scale multi-stage decision problems such as Markov Decision Process (MDP) and Semi-Markov Decision Process (SMDP) problems. We minimize the makespan of flow shop scheduling problems with an RL algorithm. We convert flow shop scheduling problems into SMDPs by constructing elaborate state features, actions, and a reward function such that minimizing the accumulated reward is equivalent to minimizing the schedule objective function. We apply an on-line TD(λ) algorithm with linear gradient-descent function approximation to solve the SMDPs. To examine the performance of the proposed RL algorithm, computational experiments are conducted on benchmark problems in comparison with other scheduling methods. The experimental results support the efficiency of the proposed algorithm and indicate that the RL approach is a promising computational approach for flow shop scheduling, worthy of further investigation.
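A minimal sketch of on-line TD(λ) value estimation with linear function approximation and gradient-descent updates, the learning machinery named in the abstract. The feature map and reward stream are stand-ins for the elaborate scheduling features and makespan-related reward; they are illustrative assumptions, not the authors' scheduler.

```python
import numpy as np

n_features = 8
alpha, gamma, lam = 0.01, 1.0, 0.8
w = np.zeros(n_features)          # weights of the linear value function V(s) = w . phi(s)
z = np.zeros(n_features)          # eligibility trace vector
rng = np.random.default_rng(2)

def features(state):
    """Illustrative feature map; a real scheduler would encode queue lengths,
    machine loads, remaining processing times, etc."""
    phi = np.zeros(n_features)
    phi[state % n_features] = 1.0
    return phi

state = 0
for t in range(10_000):
    phi = features(state)
    next_state = (state + rng.integers(1, 4)) % 100
    reward = -rng.random()        # e.g. negative elapsed time, so the return tracks makespan
    phi_next = features(next_state)
    delta = reward + gamma * w @ phi_next - w @ phi   # TD error
    z = gamma * lam * z + phi                         # accumulate eligibility traces
    w += alpha * delta * z                            # gradient-descent TD(lambda) update
    state = next_state
```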


2021 ◽  
Author(s):  
Congmei Jiang ◽  
Yongfang Mao ◽  
Yi Chai ◽  
Mingbiao Yu

With the increasing penetration of renewable resources such as wind and solar, especially their large-scale integration, the operation and planning of power systems face great risks due to the inherent stochasticity of natural resources. Although this uncertainty can be anticipated, the timing, magnitude, and duration of fluctuations cannot be predicted accurately. In addition, the outputs of renewable power sources are correlated in space and time, which makes it harder to predict the characteristics of their future behavior. To address these issues, this paper describes an unsupervised distribution-learning method for renewable scenario forecasting that accounts for spatiotemporal correlation and is based on a generative adversarial network (GAN), which has been shown to generate realistic time series for stochastic processes. We first utilize an improved GAN to learn the unknown data distributions and model the dynamic processes of renewable resources. We then generate a large number of forecasted scenarios using stochastic constrained optimization. For validation, we use power generation data from the National Renewable Energy Laboratory wind and solar integration datasets. The simulation results show that the generated trajectories not only reflect future power generation dynamics but also correctly capture the temporal, spatial, and fluctuation characteristics of the real power generation processes. The experimental comparisons verify the superiority of the proposed method and indicate that it reduces the training iterations of the generative model for scenario forecasts by at least 50%.
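A minimal PyTorch sketch of the adversarial setup mentioned above: a generator maps noise to short power time series and a discriminator learns to separate them from real profiles. The architecture, synthetic "real" data, and hyperparameters are illustrative assumptions, not the authors' improved GAN or the NREL datasets.

```python
import torch
import torch.nn as nn

seq_len, noise_dim, batch = 24, 16, 64   # e.g. 24 hourly power values per scenario

G = nn.Sequential(nn.Linear(noise_dim, 64), nn.ReLU(),
                  nn.Linear(64, seq_len), nn.Sigmoid())   # generator: noise -> scenario
D = nn.Sequential(nn.Linear(seq_len, 64), nn.LeakyReLU(0.2),
                  nn.Linear(64, 1))                       # discriminator: scenario -> logit

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

# Synthetic "real" data standing in for normalized wind/solar profiles.
t = torch.linspace(0, 2 * torch.pi, seq_len)
real_data = 0.5 + 0.4 * torch.sin(t) * torch.rand(1024, 1)

for step in range(500):
    real = real_data[torch.randint(0, 1024, (batch,))]
    fake = G(torch.randn(batch, noise_dim))

    # discriminator update: push real toward 1, generated toward 0
    d_loss = bce(D(real), torch.ones(batch, 1)) + bce(D(fake.detach()), torch.zeros(batch, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # generator update: make the discriminator label generated scenarios as real
    g_loss = bce(D(fake), torch.ones(batch, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```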


Mathematics ◽  
2021 ◽  
Vol 9 (12) ◽  
pp. 1385
Author(s):  
Irais Mora-Ochomogo ◽  
Marco Serrato ◽  
Jaime Mora-Vargas ◽  
Raha Akhavan-Tabatabaei

Natural disasters represent a latent threat for every country in the world. Due to climate change and other factors, statistics show that they continue to be on the rise. This situation challenges communities and humanitarian organizations to be better prepared and to react faster to natural disasters. In some countries, in-kind donations represent a high percentage of the supply for these operations, which presents additional challenges. This research proposes a Markov Decision Process (MDP) model of operations in collection centers, where in-kind donations are received, sorted, packed, and sent to the affected areas. The decision addressed is when to send a shipment, considering the uncertainty of the donation supply and the demand, as well as the logistics costs and the penalty for unsatisfied demand. From the MDP, a Monotone Optimal Non-Decreasing Policy (MONDP) is derived, which provides valuable insights for decision-makers in this field. Moreover, the necessary conditions to prove the existence of such a MONDP are presented.
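A minimal sketch of a ship-or-hold MDP of the kind described above, solved by value iteration: the state is the number of packed pallets on hand, donations arrive stochastically, and each period the center either holds or sends a shipment at a fixed logistics cost, with a penalty on unsatisfied demand. The transition structure, costs, and probabilities are illustrative assumptions, not the paper's model; under such assumptions the optimal policy typically comes out as a threshold (monotone non-decreasing) rule in the inventory level.

```python
import numpy as np

MAX_INV = 20                                # pallet capacity of the collection center
gamma = 0.95
ship_cost, holding_cost, penalty = 8.0, 0.5, 3.0
donation_pmf = {0: 0.3, 1: 0.4, 2: 0.3}     # pallets donated per period
demand_per_period = 1.5                     # expected pallets needed in the field

V = np.zeros(MAX_INV + 1)
for _ in range(500):                        # value iteration
    V_new = np.empty_like(V)
    policy = np.empty(MAX_INV + 1, dtype=int)
    for s in range(MAX_INV + 1):
        q = np.zeros(2)                     # action 0 = hold, 1 = ship everything on hand
        for a in (0, 1):
            inv_after = 0 if a == 1 else s
            unmet = max(demand_per_period - (s if a == 1 else 0), 0)
            cost = ship_cost * a + holding_cost * inv_after + penalty * unmet
            exp_next = sum(p * V[min(inv_after + d, MAX_INV)]
                           for d, p in donation_pmf.items())
            q[a] = -cost + gamma * exp_next
        V_new[s], policy[s] = q.max(), q.argmax()
    if np.max(np.abs(V_new - V)) < 1e-8:
        break
    V = V_new

print("ship decision by inventory level:", policy)   # expected to be non-decreasing in s
```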

