Intelligent wind farm control via deep reinforcement learning and high-fidelity simulations

2021 ◽  
Vol 292 ◽  
pp. 116928
Author(s):  
Hongyang Dong ◽  
Jincheng Zhang ◽  
Xiaowei Zhao
2021 ◽  
Vol 281 ◽  
pp. 116115
Author(s):  
Xiaolei Yang ◽  
Christopher Milliren ◽  
Matt Kistner ◽  
Christopher Hogg ◽  
Jeff Marr ◽  
...  

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Kai Liu ◽  
Majid Allahyari ◽  
Jorge S. Salinas ◽  
Nadim Zgheib ◽  
S. Balachandar

Abstract
High-fidelity simulations of coughs and sneezes that serve as virtual experiments are presented; they offer an unprecedented opportunity to peer into the chaotic evolution of the resulting airborne droplet clouds. While larger droplets quickly fall out of the cloud, smaller droplets evaporate rapidly, and their non-volatile content remains airborne as droplet nuclei for a long time, to be transported over long distances. The substantial variation observed between different realizations has important social-distancing implications, since probabilistic outlier events do occur and may need to be taken into account when assessing the risk of contagion. Contrary to common expectations, we observe that dry ambient conditions increase the number of airborne, potentially virus-laden nuclei by more than four times, because rapid evaporation reduces droplet fall-out. The simulation results are used to validate and calibrate a comprehensive multiphase theory, which is then used to predict the spread of airborne nuclei under a wide variety of ambient conditions.
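The competition the abstract describes — large droplets settle out while small droplets evaporate down to persistent nuclei — can be illustrated with two classical single-droplet relations: Stokes settling velocity and the d²-law of evaporation. This is a minimal sketch only, not the paper's calibrated multiphase theory; all parameter values (droplet density, air viscosity, evaporation constant) are assumed round numbers for illustration.

```python
# Illustrative sketch: Stokes settling vs. d^2-law evaporation for a single
# droplet. NOT the multiphase model of the paper; all parameters are assumed.

def settling_velocity(d, rho_p=1000.0, rho_a=1.2, mu=1.8e-5, g=9.81):
    """Stokes terminal settling velocity (m/s) for a droplet of diameter d (m).

    Scales with d**2, so halving the diameter quarters the fall speed.
    """
    return (rho_p - rho_a) * g * d**2 / (18.0 * mu)

def evaporation_time(d0, K=1.0e-9):
    """d^2-law droplet lifetime (s): t = d0^2 / K for evaporation constant K (m^2/s).

    K depends strongly on humidity; drier air means larger K and faster drying.
    """
    return d0**2 / K

# A 100-micron droplet settles quickly; a 10-micron droplet evaporates long
# before it can settle, leaving a non-volatile nucleus that stays airborne.
for d0 in (100e-6, 10e-6):
    print(f"d0 = {d0 * 1e6:5.0f} um: "
          f"v_settle = {settling_velocity(d0):.4f} m/s, "
          f"t_evap = {evaporation_time(d0):.2f} s")
```

With these assumed values, the 10-micron droplet dries out in a fraction of a second while falling only millimetres, which is the mechanism behind the abstract's finding that dry air leaves more nuclei airborne.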


2021 ◽  
Vol 31 (3) ◽  
pp. 1-26
Author(s):  
Aravind Balakrishnan ◽  
Jaeyoung Lee ◽  
Ashish Gaurav ◽  
Krzysztof Czarnecki ◽  
Sean Sedwards

Reinforcement learning (RL) is an attractive way to implement high-level decision-making policies for autonomous driving, but learning directly from a real vehicle or a high-fidelity simulator is variously infeasible. We therefore consider the problem of transfer reinforcement learning and study how a policy learned in a simple environment using WiseMove can be transferred to our high-fidelity simulator, WiseSim. WiseMove is a framework to study safety and other aspects of RL for autonomous driving. WiseSim accurately reproduces the dynamics and software stack of our real vehicle. We find that the accurately modelled perception errors in WiseSim contribute the most to the transfer problem. Even when naively modelled in WiseMove, these errors yield an RL policy that performs better in WiseSim than a hand-crafted rule-based policy. Applying domain randomization to the environment in WiseMove yields an even better policy. The final RL policy reduces the failures due to perception errors from 10% to 2.75%. We also observe that the RL policy relies significantly less on velocity than the rule-based policy, having learned that its measurement is unreliable.
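The two transfer techniques the abstract credits — naively modelled perception errors and domain randomization — can be sketched in a few lines. This is a hypothetical toy setup, not the WiseMove/WiseSim code: the environment class, the additive-Gaussian noise model, and the parameter ranges are all assumptions for illustration.

```python
# Illustrative sketch of (a) naive perception-error modelling and (b) domain
# randomization in a low-fidelity training environment. The class name, noise
# model, and ranges are hypothetical, not the WiseMove/WiseSim implementation.
import random

class NoisyDrivingEnv:
    """Toy stand-in for a low-fidelity driving environment (hypothetical)."""

    def __init__(self, vel_noise_std, friction):
        self.vel_noise_std = vel_noise_std  # std. dev. of velocity perception error
        self.friction = friction            # randomized dynamics parameter

    def observe(self, true_velocity):
        # (a) Naive perception-error model: additive Gaussian noise on the
        # velocity reading, so the policy learns not to trust it blindly.
        return true_velocity + random.gauss(0.0, self.vel_noise_std)

def make_randomized_env():
    # (b) Domain randomization: resample environment parameters each episode,
    # so the learned policy does not overfit one simulator configuration.
    return NoisyDrivingEnv(
        vel_noise_std=random.uniform(0.0, 2.0),  # m/s, assumed range
        friction=random.uniform(0.6, 1.0),       # assumed range
    )

for episode in range(3):
    env = make_randomized_env()
    obs = env.observe(true_velocity=20.0)
    # ...train the RL policy on noisy observations from this episode's env...
```

A policy trained across many such randomized episodes sees velocity readings of varying reliability, which matches the abstract's observation that the transferred policy learned to down-weight the velocity measurement.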


2009 ◽  
Vol 46 (5) ◽  
pp. 903-922 ◽  
Author(s):  
Miguel R. Visbal ◽  
Raymond E. Gordnier ◽  
Marshall C. Galbraith
