Flow Control in Wings and Discovery of Novel Approaches via Deep Reinforcement Learning

Author(s): Ricardo Vinuesa, Oriol Lehmkuhl, Adrian Lozano-Duran, Jean Rabault

In this review we summarize existing trends in flow control used to improve the aerodynamic efficiency of wings. We first discuss active methods to control turbulence, starting with flat-plate geometries and building towards the more complicated flow around wings. Then, we discuss active approaches to control separation, a crucial aspect of achieving high aerodynamic efficiency. Furthermore, we highlight methods relying on turbulence simulation, and discuss the various levels of modelling. Finally, we thoroughly review data-driven methods, their application to flow control, and focus on deep reinforcement learning (DRL). We conclude that this methodology has the potential to discover novel control strategies in complex turbulent flows of aerodynamic relevance.
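As a concrete illustration of the flat-plate active turbulence-control methods that reviews of this kind typically cover, the sketch below implements classic opposition control (blowing and suction at the wall opposing the wall-normal velocity sensed at a detection plane). This is a minimal sketch of a canonical scheme, not the review's own code; the array shapes and the synthetic sensor data are illustrative assumptions.

```python
import numpy as np

def opposition_control(v_detect):
    """Classic opposition control for wall-bounded turbulence:
    prescribe wall transpiration opposing the wall-normal velocity
    sensed at a detection plane (typically y+ ~ 10-15).

    v_detect: 2-D array of wall-normal velocity on the (x, z) plane.
    Returns the wall blowing/suction field, mean-subtracted so the
    net volume flux added at the wall is zero.
    """
    v_wall = -v_detect
    return v_wall - v_wall.mean()  # enforce zero-net-mass-flux actuation

# Illustrative use with synthetic sensor data standing in for DNS output:
rng = np.random.default_rng(0)
v_plane = rng.normal(scale=0.05, size=(64, 64))
wall_bc = opposition_control(v_plane)
```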

2007, Vol. 570, pp. 467-477
Author(s): Ivan Marusic, D. D. Joseph, Krishnan Mahesh

A formula is derived that shows exactly how much the discrepancy between the volume flux in laminar and in turbulent flow at the same pressure gradient increases as the pressure gradient is increased. We compare laminar and turbulent flows in channels with and without flow control. For the related problem of a fixed bulk-Reynolds-number flow, we seek the theoretical lowest bound for skin-friction drag for control schemes that use surface blowing and suction with zero-net volume-flux addition. For one such case, using a crossflow approach, we show that sustained drag below that of the laminar-Poiseuille-flow case is not possible. For more general control strategies we derive a criterion for achieving sublaminar drag and use this to consider the implications for control strategy design and the limitations at high Reynolds numbers.
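A hedged sketch of the kind of identity underlying this result, using standard fully developed channel-flow relations (not necessarily the paper's exact expression): at fixed pressure gradient the wall shear stress u_tau^2 is fixed for both flows, and integrating the mean streamwise momentum balance twice in the wall-normal coordinate gives the flux deficit directly in terms of the Reynolds shear stress.

```latex
% Channel of half-height h, fixed pressure gradient, so u_\tau is the
% same for the laminar and turbulent flows being compared.
% Mean momentum:  \nu \, dU/dy - \overline{u'v'} = u_\tau^2 (1 - y/h).
% Integrating twice over 0 <= y <= h yields the bulk-velocity deficit:
\[
  U_b^{\mathrm{lam}} - U_b^{\mathrm{turb}}
  = \frac{1}{\nu h}\int_0^{h} (h - y)\,\bigl(-\overline{u'v'}\bigr)\,\mathrm{d}y .
\]
% Since -\overline{u'v'} \ge 0 in an uncontrolled turbulent channel, the
% turbulent flux lies below the laminar one at the same pressure gradient;
% consistent with the paper's criterion, sublaminar drag requires control
% that drives this weighted Reynolds-stress integral negative.
```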


2020, Vol. 117 (42), pp. 26091-26098
Author(s): Dixia Fan, Liu Yang, Zhicheng Wang, Michael S. Triantafyllou, George Em Karniadakis

We have demonstrated the effectiveness of reinforcement learning (RL) in bluff-body flow control problems, in both experiments and simulations, by automatically discovering active control strategies for drag reduction in turbulent flow. Specifically, we aimed to maximize the power-gain efficiency by properly selecting the rotational speeds of two small cylinders located parallel to and downstream of the main cylinder. By properly defining rewards and designing noise-reduction techniques, and after an automatic sequence of tens of towing experiments, the RL agent was shown to discover a control strategy comparable to the optimal strategy found through lengthy, systematically planned control experiments. Subsequently, these results were verified by simulations that enabled us to gain insight into the physical mechanisms of the drag-reduction process. While RL has previously been used effectively in idealized computer flow-simulation studies, this study demonstrates its effectiveness in experimental fluid mechanics and verifies it by simulations, potentially paving the way for efficient exploration of additional active flow control strategies in other complex fluid mechanics applications.
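A minimal sketch of how such a reward might be posed, assuming the agent maximizes net power gain (towing power saved by drag reduction minus power spent spinning the control cylinders) and that the noisy force measurements are smoothed before the reward is computed. The function names, signals, and normalization below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def smooth(signal, window=50):
    """Moving-average filter: a simple noise-reduction step for the
    fluctuating force signals measured during a towing experiment."""
    kernel = np.ones(window) / window
    return np.convolve(signal, kernel, mode="valid")

def power_gain_reward(drag, drag_baseline, omega, torque, u_tow):
    """Illustrative reward: towing power recovered by drag reduction
    minus mechanical power spent rotating the two control cylinders,
    normalized by the baseline towing power."""
    p_saved = (drag_baseline - np.mean(drag)) * u_tow
    p_spent = float(np.sum(np.abs(omega * torque)))
    return (p_saved - p_spent) / (drag_baseline * u_tow)

# Example with synthetic, noisy drag data (baseline ~10 N):
rng = np.random.default_rng(1)
drag_raw = 10.0 + rng.normal(scale=0.5, size=1000)
reward = power_gain_reward(smooth(drag_raw, 100), 10.0,
                           omega=np.array([2.0, -2.0]),   # rad/s
                           torque=np.array([0.1, 0.1]),   # N*m
                           u_tow=0.5)                     # m/s
```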


Author(s): Mohamed Elhawary

Deep reinforcement learning (DRL) algorithms are rapidly making inroads into fluid mechanics, following the remarkable achievements of these techniques in a wide range of science and engineering applications. In this paper, a DRL agent is employed to train an artificial neural network (ANN) using computational fluid dynamics (CFD) data to perform active flow control (AFC) around a 2-D circular cylinder. Flow control strategies are investigated at a diameter-based Reynolds number Re_D = 100 using the advantage actor-critic (A2C) algorithm by means of two symmetric plasma actuators located on the surface of the cylinder near the separation point. The DRL agent interacts with the CFD environment by manipulating the non-dimensional burst frequency (f+) of the two plasma actuators, and the time-averaged surface pressure is used as the feedback observation for the deep neural networks (DNNs). The results show that regular actuation with a constant non-dimensional burst frequency gives a maximum drag reduction of 21.8%, while the DRL agent is able to learn a control strategy that achieves a drag reduction of 22.6%. An analysis of the flow field shows that the drag reduction is accompanied by strong flow reattachment and a significant reduction in the mean velocity magnitude and velocity fluctuations in the wake region. These outcomes demonstrate the capabilities of the DRL paradigm for active flow control and pave the way toward developing robust flow control strategies for real-life applications.
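A compact sketch of the advantage actor-critic update described here, assuming a Gaussian policy over the non-dimensional burst frequency and a time-averaged surface-pressure vector as the observation. The network sizes, observation dimension, and environment interface are assumptions for illustration, not the paper's code.

```python
import torch
import torch.nn as nn

class ActorCritic(nn.Module):
    """Shared-body A2C network: the actor head outputs a Gaussian over
    the burst frequency f+, the critic head estimates the state value."""
    def __init__(self, obs_dim=64, act_dim=1):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(obs_dim, 64), nn.Tanh())
        self.mu = nn.Linear(64, act_dim)            # mean action (f+)
        self.log_std = nn.Parameter(torch.zeros(act_dim))
        self.value = nn.Linear(64, 1)

    def forward(self, obs):
        h = self.body(obs)
        return self.mu(h), self.log_std.exp(), self.value(h)

def a2c_step(model, opt, obs, action, reward, next_obs, gamma=0.99):
    """One-step A2C update: TD target from the critic, policy gradient
    weighted by the (detached) advantage, plus a value-regression loss.
    Here `reward` would come from the CFD environment, e.g. drag reduction."""
    mu, std, v = model(obs)
    with torch.no_grad():
        _, _, v_next = model(next_obs)
        target = reward + gamma * v_next
    advantage = (target - v).detach()
    logp = torch.distributions.Normal(mu, std).log_prob(action).sum(-1)
    loss = -(logp * advantage.squeeze(-1)).mean() \
           + 0.5 * ((target - v) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

# Typical wiring (illustrative):
#   model = ActorCritic()
#   opt = torch.optim.Adam(model.parameters(), lr=3e-4)
```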


2020, Vol. 26 (42), pp. 7655-7671
Author(s): Jinfeng Zou, Edwin Wang

Background: Precision medicine aims to provide customized healthcare for cancer patients. An important way to accomplish this is to stratify patients into those who may respond to a treatment and those who may not. For this purpose, diagnostic and prognostic biomarkers have been pursued.
Objective: This review focuses on novel approaches to, and concepts in, biomarker discovery in a setting where technologies are maturing and data are accumulating for precision medicine.
Results: Traditional mechanism-driven functional biomarkers have the advantage of offering actionable insights, while data-driven computational biomarkers can address a wider range of needs, especially given the tremendous amounts of data on molecules at different layers (e.g., genetic mutations, mRNA, proteins) accumulated by a variety of technologies. In addition, technology-driven liquid-biopsy biomarkers show great promise for improving patient survival. Developments in biomarker discovery along these lines are advancing the understanding of cancer, aiding the stratification of patients, and improving patient survival.
Conclusion: Current developments in mechanism-, data-, and technology-driven biomarker discovery are advancing the aims of precision medicine and promoting the clinical application of biomarkers. At the same time, the complexity of cancer calls for more effective biomarkers, which could be achieved through a comprehensive integration of multiple types of biomarkers together with a deep understanding of cancer.


Author(s): Ivan Herreros

This chapter discusses basic concepts from control theory and machine learning to facilitate a formal understanding of animal learning and motor control. It first distinguishes between feedback and feed-forward control strategies, and then introduces the classification of machine learning applications into supervised, unsupervised, and reinforcement learning problems. Next, it links these concepts with their counterparts in the psychology of animal learning, highlighting the analogies between supervised learning and classical conditioning, between reinforcement learning and operant conditioning, and between unsupervised learning and perceptual learning. Additionally, it interprets innate and acquired actions from the standpoint of feedback versus anticipatory and adaptive control. Finally, it argues that this framework for translating knowledge between formal and biological disciplines can serve not only to structure and advance our understanding of brain function, but also to enrich engineering solutions, at the level of robot learning and control, with insights coming from biology.
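The feedback versus feed-forward distinction drawn here can be captured in a few lines; this is a minimal sketch, and the proportional gain and inverse plant-gain model below are illustrative assumptions rather than the chapter's formalism.

```python
def feedback_control(setpoint, measurement, kp=1.0):
    """Reactive: the command is driven by the observed error, so it can
    only respond after a deviation has already occurred."""
    return kp * (setpoint - measurement)

def feedforward_control(setpoint, plant_gain=2.0):
    """Anticipatory: the command is computed from a model of the plant
    (here, a simple inverse gain) before any error is observed."""
    return setpoint / plant_gain
```

In the chapter's mapping, innate actions align with the fixed feedback term, while acquired actions correspond to anticipatory and adaptive control, i.e. a learned feed-forward component.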

