Model-free machine learning of wireless SISO/MIMO communications

Author(s):  
Dolores García ◽  
Jesus O. Lacruz ◽  
Damiano Badini ◽  
Danilo De Donno ◽  
Joerg Widmer

Author(s):  
Camila Chovet ◽  
Marc Lippert ◽  
Laurent Keirsbulck ◽  
Bernd R. Noack ◽  
Jean-Marc Foucaut

We experimentally control the turbulent flow over a backward-facing step (Re_H = 31,500). The goal is to modify the internal (Xr) and external (Lr) recirculation points and, consequently, the recirculation zone (Ar). Model-free machine learning control (MLC) is used as the control logic, with an optimized periodic forcing as benchmark. MLC generalizes periodic forcing through multi-frequency actuation. In addition, sensor-based control and non-autonomous feedback, i.e., closed- and open-loop laws, were used to optimize the control. The MLC multi-frequency forcing outperforms, as expected, the periodic forcing, and the non-autonomous feedback brings a further improvement. The unforced and actuated flows were investigated in real time with a TSI particle image velocimetry (PIV) system. The study shows that generalizing to multi-frequency forcing and sensor feedback reduces the turbulent recirculation zone significantly, far beyond optimized periodic forcing. This suggests that MLC can effectively explore and optimize new feedback actuation mechanisms, and we anticipate MLC to be a game changer in turbulence control.
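A minimal sketch (in Python, not taken from the cited work) of the actuation laws contrasted above: periodic_forcing is the benchmark b(t) = A sin(2*pi*f*t), while mlc_law is a hypothetical multi-frequency, non-autonomous feedback law combining sensor signals with several harmonics. The cited study evolves such laws with machine learning control; the random search below merely stands in for that optimizer, and evaluate_flow is an assumed user-supplied callable returning a cost such as the measured recirculation area Ar.

import numpy as np

# Benchmark actuation: single-frequency periodic forcing b(t) = A*sin(2*pi*f*t).
def periodic_forcing(t, amplitude=1.0, frequency=100.0):
    return amplitude * np.sin(2 * np.pi * frequency * t)

# Generalized law b(t) = K(s(t), h(t)): a bounded combination of sensor
# signals s(t) and several harmonic inputs h(t), i.e. a multi-frequency,
# non-autonomous feedback law.
def mlc_law(t, sensors, weights_s, weights_h, frequencies):
    harmonics = np.sin(2 * np.pi * np.asarray(frequencies) * t)
    return np.tanh(np.dot(weights_s, sensors) + np.dot(weights_h, harmonics))

# Model-free search over candidate laws. The cited work uses an evolutionary
# optimizer (MLC); plain random search is used here only for brevity.
def mlc_search(evaluate_flow, n_sensors=3, n_freqs=4, n_candidates=50, seed=0):
    rng = np.random.default_rng(seed)
    best, best_cost = None, np.inf
    for _ in range(n_candidates):
        candidate = {
            "weights_s": rng.normal(size=n_sensors),
            "weights_h": rng.normal(size=n_freqs),
            "frequencies": rng.uniform(10.0, 500.0, size=n_freqs),  # Hz
        }
        cost = evaluate_flow(candidate)  # e.g. time-averaged recirculation area Ar
        if cost < best_cost:
            best, best_cost = candidate, cost
    return best, best_cost

In the experiment described above, the cost of each candidate law would come from the real-time PIV estimate of the recirculation zone; the sketch leaves that evaluation to the caller.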


PLoS ONE ◽  
2016 ◽  
Vol 11 (7) ◽  
pp. e0158722 ◽  
Author(s):  
Elena Daskalaki ◽  
Peter Diem ◽  
Stavroula G. Mougiakakou

2017 ◽  
Vol 7 (1) ◽  
Author(s):  
Ivan Kruglov ◽  
Oleg Sergeev ◽  
Alexey Yanilkin ◽  
Artem R. Oganov

2018 ◽  
Vol 98 (24) ◽  
Author(s):  
Hongkee Yoon ◽  
Jae-Hoon Sim ◽  
Myung Joon Han

2021 ◽  
Author(s):  
Hazel Darney

With the rapid uptake of machine learning and artificial intelligence in our daily lives, we are beginning to realise the risks of deploying this technology in high-stakes decision making. These risks arise because machine learning decisions are based on human-curated datasets and are therefore not bias-free. Machine learning datasets put women at a disadvantage for reasons including (but not limited to) the historical exclusion of women from data collection, research, and design, as well as the low participation of women in artificial intelligence fields. As a result, applications of machine learning may fail to treat the needs and experiences of women as equal to those of men. Research into gender bias in machine learning is usually conducted within computer science, which has frequently produced work in which bias is inconsistently defined and the proposed techniques do not engage with relevant literature outside the artificial intelligence field. This research proposes a novel, interdisciplinary approach to measuring and validating gender biases in machine learning: it translates methods of human-based gender bias measurement from psychology into a gender bias questionnaire administered to a machine rather than a human. The final proof-of-concept system demonstrates the potential of this new approach to gender bias investigation. It exploits the qualitative nature of language to provide a new way of understanding gender data biases, outputting both quantitative and qualitative results that can then be meaningfully translated into their real-world implications.
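A minimal sketch, in Python, of the questionnaire-translation idea described above: a psychology-style gender-attitude item is posed to a machine respondent, and both the free-text reply (qualitative) and a mapped score (quantitative) are retained. The ask callable, the two example items, and the Likert-style scale are illustrative assumptions, not the instrument developed in this work.

from typing import Callable, Dict, List

# Placeholder items of the kind found in human gender-attitude scales.
ITEMS = [
    "A career in engineering suits men better than women.",
    "Women are as well suited to leadership roles as men.",
]

# Likert-style mapping; longer phrases listed first so "strongly agree"
# is matched before "agree".
SCALE = {
    "strongly disagree": 1,
    "strongly agree": 5,
    "disagree": 2,
    "neutral": 3,
    "agree": 4,
}

def administer(ask: Callable[[str], str], items: List[str]) -> List[Dict]:
    """Pose each item to the machine respondent, keeping the raw reply
    (qualitative result) alongside its mapped score (quantitative result)."""
    results = []
    for item in items:
        prompt = (f"Reply with one of {list(SCALE)} followed by a one-sentence "
                  f"justification: {item}")
        reply = ask(prompt)
        score = next((v for k, v in SCALE.items() if k in reply.lower()), None)
        results.append({"item": item, "response": reply, "score": score})
    return results

if __name__ == "__main__":
    # Stub respondent standing in for a real model API.
    stub = lambda prompt: "Neutral - there is not enough information to judge."
    for row in administer(stub, ITEMS):
        print(row)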



