Energy-free machine learning force field for aluminum

2017 ◽  
Vol 7 (1) ◽  
Author(s):  
Ivan Kruglov ◽  
Oleg Sergeev ◽  
Alexey Yanilkin ◽  
Artem R. Oganov

2021 ◽  
Vol 200 ◽  
pp. 110759
Author(s):  
Rafikul Islam ◽  
Md Fauzul Kabir ◽  
Saugato Rahman Dhruba ◽  
Khurshida Afroz

2018 ◽  
Vol 98 (24) ◽  
Author(s):  
Hongkee Yoon ◽  
Jae-Hoon Sim ◽  
Myung Joon Han

2021 ◽  
Author(s):  
Hazel Darney

With the rapid uptake of machine learning and artificial intelligence in our daily lives, we are beginning to realise the risks involved in deploying this technology in high-stakes decision making. The risk arises because machine learning decisions are based on human-curated datasets, and so are not bias-free. Machine learning datasets put women at a disadvantage for reasons including (but not limited to) the historical exclusion of women from data collection, research, and design, as well as the low participation of women in artificial intelligence fields. As a result, applications of machine learning may fail to treat the needs and experiences of women as equal to those of men.

Research into gender biases in machine learning is typically conducted within computer science. This has frequently produced work in which bias is inconsistently defined and the proposed techniques do not engage with relevant literature outside the artificial intelligence field. This research proposes a novel, interdisciplinary approach to the measurement and validation of gender biases in machine learning. The approach translates methods of human-based gender bias measurement from psychology into a gender bias questionnaire for use on a machine rather than a human.

The final system, presented as a proof of concept, demonstrates the potential of a new approach to gender bias investigation. It takes advantage of the qualitative nature of language to provide a new way of understanding gender data biases, outputting both quantitative and qualitative results. These results can then be meaningfully translated into their real-world implications.
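The idea of administering a psychology-style bias questionnaire to a machine can be sketched in miniature. Everything below is an illustrative assumption, not the instrument described in the abstract: the questionnaire items, the `score_association` scoring rule, the decision threshold, and the `toy_model` stand-in for the system under test are all hypothetical.

```python
# Hypothetical sketch: administer a gender bias questionnaire to a model
# under test, returning both quantitative scores and qualitative labels.
# Items pair an attribute with two gendered targets, as in classic
# association-based instruments; scores run from negative (associated
# with the second target) to positive (associated with the first).
ITEMS = [
    ("career", "he", "she"),
    ("family", "he", "she"),
    ("science", "he", "she"),
]

def score_association(attribute, target_a, target_b, model):
    """Quantitative part: signed association score from the model under test."""
    return model(attribute, target_a) - model(attribute, target_b)

def administer(model):
    """Return a numeric bias profile plus a qualitative flag per item."""
    report = {}
    for attribute, a, b in ITEMS:
        score = score_association(attribute, a, b, model)
        # Illustrative threshold for the qualitative reading of the score.
        label = "biased" if abs(score) > 0.2 else "near-neutral"
        report[attribute] = (round(score, 3), label)
    return report

# A toy stand-in for the machine under test (e.g. association strengths
# read off a language model); the numbers are invented for illustration.
def toy_model(attribute, target):
    table = {("career", "he"): 0.8, ("career", "she"): 0.3,
             ("family", "he"): 0.4, ("family", "she"): 0.5,
             ("science", "he"): 0.6, ("science", "she"): 0.55}
    return table[(attribute, target)]

print(administer(toy_model))
```

The two-part return value mirrors the abstract's point that quantitative scores alone are hard to interpret: the qualitative label per item is what gets translated into a real-world reading.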




Author(s):  
Dolores García ◽  
Jesus O. Lacruz ◽  
Damiano Badini ◽  
Danilo De Donno ◽  
Joerg Widmer

2020 ◽  
Author(s):  
Romain Gaillac ◽  
Siwar Chibani ◽  
François-Xavier Coudert

The characterization of the mechanical properties of crystalline materials is nowadays considered a routine computational task in DFT calculations. However, its high computational cost still prevents it from being used in high-throughput screening methodologies, where a cheaper estimate of the elastic properties of a material is required. In this work, we investigated the accuracy of force field calculations for the prediction of mechanical properties, and in particular for the characterization of the directional Poisson's ratio. We analyzed the behavior of about 600,000 hypothetical zeolitic structures at the classical level (a scale three orders of magnitude larger than previous studies) to highlight generic trends between mechanical properties and energetic stability. By comparing these results with DFT calculations on 991 zeolitic frameworks, we highlight the limitations of force field predictions, in particular for predicting auxeticity. We then used this reference DFT data as a training set for a machine learning algorithm, showing that it offers a way to build fast and reliable predictive models for anisotropic properties. The accuracies obtained are, in particular, much better than those of the current "cheap" screening approach, namely the use of force fields. These results are a significant improvement over previous work, owing to the more difficult nature of the properties studied, namely the anisotropic elastic response. It is also the first time such a large training set has been used for zeolitic materials.
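The workflow described above — cheap structural descriptors in, a DFT-quality anisotropic property out — can be sketched with a deliberately simple surrogate model. The descriptors (framework density, relative energy), the invented training points, and the k-nearest-neighbour regressor below are all illustrative assumptions; this excerpt does not specify which features or learning algorithm the authors used.

```python
import math

# Hypothetical sketch: learn a fast surrogate for a DFT-computed property
# (here, a minimum directional Poisson's ratio) from cheap descriptors,
# so new structures can be screened without running DFT on each one.

# (framework density in g/cm^3, relative energy in kJ/mol) -> DFT Poisson's ratio
# Values are invented for illustration only.
TRAIN = [
    ((1.4, 5.0), 0.21),
    ((1.6, 8.0), 0.18),
    ((1.9, 12.0), 0.05),
    ((2.1, 15.0), -0.02),  # negative ratio: an auxetic framework
    ((1.5, 6.0), 0.19),
]

def knn_predict(x, train, k=3):
    """Average the target property over the k nearest training structures."""
    dists = sorted((math.dist(x, xi), yi) for xi, yi in train)
    nearest = dists[:k]
    return sum(y for _, y in nearest) / k

# Screen a new hypothetical structure at negligible cost.
nu = knn_predict((2.0, 13.0), TRAIN)
print(f"predicted Poisson's ratio: {nu:.3f}")
```

In practice a surrogate trained on ~1,000 DFT reference points would use richer descriptors and a stronger regressor, but the shape of the pipeline — expensive reference data once, cheap predictions thereafter — is the same.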

