A machine-learning-based alloy design platform that enables both forward and inverse predictions for thermo-mechanically controlled processed (TMCP) steel alloys

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Jin-Woong Lee ◽  
Chaewon Park ◽  
Byung Do Lee ◽  
Joonseo Park ◽  
Nam Hoon Goo ◽  
...  

Abstract. Predicting mechanical properties such as yield strength (YS) and ultimate tensile strength (UTS) is an intricate undertaking in practice, notwithstanding a plethora of well-established theoretical and empirical models. A data-driven approach should be a fundamental exercise when making YS/UTS predictions. For this study, we collected 16 descriptors (attributes) that implicate the compositional and processing information and the corresponding YS/UTS values for 5473 thermo-mechanically controlled processed (TMCP) steel alloys. We set up an integrated machine-learning (ML) platform consisting of 16 ML algorithms to predict the YS/UTS based on the descriptors. The integrated ML platform involved regularization-based linear regression algorithms, ensemble ML algorithms, and some non-linear ML algorithms. Despite the dirty nature of most real-world industry data, we obtained acceptable holdout dataset test results such as R2 > 0.6 and MSE < 0.01 for seven non-linear ML algorithms. The seven fully trained non-linear ML models were used for the ensuing ‘inverse design (prediction)’ based on an elitist-reinforced, non-dominated sorting genetic algorithm (NSGA-II). The NSGA-II enabled us to predict solutions that exhibit desirable YS/UTS values for each ML algorithm. In addition, the NSGA-II-driven solutions in the 16-dimensional input feature space were visualized using holographic research strategy (HRS) in order to systematically compare and analyze the inverse-predicted solutions for each ML algorithm.
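To make the forward/inverse loop concrete, here is a minimal sketch of the inverse-design step using the pymoo implementation of NSGA-II over the 16-dimensional descriptor space. Everything here is a placeholder rather than the paper's actual platform: `ys_model` and `uts_model` are hypothetical stand-ins for the seven fully trained non-linear models (any sklearn-style regressor would do), and the synthetic data and unit-cube bounds merely keep the example runnable.

```python
# Sketch: NSGA-II searches the 16-D descriptor space for candidates that
# maximise surrogate-predicted YS and UTS simultaneously.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from pymoo.core.problem import Problem
from pymoo.algorithms.moo.nsga2 import NSGA2
from pymoo.optimize import minimize

# Stand-in surrogates trained on synthetic data (the paper trains seven
# non-linear models on 5473 TMCP alloys; these are placeholders only).
rng = np.random.default_rng(0)
X_syn = rng.random((500, 16))
ys_model = GradientBoostingRegressor().fit(X_syn, X_syn @ rng.random(16))
uts_model = GradientBoostingRegressor().fit(X_syn, X_syn @ rng.random(16))

class InverseAlloyDesign(Problem):
    """16 descriptors (composition + processing), 2 objectives (YS, UTS)."""

    def __init__(self, ys_model, uts_model, xl, xu):
        super().__init__(n_var=16, n_obj=2, xl=xl, xu=xu)
        self.ys_model = ys_model
        self.uts_model = uts_model

    def _evaluate(self, X, out, *args, **kwargs):
        # NSGA-II minimises, so negate the predicted strengths.
        out["F"] = np.column_stack([
            -self.ys_model.predict(X),
            -self.uts_model.predict(X),
        ])

# xl/xu would normally come from the training-data descriptor ranges.
problem = InverseAlloyDesign(ys_model, uts_model,
                             xl=np.zeros(16), xu=np.ones(16))
result = minimize(problem, NSGA2(pop_size=100), ("n_gen", 200),
                  seed=1, verbose=False)
pareto_solutions = result.X  # candidate descriptor vectors on the front
```

Each row of `pareto_solutions` is a candidate descriptor vector on the approximated Pareto front; in the paper, such solutions are then compared across the seven algorithms via holographic research strategy (HRS).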

2021 ◽  
Author(s):  
David Dempsey ◽  
Shane Cronin ◽  
Andreas Kempa-Liehr ◽  
Martin Letourneur

Sudden steam-driven eruptions at tourist volcanoes were the cause of 63 deaths at Mt Ontake (Japan) in 2014, and 22 deaths at Whakaari (New Zealand) in 2019. Warning systems that can anticipate these eruptions could provide crucial hours for evacuation or sheltering, but these require reliable forecasting. Recently, machine learning has been used to extract eruption precursors from observational data and train forecasting models. However, a weakness of this data-driven approach is its reliance on long observational records that span multiple eruptions. As many volcano datasets may only record one or no eruptions, there is a need to extend these techniques to data-poor locales.

Transfer machine learning is one approach for generalising lessons learned at data-rich volcanoes and applying them to data-poor ones. Here, we tackle two problems: (1) generalising time series features between seismic stations at Whakaari to address recording gaps, and (2) training a forecasting model for Mt Ruapehu augmented using data from Whakaari. This required that we standardise data records at different stations for direct comparisons, devise an interpolation scheme to fill in missing eruption data, and combine volcano-specific feature matrices prior to model training.

We trained a forecast model for Whakaari using tremor data from three eruptions recorded at one seismic station (WSRZ) and augmented by data from two other eruptions recorded at a second station (WIZ). First, the training data from both stations were standardised to a unit normal distribution in log space. Then, linear interpolation in feature space was used to infer missing eruption features at WSRZ. Under pseudo-prospective testing, the augmented model had similar forecasting skill to one trained using all five eruptions recorded at a single station (WIZ). However, extending this approach to Ruapehu, we saw reduced performance indicating that more work is needed in standardisation and feature selection.
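As a rough illustration of the preparation steps described above, the sketch below applies log-space standardisation and a simple gap interpolation to synthetic per-station feature matrices. The station variables and tremor-band columns (`rsam`, `mf`, `hf`) are illustrative placeholders, and the interpolation here is plain temporal gap-filling; the authors' scheme for inferring missing eruption features at WSRZ may differ.

```python
import numpy as np
import pandas as pd

# Placeholder tremor-feature matrices for two stations (illustrative).
rng = np.random.default_rng(0)
idx = pd.date_range("2019-01-01", periods=500, freq="h")
features_wiz = pd.DataFrame(rng.lognormal(size=(500, 3)), index=idx,
                            columns=["rsam", "mf", "hf"])
features_wsrz = pd.DataFrame(rng.lognormal(size=(500, 3)), index=idx,
                             columns=["rsam", "mf", "hf"])
features_wsrz.iloc[100:150] = np.nan  # simulate a recording gap

def standardise_log_space(df: pd.DataFrame) -> pd.DataFrame:
    """Map each positive-valued feature to a unit normal in log space."""
    logged = np.log(df.clip(lower=1e-12))  # guard against zeros
    return (logged - logged.mean()) / logged.std()

# 1) put both stations on a common scale
wiz_std = standardise_log_space(features_wiz)
wsrz_std = standardise_log_space(features_wsrz)

# 2) fill the recording gap by linear interpolation in feature space
wsrz_filled = wsrz_std.interpolate(method="linear", limit_direction="both")

# 3) stack the station-specific matrices into one training set
training_matrix = pd.concat([wiz_std, wsrz_filled], axis=0).sort_index()
```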


2020 ◽  
Author(s):  
Roland Stirnberg ◽  
Jan Cermak ◽  
Simone Kotthaus ◽  
Martial Haeffelin ◽  
Hendrik Andersen ◽  
...  

Abstract. Air pollution, in particular high concentrations of particulate matter smaller than 1 µm in diameter (PM1), continues to be a major health problem, and meteorology is known to substantially contribute to atmospheric PM concentrations. However, the scientific understanding of the complex mechanisms leading to high pollution episodes is inconclusive, as the effects of meteorological variables are not easy to separate and quantify. In this study, a novel, data-driven approach based on empirical relationships is used to characterise the role of meteorology on atmospheric concentrations of PM1. A tree-based machine learning model is set up to reproduce concentrations of speciated PM1 at a suburban site southwest of Paris, France, using meteorological variables as input features. The contributions of each meteorological feature to modeled PM1 concentrations are quantified using SHapley Additive exPlanations (SHAP) regression values. Meteorological contributions to PM1 concentrations are analysed in selected high-resolution case studies, contrasting season-specific processes. Model results suggest that winter pollution episodes are often driven by a combination of shallow mixed layer heights (MLH), low temperatures, low wind speeds or inflow from northeastern wind directions. Contributions of MLHs to the winter pollution episodes are quantified to be on average ~ 5 µg/m³ for MLHs below 500 m a.g.l. Temperatures below freezing initiate formation processes and increase local emissions related to residential heating, amounting to a contribution of as much as ~ 9 µg/m³. Northeasterly winds are found to contribute ~ 5 µg/m³ to total PM1 concentrations (combined effects of u- and v-wind components), by advecting particles from source regions, e.g. central Europe or the Paris region. However, in calm conditions (i.e. wind speeds
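The attribution step can be sketched as follows, assuming a gradient-boosted tree regressor as the tree-based model; the study's exact model, feature set, and observations are not reproduced, so the feature names and synthetic data here are purely illustrative. TreeSHAP then decomposes each modeled PM1 value into additive per-feature contributions, which is what allows statements like "MLHs below 500 m contribute ~5 µg/m³ on average".

```python
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingRegressor

# Illustrative meteorological features and a synthetic PM1-like target.
features = ["mlh", "temperature", "wind_speed", "u_wind", "v_wind", "rh"]
rng = np.random.default_rng(0)
X = pd.DataFrame(rng.normal(size=(1000, 6)), columns=features)
y = 10 - 2 * X["mlh"] - X["temperature"] + rng.normal(scale=0.5, size=1000)
X_train, X_test = X.iloc[:800], X.iloc[800:]
y_train = y.iloc[:800]

# Tree-based model reproducing (here: synthetic) PM1 from meteorology.
model = GradientBoostingRegressor().fit(X_train, y_train)

# TreeSHAP: additive per-feature contributions to each prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)
# shap_values[i, j] is the contribution of feature j to sample i's modeled
# concentration, relative to the mean prediction; the "mlh" column isolates
# the mixed-layer-height effect during an episode.
```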


Sensors ◽  
2021 ◽  
Vol 21 (14) ◽  
pp. 4846
Author(s):  
Dušan Marković ◽  
Dejan Vujičić ◽  
Snežana Tanasković ◽  
Borislav Đorđević ◽  
Siniša Ranđić ◽  
...  

The appearance of pest insects can lead to yield losses if farmers do not respond in a timely manner to suppress their spread. Insect occurrences and numbers can be monitored with insect traps, but this requires regular visits to inspect their condition. A more efficient approach is to install sensor devices with cameras at the traps that photograph them and forward the images to the Internet, where the appearance of pest insects is predicted by image analysis. Weather conditions, in particular air temperature and relative humidity, are parameters that affect the appearance of some pests, such as Helicoverpa armigera. This paper presents a machine learning model that can predict the appearance of insects during a season on a daily basis, taking into account the air temperature and relative humidity. Several machine learning classification algorithms were applied, and their accuracy in predicting insect occurrence is reported (up to 76.5%). Since the data used for testing were given in chronological order according to the days when the measurements were performed, the existing model was extended to take into account periods of three and five days. The extended method showed better prediction accuracy and a lower percentage of false detections. For the five-day period, the detection accuracy was 86.3%, while the percentage of false detections was 11%. The proposed machine learning model can help farmers detect the occurrence of pests and save the time and resources needed to inspect the fields.
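A minimal sketch of the windowing idea follows, assuming a daily table of temperature, relative humidity, and trap observations. All names, the synthetic data, and the choice of classifier are illustrative assumptions, not the paper's implementation. Each row of the widened feature matrix stacks the preceding days of weather so the classifier can use short-term history, and the chronological train/test split mirrors the paper's ordering by measurement day.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic season of daily weather and trap observations (illustrative).
rng = np.random.default_rng(0)
days_idx = pd.date_range("2020-04-01", periods=180, freq="D")
daily = pd.DataFrame({
    "temp": rng.normal(22, 5, size=180),
    "rh": rng.uniform(30, 90, size=180),
    "insect_present": rng.integers(0, 2, size=180),
}, index=days_idx)

def windowed_features(df: pd.DataFrame, days: int) -> pd.DataFrame:
    """Stack the last `days` days of temperature/humidity into one row."""
    parts = [df[["temp", "rh"]].shift(d).add_suffix(f"_d-{d}")
             for d in range(days)]
    return pd.concat(parts, axis=1).dropna()

X = windowed_features(daily, days=5)
y = daily["insect_present"].loc[X.index]  # daily trap observations

# Chronological split: the data are ordered by measurement day.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, shuffle=False, test_size=0.3)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("accuracy:", clf.score(X_te, y_te))
```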


2021 ◽  
Vol 428 ◽  
pp. 110074
Author(s):  
Rem-Sophia Mouradi ◽  
Cédric Goeury ◽  
Olivier Thual ◽  
Fabrice Zaoui ◽  
Pablo Tassi

2021 ◽  
Vol 8 (1) ◽  
pp. 205395172110135
Author(s):  
Florian Jaton

This theoretical paper considers the morality of machine learning algorithms and systems in the light of the biases that ground their correctness. It begins by presenting biases not as a priori negative entities but as contingent external referents—often gathered in benchmarked repositories called ground-truth datasets—that define what needs to be learned and allow for performance measures. I then argue that ground-truth datasets and their concomitant practices—that fundamentally involve establishing biases to enable learning procedures—can be described by their respective morality, here defined as the more or less accounted experience of hesitation when faced with what pragmatist philosopher William James called “genuine options”—that is, choices to be made in the heat of the moment that engage different possible futures. I then stress three constitutive dimensions of this pragmatist morality, as far as ground-truthing practices are concerned: (I) the definition of the problem to be solved (problematization), (II) the identification of the data to be collected and set up (databasing), and (III) the qualification of the targets to be learned (labeling). I finally suggest that this three-dimensional conceptual space can be used to map machine learning algorithmic projects in terms of the morality of their respective and constitutive ground-truthing practices. Such techno-moral graphs may, in turn, serve as equipment for greater governance of machine learning algorithms and systems.


Author(s):  
Edgar A. Martínez-García ◽  
Nancy Ávila Rodríguez ◽  
Ricardo Rodríguez-Jorge ◽  
Jolanta Mizera-Pietraszko ◽  
Jaichandar Kulandaidaasan Sheba ◽  
...  
