Revolutionizing Membrane Design Using Machine Learning-Bayesian Optimization

Author(s):  
Haiping Gao ◽  
Shifa Zhong ◽  
Wenlong Zhang ◽  
Thomas Igou ◽  
Eli Berger ◽  
...  


Foods ◽  
2021 ◽  
Vol 10 (4) ◽  
pp. 763
Author(s):  
Ran Yang ◽  
Zhenbo Wang ◽  
Jiajia Chen

Mechanistic modeling has been a useful tool for helping food scientists understand complicated microwave-food interactions, but it cannot be used directly by food developers for product design because it is resource-intensive. This study developed and validated an integrated approach that couples mechanistic modeling with machine learning to achieve efficient food product design (thickness optimization) with better heating uniformity. The mechanistic model, which incorporates electromagnetics and heat transfer, had been developed and extensively validated previously and was used directly in this study. A Bayesian optimization machine-learning algorithm was developed and integrated with the mechanistic model. The integrated approach was validated by comparing its optimization performance with that of a parametric sweep approach based solely on mechanistic modeling. The results showed that the integrated approach could robustly optimize the thickness of differently shaped products using different initial training datasets, with higher efficiency (45.9% to 62.1% improvement) than the parametric sweep approach. Three rectangular trays with one optimized thickness (1.56 cm) and two non-optimized thicknesses (1.20 and 2.00 cm) were 3D printed and used in microwave heating experiments, which confirmed the feasibility of the integrated approach for thickness optimization. The integrated approach can be further developed and extended as a platform for efficiently designing complicated microwavable foods with multiple-parameter optimization.
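
A rough illustration of how such an integration might be wired together is sketched below: a Gaussian-process Bayesian optimization loop over tray thickness, using scikit-optimize. This is not the authors' code; run_mechanistic_model (standing in for the electromagnetics/heat-transfer simulation, returning a heating non-uniformity index to minimize) and the 1.0-2.5 cm design range are assumed placeholders.

```python
# Minimal sketch: Bayesian optimization of tray thickness against a mechanistic model.
# run_mechanistic_model() is a hypothetical stand-in for the validated simulation.
from skopt import gp_minimize
from skopt.space import Real

def objective(params):
    thickness_cm = params[0]
    # Lower value = more uniform heating (hypothetical simulation call).
    return run_mechanistic_model(thickness_cm)

result = gp_minimize(
    objective,
    dimensions=[Real(1.0, 2.5, name="thickness_cm")],  # assumed design range
    n_initial_points=5,   # initial training dataset for the surrogate
    n_calls=20,           # total simulation budget
    random_state=0,
)
print("Optimized thickness (cm):", result.x[0])
```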


Fuels ◽  
2021 ◽  
Vol 2 (3) ◽  
pp. 286-303
Author(s):  
Vuong Van Pham ◽  
Ebrahim Fathi ◽  
Fatemeh Belyadi

The success of machine learning (ML) techniques implemented in different industries relies heavily on operator expertise and domain knowledge, which are used to manually choose an algorithm and set its parameters for a given problem. Because model selection and parameter tuning are manual, it is impossible to quantify or evaluate the quality of the process, which in turn limits the ability to perform comparison studies between different algorithms. In this study, we propose a new hybrid approach for developing machine learning workflows that supports automated algorithm selection and hyperparameter optimization. The proposed approach provides a robust, reproducible, and unbiased workflow that can be quantified and validated using different scoring metrics. We compared the workflows most commonly used when applying artificial intelligence (AI) and ML to engineering problems, including grid/random search, Bayesian search and optimization, and genetic programming, with our new hybrid approach, which integrates the Tree-based Pipeline Optimization Tool (TPOT) with Bayesian optimization. The performance of each workflow is quantified using scoring metrics such as Pearson correlation (R2) and mean squared error (MSE). For this purpose, actual field data from 1567 gas wells in the Marcellus Shale, with 121 features covering reservoir, drilling, completion, stimulation, and operation, were tested using the proposed workflows. The new hybrid workflow was then used to evaluate the type well used for evaluating Marcellus Shale gas production. In conclusion, our automated hybrid approach showed significant improvement over the other workflows on both scoring metrics. The new hybrid approach provides a practical tool that supports automated model and hyperparameter selection; it was tested using real field data and can be applied to a range of engineering problems that use artificial intelligence and machine learning. The new hybrid model was also tested in a real field and compared with conventional type wells developed by field engineers. The type well of the field is very close to the field's P50 predictions, which reflects the success of the completion designs performed by field engineers. It also shows that the field's average production could have been improved by 8% if shorter cluster spacing and higher proppant loading per cluster had been used during the frac jobs.
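
A minimal sketch of what a TPOT-plus-Bayesian-optimization workflow can look like is given below; it is not the authors' pipeline. X_train, y_train, X_test, and y_test stand for already-loaded well features and production targets (assumed), and the GradientBoostingRegressor with the listed search ranges is purely an illustrative choice.

```python
# Minimal sketch: genetic-programming pipeline search (TPOT) followed by Bayesian
# hyperparameter search (scikit-optimize) on an illustrative candidate estimator.
from tpot import TPOTRegressor
from skopt import BayesSearchCV
from skopt.space import Integer, Real
from sklearn.ensemble import GradientBoostingRegressor

# Step 1: automated algorithm/pipeline selection via genetic programming.
tpot = TPOTRegressor(generations=5, population_size=20, scoring="r2", random_state=0)
tpot.fit(X_train, y_train)
print(tpot.fitted_pipeline_)

# Step 2: Bayesian optimization of hyperparameters for a selected estimator.
search = BayesSearchCV(
    GradientBoostingRegressor(random_state=0),
    {
        "n_estimators": Integer(100, 1000),
        "learning_rate": Real(1e-3, 0.3, prior="log-uniform"),
        "max_depth": Integer(2, 8),
    },
    n_iter=30, cv=5, scoring="r2",
)
search.fit(X_train, y_train)
print(search.best_params_, search.score(X_test, y_test))
```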


2021 ◽  
Vol 7 (1) ◽  
Author(s):  
Flore Mekki-Berrada ◽  
Zekun Ren ◽  
Tan Huang ◽  
Wai Kuan Wong ◽  
Fang Zheng ◽  
...  

In materials science, the discovery of recipes that yield nanomaterials with defined optical properties is costly and time-consuming. In this study, we present a two-step framework for a machine learning-driven high-throughput microfluidic platform to rapidly produce silver nanoparticles with the desired absorbance spectrum. Combining a Gaussian process-based Bayesian optimization (BO) with a deep neural network (DNN), the algorithmic framework is able to converge towards the target spectrum after sampling 120 conditions. Once the dataset is large enough to train the DNN with sufficient accuracy in the region of the target spectrum, the DNN is used to predict the colour palette accessible with the reaction synthesis. While remaining interpretable by humans, the proposed framework efficiently optimizes the nanomaterial synthesis and can extract fundamental knowledge of the relationship between chemical composition and optical properties, such as the role of each reactant on the shape and amplitude of the absorbance spectrum.
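
A hedged sketch of the Bayesian-optimization half of such a framework appears below (the DNN colour-palette step is omitted). synthesize_and_measure, the three reactant flow-rate ranges, and the target-spectrum file are assumed placeholders rather than the platform's actual interface.

```python
# Minimal sketch: GP-based BO that steers synthesis conditions toward a target spectrum.
import numpy as np
from skopt import gp_minimize
from skopt.space import Real

target_spectrum = np.load("target_spectrum.npy")  # assumed target absorbance curve

def spectral_loss(flow_rates):
    # Hypothetical call to the microfluidic platform returning an absorbance spectrum
    # sampled on the same wavelength grid as the target.
    measured = synthesize_and_measure(flow_rates)
    return float(np.mean((measured - target_spectrum) ** 2))

space = [Real(0.1, 10.0, name=f"reactant_{i}_flow") for i in range(3)]  # assumed reactants
result = gp_minimize(spectral_loss, space, n_calls=120, random_state=0)  # 120 conditions
print("Best conditions:", result.x, "spectral loss:", result.fun)
```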


2021 ◽  
Author(s):  
Theresa Reiker ◽  
Monica Golumbeanu ◽  
Andrew Shattock ◽  
Lydia Burgert ◽  
Thomas A. Smith ◽  
...  

Individual-based models have become important tools in the global battle against infectious diseases, yet model complexity can make calibration to biological and epidemiological data challenging. We propose a novel approach to calibrate disease transmission models via a Bayesian optimization framework employing machine learning emulator functions to guide a global search over a multi-objective landscape. We demonstrate our approach by application to an established individual-based model of malaria, optimizing over a high-dimensional parameter space with respect to a portfolio of multiple fitting objectives built from datasets capturing the natural history of malaria transmission and disease progression. Outperforming other calibration methodologies, the new approach quickly reaches an improved final goodness of fit. Per-objective parameter importance and sensitivity diagnostics provided by our approach offer epidemiological insights and enhance trust in predictions through greater interpretability.
One Sentence Summary: We propose a novel, fast, machine learning-based approach to calibrate disease transmission models that outperforms other methodologies.
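
The sketch below illustrates the general idea of emulator-guided calibration under stated assumptions: a Gaussian-process emulator is refit after each simulation, and a lower-confidence-bound rule proposes the next parameter set. run_transmission_model, the 10-dimensional unit-scaled parameter space, and the budgets are hypothetical placeholders, not the paper's actual multi-objective setup.

```python
# Minimal sketch: GP emulator guiding a global search over model parameters.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)
dim = 10                                    # assumed parameter-space dimension
X = rng.uniform(size=(30, dim))             # initial design (unit-scaled parameters)
y = np.array([run_transmission_model(x) for x in X])  # hypothetical loss per point

for _ in range(50):                         # assumed simulation budget
    emulator = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(X, y)
    candidates = rng.uniform(size=(5000, dim))
    mu, sigma = emulator.predict(candidates, return_std=True)
    x_next = candidates[np.argmin(mu - 2.0 * sigma)]   # lower-confidence-bound rule
    X = np.vstack([X, x_next])
    y = np.append(y, run_transmission_model(x_next))

print("Best-fitting parameters:", X[np.argmin(y)])
```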


2021 ◽  
Vol 23 (2) ◽  
pp. 359-370
Author(s):  
Michał Matuszczak ◽  
Mateusz Żbikowski ◽  
Andrzej Teodorczyk

The article proposes an approach based on deep learning and machine learning models to predict component failure as an enhancement of a condition-based maintenance scheme for a turbofan engine, and reviews prognostics approaches currently used in the aviation industry. A component degradation scale representing life consumption is proposed, and the collected condition data are combined with engine sensor and environmental data. Using data manipulation techniques, a framework for model training is created, and the models' hyperparameters are obtained through Bayesian optimization. The models predict the continuous variable representing condition from the input data. The best-performing model is identified by its score on the holdout set. The deep learning models achieved an MSE of 0.71 (an ensemble meta-model of neural networks) and significantly outperformed the machine learning models, whose best score was 1.75. The deep learning models thus demonstrated their feasibility for predicting component condition to within less than one unit of error on the rank scale.
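
As a hedged illustration of Bayesian hyperparameter optimization in this kind of prognostics setting, the sketch below tunes a small scikit-learn neural-network regressor with scikit-optimize; X and y stand for the (assumed) combined sensor, environmental, and degradation data, and the architecture and search ranges are illustrative only.

```python
# Minimal sketch: Bayesian search over neural-network hyperparameters for condition prediction.
from skopt import BayesSearchCV
from skopt.space import Integer, Real
from sklearn.neural_network import MLPRegressor

search = BayesSearchCV(
    MLPRegressor(hidden_layer_sizes=(128, 64), max_iter=2000, random_state=0),
    {
        "alpha": Real(1e-6, 1e-2, prior="log-uniform"),              # L2 regularization
        "learning_rate_init": Real(1e-4, 1e-1, prior="log-uniform"),
        "batch_size": Integer(16, 256),
    },
    n_iter=30, cv=5, scoring="neg_mean_squared_error",
)
search.fit(X, y)  # X, y: assumed feature matrix and condition labels
print(search.best_params_, -search.best_score_)  # best cross-validated MSE
```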


Electronics ◽  
2019 ◽  
Vol 8 (11) ◽  
pp. 1267 ◽  
Author(s):  
Yonghoon Kim ◽  
Mokdong Chung

In machine learning, performance is of great value. However, each learning process requires much time and effort to set each parameter. The critical problem in machine learning is determining the hyperparameters, such as the learning rate, mini-batch size, and regularization coefficient. In particular, we focus on the learning rate, which is directly related to learning efficiency and performance. Bayesian optimization using a Gaussian process is common for this purpose. In this paper, building on Bayesian optimization, we attempt to optimize the hyperparameters automatically by using a Gamma distribution, instead of a Gaussian distribution, to improve the training performance for image discrimination. As a result, the proposed method proves more reasonable and efficient in estimating the learning rate during training and can be useful in machine learning.
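
For contrast with the paper's Gamma-based proposal (which is not reproduced here), the sketch below shows the conventional Gaussian-process baseline: Bayesian optimization of the learning rate over a log-uniform range. train_and_evaluate is a hypothetical function that trains the model with a given learning rate and returns the validation loss.

```python
# Minimal sketch: standard GP-based Bayesian optimization of the learning rate.
from skopt import gp_minimize
from skopt.space import Real

result = gp_minimize(
    lambda params: train_and_evaluate(params[0]),  # hypothetical training/validation run
    dimensions=[Real(1e-5, 1e-1, prior="log-uniform", name="learning_rate")],
    n_calls=25,       # assumed tuning budget
    random_state=0,
)
print("Best learning rate:", result.x[0])
```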

