Physics-Based Gaussian Process Method for Predicting Average Product Lifetime in Design Stage

Author(s):  
Xinpeng Wei ◽  
Daoru Han ◽  
Xiaoping Du

Abstract The average lifetime, or mean time to failure (MTTF), of a product is an important metric of product reliability. Current methods of evaluating the MTTF are mainly based on statistics or data: they require lifetime testing on a number of products to obtain lifetime samples, which are then used to estimate the MTTF. Lifetime testing, however, is expensive in terms of both time and cost. Its efficiency is also low because it cannot be effectively incorporated into the early design stage, where many physics-based models are available. We propose to predict the MTTF in the design stage by means of a physics-based Gaussian process method. Since physics-based models are usually computationally demanding, we face a problem with both big data (on the model input side) and small data (on the model output side). The proposed adaptive supervised training method with Gaussian process regression can quickly predict the MTTF with a minimal number of calls to the physics-based models. The proposed method enables the design to be continually improved by changing design variables until reliability measures, including the MTTF, are satisfied. The effectiveness of the method is demonstrated by three examples.
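As a generic illustration of the adaptive idea described above (a sketch, not the authors' exact algorithm), the code below trains a small Gaussian process on a toy stand-in for an expensive physics-based lifetime model and repeatedly adds the candidate input with the largest predictive variance, so the expensive model is called as few times as possible. The function `lifetime_model`, the grid, and the length-scale are all hypothetical choices.

```python
import numpy as np

# Toy stand-in for an expensive physics-based lifetime model (hypothetical).
def lifetime_model(x):
    return np.sin(3 * x) + x

def rbf(a, b, ls=0.3):
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def gp_fit_predict(X, y, Xs, noise=1e-8):
    # Standard GP posterior mean and variance with a unit-variance RBF prior.
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(Xs, X)
    mean = Ks @ np.linalg.solve(K, y)
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
    return mean, np.maximum(var, 0.0)

# Adaptive loop: evaluate the expensive model only where the GP is least sure.
grid = np.linspace(0, 2, 200)
X = np.array([0.0, 1.0, 2.0])
y = lifetime_model(X)
for _ in range(5):
    mean, var = gp_fit_predict(X, y, grid)
    x_new = grid[np.argmax(var)]
    X = np.append(X, x_new)
    y = np.append(y, lifetime_model(x_new))

mean, var = gp_fit_predict(X, y, grid)
# Crude stand-in for the MTTF: average predicted lifetime over a uniform grid.
mttf_estimate = mean.mean()
```

With only eight model calls, the GP mean tracks the toy lifetime model closely and the remaining predictive variance is small everywhere on the grid.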


Algorithms ◽  
2021 ◽  
Vol 14 (6) ◽  
pp. 177
Author(s):  
Edina Chandiwana ◽  
Caston Sigauke ◽  
Alphonce Bere

Probabilistic solar power forecasting has become critical in Southern Africa because of major power shortages over the past decade, driven by climatic changes and other factors. This paper discusses Gaussian process regression (GPR) coupled with core vector regression for short-term hourly global horizontal irradiance (GHI) forecasting. GPR is a powerful Bayesian non-parametric regression method that works well for small data sets and quantifies the uncertainty in its predictions. The choice of the kernel that characterises the covariance function is a crucial issue in GPR. In this study, we adopt the minimum enclosing ball (MEB) technique. The MEB improves the forecasting power of GPR because a smaller ball means a shorter training time, keeping performance robust. Forecasting of real-time data was done at two South African radiometric stations: Stellenbosch University (SUN), in a coastal area of the Western Cape Province, and the University of Venda (UNV) station, in the Limpopo Province. Variables were selected using the least absolute shrinkage and selection operator via hierarchical interactions. A Bayesian approach with informative priors was used for parameter estimation. Based on the root mean square error, mean absolute error, and percentage bias, the results showed that the GPR model gives more accurate predictions than gradient boosting and support vector regression models, making this study a useful tool for decision-makers and system operators in power utility companies. The main contribution of this paper is the use of a GPR model coupled with the core vector methodology to forecast GHI using South African data. To the best of our knowledge, this is the first application of GPR coupled with core vector regression in which the minimum enclosing ball is applied to GHI data.
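The abstract stresses that kernel choice is crucial in GPR. One common, generic way to compare candidate kernels or their hyperparameters (not the paper's MEB/core-vector procedure) is the log marginal likelihood; the sketch below, on a synthetic periodic signal standing in for hourly GHI, selects the RBF length-scale with the highest evidence. All values here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 4, 60)                          # toy time axis
y = np.sin(np.pi * t) + 0.1 * rng.standard_normal(60)  # period-2 toy "GHI" signal

def rbf(a, b, ls):
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ls) ** 2)

def log_marginal_likelihood(X, y, ls, noise=0.01):
    # Standard GP evidence: data fit + complexity penalty + constant.
    K = rbf(X, X, ls) + noise * np.eye(len(X))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return (-0.5 * y @ alpha
            - np.sum(np.log(np.diag(L)))
            - 0.5 * len(X) * np.log(2 * np.pi))

# Pick the length-scale (a kernel hyperparameter) with the highest evidence:
# too short overfits noise, too long cannot follow the signal.
candidates = [0.05, 0.3, 1.0]
scores = [log_marginal_likelihood(t, y, ls) for ls in candidates]
best_ls = candidates[int(np.argmax(scores))]
```

The evidence automatically trades data fit against model complexity, so the intermediate length-scale wins on this signal.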


Author(s):  
Ria Novita Suwandani ◽  
Yogo Purwono

This study aims to calculate the allowance for losses by applying Gaussian process regression to estimate future claims. Modeling is done on motor vehicle insurance data. The data used in this study are historical data from PT XYZ's motor vehicle insurance business line from January 2017 to December 2019. The 2017-2019 data are analysed to obtain estimates of the claim reserves for the following years, 2018-2020. This study uses the chain-ladder method, the most popular loss reserving method in both theory and practice. The estimation results show that the Gaussian process regression method is very flexible and can be applied without much adjustment; these results were also compared with those of the chain-ladder method. Using the chain-ladder method, the estimated claim reserves require the company to set aside 8,997,979,222 IDR for 2017, 16,194,503,605 IDR for 2018, and 1,719,764,520 IDR for 2019. Using the Bayesian Gaussian process method, the company must set aside 9,060,965,077 IDR for 2017, 16,307,865,130 IDR for 2018, and 1,731,802,871 IDR for 2019, making the Bayesian Gaussian process method the more conservative of the two. Motor vehicle insurance claims have a short development time, so this line is classified as a short-tail type of business.
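For readers unfamiliar with the chain-ladder method mentioned above, here is a minimal sketch on a made-up 3x3 cumulative run-off triangle (illustrative figures, not PT XYZ's data): development factors are estimated column by column, the lower-right of the triangle is projected, and the reserve is the projected ultimate claims minus the claims paid to date.

```python
import numpy as np

# Toy cumulative claims triangle: rows = accident years, cols = development years.
tri = np.array([
    [100.0, 150.0, 165.0],
    [110.0, 170.0, np.nan],
    [120.0, np.nan, np.nan],
])
n = tri.shape[0]

# Volume-weighted development factors from the observed part of the triangle.
factors = []
for j in range(n - 1):
    rows = ~np.isnan(tri[:, j + 1])
    factors.append(tri[rows, j + 1].sum() / tri[rows, j].sum())

# Project the unobserved lower-right cells with the development factors.
full = tri.copy()
for i in range(n):
    for j in range(n - 1):
        if np.isnan(full[i, j + 1]):
            full[i, j + 1] = full[i, j] * factors[j]

# Reserve per accident year = projected ultimate minus latest observed cumulative.
latest = np.array([tri[i, n - 1 - i] for i in range(n)])
reserve = full[:, -1] - latest
```

On this toy triangle the oldest accident year is fully developed (zero reserve), while the most recent year carries most of the required reserve.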


2018 ◽  
Vol 885 ◽  
pp. 18-31 ◽  
Author(s):  
Paul Gardner ◽  
Timothy J. Rogers ◽  
Charles Lord ◽  
Rob J. Barthorpe

Efficient surrogate modelling of computer models (herein defined as simulators) becomes increasingly important as more complex simulators and non-deterministic methods, such as Monte Carlo simulations, are utilised. This is especially true in large multidimensional design spaces. In order for these technologies to be feasible in an early design stage context, the surrogate model (or emulator) must create an accurate prediction of the simulator in the proposed design space. Gaussian processes (GPs) are a powerful non-parametric Bayesian approach that can be used as emulators. The probabilistic framework means that predictive distributions are inferred, providing an understanding of the uncertainty introduced by replacing the simulator with an emulator, known as code uncertainty. An issue with GPs is that they have a computational complexity of O(N^3) (where N is the number of data points), which can be reduced to O(NM^2) by using various sparse approximations calculated from a subset of inducing points (where M is the number of inducing points). This paper explores the use of sparse Gaussian process emulators as a computationally efficient method for creating surrogate models of structural dynamics simulators. Discussions on the performance of these methods are presented along with comments regarding key applications to the early design stage.
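A minimal sketch of the O(NM^2) idea, using a subset-of-regressors-style sparse approximation with M inducing points (one common sparse scheme; the paper's exact approximation may differ). The simulator here is a toy stand-in, and all sizes and hyperparameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulator(x):
    # Toy stand-in for an expensive structural dynamics simulator.
    return np.sin(5 * x) * np.exp(-x)

def rbf(a, b, ls=0.2):
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ls) ** 2)

N, M, noise = 500, 20, 1e-4
X = rng.uniform(0, 2, N)
y = simulator(X) + np.sqrt(noise) * rng.standard_normal(N)
Z = np.linspace(0, 2, M)                     # M inducing inputs, M << N

Kmm = rbf(Z, Z) + 1e-8 * np.eye(M)           # M x M inducing covariance
Knm = rbf(X, Z)                              # N x M cross-covariance
A = noise * Kmm + Knm.T @ Knm                # forming this costs O(N M^2)
w = np.linalg.solve(A, Knm.T @ y)            # solving costs only O(M^3)

Xs = np.linspace(0, 2, 100)
pred = rbf(Xs, Z) @ w                        # sparse posterior mean
```

The cubic cost is paid only in M, so the emulator stays cheap even as the number of simulator runs N grows.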


2020 ◽  
Author(s):  
Marc Philipp Bahlke ◽  
Natnael Mogos ◽  
Jonny Proppe ◽  
Carmen Herrmann

Heisenberg exchange spin coupling between metal centers is essential for describing and understanding the electronic structure of many molecular catalysts, metalloenzymes, and molecular magnets for potential application in information technology. We explore the machine-learnability of exchange spin coupling, which has not yet been studied. We employ Gaussian process regression since it can potentially deal with small training sets (as likely associated with the rather complex molecular structures required for exploring spin coupling) and since it provides uncertainty estimates (“error bars”) along with predicted values. We compare a range of descriptors and kernels for 257 small dicopper complexes and find that a simple descriptor based on chemical intuition, consisting only of copper-bridge angles and copper-copper distances, clearly outperforms several more sophisticated descriptors when it comes to extrapolating towards larger, experimentally relevant complexes. Exchange spin coupling is similarly easy to learn as the polarizability, while learning dipole moments is much harder. The strength of the sophisticated descriptors lies in their ability to linearize structure-property relationships, to the point that a simple linear ridge regression performs just as well as the kernel-based machine-learning model for our small dicopper data set. The superior extrapolation performance of the simple descriptor is unique to exchange spin coupling, reinforcing the crucial role of choosing a suitable descriptor, and highlighting the interesting question of the role of chemical intuition vs. systematic or automated selection of features for machine learning in chemistry and materials science.
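As a generic illustration of the descriptor-plus-kernel setup discussed above (synthetic data, not the 257-complex set), the sketch below feeds a two-feature "chemical intuition" descriptor, a bridge angle and a metal-metal distance, into kernel ridge regression. The feature ranges, target relationship, and hyperparameters are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(2)
angles = rng.uniform(80, 110, 120)        # toy Cu-bridge-Cu angles, degrees
dists = rng.uniform(2.8, 3.4, 120)        # toy Cu-Cu distances, angstrom
# Scale both features to roughly [-1, 1] so one kernel length-scale suffices.
X = np.column_stack([(angles - 95) / 15, (dists - 3.1) / 0.3])
# Synthetic coupling "J" with a simple linear trend plus noise (cm^-1).
J = 50 * X[:, 0] - 20 * X[:, 1] + rng.normal(0, 1, 120)

def rbf(a, b, ls=1.0):
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls**2)

# Kernel ridge regression: RBF kernel plus a diagonal regularization term.
Xtr, Xte, ytr, yte = X[:100], X[100:], J[:100], J[100:]
K = rbf(Xtr, Xtr) + 1e-2 * np.eye(100)
alpha = np.linalg.solve(K, ytr)
pred = rbf(Xte, Xtr) @ alpha
rmse = np.sqrt(np.mean((pred - yte) ** 2))
```

Because the synthetic structure-property relationship is nearly linear in the scaled descriptor, a plain linear ridge fit would do about as well here, mirroring the abstract's observation.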


2018 ◽  
Author(s):  
Caitlin C. Bannan ◽  
David Mobley ◽  
A. Geoff Skillman

A variety of fields would benefit from accurate pKa predictions, especially drug design, due to the effect a change in ionization state can have on a molecule's physicochemical properties. Participants in the recent SAMPL6 blind challenge were asked to submit predictions for microscopic and macroscopic pKas of 24 drug-like small molecules. We recently built a general model for predicting pKas using a Gaussian process regression trained on physical and chemical features of each ionizable group. Our pipeline takes a molecular graph and uses the OpenEye Toolkits to calculate features describing the removal of a proton. These features are fed into a Scikit-learn Gaussian process to predict microscopic pKas, which are then used to analytically determine macroscopic pKas. Our Gaussian process is trained on a set of 2,700 macroscopic pKas from monoprotic and select diprotic molecules. Here, we share our results for microscopic and macroscopic predictions in the SAMPL6 challenge. Overall, we ranked in the middle of the pack compared to other participants, but our fairly good agreement with experiment is still promising, considering the challenge molecules are chemically diverse and often polyprotic while our training set is predominantly monoprotic. Of particular importance to us when building this model was to include an uncertainty estimate, based on the chemistry of the molecule, that would reflect the likely accuracy of our prediction. Our model reports large uncertainties for the molecules that appear to have chemistry outside our domain of applicability, along with good agreement in quantile-quantile plots, indicating it can predict its own accuracy. The challenge highlighted a variety of means to improve our model, including adding more polyprotic molecules to our training set and more carefully considering which functional groups we do or do not identify as ionizable.
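The uncertainty behaviour described above, larger error bars for chemistry outside the domain of applicability, is intrinsic to GP regression and can be sketched with a toy one-dimensional feature (the real model uses OpenEye-derived features; everything below is illustrative).

```python
import numpy as np

def rbf(a, b, ls=1.0):
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ls) ** 2)

# Toy training set: one chemical feature vs. a made-up pKa-like response.
X = np.linspace(0, 5, 30)
y = 7 + np.sin(X)
K = rbf(X, X) + 1e-6 * np.eye(30)

def predict(xs):
    # GP posterior mean and standard deviation (unit-variance RBF prior).
    Ks = rbf(xs, X)
    mean = Ks @ np.linalg.solve(K, y - 7) + 7
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
    return mean, np.sqrt(np.maximum(var, 0.0))

_, sd_in = predict(np.array([2.5]))       # inside the training range
_, sd_out = predict(np.array([12.0]))     # far outside the training range
```

Inside the training range the predictive standard deviation is near zero, while far outside it reverts to the prior, exactly the "large uncertainties outside the domain of applicability" the abstract describes.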


2019 ◽  
Vol 150 (4) ◽  
pp. 041101 ◽  
Author(s):  
Iakov Polyak ◽  
Gareth W. Richings ◽  
Scott Habershon ◽  
Peter J. Knowles
