A Time-Dependent Reliability Estimation Method Based on Gaussian Process Regression

Author(s):  
Wang Han ◽  
Xiaoling Zhang ◽  
Xiesi Huang ◽  
Haiqing Li

This paper presents a time-dependent reliability estimation method for engineering systems based on machine learning and simulation methods. Due to the stochastic nature of environmental loads and internal excitations, the physics of failure for mechanical systems is complex, and it is challenging to include uncertainties in the physical modeling of failure over an engineered system's life cycle. In this paper, an efficient time-dependent reliability assessment framework for mechanical systems is proposed using a machine learning algorithm that accounts for stochastic dynamic loads. First, the stochastic external loads of the mechanical system are analyzed and a finite element model is established. Second, the physics of the failure mode at each time instant is analyzed, and the distribution of the response realizations under each load condition is calculated. The distribution of fatigue life is then obtained from high-cycle fatigue theory. To reduce the computational cost, a machine learning algorithm that integrates uniform design and Gaussian process regression is used for the physical modeling of failure. The probabilistic fatigue life of a gear transmission system under different load conditions can then be calculated, and the time-varying reliability of the mechanical system further evaluated. Finally, numerical examples and a fatigue reliability estimation of a gear transmission system are presented to demonstrate the effectiveness of the proposed method.
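
The surrogate step described above can be sketched as follows: a Gaussian process is fitted on a small design-of-experiments sample so that expensive finite element fatigue analyses can be replaced by cheap predictions with uncertainty. This is a minimal illustration, not the authors' implementation; the load variables, data values, and kernel settings are hypothetical.

```python
# Minimal sketch: fit a Gaussian process mapping load conditions to fatigue
# life. Data and variable choices are hypothetical placeholders.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# Hypothetical training sample from a uniform design: (torque N*m, speed rpm)
# and log10 fatigue lives obtained from finite element analyses.
X_train = np.array([[100.0, 1500.0], [150.0, 1800.0], [200.0, 1200.0],
                    [250.0, 2000.0], [300.0, 1600.0]])
y_train = np.log10([2.1e6, 9.5e5, 4.0e5, 1.8e5, 9.0e4])  # log10(cycles)

kernel = ConstantKernel(1.0) * RBF(length_scale=[50.0, 500.0])
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gpr.fit(X_train, y_train)

# Predict fatigue life (with uncertainty) at a new load condition,
# replacing a further expensive finite element run.
mean, std = gpr.predict(np.array([[180.0, 1700.0]]), return_std=True)
print(f"predicted life: 10^{mean[0]:.2f} cycles (+/- {std[0]:.2f} in log10)")
```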

Author(s):  
Sachin Dev Suresh ◽  
Ali Qasim ◽  
Bhajan Lal ◽  
Syed Muhammad Imran ◽  
Khor Siak Foo

The production of oil and natural gas contributes a significant amount of revenue to Malaysia, strengthening the country's economy. The flow assurance industry faces impediments to the smooth operation of transmission pipelines, of which gas hydrate formation is the most important: it disrupts normal pipeline operation by plugging the line. Under high-pressure and low-temperature conditions, gas hydrates form crystalline structures consisting of a network of hydrogen-bonded host water molecules enclosing guest molecules of the incoming gases. Industry uses different types of chemical inhibitors in pipelines to suppress hydrate formation. To complement these strategies, machine learning has been introduced as part of risk management. The objective of this paper is to utilize a Machine Learning (ML) model, Gaussian Process Regression (GPR), a new approach applied here to mitigate gas hydrate growth. The input parameters are the concentration and pressure of Carbon Dioxide (CO2) and Methane (CH4) gas hydrates, and the output parameter is the Average Depression Temperature (ADT). The parameter values are taken from available data sets, and prediction accuracy is assessed in terms of the Coefficient of Determination (R²) and Mean Squared Error (MSE). The results show that the GPR model provided the highest R² values for the training and testing data, 97.25% and 96.71%, respectively; its MSE values for the training and testing data, 0.019 and 0.023, respectively, were also the lowest.
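
A minimal sketch of the workflow described above: scikit-learn's GPR predicts ADT from gas concentration and pressure, with R² and MSE reported on training and testing splits as in the paper. The data here are synthetic placeholders, not the study's dataset.

```python
# Sketch of GPR predicting average depression temperature (ADT) from
# concentration and pressure; data below are synthetic placeholders.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.uniform([0.1, 2.0], [5.0, 20.0], size=(60, 2))   # [conc wt%, pressure MPa]
y = 0.8 * X[:, 0] + 0.1 * X[:, 1] + rng.normal(0, 0.1, 60)  # synthetic ADT (K)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
gpr = GaussianProcessRegressor(normalize_y=True).fit(X_tr, y_tr)

# Report the paper's two metrics on both splits.
for name, Xs, ys in [("train", X_tr, y_tr), ("test", X_te, y_te)]:
    pred = gpr.predict(Xs)
    print(name, "R2:", r2_score(ys, pred), "MSE:", mean_squared_error(ys, pred))
```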


2019 ◽  
Vol 11 (4) ◽  
pp. 168781401983987
Author(s):  
Wei Peng ◽  
Xiesi Huang ◽  
Xiaoling Zhang ◽  
Liyong Ni ◽  
Shengguang Zhu

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Faezeh Akhavizadegan ◽  
Javad Ansarifar ◽  
Lizhi Wang ◽  
Isaiah Huber ◽  
Sotirios V. Archontoulis

Abstract: The performance of crop models in simulating various aspects of the cropping system is sensitive to parameter calibration. Parameter estimation is challenging, especially for time-dependent parameters such as cultivar parameters with a 2–3 year lifespan. Manual calibration of the parameters is time-consuming, requires expertise, and is prone to error. This research develops a new automated framework to estimate time-dependent parameters for crop models using a parallel Bayesian optimization algorithm. This approach integrates the power of optimization and machine learning with prior agronomic knowledge. To test the proposed time-dependent parameter estimation method, we simulated historical yield increase (from 1985 to 2018) in 25 environments in the US Corn Belt with APSIM. Then we compared yield simulation results and nine parameter estimates from our proposed parallel Bayesian framework with Bayesian optimization and manual calibration. Results indicated that parameters calibrated using the proposed framework achieved an 11.6% reduction in prediction error over Bayesian optimization and a 52.1% reduction over manual calibration. We also trained nine machine learning models for yield prediction and found that none of them was able to outperform the proposed method in terms of root mean square error and R². The most significant contribution of the new automated framework for time-dependent parameter estimation is its capability to find close-to-optimal parameters for the crop model. The proposed approach also produced explainable insight into trends in cultivar traits over 34 years (1985–2018).
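
As a rough illustration of the calibration loop (not the authors' parallel framework), scikit-optimize's gp_minimize can drive a crop model wrapper toward parameters that minimize yield RMSE against observations. The `run_apsim` wrapper, parameter names, bounds, and data below are hypothetical.

```python
# Sketch of Bayesian-optimization calibration of crop model parameters.
# `run_apsim` is a hypothetical stand-in for an actual APSIM run.
import numpy as np
from skopt import gp_minimize

observed_yield = np.array([8.1, 8.6, 9.0, 9.4])  # placeholder data (t/ha)

def run_apsim(params):
    # Hypothetical surrogate for an APSIM simulation with two cultivar
    # parameters: radiation use efficiency and thermal time to flowering.
    rue, tt_flowering = params
    return rue * 4.0 + tt_flowering * 0.002 + np.array([0.0, 0.5, 0.9, 1.3])

def loss(params):
    # Root mean square error between simulated and observed yields.
    return float(np.sqrt(np.mean((run_apsim(params) - observed_yield) ** 2)))

result = gp_minimize(loss, dimensions=[(1.0, 2.5), (600.0, 900.0)],
                     n_calls=30, random_state=0)
print("calibrated parameters:", result.x, "RMSE:", result.fun)
```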


2021 ◽  
Author(s):  
Redouane Lguensat ◽  
Julie Deshayes ◽  
Venkatramani Balaji

A major cause of earth system model discrepancies results from processes that are missing or incorrectly represented in the model's equations. Despite the increasing number of collected observations, reducing parametric uncertainties remains an enormous challenge.

The process of relying on experience and intuition to find good sets of parameters, commonly referred to as "parameter tuning", keeps a central role in the roadmaps followed by the dozens of modeling groups involved in community efforts such as the Coupled Model Intercomparison Project (CMIP).

In this work, we study a tool from the Uncertainty Quantification community that has recently started to draw attention in climate modeling: History Matching, also referred to as "Iterative Refocussing".

The core idea of History Matching is to run several simulations with different sets of parameters and then use observed data to rule out any parameter settings that are "implausible". Since climate simulation models are computationally heavy and do not allow testing every possible parameter setting, we employ an emulator as a cheap and accurate replacement. Here, a machine learning algorithm, namely Gaussian Process Regression, is used for the emulation step. History Matching is thus a good example of how recent advances in machine learning can be of high interest to climate modeling.

We investigate History Matching on a toy model, the two-layer Lorenz96, and share our findings about the challenges and opportunities of using this technique. We also discuss its use for realistic ocean models such as NEMO.
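
A minimal sketch of the ruling-out step: a GP emulator is trained on a few simulator runs, and the standard implausibility measure I(θ) = |E[f(θ)] − z| / √(Var[f(θ)] + Var[z]) is thresholded at the conventional cutoff of 3. The toy simulator, observation, and parameter range below are placeholders.

```python
# History-matching sketch: emulate an expensive simulator with a GP and
# rule out implausible parameter settings. All data here are synthetic.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

# Emulator trained on a handful of expensive simulator runs.
theta_train = np.linspace(0.0, 10.0, 8).reshape(-1, 1)   # parameter samples
f_train = np.sin(theta_train).ravel()                     # simulator output
emulator = GaussianProcessRegressor(normalize_y=True).fit(theta_train, f_train)

z_obs, var_obs = 0.7, 0.01 ** 2   # observation and its error variance

theta_grid = np.linspace(0.0, 10.0, 1000).reshape(-1, 1)
mean, std = emulator.predict(theta_grid, return_std=True)

# Implausibility: standardized distance between emulator mean and the data,
# accounting for both emulator and observation uncertainty.
implausibility = np.abs(mean - z_obs) / np.sqrt(std ** 2 + var_obs)
not_ruled_out = theta_grid[implausibility < 3.0]
print(f"{len(not_ruled_out)} of {len(theta_grid)} settings remain plausible")
```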


2021 ◽  
Author(s):  
Nicolas Leseur ◽  
Alfredo Mendez ◽  
Muhammad Zeeshan Baig ◽  
Pierre-Olivier Goiran

Abstract: A practical example of a theory-guided data science case study is presented to evaluate the potential of the Diyab formation, an Upper Jurassic interval and source rock for some of the largest reservoirs in the Arabian Peninsula. A workflow based on a three-step approach combining the physics of logging tool response with a probabilistic machine-learning algorithm was undertaken to evaluate four wells of the prospect. First, a core-calibrated multi-mineral model was established on a concept well for which an extensive suite of logs and core measurements had been acquired. To transfer the knowledge gained from this physics-driven interpretation onto the other, data-scarce wells, the relationship between the output rock and fluid volumes and their input log responses was then learned by means of Gaussian Process Regression (GPR). Finally, once trained on the key well, the probabilistic algorithm was deployed on the three remaining wells to predict reservoir properties, quantify resource potential, and estimate volumetric-related uncertainties. The physics-informed machine-learning approach introduced in this work was found to provide results that match the majority of the available core data, while discrepancies could generally be explained by laminations whose thickness is below the resolution of nuclear logs. Overall, the GPR approach enables an efficient transfer of knowledge from data-rich key wells to other, data-scarce wells. As opposed to a more conventional formation evaluation carried out independently of the key well, the present approach ensures that the final petrophysical interpretation reflects and benefits from the insights and physics-driven coherency achieved at the key well location.
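
The transfer step can be sketched as follows, assuming the key-well interpretation provides paired log responses and interpreted volumes. The log names, values, and single porosity target below are illustrative stand-ins for the multi-mineral outputs.

```python
# Sketch of transferring a key-well interpretation to a data-scarce well
# via GPR. Log curves and values are hypothetical placeholders.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.preprocessing import StandardScaler

# Key well: log responses (RHOB, NPHI, GR) -> porosity from the
# core-calibrated multi-mineral model.
logs_key = np.array([[2.45, 0.12, 85.0], [2.55, 0.08, 60.0],
                     [2.35, 0.18, 110.0], [2.60, 0.05, 45.0],
                     [2.40, 0.15, 95.0]])
poro_key = np.array([0.10, 0.05, 0.16, 0.03, 0.13])

scaler = StandardScaler().fit(logs_key)
gpr = GaussianProcessRegressor(normalize_y=True)
gpr.fit(scaler.transform(logs_key), poro_key)

# Data-scarce well: predict porosity with uncertainty from logs alone.
logs_new = np.array([[2.50, 0.10, 75.0]])
mean, std = gpr.predict(scaler.transform(logs_new), return_std=True)
print(f"predicted porosity: {mean[0]:.3f} +/- {std[0]:.3f}")
```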


2018 ◽  
Author(s):  
C.H.B. van Niftrik ◽  
F. van der Wouden ◽  
V. Staartjes ◽  
J. Fierstra ◽  
M. Stienen ◽  
...  

2020 ◽  
pp. 1-12
Author(s):  
Li Dongmei

English text-to-speech (TTS) conversion is a key topic in modern computing research. Its difficulty lies in the large errors that arise during text-to-speech feature recognition, which make it hard to deploy English TTS algorithms in practical systems. To improve the efficiency of English text-to-speech conversion, this article builds on a machine learning approach: after the original voice waveform is labeled with pitch marks, the rhythm is modified through PSOLA (pitch-synchronous overlap-add), and the C4.5 algorithm is used to train a decision tree for judging the pronunciation of polyphones. To evaluate the performance of pronunciation discrimination based on part-of-speech rules and HMM-based prosody hierarchy prediction in speech synthesis systems, this study constructed a system model. The waveform stitching method and PSOLA are used to synthesize the sound. For words whose main stress cannot be discriminated from morphological structure, labels can be learned by machine learning methods. Finally, this study evaluates the performance of the algorithm through control experiments. The results show that the proposed algorithm performs well and has practical value.
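
The polyphone-disambiguation step might look like the sketch below. Note that scikit-learn implements CART rather than C4.5, so the entropy criterion is used here only as an approximation of C4.5's information-gain splitting; the context features and labels are hypothetical.

```python
# Sketch of a decision tree for polyphone pronunciation, approximating
# C4.5 with CART's entropy criterion. Features/labels are hypothetical.
from sklearn.tree import DecisionTreeClassifier

# Hypothetical encoded context: [POS of previous word, POS of next word,
# position in sentence], with the pronunciation class as the label.
X = [[1, 2, 0], [1, 3, 1], [2, 2, 0], [3, 1, 1], [2, 3, 0], [3, 2, 1]]
y = [0, 0, 1, 1, 0, 1]  # 0/1 = two pronunciations of the same written word

tree = DecisionTreeClassifier(criterion="entropy", max_depth=3).fit(X, y)
print("predicted pronunciation class:", tree.predict([[2, 1, 0]])[0])
```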


2020 ◽  
pp. 1-11
Author(s):  
Jie Liu ◽  
Lin Lin ◽  
Xiufang Liang

The online English teaching system places certain requirements on intelligent scoring, and the most difficult stage of intelligent scoring in English testing is scoring compositions with an intelligent model. To improve the intelligence of English composition scoring, this study builds on machine learning algorithms and combines them with intelligent image recognition, proposing an improved MSER-based character candidate region extraction algorithm and a convolutional neural network-based pseudo-character region filtering algorithm. To verify that the proposed model meets the requirements of composition text, that is, to verify the feasibility of the algorithm, the performance of the model is analyzed through designed experiments, with the basic conditions for composition scoring fed into the model as constraints. The results show that the proposed algorithm has practical value and can be applied to English assessment systems such as online homework evaluation.
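
A minimal sketch of the MSER candidate-extraction step with OpenCV follows. Since the paper's CNN filter is not specified, a simple geometric check stands in for the pseudo-character filtering here, and the input file name is hypothetical.

```python
# Sketch of MSER-based character candidate extraction; a geometric check
# stands in for the paper's CNN pseudo-character filter.
import cv2

image = cv2.imread("composition_page.png")          # hypothetical scanned page
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

mser = cv2.MSER_create()
regions, bboxes = mser.detectRegions(gray)

# Keep candidates with character-like geometry (placeholder for the CNN
# classifier that rejects pseudo-character regions in the paper).
candidates = [(x, y, w, h) for (x, y, w, h) in bboxes
              if 0.1 < w / h < 2.0 and 8 < h < 100]
print(f"{len(candidates)} character candidates kept of {len(bboxes)}")
```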


Author(s):  
Kunal Parikh ◽  
Tanvi Makadia ◽  
Harshil Patel

Dengue is unquestionably one of the biggest health concerns in India and many other developing countries, and it has, unfortunately, cost many people their lives. Every year, approximately 390 million dengue infections occur around the world, among which about 500,000 people are seriously infected and roughly 25,000 die. Many factors can drive dengue transmission, such as temperature, humidity, precipitation, and inadequate public health infrastructure. In this paper, we propose a method to perform predictive analytics on a dengue dataset using KNN (k-nearest neighbours), a machine-learning algorithm. This analysis would help predict future cases and could save many lives.
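
A minimal sketch of the proposed approach, assuming the task is framed as classifying outbreak risk from weather features; the feature set, labels, and data below are synthetic placeholders, not the paper's dataset.

```python
# Sketch of KNN-based predictive analytics on dengue-related features.
# All data here are synthetic placeholders.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)
# Hypothetical features: [temperature degC, humidity %, precipitation mm]
X = rng.uniform([20, 40, 0], [35, 95, 200], size=(200, 3))
y = (0.05 * X[:, 0] + 0.02 * X[:, 1] + 0.01 * X[:, 2]
     + rng.normal(0, 0.5, 200) > 4.5).astype(int)  # synthetic outbreak label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
knn = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)
print("test accuracy:", knn.score(X_te, y_te))
```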

