Utilizing Natural Frequency Monitoring and Machine Learning to Monitor and Predict Structural Integrity and Minimize the Cost of Fixed Offshore Platform Intervention

2020 ◽  
Author(s):  
Anthony Falsetta ◽  
Elaine Whiteley ◽  
Craig Dickinson ◽  
George Zhou ◽  
Shankar Sundararaman


2018 ◽
Vol 147 ◽  
pp. 05002
Author(s):  
Ricky L. Tawekal ◽  
Faisal D. Purnawarman ◽  
Yati Muliati

In the risk-based underwater inspection (RBUI) method, platforms with a higher risk level require more intensive inspection than those with a lower risk level. However, the probability of failure (PoF) evaluation in the RBUI method is usually carried out in a semi-quantitative way, by comparing failure parameters associated with the same damage mechanism across a group of platforms located in the same area. RBUI is therefore not effective for platforms spread across distant areas, where the failure parameters associated with the same damage mechanism may differ. The existing standard, the American Petroleum Institute Recommended Practice for Structural Integrity Management of Fixed Offshore Structures (API RP 2SIM), is limited to general instructions for determining the risk value of a platform and does not provide detailed guidance on how to determine its probability of failure (PoF). In this paper, the PoF is determined quantitatively by calculating a structural reliability index based on the structural collapse failure mode; the resulting method for determining the inspection schedule is therefore called Risk-Reliability Based Underwater Inspection (RReBUI). Models of a three-leg jacket fixed offshore platform in the Java Sea and a four-leg jacket fixed offshore platform in the Natuna Sea are used to study the implementation of RReBUI.
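As an illustration of the quantitative step described above, the following minimal sketch converts a structural reliability index β into a probability of failure using the standard normal relation PoF = Φ(−β). The numbers are illustrative; this is not the paper's own computation.

# Minimal sketch (not from the paper): mapping a structural reliability
# index to a probability of failure, as commonly done in reliability-based methods.
from scipy.stats import norm

def probability_of_failure(beta: float) -> float:
    """PoF = Phi(-beta), where beta is the structural reliability index."""
    return norm.cdf(-beta)

# Example: a reliability index of 3.0 corresponds to a PoF of about 1.35e-3.
print(probability_of_failure(3.0))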


Author(s):  
Mehrdad Kimiaei ◽  
Jalal Mirzadeh ◽  
Partha Dev ◽  
Mike Efthymiou ◽  
Riaz Khan

Fixed offshore platforms subject to wave-in-deck loading have historically encountered challenges in meeting target reliability levels. This has often resulted in costly subsea remediation, impacted platform occupancy levels or premature decommissioning of critical structural assets due to safety concerns. This paper addresses this long-standing industry challenge by presenting a novel structural reliability approach that converges the analytical behavior of a structure to its measured dynamic response for assessment. In this approach, called the Structural Integrity Management (SIM) TRIAD method, the platform model is calibrated against the in-field platform natural frequencies measured by a structural health monitoring (SHM) system, so that the reliability assessment can be performed on a structural model whose stiffness is simulated as closely to reality as possible. The methodology demonstrates the potential to unlock structural capacity of offshore structures by removing conservatism normally associated with traditional reliability assessment methods, thus significantly improving the ability to achieve target structural reliability levels in a cost-effective manner. The SIM TRIAD method has been implemented in the assessment of an existing fixed offshore platform subject to wave-in-deck loads, located in East Malaysian waters. It has enabled the facility operator to achieve acceptable target structural reliability and has assisted in developing an optimized risk-based inspection (RBI) plan for ensuring safe operations to the end of the asset's field life. The methodology and findings of the assessment are presented in this paper to illustrate the benefits of the SIM TRIAD method.
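The calibration step at the heart of this approach can be illustrated with a toy single-degree-of-freedom model: a global stiffness scale factor is tuned until the model's natural frequency matches the frequency measured by the SHM system. The sketch below is a hypothetical illustration with assumed mass, stiffness and frequency values, not the SIM TRIAD implementation.

# Toy sketch of frequency-based model calibration (not the SIM TRIAD code):
# tune a global stiffness scale factor so a simple model's natural frequency
# matches the frequency measured by the monitoring system.
import numpy as np
from scipy.optimize import minimize_scalar

m = 2.0e6            # effective mass [kg] (assumed)
k_nominal = 8.0e8    # nominal lateral stiffness [N/m] (assumed)
f_measured = 3.4     # measured natural frequency [Hz] (assumed)

def natural_frequency(k_scale: float) -> float:
    # f = sqrt(k/m) / (2*pi) for a single-degree-of-freedom oscillator
    return np.sqrt(k_scale * k_nominal / m) / (2.0 * np.pi)

def mismatch(k_scale: float) -> float:
    return (natural_frequency(k_scale) - f_measured) ** 2

result = minimize_scalar(mismatch, bounds=(0.5, 1.5), method="bounded")
print(f"calibrated stiffness scale factor: {result.x:.3f}")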


Polymers ◽  
2021 ◽  
Vol 13 (3) ◽  
pp. 353
Author(s):  
Kun-Cheng Ke ◽  
Ming-Shyan Huang

Conventional methods for assessing the quality of components mass-produced using injection molding are expensive and time-consuming, or rely on imprecise statistical process control parameters. A suitable alternative is to employ machine learning to classify the quality of parts by using quality indices and quality grading. In this study, we used a multilayer perceptron (MLP) neural network along with a few quality indices to accurately predict whether the geometric shapes of a finished product are “qualified” or “unqualified”. These quality indices, which exhibit a strong correlation with part quality, were extracted from pressure curves and input into the MLP model for learning and prediction. By filtering outliers from the input data and converting the measured quality into quality grades used as output data, we increased the prediction accuracy of the MLP model and classified the quality of finished parts into various quality levels. The MLP model may misjudge data points in the “to-be-confirmed” area, which lies between the “qualified” and “unqualified” areas. We therefore identified the “to-be-confirmed” area, and only the quality of products falling in this area was evaluated further, which reduced the cost of quality control considerably. An integrated circuit tray was manufactured to experimentally demonstrate the feasibility of the proposed method.
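A minimal sketch of this classification scheme is shown below, assuming synthetic stand-ins for the pressure-curve quality indices: an MLP is trained on the indices, and predictions whose class probability is not decisive are routed to a "to-be-confirmed" band for further evaluation. The thresholds and feature set are hypothetical, not taken from the paper.

# Hypothetical sketch of the quality-classification idea (not the authors' code):
# an MLP is trained on pressure-curve quality indices, and predictions whose
# class probability is indecisive fall into a "to-be-confirmed" band.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                    # stand-in quality indices
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)    # 1 = qualified, 0 = unqualified (synthetic)

clf = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0).fit(X, y)

proba = clf.predict_proba(X)[:, 1]
labels = np.where(proba > 0.8, "qualified",
         np.where(proba < 0.2, "unqualified", "to-be-confirmed"))
print(dict(zip(*np.unique(labels, return_counts=True))))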


2021 ◽  
Vol 3 (1) ◽  
Author(s):  
Zhikuan Zhao ◽  
Jack K. Fitzsimons ◽  
Patrick Rebentrost ◽  
Vedran Dunjko ◽  
Joseph F. Fitzsimons

Machine learning has recently emerged as a fruitful area for finding potential quantum computational advantage. Many of the quantum-enhanced machine learning algorithms critically hinge upon the ability to efficiently produce states proportional to high-dimensional data points stored in a quantum-accessible memory. Even given query access to exponentially many entries stored in a database, the construction of which is considered a one-off overhead, it has been argued that the cost of preparing such amplitude-encoded states may offset any exponential quantum advantage. Here we prove, using smoothed analysis, that if the data analysis algorithm is robust against small entry-wise input perturbations, state preparation can always be achieved with a constant number of queries. This criterion is typically satisfied in realistic machine learning applications, where input data are subject to moderate noise. Our results are equally applicable to the recent seminal progress in quantum-inspired algorithms, where specially constructed databases suffice for polylogarithmic classical algorithms in low-rank cases. The consequence of our finding is that, for the purpose of practical machine learning, polylogarithmic processing time is possible under a general and flexible input model with quantum algorithms or, in the low-rank cases, quantum-inspired classical algorithms.
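As a purely classical illustration of the objects discussed above (not the paper's algorithm), the sketch below shows amplitude encoding as normalization of a data vector to unit ℓ2 norm, and the kind of small entry-wise perturbation the robustness criterion refers to.

# Classical illustration only: amplitude encoding maps a data vector x to a
# state with amplitudes x / ||x||_2; the robustness criterion concerns small
# entry-wise perturbations of x.
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=8)                      # a data point with 2^3 entries
amplitudes = x / np.linalg.norm(x)          # amplitudes of the encoded state

epsilon = 1e-3
x_perturbed = x + epsilon * rng.normal(size=x.shape)     # entry-wise noise
amplitudes_perturbed = x_perturbed / np.linalg.norm(x_perturbed)

print(np.linalg.norm(amplitudes - amplitudes_perturbed))  # deviation stays small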


2020 ◽  
Vol 7 (1) ◽  
Author(s):  
Miles L. Timpe ◽  
Maria Han Veiga ◽  
Mischa Knabenhans ◽  
Joachim Stadel ◽  
Stefano Marelli

In the late stages of terrestrial planet formation, pairwise collisions between planetary-sized bodies act as the fundamental agent of planet growth. These collisions can lead to either growth or disruption of the bodies involved and are largely responsible for shaping the final characteristics of the planets. Despite their critical role in planet formation, an accurate treatment of collisions has yet to be realized. While semi-analytic methods have been proposed, they remain limited to a narrow set of post-impact properties and have only achieved relatively low accuracies. However, the rise of machine learning and access to increased computing power have enabled novel data-driven approaches. In this work, we show that data-driven emulation techniques are capable of classifying and predicting the outcome of collisions with high accuracy and are generalizable to any quantifiable post-impact quantity. In particular, we focus on the dataset requirements, training pipeline, and classification and regression performance for four distinct data-driven techniques from machine learning (ensemble methods and neural networks) and uncertainty quantification (Gaussian processes and polynomial chaos expansion). We compare these methods to existing analytic and semi-analytic methods. Such data-driven emulators are poised to replace the methods currently used in N-body simulations, while avoiding the cost of direct simulation. This work is based on a new set of 14,856 SPH simulations of pairwise collisions between rotating, differentiated bodies at all possible mutual orientations.
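The emulation idea can be sketched with an ensemble classifier trained to map pre-impact parameters to a collision outcome class. The example below uses synthetic data and illustrative feature names (mass ratio, impact angle, impact velocity), not the paper's 14,856 SPH simulations.

# Illustrative emulator sketch (synthetic data, not the paper's SPH dataset):
# classify collision outcomes from pre-impact parameters with an ensemble method.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n = 2000
X = np.column_stack([
    rng.uniform(1.0, 10.0, n),      # total colliding mass (illustrative units)
    rng.uniform(0.1, 1.0, n),       # projectile-to-target mass ratio
    rng.uniform(0.0, 90.0, n),      # impact angle [deg]
    rng.uniform(1.0, 3.0, n),       # impact velocity / mutual escape velocity
])
# synthetic outcome label: 0 = accretion, 1 = erosion/disruption
y = ((X[:, 3] > 2.0) & (X[:, 2] < 45.0)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
print(f"held-out accuracy: {model.score(X_te, y_te):.3f}")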


2021 ◽  
Vol 6 (11) ◽  
pp. 157
Author(s):  
Gonçalo Pereira ◽  
Manuel Parente ◽  
João Moutinho ◽  
Manuel Sampaio

Decision support and optimization tools used in construction often require an accurate estimation of cost variables to maximize their benefit. Heavy machinery is traditionally one of the greatest costs to consider, mainly due to fuel consumption. These typically diesel-powered machines show great variability in fuel consumption depending on the scenario of utilization. This paper describes the creation of a framework for estimating the fuel consumption of construction trucks as a function of the carried load, the slope, the distance, and the pavement type. A more accurate estimation will increase the benefit of these optimization tools. The fuel consumption estimation model was developed using machine learning (ML) algorithms supported by data gathered through several sensors, in a specially designed datalogger with wireless communication and opportunistic synchronization, in a real-context experiment. The results demonstrated the viability of the method, providing important insight into the advantages of combining sensorization and machine learning models in a real-world construction setting. Ultimately, this study comprises a significant step towards IoT implementation from a Construction 4.0 viewpoint, especially considering its potential for real-time and digital twin applications.
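The estimation model described above can be sketched as a regression over the four stated predictors. The example below uses a random forest on synthetic data with hypothetical units and an assumed encoding of pavement type; it illustrates the approach rather than reproducing the authors' model.

# Sketch of the fuel-consumption regression idea (synthetic data, illustrative only):
# predict fuel use from carried load, slope, distance and pavement type.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(3)
n = 1000
load = rng.uniform(0, 30, n)          # carried load [t] (assumed range)
slope = rng.uniform(-8, 8, n)         # grade [%]
distance = rng.uniform(0.5, 10, n)    # trip distance [km]
pavement = rng.integers(0, 3, n)      # 0 = paved, 1 = gravel, 2 = dirt (assumed encoding)

# synthetic target with noise, only to make the sketch runnable
fuel = 0.3 * distance * (1 + 0.02 * load + 0.05 * np.maximum(slope, 0) + 0.1 * pavement)
fuel += rng.normal(0, 0.1, n)

X = np.column_stack([load, slope, distance, pavement])
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, fuel)
print(model.predict([[20.0, 3.0, 5.0, 1]]))   # estimated litres for one hypothetical trip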


2021 ◽  
Vol 3 (2) ◽  
pp. 43-50
Author(s):  
Safa SEN ◽  
Sara Almeida de Figueiredo

Predicting bank failures has been an essential subject in the literature due to the significance of banks for the economic prosperity of a country. Acting as intermediaries in the economy, banks channel funds between creditors and debtors. In that sense, banks are considered the backbone of economies; hence, it is important to create early warning systems that distinguish insolvent banks from solvent ones, so that insolvent banks can apply for assistance and avoid bankruptcy in financially turbulent times. In this paper, we focus on two different machine learning disciplines for predicting bank failures: boosting and cost-sensitive methods. Boosting methods are widely used in the literature due to their strong predictive capability. Cost-Sensitive Forest, however, is relatively new to the literature and was originally invented to solve class-imbalance problems in software defect detection. Our results show that, compared to the boosting methods, Cost-Sensitive Forest classifies failed banks more accurately. We therefore suggest using Cost-Sensitive Forest when predicting bank failures with imbalanced datasets.
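The cost-sensitive idea can be illustrated with a stand-in example (not the CSForest algorithm used in the paper): on an imbalanced synthetic dataset, misclassifying the rare "failed bank" class is penalised more heavily via class weights, which typically raises recall on that class.

# Stand-in illustration of cost-sensitive learning on imbalanced data
# (not the Cost-Sensitive Forest algorithm itself): penalise misclassifying
# the rare "failed bank" class more heavily via class weights.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=3000, weights=[0.95, 0.05], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

plain = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
costed = RandomForestClassifier(class_weight={0: 1, 1: 10}, random_state=0).fit(X_tr, y_tr)

# recall on the minority "failed" class is the quantity of interest here
print("plain recall:", recall_score(y_te, plain.predict(X_te)))
print("cost-weighted recall:", recall_score(y_te, costed.predict(X_te)))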


Polymers ◽  
2021 ◽  
Vol 13 (18) ◽  
pp. 3100
Author(s):  
Anusha Mairpady ◽  
Abdel-Hamid I. Mourad ◽  
Mohammad Sayem Mozumder

The selection of nanofillers and compatibilizing agents, and their size and concentration, are always considered crucial in the design of durable nanobiocomposites with maximized mechanical properties (i.e., fracture strength (FS), yield strength (YS), Young’s modulus (YM), etc.). Statistical optimization of the key design factors has therefore become extremely important to minimize the experimental runs and the cost involved. In this study, both statistical techniques (i.e., analysis of variance (ANOVA) and response surface methodology (RSM)) and machine learning techniques (i.e., artificial neural networks (ANN) and genetic algorithms (GA)) were used to optimize the concentrations of nanofillers and compatibilizing agents in the injection-molded HDPE nanocomposites. Initially, through ANOVA, the concentrations of TiO2 and cellulose nanocrystals (CNCs), and their combinations, were found to be the major factors in improving the durability of the HDPE nanocomposites. The data were then modeled and predicted using RSM, ANN, and their combinations with a genetic algorithm (i.e., RSM-GA and ANN-GA). Finally, to minimize the risk of converging to a local optimum, an ANN-GA hybrid technique was implemented to optimize multiple responses, capturing the nonlinear relationship between the factors (i.e., the concentrations of TiO2 and CNCs) and the responses (i.e., FS, YS, and YM) with minimum error and regression values above 95%.
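The ANN-GA hybrid described above can be sketched in two stages: an MLP surrogate is fitted to map filler concentrations to a mechanical property, and a small genetic algorithm then searches the surrogate for the best concentrations. The data, concentration ranges and response below are synthetic and hypothetical, not the study's measurements.

# Toy sketch of an ANN-GA hybrid (synthetic data, hypothetical ranges; not the
# authors' model): an MLP surrogate maps filler concentrations to a property,
# and a minimal genetic algorithm searches the surrogate for the optimum.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(4)
X = rng.uniform([0.0, 0.0], [5.0, 5.0], size=(300, 2))   # wt% TiO2, wt% CNC (assumed ranges)
y = 30 + 4 * X[:, 0] - 0.6 * X[:, 0] ** 2 + 3 * X[:, 1] - 0.5 * X[:, 1] ** 2  # synthetic strength

ann = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=5000, random_state=0).fit(X, y)

# minimal genetic algorithm over the two concentrations
pop = rng.uniform([0.0, 0.0], [5.0, 5.0], size=(40, 2))
for _ in range(60):
    fitness = ann.predict(pop)
    parents = pop[np.argsort(fitness)[-20:]]                                   # keep fittest half
    children = parents[rng.integers(0, 20, 20)] + rng.normal(0, 0.2, (20, 2))  # mutate copies
    pop = np.vstack([parents, np.clip(children, 0.0, 5.0)])

best = pop[np.argmax(ann.predict(pop))]
print(f"suggested concentrations (TiO2, CNC): {best.round(2)}")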

