Machine Learning Prediction and Reliability Analysis Applied to Subsea Spool and Jumper Design

2021 ◽  
Author(s):  
Mengdi Song ◽  
Massyl Gheroufella ◽  
Paul Chartier

Abstract In subsea pipeline projects, the design of rigid spools and jumpers can be a challenging and time-consuming task. The selected spool layout for connecting the pipelines to the subsea structures, including the number of bends and the leg lengths, must offer enough flexibility to accommodate the pipeline thermal expansion, the pipe-lay target box, and the misalignments associated with the post-lay survey metrology and spool fabrication. The analysis results are considerably affected by the many uncertainties involved. Consequently, a very large number of calculations is required to assess the full combination of uncertainties and to capture the worst-case scenario. Rather than applying a deterministic solution, this paper uses machine learning prediction to significantly improve the efficiency of the design process. In addition, thanks to the fast predictive model built with machine learning algorithms, uncertainty quantification and propagation analysis using probabilistic statistical methods becomes feasible in terms of CPU time and can be incorporated into the design process to evaluate the reliability of the outputs. The latter allows us to perform a systematic probabilistic design that accepts a certain probability of failure, for example as per the DNVGL design code. The machine learning predictive modelling and the reliability analysis based upon the probability distributions of the uncertainties are introduced and explained in this paper. Several project examples are shown to highlight the method's comprehensive nature and efficiency.
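The two-step workflow the abstract describes, a fast surrogate standing in for expensive structural analyses, then Monte Carlo uncertainty propagation through it, can be sketched in a few lines. Everything below is a toy illustration: the linear stress formula, the uncertainty distributions, and the allowable-stress limit are invented stand-ins, not the paper's models or the DNVGL criteria.

```python
import random

random.seed(0)

# Hypothetical "expensive" spool stress model (an invented linear formula,
# not the paper's): stress rises with pipeline thermal expansion and with
# metrology/fabrication misalignment.
def expensive_stress_model(expansion_mm, misalignment_mm):
    return 1.8 * expansion_mm + 0.9 * misalignment_mm + 50.0

# Step 1: replace the expensive model with a cheap surrogate.  Because the
# toy model is linear, three evaluations recover it exactly; a real project
# would train a regression or neural-network surrogate on many FE runs.
c0 = expensive_stress_model(0.0, 0.0)
a = expensive_stress_model(1.0, 0.0) - c0
b = expensive_stress_model(0.0, 1.0) - c0

def surrogate(expansion_mm, misalignment_mm):
    return a * expansion_mm + b * misalignment_mm + c0

# Step 2: Monte Carlo uncertainty propagation through the fast surrogate,
# counting how often the predicted stress exceeds an assumed code limit.
ALLOWABLE_STRESS = 260.0          # assumed limit, placeholder only
N = 100_000
failures = 0
for _ in range(N):
    e = random.uniform(80.0, 120.0)   # thermal expansion uncertainty
    m = random.gauss(20.0, 5.0)       # misalignment uncertainty
    if surrogate(e, m) > ALLOWABLE_STRESS:
        failures += 1
pof = failures / N                # estimated probability of failure
```

The point of the surrogate is that the 100,000 evaluations above take milliseconds, whereas the same loop over a finite-element model would be infeasible.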

2021 ◽  
Vol 11 (15) ◽  
pp. 6787
Author(s):  
Jože M. Rožanec ◽  
Blaž Kažič ◽  
Maja Škrjanc ◽  
Blaž Fortuna ◽  
Dunja Mladenić

Demand forecasting is a crucial component of demand management, directly impacting manufacturing companies' planning, revenues, and actors throughout the supply chain. We evaluate 21 baseline, statistical, and machine learning algorithms for forecasting smooth and erratic demand in a real-world use case. The product data were obtained from a European original equipment manufacturer targeting the global automotive industry market. Our research shows that global machine learning models outperform local models. We show that the forecast errors of global models can be constrained by pooling product data based on past demand magnitude. We also propose a set of metrics and criteria for a comprehensive assessment of demand forecasting models' performance.
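The smooth/erratic distinction the abstract draws is commonly made from two statistics of the demand series: the squared coefficient of variation of non-zero demand sizes (CV²) and the average inter-demand interval (ADI). A minimal sketch, assuming the widely used Syntetos-Boylan cut-offs (0.49 and 1.32) rather than anything specific to this paper:

```python
from statistics import mean, pstdev

def classify_demand(series):
    """Classify a demand series from CV^2 of non-zero demand sizes and the
    average inter-demand interval (ADI).  Cut-offs 0.49 / 1.32 are the
    standard Syntetos-Boylan values, assumed here for illustration."""
    nonzero = [x for x in series if x > 0]
    adi = len(series) / len(nonzero)              # mean gap between demands
    cv2 = (pstdev(nonzero) / mean(nonzero)) ** 2  # relative size variability
    if adi < 1.32:
        return "smooth" if cv2 < 0.49 else "erratic"
    return "intermittent" if cv2 < 0.49 else "lumpy"
```

Pooling products whose series fall in the same class, as the paper does by demand magnitude, lets one global model see many similar series at once.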


The field of biosciences has advanced to a large extent and has generated enormous amounts of information from Electronic Health Records. This has given rise to an acute need for knowledge generation from these data, and data mining methods and machine learning play a major role in this aspect of the biosciences. Chronic Kidney Disease (CKD) is a condition in which the kidneys are damaged and cannot filter blood as they normally do. A family history of kidney disease or failure, high blood pressure, or type 2 diabetes may lead to CKD. The damage to the kidney is lasting, and the chance of the condition worsening over time is high. The most common complications resulting from kidney failure are heart disease, anemia, bone disease, and elevated potassium and calcium. The worst case leads to complete kidney failure and necessitates a kidney transplant. Early detection of CKD can improve quality of life to a great extent, which calls for good prediction algorithms that detect CKD at an early stage. The literature shows a wide range of machine learning algorithms employed for the prediction of CKD. This paper uses data preprocessing, data transformation, and various classifiers to predict CKD, and also proposes a best-performing prediction framework for CKD. The results show promising prediction at an early stage of CKD.
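The preprocess, transform, classify pipeline the paper describes can be sketched end to end. The feature values below (blood pressure, glucose, serum creatinine) and the 1-nearest-neighbour classifier are illustrative stand-ins for the paper's dataset and classifiers:

```python
# Toy records: (blood_pressure, blood_glucose, serum_creatinine), label.
train = [
    ((80.0, 90.0, 0.9), "not_ckd"),
    ((85.0, 100.0, 1.0), "not_ckd"),
    ((150.0, 180.0, 2.5), "ckd"),
    ((140.0, 160.0, 3.1), "ckd"),
    ((160.0, None, 2.8), "ckd"),   # missing value to be imputed
]

# Preprocessing: impute missing values with the per-feature mean.
def impute(rows):
    n = len(rows[0][0])
    means = []
    for j in range(n):
        vals = [x[j] for x, _ in rows if x[j] is not None]
        means.append(sum(vals) / len(vals))
    return [tuple(v if v is not None else means[j]
                  for j, v in enumerate(x)) for x, _ in rows]

# Transformation: min-max scale each feature to [0, 1] so no single
# feature dominates the distance computation.
def scale(rows):
    cols = list(zip(*rows))
    lo = [min(c) for c in cols]
    hi = [max(c) for c in cols]
    return [tuple((v - lo[j]) / (hi[j] - lo[j]) for j, v in enumerate(r))
            for r in rows]

features = scale(impute(train))
labels = [y for _, y in train]

# Classification: predict the label of the nearest training example (1-NN),
# standing in for the paper's classifier comparison.
def predict(x):
    dist = lambda p, q: sum((a - b) ** 2 for a, b in zip(p, q))
    return min(zip(features, labels), key=lambda fl: dist(fl[0], x))[1]
```

A real pipeline would fit the imputation means and scaling bounds on training data only and apply them unchanged to new patients.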


2020 ◽  
Vol 10 (11) ◽  
pp. 3980 ◽  
Author(s):  
Cung Lian Sang ◽  
Bastian Steinhagen ◽  
Jonas Dominik Homburg ◽  
Michael Adams ◽  
Marc Hesse ◽  
...  

In ultra-wideband (UWB)-based wireless ranging or distance measurement, differentiation between line-of-sight (LOS), non-line-of-sight (NLOS), and multi-path (MP) conditions is important for precise indoor localization, because the accuracy of the distance reported by a UWB ranging system is directly affected by the measurement condition (LOS, NLOS, or MP). However, the major contributions in the literature address only the binary classification between LOS and NLOS; the MP condition is usually ignored. In fact, the MP condition also has a significant impact on UWB ranging errors compared to direct LOS measurements, although the magnitude of the error under MP conditions is generally lower than in completely blocked NLOS scenarios. This paper applies machine learning techniques to the identification of the three classes (LOS, NLOS, and MP) in a UWB indoor localization system using an experimental dataset collected under different conditions and scenarios in indoor environments. Using the collected real measurement data, we compared the performance of three machine learning (ML) classifiers: a support vector machine (SVM), a random forest (RF) based on an ensemble learning method, and a multilayer perceptron (MLP) based on a deep artificial neural network. The results showed that applying ML methods in UWB ranging systems is effective for identifying the three classes. Specifically, the overall accuracy reached 91.9% in the best-case scenario and 72.9% in the worst-case scenario, and the F1-score was 0.92 in the best case and 0.69 in the worst case. For reproducible results and further exploration, we provide the experimental research data discussed in this paper as a publicly accessible dataset at PUB (Publications at Bielefeld University). The evaluations of the three classifiers were conducted using the open-source Python machine learning library scikit-learn.
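The three-class problem is easiest to see at the feature level. Classifiers like those compared above are typically fed statistics of the channel impulse response (CIR); the toy CIRs and the hand-written threshold rule below merely illustrate why the classes are separable, they are not the paper's trained models or features:

```python
def cir_features(cir):
    # Simple statistics of a channel impulse response (list of amplitudes).
    peak = max(cir)
    peak_idx = cir.index(peak)
    first_idx = next(i for i, v in enumerate(cir) if v > 0.1 * peak)
    energy = sum(v * v for v in cir)
    rise = peak_idx - first_idx   # delay between first path and strongest path
    return peak, rise, energy

def classify(cir):
    # Placeholder rule standing in for a trained SVM / RF / MLP.
    peak, rise, _ = cir_features(cir)
    if rise == 0 and peak > 0.8:
        return "LOS"    # strong, immediate first path
    if rise > 0:
        return "MP"     # delayed peak: a reflection is stronger than the first path
    return "NLOS"       # attenuated, diffuse response through an obstruction
```

An ML classifier learns exactly this kind of decision boundary from labelled CIRs instead of hand-tuned thresholds.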


Author(s):  
Helper Zhou ◽  
Victor Gumbo

The emergence of machine learning algorithms presents the opportunity for a variety of stakeholders to perform advanced predictive analytics and to make informed decisions. However, to date there have been few studies in developing countries that evaluate the performance of such algorithms, with the result that pertinent stakeholders lack an informed basis for selecting appropriate techniques for modelling tasks. This study aims to address this gap by evaluating the performance of three machine learning techniques: ordinary least squares (OLS), least absolute shrinkage and selection operator (LASSO), and artificial neural networks (ANNs). These techniques are evaluated with respect to their ability to perform predictive modelling of the sales performance of small, medium and micro enterprises (SMMEs) engaged in manufacturing. The evaluation finds that the ANN algorithm's performance is far superior to that of the other two techniques, OLS and LASSO, in predicting the SMMEs' sales performance.
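The relationship between the first two compared techniques is compact enough to show directly: OLS has a closed-form solution, and LASSO adds an L1 penalty that shrinks small coefficients to exactly zero. The toy data and single-predictor soft-thresholding below are illustrative only (soft-thresholding equals the exact lasso estimator only for a standardised predictor):

```python
def ols_fit(xs, ys):
    # Closed-form simple linear regression: slope and intercept.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    beta = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))
    return beta, my - beta * mx

def lasso_slope(xs, ys, lam):
    # L1 shrinkage via soft-thresholding of the OLS slope: coefficients
    # smaller than the penalty are zeroed out entirely.
    beta, _ = ols_fit(xs, ys)
    if beta > lam:
        return beta - lam
    if beta < -lam:
        return beta + lam
    return 0.0

xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.1, 5.0, 7.1, 9.0]   # roughly y = 2x + 1
```

ANNs, the third technique, have no such closed form, which is the trade-off behind their superior flexibility on this task.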


Entropy ◽  
2020 ◽  
Vol 22 (11) ◽  
pp. 1310
Author(s):  
Ioannis Triantafyllou ◽  
Ioannis C. Drivas ◽  
Georgios Giannakopoulos

Acquiring knowledge about users' opinions and what they say regarding specific features within an app constitutes a solid stepping stone for understanding their needs and concerns. App review utilization helps project management teams identify threats and opportunities for app software maintenance, optimization and strategic marketing purposes. Nevertheless, classifying app user reviews to identify valuable gems of information for app software improvement is a complex and multidimensional problem. It requires foresight and multiple combinations of sophisticated text pre-processing, feature extraction and machine learning methods to efficiently classify app reviews into specific topics. Against this backdrop, we propose a novel feature engineering classification schema that is capable of identifying, more efficiently and earlier, terms within reviews that can be classified into specific topics. To this end, we present a novel feature extraction method, DEVMAX.DF, combined with different machine learning algorithms as a solution to app review classification problems. One step further, a simulation of a real-case scenario takes place to validate the effectiveness of the proposed classification schema on different apps. After multiple experiments, the results indicate that the proposed schema outperforms other term extraction methods such as TF.IDF and χ2 in classifying app reviews into topics. The paper thereby contributes to expanding the knowledge of researchers and practitioners, with the purpose of reinforcing their decision-making processes within the realm of app review utilization.
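TF.IDF, the baseline the proposed DEVMAX.DF method is compared against, weights a term by how often it occurs in one review relative to how many reviews contain it. A minimal sketch over a toy review corpus (no smoothing; real implementations vary, and the term is assumed to occur somewhere in the corpus):

```python
import math

docs = [
    "app crashes on startup",
    "great app love the design",
    "crashes after the update",
]

def tf_idf(term, doc, corpus):
    # Term frequency in this document times inverse document frequency
    # across the corpus: terms concentrated in few documents score high.
    words = doc.split()
    tf = words.count(term) / len(words)
    df = sum(1 for d in corpus if term in d.split())
    return tf * math.log(len(corpus) / df)
```

A rare, topic-bearing word like "update" scores higher than a common function word like "the", which is exactly the signal a topic classifier feeds on.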


Author(s):  
Songhuan Yao ◽  
Zongsheng Hu ◽  
Qiang Xie ◽  
Yidong Yang ◽  
Hao Peng

Abstract Online dose verification in proton therapy is a critical task for quality assurance. Building upon our previous work in 1D, we further studied the feasibility of using a wavelet-based machine learning framework to accomplish that goal in three dimensions. Wavelet decomposition was utilized to extract features of the acoustic signals, and a bidirectional long short-term memory (Bi-LSTM) recurrent neural network (RNN) was used. The 3D dose distributions of mono-energetic proton beams (multiple beam energies) inside a 3D CT phantom were generated using Monte Carlo simulation. The 3D propagation of the acoustic signal was modeled using the k-Wave toolbox. Three different beamlets (i.e., acoustic pathways) were tested, each with its own model. The performance was quantitatively evaluated in terms of the mean relative error (MRE) of the dose distribution and the positioning error of the Bragg peak (ΔBP). Because experimental data are not yet available, two signal-to-noise ratio (SNR) conditions were modeled (SNR = 1 and 5). The model is found to yield good accuracy and noise immunity for all three beamlets. The results exhibit an MRE below 0.6% (without noise) and 1.2% (SNR = 5), and a ΔBP below 1.2 mm (without noise) and 1.3 mm (SNR = 5). For the worst-case scenario (SNR = 1), the MRE and ΔBP are below 2.3% and 1.9 mm, respectively. It is encouraging that our model is able to identify the correlation between acoustic waveforms and dose distributions in 3D heterogeneous tissues, as in the 1D case. The work lays a good foundation for us to advance the study and fully validate its feasibility with experimental results.
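The wavelet-decomposition front end can be illustrated with a single level of the Haar transform, the simplest wavelet: it splits a waveform into a low-frequency approximation and a high-frequency detail, which is the kind of feature a Bi-LSTM would then consume. The four-sample signal is a toy, and the paper does not state which wavelet family it uses:

```python
import math

def haar_step(signal):
    # One level of the orthonormal Haar discrete wavelet transform:
    # pairwise sums (approximation) and pairwise differences (detail),
    # each scaled by 1/sqrt(2) so signal energy is preserved.
    s = 1.0 / math.sqrt(2.0)
    approx = [(a + b) * s for a, b in zip(signal[0::2], signal[1::2])]
    detail = [(a - b) * s for a, b in zip(signal[0::2], signal[1::2])]
    return approx, detail

approx, detail = haar_step([4.0, 6.0, 10.0, 12.0])
# approx tracks the low-frequency envelope; detail captures sharp transitions
```

Repeating the step on the approximation coefficients yields the multi-level decomposition typically used for feature extraction.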


2021 ◽  
Vol 35 (1) ◽  
pp. 93-98
Author(s):  
Ratna Kumari Challa ◽  
Siva Prasad Chintha ◽  
B. Reddaiah ◽  
Kanusu Srinivasa Rao

Currently, the machine learning community makes extensive use of linear methods such as regression, principal component analysis, and canonical correlation analysis for predictive modelling and feature generation. These approaches are typically intended to capture interesting subspaces of the original high-dimensional space. Because of their simple linear structure, they all admit closed-form solutions, which makes estimation and theoretical analysis very straightforward for small datasets. However, in modern machine learning problems it is very common for a dataset to have millions or trillions of samples and features, and we deal with the problem of fast ordinary least squares estimation from large volumes of data. Search is a fundamental operation that is useful in many applications. When the dataset is large, linear search takes time proportional to the size of the dataset; binary search finds an element in O(log n) time in the worst case, and interpolation search in O(log log n) expected time for uniformly distributed keys. In this paper, an effort is made to develop a novel fast searching algorithm based on least-squares regression curve fitting. The algorithm is implemented, and its execution results are analyzed and compared with the performance of binary search and interpolation search. The proposed fast searching algorithm exhibits better performance than the traditional methods.
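The baseline being improved upon is easy to make concrete. Interpolation search replaces binary search's midpoint probe with a position estimated by linear interpolation between the endpoints; the paper's method generalizes that estimate to a fitted least-squares curve. A standard pure-Python interpolation search:

```python
def interpolation_search(arr, target):
    """Search a sorted list by probing where the target *should* lie,
    assuming roughly uniformly distributed keys, instead of always
    probing the midpoint as binary search does."""
    lo, hi = 0, len(arr) - 1
    while lo <= hi and arr[lo] <= target <= arr[hi]:
        if arr[hi] == arr[lo]:
            pos = lo                    # all remaining keys are equal
        else:
            # Linear interpolation of the probe position.
            pos = lo + (target - arr[lo]) * (hi - lo) // (arr[hi] - arr[lo])
        if arr[pos] == target:
            return pos
        if arr[pos] < target:
            lo = pos + 1
        else:
            hi = pos - 1
    return -1                           # target not present
```

On uniformly distributed keys the first probe often lands on or near the target, which is where the O(log log n) expected behaviour comes from; on skewed keys it can degrade, which motivates fitting a curve to the data instead.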


2020 ◽  
Vol 32 ◽  
pp. 03032
Author(s):  
Sahil Parab ◽  
Piyush Rathod ◽  
Durgesh Patil ◽  
Vishwanath Chikkareddi

Diabetes detection has been one of the many challenges faced by the medical and technological communities. Machine learning algorithms are used to detect the possibility of diabetes in a patient based on glucose concentration, insulin levels, and other medically relevant test results. The basic diabetes detection model uses a Bayesian classification machine learning algorithm, but even though that model is able to detect diabetes, its efficiency is not always acceptable because of the drawbacks of relying on a single algorithm. A hybrid machine learning model is used to overcome the drawbacks of a single-algorithm model. A hybrid model is constructed by combining multiple applicable machine learning algorithms, such as an SVM model and a Bayesian classification model, so that each compensates for the other's drawbacks and they contribute their efficiency jointly. In the ideal case, the new hybrid machine learning model provides better efficiency than the old Bayesian classification model.
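The hybrid idea reduces to combining the predictions of the constituent models. In the sketch below the two threshold rules are invented placeholders for trained SVM and Bayesian classifiers, and the tie-break policy is one assumed choice among several:

```python
# Stand-ins for two trained classifiers (invented decision rules).
def svm_like(glucose, insulin):
    return "diabetic" if 0.02 * glucose + 0.01 * insulin > 3.5 else "healthy"

def bayes_like(glucose, insulin):
    return "diabetic" if glucose > 140 else "healthy"

def hybrid(glucose, insulin):
    # With two voters, require agreement; when they disagree, defer to
    # the model assumed stronger (here the SVM-like one).  With three or
    # more models a plain majority vote would be used instead.
    a = svm_like(glucose, insulin)
    b = bayes_like(glucose, insulin)
    return a if a == b else svm_like(glucose, insulin)
```

The ensemble helps precisely when the two models err on different patients, so their disagreements flag the uncertain cases.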


2018 ◽  
Vol 7 (3.4) ◽  
pp. 197
Author(s):  
Deepali Vora ◽  
Kamatchi Iyer

Predictive modelling is a statistical technique for predicting future behaviour, and machine learning is one of the most popular approaches to it. From the plethora of algorithms available, it is always interesting to find out which algorithm or technique is most suitable for the data under consideration. Educational data mining is an area of research where predictive modelling is especially useful: accurately predicting the grades of undergraduate students can help students as well as educators in many ways, and early prediction can help motivate students to make better choices about their future endeavours. This paper presents and evaluates the effectiveness of various machine learning algorithms applied to data collected from undergraduate studies. Two major challenges are addressed: choosing the right features and choosing the right algorithm for prediction.

