Prediction of FRCM–Concrete Bond Strength with Machine Learning Approach

2022 ◽  
Vol 14 (2) ◽  
pp. 845
Author(s):  
Aman Kumar ◽  
Harish Chandra Arora ◽  
Krishna Kumar ◽  
Mazin Abed Mohammed ◽  
Arnab Majumdar ◽  
...  

Fibre-reinforced cement mortar (FRCM) has been widely utilised for the repair and restoration of building structures. The bond strength between FRCM and concrete typically takes precedence over the mechanical parameters. However, the bond behaviour of the FRCM–concrete interface is complex, and the existence of several failure modes makes the bond strength difficult to predict. In this paper, effective machine learning models were employed to accurately predict the FRCM–concrete bond strength. The study draws on a database of 382 test results available in the literature on single-lap and double-lap shear experiments on FRCM–concrete interfacial bonding. The compressive strength of concrete, the width of the concrete block, the FRCM elastic modulus, the thickness of the textile layer, the textile width, and the textile bond length were used as inputs to popular machine learning models for predicting the bond strength of the FRCM–concrete interface. The paper evaluates the predictive accuracy of the different models and finds that the GPR model has the highest accuracy, with an R-value of 0.9336 for interfacial bond strength prediction. This study can be utilised to estimate bond strength, minimising both experimentation cost and time.
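
As an illustration of the modelling approach described above, the following is a minimal sketch of a Gaussian process regression (GPR) model trained on the six input variables the abstract lists, with bond strength as the target. The data is synthetic, and the kernel choice and preprocessing are assumptions, not reproductions of the authors' setup or their 382-test database.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Columns stand in for: concrete compressive strength, concrete block width,
# FRCM elastic modulus, textile layer thickness, textile width, bond length.
X = rng.uniform(size=(382, 6))
y = X @ rng.uniform(size=6) + 0.05 * rng.standard_normal(382)  # placeholder bond strength

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
scaler = StandardScaler().fit(X_train)

# An anisotropic RBF kernel lets each input variable have its own length scale.
gpr = GaussianProcessRegressor(
    kernel=ConstantKernel() * RBF(length_scale=np.ones(6)),
    normalize_y=True,
    random_state=0,
)
gpr.fit(scaler.transform(X_train), y_train)

# Report the correlation coefficient R between predictions and observations,
# the accuracy metric quoted in the abstract.
r = np.corrcoef(gpr.predict(scaler.transform(X_test)), y_test)[0, 1]
print(f"R = {r:.4f}")
```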

2020 ◽  
Vol 7 (4) ◽  
pp. 212-219 ◽  
Author(s):  
Aixia Guo ◽  
Michael Pasque ◽  
Francis Loh ◽  
Douglas L. Mann ◽  
Philip R. O. Payne

Abstract
Purpose of Review: One in five people will develop heart failure (HF), and 50% of HF patients die within 5 years. Prediction of HF diagnosis, readmission, and mortality is essential for developing personalized prevention and treatment plans. This review summarizes recent findings and approaches in machine learning models for HF diagnostic and outcome prediction using electronic health record (EHR) data.
Recent Findings: A range of machine learning models has been developed for HF diagnostic and outcome prediction using diverse variables derived from EHR data, including demographic, medical note, laboratory, and image data, and has achieved expert-comparable prediction results.
Summary: Machine learning models can facilitate the identification of HF patients, as well as accurate patient-specific assessment of their risk of readmission and mortality. Additionally, novel machine learning techniques for integrating diverse data and improving predictive accuracy on imbalanced data sets are critical for the further development of these promising modeling methodologies.
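
The review flags imbalanced data sets as a key obstacle for EHR-based outcome models. As a hedged illustration (not a method taken from any specific reviewed paper), the sketch below shows one common mitigation, class weighting, on a simulated cohort with roughly 10% positive outcomes; the features are hypothetical placeholders for EHR-derived variables.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Simulate an imbalanced cohort: ~10% positive (e.g., 30-day readmission).
X, y = make_classification(n_samples=5000, n_features=20, weights=[0.9, 0.1],
                           random_state=0)

# class_weight="balanced" reweights the loss so the minority class is not
# ignored; AUROC is preferred over raw accuracy for imbalanced outcomes.
clf = LogisticRegression(max_iter=1000, class_weight="balanced")
auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print(f"Mean cross-validated AUROC: {auc.mean():.3f}")
```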


Author(s):  
Terazima Maeda

Nowadays, there is a large number of machine learning models that could be applied in various areas. However, different research targets are usually sensitive to the type of model. For a specific prediction target, the predictive accuracy of a machine learning model always depends on the data features, the data size, and the intrinsic relationship between inputs and outputs. Therefore, for a specific data set and a fixed prediction task, how to rationally compare the predictive accuracy of different machine learning models is a big question. In this brief note, we show how the performances of different machine learning models should be compared, using some typical examples.
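
One standard way to make such comparisons rational, in the spirit of this note, is to evaluate every candidate model on identical cross-validation folds of the same data set. The sketch below assumes scikit-learn and one of its bundled example data sets; the three candidate models are arbitrary illustrative choices.

```python
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold, cross_val_score
from sklearn.svm import SVR

X, y = load_diabetes(return_X_y=True)
folds = KFold(n_splits=5, shuffle=True, random_state=0)  # same splits for all models

for name, model in [("ridge", Ridge()),
                    ("svr", SVR()),
                    ("random forest", RandomForestRegressor(random_state=0))]:
    # Identical folds make the scores directly comparable across models.
    scores = cross_val_score(model, X, y, cv=folds, scoring="r2")
    print(f"{name:>13}: R^2 = {scores.mean():.3f} +/- {scores.std():.3f}")
```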


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Amirhessam Tahmassebi ◽  
Mehrtash Motamedi ◽  
Amir H. Alavi ◽  
Amir H. Gandomi

Purpose: Engineering design and operational decisions depend largely on a deep understanding of applications, which requires assumptions that simplify the problems in order to find proper solutions. Cutting-edge machine learning algorithms can be used as one of the emerging tools to simplify this process. In this paper, we propose a novel scalable and interpretable machine learning framework to automate this process and fill the current gap.
Design/methodology/approach: The essential principles of the proposed pipeline are (1) scalability, (2) interpretability, and (3) robust probabilistic performance across engineering problems. The lack of interpretability of complex machine learning models prevents their use in various problems, including engineering computation assessments. Many consumers of machine learning models would not trust the results if they could not understand the method. Thus, the SHapley Additive exPlanations (SHAP) approach is employed to interpret the developed machine learning models.
Findings: The proposed framework can be applied to a variety of engineering problems, including seismic damage assessment of structures. Its performance is investigated using two case studies of failure identification in reinforced concrete (RC) columns and shear walls. In addition, the reproducibility, reliability, and generalizability of the results were validated, and the results of the framework were compared to benchmark studies. The results of the proposed framework outperformed the benchmark results with high statistical significance.
Originality/value: Although the current study reveals that the geometric input features and reinforcement indices are the most important variables in failure mode detection, a better model could be achieved by employing more robust strategies to establish a proper database and thereby decrease the errors in identifying some of the failure modes.
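
The abstract names SHAP as its interpretation method. Below is a minimal sketch of how SHAP values yield a global feature ranking for a tree-based failure-mode classifier; the feature names, model, and labels are hypothetical stand-ins, not the paper's database of RC columns and shear walls.

```python
import numpy as np
import shap  # pip install shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
features = ["aspect_ratio", "axial_load_ratio", "long_reinf_index",
            "transv_reinf_index"]  # hypothetical geometric/reinforcement inputs
X = rng.uniform(size=(500, len(features)))
y = (X[:, 0] + X[:, 3] > 1.0).astype(int)  # placeholder failure-mode label

model = GradientBoostingClassifier(random_state=0).fit(X, y)

# TreeExplainer computes exact Shapley values for tree ensembles; the mean
# absolute SHAP value per feature gives a global importance ranking.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
importance = np.abs(shap_values).mean(axis=0)
for name, imp in sorted(zip(features, importance), key=lambda t: -t[1]):
    print(f"{name:>20}: {imp:.3f}")
```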


2020 ◽  
Vol 2020 ◽  
pp. 1-12
Author(s):  
Wenjin Zhu ◽  
Zhiming Chao ◽  
Guotao Ma

In this paper, a database of rock permeability was compiled from the existing literature. Based on this database, a Support Vector Machine (SVM) model with hyperparameters optimised by the Mind Evolutionary Algorithm (MEA) was proposed to predict the permeability of rock. Meanwhile, Genetic Algorithm (GA) and Particle Swarm Optimisation (PSO) SVM models were constructed to compare the improvement in predictive accuracy achieved by MEA with that achieved by GA and PSO, respectively. The following conclusions were drawn. MEA can remarkably increase the predictive accuracy of the constructed machine learning models within a few iterations, showing better optimisation performance than GA and PSO. MEA-SVM has the best forecasting performance, followed by PSO-SVM, while the estimating precision of GA-SVM is lower than both. The proposed MEA-SVM model can accurately predict the permeability of rock, indicating that the model has satisfactory generalisation and extrapolation capacity.
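
The Mind Evolutionary Algorithm is not available in common machine learning libraries, so the sketch below substitutes a generic evolutionary loop (keep the elite, mutate to refill the population) to tune the SVR hyperparameters C and gamma, illustrating the optimise-then-fit pattern the paper describes. The data is simulated and the search ranges are assumptions.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVR

X, y = make_regression(n_samples=300, n_features=6, noise=5.0, random_state=0)
rng = np.random.default_rng(0)

def fitness(log_c, log_gamma):
    # Cross-validated R^2 of an SVR with the candidate hyperparameters.
    model = SVR(C=10 ** log_c, gamma=10 ** log_gamma)
    return cross_val_score(model, X, y, cv=3, scoring="r2").mean()

# Initialise a population in log10 space, then iterate: evaluate, keep the
# best half, and mutate the elite to refill the population.
pop = rng.uniform([-2, -4], [3, 1], size=(20, 2))
for generation in range(10):
    scores = np.array([fitness(*p) for p in pop])
    elite = pop[np.argsort(scores)[-10:]]
    pop = np.vstack([elite, elite + rng.normal(scale=0.3, size=elite.shape)])

best = max(pop, key=lambda p: fitness(*p))
print(f"best C = 10^{best[0]:.2f}, gamma = 10^{best[1]:.2f}")
```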


PLoS ONE ◽  
2021 ◽  
Vol 16 (4) ◽  
pp. e0249423
Author(s):  
Indy Man Kit Ho ◽  
Kai Yuen Cheong ◽  
Anthony Weldon

Despite the wide adoption of emergency remote learning (ERL) in higher education during the COVID-19 pandemic, there is insufficient understanding of the factors predicting student satisfaction with this novel learning environment in crisis. The present study investigated important predictors of the satisfaction of undergraduate students (N = 425) from multiple departments using ERL at a self-funded university in Hong Kong, where Moodle and Microsoft Teams were the key learning tools. Comparing the predictive accuracy of multiple regression and machine learning models before and after the use of random forest recursive feature elimination, all models showed improved accuracy after feature elimination, and the most accurate model was the elastic net regression, with 65.2% explained variance. The results show only a neutral overall satisfaction score on ERL (4.11 on a 7-point Likert scale). Even though the majority of students are competent with technology and have no obvious issues accessing learning devices or Wi-Fi, face-to-face learning is preferred over ERL, and this preference was found to be the most important predictor. In addition, the level of effort made by instructors, agreement on the appropriateness of the adjusted assessment methods, and the perception that online learning was well delivered were shown to be highly important in determining satisfaction scores. The results suggest the need to review the quality and quantity of the assessments modified for ERL and to structure class delivery with a suitable amount of interactive learning according to the learning culture and programme nature.
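
A sketch of the two-stage pipeline the study describes, random forest recursive feature elimination followed by elastic net regression, is given below. The simulated 7-point Likert responses and the number of retained features are assumptions, not the study's survey data.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_selection import RFE
from sklearn.linear_model import ElasticNetCV
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.integers(1, 8, size=(425, 30)).astype(float)  # 7-point Likert items
y = X[:, :5].mean(axis=1) + rng.normal(scale=0.5, size=425)  # satisfaction score

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Stage 1: rank features with a random forest and keep the top 10.
selector = RFE(RandomForestRegressor(n_estimators=100, random_state=0),
               n_features_to_select=10).fit(X_train, y_train)

# Stage 2: elastic net with cross-validated penalties on the selected features.
enet = ElasticNetCV(cv=5).fit(selector.transform(X_train), y_train)
print(f"Explained variance (R^2): "
      f"{enet.score(selector.transform(X_test), y_test):.3f}")
```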


2019 ◽  
Vol 116 (44) ◽  
pp. 22071-22080 ◽  
Author(s):  
W. James Murdoch ◽  
Chandan Singh ◽  
Karl Kumbier ◽  
Reza Abbasi-Asl ◽  
Bin Yu

Machine-learning models have demonstrated great success in learning complex patterns that enable them to make predictions about unobserved data. In addition to using models for prediction, the ability to interpret what a model has learned is receiving an increasing amount of attention. However, this increased focus has led to considerable confusion about the notion of interpretability. In particular, it is unclear how the wide array of proposed interpretation methods are related and what common concepts can be used to evaluate them. We aim to address these concerns by defining interpretability in the context of machine learning and introducing the predictive, descriptive, relevant (PDR) framework for discussing interpretations. The PDR framework provides 3 overarching desiderata for evaluation: predictive accuracy, descriptive accuracy, and relevancy, with relevancy judged relative to a human audience. Moreover, to help manage the deluge of interpretation methods, we introduce a categorization of existing techniques into model-based and post hoc categories, with subgroups including sparsity, modularity, and simulatability. To demonstrate how practitioners can use the PDR framework to evaluate and understand interpretations, we provide numerous real-world examples. These examples highlight the often underappreciated role played by human audiences in discussions of interpretability. Finally, based on our framework, we discuss limitations of existing methods and directions for future work. We hope that this work will provide a common vocabulary that will make it easier for both practitioners and researchers to discuss and choose from the full range of interpretation methods.
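
As a miniature of the framework's sparsity subgroup of model-based interpretation methods, the sketch below contrasts a dense linear model with a lasso: the sparse model trades a little predictive accuracy for a description compact enough to be relevant to a human audience. The data set and penalty are illustrative choices, not examples taken from the paper.

```python
from sklearn.datasets import load_diabetes
from sklearn.linear_model import Lasso, LinearRegression
from sklearn.model_selection import cross_val_score

X, y = load_diabetes(return_X_y=True)

for name, model in [("dense OLS", LinearRegression()),
                    ("sparse lasso", Lasso(alpha=1.0))]:
    # Predictive accuracy via cross-validation; descriptive compactness via
    # the number of nonzero coefficients a reader must consider.
    r2 = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
    n_used = sum(c != 0 for c in model.fit(X, y).coef_)
    print(f"{name}: R^2 = {r2:.3f}, nonzero coefficients = {n_used}")
```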


2021 ◽  
Vol 4 ◽  
Author(s):  
Gero Szepannek ◽  
Karsten Lübke

Algorithmic scoring methods have been widely used in the finance industry for several decades to prevent risk and to automate and optimize decisions. Regulatory requirements, such as those set by the Basel Committee on Banking Supervision (BCBS) or the EU data protection regulations, have led to increasing interest and research activity on understanding black box machine learning models by means of explainable machine learning. Even though this is a step in the right direction, such methods are not able to guarantee fair scoring, as machine learning models are not necessarily unbiased and may discriminate with respect to certain subpopulations, such as a particular race, gender, or sexual orientation, even if the variable itself is not used for modeling. This is also true for white box methods like logistic regression. In this study, a framework is presented that allows models to be analyzed and developed with regard to fairness. The proposed methodology is based on techniques of causal inference, and some of the methods can be linked to methods from explainable machine learning. A definition of counterfactual fairness is given, together with an algorithm that results in a fair scoring model. The concepts are illustrated by means of a transparent simulation and a popular real-world example, the German Credit data, using traditional scorecard models based on logistic regression and a weight-of-evidence variable pre-transformation. In contrast to previous studies in the field, a corrected version of the data is presented and used in this study. With the help of the simulation, the trade-off between fairness and predictive accuracy is analyzed. The results indicate that it is possible to remove unfairness without a strong decrease in performance, provided the correlation of the discriminatory attributes with the other predictor variables in the model is not too strong. In addition, the challenge of explaining the resulting scoring model and the associated fairness implications to users is discussed.
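
A toy version of the core problem the study addresses is sketched below: even when a protected attribute A is excluded from a score, a model built on a predictor X that is causally influenced by A inherits the bias. Regressing A out of X before scoring is one simple correction in the spirit of counterfactual fairness; it is an illustration under simulated data, not the paper's exact algorithm or its German Credit example.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
A = rng.integers(0, 2, size=n)                   # protected attribute
X = 1.5 * A + rng.normal(size=n)                 # predictor influenced by A
y = (X + rng.normal(size=n) > 1.0).astype(int)   # default outcome

# Naive score: A is excluded, yet its effect enters through X.
naive = LogisticRegression().fit(X.reshape(-1, 1), y)

# Corrected score: remove the (simulated) causal effect of A on X first.
X_resid = X - LinearRegression().fit(A.reshape(-1, 1), X).predict(A.reshape(-1, 1))
fair = LogisticRegression().fit(X_resid.reshape(-1, 1), y)

for name, scores in [("naive", naive.predict_proba(X.reshape(-1, 1))[:, 1]),
                     ("corrected", fair.predict_proba(X_resid.reshape(-1, 1))[:, 1])]:
    gap = scores[A == 1].mean() - scores[A == 0].mean()
    print(f"{name}: mean score gap between groups = {gap:.3f}")
```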


2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Jessalyn K. Holodinsky ◽  
Amy Y. X. Yu ◽  
Moira K. Kapral ◽  
Peter C. Austin

Abstract
Background: Hometime, the total number of days a person is living in the community (not in a healthcare institution) in a defined period of time after a hospitalization, is a patient-centred outcome metric increasingly used in healthcare research. Hometime exhibits several properties which make its statistical analysis difficult: it has a highly non-normal distribution, excess zeros, and is bounded by both a lower and an upper limit. The optimal methodology for the analysis of hometime is currently unknown.
Methods: Using administrative data, we identified adult patients diagnosed with stroke between April 1, 2010 and December 31, 2017 in Ontario, Canada. 90-day hometime and clinically relevant covariates were determined through administrative data linkage. Fifteen different statistical and machine learning models were fit to the data using a derivation sample. The models' predictive accuracy and bias were assessed using an independent validation sample.
Results: Seventy-five thousand four hundred seventy-five patients were identified (divided into a derivation set of 49,402 and a test set of 26,073). In general, the machine learning models had lower root mean square error and mean absolute error than the statistical models. However, some statistical models resulted in lower (or equal) bias than the machine learning models. Most of the machine learning models constrained predicted values between the minimum and maximum observable hometime values, but this was not the case for the statistical models. The machine learning models also allowed for the display of complex non-linear interactions between covariates and hometime. No model captured the non-normal bucket-shaped hometime distribution.
Conclusions: Overall, no single model clearly outperformed the others, though the machine learning methods generally performed better than the traditional statistical methods. Among the machine learning methods, generalized boosting machines using the Poisson distribution as well as random forest regression were the best performing. No model was able to capture the bucket-shaped hometime distribution, and future research is warranted on factors associated with extreme values of hometime that are not available in administrative data.
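
A sketch of one of the best-performing approaches the study reports, gradient boosting with a Poisson loss for 90-day hometime (bounded in [0, 90]), is given below. The covariates are simulated stand-ins, not the Ontario administrative data, and the post hoc clipping of predictions to the observable range is an assumption.

```python
import numpy as np
from sklearn.ensemble import HistGradientBoostingRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 10))           # placeholder clinical covariates
latent = 60 + 10 * X[:, 0] - 8 * X[:, 1] + rng.normal(scale=15, size=5000)
y = np.clip(latent, 0, 90)                # hometime: lower and upper bounded

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# loss="poisson" targets a non-negative count-like outcome; predictions can
# still be clipped to the observable [0, 90] range afterwards.
model = HistGradientBoostingRegressor(loss="poisson", random_state=0)
model.fit(X_train, y_train)
pred = np.clip(model.predict(X_test), 0, 90)
print(f"MAE: {mean_absolute_error(y_test, pred):.2f} days")
```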

