Modelling Concrete Strength Using Support Vector Machines

2013 ◽  
Vol 438-439 ◽  
pp. 170-173 ◽  
Author(s):  
Hai Ying Yang ◽  
Yi Feng Dong

Support vector machine (SVM) is a statistical learning method based on the structural risk minimization principle, which minimizes both the error term and the weight term. This paper presents an SVM model for predicting the compressive strength of concrete at 28 days. Of the 30 available data sets, 20 were used to train the model and the remaining 10 to test it. An SVM with a radial basis function kernel was used to model the compressive strength, and the results were compared with those of a generalized regression neural network. The results of this study showed that the SVM approach has the potential to be a practical tool for predicting the compressive strength of concrete at 28 days.
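As a rough illustration of the RBF-kernel regression idea in this abstract, the sketch below fits a kernel ridge regressor, a simplified stand-in for a full ε-insensitive SVM regressor (which would require a QP solver). The mix features and strength values are invented and already standardized; they are not the paper's data, and the regularization strength is assumed.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    """RBF kernel matrix between the rows of A and the rows of B."""
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-gamma * sq)

# Hypothetical, pre-standardized mix features (e.g. cement and water
# content) with 28-day compressive strengths in MPa -- illustrative only.
X_train = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 2.0], [2.0, 2.0], [4.0, 1.0]])
y_train = np.array([32.0, 38.5, 27.0, 42.0, 35.5])

lam = 1e-3  # assumed regularization strength
K = rbf_kernel(X_train, X_train)
alpha = np.linalg.solve(K + lam * np.eye(len(y_train)), y_train)

def predict(X_new):
    """Predicted strength as a kernel expansion over the training points."""
    return rbf_kernel(X_new, X_train) @ alpha
```

With a small regularizer the model nearly interpolates the training strengths; in practice the kernel width and regularization would be tuned on held-out mixes, as the paper does with its 10 test sets.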

2016 ◽  
Vol 28 (6) ◽  
pp. 1217-1247 ◽  
Author(s):  
Yunlong Feng ◽  
Yuning Yang ◽  
Xiaolin Huang ◽  
Siamak Mehrkanoon ◽  
Johan A. K. Suykens

This letter addresses the robustness problem that arises when learning a large margin classifier in the presence of label noise. We address it by proposing robustified large margin support vector machines. The robustness of the proposed robust support vector classifiers (RSVC), which is interpreted from a weighted viewpoint in this work, is due to the use of nonconvex classification losses. Besides being robust, the proposed RSVC is also smooth, which again follows from the use of smooth classification losses. The idea behind RSVC comes from M-estimation in statistics, since the proposed robust and smooth classification losses can be viewed as one-sided cost functions in robust statistics. Its Fisher consistency and generalization ability are also investigated. Beyond robustness and smoothness, another appealing property of RSVC is that its solution can be obtained by iteratively solving weighted squared hinge loss-based support vector machine problems. We further show that each iteration is a quadratic programming problem in its dual space and can be solved with state-of-the-art methods. We thus propose an iteratively reweighted algorithm and provide a constructive proof of its convergence to a stationary point. The effectiveness of the proposed classifiers is verified on both artificial and real data sets.
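The iteratively reweighted idea can be sketched in simplified form: each pass fits a weighted ridge classifier, then zeroes the weight of points whose loss exceeds a truncation level, imitating how a nonconvex loss downweights mislabeled points. This uses a squared-loss surrogate rather than the paper's weighted squared-hinge QP, and all data and parameter values are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two well-separated Gaussian classes with a few flipped labels (label noise).
X = np.vstack([rng.normal(-2.0, 0.5, (20, 2)), rng.normal(2.0, 0.5, (20, 2))])
y_true = np.array([-1.0] * 20 + [1.0] * 20)
y = y_true.copy()
y[:3] *= -1  # inject label noise

def fit_reweighted(X, y, lam=1e-2, cutoff=1.0, iters=10):
    """Iteratively reweighted ridge classifier with a truncated squared loss.

    Points whose loss exceeds `cutoff` get zero weight in the next fit,
    imitating the downweighting effect of a nonconvex classification loss.
    A simplified stand-in for RSVC's weighted squared-hinge iterations.
    """
    Xb = np.hstack([X, np.ones((len(X), 1))])  # absorb the bias term
    weights = np.ones(len(X))
    w = np.zeros(Xb.shape[1])
    for _ in range(iters):
        A = (Xb.T * weights) @ Xb + lam * np.eye(Xb.shape[1])
        w = np.linalg.solve(A, (Xb.T * weights) @ y)
        loss = (y - Xb @ w) ** 2               # squared-loss surrogate
        weights = (loss <= cutoff).astype(float)  # reweighting step
    return w

w = fit_reweighted(X, y)
pred = np.sign(np.hstack([X, np.ones((len(X), 1))]) @ w)
accuracy = (pred == y_true).mean()  # measured against the clean labels
```

After one pass the mislabeled points incur large losses, receive zero weight, and the subsequent fits recover the clean separator.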


Author(s):  
Cecilio Angulo ◽  
Luis Gonzalez-Abril

Support Vector Machines -- SVMs -- are learning machines, originally designed for binary classification problems, that implement the well-known Structural Risk Minimization (SRM) inductive principle to obtain good generalization from a limited number of learning patterns (Vapnik, 1998). The optimization criterion for these machines is to maximize the margin between two classes, i.e. the distance between two parallel hyperplanes separating the vectors of the two classes: the larger the margin between the classes, the smaller the VC dimension of the learning machine, which theoretically ensures good generalization performance (Vapnik, 1998), as has been demonstrated in a number of real applications (Cristianini, 2000). The kernel trick is applicable in their formulation, which improves the capacity of these algorithms: learning is not performed directly in the original data space but in a new space called the feature space. For this reason the algorithm is one of the most representative of the so-called Kernel Machines (KMs). The main theory was originally developed in the sixties and seventies by V. Vapnik and A. Chervonenkis (Vapnik et al., 1963; Vapnik et al., 1971; Vapnik, 1995; Vapnik, 1998) on the basis of a separable binary classification problem; however, widespread use of these learning algorithms did not take place until the nineties (Boser et al., 1992). SVMs have been used extensively in all kinds of learning problems, mainly classification, but also regression (Schölkopf et al., 2004) and clustering (Ben-Hur et al., 2001). Optical Character Recognition (Cortes et al., 1995) and Text Categorization (Sebastiani, 2002) were the most important initial applications of SVMs.
With the extended application of new kernels, novel applications have emerged in the field of Bioinformatics; in particular, many works concern the classification of microarray gene-expression data (Brown et al., 1997) and the detection of structures in proteins and their relationship to DNA chains (Jaakkola et al., 2000). Other applications include image identification, voice recognition, time-series prediction, etc. A more extensive list of applications can be found in (Guyon, 2006).
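The kernel trick described above can be demonstrated with a minimal kernel perceptron: the XOR problem is not linearly separable in the input space, but it becomes separable in the feature space induced by an RBF kernel, which the algorithm accesses only through kernel evaluations. This toy example is illustrative and not taken from the chapter.

```python
import numpy as np

def rbf(a, b, gamma=1.0):
    """RBF kernel between two points."""
    return np.exp(-gamma * np.sum((a - b) ** 2))

# XOR labels: not separable by any hyperplane in the original 2-D space.
X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
y = np.array([-1.0, 1.0, 1.0, -1.0])

K = np.array([[rbf(a, b) for b in X] for a in X])  # Gram matrix

# Kernel perceptron: the decision function lives entirely in feature space
# and is evaluated only through the kernel (the kernel trick).
alpha = np.zeros(len(X))
for _ in range(100):
    mistakes = 0
    for i in range(len(X)):
        if np.sign((alpha * y) @ K[:, i]) != y[i]:
            alpha[i] += 1.0
            mistakes += 1
    if mistakes == 0:
        break

pred = np.sign((alpha * y) @ K)  # classifies all four XOR points correctly
```

A linear perceptron cycles forever on this data; substituting a kernel Gram matrix is the only change needed to solve it.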


Materials ◽  
2015 ◽  
Vol 8 (10) ◽  
pp. 7169-7178 ◽  
Author(s):  
Yi-Fan Shih ◽  
Yu-Ren Wang ◽  
Kuo-Liang Lin ◽  
Chin-Wen Chen

2016 ◽  
Vol 15 (03) ◽  
pp. 603-619 ◽  
Author(s):  
Min-Yuan Cheng ◽  
Nhat-Duc Hoang

This paper presents an AI approach, the self-adaptive fuzzy least squares support vector machines inference model (SFLSIM), for predicting the compressive strength of rubberized concrete. The SFLSIM consists of a fuzzification process for converting crisp input data into membership grades and an inference engine constructed on least squares support vector machines (LS-SVM). Moreover, the proposed inference model integrates differential evolution (DE) to adaptively search for the most appropriate profiles of the fuzzy membership functions (MFs) as well as the LS-SVM tuning parameters. In this study, 70 concrete mix samples are utilized to train and test the SFLSIM. According to the experimental results, the SFLSIM achieves a comparatively low mean absolute percentage error (MAPE) of less than 2%.
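The LS-SVM engine referenced above has the appealing property that training reduces to a single linear system (the dual KKT conditions with a bias term) instead of a QP. A minimal sketch, with invented mix features and strengths standing in for the paper's 70 samples, and assumed kernel width and regularization parameter:

```python
import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    """RBF kernel matrix between the rows of A and the rows of B."""
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-sq / (2 * sigma ** 2))

# Illustrative (not real) rubberized-concrete samples: pre-scaled mix
# features and 28-day compressive strengths in MPa.
X = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 2.0], [2.0, 2.0], [4.0, 1.0]])
y = np.array([30.0, 34.0, 26.0, 31.0, 28.0])

gamma = 100.0  # assumed LS-SVM regularization parameter
n = len(y)
K = rbf_kernel(X, X)

# LS-SVM dual: solve the KKT linear system for bias b and multipliers alpha.
A = np.zeros((n + 1, n + 1))
A[0, 1:] = 1.0
A[1:, 0] = 1.0
A[1:, 1:] = K + np.eye(n) / gamma
rhs = np.concatenate([[0.0], y])
sol = np.linalg.solve(A, rhs)
b, alpha = sol[0], sol[1:]

y_hat = K @ alpha + b                       # in-sample predictions
mape = np.mean(np.abs((y - y_hat) / y)) * 100
```

The MAPE here is a near-zero training-set figure by construction; it is not comparable to the paper's sub-2% result, which is obtained with DE-tuned parameters on held-out samples.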

