Prediction of river suspended sediment load using machine learning models and geo-morphometric parameters

2021 · Vol 14 (18)
Author(s): Maryam Asadi, Ali Fathzadeh, Ruth Kerry, Zohre Ebrahimi-Khusfi, Ruhollah Taghizadeh-Mehrjardi

Abstract
Estimating the sediment load of rivers is a major problem in river engineering that has been addressed with a variety of data mining algorithms and predictor variables. It is desirable to obtain accurate estimates of sediment load while using techniques that limit computational intensity when datasets are large. This study investigates the usefulness of geo-morphometric factors and machine learning (ML) models for predicting suspended sediment load (SSL) in several river basins in Lorestan and Gilan, Iran. Six ML models, namely multiple linear regression (MLR), artificial neural networks (ANN), K-nearest neighbors (KNN), Gaussian processes (GP), support vector machines (SVM), and evolutionary support vector machines (ESVM), were evaluated for estimating minimum and average SSL for the study regions. Geo-morphometric parameters and river discharge data were used as the main predictors in the modeling process. In addition, an attribute reduction technique was applied to decrease algorithm complexity and the computational resources used. The results showed that all models estimated both target variables well. However, the optimal models for predicting average and minimum sediment load were the GP and ESVM models, respectively.
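The workflow this abstract describes, several regressors compared on a reduced set of predictors, can be sketched as follows. This is a minimal illustration on synthetic data, not the authors' code: the feature counts, model settings, and the use of `SelectKBest` as the attribute-reduction step are all assumptions.

```python
# Hedged sketch (not the paper's code): comparing several regressors for
# suspended sediment load (SSL) prediction on synthetic data, with a simple
# attribute-reduction step. Data and hyperparameters are illustrative only.
from sklearn.datasets import make_regression
from sklearn.feature_selection import SelectKBest, f_regression
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Stand-in for geo-morphometric predictors plus river discharge
X, y = make_regression(n_samples=300, n_features=12, n_informative=6,
                       noise=10.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    "MLR": LinearRegression(),
    "KNN": KNeighborsRegressor(n_neighbors=5),
    "GP": GaussianProcessRegressor(alpha=1e-2),
    "SVM": SVR(C=100.0),
}

scores = {}
for name, model in models.items():
    # Attribute reduction: keep the 6 predictors most correlated with SSL
    pipe = make_pipeline(StandardScaler(),
                         SelectKBest(f_regression, k=6), model)
    pipe.fit(X_tr, y_tr)
    scores[name] = pipe.score(X_te, y_te)  # R^2 on held-out data

print(scores)
```

Running the selector inside the pipeline keeps the attribute reduction from leaking information out of the training folds, which matters when comparing models fairly.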

Complexity · 2021 · Vol 2021 · pp. 1-13
Author(s): Siyamak Doroudi, Ahmad Sharafati, Seyed Hossein Mohajeri

Predicting suspended sediment load (SSL) for water resource management requires efficient and reliable predictive models. This study considers the support vector regression (SVR) method for predicting daily suspended sediment load. Since the SVR model has unknown parameters, the observer-teacher-learner-based optimization (OTLBO) method is integrated with it to provide a novel hybrid predictive model. SVR combined with a genetic algorithm (SVR-GA) is used as an alternative model. To explore the performance and applicability of the proposed models, five input combinations of rainfall and discharge data from the Cham Siah River catchment are provided. The predictive models are assessed using various numerical and visual indicators. The results indicate that the SVR-OTLBO model offers higher prediction performance than the other models employed in this study. Specifically, the SVR-OTLBO model offers the highest Pearson correlation coefficient (R = 0.9768), Willmott's index (WI = 0.9812), ratio of performance to interquartile range (RPIQ = 0.9201), and modified index of agreement (md = 0.7411), and the lowest relative root mean square error (RRMSE = 0.5371), compared with the SVR-GA (R = 0.9704, WI = 0.9794, RPIQ = 0.8521, md = 0.7323, RRMSE = 0.5617) and SVR (R = 0.9501, WI = 0.9734, RPIQ = 0.3229, md = 0.4338, RRMSE = 1.0829) models.
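The hybrid idea here, a population-based metaheuristic searching for SVR hyperparameters, can be illustrated with a minimal teaching-learning-style loop. This is not the authors' OTLBO implementation: the observer phase is omitted, only the teacher phase is shown, and the data, population size, and search ranges are all assumptions.

```python
# Hedged sketch: tuning SVR's C and gamma with a minimal teacher-phase
# TLBO-style loop standing in for the full OTLBO algorithm. Synthetic data
# replaces the rainfall/discharge inputs; everything here is illustrative.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X, y = make_regression(n_samples=200, n_features=5, noise=5.0, random_state=1)

def fitness(params):
    C, gamma = np.exp(params)            # search in log space
    return cross_val_score(SVR(C=C, gamma=gamma), X, y, cv=3).mean()

# Population of candidate (log C, log gamma) pairs
pop = rng.uniform([-2.0, -6.0], [6.0, 0.0], size=(8, 2))
fit = np.array([fitness(p) for p in pop])

for _ in range(5):                       # a few optimization iterations
    teacher = pop[fit.argmax()]          # best learner acts as the teacher
    mean = pop.mean(axis=0)
    TF = rng.integers(1, 3)              # teaching factor in {1, 2}
    for i in range(len(pop)):
        # Teacher phase: move each learner toward the teacher
        cand = pop[i] + rng.random(2) * (teacher - TF * mean)
        f = fitness(cand)
        if f > fit[i]:                   # greedy acceptance
            pop[i], fit[i] = cand, f

best_C, best_gamma = np.exp(pop[fit.argmax()])
print(best_C, best_gamma)
```

Because candidates are only accepted when they improve the cross-validated score, the best fitness in the population is non-decreasing across iterations, which is the property that makes this family of searches attractive for hyperparameter tuning.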


2021 · Vol 2021 · pp. 1-9
Author(s): Yao Huimin

With the development of cloud computing and distributed cluster technology, the concept of big data has been expanded in both capacity and value, and machine learning has received unprecedented attention in recent years. Traditional machine learning algorithms cannot be parallelized effectively, so a parallelized support vector machine based on the Spark big data platform is proposed. First, the big data platform is designed with the Lambda architecture, which is divided into three layers: the Batch Layer, the Serving Layer, and the Speed Layer. Second, to improve the training efficiency of support vector machines on large-scale data, "special points" beyond the support vectors are considered when merging two support vector machines, that is, points where the non-support vectors of one subset violate the training results of the other subset, and a cross-validation merging algorithm is proposed on this basis. A parallelized support vector machine built on this cross-validation merging is then implemented on the Spark platform. Finally, experiments on different datasets verify the effectiveness and stability of the proposed method. Experimental results show that the proposed parallelized support vector machine performs well in terms of speed-up ratio, training time, and prediction accuracy.
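The merging step described above can be sketched without Spark: train an SVM per subset, then retrain on the union of both subsets' support vectors plus the "special points". This is an illustrative reading of the abstract, not the paper's implementation; scikit-learn stands in for the Spark machinery, and the data and kernel choice are assumptions.

```python
# Hedged sketch of the "merge two sub-SVMs" idea from the abstract. The merged
# training set keeps both subsets' support vectors plus the "special points":
# non-support vectors of one subset misclassified by the other subset's model.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=400, n_features=8, random_state=0)
halves = (slice(0, 200), slice(200, 400))

models, supports = [], []
for s in halves:
    m = SVC(kernel="linear").fit(X[s], y[s])
    models.append(m)
    supports.append(m.support_ + s.start)   # global indices of support vectors

keep = set(np.concatenate(supports))
for s, m_other in zip(halves, models[::-1]):
    idx = np.arange(s.start, s.stop)
    non_sv = np.setdiff1d(idx, list(keep))  # this subset's non-support vectors
    # "Special points": non-SVs that violate the other subset's trained model
    wrong = non_sv[m_other.predict(X[non_sv]) != y[non_sv]]
    keep.update(wrong.tolist())

keep = sorted(keep)
merged = SVC(kernel="linear").fit(X[keep], y[keep])
print(len(keep), merged.score(X, y))
```

The payoff is that the merged model is retrained on far fewer points than the full dataset while still seeing every point that could plausibly change the decision boundary, which is what makes the divide-and-merge scheme parallelizable.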


PLoS ONE · 2021 · Vol 16 (10) · pp. e0257901
Author(s): Yanjing Bi, Chao Li, Yannick Benezeth, Fan Yang

Phoneme pronunciations are usually considered basic skills for learning a foreign language. Practicing pronunciations in a computer-assisted way is helpful in a self-directed or long-distance learning environment. Recent research indicates that machine learning is a promising method for building high-performance computer-assisted pronunciation training modalities. Many data-driven classifying models, such as support vector machines, back-propagation networks, deep neural networks, and convolutional neural networks, are increasingly widely used for it. Yet the acoustic waveforms of phonemes are essentially modulated from the base vibrations of the vocal cords, and this fact makes the predictors collinear, distorting the classifying models. A commonly used solution to this issue is to suppress the collinearity of the predictors via the partial least squares (PLS) regression algorithm, which yields high-quality predictor weightings through predictor relationship analysis. However, as linear regressors, classifiers of this type possess very simple topological structures, constraining their universality. To address this, the paper presents a heterogeneous phoneme recognition framework that can further benefit phoneme pronunciation diagnostic tasks by combining partial least squares with support vector machines. A French phoneme dataset containing 4830 samples is established for the evaluation experiments. The experiments demonstrate that the new method improves the accuracy of the phoneme classifiers by 0.21-8.47% compared to the state of the art at different training data densities.

