A novel method for total chlorine detection using machine learning with electrode arrays

RSC Advances ◽  
2019 ◽  
Vol 9 (59) ◽  
pp. 34196-34206
Author(s):  
Zhe Li ◽  
Shunhao Huang ◽  
Juan Chen

A soft measurement model of total chlorine is established from cyclic voltammetry curves using principal component analysis and support vector regression.
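The pipeline described here (raw voltammetry curves reduced by PCA, then regressed with SVR) can be sketched as follows. This is a minimal illustration on synthetic low-rank data standing in for cyclic voltammetry scans, not the authors' electrode-array data; the latent-factor construction and all hyperparameters are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)

# Synthetic stand-in for cyclic voltammetry curves: each row is one scan
# (500 current samples) driven by 3 latent factors plus measurement noise.
latent = rng.normal(size=(240, 3))
loadings = rng.normal(size=(3, 500))
X = latent @ loadings + 0.05 * rng.normal(size=(240, 500))
y = 2.0 * latent[:, 0] + 0.1 * rng.normal(size=240)  # surrogate "total chlorine"

# Soft sensor: standardize -> PCA (dimension reduction) -> SVR
model = make_pipeline(StandardScaler(), PCA(n_components=3), SVR(kernel="rbf", C=10.0))
model.fit(X[:200], y[:200])
r2 = model.score(X[200:], y[200:])
print(f"held-out R^2 = {r2:.2f}")
```

Because the scans live near a 3-dimensional subspace, three principal components retain the chemistry-relevant variation while discarding most of the 500 raw current samples.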

Author(s):  
Tshilidzi Marwala

This chapter develops and compares three data imputation models using accuracy measures: auto-associative neural networks, principal component analysis, and support vector regression, each combined with cultural genetic algorithms to impute missing variables. Principal component analysis improves the overall performance of the auto-associative network, while support vector regression shows promising potential for future investigation. Imputation accuracies of up to 97.4% are achieved for some of the variables.


2020 ◽  
Vol 16 (4) ◽  
pp. 155014772091640
Author(s):  
Lanmei Wang ◽  
Yao Wang ◽  
Guibao Wang ◽  
Jianke Jia

In this article, principal component analysis, a method widely applied to image compression and feature extraction, is introduced to reduce the dimension of the input characteristic variables of support vector regression, and a method for joint estimation of near-field angle and range based on this dimension reduction is proposed. Signal-to-noise ratio and computational load are the decisive factors affecting the performance of the algorithm. Principal component analysis fuses the main characteristics of the training data and discards redundant information, improving the signal-to-noise ratio and reducing the computational load accordingly. Support vector regression is used to model the signal, with the upper-triangular elements of the signal covariance matrix usually serving as input features. Because the covariance matrix has many upper-triangular elements, using them directly as input features slows training. Principal component analysis therefore reduces the dimensionality of the upper-triangular elements of the covariance matrix of the known signal, and the reduced features are fed to a multi-output support vector regression machine to construct the near-field parameter estimation model, from which the parameters of an unknown signal are obtained. Simulation results show that the method achieves high estimation accuracy and training speed, adapts well to low signal-to-noise ratio, and outperforms both the back-propagation neural network algorithm and the two-step multiple signal classification algorithm.
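The feature pipeline described above (upper-triangular covariance elements, compressed by PCA, regressed by a multi-output SVR) can be sketched as below. The quadratic phase model in `features` is a deliberately simplified, hypothetical stand-in for a real near-field array response, and all dimensions and hyperparameters are assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.multioutput import MultiOutputRegressor
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVR

rng = np.random.default_rng(2)
M = 8                       # sensors; covariance matrix is M x M
iu = np.triu_indices(M)     # upper-triangular feature mask (36 complex entries)

def features(theta, r):
    """Toy near-field covariance: steering vector with a quadratic phase term."""
    n = np.arange(M)
    phase = theta * n + (1.0 / r) * n ** 2   # hypothetical phase model
    a = np.exp(1j * phase)
    R = np.outer(a, a.conj()) + 0.01 * np.eye(M)
    u = R[iu]
    return np.concatenate([u.real, u.imag])  # 72 real-valued features

params = rng.uniform([0.1, 1.0], [1.0, 5.0], size=(300, 2))  # (angle, range)
X = np.array([features(t, r) for t, r in params])

# PCA compresses the redundant upper-triangular features; one SVR per
# output (angle, range) then learns the inverse mapping.
model = make_pipeline(PCA(n_components=10),
                      MultiOutputRegressor(SVR(kernel="rbf", C=100.0)))
model.fit(X[:250], params[:250])
pred = model.predict(X[250:])
mae = np.abs(pred - params[250:]).mean(axis=0)
print(f"MAE: angle {mae[0]:.3f}, range {mae[1]:.3f}")
```

The design choice mirrors the abstract: the covariance matrix's upper triangle is highly redundant, so compressing it before regression shrinks both the SVR training time and the model's sensitivity to noise in individual entries.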


Processes ◽  
2019 ◽  
Vol 7 (12) ◽  
pp. 928 ◽  
Author(s):  
Miguel De-la-Torre ◽  
Omar Zatarain ◽  
Himer Avila-George ◽  
Mirna Muñoz ◽  
Jimy Oblitas ◽  
...  

This paper explores five multivariate techniques for information fusion in sorting the visual ripeness of Cape gooseberry fruits: principal component analysis, linear discriminant analysis, independent component analysis, eigenvector centrality feature selection, and multi-cluster feature selection. These techniques are applied to the concatenated channels of the red, green, and blue (RGB); hue, saturation, value (HSV); and lightness, red/green, blue/yellow (L*a*b*) color spaces (nine features in total). Machine learning techniques have been reported for sorting Cape gooseberry fruits by ripeness: classifiers such as neural networks, support vector machines, and nearest neighbors discriminate fruit samples in different color spaces. Although the color spaces are equivalent up to a transformation, some classifiers perform better than others because of differences in the pixel distributions of the samples. Experimental results show that selecting and combining color channels allows the classifiers to reach similar levels of accuracy, although the combination methods require higher computational complexity. The highest accuracy was obtained with the seven-dimensional principal component analysis feature space.
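The best-performing configuration above (nine concatenated color features projected to a seven-dimensional PCA space, then classified) can be sketched on synthetic data. The Gaussian class clusters below merely stand in for per-fruit channel statistics; the seven ripeness classes and the SVM classifier are assumptions consistent with, but not taken from, the paper's experiments.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(3)

# Synthetic stand-in for the 9 concatenated channel means per fruit
# (R, G, B, H, S, V, L*, a*, b*), with 7 ripeness classes.
n_per_class = 60
X_parts, y_parts = [], []
for c in range(7):
    center = rng.normal(size=9)          # class-specific color signature
    X_parts.append(center + 0.3 * rng.normal(size=(n_per_class, 9)))
    y_parts.append(np.full(n_per_class, c))
X, y = np.vstack(X_parts), np.concatenate(y_parts)

# 9 features -> 7-dimensional PCA space -> SVM classifier
clf = make_pipeline(StandardScaler(), PCA(n_components=7), SVC(kernel="rbf"))
idx = rng.permutation(len(y))
train, test = idx[:350], idx[350:]
clf.fit(X[train], y[train])
acc = clf.score(X[test], y[test])
print(f"test accuracy = {acc:.2f}")
```

Dropping from nine to seven dimensions removes the near-duplicate variance shared by the three color spaces while keeping the class-discriminative directions, which is why the paper's PCA fusion matches the full feature set at lower cost.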


2021 ◽  
Vol 23 (06) ◽  
pp. 1699-1715
Author(s):  
Mohamed, A. M. ◽  
Abdel Latif, S. H. ◽  
Alwan, A. S. ◽  
...  

Principal component analysis is used increasingly as a variable-reduction technique, and a growing body of studies uses machine learning regression algorithms to improve the estimation of empirical models. One of the most frequently used machine learning regression models is support vector regression with various kernel functions, and an ensemble of support vector regression with principal component analysis is also possible. This paper therefore investigates the competence of support vector regression techniques after performing principal component analysis, to explore the possibility of reducing the data while obtaining more accurate estimates. Some new proposals are introduced, and the behavior of two different models, ε-SVR and ν-SVR, is compared through an extensive simulation study under four kernel functions (linear, radial, polynomial, and sigmoid) with sample sizes ranging from small and moderate to large. The models are compared with their counterparts in terms of the coefficient of determination (R²) and root mean squared error (RMSE). The comparative results show that applying SVR after PCA improves the results in terms of the number of support vectors by 30% to 60% on average, and the approach can be applied to real data. In addition, the linear kernel function gave the best values and the sigmoid kernel the worst. Under ε-SVR the results improved, which did not happen with ν-SVR. RMSE values also decreased with increasing sample size.
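The simulation design above (ε-SVR vs. ν-SVR after PCA, across the four kernels, scored by R² and RMSE) can be sketched as below. The data-generating process and hyperparameters are assumptions for illustration, not the paper's simulation settings; scikit-learn's `SVR` and `NuSVR` play the roles of ε-SVR and ν-SVR.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.metrics import mean_squared_error
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVR, NuSVR

rng = np.random.default_rng(4)

# Features with distinct variances so PCA keeps the informative directions;
# the target is linear in the two leading (highest-variance) features.
scales = np.linspace(3.0, 0.5, 10)
X = rng.normal(size=(300, 10)) * scales
y = X[:, 0] + 0.5 * X[:, 1] + 0.1 * rng.normal(size=300)

results = {}
for name, Est in [("eps-SVR", SVR), ("nu-SVR", NuSVR)]:
    for kernel in ("linear", "rbf", "poly", "sigmoid"):
        model = make_pipeline(PCA(n_components=5), Est(kernel=kernel, C=10.0))
        model.fit(X[:240], y[:240])
        pred = model.predict(X[240:])
        rmse = mean_squared_error(y[240:], pred) ** 0.5
        r2 = model.score(X[240:], y[240:])
        results[(name, kernel)] = (r2, rmse)
        print(f"{name:8s} {kernel:8s} R2={r2:6.2f} RMSE={rmse:.2f}")
```

With a linear data-generating process like this one, the linear kernel is expected to lead and the sigmoid kernel to trail, which is the same ordering the paper reports for its simulations.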

