Locally Linear Embedding as Nonlinear Feature Extraction to Discriminate Liquids with a Cyclic Voltammetric Electronic Tongue

2021 ◽  
Vol 5 (1) ◽  
pp. 56
Author(s):  
Jersson X. Leon-Medina ◽  
Maribel Anaya ◽  
Diego A. Tibaduiza

Electronic tongues are devices used in the analysis of aqueous matrices for classification or quantification tasks. These systems are composed of several sensors of different materials, a data acquisition unit, and a pattern recognition system. Voltammetric sensors have been used in electronic tongues with the cyclic voltammetry method, in which each sensor yields a voltammogram that relates the current response to the voltage applied to the working electrode. The experimental procedure produces a large amount of data, which allows the analysis to be handled as a pattern recognition application; however, the development of efficient machine-learning-based methodologies remains an open research topic. As a contribution, this work presents a novel data processing methodology to classify signals acquired by a cyclic voltammetric electronic tongue. The methodology comprises several stages, including data normalization through the group scaling method and a nonlinear feature extraction step with the locally linear embedding (LLE) technique. The reduced-size feature vector is then input to a k-nearest neighbors (k-NN) supervised classifier, and a leave-one-out cross-validation (LOOCV) procedure is performed to obtain the final classification accuracy. The methodology is validated with a data set of five different juices as liquid substances. Two screen-printed voltammetric sensors were used in the electronic tongue; their working electrodes were made of platinum and graphite, respectively. The results reached an 80% classification accuracy after applying the developed methodology.
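A minimal sketch of this kind of pipeline (group scaling, LLE feature extraction, k-NN with LOOCV) can be put together with scikit-learn. The synthetic data, the two-sensor column grouping, the particular group-scaling formula, and all parameter values below are assumptions for illustration, not the authors' settings.

```python
# Illustrative sketch (not the authors' code): group-scaled voltammograms,
# LLE features, and a k-NN classifier scored with leave-one-out CV.
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import LeaveOneOut, cross_val_score

# X: one row per measurement, columns = concatenated voltammogram samples
# from the two working electrodes; y: juice class labels (synthetic here).
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 400))   # 50 measurements x 400 current samples
y = rng.integers(0, 5, size=50)  # 5 juice classes

# Group scaling (one plausible variant): normalize each sensor's block of
# columns by that block's mean and standard deviation.
X_scaled = X.copy()
for g in [slice(0, 200), slice(200, 400)]:  # one slice per sensor
    mu, sigma = X[:, g].mean(), X[:, g].std()
    X_scaled[:, g] = (X[:, g] - mu) / sigma

# Nonlinear feature extraction with LLE, then k-NN evaluated with LOOCV.
lle = LocallyLinearEmbedding(n_components=3, n_neighbors=10)
Z = lle.fit_transform(X_scaled)
acc = cross_val_score(KNeighborsClassifier(n_neighbors=3), Z, y,
                      cv=LeaveOneOut()).mean()
print(f"LOOCV accuracy: {acc:.2%}")
```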

Sensors ◽  
2020 ◽  
Vol 20 (17) ◽  
pp. 4834
Author(s):  
Jersson X. Leon-Medina ◽  
Maribel Anaya ◽  
Francesc Pozo ◽  
Diego Tibaduiza

A nonlinear feature extraction-based approach using manifold learning algorithms is developed in order to improve the classification accuracy of an electronic tongue sensor array. The developed signal processing methodology is composed of four stages: data unfolding, scaling, feature extraction, and classification. This study compares seven manifold learning algorithms: Isomap, Laplacian Eigenmaps, Locally Linear Embedding (LLE), modified LLE, Hessian LLE, Local Tangent Space Alignment (LTSA), and t-Distributed Stochastic Neighbor Embedding (t-SNE), with the aim of finding the best classification accuracy in a multifrequency large-amplitude pulse voltammetry electronic tongue. A sensitivity study of the parameters of each manifold learning algorithm is also included. A data set of seven different aqueous matrices is used to validate the proposed data processing methodology. Leave-one-out cross-validation was employed on the 63 samples. The best accuracy (96.83%) was obtained when the methodology uses Mean-Centered Group Scaling (MCGS) for data normalization, the t-SNE algorithm for feature extraction, and k-nearest neighbors (kNN) as the classifier.
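The comparison described can be sketched with scikit-learn's manifold module, which implements all seven algorithms. The synthetic data, the reading of MCGS as column centering plus per-sensor scaling, and every parameter value below are assumptions for illustration only.

```python
# Illustrative comparison sketch (not the paper's code): several manifold
# learners as feature extractors, each scored with k-NN and LOOCV.
import numpy as np
from sklearn.manifold import (Isomap, LocallyLinearEmbedding,
                              SpectralEmbedding, TSNE)
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(63, 300))   # 63 samples of unfolded data (synthetic)
y = rng.integers(0, 7, size=63)  # 7 aqueous-matrix classes

# Mean-Centered Group Scaling (one plausible reading): center every column,
# then scale each sensor's block of columns by that block's std. deviation.
X = X - X.mean(axis=0)
for g in [slice(0, 150), slice(150, 300)]:  # one slice per sensor
    X[:, g] /= X[:, g].std()

embedders = {
    "Isomap": Isomap(n_components=2, n_neighbors=10),
    "Laplacian Eigenmaps": SpectralEmbedding(n_components=2, n_neighbors=10),
    "LLE": LocallyLinearEmbedding(n_components=2, n_neighbors=10),
    "Modified LLE": LocallyLinearEmbedding(n_components=2, n_neighbors=10,
                                           method="modified"),
    "Hessian LLE": LocallyLinearEmbedding(n_components=2, n_neighbors=10,
                                          method="hessian"),
    "LTSA": LocallyLinearEmbedding(n_components=2, n_neighbors=10,
                                   method="ltsa"),
    "t-SNE": TSNE(n_components=2, perplexity=15, random_state=0),
}
for name, emb in embedders.items():
    Z = emb.fit_transform(X)
    acc = cross_val_score(KNeighborsClassifier(n_neighbors=3), Z, y,
                          cv=LeaveOneOut()).mean()
    print(f"{name:20s} LOOCV accuracy: {acc:.2%}")
```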


2014 ◽  
Vol 533 ◽  
pp. 247-251
Author(s):  
Hai Bing Xiao ◽  
Xiao Peng Xie

This paper studies Locally Linear Embedding (LLE) and Hessian LLE as nonlinear feature extraction methods for dimensionality reduction of high-dimensional data. The LLE and Hessian LLE algorithms, which reveal the characteristics of nonlinear manifold learning, were analyzed. Simulation studies of LLE and Hessian LLE were carried out on different kinds of samples for dimensionality reduction, and their classification performance was compared against multidimensional scaling (MDS). The simulation results show that LLE and Hessian LLE are very effective feature extraction methods for nonlinear manifold learning.
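A comparison in the same spirit can be run on a standard toy manifold. The sketch below embeds a swiss roll with LLE, Hessian LLE, and MDS and scores each 2-D embedding with scikit-learn's trustworthiness measure; the dataset and parameters are illustrative, not those used in the paper.

```python
# Illustrative sketch: LLE, Hessian LLE, and MDS on the swiss-roll manifold,
# comparing how well each 2-D embedding preserves local neighborhoods.
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import LocallyLinearEmbedding, MDS, trustworthiness

X, color = make_swiss_roll(n_samples=600, random_state=0)

methods = {
    "LLE": LocallyLinearEmbedding(n_components=2, n_neighbors=12),
    "Hessian LLE": LocallyLinearEmbedding(n_components=2, n_neighbors=12,
                                          method="hessian"),
    "MDS": MDS(n_components=2, random_state=0),
}
for name, m in methods.items():
    Z = m.fit_transform(X)
    # Trustworthiness in [0, 1]: higher means neighborhoods are better kept.
    print(f"{name:12s} trustworthiness: {trustworthiness(X, Z):.3f}")
```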


2014 ◽  
Vol 51 (1) ◽  
pp. 57-73 ◽  
Author(s):  
Karol Deręgowski ◽  
Mirosław Krzyśko

Kernel principal components (KPC) and kernel discriminant coordinates (KDC), which extend principal components and discriminant coordinates, respectively, from a linear domain to a nonlinear domain via the kernel trick, are two very popular nonlinear feature extraction methods. The kernel discriminant coordinates space has proven to be a very powerful space for pattern recognition. However, further study shows that this method still has drawbacks. To improve the performance of pattern recognition, we propose a new learning algorithm combining the advantages of KPC and KDC.
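A rough sketch of the two building blocks is given below. It assumes KPC corresponds to scikit-learn's KernelPCA and approximates KDC by computing linear discriminant coordinates in a kernel-PCA feature space; this is not the authors' combined algorithm, and the dataset and parameters are placeholders.

```python
# Rough sketch: kernel principal components vs. a KDC-like construction
# (LDA in a kernel-PCA feature space), each followed by a k-NN classifier.
from sklearn.datasets import load_wine
from sklearn.decomposition import KernelPCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

X, y = load_wine(return_X_y=True)

# KPC: unsupervised nonlinear features from an RBF kernel.
kpc = make_pipeline(StandardScaler(),
                    KernelPCA(n_components=10, kernel="rbf"),
                    KNeighborsClassifier(n_neighbors=5))

# KDC-like: discriminant coordinates computed on the kernel-PCA scores.
kdc = make_pipeline(StandardScaler(),
                    KernelPCA(n_components=10, kernel="rbf"),
                    LinearDiscriminantAnalysis(),
                    KNeighborsClassifier(n_neighbors=5))

for name, model in [("KPC + k-NN", kpc), ("KPC -> LDA (KDC-like) + k-NN", kdc)]:
    acc = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name:30s} CV accuracy: {acc:.2%}")
```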


2021 ◽  
Vol 428 ◽  
pp. 280-290
Author(s):  
Yuanhong Liu ◽  
Zebiao Hu ◽  
Yansheng Zhang
