EEG-Based Epilepsy Recognition via Multiple Kernel Learning

2020 ◽  
Vol 2020 ◽  
pp. 1-9
Author(s):  
Yufeng Yao ◽  
Yan Ding ◽  
Shan Zhong ◽  
Zhiming Cui ◽  
Chenxi Huang

In the field of brain-computer interfaces, EEG signals are widely used for disease diagnosis. In this study, a style regularized least squares support vector machine based on multikernel learning is proposed and applied to the recognition of abnormal epileptic signals. The algorithm uses a style conversion matrix to represent the style information contained in the samples and regularizes it in the objective function. The objective function is optimized with the commonly used alternating optimization method, updating the style conversion matrix and the classifier parameters simultaneously during the iterations. To exploit the learned style information at prediction time, two new rules are added to the traditional prediction method, and the style conversion matrix is used to normalize the sample style before classification.
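As a rough illustration of the multikernel component described above, the sketch below trains a plain least squares SVM on a fixed weighted combination of two basis kernels. The style conversion matrix, its regularization term, and the alternating updates of the paper are not reproduced here, and all kernel weights, parameters, and data are illustrative assumptions.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel, polynomial_kernel

def combined_kernel(A, B, weights=(0.5, 0.5), gamma=0.1, degree=2):
    """Weighted sum of two basis kernels (a stand-in for the paper's multikernel term)."""
    return weights[0] * rbf_kernel(A, B, gamma=gamma) + \
           weights[1] * polynomial_kernel(A, B, degree=degree)

def lssvm_fit(X, y, gam=10.0, **kw):
    """Solve the LS-SVM dual linear system; y must be in {-1, +1}."""
    n = len(y)
    Omega = np.outer(y, y) * combined_kernel(X, X, **kw)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:], A[1:, 0] = y, y
    A[1:, 1:] = Omega + np.eye(n) / gam
    rhs = np.concatenate(([0.0], np.ones(n)))
    sol = np.linalg.solve(A, rhs)
    return sol[0], sol[1:]          # bias b, dual coefficients alpha

def lssvm_predict(X_train, y, b, alpha, X_new, **kw):
    K = combined_kernel(X_new, X_train, **kw)
    return np.sign(K @ (alpha * y) + b)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(60, 4))
    y = np.sign(X[:, 0])            # synthetic labels, illustration only
    b, alpha = lssvm_fit(X, y)
    print("train accuracy:", (lssvm_predict(X, y, b, alpha, X) == y).mean())
```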


Author(s):  
T. E. Potter ◽  
K. D. Willmert ◽  
M. Sathyamoorthy

Abstract Mechanism path-generation problems that use link deformations to improve the design lead to optimization problems involving a nonlinear sum-of-squares objective function subject to a set of linear and nonlinear constraints. Inclusion of the deformation analysis makes the objective-function evaluation computationally expensive. An optimization method is presented that requires relatively few objective-function evaluations. The algorithm, based on the Gauss method for unconstrained problems, is developed as an extension of the Gauss constrained technique for linear constraints and revises the Gauss nonlinearly constrained method for quadratic constraints. The derivation of the algorithm, using a Lagrange multiplier approach, is based on the Kuhn-Tucker conditions, so that when the iteration process terminates these conditions are automatically satisfied. Although the technique was developed for mechanism problems, it is applicable to any optimization problem having the form of a sum-of-squares objective function subject to nonlinear constraints.
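For readers unfamiliar with the unconstrained Gauss method that the algorithm extends, the following sketch shows a basic Gauss-Newton iteration for a sum-of-squares objective. The constrained extension via Lagrange multipliers and the Kuhn-Tucker conditions described above is not reproduced, and the exponential-fit example is purely illustrative.

```python
import numpy as np

def gauss_newton(residual, jacobian, x0, tol=1e-8, max_iter=50):
    """Unconstrained Gauss-Newton for minimizing ||r(x)||^2.

    The paper extends this basic step to linear, quadratic, and general
    nonlinear constraints; only the unconstrained core is sketched here.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r, J = residual(x), jacobian(x)
        # Gauss-Newton step: least-squares solution of J dx ~= -r.
        dx = np.linalg.lstsq(J, -r, rcond=None)[0]
        x = x + dx
        if np.linalg.norm(dx) < tol:
            break
    return x

# Illustrative example: fit y = a * exp(b * t) to data.
t = np.linspace(0, 1, 20)
y = 2.0 * np.exp(-1.5 * t)
residual = lambda p: p[0] * np.exp(p[1] * t) - y
jacobian = lambda p: np.column_stack([np.exp(p[1] * t),
                                      p[0] * t * np.exp(p[1] * t)])
print(gauss_newton(residual, jacobian, x0=[1.0, -1.0]))
```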



Author(s):  
Lochi Yu ◽  
Cristian Ureña

Since the first recordings of the brain's electrical activity more than 100 years ago, remarkable contributions have been made to understanding brain function and its interaction with the environment. Regardless of the nature of the brain-computer interface (BCI), a world of opportunities and possibilities has been opened, not only for people with severe disabilities but also for those pursuing innovative human interfaces. A deeper understanding of EEG signals, along with refined technologies for recording them, is helping to improve the performance of EEG-based BCIs. Better processing and feature-extraction methods, such as Independent Component Analysis (ICA) and the Wavelet Transform (WT), are giving promising results that need to be explored. Different types of classifiers, and combinations of them, have been used in EEG BCIs. Linear, neural, and nonlinear Bayesian classifiers have been the most used, providing accuracies ranging between 60% and 90%. Some, such as Support Vector Machine (SVM) classifiers, demand more computational resources but generalize well. Linear Discriminant Analysis (LDA) classifiers generalize less well but require few computational resources, making them well suited to some real-time BCIs. Better classifiers must be developed to tackle the large variability of patterns across subjects by using every available resource, method, or technology.
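A minimal sketch of the kind of classifier comparison discussed above, using scikit-learn's LDA and SVM on a synthetic feature matrix standing in for ICA/wavelet-derived EEG features; the data, pipeline, and scores are illustrative assumptions, not results from any of the cited BCIs.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical feature matrix: trials x (wavelet/ICA-derived features).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 32))
y = rng.integers(0, 2, size=200)

# On random data the scores are near chance; real EEG features are
# needed for the 60-90% accuracies mentioned in the text.
for name, clf in [("LDA", LinearDiscriminantAnalysis()),
                  ("SVM", SVC(kernel="rbf", C=1.0))]:
    pipe = make_pipeline(StandardScaler(), clf)
    scores = cross_val_score(pipe, X, y, cv=5)
    print(f"{name}: {scores.mean():.2f} +/- {scores.std():.2f}")
```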



2014 ◽  
Vol 981 ◽  
pp. 171-174 ◽  
Author(s):  
Li Wang ◽  
Xiong Zhang ◽  
Xue Fei Zhong ◽  
Zhao Wen Fan

Hybrid brain-computer interfaces (BCIs) based on electroencephalography (EEG) have become increasingly popular. Motor imagery, steady-state visual evoked potentials (SSVEPs), and P300 are the main training paradigms. In our previous research, BCI systems based on motor imagery were extended with speech imagery. However, different mental tasks may produce noise and artifacts, and EEG signals also differ among users, so classification accuracy can be improved by selecting the optimum frequency range for each user. Mutual information (MI) is commonly used to choose optimal features. After features are extracted from each narrow frequency band of the EEG by common spatial patterns (CSP), they are assessed by MI, from which the optimum frequency range is obtained. The final classification results are computed by a support vector machine (SVM). Averaged over seven subjects, the results for the optimum frequency range are better than those for a fixed frequency range.
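The band-selection idea can be sketched as follows, assuming synthetic epochs and simple log band-power features in place of CSP features; the mutual-information scoring and SVM classification follow the description above, but all signals, bands, and parameters are assumptions made for illustration.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt
from sklearn.feature_selection import mutual_info_classif
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def bandpower_features(epochs, fs, band):
    """Log band-power per channel (a simple stand-in for CSP log-variance features)."""
    sos = butter(4, band, btype="bandpass", fs=fs, output="sos")
    filtered = sosfiltfilt(sos, epochs, axis=-1)
    return np.log(filtered.var(axis=-1))

rng = np.random.default_rng(0)
fs = 250
epochs = rng.normal(size=(120, 8, fs * 2))   # trials x channels x samples (synthetic)
y = rng.integers(0, 2, size=120)

# Score each narrow band by mutual information and keep the best one.
bands = [(4, 8), (8, 12), (12, 16), (16, 20), (20, 24), (24, 28)]
scores = [mutual_info_classif(bandpower_features(epochs, fs, b), y,
                              random_state=0).mean() for b in bands]
best = bands[int(np.argmax(scores))]

feats = bandpower_features(epochs, fs, best)
print("selected band:", best,
      "CV accuracy:", cross_val_score(SVC(), feats, y, cv=5).mean())
```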



SPE Journal ◽  
2018 ◽  
Vol 23 (06) ◽  
pp. 2428-2443 ◽  
Author(s):  
Zhenyu Guo ◽  
Chaohui Chen ◽  
Guohua Gao ◽  
Jeroen Vink

Summary Numerical optimization is an integral part of many history-matching (HM) workflows. However, optimization performance can be affected negatively by the numerical noise present in the forward models when gradients are estimated numerically. As an unavoidable part of reservoir simulation, numerical noise refers to the error caused by incomplete convergence of linear or nonlinear solvers or by truncation errors resulting from different timestep cuts. More precisely, the allowed solver tolerances and allowed changes of pressure and saturation mean that simulation results no longer change smoothly with changing model parameters. For HM with the linear distributed Gauss-Newton (L-DGN) method, this discontinuity of simulation results can make the sensitivity matrix computed by linear interpolation less accurate, which might result in slow convergence or, even worse, failure to converge. Recently, we developed an HM workflow that integrates support-vector regression (SVR) with the distributed Gauss-Newton (DGN) optimization method, referred to as SVR-DGN. Unlike L-DGN, which computes the sensitivity matrix with a simple linear proxy, SVR-DGN computes the sensitivity matrix by taking the gradient of the SVR proxies. In this paper, we provide theoretical analysis and case studies to show that SVR-DGN can compute a more accurate sensitivity matrix than L-DGN and is insensitive to the negative influence of numerical noise. We also propose a cost-saving training procedure that replaces bad training points, which correspond to relatively large values of the objective function, with training-data points (simulation data) that have smaller objective-function values and were generated in the most recent iterations for training the SVR proxies. Both the L-DGN approach and the newly proposed SVR-DGN approach are tested first on a 2D toy problem to show the effect of numerical noise on their convergence performance. We find that their performance is comparable when the toy problem is free of numerical noise. As the numerical-noise level increases, the performance of L-DGN degrades sharply, whereas the performance of SVR-DGN remains quite stable. Both methods are then tested on a real-field HM example. The convergence performance of SVR-DGN is robust for both tight and loose numerical settings, whereas the performance of L-DGN degrades significantly when loose numerical settings are applied.
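A toy illustration of the central claim, that the gradient of a smooth SVR proxy is less affected by numerical noise than a direct finite-difference estimate: a one-parameter noisy response stands in for simulator output, and the SVR settings are assumptions rather than the paper's actual configuration.

```python
import numpy as np
from sklearn.svm import SVR

# Noisy "simulator" output along one model parameter (illustrative stand-in
# for a reservoir-simulation response contaminated by numerical noise).
rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 30)
f = np.sin(2 * np.pi * x) + 0.05 * rng.standard_normal(x.size)

# Sensitivity by finite differences on the noisy data (what a linear proxy sees).
fd_slope = np.gradient(f, x)

# Sensitivity from the gradient of a smooth SVR proxy fitted to the same data.
proxy = SVR(kernel="rbf", C=10.0, epsilon=0.05).fit(x.reshape(-1, 1), f)
h = 1e-3
svr_slope = (proxy.predict((x + h).reshape(-1, 1)) -
             proxy.predict((x - h).reshape(-1, 1))) / (2 * h)

true_slope = 2 * np.pi * np.cos(2 * np.pi * x)
print("finite-difference error:", np.abs(fd_slope - true_slope).mean())
print("SVR-proxy error:       ", np.abs(svr_slope - true_slope).mean())
```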



2018 ◽  
Vol 7 (2) ◽  
pp. 279-285
Author(s):  
Sandy Akbar Dewangga ◽  
Handayani Tjandrasa ◽  
Darlis Herumurti

Brain-computer interfaces have been explored for years with the intent of using human thoughts to control mechanical systems. By capturing signals directly from the human brain through electroencephalography (EEG), human thoughts can be translated into motion commands for a robot. This paper presents a prototype of an EEG-based brain-actuated robot control system using mental commands. In this study, Linear Discriminant Analysis (LDA) and the Support Vector Machine (SVM) method were combined to establish the best model. A dataset containing features of the EEG signals was obtained from the subject non-invasively using an Emotiv EPOC headset. The best model was then used by the brain-computer interface (BCI) to classify the EEG signals into robot motion commands and control the robot directly. The classification gave an average accuracy of 69.06%.
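One plausible reading of the LDA and SVM combination is LDA as a supervised dimensionality-reduction step feeding an SVM classifier; the sketch below shows that arrangement on a hypothetical feature matrix. The abstract does not spell out the exact combination scheme, and the data and scores here are not the study's.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC
from sklearn.pipeline import Pipeline
from sklearn.model_selection import cross_val_score

# Hypothetical EEG feature matrix (trials x features) with 4 mental-command labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(240, 70))
y = rng.integers(0, 4, size=240)

pipe = Pipeline([
    ("lda", LinearDiscriminantAnalysis(n_components=3)),  # project onto class-discriminant axes
    ("svm", SVC(kernel="rbf", C=1.0)),                    # classify the projected features
])
print("mean CV accuracy:", cross_val_score(pipe, X, y, cv=5).mean())
```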



2014 ◽  
Author(s):  
Γρηγόριος Τζώρτζης

This dissertation studies the clustering problem, which aims to partition a dataset into groups (clusters) without supervision, so that data belonging to the same cluster are similar to each other and dissimilar to the data of other clusters, according to a similarity/dissimilarity measure. Specifically, the dissertation focuses on clustering methods along three main thematic axes: (a) clustering of data for which only the proximity matrix, and not the data themselves, is available (proximity-based clustering); (b) multi-view learning, where multiple representations (views) of the same data are available, originating from different sources and/or different feature spaces; and (c) multiple kernel learning, where, simultaneously with the clustering, we also want to learn a suitable kernel for the data. Typically, the kernel is parameterized as a combination of given basis kernels, and we aim to learn appropriate values for the combination parameters.

First, a method is proposed for addressing the well-known initialization problem from which the k-means algorithm suffers. Specifically, we modify the k-means objective function so that greater emphasis is placed on minimizing the clusters that exhibit large intra-cluster variance in the current iteration. In this way, the solution space is gradually restricted to clusters of comparable variance, which allows our method to systematically locate solutions of better quality than k-means as it is restarted from random initial centers. In addition, an adaptation of the method is presented so that it can be applied to clustering with a kernel matrix, by modifying the objective of the kernel k-means algorithm.

The dissertation then focuses on the problem of multi-view clustering. The main contribution in this area relates to assigning weights to the views, which are learned automatically and which reflect the quality of the views. Existing approaches consider all views equally important, which can lead to a significant drop in performance if degenerate views (e.g., noisy views) are present in the dataset. In particular, two different methodologies are presented for this problem. In the first, we represent the views through convex mixture models, taking into account their different statistical properties, and present one algorithm with view weights and one without. In the second, we represent each view through a kernel matrix and learn a weighted combination of these matrices. The proposed model has a parameter that controls the sparsity of the weights, allowing the combination to better fit the data.

The last part of the dissertation concerns multiple kernel clustering, where the criterion usually optimized is the margin of the solution, as known from the SVM (support vector machine) classifier. In the proposed approach, the ratio between the margin and the intra-cluster variance of the clusters is optimized, thus taking into account both their separability and how compact the clusters are, which can lead to better solutions. It has been shown that the margin alone does not suffice as a criterion for learning a suitable kernel, since it can become arbitrarily large through a simple scaling of the kernel. In contrast, the criterion we propose is invariant to kernel scalings and, moreover, its global optimum is invariant with respect to the type of norm applied to the constraints on the kernel parameters. The experimental results confirm the properties of our criterion, as well as the expected improved clustering performance.
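As a small illustration of the multi-view weighted kernel combination described above, the sketch below mixes per-view RBF kernel matrices with hand-fixed weights and clusters the result. In the thesis the weights are learned and kernel k-means is used; here spectral clustering on the precomputed kernel stands in, and the data, weights, and kernel parameters are assumptions.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.cluster import SpectralClustering
from sklearn.datasets import make_blobs

# Two synthetic "views" of the same 3-cluster data (the second view is noisier).
X1, y = make_blobs(n_samples=150, centers=3, cluster_std=1.0, random_state=0)
X2 = X1 + np.random.default_rng(0).normal(scale=3.0, size=X1.shape)

K1, K2 = rbf_kernel(X1, gamma=0.5), rbf_kernel(X2, gamma=0.5)
w = np.array([0.8, 0.2])               # fixed view weights (learned in the thesis)
K = w[0] * K1 + w[1] * K2              # weighted combination of the view kernels

labels = SpectralClustering(n_clusters=3, affinity="precomputed",
                            random_state=0).fit_predict(K)
print("cluster sizes:", np.bincount(labels))
```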



Sensors ◽  
2014 ◽  
Vol 14 (7) ◽  
pp. 12784-12802 ◽  
Author(s):  
Xiaoou Li ◽  
Xun Chen ◽  
Yuning Yan ◽  
Wenshi Wei ◽  
Z. Wang


Energies ◽  
2021 ◽  
Vol 14 (18) ◽  
pp. 5752
Author(s):  
Zhenxing Zhao ◽  
Kaijie Chen ◽  
Ying Chen ◽  
Yuxing Dai ◽  
Zeng Liu ◽  
...  

With existing power-prediction algorithms, it is difficult to satisfy the requirements for prediction accuracy and time when PV output power fluctuates sharply within seconds, so this paper proposes a high-precision, ultra-fast PV power prediction algorithm. First, to shorten the optimization time and improve optimization accuracy, the single-iteration Gray Wolf Optimization (SiGWO) method is used to simplify the iterative tuning of the hyperparameters of the Least Squares Support Vector Machine (LSSVM); a hybrid local search composed of Iterative Local Search (ILS) and Self-adaptive Differential Evolution (SaDE) then refines the hyperparameters, achieving high-precision, ultra-fast PV power prediction. A power-prediction model is established, and the proposed algorithm is applied in a test experiment, completing the power prediction within 3 s with an RMSE of only 0.44%. Finally, combined with a PV-storage advanced smoothing control strategy, it is verified that the proposed algorithm satisfies the system's requirements for prediction accuracy and time under power-mutation conditions in a PV power generation system.
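A minimal sketch of LSSVM regression with hyperparameter tuning: the dual linear system is solved directly, and a plain grid search stands in for the SiGWO plus ILS/SaDE hybrid described above; the toy series and parameter grids are assumptions.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

def lssvm_reg_fit(X, y, gam, kgamma):
    """LS-SVM regression: solve the dual linear system for bias b and coefficients alpha."""
    n = len(y)
    K = rbf_kernel(X, X, gamma=kgamma)
    A = np.block([[np.zeros((1, 1)), np.ones((1, n))],
                  [np.ones((n, 1)), K + np.eye(n) / gam]])
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]

def lssvm_reg_predict(X_train, b, alpha, X_new, kgamma):
    return rbf_kernel(X_new, X_train, gamma=kgamma) @ alpha + b

# Toy series standing in for short-horizon PV power.
rng = np.random.default_rng(0)
X = np.linspace(0, 10, 200).reshape(-1, 1)
y = np.sin(X).ravel() + 0.1 * rng.standard_normal(200)
Xtr, ytr, Xte, yte = X[:150], y[:150], X[150:], y[150:]

# Plain grid search over the regularization and kernel hyperparameters.
best = None
for gam in (1.0, 10.0, 100.0):
    for kgamma in (0.1, 1.0, 10.0):
        b, alpha = lssvm_reg_fit(Xtr, ytr, gam, kgamma)
        rmse = np.sqrt(np.mean((lssvm_reg_predict(Xtr, b, alpha, Xte, kgamma) - yte) ** 2))
        if best is None or rmse < best[0]:
            best = (rmse, gam, kgamma)
print("best RMSE %.3f at gamma=%g, kernel gamma=%g" % best)
```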



2019 ◽  
Vol 11 (3) ◽  
pp. 168781401983710
Author(s):  
Peng Zheng ◽  
Dong-liang Liu ◽  
Xue-hao Tian ◽  
Zhan-xin Zhi ◽  
Lin-na Zhang

In any grinding process, the compensation regulation value is a crucial factor for maintaining precision during the batch processing of workpieces. Geometric characteristics, buffing allowance, temperature, wheel speed, and workpiece speed are the main factors that affect the compensation regulation value. In this article, a novel prediction method for the compensation regulation value is proposed based on an incremental support vector machine and a mixed kernel function. The support vectors for the prediction model are extracted using a convex-hull-vertex optimization algorithm, which effectively increases the speed of the operation. In addition, the parameters of the model are optimized by cross-validation to improve the accuracy of the prediction model. A feedback control strategy for the compensation regulation value in the grinding process is also proposed. Single-factor and multi-factor experiments are carried out with the proposed method, and the results verify its feasibility and effectiveness. The machining accuracy is also improved significantly in comparison with machining without prediction and compensation control. Moreover, by applying predictive compensation control of the compensation regulation value to the active measurement and control of the grinding process, a feedback system is formed, and the grinding system can then be made intelligent.
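The mixed-kernel idea can be sketched with a convex mix of an RBF and a polynomial kernel passed to scikit-learn's SVR, with cross-validated parameter selection; the incremental training and convex-hull-vertex support-vector extraction of the article are not reproduced, and the data and parameters are illustrative.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import GridSearchCV
from sklearn.metrics.pairwise import rbf_kernel, polynomial_kernel

def make_mixed_kernel(rho=0.5, gamma=0.1, degree=2):
    """Convex mix of an RBF and a polynomial kernel (one common 'mixed kernel' form)."""
    def kernel(A, B):
        return rho * rbf_kernel(A, B, gamma=gamma) + \
               (1 - rho) * polynomial_kernel(A, B, degree=degree)
    return kernel

# Hypothetical features (geometry, allowance, temperature, wheel/workpiece speed)
# against a compensation regulation value; purely synthetic placeholders.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5))
y = X @ np.array([0.5, -0.2, 0.1, 0.3, 0.4]) + 0.05 * rng.standard_normal(300)

# Cross-validated selection of the regularization parameter C.
search = GridSearchCV(SVR(kernel=make_mixed_kernel()),
                      param_grid={"C": [1, 10, 100]},
                      cv=5, scoring="neg_root_mean_squared_error")
search.fit(X, y)
print("best params:", search.best_params_, "CV RMSE:", -search.best_score_)
```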



2019 ◽  
Vol 13 ◽  
Author(s):  
Yan Zhang ◽  
Ren Sheng

Background: To improve the efficiency of fault treatment for mining motors, a model is constructed that selects the kernel-function type and optimizes the parameters based on the principle of support vector machine classification. Methodology: A one-versus-rest (one-to-many) algorithm is used to establish two support vector machine models for fault diagnosis of the crusher motor rotor. One of them obtains the optimal parameters C and g from input samples of instantaneous-power fault-characteristic data of motor rotors that have not been processed by rough sets. Patents on machine learning have also shown their practical usefulness in feature selection for fault detection. Results: The instantaneous-power fault features extracted from the crusher motor rotor are evaluated by grid search with k-fold cross-validation (where k is 3), yielding the final values of the Gaussian radial basis function penalty parameter C and kernel parameter g. Conclusion: The model established with the optimal parameters is used to classify and diagnose samples of instantaneous-power fault-characteristic measurements of the motor rotor. The classification accuracy is higher for the sample data processed by rough sets.
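A hedged sketch of the described setup, a one-versus-rest RBF support vector machine with a 3-fold grid search over the penalty parameter C and kernel parameter g (gamma); the fault-feature data here are synthetic placeholders, not the instantaneous-power measurements of the study.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV
from sklearn.multiclass import OneVsRestClassifier

# Hypothetical instantaneous-power fault features for several rotor fault classes.
rng = np.random.default_rng(0)
X = rng.normal(size=(180, 12))
y = rng.integers(0, 3, size=180)

# 3-fold grid search over the penalty parameter C and RBF kernel parameter g (gamma).
grid = GridSearchCV(SVC(kernel="rbf"),
                    param_grid={"C": [0.1, 1, 10, 100],
                                "gamma": [0.01, 0.1, 1]},
                    cv=3)
model = OneVsRestClassifier(grid).fit(X, y)
print("per-class best params:",
      [est.best_params_ for est in model.estimators_])
```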


