nonlinear feature
Recently Published Documents


TOTAL DOCUMENTS

254
(FIVE YEARS 56)

H-INDEX

24
(FIVE YEARS 2)

Author(s):  
Turker Tuncer ◽  
Sengul Dogan ◽  
Abdulhamit Subasi

Abstract Electroencephalography (EEG) signals collected from human brains are generally used to diagnose diseases, but they can also be used in several other areas, such as emotion recognition and driving-fatigue detection. This work presents a new emotion recognition model using EEG signals. The primary aim of this model is to provide a highly accurate emotion recognition framework that combines hand-crafted feature generation with a deep classifier. The presented framework uses a multilevel fused feature generation network with three primary phases: tunable Q-factor wavelet transform (TQWT), statistical feature generation, and nonlinear textural feature generation. TQWT is applied to the EEG data to decompose the signals into sub-bands and create a multilevel feature generation network. In the nonlinear feature generation phase, an S-box of the LED block cipher is utilized to create a pattern, named the LED pattern. Statistical feature extraction uses the widely used statistical moments. The proposed LED pattern and statistical feature extraction functions are applied to 18 TQWT sub-bands and the original EEG signal; the resulting hand-crafted learning model is therefore named LEDPatNet19. To select the most informative features, the ReliefF and iterative Chi2 (RFIChi2) feature selector is deployed. The proposed model was developed on two EEG emotion datasets, GAMEEMO and DREAMER. The proposed hand-crafted learning network achieved 94.58%, 92.86%, and 94.44% classification accuracies for the arousal, dominance, and valence cases of the DREAMER dataset, and its best classification accuracy on the GAMEEMO dataset is 99.29%. These results clearly illustrate the success of the proposed LEDPatNet19.
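The statistical phase above relies on classical moments. A minimal numpy sketch (illustrative only — the function name and the choice of four moments are assumptions, not the authors' exact feature set):

```python
import numpy as np

def statistical_moments(sig):
    """Illustrative sketch: four classical statistical moments
    commonly used as hand-crafted features for a signal or sub-band."""
    sig = np.asarray(sig, dtype=float)
    mu = sig.mean()
    sd = sig.std()
    z = (sig - mu) / sd              # standardized samples
    skew = (z ** 3).mean()           # third moment: asymmetry
    kurt = (z ** 4).mean() - 3.0     # excess kurtosis
    return np.array([mu, sd, skew, kurt])

# One feature vector per sub-band; a synthetic sinusoid stands in for EEG here.
feats = statistical_moments(np.sin(np.linspace(0.0, 2.0 * np.pi, 500)))
```

In the full framework, such a vector would be computed for each of the 18 TQWT sub-bands plus the raw signal and concatenated with the LED-pattern features.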


2021 ◽  
Vol 2021 ◽  
pp. 1-8
Author(s):  
Haiyan Zhao

A synthetic aperture radar (SAR) target recognition method combining linear and nonlinear feature extraction and classifiers is proposed. Principal component analysis (PCA) and kernel PCA (KPCA), both classical and reliable feature extraction algorithms, are used to extract feature vectors from the original SAR image; KPCA effectively compensates for the purely linear description ability of PCA. Afterwards, a support vector machine (SVM) and kernel sparse representation-based classification (KSRC) are used to classify the KPCA and PCA feature vectors, respectively. Analogous to the feature extraction stage, KSRC introduces kernel functions to improve the processing and classification of nonlinear data. Through this combination of linear and nonlinear features and classifiers, the internal data structure of SAR images and the correspondence between test and training samples can be better captured. In the experiments, the performance of the proposed method is evaluated on the MSTAR dataset; the results show its effectiveness and robustness.
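The linear branch of such a pipeline can be sketched with plain numpy; the data shapes and component count below are hypothetical stand-ins, and the kernelized branch (KPCA) would replace the inner products with kernel evaluations:

```python
import numpy as np

# Hypothetical stand-in for flattened SAR image chips (100 samples, 64 pixels).
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 64))

# PCA via SVD of the centered data: rows of Vt are the principal directions,
# ordered by the singular values S (largest variance first).
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)

k = 10                       # assumed number of retained components
Z = Xc @ Vt[:k].T            # k-dimensional linear feature vectors
```

The reduced vectors `Z` would then be fed to the classifier stage (e.g. an SVM).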


2021 ◽  
Vol 2071 (1) ◽  
pp. 012041
Author(s):  
I Amalina ◽  
A Saidatul ◽  
C Y Fook ◽  
R F Navea

Abstract Brain signals recorded by EEG devices are widely used for biometric authentication. These signals are informative and can be reliably classified using signal processing. In this paper, feature extraction and feature fusion are studied to evaluate their performance on typing tasks. The signals are first pre-processed to eliminate unwanted noise. Feature extraction methods such as Welch's method, Burg's method, and the Yule–Walker method are applied to extract the mean, median, standard deviation, and variance of the data. A nonlinear feature, fuzzy entropy, is also extracted. The extracted features are then classified using k-Nearest Neighbour (k-NN), Random Forest (RF), and Ensemble Bagged Tree (EBT). The performance of feature extraction alone and of feature fusion through concatenation is recorded and compared; feature fusion achieves better accuracy than feature extraction alone. The highest accuracy, 95.94%, was produced by Burg's method with feature fusion over the frontal and parietal lobes using EBT.
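Welch's method averages windowed periodograms over overlapping segments. A hand-rolled sketch (the segment length, window choice, and synthetic test signal are assumptions for illustration):

```python
import numpy as np

def welch_psd(sig, fs, nperseg=256):
    """Sketch of Welch's method: average Hann-windowed periodograms
    over 50%-overlapping segments (parameters assumed, illustrative only)."""
    step = nperseg // 2
    win = np.hanning(nperseg)
    scale = fs * (win ** 2).sum()
    segs = [sig[i:i + nperseg] for i in range(0, len(sig) - nperseg + 1, step)]
    psd = np.mean([np.abs(np.fft.rfft(win * s)) ** 2 / scale for s in segs], axis=0)
    freqs = np.fft.rfftfreq(nperseg, d=1.0 / fs)
    return freqs, psd

# Synthetic 10 Hz "EEG" channel, 4 s at 256 Hz.
fs = 256
t = np.arange(4 * fs) / fs
f, p = welch_psd(np.sin(2.0 * np.pi * 10.0 * t), fs)

# Statistical descriptors of the spectrum: the mean, median, standard
# deviation, and variance listed in the abstract's feature set.
feats = np.array([p.mean(), np.median(p), p.std(), p.var()])
```

Feature fusion as described would concatenate such vectors across channels or lobes before classification.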


Author(s):  
Jui-Teng Lin

The synergetic features of a three-component photoinitiating system (A/B/C) are analyzed, based on the measured data and the mechanism proposed by Liu et al. The co-initiators/additives B and C have dual functions: (i) regeneration of the photoinitiator A, and (ii) generation of extra radicals for enhanced conversion efficacy (CE). These synergic effects lead to higher CE for both free-radical polymerization (FRP) and cationic polymerization (CP). The CE of FRP has three terms, arising from the direct (type-I) coupling and two radicals; it is therefore always higher than that of CP, which has only one radical. The CE of CP has a transient state proportional to the effective absorption constant (b), the light intensity (I), and the initiator concentration (A0), but a steady state that is independent of the light intensity. For the CE of FRP, the contribution from the radical R has two cases: (i) linear dependence on T' = bIA0, or (ii) nonlinear square-root dependence, (T')^0.5. The nonlinear feature is due to the bimolecular termination term k'R^2. The key factors influencing the conversion efficacy are explored by analytic formulas. The synergetic effects leading to higher conversion in FRP and CP are consistent with the measured work. However, there are other theoretically predicted new features (findings) that have not yet been identified or explored experimentally, including that co-initiator [C] always enhances both FRP and CP conversion, whereas co-initiator [B] leads to more efficient FRP but reduces CP. The specific systems analyzed are benzophenone derivatives (A), ethyl 4-(dimethylamino)benzoate (B), and (4-tert-butylphenyl)iodonium hexafluorophosphate (C) under UV (365 nm) LED irradiation, with two monomers: trimethylolpropane triacrylate (TMPTA, for FRP) and (3,4-epoxycyclohexane)methyl 3,4-epoxycyclohexylcarboxylate (EPOX, for CP).
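The linear and square-root regimes can be made explicit with a sketch of the radical balance. This is a hedged reconstruction from the abstract's quantities T' = bIA0 and k'R^2, not the authors' exact equations:

```latex
% Radical balance with bimolecular termination (sketch):
\frac{dR}{dt} = b I A_0 - k' R^2 .
% Without the k'R^2 term, R grows with T' = b I A_0 (linear regime, case i);
% at steady state, b I A_0 = k' R^2 gives R = \sqrt{b I A_0 / k'},
% i.e. the nonlinear (T')^{0.5} dependence of case ii.
```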


Author(s):  
Zheng Wang ◽  
Feiping Nie ◽  
Canyu Zhang ◽  
Rong Wang ◽  
Xuelong Li

2021 ◽  
Vol 5 (1) ◽  
pp. 56
Author(s):  
Jersson X. Leon-Medina ◽  
Maribel Anaya ◽  
Diego A. Tibaduiza

Electronic tongues are devices used in the analysis of aqueous matrices for classification or quantification tasks. These systems are composed of several sensors of different materials, a data acquisition unit, and a pattern recognition system. Voltammetric sensors have been used in electronic tongues with the cyclic voltammetry method, in which each sensor yields a voltammogram relating the current response to the voltage applied to the working electrode. A great amount of data is obtained in the experimental procedure, which allows the analysis to be handled as a pattern recognition application; however, the development of efficient machine-learning-based methodologies remains an open research topic. As a contribution, this work presents a novel data processing methodology to classify signals acquired by a cyclic voltammetric electronic tongue. The methodology is composed of several stages, including data normalization through the group scaling method and a nonlinear feature extraction step with the locally linear embedding (LLE) technique. The reduced-size feature vector is input to a k-Nearest Neighbors (k-NN) supervised classifier, and a leave-one-out cross-validation (LOOCV) procedure is performed to obtain the final classification accuracy. The methodology is validated with a data set of five different juices as liquid substances. Two screen-printed voltammetric sensors were used in the electronic tongue; specifically, their working electrodes were made of platinum and graphite. The methodology reached an 80% classification accuracy.
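The normalization stage can be illustrated with a small numpy sketch. The exact group scaling formulation varies between papers, so the version below (per-sensor pooled standard deviation after mean-centering) is an assumption:

```python
import numpy as np

def group_scaling(X, groups):
    """Assumed form of group scaling: mean-center every variable, then
    divide each sensor's block of columns by that block's pooled std."""
    Xs = X - X.mean(axis=0)
    for g in groups:                 # g = column indices of one sensor
        Xs[:, g] /= Xs[:, g].std()
    return Xs

# Two hypothetical sensors, the second on a far larger current scale.
rng = np.random.default_rng(1)
X = rng.standard_normal((50, 6))
X[:, 3:] *= 100.0
Xs = group_scaling(X, [[0, 1, 2], [3, 4, 5]])
```

After scaling, each sensor's block contributes at a comparable scale before the LLE and k-NN stages are applied.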

