A new feature extraction approach based on nonlinear source separation

Author(s):  
Hela Elmannai ◽  
Mohamed Saber Naceur ◽  
Mohamed Anis Loghmari ◽  
Abeer AlGarni

A new feature extraction approach is proposed in this paper to improve classification performance on remotely sensed data. The proposed method is based on a primary sources subset (PSS) obtained by a nonlinear transform that provides a lower-dimensional space for land pattern recognition. First, the underlying sources are approximated using multilayer neural networks, and Bayesian inference updates the unknown sources and model parameters from the observed data. Then, a source dimension minimization technique is adopted to provide a more efficient land cover description. A support vector machine (SVM) scheme is then trained on the extracted features. Experimental results on real multispectral imagery demonstrate that the proposed approach ensures efficient feature extraction when using several descriptors for texture identification and multiscale analysis. In a pixel-based approach, the reduced PSS space improved the overall classification accuracy by 13%, reaching 82%. Using texture and multiresolution descriptors, the overall accuracy is 75.87% for the original observations, while in the reduced source space it reaches 81.67% when using the wavelet and Gabor transforms jointly and 86.67% when using the Gabor transform alone. Thus, the source space enhances the feature extraction process and allows more land use discrimination than the multispectral observations.
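The reduced-space classification pipeline described above can be sketched with scikit-learn. PCA stands in here as a linear placeholder for the paper's nonlinear PSS transform (the actual transform, built from multilayer neural networks and Bayesian inference, is not reproduced); the synthetic data and all parameters are illustrative.

```python
# Sketch: dimensionality reduction to a "source" space, then SVM classification.
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for multispectral pixel vectors (e.g., 8 bands, 3 land classes).
X, y = make_classification(n_samples=600, n_features=8, n_informative=4,
                           n_classes=3, n_clusters_per_class=1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Classify in the reduced space instead of the raw observations.
clf = make_pipeline(StandardScaler(), PCA(n_components=4), SVC(kernel="rbf"))
clf.fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
```

Comparing `acc` against the same pipeline without the reduction step mirrors the paper's original-space vs. reduced-space comparison.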

2018 ◽  
Vol 14 (02) ◽  
pp. 60
Author(s):  
Wang Fei ◽  
Fang Liqing ◽  
Qi Ziyuan

<p>As the vibration signal characteristics of hydraulic pumps are non-stationary and their fault features are difficult to extract, a new feature extraction method was proposed. The approach combines wavelet packet analysis, fuzzy entropy, and LLTSA (linear local tangent space alignment), a typical manifold learning method, to extract fault features. First, the vibration signals were decomposed into eight signals at different scales, and the fuzzy entropies of these signals were calculated to constitute an eight-dimensional feature vector. Second, LLTSA was applied to compress the high-dimensional features into low-dimensional features with better classification performance. Finally, an SVM (support vector machine) was employed to distinguish the different fault features. Experimental results on hydraulic pump feature extraction show that the proposed method can accurately classify different fault types of hydraulic pumps and has a significant advantage over the other feature extraction methods considered in this paper.</p>
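The fuzzy entropy step above can be sketched in plain NumPy. The embedding dimension, tolerance fraction, and test signals below are illustrative choices, not the paper's settings.

```python
import numpy as np

def fuzzy_entropy(x, m=2, r=0.2, n=2):
    """Fuzzy entropy of a 1-D signal.

    m: embedding dimension, r: tolerance as a fraction of the signal's std,
    n: exponent of the exponential membership function.
    """
    x = np.asarray(x, dtype=float)
    tol = r * x.std()

    def phi(dim):
        N = len(x) - dim
        # Baseline-removed embedding vectors of length `dim`.
        X = np.array([x[i:i + dim] - x[i:i + dim].mean() for i in range(N)])
        # Chebyshev distances between all pairs of vectors.
        d = np.max(np.abs(X[:, None, :] - X[None, :, :]), axis=2)
        sim = np.exp(-(d ** n) / tol)       # fuzzy membership degrees
        np.fill_diagonal(sim, 0.0)          # exclude self-matches
        return sim.sum() / (N * (N - 1))

    return np.log(phi(m)) - np.log(phi(m + 1))

rng = np.random.default_rng(0)
t = np.linspace(0, 8 * np.pi, 300)
fe_regular = fuzzy_entropy(np.sin(t))                # predictable -> low entropy
fe_noise = fuzzy_entropy(rng.standard_normal(300))   # irregular -> high entropy
```

A faulty pump's more irregular vibration signal yields a higher fuzzy entropy than a healthy one, which is what makes the eight per-scale entropies usable as fault features.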


2020 ◽  
Author(s):  
Hoda Heidari ◽  
Zahra Einalou ◽  
Mehrdad Dadgostar ◽  
Hamidreza Hosseinzadeh

Abstract Most of the studies in the field of electroencephalography-based Brain-Computer Interfaces (BCI) have a wide range of applications. Extracting the Steady State Visual Evoked Potential (SSVEP) is regarded as one of the most useful tools in BCI systems. In this study, different methods were compared across the whole signal processing stream: feature extraction with different spectral measures (Shannon entropy, skewness, kurtosis, mean, variance) computed on a bank of filters, narrow-band IIR filters, and the wavelet transform magnitude; feature selection performed by various methods (decision tree, principal component analysis (PCA), t-test, Wilcoxon, receiver operating characteristic (ROC)); and classification applying k-nearest neighbor (k-NN), perceptron, support vector machines (SVM), Bayesian, and multilayer perceptron (MLP) classifiers. Through combining these methods, the study indicates the accuracy of classical methods. In addition, the present study relies on a rather new feature selection approach, based on decision trees and PCA, for BCI-SSVEP systems. Finally, the obtained accuracies were calculated based on the four recorded frequencies representing the four directions right, left, up, and down.
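The listed spectral measures can be computed per epoch with NumPy alone, as a minimal sketch; the sampling rate, epoch length, and 10 Hz stimulus frequency below are illustrative.

```python
import numpy as np

def stat_features(x):
    """Per-epoch descriptors of the kind listed above: mean, variance,
    skewness, excess kurtosis, and Shannon entropy of the normalized
    power spectrum."""
    x = np.asarray(x, dtype=float)
    mu, sigma = x.mean(), x.std()
    skew = np.mean(((x - mu) / sigma) ** 3)
    kurt = np.mean(((x - mu) / sigma) ** 4) - 3.0   # excess kurtosis
    spec = np.abs(np.fft.rfft(x)) ** 2
    p = spec / spec.sum()
    p = p[p > 0]
    shannon = -(p * np.log2(p)).sum()
    return np.array([mu, sigma ** 2, skew, kurt, shannon])

fs = 250.0
t = np.arange(0, 2, 1 / fs)
feats = stat_features(np.sin(2 * np.pi * 10 * t))   # 10 Hz SSVEP-like epoch
```

For a pure 10 Hz epoch the spectral energy concentrates in one bin, so the spectral Shannon entropy is near zero; noisier epochs spread the spectrum and raise it, which is what makes the measure discriminative.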


Author(s):  
Rashmi K. Thakur ◽  
Manojkumar V. Deshpande

Sentiment analysis is one of the popular techniques gaining attention in recent times. Nowadays, people gain information from users' reviews regarding public transportation, movies, hotel reservations, etc., by utilizing the available resources that meet their needs. Hence, sentiment classification is an essential process for determining positive and negative responses. This paper presents an approach for sentiment classification of train reviews using the MapReduce model with the proposed Kernel Optimized Support Vector Machine (KO-SVM) classifier. The MapReduce framework handles big data using a mapper, which performs the feature extraction, and a reducer, which classifies the review based on KO-SVM classification. The feature extraction process utilizes features that are classification-specific and SentiWordNet-based. KO-SVM adopts the SVM for classification, with the exponential kernel replaced by an optimized kernel whose weights are found using a novel optimizer, the Self-adaptive Lion Algorithm (SLA). In a comparative analysis, the performance of the KO-SVM classifier is compared with SentiWordNet, NB, NN, and LSVM on the train review and movie review databases, using the evaluation metrics specificity, sensitivity, and accuracy. The proposed KO-SVM classifier attained maximum sensitivities of 93.46% and 91.249%, specificities of 74.485% and 70.018%, and accuracies of 84.341% and 79.611% for the train review and movie review databases, respectively.
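The mapper/reducer split described above can be illustrated with a toy pipeline. The tiny word lexicon and the sign-threshold rule are stand-ins for the paper's SentiWordNet features and KO-SVM classifier, which are not reproduced here.

```python
# Toy MapReduce sentiment pipeline: the mapper emits lexicon-based sentiment
# features per review; the reducer aggregates them into a label.
POSITIVE = {"clean", "comfortable", "punctual", "good", "great"}
NEGATIVE = {"dirty", "late", "crowded", "bad", "awful"}

def mapper(review_id, text):
    """Emit (review_id, feature) pairs: +1 per positive word, -1 per negative."""
    for word in text.lower().split():
        if word in POSITIVE:
            yield review_id, 1
        elif word in NEGATIVE:
            yield review_id, -1

def reducer(pairs):
    """Sum the per-review scores and threshold into a sentiment label."""
    scores = {}
    for review_id, value in pairs:
        scores[review_id] = scores.get(review_id, 0) + value
    return {rid: ("positive" if s > 0 else "negative")
            for rid, s in scores.items()}

reviews = [(1, "the train was clean and punctual"),
           (2, "crowded coach and a late awful service")]
pairs = [p for rid, text in reviews for p in mapper(rid, text)]
labels = reducer(pairs)
```

In a real deployment the mapper and reducer would run as distributed MapReduce jobs over the review corpus, with the reducer invoking the trained classifier instead of a sign test.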


Electrocardiogram (ECG) examination via computer techniques involving feature extraction, pre-processing and post-processing has been implemented due to its significant advantages. Extracting standard ECG signal features, which requires a high level of processing, has been the main focus of many studies. In this paper, up to 6 different ECG signal classes are accurately predicted without explicit ECG feature extraction. The cornerstone of the proposed technique is linear predictive coding (LPC), which regresses and normalizes the signal during the pre-processing phase. Prior to feature extraction using wavelet energy (WE), a discrete wavelet transform (DWT) is applied to convert the ECG signal to the frequency domain. The dataset was divided into two parts, one for training and the other for testing, and classification was performed using a support vector machine (SVM). Moreover, using the MIT AI2 Companion, developed by the MIT Center for Mobile Learning, the classification result is shared to the patient's mobile phone, which can call an ambulance and send the location in case of a serious emergency. Finally, confusion matrix values are used to measure the classification performance. For 6 different ECG classes, an accuracy of about 98.15% was recorded. This ratio reached 100% for 3 ECG signal classes and decreased to 97.95% when the number of classes was increased to 7.


Electrocardiography (ECG) is the analysis of the electrical activity of the heart over a period of time; detailed information about the condition of the heart is obtained by analyzing the ECG signal. The wavelet transform and the fast Fourier transform are among the methods used to diagnose cardiac disease. This paper presents a survey on ECG signal analysis and related studies on arrhythmic and non-arrhythmic data. We discuss an efficient feature extraction process for the electrocardiogram in which, based on position and priority, the six best P-QRS-T fragments are studied. The survey examines system outcomes using various machine learning classification algorithms for feature extraction and analysis of ECG signals; support vector machine (SVM), k-nearest neighbor (KNN), and artificial neural network (ANN) are the most important algorithms used for this purpose. Several publicly available data sets are used for arrhythmia analysis, among which the MIT-BIH ECG-ID database is the most widely used. Drawbacks and limitations are also discussed, from which future challenges and concluding remarks are drawn.


2021 ◽  
Vol 2021 ◽  
pp. 1-10
Author(s):  
Zhizeng Luo ◽  
Ronghang Jin ◽  
Hongfei Shi ◽  
Xianju Lu

Feature extraction is essential for classifying different motor imagery (MI) tasks in a brain-computer interface. To improve classification accuracy, we propose a novel feature extraction method in which the connectivity increment rate (CIR) of the brain function network (BFN) is extracted. First, the BFN is constructed on the basis of the threshold matrix of the Pearson correlation coefficients of the mu rhythm among the channels. In addition, a weighted BFN is constructed and expressed as the sum of the existing edge weights to characterize the degree of cerebral cortex activation in different movement patterns. Then, on the basis of the topological structures of seven mental tasks, three regional networks centered on the C3, C4, and Cz channels are constructed, consistent with the correspondence between limb movement patterns and the cerebral cortex in neurophysiology. Furthermore, the CIR of each regional functional network is calculated to form three-dimensional feature vectors. Finally, we use a support vector machine to learn a classifier for the multiclass MI tasks. Experimental results show a significant improvement and demonstrate the success of the extracted CIR feature in dealing with MI classification. Specifically, the average classification performance reaches 88.67%, which is higher than that of other competing methods, indicating that the extracted CIR is effective for MI classification.
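The BFN construction step (thresholded Pearson correlation matrix plus the summed edge weights) can be sketched as follows; the threshold value, channel count, and synthetic "mu rhythm" signals are illustrative.

```python
import numpy as np

def brain_function_network(signals, threshold=0.5):
    """Binary BFN adjacency from the thresholded Pearson correlation matrix
    of channel signals (channels x samples), plus the summed edge weight
    used to gauge cortical activation."""
    corr = np.corrcoef(signals)
    adj = (np.abs(corr) >= threshold).astype(int)
    np.fill_diagonal(adj, 0)                            # no self-loops
    weighted_sum = np.abs(corr[adj == 1]).sum() / 2.0   # each edge counted once
    return adj, weighted_sum

# Three synthetic "channels": ch0 and ch1 share a 10 Hz mu-band-like component.
rng = np.random.default_rng(2)
t = np.linspace(0, 1, 500)
mu = np.sin(2 * np.pi * 10 * t)
ch0 = mu + 0.1 * rng.standard_normal(500)
ch1 = mu + 0.1 * rng.standard_normal(500)
ch2 = rng.standard_normal(500)
adj, w = brain_function_network(np.vstack([ch0, ch1, ch2]))
```

Computing this network before and after a task and differencing the regional edge counts around C3, C4, and Cz would give the connectivity-increment quantity the method builds on.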


2021 ◽  
Vol 15 ◽  
Author(s):  
Thanh-Tung Trinh ◽  
Chia-Fen Tsai ◽  
Yu-Tsung Hsiao ◽  
Chun-Ying Lee ◽  
Chien-Te Wu ◽  
...  

Individuals with mild cognitive impairment (MCI) are at high risk of developing dementia (e.g., Alzheimer's disease, AD). A reliable and effective approach for the early detection of MCI has therefore become a critical challenge. Although, compared with other costly or risky lab tests, electroencephalography (EEG) seems to be an ideal measure for the early detection of MCI, the search for valid EEG features for classification between healthy controls (HCs) and individuals with MCI remains largely unexplored. Here, we design a novel feature extraction framework and propose that the spectral-power-based task-induced intra-subject variability extracted by this framework can be an encouraging candidate EEG feature for the early detection of MCI. In this framework, we extracted as the candidate feature the task-induced intra-subject spectral power variability of resting-state EEGs (as measured by a between-run similarity) before and after participants performed cognitively exhausting working memory tasks. The results from 74 participants (23 individuals with AD, 24 individuals with MCI, 27 HCs) showed that the between-run similarity over the frontal and central scalp regions is higher in the HC group than in the AD or MCI group. Furthermore, using a feature selection scheme and a support vector machine (SVM) classifier, the between-run similarity showed encouraging leave-one-participant-out cross-validation (LOPO-CV) classification performance for MCI vs. HC (80.39%) and AD vs. HC (78%), superior to widely used features such as spectral powers, coherence, and the complexity estimated by Katz's method extracted from single-run resting-state EEGs (a common approach in previous studies).
The results based on LOPO-CV therefore suggest that the spectral-power-based task-induced intra-subject EEG variability extracted by the proposed framework has the potential to serve as a neurophysiological feature for the early detection of MCI in individuals.
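One plausible realization of the between-run similarity is the Pearson correlation between the per-channel band-power patterns of two resting-state runs, sketched below; the band edges, sampling rate, and synthetic runs are illustrative assumptions, not the authors' exact pipeline.

```python
import numpy as np

def band_powers(run, fs, bands=((4, 8), (8, 13), (13, 30))):
    """Relative per-channel spectral power in each band (channels x bands),
    computed via the FFT. Band edges are illustrative theta/alpha/beta choices."""
    spec = np.abs(np.fft.rfft(run, axis=1)) ** 2
    freqs = np.fft.rfftfreq(run.shape[1], d=1.0 / fs)
    p = np.column_stack([spec[:, (freqs >= lo) & (freqs < hi)].sum(axis=1)
                         for lo, hi in bands])
    return p / p.sum(axis=1, keepdims=True)   # normalize per channel

def between_run_similarity(run1, run2, fs=250.0):
    """Pearson correlation between the flattened band-power patterns of two runs."""
    p1, p2 = band_powers(run1, fs).ravel(), band_powers(run2, fs).ravel()
    return np.corrcoef(p1, p2)[0, 1]

rng = np.random.default_rng(3)
base = rng.standard_normal((8, 500))                  # 8 channels, 2 s at 250 Hz
run_a = base + 0.1 * rng.standard_normal((8, 500))
run_b = base + 0.1 * rng.standard_normal((8, 500))    # similar spectral pattern
run_c = rng.standard_normal((8, 500))                 # unrelated run
sim_high = between_run_similarity(run_a, run_b)
sim_low = between_run_similarity(run_a, run_c)
```

A stable (high-similarity) spectral pattern across pre- and post-task runs corresponds to the low intra-subject variability the study reports in healthy controls.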


2021 ◽  
Author(s):  
Hoda Heidari ◽  
Zahra Einalou ◽  
Mehrdad Dadgostar ◽  
Hamidreza Hosseinzadeh

Abstract Most of the studies in the field of electroencephalography-based Brain-Computer Interfaces (BCI) have a wide range of applications. Extracting the Steady State Visual Evoked Potential (SSVEP) is regarded as one of the most useful tools in BCI systems. In this study, different methods were compared across the whole signal processing stream: (1) feature extraction with different spectral measures (Shannon entropy, skewness, kurtosis, mean, variance) and the wavelet transform magnitude; (2) feature selection performed by various methods (decision tree, principal component analysis (PCA), t-test, Wilcoxon, receiver operating characteristic (ROC)); and (3) classification applying k-nearest neighbor (k-NN), support vector machines (SVM), Bayesian, and multilayer perceptron (MLP) classifiers. Through combining these methods, the study indicates the accuracy of classical methods. In addition, the present study relies on a rather new feature selection approach, based on decision trees and PCA, for BCI-SSVEP systems. Finally, the obtained accuracies were calculated based on the four recorded frequencies representing the four directions right, left, up, and down. The highest accuracy obtained was 91.39%.


2008 ◽  
Vol 16 (04) ◽  
pp. 495-517 ◽  
Author(s):  
ASHISH CHOUDHARY ◽  
JIANPING HUA ◽  
MICHAEL L. BITTNER ◽  
EDWARD R. DOUGHERTY

Classifying a patient based on disease type, treatment prognosis, survivability, or other such criteria has become a major focus of genomics and proteomics. From the perspective of the general population of a particular kind of cell, one would like a classifier that applies to the whole population; however, it is often the case that the population is sufficiently structurally diverse that a satisfactory classifier cannot be designed from the available sample data. In such circumstances, it can be useful to identify cellular contexts within which a disease can be reliably diagnosed, which in effect means finding classifiers that apply to different sub-populations within the overall population. Using a model-based approach, this paper quantifies the effect of contexts on classification performance as a function of the classifier used and the sample size. The advantage of a model-based approach is that we can vary the contextual confusion as a function of the model parameters, thereby allowing us to compare classification performance in terms of the degree of discriminatory confusion caused by the contexts. We consider five popular classifiers: linear discriminant analysis, 3-nearest-neighbor, linear support vector machine, polynomial support vector machine, and boosting. We contrast the case where classification is done with a single classifier, without discriminating between the contexts, with the case where context markers facilitate context separation before classifier design. We observe that little can be done when contextual confusion is high, but when it is low, context separation can be beneficial, the benefit depending on the classifier.
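The context effect can be illustrated with a deliberately extreme toy model: two contexts in which the class means are swapped, so a single pooled classifier is confounded while per-context classifiers separate the classes easily. A nearest-centroid rule stands in for the paper's classifiers; all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

def sample(n, mean):
    """n 2-D points around a given mean (isotropic noise, sd 0.5)."""
    return mean + 0.5 * rng.standard_normal((n, 2))

def nearest_centroid(train_x, train_y, test_x):
    c0 = train_x[train_y == 0].mean(axis=0)
    c1 = train_x[train_y == 1].mean(axis=0)
    d0 = np.linalg.norm(test_x - c0, axis=1)
    d1 = np.linalg.norm(test_x - c1, axis=1)
    return (d1 < d0).astype(int)

# Context A: class 0 near (0,0), class 1 near (2,2).
# Context B: the assignment is reversed (maximal contextual confusion).
Xa0, Xa1 = sample(100, np.array([0.0, 0.0])), sample(100, np.array([2.0, 2.0]))
Xb0, Xb1 = sample(100, np.array([2.0, 2.0])), sample(100, np.array([0.0, 0.0]))

X = np.vstack([Xa0, Xa1, Xb0, Xb1])
y = np.array([0] * 100 + [1] * 100 + [0] * 100 + [1] * 100)
context = np.array([0] * 200 + [1] * 200)

pooled_acc = (nearest_centroid(X, y, X) == y).mean()       # near chance
ctx_acc = np.mean([(nearest_centroid(X[context == c], y[context == c],
                                     X[context == c]) == y[context == c]).mean()
                   for c in (0, 1)])                        # near perfect
```

With the class means swapped across contexts, the pooled centroids coincide and pooled accuracy hovers near chance, while context separation before classifier design recovers the structure.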


Entropy ◽  
2019 ◽  
Vol 21 (12) ◽  
pp. 1149
Author(s):  
Ersoy Öz ◽  
Öyküm Esra Aşkın

Classifying nucleic acid trace files is an important issue in molecular biology research. To obtain better classification performance, the questions of which features are used and which classifier best represents the properties of nucleic acid trace files play a vital role. In this study, different feature extraction methods based on statistics and entropy theory are utilized to discriminate deoxyribonucleic acid chromatograms whose signals are almost impossible to distinguish visually. The extracted features are used as the input feature set for Support Vector Machine (SVM) classifiers with different kernel functions. The proposed framework is applied to a total of 200 hepatitis nucleic acid trace files comprising Hepatitis B Virus (HBV) and Hepatitis C Virus (HCV) samples. While statistical feature extraction methods represent the properties of hepatitis nucleic acid trace files with descriptive measures such as the mean, median, and standard deviation, entropy-based feature extraction methods, including permutation entropy and multiscale permutation entropy, quantify the complexity of these files. The results indicate that using statistical and entropy-based features produces exceptionally high performance in terms of accuracy (nearly 99%) in classifying HBV and HCV.
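The permutation entropy feature mentioned above is straightforward to sketch with NumPy; the embedding dimension, delay, and test sequences below are illustrative defaults.

```python
import math
from collections import Counter

import numpy as np

def permutation_entropy(x, m=3, delay=1):
    """Normalized permutation entropy (Bandt-Pompe): the Shannon entropy of
    ordinal patterns of length m, scaled to [0, 1]."""
    x = np.asarray(x, dtype=float)
    n_vectors = len(x) - (m - 1) * delay
    patterns = Counter(
        tuple(np.argsort(x[i:i + (m - 1) * delay + 1:delay]))
        for i in range(n_vectors)
    )
    p = np.array(list(patterns.values()), dtype=float)
    p /= p.sum()
    h = -(p * np.log(p)).sum()
    return h / math.log(math.factorial(m))

rng = np.random.default_rng(5)
pe_trend = permutation_entropy(np.arange(100.0))        # one ordinal pattern
pe_noise = permutation_entropy(rng.standard_normal(1000))
```

A monotone trace produces a single ordinal pattern (entropy 0), while an irregular trace approaches 1; the multiscale variant applies the same measure to coarse-grained versions of the signal.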

