Design of Text Categorization System Based on SVM

2012 ◽  
Vol 532-533 ◽  
pp. 1191-1195 ◽  
Author(s):  
Zhen Yan Liu ◽  
Wei Ping Wang ◽  
Yong Wang

This paper introduces the design of a text categorization system based on the Support Vector Machine (SVM). It analyzes the high-dimensional character of text data, which is the reason SVM is well suited to text categorization. The system is constructed according to its data flow and consists of three subsystems: text representation, classifier training, and text classification. Classifier training is the core of the system, but text representation directly influences the accuracy of the classifier and the performance of the system as a whole. The text feature vector space can be built with different kinds of feature selection and feature extraction methods, and no research indicates which one is best, so several feature selection and feature extraction methods are implemented in this system. For a specific classification task, every feature selection and feature extraction method is tested, and the best-performing set of methods is adopted.
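
The pipeline described above, i.e. text representation followed by feature selection and SVM training, can be sketched as follows. This is a minimal illustration assuming scikit-learn; the TF-IDF vectorizer, the chi-square selector, and the linear SVM stand in for whichever representation, selection, and training methods the system actually tests.

```python
# A minimal sketch of a text-categorization pipeline (assumed components,
# not the paper's exact implementation): TF-IDF representation,
# chi-square feature selection, and a linear SVM classifier.
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.svm import LinearSVC

pipeline = Pipeline([
    ("represent", TfidfVectorizer()),          # text representation subsystem
    ("select", SelectKBest(chi2, k=2000)),     # feature selection step
    ("train", LinearSVC()),                    # classifier training subsystem
])

# docs: list of raw text strings, labels: list of category labels
# pipeline.fit(docs, labels)
# predictions = pipeline.predict(new_docs)
```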

Sensors ◽  
2020 ◽  
Vol 20 (17) ◽  
pp. 4749
Author(s):  
Shaorong Zhang ◽  
Zhibin Zhu ◽  
Benxin Zhang ◽  
Bao Feng ◽  
Tianyou Yu ◽  
...  

The common spatial pattern (CSP) is a very effective feature extraction method in motor imagery-based brain-computer interfaces (BCI), but its performance depends on the selection of the optimal frequency band. Although many works have been proposed to improve CSP, most of them suffer from large computational cost and long feature extraction time. To this end, three new feature extraction methods based on CSP and a new feature selection method based on non-convex log regularization are proposed in this paper. First, EEG signals are spatially filtered by CSP, and three new feature extraction methods, called CSP-Wavelet, CSP-WPD, and CSP-FB, are applied. For CSP-Wavelet and CSP-WPD, the discrete wavelet transform (DWT) or wavelet packet decomposition (WPD) is used to decompose the spatially filtered signals, and the energy and standard deviation of the wavelet coefficients are extracted as features. For CSP-FB, the spatially filtered signals are filtered into multiple bands by a filter bank (FB), and the logarithm of the variance of each band is extracted as a feature. Second, a sparse optimization method regularized with a non-convex log function, which we call LOG, is proposed for feature selection, and an optimization algorithm for LOG is given. Finally, ensemble learning is used for secondary feature selection and construction of the classification model. Combining the feature extraction and feature selection methods yields three new EEG decoding methods: CSP-Wavelet+LOG, CSP-WPD+LOG, and CSP-FB+LOG. Four public motor imagery datasets are used to verify the performance of the proposed methods. Compared to existing methods, the proposed methods achieved the highest average classification accuracies of 88.86%, 83.40%, 81.53%, and 80.83% on datasets 1–4, respectively, and CSP-FB has the shortest feature extraction time. The experimental results show that the proposed methods can effectively improve classification accuracy and reduce feature extraction time. Considering both classification accuracy and feature extraction time, CSP-FB+LOG performs best and can be used in real-time BCI systems.
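
As an illustration of the CSP-FB feature extraction step described above, the sketch below band-pass filters an already spatially filtered signal with a small filter bank and takes the log-variance of each band. The band edges, filter order, and sampling rate are assumptions for the example, not the paper's settings.

```python
# Sketch of filter-bank log-variance features (CSP-FB style), assuming the
# signal has already been spatially filtered by CSP. Band edges, filter
# order, and sampling rate are illustrative assumptions.
import numpy as np
from scipy.signal import butter, filtfilt

def filter_bank_log_var(spatial_signal, fs=250,
                        bands=((4, 8), (8, 12), (12, 16), (16, 24), (24, 32))):
    """spatial_signal: 1-D array, one CSP-filtered channel over time."""
    features = []
    for low, high in bands:
        b, a = butter(4, [low / (fs / 2), high / (fs / 2)], btype="band")
        sub = filtfilt(b, a, spatial_signal)      # isolate one frequency sub-band
        features.append(np.log(np.var(sub)))      # log band power (variance) feature
    return np.array(features)

# Example: one simulated CSP-filtered channel, 4 s at 250 Hz
rng = np.random.default_rng(0)
x = rng.standard_normal(1000)
print(filter_bank_log_var(x))
```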


2014 ◽  
Vol 599-601 ◽  
pp. 1824-1828
Author(s):  
Juan Wang ◽  
Zhi Xun Zhang ◽  
Yong Dong Wang

Feature extraction is a key point of text categorization [1]. The accuracy of extraction directly affects the accuracy of text classification. This paper introduces and compares four commonly used text feature extraction methods: IG (information gain), MI (mutual information), CHI (the chi-square statistic), and DF (document frequency), and proposes an improved method based on CHI. Experimental results show that the proposed method can improve the accuracy of text categorization.
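
For reference, the sketch below computes the CHI (chi-square) score of a single term for a single category from its document counts, using the usual 2x2 contingency quantities from text feature selection. The variable names are illustrative, not taken from the paper.

```python
# Chi-square (CHI) score of a term t for a category c, computed from a
# 2x2 document contingency table. Variable names are illustrative.
def chi_square(A, B, C, D):
    """
    A: docs in c containing t        B: docs not in c containing t
    C: docs in c without t           D: docs not in c without t
    """
    N = A + B + C + D
    numerator = N * (A * D - B * C) ** 2
    denominator = (A + C) * (B + D) * (A + B) * (C + D)
    return numerator / denominator if denominator else 0.0

# Example: a term appearing in 40 of 100 docs of category c
# and in 10 of 900 documents outside c
print(chi_square(A=40, B=10, C=60, D=890))
```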


Sensors ◽  
2020 ◽  
Vol 20 (6) ◽  
pp. 1790
Author(s):  
Zi Zhang ◽  
Hong Pan ◽  
Xingyu Wang ◽  
Zhibin Lin

Lamb wave approaches have been accepted as efficient non-destructive evaluation techniques in structural health monitoring for identifying damage in different states. Despite significant efforts in the signal processing of Lamb waves, physics-based prediction remains a big challenge due to the complex nature of the Lamb wave as it propagates, scatters, and disperses. In recent years, machine learning has created transformative opportunities for accelerating knowledge discovery and accurately disseminating information where conventional Lamb wave approaches cannot work. Therefore, a learning framework was proposed with a workflow running from dataset generation, through sensitive feature extraction, to the prediction model for Lamb-wave-based damage detection. A total of 17 damage states with different damage types, sizes, and orientations were designed to train the feature extraction and sensitive feature selection. A machine learning method, the support vector machine (SVM), was employed as the learning model, and a grid search (GS) technique was adopted to optimize the parameters of the SVM model. The results show that the machine learning-enriched Lamb-wave-based damage detection method is an efficient and accurate way to identify damage severity and orientation. The results also demonstrated that features generated from different domains had differing levels of sensitivity to damage, while the feature selection method revealed that time-frequency features and wavelet coefficients exhibited the highest damage sensitivity. These features were also much more robust to noise, although classification accuracy dropped sharply as the noise level increased.
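
A minimal sketch of the SVM-with-grid-search step described above, assuming scikit-learn; the RBF kernel and the parameter grid are illustrative choices, not the study's actual settings.

```python
# Sketch of SVM parameter tuning with grid search (GS), assuming scikit-learn.
# The kernel choice and parameter grid are illustrative assumptions.
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

param_grid = {
    "C": [0.1, 1, 10, 100],          # regularization strength
    "gamma": [1e-3, 1e-2, 1e-1, 1],  # RBF kernel width
}
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5)

# X: damage-sensitive feature vectors, y: damage-state labels
# search.fit(X, y)
# print(search.best_params_, search.best_score_)
```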


2014 ◽  
Vol 2014 ◽  
pp. 1-17 ◽  
Author(s):  
Jieming Yang ◽  
Zhaoyang Qu ◽  
Zhiying Liu

The filtering feature-selection algorithm is an important approach to dimensionality reduction in the field of text categorization. Most filtering feature-selection algorithms evaluate the significance of a feature for a category on the assumption of a balanced dataset and do not consider the imbalance of the dataset. In this paper, a new scheme is proposed that can weaken the adverse effect caused by the imbalance of the corpus. We evaluated the improved versions of nine well-known feature-selection methods (Information Gain, Chi statistic, Document Frequency, Orthogonal Centroid Feature Selection, DIA association factor, Comprehensive Measurement Feature Selection, Deviation from Poisson Feature Selection, improved Gini index, and Mutual Information) using naïve Bayes and support vector machines on three benchmark document collections (20-Newsgroups, Reuters-21578, and WebKB). The experimental results show that the improved scheme can significantly enhance the performance of the feature-selection methods.


2020 ◽  
Vol 54 (5) ◽  
pp. 585-601
Author(s):  
N. Venkata Sailaja ◽  
L. Padmasree ◽  
N. Mangathayaru

Purpose: Text mining has been used for various knowledge discovery-based applications, and thus a lot of research has been contributed towards it. The latest trend in text mining research is adopting incremental learning, as it is economical when dealing with large volumes of information.

Design/methodology/approach: The primary intention of this research is to design and develop a technique for incremental text categorization using an optimized Support Vector Neural Network (SVNN). The proposed technique involves four major steps: pre-processing, feature extraction, feature selection, and classification. Initially, the data is pre-processed with stop-word removal and stemming. Then, feature extraction is done by extracting semantic word-based features and Term Frequency-Inverse Document Frequency (TF-IDF). From the extracted features, the important features are selected using the Bhattacharya distance measure, and these features are given as input to the proposed classifier. The proposed classifier performs incremental learning using SVNN, wherein the weights are bounded within a limit using rough set theory. Moreover, the Moth Search (MS) algorithm is used for the optimal selection of weights in the SVNN. Thus, the proposed classifier, named Rough set MS-SVNN, performs text categorization on the incremental data given as input.

Findings: For the experimentation, the 20 Newsgroups dataset and the Reuters dataset are used. Simulation results indicate that the proposed Rough set based MS-SVNN achieved 0.7743, 0.7774, and 0.7745 for precision, recall, and F-measure, respectively.

Originality/value: In this paper, an online incremental learner is developed for text categorization. The text categorization is done by developing the Rough set MS-SVNN classifier, which classifies incoming texts based on the boundary condition evaluated by rough set theory and the optimal weights from MS. The proposed online text categorization scheme has the basic steps of pre-processing, feature extraction, feature selection, and classification. Pre-processing is carried out to identify the unique words in the dataset, and features such as semantic word-based features and TF-IDF are obtained from the keyword set. Feature selection is done by setting a minimum Bhattacharya distance measure, and the selected features are provided to the proposed Rough set MS-SVNN for classification.
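
To make the feature selection step concrete, the sketch below scores each feature by a Bhattacharyya-style distance between its class-conditional distributions, assuming per-class Gaussian statistics, and keeps the features above a minimum distance. This is an illustrative reading of the described step, not the paper's implementation.

```python
# Illustrative per-feature Bhattacharyya distance between two classes,
# assuming Gaussian class-conditional distributions. Features whose distance
# exceeds a minimum threshold are kept (a sketch of the selection step).
import numpy as np

def bhattacharyya_distance(x_pos, x_neg):
    """x_pos, x_neg: 1-D arrays of one feature's values in each class."""
    m1, m2 = x_pos.mean(), x_neg.mean()
    v1, v2 = x_pos.var() + 1e-12, x_neg.var() + 1e-12
    return (0.25 * np.log(0.25 * (v1 / v2 + v2 / v1 + 2))
            + 0.25 * (m1 - m2) ** 2 / (v1 + v2))

def select_features(X, y, min_distance=0.05):
    """X: dense (n_samples, n_features) feature matrix, y: binary labels."""
    keep = []
    for j in range(X.shape[1]):
        d = bhattacharyya_distance(X[y == 1, j], X[y == 0, j])
        if d >= min_distance:
            keep.append(j)
    return keep
```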


2017 ◽  
Vol 4 (1) ◽  
pp. 12-17
Author(s):  
Ahmad Firdaus

The classification of hoax news, i.e. news containing incorrect information, is one application of text categorization. Like machine text-categorization applications in general, this system consists of pre-processing and execution of classification models. In this study, experiments were conducted to select the best technique for each sub-process, using 1200 hoax articles and 600 non-hoax articles collected manually. The research compared the pre-processing stages of stop-word removal and stemming, and the results show that the Decision Tree algorithm achieved an accuracy of 100%, while Naive Bayes gave a more stable level of accuracy across the datasets used for all candidates. With Information Gain, TF-IDF, and GGA-based feature selection applied to the Naive Bayes, Support Vector Machine, and Decision Tree algorithms, no significant percentage change occurred for any candidate, but after using GGA (Optimize Generation) feature selection the accuracy increased. Comparing the classification algorithms (Naive Bayes, Decision Tree, and Support Vector Machine) combined with the GGA feature selection method, the best result was produced by GGA + Decision Tree on candidate 2 (Paslon 2), at 100%, and the lowest accuracy by Information Gain + Decision Tree on candidate 3, at 36.67%. Overall, accuracy improved for all algorithms after using feature selection, and Naive Bayes gave the most stable accuracy across the datasets used for all candidates.


2019 ◽  
Vol 9 (22) ◽  
pp. 4901 ◽  
Author(s):  
Lei Fu ◽  
Tiantian Zhu ◽  
Guobing Pan ◽  
Sihan Chen ◽  
Qi Zhong ◽  
...  

Power quality disturbances (PQDs) have a large negative impact on electric power systems as the use of sensitive electrical loads increases. This paper presents a novel hybrid algorithm for PQD detection and classification. The proposed method is constructed using the following main steps: computer simulation of PQD signals, signal decomposition, feature extraction, heuristic feature selection, and classification. First, different types of PQD signals are generated by computer simulation. Second, variational mode decomposition (VMD) is used to decompose the signals into several intrinsic mode functions (IMFs). Third, statistical features are calculated over the time series of each IMF. Next, a two-stage feature selection method is introduced to eliminate redundant features by utilizing permutation entropy and the Fisher score algorithm. Finally, the selected feature vectors are fed into a multiclass support vector machine (SVM) model to classify the PQDs. Several experimental investigations are performed to verify the performance and effectiveness of the proposed method in a noisy environment. Moreover, the results demonstrate that the start and end points of a PQD can be efficiently detected.
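
As a sketch of the feature extraction and ranking described above, the code below computes a few statistical features for each IMF and ranks them with the Fisher score; the particular statistics and the two-class Fisher score form are illustrative assumptions, not the paper's exact feature set.

```python
# Sketch: statistical features per IMF plus Fisher-score ranking.
# The chosen statistics and the two-class Fisher score form are
# illustrative assumptions.
import numpy as np

def imf_features(imfs):
    """imfs: (n_imfs, n_samples) array from VMD. Returns one feature vector."""
    feats = []
    for imf in imfs:
        feats.extend([imf.mean(), imf.std(), np.abs(imf).max(),
                      np.sqrt(np.mean(imf ** 2))])   # mean, std, peak, RMS
    return np.array(feats)

def fisher_score(X, y):
    """X: (n_samples, n_features), y: binary labels. Higher = more discriminative."""
    X0, X1 = X[y == 0], X[y == 1]
    numerator = (X0.mean(axis=0) - X1.mean(axis=0)) ** 2
    denominator = X0.var(axis=0) + X1.var(axis=0) + 1e-12
    return numerator / denominator
```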


2022 ◽  
Vol 2022 ◽  
pp. 1-12
Author(s):  
Yuan Tang ◽  
Zining Zhao ◽  
Shaorong Zhang ◽  
Zhi Li ◽  
Yun Mo ◽  
...  

Feature extraction and selection are important parts of motor imagery electroencephalogram (EEG) decoding and have always been a focus and difficulty of brain-computer interface (BCI) research. In order to improve the accuracy of EEG decoding and reduce model training time, new feature extraction and selection methods are proposed in this paper. First, a new spatial-frequency feature extraction method is proposed: the original EEG signal is preprocessed, the common spatial pattern (CSP) is used for spatial filtering and dimensionality reduction, and a filter bank is then used to decompose the spatially filtered signals into multiple frequency subbands, from which the logarithmic band power feature of each subband is extracted. Second, to select subject-specific spatial-frequency features, a hybrid feature selection method based on the Fisher score and the support vector machine (SVM) is proposed: the Fisher score of each feature is calculated, a series of threshold parameters is set to generate different feature subsets, and SVM with cross-validation is used to select the optimal feature subset. The effectiveness of the proposed method is validated on two sets of publicly available BCI competition data and a set of self-collected data. The total average accuracy across the three datasets achieved by the proposed method is 82.39%, which is 2.99% higher than that of the CSP method. The experimental results show that the proposed method classifies better than existing methods while also holding a clear advantage in feature extraction and feature selection time.
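
A sketch of the hybrid Fisher-score/SVM selection loop described above, assuming scikit-learn: each threshold keeps the features whose Fisher scores exceed it, and cross-validated SVM accuracy picks the winning subset. The threshold grid and SVM settings are illustrative.

```python
# Sketch of threshold-swept Fisher-score + cross-validated SVM feature
# selection. Threshold grid and SVM settings are illustrative assumptions.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def select_subset(X, y, scores, thresholds=np.linspace(0.0, 1.0, 11)):
    """X: feature matrix, y: labels, scores: per-feature Fisher scores scaled to [0, 1]."""
    best_mask, best_acc = None, -np.inf
    for t in thresholds:
        mask = scores >= t                      # candidate feature subset for this threshold
        if not mask.any():
            continue
        acc = cross_val_score(SVC(kernel="rbf", C=1.0), X[:, mask], y, cv=5).mean()
        if acc > best_acc:
            best_mask, best_acc = mask, acc
    return best_mask, best_acc
```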


2006 ◽  
Vol 17 (02) ◽  
pp. 197-212 ◽  
Author(s):  
HANGUANG XIAO ◽  
CONGZHONG CAI ◽  
YUZONG CHEN

It is a difficult and important task to classify types of military vehicles using the acoustic and seismic signals they generate. To improve classification accuracy and reduce computing time and memory size, we investigated different pre-processing technologies and feature extraction and selection methods. The Short-Time Fourier Transform (STFT) was employed for feature extraction, and Genetic Algorithms (GA) and Principal Component Analysis (PCA) were used for further feature selection and extraction. A new feature vector construction method was proposed that unites PCA with another feature selection method. A K-Nearest Neighbor (KNN) classifier and Support Vector Machines (SVM) were used for classification. The experimental results showed that the accuracies of KNN and SVM were clearly affected by the window size used to frame the time series of the acoustic and seismic signals, and the classification results indicated that the performance of SVM was superior to that of KNN. A comparison of the four feature selection and extraction methods showed that the proposed method is a simple, fast, and reliable technique for feature selection and helps the SVM classifier achieve better results than using PCA, GA, or their combination alone.
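
A minimal sketch of the STFT-framing, PCA-reduction, and SVM-classification chain described above, assuming SciPy and scikit-learn; the sampling rate, window size, and component count are illustrative, since window size is precisely the parameter whose effect the study examines.

```python
# Sketch of STFT feature extraction followed by PCA and an SVM classifier.
# Sampling rate, window size (nperseg), and component count are illustrative.
import numpy as np
from scipy.signal import stft
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

def stft_features(signal, fs=1000, nperseg=256):
    """Average spectral magnitude per frequency bin as a fixed-length vector."""
    _, _, Z = stft(signal, fs=fs, nperseg=nperseg)
    return np.abs(Z).mean(axis=1)

# signals: list of 1-D acoustic/seismic time series, labels: vehicle types
# X = np.array([stft_features(s) for s in signals])
# model = make_pipeline(PCA(n_components=20), SVC(kernel="rbf"))
# model.fit(X, labels)
```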


2021 ◽  
Vol 25 (1) ◽  
pp. 21-34
Author(s):  
Rafael B. Pereira ◽  
Alexandre Plastino ◽  
Bianca Zadrozny ◽  
Luiz H.C. Merschmann

In many important application domains, such as text categorization, biomolecular analysis, scene or video classification, and medical diagnosis, instances are naturally associated with more than one class label, giving rise to multi-label classification problems. This has led, in recent years, to a substantial amount of research on multi-label classification. More specifically, feature selection methods have been developed to identify relevant and informative features for multi-label classification. This work presents a new feature selection method based on the lazy feature selection paradigm and specific to the multi-label context. Experimental results show that the proposed technique is competitive with the multi-label feature selection techniques currently used in the literature and is clearly more scalable in scenarios with increasing amounts of data.

