Text classification suffers from the high dimensionality of its feature space, e.g. word-frequency vectors. To overcome this problem, this paper proposes a feature-selection method based on SVM attribute evaluation, which ranks words by their statistical importance. Experiments show that determining word importance can significantly speed up the classification algorithm and reduce its resource usage. The proposed method was evaluated by comparing the classification performance of Decision Tree, Naïve Bayes, and Support Vector Machine classifiers. Support Vector Machine achieved the best result, with an F-measure of 93.6%. The results also show that the proposed feature selection significantly reduces the dimensionality of the data.
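The pipeline the abstract describes, SVM-based feature selection followed by a comparison of the three classifiers, can be sketched as follows. This is a minimal illustration using scikit-learn on synthetic data standing in for word-frequency vectors; the dataset, the number of selected features, and all parameters are assumptions, not the paper's actual setup.

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectFromModel
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import f1_score

# Synthetic high-dimensional data standing in for word-frequency vectors
# (illustrative only; the paper's corpus is not reproduced here).
X, y = make_classification(n_samples=500, n_features=1000,
                           n_informative=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Rank features by the weights of a linear SVM and keep only the
# strongest ones, reducing dimensionality before classification.
selector = SelectFromModel(LinearSVC(C=0.1, dual=False, random_state=0),
                           max_features=50)
selector.fit(X_train, y_train)
X_train_sel = selector.transform(X_train)
X_test_sel = selector.transform(X_test)

# Compare the three classifiers on the reduced feature set by F-measure.
for name, clf in [("Decision Tree", DecisionTreeClassifier(random_state=0)),
                  ("Naive Bayes", GaussianNB()),
                  ("SVM", LinearSVC(dual=False, random_state=0))]:
    clf.fit(X_train_sel, y_train)
    print(name, round(f1_score(y_test, clf.predict(X_test_sel)), 3))
```

With the SVM-weight filter, the classifiers operate on at most 50 of the original 1000 features, mirroring the dimensionality reduction the abstract reports.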