optimal hyperplane
Recently Published Documents

TOTAL DOCUMENTS: 10 (FIVE YEARS: 1)
H-INDEX: 1 (FIVE YEARS: 0)

2020 ◽  
Vol 17 (4) ◽  
pp. 1255
Author(s):  
Ghadeer Jasim Mahdi

Support vector machines (SVMs) are supervised learning models that analyze data for classification or regression. For classification, an SVM is widely used by selecting an optimal hyperplane that separates the two classes. SVM achieves very good accuracy and is extremely robust compared with other classification methods such as logistic regression, random forest, k-nearest neighbors, and the naïve Bayes model. However, working with large datasets can cause problems such as long training times and inefficient results. In this paper, the SVM is modified by using a stochastic gradient descent process. The modified method, stochastic gradient descent SVM (SGD-SVM), is validated using two simulated datasets. Since the classification of different cancer types is important for cancer diagnosis and drug discovery, SGD-SVM is applied to classify the most common leukemia cancer type dataset. The results obtained using SGD-SVM are more accurate than those of many other studies that used the same leukemia datasets.
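As a concrete illustration of the idea, a linear SVM can be trained with stochastic (sub)gradient descent on the hinge loss. The sketch below is a minimal pure-Python version on a toy dataset, not the paper's actual SGD-SVM implementation; the step size, regularization strength, and epoch count are assumed values:

```python
import random

def train_sgd_svm(X, y, lam=0.01, eta=0.1, epochs=200):
    """Linear SVM via stochastic subgradient descent on the hinge loss
    with an L2 penalty. Labels must be +1 or -1. Hyperparameters here
    are illustrative, not taken from the paper."""
    random.seed(0)
    d = len(X[0])
    w, b = [0.0] * d, 0.0
    idx = list(range(len(X)))
    for _ in range(epochs):
        random.shuffle(idx)  # "stochastic": visit samples in random order
        for i in idx:
            margin = y[i] * (sum(wj * xj for wj, xj in zip(w, X[i])) + b)
            # L2 shrinkage is applied on every step
            w = [(1 - eta * lam) * wj for wj in w]
            # hinge subgradient step only when the margin is violated
            if margin < 1:
                w = [wj + eta * y[i] * xj for wj, xj in zip(w, X[i])]
                b += eta * y[i]
    return w, b

def predict(w, b, x):
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b >= 0 else -1

# toy linearly separable data
X = [[1.0, 2.0], [2.0, 3.0], [3.0, 3.0], [-1.0, -2.0], [-2.0, -1.0], [-3.0, -2.0]]
y = [1, 1, 1, -1, -1, -1]
w, b = train_sgd_svm(X, y)
```

Because each update touches only one sample, the cost per step is independent of the dataset size, which is the point of using SGD on large datasets.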


Author(s):  
Gharib M Subhi ◽  
Azeddine Messikh

Machine learning plays a key role in many applications such as data mining and image recognition. Classification is one subcategory of machine learning. In this paper we propose a simple quantum circuit based on the nearest-mean classifier to classify handwritten characters. Our circuit is a simplified version of the quantum support vector machine [Phys. Rev. Lett. 114, 140504 (2015)], which uses a quantum matrix-inversion algorithm to find the optimal hyperplane that separates two different classes. In our case the hyperplane is found using projections and rotations on the Bloch sphere.
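Classically, the nearest-mean classifier underlying the circuit assigns a sample to the class whose centroid is closest; the quantum circuit realises that comparison with projections and rotations on the Bloch sphere rather than explicit distances. A classical sketch of the decision rule on toy data (not the quantum implementation):

```python
import math

def mean_vector(vectors):
    """Centroid of a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[j] for v in vectors) / n for j in range(len(vectors[0]))]

def nearest_mean_classify(x, class_a, class_b):
    """Assign x to whichever class centroid is closer in Euclidean
    distance; ties go to class A."""
    ma, mb = mean_vector(class_a), mean_vector(class_b)
    return "A" if math.dist(x, ma) <= math.dist(x, mb) else "B"

# toy clusters standing in for two character classes
A = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]
B = [[5.0, 5.0], [6.0, 5.0], [5.0, 6.0]]
```

The implied decision boundary is the hyperplane equidistant from the two centroids, which is what the circuit encodes.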


2013 ◽  
Vol 33 (9) ◽  
pp. 2553-2556
Author(s):  
Ting YANG ◽  
Xiangru MENG ◽  
Xiangxi WEN ◽  
Wen WU

2013 ◽  
Vol 333-335 ◽  
pp. 1430-1434
Author(s):  
Lin Fang Hu ◽  
Lei Qiao ◽  
Min De Huang

A feature selection algorithm based on the optimal hyperplane of an SVM is proposed. Using the algorithm, the contribution of each feature in the candidate feature set to the classification is tested, and the feature subset with the best classification ability is then selected. The algorithm is applied to the recognition of storm monomers in weather forecasting, and experimental data show that the classification ability of the features can be effectively evaluated; the optimal feature subset is selected to enhance the performance of the classifier.
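One simple way to read feature contributions off a trained hyperplane w·x + b = 0 is to rank features by |w_j|: a larger weight magnitude means that feature moves points further across the decision boundary. Whether this matches the paper's exact contribution test is not stated, so the sketch below, with hypothetical weather-feature names and weights, is only illustrative:

```python
def rank_features_by_hyperplane(w, names):
    """Rank features by the magnitude of their weight in a linear SVM's
    separating hyperplane w.x + b = 0 (largest first)."""
    order = sorted(range(len(w)), key=lambda j: abs(w[j]), reverse=True)
    return [names[j] for j in order]

# hypothetical weights from an already-trained linear SVM
w = [0.05, 1.8, -0.9, 0.2]
names = ["humidity", "reflectivity", "wind_shear", "pressure"]
ranking = rank_features_by_hyperplane(w, names)
```

Selecting the top-k features of the ranking then gives a candidate subset whose classification ability can be re-checked by retraining.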


Author(s):  
Hiroshi Murata ◽  
Takashi Onoda ◽  
Seiji Yamada

Support Vector Machines (SVMs) were applied to interactive document retrieval that uses active learning. In such a retrieval system, the degree of relevance is evaluated using the signed distance from the optimal hyperplane. It is not clear, however, how the signed distance in SVMs relates to the characteristics of the vector space model. We therefore formulated the degree of relevance using the signed distance in SVMs and compared it analytically with a conventional Rocchio-based method. Although vector normalization has been used as preprocessing for document retrieval, few studies have explained why it is effective. Based on our comparative analysis, we theoretically show the effectiveness of normalizing document vectors in SVM-based interactive document retrieval. We then propose a cosine kernel that is suitable for SVM-based interactive document retrieval. The effectiveness of the method was compared experimentally with conventional relevance feedback for Boolean, Term Frequency, and Term Frequency-Inverse Document Frequency representations of document vectors. Experimental results on a Text REtrieval Conference dataset showed that the cosine kernel is effective for all document representations, especially the Term Frequency representation.
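The cosine kernel is equivalent to a linear kernel applied to length-normalized document vectors, which is why normalization matters: document length stops influencing the signed distance. A minimal sketch with hypothetical term-frequency vectors:

```python
import math

def cosine_kernel(x, y):
    """k(x, y) = x.y / (|x| |y|): the linear kernel evaluated on
    length-normalized vectors."""
    dot = sum(a * b for a, b in zip(x, y))
    nx = math.sqrt(sum(a * a for a in x))
    ny = math.sqrt(sum(b * b for b in y))
    return dot / (nx * ny)

# term-frequency vectors for two documents (hypothetical counts):
# d2 points in the same direction as d1 but is twice as long
d1 = [2.0, 0.0, 1.0]
d2 = [4.0, 0.0, 2.0]
```

Here `cosine_kernel(d1, d2)` is 1 despite the length difference, whereas a plain dot product would score the longer document higher.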


2010 ◽  
Vol 07 (04) ◽  
pp. 347-356
Author(s):  
E. SIVASANKAR ◽  
R. S. RAJESH

In this paper, Principal Component Analysis is used for feature extraction, and a statistical-learning-based Support Vector Machine is designed for the functional classification of clinical data. Appendicitis data collected from BHEL Hospital, Trichy, are classified into three classes. Feature extraction transforms the data from the high-dimensional space to a space of fewer dimensions. The classification is done by constructing an optimal hyperplane that separates the members from the non-members of a class. For linearly non-separable data, kernel functions are used to map the data to a higher-dimensional space, where the optimal hyperplane is found. This paper works with SVMs based on radial basis and polynomial kernels, and their performances are compared.
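The two kernel families compared in the paper can be written directly; the `gamma`, `degree`, and constant-term values below are assumed defaults, not the paper's settings:

```python
import math

def rbf_kernel(x, y, gamma=0.5):
    """Radial basis function kernel: exp(-gamma * |x - y|^2).
    Equals 1 when x == y and decays with squared distance."""
    sq = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-gamma * sq)

def poly_kernel(x, y, degree=3, c=1.0):
    """Polynomial kernel: (x.y + c) ** degree."""
    return (sum(a * b for a, b in zip(x, y)) + c) ** degree
```

Either function can be plugged into a kernelized SVM in place of the plain dot product, which implicitly moves the data to a higher-dimensional space where a separating hyperplane may exist.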


2010 ◽  
Vol 2010 ◽  
pp. 1-14 ◽  
Author(s):  
Shang Zhaowei ◽  
Zhang Lingfeng ◽  
Ma Shangjun ◽  
Fang Bin ◽  
Zhang Taiping

This paper discusses the prediction of time series with missing data. A novel forecast model is proposed based on max-margin classification of data with absent features. Modeling an incomplete time series is treated as the classification of data with absent features, and the optimal hyperplane of the classification is used to predict future values. Compared with traditional methods for predicting incomplete time series, our method solves the problem directly rather than filling in the missing data in advance. In addition, we introduce an imputation method to estimate the missing data in the history series. Experimental results validate the effectiveness of our model in both prediction and imputation.
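As a point of contrast with the direct approach, a traditional pipeline first fills the gaps in the history series and only then fits a predictor. A minimal linear-interpolation imputer, one common choice and not the paper's estimator, looks like:

```python
def impute_linear(series):
    """Fill None gaps by linear interpolation between the nearest
    observed neighbours; gaps at either end copy the nearest value."""
    out = list(series)
    n = len(out)
    for i, v in enumerate(out):
        if v is None:
            lo = i - 1
            while lo >= 0 and out[lo] is None:
                lo -= 1
            hi = i + 1
            while hi < n and out[hi] is None:
                hi += 1
            if lo >= 0 and hi < n:
                frac = (i - lo) / (hi - lo)
                out[i] = out[lo] + frac * (out[hi] - out[lo])
            elif lo >= 0:          # gap at the right edge
                out[i] = out[lo]
            elif hi < n:           # gap at the left edge
                out[i] = out[hi]
    return out

filled = impute_linear([1.0, None, None, 4.0])
```

The direct max-margin method avoids this step entirely by handling the absent features inside the classifier, so imputation errors cannot propagate into the forecast.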

