Optimal hyperplane classifier based on entropy number bound

Author(s):  
K. Tsuda
2010 ◽  
Vol 2010 ◽  
pp. 1-14 ◽  
Author(s):  
Shang Zhaowei ◽  
Zhang Lingfeng ◽  
Ma Shangjun ◽  
Fang Bin ◽  
Zhang Taiping

This paper addresses the prediction of time series with missing data. A novel forecasting model is proposed based on max-margin classification of data with absent features: the problem of modeling an incomplete time series is cast as classification of data with absent features, and the optimal separating hyperplane is used to predict future values. Unlike the traditional approach to forecasting incomplete time series, our method solves the problem directly rather than filling in the missing data in advance. In addition, we introduce an imputation method to estimate the missing values in the historical series. Experimental results validate the effectiveness of our model in both prediction and imputation.
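The abstract's original method (max-margin classification with absent features) is not spelled out here, so the following is only a minimal baseline sketch of the general idea: turn next-step forecasting of a series with missing values into a binary classification problem over sliding windows, and train a linear max-margin classifier. The imputation step stands in for the paper's direct handling of absent features, and all names (`series`, `w`, the scikit-learn pipeline) are illustrative assumptions.

```python
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
series = np.sin(np.linspace(0, 20, 300)) + 0.1 * rng.standard_normal(300)
# Knock out ~10% of observations to simulate missing data.
series[rng.random(300) < 0.1] = np.nan

# Sliding-window features: classify whether the next value rises or falls.
w = 8
X = np.array([series[i:i + w] for i in range(len(series) - w - 1)])
y = (series[w + 1:] > series[w:-1]).astype(int)
ok = ~np.isnan(series[w + 1:]) & ~np.isnan(series[w:-1])  # keep defined labels only
X, y = X[ok], y[ok]

# Baseline: impute absent window entries, then fit a linear max-margin classifier.
clf = make_pipeline(SimpleImputer(strategy="mean"), LinearSVC(dual=False))
clf.fit(X[:200], y[:200])
acc = clf.score(X[200:], y[200:])
```

The paper's contribution is precisely to avoid the `SimpleImputer` step for prediction; this pipeline only illustrates the classification framing of the forecasting problem.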


Author(s):  
Hiroshi Murata ◽  
Takashi Onoda ◽  
Seiji Yamada

Support Vector Machines (SVMs) have been applied to interactive document retrieval that uses active learning. In such a retrieval system, the degree of relevance is evaluated by the signed distance from the optimal hyperplane. It is not clear, however, how this signed distance relates to the vector space model. We therefore formulated the degree of relevance in terms of the signed distance in SVMs and compared it analytically with a conventional Rocchio-based method. Although vector normalization has long been used as preprocessing for document retrieval, few studies have explained why it is effective. Based on our comparative analysis, we theoretically show the effectiveness of normalizing document vectors in SVM-based interactive document retrieval. We then propose a cosine kernel that is suitable for SVM-based interactive document retrieval. The effectiveness of the method was compared experimentally with conventional relevance feedback for Boolean, Term Frequency, and Term Frequency-Inverse Document Frequency representations of document vectors. Experimental results on a Text REtrieval Conference data set showed that the cosine kernel is effective for all document representations, especially the Term Frequency representation.
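A cosine kernel is equivalent to a linear kernel applied to L2-normalized document vectors, which is one way to read the abstract's claim about vector normalization. The sketch below, assuming scikit-learn and toy term-frequency vectors (the data and names are illustrative, not from the paper), trains an SVM with a custom cosine kernel and scores documents by their signed distance from the hyperplane.

```python
import numpy as np
from sklearn.preprocessing import normalize
from sklearn.svm import SVC

def cosine_kernel(A, B):
    # K(a, b) = a.b / (|a||b|): a linear kernel on L2-normalized vectors.
    return normalize(A) @ normalize(B).T

rng = np.random.default_rng(1)
# Toy term-frequency vectors for two "topics" of documents.
X = np.vstack([rng.poisson(3.0, (20, 50)),
               rng.poisson(1.0, (20, 50))]).astype(float)
y = np.array([1] * 20 + [0] * 20)

svm = SVC(kernel=cosine_kernel).fit(X, y)
# Signed distance from the optimal hyperplane acts as the relevance score.
scores = svm.decision_function(X)
```

Because the kernel normalizes internally, document length no longer dominates the relevance score, which is the intuition behind normalizing vectors in SVM-based retrieval.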


2013 ◽  
Vol 33 (9) ◽  
pp. 2553-2556
Author(s):  
Ting YANG ◽  
Xiangru MENG ◽  
Xiangxi WEN ◽  
Wen WU

2010 ◽  
Vol 07 (04) ◽  
pp. 347-356
Author(s):  
E. SIVASANKAR ◽  
R. S. RAJESH

In this paper, Principal Component Analysis (PCA) is used for feature extraction, and a Support Vector Machine based on statistical learning is designed for the classification of clinical data. Appendicitis data collected from BHEL Hospital, Trichy, are classified into three classes. Feature extraction transforms the data from a high-dimensional space into a space of fewer dimensions. Classification is performed by constructing an optimal hyperplane that separates the members of a class from the nonmembers. For linearly nonseparable data, kernel functions are used to map the data into a higher-dimensional space in which the optimal hyperplane is found. This paper evaluates SVMs based on radial basis and polynomial kernels and compares their performance.
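The PCA-then-SVM pipeline described above can be sketched with scikit-learn. The clinical appendicitis data are not available here, so the three-class Iris dataset stands in as an assumption; the kernel comparison (RBF vs. polynomial) mirrors the one in the abstract.

```python
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Stand-in three-class dataset (the paper uses clinical appendicitis data).
X, y = load_iris(return_X_y=True)

for kernel in ("rbf", "poly"):
    # Scale, reduce dimensionality with PCA, then classify with a kernel SVM.
    pipe = make_pipeline(StandardScaler(), PCA(n_components=2), SVC(kernel=kernel))
    acc = cross_val_score(pipe, X, y, cv=5).mean()
    print(f"{kernel}: {acc:.3f}")
```

Scaling before PCA matters: without it, features with larger variance dominate the principal components regardless of their discriminative value.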


Author(s):  
Gharib M Subhi ◽  
Azeddine Messikh

Machine learning plays a key role in many applications such as data mining and image recognition; classification is one subcategory of machine learning. In this paper we propose a simple quantum circuit based on the nearest mean classifier to classify handwritten characters. Our circuit is a simplification of the quantum support vector machine [Phys. Rev. Lett. 114, 140504 (2015)], which uses a quantum matrix-inversion algorithm to find the optimal hyperplane that separates two classes. In our case the hyperplane is found using projections and rotations on the Bloch sphere.
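The classical nearest mean classifier underlying the quantum circuit is simple enough to sketch directly: each class is represented by the mean of its training points, and the decision boundary between two means is the hyperplane bisecting the segment joining them. The code below is the classical counterpart only (the paper's Bloch-sphere implementation is not reproduced), and all names and data are illustrative.

```python
import numpy as np

def nearest_mean_fit(X, y):
    # One prototype per class: the mean of that class's training points.
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def nearest_mean_predict(means, X):
    # Assign each point to the class with the nearest prototype; the implied
    # boundary between two classes is the perpendicular bisector hyperplane.
    classes = list(means)
    d = np.stack([np.linalg.norm(X - means[c], axis=1) for c in classes])
    return np.array(classes)[d.argmin(axis=0)]

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 1, (30, 2)), rng.normal(4, 1, (30, 2))])
y = np.array([0] * 30 + [1] * 30)

means = nearest_mean_fit(X, y)
pred = nearest_mean_predict(means, X)
```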


2020 ◽  
Vol 17 (4) ◽  
pp. 1255
Author(s):  
Ghadeer Jasim Mahdi

Support vector machines (SVMs) are supervised learning models that analyze data for classification or regression. For classification, an SVM is widely used to select an optimal hyperplane that separates two classes. SVMs achieve very good accuracy and are extremely robust compared with other classification methods such as logistic regression, random forests, k-nearest neighbors, and the naïve Bayes model. However, working with large datasets causes problems such as long training times and inefficient results. In this paper, the SVM is modified by using a stochastic gradient descent procedure. The modified method, stochastic gradient descent SVM (SGD-SVM), is validated on two simulated datasets. Since the classification of different cancer types is important for cancer diagnosis and drug discovery, SGD-SVM is applied to classifying the most common leukemia cancer type dataset. The results obtained with SGD-SVM are more accurate than those of many studies that used the same leukemia datasets.
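Training a linear SVM by stochastic gradient descent amounts to minimizing the hinge loss with an L2 penalty one sample (or mini-batch) at a time, which is what makes it tractable on large datasets. A minimal sketch, assuming scikit-learn's `SGDClassifier` as the SGD machinery and synthetic data in place of the leukemia dataset:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for a large two-class dataset.
X, y = make_classification(n_samples=2000, n_features=50,
                           n_informative=10, random_state=0)

# hinge loss + L2 penalty = a linear SVM objective, optimized by SGD.
sgd_svm = make_pipeline(StandardScaler(),
                        SGDClassifier(loss="hinge", alpha=1e-4, random_state=0))
sgd_svm.fit(X, y)
acc = sgd_svm.score(X, y)
```

Each SGD epoch costs O(n) rather than the superlinear cost of solving the full quadratic program, which is the trade-off motivating SGD-SVM for large data.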


2019 ◽  
Vol 07 (03) ◽  
pp. 738-745
Author(s):  
Jin Chen ◽  
Wenjing Lu ◽  
Hanyue Xiao ◽  
Yanan Wang ◽  
Xin Tan

2019 ◽  
Vol 29 (1) ◽  
pp. 1246-1260 ◽  
Author(s):  
Ankita Bisht ◽  
Mohit Dua ◽  
Shelza Dua ◽  
Priyanka Jaroli

Abstract The paper presents an approach to encrypting color images using bit-level permutation and an alternate logistic map. The proposed method first segregates the color image into red, green, and blue channels, transposes the segregated channels from the pixel plane to the bit plane, and scrambles the bit-plane matrix using the Arnold cat map (ACM). Finally, the red, green, and blue channels of the scrambled image are confused and diffused by applying an alternate logistic map that uses a four-dimensional Lorenz system to generate a pseudorandom number sequence for the three channels. The parameters of the ACM are generated with the help of the Logistic-Sine map and the Logistic-Tent map, and the intensity values of scrambled pixels are altered by the Tent-Sine map. One-dimensional and two-dimensional logistic maps are used for the alternate logistic map implementation. The performance and security measures (histogram, correlation distribution, correlation coefficient, entropy, number of pixels change rate, and unified average changing intensity) are computed to show the potential of the proposed encryption technique.
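The diffusion stage of chaos-based image ciphers like this one typically XORs pixel data with a keystream generated by iterating a chaotic map. The sketch below is a heavily simplified illustration using only the one-dimensional logistic map (the paper's full scheme also uses the Arnold cat map, a Lorenz system, and combined Logistic-Sine/Tent maps); the parameter values and variable names are illustrative assumptions.

```python
import numpy as np

def logistic_keystream(x0, r, n):
    # Iterate the logistic map x -> r*x*(1-x) and quantize each state to a byte.
    xs = np.empty(n)
    x = x0
    for i in range(n):
        x = r * x * (1 - x)
        xs[i] = x
    return (xs * 255).astype(np.uint8)

def xor_cipher(data, key0=0.3141, r=3.99):
    # XOR diffusion: the same keystream both encrypts and decrypts.
    ks = logistic_keystream(key0, r, data.size)
    return data ^ ks

img = np.arange(64, dtype=np.uint8)  # stand-in for one flattened color channel
enc = xor_cipher(img)
dec = xor_cipher(enc)  # XOR is its own inverse under the same key
```

The secret key here is the pair (`key0`, `r`); sensitivity to tiny changes in these parameters is what the chaotic map contributes to the cipher.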


2020 ◽  
Vol 18 (1) ◽  
pp. 1635-1644
Author(s):  
Yongjie Han ◽  
Hanyue Xiao ◽  
Guanggui Chen

Abstract In this paper, we define the entropy number in the probabilistic setting and determine the exact order of the entropy numbers of finite-dimensional spaces in the probabilistic setting. Moreover, we estimate the sharp order of the entropy numbers of the univariate Sobolev space in the probabilistic setting by a discretization method.
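For context, the classical (non-probabilistic) entropy numbers that this abstract generalizes are standardly defined as follows for an operator $T : X \to Y$ between Banach spaces, where $B_X$ denotes the closed unit ball of $X$; the probabilistic variant in the paper additionally discards a set of small measure.

```latex
e_k(T) \;=\; \inf\Bigl\{\varepsilon > 0 \;:\; T(B_X) \subseteq
\bigcup_{j=1}^{2^{k-1}} \bigl(y_j + \varepsilon B_Y\bigr)
\ \text{for some } y_1, \dots, y_{2^{k-1}} \in Y \Bigr\}
```

Thus $e_k(T)$ measures how well the image of the unit ball can be covered by $2^{k-1}$ balls of equal radius, and its decay rate quantifies the compactness of $T$.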

