separating hyperplane
Recently Published Documents


TOTAL DOCUMENTS

46
(FIVE YEARS 12)

H-INDEX

7
(FIVE YEARS 2)

Author(s):  
Alan Beggs

Abstract: This paper presents a proof of Afriat’s (Int Econ Rev 8:67–77) theorem on revealed preference by using the idea that a rational consumer should not be vulnerable to arbitrage. The main mathematical tool is the separating hyperplane theorem.


Symmetry ◽  
2021 ◽  
Vol 13 (8) ◽  
pp. 1378
Author(s):  
Yulan Wang ◽  
Zhixia Yang ◽  
Xiaomei Yang

In this paper, we propose a novel binary classification method called the kernel-free quadratic surface minimax probability machine (QSMPM), which uses the kernel-free technique of the quadratic surface support vector machine (QSSVM) and inherits the parameter-free advantage of the minimax probability machine (MPM). Specifically, it seeks a quadratic hypersurface that separates two classes of samples with maximum probability. The optimization problem derived directly is too difficult to solve, so a nonlinear transformation is introduced that turns the quadratic function involved into a linear one. After this processing, the optimization problem becomes a second-order cone programming problem, which is solved efficiently by an alternate iteration method. It should be pointed out that our method is both kernel-free and parameter-free, making it easy to use. In addition, the quadratic hypersurface obtained by our method may take any general quadratic form, so it has better interpretability than methods that rely on a kernel function. Finally, to demonstrate the geometric interpretation of our QSMPM, experiments on five artificial datasets were carried out, including one showing its ability to recover a linear separating hyperplane. Furthermore, numerical experiments on benchmark datasets confirmed that the proposed method achieves better accuracy and lower CPU time than the corresponding methods.
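The key trick described in the abstract, a nonlinear transformation that makes a quadratic function linear, can be illustrated by lifting each sample into the space of its quadratic monomials. This is a minimal sketch of that lifting (the function name `quad_lift` and the example coefficients are illustrative, not the authors' code):

```python
import numpy as np

def quad_lift(x):
    """Lift x in R^n to the vector of all monomials x_i * x_j (i <= j)
    followed by the linear terms x_i, so that any quadratic function
    0.5*x'Wx + b'x + c becomes a linear function of the lifted vector."""
    n = len(x)
    quad = [x[i] * x[j] for i in range(n) for j in range(i, n)]
    return np.concatenate([quad, x])

# The quadratic surface f(x) = x1^2 - x1*x2 + 3*x2 + 1 is linear in quad_lift(x):
x = np.array([2.0, -1.0])
z = quad_lift(x)                          # [x1^2, x1*x2, x2^2, x1, x2]
w = np.array([1.0, -1.0, 0.0, 0.0, 3.0])  # coefficients of f in the lifted space
f = w @ z + 1.0                           # evaluates the quadratic as a linear form
```

In the lifted space the quadratic separating hypersurface becomes a separating hyperplane, which is what makes the second-order cone reformulation possible.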


2021 ◽  
Vol 12 ◽  
Author(s):  
Theodore Raphan ◽  
Sergei B. Yakushin

Vasovagal syncope (VVS), or neurogenically induced fainting, has resulted in falls, fractures, and death. Current methods for dealing with VVS are implanted pacemakers or beta blockers. These are often ineffective because the underlying changes in the cardiovascular system that lead to the syncope are incompletely understood, and diagnosis of frequent occurrences of VVS is still based on history and a tilt test, in which subjects are passively tilted from a supine position to 20° from the spatial vertical (a 70° position) on the tilt table and maintained in that orientation for 10–15 min. Recently, it has been shown that vasovagal responses (VVRs), which are characterized by transient drops in blood pressure (BP) and heart rate (HR) and increased amplitude of low-frequency oscillations in BP, can be induced by sinusoidal galvanic vestibular stimulation (sGVS) and are similar to the low-frequency oscillations that presage VVS in humans. Transient drops in BP and HR of 25 mmHg and 25 beats per minute (bpm), respectively, were considered to constitute a VVR; similar thresholds have been used to identify VVRs in human studies as well. However, this arbitrary threshold gives a clear understanding neither of the identifying features of a VVR nor of what triggers one. In this study, we utilized our model of VVR generation together with a machine learning approach to learn a separating hyperplane between normal and VVR patterns. This methodology is proposed as a technique for more broadly identifying the features that trigger a VVR. If similar features could be associated with VVRs in humans, they could potentially be used to identify the onset of VVS, i.e., fainting, in real time.


2021 ◽  
Vol 1776 (1) ◽  
pp. 012063
Author(s):  
Susilo Hariyanto ◽  
Y.D. Sumanto ◽  
Titi Udjiani ◽  
Yuri C Sagala

Author(s):  
Liping Yan ◽  
Xuezhi Dong ◽  
Hualiang Zhang ◽  
Haisheng Chen

Abstract: Fault diagnosis is a very important part of gas turbine maintenance. The kernel extreme learning machine (KELM), a novel artificial intelligence algorithm, is a potentially effective diagnosis technology. Existing KELMs all assume that every feature has the same influence on the optimal separating hyperplane, which reduces their generalization performance. In this study, a feature-weighted kernel extreme learning machine ensemble method (FWKELM-RF) is developed for application in the field of gas turbine fault diagnosis. First, the information gain ratio is introduced to assign different weights to the feature space. Furthermore, a random forest is used to enhance the stability of the feature-weighted KELM. Fault datasets from a gas turbine with three shafts were generated to validate the performance of the developed method, and the results demonstrate that FWKELM-RF achieves better accuracy and stability in detecting gas turbine faults.
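The feature-weighting idea, scaling each feature by a weight such as its information gain ratio before applying the kernel, can be sketched as a weighted RBF kernel. This is an illustrative sketch of the weighting mechanism only (the function name and weights are assumptions, not the paper's implementation):

```python
import numpy as np

def weighted_rbf_kernel(X1, X2, w, gamma=1.0):
    """RBF kernel with per-feature weights w (e.g. information gain ratios):
    each feature is scaled by sqrt(w) so that more informative features
    contribute more to the pairwise distances, and hence to the kernel."""
    Xw1 = X1 * np.sqrt(w)
    Xw2 = X2 * np.sqrt(w)
    sq = ((Xw1[:, None, :] - Xw2[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

# A feature with weight 0 is ignored entirely by the kernel:
K = weighted_rbf_kernel(np.array([[0.0, 5.0]]),
                        np.array([[0.0, 100.0]]),
                        w=np.array([1.0, 0.0]))
```

Such a kernel matrix can then be plugged into any KELM-style solver in place of the unweighted kernel.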


2020 ◽  
Vol 13 (3) ◽  
pp. 531-535
Author(s):  
Vijayasherly Velayutham ◽  
Srimathi Chandrasekaran

Aim: To develop a prediction model grounded in Machine Learning using a Support Vector Machine (SVM). Background: Prediction of workload in a cloud environment is one of the primary tasks in provisioning resources. Forecasting future workload requirements depends on the capability of the prediction technique, which should maximize the usage of resources in a cloud computing environment. Objective: To reduce the training time of the SVM model. Methods: First, K-Means clustering is applied to the training dataset to form ‘n’ clusters. Then, for every tuple in a cluster, the tuple’s class label is compared with the tuple’s cluster label. If the two labels are identical, the tuple is rightly classified and would not contribute much to the SVM training process that formulates the separating hyperplane with the lowest generalization error. Otherwise, the tuple is added to the reduced training dataset. This selective addition of tuples for training the SVM is carried out for all clusters. The support vectors are the few samples in the reduced training dataset that determine the optimal separating hyperplane. Results: On the Google Cluster Trace dataset, the proposed model achieved a reduction in training time and Root Mean Square Error, and a marginal increase in the R2 score over the traditional SVM. The model has also been tested on Los Alamos National Laboratory’s Mustang and Trinity cluster traces. Conclusion: CloudSim’s CPU utilization (VM and Cloudlet utilization) was measured and found to increase upon running the same set of tasks through our proposed model.
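The selection rule in the Methods section, keeping only tuples whose class label disagrees with their cluster's label, can be sketched as follows. This is an illustrative reading of that rule (with a simple deterministic K-means and majority-vote cluster labels as assumptions), not the authors' code:

```python
import numpy as np

def reduce_training_set(X, y, n_clusters=2, n_iter=20):
    """Keep only tuples misclassified by clustering: run K-means, label each
    cluster by its majority class, and drop tuples whose class label matches
    their cluster's label. The survivors form the reduced SVM training set."""
    centers = X[:n_clusters].copy()          # simple deterministic init
    for _ in range(n_iter):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        assign = d.argmin(1)
        for k in range(n_clusters):
            if (assign == k).any():
                centers[k] = X[assign == k].mean(0)
    # majority class label per cluster (y assumed to be non-negative ints)
    cluster_label = np.array([np.bincount(y[assign == k]).argmax()
                              if (assign == k).any() else -1
                              for k in range(n_clusters)])
    keep = y != cluster_label[assign]        # disagreement -> keep for SVM
    return X[keep], y[keep]
```

Training an SVM on `X[keep], y[keep]` instead of the full dataset is what yields the reported reduction in training time, since the dropped tuples are unlikely to become support vectors.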


Symmetry ◽  
2020 ◽  
Vol 12 (6) ◽  
pp. 874 ◽  
Author(s):  
Aliyu Muhammed Awwal ◽  
Lin Wang ◽  
Poom Kumam ◽  
Hassan Mohammad

In this paper, we propose a two-step iterative algorithm based on a projection technique for solving systems of monotone nonlinear equations with convex constraints. The proposed two-step algorithm uses two search directions, which are defined using the well-known Barzilai and Borwein (BB) spectral parameters. The BB spectral parameters can be viewed as approximations of the Jacobians by scalar multiples of identity matrices. If the Jacobians are close to symmetric matrices with clustered eigenvalues, then the BB parameters are expected to behave nicely. We present a new line search technique for generating the separating hyperplane projection step of Solodov and Svaiter (1998) that generalizes the one used in most of the existing literature. We establish the convergence of the algorithm under some suitable assumptions. Preliminary numerical experiments demonstrate the efficiency and computational advantage of the algorithm over some existing algorithms designed for solving similar problems. Finally, we apply the proposed algorithm to an image deblurring problem.
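The separating hyperplane projection step of Solodov and Svaiter referenced here has a simple closed form: once the line search produces a trial point z with F(z)·(x − z) > 0, the hyperplane {u : ⟨F(z), u − z⟩ = 0} separates the current iterate x from the solution set of the monotone equation, and x is projected onto it. A minimal sketch of that single step (the function name is illustrative):

```python
import numpy as np

def hyperplane_projection_step(x, z, Fz):
    """Project the current iterate x onto the separating hyperplane
    {u : <F(z), u - z> = 0} determined by the line-search point z and the
    residual Fz = F(z):  x_new = x - (<Fz, x - z> / ||Fz||^2) * Fz."""
    t = Fz @ (x - z) / (Fz @ Fz)
    return x - t * Fz

# After the step, the new iterate lies exactly on the hyperplane:
x_new = hyperplane_projection_step(np.array([1.0, 1.0]),
                                   np.array([0.5, 0.5]),
                                   np.array([1.0, 2.0]))
```

For constrained problems the result is additionally projected onto the convex feasible set; the sketch above shows only the unconstrained hyperplane projection.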


Author(s):  
Byeongho Heo ◽  
Minsik Lee ◽  
Sangdoo Yun ◽  
Jin Young Choi

An activation boundary for a neuron refers to a separating hyperplane that determines whether the neuron is activated or deactivated. It has long been considered in neural networks that the activations of neurons, rather than their exact output values, play the most important role in forming classification-friendly partitions of the hidden feature space. However, as far as we know, this aspect of neural networks has not been considered in the literature on knowledge transfer. In this paper, we propose a knowledge transfer method via distillation of the activation boundaries formed by hidden neurons. For the distillation, we propose an activation transfer loss that attains its minimum when the boundaries generated by the student coincide with those of the teacher. Since the activation transfer loss is not differentiable, we design a piecewise differentiable loss approximating it. By the proposed method, the student learns the separating boundary between the activation and deactivation regions formed by each neuron in the teacher. Through experiments on various aspects of knowledge transfer, it is verified that the proposed method outperforms the current state of the art.
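The shape of such a piecewise differentiable surrogate can be sketched as a squared-hinge loss on pre-activations: when the teacher neuron is active, push the student pre-activation above a margin; when inactive, push it below the negative margin. This is a hedged sketch of the idea, not the paper's exact loss (the function name and margin value are assumptions):

```python
import numpy as np

def activation_transfer_loss(t, s, margin=1.0):
    """Squared-hinge sketch of activation-boundary transfer: t and s are
    teacher and student pre-activations for the same neurons. If t > 0
    (teacher neuron active), penalize s below +margin; otherwise penalize
    s above -margin. Piecewise differentiable in s, and zero only when the
    student's activation pattern matches the teacher's with a margin."""
    active = (t > 0).astype(float)
    loss = active * np.maximum(margin - s, 0.0) ** 2 \
         + (1.0 - active) * np.maximum(margin + s, 0.0) ** 2
    return loss.mean()
```

Minimizing a loss of this shape aligns the sign pattern of the student's hidden units with the teacher's, i.e., it matches the activation boundaries rather than the exact output values.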

