On Fuzzy Bi-Level Multi-Objective Large Scale Integer Quadratic Programming Problem

2017 ◽  
Vol 159 (2) ◽  
pp. 28-33
Author(s):  
O. E. ◽  
E. Fathy ◽  
A. A.
2013 ◽  
Vol 312 ◽  
pp. 771-776
Author(s):  
Min Juan Zheng ◽  
Guo Jian Cheng ◽  
Fei Zhao

The quadratic programming problem in the standard support vector machine (SVM) algorithm has high time and space complexity on large-scale problems, which becomes a bottleneck in SVM applications. The Ball Vector Machine (BVM) converts the quadratic programming problem of the traditional SVM into a minimum enclosing ball (MEB) problem. The solution of the quadratic program can then be obtained indirectly by solving the MEB problem, which significantly reduces both time and space complexity. Experiments on five large-scale, high-dimensional data sets show that the BVM achieves accuracy comparable to the standard SVM while running faster and requiring less memory.
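The MEB subproblem at the heart of the BVM is commonly approximated with the Bădoiu–Clarkson core-set iteration. The sketch below illustrates only that geometric subproblem, not the BVM training pipeline itself; function and parameter names are illustrative.

```python
import math

def minimum_enclosing_ball(points, iterations=1000):
    """(1+eps)-approximation of the minimum enclosing ball via the
    Badoiu-Clarkson iteration: repeatedly move the center toward the
    current farthest point with a shrinking step size 1/(t+1)."""
    c = list(points[0])  # initialize the center at any point of the set
    for t in range(1, iterations + 1):
        # farthest point from the current center (by squared distance)
        p = max(points,
                key=lambda q: sum((qi - ci) ** 2 for qi, ci in zip(q, c)))
        # move the center a fraction 1/(t+1) of the way toward p
        c = [ci + (pi - ci) / (t + 1) for ci, pi in zip(c, p)]
    r = max(math.dist(q, c) for q in points)  # final radius
    return c, r
```

Each iteration touches only one farthest-point query, which is what keeps the time and space cost low compared with solving the dual quadratic program directly.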


2016 ◽  
Vol 15 (5) ◽  
pp. 6738-6748 ◽  
Author(s):  
Usama Emam

This paper proposes an algorithm for solving the multi-level multi-objective quadratic programming problem with fuzzy parameters in the objective functions. The algorithm first applies a linear ranking method to the trapezoidal fuzzy numbers in the objective functions, then uses tolerance membership function concepts and multi-objective optimization at each level to develop a fuzzy max-min decision model that generates a satisfactory solution. An illustrative example is included to explain the results.
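A minimal sketch of the two building blocks named above: a linear ranking of trapezoidal fuzzy numbers and a max-min aggregation of tolerance memberships. The average-of-the-four-points ranking and the linear membership are common choices assumed here for illustration; the paper's exact definitions may differ.

```python
def rank_trapezoidal(a, b, c, d):
    """Linear ranking of a trapezoidal fuzzy number (a, b, c, d):
    here the average of its four defining points, one common choice."""
    return (a + b + c + d) / 4.0

def membership(value, worst, best):
    """Linear tolerance membership: 0 at the worst value, 1 at the best,
    clipped to [0, 1] outside that range."""
    if best == worst:
        return 1.0
    mu = (value - worst) / (best - worst)
    return max(0.0, min(1.0, mu))

def max_min_satisfaction(objective_values, worsts, bests):
    """Fuzzy max-min aggregation: the overall satisfaction level is the
    smallest membership across all objectives."""
    return min(membership(v, w, b)
               for v, w, b in zip(objective_values, worsts, bests))
```

In the max-min decision model, the solver then maximizes this smallest membership over the feasible region, so no single objective is left with a poor satisfaction level.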


2020 ◽  
Vol 10 (19) ◽  
pp. 6979
Author(s):  
Minho Ryu ◽  
Kichun Lee

Support vector machines (SVMs) are well-known classifiers due to their superior classification performance. An SVM is defined by a hyperplane that separates two classes with the largest margin. Computing this hyperplane, however, requires solving a quadratic programming problem: the storage cost grows with the square of the number of training sample points, and the time complexity is in general proportional to its cube. It is therefore worth studying how to reduce the training time of SVMs without compromising performance, to prepare for sustainability in large-scale SVM problems. In this paper, we propose a novel data reduction method that shortens training time by combining decision trees and relative support distance. We apply this new concept, relative support distance, to select good support vector candidates in each partition generated by the decision trees. The selected candidates improve training speed for large-scale SVM problems. In experiments, we demonstrate that our approach significantly reduces training time while maintaining good classification performance in comparison with existing approaches.
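The selection step can be illustrated loosely as follows. The score used here, distance to the nearest opposite-class point, is an assumption standing in for the paper's relative support distance, and the decision-tree partitioning step is omitted; in the actual method this selection would run inside each tree-induced partition.

```python
def euclid(p, q):
    """Euclidean distance between two points given as sequences."""
    return sum((pi - qi) ** 2 for pi, qi in zip(p, q)) ** 0.5

def select_candidates(points, labels, keep_ratio=0.5):
    """Keep the points most likely to be support vectors: those closest
    to the opposite class.  The scoring is an illustrative stand-in for
    the paper's relative support distance, not its actual definition."""
    scores = []
    for i, (p, y) in enumerate(zip(points, labels)):
        opposite = [q for q, t in zip(points, labels) if t != y]
        # smaller score = nearer the class boundary = better candidate
        scores.append((min(euclid(p, q) for q in opposite), i))
    scores.sort()
    n_keep = max(1, int(len(points) * keep_ratio))
    return sorted(i for _, i in scores[:n_keep])
```

Training the SVM only on the kept candidates is what reduces the size of the quadratic program, since interior points far from the boundary rarely end up as support vectors.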

