Regional-Scale Mineral Prospectivity Mapping: Support Vector Machines and an Improved Data-Driven Multi-criteria Decision-Making Technique

Author(s): Reza Ghezelbash, Abbas Maghsoudi, Amirreza Bigdeli, Emmanuel John M. Carranza
2010, Vol. 121-122, pp. 825-831
Author(s): Yong Zhao, Ye Zheng Liu

Forecasting the turnover of knowledge employees is a multi-criteria decision-making problem involving many factors. To forecast turnover accurately, the potential support vector machine (P-SVM) is introduced to develop a turnover forecast model. In the model development, a chaos algorithm and a genetic algorithm (GA) are employed to optimize the selection of the P-SVM parameters. The simulation results show that the chaos-based P-SVM model not only has much stronger generalization ability but also performs feature selection.
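As a rough illustration of this kind of optimization, the sketch below tunes a standard RBF-kernel SVM with a simple genetic algorithm whose population is initialized by a chaotic logistic map. The P-SVM itself is not available in common libraries, so scikit-learn's SVC stands in for it, and the dataset, parameter ranges, and GA settings are illustrative assumptions rather than the authors' configuration.

# A minimal sketch of GA-based SVM hyperparameter tuning with chaotic
# (logistic-map) population initialization. SVC is a stand-in for the
# paper's P-SVM; all data and GA settings here are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=300, n_features=10, random_state=0)

# Search ranges for the two SVC parameters (log10 scale).
BOUNDS = {"C": (-2, 3), "gamma": (-4, 1)}

def chaotic_init(pop_size, dim, x0=0.7, mu=4.0):
    """Initialize individuals in [0, 1) with the logistic map (chaos)."""
    pop = np.empty((pop_size, dim))
    x = x0
    for i in range(pop_size):
        for j in range(dim):
            x = mu * x * (1.0 - x)
            pop[i, j] = x
    return pop

def decode(ind):
    """Map an individual in [0, 1)^2 to concrete (C, gamma) values."""
    c_lo, c_hi = BOUNDS["C"]
    g_lo, g_hi = BOUNDS["gamma"]
    return 10 ** (c_lo + ind[0] * (c_hi - c_lo)), 10 ** (g_lo + ind[1] * (g_hi - g_lo))

def fitness(ind):
    """Cross-validated accuracy of an SVC with the decoded parameters."""
    C, gamma = decode(ind)
    return cross_val_score(SVC(C=C, gamma=gamma), X, y, cv=5).mean()

pop = chaotic_init(pop_size=20, dim=2)
for generation in range(15):
    scores = np.array([fitness(ind) for ind in pop])
    # Tournament selection, uniform crossover, Gaussian mutation.
    parents = pop[[max(rng.choice(len(pop), 2), key=lambda i: scores[i])
                   for _ in range(len(pop))]]
    mask = rng.random(pop.shape) < 0.5
    children = np.where(mask, parents, parents[::-1])
    children += rng.normal(0, 0.05, children.shape)
    pop = np.clip(children, 0.0, 1.0 - 1e-9)

best = max(pop, key=fitness)
print("best (C, gamma):", decode(best), "CV accuracy:", fitness(best))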


2020, Vol. 12 (4), pp. 297-308
Author(s): Chris H. Miller, Matthew D. Sacchet, Ian H. Gotlib

Support vector machines (SVMs) are being used increasingly in affective science as a data-driven classification method and feature reduction technique. Whereas traditional statistical methods typically compare group averages on selected variables, SVMs use a predictive algorithm to learn multivariate patterns that optimally discriminate between groups. In this review, we provide a framework for understanding the methods of SVM-based analyses and summarize the findings of seminal studies that use SVMs for classification or data reduction in the behavioral and neural study of emotion and affective disorders. We conclude by discussing promising directions and potential applications of SVMs in future research in affective science.
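To make the contrast with group-average comparisons concrete, the sketch below trains a linear SVM on synthetic two-group data and then uses SVM-driven recursive feature elimination as a feature-reduction step. The data, parameters, and pipeline are illustrative assumptions built on scikit-learn, not the pipeline of any study covered in the review.

# A minimal sketch of the two SVM uses described in the review: multivariate
# classification of two groups and SVM-based feature reduction. The data are
# synthetic stand-ins for behavioral/neural features; nothing here reproduces
# any particular study's pipeline.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# 60 "participants", 50 features, two diagnostic groups (labels 0/1).
X, y = make_classification(n_samples=60, n_features=50, n_informative=8,
                           random_state=1)

# Multivariate classification: a linear SVM learns a weighted pattern over
# all features at once, evaluated with cross-validation rather than by
# comparing group means feature-by-feature.
clf = make_pipeline(StandardScaler(), SVC(kernel="linear", C=1.0))
acc = cross_val_score(clf, X, y, cv=5).mean()
print(f"cross-validated accuracy: {acc:.2f}")

# Feature reduction: recursive feature elimination repeatedly drops the
# features with the smallest SVM weights, keeping the 10 most discriminative.
selector = RFE(SVC(kernel="linear", C=1.0), n_features_to_select=10)
selector.fit(StandardScaler().fit_transform(X), y)
print("retained features:", np.flatnonzero(selector.support_))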


Author(s): Mojtaba Montazery, Nic Wilson

Support Vector Machines (SVMs) are among the best-known machine learning methods, with broad use across scientific areas. However, a necessary pre-processing step for SVMs is feature normalization (scaling), since SVMs are not invariant to the scales of the feature space: different scalings may lead to different results. We define a more robust decision-making approach for binary classification, in which a sample strongly belongs to a class if it is assigned to that class under every possible rescaling of the features. We derive a characterisation of this approach for binary SVMs that determines when an instance strongly belongs to a class and when the classification is invariant to rescaling. This characterisation leads to a computational method for deciding whether a sample is strongly positive, strongly negative, or neither. Our experimental results support the intuition that being strongly positive indicates stronger confidence that an instance really is positive.
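The paper derives an exact characterisation; the sketch below only approximates the idea empirically, declaring a test point strongly positive if a linear SVM assigns it to the positive class under every randomly sampled positive rescaling of the features it is tested with. The data and the sampling scheme are illustrative assumptions.

# A minimal sketch of the "strongly belongs" idea: a test point counts as
# strongly positive only if the SVM assigns it to the positive class under
# every rescaling of the features that we try. Brute-force sampling over
# random positive scale factors on synthetic data, not the paper's method.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

rng = np.random.default_rng(42)
X, y = make_classification(n_samples=200, n_features=5, random_state=42)
X_train, y_train, x_test = X[:150], y[:150], X[150]

def prediction_under_scaling(scales):
    """Train a linear SVM on rescaled features and classify the test point."""
    svm = SVC(kernel="linear", C=1.0)
    svm.fit(X_train * scales, y_train)
    return svm.predict((x_test * scales).reshape(1, -1))[0]

# Sample random per-feature scale factors spanning several orders of magnitude.
labels = {prediction_under_scaling(10 ** rng.uniform(-2, 2, size=X.shape[1]))
          for _ in range(200)}

if labels == {1}:
    print("strongly positive (for all sampled rescalings)")
elif labels == {0}:
    print("strongly negative (for all sampled rescalings)")
else:
    print("neither: the predicted class depends on the feature scaling")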

