Multi-Label Feature Selection Combining Three Types of Conditional Relevance

Entropy ◽  
2021 ◽  
Vol 23 (12) ◽  
pp. 1617
Author(s):  
Lingbo Gao ◽  
Yiqiang Wang ◽  
Yonghao Li ◽  
Ping Zhang ◽  
Liang Hu

With the rapid growth of the Internet, the curse of dimensionality caused by massive multi-label data has attracted extensive attention. Feature selection plays an indispensable role in dimensionality reduction, and many researchers have approached the problem from an information-theoretic perspective. Here, a novel feature relevance term (FR) is designed to evaluate feature relevance: it employs three incremental information terms to comprehensively consider three key aspects, namely candidate features, selected features, and label correlations. Examining all three aspects makes FR better suited to capturing optimal features. Moreover, label-related feature redundancy is employed as a label-related redundancy term (LR) to reduce unnecessary redundancy. A multi-label feature selection method that integrates FR with LR is then proposed, named feature selection combining Three types of Conditional Relevance (TCRFS). Extensive experiments indicate that TCRFS outperforms six state-of-the-art multi-label approaches on 13 multi-label benchmark data sets from 4 domains.
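As a concrete illustration of this style of method, the sketch below greedily scores candidate features by the conditional information they add about each label given already-selected features, using discrete (conditional) mutual information. This is a minimal sketch, not the paper's exact incremental terms: the scoring function, the use of only the last selected feature as conditioning context, and the assumption of discrete features and binary labels are all simplifications introduced here.

```python
import numpy as np

def entropy(*cols):
    """Joint Shannon entropy (bits) of one or more discrete columns."""
    joint = np.stack(cols, axis=1)
    _, counts = np.unique(joint, axis=0, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def cond_mi(x, y, z):
    """I(X; Y | Z) = H(X,Z) + H(Y,Z) - H(X,Y,Z) - H(Z)."""
    return entropy(x, z) + entropy(y, z) - entropy(x, y, z) - entropy(z)

def greedy_select(X, Y, k):
    """Pick k features; X: (n, d) discrete, Y: (n, q) binary label matrix."""
    selected = []
    for _ in range(k):
        best, best_score = None, -np.inf
        for f in range(X.shape[1]):
            if f in selected:
                continue
            # Conditioning context: the last chosen feature (a simplification;
            # with nothing selected yet this reduces to plain I(f; y)).
            z = X[:, selected[-1]] if selected else np.zeros(len(X), dtype=int)
            score = sum(cond_mi(X[:, f], Y[:, j], z) for j in range(Y.shape[1]))
            if score > best_score:
                best, best_score = f, score
        selected.append(best)
    return selected
```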

2020 ◽  
Vol 10 (15) ◽  
pp. 5170
Author(s):  
José Alberto Hernández-Muriel ◽  
Jhon Bryan Bermeo-Ulloa ◽  
Mauricio Holguin-Londoño ◽  
Andrés Marino Álvarez-Meza ◽  
Álvaro Angel Orozco-Gutiérrez

Nowadays, bearings installed in industrial electric motors are a primary failure mode affecting global energy consumption. Since industrial energy demand keeps growing, efficient maintenance of electric motors is decisive. Vibration signals from bearings are commonly employed as a non-invasive means to support fault diagnosis and severity evaluation of rotating machinery. However, vibration-based diagnosis is challenging because of the signal properties, which are highly dynamic and non-stationary. Here, we introduce a knowledge-based tool to analyze multiple health conditions in bearings. Our approach includes a stochastic feature selection method, termed Stochastic Feature Selection (SFS), that highlights and interprets relevant multi-domain attributes (time, frequency, and time–frequency) related to the discriminability of bearing faults. In particular, a relief-F-based ranking and a Hidden Markov Model are trained under a windowing scheme to achieve our SFS. Results on a public database demonstrate that our proposal is competitive with state-of-the-art algorithms in both the number of selected features and classification accuracy.
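The relief-F ranking stage can be sketched as follows; the windowing scheme and the Hidden Markov Model are omitted here, and the neighbor count and the L1 distance are illustrative choices rather than the paper's settings.

```python
import numpy as np

def relief_f(X, y, n_neighbors=10):
    """Return one relevance weight per feature (higher = more discriminative)."""
    n, d = X.shape
    span = X.max(axis=0) - X.min(axis=0)
    span[span == 0] = 1.0
    Xs = (X - X.min(axis=0)) / span            # scale to [0, 1] so diffs compare
    w = np.zeros(d)
    priors = {c: (y == c).mean() for c in np.unique(y)}
    for i in range(n):
        dist = np.abs(Xs - Xs[i]).sum(axis=1)  # L1 distance to every sample
        dist[i] = np.inf                       # exclude the sample itself
        for c in np.unique(y):
            idx = np.where(y == c)[0]
            idx = idx[np.argsort(dist[idx])][:n_neighbors]
            diff = np.abs(Xs[idx] - Xs[i]).mean(axis=0)
            if c == y[i]:
                w -= diff / n                  # near hits lower the weight
            else:                              # near misses raise it,
                w += priors[c] / (1 - priors[y[i]]) * diff / n  # prior-weighted
    return w
```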


2021 ◽  
pp. 1-12
Author(s):  
Emmanuel Tavares ◽  
Alisson Marques Silva ◽  
Gray Farias Moita ◽  
Rodrigo Tomas Nogueira Cardoso

Feature Selection (FS) is currently a very important and prominent research area. The focus of FS is to identify and remove irrelevant and redundant features from large data sets in order to reduce processing time and improve the predictive ability of the algorithms. This work presents a straightforward and efficient FS method based on the mean ratio of the attributes (features) associated with each class. The proposed filtering method, here called MRFS (Mean Ratio Feature Selection), involves only low-cost computations built from basic operations such as addition, division, and comparison. First, for each attribute, MRFS computes the mean of the values associated with each class. Then, for each attribute, the ratio between these class means is calculated. Finally, the attributes are ordered by mean ratio from smallest to largest; the attributes with the lowest ratios are the most relevant to the classification algorithms. The proposed method is evaluated and compared with three state-of-the-art methods in classification using four classifiers and ten data sets. Computational experiments and comparisons against other feature selection methods show that MRFS is accurate and a promising alternative for classification tasks.
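Because the ranking rule is fully specified above, it can be sketched compactly. The version below assumes a binary classification task with non-negative feature values; the multi-class extension and the handling of zero means are assumptions, not details from the paper.

```python
import numpy as np

def mrfs_rank(X, y):
    """X: (n_samples, n_features) with non-negative values; y: binary labels.
    Returns feature indices ordered from most to least relevant."""
    classes = np.unique(y)
    means = np.array([X[y == c].mean(axis=0) for c in classes])   # (2, d)
    ratio = means.min(axis=0) / (means.max(axis=0) + 1e-12)       # in [0, 1]
    return np.argsort(ratio)    # ascending: smallest ratio = most separable

# Usage: keep the top-m ranked features.
# idx = mrfs_rank(X_train, y_train)[:m]
# X_train_sel = X_train[:, idx]
```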


2016 ◽  
Vol 28 (4) ◽  
pp. 716-742 ◽  
Author(s):  
Saurabh Paul ◽  
Petros Drineas

We introduce single-set spectral sparsification as a deterministic sampling-based feature selection technique for regularized least-squares classification, which is the classification analog of ridge regression. The method is unsupervised and gives worst-case guarantees on the generalization power of the classification function after feature selection, relative to the classification function obtained using all features. We also introduce leverage-score sampling as an unsupervised randomized feature selection method for ridge regression. We provide risk bounds for both single-set spectral sparsification and leverage-score sampling on ridge regression in the fixed design setting and show that the risk in the sampled space is comparable to the risk in the full-feature space. We perform experiments on synthetic and real-world data sets (a subset of the TechTC-300 data sets) to support our theory. Experimental results indicate that the proposed methods perform better than existing feature selection methods.
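Leverage-score feature sampling is standard enough to sketch: compute the right singular vectors of the data matrix and sample columns proportionally to their squared row norms. The rank-truncation parameter and sampling without replacement (the analysis of such methods typically samples with replacement and rescales) are simplifications assumed here.

```python
import numpy as np

def leverage_score_sample(X, n_keep, rank=None, rng=None):
    """Sample n_keep feature (column) indices of X with probability
    proportional to their leverage scores."""
    rng = np.random.default_rng() if rng is None else rng
    _, _, Vt = np.linalg.svd(X, full_matrices=False)   # X = U S Vt
    V = Vt[:rank].T                                    # (d, r) right factors
    scores = (V ** 2).sum(axis=1)                      # column leverage scores
    p = scores / scores.sum()
    return rng.choice(X.shape[1], size=n_keep, replace=False, p=p)
```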


Author(s):  
MINGXIA LIU ◽  
DAOQIANG ZHANG

As thousands of features are available in many pattern recognition and machine learning applications, feature selection remains an important task for finding the most compact representation of the original data. Although a number of feature selection methods have been developed in the literature, most of them focus on optimizing specific objective functions. In this paper, we first propose a general graph-preserving feature selection framework, in which the graphs to be preserved vary in their specific definitions, and show that a number of existing filter-type feature selection algorithms can be unified within this framework. Then, based on the proposed framework, a new filter-type feature selection method called sparsity score (SS) is proposed. This method aims to preserve the structure of a pre-defined l1 graph, which is robust to data noise. Here, a modified sparse representation based on an l1-norm minimization problem is used to determine the graph adjacency structure and the corresponding affinity weight matrix simultaneously. Furthermore, a variant of SS called supervised SS (SuSS) is also proposed, in which the l1 graph to be preserved is constructed using only data points from the same class. Experimental results on clustering and classification tasks over a series of benchmark data sets show that the proposed methods achieve better performance than conventional filter-type feature selection methods.
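A sparsity-score-style filter can be sketched in two steps: build the l1 graph by sparsely coding each sample over the remaining samples, then score features by how well they preserve that graph. The Lasso penalty, the absence of an intercept, and the variance-normalized residual score below are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np
from sklearn.linear_model import Lasso

def l1_graph(X, alpha=0.05):
    """W[i, j]: sparse coefficient of sample j in the coding of sample i."""
    n = X.shape[0]
    W = np.zeros((n, n))
    for i in range(n):
        others = np.delete(np.arange(n), i)
        lasso = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000)
        lasso.fit(X[others].T, X[i])        # code x_i over the other samples
        W[i, others] = lasso.coef_
    return W

def sparsity_score(X, W):
    """Per-feature score: variance-normalized squared residual of the graph
    reconstruction; smaller scores indicate structure-preserving features."""
    resid = X - W @ X                       # (n, d) reconstruction residuals
    var = X.var(axis=0)
    var[var == 0] = 1e-12
    return (resid ** 2).sum(axis=0) / var
```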


2021 ◽  
Author(s):  
Ping Zhang ◽  
Jiyao Sheng ◽  
Wanfu Gao ◽  
Juncheng Hu ◽  
Yonghao Li

Multi-label feature selection attracts considerable attention in multi-label learning. Information-theoretic multi-label feature selection methods aim to select the most informative features and reduce the uncertainty of the labels. Previous methods regard the uncertainty of the labels as constant. In fact, as the classification information of the label set is captured by features, the remaining uncertainty of each label changes dynamically. In this paper, we categorize labels into two groups: one contains the labels with little remaining uncertainty, meaning that most of the classification information of these labels has been obtained by the already-selected features; the other contains the labels with substantial remaining uncertainty, meaning that the classification information of these labels has been neglected by the already-selected features. Feature selection should therefore favor new features that are highly relevant to the labels in the second group. Existing methods do not distinguish between the two label groups and ignore the dynamically changing information content of the labels. To this end, a Relevancy Ratio is designed to quantify the dynamically changing information of each label conditioned on the already-selected features. A Weighted Feature Relevancy is then defined to evaluate the candidate features. Finally, a new multi-label Feature Selection method based on Weighted Feature Relevancy (WFRFS) is proposed. Experiments on thirteen real-world data sets show encouraging results for WFRFS in comparison to six multi-label feature selection methods.
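The weighting idea can be sketched directly with discrete entropies: weight each label's relevancy by its remaining conditional uncertainty, so that labels neglected by the already-selected features dominate the score. The exact Relevancy Ratio of WFRFS is not reproduced here; using the last selected feature as the conditioning variable and the simple product below are illustrative assumptions.

```python
import numpy as np

def entropy(*cols):
    """Joint Shannon entropy (bits) of one or more discrete columns."""
    joint = np.stack(cols, axis=1)
    _, counts = np.unique(joint, axis=0, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def weighted_relevancy(f, Y, z):
    """Score a candidate feature f against label matrix Y given context z:
    labels with more remaining uncertainty H(y | z) get larger weight."""
    score = 0.0
    for j in range(Y.shape[1]):
        y = Y[:, j]
        remaining = entropy(y, z) - entropy(z)         # H(y | z)
        mi = entropy(f) + entropy(y) - entropy(f, y)   # I(f; y)
        score += remaining * mi                        # uncertain labels weigh more
    return score
```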


Author(s):  
Jia Zhang ◽  
Yidong Lin ◽  
Min Jiang ◽  
Shaozi Li ◽  
Yong Tang ◽  
...  

Information-theoretic methods have attracted great attention in recent years and have achieved promising results on high-dimensional multi-label data. However, most existing methods are either directly transformed from heuristic single-label feature selection methods or inefficient in exploiting label information. Thus, they may not be able to obtain an optimal feature selection result shared by multiple labels. In this paper, we propose a general global optimization framework in which feature relevance, label relevance (i.e., label correlation), and feature redundancy are taken into account, thus facilitating multi-label feature selection. Moreover, the proposed method has an effective mechanism for exploiting the inherent properties of multi-label learning. Specifically, we provide a formulation that extends the proposed method with label-specific features. Empirical studies on twenty multi-label data sets reveal the effectiveness and efficiency of the proposed method. Our implementation of the proposed method is available online at: https://jiazhang-ml.pub/GRRO-master.zip.
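To make the relevance/redundancy trade-off concrete, the sketch below ranks all features in one global pass: summed mutual information with the labels minus average pairwise redundancy. This is a hypothetical simplification written for illustration, not the paper's objective or optimization (those are in the linked package); the trade-off weight beta and the assumption of discrete data are introduced here.

```python
import numpy as np
from sklearn.metrics import mutual_info_score

def global_rank(X, Y, beta=0.5):
    """Rank all features at once: label relevance minus mean redundancy.
    X: (n, d) discrete features; Y: (n, q) discrete labels."""
    d, q = X.shape[1], Y.shape[1]
    rel = np.array([sum(mutual_info_score(X[:, f], Y[:, j]) for j in range(q))
                    for f in range(d)])
    red = np.array([[mutual_info_score(X[:, f], X[:, g]) for g in range(d)]
                    for f in range(d)])
    np.fill_diagonal(red, 0.0)                 # drop the self-redundancy term
    score = rel - beta * red.sum(axis=1) / max(d - 1, 1)
    return np.argsort(-score)                  # best features first
```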


Author(s):  
Mingyu Fan ◽  
Xiaojun Chang ◽  
Xiaoqin Zhang ◽  
Di Wang ◽  
Liang Du

Recently, feature selection based on structured sparsity induction has become a hot topic in machine learning and pattern recognition. Most sparsity-inducing feature selection methods are designed to rank all features by some criterion and then select the k top-ranked features, where k is an integer. However, the k individually top-ranked features usually do not form the best subset of k features, so the result may be suboptimal. In this paper, we propose a novel supervised feature selection method that directly identifies the best subset of k features. The new method is formulated as a classic regularized least-squares regression model with two groups of variables. The subproblem with respect to one group of variables turns out to be a 0-1 integer program, which is generally very hard to solve. To address this, we use an efficient optimization method that first replaces the discrete 0-1 constraints with two continuous constraints and then applies the alternating direction method of multipliers (ADMM) to optimize the equivalent problem. The obtained result is the best subset of k features under the proposed criterion rather than the subset of k individually top-ranked features. Experiments on benchmark data sets show the effectiveness of the proposed method.
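The motivating claim, that the k top-ranked features need not form the best k-feature subset, is easy to demonstrate on toy data, as below. This demo does not implement the paper's ADMM method; the XOR-style data, the correlation-based ranking, and the exhaustive subset search are all constructions introduced here for illustration.

```python
import itertools
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 400
x1 = rng.integers(0, 2, n)
x2 = rng.integers(0, 2, n)
y = x1 ^ x2                                    # label depends on x1, x2 jointly
x3 = y + rng.normal(0, 2.0, n)                 # weakly informative on its own
X = np.column_stack([x1, x2, x3, rng.normal(size=n)]).astype(float)

def subset_score(cols):
    """Mean 5-fold CV accuracy of a tree restricted to the given columns."""
    clf = DecisionTreeClassifier(random_state=0)
    return cross_val_score(clf, X[:, list(cols)], y, cv=5).mean()

# Ranking by individual |correlation with y| picks x3 first and misses the
# XOR pair (x1, x2), which only an explicit subset search recovers.
corr = [abs(np.corrcoef(X[:, f], y)[0, 1]) for f in range(X.shape[1])]
top_ranked = list(np.argsort(corr)[-2:])
best_subset = max(itertools.combinations(range(X.shape[1]), 2), key=subset_score)
print("k top-ranked features:", top_ranked, "acc:", round(subset_score(top_ranked), 3))
print("best k-feature subset:", best_subset, "acc:", round(subset_score(best_subset), 3))
```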

