class separability
Recently Published Documents


TOTAL DOCUMENTS

110
(FIVE YEARS 22)

H-INDEX

16
(FIVE YEARS 2)

2021 ◽  
Author(s):  
Liangchen Hu

As one way to acquire efficient compact image representations, graph embedding (GE) based manifold learning has been widely developed over the last two decades. Good graph embedding depends on constructing graphs that capture intra-class compactness and inter-class separability, which are crucial indicators of a model's effectiveness in generating discriminative features. Unsupervised approaches aim to reveal data structure from a local or global perspective, but the resulting compact representation often has poor inter-class margins due to the lack of label information. Moreover, supervised techniques only enhance adjacency affinity within a class while excluding the affinity between different classes, and thus fail to fully capture the marginal structure between the distributions of different classes. To overcome these issues, we propose a learning framework, Category-Oriented Self-Learning Graph Embedding (COSLGE), which achieves a flexible low-dimensional compact representation by imposing an adaptive graph learning process across the entire data set, while enforcing the inter-class separability of the low-dimensional embedding through a jointly learned linear classifier. The framework also extends easily to the semi-supervised setting. Extensive experiments on several widely used benchmark databases demonstrate the effectiveness of the proposed method compared with state-of-the-art approaches.
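COSLGE's adaptive graph learning is beyond a short snippet, but the underlying GE idea, embedding data through a neighborhood affinity graph, can be sketched in a Laplacian-eigenmaps style. This is an illustrative stand-in, not the paper's method; `knn_graph_embedding` and its parameters are hypothetical:

```python
import numpy as np

def knn_graph_embedding(X, n_neighbors=5, n_components=2):
    """Minimal graph-embedding sketch (Laplacian-eigenmaps style):
    build a k-NN affinity graph, then embed via the smallest
    non-trivial eigenvectors of the graph Laplacian."""
    n = X.shape[0]
    # pairwise squared Euclidean distances
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.zeros((n, n))
    for i in range(n):
        idx = np.argsort(d2[i])[1:n_neighbors + 1]   # skip self
        W[i, idx] = np.exp(-d2[i, idx] / d2[i, idx].mean())
    W = np.maximum(W, W.T)             # symmetrize the affinity graph
    L = np.diag(W.sum(1)) - W          # unnormalized graph Laplacian
    vals, vecs = np.linalg.eigh(L)
    return vecs[:, 1:n_components + 1]  # drop the constant eigenvector

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (20, 10)), rng.normal(4, 1, (20, 10))])
Y = knn_graph_embedding(X)
print(Y.shape)  # (40, 2)
```

A supervised or adaptive variant, as in COSLGE, would instead learn the affinities W jointly with the embedding and a classifier rather than fixing them from k-NN distances.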



2021 ◽  
Author(s):  
Aarush Mohit Mittal ◽  
Andrew C. Lin ◽  
Nitin Gupta

Scientific studies often require assessment of similarity between ordered sets of values. Each set, containing one value for every dimension or class of data, can be conveniently represented as a vector. The commonly used metrics for vector similarity include angle-based metrics, such as cosine similarity or Pearson correlation, which compare the relative patterns of values, and distance-based metrics, such as the Euclidean distance, which compare the magnitudes of values. Here we evaluate a newly proposed metric, pairwise relative distance (PRED), which considers both relative patterns and magnitudes to provide a single measure of vector similarity. PRED essentially reveals whether the vectors are so similar that their values across the classes are separable. By comparing PRED to other common metrics in a variety of applications, we show that PRED provides a stable chance level irrespective of the number of classes, is invariant to global translation and scaling operations on data, has high dynamic range and low variability in handling noisy data, and can handle multi-dimensional data, as in the case of vectors containing temporal or population responses for each class. We also found that PRED can be adapted to function as a reliable metric of class separability even for datasets that lack the vector structure and simply contain multiple values for each class.
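PRED's exact definition is given in the paper; the two baseline metric families it is compared against can be sketched as follows (a minimal illustration with hypothetical toy vectors):

```python
import numpy as np

def cosine_similarity(u, v):
    """Angle-based: compares relative patterns, ignores magnitude."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def euclidean_distance(u, v):
    """Distance-based: sensitive to magnitudes as well as patterns."""
    return float(np.linalg.norm(u - v))

u = np.array([1.0, 2.0, 3.0])
v = 10 * u  # same pattern, different magnitude
print(cosine_similarity(u, v))   # ~1.0: scaling is invisible to cosine
print(euclidean_distance(u, v))  # large: scaling dominates Euclidean
```

The contrast motivates a combined measure such as PRED: cosine calls `u` and `v` identical while Euclidean distance calls them far apart, and neither alone captures both pattern and magnitude.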


2021 ◽  
Vol 13 (11) ◽  
pp. 2029
Author(s):  
Zhi Hong Kok ◽  
Abdul Rashid Bin Mohamed Shariff ◽  
Siti Khairunniza-Bejo ◽  
Hyeon-Tae Kim ◽  
Tofael Ahamed ◽  
...  

Oil palm crops are essential for sustainable edible oil production, and their yield is highly dependent on fertilizer application. Using Landsat-8 imagery, the feasibility of macronutrient level classification with Machine Learning (ML) was studied. Variable rates of compost and inorganic fertilizer were applied to experimental plots, and the following nutrients were studied: nitrogen (N), phosphorus (P), potassium (K), magnesium (Mg) and calcium (Ca). By applying image filters, separability metrics, vegetation indices (VI) and feature selection, spectral features for each plot were acquired and used with ML models to classify macronutrient levels of palm stands against chemical foliar analysis of their 17th frond. The models were calibrated and validated with 30 repetitions, with the best mean overall accuracies reported for N and K at 79.7 ± 4.3% and 76.6 ± 4.1% respectively, while P, Mg and Ca could not be accurately classified due to limitations of the dataset used. The study highlighted the effectiveness of separability metrics in quantifying class separability, the importance of indices for N and K level classification, and the effects of filtering and feature selection on model performance, and recommended RF and SVM models for detecting excessive N and K levels. Future improvements should focus on further model validation and the use of higher-resolution imagery.
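The abstract does not name its separability metrics. As one common choice in remote sensing, the Jeffries–Matusita distance between two classes of band values can be sketched as below, assuming Gaussian class distributions; the synthetic NDVI-like values are hypothetical and the paper's exact metric may differ:

```python
import numpy as np

def jeffries_matusita(x1, x2):
    """Jeffries-Matusita separability between two classes of 1-D
    band values, assuming each class is Gaussian."""
    m1, m2 = x1.mean(), x2.mean()
    v1, v2 = x1.var(ddof=1), x2.var(ddof=1)
    # Bhattacharyya distance for two univariate Gaussians
    b = 0.125 * (m1 - m2) ** 2 / ((v1 + v2) / 2) \
        + 0.5 * np.log(((v1 + v2) / 2) / np.sqrt(v1 * v2))
    return 2.0 * (1.0 - np.exp(-b))   # ranges from 0 to 2

rng = np.random.default_rng(1)
low = rng.normal(0.30, 0.02, 100)    # synthetic index values, low-N plots
high = rng.normal(0.45, 0.02, 100)   # synthetic index values, high-N plots
print(jeffries_matusita(low, high))  # near 2 => highly separable classes
```

Values near 2 indicate spectrally separable nutrient classes, while values near 0 suggest the bands or indices carry little discriminative information.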


2021 ◽  
Vol 2 (4) ◽  
Author(s):  
Nazneen N. Sultana ◽  
Bappaditya Mandal ◽  
N. B. Puhan

The traditional linear discriminant analysis (LDA) approach discards eigenvalues that are very small or equal to zero, yet the eigenvectors corresponding to zero eigenvalues are often important dimensions for discriminant analysis. We propose an objective function that utilizes both the principal and the nullspace eigenvalues and simultaneously embeds class-separability information into its latent-space representation. The idea is to build a convolutional neural network (CNN), perform regularized discriminant analysis on top of it, and train the whole model end-to-end. Backpropagation is performed with a suitable optimizer to update the parameters so that the CNN minimizes the within-class variance and maximizes the total class variance, making the approach suitable for both multi-class and binary classification problems. Experimental results on four databases covering multiple computer vision classification tasks show the efficacy of our proposed approach compared with other popular methods.
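As a minimal illustration of the scatter quantities involved (not the paper's regularized nullspace-aware objective), the within-class and total variances of a batch of latent features can be computed as:

```python
import numpy as np

def scatter_objective(Z, y):
    """LDA-style criterion sketch: within-class scatter (to be
    minimized) and total scatter (to be maximized) of latent
    features Z with labels y."""
    mu = Z.mean(0)
    s_total = ((Z - mu) ** 2).sum()
    s_within = sum(((Z[y == c] - Z[y == c].mean(0)) ** 2).sum()
                   for c in np.unique(y))
    return s_within, s_total

rng = np.random.default_rng(0)
Z = np.vstack([rng.normal(-2, 0.5, (30, 4)), rng.normal(2, 0.5, (30, 4))])
y = np.array([0] * 30 + [1] * 30)
sw, st = scatter_objective(Z, y)
print(sw < st)  # True: separated classes leave most variance between classes
```

Since total scatter decomposes as within-class plus between-class scatter, driving `sw` down while holding `st` up is equivalent to enlarging the between-class term, which is the separability the paper's loss targets.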


2021 ◽  
Vol 11 (2) ◽  
pp. 472
Author(s):  
Hyeongmin Cho ◽  
Sangkyun Lee

Machine learning has been proven effective in various application areas, such as object and speech recognition on mobile systems. Since a critical factor in machine learning success is the availability of large training datasets, many datasets are being disclosed and published online. From a data consumer's or manager's point of view, measuring data quality is an important first step in the learning process: we need to determine which datasets to use, update, and maintain. However, few practical ways to measure data quality are available today, especially for large-scale high-dimensional data such as images and videos. This paper proposes two data quality measures that compute class separability and in-class variability, two important aspects of data quality, for a given dataset. Classical data quality measures tend to focus only on class separability; we suggest that in-class variability is another important data-quality factor. We provide efficient algorithms to compute our quality measures based on random projections and bootstrapping, with statistical benefits on large-scale high-dimensional data. In experiments, we show that our measures are compatible with classical measures on small-scale data and can be computed much more efficiently on large-scale high-dimensional datasets.
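A rough sketch of how class separability might be estimated with random projections follows; this is illustrative only, and the paper's measures, estimators, and bootstrapping details differ:

```python
import numpy as np

def projected_separability(X, y, n_proj=50, dim=8, seed=0):
    """Average a Fisher-style between/within ratio over random
    low-dimensional projections of a two-class dataset, so the
    cost per projection is independent of the original dimension."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    scores = []
    for _ in range(n_proj):
        P = rng.normal(size=(d, dim)) / np.sqrt(dim)  # random projection
        Z = X @ P
        z0, z1 = Z[y == 0], Z[y == 1]
        between = np.linalg.norm(z0.mean(0) - z1.mean(0)) ** 2
        within = z0.var(0).sum() + z1.var(0).sum()
        scores.append(between / within)
    return float(np.mean(scores))

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (50, 100)), rng.normal(1, 1, (50, 100))])
y = np.array([0] * 50 + [1] * 50)
print(projected_separability(X, y))  # higher means more separable classes
```

Averaging over many cheap projections is what makes this style of measure tractable for image- or video-scale feature matrices, at the price of a stochastic estimate.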


2021 ◽  
pp. 305-315
Author(s):  
David Charte ◽  
Iván Sevillano-García ◽  
María Jesús Lucena-González ◽  
José Luis Martín-Rodríguez ◽  
Francisco Charte ◽  
...  

Complexity ◽  
2020 ◽  
Vol 2020 ◽  
pp. 1-11
Author(s):  
Zhao Yang ◽  
Jiehao Liu ◽  
Tie Liu ◽  
Li Wang ◽  
Sai Zhao

Person re-identification (re-id) aims to recognize a specific pedestrian across non-overlapping surveillance camera views. Most re-id methods perform the retrieval task by comparing the similarity of pedestrian features extracted from deep learning models, so learning a discriminative feature is critical for person re-identification. Many works supervise model learning with one or more loss functions to obtain discriminative features. Softmax loss is one of the most widely used loss functions in re-id; however, traditional softmax loss inherently focuses on feature separability and fails to consider the compactness of within-class features. To further improve re-id accuracy, many efforts have been made to shrink within-class discrepancy as well as between-class similarity. In this paper, we propose a circle-based ratio loss for person re-identification. Concretely, we normalize the learned features and classification weights to map these vectors onto the hypersphere. We then take the ratio of the maximal intra-class distance to the minimal inter-class distance as an objective loss, so that between-class separability and within-class compactness can be optimized simultaneously during training. Finally, with the joint training of an improved softmax loss and the ratio loss, the deep model can mine discriminative pedestrian information and learn robust features for the re-id task. Comprehensive experiments on three re-id benchmark datasets illustrate the effectiveness of the proposed method: 83.12% mAP on Market-1501, 71.70% mAP on DukeMTMC-reID, and 66.26%/63.24% mAP on CUHK03 labeled/detected, respectively.
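The ratio objective described above can be sketched as follows. This is a minimal NumPy illustration of the quantity being minimized, with hypothetical toy features; the actual loss is trained jointly with an improved softmax on GPU tensors:

```python
import numpy as np

def circle_ratio_loss(F, y):
    """Ratio-loss sketch: L2-normalize features onto the hypersphere,
    then take (max intra-class distance) / (min inter-class distance).
    Minimizing it tightens each class and pushes classes apart."""
    F = F / np.linalg.norm(F, axis=1, keepdims=True)  # map to hypersphere
    n = len(y)
    D = np.linalg.norm(F[:, None, :] - F[None, :, :], axis=-1)
    same = (y[:, None] == y[None, :]) & ~np.eye(n, dtype=bool)
    diff = y[:, None] != y[None, :]
    return D[same].max() / D[diff].min()

rng = np.random.default_rng(0)
tight = np.vstack([rng.normal([3, 0], 0.05, (10, 2)),
                   rng.normal([0, 3], 0.05, (10, 2))])
loose = np.vstack([rng.normal([3, 0], 1.0, (10, 2)),
                   rng.normal([0, 3], 1.0, (10, 2))])
y = np.array([0] * 10 + [1] * 10)
print(circle_ratio_loss(tight, y) < circle_ratio_loss(loose, y))  # True
```

Compact, well-separated embeddings yield a small ratio; scattered or overlapping classes yield a large one, which is exactly the behavior a gradient-based version would penalize.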


2020 ◽  
Author(s):  
Lin Cao ◽  
Xibao Huo ◽  
Yanan Guo ◽  
Yuying Shao ◽  
Kangning Du

Face photo-sketch recognition refers to the process of matching sketches to photos. Recently, there has been growing interest in using convolutional neural networks to learn discriminative deep features. However, due to the large domain discrepancy and the high cost of acquiring sketches, the discriminative power of the deeply learned features is inevitably reduced. In this paper, we propose a discriminative center loss to learn domain-invariant features for face photo-sketch recognition. Specifically, two Mahalanobis distance matrices are introduced to enhance intra-class compactness while promoting inter-class separability. Moreover, a regularization technique is applied to the Mahalanobis matrices to alleviate the small-sample problem. Extensive experimental results on the e-PRIP dataset verify the effectiveness of the proposed discriminative center loss.
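As a minimal sketch of a center loss under a Mahalanobis metric: the paper uses two learned matrices plus regularization, whereas the helper below is a hypothetical single-metric illustration with toy features:

```python
import numpy as np

def mahalanobis_center_loss(F, y, centers, M):
    """Center-loss sketch with a Mahalanobis metric M (positive
    semi-definite): pulls each feature toward its class center
    under the metric, enhancing intra-class compactness."""
    loss = 0.0
    for f, c in zip(F, y):
        d = f - centers[c]
        loss += float(d @ M @ d)
    return loss / len(F)

rng = np.random.default_rng(0)
centers = np.array([[0.0, 0.0], [5.0, 5.0]])
F = np.vstack([centers[0] + rng.normal(0, 0.1, (8, 2)),
               centers[1] + rng.normal(0, 0.1, (8, 2))])
y = np.array([0] * 8 + [1] * 8)
M = np.eye(2)  # with M = I this reduces to the classic center loss
print(mahalanobis_center_loss(F, y, centers, M))  # small: features hug centers
```

Learning M (one matrix per domain, in the paper's setting) lets the metric stretch directions where photo and sketch features disagree, which is how the loss becomes domain-invariant rather than merely compact.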

