low dimensional features
Recently Published Documents

Total documents: 29 (five years: 14)
H-index: 5 (five years: 2)

Author(s):  
Amirhossein Ahmadian ◽  
Fredrik Lindsten

The likelihood of a generative model has traditionally been used as a score to detect atypical (out-of-distribution, OOD) inputs. However, several recent studies have found this approach to be highly unreliable, even with invertible generative models, where computing the likelihood is feasible. In this paper, we present a different framework for generative-model-based OOD detection that employs the model to construct a new representation space, instead of using it directly to compute typicality scores; in this framework, the score function is interpretable as the similarity between the input and the training data in the new space. In practice, focusing on invertible models, we propose to extract low-dimensional features (statistics) based on the model encoder and the complexity of the input images, and then use a One-Class SVM to score the data. Contrary to recently proposed OOD detection methods for generative models, our method does not require computing likelihood values. Consequently, it is much faster when using invertible models with iteratively approximated likelihood (e.g., iResNet), while its performance remains competitive with other related methods.
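
A minimal sketch of this scoring pipeline, not the authors' implementation: a handful of low-dimensional statistics per image (here a hypothetical encoder summary plus a compression-based complexity proxy) are standardized and fed to a One-Class SVM fitted on in-distribution training data; higher decision scores indicate greater similarity to the training set.

```python
import io
import numpy as np
from PIL import Image
from sklearn.svm import OneClassSVM
from sklearn.preprocessing import StandardScaler

def complexity(img_array):
    """Proxy for image complexity: PNG-compressed size in bytes (assumes uint8 image)."""
    buf = io.BytesIO()
    Image.fromarray(img_array.astype(np.uint8)).save(buf, format="PNG")
    return len(buf.getvalue())

def features(img_array, encoder):
    """Low-dimensional statistics: norm/mean/std of a (hypothetical) encoder output
    plus the complexity proxy; placeholders for the statistics used in the paper."""
    z = np.asarray(encoder(img_array))
    return np.array([np.linalg.norm(z), z.mean(), z.std(), complexity(img_array)])

def fit_ood_scorer(train_images, encoder):
    """Fit the One-Class SVM on in-distribution features and return a scoring function."""
    X = np.stack([features(x, encoder) for x in train_images])
    scaler = StandardScaler().fit(X)
    svm = OneClassSVM(kernel="rbf", nu=0.1).fit(scaler.transform(X))
    return lambda imgs: svm.decision_function(
        scaler.transform(np.stack([features(x, encoder) for x in imgs])))
```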


2021 ◽  
Author(s):  
Yunan Wu ◽  
Arne Schmidt ◽  
Enrique Hernandez Sanchez ◽  
Rafael Molina ◽  
Aggelos K. Katsaggelos

Intracranial hemorrhage (ICH) is a life-threatening emergency with high rates of mortality and morbidity. Rapid and accurate detection of ICH is crucial for patients to receive timely treatment. To achieve automatic diagnosis of ICH, most deep learning models rely on large numbers of slice labels for training. Unfortunately, manual annotation of CT slices by radiologists is time-consuming and costly. In this work, we propose to diagnose ICH with an attention-based multiple instance learning (Att-MIL) approach implemented through the combination of an attention-based convolutional neural network (Att-CNN) and a variational Gaussian process for multiple instance learning (VGPMIL). Only scan-level labels are necessary for training. Our method (a) trains the model using scan labels and assigns each slice an attention weight, which can be used to provide slice-level predictions, and (b) uses the VGPMIL model, based on low-dimensional features extracted by the Att-CNN, to obtain improved predictions at both slice and scan levels. To analyze the performance of the proposed approach, our model was trained on 1150 scans from an RSNA dataset and evaluated on 490 scans from the external CQ500 dataset. Our method outperforms other methods that use the same scan-level training and achieves results comparable to, or even better than, methods relying on slice-level annotations.
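
The slice-level attention and scan-level aggregation can be illustrated with a short attention-based MIL pooling module. The sketch below is a generic PyTorch version under assumed feature dimensions, not the authors' Att-CNN/VGPMIL code: each slice embedding receives an attention weight, and the weighted sum yields a scan-level prediction trainable with scan labels only.

```python
import torch
import torch.nn as nn

class AttentionMIL(nn.Module):
    def __init__(self, feat_dim=128, attn_dim=64):
        super().__init__()
        # Gated-free attention head: scores each slice embedding.
        self.attention = nn.Sequential(
            nn.Linear(feat_dim, attn_dim), nn.Tanh(),
            nn.Linear(attn_dim, 1))
        self.classifier = nn.Linear(feat_dim, 1)

    def forward(self, slice_feats):                        # (n_slices, feat_dim)
        a = torch.softmax(self.attention(slice_feats), dim=0)  # per-slice weights
        bag = (a * slice_feats).sum(dim=0)                  # scan-level representation
        return torch.sigmoid(self.classifier(bag)), a.squeeze(-1)

# slice_feats would come from a CNN applied to each CT slice; the attention
# weights provide the slice-level evidence described in the abstract.
```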


2021 ◽  
Vol 2021 ◽  
pp. 1-11
Author(s):  
Zhenjun Tang ◽  
Shaopeng Zhang ◽  
Zhenhai Chen ◽  
Xianquan Zhang

Multimedia hashing is a useful technology for multimedia management, e.g., multimedia search and multimedia security. This paper proposes a robust hashing scheme for videos. The proposed video hashing constructs a high-dimensional matrix from gradient features in the discrete wavelet transform (DWT) domain of the preprocessed video, learns low-dimensional features from the high-dimensional matrix via multidimensional scaling, and calculates the video hash from ordinal measures of the learned low-dimensional features. Extensive experiments on 8300 videos are performed to evaluate the proposed video hashing. Performance comparisons show that the proposed scheme outperforms several state-of-the-art schemes in balancing robustness and discrimination.
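
The three-stage construction (DWT gradient features, multidimensional scaling, ordinal measures) can be outlined roughly as follows. This is a hedged sketch with assumed preprocessing (fixed-size grayscale frames) and illustrative parameters, not the authors' exact algorithm.

```python
import numpy as np
import pywt
from scipy.stats import rankdata
from sklearn.manifold import MDS

def frame_feature(frame):
    """Gradient magnitude of the DWT approximation sub-band of one grayscale frame."""
    cA, _ = pywt.dwt2(frame.astype(float), "haar")
    gy, gx = np.gradient(cA)
    return np.hypot(gx, gy).ravel()

def video_hash(frames, n_components=8, seed=0):
    # High-dimensional matrix: one gradient-feature row per (preprocessed) frame.
    X = np.stack([frame_feature(f) for f in frames])
    # Learn low-dimensional features with multidimensional scaling.
    Y = MDS(n_components=n_components, random_state=seed).fit_transform(X)
    # Ordinal measures: ranks of the learned features form the hash.
    return rankdata(Y.ravel()).astype(np.uint16)
```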


Information ◽  
2020 ◽  
Vol 12 (1) ◽  
pp. 1
Author(s):  
Shingchern D. You ◽  
Ming-Jen Hung

This paper studies the use of three different approaches to reduce the dimensionality of a type of spectral-temporal feature, the Moving Picture Experts Group (MPEG)-7 audio signature descriptor (ASD). The studied approaches are principal component analysis (PCA), independent component analysis (ICA), and factor analysis (FA). These approaches are applied to ASD features obtained from audio items with or without distortion. The resulting low-dimensional features are used as queries against a dataset containing low-dimensional features extracted from undistorted items, which allows us to investigate the distortion resistance of each approach. The experimental results show that features obtained by the ICA or FA reduction approaches yield higher identification accuracy than those obtained by PCA for moderately distorted items. Therefore, when extracting features from distorted items, the ICA or FA approaches should be considered in addition to the PCA approach.
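
A hedged sketch of the comparison protocol: fit PCA, ICA, or FA on ASD features of undistorted reference items, then identify a possibly distorted query by nearest neighbour in the reduced space. Extraction of the MPEG-7 ASD features themselves is assumed to happen elsewhere, and the component counts are illustrative.

```python
import numpy as np
from sklearn.decomposition import PCA, FastICA, FactorAnalysis
from sklearn.neighbors import NearestNeighbors

REDUCERS = {"pca": PCA, "ica": FastICA, "fa": FactorAnalysis}

def build_index(asd_reference, method="ica", n_components=20):
    """Fit the chosen reducer on undistorted ASD features and index the result."""
    reducer = REDUCERS[method](n_components=n_components)
    low_dim = reducer.fit_transform(asd_reference)
    index = NearestNeighbors(n_neighbors=1).fit(low_dim)
    return reducer, index

def identify(asd_query, reducer, index):
    """Return the reference index of the best match for each (possibly distorted) query."""
    q = reducer.transform(np.atleast_2d(asd_query))
    _, idx = index.kneighbors(q)
    return idx.ravel()
```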


2020 ◽  
Vol 11 ◽  
Author(s):  
Xueli Xu ◽  
Zhongming Xie ◽  
Zhenyu Yang ◽  
Dongfang Li ◽  
Ximing Xu

As a data-driven dimensionality reduction and visualization tool, t-distributed stochastic neighbor embedding (t-SNE) has been successfully applied in a variety of fields. In recent years, it has also received increasing attention for classification and regression analysis. This study presents a t-SNE-based classification approach for compositional microbiome data, which enables us to build classifiers and classify new samples in the reduced-dimensional space produced by t-SNE. The Aitchison distance is employed to modify the conditional probabilities in t-SNE to account for the compositionality of microbiome data. To classify a new sample, its low-dimensional features are obtained as the weighted mean vector of its nearest neighbors in the training set. Using these low-dimensional features as input, three commonly used machine learning algorithms, namely logistic regression (LR), support vector machine (SVM), and decision tree (DT), were considered for the classification tasks in this study. The proposed approach was applied to two disease-associated microbiome datasets, achieving better classification performance than classifiers built in the original high-dimensional space. The results also showed that t-SNE with the Aitchison distance improved classification accuracy on both datasets. In conclusion, we have developed a t-SNE-based classification approach that is suitable for compositional microbiome data and may also serve as a baseline for more complex classification models.
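
A simplified sketch of the pipeline under a few assumptions: the Aitchison distance is computed as the Euclidean distance between centred log-ratio (CLR) transformed compositions (with a small pseudocount for zeros), t-SNE is run on the precomputed distance matrix, and a new sample is embedded as the inverse-distance-weighted mean of its k nearest training embeddings before a logistic regression classifier is applied.

```python
import numpy as np
from scipy.spatial.distance import cdist
from sklearn.manifold import TSNE
from sklearn.linear_model import LogisticRegression

def clr(X, pseudocount=1e-6):
    """Centred log-ratio transform of compositional rows."""
    X = X + pseudocount
    X = X / X.sum(axis=1, keepdims=True)
    logX = np.log(X)
    return logX - logX.mean(axis=1, keepdims=True)

def fit(X_train, y_train, perplexity=30, k=5):
    Z = clr(X_train)
    D = cdist(Z, Z)                                    # Aitchison distance matrix
    emb = TSNE(metric="precomputed", init="random",
               perplexity=perplexity).fit_transform(D)
    clf = LogisticRegression(max_iter=1000).fit(emb, y_train)
    return Z, emb, clf, k

def predict(X_new, Z, emb, clf, k):
    D = cdist(clr(X_new), Z)                           # distances to the training set
    idx = np.argsort(D, axis=1)[:, :k]
    w = 1.0 / (np.take_along_axis(D, idx, axis=1) + 1e-12)
    w /= w.sum(axis=1, keepdims=True)
    new_emb = np.einsum("nk,nkd->nd", w, emb[idx])     # weighted mean embedding
    return clf.predict(new_emb)
```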


2020 ◽  
Author(s):  
Sandeep R. Bukka ◽  
Allan Ross Magee ◽  
Rajeev K. Jaiman

In this paper, an end-to-end nonlinear model reduction methodology is presented based on convolutional recurrent autoencoder networks. The methodology is developed in the context of the overall data-driven reduced-order model framework proposed in this paper. The basic idea is to obtain low-dimensional representations via convolutional neural networks and to evolve these low-dimensional features in the time domain via recurrent neural networks. The high-dimensional representations are reconstructed from the evolved low-dimensional features via transpose convolutional neural networks. With an unsupervised training strategy, the model serves as an end-to-end tool that can evolve the flow state of a nonlinear dynamical system. The convolutional recurrent autoencoder network model is applied to the problem of flow past bluff bodies for the first time. To demonstrate the effectiveness of the methodology, two canonical problems, namely flow past a plain cylinder and flow past side-by-side cylinders, are explored in this paper. Pressure and velocity fields of the unsteady flow are predicted into the future via the convolutional recurrent autoencoder model. The performance of the model is satisfactory for both problems. In particular, the multiscale nature and the gap-flow dynamics of the side-by-side cylinders are captured by the proposed data-driven model reduction methodology. Two error metrics, the normalized squared error and the normalized reconstruction error, are considered for the assessment of the data-driven framework.
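
A schematic PyTorch sketch of a convolutional recurrent autoencoder for 2D flow snapshots (assumed 64x64 single-channel fields) is given below; it illustrates the encoder, recurrent latent propagation, and transpose-convolution decoder structure rather than the authors' exact architecture or training setup.

```python
import torch
import torch.nn as nn

class ConvRecAE(nn.Module):
    def __init__(self, latent_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(                  # 1x64x64 field -> latent vector
            nn.Conv2d(1, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(), nn.Linear(32 * 16 * 16, latent_dim))
        self.dynamics = nn.LSTM(latent_dim, latent_dim, batch_first=True)
        self.decoder = nn.Sequential(                  # latent vector -> 1x64x64 field
            nn.Linear(latent_dim, 32 * 16 * 16), nn.Unflatten(1, (32, 16, 16)),
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1))

    def forward(self, snapshots):                      # (B, T, 1, 64, 64)
        B, T = snapshots.shape[:2]
        z = self.encoder(snapshots.reshape(B * T, 1, 64, 64)).reshape(B, T, -1)
        z_next, _ = self.dynamics(z)                   # evolve the latent state in time
        out = self.decoder(z_next.reshape(B * T, -1))
        return out.reshape(B, T, 1, 64, 64)            # predicted flow fields
```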


2020 ◽  
Vol 12 (11) ◽  
pp. 1738
Author(s):  
Xiayuan Huang ◽  
Xiangli Nie ◽  
Hong Qiao

Dimensionality reduction (DR) methods based on graph embedding are widely used for feature extraction. For these methods, the weighted graph plays a vital role in the DR process because it characterizes the structure of the data. Moreover, the similarity measurement is a crucial factor for constructing a weighted graph. The Wishart distance between covariance matrices and the Euclidean distance between polarimetric features are two important similarity measurements for polarimetric synthetic aperture radar (PolSAR) image classification. To obtain satisfactory PolSAR image classification performance, this paper proposes a co-regularized graph embedding (CRGE) method that combines the two distances for PolSAR image feature extraction. First, two weighted graphs are constructed based on the two distances to represent the local structure information of the data. Specifically, neighbouring samples are sought within a local patch to reduce computational cost and exploit spatial information. Next, the DR model is constructed based on the two weighted graphs and co-regularization. The co-regularization aims to minimize the dissimilarity between the low-dimensional features corresponding to the two weighted graphs. We employ two types of co-regularization and propose the corresponding algorithms. Finally, the obtained low-dimensional features are used for PolSAR image classification. Experiments on three PolSAR datasets show that co-regularized graph embedding enhances the performance of PolSAR image classification.
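
The graph-construction step can be sketched compactly. The code below uses a symmetrised revised Wishart distance between pixel covariance matrices for one graph and the Euclidean distance between polarimetric feature vectors for the other; as a simple stand-in for the paper's co-regularized solver, the shared low-dimensional features are taken from the smallest eigenvectors of the summed graph Laplacians. All parameter choices are illustrative, and the sketch assumes a small sample of pixels.

```python
import numpy as np
from scipy.spatial.distance import cdist
from scipy.linalg import eigh

def wishart_distance(C1, C2):
    """Symmetrised revised Wishart distance between two covariance matrices."""
    q = C1.shape[0]
    return 0.5 * (np.trace(np.linalg.solve(C1, C2)) +
                  np.trace(np.linalg.solve(C2, C1))) - q

def affinity(D, k=10):
    """Gaussian affinities, keeping only the k strongest neighbours per sample."""
    sigma = np.median(D)
    W = np.exp(-(D / (sigma + 1e-12)) ** 2)
    np.fill_diagonal(W, 0.0)
    keep = np.argsort(-W, axis=1)[:, :k]
    mask = np.zeros_like(W, dtype=bool)
    np.put_along_axis(mask, keep, True, axis=1)
    return np.where(mask | mask.T, W, 0.0)

def joint_embedding(covs, feats, dim=3, k=10):
    """Low-dimensional features shared by the two weighted graphs (simplified)."""
    n = len(covs)
    D1 = np.array([[wishart_distance(covs[i], covs[j]) for j in range(n)]
                   for i in range(n)])
    D2 = cdist(feats, feats)
    L = sum(np.diag(W.sum(1)) - W for W in (affinity(D1, k), affinity(D2, k)))
    vals, vecs = eigh(L)
    return vecs[:, 1:dim + 1]        # skip the trivial constant eigenvector
```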


2020 ◽  
Vol 34 (04) ◽  
pp. 6845-6852 ◽  
Author(s):  
Xuchao Zhang ◽  
Yifeng Gao ◽  
Jessica Lin ◽  
Chang-Tien Lu

With the advance of sensor technologies, the multivariate time series classification (MTSC) problem, perhaps one of the most essential problems in the time series data mining domain, has received a significant amount of attention in recent decades. Traditional time series classification approaches based on Bag-of-Patterns or Time Series Shapelets have difficulty dealing with the huge number of feature candidates generated from high-dimensional multivariate data, but they perform promisingly even when the training set is small. In contrast, deep learning based methods can learn low-dimensional features efficiently but suffer from a shortage of labelled data. In this paper, we propose a novel MTSC model with an attentional prototype network that combines the strengths of both traditional and deep learning based approaches. Specifically, we design a random group permutation method, combined with multi-layer convolutional networks, to learn low-dimensional features from multivariate time series data. To handle the issue of limited training labels, we propose a novel attentional prototype network that trains the feature representation based on its distance to class prototypes when labelled data are scarce. In addition, we extend our model to a semi-supervised setting by utilizing unlabeled data. Extensive experiments on 18 datasets from the public UEA multivariate time series archive, compared against eight state-of-the-art baseline methods, demonstrate the effectiveness of the proposed model.
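
An illustrative PyTorch sketch of the prototype-classification idea (not the authors' full attentional prototype network): class prototypes are the mean embeddings of the few labelled series per class, and a query is scored by its negative distance to each prototype, which can be fed to a softmax/cross-entropy loss.

```python
import torch

def prototype_logits(support_emb, support_labels, query_emb, n_classes):
    """support_emb: (n_support, d), support_labels: (n_support,), query_emb: (n_query, d)."""
    # Class prototypes: mean embedding of the labelled series in each class.
    prototypes = torch.stack([support_emb[support_labels == c].mean(dim=0)
                              for c in range(n_classes)])      # (n_classes, d)
    dists = torch.cdist(query_emb, prototypes)                 # (n_query, n_classes)
    return -dists     # negative distances act as logits for cross-entropy training

# The embeddings would come from the multi-layer convolutional network applied to
# the (randomly group-permuted) multivariate series described in the abstract.
```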


2020 ◽  
Vol 46 (3) ◽  
pp. 232-257
Author(s):  
I. A. Gospodarev ◽  
V. A. Sirenko ◽  
E. S. Syrkin ◽  
S. B. Feodosyev ◽  
K. A. Minakova
