Low-Dimensional Sensory Feature Representation by Trigeminal Primary Afferents

2013 ◽  
Vol 33 (29) ◽  
pp. 12003-12012 ◽  
Author(s):  
M. R. Bale ◽  
K. Davies ◽  
O. J. Freeman ◽  
R. A. A. Ince ◽  
R. S. Petersen
2020 ◽  
Vol 49 (3) ◽  
pp. 421-437
Author(s):  
Genggeng Liu ◽  
Lin Xie ◽  
Chi-Hua Chen

Dimensionality reduction plays an important role in data processing for machine learning and data mining, making the handling of high-dimensional data more efficient. Dimensionality reduction extracts a low-dimensional feature representation of high-dimensional data; an effective method not only preserves most of the useful information in the original data but also removes useless noise. Dimensionality reduction methods can be applied to all types of data, especially image data. Although supervised learning methods have achieved good results in dimensionality reduction applications, their performance depends on the number of labeled training samples. With the growth of information on the Internet, labeling data requires more resources and becomes more difficult. Therefore, using unsupervised learning to learn data features has great research value. In this paper, an unsupervised multilayered variational auto-encoder model is studied on text data, so that mapping high-dimensional features to low-dimensional features becomes efficient while the low-dimensional features retain as much of the main information as possible. Low-dimensional features obtained by different dimensionality reduction methods are compared with the dimensionality reduction results of the variational auto-encoder (VAE), and the proposed method significantly improves over the other comparison methods.
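The encoder side of a VAE can be illustrated with the reparameterization trick, which is what lets the sampled low-dimensional code stay differentiable during training. Below is a minimal numpy sketch, not the paper's multilayered model: the linear encoder weights, the 50-D input, and the 5-D latent size are all illustrative placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "high-dimensional" data: 100 samples, 50 features.
X = rng.normal(size=(100, 50))

# Hypothetical encoder weights (in a real VAE these are learned).
W_mu = rng.normal(scale=0.1, size=(50, 5))      # maps 50-D -> 5-D mean
W_logvar = rng.normal(scale=0.1, size=(50, 5))  # maps 50-D -> 5-D log-variance

def encode(x):
    """One linear encoder layer producing the parameters of q(z|x)."""
    return x @ W_mu, x @ W_logvar

def reparameterize(mu, logvar):
    """z = mu + sigma * eps: sampling expressed as a deterministic
    function of (mu, logvar) plus external noise, so gradients flow."""
    eps = rng.normal(size=mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

mu, logvar = encode(X)
Z = reparameterize(mu, logvar)   # low-dimensional feature representation
print(Z.shape)                   # (100, 5)
```

A trained decoder would map `Z` back toward `X`; the quality of that reconstruction is one way to judge how much information the low-dimensional features retain.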


Author(s):  
Liang Lan ◽  
Yu Geng

Factorization Machines (FMs), general predictors that can efficiently model high-order feature interactions, have been widely used for regression, classification, and ranking problems. However, despite many successful applications of FMs, they have two main limitations: (1) FMs model feature interactions among input features using only polynomial expansion, which fails to capture complex nonlinear patterns in data; and (2) existing FMs do not provide interpretable predictions to users. In this paper, we present a novel method named Subspace Encoding Factorization Machines (SEFM) that overcomes these two limitations by using a non-parametric subspace feature mapping. Due to the high sparsity of the new feature representation, our proposed method achieves the same time complexity as standard FMs but can capture more complex nonlinear patterns. Moreover, since the prediction score of our model for a sample is a sum of the contribution scores of the bins and grid cells in which the sample lies in low-dimensional subspaces, it works like a scoring system that involves only data binning and score addition. Therefore, our proposed method naturally provides interpretable predictions. Our experimental results demonstrate that our proposed method efficiently provides accurate and interpretable predictions.
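The flavor of the binning step can be sketched with an equal-width, one-hot encoding per feature axis. This is a simplified stand-in for SEFM's non-parametric subspace feature mapping, not the paper's exact construction; the bin count is an arbitrary choice.

```python
import numpy as np

def subspace_binning(X, n_bins=4):
    """Map each input feature to a one-hot bin indicator.

    Each feature axis is cut into equal-width bins and a sample is
    represented by the bins it falls into. The result is much
    higher-dimensional but extremely sparse: exactly one active
    entry per feature, so prediction reduces to score addition.
    """
    n, d = X.shape
    out = np.zeros((n, d * n_bins))
    for j in range(d):
        col = X[:, j]
        edges = np.linspace(col.min(), col.max(), n_bins + 1)
        idx = np.clip(np.digitize(col, edges[1:-1]), 0, n_bins - 1)
        out[np.arange(n), j * n_bins + idx] = 1.0
    return out

X = np.array([[0.1, 5.0], [0.9, 1.0], [0.4, 3.0]])
Phi = subspace_binning(X, n_bins=2)
# Each sample activates exactly one bin per feature -> row sums equal d.
print(Phi.sum(axis=1))  # [2. 2. 2.]
```

A linear model on `Phi` then assigns one learned score per bin, which is what makes the final prediction readable as a scoring table.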


2021 ◽  
Vol 2021 ◽  
pp. 1-12
Author(s):  
Shicheng Li ◽  
Qinghua Liu ◽  
Jiangyan Dai ◽  
Wenle Wang ◽  
Xiaolin Gui ◽  
...  

Feature representation learning is a key issue in artificial intelligence research. Multiview multimedia data can provide rich information, which makes feature representation one of the current research hotspots in data analysis. Recently, a large number of multiview feature representation methods have been proposed, among which matrix factorization shows excellent performance. Therefore, we propose an adaptive-weighted multiview deep basis matrix factorization (AMDBMF) method that integrates matrix factorization, deep learning, and view fusion. Specifically, we first perform deep basis matrix factorization on the data of each view. Then, all views are integrated to complete the procedure of multiview feature learning. Finally, we propose an adaptive weighting strategy to fuse the low-dimensional features of each view so that a unified feature representation can be obtained for multiview multimedia data. We also design an iterative update algorithm to optimize the objective function and verify the convergence of the optimization algorithm through numerical experiments. We conducted clustering experiments on five multiview multimedia datasets and compared the proposed method with several excellent current methods. The experimental results demonstrate that the clustering performance of the proposed method is better than that of the other comparison methods.
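The fusion step can be illustrated with a toy sketch in which per-view weights are set inversely proportional to hypothetical reconstruction errors, so better-factorized views contribute more. The paper's actual weights come from its objective function; every number below is a placeholder.

```python
import numpy as np

rng = np.random.default_rng(1)

# Low-dimensional features from three hypothetical views of the same 20 samples.
views = [rng.normal(size=(20, 4)) for _ in range(3)]

# Per-view reconstruction errors (in AMDBMF these would come from the
# deep basis matrix factorization of each view; here they are made up).
errors = np.array([0.5, 1.0, 2.0])

# Adaptive weights: views with lower reconstruction error get more weight.
w = 1.0 / errors
w = w / w.sum()          # normalize so the weights sum to one

# Weighted fusion into one unified feature representation.
H = sum(wi * Vi for wi, Vi in zip(w, views))
print(H.shape)  # (20, 4)
```

Clustering would then run on the fused matrix `H` rather than on any single view.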


2020 ◽  
Vol 10 (22) ◽  
pp. 8003
Author(s):  
Yi-Chun Chen ◽  
Cheng-Te Li

In location-based social networks (LBSNs), the goal of location promotion is to find information propagators to promote a specific point-of-interest (POI). While existing studies mainly focus on accurately recommending POIs to users, less effort has been made to identify propagators in LBSNs. In this work, we propose and tackle two novel tasks, Targeted Propagator Discovery (TPD) and Targeted Customer Discovery (TCD), in the context of location promotion. Given a target POI l to be promoted, TPD aims at finding a set of influential users who can attract more users to visit l in the future, and TCD aims at finding a set of potential users who will visit l in the future. To deal with TPD and TCD, we propose a novel graph embedding method, LBSN2vec. The main idea is to jointly learn a low-dimensional feature representation for each user and each location in an LBSN. Equipped with the learned embedding vectors, we propose two similarity-based measures, the Influential and Visiting scores, to find potential targeted propagators and customers. Experiments conducted on a large-scale Instagram LBSN dataset show that LBSN2vec and its variant significantly outperform well-known network embedding methods on both tasks.
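Once users and POIs live in the same embedding space, a similarity-based score reduces to a vector comparison. The sketch below uses cosine similarity and random embeddings as stand-ins for the jointly learned LBSN2vec vectors; the exact definitions of the Influential and Visiting scores are the paper's, not shown here.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical embeddings jointly learned by an LBSN2vec-style model.
user_emb = rng.normal(size=(6, 8))   # 6 users, 8-D vectors
poi_emb = rng.normal(size=(8,))      # the target POI l

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

# A Visiting-style score: similarity of each user to the target POI.
scores = np.array([cosine(u, poi_emb) for u in user_emb])

# Top-k candidate customers for POI l.
k = 3
top_k = np.argsort(-scores)[:k]
print(top_k.shape)  # (3,)
```

An Influential-style score would additionally weight a candidate by the neighbors it can reach, but the ranking machinery stays the same.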


2018 ◽  
Author(s):  
Robin Winter ◽  
Floriane Montanari ◽  
Frank Noé ◽  
Djork-Arné Clevert

There has been a recent surge of interest in using machine learning across chemical space to predict properties of molecules or to design molecules and materials with desired properties. Most of this work relies on defining clever feature representations in which the chemical graph structure is encoded in a uniform way so that predictions across chemical space can be made. In this work, we propose to exploit the powerful ability of deep neural networks to learn a feature representation from low-level encodings of a huge corpus of chemical structures. Our model borrows ideas from neural machine translation: it translates between two semantically equivalent but syntactically different representations of molecular structures, compressing the meaningful information both representations have in common into a low-dimensional representation vector. Once the model is trained, this representation can be extracted for any new molecule and utilized as a descriptor. In fair benchmarks against various human-engineered molecular fingerprints and graph-convolution models, our method shows competitive performance in modelling quantitative structure-activity relationships on all analyzed datasets. Additionally, we show that our descriptor significantly outperforms all baseline molecular fingerprints in two ligand-based virtual screening tasks. Overall, our descriptors show the most consistent performance over all experiments. The continuity of the descriptor space and the existence of a decoder that permits deducing a chemical structure from an embedding vector allow for exploration of the space and open up new opportunities for compound optimization and idea generation.


2021 ◽  
Vol 12 ◽  
Author(s):  
Yuanyuan Ma ◽  
Lifang Liu ◽  
Qianjun Chen ◽  
Yingjun Ma

Metabolites are closely related to human disease. The interaction between metabolites and drugs has drawn increasing attention in the field of pharmacomicrobiomics. However, only a small portion of drug-metabolite interactions has been experimentally observed, because experimental validation is labor-intensive, costly, and time-consuming. Although a few computational approaches have been proposed to predict latent associations in various bipartite networks, such as miRNA-disease and drug-target interaction networks, to the best of our knowledge the associations between drugs and metabolites have not been reported on a large scale. In this study, we propose a novel algorithm, namely inductive logistic matrix factorization (ILMF), to predict latent associations between drugs and metabolites. Specifically, ILMF integrates drug-drug interactions, metabolite-metabolite interactions, and drug-metabolite interactions into one framework to model the probability that a drug would interact with a metabolite. Moreover, we exploit inductive matrix completion to guide the learning of the projection matrices U and V, which depend on the low-dimensional feature representation matrices of drugs and metabolites, Fd and Fm. These two matrices can be obtained by fusing multiple data sources. Thus, FdU and FmV can be viewed as drug-specific and metabolite-specific latent representations, unlike in classical LMF. Furthermore, we utilize the Vicus spectral matrix, which reveals the refined local geometrical structure inherent in the original data, to encode the relationships between drugs and metabolites. Extensive experiments were conducted on a manually curated "DrugMetaboliteAtlas" dataset. The experimental results show that ILMF achieves competitive performance compared with other state-of-the-art approaches, which demonstrates its effectiveness in predicting potential drug-metabolite associations.
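The prediction rule implied by the abstract, interaction probability as a logistic link over the inner products of FdU and FmV, can be sketched directly. All matrices below are random stand-ins; in ILMF, U and V are learned and Fd, Fm are fused from real data sources.

```python
import numpy as np

rng = np.random.default_rng(3)

n_drugs, n_mets, d_feat, m_feat, k = 5, 7, 10, 12, 3

# Side-information feature matrices (fused from multiple data sources
# in the paper; random placeholders here).
Fd = rng.normal(size=(n_drugs, d_feat))
Fm = rng.normal(size=(n_mets, m_feat))

# Projection matrices learned by the factorization (random stand-ins).
U = rng.normal(scale=0.1, size=(d_feat, k))
V = rng.normal(scale=0.1, size=(m_feat, k))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Fd @ U and Fm @ V act as drug- and metabolite-specific latent factors;
# the logistic link turns their inner products into probabilities.
P = sigmoid((Fd @ U) @ (Fm @ V).T)
print(P.shape)  # (5, 7)
```

Because the latent factors are functions of the feature matrices, a new drug with known features gets predictions without retraining, which is the "inductive" part.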


2018 ◽  
Author(s):  
Ayse Berceste Dincer ◽  
Safiye Celik ◽  
Naozumi Hiranuma ◽  
Su-In Lee

We present the DeepProfile framework, which learns a variational autoencoder (VAE) network from thousands of publicly available gene expression samples and uses this network to encode a low-dimensional representation (LDR) for predicting complex disease phenotypes. To our knowledge, DeepProfile is the first attempt to use deep learning to extract a feature representation from a vast quantity of unlabeled (i.e., lacking phenotype information) expression samples that are not incorporated into the prediction problem. We use DeepProfile to predict acute myeloid leukemia patients' in vitro responses to 160 chemotherapy drugs. We show that, compared to the original features (i.e., expression levels) and LDRs from two commonly used dimensionality reduction methods, DeepProfile (1) better predicts complex phenotypes, (2) better captures known functional gene groups, and (3) better reconstructs the input data. We show that DeepProfile generalizes to other diseases and phenotypes by using it to predict ovarian cancer patients' tumor invasion patterns and breast cancer patients' disease subtypes.
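The two-stage pattern, learn an LDR on unlabeled expression data, then fit a simple supervised model on it, can be sketched with a PCA-style projection standing in for the VAE encoder (DeepProfile's learned encoder would replace that step). All data below are random placeholders.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy expression matrix: 80 samples x 200 genes, plus a continuous
# drug-response phenotype (both fabricated for illustration).
X = rng.normal(size=(80, 200))
y = rng.normal(size=80)

# Stage 1: unsupervised LDR. An SVD/PCA projection stands in for the
# VAE encoder; no phenotype labels are used here.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
L = Xc @ Vt[:8].T  # 8-D low-dimensional representation

# Stage 2: supervised prediction on the LDR (closed-form ridge regression).
lam = 1.0
w = np.linalg.solve(L.T @ L + lam * np.eye(8), L.T @ y)
pred = L @ w
print(pred.shape)  # (80,)
```

The point of the split is that stage 1 can consume thousands of unlabeled samples, while stage 2 only needs the labeled cohort.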


Author(s):  
Dattatray V. Jadhav ◽  
Raghunath S. Holambe

Variations in illumination, expression, viewpoint, and in-plane rotation present challenges to face recognition. A low-dimensional feature representation with enhanced discriminative power is of paramount importance to a face recognition system. This chapter presents transform-based techniques for extracting efficient and effective features to address some of these challenges. The techniques are based on combinations of the Radon transform, the Discrete Cosine Transform (DCT), and the Discrete Wavelet Transform (DWT). The property of the Radon transform of enhancing the low-frequency components, which are useful for face recognition, is exploited to derive effective facial features. A comparative study of the various transform-based techniques under different conditions, such as varying illumination, changing facial expressions, and in-plane rotation, is presented in this chapter, along with experimental results on the FERET, ORL, and Yale databases.
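The DCT half of such a pipeline is easy to sketch: take the 2-D DCT of an image and keep only the top-left low-frequency block as the feature vector. This is a generic illustration, not the chapter's exact Radon+DCT combination; the image and block size are arbitrary.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix of size n x n."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    C = np.cos(np.pi * (2 * i + 1) * k / (2 * n)) * np.sqrt(2.0 / n)
    C[0, :] = np.sqrt(1.0 / n)  # DC row gets the smaller scale factor
    return C

def dct_features(img, keep=8):
    """2-D DCT of an image, keeping the top-left low-frequency block."""
    n, m = img.shape
    D = dct_matrix(n) @ img @ dct_matrix(m).T
    return D[:keep, :keep].ravel()

face = np.random.default_rng(5).normal(size=(32, 32))  # stand-in image
feat = dct_features(face, keep=8)
print(feat.shape)  # (64,)
```

Keeping only low frequencies discards fine texture, which is exactly why it pairs well with a transform, like Radon, that concentrates discriminative energy there.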


Author(s):  
Jianzhong Wang

Let X = X_tr ∪ X_te be a data set in R^D, where X_tr is the training set and X_te is the test one. Many unsupervised learning algorithms based on kernel methods have been developed to provide a dimensionality reduction (DR) embedding for a given training set X_tr that maps the high-dimensional data X_tr to its low-dimensional feature representation Y_tr ⊂ R^d (d ≪ D). However, these algorithms do not straightforwardly produce a DR of the test set X_te. An out-of-sample extension method provides a DR of X_te by extending the existing embedding, instead of re-computing the DR embedding for the whole set X. Among the various out-of-sample DR extension methods, those based on the Nyström approximation are very attractive. Many papers have developed such out-of-sample extension algorithms and shown their validity by numerical experiments. However, the mathematical theory of DR extension still needs further consideration. Utilizing reproducing kernel Hilbert space (RKHS) theory, this paper develops a preliminary mathematical analysis of out-of-sample DR extension operators. It treats an out-of-sample DR extension operator as an extension of the identity on the RKHS defined on X_tr. The Nyström-type DR extension then turns out to be an orthogonal projection. In the paper, we also present conditions for an exact DR extension and give an estimate of the extension error.
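The Nyström extension itself is short: eigendecompose the training kernel matrix to get the embedding, then map test points through their kernel values against the training set. The sketch below uses an RBF kernel and random data as placeholders; the paper's analysis concerns the operator-theoretic properties of exactly this construction, not this particular kernel.

```python
import numpy as np

rng = np.random.default_rng(6)

def rbf(A, B, gamma=0.5):
    """Gaussian (RBF) kernel matrix between rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

Xtr = rng.normal(size=(30, 2))  # training set
Xte = rng.normal(size=(5, 2))   # test set

# Kernel eigendecomposition on the training set gives the DR embedding.
K = rbf(Xtr, Xtr)
lam, Phi = np.linalg.eigh(K)                    # ascending order
lam, Phi = lam[::-1][:3], Phi[:, ::-1][:, :3]   # keep top-3 components
Y_tr = Phi * np.sqrt(lam)                       # training embedding

# Nystrom extension: project test kernel rows onto the eigenvectors.
Y_te = rbf(Xte, Xtr) @ Phi / np.sqrt(lam)
print(Y_te.shape)  # (5, 3)
```

Applied to a training point, the extension formula reproduces that point's original embedding, which is the consistency property one expects from an extension of the identity.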


2014 ◽  
Vol 989-994 ◽  
pp. 4209-4212
Author(s):  
Zhao Kui Li ◽  
Yan Wang

In this paper, a feature representation method based on Kirsch masks filtering is proposed for face recognition. We first obtain eight direction images by applying the Kirsch masks filter. For each direction image, a low-dimensional feature vector is computed by Linear Discriminant Analysis. Then, a fusion strategy is used to combine the different direction images according to their respective salience. Experimental results show that our method significantly outperforms popular methods such as Gabor features, Local Binary Patterns, and Regularized Robust Coding (RRC), and achieves state-of-the-art performance on difficult problems such as illumination- and occlusion-robust face recognition.
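The eight direction images come from convolving with the eight Kirsch compass kernels, each a rotation of one base mask. A minimal numpy sketch (naive loops, random stand-in image; a real pipeline would use a proper convolution routine and actual face images):

```python
import numpy as np

# The base Kirsch compass kernel; the other seven are ring rotations of it.
base = np.array([[ 5,  5,  5],
                 [-3,  0, -3],
                 [-3, -3, -3]])

def rotate_kernel(k):
    """Rotate the outer ring of a 3x3 kernel by one position."""
    ring = [k[0,0], k[0,1], k[0,2], k[1,2], k[2,2], k[2,1], k[2,0], k[1,0]]
    ring = ring[-1:] + ring[:-1]
    out = k.copy()
    (out[0,0], out[0,1], out[0,2], out[1,2],
     out[2,2], out[2,1], out[2,0], out[1,0]) = ring
    return out

kernels = [base]
for _ in range(7):
    kernels.append(rotate_kernel(kernels[-1]))

def convolve2d_valid(img, k):
    """Plain 'valid'-mode 2-D correlation with a 3x3 kernel."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = (img[i:i+3, j:j+3] * k).sum()
    return out

img = np.random.default_rng(7).normal(size=(16, 16))  # stand-in image
direction_images = [convolve2d_valid(img, k) for k in kernels]
print(len(direction_images), direction_images[0].shape)  # 8 (14, 14)
```

Each direction image would then be reduced by LDA before the salience-weighted fusion described above.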

