supervised feature extraction
Recently Published Documents

TOTAL DOCUMENTS: 63 (FIVE YEARS: 10)
H-INDEX: 11 (FIVE YEARS: 1)

2021 ◽  
Author(s):  
Yinjun Jia ◽  
Shuai-shuai Li ◽  
Xuan Guo ◽  
Junqiang Hu ◽  
Xiao-Hong Xu ◽  
...  

Fast and accurate characterization of animal behaviors is crucial for neuroscience research. Deep learning models are widely used in laboratories for behavior analysis. However, no fully unsupervised method has yet been able to extract comprehensive and discriminative features directly from raw behavior video frames for annotation and analysis. Here, we report Selfee, a self-supervised feature-extraction convolutional neural network with multiple downstream applications that processes video frames of animal behavior in an end-to-end way. Visualization and classification of the extracted features (Meta representations) validate that Selfee processes animal behaviors in a way comparable to human understanding. We demonstrate that Meta representations can be efficiently used to detect anomalous behaviors that are indiscernible to human observation and to hint at directions for in-depth analysis. Furthermore, time-series analyses of Meta representations reveal the temporal dynamics of animal behaviors. In conclusion, we present a self-supervised learning approach that extracts comprehensive and discriminative features directly from raw video recordings of animal behaviors, and we demonstrate its potential for various downstream applications.
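One of the downstream applications described above, detecting anomalous behaviors from extracted feature vectors, can be illustrated in a few lines. This is a minimal sketch, not Selfee itself: the `knn_anomaly_scores` helper and the toy feature data are hypothetical, and stand in for Meta representations of video frames. Each frame is scored by its mean distance to its nearest neighbors, so isolated frames get high scores:

```python
import numpy as np

def knn_anomaly_scores(features, k=5):
    """Score each frame's feature vector by its mean distance to its
    k nearest neighbors: isolated frames score highest."""
    diff = features[:, None, :] - features[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))   # pairwise Euclidean distances
    np.fill_diagonal(dist, np.inf)             # ignore self-distance
    nearest = np.sort(dist, axis=1)[:, :k]
    return nearest.mean(axis=1)

rng = np.random.default_rng(0)
normal = rng.normal(0.0, 0.5, size=(50, 8))    # a tight cluster of "typical" frames
outlier = np.full((1, 8), 5.0)                 # one far-away "anomalous" frame
scores = knn_anomaly_scores(np.vstack([normal, outlier]))
print(int(np.argmax(scores)))  # the outlier (index 50) scores highest
```

In practice, high-scoring frames would be flagged for closer human inspection, which is the "hint at in-depth analysis" use case the abstract describes.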


2021 ◽  
Vol 13 (23) ◽  
pp. 4927
Author(s):  
Zhao Wang ◽  
Fenlong Jiang ◽  
Tongfei Liu ◽  
Fei Xie ◽  
Peng Li

Joint analysis of spatial and spectral features has always been an important method for change detection in hyperspectral images. However, many existing methods cannot extract effective spatial features from the data itself. Moreover, when combining spatial and spectral features, a rough uniform global combination ratio is usually required. To address these problems, in this paper, we propose a novel attention-based spatial and spectral network with a PCA-guided self-supervised feature extraction mechanism to detect changes in hyperspectral images. The framework is divided into two steps. First, a self-supervised mapping is established from each patch of the difference map to the principal components of that patch's central pixel. Using a multi-layer convolutional neural network, the main spatial features of the differences can be extracted. In the second step, an attention mechanism is introduced. Specifically, the weighting factor between the spatial and spectral features of each pixel is adaptively calculated from the concatenated spatial and spectral features, and the calculated factor is applied proportionally to the corresponding features. Finally, by joint analysis of the weighted spatial and spectral features, the change status of pixels at different positions can be obtained. Experimental results on several real hyperspectral change detection data sets show the effectiveness and advantages of the proposed method.
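The per-pixel adaptive weighting described above can be sketched as a learned convex combination of the two feature branches. This is a simplified illustration, not the paper's network: the `adaptive_fusion` function and the random projection `w` are hypothetical stand-ins for the trained attention module. A factor alpha is predicted from the concatenated features and applied proportionally to each branch:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def adaptive_fusion(spatial, spectral, w, b=0.0):
    """Per-pixel weighting: a factor alpha in (0, 1) is predicted from the
    concatenated features, then applied proportionally to each branch."""
    concat = np.concatenate([spatial, spectral], axis=-1)  # (n_pixels, 2d)
    alpha = sigmoid(concat @ w + b)[:, None]               # (n_pixels, 1)
    return alpha * spatial + (1.0 - alpha) * spectral

rng = np.random.default_rng(1)
n_pixels, d = 4, 3
spatial = rng.normal(size=(n_pixels, d))   # toy spatial features
spectral = rng.normal(size=(n_pixels, d))  # toy spectral features
w = rng.normal(size=2 * d)                 # hypothetical learned projection
fused = adaptive_fusion(spatial, spectral, w)
print(fused.shape)  # (4, 3)
```

Because alpha stays in (0, 1), each fused value lies between the corresponding spatial and spectral values, replacing the rough global combination ratio with a per-pixel one.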


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Zeynab Mousavikhamene ◽  
Daniel J. Sykora ◽  
Milan Mrksich ◽  
Neda Bagheri

Abstract Accurate cancer detection and diagnosis is of utmost importance for reliable drug-response prediction. Successful cancer characterization relies on both genetic analysis and histological scans from tumor biopsies. It is known that the cytoskeleton is significantly altered in cancer, as cellular structure dynamically remodels to promote proliferation, migration, and metastasis. We exploited these structural differences with supervised feature extraction methods to introduce an algorithm that can distinguish cancer from non-cancer cells in high-resolution, single-cell images. In this paper, we identified the features with the most discriminatory power to successfully predict cell type with as few as 100 cells per cell line. This trait overcomes a key barrier of machine learning methodologies: insufficient data. Furthermore, normalizing cell shape via microcontact printing on self-assembled monolayers enabled better discrimination of cell lines with difficult-to-distinguish phenotypes. Classification accuracy remained robust as we tested dissimilar cell lines across various tissue origins, which supports the generalizability of our algorithm.
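A standard way to rank features by discriminatory power, as the abstract describes, is a Fisher-style score comparing between-class to within-class variance. This is a generic sketch, not the paper's algorithm: the `fisher_scores` helper and the toy "cell" data are hypothetical, with 100 samples per class to mirror the small-data setting:

```python
import numpy as np

def fisher_scores(X, y):
    """Rank features by between-class versus within-class variance:
    higher scores mean more discriminatory power."""
    classes = np.unique(y)
    overall_mean = X.mean(axis=0)
    between = np.zeros(X.shape[1])
    within = np.zeros(X.shape[1])
    for c in classes:
        Xc = X[y == c]
        between += len(Xc) * (Xc.mean(axis=0) - overall_mean) ** 2
        within += ((Xc - Xc.mean(axis=0)) ** 2).sum(axis=0)
    return between / (within + 1e-12)

rng = np.random.default_rng(2)
# 100 "cells" per class, 3 morphological features; only feature 0 separates classes
X0 = rng.normal(0.0, 1.0, size=(100, 3))
X1 = rng.normal(0.0, 1.0, size=(100, 3))
X1[:, 0] += 5.0                       # shift feature 0 for class 1
X = np.vstack([X0, X1])
y = np.array([0] * 100 + [1] * 100)
scores = fisher_scores(X, y)
print(int(np.argmax(scores)))  # feature 0 is the most discriminative
```

Selecting only the top-scoring features before classification is one way such methods stay robust with as few as 100 cells per line.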


2021 ◽  
Vol 22 (1) ◽  
Author(s):  
Sejin Park ◽  
Jihee Soh ◽  
Hyunju Lee

Abstract Background Predicting the drug response of a patient is important for precision oncology. In recent studies, multi-omics data have been used to improve the prediction accuracy of drug response. Although multi-omics data are a good resource for drug response prediction, the large dimension of the data tends to hinder performance improvement. In this study, we aimed to develop a new method that can effectively reduce the large dimension of the data, based on a supervised deep learning model, for predicting drug response. Results We propose a novel method called Supervised Feature Extraction Learning using Triplet loss (Super.FELT) for drug response prediction. Super.FELT consists of three stages: feature selection, feature encoding using a supervised method, and binary classification of drug response (sensitive or resistant). We used multi-omics data including mutation, copy number aberration, and gene expression, obtained from cell lines [Genomics of Drug Sensitivity in Cancer (GDSC), Cancer Cell Line Encyclopedia (CCLE), and Cancer Therapeutics Response Portal (CTRP)], patient-derived tumor xenografts (PDX), and The Cancer Genome Atlas (TCGA). GDSC was used for training and cross-validation tests, and CCLE, CTRP, PDX, and TCGA were used for external validation. We performed ablation studies for the three stages and verified that the use of multi-omics data guarantees better performance of drug response prediction. Our results verified that Super.FELT outperformed the other methods at external validation on PDX and TCGA and performed well at cross-validation on GDSC and external validation on CCLE and CTRP. In addition, through our experiments, we confirmed that using multi-omics data is useful for external non-cell-line data. Conclusion By separating the three stages, Super.FELT achieved better performance than the other methods. Through our results, we found that it is important to train the encoders and the classifier independently, especially for external tests on PDX and TCGA. Moreover, although gene expression is the most informative data type for cell line data, multi-omics promises better performance than gene expression alone for external validation on non-cell-line data. Source code for Super.FELT is available at https://github.com/DMCB-GIST/Super.FELT.
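The triplet loss named in Super.FELT's title has a standard hinge form: pull an anchor sample toward a positive (same drug-response label) and push it away from a negative (opposite label) by at least a margin. This is a sketch of the loss itself on toy vectors, not the full encoder-plus-classifier pipeline; the `triplet_loss` function and the inputs are hypothetical:

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Hinge-style triplet loss: zero once the anchor is closer to the
    positive than to the negative by at least `margin`."""
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(0.0, d_pos - d_neg + margin)

a = np.array([0.0, 0.0])   # anchor embedding
p = np.array([0.1, 0.0])   # positive: same response label, close by
n = np.array([3.0, 0.0])   # negative: opposite label, far away
print(triplet_loss(a, p, n))  # 0.0: this triplet already satisfies the margin
```

Minimizing this loss over many triplets shapes the encoded feature space so that sensitive and resistant samples separate before the final binary classifier is trained.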


Author(s):  
Paula A Marin Zapata ◽  
Sina Roth ◽  
Dirk Schmutzler ◽  
Thomas Wolf ◽  
Erica Manesso ◽  
...  

Abstract Motivation Image-based profiling combines high-throughput screening with multiparametric feature analysis to capture the effect of perturbations on biological systems. This technology has attracted increasing interest in the field of plant phenotyping, promising to accelerate the discovery of novel herbicides. However, the extraction of meaningful features from unlabeled plant images remains a major challenge. Results We describe a novel data-driven approach that finds feature representations from plant time-series images in a self-supervised manner by using time as a proxy for image similarity. In the spirit of transfer learning, we first apply an ImageNet-pretrained architecture as a base feature extractor. Then, we extend this architecture with a triplet network to refine and reduce the dimensionality of the extracted features by ranking relative similarities between consecutive and non-consecutive time points. Without using any labels, we produce compact, organized representations of plant phenotypes and demonstrate their superior applicability to clustering, image retrieval, and classification tasks. Besides time, our approach could be applied using other surrogate measures of phenotype similarity, thus providing a versatile method of general interest to the phenotypic profiling community. Availability and implementation Source code is provided at https://github.com/bayer-science-for-a-better-life/plant-triplet-net. Supplementary information Supplementary data are available at Bioinformatics online.
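Using time as a proxy for similarity, as described above, amounts to mining triplets where the positive is a consecutive frame and the negative is a temporally distant one. This is an illustrative sketch, not the paper's implementation: the `sample_time_triplets` helper and the `gap` threshold are hypothetical:

```python
import numpy as np

def sample_time_triplets(n_frames, gap=5, seed=0):
    """Use time as a similarity proxy: the positive is the next frame,
    the negative is a frame at least `gap` steps away from the anchor."""
    rng = np.random.default_rng(seed)
    triplets = []
    for anchor in range(n_frames - 1):
        positive = anchor + 1                                     # consecutive frame
        far = [t for t in range(n_frames) if abs(t - anchor) >= gap]
        negative = int(rng.choice(far))                           # distant frame
        triplets.append((anchor, positive, negative))
    return triplets

triplets = sample_time_triplets(n_frames=10, gap=5)
a, p, n = triplets[0]
print(a, p, abs(n - a) >= 5)  # 0 1 True
```

Feeding such index triplets through the pretrained feature extractor and a triplet loss yields the label-free refinement the abstract describes; swapping the time criterion for another similarity surrogate only changes how `far` is defined.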


Author(s):  
Tzofi Klinghoffer ◽  
Peter Morales ◽  
Young-Gyun Park ◽  
Nicholas Evans ◽  
Kwanghun Chung ◽  
...  

2020 ◽  
Vol 12 (7) ◽  
pp. 1179
Author(s):  
Bin Zhao ◽  
Magnus O. Ulfarsson ◽  
Johannes R. Sveinsson ◽  
Jocelyn Chanussot

This paper proposes three feature extraction (FE) methods based on density estimation for hyperspectral images (HSIs). The methods are a mixture of factor analyzers (MFA), deep MFA (DMFA), and supervised MFA (SMFA). MFA extends the Gaussian mixture model to allow a low-dimensional representation of the Gaussians. DMFA is a deep version of MFA consisting of a two-layer MFA, i.e., samples from the posterior distribution at the first layer are input to an MFA model at the second layer. SMFA consists of a single-layer MFA and exploits labeled information to extract features of an HSI effectively. Based on these three FE methods, the paper also proposes a framework that automatically extracts the most important features for classification from an HSI. The overall accuracy of a classifier is used to automatically choose the optimal number of features, and hence the framework performs dimensionality reduction (DR) before HSI classification. The performance of the MFA, DMFA, and SMFA FE methods is evaluated and compared to five different types of unsupervised and supervised FE methods using four real HSI datasets.
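The low-dimensional Gaussian representation underlying MFA gives each mixture component a covariance with factor structure, Sigma = W Wᵀ + diag(psi), where W holds the factor loadings and psi the per-dimension noise variances. As a minimal sketch (a single component with hypothetical random loadings, not the paper's fitted model), the log-density under that structure can be evaluated directly:

```python
import numpy as np

def factor_analyzer_logpdf(x, mu, W, psi):
    """Log-density of a Gaussian whose covariance has the low-rank
    factor structure Sigma = W W^T + diag(psi)."""
    d = len(mu)
    sigma = W @ W.T + np.diag(psi)
    diff = x - mu
    sign, logdet = np.linalg.slogdet(sigma)
    quad = diff @ np.linalg.solve(sigma, diff)
    return -0.5 * (d * np.log(2 * np.pi) + logdet + quad)

d, q = 4, 2                      # 4 observed dimensions, 2 latent factors
rng = np.random.default_rng(3)
mu = np.zeros(d)
W = rng.normal(size=(d, q))      # hypothetical factor loadings
psi = np.full(d, 0.5)            # per-dimension noise variances
lp = factor_analyzer_logpdf(np.zeros(d), mu, W, psi)
print(np.isfinite(lp))  # True: a valid log-density value
```

A mixture of such components, with q much smaller than the number of spectral bands, is what lets MFA model high-dimensional HSI pixels without estimating full covariance matrices.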


Author(s):  
Jianglin Lu ◽  
Zhihui Lai ◽  
Hailing Wang ◽  
Yudong Chen ◽  
Jie Zhou ◽  
...  
