nonnegative data
Recently Published Documents


TOTAL DOCUMENTS: 18 (five years: 6)
H-INDEX: 6 (five years: 0)

2021 ◽ Author(s): Zuzana Rošťáková, Roman Rosipal

Background and Objective: Parallel factor analysis (PARAFAC) is a powerful tool for detecting latent components in higher-order arrays (tensors). The number of latent components is an essential input parameter that must be set in advance. However, every component number selection method proposed in the literature so far remains a rule of thumb. This study demonstrates the advantages and disadvantages of twelve different methods applied to well-controlled simulated data with a nonnegative structure that mimics the character of a real electroencephalogram.

Methods: Existing studies have compared the methods' performance on simulated data with a simplified structure, and it has been shown that the results obtained there do not generalize directly to real data. Using a real head model and cortical activation, our study focuses on nontrivial, nonnegative simulated data that resemble real electroencephalogram properties as closely as possible. Different noise levels and departures from the optimal structure are considered. Moreover, we validate a new method for component number selection that we have already applied successfully to real electroencephalogram tasks. We also demonstrate that the existing approaches must be adapted whenever a nonnegative data structure is assumed.

Results: We identified four methods that produce promising, though not ideal, results on the nontrivial simulated data and show superior performance in electroencephalogram analysis practice.

Conclusions: Component number selection in PARAFAC is a complex and unresolved problem, and the nonnegative data structure assumption makes it more challenging. Although several methods have shown promising results, the issue remains open and new approaches are needed.
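As a concrete illustration of the rank-selection problem this abstract describes, the sketch below fits a nonnegative PARAFAC model at several candidate ranks and compares reconstruction errors. This is a minimal NumPy implementation using Lee-Seung-style multiplicative updates, not any of the twelve methods evaluated in the study; the function names and the toy tensor are illustrative assumptions.

```python
import numpy as np

def unfold(X, mode):
    """Mode-n unfolding: move axis `mode` to the front, flatten the rest."""
    return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)

def khatri_rao(mats):
    """Column-wise Khatri-Rao product of a list of factor matrices."""
    out = mats[0]
    for m in mats[1:]:
        out = (out[:, None, :] * m[None, :, :]).reshape(-1, out.shape[1])
    return out

def nn_parafac(X, rank, n_iter=500, eps=1e-9, seed=0):
    """Nonnegative PARAFAC via multiplicative updates (all factors >= 0)."""
    rng = np.random.default_rng(seed)
    factors = [rng.random((s, rank)) + 0.1 for s in X.shape]
    for _ in range(n_iter):
        for n in range(X.ndim):
            Z = khatri_rao([f for i, f in enumerate(factors) if i != n])
            num = unfold(X, n) @ Z
            den = factors[n] @ (Z.T @ Z) + eps
            factors[n] *= num / den  # stays nonnegative by construction
    return factors

def reconstruction(factors):
    """Rebuild a 3-way tensor from its CP factors (demo is 3-way only)."""
    A, B, C = factors
    return np.einsum('ir,jr,kr->ijk', A, B, C)

# Toy experiment: an exactly rank-2 nonnegative tensor, fitted at ranks 1-3.
rng = np.random.default_rng(1)
true = [rng.random((s, 2)) for s in (6, 5, 4)]
X = np.einsum('ir,jr,kr->ijk', *true)

errs = {}
for r in (1, 2, 3):
    Xhat = reconstruction(nn_parafac(X, r))
    errs[r] = np.linalg.norm(X - Xhat) / np.linalg.norm(X)
# The relative error drops sharply once the candidate rank reaches the
# true rank; an "elbow" heuristic like this is one of the simplest
# (rule-of-thumb) selection criteria the abstract alludes to.
```

Note that on real, noisy data the error curve rarely shows a clean elbow, which is exactly why more principled selection criteria are needed.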


2018 ◽ Vol 30 (8) ◽ pp. 2113-2174 ◽ Author(s): Dennis Forster, Abdul-Saboor Sheikh, Jörg Lücke

We explore classifier training for data sets with very few labels. We investigate this task using a neural network for nonnegative data, derived from a hierarchical normalized Poisson mixture model with one observed and two hidden layers. With the single objective of likelihood optimization, both labeled and unlabeled data are naturally incorporated into learning. The neural activation and learning equations resulting from our derivation are concise and local; as a consequence, the network can be scaled using standard deep learning tools for a parallelized GPU implementation. Using standard benchmarks for nonnegative data, such as text document representations, MNIST, and NIST SD19, we study classification performance when very few labels are used for training. Across settings, the network's performance is compared with standard and recently proposed semisupervised classifiers. While other recent approaches are more competitive with many labels or fully labeled data sets, we find that the network studied here can operate with label counts so small that, to our knowledge, no other system has been reported to work in that regime.
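To make concrete how labeled and unlabeled data can share a single likelihood objective, here is a hedged sketch of semisupervised EM for a flat Poisson mixture on nonnegative count data. This is not the authors' hierarchical network (which has two hidden layers and local neural update rules); it only illustrates the core idea that labeled points fix their class responsibilities while unlabeled points receive soft ones, so both enter the same likelihood. All names and the toy data are assumptions.

```python
import numpy as np

def log_poisson(X, lam):
    """Poisson log-likelihood of count rows X under per-class rates lam
    (shape C x D); the log(x!) term is class-independent and dropped."""
    return X @ np.log(lam).T - lam.sum(axis=1)  # shape (N, C)

def fit_semisup_poisson_mixture(X, y, n_classes, n_iter=50, eps=1e-3):
    """EM for a Poisson mixture. Labeled points (y >= 0) get one-hot
    responsibilities; unlabeled points (y == -1) get soft E-step ones."""
    # Initialize rates from the few labeled examples of each class.
    lam = np.stack([X[y == c].mean(axis=0) + eps for c in range(n_classes)])
    pi = np.full(n_classes, 1.0 / n_classes)
    labeled = y >= 0
    for _ in range(n_iter):
        # E-step: stable softmax over per-class log-likelihoods.
        logp = log_poisson(X, lam) + np.log(pi)
        logp -= logp.max(axis=1, keepdims=True)
        R = np.exp(logp)
        R /= R.sum(axis=1, keepdims=True)
        R[labeled] = np.eye(n_classes)[y[labeled]]  # clamp labeled points
        # M-step: responsibility-weighted rate and prior updates.
        pi = R.mean(axis=0)
        lam = (R.T @ X + eps) / (R.sum(axis=0)[:, None] + eps)
    return lam, pi

def predict(X, lam, pi):
    return np.argmax(log_poisson(X, lam) + np.log(pi), axis=1)

# Toy data: two well-separated Poisson clusters, only 2 labels per class.
rng = np.random.default_rng(0)
rates = np.array([[2.0, 8.0, 1.0], [8.0, 1.0, 4.0]])
X = np.vstack([rng.poisson(rates[c], size=(100, 3)) for c in (0, 1)]).astype(float)
y_true = np.repeat([0, 1], 100)
y = np.full(200, -1)
y[[0, 1, 100, 101]] = y_true[[0, 1, 100, 101]]

lam, pi = fit_semisup_poisson_mixture(X, y, n_classes=2)
acc = (predict(X, lam, pi) == y_true).mean()
```

Because the labeled points anchor the class identities during EM, the predicted cluster indices line up with the true labels and no post-hoc cluster-to-class matching is needed.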

