Deep Boltzmann Machines
Recently Published Documents


TOTAL DOCUMENTS: 48 (FIVE YEARS: 12)

H-INDEX: 8 (FIVE YEARS: 1)

2021 · Vol. 127 (6)
Author(s): Yusuke Nomura, Nobuyuki Yoshioka, Franco Nori

2021 · Vol. 11 (1)
Author(s): Martin Treppner, Adrián Salas-Bastos, Moritz Hess, Stefan Lenz, Tanja Vogel, ...

Abstract: Deep generative models, such as variational autoencoders (VAEs) or deep Boltzmann machines (DBMs), can generate an arbitrary number of synthetic observations after being trained on an initial set of samples. This has mainly been investigated for imaging data but could also be useful for single-cell transcriptomics (scRNA-seq). A small pilot study could then be used for planning a full-scale experiment by investigating planned analysis strategies on synthetic data with different sample sizes. It is unclear, however, whether synthetic observations generated from a small scRNA-seq dataset reflect the properties relevant for subsequent data analysis steps. We specifically investigated two deep generative modeling approaches, VAEs and DBMs. First, we considered single-cell variational inference (scVI) in two variants, generating samples either from the posterior distribution, the standard approach, or from the prior distribution. Second, we propose single-cell deep Boltzmann machines (scDBMs). When comparing clustering results on synthetic data with ground-truth clustering, we find that the $$scVI_{posterior}$$ variant resulted in high variability, most likely due to amplifying artifacts of small datasets. All approaches showed mixed results for cell types of different abundance, overrepresenting highly abundant cell types and missing less abundant ones. With increasing pilot dataset sizes, the proportions of cells in each cluster became more similar to those of the ground-truth data. We also showed that all approaches learn the univariate distribution of most genes, but problems occurred with bimodality. Across all analyses, comparing the 10$$\times$$ Genomics and Smart-seq2 technologies showed that inference from small to larger datasets is more challenging for the sparser 10$$\times$$ datasets. Overall, the results show that generative deep learning approaches might be valuable for supporting the design of scRNA-seq experiments.
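To make the distinction between the two scVI sampling variants concrete, the following is a minimal conceptual sketch in PyTorch of posterior versus prior sampling from a toy VAE. It is not the scVI or scDBM implementation: the class ToyVAE, its layer sizes, and all variable names are illustrative assumptions, and the weights are left untrained purely for demonstration.

# Minimal sketch (not scVI/scDBM): posterior vs. prior sampling from a toy VAE.
# Posterior sampling encodes observed pilot cells and draws z ~ q(z | x);
# prior sampling ignores the data and draws z ~ N(0, I) before decoding.
import torch
import torch.nn as nn

class ToyVAE(nn.Module):
    def __init__(self, n_genes=200, n_latent=10):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_genes, 64), nn.ReLU())
        self.mu = nn.Linear(64, n_latent)
        self.logvar = nn.Linear(64, n_latent)
        self.decoder = nn.Sequential(nn.Linear(n_latent, 64), nn.ReLU(),
                                     nn.Linear(64, n_genes))
        self.n_latent = n_latent

    def posterior_sample(self, x):
        # Encode observed cells, draw z ~ q(z | x), then decode:
        # each synthetic cell is anchored to a real pilot cell.
        h = self.encoder(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return self.decoder(z)

    def prior_sample(self, n_cells):
        # Draw z ~ N(0, I) from the prior and decode, so an arbitrary
        # number of synthetic cells can be generated.
        z = torch.randn(n_cells, self.n_latent)
        return self.decoder(z)

# Illustration only (a real analysis would first fit the model to the pilot data):
pilot = torch.rand(50, 200)                        # 50 pilot cells x 200 genes
vae = ToyVAE()
synthetic_posterior = vae.posterior_sample(pilot)  # one draw per pilot cell
synthetic_prior = vae.prior_sample(1000)           # arbitrary sample size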


Author(s): Diego Alberici, Pierluigi Contucci, Emanuele Mingione

Abstract: A class of deep Boltzmann machines is considered in the simplified framework of a quenched system with Gaussian noise and independent entries. The quenched pressure of a K-layer spin glass model is studied, allowing interactions only among consecutive layers. A lower bound for the pressure is found in terms of a convex combination of K Sherrington–Kirkpatrick models and is used to study the annealed and replica symmetric regimes of the system. A map to a one-dimensional monomer–dimer system is identified and used to rigorously control the annealed region at arbitrary depth K with the methods introduced by Heilmann and Lieb. The compression of this high-noise region displays a remarkable phenomenon of localisation of the processing layers. Furthermore, a replica symmetric lower bound for the limiting quenched pressure of the model is obtained in a suitable region of the parameters, and the replica symmetric pressure is proved to have a unique stationary point.
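For orientation, a schematic Hamiltonian of the type described above can be written as follows; the normalisation, symbols, and omission of temperature and external-field parameters are our own illustrative choices, not necessarily the authors' conventions. Spins are split into K consecutive layers $$L_1,\dots,L_K$$ of sizes $$N_1,\dots,N_K$$ with $$N=\sum_k N_k$$, and only consecutive layers interact through independent Gaussian couplings:

$$H_N(\sigma) = -\frac{1}{\sqrt{N}} \sum_{k=1}^{K-1} \sum_{i \in L_k} \sum_{j \in L_{k+1}} J^{(k)}_{ij}\, \sigma_i \sigma_j, \qquad J^{(k)}_{ij} \sim \mathcal{N}(0,1) \ \text{i.i.d.}$$

The quenched pressure studied in the abstract is then the disorder average of the log-partition function per spin,

$$p_N = \frac{1}{N}\, \mathbb{E}_J \log \sum_{\sigma \in \{-1,+1\}^N} e^{-H_N(\sigma)}.$$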


2020 · Vol. 97 · pp. 105717
Author(s): Leandro Aparecido Passos, João Paulo Papa

2020 · Vol. 68 (12) · pp. 7498-7510
Author(s): Qing Li, Yang Chen, Yongjune Kim

2020 · Vol. 213 (1-4) · pp. 3-12
Author(s): Shota Ogawa, Hiroyuki Mori

2020
Author(s): Martin Treppner, Adrián Salas-Bastos, Moritz Hess, Stefan Lenz, Tanja Vogel, ...



2020 · Vol. 180 (1-6) · pp. 665-677
Author(s): Diego Alberici, Adriano Barra, Pierluigi Contucci, Emanuele Mingione
