stochastic sampling
Recently Published Documents


TOTAL DOCUMENTS

229
(FIVE YEARS 57)

H-INDEX

27
(FIVE YEARS 4)

2022 ◽  
Vol 15 ◽  
Author(s):  
Vivek Parmar ◽  
Bogdan Penkovsky ◽  
Damien Querlioz ◽  
Manan Suri

With recent advances in the field of artificial intelligence (AI) such as binarized neural networks (BNNs), a wide variety of vision applications with energy-optimized implementations have become possible at the edge. Such networks typically implement the first layer at high precision, which makes a uniform hardware mapping of the full network difficult. Stochastic computing can convert these high-precision computations into a sequence of binarized operations while maintaining equivalent accuracy. In this work, we propose a fully binarized, hardware-friendly computation engine based on stochastic computing as a proof of concept for vision applications involving multi-channel inputs. Stochastic sampling is performed by drawing from a non-uniform (normal) distribution realized with analog hardware sources. We first validate the benefits of the proposed pipeline on the CIFAR-10 dataset. To further demonstrate its applicability to real-world scenarios, we present a case study of microscopy image diagnostics for pathogen detection. We then evaluate the benefits of implementing such a pipeline using OxRAM-based circuits for stochastic sampling as well as in-memory-computing-based binarized multiplication. The proposed implementation is about 1,000 times more energy efficient than conventional floating-point digital implementations, with memory savings of a factor of 45.
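As an editorial illustration (not the authors' pipeline), the core idea of stochastic computing can be sketched in a few lines: a value is encoded as the bias of a random bitstream, and multiplication reduces to a bitwise XNOR in the common bipolar encoding. All names below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def to_bitstream(x, n_bits, rng):
    """Encode x in [-1, 1] as a bipolar stochastic bitstream:
    P(bit = 1) = (x + 1) / 2."""
    return rng.random(n_bits) < (x + 1) / 2

def bipolar_multiply(a_bits, b_bits):
    """In the bipolar encoding, multiplication is a bitwise XNOR."""
    return a_bits == b_bits

def from_bitstream(bits):
    """Decode a bipolar bitstream back to a value in [-1, 1]."""
    return 2 * bits.mean() - 1

a, b = 0.5, -0.4
n = 100_000
prod = from_bitstream(bipolar_multiply(to_bitstream(a, n, rng),
                                       to_bitstream(b, n, rng)))
# prod is close to a * b, up to sampling noise that shrinks with n
```

The precision of the result grows only with the bitstream length, which is why such schemes trade latency for very simple (here, single-gate) arithmetic.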


Entropy ◽  
2021 ◽  
Vol 23 (12) ◽  
pp. 1629
Author(s):  
Ali Unlu ◽  
Laurence Aitchison

We developed Variational Laplace for Bayesian neural networks (BNNs), which exploits a local approximation of the curvature of the likelihood to estimate the ELBO without stochastic sampling of the neural-network weights. The Variational Laplace objective is simple to evaluate, as it is the log-likelihood plus a weight-decay term plus a squared-gradient regularizer. Variational Laplace gave better test performance and expected calibration errors than maximum a posteriori inference and standard sampling-based variational inference, despite using the same variational approximate posterior. Finally, we emphasize the care needed in benchmarking standard VI, as there is a risk of stopping before the variance parameters have converged. We show that early stopping can be avoided by increasing the learning rate for the variance parameters.
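To make the three-term structure of such an objective concrete, here is a schematic sketch on hypothetical toy data (a 1-D linear model), not the paper's actual formulation: a Gaussian log-likelihood, a weight-decay term from a Gaussian prior, and a squared-gradient penalty whose weight stands in for the variational posterior variance.

```python
import numpy as np

# Hypothetical toy data: y ≈ 2x with unit observation noise.
rng = np.random.default_rng(1)
x = rng.normal(size=50)
y = 2.0 * x + rng.normal(size=50)

def log_likelihood(w):
    resid = y - w * x
    return -0.5 * np.sum(resid ** 2)        # Gaussian log-lik, up to a constant

def grad_log_likelihood(w):
    return np.sum((y - w * x) * x)

def objective(w, prior_prec=1.0, sigma2=0.1):
    """Schematic three-term objective: log-likelihood + weight decay
    + squared-gradient regularizer (sigma2 plays the role of the
    posterior variance)."""
    return (log_likelihood(w)
            - 0.5 * prior_prec * w ** 2                    # weight decay
            - 0.5 * sigma2 * grad_log_likelihood(w) ** 2)  # squared-gradient term
```

Every term is a deterministic function of the weights, which is the point of the abstract: no Monte Carlo samples of the weights are needed to evaluate it.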


Electronics ◽  
2021 ◽  
Vol 10 (23) ◽  
pp. 2899
Author(s):  
Tingting Zhu ◽  
Kun Ding ◽  
Zhenye Li ◽  
Xianxu Zhan ◽  
Rong Du ◽  
...  

Solid wood floors are widely used as an interior decoration material, and the color of the wood surface plays a decisive role in the final decorative effect. Color classification of solid wood floors is therefore the final and most important step before laying. However, research on floor classification usually focuses on recognizing complex and diverse features while ignoring execution speed, so common methods do not meet the requirements of online classification in practical production. In this paper, a new online classification method for solid wood floors is proposed by combining probability theory and machine learning. First, a probability-based feature extraction method (a stochastic sampling feature extractor) was developed to rapidly extract key features that are robust to the disturbance of wood grain; the stochastic features were selected by a genetic algorithm. Then, an extreme learning machine, serving as a fast classification neural network, was trained on the selected stochastic features to classify solid wood floors. Several experiments were carried out to evaluate the performance of the proposed method. The results show that it achieves a classification accuracy of 97.78% with a processing time of less than 1 ms per floor. The proposed method offers high execution speed, high accuracy, and flexible adaptability, making it suitable for online industrial production.
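The speed claim rests on the extreme learning machine: its hidden layer is random and fixed, so "training" is a single least-squares solve. A minimal sketch, with hypothetical stand-in features rather than the paper's wood-color data:

```python
import numpy as np

rng = np.random.default_rng(2)

class ELM:
    """Minimal extreme learning machine: random fixed hidden layer,
    output weights solved in closed form (no iterative training)."""
    def __init__(self, n_in, n_hidden, rng):
        self.W = rng.normal(size=(n_in, n_hidden))
        self.b = rng.normal(size=n_hidden)

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)

    def fit(self, X, y_onehot):
        H = self._hidden(X)
        self.beta, *_ = np.linalg.lstsq(H, y_onehot, rcond=None)
        return self

    def predict(self, X):
        return (self._hidden(X) @ self.beta).argmax(axis=1)

# Hypothetical stand-in for sampled color features: two well-separated
# classes of 3-D "color" vectors.
X = np.vstack([rng.normal(0.2, 0.05, size=(100, 3)),
               rng.normal(0.8, 0.05, size=(100, 3))])
y = np.repeat([0, 1], 100)

model = ELM(3, 50, rng).fit(X, np.eye(2)[y])
accuracy = (model.predict(X) == y).mean()
```

Prediction is two matrix products and an argmax, which is consistent with sub-millisecond per-sample inference on compact feature vectors.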


Life ◽  
2021 ◽  
Vol 11 (11) ◽  
pp. 1183
Author(s):  
Satwik Pasani ◽  
Shruthi Viswanath

Integrative modeling of macromolecular assemblies requires stochastic sampling, for example via MCMC (Markov chain Monte Carlo), since exhaustively enumerating all structural degrees of freedom is infeasible. MCMC-based methods usually require tuning several parameters, such as the move sizes for coarse-grained beads and rigid bodies, for sampling to be efficient and accurate. Currently, these parameters are tuned manually. To automate this process, we developed a general heuristic for derivative-free, global, stochastic, parallel, multiobjective optimization, termed StOP (Stochastic Optimization of Parameters), and applied it to optimize sampling-related parameters for the Integrative Modeling Platform (IMP). Given an integrative modeling setup, a list of parameters to optimize, their domains, the metrics that they influence, and the target ranges of these metrics, StOP produces the optimal values of these parameters. StOP adapts to the available computing capacity and converges quickly, allowing the simultaneous optimization of a large number of parameters, although it is not efficient in high dimensions and is not guaranteed to find optima in complex landscapes. We demonstrate its performance on several examples of random functions, as well as on two integrative modeling examples, showing that StOP enhances the efficiency of sampling the posterior distribution, resulting in more good-scoring models and better sampling precision.
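The kind of tuning StOP automates can be illustrated with the simplest case: adapting a single random-walk move size until the acceptance rate lands in a target range. This sketch is an editorial illustration, not StOP's algorithm.

```python
import numpy as np

rng = np.random.default_rng(3)

def log_density(x):
    return -0.5 * x ** 2          # standard-normal target, up to a constant

def tune_move_size(n_rounds=40, n_steps=200, target=0.4):
    """Adapt a random-walk Metropolis move size toward a target
    acceptance rate; StOP does this kind of tuning automatically,
    for many parameters and metrics at once."""
    step, x = 5.0, 0.0            # deliberately bad initial move size
    rate = 0.0
    for _ in range(n_rounds):
        accepted = 0
        for _ in range(n_steps):
            prop = x + step * rng.normal()
            if np.log(rng.random()) < log_density(prop) - log_density(x):
                x, accepted = prop, accepted + 1
        rate = accepted / n_steps
        step *= np.exp(rate - target)   # grow if too accepting, shrink if not
    return step, rate

step, rate = tune_move_size()
```

With several interacting move sizes and several target metrics, this one-dimensional feedback loop stops working, which is the multiobjective problem the abstract addresses.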


2021 ◽  
Author(s):  
Hanjun Lee ◽  
Bruce Blumberg ◽  
Michael S. Lawrence ◽  
Toshi Shioda

Identification of dynamic changes in chromatin conformation is a fundamental task in genetics. In 2020, Galan et al. [1] presented CHESS (Comparison of Hi-C Experiments using Structural Similarity), a computational algorithm designed for systematic identification of structural differences in chromatin-contact maps. Using CHESS, the same group recently reported that chromatin organization is largely maintained across tissues during dorsoventral patterning of fruit fly embryos, despite tissue-specific chromatin states and gene expression [2]. However, here we show that the primary outputs of CHESS, namely the structural similarity index (SSIM) profiles, are nearly identical regardless of the input matrices, even when query and reference reads are shuffled to destroy any significant differences. This issue stems from the dominance of regional counting noise arising from stochastic sampling in chromatin-contact maps, reflecting a fundamentally incorrect assumption of the CHESS algorithm. Biological interpretation of SSIM profiles generated by CHESS therefore requires considerable caution.
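The counting-noise point can be demonstrated in miniature: two sequencing replicates of the *same* expected contact map are independent Poisson draws, so their SSIM is pushed well below 1 even though no biological difference exists. This is an editorial sketch with a hypothetical toy map, not the CHESS implementation (which uses windowed SSIM profiles).

```python
import numpy as np

rng = np.random.default_rng(4)

def ssim(x, y, c1=1e-4, c2=1e-4):
    """Global structural similarity index between two equal-size maps."""
    mx, my = x.mean(), y.mean()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2) /
            ((mx ** 2 + my ** 2 + c1) * (x.var() + y.var() + c2)))

# Hypothetical expected contact map with distance-decaying contact counts.
n = 60
i, j = np.indices((n, n))
expected = 5.0 / (1.0 + np.abs(i - j))

# Two independent "experiments" = Poisson draws of the same expected map.
rep1 = rng.poisson(expected).astype(float)
rep2 = rng.poisson(expected).astype(float)

same_map_ssim = ssim(rep1, rep2)   # < 1 purely because of counting noise
```

Since counting noise alone produces a large, reproducible SSIM deficit, an SSIM profile dominated by that deficit carries little information about genuine structural differences.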


2021 ◽  
Vol 508 (2) ◽  
pp. 2090-2097
Author(s):  
V D’Emilio ◽  
R Green ◽  
V Raymond

The properties of black-hole and neutron-star binaries are extracted from gravitational-wave (GW) signals using Bayesian inference. This involves evaluating a multidimensional posterior probability function with stochastic sampling. The marginal probability distributions of the samples are sometimes interpolated with methods such as kernel density estimators. Since most post-processing analysis within the field is based on these parameter-estimation products, interpolation accuracy of the marginals is essential. In this work, we propose a new method combining histograms and Gaussian processes (GPs) as an alternative technique to fit arbitrary combinations of samples from the source parameters. This method comes with several advantages, such as flexible interpolation of non-Gaussian correlations, a Bayesian estimate of uncertainty, and efficient resampling with Hamiltonian Monte Carlo.
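The histogram-plus-GP idea can be sketched in two steps: bin the posterior samples into noisy density estimates, then fit a GP regression through the bin heights. This is an editorial sketch under assumed settings (bimodal toy samples, fixed RBF length scale), not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical 1-D posterior samples: bimodal and clearly non-Gaussian.
samples = np.concatenate([rng.normal(-2, 0.5, 4000),
                          rng.normal(1, 0.8, 6000)])

# Step 1: histogram the samples as noisy density observations.
counts, edges = np.histogram(samples, bins=40, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])

# Step 2: GP regression through the bin heights (RBF kernel, jitter noise).
def rbf(a, b, length=0.5):
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / length) ** 2)

K = rbf(centers, centers) + 1e-3 * np.eye(len(centers))
alpha = np.linalg.solve(K, counts)

def gp_density(x):
    """Posterior-mean GP interpolation of the histogrammed marginal."""
    return rbf(np.atleast_1d(x), centers) @ alpha

est = gp_density(np.array([-2.0, 1.0]))   # smooth estimates at the two modes
```

Unlike a single-bandwidth kernel density estimate, the GP's predictive variance also quantifies how trustworthy the interpolated density is between bins.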


2021 ◽  
Vol 263 (6) ◽  
pp. 863-874
Author(s):  
Gage Walters ◽  
Andrew Wixom ◽  
Sheri Martinelli

This work performs a direct comparison between generalized polynomial chaos (GPC) expansion techniques applied to structural-acoustic problems. Broadly, the GPC techniques fall into two categories: those in which the stochastic sampling is predetermined according to a quadrature rule, and those in which an arbitrary selection of points is used, as long as it is a representative sample of the random input. As a baseline comparison, Monte Carlo-type simulations are also performed, although they require many more sampling points. The test problems include both canonical and more applied cases that exemplify the features and types of calculations commonly arising in vibrations and acoustics. A range of numbers of random input variables is considered. The primary point of comparison between the methods is the number of sampling points they require to generate an accurate GPC expansion, since the most expensive part of a GPC analysis is generally evaluating the deterministic problem of interest; the method with the fewest sampling points will therefore often be the fastest. The accuracy of each GPC expansion is judged using several metrics, including basic statistical moments as well as features of the reconstructed probability density function.
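The quadrature-based category can be illustrated with the simplest possible case: a one-dimensional Hermite chaos expansion of f(X) for Gaussian X, with coefficients computed by a predetermined Gauss-Hermite rule. This is an editorial sketch of the generic technique, not the paper's solver.

```python
import numpy as np
from math import factorial
from numpy.polynomial.hermite_e import hermegauss, hermeval

def pce_coefficients(f, order, n_quad=20):
    """Hermite polynomial-chaos coefficients of f(X), X ~ N(0, 1), via a
    predetermined Gauss-Hermite quadrature rule (probabilists' weight)."""
    nodes, weights = hermegauss(n_quad)
    weights = weights / np.sqrt(2 * np.pi)   # normalize to the N(0,1) pdf
    coeffs = []
    for k in range(order + 1):
        basis = np.zeros(k + 1)
        basis[k] = 1.0
        Hk = hermeval(nodes, basis)          # He_k at the quadrature nodes
        # c_k = E[f(X) He_k(X)] / E[He_k(X)^2], with E[He_k^2] = k!
        coeffs.append(np.sum(weights * f(nodes) * Hk) / factorial(k))
    return np.array(coeffs)

# f(x) = x^2 expands exactly as He_0 + He_2: mean 1, variance 2.
c = pce_coefficients(lambda x: x ** 2, order=4)
mean = c[0]
variance = sum(c[k] ** 2 * factorial(k) for k in range(1, 5))
```

Here 20 deterministic evaluations recover the moments exactly; the paper's comparison is essentially how this evaluation count scales, for each category of method, as the number of random inputs grows.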


2021 ◽  
Vol 12 (1) ◽  
Author(s):  
Marco Mancastroppa ◽  
Claudio Castellano ◽  
Alessandro Vezzani ◽  
Raffaella Burioni

Isolation of symptomatic individuals and tracing and testing of their non-symptomatic contacts are fundamental strategies for mitigating the current COVID-19 pandemic. The breaking of contagion chains relies on two complementary strategies: manual reconstruction of contacts based on interviews, and digital (app-based) privacy-preserving contact tracing. We compare their effectiveness using model parameters tailored to describe SARS-CoV-2 diffusion within the activity-driven model, a general empirically validated framework for network dynamics. We show that, even for equal probability of tracing a contact, manual tracing robustly performs better than the digital protocol, even taking into account the intrinsic delay and limited scalability of the manual procedure. This result is explained by the stochastic sampling occurring during the case-by-case manual reconstruction of contacts, contrasted with the intrinsically prearranged nature of digital tracing, determined by each individual's decision to adopt the app or not. The advantage of manual tracing is enhanced by heterogeneity in agent behavior: superspreaders who do not adopt the app are completely invisible to digital contact tracing, while they can easily be traced manually due to their many contacts. We show that this intrinsic difference makes the manual procedure dominant in realistic hybrid protocols.
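The per-contact versus per-individual distinction can be shown with a toy simulation (an editorial sketch with made-up numbers, not the activity-driven model): manual tracing samples each contact independently, while digital tracing is all-or-nothing per agent, so a heavy-contact non-adopter is entirely invisible.

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical population: most agents have few contacts, the first 20
# are "superspreaders" with many.
n_agents = 1000
contacts = rng.poisson(2, size=n_agents)
contacts[:20] = 100

p = 0.4   # tracing probability, taken equal for both protocols

# Manual tracing: each contact is reconstructed independently.
manual_traced = rng.binomial(contacts, p)

# Digital tracing: visibility is prearranged by the agent's decision to
# adopt the app (simplified: the other end's adoption is ignored).
adopted = rng.random(n_agents) < p
digital_traced = np.where(adopted, contacts, 0)

# Superspreaders without the app contribute zero digitally traced
# contacts, yet remain mostly traceable by interview.
superspreader_no_app = np.where(~adopted[:20])[0]
```

Both protocols trace roughly p times the total contacts on average; the difference is where the variance sits, and it concentrates exactly on the agents that matter most for transmission.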

