Likelihood Approximation Networks (LANs) for Fast Inference of Simulation Models in Cognitive Neuroscience


eLife ◽  
2021 ◽  
Vol 10 ◽  
Author(s):  
Alexander Fengler ◽  
Lakshmi N Govindarajan ◽  
Tony Chen ◽  
Michael J Frank

In cognitive neuroscience, computational modeling can formally adjudicate between theories and affords quantitative fits to behavioral/brain data. Pragmatically, however, the space of plausible generative models considered is dramatically limited by the set of models with known likelihood functions. For many models, the lack of a closed-form likelihood typically impedes Bayesian inference methods. As a result, standard models are evaluated for convenience, even when other models might be superior. Likelihood-free methods exist but are limited by their computational cost or their restriction to particular inference scenarios. Here, we propose neural networks that learn approximate likelihoods for arbitrary generative models, allowing fast posterior sampling with only a one-off cost for model simulations that is amortized for future inference. We show that these methods can accurately recover posterior parameter distributions for a variety of neurocognitive process models. We provide code allowing users to deploy these methods for arbitrary hierarchical model instantiations without further training.
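The amortization idea described above can be sketched in a dependency-free toy form. This is not the authors' LAN architecture: the neural network is replaced by a polynomial surrogate, and the "simulator without a closed-form likelihood" is a stand-in Gaussian model so the result can be sanity-checked. The one-off simulation cost is paid on a coarse parameter grid; afterwards, posterior evaluation uses only the cheap surrogate.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(mu, n=4000):
    # stand-in simulator; pretend its likelihood had no closed form
    return rng.normal(mu, 1.0, size=n)

observed = rng.normal(0.5, 1.0, size=200)   # data generated at mu = 0.5

# one-off simulation cost: empirical log-likelihood on a coarse grid
mu_grid = np.linspace(-2.0, 3.0, 26)
loglik = []
for mu in mu_grid:
    sims = simulate(mu)
    hist, edges = np.histogram(sims, bins=60, density=True)
    idx = np.clip(np.searchsorted(edges, observed) - 1, 0, len(hist) - 1)
    loglik.append(np.log(np.maximum(hist[idx], 1e-12)).sum())

# amortization: a cheap surrogate replaces the simulator at inference time
surrogate = np.poly1d(np.polyfit(mu_grid, loglik, deg=4))

# fast posterior evaluation on a fine grid under a flat prior
fine = np.linspace(-2.0, 3.0, 501)
post = np.exp(surrogate(fine) - surrogate(fine).max())
post /= post.sum()
mu_map = fine[np.argmax(post)]
```

Once `surrogate` is fit, re-running inference for new priors or grids costs no further simulation, which is the amortization benefit the abstract refers to.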


2021 ◽  
Vol 7 ◽  
pp. e577
Author(s):  
Manuel Camargo ◽  
Marlon Dumas ◽  
Oscar González-Rojas

A generative model is a statistical model capable of generating new data instances from previously observed ones. In the context of business processes, a generative model creates new execution traces from a set of historical traces, also known as an event log. Two types of generative business process models have been developed in previous work: data-driven simulation models and deep learning models. Until now, these two approaches have evolved independently, and their relative performance has not been studied. This paper fills this gap by empirically comparing a data-driven simulation approach with multiple deep learning approaches for building generative business process models. The study sheds light on the relative strengths of these two approaches and raises the prospect of developing hybrid approaches that combine these strengths.
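A minimal sketch of a data-driven generative model over an event log is a first-order Markov chain fitted to the historical traces. The activities and log below are hypothetical toys, far simpler than either the simulation or deep learning models compared in the paper:

```python
import random
from collections import Counter, defaultdict

# hypothetical event log: each trace is one historical process execution
event_log = [
    ["start", "check", "approve", "notify", "end"],
    ["start", "check", "reject", "end"],
    ["start", "check", "approve", "notify", "end"],
    ["start", "check", "reject", "end"],
]

# estimate first-order transition frequencies from the historical traces
transitions = defaultdict(Counter)
for trace in event_log:
    for a, b in zip(trace, trace[1:]):
        transitions[a][b] += 1

def sample_trace(rng, start="start", stop="end", max_len=20):
    """Generate a new execution trace from the learned transition model."""
    trace = [start]
    while trace[-1] != stop and len(trace) < max_len:
        nxt = transitions[trace[-1]]
        acts = list(nxt)
        trace.append(rng.choices(acts, weights=[nxt[a] for a in acts])[0])
    return trace

rng = random.Random(1)
generated = [sample_trace(rng) for _ in range(5)]
```

Every generated trace only uses transitions observed in the log, which is both the strength (fidelity to history) and the limitation (no generalization) of such simple generative models.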


2019 ◽  
Author(s):  
Giulio Isacchini ◽  
Zachary Sethna ◽  
Yuval Elhanati ◽  
Armita Nourmohammad ◽  
Aleksandra M. Walczak ◽  
...  

T-cell receptors (TCR) are key proteins of the adaptive immune system, generated randomly in each individual, whose diversity underlies our ability to recognize infections and malignancies. Modeling the distribution of TCR sequences is of key importance for immunology and medical applications. Here, we compare two inference methods trained on high-throughput sequencing data: a knowledge-guided approach, which accounts for the details of sequence generation, supplemented by a physics-inspired model of selection; and a knowledge-free Variational Auto-Encoder based on deep artificial neural networks. We show that the knowledge-guided model outperforms the deep network approach at predicting TCR probabilities, while being more interpretable, at a lower computational cost.
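Comparing models "at predicting TCR probabilities" amounts to scoring held-out sequences under each model and comparing mean log-probabilities. The sketch below does this for a crude independent-site model against a uniform baseline; the sequences and both models are illustrative stand-ins, not the paper's generation/selection model or VAE:

```python
import math
from collections import Counter

ALPHABET = "ACDEFGHIKLMNPQRSTVWY"                       # 20 amino acids
train = ["CASSL", "CASSP", "CASSQ", "CATSL", "CASSL"]   # hypothetical snippets
held_out = ["CASSV", "CATSP"]

L = len(train[0])
# independent-site model with add-one smoothing
site_counts = [Counter(seq[i] for seq in train) for i in range(L)]

def site_model_logp(seq):
    return sum(
        math.log((site_counts[i][seq[i]] + 1) / (len(train) + len(ALPHABET)))
        for i in range(L)
    )

# baseline: every length-5 sequence equally likely
uniform_logp = L * math.log(1.0 / len(ALPHABET))
mean_model = sum(site_model_logp(s) for s in held_out) / len(held_out)
```

A higher mean held-out log-probability is the criterion by which one model "outperforms" another here, mirroring the comparison made in the abstract.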


2020 ◽  
Vol 11 ◽  
Author(s):  
Shuhei Kimura ◽  
Ryo Fukutomi ◽  
Masato Tokuhisa ◽  
Mariko Okada

Several researchers have focused on random-forest-based inference methods because of their excellent performance. Some of these methods can also analyze both time-series and static gene expression data. However, they only rank all of the candidate regulations by assigning them confidence values; none can detect which regulations actually affect a gene of interest. In this study, we propose a method that removes unpromising candidate regulations by combining the random-forest-based inference method with a series of feature selection methods. In addition to detecting unpromising regulations, the proposed method uses the outputs of the feature selection methods to adjust the confidence values of all of the candidate regulations computed by the random-forest-based inference method. Numerical experiments showed that combining with the feature selection methods improved the performance of the random-forest-based inference method on 99 of the 100 trials performed on the artificial problems. The improvement tends to be small, however, since the combined method succeeded in removing at most 19% of the candidate regulations, and the combination also increases the computational cost. While a bigger improvement at a lower cost would be ideal, we consider the approach worthwhile, given that our aim is to extract as much useful information as possible from a limited amount of gene expression data.
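The two-stage idea — confidence values from an inference method, then pruning and adjustment by feature selection — can be sketched as follows. To keep the example dependency-free, an absolute-correlation score stands in for the random-forest importance, and the synthetic network (genes 0 and 2 regulating the target) is hypothetical:

```python
import numpy as np

rng = np.random.default_rng(3)
n_samples, n_genes = 200, 6
X = rng.normal(size=(n_samples, n_genes))      # candidate regulator expression
# hypothetical ground truth: genes 0 and 2 regulate the target
target = 2.0 * X[:, 0] - 1.5 * X[:, 2] + 0.1 * rng.normal(size=n_samples)

# stage 1: a confidence value for every candidate regulation
# (a random-forest importance would normally go here; absolute correlation
# is a dependency-free stand-in)
confidence = np.abs([np.corrcoef(X[:, j], target)[0, 1] for j in range(n_genes)])

# stage 2: feature selection removes unpromising candidates, and the
# surviving confidence values are used to re-rank the regulations
selected = confidence > 0.25
adjusted = np.where(selected, confidence, 0.0)
ranking = np.argsort(-adjusted)
```

The threshold plays the role of the feature selection filter: candidates below it are removed outright instead of merely being ranked low.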


2014 ◽  
Vol 6 ◽  
pp. 217584 ◽  
Author(s):  
J. Schilp ◽  
C. Seidel ◽  
H. Krauss ◽  
J. Weirather

Process monitoring and modelling can contribute to fostering the industrial relevance of additive manufacturing. Process-related temperature gradients and thermal inhomogeneities cause residual stresses and distortions and influence the microstructure. Variations in wall thickness can cause heat accumulations; these occur predominantly in filigree part areas and can be detected by off-axis thermographic monitoring during the manufacturing process. In addition, numerical simulation models on the scale of whole parts enable an analysis of temperature fields upstream of the build process. In a microscale domain, modelling several exposed single hatches allows temperature investigations at high spatial and temporal resolution. Within this paper, FEM-based micro- and macroscale modelling approaches as well as an experimental setup for thermographic monitoring are introduced. By discussing and comparing experimental data with simulation results in terms of temperature distributions, both the potential of numerical approaches and the complexity of determining suitable, computationally efficient process models are demonstrated. This paper contributes to the vision of adjusting the transient temperature field during manufacturing in order to improve the resulting part quality through simulation-based process design upstream of the build process and inline process monitoring.
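The heat-accumulation effect in filigree areas can be caricatured with a lumped-capacitance energy balance: each exposure deposits energy, and between exposures heat is conducted away through a path whose effective cross-section depends on the local wall thickness. All coefficients below are illustrative, not calibrated to any real process:

```python
# lumped-capacitance caricature of heat accumulation during repeated exposures
def peak_temperature(area, pulses=30, cooling_steps=10, k=0.05):
    T = 0.0                              # temperature above ambient (a.u.)
    for _ in range(pulses):
        T += 100.0                       # energy input of one exposure
        for _ in range(cooling_steps):   # conduction into the part
            T -= k * area * T
    return T

thin = peak_temperature(area=0.2)    # filigree feature: small conduction path
bulky = peak_temperature(area=1.0)   # solid section: large conduction path
```

Because the thin feature dissipates less heat between exposures, its temperature ratchets up over successive layers, which is exactly the signature the off-axis thermography is meant to detect.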


Entropy ◽  
2020 ◽  
Vol 22 (2) ◽  
pp. 258
Author(s):  
Zhihang Xu ◽  
Qifeng Liao

Optimal experimental design (OED) is of great significance in efficient Bayesian inversion. A popular class of OED methods is based on maximizing the expected information gain (EIG), which typically involves expensive likelihood functions. To reduce the computational cost, in this work, a novel double-loop Bayesian Monte Carlo (DLBMC) method is developed to efficiently compute the EIG, and a Bayesian optimization (BO) strategy is proposed to obtain its maximizer using only a small number of samples. For Bayesian Monte Carlo posed on uniform and normal distributions, our analysis provides explicit expressions for the mean estimates and the bounds of their variances. The accuracy and efficiency of our DLBMC and BO-based optimal design are validated and demonstrated with numerical experiments.
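The double-loop (nested) Monte Carlo estimator of the EIG can be sketched on a toy linear-Gaussian design problem, where the analytic value 0.5 · log(1 + d²/σ²) is available for comparison. This is the generic nested estimator, not the paper's DLBMC/BO machinery:

```python
import numpy as np

rng = np.random.default_rng(7)

def nested_mc_eig(d, sigma=1.0, n_outer=2000, n_inner=256):
    """Double-loop Monte Carlo EIG for y = d * theta + noise, theta ~ N(0, 1)."""
    theta = rng.standard_normal(n_outer)
    y = d * theta + sigma * rng.standard_normal(n_outer)
    norm = np.log(sigma * np.sqrt(2.0 * np.pi))
    # outer term: log-likelihood of each y under the theta that generated it
    log_lik = -0.5 * ((y - d * theta) / sigma) ** 2 - norm
    # inner loop: marginal likelihood estimated with fresh prior draws
    theta_in = rng.standard_normal((n_outer, n_inner))
    log_marg = -0.5 * ((y[:, None] - d * theta_in) / sigma) ** 2 - norm
    log_evidence = np.log(np.exp(log_marg).mean(axis=1))
    return float((log_lik - log_evidence).mean())

eig_small = nested_mc_eig(d=0.5)   # analytic: 0.5 * log(1 + 0.25) ≈ 0.112
eig_large = nested_mc_eig(d=2.0)   # analytic: 0.5 * log(1 + 4.00) ≈ 0.805
```

The expensive part is the inner marginal-likelihood loop, which is run once per outer sample; reducing exactly this cost is what the DLBMC analysis and the BO search over designs target.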


Author(s):  
Alessandro Bianchini ◽  
Francesco Balduzzi ◽  
Giovanni Ferrara ◽  
Giacomo Persico ◽  
Vincenzo Dossena ◽  
...  

To improve the efficiency of Darrieus wind turbines, which still lags behind that of horizontal-axis rotors, Computational Fluid Dynamics (CFD) techniques are now extensively applied, since they alone provide a detailed and comprehensive flow representation. Their computational cost, however, still makes them prohibitive for routine application in the industrial context, which therefore makes large use of low-order simulation models such as the Blade Element Momentum (BEM) theory. These models have been shown to provide relatively accurate estimations of the overall turbine performance; conversely, their description of the flow field suffers from the strong approximations introduced in modelling the flow physics. In the present study, the effectiveness of the simplified BEM approach was critically benchmarked against a comprehensive description of the flow field past the rotating blades, obtained by combining a two-dimensional unsteady CFD model with experimental wind tunnel tests; for both data sets, the overall performance and the wake characteristics on the mid plane of a small-scale H-shaped Darrieus turbine were available. Upon examination of the flow field, the validity of the ubiquitous use of induction factors is discussed, together with the resulting velocity profiles upstream and downstream of the rotor. Particular attention is paid to the actual flow conditions (i.e. incidence angle and relative speed) experienced by the airfoils in motion at different azimuthal angles, for which a new procedure for the post-processing of CFD data is proposed here. Based on this model, the actual lift and drag coefficients produced by the airfoils in motion are analyzed and discussed, with particular focus on dynamic stall. The analysis highlights the main critical issues and flaws of the low-order BEM approach, but also sheds new light on the physical reasons why the overall performance prediction of these models is often acceptable for a first-design analysis.
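For reference, the induction factor discussed above comes from actuator-disc momentum theory, in which the power coefficient C_P = 4a(1 − a)² peaks at the Betz limit of 16/27 for a = 1/3. A short numerical check:

```python
import numpy as np

# actuator-disc momentum theory: the induction factor a relates the
# free-stream speed U to the speed at the rotor plane, U * (1 - a)
a = np.linspace(0.0, 0.5, 501)
cp = 4.0 * a * (1.0 - a) ** 2      # power coefficient from the momentum balance

a_opt = a[np.argmax(cp)]           # optimum induction factor, 1/3
cp_max = float(cp.max())           # Betz limit, 16/27
```

BEM codes assume this momentum balance holds locally around the azimuth, which is precisely the assumption the CFD flow fields in the study allow one to scrutinize.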


2012 ◽  
Vol 4 (1) ◽  
pp. 20-21 ◽  
Author(s):  
Arthur M. Jacobs

In his review, Walter (2012) links conceptual perspectives on empathy with crucial results of neurocognitive and genetic studies and presents a descriptive neurocognitive model that identifies key neuronal structures and links them with both cognitive and affective empathy via a high and a low road. After discussing this model, the remainder of this comment deals more generally with the possibilities and limitations of current neurocognitive models, considering ways to develop process models that allow specific quantitative predictions.


2016 ◽  
Vol 33 (7) ◽  
pp. 2007-2018 ◽  
Author(s):  
Slawomir Koziel ◽  
Adrian Bekasiewicz

Purpose
Development of techniques for expedited design optimization of complex and numerically expensive electromagnetic (EM) simulation models of antenna structures, validated both numerically and experimentally. The paper aims to discuss these issues.
Design/methodology/approach
The optimization task is performed using a technique that combines gradient search with adjoint sensitivities, a trust region framework, and EM simulation models of various fidelity levels (coarse, medium and fine). An adaptive procedure for switching between models of increasing accuracy during the optimization process is implemented. Numerical and experimental case studies are provided to validate the correctness of the design approach.
Findings
An appropriate combination of a suitable design optimization algorithm embedded in a trust region framework with model selection techniques allows considerable reduction of the antenna optimization cost compared to conventional methods.
Research limitations/implications
The study demonstrates the feasibility of EM-simulation-driven design optimization of antennas at low computational cost. The presented techniques reach beyond common design approaches based on direct optimization of EM models using conventional gradient-based or derivative-free methods, particularly in terms of reliability and reduction of the computational cost of the design process.
Originality/value
Simulation-driven design optimization of contemporary antenna structures is very challenging when high-fidelity EM simulations are utilized for performance evaluation of the structure at hand. The proposed variable-fidelity optimization technique with adjoint sensitivities and trust regions permits rapid optimization of numerically demanding antenna designs (here, a dielectric resonator antenna and a compact monopole), which cannot be achieved with conventional methods. The design cost of the proposed strategy is up to 60 percent lower than direct optimization exploiting adjoint sensitivities. Experimental validation of the results is also provided.
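The variable-fidelity, trust-region idea can be sketched on a one-dimensional toy objective: a cheap, biased "coarse" model is gradient-corrected at the current iterate, its minimizer is clipped to the trust region, and the "fine" model validates each candidate step. Both models and all constants below are illustrative stand-ins, not EM simulations:

```python
import numpy as np

def fine(x):     # "expensive" high-fidelity model (toy stand-in)
    return (x - 2.0) ** 2 + 0.1 * np.sin(5.0 * x)

def coarse(x):   # "cheap" low-fidelity model with a systematic bias
    return (x - 1.8) ** 2

def step(x, radius, h=1e-6):
    # correct the coarse model so its gradient matches the fine model at x,
    # then jump to the corrected surrogate's minimizer inside the trust region
    g_fine = (fine(x + h) - fine(x - h)) / (2.0 * h)
    g_coarse = (coarse(x + h) - coarse(x - h)) / (2.0 * h)
    cand = 1.8 - (g_fine - g_coarse) / 2.0   # stationary point of the surrogate
    return float(np.clip(cand, x - radius, x + radius))

x, radius = 0.0, 0.5
fx = fine(x)
for _ in range(40):
    cand = step(x, radius)
    f_cand = fine(cand)                  # the fine model validates the candidate
    if f_cand < fx:
        x, fx, radius = cand, f_cand, min(radius * 2.0, 1.0)  # accept, expand
    else:
        radius *= 0.5                    # reject, shrink the trust region
```

Nearly all iterations query only the cheap model; the expensive model is evaluated once per candidate, which is the source of the cost savings the abstract reports.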

