Algorithmic Parameter Estimation and Uncertainty Quantification for Hodgkin-Huxley Neuron Models

2021
Author(s): Y. Curtis Wang, Nirvik Sinha, Johann Rudi, James Velasco, Gideon Idumah, ...

Experimental data-based parameter search for Hodgkin-Huxley-style (HH) neuron models is a major challenge for neuroscientists and neuroengineers. Current search strategies are often computationally expensive, are slow to converge, have difficulty handling nonlinearities or multimodalities in the objective function, or require good initial parameter guesses. Most important, many existing approaches lack quantification of uncertainties in parameter estimates even though such uncertainties are of immense biological significance. We propose a novel method for parameter inference and uncertainty quantification in a Bayesian framework using the Markov chain Monte Carlo (MCMC) approach. This approach incorporates prior knowledge about model parameters (as probability distributions) and aims to map the prior to a posterior distribution of parameters informed by both the model and the data. Furthermore, using the adaptive parallel tempering strategy for MCMC, we tackle the highly nonlinear, noisy, and multimodal loss function, which depends on the HH neuron model. We tested the robustness of our approach using the voltage trace data generated from a 9-parameter HH model using five levels of injected currents (0.0, 0.1, 0.2, 0.3, and 0.4 nA). Each test consisted of running the ground truth with its respective currents to estimate the model parameters. To simulate the condition for fitting a frequency-current (F-I) curve, we also introduced an aggregate objective that runs MCMC against all five levels simultaneously. We found that MCMC was able to produce many solutions with acceptable loss values (e.g., for 0.0 nA, 889 solutions were within 0.5% of the best solution and 1,595 solutions within 1% of the best solution). Thus, an adaptive parallel tempering MCMC search provides a "landscape" of the possible parameter sets with acceptable loss values in a tractable manner. Our approach is able to obtain an intelligently sampled global view of the solution distributions within a search range in a single computation. Additionally, the advantage of uncertainty quantification allows for exploration of further solution spaces, which can serve to better inform future experiments.
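For readers who want to experiment with the sampling strategy described above, the following is a minimal parallel tempering Metropolis sketch in Python. It is not the authors' adaptive implementation: the Hodgkin-Huxley voltage-trace loss is replaced by a toy multimodal loss, and the temperature ladder, step sizes, and parameter bounds are illustrative assumptions.

```python
# Minimal parallel tempering Metropolis sketch (not the authors' adaptive code).
# The HH voltage-trace misfit is replaced by a toy multimodal loss; the number
# of parameters, temperature ladder, and step sizes are illustrative only.
import numpy as np

rng = np.random.default_rng(0)

def loss(theta):
    # Stand-in for the HH model misfit: deliberately multimodal in the parameters.
    return np.sum(np.sin(3.0 * theta) ** 2 + 0.1 * theta ** 2)

def log_post(theta, temperature):
    # Tempered "posterior": exp(-loss / T) with a flat prior on a box.
    if np.any(np.abs(theta) > 5.0):
        return -np.inf
    return -loss(theta) / temperature

n_params, n_chains, n_steps = 9, 8, 5000
temps = 2.0 ** np.arange(n_chains)              # geometric temperature ladder
chains = rng.uniform(-5, 5, size=(n_chains, n_params))
logp = np.array([log_post(c, t) for c, t in zip(chains, temps)])
samples = []                                    # cold-chain (T = 1) samples

for step in range(n_steps):
    # Within-chain Metropolis updates at each temperature.
    for i in range(n_chains):
        prop = chains[i] + rng.normal(scale=0.2, size=n_params)
        lp = log_post(prop, temps[i])
        if np.log(rng.uniform()) < lp - logp[i]:
            chains[i], logp[i] = prop, lp
    # Swap proposal between a random pair of neighbouring temperatures.
    i = rng.integers(n_chains - 1)
    a = (loss(chains[i]) - loss(chains[i + 1])) * (1 / temps[i] - 1 / temps[i + 1])
    if np.log(rng.uniform()) < a:
        chains[[i, i + 1]] = chains[[i + 1, i]]
        logp[i] = log_post(chains[i], temps[i])
        logp[i + 1] = log_post(chains[i + 1], temps[i + 1])
    samples.append(chains[0].copy())

samples = np.array(samples[1000:])              # discard burn-in
print("cold-chain posterior mean:", samples.mean(axis=0).round(2))
```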

2018, Vol 11 (8), pp. 3313-3325
Author(s): Alex G. Libardoni, Chris E. Forest, Andrei P. Sokolov, Erwan Monier

Abstract. For over 20 years, the Massachusetts Institute of Technology Earth System Model (MESM) has been used extensively for climate change research. The model is under continuous development with components being added and updated. To provide transparency in the model development, we perform a baseline evaluation by comparing model behavior and properties in the newest version to the previous model version. In particular, changes resulting from updates to the land surface model component and the input forcings used in historical simulations of climate change are investigated. We run an 1800-member ensemble of MESM historical climate simulations where the model parameters that set climate sensitivity, the rate of ocean heat uptake, and the net anthropogenic aerosol forcing are systematically varied. By comparing model output to observed patterns of surface temperature changes and the linear trend in the increase in ocean heat content, we derive probability distributions for the three model parameters. Furthermore, we run a 372-member ensemble of transient climate simulations where all model forcings are fixed and carbon dioxide concentrations are increased at a rate of 1 % per year. From these runs, we derive response surfaces for transient climate response and thermosteric sea level rise as a function of climate sensitivity and ocean heat uptake. We show that the probability distributions shift towards higher climate sensitivities and weaker aerosol forcing when using the new model and that the climate response surfaces are relatively unchanged between model versions. Because the response surfaces are independent of the changes to the model forcings and similar between model versions with different land surface models, we suggest that the change in land surface model has limited impact on the temperature evolution in the model. Thus, we attribute the shifts in parameter estimates to the updated model forcings.
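As an illustration of the response-surface idea, the sketch below fits a quadratic surface for transient climate response as a function of climate sensitivity and ocean heat uptake. The ensemble values are synthetic stand-ins rather than MESM output, and the quadratic form is an assumption.

```python
# Illustrative response-surface fit (not the MESM workflow): given ensemble
# members with parameters (climate sensitivity S, ocean heat uptake Kv) and a
# diagnosed transient climate response (TCR), fit a quadratic surface TCR(S, Kv).
# The synthetic ensemble below is a stand-in for actual model output.
import numpy as np

rng = np.random.default_rng(1)
S = rng.uniform(1.5, 6.0, 372)           # climate sensitivity samples (K)
Kv = rng.uniform(0.1, 5.0, 372)          # ocean heat uptake samples
tcr = 0.6 * S - 0.15 * Kv + 0.02 * S * Kv + rng.normal(0, 0.05, 372)  # fake TCR

# Quadratic design matrix and least-squares fit.
X = np.column_stack([np.ones_like(S), S, Kv, S**2, Kv**2, S * Kv])
coef, *_ = np.linalg.lstsq(X, tcr, rcond=None)

def tcr_surface(s, kv):
    x = np.array([1.0, s, kv, s**2, kv**2, s * kv])
    return x @ coef

print("TCR at S = 3.0 K, Kv = 1.0:", round(tcr_surface(3.0, 1.0), 2), "K")
```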


2016
Author(s): David N. Dralle, Nathaniel J. Karst, Kyriakos Charalampous, Sally E. Thompson

Abstract. The study of single streamflow recession events is receiving increasing attention following the presentation of novel theoretical explanations for the emergence of power-law forms of the recession relationship and for the drivers of its variability. Individually characterizing streamflow recessions often involves describing the similarities and differences between model parameters fitted to each recession time series. Significant methodological sensitivity has been identified in the fitting and parameterization of models that describe populations of many recessions, but the dependence of estimated model parameters on methodological choices has not been evaluated for event-by-event forms of analysis. Here, we use daily streamflow data from 16 catchments in northern California and southern Oregon to investigate how combinations of commonly used streamflow recession definitions and fitting techniques impact parameter estimates of a widely used power-law recession model. We show that: (i) methodological decisions, including ones that have received little attention in the literature, can impact parameter value estimates and model goodness-of-fit; (ii) the central tendencies of event-scale recession parameter probability distributions are largely robust to methodological choices, in the sense that differing methods rank catchments similarly according to the medians of these distributions; (iii) recession parameter distributions are method-dependent, but roughly catchment-independent, such that changing the choices made about a particular method affects a given parameter in similar ways across most catchments; and (iv) the observed correlative relationship between the power-law recession scale parameter and catchment antecedent wetness varies depending on recession definition and fitting choices.
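The sketch below shows one common combination of the methodological choices discussed above: defining -dQ/dt by daily differencing and fitting the power-law recession model -dQ/dt = aQ^b by least squares in log-log space. The synthetic recession event and these specific choices are illustrative, not the paper's preferred method.

```python
# Minimal sketch of fitting the power-law recession model -dQ/dt = a * Q^b
# to a single recession event by ordinary least squares in log-log space.
# The synthetic event and the fitting choices (daily differencing, log-log
# regression) stand in for the definitions compared in the paper.
import numpy as np

rng = np.random.default_rng(2)
t = np.arange(30.0)                                   # days since recession start
Q_true = (0.05 * (2.0 - 1.0) * t + 5.0 ** (1 - 2.0)) ** (1 / (1 - 2.0))  # a=0.05, b=2
Q = Q_true * np.exp(rng.normal(0, 0.01, t.size))      # noisy daily streamflow

dQdt = -np.diff(Q)                                    # -dQ/dt via daily differences
Qmid = 0.5 * (Q[1:] + Q[:-1])                         # flow at interval midpoints
keep = dQdt > 0                                       # keep strictly receding steps

# log(-dQ/dt) = log(a) + b * log(Q)
b, log_a = np.polyfit(np.log(Qmid[keep]), np.log(dQdt[keep]), 1)
print(f"estimated a = {np.exp(log_a):.3f}, b = {b:.2f}")
```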


2019
Author(s): Rohitash Chandra, Danial Azam, Arpit Kapoor, R. Dietmar Müller

Abstract. The complex and computationally expensive features of the forward landscape and sedimentary basin evolution models pose a major challenge in the development of efficient inference and optimization methods. Bayesian inference provides a methodology for estimation and uncertainty quantification of free model parameters. In our previous work, parallel tempering Bayeslands was developed as a framework for parameter estimation and uncertainty quantification for the landscape and basin evolution modelling software Badlands. Parallel tempering Bayeslands features high-performance computing with dozens of processing cores running in parallel to enhance computational efficiency. Although parallel computing is used, the procedure remains computationally challenging since thousands of samples need to be drawn and evaluated. In large-scale landscape and basin evolution problems, a single model evaluation can take from several minutes to hours, and in certain cases, even days. Surrogate-assisted optimization has been successfully applied to a number of engineering problems, which motivates its use in optimization and inference methods suited for complex models in geology and geophysics. Surrogate models can speed up parallel tempering Bayeslands by providing computationally inexpensive approximations that mimic the expensive models. In this paper, we present an application of surrogate-assisted parallel tempering in which the surrogate mimics a landscape evolution model, including erosion, sediment transport and deposition, by estimating the likelihood function that is given by the model. We employ a machine learning model as a surrogate that learns from the samples generated by the parallel tempering algorithm and the corresponding likelihoods from the model. The entire framework is developed in a parallel computing infrastructure to take advantage of parallelization. The results show that the proposed methodology is effective in lowering the overall computational cost significantly while retaining the quality of solutions.
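A conceptual sketch of the surrogate-assisted idea follows: a cheap regressor is trained on (parameter, log-likelihood) pairs accumulated from true model evaluations and then screens most proposals so the expensive model is called less often. The placeholder likelihood, the gradient-boosting surrogate, and the refit schedule are assumptions for illustration; this is not the Bayeslands implementation.

```python
# Conceptual sketch of surrogate-assisted sampling (not the Bayeslands code):
# a cheap regression surrogate is trained on (parameters, log-likelihood) pairs
# collected from true model evaluations, and later proposals are evaluated with
# the surrogate so the expensive model is called less often.
# `expensive_log_likelihood` is a placeholder for a Badlands-style evaluation.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(3)

def expensive_log_likelihood(theta):
    # Placeholder for a minutes-to-hours landscape evolution model run.
    return -np.sum((theta - 0.5) ** 2) * 50.0

history_X, history_y = [], []
surrogate = GradientBoostingRegressor()
surrogate_ready = False

def log_likelihood(theta, use_surrogate):
    global surrogate_ready
    if use_surrogate and surrogate_ready:
        return float(surrogate.predict(theta.reshape(1, -1))[0])
    ll = expensive_log_likelihood(theta)
    history_X.append(theta.copy())
    history_y.append(ll)
    if len(history_y) % 50 == 0:                 # periodic refit on the history
        surrogate.fit(np.array(history_X), np.array(history_y))
        surrogate_ready = True
    return ll

# Simple random-walk sampler: after a warm-up phase, most evaluations use the
# surrogate and only every 10th proposal pays for the true model.
theta = rng.uniform(0, 1, 4)
ll = log_likelihood(theta, use_surrogate=False)
for step in range(2000):
    prop = theta + rng.normal(scale=0.05, size=theta.size)
    use_sur = step > 500 and step % 10 != 0
    ll_prop = log_likelihood(prop, use_surrogate=use_sur)
    if np.log(rng.uniform()) < ll_prop - ll:
        theta, ll = prop, ll_prop
print("accepted theta:", theta.round(2))
```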


2020, Vol 13 (7), pp. 2959-2979
Author(s): Rohitash Chandra, Danial Azam, Arpit Kapoor, R. Dietmar Müller

Abstract. The complex and computationally expensive nature of landscape evolution models poses significant challenges to the inference and optimization of unknown model parameters. Bayesian inference provides a methodology for estimation and uncertainty quantification of unknown model parameters. In our previous work, we developed parallel tempering Bayeslands as a framework for parameter estimation and uncertainty quantification for the Badlands landscape evolution model. Parallel tempering Bayeslands features high-performance computing that can feature dozens of processing cores running in parallel to enhance computational efficiency. Nevertheless, the procedure remains computationally challenging since thousands of samples need to be drawn and evaluated. In large-scale landscape evolution problems, a single model evaluation can take from several minutes to hours and in some instances, even days or weeks. Surrogate-assisted optimization has been used for several computationally expensive engineering problems which motivate its use in optimization and inference of complex geoscientific models. The use of surrogate models can speed up parallel tempering Bayeslands by developing computationally inexpensive models to mimic expensive ones. In this paper, we apply surrogate-assisted parallel tempering where the surrogate mimics a landscape evolution model by estimating the likelihood function from the model. We employ a neural-network-based surrogate model that learns from the history of samples generated. The entire framework is developed in a parallel computing infrastructure to take advantage of parallelism. The results show that the proposed methodology is effective in lowering the computational cost significantly while retaining the quality of model predictions.
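The core of the neural-network surrogate can be sketched as a regression from sampled parameter vectors to their log-likelihood values. The training history below is synthetic and the network size is an arbitrary choice; this is not the Bayeslands code.

```python
# Sketch of the neural-network surrogate idea: learn log-likelihood values from
# a history of sampled parameters, then predict for new proposals. Layer sizes
# and the synthetic training history are illustrative assumptions.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(4)
params = rng.uniform(0, 1, size=(500, 6))              # sampled parameter vectors
loglik = -50.0 * np.sum((params - 0.5) ** 2, axis=1)   # stand-in likelihood values

surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
surrogate.fit(params, loglik)                          # train on the sample history

proposal = rng.uniform(0, 1, size=(1, 6))
print("surrogate log-likelihood estimate:", round(float(surrogate.predict(proposal)[0]), 2))
```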


2019, Vol 11 (20), pp. 2458
Author(s): Bikram Koirala, Mahdi Khodadadzadeh, Cecilia Contreras, Zohreh Zahiri, Richard Gloaguen, ...

Due to the complex interaction of light with the Earth’s surface, reflectance spectra can be described as highly nonlinear mixtures of the reflectances of the material constituents occurring in a given resolution cell of hyperspectral data. Our aim is to estimate the fractional abundance maps of the materials from the nonlinear hyperspectral data. The main disadvantage of using nonlinear mixing models is that the model parameters are not properly interpretable in terms of fractional abundances. Moreover, not all spectra of a hyperspectral dataset necessarily follow the same particular mixing model. In this work, we present a supervised method for nonlinear spectral unmixing. The method learns a mapping from a true hyperspectral dataset to corresponding linear spectra, composed of the same fractional abundances. A simple linear unmixing then reveals the fractional abundances. To learn this mapping, ground truth information is required, in the form of actual spectra and corresponding fractional abundances, along with spectra of the pure materials, obtained from a spectral library or available in the dataset. Three methods are presented for learning this nonlinear mapping, based on Gaussian processes, kernel ridge regression, and feedforward neural networks. Experiments conducted on an artificial dataset, a dataset obtained by ray tracing, and a drill-core hyperspectral dataset show that this novel methodology is very promising.
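A hedged sketch of the map-then-unmix idea follows: a kernel ridge regressor learns to map nonlinearly mixed spectra to their linear counterparts, after which abundances follow from a constrained linear unmixing step. The endmembers, the bilinear mixing used to create training data, and all sizes are synthetic assumptions, and kernel ridge regression is only one of the three mappings mentioned in the abstract.

```python
# Illustrative sketch of the map-then-unmix idea (not the authors' exact setup):
# a kernel ridge regressor maps nonlinearly mixed spectra to linear-mixture
# counterparts; fractional abundances then follow from nonnegative least squares.
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from scipy.optimize import nnls

rng = np.random.default_rng(5)
n_bands, n_endmembers, n_train = 50, 3, 400
E = rng.uniform(0.1, 0.9, size=(n_endmembers, n_bands))       # pure spectra

A = rng.dirichlet(np.ones(n_endmembers), size=n_train)        # true abundances
linear = A @ E                                                # linear mixtures
bilinear = linear + 0.4 * (A[:, [0]] * A[:, [1]]) * (E[0] * E[1])  # nonlinear term

# Learn the mapping from nonlinear spectra to linear spectra (supervised).
mapper = KernelRidge(kernel="rbf", alpha=1e-3, gamma=1.0)
mapper.fit(bilinear, linear)

# Unmix a new nonlinear spectrum: map it, then solve nonnegative least squares.
a_test = np.array([0.5, 0.3, 0.2])
x_test = a_test @ E + 0.4 * (a_test[0] * a_test[1]) * (E[0] * E[1])
x_lin = mapper.predict(x_test.reshape(1, -1))[0]
abund, _ = nnls(E.T, x_lin)
print("estimated abundances:", (abund / abund.sum()).round(2))
```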


Algorithms, 2020, Vol 13 (8), pp. 196
Author(s): Luc Bonnet, Jean-Luc Akian, Éric Savin, T. Sullivan

Motivated by the desire to numerically calculate rigorous upper and lower bounds on deviation probabilities over large classes of probability distributions, we present an adaptive algorithm for the reconstruction of increasing real-valued functions. While this problem is similar to the classical statistical problem of isotonic regression, the optimisation setting alters several characteristics of the problem and opens natural algorithmic possibilities. We present our algorithm, establish sufficient conditions for convergence of the reconstruction to the ground truth, and apply the method to synthetic test cases and a real-world example of uncertainty quantification for aerodynamic design.
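For orientation, the snippet below runs classical isotonic regression, the statistical analogue mentioned in the abstract; it is a reference point only and not the authors' adaptive reconstruction algorithm.

```python
# Reference point for the problem setting: classical isotonic regression fits a
# nondecreasing function to noisy evaluations. This is the statistical analogue
# mentioned in the abstract, not the authors' adaptive optimisation algorithm.
import numpy as np
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(6)
x = np.sort(rng.uniform(0, 1, 60))
y = np.clip(x + 0.1 * rng.normal(size=x.size), 0, None)   # noisy increasing data

iso = IsotonicRegression(increasing=True)
y_fit = iso.fit_transform(x, y)                           # monotone reconstruction
print("smallest consecutive increment (>= 0 for a monotone fit):",
      float(np.diff(y_fit).min()))
```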


2018
Author(s): Olivia Eriksson, Alexandra Jauhiainen, Sara Maad Sasane, Andrei Kramer, Anu G Nair, ...

Abstract. Motivation: Dynamical models describing intracellular phenomena are increasing in size and complexity as more information is obtained from experiments. These models are often over-parameterized with respect to the quantitative data used for parameter estimation, resulting in uncertainty in the individual parameter estimates as well as in the predictions made from the model. Here we combine Bayesian analysis with global sensitivity analysis in order to give better informed predictions; to point out weaker parts of the model that are important targets for further experiments; and to give guidance on parameters that are essential in distinguishing different qualitative output behaviours. Results: We used approximate Bayesian computation (ABC) to estimate the model parameters from experimental data, as well as to quantify the uncertainty in this estimation (inverse uncertainty quantification), resulting in a posterior distribution for the parameters. This parameter uncertainty was next propagated to a corresponding uncertainty in the predictions (forward uncertainty propagation), and a global sensitivity analysis was performed on the predictions using the posterior distribution as the possible values for the parameters. This methodology was applied to a relatively large and complex model relevant for synaptic plasticity, using experimental data from several sources. We could hereby point out those parameters that by themselves have the largest contribution to the uncertainty of the prediction, as well as identify parameters that are important for separating qualitatively different predictions. This approach is useful both for experimental design and for model building.
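A minimal ABC rejection sketch is given below to illustrate the first step of the workflow. The toy simulator, summary statistics, prior ranges, and tolerance are assumptions standing in for the much larger synaptic plasticity model and data.

```python
# Minimal ABC rejection sketch (illustrative; the paper's model and data are far
# larger). A toy two-parameter simulator replaces the intracellular model; the
# distance measure, tolerance, and prior ranges are demonstration assumptions.
import numpy as np

rng = np.random.default_rng(7)

def simulator(theta):
    # Stand-in for the intracellular model: returns a small summary vector.
    k_on, k_off = theta
    return np.array([k_on / (k_on + k_off), 1.0 / (k_on + k_off)])

observed = simulator(np.array([2.0, 0.5])) + rng.normal(0, 0.01, 2)

accepted = []
for _ in range(20000):
    theta = rng.uniform([0.1, 0.1], [5.0, 5.0])          # draw from the prior
    dist = np.linalg.norm(simulator(theta) - observed)   # summary-statistic distance
    if dist < 0.05:                                      # tolerance epsilon
        accepted.append(theta)

posterior = np.array(accepted)
print(f"accepted {len(posterior)} samples;",
      "posterior means:", posterior.mean(axis=0).round(2))
```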


2020, Vol 499 (4), pp. 5641-5652
Author(s): Georgios Vernardos, Grigorios Tsagkatakis, Yannis Pantazis

Abstract. Gravitational lensing is a powerful tool for constraining substructure in the mass distribution of galaxies, be it from the presence of dark matter sub-haloes or due to physical mechanisms affecting the baryons throughout galaxy evolution. Such substructure is hard to model and is either ignored by traditional smooth-modelling approaches or treated as well-localized massive perturbers. In this work, we propose a deep learning approach to quantify the statistical properties of such perturbations directly from images, where only the extended lensed source features within a mask are considered, without the need of any lens modelling. Our training data consist of mock lensed images assuming perturbing Gaussian Random Fields permeating the smooth overall lens potential, and, for the first time, using images of real galaxies as the lensed source. We employ a novel deep neural network that can handle arbitrary uncertainty intervals associated with the training data set labels as input, provides probability distributions as output, and adopts a composite loss function. The method succeeds not only in accurately estimating the actual parameter values, but also reduces the predicted confidence intervals by 10 per cent in an unsupervised manner, i.e. without having access to the actual ground truth values. Our results are invariant to the inherent degeneracy between mass perturbations in the lens and complex brightness profiles for the source. Hence, we can quantitatively and robustly quantify the smoothness of the mass density of thousands of lenses, including confidence intervals, and provide a consistent ranking for follow-up science.
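The general pattern of a network that outputs a probability distribution rather than a point estimate can be sketched as follows: the final layer predicts a Gaussian mean and log-variance and is trained with a negative log-likelihood loss. This is a generic illustration, not the paper's architecture or composite loss, and the random images and labels below are placeholders for the mock lensing data.

```python
# Hedged sketch of a network that outputs a distribution over a parameter:
# the final layer predicts a Gaussian mean and log-variance, trained with a
# negative log-likelihood loss. Generic pattern only; not the paper's model.
import torch
import torch.nn as nn

torch.manual_seed(0)
images = torch.randn(256, 1, 32, 32)               # placeholder lensed images
labels = torch.rand(256, 1)                        # placeholder GRF parameter

model = nn.Sequential(
    nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Flatten(),
    nn.Linear(16 * 8 * 8, 2),                      # outputs: mean and log-variance
)

def gaussian_nll(out, y):
    mu, log_var = out[:, :1], out[:, 1:]
    return (0.5 * (log_var + (y - mu) ** 2 / log_var.exp())).mean()

opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for epoch in range(20):
    opt.zero_grad()
    loss = gaussian_nll(model(images), labels)
    loss.backward()
    opt.step()

with torch.no_grad():
    out = model(images[:1])
    print("predicted mean:", float(out[0, 0]), "std:", float(out[0, 1].exp().sqrt()))
```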


2008, Vol 10 (2), pp. 153-162
Author(s): B. G. Ruessink

When a numerical model is to be used as a practical tool, its parameters should preferably be stable and consistent, that is, possess a small uncertainty and be time-invariant. Using data and predictions of alongshore mean currents flowing on a beach as a case study, this paper illustrates how parameter stability and consistency can be assessed using Markov chain Monte Carlo. Within a single calibration run, Markov chain Monte Carlo estimates the parameter posterior probability density function, its mode being the best-fit parameter set. Parameter stability is investigated by stepwise adding new data to a calibration run, while consistency is examined by calibrating the model on different datasets of equal length. The results for the present case study indicate that various tidal cycles with strong (say, >0.5 m/s) currents are required to obtain stable parameter estimates, and that the best-fit model parameters and the underlying posterior distribution are strongly time-varying. This inconsistent parameter behavior may reflect unresolved variability of the processes represented by the parameters, or may represent compensational behavior for temporal violations in specific model assumptions.
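The stability check described above can be illustrated with a toy recalibration: a one-parameter model is refit by Metropolis sampling as data are added stepwise, and the posterior mean and spread are tracked. The synthetic data and linear model below are assumptions, not the surf-zone current model of the study.

```python
# Sketch of the stability check (illustrative, not the paper's surf-zone model):
# a one-parameter model is recalibrated with Metropolis sampling as data are
# added stepwise, and the posterior mean and spread are tracked.
import numpy as np

rng = np.random.default_rng(8)
forcing = rng.uniform(0.2, 1.0, 200)                       # e.g. a forcing proxy
obs = 0.7 * forcing + rng.normal(0, 0.05, forcing.size)    # "measured" currents

def posterior_sample(x, y, n_steps=3000, sigma=0.05):
    def logp(t):
        return -np.sum((y - t * x) ** 2) / (2 * sigma**2)
    theta, samples = 1.0, []
    lp = logp(theta)
    for _ in range(n_steps):
        prop = theta + rng.normal(scale=0.02)
        lp_prop = logp(prop)
        if np.log(rng.uniform()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        samples.append(theta)
    return np.array(samples[500:])

for n in (25, 50, 100, 200):                               # stepwise add data
    s = posterior_sample(forcing[:n], obs[:n])
    print(f"n={n:3d}  posterior mean={s.mean():.3f}  sd={s.std():.3f}")
```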


1991, Vol 18 (2), pp. 320-327
Author(s): Murray A. Fitch, Edward A. McBean

A model is developed for the prediction of river flows resulting from combined snowmelt and precipitation. The model employs a Kalman filter to reflect uncertainty both in the measured data and in the system model parameters. The forecasting algorithm is used to develop multi-day forecasts for the Sturgeon River, Ontario. The algorithm is shown to develop good 1-day and 2-day ahead forecasts, but the linear prediction model is found inadequate for longer-term forecasts. Good initial parameter estimates are shown to be essential for optimal forecasting performance. Key words: Kalman filter, streamflow forecast, multi-day, streamflow, Sturgeon River, MISP algorithm.
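A minimal scalar Kalman filter predict/update cycle is sketched below to illustrate the filtering idea; the random-walk state model, noise variances, and synthetic observations are assumptions rather than the paper's snowmelt-plus-precipitation formulation.

```python
# Minimal linear Kalman filter predict/update sketch for a one-state flow
# forecast. The random-walk state model, noise levels, and synthetic
# observations are assumptions, not the paper's formulation.
import numpy as np

rng = np.random.default_rng(9)
true_flow = 20 + np.cumsum(rng.normal(0, 1.0, 50))   # synthetic river flow (m^3/s)
obs = true_flow + rng.normal(0, 2.0, 50)             # noisy gauge observations

x, P = obs[0], 4.0          # state estimate and its variance
Q, R = 1.0, 4.0             # process and observation noise variances

for z in obs[1:]:
    # Predict: random-walk state model (state transition F = 1).
    x_pred, P_pred = x, P + Q
    # Update: blend prediction and new observation by the Kalman gain.
    K = P_pred / (P_pred + R)
    x = x_pred + K * (z - x_pred)
    P = (1 - K) * P_pred

print(f"one-step-ahead forecast: {x:.1f} m^3/s (variance {P + Q:.2f})")
```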

