Mapping parameter spaces of biological switches

2021 ◽  
Vol 17 (2) ◽  
pp. e1008711
Author(s):  
Rocky Diegmiller ◽  
Lun Zhang ◽  
Marcio Gameiro ◽  
Justinn Barr ◽  
Jasmin Imran Alsous ◽  
...  

Since the seminal 1961 paper of Monod and Jacob, mathematical models of biomolecular circuits have guided our understanding of cell regulation. Model-based exploration of the functional capabilities of any given circuit requires systematic mapping of multidimensional spaces of model parameters. Despite significant advances in computational dynamical systems approaches, this analysis remains a nontrivial task. Here, we use a nonlinear system of ordinary differential equations to model oocyte selection in Drosophila, a robust symmetry-breaking event that relies on autoregulatory localization of oocyte-specification factors. By applying an algorithmic approach that implements symbolic computation and topological methods, we enumerate all phase portraits of stable steady states in the limit when nonlinear regulatory interactions become discrete switches. Leveraging this initial exact partitioning and further using numerical exploration, we locate parameter regions that are dense in purely asymmetric steady states when the nonlinearities are not infinitely sharp, enabling systematic identification of parameter regions that correspond to robust oocyte selection. This framework can be generalized to map the full parameter spaces in a broad class of models involving biological switches.
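
As a hedged illustration of this kind of parameter-space mapping (not the authors' oocyte-selection model), the sketch below uses the classical two-gene toggle switch with Hill exponent n and counts the stable steady states reached from a small grid of slightly asymmetric initial conditions across a few (a, n) values; all parameter values are illustrative.

```python
# Minimal sketch: classical toggle switch dx/dt = a/(1+y**n) - x,
# dy/dt = a/(1+x**n) - y. Counting distinct endpoints from many initial
# conditions gives a crude map of where symmetry breaking (bistability) occurs.
import numpy as np
from scipy.integrate import solve_ivp

def toggle(t, z, a, n):
    x, y = z
    return [a / (1.0 + y**n) - x, a / (1.0 + x**n) - y]

def count_stable_states(a, n, atol=0.05):
    endpoints = []
    for x0 in (0.1, 1.0, 3.0):
        for y0 in (0.1, 1.0, 3.0):
            # tiny perturbation avoids sitting exactly on the symmetric subspace
            sol = solve_ivp(toggle, (0, 500), [x0 + 1e-3, y0], args=(a, n), rtol=1e-9)
            z_end = sol.y[:, -1]
            if not any(np.allclose(z_end, e, atol=atol) for e in endpoints):
                endpoints.append(z_end)
    return len(endpoints)

for a in (0.5, 1.5, 3.0):
    for n in (1, 2, 4):
        print(f"a={a:3.1f}, n={n}: {count_stable_states(a, n)} stable steady state(s)")
```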

Author(s):  
Marcello Pericoli ◽  
Marco Taboga

We propose a general method for the Bayesian estimation of a very broad class of non-linear no-arbitrage term-structure models. The main innovation we introduce is a computationally efficient method, based on deep learning techniques, for approximating no-arbitrage model-implied bond yields to any desired degree of accuracy. Once the pricing function is approximated, the posterior distribution of model parameters and unobservable state variables can be estimated by standard Markov Chain Monte Carlo methods. As an illustrative example, we apply the proposed techniques to the estimation of a shadow-rate model with a time-varying lower bound and unspanned macroeconomic factors.
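
The following is a minimal sketch of the two-stage idea under toy assumptions (a made-up one-dimensional "pricing" function, a flat prior, and scikit-learn's MLPRegressor as the surrogate; none of this reproduces the paper's shadow-rate model): train a neural-network approximation of the pricing map, then run random-walk Metropolis on the surrogate posterior.

```python
# Toy surrogate-plus-MCMC sketch; all functions and priors are illustrative.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def true_yield(theta):
    # Stand-in for a costly no-arbitrage pricing function (hypothetical).
    kappa, sigma = theta
    return 0.03 + 0.5 * kappa - 0.2 * sigma**2

# 1. Train the surrogate on simulated (parameter, yield) pairs.
thetas = rng.uniform(0.0, 1.0, size=(2000, 2))
yields = np.array([true_yield(t) for t in thetas])
surrogate = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                         random_state=0).fit(thetas, yields)

# 2. Random-walk Metropolis on a toy Gaussian likelihood via the surrogate.
y_obs, obs_sd = 0.035, 0.002

def log_post(theta):
    if np.any(theta < 0.0) or np.any(theta > 1.0):
        return -np.inf                     # flat prior on the unit square
    y_hat = surrogate.predict(theta.reshape(1, -1))[0]
    return -0.5 * ((y_obs - y_hat) / obs_sd) ** 2

theta = np.array([0.5, 0.5])
lp = log_post(theta)
chain = []
for _ in range(5000):
    proposal = theta + 0.05 * rng.standard_normal(2)
    lp_prop = log_post(proposal)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = proposal, lp_prop
    chain.append(theta)
print("posterior mean of (kappa, sigma):", np.mean(chain, axis=0))
```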


2020 ◽  
Vol 45 (3) ◽  
pp. 966-992
Author(s):  
Michael Jong Kim

Sequential Bayesian optimization constitutes an important and broad class of problems where model parameters are not known a priori but need to be learned over time using Bayesian updating. It is known that the solution to these problems can in principle be obtained by solving the Bayesian dynamic programming (BDP) equation. Although the BDP equation can be solved in certain special cases (for example, when posteriors have low-dimensional representations), solving this equation in general is computationally intractable and remains an open problem. A second unresolved issue with the BDP equation lies in its (rather generic) interpretation. Beyond the standard narrative of balancing immediate versus future costs (an interpretation common to all dynamic programs with or without learning), the BDP equation does not provide much insight into the underlying mechanism by which sequential Bayesian optimization trades off between learning (exploration) and optimization (exploitation), the distinguishing feature of this problem class. The goal of this paper is to develop good approximations (with error bounds) to the BDP equation that help address the issues of computation and interpretation. To this end, we show how the BDP equation can be represented as a tractable single-stage optimization problem that trades off between a myopic term and a "variance regularization" term that measures the total solution variability over the remaining planning horizon. Intuitively, the myopic term can be regarded as a pure exploitation objective that ignores the impact of future learning, whereas the variance regularization term captures a pure exploration objective that only puts value on solutions that resolve statistical uncertainty. We develop quantitative error bounds for this representation and prove that the error tends to zero like $o(n^{-1})$ almost surely in the number of stages n, which, as a corollary, establishes strong consistency of the approximate solution.
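
As a schematic illustration of this exploration-exploitation split (a toy two-armed Gaussian bandit, not the paper's exact decomposition or error bounds), the snippet below ranks actions by a myopic cost term minus a weighted posterior-variance reduction; the weight lam and all numerical values are illustrative assumptions.

```python
# Toy two-armed Gaussian bandit: myopic term vs. a crude "variance
# regularization" credit for the information an arm would provide.
import numpy as np

known_cost = 1.0                      # arm 0: known expected cost
prior_mean, prior_var = 1.2, 0.9      # arm 1: unknown expected cost, Gaussian prior
obs_var = 0.25                        # noise variance of one pull of arm 1
lam = 0.6                             # weight on the exploration (variance) term

def variance_after_pull(prior_var, obs_var):
    """Posterior variance of arm 1's mean after one more observation."""
    return 1.0 / (1.0 / prior_var + 1.0 / obs_var)

myopic_cost = np.array([known_cost, prior_mean])
info_gain = np.array([0.0, prior_var - variance_after_pull(prior_var, obs_var)])
regularized = myopic_cost - lam * info_gain

print("myopic choice     :", int(np.argmin(myopic_cost)))      # exploits arm 0
print("regularized choice:", int(np.argmin(regularized)))      # may explore arm 1
```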


2014 ◽  
Vol 2014 ◽  
pp. 1-14 ◽  
Author(s):  
Ling Liu ◽  
Chongxin Liu

A novel nonlinear four-dimensional hyperchaotic system and its fractional-order form are presented. Some dynamical behaviors of this system are further investigated, including Poincaré mapping, parameter phase portraits, equilibrium points, bifurcations, and calculated Lyapunov exponents. A simple fourth-channel block circuit diagram is designed for generating strange attractors of this dynamical system. Specifically, a novel fractance network module is introduced to realize a fractional-order circuit for hardware implementation of the fractional attractors of this nonlinear hyperchaotic system, with an order as low as 0.9. Results observed on an oscilloscope demonstrate that the fractional-order nonlinear hyperchaotic attractors do indeed exist in this new system.
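
Since the abstract does not reproduce the system's equations, the sketch below integrates the classical hyperchaotic Rössler system instead, purely to illustrate how four-dimensional hyperchaotic attractors are generated numerically; parameters and the initial condition are the commonly used textbook values.

```python
# Illustration only: classical hyperchaotic Rossler system, not the authors' system.
import numpy as np
from scipy.integrate import solve_ivp

def hyper_rossler(t, s, a=0.25, b=3.0, c=0.5, d=0.05):
    x, y, z, w = s
    return [-y - z, x + a * y + w, b + x * z, -c * z + d * w]

sol = solve_ivp(hyper_rossler, (0.0, 200.0), [-10.0, -6.0, 0.0, 10.0],
                rtol=1e-9, atol=1e-9, max_step=0.01)
print("final state on the attractor:", sol.y[:, -1])
```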


2021 ◽  
Vol 67 (1) ◽  
pp. 1-59
Author(s):  
Christophe Chesneau

Engineers, economists, hydrologists, social scientists, and behavioural scientists often deal with data belonging to the unit interval. One of the most common approaches for modelling purposes is the use of unit distributions, beginning with the classical power distribution. A simple way to improve its applicability is proposed by the transmuted scheme. We propose an alternative in this article by slightly modifying this scheme with a logarithmic weighted function, thus creating the log-weighted power distribution. It can also be thought of as a variant of the log-Lindley distribution and of some other derived unit distributions. We investigate its statistical and functional capabilities and discuss how it differs from the power and transmuted power distributions. Among the functions derived from the log-weighted power distribution are the cumulative distribution, probability density, hazard rate, and quantile functions. When appropriate, a shape analysis of these functions is performed to increase the flexibility of the proposed modelling. Various properties are investigated, including stochastic ordering (first order), generalized logarithmic moments, incomplete moments, Rényi entropy, order statistics, and reliability measures, and a list of new distributions derived from the main one is offered. Subsequently, the estimation of the model parameters is discussed through the maximum likelihood procedure. Then, the proposed distribution is tested on a few data sets to show in which concrete statistical scenarios it may outperform the transmuted power distribution.
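
The abstract does not state the log-weighted power density, so the sketch below only illustrates the classical baseline it builds on: the power distribution on (0, 1) with F(x) = x^a, whose maximum-likelihood estimator has the closed form a_hat = -n / sum(log x_i).

```python
# Baseline power distribution on the unit interval: simulate and fit by MLE.
import numpy as np

rng = np.random.default_rng(1)
a_true = 2.5
x = rng.uniform(size=5000) ** (1.0 / a_true)   # inverse-CDF sampling: F(x) = x**a

a_hat = -len(x) / np.sum(np.log(x))            # closed-form maximum-likelihood estimate
print(f"true a = {a_true}, MLE a_hat = {a_hat:.3f}")
```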


Minerals ◽  
2021 ◽  
Vol 11 (4) ◽  
pp. 425
Author(s):  
Jihoe Kwon ◽  
Heechan Cho

Despite its effectiveness in determining breakage function parameters (BFPs) for quantifying breakage characteristics in mineral grinding processes, the back-calculation method has limitations owing to the uncertainty regarding the distribution of the error function. In this work, using Korean uranium and molybdenum ores, we show that the limitation can be overcome by searching over a wide range of initial values based on the conjugate gradient method. We also visualized the distribution of the sum of squares of the error in the two-dimensional parameter space. The results showed that the error function was strictly convex, and the main problem in the back-calculation of the breakage functions was the flat surface of the objective function rather than the occurrence of local minima. Based on our results, we inferred that the flat surface problem could be significantly mitigated by searching over a wide range of initial values. Back-calculation using a wide range of initial values yields BFPs similar to those obtained from single-sized-feed breakage tests (SSFBTs) in parameter spaces of up to four dimensions. Therefore, by searching over a wide range of initial values, the feasibility of the back-calculation approach can be significantly improved with a minimum number of SSFBTs.
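
A hedged sketch of the multi-start strategy (with a hypothetical two-parameter model standing in for the actual breakage functions): run conjugate-gradient minimization of the sum-of-squares error from a wide grid of initial values and keep the best fit.

```python
# Multi-start conjugate-gradient fit of a toy two-parameter model.
import itertools
import numpy as np
from scipy.optimize import minimize

def predicted(params, t):
    """Hypothetical two-parameter response used only for illustration."""
    phi, gamma = params
    return phi * (1.0 - np.exp(-gamma * t))

t_obs = np.linspace(0.5, 10.0, 20)
data = predicted([0.6, 0.8], t_obs) + 0.01 * np.random.default_rng(2).standard_normal(20)

def sse(params):
    return np.sum((data - predicted(params, t_obs)) ** 2)

# Wide grid of initial values, conjugate-gradient minimization from each start.
starts = itertools.product(np.linspace(0.1, 1.0, 5), np.linspace(0.1, 2.0, 5))
best = min((minimize(sse, s, method='CG') for s in starts), key=lambda r: r.fun)
print("best-fit parameters:", best.x, "SSE:", best.fun)
```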


2011 ◽  
Vol 21 (11) ◽  
pp. 3249-3258 ◽  
Author(s):  
Y. ZHAO ◽  
S. A. BILLINGS ◽  
D. COCA ◽  
Y. GUO ◽  
R. I. RISTIC ◽  
...  

This paper describes the identification of a temperature-dependent FitzHugh–Nagumo model directly from experimental observations with controlled inputs. By studying the steady states and the phase trajectories of the variables, the stability of the model is analyzed and a rule for generating oscillation waves is proposed. The dependence of the oscillation frequency and propagation speed on the model parameters is then investigated to seek the appropriate control variables, which then become functions of temperature in the identified model. The results show that the proposed approach can provide a good representation of the dynamics of the oscillatory behavior of a Belousov–Zhabotinskii reaction.
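
For context, the sketch below simulates the standard FitzHugh–Nagumo model with textbook parameters (not the temperature-dependent model identified in the paper) and estimates the oscillation frequency from zero crossings.

```python
# Generic FitzHugh-Nagumo simulation in its oscillatory regime.
import numpy as np
from scipy.integrate import solve_ivp

def fhn(t, s, I=0.5, a=0.7, b=0.8, eps=0.08):
    v, w = s
    return [v - v**3 / 3.0 - w + I, eps * (v + a - b * w)]

sol = solve_ivp(fhn, (0.0, 300.0), [-1.0, 1.0], max_step=0.05, dense_output=True)
v = sol.sol(np.linspace(100.0, 300.0, 4000))[0]        # discard the initial transient
crossings = np.sum((v[:-1] < 0.0) & (v[1:] >= 0.0))    # upward zero crossings
print(f"approximate oscillation frequency: {crossings / 200.0:.3f} cycles per time unit")
```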


2011 ◽  
Vol 8 (1) ◽  
pp. 151-155 ◽  
Author(s):  
Andrew M. Bush ◽  
Philip M. Novack-Gottshall

The ecological traits and functional capabilities of marine animals have changed significantly since their origin in the late Precambrian. These changes can be analysed quantitatively using multi-dimensional parameter spaces in which the ecological lifestyles of species are represented by particular combinations of parameter values. Here, we present models that describe the filling of this multi-dimensional ‘ecospace’ by ecological lifestyles during metazoan diversification. These models reflect varying assumptions about the processes that drove ecological diversification; they contrast diffusive expansion with driven expansion and niche conservatism with niche partitioning. Some models highlight the importance of interactions among organisms (ecosystem engineering and predator–prey escalation) in promoting new lifestyles or eliminating existing ones. These models reflect processes that were not mutually exclusive; rigorous analyses will continue to reveal their applicability to episodes in metazoan history.
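
As a toy illustration only (not the authors' models), the snippet below fills a small discrete "ecospace" lattice either by unbiased diffusive expansion from an ancestral lifestyle or by expansion driven toward one corner, and compares the number of occupied lifestyles and the centroid of the occupied region.

```python
# Toy comparison of diffusive vs. driven filling of a discrete ecospace lattice.
import numpy as np

rng = np.random.default_rng(3)
dims, steps = 3, 300            # three ecological axes, each with five discrete states

def fill(bias):
    occupied = {(0,) * dims}    # start from a single ancestral lifestyle
    for _ in range(steps):
        base = list(occupied)[rng.integers(len(occupied))]
        axis = rng.integers(dims)
        step = 1 if rng.uniform() < bias else -1
        new = tuple(min(4, max(0, c + (step if i == axis else 0)))
                    for i, c in enumerate(base))
        occupied.add(new)
    return occupied

for label, bias in [("diffusive (unbiased)", 0.5), ("driven (biased)", 0.9)]:
    occ = fill(bias)
    centroid = np.mean(list(occ), axis=0)
    print(f"{label}: {len(occ)} lifestyles occupied, centroid {np.round(centroid, 2)}")
```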


2020 ◽  
Vol 80 (12) ◽  
Author(s):  
Taotao Qiu ◽  
Taishi Katsuragawa ◽  
Shulei Ni

The recent observations of the CMB have imposed a very stringent upper limit on the tensor-to-scalar ratio $r$ of inflation models, $r < 0.064$, which indicates that the primordial gravitational waves (PGWs), even if detectable, should have a power spectrum of tiny amplitude. However, current experiments on PGWs aim to detect such a signal by improving their accuracy to an even higher level. Whatever their results, they will give us much information about the early Universe, not only from the astrophysical side but also from the theoretical side, such as model building for the early Universe. In this paper, we are interested in analyzing what kind of inflation models can be favored by future observations, starting from a general action offered by the effective field theory (EFT) approach. We show a general form of $r$ that can be reduced to various models, and more importantly, we show how the accuracy of future observations can put constraints on model parameters by plotting the contours in their parameter spaces.
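
For orientation, the sketch below uses the standard single-field slow-roll result for monomial potentials, r = 16p/(4N + p), rather than the EFT expression derived in the paper, and checks a few (p, N) combinations against the quoted bound r < 0.064.

```python
# Standard monomial-inflation tensor-to-scalar ratio vs. the quoted bound.
bound = 0.064
for p in (2.0 / 3.0, 1.0, 2.0, 4.0):          # exponent of V ~ phi**p
    for N in (50, 60):                        # e-folds before the end of inflation
        r = 16.0 * p / (4.0 * N + p)
        status = "allowed" if r < bound else "excluded"
        print(f"p = {p:4.2f}, N = {N}: r = {r:.3f} ({status})")
```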


2019 ◽  
Vol 79 (11) ◽  
Author(s):  
Sascha Caron ◽  
Tom Heskes ◽  
Sydney Otten ◽  
Bob Stienen

Constraining the parameters of physical models with more than 5–10 parameters is a widespread problem in fields like particle physics and astronomy. The generation of data to explore this parameter space often requires large amounts of computational resources. The commonly used solution of reducing the number of relevant physical parameters hampers the generality of the results. In this paper we show that this problem can be alleviated by the use of active learning. We illustrate this with examples from high energy physics, a field where simulations are often expensive and parameter spaces are high-dimensional. We show that the active learning techniques query-by-committee and query-by-dropout-committee allow for the identification of model points in interesting regions of high-dimensional parameter spaces (e.g. around decision boundaries). This makes it possible to constrain model parameters more efficiently than is currently done with the most common sampling algorithms and to train better performing machine learning models on the same amount of data. Code implementing the experiments in this paper can be found on GitHub.
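
A minimal query-by-committee sketch on a synthetic two-dimensional parameter space (illustrative only; the committee here is the trees of a random forest, and the oracle function stands in for an expensive simulation): disagreement among committee members selects the points near the decision boundary to label next.

```python
# Query-by-committee on a synthetic 2D parameter space.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(4)

def oracle(X):                                   # stand-in for an expensive simulation
    return (X[:, 0] ** 2 + X[:, 1] ** 2 < 1.0).astype(int)

pool = rng.uniform(-2.0, 2.0, size=(5000, 2))    # candidate model points
labeled_idx = rng.choice(len(pool), size=30, replace=False)
X_lab, y_lab = pool[labeled_idx], oracle(pool[labeled_idx])

# Committee = trees of a random forest; disagreement = variance of tree votes.
committee = RandomForestClassifier(n_estimators=25, random_state=0).fit(X_lab, y_lab)
votes = np.stack([tree.predict(pool) for tree in committee.estimators_])
p = votes.mean(axis=0)
disagreement = p * (1.0 - p)

query_idx = np.argsort(disagreement)[-10:]       # most-disputed points to label next
print("next query points (near the decision boundary):")
print(np.round(pool[query_idx], 2))
```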


2018 ◽  
Vol 52 (5) ◽  
pp. 1733-1761 ◽  
Author(s):  
María J. Cáceres ◽  
Ricarda Schneider

The network of noisy leaky integrate-and-fire (NNLIF) model is one of the simplest self-contained mean-field models considered to describe the behavior of neural networks. Even so, in studying its mathematical properties some simplifications are required [Cáceres and Perthame, J. Theor. Biol. 350 (2014) 81–89; Cáceres and Schneider, Kinet. Relat. Model. 10 (2017) 587–612; Cáceres, Carrillo and Perthame, J. Math. Neurosci. 1 (2011) 7], which disregard crucial phenomena. In this work we deal with the general NNLIF model without simplifications. It involves a network with two populations (excitatory and inhibitory), with transmission delays between the neurons and where the neurons remain in a refractory state for a certain time. In this paper we study the number of steady states in terms of the model parameters, the long-time behaviour via the entropy method and Poincaré's inequality, blow-up phenomena, and the importance of transmission delays between excitatory neurons to prevent blow-up and to give rise to synchronous solutions. Besides analytical results, we present a numerical solver, based on high-order flux-splitting WENO schemes and an explicit third-order TVD Runge–Kutta method, in order to describe the wide range of phenomena exhibited by the network: blow-up, asynchronous/synchronous solutions and instability/stability of the steady states. The solver also allows us to observe the time evolution of the firing rates, refractory states and the probability distributions of the excitatory and inhibitory populations.
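
As a simplified stand-in for the solver described above (a first-order upwind flux instead of WENO, and a linear advection equation instead of the NNLIF equations), the sketch below shows the explicit third-order TVD (SSP) Runge–Kutta time-stepping scaffold.

```python
# Third-order TVD (SSP) Runge-Kutta time stepping with an upwind flux,
# applied to 1D linear advection u_t + c u_x = 0 with periodic boundaries.
import numpy as np

nx, c = 200, 1.0
x = np.linspace(0.0, 1.0, nx, endpoint=False)
dx = x[1] - x[0]
dt = 0.4 * dx / c                       # CFL condition
u = np.exp(-200.0 * (x - 0.3) ** 2)     # smooth initial profile

def L(u):
    """Semi-discrete operator: first-order upwind difference (periodic)."""
    return -c * (u - np.roll(u, 1)) / dx

for _ in range(int(0.5 / dt)):           # advect for 0.5 time units
    u1 = u + dt * L(u)                   # stage 1
    u2 = 0.75 * u + 0.25 * (u1 + dt * L(u1))        # stage 2
    u = u / 3.0 + 2.0 / 3.0 * (u2 + dt * L(u2))     # stage 3 (Shu-Osher form)

print("peak location after advection:", x[np.argmax(u)])
```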

