Modal Combination Methods in Deterministic and Stochastic Dynamics

Author(s): S. Benfratello, G. Muscolino
2017
Author(s): Debasish Roy, G. Visweswara Rao

Author(s): Sauro Succi

Molecules in dense fluids and liquids are in constant interaction; hence, they do not fit Boltzmann's picture of a clear-cut separation between free streaming and collisional interactions. Since the interactions are soft and do not involve large scattering angles, an effective way of describing dense fluids is to formulate stochastic models of particle motion, as pioneered by Einstein's theory of Brownian motion and later extended by Paul Langevin. Besides its practical value for the kinetic theory of dense fluids, Brownian motion occupies a central place in the historical development of kinetic theory; among other things, it provided conclusive evidence in favor of the atomistic theory of matter. This chapter introduces the basic notions of stochastic dynamics and their connection with other important kinetic equations, primarily the Fokker–Planck equation, which plays a role complementary to the Boltzmann equation in the kinetic theory of dense fluids.
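
A minimal numerical sketch of the Langevin picture described above (not taken from the chapter): the velocity of a Brownian particle is integrated with the Euler-Maruyama scheme, with the noise amplitude fixed by the fluctuation-dissipation relation, and the stationary velocity variance is checked against the equilibrium prediction of the associated Fokker–Planck equation. All parameter values are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
gamma = 1.0          # friction coefficient (1/time), illustrative value
kBT_over_m = 1.0     # thermal energy per unit mass, illustrative value
dt = 1e-3
n_steps = 200_000

# Langevin equation for the velocity: dv = -gamma*v*dt + sqrt(2*gamma*kBT/m)*dW,
# with the noise amplitude fixed by the fluctuation-dissipation relation.
noise_amp = np.sqrt(2.0 * gamma * kBT_over_m * dt)
v = np.empty(n_steps)
v[0] = 0.0
for i in range(1, n_steps):
    v[i] = v[i - 1] - gamma * v[i - 1] * dt + noise_amp * rng.standard_normal()

# The stationary velocity distribution is the Maxwellian equilibrium solution
# of the associated Fokker-Planck equation, with variance kBT/m.
print(np.var(v[n_steps // 2:]))   # should be close to kBT_over_m = 1.0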


2020
Author(s): Stefanie Winkelmann, Christof Schütte

2021
Vol 12 (1)
Author(s): Qingchao Jiang, Xiaoming Fu, Shifu Yan, Runlai Li, Wenli Du, ...

Abstract: Non-Markovian models of stochastic biochemical kinetics often incorporate explicit time delays to effectively model large numbers of intermediate biochemical processes. Analysis and simulation of these models, as well as the inference of their parameters from data, are fraught with difficulties because the dynamics depends on the system’s history. Here we use an artificial neural network to approximate the time-dependent distributions of non-Markovian models by the solutions of much simpler time-inhomogeneous Markovian models; the approximation does not increase the dimensionality of the model and simultaneously leads to inference of the kinetic parameters. The training of the neural network uses a relatively small set of noisy measurements generated by experimental data or stochastic simulations of the non-Markovian model. We show using a variety of models, where the delays stem from transcriptional processes and feedback control, that the Markovian models learnt by the neural network accurately reflect the stochastic dynamics across parameter space.
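
The following is a minimal sketch of the construction described in the abstract, not the authors' implementation: a tiny neural network maps time to the rates of a time-inhomogeneous birth-death (Markovian) model, the master equation of that Markov model is propagated to obtain time-dependent distributions, and a loss against measured histograms supplies the training signal. The network architecture, the birth-death structure, and the squared-error loss are illustrative assumptions.

import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
N = 40                                    # truncate copy numbers to 0..N
t_grid = np.linspace(0.0, 10.0, 51)

# Tiny fully connected network: t -> (birth rate, death rate), kept positive.
params = [rng.normal(scale=0.5, size=(8, 1)), np.zeros(8),
          rng.normal(scale=0.5, size=(2, 8)), np.zeros(2)]

def rates(t, params):
    W1, b1, W2, b2 = params
    h = np.tanh(W1 @ np.array([t / t_grid[-1]]) + b1)
    return np.exp(W2 @ h + b2)            # exponential keeps both rates positive

def distributions(params):
    """Time-dependent distributions of the NN-parameterized Markov model."""
    n = np.arange(N + 1)
    p = np.zeros(N + 1)
    p[0] = 1.0                            # start with zero molecules
    out = [p.copy()]
    for t0, t1 in zip(t_grid[:-1], t_grid[1:]):
        lam, mu = rates(t0, params)
        Q = np.zeros((N + 1, N + 1))
        Q[n[:-1], n[:-1] + 1] = lam       # birth: n -> n+1
        Q[n[1:], n[1:] - 1] = mu * n[1:]  # death: n -> n-1
        Q[n, n] = -Q.sum(axis=1)          # conservative generator
        p = p @ expm(Q * (t1 - t0))       # piecewise-constant propagation
        out.append(p.copy())
    return np.array(out)

def loss(params, measured):
    """Squared distance to the measured distributions: the training signal."""
    return np.sum((distributions(params) - measured) ** 2)

# 'measured' would be noisy histograms from experiments or from stochastic
# simulations of the delayed (non-Markovian) model; here we only verify that
# the forward model runs.
measured = distributions(params)
print(loss(params, measured))             # 0.0 on this self-consistency check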


Author(s): Zezheng Yan, Hanping Zhao, Xiaowen Mei

Abstract: Dempster–Shafer evidence theory is widely applied in fields related to information fusion. However, the results are counterintuitive when highly conflicting evidence is fused with Dempster’s rule of combination. Many improved combination methods have been developed to address conflicting evidence, yet each has inherent flaws. To resolve the counterintuitive results more effectively and less conservatively, an improved combination method for conflicting evidence based on redistribution of the basic probability assignment is proposed. First, the conflict intensity and the unreliability of the evidence are calculated from the consistency degree, conflict degree and similarity coefficient among the pieces of evidence. Second, a redistribution equation for the basic probability assignment is constructed from the unreliability and conflict intensity, which realizes the redistribution of the basic probability assignment. Third, to avoid excessive redistribution, the precision degree of the evidence, obtained from information entropy, is used as a correction factor to modify the basic probability assignment a second time. Finally, Dempster’s rule of combination is used to fuse the modified basic probability assignments. Several types of examples and real data sets illustrate the effectiveness and potential of the proposed method, and a comparative analysis shows that it obtains correct results more reliably than other related methods.
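
For reference, the final fusion step named in the abstract is the classical Dempster's rule of combination; a minimal Python sketch is given below. The paper's redistribution and entropy-based correction of the basic probability assignments happen before this step and are not reproduced here; focal elements are represented as frozensets over the frame of discernment.

from itertools import product

def dempster_combine(m1, m2):
    """Combine two basic probability assignments (dicts: frozenset -> mass)."""
    combined = {}
    conflict = 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb          # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: Dempster's rule is undefined")
    return {s: w / (1.0 - conflict) for s, w in combined.items()}

# Zadeh's classic highly conflicting example (illustrative numbers):
A, B, C = frozenset("A"), frozenset("B"), frozenset("C")
m1 = {A: 0.99, B: 0.01}
m2 = {B: 0.01, C: 0.99}
print(dempster_combine(m1, m2))          # all mass ends up on B

On this example the rule assigns all mass to the hypothesis both sources consider nearly impossible, which is exactly the counterintuitive behavior that the redistribution proposed in the paper is designed to mitigate.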


2020
Vol 23 (3), pp. 656-693
Author(s): Thomas M. Michelitsch, Alejandro P. Riascos

Abstract: We survey the ‘generalized fractional Poisson process’ (GFPP). The GFPP is a renewal process that generalizes Laskin’s fractional Poisson counting process and was first introduced by Cahoy and Polito. It has two index parameters, with admissible ranges 0 < β ≤ 1 and α > 0, and a parameter characterizing the time scale. The GFPP involves Prabhakar generalized Mittag-Leffler functions and, for special choices of the parameters, recovers the Laskin fractional Poisson process, the Erlang process and the standard Poisson process; we demonstrate this by means of explicit formulas. We develop the Montroll-Weiss continuous-time random walk (CTRW) for the GFPP on undirected networks, in which the waiting times between the jumps of the walker are Prabhakar distributed. For this walk, we derive a generalized fractional Kolmogorov-Feller equation involving Prabhakar generalized fractional operators that govern the stochastic motion on the network. We analyze the ‘well-scaled’ diffusion limit in d dimensions and obtain a fractional diffusion equation of the same type as for a walk with Mittag-Leffler distributed waiting times. The GFPP has the potential to capture various aspects of the dynamics of certain complex systems.
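
As a concrete point of contact, the Python sketch below simulates the Laskin fractional Poisson process, which the abstract identifies as a special case of the GFPP. The Mittag-Leffler waiting times are drawn with the standard transformation formula used in CTRW Monte Carlo studies; the full two-parameter Prabhakar waiting-time law of the GFPP is not implemented here, and all parameter values are illustrative assumptions.

import numpy as np

def mittag_leffler_waiting_times(beta, tau0, size, rng):
    """Waiting times with survival function E_beta(-(t/tau0)**beta)."""
    u = rng.random(size)
    v = rng.random(size)
    factor = (np.sin(beta * np.pi) / np.tan(beta * np.pi * v)
              - np.cos(beta * np.pi)) ** (1.0 / beta)
    return -tau0 * np.log(u) * factor

def fractional_poisson_count(beta, tau0, t_max, rng):
    """Number of renewal events up to time t_max in one realization."""
    t, n = 0.0, 0
    while True:
        t += mittag_leffler_waiting_times(beta, tau0, 1, rng)[0]
        if t > t_max:
            return n
        n += 1

rng = np.random.default_rng(1)
beta, tau0, t_max = 0.8, 1.0, 50.0
counts = [fractional_poisson_count(beta, tau0, t_max, rng) for _ in range(2000)]
print(np.mean(counts))   # fewer events on average than the beta = 1 (Poisson) case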

