inference procedure
Recently Published Documents

TOTAL DOCUMENTS: 144 (FIVE YEARS: 54)
H-INDEX: 16 (FIVE YEARS: 2)

2021 ◽  
Vol 2090 (1) ◽  
pp. 012109
Author(s):  
Richard L. Summers

Abstract In the analysis of physical systems, the forces and mechanics of all system changes, as codified in the Newtonian laws, can be redefined by the methods of Lagrange and Hamilton through identification of the governing action principle as a more general framework for dynamics. For the living system, it is the dimensional and relational structure of its biologic continuum (both internal and external to the organism) that creates the signature informational metrics and course configurations for the action dynamics associated with any natural system phenomena. From this dynamic information-theoretic framework, an action functional can also be derived in accordance with the methods of Lagrange. The experiential process of acquiring information and translating it into actionable meaning for adaptive responses is the driving force for changes in the living system. The core axiomatic procedure of this adaptive process should include an innate action principle that can determine the system’s directional changes. This procedure for adaptive reconciliation of divergences from steady state within the biocontinuum can be described by an information-metric formulation of actionable knowledge acquisition that incorporates the axiomatic inference of Kullback’s Principle of Minimum Discrimination Information, powered by the mechanics of survival replicator dynamics. This entropy-driven trajectory naturally minimizes the information-gradient differences of the biocontinuum, like a least-action principle, and serves as an inference procedure for directional change. If the mathematical expression of this process is the Lagrangian integrand for adaptive changes within the biocontinuum, then it can also be considered an action functional for the living system.
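The link between replicator dynamics and minimum discrimination information can be illustrated with a small numerical sketch. The fitness values and distributions below are invented for illustration, not taken from the paper: under a discrete-time replicator update, the Kullback discrimination information between the limiting distribution and the evolving population shrinks monotonically, the kind of entropy-driven descent the abstract describes.

```python
import numpy as np

def kl(p, q):
    """Kullback discrimination information D(p || q)."""
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

f = np.array([1.0, 1.5, 2.0])        # hypothetical state fitnesses
p = np.array([0.6, 0.3, 0.1])        # initial population distribution
target = np.array([0.0, 0.0, 1.0])   # the fittest state dominates in the limit

divergences = []
for _ in range(50):
    p = p * f / np.dot(p, f)         # discrete-time replicator update
    divergences.append(kl(target, p))

# The discrimination information to the limiting distribution decreases
# monotonically along the trajectory, a least-action-like descent.
assert all(a >= b for a, b in zip(divergences, divergences[1:]))
```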


Author(s):  
Assyr Abdulle ◽  
Giacomo Garegnani ◽  
Grigorios A. Pavliotis ◽  
Andrew M. Stuart ◽  
Andrea Zanoni

Abstract We study the problem of drift estimation for two-scale continuous time series. We set ourselves in the framework of overdamped Langevin equations, for which a single-scale surrogate homogenized equation exists. In this setting, estimating the drift coefficient of the homogenized equation requires pre-processing of the data, often in the form of subsampling; this is because the two-scale equation and the homogenized single-scale equation are incompatible at small scales, generating mutually singular measures on the path space. We avoid subsampling and work instead with filtered data, found by application of an appropriate kernel function, and compute maximum likelihood estimators based on the filtered process. We show that the estimators we propose are asymptotically unbiased and demonstrate numerically the advantages of our method with respect to subsampling. Finally, we show how our filtered data methodology can be combined with Bayesian techniques and provide a full uncertainty quantification of the inference procedure.
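A minimal numerical sketch of the setting, not the authors' code: a single-scale Ornstein-Uhlenbeck toy path with assumed parameters, the standard drift MLE, and one natural filtered variant built from an exponentially filtered path (the filter width delta is an arbitrary tuning value).

```python
import numpy as np

rng = np.random.default_rng(0)
A_true, sigma, dt, n = 1.0, 0.5, 1e-3, 200_000

# Euler-Maruyama path of dX = -A X dt + sqrt(2 sigma) dW (single-scale OU).
X = np.empty(n)
X[0] = 0.0
dW = rng.standard_normal(n - 1) * np.sqrt(dt)
for i in range(n - 1):
    X[i + 1] = X[i] - A_true * X[i] * dt + np.sqrt(2 * sigma) * dW[i]

# Exponentially filtered path Z (one possible kernel choice).
delta = 0.05
Z = np.empty(n)
Z[0] = X[0]
for i in range(n - 1):
    Z[i + 1] = Z[i] + (dt / delta) * (X[i + 1] - Z[i])

dX = np.diff(X)
A_raw = -np.sum(X[:-1] * dX) / (np.sum(X[:-1] ** 2) * dt)       # standard MLE
A_filt = -np.sum(Z[:-1] * dX) / (np.sum(Z[:-1] * X[:-1]) * dt)  # filtered variant
print(A_raw, A_filt)    # both close to A_true on this single-scale path
```

On a genuinely two-scale path the raw estimator would be biased at small sampling intervals, which is what motivates the filtered construction.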


2021 ◽  
Author(s):  
SEHINDE AKINBIOLA ◽  
Ayobami Salami ◽  
Olusegun Awotoye

Abstract The complexity of tropical forest structure remains a challenge in assessing forest physiognomy, a crucial indicator of forest productivity with implications for the carbon cycle, biodiversity, and ecosystem services. The study assessed structural characteristics, described variability within forest stands, and estimated carbon stocks using simulation tools and tree modeling, with a focus on understanding and quantifying ecological relationships. The study found a site-specific wood density difference of 0.07 g/cm³ compared with the generalized wood density for tropical forests published by the Food and Agriculture Organization (FAO). Carbon stocks estimated with this site-specific wood density were 174 Mg C ha⁻¹, 155 Mg C ha⁻¹, and 78 Mg C ha⁻¹, respectively, for the three sampled Forest Reserves. Furthermore, the results showed that the forest clusters' most productive layers (the emergent and canopy layers) were predominantly hardwood species interspersed with softwood species of large diameter. The height-diameter model indicated that although height was a better predictor of the forest structural layer than diameter, there was no clear margin for grouping species into layers in the region because of interspecies variation, temperature, and anthropogenic activities. The Bayesian inference procedure provided a reliable approach to carbon stock estimation in tropical regions with no legacy inventories.
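For intuition about why a 0.07 g/cm³ wood density correction matters, here is a hedged sketch. The tree dimensions and generic density are invented, and the Chave et al. (2014) pantropical allometry is used as one common choice of biomass model, not necessarily the study's own:

```python
def agb_kg(rho, dbh_cm, height_m):
    """Chave et al. (2014) pantropical allometry: aboveground biomass in kg."""
    return 0.0673 * (rho * dbh_cm ** 2 * height_m) ** 0.976

generic_rho = 0.60                        # assumed generic tropical wood density
site_rho = generic_rho - 0.07             # the study's site-specific offset
tree = dict(dbh_cm=40.0, height_m=25.0)   # hypothetical tree

a_generic = agb_kg(generic_rho, **tree)
a_site = agb_kg(site_rho, **tree)
print(f"{100 * (a_site - a_generic) / a_generic:.1f}% biomass change")
# roughly an 11% drop, which propagates directly into the carbon stock
```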


Vestnik MGSU ◽  
2021 ◽  
pp. 876-884
Author(s):  
Evgeny V. Ganzen

Introduction. The article analyzes the determinants that condition the incorporation of public buildings into capital repair and reconstruction (CRR) plans. These include the technical condition of a facility, specified in an expert opinion and identified on the basis of survey results; the duration of effective operation, determined in accordance with guideline values; the work performance time; the availability of engineering and transport infrastructure; and other factors. Materials and methods. The co-authors formulated fuzzy inference rules and compiled a table of expert opinions issued on their basis. To implement the fuzzy inference procedure, the co-authors applied the Mamdani algorithm, using min-conjunction to aggregate indicators, max-disjunction to accumulate conclusions, and the centre-of-gravity method for defuzzification. The Fuzzy Inference System editor of the MATLAB package was used to implement the proposed fuzzy inference scheme, and the Saaty hierarchy method was used to design the membership functions (MF). The analysis of literature sources did not identify any works in which fuzzy logic methods or fuzzy inference rules were used to plan the CRR of public buildings. Results. Membership functions were designed for all factors and for the final indicator on the selected scale, corresponding to the Harrington desirability function in the range of values [0; 100]. The results of implementing the proposed fuzzy inference system in the MATLAB environment are presented in the form of graphs and numerical values of all input linguistic variables. Conclusions. Fuzzy inference makes it possible to obtain a numerical value of the integral repair potential that underlies an informed decision about the incorporation of a facility into the CRR plan. The strength of the approach is the modifiability and expandability of the rule base in practical work. The proposed planning tool makes it possible to consider a combination of principal factors, cut costs, reduce time frames, and improve the quality of public building repairs.
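The Mamdani pipeline described above (min activation, max accumulation, centroid defuzzification) can be sketched in a few lines. This is an illustrative toy with invented membership functions and rules, not the paper's MATLAB system:

```python
import numpy as np

def up(x, a, b):
    """Rising shoulder membership function."""
    return np.clip((x - a) / (b - a), 0.0, 1.0)

def down(x, a, b):
    """Falling shoulder membership function."""
    return np.clip((b - x) / (b - a), 0.0, 1.0)

out = np.linspace(0.0, 100.0, 1001)           # output universe (repair potential)
low_pot, high_pot = down(out, 0, 50), up(out, 50, 100)

def repair_potential(condition, service_life):
    """Inputs on a 0..100 scale; returns the crisp integral repair potential."""
    cond_bad, cond_good = down(condition, 0, 60), up(condition, 40, 100)
    life_short, life_long = down(service_life, 0, 60), up(service_life, 40, 100)
    # Rule 1: poor condition AND long service life -> high repair potential
    r1 = min(cond_bad, life_long)                 # min-conjunction
    # Rule 2: good condition AND short service life -> low repair potential
    r2 = min(cond_good, life_short)
    agg = np.maximum(np.minimum(r1, high_pot),    # max-disjunction accumulation
                     np.minimum(r2, low_pot))
    return float(np.sum(out * agg) / (np.sum(agg) + 1e-12))  # centre of gravity

print(repair_potential(condition=20, service_life=80))   # worn building: high
print(repair_potential(condition=90, service_life=10))   # new building: low
```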


Universe ◽  
2021 ◽  
Vol 7 (7) ◽  
pp. 213
Author(s):  
Luis E. Padilla ◽  
Luis O. Tellez ◽  
Luis A. Escamilla ◽  
Jose Alberto Vazquez

Bayesian statistics and Markov Chain Monte Carlo (MCMC) algorithms have found their place in the field of Cosmology. They have become important mathematical and numerical tools, especially for parameter estimation and model comparison. In this paper, we review some fundamental concepts needed to understand Bayesian statistics and then introduce MCMC algorithms and samplers that allow us to perform the parameter inference procedure. We also give a general description of the standard cosmological model, known as the ΛCDM model, along with several alternatives, and of current datasets coming from astrophysical and cosmological observations. Finally, with the tools acquired, we use an MCMC algorithm implemented in Python to test several cosmological models and find the combination of parameters that best describes the Universe.
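A minimal Metropolis-Hastings sampler shows the core of the parameter inference machinery. The toy target below (the mean of synthetic Gaussian data, with an assumed step size) stands in for the cosmological likelihoods used in the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
data = rng.normal(loc=3.0, scale=1.0, size=200)   # synthetic "observations"

def log_post(mu):
    """Log-posterior for the mean: flat prior, unit-variance Gaussian data."""
    return -0.5 * np.sum((data - mu) ** 2)

chain = np.empty(20_000)
mu, lp = 0.0, log_post(0.0)
for i in range(chain.size):
    prop = mu + 0.3 * rng.standard_normal()       # symmetric random-walk step
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:       # Metropolis accept/reject
        mu, lp = prop, lp_prop
    chain[i] = mu

posterior_mean = chain[5_000:].mean()             # discard burn-in
print(posterior_mean)    # close to the sample mean of the data
```

Replacing `log_post` with a cosmological likelihood over several parameters gives the same algorithm used for ΛCDM parameter estimation.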


2021 ◽  
Vol 12 (1) ◽  
Author(s):  
Daniel L. Rabosky ◽  
Roger B. J. Benson

Abstract Estimates of evolutionary diversification rates – speciation and extinction – have been used extensively to explain global biodiversity patterns. Many studies have analyzed diversification rates derived from just two pieces of information: a clade’s age and its extant species richness. This “age-richness rate” (ARR) estimator provides a convenient shortcut for comparative studies, but makes strong assumptions about the dynamics of species richness through time. Here we demonstrate that use of the ARR estimator in comparative studies is problematic on both theoretical and empirical grounds. We prove mathematically that ARR estimates are non-identifiable: there is no information in the data for a single clade that can distinguish a process with positive net diversification from one where net diversification is zero. Using paleontological time series, we demonstrate that the ARR estimator has no predictive ability for real datasets. These pathologies arise because the ARR inference procedure yields “point estimates” that have been computed under a saturated statistical model with zero degrees of freedom. Although ARR estimates remain useful in some contexts, they should be avoided for comparative studies of diversification and species richness.
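The ARR shortcut is simple to state in code. The sketch below uses hypothetical numbers and the pure-birth stem-age form r = ln(N)/t, and shows one informal face of the identifiability problem: expected richness depends on speciation and extinction rates only through their difference.

```python
import math

def arr_rate(richness, age_my):
    """Age-richness rate estimator: r = ln(N) / t (pure-birth, stem age)."""
    return math.log(richness) / age_my

r = arr_rate(richness=1000, age_my=50)   # hypothetical clade
print(round(r, 4))                       # prints 0.1382

# Expected richness of a birth-death process started from one lineage is
# exp((lam - mu) * t): any (lam, mu) pair with the same difference r yields
# the same expectation, so one clade's (age, richness) cannot separate them.
t = 50
for lam in (0.2, 0.6, 1.0):
    mu = lam - r
    assert abs(math.exp((lam - mu) * t) - 1000) < 1e-6
```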


2021 ◽  
Author(s):  
Asif U Tamuri ◽  
Mario dos Reis

We use first principles of population genetics to model the evolution of proteins under persistent positive selection (PPS). PPS may occur when organisms are subjected to persistent environmental change, during adaptive radiations, or in host-pathogen interactions. Our mutation-selection model indicates that protein evolution under PPS is an irreversible Markov process, and thus proteins under PPS show a strongly asymmetrical distribution of selection coefficients among amino acid substitutions. Our model shows that the criterion ω > 1 (where ω is the ratio of non-synonymous to synonymous codon substitution rates) for detecting positive selection is conservative and indeed arbitrary, because in real proteins many mutations are highly deleterious and are removed by selection even at positively selected sites. We use a penalized-likelihood implementation of our model to successfully detect PPS in plant RuBisCO and influenza HA proteins. By directly estimating selection coefficients at protein sites, our inference procedure bypasses the need to use ω as a surrogate measure of selection and improves our ability to detect molecular adaptation in proteins.
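The asymmetry at the heart of mutation-selection models can be illustrated with the standard fixation-probability rate ratio S / (1 − e^(−S)) for a mutation with scaled selection coefficient S = 2Ns. This is a textbook formula used for illustration, not the authors' penalized-likelihood machinery:

```python
import math

def relative_rate(S):
    """Substitution rate relative to neutral for scaled coefficient S = 2Ns."""
    if abs(S) < 1e-8:
        return 1.0                 # neutral limit of S / (1 - exp(-S))
    return S / (1.0 - math.exp(-S))

print(relative_rate(4.0))          # advantageous mutations fix faster than neutral
print(relative_rate(-4.0))         # deleterious mutations almost never fix
```

The steep drop-off for negative S is why highly deleterious mutations are purged even at positively selected sites, which in turn makes ω > 1 a conservative criterion.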


2021 ◽  
Vol 4 ◽  
Author(s):  
Monica Billio ◽  
Roberto Casarin ◽  
Michele Costola ◽  
Matteo Iacopini

Networks represent a useful tool for describing relationships among financial firms, and network analysis has been used extensively in recent years to study financial connectedness. An often neglected aspect is that network observations come with errors from different sources, such as estimation and measurement errors; a proper statistical treatment of the data is therefore needed before network analysis can be performed. We show that node centrality measures can be heavily affected by random errors and propose a flexible model, based on the matrix-variate t distribution, together with a Bayesian inference procedure to de-noise the data. We provide an application to a network of European financial institutions.
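A quick toy demonstration of the premise that random errors distort centrality; the random network and noise level below are invented, not the paper's data or model:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20
A = (rng.random((n, n)) < 0.15).astype(float)
A = np.triu(A, 1)
A = A + A.T                                   # undirected, no self-loops

def eigen_centrality(M):
    """Leading-eigenvector centrality, normalized to sum to one."""
    vals, vecs = np.linalg.eigh(M)
    v = np.abs(vecs[:, np.argmax(vals)])
    return v / v.sum()

noise = rng.normal(scale=0.3, size=A.shape)
noisy = A + (noise + noise.T) / 2             # symmetric observation error
shift = np.abs(eigen_centrality(A) - eigen_centrality(noisy)).max()
print(shift)     # non-negligible distortion of the centrality scores
```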


2021 ◽  
Vol 2021 (2) ◽  
Author(s):  
Jake R Hanson ◽  
Sara I Walker

Abstract The scientific study of consciousness is currently undergoing a critical transition in the form of a rapidly evolving debate over whether currently proposed theories can be assessed for their scientific validity. At the forefront of this debate is Integrated Information Theory (IIT), widely regarded as the preeminent theory of consciousness because it quantifies subjective experience in a scalar mathematical measure, Φ, that is in principle measurable. Epistemological issues in the form of the “unfolding argument” have provided a concrete refutation of IIT by demonstrating how it permits functionally identical systems to differ in their predicted consciousness. The implication is that IIT, and any other theory based on a physical system’s causal structure, may already be falsified even in the absence of experimental refutation. However, many of the arguments surrounding the epistemological foundations of falsification, such as the unfolding argument, have so far been too abstract for the full scope of their implications to be determined. Here, we make these abstract arguments concrete by providing a simple example of functionally equivalent machines, realizable with table-top electronics, that take the form of isomorphic digital circuits with and without feedback. This allows us to explicitly demonstrate the different levels of abstraction at which a theory of consciousness can be assessed. Within this computational hierarchy, we show how IIT is simultaneously falsified at the finite-state automaton level and unfalsifiable at the combinatorial-state automaton level. We use this example to illustrate a more general set of falsification criteria for theories of consciousness: to avoid being already falsified, or conversely unfalsifiable, a scientific theory of consciousness must be invariant with respect to changes that leave the inference procedure fixed at a particular level in a computational hierarchy.
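A software analogue of such circuit pairs makes the argument concrete: a parity detector implemented with feedback (a finite-state machine) and as an unrolled feedforward XOR tree. The toy below is illustrative; the paper's own example uses digital electronics:

```python
from itertools import product

def parity_feedback(bits):
    """Recurrent implementation: the output is fed back as internal state."""
    state = 0
    for b in bits:
        state = state ^ b
    return state

def parity_feedforward(bits):
    """Unrolled XOR tree: no recurrent state, purely feedforward."""
    b = list(bits)
    while len(b) > 1:
        b = [b[i] ^ b[i + 1] for i in range(0, len(b) - 1, 2)] + \
            ([b[-1]] if len(b) % 2 else [])
    return b[0]

# Identical input-output behaviour over every fixed-length input, despite
# different internal causal structure -- the kind of pair the unfolding
# argument deploys against causal-structure theories of consciousness.
assert all(parity_feedback(w) == parity_feedforward(w)
           for w in product([0, 1], repeat=6))
```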


2021 ◽  
Vol 0 (0) ◽  
Author(s):  
Alessandra Canepa

Abstract Johansen’s (2000. “A Bartlett Correction Factor for Tests on the Cointegrating Relations.” Econometric Theory 16: 740–78) Bartlett correction factor for the LR test of linear restrictions on cointegrated vectors is derived under the i.i.d. Gaussian assumption for the innovation terms. However, the distribution of most data relating to financial variables is fat-tailed and often skewed, so there is a need for small-sample inference procedures that require weaker assumptions on the innovation term. This paper suggests that using the non-parametric bootstrap to approximate a Bartlett-type correction provides a statistic that does not require specification of the innovation distribution and can be used by applied econometricians to perform a small-sample inference procedure that is less computationally demanding than its analytical counterpart. The procedure involves calculating a number of bootstrap values of the LR test statistic and estimating the expected value of the test statistic by the average of the bootstrapped LR statistics. Simulation results suggest that the inference procedure has good finite-sample properties and is less dependent on the parameter space of the data-generating process.
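The bootstrap Bartlett-type rescaling can be sketched schematically. The toy below tests an exponential-mean restriction rather than cointegrating restrictions, purely to show the correction step (sample size and bootstrap count are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(3)

def lr_stat(x, mu0):
    """LR statistic for testing mean = mu0 in an exponential model."""
    xbar = x.mean()
    return 2 * x.size * (xbar / mu0 - 1 - np.log(xbar / mu0))

x = rng.exponential(scale=1.0, size=25)       # small sample; null is true
lr = lr_stat(x, 1.0)

# Parametric bootstrap under the estimated null model: the average of the
# bootstrapped LR statistics estimates E[LR], the Bartlett numerator.
boot = np.array([lr_stat(rng.exponential(scale=x.mean(), size=x.size), x.mean())
                 for _ in range(2000)])
df = 1.0                                       # one restriction tested
lr_corrected = lr * df / boot.mean()           # Bartlett-type rescaling
print(lr, lr_corrected)
```

The corrected statistic is then compared with the usual chi-squared critical value, avoiding both the analytical correction factor and any distributional assumption on the innovations.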

