inference problems
Recently Published Documents


TOTAL DOCUMENTS: 216 (last five years: 55)

H-INDEX: 23 (last five years: 4)

2022
Vol 12 (1)
Author(s): Oscar Fajardo-Fontiveros, Roger Guimerà, Marta Sales-Pardo

Author(s): Winston C Chow

A Kalman filter estimate of the state of a system is itself a random vector with a normal, also called Gaussian, distribution. Elementary statistics teaches that any Gaussian distribution is completely and uniquely characterized by its mean and covariance (variance, if univariate), and such a characterization is required for statistical inference problems involving a Gaussian random vector. The mean and composite covariance of a Kalman filter estimate of a system state are derived here. The derived covariance is in recursive form and must not be confused with the "error covariance" output of a Kalman filter. Potential applications of the derivation, including geological ones, are described and illustrated with a simple example.
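To make the distinction concrete, here is a minimal sketch with an assumed scalar linear-Gaussian model; the parameter values and the use of the orthogonality identity Cov(x̂ₖ) = Cov(xₖ) − Pₖ are illustrative choices, not taken from the paper. Alongside the usual error covariance, it propagates the mean and covariance of the estimate itself.

```python
import numpy as np

# Assumed scalar model:  x_k = a*x_{k-1} + w_k,   z_k = h*x_k + v_k.
# Alongside the usual error covariance P_k we track the mean and covariance
# of the estimate xhat_k itself, using E[xhat_k] = E[x_k] and (by
# orthogonality of the estimation error and the estimate)
# Cov(xhat_k) = Cov(x_k) - P_k.  Illustrative only; the paper's own
# recursive derivation may be stated differently.

a, h, q, r = 0.9, 1.0, 0.5, 1.0        # dynamics, observation gain, noise variances
m, S = 1.0, 2.0                        # prior mean and variance of the state
xhat, P = m, S                         # filter initialisation

rng = np.random.default_rng(0)
x = rng.normal(m, np.sqrt(S))          # one sampled "true" trajectory

for k in range(1, 6):
    # simulate the true system and its measurement
    x = a * x + rng.normal(0.0, np.sqrt(q))
    z = h * x + rng.normal(0.0, np.sqrt(r))

    # standard Kalman recursion (error covariance P)
    xhat_pred = a * xhat
    P_pred = a * P * a + q
    K = P_pred * h / (h * P_pred * h + r)
    xhat = xhat_pred + K * (z - h * xhat_pred)
    P = (1.0 - K * h) * P_pred

    # mean and covariance of the *estimate* as a random variable
    m = a * m                          # E[xhat_k] = E[x_k]
    S = a * S * a + q                  # Cov(x_k)
    C = S - P                          # Cov(xhat_k), not the error covariance
    print(f"k={k}: E[xhat]={m:.3f}  Cov(xhat)={C:.3f}  error cov P={P:.3f}")
```

Printing both quantities side by side shows that the covariance characterizing the estimate as a Gaussian random vector is distinct from the filter's error covariance.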


2021
Author(s): Rohitash Chandra

One way to train neural networks is to use evolutionary algorithms such as cooperative coevolution, a method that decomposes the network's learnable parameters into subsets, called subcomponents. Cooperative coevolution gains an advantage over other methods by evolving particular subcomponents independently of the rest of the network. Its success depends strongly on how the problem decomposition is carried out. This thesis suggests new forms of problem decomposition, based on a novel and intuitive choice of modularity, and examines in detail at what stage and to what extent the different decomposition methods should be used. The new methods are evaluated by training feedforward networks to solve pattern classification tasks, and by training recurrent networks to solve grammatical inference problems. Efficient problem decomposition methods group interacting variables into the same subcomponents. We examine the methods from the literature and provide an analysis of the nature of the neural network optimization problem in terms of interacting variables. We then present a novel problem decomposition method that groups interacting variables and that can be generalized to neural networks with more than a single hidden layer. We then incorporate local search into cooperative neuro-evolution, presenting a memetic cooperative coevolution method that takes into account the cost of employing local search across several sub-populations. The optimization process changes during evolution in terms of diversity and interacting variables; to address this, we examine the adaptation of the problem decomposition method during the evolutionary process. The results in this thesis show that the proposed methods improve performance in terms of optimization time, scalability and robustness. As a further test, we apply the problem decomposition and adaptive cooperative coevolution methods to training recurrent neural networks on chaotic time series problems. The proposed methods show better performance in terms of accuracy and robustness.
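As a rough illustration of the general idea, not of the thesis's specific decomposition, memetic local search, or adaptation schemes, the following sketch trains a small feedforward network on XOR by cooperative coevolution: the weights are split into neuron-level subcomponents, and each subcomponent is improved by mutation and selection while the others are held fixed in a shared context vector. All problem sizes and hyperparameters are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy task: XOR classification with a single-hidden-layer network.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])

H = 4                                            # hidden neurons
shapes = [(2, H), (H,), (H, 1), (1,)]            # W1, b1, W2, b2
sizes = [int(np.prod(s)) for s in shapes]
D = sum(sizes)

def unflatten(flat):
    out, i = [], 0
    for s, n in zip(shapes, sizes):
        out.append(flat[i:i + n].reshape(s))
        i += n
    return out

def forward(flat, X):
    W1, b1, W2, b2 = unflatten(flat)
    hidden = np.tanh(X @ W1 + b1)
    z = (hidden @ W2 + b2).ravel()
    return 1.0 / (1.0 + np.exp(-z))              # sigmoid output

def loss(flat):
    return float(np.mean((forward(flat, X) - y) ** 2))

# Neuron-level decomposition: one subcomponent per hidden neuron (its two
# incoming weights, its bias, and its outgoing weight); the output bias
# forms its own group.
groups = [[j, H + j, 2 * H + j, 3 * H + j] for j in range(H)] + [[4 * H]]

# Cooperative coevolution in miniature: each subcomponent is improved in turn
# by mutation-and-selection, evaluated in the context of the current best
# values of all other subcomponents.
best = rng.normal(0.0, 0.5, D)
best_loss = loss(best)
for gen in range(400):
    for g in groups:
        for _ in range(5):                       # a few mutants per subcomponent
            cand = best.copy()
            cand[g] += rng.normal(0.0, 0.2, len(g))
            c_loss = loss(cand)
            if c_loss < best_loss:
                best, best_loss = cand, c_loss

print("final MSE:", round(best_loss, 4))
print("predictions:", np.round(forward(best, X), 2))
```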


Author(s): Marc Hallin

Unlike the real line, real space in dimension d ≥ 2 is not canonically ordered. As a consequence, extending fundamental univariate statistical tools such as quantiles, signs, and ranks to a multivariate context is anything but obvious. Tentative definitions have been proposed in the literature but do not enjoy the basic properties (e.g., distribution-freeness of ranks, their independence of the order statistic, their independence of the signs) they are expected to satisfy. Based on measure transportation ideas, new concepts of distribution and quantile functions, ranks, and signs have been proposed recently that, unlike previous attempts, do satisfy these properties. These ranks, signs, and quantiles have been used, quite successfully, in several inference problems and have triggered, in a short span of time, a number of applications: fully distribution-free testing for multiple-output regression, MANOVA, and VAR models; R-estimation for VARMA parameters; distribution-free testing for vector independence; multiple-output quantile regression; nonlinear independent component analysis; and so on.
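A simplified empirical version of the measure-transportation construction can be sketched as follows: data points are optimally coupled to a reference sample that is roughly uniform on the unit disk, and each point's center-outward rank and sign are read off from its assigned reference point. The sample size, the distribution used, and the use of a random rather than structured reference grid are illustrative choices; the formal definition and its distribution-freeness properties are as described in the abstract.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
n = 200
X = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.6], [0.6, 1.0]], size=n)

# Reference points: roughly uniform over the unit disk (square-root radii,
# random directions).
u = (np.arange(1, n + 1) - 0.5) / n
radii = np.sqrt(u)
angles = rng.uniform(0.0, 2.0 * np.pi, n)
G = np.column_stack([radii * np.cos(angles), radii * np.sin(angles)])

# Empirical optimal coupling for squared Euclidean cost: assign each data
# point to one reference point so that the total transport cost is minimal.
cost = ((X[:, None, :] - G[None, :, :]) ** 2).sum(axis=2)
_, cols = linear_sum_assignment(cost)

F_plus = G[cols]                              # empirical center-outward map
ranks = np.linalg.norm(F_plus, axis=1)        # center-outward ranks in (0, 1]
signs = F_plus / ranks[:, None]               # center-outward signs (unit vectors)
print("first five ranks:", np.round(ranks[:5], 3))
```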


Author(s): Assia Chadli, Sara Kermoune

In this paper, we consider inference problems, including estimation, for a Rayleigh Pareto (RP) distribution under progressively type-II right-censored data. We use two approaches to estimate the distribution parameters and the reliability characteristics: classical maximum likelihood and Bayesian estimation. Bayes estimators and their corresponding posterior risks (PR) are derived under different loss functions (symmetric and asymmetric). The estimators cannot be obtained in closed form, so we resort to Monte Carlo methods. Finally, we use the integrated mean square error (IMSE) and the Pitman closeness criterion to compare the results of the two approaches.
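The comparison criteria can be illustrated with a Monte Carlo sketch on a much simpler model than the paper's: an exponential sample with no censoring, a Gamma(1, 1) prior, and squared-error loss, all assumptions made here only for brevity. The Pitman closeness criterion counts how often one estimator lands closer to the true parameter than the other, while mean square error averages the squared deviations (the IMSE further integrates such errors over parameter values).

```python
import numpy as np

rng = np.random.default_rng(2)
theta, n, reps = 2.0, 30, 5000           # true rate, sample size, Monte Carlo replications

closer, se_mle, se_bayes = 0, [], []
for _ in range(reps):
    x = rng.exponential(1.0 / theta, n)              # simulated (uncensored) sample
    mle = n / x.sum()                                # maximum likelihood estimate of the rate
    bayes = (n + 1.0) / (x.sum() + 1.0)              # posterior mean under a Gamma(1, 1) prior
    closer += abs(mle - theta) < abs(bayes - theta)
    se_mle.append((mle - theta) ** 2)
    se_bayes.append((bayes - theta) ** 2)

print("Pitman closeness  P(|MLE - theta| < |Bayes - theta|):", closer / reps)
print("MSE(MLE):  ", round(float(np.mean(se_mle)), 4))
print("MSE(Bayes):", round(float(np.mean(se_bayes)), 4))
```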


2021
Author(s): Timon Wittenstein, Nava Leibovich, Andreas Hilfinger

Quantifying biochemical reaction rates within complex cellular processes remains a key challenge of systems biology, even as high-throughput single-cell data have become available to characterize snapshots of population variability. That is because complex systems with stochastic and non-linear interactions are difficult to analyze when not all components can be observed simultaneously and systems cannot be followed over time. Instead of using descriptive statistical models, we show that incompletely specified mechanistic models can be used to translate qualitative knowledge of interactions, together with covariability data between pairs of components, into reaction rate functions. This promises to turn a globally intractable problem into a sequence of solvable inference problems to quantify complex interaction networks from incomplete snapshots of their stochastic fluctuations.
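A toy version of this idea, with a hypothetical two-component cascade and rate constants chosen only for illustration, is sketched below: qualitative knowledge of the network structure (which species drives the production of which) combined with single-cell snapshot statistics pins down a rate constant, here through a stationary flux-balance relation on the means rather than the paper's more general covariance-based framework.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical cascade (rates not from the paper):
# m is produced at rate k_m and degraded at rate g_m * m;
# p is produced at rate k_p * m and degraded at rate g_p * p.
k_m, g_m, k_p, g_p = 5.0, 1.0, 2.0, 0.5

def snapshot(t_end=20.0):
    """Gillespie simulation returning one cell's (m, p) counts at time t_end."""
    t, m, p = 0.0, 0, 0
    while True:
        rates = np.array([k_m, g_m * m, k_p * m, g_p * p])
        total = rates.sum()
        t += rng.exponential(1.0 / total)
        if t >= t_end:
            return m, p
        event = rng.choice(4, p=rates / total)
        if event == 0:
            m += 1
        elif event == 1:
            m -= 1
        elif event == 2:
            p += 1
        else:
            p -= 1

cells = np.array([snapshot() for _ in range(200)])   # one snapshot per "cell"
m_bar, p_bar = cells.mean(axis=0)

# Stationary flux balance for p:  k_p * E[m] = g_p * E[p], so the production
# rate constant is recoverable from snapshot averages once g_p is known.
print("true k_p:", k_p, "  inferred k_p:", round(g_p * p_bar / m_bar, 2))
```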


2021
Author(s): Diego Aineto, Sergio Jimenez, Eva Onaindia

This paper introduces the Temporal Inference Problem (TIP), a general formulation for a family of inference problems that reason about the past, present, or future state of some observed agent. A TIP builds on the models of an actor and of an observer. Observations of the actor are gathered at arbitrary times, and a TIP encodes hypotheses about unobserved segments of the actor's trajectory. Regarding the last observation as the present time, a TIP makes it possible to hypothesize about the actor's past trajectory, future trajectory, or current state. We use LTL as a language for expressing hypotheses and reduce a TIP to a planning problem, which is solved with an off-the-shelf classical planner. The output of a TIP is the most likely hypothesis, namely the one supported by the minimal-cost trajectory under the assumption that the actor is rational. Our proposal is evaluated on a wide range of TIP instances defined over different planning domains.
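The flavor of such an inference problem can be conveyed with a toy example that does not use LTL or a classical planner; the grid domain, observations, and hypothesis below are made-up illustrations. Candidate trajectories consistent with the observations are enumerated, and the hypothesis supported by the cheaper trajectory is preferred, mirroring the assumption of a rational, cost-minimizing actor.

```python
from itertools import product

# Toy setting: an actor walks on cells 0..4; each unit move costs 1, waiting
# costs 0.  We observe it at cell 0 at t=0 and at cell 2 at t=6, and ask which
# hypothesis about the unobserved segment is more likely for a cost-minimising
# actor: H = "it visited cell 4 at some point" versus not-H.

MOVES = (-1, 0, 1)
T, START, OBS = 6, 0, {0: 0, 6: 2}          # observations: time -> cell

def trajectories():
    """Yield (trajectory, cost) for every path consistent with the observations."""
    for steps in product(MOVES, repeat=T):
        traj, ok = [START], True
        for s in steps:
            nxt = traj[-1] + s
            if not 0 <= nxt <= 4:
                ok = False
                break
            traj.append(nxt)
        if ok and all(traj[t] == c for t, c in OBS.items()):
            yield traj, sum(abs(s) for s in steps)

def min_cost(pred):
    costs = [c for traj, c in trajectories() if pred(traj)]
    return min(costs) if costs else float("inf")

cost_h = min_cost(lambda tr: 4 in tr)        # "eventually at cell 4"
cost_not_h = min_cost(lambda tr: 4 not in tr)
print("min cost assuming H:", cost_h, "  assuming not-H:", cost_not_h)
print("most likely hypothesis:", "H" if cost_h < cost_not_h else "not-H")
```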


2021
Vol 11 (7), pp. 363
Author(s): Jesús Guadalupe Lugo-Armenta, Luis Roberto Pino-Fan

The COVID-19 pandemic generated a new scenario in education in which technological resources mediate teaching and learning processes. This paper presents the development of a virtual teacher training experience aimed at promoting inferential reasoning in practicing and prospective mathematics teachers, using inference problems based on the chi-square statistic. The objective of this article is to assess the implemented or intended institutional meanings and the degree of availability and adequacy of the material and temporal resources necessary for the development of the training experience. For this purpose, we use theoretical and methodological notions introduced by the Ontosemiotic Approach to Mathematical Knowledge and Instruction (OSA), among them the notions of practice and suitability criteria. The participants were divided into three groups: one comprised practicing teachers and the other two prospective teachers. The intervention used different virtual modalities, which developed the participants' inferential reasoning in comparable ways.


2021
Vol 17 (6), pp. e1009025
Author(s): Jonathan Cannon

When presented with complex rhythmic auditory stimuli, humans are able to track the underlying temporal structure (e.g., a "beat"), both covertly and with their movements. This capacity goes far beyond that of a simple entrained oscillator, drawing on contextual and enculturated timing expectations and adjusting rapidly to perturbations in event timing, phase, and tempo. Previous modeling work has described how entrainment to rhythms may be shaped by event timing expectations, but sheds little light on any underlying computational principles that could unify the phenomenon of expectation-based entrainment with other brain processes. Inspired by the predictive processing framework, we propose that the problem of rhythm tracking is naturally characterized as a problem of continuously estimating an underlying phase and tempo based on precise event times and their correspondence to timing expectations. We present two inference problems formalizing this insight: PIPPET (Phase Inference from Point Process Event Timing) and PATIPPET (Phase and Tempo Inference from Point Process Event Timing). Variational solutions to these inference problems resemble previous "Dynamic Attending" models of perceptual entrainment but introduce new terms representing the dynamics of uncertainty and the influence of expectations in the absence of sensory events. These terms allow us to model multiple characteristics of covert and motor human rhythm tracking not addressed by other models, including the sensitivity of error corrections to the inter-event interval and the perceived tempo changes induced by event omissions. We show that positing these novel influences in human entrainment yields a range of testable behavioral predictions. Guided by recent neurophysiological observations, we attempt to align the phase inference framework with a specific brain implementation. We also explore the potential of this normative framework to guide the interpretation of experimental data and to serve as a building block for even richer predictive processing and active inference models of timing.
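A heavily simplified Gaussian phase-tracking sketch conveys the flavor of this kind of inference; the update rule and parameter values below are illustrative simplifications, not the PIPPET or PATIPPET equations. Between events the phase estimate drifts at an assumed fixed tempo while its uncertainty grows, and each event, expected near integer phases, triggers a Kalman-like correction whose strength depends on the current uncertainty.

```python
import numpy as np

# Assumed setup: phase advances at `tempo` per second with diffusive
# uncertainty; events are expected near integer phases with timing precision
# `sigma_event` (in phase units).  Event times are made up for illustration.
tempo = 1.0            # expected phase advance per second
sigma_phase = 0.05     # growth of phase uncertainty between events
sigma_event = 0.03     # precision of the timing expectation

events = [1.02, 1.97, 3.10, 4.01]     # observed event times (seconds)

mu, var, t = 0.0, 0.01, 0.0
for te in events:
    dt = te - t
    mu += tempo * dt                  # drift between events
    var += sigma_phase**2 * dt        # uncertainty grows without sensory input
    expected = round(mu)              # nearest expected (integer) phase
    K = var / (var + sigma_event**2)  # Kalman-like gain
    mu += K * (expected - mu)         # pull the estimate toward the expectation
    var *= (1.0 - K)
    t = te
    print(f"t={te:.2f}s  phase estimate={mu:.3f}  variance={var:.4f}")
```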

