AN MLP TRAINING ALGORITHM TAKING INTO ACCOUNT KNOWN ERRORS ON INPUTS AND OUTPUTS

2002 ◽  
Vol 12 (05) ◽  
pp. 369-379
Author(s):  
J. SVENSSON

A training algorithm is introduced that takes into account a priori known errors on both the inputs and outputs of an MLP network. The new cost function introduced for this case is based on a linear approximation of the network function over the input distribution for a given input pattern. Update formulas, in the form of the gradient of the new cost function, are given for an MLP network, together with expressions for the Hessian matrix, which is later used to calculate error bars in a Bayesian framework. The error bars thus derived are discussed in relation to the more commonly used width of the target posterior predictive distribution. It is also shown that taking known input uncertainties into account in the way suggested in this article has a strong regularizing effect on the solution.
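As a rough sketch of the cost function described above (assuming a one-hidden-layer tanh network; the function names and the Gaussian-likelihood form are illustrative, not taken from the paper), the linearization propagates a known input covariance through the network's gradient with respect to its input:

```python
import numpy as np

def mlp_and_input_grad(x, W1, b1, w2, b2):
    """One-hidden-layer MLP f(x) = w2 . tanh(W1 x + b1) + b2, plus df/dx."""
    h = np.tanh(W1 @ x + b1)
    f = w2 @ h + b2
    g = W1.T @ (w2 * (1.0 - h ** 2))      # chain rule through tanh
    return f, g

def linearized_cost(x, t, Sigma_x, sigma_t2, W1, b1, w2, b2):
    """Per-pattern cost with known input covariance Sigma_x and output
    variance sigma_t2: the linear approximation f(x + e) ~ f(x) + g . e
    inflates the predictive variance by g' Sigma_x g."""
    f, g = mlp_and_input_grad(x, W1, b1, w2, b2)
    var_total = g @ Sigma_x @ g + sigma_t2
    return 0.5 * ((f - t) ** 2 / var_total + np.log(var_total))

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 3)), np.zeros(8)
w2, b2 = rng.normal(size=8), 0.0
cost = linearized_cost(np.ones(3), 0.5, 0.01 * np.eye(3), 0.05, W1, b1, w2, b2)
```

The g' Sigma_x g term penalizes large input gradients, which is one way to see the regularizing effect the abstract mentions.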

Complexity ◽  
2018 ◽  
Vol 2018 ◽  
pp. 1-9 ◽  
Author(s):  
A. Corberán-Vallet ◽  
F. J. Santonja ◽  
M. Jornet-Sanz ◽  
R.-J. Villanueva

We present a Bayesian stochastic susceptible-exposed-infectious-recovered (SEIR) model in discrete time to understand chickenpox transmission in the Valencian Community, Spain. During the last decades, different strategies have been introduced into the routine immunization program to reduce the impact of this disease, which remains a great public health concern. Under this scenario, a model capable of closely explaining the dynamics of chickenpox under the different vaccination strategies is of the utmost importance for assessing their effectiveness. The proposed model takes into account both heterogeneous mixing of individuals in the population and the inherent stochasticity in the transmission of the disease. As shown in a comparative study, these assumptions are fundamental to properly describing the evolution of the disease. The Bayesian analysis of the model allows us to calculate the posterior distribution of the model parameters and the posterior predictive distribution of chickenpox incidence, which facilitates the computation of point forecasts and prediction intervals.
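A minimal chain-binomial sketch of one discrete-time stochastic SEIR step (homogeneous mixing only; the paper's model additionally handles heterogeneous mixing and vaccination, and all parameter values here are made up):

```python
import numpy as np

rng = np.random.default_rng(0)

def seir_step(S, E, I, R, beta, sigma, gamma, N):
    """One discrete time step with binomial transitions between compartments."""
    new_E = rng.binomial(S, 1.0 - np.exp(-beta * I / N))  # S -> E (infection)
    new_I = rng.binomial(E, 1.0 - np.exp(-sigma))         # E -> I (latency over)
    new_R = rng.binomial(I, 1.0 - np.exp(-gamma))         # I -> R (recovery)
    return S - new_E, E + new_E - new_I, I + new_I - new_R, R + new_R

S, E, I, R, N = 9990, 0, 10, 0, 10000
incidence = []
for _ in range(52):                       # one year of weekly steps
    S, E, I, R = seir_step(S, E, I, R, beta=0.8, sigma=0.5, gamma=0.4, N=N)
    incidence.append(I)
```

In a Bayesian analysis, repeating such simulations over posterior draws of (beta, sigma, gamma) yields the posterior predictive distribution of incidence.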


Author(s):  
M.M. Manene

The performance of step-wise group screening with unequal a priori probabilities, in terms of the expected number of runs and the expected maximum number of incorrect decisions, is considered. A method of obtaining optimal step-wise designs with unequal a priori probabilities is presented for the case in which the direction of each defective factor is assumed to be known a priori and observations are subject to error. An appropriate cost function is introduced, and the value of the group size which minimizes the expected total cost is obtained.
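For intuition only, a sketch of choosing a group size by minimizing expected runs in a simplified two-stage (Dorfman-type) screen with a common defect probability; the paper's step-wise procedure with unequal a priori probabilities and error-prone observations is more involved:

```python
import numpy as np

def expected_runs(k, p, n):
    """Expected runs for n factors screened in groups of size k, each factor
    defective with probability p: one run per group plus individual runs for
    every factor in a group that shows an effect."""
    p_group = 1.0 - (1.0 - p) ** k         # group contains >= 1 defective
    return (n / k) * (1.0 + k * p_group)

ks = np.arange(2, 51)
costs = np.array([expected_runs(k, p=0.02, n=1000) for k in ks])
best_k = ks[costs.argmin()]                # group size minimizing expected runs
```

The paper's cost function additionally weighs the expected maximum number of incorrect decisions and allows each factor its own a priori probability.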


2021 ◽  
Author(s):  
Vincent Savaux ◽  
Christophe Delacourt ◽  
Patrick Savelli

This paper deals with time and frequency synchronization in LoRa systems based on the preamble symbols. A thorough analysis of the maximum likelihood (ML) estimator of the delay (time offset) and the frequency offset shows that the resulting cost function is not concave. As a consequence, the a priori solution to the maximization problem consists in exhaustively searching over all possible values of both the delay and the frequency offset. Furthermore, it is shown that these parameters are intertwined and must therefore be jointly estimated, leading to an extremely complex solution. Alternatively, we show that it is possible to recover the concavity of the cost function, from which we suggest a low-complexity synchronization algorithm whose steps are described in detail. Simulation results show that the suggested method reaches the same performance as the ML exhaustive search, while the complexity is drastically reduced, allowing for a real-time implementation of a LoRa receiver.
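To make the complexity argument concrete, here is a sketch of the exhaustive joint search the abstract describes as the a priori ML solution (an idealized baseband chirp and made-up parameters; the paper's contribution is a low-complexity alternative to this):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 128                                    # samples per (idealized) chirp symbol

def upchirp(N):
    n = np.arange(N)
    return np.exp(1j * np.pi * n * n / N)

def ml_grid_search(rx, N, cfo_grid):
    """Exhaustive joint search over integer delay and normalized CFO,
    maximizing the matched-filter correlation magnitude."""
    ref, n = upchirp(N), np.arange(N)
    best = (0, 0.0, -np.inf)
    for tau in range(len(rx) - N):
        seg = rx[tau:tau + N]
        for cfo in cfo_grid:
            metric = abs(np.vdot(ref * np.exp(2j * np.pi * cfo * n / N), seg))
            if metric > best[2]:
                best = (tau, cfo, metric)
    return best[:2]

true_tau, true_cfo, n = 37, 0.2, np.arange(N)
rx = np.zeros(4 * N, dtype=complex)
rx[true_tau:true_tau + N] = upchirp(N) * np.exp(2j * np.pi * true_cfo * n / N)
rx += 0.1 * (rng.normal(size=rx.size) + 1j * rng.normal(size=rx.size))
tau_hat, cfo_hat = ml_grid_search(rx, N, np.linspace(-0.5, 0.5, 41))
```

The run time grows with the product of the two grid sizes, which is the cost the paper's concavity-restoring algorithm avoids.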


Author(s):  
Ryota Wada ◽  
Takuji Waseda

Extreme value estimation of significant wave height is essential for designing robust and economically efficient ocean structures. In most cases, however, the duration of observational wave data is not sufficient for a precise estimate of the extreme value at the desired return period, and the situation is worse in hurricane-dominated oceans. The uncertainty of the extreme value estimation is the main topic of this paper. We use the Likelihood-Weighted Method (LWM), a method that quantifies the uncertainty of extreme value estimation in terms of aleatory and epistemic uncertainty. We considered the extreme values of hurricane-dominated regions such as Japan and the Gulf of Mexico. Though observational data are available for more than 30 years in the Gulf of Mexico, the epistemic uncertainty for the 100-year return period value is notably large. Extreme value estimation from a 10-year record of observational data, a typical case in Japan, gave a coefficient of variation of 43%. This may have an impact on the design rules of ocean structures. The consideration of epistemic uncertainty also gives a rational explanation for past extreme events that were considered abnormal. The Expected Extreme Value distribution (EEV), which is the posterior predictive distribution, yields better-defined extreme values that account for the epistemic uncertainty.
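A sketch of the EEV idea, assuming GEV posterior draws for annual maxima are already available (scipy's genextreme parameterization; the LWM weighting itself is not shown): average the GEV CDF over the posterior draws, then read off the 1 - 1/T quantile as the T-year return value.

```python
import numpy as np
from scipy.stats import genextreme

def eev_return_value(posterior_draws, T=100.0, grid=np.linspace(0.0, 40.0, 4001)):
    """Posterior predictive CDF of the annual maximum: average the GEV CDF
    over posterior draws (shape c, location, scale in scipy's convention).
    The grid must extend beyond the return value of interest."""
    cdf = np.zeros_like(grid)
    for c, loc, scale in posterior_draws:
        cdf += genextreme.cdf(grid, c, loc=loc, scale=scale)
    cdf /= len(posterior_draws)
    return grid[np.searchsorted(cdf, 1.0 - 1.0 / T)]

rng = np.random.default_rng(0)
draws = [(-0.1, 8.0 + 0.5 * e, 1.5) for e in rng.normal(size=500)]  # fake posterior
h100 = eev_return_value(draws)             # 100-year significant wave height
```

Averaging CDFs rather than plugging in a point estimate is what folds the epistemic (parameter) uncertainty into the predictive extreme value.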


2016 ◽  
Vol 115 (5) ◽  
pp. 2593-2607 ◽  
Author(s):  
Sarah E. Jones ◽  
Mathias Dutschmann

Degeneracy of respiratory network function would imply that anatomically discrete aspects of the brain stem are capable of producing respiratory rhythm. To test this theory we a priori transected brain stem preparations before reperfusion and reoxygenation at 4 rostrocaudal levels: 1.5 mm caudal to obex (n = 5), at obex (n = 5), and 1.5 mm (n = 7) and 3 mm (n = 6) rostral to obex. The respiratory activity of these preparations was assessed via recordings of phrenic and vagal nerves and lumbar spinal expiratory motor output. Preparations with a priori transection at the level of the caudal brain stem did not produce stable rhythmic respiratory bursting, even when the arterial chemoreceptors were stimulated with sodium cyanide (NaCN). Reperfusion of brain stems that preserved the pre-Bötzinger complex (pre-BötC) showed spontaneous and sustained rhythmic respiratory bursting at low phrenic nerve activity (PNA) amplitude that occurred simultaneously in all respiratory motor outputs. We refer to this rhythm as the pre-BötC burstlet-type rhythm. Conserving circuitry up to the pontomedullary junction consistently produced robust high-amplitude PNA at lower burst rates, whereas sequential motor patterning across the respiratory motor outputs remained absent. Some of the rostrally transected preparations expressed both burstlet-type and regular PNA amplitude rhythms. Further analysis showed that the burstlet-type rhythm and high-amplitude PNA had a 1:2 quantal relation, with burstlets appearing to trigger high-amplitude bursts. We conclude that no degenerate rhythmogenic circuits are located in the caudal medulla oblongata and confirm the pre-BötC as the primary rhythmogenic kernel. The absence of sequential motor patterning in a priori transected preparations suggests that pontine circuits govern respiratory pattern formation.


2019 ◽  
Author(s):  
Donald Ray Williams ◽  
Philippe Rast ◽  
Luis Pericchi ◽  
Joris Mulder

Gaussian graphical models are commonly used to characterize conditional independence structures (i.e., networks) of psychological constructs. Recently, attention has shifted from estimating single networks to comparing networks across various sub-populations. The focus is primarily on detecting differences or demonstrating replicability. We introduce two novel Bayesian methods for comparing networks that explicitly address these aims. The first is based on the posterior predictive distribution, with Kullback-Leibler divergence as the discrepancy measure, and tests for differences between two multivariate normal distributions. The second approach makes use of Bayesian model selection, with the Bayes factor, and allows for gaining evidence for invariant network structures. This overcomes limitations of current approaches in the literature that use classical hypothesis testing, where it is only possible to determine whether groups are significantly different from each other. With simulations we show that the posterior predictive method is approximately calibrated under the null hypothesis ($\alpha = 0.05$) and has more power to detect differences than alternative approaches. We then examine the sample sizes necessary for detecting invariant network structures with Bayesian hypothesis testing, in addition to how this is influenced by the choice of prior distribution. The methods are applied to post-traumatic stress disorder symptoms that were measured in four groups. We end by summarizing our major contribution, namely two novel methods for comparing GGMs, whose applicability extends beyond the social-behavioral sciences. The methods have been implemented in the R package BGGM.
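A simplified, parametric-bootstrap analogue of the first method (the actual test draws from the posterior and is implemented in BGGM; the names and null-sampling scheme here are illustrative): the KL divergence between the two groups' fitted multivariate normals serves as the discrepancy, and its null distribution comes from a common fit to the pooled data.

```python
import numpy as np

def kl_mvn(m0, S0, m1, S1):
    """KL( N(m0, S0) || N(m1, S1) ) between two multivariate normals."""
    k, S1_inv, d = len(m0), np.linalg.inv(S1), m1 - m0
    return 0.5 * (np.trace(S1_inv @ S0) + d @ S1_inv @ d - k
                  + np.log(np.linalg.det(S1) / np.linalg.det(S0)))

def predictive_pvalue(X1, X2, n_rep=1000, seed=1):
    """Observed between-group KL versus its null distribution under a
    common multivariate normal fitted to the pooled data."""
    rng = np.random.default_rng(seed)
    obs = kl_mvn(X1.mean(0), np.cov(X1.T), X2.mean(0), np.cov(X2.T))
    pooled = np.vstack([X1, X2])
    m, S = pooled.mean(0), np.cov(pooled.T)
    null = np.empty(n_rep)
    for r in range(n_rep):
        Y1 = rng.multivariate_normal(m, S, len(X1))
        Y2 = rng.multivariate_normal(m, S, len(X2))
        null[r] = kl_mvn(Y1.mean(0), np.cov(Y1.T), Y2.mean(0), np.cov(Y2.T))
    return np.mean(null >= obs)            # small p-value: networks differ

rng = np.random.default_rng(0)
X1 = rng.multivariate_normal(np.zeros(5), np.eye(5), 200)
X2 = rng.multivariate_normal(np.zeros(5), np.eye(5), 200)
p = predictive_pvalue(X1, X2)              # roughly uniform under the null
```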


Mathematics ◽  
2021 ◽  
Vol 9 (22) ◽  
pp. 2921
Author(s):  
Stefano Cabras

This work proposes a semi-parametric approach to estimating the evolution of COVID-19 (SARS-CoV-2) in Spain. Considering the sequences of 14-day cumulative incidence of all Spanish regions, it combines modern Deep Learning (DL) techniques for analyzing sequences with the usual Bayesian Poisson-Gamma model for counts. The DL model provides a suitable description of the observed time series of counts, but it cannot give a reliable uncertainty quantification. In the proposed modelling approach, the DL predictions play the role of an expert elicitation of the expected number of counts and its reliability. Finally, the posterior predictive distribution of counts is obtained in a standard Bayesian analysis using the well-known Poisson-Gamma model. The model makes it possible to predict the future evolution of the sequences in all regions and to estimate the consequences of eventual scenarios.
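The Poisson-Gamma step is standard conjugate math; here is a sketch in which a DL point forecast is mapped to a Gamma prior (this mapping, and the names dl_mean/dl_strength, are assumptions for illustration). The resulting posterior predictive for a new count is negative binomial:

```python
import numpy as np
from scipy.stats import nbinom

def poisson_gamma_predictive(y_obs, dl_mean, dl_strength):
    """Gamma(a0, b0) prior elicited from a DL forecast (prior mean dl_mean,
    prior pseudo-sample-size dl_strength), updated with observed counts;
    the posterior predictive for a new count is negative binomial."""
    a0, b0 = dl_mean * dl_strength, dl_strength
    a, b = a0 + np.sum(y_obs), b0 + len(y_obs)
    return nbinom(a, b / (b + 1.0))        # y_new ~ NB(r = a, p = b/(b+1))

pred = poisson_gamma_predictive([210, 198, 223], dl_mean=205.0, dl_strength=2.0)
lo, hi = pred.ppf(0.025), pred.ppf(0.975)  # 95% prediction interval for counts
```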


Author(s):  
Daniel Hulse ◽  
Christopher Hoyle ◽  
Irem Y. Tumer ◽  
Kai Goebel ◽  
Chetan Kulkarni

Resilience models assess a system's ability to withstand disruption by quantifying the value of metrics (e.g., expected cost or loss) over time. When such a metric is the result of injecting faults in a dynamic model over an interval of time, it is important that it represent the statistical expectation of fault responses rather than a single response. Since fault responses vary over fault injection times, representing the statistical expectation of responses requires sampling a number of points. However, fault models are often built around computationally expensive dynamic simulations, and it is desirable to iterate over designs as quickly as possible to improve system resilience. With this in mind, this paper explores approaches to sampling fault injection times that minimize computational cost while accurately representing the expectation of fault resilience metrics over the set of possible occurrence times. Two general approaches are presented: an a priori approach that attempts to minimize error without knowing the underlying cost function, and an a posteriori approach that minimizes error when the cost function is known. Among a priori methods, numerical integration minimizes error and computational time compared to Monte Carlo sampling; however, both are prone to error when the metric's fault response curve is discontinuous. While a posteriori approaches can locate and correct for these discontinuities, the resulting error reduction is not robust to design changes that shift the underlying location of discontinuities. The ultimate decision to use an a priori or a posteriori approach to quantify resilience is thus dependent on a number of considerations, including computational cost, the robustness of the approximation to design changes, and the underlying form of the resilience function.
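A toy comparison of the two a priori schemes on a deliberately discontinuous fault-cost curve (the cost function and all numbers are made up):

```python
import numpy as np

def fault_cost(t):
    """Stand-in resilience metric: cost of a fault injected at time t,
    with a deliberate discontinuity at t = 2."""
    return np.where(t < 2.0, 5.0 + t, 1.0 + 0.1 * t)

T = 10.0                                   # faults occur uniformly on [0, T]

# Monte Carlo: random injection times
rng = np.random.default_rng(0)
mc_estimate = fault_cost(rng.uniform(0.0, T, 200)).mean()

# Numerical integration: fixed nodes, trapezoid rule
t = np.linspace(0.0, T, 201)
y = fault_cost(t)
quad_estimate = (((y[:-1] + y[1:]) / 2.0) * np.diff(t)).sum() / T
```

Both schemes target the expected cost over occurrence times without inspecting the response curve; the jump at t = 2 is exactly the kind of feature that degrades them and that an a posteriori scheme would locate and refine around.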


Author(s):  
Therese M. Donovan ◽  
Ruth M. Mickey

While one of the most common uses of Bayes’ Theorem is in the statistical analysis of a dataset (i.e., statistical modeling), this chapter examines another application of Gibbs sampling: parameter estimation for simple linear regression. In the “Survivor Problem,” the chapter considers the relationship between how many days a contestant lasts in a reality-show competition as a function of how many years of formal education they have. This chapter is a bit more complicated than the previous chapter because it involves estimation of the joint posterior distribution of three parameters. As in earlier chapters, the estimation process is described in detail on a step-by-step basis. Finally, the posterior predictive distribution is estimated and discussed. By the end of the chapter, the reader will have a firm understanding of the following concepts: linear equation, sums of squares, posterior predictive distribution, and linear regression with Markov Chain Monte Carlo and Gibbs sampling.
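A compact sketch of the chapter's setup (made-up data; flat priors on the intercept and slope and a Jeffreys prior on the variance, which may differ from the book's choices): a three-parameter Gibbs sampler for simple linear regression, followed by draws from the posterior predictive distribution.

```python
import numpy as np

rng = np.random.default_rng(42)
x = np.array([10., 12., 12., 14., 16., 16., 18., 20.])   # years of education
y = np.array([12., 18., 15., 25., 28., 24., 33., 35.])   # days lasted
n = len(x)

a, b, s2 = 0.0, 0.0, 1.0                   # intercept, slope, error variance
draws = np.empty((5000, 3))
for i in range(draws.shape[0]):
    a = rng.normal((y - b * x).mean(), np.sqrt(s2 / n))          # a | b, s2
    b = rng.normal(((y - a) * x).sum() / (x ** 2).sum(),
                   np.sqrt(s2 / (x ** 2).sum()))                 # b | a, s2
    resid = y - a - b * x
    s2 = 1.0 / rng.gamma(n / 2.0, 2.0 / (resid @ resid))         # s2 | a, b
    draws[i] = a, b, s2

a_s, b_s, s2_s = draws[1000:].T            # discard burn-in
y_new = rng.normal(a_s + b_s * 15.0, np.sqrt(s2_s))   # predictive at x = 15
interval = np.percentile(y_new, [2.5, 97.5])          # 95% prediction interval
```

Each full sweep samples every parameter from its conditional given the others, which is exactly the step-by-step procedure the chapter walks through before turning to the posterior predictive distribution.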

