Stochastic approximation cut algorithm for inference in modularized Bayesian models

2021 ◽  
Vol 32 (1) ◽  
Author(s):  
Yang Liu ◽  
Robert J. B. Goudie

Bayesian modelling enables us to accommodate complex forms of data and make comprehensive inferences, but the effect of partial misspecification of the model is a concern. One approach in this setting is to modularize the model and prevent feedback from suspect modules, using a cut model. After observing data, this leads to the cut distribution, which normally does not have a closed form. Previous studies have proposed algorithms to sample from this distribution, but these algorithms have unclear theoretical convergence properties. To address this, we propose a new algorithm, the stochastic approximation cut (SACut) algorithm, as an alternative. The algorithm is divided into two parallel chains: the main chain targets an approximation to the cut distribution, while the auxiliary chain is used to form an adaptive proposal distribution for the main chain. We prove convergence of the samples drawn by the proposed algorithm and present the exact limit. Although SACut is biased, because the main chain does not target the exact cut distribution, we prove that this bias can be reduced geometrically by increasing a user-chosen tuning parameter. In addition, parallel computing can easily be adopted for SACut, which greatly reduces computation time.
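The two-chain structure can be sketched on a toy two-module model (a minimal illustration, not the authors' SACut: the Gaussian modules, the data values, and the use of the auxiliary chain's samples as an independence-proposal pool are all assumptions made here for concreteness):

```python
import math
import random

random.seed(1)

# Toy two-module model:
#   module 1: theta ~ N(0,1) prior,      Y_i ~ N(theta, 1)
#   module 2: phi   ~ N(theta,1) prior,  Z_i ~ N(phi, 1)
# Cutting feedback means theta is informed by Y alone, never by Z.
Y = [0.8, 1.2, 1.0, 0.6]   # data for the trusted module
Z = [2.9, 3.1]             # data for the suspect module

def log_post_theta(t):     # log p(theta | Y), up to a constant
    return -0.5 * t * t - 0.5 * sum((y - t) ** 2 for y in Y)

def log_cond_phi(p, t):    # log p(phi | theta, Z), up to a constant
    return -0.5 * (p - t) ** 2 - 0.5 * sum((z - p) ** 2 for z in Z)

# Auxiliary chain: random-walk Metropolis on p(theta | Y); its samples
# form the pool used as an adaptive proposal for the main chain.
pool, t = [], 0.0
for _ in range(5000):
    t_new = t + random.gauss(0, 0.5)
    if math.log(random.random()) < log_post_theta(t_new) - log_post_theta(t):
        t = t_new
    pool.append(t)

# Main chain: refresh theta from the pool (so no feedback from Z reaches
# theta), then take a few Metropolis steps on phi | theta, Z.
phi, samples = 0.0, []
for _ in range(2000):
    theta = random.choice(pool)
    for _ in range(5):
        p_new = phi + random.gauss(0, 0.5)
        if math.log(random.random()) < log_cond_phi(p_new, theta) - log_cond_phi(phi, theta):
            phi = p_new
    samples.append((theta, phi))

mean_theta = sum(s[0] for s in samples) / len(samples)
mean_phi = sum(s[1] for s in samples) / len(samples)
print(round(mean_theta, 2), round(mean_phi, 2))
```

For this conjugate toy the exact cut distribution has E[theta | Y] = 0.72 and E[phi] ≈ 2.24, which the two-chain sampler should roughly reproduce.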

Author(s):  
Ning Yang ◽  
Shiaaulir Wang ◽  
Paul Schonfeld

A Parallel Genetic Algorithm (PGA) is used for simulation-based optimization of waterway project schedules. This PGA is designed to distribute a Genetic Algorithm application over multiple processors in order to speed up the solution search for a very large combinatorial problem. The proposed PGA is based on a global parallel model, also called a master-slave model. A Message-Passing Interface (MPI) is used in developing the parallel computing program. A case study is presented, whose results show how the adaptation of a simulation-based optimization algorithm to parallel computing can greatly reduce computation time. Additional techniques found to further improve PGA performance include: (1) choosing an appropriate task distribution method, (2) distributing simulation replications instead of different solutions, (3) avoiding the simulation of duplicate solutions, (4) avoiding running multiple simulations simultaneously in shared-memory processors, and (5) avoiding the use of multiple processors that belong to different clusters (physical sub-networks).


2019 ◽  
Vol 28 ◽  
pp. 01031
Author(s):  
Rafal Szczepanski ◽  
Tomasz Tarczewski ◽  
Lech M. Grzesiak

Nowadays, simulation is an inseparable part of a researcher's work, and its computation time may significantly exceed the experiment time. On the other hand, multi-core processors can be exploited through parallel computing to decrease the overall simulation time. In this paper, parallel computing is used to speed up the auto-tuning of a state feedback speed controller for a PMSM drive.


2014 ◽  
Vol 2014 ◽  
pp. 1-7
Author(s):  
Radim Briš ◽  
Simona Domesová

Reliability engineering is a relatively new scientific discipline that has developed in close connection with computers. The recent rapid development of computer technology requires corresponding innovation in source codes and software. New parallel computing technology based on HPC (high-performance computing) for availability calculation is demonstrated in this paper. The technology is particularly effective in the context of simulation methods; nevertheless, analytical methods are taken into account as well. In general, basic algorithms for reliability calculations must be appropriately modified and improved to achieve better computational efficiency. Parallel processing is executed in two ways: first by using the MATLAB function parfor, and second by using CUDA technology. The computational efficiency was significantly improved, which is clearly demonstrated in numerical experiments performed on selected testing examples as well as on an industrial example. Scalability graphs are used to demonstrate the reduction in computation time achieved by parallel computing.


Author(s):  
Ulrik D. Nielsen

Reliable estimation of the on-site sea state parameters is essential to decision support systems for safe navigation of ships. The sea state parameters can be estimated by Bayesian modelling, which uses complex-valued frequency response functions (FRFs) to estimate the wave spectrum on the basis of measured ship responses. It is therefore interesting to investigate how the filtering effect introduced by the FRFs affects the final outcome of the estimation procedure. The paper contains a study based on numerically generated time series, which shows that filtering influences the estimates, since high-frequency components of the wave excitations are not estimated as accurately as lower-frequency components. Moreover, the paper investigates how the final outcome of the Bayesian modelling is influenced by the accuracy of the FRFs. Thus, full-scale data are analysed using FRFs calculated by a 3-D time domain code and by closed-form (analytical) expressions, respectively. Based on comparisons with wave radar measurements and satellite measurements, it is seen that the wave estimates based on closed-form expressions exhibit a reasonable energy content, but the distribution of energy appears to be incorrect.
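The filtering effect can be illustrated numerically (both the low-pass FRF and the wave spectrum below are hypothetical stand-ins, not the paper's models): inverting a response spectrum divides by |H(w)|^2, so any noise floor is amplified exactly where the FRF attenuates the waves, i.e. at high frequencies:

```python
import math

def frf(w):
    # Hypothetical low-pass FRF magnitude of a wave-induced ship response.
    return 1.0 / (1.0 + (w / 0.8) ** 4)

def wave_spectrum(w):
    # Pierson-Moskowitz-shaped wave spectrum with invented parameters.
    return 0.05 / w ** 5 * math.exp(-0.078 / w ** 4)

noise = 1e-5                  # constant noise floor in the measured spectrum
rel_err = {}
for w in (0.4, 0.8, 1.6):
    s_measured = frf(w) ** 2 * wave_spectrum(w) + noise
    s_estimated = s_measured / frf(w) ** 2        # naive spectral inversion
    rel_err[w] = abs(s_estimated - wave_spectrum(w)) / wave_spectrum(w)
    print(w, round(rel_err[w], 4))
```

The relative error of the recovered wave spectrum grows by orders of magnitude from the low-frequency point to the high-frequency one, matching the qualitative finding that high-frequency wave components are estimated less accurately.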


Sensors ◽  
2019 ◽  
Vol 19 (18) ◽  
pp. 3907 ◽  
Author(s):  
Kwangjae Sung ◽  
Hyung Kyu Lee ◽  
Hwangnam Kim

Indoor pedestrian positioning methods are affected by substantial bias and errors because of the use of cheap microelectromechanical systems (MEMS) devices (e.g., gyroscopes and accelerometers) and because of the users' movements. Moreover, because radio-frequency (RF) signal values change drastically due to multipath fading and obstruction, the performance of RF-based localization systems may deteriorate in practice. To deal with this problem, various indoor localization methods that use Bayes filters to integrate the positional information gained from a received signal strength (RSS) fingerprinting scheme with the motion of the user inferred by a dead reckoning (DR) approach have been suggested to accomplish more accurate indoor localization. Among the Bayes filters, the particle filter (PF) can offer the most accurate positioning performance, but it may require substantial computation time because many samples (particles) are needed for high positioning accuracy. This paper introduces a pedestrian localization scheme performed on a mobile phone that leverages the RSS fingerprint-based method, dead reckoning (DR), and an improved PF called the double-stacked particle filter (DSPF) in indoor environments. As a key element of our system, the DSPF algorithm is employed to correct the position of the user by fusing noisy location data obtained from the RSS fingerprinting and DR schemes. By estimating the position of the user through the proposal distribution and target distribution obtained from multiple measurements, the DSPF method can offer better localization results than Kalman filtering-based methods, and it can achieve localization accuracy competitive with the PF while offering higher computational efficiency. Experimental results demonstrate that the DSPF algorithm achieves accurate and reliable localization at a lower computational cost than the PF in indoor environments.
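A basic bootstrap particle filter conveys the fusion principle (a 1-D toy, not the DSPF: the corridor geometry and noise levels are invented; the DR step drives the prediction and the RSS fix drives the weighting):

```python
import math
import random

random.seed(3)

N = 500                                    # number of particles
true_pos, step = 0.0, 1.0                  # pedestrian walking a 1-D corridor
particles = [random.gauss(0.0, 1.0) for _ in range(N)]

def resample(parts, weights):
    # Systematic (low-variance) resampling, O(N).
    total = sum(weights)
    u = random.random() * total / len(parts)
    out, c, i = [], weights[0], 0
    for _ in range(len(parts)):
        while c < u:
            i += 1
            c += weights[i]
        out.append(parts[i])
        u += total / len(parts)
    return out

estimates = []
for k in range(20):
    true_pos += step
    dr = step + random.gauss(0, 0.2)           # noisy dead-reckoning step
    rss_fix = true_pos + random.gauss(0, 1.0)  # noisy RSS fingerprint fix
    # Predict each particle with DR, weight by the RSS likelihood, resample.
    particles = [p + dr + random.gauss(0, 0.2) for p in particles]
    weights = [math.exp(-0.5 * (rss_fix - p) ** 2) for p in particles]
    particles = resample(particles, weights)
    estimates.append(sum(particles) / N)

err = abs(estimates[-1] - true_pos)
print(round(err, 2))
```

The fused estimate tracks the walker far better than either the drifting DR track or the noisy RSS fixes alone; the DSPF's contribution, per the abstract, is achieving comparable accuracy with fewer particles than this plain PF would need.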


2009 ◽  
Vol 16 (4) ◽  
pp. 355-364
Author(s):  
Prabhakar R. Marur

In explicit finite element simulations, a technique called deformable-to-rigid (D2R) switching is used routinely to reduce computation time. Using the D2R option, the deformable parts in the model can be switched to rigid and reverted back to deformable when needed during the analysis. The time of activation of D2R, however, influences the overall dynamics of the system being analyzed. In this paper, a theoretical basis for selecting the time of rigid switching based on system energy is established. A floating oscillator problem is investigated for this purpose, and closed-form analytical expressions are derived for the different phases of rigid switching. The analytical expressions are validated by comparing the theoretical results with numerical computations.
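A simple energy-based switching criterion can be sketched for a floating-oscillator-like pair (illustrative only: the masses, stiffness, damping, and the 1% threshold are invented, and a closed-form damped response stands in for the full analysis). Switching to rigid merges the two velocities, conserving momentum but discarding the remaining relative energy, so a natural rule is to switch once that energy has decayed below a small fraction of its initial value:

```python
import math

# Floating oscillator: masses m1 and m2 joined by a damped spring.
m1, m2, ks, zeta = 1.0, 1.0, 100.0, 0.05
mu = m1 * m2 / (m1 + m2)               # reduced mass of the relative motion
wn = math.sqrt(ks / mu)                # undamped natural frequency
wd = wn * math.sqrt(1 - zeta ** 2)     # damped natural frequency

def relative_energy(t, x0=0.01):
    # Energy in the relative coordinate x(t) = x0*exp(-zeta*wn*t)*cos(wd*t),
    # i.e. what rigid switching at time t would discard.
    env = math.exp(-zeta * wn * t)
    x = x0 * env * math.cos(wd * t)
    v = x0 * env * (-zeta * wn * math.cos(wd * t) - wd * math.sin(wd * t))
    return 0.5 * ks * x * x + 0.5 * mu * v * v

E0 = relative_energy(0.0)
t = 0.0
while relative_energy(t) > 0.01 * E0:  # switch at the 1% energy threshold
    t += 1e-4
print(round(t, 3))
```

Because the relative energy decays roughly as exp(-2*zeta*wn*t), the threshold crossing lands near ln(100)/(2*zeta*wn) ≈ 3.3 s for these invented parameters; switching earlier would perturb the system dynamics by more than the chosen tolerance.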


2006 ◽  
Author(s):  
Marius Staring

This document describes contributions to the NormalizedCorrelationImageToImageMetric and the MeanSquaresImageToImageMetric of the Insight Toolkit (ITK). For the first metric, a two-fold speed-up can be achieved by rewriting the code to loop only once over the fixed image, instead of the two passes used in the current ITK code. The reduction in computation time comes at the cost of additionally storing a parameters array. For both metrics we have implemented the option to use only a random subset of the fixed image voxels for calculating the metric value and its derivatives. This reduces the computation time substantially, while convergence properties are maintained. This paper is accompanied by the source code.
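The random-subset idea is easy to demonstrate on synthetic data (hypothetical 1-D "images" rather than ITK code): the mean-squares metric over a few percent of the voxels closely tracks the full-image value at a fraction of the cost:

```python
import random

random.seed(5)

# Two synthetic "images" (flattened voxel lists) differing only by noise.
N = 100_000
fixed = [random.random() for _ in range(N)]
moving = [v + random.gauss(0, 0.1) for v in fixed]

def mean_squares(f, m, idx):
    # Mean-squares metric evaluated over the voxel indices in idx.
    return sum((f[i] - m[i]) ** 2 for i in idx) / len(idx)

full = mean_squares(fixed, moving, range(N))                      # all voxels
subset = mean_squares(fixed, moving, random.sample(range(N), 5000))  # 5%
print(round(full, 4), round(subset, 4))
```

Both values sit near the expected 0.01 (the noise variance); in a registration loop the subset would be redrawn each iteration, so the small sampling error averages out while each metric evaluation costs 20x less.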


Author(s):  
Imme Ebert-Uphoff ◽  
Gregory S. Chirikjian

We discuss the determination of workspaces of discretely actuated manipulators using convolution of real-valued functions on the Special Euclidean Group. Each workspace is described in terms of a density function that provides, for any unit taskspace volume of the workspace, the number of reachable frames therein. A manipulator consisting of n discrete actuators, each with K states, can reach K^n frames in space. Given this exponential growth, brute-force representation of discrete manipulator workspaces is not feasible in the highly actuated case. However, if the manipulator is of macroscopically-serial architecture, the workspace can be generated by the following procedure: (1) partition the manipulator into segments; (2) approximate the workspace of each segment as a continuous density function on a compact subset of the Special Euclidean Group; (3) approximate the whole workspace as an n-fold convolution of these densities. We represent density functions as finite Hermite-Fourier series and show, for the planar case, how the n-fold convolution can be performed in closed form requiring O(n) computation time. If all segments are identical, the computation time reduces to O(log n).
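The O(log n) claim for identical segments follows from exponentiation by squaring applied to convolution. A 1-D translational toy (not the SE(2) setting of the paper; the three-state segment density is invented) shows the mechanics:

```python
def convolve(a, b):
    # Discrete convolution of two density vectors.
    out = [0.0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

def n_fold(density, n):
    # n-fold self-convolution via repeated squaring: O(log n) convolutions
    # instead of the n - 1 needed by sequential accumulation.
    result, base = [1.0], density      # [1.0] is the convolution identity
    while n:
        if n & 1:
            result = convolve(result, base)
        n >>= 1
        if n:
            base = convolve(base, base)  # squaring step
    return result

segment = [0.25, 0.5, 0.25]   # K = 3 equally weighted, equally spaced states
w = n_fold(segment, 8)        # 8-segment workspace density
print(len(w), round(sum(w), 6))
```

Here the 8-fold convolution costs only three squaring steps rather than seven sequential convolutions, and the result stays a proper density (it sums to 1) over the 17 reachable offsets.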

