Statistical Inference Using GLEaM Model with Spatial Heterogeneity and Correlation between Regions

Author(s):  
Yixuan Tan ◽  
Yuan Zhang ◽  
Xiuyuan Cheng ◽  
Xiao-Hua Zhou

A better understanding of the varied patterns of coronavirus disease 2019 (COVID-19) spread in different parts of the world is crucial to its prevention and control. Motivated by the celebrated GLEaM model (Balcan et al., 2010 [1]), this paper proposes a stochastic dynamic model to depict the evolution of COVID-19. The model allows spatial and temporal heterogeneity of transmission parameters and involves transportation between regions. Based on the proposed model, this paper also designs a two-step procedure for parameter inference, which utilizes the correlation between regions through a prior distribution that imposes graph Laplacian regularization on the transmission parameters. Experiments on simulated data and real-world data from China and Europe indicate that the proposed model achieves higher accuracy in predicting newly confirmed cases than baseline models.
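The graph Laplacian regularization mentioned above penalizes differences between the transmission parameters of connected regions. A minimal sketch of such a penalty term (the edge weights and parameter values here are hypothetical, not from the paper):

```python
# Hypothetical illustration of a graph Laplacian penalty: it couples the
# transmission parameters beta_i of neighbouring regions so that connected
# regions are encouraged to have similar values. Edge weights w_ij encode
# how strongly two regions are correlated.

def laplacian_penalty(beta, edges):
    """Sum of w_ij * (beta_i - beta_j)^2 over weighted edges (i, j, w)."""
    return sum(w * (beta[i] - beta[j]) ** 2 for i, j, w in edges)

# Three regions; regions 0 and 1 are strongly coupled, 1 and 2 weakly.
beta = [0.30, 0.32, 0.50]
edges = [(0, 1, 1.0), (1, 2, 0.5)]
penalty = laplacian_penalty(beta, edges)
```

Used inside a prior, this penalty shrinks estimates for data-poor regions toward those of their well-observed neighbours.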

Author(s):  
Rawane Samb ◽  
Khader Khadraoui ◽  
Pascal Belleau ◽  
Astrid Deschênes ◽  
Lajmi Lakhal-Chaieb ◽  
...  

Genome-wide mapping of nucleosomes has revealed a great deal about the relationships between chromatin structure and the control of gene expression. Recent next-generation ChIP-chip and ChIP-Seq technologies have accelerated our understanding of the basic principles of chromatin organization. These technologies have taught us that nucleosomes play a crucial role in gene regulation by allowing physical access to transcription factors. Recent methods and experimental advancements allow the determination of nucleosome positions for a given genome area. However, most of these methods estimate the number of nucleosomes either by an EM algorithm using a BIC criterion or by an effective heuristic strategy. Here, we introduce a Bayesian method for identifying nucleosome positions. The proposed model is based on a Multinomial-Dirichlet classification and hierarchical mixture distributions. The number and positions of nucleosomes are estimated using a reversible-jump Markov chain Monte Carlo simulation technique. We compare the performance of our method on simulated data and MNase-Seq data from Saccharomyces cerevisiae against the PING and NOrMAL methods.
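The Multinomial-Dirichlet structure at the heart of such a classification has a closed-form marginal likelihood, which is what an MCMC sampler typically evaluates when assigning reads to nucleosomes. A minimal sketch of that marginal (up to the multinomial coefficient; the counts and prior here are hypothetical):

```python
from math import lgamma

def log_dirichlet_multinomial(counts, alpha):
    """Log marginal likelihood of category counts under a Multinomial
    likelihood with a Dirichlet(alpha) prior, up to the multinomial
    coefficient, obtained by integrating the class probabilities out."""
    A, N = sum(alpha), sum(counts)
    return (lgamma(A) - lgamma(N + A)
            + sum(lgamma(x + a) - lgamma(a) for x, a in zip(counts, alpha)))

# Two hypothetical read-position bins, uniform Dirichlet(1, 1) prior.
ll = log_dirichlet_multinomial([2, 0], [1.0, 1.0])
```

With a uniform prior on two categories, two draws landing in the same category have marginal probability 1/3, which the function reproduces.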


Author(s):  
Bo Li ◽  
Xiaoting Rui ◽  
Guoping Wang ◽  
Jianshu Zhang ◽  
Qinbo Zhou

Dynamics analysis is currently a key technique for fully understanding the dynamic characteristics of sophisticated mechanical systems, because it is a prerequisite for dynamic design and control studies. In this study, a dynamics analysis of a multiple launch rocket system (MLRS) is developed. We focus in particular on deriving the equations governing the motion of the MLRS without rockets using the transfer matrix method for multibody systems, and the motion of the rockets via the Newton–Euler method. By combining the two sets of equations, the differential equations of the MLRS are obtained. The complete process of the rockets' ignition, movement in the barrels, airborne flight, and landing is numerically simulated via the Monte Carlo stochastic method. An experiment is implemented to validate the proposed model and the corresponding numerical results.
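The Monte Carlo step can be pictured with a much simpler stand-in for the full MLRS model: perturb launch parameters randomly and collect landing statistics. A minimal sketch using a drag-free point-mass projectile (all numbers hypothetical, not from the paper):

```python
import math
import random

# Minimal Monte Carlo sketch in the spirit of the study's stochastic
# simulation: sample Gaussian scatter on muzzle speed and elevation,
# propagate each sample through a simple ballistic model, and summarize
# the resulting landing-range dispersion.

def landing_range(v0, angle_rad, g=9.81):
    """Flat-ground range of a drag-free projectile."""
    return v0 ** 2 * math.sin(2 * angle_rad) / g

def monte_carlo_ranges(n, v0=50.0, angle=0.7, seed=0):
    """Draw n ranges with Gaussian scatter on speed (0.5 m/s) and angle (0.01 rad)."""
    rng = random.Random(seed)
    return [landing_range(v0 + rng.gauss(0.0, 0.5),
                          angle + rng.gauss(0.0, 0.01))
            for _ in range(n)]

ranges = monte_carlo_ranges(1000)
mean_range = sum(ranges) / len(ranges)
```

The paper's simulation replaces the one-line ballistic model with the full multibody dynamics, but the sampling loop has the same shape.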


Author(s):  
Marcelo N. de Sousa ◽  
Ricardo Sant’Ana ◽  
Rigel P. Fernandes ◽  
Julio Cesar Duarte ◽  
José A. Apolinário ◽  
...  

In outdoor RF localization systems, particularly where line of sight cannot be guaranteed or where multipath effects are severe, information about the terrain may improve the position estimate's performance. Given the difficulties in obtaining real data, a ray-tracing fingerprint is a viable option. Nevertheless, although they present good simulation results, systems trained only on simulated features suffer performance degradation when employed to process real-life data. This work intends to improve localization accuracy when using ray-tracing fingerprints and a few field data obtained from an adverse environment where a large number of measurements is not an option. We employ machine learning (ML) algorithms to explore the multipath information. We selected the random forest and gradient boosting algorithms, both considered efficient tools in the literature. In a strict simulation scenario (simulated data for training, validating, and testing), we obtained the same good results found in the literature (error around 2 m). In a real-world system (simulated data for training, real data for validating and testing), both ML algorithms resulted in a mean positioning error of around 100 m. We have also obtained experimental results for noisy (artificially added Gaussian noise) and mismatched (with a null subset of features) conditions. From the simulations carried out in this work, our study revealed that enhancing the ML model with a few real-world data points improves localization's overall performance. Of the ML algorithms employed herein, we also observed that, under noisy conditions, the random forest algorithm achieved a slightly better result than the gradient boosting algorithm. However, they achieved similar results in the mismatch experiment.
This work's practical implication is that multipath information, once rejected by older localization techniques, now represents a significant source of information whenever we have prior knowledge with which to train the ML algorithm.
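The fingerprint idea itself can be shown with a toy matcher. The paper uses random forest and gradient boosting regressors; here, purely for illustration, a nearest-fingerprint lookup conveys how a multipath feature vector maps to a position (the database values are hypothetical):

```python
# Toy fingerprint localization: each database entry pairs a multipath
# feature vector (e.g. per-path received powers from a ray-tracing
# simulation) with a known (x, y) position. A measurement is located at
# the position of its closest stored fingerprint.

def locate(measurement, fingerprints):
    """Return the position whose stored feature vector is closest
    (squared Euclidean distance) to the measured features."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(fingerprints, key=lambda fp: dist2(fp[0], measurement))[1]

# Hypothetical database: (feature vector in dBm, (x, y) position in m).
db = [([-60.0, -72.0, -80.0], (0.0, 0.0)),
      ([-65.0, -70.0, -75.0], (10.0, 0.0)),
      ([-70.0, -68.0, -71.0], (10.0, 10.0))]
pos = locate([-66.0, -69.5, -74.0], db)
```

An ensemble regressor generalizes this lookup: instead of snapping to the nearest stored point, it interpolates positions from many fingerprints at once, which is what makes mixing a few real measurements into the simulated training set effective.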


2008 ◽  
Vol 20 (5) ◽  
pp. 1211-1238 ◽  
Author(s):  
Gaby Schneider

Oscillatory correlograms are widely used to study neuronal activity that shows a joint periodic rhythm. In most cases, the statistical analysis of cross-correlation histogram (CCH) features is based on the null model of independent processes, and the resulting conclusions about the underlying processes remain qualitative. Therefore, we propose a spike train model for synchronous oscillatory firing activity that directly links characteristics of the CCH to parameters of the underlying processes. The model focuses particularly on asymmetric central peaks, which differ in slope and width on the two sides. Asymmetric peaks can be associated with phase offsets in the (sub-)millisecond range. These spatiotemporal firing patterns can be highly consistent across units yet invisible in the underlying processes. The proposed model includes a single temporal parameter that accounts for this peak asymmetry. The model provides approaches for the analysis of oscillatory correlograms, taking into account dependencies and nonstationarities in the underlying processes. In particular, the auto- and the cross-correlogram can be investigated in a joint analysis because they depend on the same spike train parameters. Particular temporal interactions, such as the degree to which different units synchronize in a common oscillatory rhythm, can also be investigated. The analysis is demonstrated by application to a simulated data set.
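The CCH that the model characterizes is simply a histogram of pairwise spike-time differences between two units. A minimal sketch (the spike times below are hypothetical, and a fixed 1 ms offset between trains produces the kind of off-centre peak the model parameterizes):

```python
def cross_correlogram(train_a, train_b, bin_width, max_lag):
    """Histogram of spike-time differences t_b - t_a within [-max_lag, max_lag)."""
    n_bins = int(2 * max_lag / bin_width)
    counts = [0] * n_bins
    for ta in train_a:
        for tb in train_b:
            lag = tb - ta
            if -max_lag <= lag < max_lag:
                counts[int((lag + max_lag) / bin_width)] += 1
    return counts

# Two trains (times in ms) with a consistent +1 ms offset.
a = [10.0, 20.0, 30.0]
b = [11.0, 21.0, 31.0]
cch = cross_correlogram(a, b, bin_width=1.0, max_lag=5.0)
```

All three within-window pairs fall in the +1 ms bin, i.e. the peak sits to the right of zero, which is exactly the phase-offset signature the proposed model ties back to spike train parameters.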


Author(s):  
Rodolfo Tellez ◽  
William Y. Svrcek ◽  
Brent R. Young

Process integration design methodologies have been developed and introduced to synthesise an optimum heat exchanger network (HEN) arrangement. However, controllability issues are often overlooked during the early stages of plant design. In this paper we present a five-step procedure that uses multivariable disturbance and control analyses, based solely on steady-state information, to assess process design developments and to propose control strategy alternatives suitable for a HEN.
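One classic steady-state multivariable analysis that such a procedure could draw on is the relative gain array (RGA); the abstract names the analyses only at a high level, so this is an illustrative assumption. A minimal 2x2 sketch with a hypothetical HEN gain matrix:

```python
# Relative gain array (RGA) of a 2x2 steady-state gain matrix K, a
# standard screening tool for pairing manipulated and controlled
# variables using steady-state information only. The gain values below
# are hypothetical (e.g. outlet temperatures vs. bypass fractions).

def rga_2x2(K):
    """RGA of a 2x2 gain matrix: lambda_11 = k11*k22 / det(K)."""
    (k11, k12), (k21, k22) = K
    det = k11 * k22 - k12 * k21
    l11 = k11 * k22 / det
    return [[l11, 1 - l11], [1 - l11, l11]]

rga = rga_2x2([[2.0, 0.5], [0.4, 1.8]])
```

Elements close to 1 on the diagonal indicate weakly interacting pairings, the kind of early screening verdict a five-step steady-state procedure can deliver before any dynamic model exists.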


2014 ◽  
Vol 521 ◽  
pp. 252-255
Author(s):  
Jian Yuan Xu ◽  
Jia Jue Li ◽  
Jie Jun Zhang ◽  
Yu Zhu

Grid operators are highly concerned with the peaking problem caused by intermittent generation, so building a control model to resolve peaking imbalance is necessary. In this paper, we propose a reserve classification control model that combines a constant reserve control model with a real-time reserve control model to guide the peaking balance of a grid with intermittent generation. The proposed model couples a time-period constant reserve control model with a real-time reserve control model, using the peaking margin as an intermediate variable. Solving the model yields the capacities of the classified reserves, which grid operators use to achieve peaking balance control. The proposed model was examined on a real grid operation case, and the results demonstrate its validity.


2021 ◽  
Vol 105 ◽  
pp. 110-118
Author(s):  
Jie Si Ma ◽  
Fu Sheng Li ◽  
Yan Chun Zhao

X-ray fluorescence (XRF) analysis technology is widely used to detect and measure the elemental compositions of target samples. The MCNP code developed by LANL can be utilized to simulate and generate the XRF spectrum of any sample with various elemental compositions. However, one shortcoming of the MCNP code is that it takes a long time (hours or more) to generate one XRF spectrum with reasonable statistical precision; the other is that it cannot produce the L-shell spectrum accurately. In this paper, a new computation model based on the Sherman equation (i.e., Fundamental Parameters, FP) is proposed to overcome these drawbacks. The most important feature of this model is that it achieves a full and accurate generation of the spectral information of each element in a target material very rapidly (in seconds or less), including both K- and L-shell spectral peaks. Further, it is demonstrated that the data simulated by this new model match the experimental data very well. This shows that the proposed model can be a better alternative to the MCNP code for generating the XRF spectra of many materials, in terms of both speed and accuracy. The proposed model can perform the simulation of XRF spectra in situ, both fast and accurately, which is essential for real-time calculation of chemical composition by an X-ray spectrometer, especially for trace elements in target materials.
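The flavour of an FP calculation can be seen in the simplest special case of the Sherman equation: primary fluorescence of one analyte under monochromatic excitation (the full model in the paper also covers secondary excitation and L-shell lines; all parameter values below are hypothetical):

```python
import math

# Simplified single-analyte, monochromatic form of the Sherman equation
# for primary fluorescence. The measured intensity is proportional to the
# analyte content divided by the total attenuation of the incoming beam
# and the outgoing fluorescence line.

def primary_intensity(c_i, q_i, mu_in, mu_out, psi_in, psi_out):
    """Relative primary fluorescence intensity of analyte i.

    c_i      weight fraction of the analyte
    q_i      excitation factor (absorption jump, fluorescence yield, line fraction)
    mu_in    sample mass attenuation coefficient at the incident energy
    mu_out   sample mass attenuation coefficient at the fluorescence energy
    psi_in, psi_out  incidence and take-off angles in radians
    """
    return c_i * q_i / (mu_in / math.sin(psi_in) + mu_out / math.sin(psi_out))

# Hypothetical values: 10 wt% analyte, normal-incidence geometry.
intensity = primary_intensity(0.1, 1.0, 50.0, 100.0, math.pi / 2, math.pi / 2)
```

Because this is a closed-form expression rather than a particle-by-particle transport simulation, evaluating it for every line of every element takes microseconds, which is the source of the speed advantage over MCNP claimed above.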


2019 ◽  
Vol 15 (1) ◽  
pp. 19-36 ◽  
Author(s):  
Wiliam Acar ◽  
Rami al-Gharaibeh

Practical applications of knowledge management are hindered by a lack of linkage between the accepted data-information-knowledge hierarchy and pragmatic approaches to using it. Specifically, the authors seek to clarify the use of the tacit-explicit dichotomy with a deductive synthesis of complementary concepts. The authors review appropriate segments of the KM/OL literature with an emphasis on the SECI model of Nonaka and Takeuchi. Looking beyond equating the sharing of knowledge with mere socialization, the authors deduce from more recent developments a knowledge creation, nurturing, and control framework. Based on a cyclic and upward-spiraling data-information-knowledge structure, the authors' proposed model affords top managers and their consultants opportunities for capturing, debating, and storing richer information, as well as for monitoring their progress and controlling their learning process.


2011 ◽  
Vol 19 (2) ◽  
pp. 135-146 ◽  
Author(s):  
William Greene

Plümper and Troeger (2007) propose a three-step procedure for the estimation of a fixed effects (FE) model that, it is claimed, "provides the most reliable estimates under a wide variety of specifications common to real world data." Their fixed effects vector decomposition (FEVD) estimator is startlingly simple: three steps, each requiring nothing more than ordinary least squares (OLS). Large gains in efficiency are claimed for cases of time-invariant and slowly time-varying regressors. A subsequent literature has compared the estimator to other estimators of FE models, including that of Hausman and Taylor (1981), also (apparently) with impressive gains in efficiency. The article also claims to provide an efficient estimator for parameters on time-invariant variables (TIVs) in the FE model. None of the claims are correct. The FEVD estimator simply reproduces (identically) the linear FE (dummy variable) estimator, then substitutes an inappropriate covariance matrix for the correct one. The consistency result follows from the fact that OLS in the FE model is consistent. The "efficiency" gains are illusory. The claim that the estimator provides an estimator for the coefficients on TIVs in an FE model is also incorrect: that part of the parameter vector remains unidentified. The "estimator" relies upon a strong assumption that turns the FE model into a type of random effects model.
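The unidentifiability of time-invariant coefficients can be checked numerically: the within (demeaning) transform underlying the FE estimator annihilates any time-invariant regressor, leaving nothing for OLS to fit. A minimal sketch with hypothetical panel data:

```python
# Demonstration that the within transform wipes out time-invariant
# regressors: subtracting each unit's mean from a variable that is
# constant within the unit yields exactly zero everywhere.

def demean_by_group(values, groups):
    """Subtract each group's mean from its members (the within transform)."""
    means = {}
    for g in set(groups):
        members = [v for v, gg in zip(values, groups) if gg == g]
        means[g] = sum(members) / len(members)
    return [v - means[g] for v, g in zip(values, groups)]

groups = [1, 1, 2, 2]          # two units, two periods each
z = [5.0, 5.0, 8.0, 8.0]       # a regressor constant within each unit
z_within = demean_by_group(z, groups)
```

Since `z_within` is identically zero, no OLS regression on the transformed data can recover a coefficient on `z`, which is the sense in which that part of the parameter vector remains unidentified.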

