Pseudo-Likelihood Estimation
Recently Published Documents


TOTAL DOCUMENTS: 36 (five years: 6)
H-INDEX: 8 (five years: 1)

2021 · pp. 096228022110471
Author(s): Xi Wang, Vernon M. Chinchilli

Longitudinal binary data in crossover designs, with missing data due to ignorable and nonignorable dropout, are common. This paper evaluates available conditional and marginal models and establishes the relationship between the conditional and marginal parameters, with the primary objective of comparing the treatment mean effects. We perform extensive simulation studies to investigate these models under complete data, and the selection models under missing data, with different parametric distributions and missingness patterns and mechanisms. Generalized estimating equations and generalized linear mixed-effects models with pseudo-likelihood estimation are advocated for valid and robust inference. We also propose a controlled multiple imputation method as a sensitivity analysis for the missing data assumption. Lastly, we implement the proposed models and the sensitivity analysis in two real data examples with binary outcomes.
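
For readers who want to experiment with the marginal approach the abstract advocates, here is a minimal GEE sketch using Python's statsmodels; the file name and column names (outcome, treatment, period, subject) are illustrative assumptions, not the paper's data or code.

```python
# Sketch: marginal model for longitudinal binary outcomes via GEE.
# Data file and column names are hypothetical placeholders.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("crossover.csv")     # long-format data, one row per visit

model = smf.gee(
    "outcome ~ treatment + period",   # marginal mean model for the binary outcome
    groups="subject",                 # repeated measures clustered by subject
    data=df,
    family=sm.families.Binomial(),    # logit link
    cov_struct=sm.cov_struct.Exchangeable(),  # working correlation structure
)
result = model.fit()
print(result.summary())               # robust (sandwich) standard errors
```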


2021 · Vol 5 (3) · pp. 129
Author(s): Guofei Pang, Wanrong Cao

Although stochastic fractional partial differential equations have received increasing attention in the last decade, parameter estimation for these equations has seldom been reported in the literature. In this paper, we propose a pseudo-likelihood approach to estimating the parameters of stochastic time-fractional diffusion equations, whose forward solver was investigated very recently by Gunzburger, Li, and Wang (2019). Our approach can accurately recover the fractional order, the diffusion coefficient, and the noise magnitude given discrete observation data corresponding to only one realization of the driving noise. When only partial data are available, our approach still attains acceptable results at intermediate levels of observation sparsity.
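
The general shape of such an estimation loop can be sketched as follows. This is schematic only: `forward_solve` stands in for a numerical forward solver (such as the Gunzburger-Li-Wang scheme referenced above) and is not implemented here, and the Gaussian pseudo-likelihood is an illustrative assumption rather than the paper's exact objective.

```python
# Schematic pseudo-likelihood fit for a stochastic time-fractional
# diffusion model; the solver and the Gaussian objective are assumptions.
import numpy as np
from scipy.optimize import minimize

def forward_solve(alpha, kappa, x, t):
    """Placeholder for a numerical forward solver of the time-fractional
    diffusion equation; returns the model solution on the observation grid."""
    raise NotImplementedError

def neg_log_pseudo_likelihood(params, u_obs, x, t):
    alpha, kappa, sigma = params   # fractional order, diffusivity, noise magnitude
    resid = u_obs - forward_solve(alpha, kappa, x, t)
    n = resid.size
    # Gaussian pseudo-likelihood with i.i.d. residuals of scale sigma
    return 0.5 * n * np.log(2.0 * np.pi * sigma**2) + 0.5 * np.sum(resid**2) / sigma**2

# With observations u_obs on a grid (x, t) from ONE noise realization:
# fit = minimize(neg_log_pseudo_likelihood, x0=[0.5, 1.0, 0.1],
#                args=(u_obs, x, t), method="Nelder-Mead")
```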


Author(s): Toan Luu Duc Huynh, Rizwan Ahmed, Muhammad Ali Nasir, Muhammad Shahbaz, Ngoc Quang Anh Huynh

In the context of the debate on cryptocurrencies as the 'digital gold', this study explores the nexus between Bitcoin and US oil returns by employing a rich set of parametric and non-parametric approaches. We examine the dependence structure of the US oil market and Bitcoin through Clayton, normal, and Gumbel copulas, which allow us to test whether the dependence structure is left-tailed, right-tailed, or symmetric. We collected daily data from 5 February 2014 to 24 January 2019 on Bitcoin prices and oil prices. Bitcoin prices were extracted from coinmarketcap.com, and US oil prices were collected from the Federal Reserve Economic Data source. Maximum pseudo-likelihood estimation applied to the dataset shows that US oil returns and Bitcoin are highly vulnerable to tail risks. The multiplier bootstrap-based goodness-of-fit test as well as Kendall plots also suggest left-tail dependence, which adds to the robustness of the results. The stationary bootstrap test for the partial cross-quantilogram indicates which quantiles in the left tail have a statistically significant relationship between Bitcoin and US oil returns. The study has crucial implications for portfolio diversification using cryptocurrencies and oil-based hedging instruments.
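
A minimal sketch of the maximum pseudo-likelihood step for one of the families named above (Clayton, whose dependence parameter governs lower-tail dependence) looks like this. The Clayton density is standard; the data loading and variable names are illustrative assumptions.

```python
# Maximum pseudo-likelihood estimation of a Clayton copula:
# rank-transform each series to (0, 1), then maximize the copula
# log-density over the dependence parameter theta.
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import rankdata

def pseudo_observations(x):
    """Rank-transform a sample to (0, 1), as in pseudo-likelihood estimation."""
    n = len(x)
    return rankdata(x) / (n + 1)

def neg_log_clayton(theta, u, v):
    """Negative log pseudo-likelihood of the Clayton copula (theta > 0)."""
    s = u**(-theta) + v**(-theta) - 1.0
    logc = (np.log1p(theta)
            - (1.0 + theta) * (np.log(u) + np.log(v))
            - (2.0 + 1.0 / theta) * np.log(s))
    return -np.sum(logc)

# btc, oil: arrays of daily Bitcoin and US oil returns (placeholders)
# u, v = pseudo_observations(btc), pseudo_observations(oil)
# fit = minimize_scalar(neg_log_clayton, bounds=(1e-4, 20.0),
#                       args=(u, v), method="bounded")
# theta_hat = fit.x   # larger theta implies stronger lower-tail dependence
```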


2019 · Vol 29 (2) · pp. 344-358
Author(s): Claudia Rivera-Rodriguez, Sebastien Haneuse, Molin Wang, Donna Spiegelman

In many public health and medical research settings, information on key covariates may be unavailable or too expensive to gather for all individuals in the study. In such settings, the two-phase design provides a way forward by first stratifying an initial (large) phase I sample on the basis of readily available covariates (including, possibly, the outcome), and then sub-sampling participants at phase II to collect the expensive measure(s). When the outcome of interest is binary, several methods have been proposed for estimation and inference for the parameters of a logistic regression model, including weighted likelihood, pseudo-likelihood, and maximum likelihood. Although these methods yield consistent estimation and valid inference, they do so solely on the basis of the phase I stratification and the detailed covariate information obtained at phase II. Moreover, they ignore any additional information that is readily available at phase I but was not used as part of the stratified sampling design. Motivated by the potential for efficiency gains, especially for parameters corresponding to the additional phase I covariates, we propose a novel augmented pseudo-likelihood estimator for two-phase studies that makes use of all available information. In contrast to recently proposed weighted-likelihood-based methods that calibrate to the influence function of the model of interest, the methods we propose do not require the development of additional models and therefore enjoy a degree of robustness. In addition, we expand the broader framework for pseudo-likelihood-based estimation and inference to permit link functions for binary regression other than the logit link. Comprehensive simulations, based on a one-time cross-sectional survey of 82,887 patients undergoing anti-retroviral therapy in Malawi between 2005 and 2007, illustrate the finite-sample properties of the proposed methods and compare their performance with competing approaches. The proposed method yields the lowest standard errors when the model is correctly specified. Finally, the methods are applied to a large implementation science project examining the effect of an enhanced community health worker program to improve adherence to WHO guidelines for at least four antenatal visits, in Dar es Salaam, Tanzania.
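
As a point of reference, the weighted-likelihood baseline the paper compares against amounts to inverse-probability-weighted logistic regression, sketched below with statsmodels; this is not the proposed augmented pseudo-likelihood estimator itself, and the data frame and column names are hypothetical placeholders.

```python
# Sketch: weighted-likelihood (inverse-probability-weighted) logistic
# regression on a two-phase subsample. Column names are placeholders.
import pandas as pd
import statsmodels.api as sm

phase2 = pd.read_csv("phase2.csv")   # hypothetical phase II subsample

# Weight each phase II subject by the inverse of the known
# stratum-specific probability of having been sampled at phase II.
w = 1.0 / phase2["sampling_prob"]

X = sm.add_constant(phase2[["expensive_covariate", "phase1_covariate"]])
fit = sm.GLM(phase2["outcome"], X,
             family=sm.families.Binomial(),      # logit link by default
             freq_weights=w).fit(cov_type="HC0")  # robust sandwich errors
print(fit.summary())
```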


Proceedings · 2018 · Vol 7 (1) · pp. 19
Author(s): Nikoletta Stamatatou, Lampros Vasiliades, Athanasios Loukas

The objective of this study is to compare univariate and joint bivariate return periods of extreme precipitation, which rely on different probability concepts, at selected meteorological stations in Cyprus. Pairs of maximum rainfall depths with corresponding durations are estimated and compared using annual maximum series (AMS) for the complete period of the analysis and 30-year subsets for selected data periods. Marginal distributions of extreme precipitation are examined and used for the estimation of typical design periods. The dependence between extreme rainfall and duration is then assessed by an exploratory data analysis using K-plots and Chi-plots, and the consistency of their relationship is quantified by Kendall's correlation coefficient. Copulas from the Archimedean, Elliptical, and Extreme Value families are fitted using a pseudo-likelihood estimation method, evaluated according to the corrected Akaike Information Criterion, and verified using both graphical approaches and a goodness-of-fit test based on the Cramér-von Mises statistic. The selected copula functions and the corresponding conditional and joint return periods are calculated, and the results are compared with the marginal univariate estimations of each variable. Results highlight the effect of sample size on univariate and bivariate rainfall frequency analysis for hydraulic engineering design practice.
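
Once a copula C and the marginal non-exceedance probabilities are in hand, the univariate and joint return periods being compared follow from simple formulas. The sketch below uses a Gumbel copula (an Extreme Value family member mentioned above); the parameter value and probabilities are illustrative assumptions, not the study's fitted results.

```python
# Univariate vs. joint ("OR" and "AND") return periods from a copula.
import numpy as np

def gumbel_copula(u, v, theta):
    """Gumbel copula CDF, theta >= 1."""
    return np.exp(-(((-np.log(u))**theta + (-np.log(v))**theta)**(1.0 / theta)))

mu = 1.0           # mean interarrival time of events (1 year for AMS)
u, v = 0.98, 0.98  # marginal non-exceedance probabilities (50-year events)
theta = 2.0        # hypothetical fitted dependence parameter

T_univariate = mu / (1 - u)             # marginal return period: 50 years
C = gumbel_copula(u, v, theta)
T_or = mu / (1 - C)                     # at least one variable exceeds
T_and = mu / (1 - u - v + C)            # both variables exceed
print(T_univariate, T_or, T_and)        # always T_or <= T_univariate <= T_and
```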


Proceedings · 2018 · Vol 2 (11) · pp. 635
Author(s): Nikoletta Stamatatou, Lampros Vasiliades, Athanasios Loukas

Flood frequency estimation for the design of hydraulic structures is usually performed as a univariate analysis of flood event magnitudes. However, recent studies show that, for accurate return period estimation of flood events, the dependence and correlation pattern among flood attributes, such as peak discharge, volume, and duration, should be taken into account in a multivariate framework. The primary goal of this study is to compare univariate and joint bivariate return periods of floods, which rely on different probability concepts, in the Yermasoyia watershed, Cyprus. Pairs of peak discharges with corresponding flood volumes are estimated and compared using annual maximum series (AMS) and peaks over threshold (POT) approaches. The Lyne-Hollick recursive digital filter is applied to separate baseflow from quick flow and to subsequently estimate flood volumes from the quick flow time series. Marginal distributions of flood peaks and volumes are examined and used for the estimation of typical design periods. The dependence between peak discharges and volumes is then assessed by an exploratory data analysis using K-plots and Chi-plots, and the consistency of their relationship is quantified by Kendall's correlation coefficient. Copulas from the Archimedean, Elliptical, and Extreme Value families are fitted using a pseudo-likelihood estimation method, verified using both graphical approaches and a goodness-of-fit test based on the Cramér-von Mises statistic, and evaluated according to the corrected Akaike Information Criterion. The selected copula functions and the corresponding joint return periods are calculated, and the results are compared with the marginal univariate estimations of each variable. Results indicate the importance of bivariate analysis in the estimation of the design return period of hydraulic structures.
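
The Lyne-Hollick baseflow separation step mentioned above is a one-parameter recursive digital filter; a minimal single-pass sketch follows. The default alpha = 0.925 is a commonly cited value, and in practice the filter is often run in multiple forward and backward passes.

```python
# Lyne-Hollick recursive digital filter: separates quick flow from
# baseflow in a total streamflow series (single forward pass shown).
import numpy as np

def lyne_hollick(streamflow, alpha=0.925):
    """Return (quickflow, baseflow) arrays from a total streamflow series."""
    q = np.asarray(streamflow, dtype=float)
    f = np.zeros_like(q)                     # filtered quick flow
    for k in range(1, len(q)):
        f[k] = alpha * f[k - 1] + 0.5 * (1 + alpha) * (q[k] - q[k - 1])
        f[k] = min(max(f[k], 0.0), q[k])     # constrain 0 <= quickflow <= Q
    return f, q - f

# quick, base = lyne_hollick(daily_flow)  # daily_flow: observed series
# Flood volumes are then computed from the `quick` series.
```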


2016 · Vol 28 (3) · pp. 485-492
Author(s): Hien D. Nguyen, Ian A. Wood

Maximum pseudo-likelihood estimation (MPLE) is an attractive method for training fully visible Boltzmann machines (FVBMs) due to its computational scalability and the desirable statistical properties of the resulting estimator. However, no published algorithm for MPLE has been proven to be convergent or monotonic. In this note, we present an algorithm for the MPLE of FVBMs based on the block successive lower-bound maximization (BSLM) principle. We show that the BSLM algorithm monotonically increases the pseudo-likelihood values and that the sequence of BSLM estimates converges to the unique global maximizer of the pseudo-likelihood function. The relationship between the BSLM algorithm and the gradient ascent (GA) algorithm for MPLE of FVBMs is also discussed, and a convergence criterion for the GA algorithm is given.
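
For context on the GA algorithm discussed above, here is a plain gradient ascent sketch for the log pseudo-likelihood of an FVBM with ±1 units; it is not the BSLM algorithm from the note, and the parameterization (symmetric weight matrix M with zero diagonal, bias vector b) and learning rate are illustrative assumptions.

```python
# Gradient ascent on the FVBM log pseudo-likelihood:
# sum over samples n and units i of log sigma(2 x_i (b_i + sum_j M_ij x_j)).
import numpy as np

def fvbm_pll_and_grads(X, M, b):
    """X: (n, d) array in {-1, +1}; M: symmetric, zero-diagonal; b: biases."""
    A = X @ M + b                              # a_i = b_i + sum_j M_ij x_j
    S = 1.0 / (1.0 + np.exp(-2.0 * X * A))     # sigma(2 x_i a_i)
    pll = np.sum(np.log(S))
    R = 2.0 * X * (1.0 - S)                    # d pll / d a_i per sample
    gb = R.sum(axis=0)
    gM = X.T @ R                               # contribution of each conditional
    gM = gM + gM.T                             # M_ij appears in two conditionals
    np.fill_diagonal(gM, 0.0)                  # keep zero diagonal
    return pll, gM, gb

def fit_fvbm(X, lr=1e-3, iters=500):
    """Fixed-step gradient ascent; lr and iters are arbitrary choices."""
    n, d = X.shape
    M, b = np.zeros((d, d)), np.zeros(d)
    for _ in range(iters):
        pll, gM, gb = fvbm_pll_and_grads(X, M, b)
        M += lr * gM
        b += lr * gb
    return M, b
```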

