Bartlett-type adjustments for hypothesis testing in linear models with general error covariance matrices

2013 ◽  
Vol 122 ◽  
pp. 162-174
Author(s):  
Masahiro Kojima ◽  
Tatsuya Kubokawa


Author(s):  
Patrick W. Kraft ◽  
Ellen M. Key ◽  
Matthew J. Lebo

Abstract Grant and Lebo (2016) and Keele et al. (2016) clarify the conditions under which the popular general error correction model (GECM) can be used and interpreted easily: in a bivariate GECM the data must be integrated in order to rely on the error correction coefficient, $\alpha_1^\ast$, to test cointegration and measure the rate of error correction between a single exogenous x and a dependent variable, y. Here we demonstrate that even if the data are all integrated, the test on $\alpha_1^\ast$ is misunderstood when there is more than a single independent variable. The null hypothesis is that there is no cointegration between y and any x, but the correct alternative hypothesis is that y is cointegrated with at least one—but not necessarily more than one—of the x's. A significant $\alpha_1^\ast$ can occur when some I(1) regressors are not cointegrated and the equation is not balanced. Thus, the correct limiting distributions of the right-hand-side long-run coefficients may be unknown. We use simulations to demonstrate the problem and then discuss implications for applied examples.
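To make the setup concrete, here is a hedged simulation sketch (my own construction, not the authors' code): two independent I(1) regressors are generated, y is cointegrated with x1 only, and a multivariate GECM is fit by OLS so the statistic on $\alpha_1^\ast$ can be inspected. The data-generating process and all parameter values are illustrative assumptions.

```python
# Hedged illustration (assumed DGP, not the authors' code): a trivariate GECM where
# y is cointegrated with x1 but not with x2.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
T = 500

# Two independent random walks: I(1) regressors
x1 = np.cumsum(rng.normal(size=T))
x2 = np.cumsum(rng.normal(size=T))

# y is cointegrated with x1 only (stationary deviation from x1)
y = x1 + rng.normal(scale=0.5, size=T)

# GECM: dy_t = a0 + a1* y_{t-1} + b0 dx1_t + b1* x1_{t-1} + c0 dx2_t + c1* x2_{t-1} + e_t
dy = np.diff(y)
X = sm.add_constant(np.column_stack([
    y[:-1],        # y_{t-1}: its coefficient is alpha_1*
    np.diff(x1),   # delta x1_t
    x1[:-1],       # x1_{t-1}
    np.diff(x2),   # delta x2_t
    x2[:-1],       # x2_{t-1}
]))
fit = sm.OLS(dy, X).fit()

# A "significant" alpha_1* only indicates cointegration of y with at least one
# regressor; it says nothing about x2, which is not cointegrated with y here.
print("alpha_1* =", fit.params[1], "t =", fit.tvalues[1])
```

In repeated runs of this sketch, $\alpha_1^\ast$ is significant even though x2 is not cointegrated with y, which is the distinction between the null and alternative hypotheses described above.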


2015 ◽  
Vol 143 (9) ◽  
pp. 3680-3699 ◽  
Author(s):  
Ross N. Bannister

Abstract This paper investigates the effect on balance of a number of Schur product–type localization schemes that have been designed with the primary function of reducing spurious far-field correlations in forecast error statistics. The localization schemes studied comprise a nonadaptive scheme (where the moderation matrix is decomposed in a spectral basis), and two adaptive schemes: a simplified version of Smoothed Ensemble Correlations Raised to a Power (SENCORP) and Ensemble Correlations Raised to a Power (ECO-RAP). The paper shows, the author believes for the first time, how the degree of balance (geostrophic and hydrostatic) implied by the error covariance matrices localized by these schemes can be diagnosed. Here it is considered that an effective localization scheme is one that reduces spurious correlations adequately, but also minimizes disruption of balance (where the “correct” degree of balance or imbalance is assumed to be possessed by the unlocalized ensemble). By varying free parameters that describe each scheme (e.g., the degree of truncation in the schemes that use the spectral basis, the “order” of each scheme, and the degree of ensemble smoothing), it is found that a particular configuration of the ECO-RAP scheme is best suited to the convective-scale system studied. According to the diagnostics this ECO-RAP configuration still weakens geostrophic and hydrostatic balance, but overall this is less so than for other schemes.
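As a generic illustration of the Schur product operation these schemes build on (not a reimplementation of SENCORP or ECO-RAP), the sketch below moderates a raw ensemble covariance by elementwise multiplication with a Gaspari-Cohn localization matrix. The grid size, ensemble size, and localization half-width are arbitrary assumptions.

```python
# Generic Schur-product localization sketch (illustrative grid, ensemble, and radius).
import numpy as np

def gaspari_cohn(r):
    """Gaspari-Cohn compactly supported correlation; r = separation / half-width."""
    r = np.abs(np.asarray(r, dtype=float))
    f = np.zeros_like(r)
    m1 = r <= 1.0
    m2 = (r > 1.0) & (r < 2.0)
    r1, r2 = r[m1], r[m2]
    f[m1] = -0.25 * r1**5 + 0.5 * r1**4 + 0.625 * r1**3 - (5.0 / 3.0) * r1**2 + 1.0
    f[m2] = (r2**5 / 12.0 - 0.5 * r2**4 + 0.625 * r2**3
             + (5.0 / 3.0) * r2**2 - 5.0 * r2 + 4.0 - 2.0 / (3.0 * r2))
    return f

# Raw ensemble covariance on a 1-D periodic grid (random ensemble, purely illustrative)
n, n_ens = 100, 20
rng = np.random.default_rng(0)
perturbations = rng.normal(size=(n_ens, n))
perturbations -= perturbations.mean(axis=0)
P_raw = perturbations.T @ perturbations / (n_ens - 1)

# Moderation (localization) matrix from grid-point separation; half-width of 10 points assumed
idx = np.arange(n)
dist = np.abs(idx[:, None] - idx[None, :])
dist = np.minimum(dist, n - dist)      # periodic distance
L = gaspari_cohn(dist / 10.0)

# Schur (Hadamard) product: elementwise moderation of the raw covariance
P_loc = P_raw * L
```

Balance diagnostics of the kind discussed in the paper would then compare the geostrophic or hydrostatic relationships implied by P_raw and P_loc; that step depends on the model variables and is omitted here.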


2018 ◽  
Vol 28 (6) ◽  
pp. 1609-1621
Author(s):  
Xiaoming Li ◽  
Jianhui Zhou ◽  
Feifang Hu

Covariate-adaptive designs are widely used to balance covariates and maintain randomization in clinical trials. Adaptive designs for discrete covariates and their asymptotic properties have been well studied in the literature. However, important continuous covariates are often involved in clinical studies. Simply discretizing or categorizing continuous covariates can result in a loss of information. The current understanding of adaptive designs with continuous covariates lacks a theoretical foundation, as the existing work is entirely based on simulations. Consequently, conventional hypothesis testing in clinical trials using continuous covariates is still not well understood. In this paper, we establish a theoretical framework for hypothesis testing on adaptive designs with continuous covariates based on linear models. For testing treatment effects and the significance of covariates, we obtain the asymptotic distributions of the test statistic under the null and alternative hypotheses. Simulation studies are conducted under a class of covariate-adaptive designs, including the p-value-based method, Su's percentile method, the empirical cumulative-distribution method, the Kullback–Leibler divergence method, and the kernel-density method. Key findings about adaptive designs with independent covariates based on linear models are that (1) hypothesis tests comparing treatment effects are conservative, with a smaller type I error; (2) hypothesis testing under adaptive designs outperforms complete randomization in terms of power; and (3) tests of the significance of covariates remain valid.
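The following toy sketch (a generic mean-balancing biased-coin rule, not one of the specific designs listed above) shows the mechanics the theory addresses: sequential allocation driven by a continuous covariate, followed by a linear-model test of the treatment effect. The allocation probability of 0.75 and all data-generating values are assumptions for illustration.

```python
# Toy covariate-adaptive trial with a continuous covariate (all values assumed).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 200
z = rng.normal(size=n)                # continuous baseline covariate
assignment = np.zeros(n, dtype=int)   # 0 = control, 1 = treatment

for i in range(n):
    if i < 2:
        assignment[i] = i % 2         # seed each arm with one patient
        continue
    trt = assignment[:i] == 1
    # covariate-mean imbalance if patient i were added to each arm
    imb_trt = abs(np.mean(np.append(z[:i][trt], z[i])) - np.mean(z[:i][~trt]))
    imb_ctl = abs(np.mean(z[:i][trt]) - np.mean(np.append(z[:i][~trt], z[i])))
    # biased coin favouring the arm that reduces imbalance
    p_trt = 0.75 if imb_trt < imb_ctl else 0.25
    assignment[i] = int(rng.random() < p_trt)

# Outcome generated under the null of no treatment effect
y = 1.0 + 0.8 * z + rng.normal(size=n)

X = sm.add_constant(np.column_stack([assignment, z]))
fit = sm.OLS(y, X).fit()
print("treatment effect t-statistic:", fit.tvalues[1])
```

Repeating this loop over many simulated trials would give the empirical type I error and power comparisons of the kind reported in the paper.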


2014 ◽  
Vol 53 (4) ◽  
pp. 1099-1119 ◽  
Author(s):  
Wei-Yu Chang ◽  
Jothiram Vivekanandan ◽  
Tai-Chi Chen Wang

Abstract A variational algorithm for estimating measurement error covariance and the attenuation of X-band polarimetric radar measurements is described. It concurrently uses both the differential reflectivity ZDR and the propagation phase ΦDP. The majority of current attenuation estimation techniques use only ΦDP. A few of the ΦDP-based methods use ZDR as a constraint for verifying estimated attenuation. In this paper, a detailed observing system simulation experiment was used for evaluating the performance of the variational algorithm. The results were compared with a single-coefficient ΦDP-based method. Retrieved attenuation from the variational method is more accurate than the results from the single-coefficient ΦDP-based method. Moreover, the variational method is less sensitive to measurement noise in radar observations. The variational method requires an accurate description of error covariance matrices. Relative weights between measurements and background values (i.e., mean values based on long-term drop size distribution (DSD) measurements in the variational method) are determined by their respective error covariances. Instead of using ad hoc values, the error covariance matrices of background and radar measurements are statistically estimated and their spatial characteristics are studied. The estimated error covariance shows higher values in convective regions than in stratiform regions, as expected. The practical utility of the variational attenuation correction method is demonstrated using radar field measurements from the Taiwan Experimental Atmospheric Mobile-Radar (TEAM-R) during the 2008 Southwest Monsoon Experiment/Terrain-Influenced Monsoon Rainfall Experiment (SoWMEX/TiMREX). The accuracy of the attenuation-corrected X-band radar measurements is evaluated by comparing them with collocated S-band radar measurements.
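The radar-specific forward model is beyond a short example, but the variational core referred to above can be sketched generically: a cost function with background and observation error covariance matrices B and R, minimized with a quasi-Newton solver. The linear observation operator, state size, and covariance values below are placeholders, not the paper's formulation.

```python
# Generic variational sketch: J(x) = (x - xb)' B^-1 (x - xb) + (Hx - y)' R^-1 (Hx - y)
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
n_state, n_obs = 20, 10

x_true = np.sin(np.linspace(0.0, np.pi, n_state))
H = rng.normal(size=(n_obs, n_state)) / np.sqrt(n_state)  # placeholder linear observation operator
B = 0.25 * np.eye(n_state)                                 # background error covariance (assumed)
R = 0.04 * np.eye(n_obs)                                   # observation error covariance (assumed)

x_b = x_true + rng.multivariate_normal(np.zeros(n_state), B)
y = H @ x_true + rng.multivariate_normal(np.zeros(n_obs), R)

B_inv, R_inv = np.linalg.inv(B), np.linalg.inv(R)

def cost(x):
    db, do = x - x_b, H @ x - y
    return db @ B_inv @ db + do @ R_inv @ do

def grad(x):
    return 2.0 * (B_inv @ (x - x_b) + H.T @ (R_inv @ (H @ x - y)))

res = minimize(cost, x_b, jac=grad, method="L-BFGS-B")
x_a = res.x   # analysis state, weighted by the relative magnitudes of B and R
```

The abstract's point about relative weights corresponds to the two terms of this cost function: tightening R pulls the analysis toward the observations, tightening B toward the background.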


2019 ◽  
Vol 7 (1) ◽  
pp. 78-91
Author(s):  
Stephen Haslett

Abstract When sample survey data with a complex design (stratification, clustering, unequal selection or inclusion probabilities, and weighting) are used for linear models, estimation of model parameters and their covariance matrices becomes complicated. Standard fitting techniques for sample surveys either model conditionally on survey design variables, or use only design weights based on inclusion probabilities, essentially assuming zero error covariance between all pairs of population elements. Design properties that link two units are not used. However, if the population error structure is correlated, an unbiased estimate of the linear model error covariance matrix for the sample is needed for efficient parameter estimation. By making simultaneous use of the sampling structure and design-unbiased estimates of the population error covariance matrix, the paper develops best linear unbiased estimation (BLUE)-type extensions to standard design-based and joint design- and model-based estimation methods for linear models. The analysis covers both with-replacement and without-replacement sample designs, and recognises that estimation for with-replacement designs requires generalized inverses when any unit is selected more than once. This, and the use of Hadamard products to link sampling and population error covariance matrix properties, are central topics of the paper. Model-based linear model parameter estimation is also discussed.
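As a schematic sketch only (not the paper's estimator), the code below forms a sample error covariance matrix as a Hadamard product of a sampling-structure matrix and an assumed population error covariance, then computes a GLS/BLUE-type coefficient estimate. The cluster structure, covariance values, and the trivial all-ones sampling matrix are my own assumptions; repeated selections under with-replacement designs would require a generalized inverse, as the abstract notes.

```python
# Schematic GLS/BLUE sketch with a Hadamard-product covariance (all structures assumed).
import numpy as np

rng = np.random.default_rng(7)
n, p = 60, 3

# Assumed exchangeable-within-cluster population error covariance for the sampled units
cluster = rng.integers(0, 12, size=n)
Sigma_pop = 0.3 * (cluster[:, None] == cluster[None, :]).astype(float) + 0.7 * np.eye(n)

# Sampling-structure matrix: all ones here (a simple without-replacement sample);
# repeated selections under with-replacement designs would change these entries
S = np.ones((n, n))

# Hadamard (elementwise) product links the sampling structure to the population covariance
V = S * Sigma_pop

X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
beta_true = np.array([1.0, 0.5, -0.2])
y = X @ beta_true + rng.multivariate_normal(np.zeros(n), Sigma_pop)

# GLS / BLUE-type estimate; np.linalg.pinv would replace inv if V were singular,
# e.g. when a with-replacement design selects the same unit more than once
V_inv = np.linalg.inv(V)
beta_hat = np.linalg.solve(X.T @ V_inv @ X, X.T @ V_inv @ y)
cov_beta_hat = np.linalg.inv(X.T @ V_inv @ X)
```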


2015 ◽  
Vol 8 (2) ◽  
pp. 191-203 ◽  
Author(s):  
J. Vira ◽  
M. Sofiev

Abstract. This paper describes the assimilation of trace gas observations into the chemistry transport model SILAM (System for Integrated modeLling of Atmospheric coMposition) using the 3D-Var method. Assimilation results for the year 2012 are presented for the prominent photochemical pollutants ozone (O3) and nitrogen dioxide (NO2). Both species are covered by the AirBase observation database, which provides the observational data set used in this study. Attention was paid to the background and observation error covariance matrices, which were obtained primarily by the iterative application of a posteriori diagnostics. The diagnostics were computed separately for 2 months representing summer and winter conditions, and further disaggregated by time of day. This enabled the derivation of background and observation error covariance definitions, which included both seasonal and diurnal variation. The consistency of the obtained covariance matrices was verified using χ2 diagnostics. The analysis scores were computed for a control set of observation stations withheld from assimilation. Compared to a free-running model simulation, the correlation coefficient for daily maximum values was improved from 0.8 to 0.9 for O3 and from 0.53 to 0.63 for NO2.
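A toy-sized, hedged sketch of two ingredients mentioned above: the linear 3D-Var (optimal interpolation) analysis update built from B and R, and the χ2 consistency diagnostic on the innovations. Matrix sizes, the observation operator, and the covariance values are arbitrary assumptions, not SILAM's configuration.

```python
# Toy 3D-Var update and chi-squared consistency check (illustrative covariances only).
import numpy as np

rng = np.random.default_rng(12)
n_state, n_obs = 50, 30

# Observe a random subset of state elements
H = np.zeros((n_obs, n_state))
H[np.arange(n_obs), rng.choice(n_state, size=n_obs, replace=False)] = 1.0

# Assumed covariances used both to generate the data and in the analysis
idx = np.arange(n_state)
B = 0.5 * np.exp(-np.abs(idx[:, None] - idx[None, :]) / 5.0)   # smooth background errors
R = 0.1 * np.eye(n_obs)                                         # uncorrelated observation errors

x_true = np.ones(n_state)
x_b = x_true + rng.multivariate_normal(np.zeros(n_state), B)
y = H @ x_true + rng.multivariate_normal(np.zeros(n_obs), R)

# Linear 3D-Var / optimal interpolation analysis
d = y - H @ x_b                         # innovation
S = H @ B @ H.T + R
K = B @ H.T @ np.linalg.inv(S)          # gain matrix
x_a = x_b + K @ d

# Chi-squared diagnostic: with consistent B and R, E[d' S^-1 d] = n_obs,
# so this ratio should be close to 1 when averaged over many analyses
chi2_ratio = (d @ np.linalg.solve(S, d)) / n_obs
print("chi2 / n_obs =", chi2_ratio)
```

When B and R are consistent with the actual error statistics, the expected value of the innovation-based quadratic form equals the number of observations, which is the basis of the χ2 check mentioned in the abstract.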


Sensors ◽  
2020 ◽  
Vol 20 (5) ◽  
pp. 1460
Author(s):  
Vincent Chabot ◽  
Maëlle Nodet ◽  
Arthur Vidard

Accounting for realistic observation errors is a known bottleneck in data assimilation, because dealing with error correlations is complex. Following a previous study on this subject, we propose to use multiscale modelling, more precisely the wavelet transform, to address this question. This study aims to investigate the problem further by addressing two issues arising in real-life data assimilation: how to deal with partially missing data (e.g., data concealed by an obstacle between the sensor and the observed system), and how to solve convergence issues associated with complex observation error covariance matrices. Two adjustments relying on wavelet modelling are proposed to deal with these issues and offer significant improvements. The first consists of adjusting the variance coefficients in the frequency domain to account for masked information. The second consists of a gradual assimilation of frequencies. Both rely fully on the multiscale properties associated with wavelet covariance modelling. Numerical results on twin experiments show that multiscale modelling is a promising tool for accounting for correlations in observation errors in realistic applications.
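To illustrate the general idea in one dimension (a simplified stand-in for the paper's setting, not the authors' formulation), the hedged sketch below estimates a diagonal approximation of the observation error covariance in wavelet space and marks where the two adjustments would act: inflating wavelet-space variances when part of the data is masked, and retaining only coarse scales for a gradual assimilation of frequencies. The AR(1) error model, wavelet choice, and the crude inflation rule are my own assumptions.

```python
# 1-D sketch of a diagonal observation error covariance in wavelet space (assumed setup).
import numpy as np
import pywt

rng = np.random.default_rng(5)
n, n_samples = 256, 200
wavelet, level = "db2", 4

# Sample of correlated (AR(1)-like) observation errors used to estimate wavelet variances
errors = np.empty((n_samples, n))
for k in range(n_samples):
    e = rng.normal(size=n)
    for t in range(1, n):
        e[t] = 0.8 * e[t - 1] + 0.6 * rng.normal()
    errors[k] = e

# Variance of each wavelet coefficient across the sample -> diagonal R in wavelet space
coeff_sets = [pywt.wavedec(e, wavelet, level=level) for e in errors]
var_wav = [np.var(np.stack([c[j] for c in coeff_sets]), axis=0) for j in range(level + 1)]

def scale_residual(residual, mask_fraction=1.0, max_scale=None):
    """Scale a residual by inverse wavelet-space standard deviations (approximate R^-1/2).

    mask_fraction < 1 crudely inflates variances to mimic the first adjustment (masked data);
    max_scale keeps only coarse scales, mimicking the gradual assimilation of frequencies.
    """
    coeffs = pywt.wavedec(residual, wavelet, level=level)
    scaled = []
    for j, c in enumerate(coeffs):
        w = c / np.sqrt(var_wav[j] / mask_fraction)
        if max_scale is not None and j > max_scale:
            w = np.zeros_like(w)
        scaled.append(w)
    return pywt.waverec(scaled, wavelet)

weighted = scale_residual(rng.normal(size=n), mask_fraction=0.7, max_scale=2)
```

Under an orthonormal transform, scaling by the inverse standard deviations in wavelet space and transforming back amounts to an approximate application of R^{-1/2}, which is where the masking and convergence adjustments would enter a variational solver.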

