estimation bias
Recently Published Documents


TOTAL DOCUMENTS: 176 (FIVE YEARS: 45)

H-INDEX: 17 (FIVE YEARS: 2)

Author(s):  
David Meenagh ◽  
Patrick Minford ◽  
Michael R. Wickens

Abstract: Price rigidity plays a central role in macroeconomic models but remains controversial. Those espousing it look to Bayesian estimated models in support, while those assuming price flexibility largely impose it on their models, so the controversy continues unresolved by testing on the data. In a Monte Carlo experiment we ask how different estimation methods could help to resolve this controversy. We find that Bayesian estimation creates a large potential estimation bias compared with standard estimation techniques. Indirect estimation, where the bias is found to be low, appears to do best and offers the best way forward for settling the price rigidity controversy.
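Below is a minimal sketch of the kind of Monte Carlo bias comparison the abstract describes, assuming a toy AR(1) data-generating process and a tight-prior MAP estimator standing in for Bayesian estimation; the model, prior location, and all parameter values are illustrative, not the authors' setup.

```python
import numpy as np

rng = np.random.default_rng(0)
T, reps, rho_true = 200, 1000, 0.5

def simulate(rho, T):
    # AR(1): y_t = rho * y_{t-1} + eps_t
    y = np.zeros(T)
    for t in range(1, T):
        y[t] = rho * y[t - 1] + rng.standard_normal()
    return y

def ols(y):
    # Standard (unpenalised) least-squares estimate of rho
    x, z = y[:-1], y[1:]
    return x @ z / (x @ x)

def map_tight_prior(y, mu0=0.9, tau2=0.01):
    # MAP under rho ~ N(mu0, tau2): mimics a tight Bayesian prior
    x, z = y[:-1], y[1:]
    return (x @ z + mu0 / tau2) / (x @ x + 1.0 / tau2)

bias_ols = np.mean([ols(simulate(rho_true, T)) for _ in range(reps)]) - rho_true
bias_map = np.mean([map_tight_prior(simulate(rho_true, T)) for _ in range(reps)]) - rho_true
print(f"OLS bias: {bias_ols:+.4f}   tight-prior MAP bias: {bias_map:+.4f}")
```

In this toy, the prior centred at 0.9 pulls the estimate well away from the true value of 0.5, while the unpenalised estimator shows only the usual small-sample bias, which is the flavour of the Bayesian-versus-standard comparison reported above.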


2021 ◽  
Vol 14 (12) ◽  
pp. 617
Author(s):  
Jia Liu

This paper proposes a semiparametric realized stochastic volatility model that integrates a parametric stochastic volatility model exploiting realized volatility information with a Bayesian nonparametric framework. The flexibility offered by Bayesian nonparametric mixtures not only improves the fit of the asymmetric and leptokurtic densities of asset returns and logarithmic realized volatility but also enables flexible adjustment for estimation bias in realized volatility. Applications to equity data show that the proposed model offers superior density forecasts for returns and improved estimates of parameters and latent volatility compared with existing alternatives.
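A minimal sketch of the measurement structure such a model builds on: a latent log-volatility AR(1), returns driven by the latent state, and a realized-volatility equation that measures the latent state with bias and noise. The parameterisation is illustrative, not the paper's specification, and the constant bias here is exactly what a Bayesian nonparametric mixture would instead model flexibly.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 1000
mu, phi, sigma_eta = -1.0, 0.95, 0.2   # latent log-volatility AR(1)
xi, sigma_u = 0.1, 0.3                 # realized-volatility bias and noise

# Latent log-volatility: h_t = mu + phi*(h_{t-1} - mu) + sigma_eta * eta_t
h = np.full(T, mu)
for t in range(1, T):
    h[t] = mu + phi * (h[t - 1] - mu) + sigma_eta * rng.standard_normal()

r = np.exp(h / 2) * rng.standard_normal(T)          # returns
log_rv = xi + h + sigma_u * rng.standard_normal(T)  # biased, noisy measure of h

print(f"mean(log RV - h) = {np.mean(log_rv - h):+.3f}   (true bias xi = {xi})")
```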


2021 ◽  
Author(s):  
Xiaomei Li

This thesis concerns estimation bias in longitudinal data when there is correlation between the explanatory variables and the individual effect. We first introduce longitudinal data and the estimation methods commonly used for the general linear model: least squares and maximum likelihood. We apply these methods to three simple general models that are commonly used to analyse longitudinal data. Second, we use frequentist and Bayesian analysis to explore the estimation bias theoretically and empirically, with an emphasis on heterogeneity bias. This bias occurs when random effects estimation is applied to data with nonzero correlation between the explanatory variables and the individual effect. We then empirically compare the estimated values with the true values, demonstrating and verifying the theoretical formula that determines the size of the bias [Mundlak, 1978]. To avoid the estimation bias, fixed effects estimation should be used under nonzero correlation; the Hausman test confirms this. However, the bias occurs not only in frequentist analysis but also in Bayesian estimation of the random effects model. Finally, following Mundlak [1978], we define a special Bayesian model that can serve both as a Hausman-type test and as a comparable model, and we show that it is the best-fitting model among the random effects, fixed effects and pooled models when there is correlation between the explanatory variables and the individual effect. Throughout the thesis, we illustrate these ideas with examples based on real and simulated data.
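A minimal sketch of the heterogeneity bias the thesis studies: panel data are simulated with a regressor correlated with the individual effect, and a pooled estimator (which, like random effects estimation, leaves the individual effect in the error term) is contrasted with the within (fixed effects) estimator. All names and values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
n, t, beta = 500, 5, 1.0

alpha = rng.standard_normal(n)                           # individual effects
x = 0.8 * alpha[:, None] + rng.standard_normal((n, t))   # x correlated with alpha
y = beta * x + alpha[:, None] + rng.standard_normal((n, t))

# Pooled slope: alpha stays in the error, so corr(x, alpha) leaks into the estimate
b_pooled = (x * y).sum() / (x * x).sum()

# Within (fixed effects) estimator: demeaning by individual removes alpha
xd = x - x.mean(axis=1, keepdims=True)
yd = y - y.mean(axis=1, keepdims=True)
b_fe = (xd * yd).sum() / (xd * xd).sum()

print(f"pooled: {b_pooled:.3f}   fixed effects: {b_fe:.3f}   (true beta = {beta})")
```

The pooled slope absorbs the correlation between x and alpha, while demeaning within individuals removes the individual effect entirely, which is the Mundlak [1978] point the thesis verifies.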


NeuroImage ◽  
2021 ◽  
pp. 118749
Author(s):  
C.S. Parker ◽  
T. Veale ◽  
M. Bocchetta ◽  
C.F. Slattery ◽  
I.B. Malone ◽  
...  

2021 ◽  
Author(s):  
Dogan C. Cicek ◽  
Enes Duran ◽  
Baturay Saglam ◽  
Kagan Kaya ◽  
Furkan Mutlu ◽  
...  

2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Haixia Hu ◽  
Ling Wang ◽  
Chen Li ◽  
Wei Ge ◽  
Jielai Xia

Abstract
Background: In follow-up studies, the occurrence of an intermediate event may influence the risk of the outcome of interest. Existing methods estimate the effect of the intermediate event by including a time-varying covariate in the outcome model. However, the fraction of the study population that is insusceptible to the intermediate event has not been considered in the literature, leading to biased effect estimates because the analysis dataset includes subjects who are not at risk of the intermediate event.
Methods: In this paper, we propose a new effect estimation method in which the susceptible subpopulation is identified first, so that estimation is conducted in the correct population. The effect is then estimated via the extended Cox regression and landmark methods in the identified susceptible subpopulation. For susceptibility identification, patients with an observed intermediate event time are classified as susceptible. Based on a mixture cure model fitted to the incidence and time of the intermediate event, the susceptibility of patients with censored intermediate event times is predicted by imputing the residual intermediate event time. The estimation performance of the new method was investigated in various scenarios via Monte Carlo simulations, with existing methods serving as comparators. An application of the proposed method to mycosis fungoides data is reported as an example.
Results: The simulation results show that the estimation bias of the proposed method is smaller than that of the existing methods, especially when the insusceptible fraction is large. The results hold for small sample sizes. Moreover, the estimation bias of the new method decreases as more covariates, especially continuous covariates, are included in the mixture cure model. Neither heterogeneity in the covariate effects on the outcome between the insusceptible and susceptible subpopulations nor the choice of landmark time affects the estimation performance of the new method.
Conclusions: By pre-identifying susceptible subjects, the proposed method improves the accuracy of estimating the effect of the intermediate event on the outcome when the study population contains a fraction insusceptible to the intermediate event.
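A minimal sketch of the bias mechanism described above, assuming exponential intermediate event times, a cured (insusceptible) fraction, and administrative censoring; a simple exponential rate contrast stands in for the paper's extended Cox and landmark analyses, and all values are illustrative. The naive analysis keeps insusceptible subjects in the risk set, while the oracle restricts to susceptibles.

```python
import numpy as np

rng = np.random.default_rng(3)
n, beta, p_cure, c = 20000, 0.7, 0.4, 2.0       # illustrative values

x = rng.integers(0, 2, n)                       # binary covariate
susceptible = rng.random(n) > p_cure            # cured subjects never have the event
t_event = rng.exponential(1.0 / np.exp(beta * x))
time = np.where(susceptible, np.minimum(t_event, c), c)   # cured: censored at c
event = susceptible & (t_event <= c)

def log_rate(mask):
    # Exponential MLE of the log event rate: log(#events / total exposure)
    return np.log(event[mask].sum() / time[mask].sum())

beta_naive = log_rate(x == 1) - log_rate(x == 0)            # whole population
beta_oracle = (log_rate((x == 1) & susceptible)
               - log_rate((x == 0) & susceptible))          # susceptibles only
print(f"naive: {beta_naive:.3f}   susceptible-only: {beta_oracle:.3f}   (true beta = {beta})")
```

The naive contrast is attenuated toward zero because cured subjects inflate the exposure time in both covariate groups, and the attenuation grows with the cure fraction, consistent with the simulation findings reported above.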


Quantum ◽  
2021 ◽  
Vol 5 ◽  
pp. 548
Author(s):  
Zhenyu Cai

Even with the recent rapid developments in quantum hardware, noise remains the biggest challenge for the practical application of any near-term quantum device. Full quantum error correction cannot be implemented in these devices due to their limited scale. Therefore, instead of relying on engineered code symmetry, symmetry verification was developed, which uses the symmetry inherent in the physical problem we are trying to solve. In this article, we develop a general framework named symmetry expansion that provides a wide spectrum of symmetry-based error mitigation schemes beyond symmetry verification, enabling us to achieve different balances between the estimation bias and the sampling cost of the scheme. We show that certain symmetry expansion schemes can achieve a smaller estimation bias than symmetry verification through cancellation between the biases due to the detectable and undetectable noise components. A practical way to search for such a small-bias scheme is introduced. By numerically simulating the Fermi-Hubbard model for energy estimation, the small-bias symmetry expansion we found achieves an estimation bias 6 to 9 times below what is achievable by symmetry verification when the average number of circuit errors is between 1 and 2. The corresponding sampling cost for random shot-noise reduction is just 2 to 6 times higher than for symmetry verification. Beyond symmetries inherent to the physical problem, our formalism is also applicable to engineered symmetries. For example, the recent scheme for exponential error suppression using multiple noisy copies of the quantum device is just a special case of symmetry expansion using the permutation symmetry among the copies.
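A small numerical sketch of the idea, assuming a two-qubit toy problem with symmetry S = Z⊗Z and observable O = X⊗X (which commutes with S). Symmetry verification corresponds to the projector weights (1/2, 1/2) on (I, S), while other weights give expansion schemes whose biases from detectable and undetectable noise can cancel; the noise model and the tuned weights below are illustrative only.

```python
import numpy as np

# Two-qubit toy: symmetry S = Z(x)Z, observable O = X(x)X
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])
S, O, I4 = np.kron(Z, Z), np.kron(X, X), np.eye(4)

psi = np.array([1, 0, 0, 1]) / np.sqrt(2)      # ideal state, parity +1
bad = np.array([0, 1, -1, 0]) / np.sqrt(2)     # detectable error, parity -1
sym_err = np.diag([0.5, 0, 0, 0.5])            # undetectable error, parity +1
rho = 0.7 * np.outer(psi, psi) + 0.2 * np.outer(bad, bad) + 0.1 * sym_err

def estimate(w_id, w_sym):
    # Symmetry expansion: effective state proportional to (w_id*I + w_sym*S) rho
    M = w_id * I4 + w_sym * S
    return np.trace(M @ rho @ O).real / np.trace(M @ rho).real

exact = psi @ O @ psi                            # noiseless value: +1
for name, w in [("raw", (1.0, 0.0)),             # no mitigation
                ("verification", (0.5, 0.5)),    # projector (I + S)/2
                ("expansion", (0.375, 0.625))]:  # weights tuned for this toy
    v = estimate(*w)
    print(f"{name:>12}: estimate {v:+.3f}, bias {v - exact:+.3f}")
```

With these tuned weights the expansion estimate is exactly unbiased in this toy: its detectable-noise bias cancels the normalisation shift caused by the undetectable error, whereas verification removes only the parity −1 component and still inherits the residual bias.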

