asymptotic bias
Recently Published Documents

TOTAL DOCUMENTS: 69 (five years: 7)
H-INDEX: 18 (five years: 2)
2021 ◽ Vol 64 (4) ◽ pp. 70-82
Author(s): Galina Besstremyannaya, Sergei Golovan

The desire to capture heterogeneity in the response of the dependent variable to covariates often leads empiricists to employ panel data quantile regression models. Practitioners frequently overlook the limitations of their datasets in terms of the sample size n and the panel length T. Yet quantile regression requires large samples, long panels, and a small ratio n/T, so the estimator in quantile regression with short panels is biased. The paper reviews approaches to estimating longitudinal quantile regression models. We highlight the fact that smoothed quantile regression may be viewed as a remedy for reducing the asymptotic bias of the estimator in short panels, in both quantile-dependent and quantile-independent fixed-effect specifications.
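To make the smoothing idea concrete, here is a minimal sketch of convolution-smoothed quantile regression (pooled, without the fixed effects discussed above), assuming a Gaussian kernel; the function names and default bandwidth are ours, for illustration only.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def smoothed_check_loss(u, tau, h):
    # Convolution-smoothed check function with a Gaussian kernel:
    # l_h(u) = E[rho_tau(u + h*Z)], Z ~ N(0, 1), which has this closed form.
    c = -u / h
    return tau * u + h * (c * norm.cdf(c) + norm.pdf(c))

def smoothed_qr(y, X, tau=0.5, h=0.5):
    # Minimize the mean smoothed quantile loss over the coefficients.
    def objective(beta):
        return smoothed_check_loss(y - X @ beta, tau, h).mean()
    beta0 = np.linalg.lstsq(X, y, rcond=None)[0]  # least-squares start
    return minimize(objective, beta0, method="BFGS").x
```

As the bandwidth h shrinks to zero, the smoothed loss approaches the standard check function, so ordinary quantile regression is recovered in the limit; the smoothness is what makes the bias analytically tractable and correctable.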


Stats ◽ 2020 ◽ Vol 4 (1) ◽ pp. 1-17
Author(s): Samuele Tosatto, Riad Akrour, Jan Peters


The Nadaraya-Watson kernel estimator is among the most popular nonparametric regression techniques thanks to its simplicity. Its asymptotic bias was studied by Rosenblatt in 1969 and has been reported in the related literature. However, given its asymptotic nature, it gives no access to a hard bound. The increasing popularity of predictive tools for automated decision-making heightens the need for hard (non-probabilistic) guarantees. To address this issue, we propose an upper bound on the bias which holds for finite bandwidths, using Lipschitz assumptions and relaxing some of the prerequisites of Rosenblatt's analysis. Our bound has potential applications in fields such as surgical robotics or self-driving cars, where hard guarantees on the prediction error are needed.
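For reference, the estimator itself is just a kernel-weighted average of the responses. A minimal sketch with a Gaussian kernel (one-dimensional inputs; the function name is ours):

```python
import numpy as np

def nadaraya_watson(x_query, x_train, y_train, h):
    # Nadaraya-Watson estimate at each query point:
    # m_hat(x) = sum_i K((x - x_i)/h) * y_i / sum_i K((x - x_i)/h),
    # here with an (unnormalized) Gaussian kernel K, which cancels
    # in the ratio.
    w = np.exp(-0.5 * ((x_query[:, None] - x_train[None, :]) / h) ** 2)
    return (w * y_train).sum(axis=1) / w.sum(axis=1)
```

The bandwidth h governs the usual tradeoff: shrinking h reduces the (asymptotic) bias the paper bounds, at the cost of higher variance.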


2020
Author(s): Liang Chen, Yulong Huo


Summary: This paper considers panel data models where the idiosyncratic errors are subject to conditional quantile restrictions. We propose a two-step estimator based on smoothed quantile regressions that is easy to implement. The asymptotic distribution of the estimator is established, and the analytical expression of its asymptotic bias is derived. Building on these results, we show how to conduct asymptotically valid inference on the basis of both analytical and split-panel jackknife bias corrections. Finite-sample simulations support our theoretical analysis and illustrate the importance of bias correction in quantile regressions for panel data. Finally, in an empirical application, the proposed method is used to study the growth effects of foreign direct investment.
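The split-panel jackknife (due to Dhaene and Jochmans, 2015) is simple enough to sketch. Assuming the estimator's bias has a leading O(1/T) term, combining the full-panel estimate with the two half-panel estimates cancels that term; `estimator` below is a hypothetical callable, and T is assumed even for simplicity.

```python
def split_panel_jackknife(estimator, y, X, T):
    # Split-panel jackknife: if the estimator's bias is b/T + o(1/T),
    # then 2*beta_full - mean(half-panel estimates) removes the leading
    # term, since each half-panel estimate carries bias ~ b/(T/2) = 2b/T.
    # y and X are arrays of shape (n, T, ...) indexed by unit and period.
    half = T // 2
    beta_full = estimator(y, X)
    beta_first = estimator(y[:, :half], X[:, :half])
    beta_second = estimator(y[:, half:], X[:, half:])
    return 2.0 * beta_full - 0.5 * (beta_first + beta_second)
```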


Author(s): Vincent Francois-Lavet, Guillaume Rabusseau, Joelle Pineau, Damien Ernst, Raphael Fonteneau


When an agent has limited information on its environment, the suboptimality of an RL algorithm can be decomposed into the sum of two terms: a term related to an asymptotic bias (suboptimality with unlimited data) and a term due to overfitting (additional suboptimality due to limited data). In the context of reinforcement learning with partial observability, this paper provides an analysis of the tradeoff between these two error sources. In particular, our theoretical analysis formally characterizes how a smaller state representation increases the asymptotic bias while decreasing the risk of overfitting.
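In notation of our own choosing (not necessarily the paper's): write $V^{*}$ for the optimal value, $\pi_{\phi}^{\infty}$ for the best policy attainable with state representation $\phi$ and unlimited data, and $\pi_{\phi,D}$ for the policy learned from the finite dataset $D$. The decomposition then reads:

$$
\underbrace{V^{*} - V^{\pi_{\phi,D}}}_{\text{total suboptimality}}
= \underbrace{\left(V^{*} - V^{\pi_{\phi}^{\infty}}\right)}_{\text{asymptotic bias}}
+ \underbrace{\left(V^{\pi_{\phi}^{\infty}} - V^{\pi_{\phi,D}}\right)}_{\text{overfitting}}
$$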


2019 ◽ Vol 65 ◽ pp. 1-30
Author(s): Vincent Francois-Lavet, Guillaume Rabusseau, Joelle Pineau, Damien Ernst, Raphael Fonteneau


This paper provides an analysis of the tradeoff between asymptotic bias (suboptimality with unlimited data) and overfitting (additional suboptimality due to limited data) in the context of reinforcement learning with partial observability. Our theoretical analysis formally characterizes how a smaller state representation decreases the risk of overfitting while potentially increasing the asymptotic bias. The analysis relies on expressing the quality of a state representation by bounding $L_1$ error terms of the associated belief states. The theoretical results are empirically illustrated when the state representation is a truncated history of observations, both on synthetic POMDPs and on a large-scale POMDP in the context of smart grids, with real-world data. Finally, similarly to known results in the fully observable setting, we also briefly discuss and empirically illustrate how using function approximators and adapting the discount factor may improve the tradeoff between asymptotic bias and overfitting in the partially observable context.
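A minimal sketch of the truncated-history representation used in the experiments (the helper name and encoding are ours): each state is simply the tuple of the last k observations, so k directly controls the bias-overfitting tradeoff described above.

```python
from collections import deque

def truncated_history_states(observations, k):
    # Encode each time step by the tuple of the most recent k observations
    # (shorter tuples at the start, while fewer than k are available).
    # Larger k: richer representation, lower asymptotic bias, but more
    # distinct states to estimate from the same data (more overfitting).
    history = deque(maxlen=k)
    states = []
    for obs in observations:
        history.append(obs)
        states.append(tuple(history))
    return states
```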


2018 ◽ Vol 17 (01) ◽ pp. 1-17
Author(s): Fangchao He, Qiang Wu


We propose a bias-corrected regularization kernel ranking (BCRKR) method and characterize the asymptotic bias and variance of the estimated ranking score function. The results show that BCRKR has smaller asymptotic bias than the traditional regularization kernel ranking (RKR) method, while its variance has the same order of decay as that of RKR as the sample size goes to infinity. Therefore, BCRKR is expected to be as effective as RKR, and its smaller bias favors its use in blockwise data analysis such as distributed learning for big data. The proofs make use of a concentration inequality for integral-operator U-statistics.
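The exact BCRKR construction is given in the paper; as a hedged illustration of the general idea of bias correction for kernel-regularized estimators, here is one generic device that refits the residuals once (twicing, i.e. one step of iterated Tikhonov) on a precomputed kernel matrix K. This is not necessarily the authors' construction.

```python
import numpy as np

def krr_fit(K, y, lam):
    # Regularized kernel coefficients: alpha = (K + lam * n * I)^{-1} y.
    n = len(y)
    return np.linalg.solve(K + lam * n * np.eye(n), y)

def bias_reduced_krr(K, y, lam):
    # Generic one-step bias reduction: fit, then refit the residuals and
    # add the correction. Regularization shrinks the fit toward zero;
    # the residual refit recovers part of that shrinkage, lowering bias
    # at modest cost in variance. Illustrative only, not the paper's BCRKR.
    alpha = krr_fit(K, y, lam)
    alpha_corr = krr_fit(K, y - K @ alpha, lam)
    return alpha + alpha_corr
```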


2017 ◽ Vol 27 (6) ◽ pp. 3255-3304
Author(s): Vladislav B. Tadić, Arnaud Doucet
