A TWO-STAGE PLUG-IN BANDWIDTH SELECTION AND ITS IMPLEMENTATION FOR COVARIANCE ESTIMATION

2009
Vol 26 (3)
pp. 710-743
Author(s):  
Masayuki Hirukawa

The two most popular bandwidth choice rules for kernel HAC estimation are those of Andrews (1991) and Newey and West (1994). This paper suggests an alternative approach that estimates an unknown quantity in the optimal bandwidth for the HAC estimator (the so-called normalized curvature) using a general class of kernels, and derives the bandwidth that minimizes the asymptotic mean squared error of this normalized-curvature estimator. It is shown that the optimal bandwidth for the kernel-smoothed normalized curvature estimator should diverge at a slower rate than that of the HAC estimator using the same kernel. An implementation method for the optimal HAC bandwidth, analogous to the procedure of Sheather and Jones (1991) for probability density estimation, is also developed. The finite-sample performance of the new bandwidth choice rule is assessed through Monte Carlo simulations.
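
To fix ideas, here is a minimal univariate sketch (in Python; the function names are mine) of the standard first-stage parametric plug-in step that rules of this kind build on: an AR(1) coefficient is estimated, mapped into the Andrews (1991) Bartlett-kernel bandwidth S_T = 1.1447(alpha(1)T)^(1/3), and plugged into the kernel HAC formula. The paper's two-stage rule differs in that the unknown normalized curvature is itself estimated by kernel smoothing rather than through a parametric model.

```python
import numpy as np

def bartlett_hac(u, bandwidth):
    """Bartlett-kernel HAC estimate of the long-run variance of u."""
    u = np.asarray(u, float) - np.mean(u)
    T = len(u)
    lrv = np.sum(u * u) / T                      # lag-0 autocovariance
    for j in range(1, min(int(bandwidth), T - 1) + 1):
        w = 1.0 - j / bandwidth                  # Bartlett weight k(j / S_T)
        gamma_j = np.sum(u[j:] * u[:-j]) / T     # lag-j autocovariance
        lrv += 2.0 * w * gamma_j
    return lrv

def andrews_bandwidth(u):
    """First-stage AR(1) plug-in bandwidth for the Bartlett kernel
    (Andrews, 1991): S_T = 1.1447 * (alpha(1) * T)^(1/3)."""
    u = np.asarray(u, float) - np.mean(u)
    T = len(u)
    rho = np.sum(u[1:] * u[:-1]) / np.sum(u[:-1] ** 2)   # AR(1) estimate
    alpha1 = 4.0 * rho ** 2 / ((1.0 - rho) ** 2 * (1.0 + rho) ** 2)
    return 1.1447 * (alpha1 * T) ** (1.0 / 3.0)

# usage on a simulated AR(1) series
rng = np.random.default_rng(0)
u = np.zeros(500)
for t in range(1, 500):
    u[t] = 0.5 * u[t - 1] + rng.standard_normal()
S_T = andrews_bandwidth(u)
print(S_T, bartlett_hac(u, S_T))
```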

2014
Vol 31 (5)
pp. 1054-1077
Author(s):  
Daniel Wilhelm

A two-step generalized method of moments (GMM) estimation procedure can be made robust to heteroskedasticity and autocorrelation in the data by using a nonparametric estimator of the optimal weighting matrix. This paper addresses the issue of choosing the corresponding smoothing parameter (or bandwidth) so that the resulting point estimate is optimal in a certain sense. We derive an asymptotically optimal bandwidth that minimizes a higher-order approximation to the asymptotic mean squared error of the estimator of interest. We show that the optimal bandwidth is of the same order as the one minimizing the mean squared error of the nonparametric plug-in estimator, but the constants of proportionality are significantly different. Finally, we develop a data-driven bandwidth selection rule and show, in a simulation experiment, that it may substantially reduce the estimator’s mean squared error relative to existing bandwidth choices, especially when the number of moment conditions is large.
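
The sketch below (Python; the linear-IV setup and function names are my own illustration, not the paper's code) shows where the bandwidth enters a two-step GMM: the first step uses a simple weighting matrix, and the second step inverts a Bartlett-kernel HAC estimate of the moment covariance computed with the chosen bandwidth. The paper's point is that this bandwidth should be tuned to the mean squared error of the final point estimate rather than to that of the weighting-matrix estimator.

```python
import numpy as np

def hac_moment_covariance(G, bandwidth):
    """Bartlett-kernel HAC estimate of the long-run covariance of the
    moment series G (T x m array)."""
    G = G - G.mean(axis=0)
    T = G.shape[0]
    S = G.T @ G / T
    for j in range(1, min(int(bandwidth), T - 1) + 1):
        w = 1.0 - j / bandwidth
        Gamma = G[j:].T @ G[:-j] / T
        S += w * (Gamma + Gamma.T)
    return S

def two_step_gmm_iv(y, X, Z, bandwidth):
    """Two-step GMM for the linear IV model y = X b + u with
    instruments Z; the bandwidth only affects the second step."""
    T = len(y)

    def gmm(W):
        A = X.T @ Z @ W @ Z.T @ X
        c = X.T @ Z @ W @ Z.T @ y
        return np.linalg.solve(A, c)

    beta1 = gmm(np.linalg.inv(Z.T @ Z / T))        # step 1: 2SLS-type weight
    G = Z * (y - X @ beta1)[:, None]               # moment series z_t * u_t
    W2 = np.linalg.inv(hac_moment_covariance(G, bandwidth))
    return gmm(W2)                                 # step 2: HAC-weighted
```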


Author(s):  
Yulia Kotlyarova
Marcia M. A. Schafgans
Victoria Zinde-Walsh

Abstract: In this paper, we summarize results on the convergence rates of various kernel-based non- and semiparametric estimators, focusing on the impact of insufficient distributional smoothness, possibly unknown degrees of smoothness, and even the non-existence of a density. In the presence of a possible lack of smoothness and uncertainty about the degree of smoothness, we survey methods of safeguarding against this uncertainty, with emphasis on nonconvex model averaging. This approach can be implemented via a combined estimator that selects weights by minimizing the asymptotic mean squared error. We argue that evaluations of the finite-sample performance of these and similar estimators should account for a possible lack of smoothness.
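
As a concrete illustration of the averaging step (a sketch under my own notation, not the authors' code): given candidate kernel estimates and an estimate M of their joint mean squared error matrix, the weights minimizing w'Mw subject to summing to one have the closed form M^{-1}1 / (1'M^{-1}1); because the weights are not restricted to be nonnegative, the average is nonconvex.

```python
import numpy as np

def combination_weights(M):
    """Weights minimizing w' M w subject to sum(w) = 1, where M estimates
    the joint MSE matrix of the candidate estimators. Closed form:
    w = M^{-1} 1 / (1' M^{-1} 1); entries may be negative."""
    ones = np.ones(M.shape[0])
    v = np.linalg.solve(M, ones)
    return v / (ones @ v)

def combined_estimate(estimates, M):
    """Weighted average of the candidate estimates using those weights."""
    return combination_weights(M) @ np.asarray(estimates, float)
```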


2020
Author(s):
Mohitosh Kejriwal
Xuewen Yu

Summary: This paper develops a new approach to forecasting a highly persistent time series that employs feasible generalized least squares (FGLS) estimation of the deterministic components in conjunction with Mallows model averaging. Within a local-to-unity asymptotic framework, we derive analytical expressions for the asymptotic mean squared error and one-step-ahead mean squared forecast risk of the proposed estimator and show that the optimal FGLS weights are different from their ordinary least squares (OLS) counterparts. We also provide theoretical justification for a generalized Mallows averaging estimator that incorporates lag order uncertainty in the construction of the forecast. Monte Carlo simulations demonstrate that the proposed procedure yields a considerably lower finite-sample forecast risk relative to OLS averaging. An application to U.S. macroeconomic time series illustrates the efficacy of the advocated method in practice and finds that both persistence and lag order uncertainty have important implications for the accuracy of forecasts.
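
For readers unfamiliar with the averaging step, the sketch below (Python; the solver choice and names are my own simplification) computes standard Mallows weights over candidate OLS fits, in the spirit of Hansen (2007); the paper's contribution replaces the OLS fits of the deterministic components with FGLS fits and derives different optimal weights within the local-to-unity framework.

```python
import numpy as np
from scipy.optimize import minimize

def mallows_weights(y, designs, sigma2):
    """Mallows model-averaging weights over candidate OLS fits.
    designs: list of T x k_m regressor matrices (e.g., increasing lag
    orders); sigma2: an estimate of the innovation variance."""
    fits, dofs = [], []
    for X in designs:
        H = X @ np.linalg.solve(X.T @ X, X.T)    # hat matrix of model m
        fits.append(H @ y)
        dofs.append(np.trace(H))
    F = np.column_stack(fits)                    # T x M fitted values
    k = np.array(dofs)

    def criterion(w):                            # Mallows criterion C(w)
        resid = y - F @ w
        return resid @ resid + 2.0 * sigma2 * (k @ w)

    M = F.shape[1]
    res = minimize(criterion, np.full(M, 1.0 / M),
                   bounds=[(0.0, 1.0)] * M,
                   constraints=({'type': 'eq',
                                 'fun': lambda w: w.sum() - 1.0},))
    return res.x
```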


2019
Vol 22 (3)
pp. 995-1008
Author(s):  
M. N. M. van Lieshout

Abstract: We investigate the asymptotic mean squared error of kernel estimators of the intensity function of a spatial point process. We derive expansions for the bias and variance in the scenario that $n$ independent copies of a point process in $\mathbb{R}^{d}$ are superposed. When the same bandwidth is used in all $d$ dimensions, we show that an optimal bandwidth exists and is of the order $n^{-1/(d+4)}$ under appropriate smoothness conditions on the true intensity function.
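
A minimal sketch of the estimator being analyzed (Python; Gaussian kernel, no edge correction, names illustrative): the superposed pattern is smoothed with a single bandwidth h in all d dimensions, and h is taken proportional to n^{-1/(d+4)}, the rate the paper shows to be optimal. The proportionality constant depends on the unknown intensity and is set to 1 below only as a placeholder.

```python
import numpy as np

def intensity_estimate(x0, points, n_copies, h):
    """Gaussian-kernel estimate of the intensity at x0 from the
    superposition of n_copies independent patterns (points: N x d
    array; no edge correction, illustration only)."""
    d = points.shape[1]
    u = (points - np.asarray(x0, float)) / h
    K = np.exp(-0.5 * np.sum(u ** 2, axis=1)) / (2.0 * np.pi) ** (d / 2.0)
    return K.sum() / (n_copies * h ** d)

def rate_bandwidth(n_copies, d, c=1.0):
    """The paper's optimal rate h = c * n^(-1/(d+4)); the constant c is
    unknown in practice and set to 1 here as a placeholder."""
    return c * n_copies ** (-1.0 / (d + 4))
```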


2008
Vol 47 (02)
pp. 167-173
Author(s):
A. Pfahlberg
O. Gefeller
R. Weißbach

Summary
Objectives: In oncological studies, the hazard rate can be used to differentiate subgroups of the study population according to their patterns of survival risk over time. Nonparametric curve estimation has been suggested as an exploratory means of revealing such patterns. The decision about the type of smoothing parameter is critical for performance in practice. In this paper, we study data-adaptive smoothing.
Methods: A decade ago, the nearest-neighbor bandwidth was introduced for censored data in survival analysis. It is specified by one parameter, namely the number of nearest neighbors. Bandwidth selection in this setting has rarely been investigated, although its heuristic advantages over the frequently studied fixed bandwidth are quite obvious. The asymptotic relationship between the fixed and the nearest-neighbor bandwidth can be used to generate novel approaches.
Results: We develop a new selection algorithm, termed double-smoothing, for the nearest-neighbor bandwidth in hazard rate estimation. Our approach uses a finite-sample approximation of the asymptotic relationship between the fixed and nearest-neighbor bandwidths. In doing so, we identify the nearest-neighbor bandwidth as an additional smoothing step and achieve further data adaptation after fixed-bandwidth smoothing. We illustrate the application of the new algorithm in a clinical study and compare the outcome to the traditional fixed-bandwidth result, thus demonstrating the practical performance of the technique.
Conclusion: The double-smoothing approach enlarges the methodological repertoire for selecting smoothing parameters in nonparametric hazard rate estimation. The slight increase in computational effort is rewarded with a substantial gain in estimation stability, demonstrating the benefit of the technique for biostatistical applications.
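
To make the objects concrete, here is a sketch (Python; the names and kernel choice are mine) of a kernel hazard estimator of Ramlau-Hansen type in which the bandwidth at t is the distance to the k-th nearest uncensored event time. The paper's double-smoothing algorithm concerns how to choose k, and is not reproduced here.

```python
import numpy as np

def nn_bandwidth(t, event_times, k):
    """Nearest-neighbor bandwidth: distance from t to the k-th nearest
    uncensored event time."""
    d = np.sort(np.abs(np.asarray(event_times, float) - t))
    return d[min(k, len(d)) - 1]

def hazard_estimate(t, times, uncensored, k):
    """Kernel hazard estimate (Ramlau-Hansen form, Epanechnikov kernel)
    with a nearest-neighbor bandwidth; `uncensored` flags event times."""
    times = np.asarray(times, float)
    events = times[np.asarray(uncensored, bool)]
    b = nn_bandwidth(t, events, k)
    at_risk = np.array([(times >= s).sum() for s in events])
    u = (t - events) / b
    K = np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u ** 2), 0.0)
    return np.sum(K / at_risk) / b
```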


Extremes
2021
Author(s):
Laura Fee Schneider
Andrea Krajina
Tatyana Krivobokova

Abstract: Threshold selection plays a key role in various aspects of statistical inference for rare events. In this work, two new threshold selection methods are introduced. The first approach measures the fit of the exponential approximation above a threshold and achieves good performance in small samples. The second method smoothly estimates the asymptotic mean squared error of the Hill estimator and performs consistently well over a wide range of processes. Both methods are analyzed theoretically, compared to existing procedures in an extensive simulation study, and applied to a dataset of financial losses, where the underlying extreme value index is assumed to vary over time.
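
Both methods target the Hill estimator, whose form is easy to state. A minimal sketch (Python, illustrative names) computes it from the k largest order statistics and traces it over a grid of k, the path along which the paper's second method estimates the asymptotic mean squared error.

```python
import numpy as np

def hill_estimator(x, k):
    """Hill estimator of the extreme value index from the k largest
    order statistics of a positive sample x (requires k < len(x))."""
    x = np.sort(np.asarray(x, float))
    return np.mean(np.log(x[-k:] / x[-k - 1]))

def hill_path(x, ks):
    """Hill estimates over a grid of k values (the 'Hill plot')."""
    return np.array([hill_estimator(x, k) for k in ks])
```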


Risks
2021
Vol 9 (4)
pp. 70
Author(s):  
Małgorzata Just ◽  
Krzysztof Echaust

The appropriate choice of a threshold level, which separates the tails of the probability distribution of a random variable from its middle part, is considered a very complex and challenging task. This paper provides an empirical study of various methods of optimal tail selection in risk measurement. The results indicate which methods may be useful in practice for investors and for financial and regulatory institutions. Some methods that perform well in simulation studies based on theoretical distributions may not perform well when real data are in use. We analyze twelve methods with different parameters for forty-eight world indices, using returns from the period 2000–Q1 2020 and four sub-periods. The research objective is to compare the methods and to identify those which can be recognized as useful in risk measurement. The results suggest that only four tail selection methods, i.e., the Path Stability algorithm, the minimization of the Asymptotic Mean Squared Error approach, the automated Eyeball method with carefully selected tuning parameters, and the Hall single bootstrap procedure, may be useful in practical applications.
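
As an illustration of what stability-based selection looks like in code (a rough sketch in the same spirit as path-stability rules, not any of the published algorithms compared in the paper): slide a window along the Hill path and pick the k around which the estimates vary least.

```python
import numpy as np

def stable_k(hill_values, ks, window=20):
    """Pick the k at the center of the window over which the Hill
    estimates have the smallest standard deviation; a rough proxy for
    path-stability-type rules, not the published algorithms."""
    hill_values = np.asarray(hill_values, float)
    best_k, best_sd = ks[0], np.inf
    for i in range(len(ks) - window + 1):
        sd = hill_values[i:i + window].std()
        if sd < best_sd:
            best_k, best_sd = ks[i + window // 2], sd
    return best_k
```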


2017
Vol 29 (1)
pp. 67-92
Author(s):
James Chapman
Tarmo Uustalu
Niccolò Veltri

The delay datatype was introduced by Capretta (Logical Methods in Computer Science, 1(2), article 1, 2005) as a means to deal with partial functions (as in computability theory) in Martin-Löf type theory. The delay datatype is a monad. It is often desirable to consider two delayed computations equal if, whenever one of them terminates, the other also terminates with an equal value; the equivalence relation underlying this identification is called weak bisimilarity. In type theory, one commonly replaces quotients with setoids. In this approach, the delay datatype quotiented by weak bisimilarity is still a monad, a constructive alternative to the maybe monad. In this paper, we consider the alternative approach of Hofmann (Extensional Constructs in Intensional Type Theory, Springer, London, 1997) of extending type theory with inductive-like quotient types. In this setting, it is difficult to define the intended monad multiplication for the quotiented datatype. We give a solution in which we postulate some principles, crucially proposition extensionality and the (semi-classical) axiom of countable choice. With the aid of these principles, we also prove that the quotiented delay datatype delivers free ω-complete pointed partial orders (ωcppos). Altenkirch et al. (Lecture Notes in Computer Science, vol. 10203, Springer, Heidelberg, 534–549, 2017) demonstrated that, in homotopy type theory, a certain higher inductive-inductive type is the free ωcppo on a type X essentially by definition; this allowed them to obtain a monad of free ωcppos without recourse to a choice principle. We notice that, by a similar construction, a simpler ordinary higher inductive type gives the free countably complete join semilattice on the unit type 1. This type suffices for constructing a monad, which is isomorphic to the one of Altenkirch et al. We have fully formalized our results in the Agda dependently typed programming language.
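
For readers unfamiliar with the delay datatype, here is a minimal untyped sketch in Python (the paper works in Martin-Löf type theory, where Delay is coinductive and the quotient by weak bisimilarity is the delicate part; none of that structure is expressible here): a delayed computation is either a value now or a thunk producing another delayed computation later, bind gives the monad structure, and two computations are weakly bisimilar when they reach equal values after some finite numbers of Later steps.

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Now:
    """A computation that has terminated with a value."""
    value: Any

@dataclass
class Later:
    """A computation postponed by one step (thunk returning a Delay)."""
    step: Callable[[], Any]

def bind(d, f):
    """Monad bind: once d terminates, continue with f on its value."""
    if isinstance(d, Now):
        return f(d.value)
    return Later(lambda: bind(d.step(), f))

def run(d, fuel):
    """Unwind at most `fuel` Later layers; None means 'not yet decided'.
    Weak bisimilarity identifies computations terminating with equal
    values after any finite number of steps."""
    for _ in range(fuel):
        if isinstance(d, Now):
            return d.value
        d = d.step()
    return d.value if isinstance(d, Now) else None

# usage: a value reached after three steps, then doubled via bind
slow = Later(lambda: Later(lambda: Later(lambda: Now(21))))
print(run(bind(slow, lambda v: Now(2 * v)), fuel=10))   # 42
```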


2020
Vol 72 (4)
pp. 41-57
Author(s):  
Marek Menkiszak

In the face of a new serious crisis in Europe caused by the coronavirus pandemic, Russia has taken an ambiguous position. On the one hand, it has been spreading fake news; on the other, it has provided Italy with symbolic support. Russia’s immediate goal was to persuade the European Union (EU) to reduce or lift sanctions. The new situation provides a fresh argument to those participants in the European debate who favour normalisation, or even a reset, of relations with Russia. Among them, the voice of France is particularly clear, since its President Emmanuel Macron has taken up the initiative of building an ‘architecture of trust and security’ with Russia. These proposals, which remain quite vague, are based on questionable assumptions and deepen divisions in Europe and the crisis in transatlantic relations. By raising Moscow’s hopes for some form of (geo)political bargain, they in fact encourage Russia to continue its aggressive policy towards its European neighbours. An alternative approach based on several principles is needed in the debate on EU policy towards Russia: developing all five of Mogherini’s points; maintaining sanctions against Russia until the reasons for their introduction cease to exist; symmetry of commitments and benefits related to limited cooperation with Russia; inviolability of the key interests, security and sovereignty of EU and NATO member and partner states; and balancing dialogue with the Russian authorities by supporting Russian civil society. Europe can survive without Russia, but Russia cannot survive without Europe, which is why European policy needs consistency and strategic patience.

