AN ADAPTIVE TEST OF STOCHASTIC MONOTONICITY

2020 ◽  
pp. 1-42
Author(s):  
Denis Chetverikov ◽  
Daniel Wilhelm ◽  
Dongwoo Kim

We propose a new nonparametric test of stochastic monotonicity that adapts to the unknown smoothness of the conditional distribution of interest, possesses desirable asymptotic properties, is conceptually easy to implement, and is computationally attractive. In particular, we show that the test asymptotically controls size at a polynomial rate, is nonconservative, and detects certain smooth local alternatives that converge to the null at the fastest possible rate. Our test is based on a data-driven bandwidth, and its critical value accounts for the randomness introduced by this choice. Monte Carlo simulations indicate that the test performs well in finite samples. In particular, the simulations show that the test controls size and, under some alternatives, is significantly more powerful than existing procedures.
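To make the max-over-bandwidths idea concrete, here is a minimal sketch, not the authors' exact procedure: a kernel-weighted sign statistic flags regions where the conditional distribution function increases in $x$, the statistic is maximized over a grid of bandwidths, and a permutation scheme (independence is a boundary case of the null) supplies a critical value that reflects the data-driven bandwidth choice. The Gaussian kernel, the quantile grid, and all function names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def monotonicity_stat(x, y, h, y_grid):
    """Kernel-weighted sign statistic: large positive values indicate that
    F(t | x) increases in x somewhere, i.e., a violation of stochastic
    monotonicity."""
    n = len(x)
    dx = x[:, None] - x[None, :]                    # pairwise regressor gaps
    w = np.sign(dx) * np.exp(-0.5 * (dx / h) ** 2)  # signed Gaussian weights
    stats = []
    for t in y_grid:
        ind = (y <= t).astype(float)                # 1{Y_i <= t}
        stats.append((ind[:, None] * w).sum() / (n * (n - 1) * h))
    return max(stats)

def adaptive_test(x, y, bandwidths, n_perm=200, alpha=0.05):
    """Maximize the statistic over a bandwidth grid; calibrate by permuting y,
    so the critical value reflects the data-driven bandwidth choice."""
    y_grid = np.quantile(y, np.linspace(0.1, 0.9, 9))
    t_obs = max(monotonicity_stat(x, y, h, y_grid) for h in bandwidths)
    t_null = [max(monotonicity_stat(x, rng.permutation(y), h, y_grid)
                  for h in bandwidths) for _ in range(n_perm)]
    crit = np.quantile(t_null, 1 - alpha)
    return t_obs, crit, t_obs > crit
```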

2021 ◽  
Vol 49 (2) ◽  
pp. 262-293
Author(s):  
Vincent Dekker ◽  
Karsten Schweikert

In this article, we compare three data-driven procedures for determining the bunching window in a Monte Carlo simulation of taxable income. Following the standard approach in the empirical bunching literature, we fit a flexible polynomial model to a simulated income distribution, excluding data in a range around a prespecified kink. First, we propose implementing methods for the estimation of structural breaks to determine a bunching regime around the kink. A second procedure is based on Cook's distances and aims to identify outlier observations. Finally, we apply the iterative counterfactual procedure proposed by Bosch, Dekker, and Strohmaier, which evaluates polynomial counterfactual models for all possible bunching windows. While our simulation results show that all three procedures are fairly accurate, the iterative counterfactual procedure is the preferred method for detecting the bunching window when no prior information about the true size of the bunching window is available.
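As a rough illustration of the iterative counterfactual idea, the sketch below fits a polynomial to binned counts while excluding a candidate bunching window around the kink, widens the window step by step, and stops once the bins just outside the window no longer deviate from the counterfactual. The stopping rule, the unit-width integer bins, and the degree-7 polynomial are simplifying assumptions, not the exact criterion of Bosch, Dekker, and Strohmaier.

```python
import numpy as np

def counterfactual_fit(bins, counts, kink, width, degree=7):
    """Fit a flexible polynomial to the binned distribution, excluding
    bins inside the candidate bunching window [kink - width, kink + width]."""
    outside = np.abs(bins - kink) > width
    coefs = np.polyfit(bins[outside], counts[outside], degree)
    return np.polyval(coefs, bins)

def pick_window(bins, counts, kink, max_width, degree=7):
    """Widen the candidate window until the bins just outside it stop
    deviating from the counterfactual; assumes unit-width integer bins."""
    for w in range(1, max_width + 1):
        fit = counterfactual_fit(bins, counts, kink, w, degree)
        resid = counts - fit
        sigma = resid[np.abs(bins - kink) > w].std()
        edge = np.abs(bins - kink) == w + 1          # bins adjacent to window
        if edge.any() and np.all(np.abs(resid[edge]) < 2 * sigma):
            return w, fit
    return max_width, fit
```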


2021 ◽  
Vol 12 (1) ◽  
Author(s):  
Qi Wang ◽  
Longfei Zhang

Directly manipulating the atomic structure to achieve a specific property has long been a goal in materials science. However, hindered by the disordered, non-prototypical glass structure and the complex interplay between structure and property, such inverse design is dauntingly hard for glasses. Here, combining two cutting-edge techniques, graph neural networks and swap Monte Carlo, we develop a data-driven, property-oriented inverse design route that improves the plastic resistance of Cu-Zr metallic glasses in a controllable way. Swap Monte Carlo, as a sampler, effectively explores the glass landscape, while graph neural networks, with high regression accuracy in predicting the plastic resistance, serve as a decider to guide the search in configuration space. Via an unconventional strengthening mechanism, a geometrically ultra-stable yet energetically meta-stable state is unraveled, contrary to the common belief that the higher the energy, the lower the plastic resistance. This demonstrates a vast configuration space that can easily be overlooked by conventional atomistic simulations. The data-driven techniques, structural search methods, and optimization algorithms consolidate into a toolbox, paving a new way toward the design of glassy materials.
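The sampler/decider loop can be sketched in a few lines. Below, predict_property stands in for the trained graph-neural-network surrogate (its interface here is a hypothetical assumption), and a Metropolis-style rule on the predicted plastic resistance decides whether a proposed swap of two unlike atoms is kept:

```python
import numpy as np

rng = np.random.default_rng(42)

def swap_monte_carlo(species, predict_property, n_steps=1000, beta=50.0):
    """Property-oriented swap Monte Carlo. `species` is an array of atom
    types (e.g., 0 = Cu, 1 = Zr); `predict_property` stands in for the
    trained graph-neural-network surrogate (hypothetical interface)."""
    current = predict_property(species)
    for _ in range(n_steps):
        i, j = rng.choice(len(species), 2, replace=False)
        if species[i] == species[j]:
            continue                    # swapping like atoms changes nothing
        species[i], species[j] = species[j], species[i]
        proposed = predict_property(species)
        # Metropolis rule on the *predicted* plastic resistance: always keep
        # improvements, occasionally keep regressions to escape local optima.
        if proposed >= current or rng.random() < np.exp(beta * (proposed - current)):
            current = proposed
        else:
            species[i], species[j] = species[j], species[i]   # revert swap
    return species, current
```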


2021 ◽  
Vol 12 (4) ◽  
pp. 1223-1271 ◽  
Author(s):  
Victor Aguirregabiria ◽  
Mathieu Marcoux

Imposing equilibrium restrictions provides substantial gains in the estimation of dynamic discrete games. Estimation algorithms imposing these restrictions have different merits and limitations. Algorithms that guarantee local convergence typically require the approximation of high‐dimensional Jacobians. Alternatively, the Nested Pseudo‐Likelihood (NPL) algorithm is a fixed‐point iterative procedure, which avoids the computation of these matrices, but—in games—may fail to converge to the consistent NPL estimator. In order to better capture the effect of iterating the NPL algorithm in finite samples, we study the asymptotic properties of this algorithm for data generating processes that are in a neighborhood of the NPL fixed‐point stability threshold. We find that there are always samples for which the algorithm fails to converge, and this introduces a selection bias. We also propose a spectral algorithm to compute the NPL estimator. This algorithm satisfies local convergence and avoids the approximation of Jacobian matrices. We present simulation evidence and an empirical application illustrating our theoretical results and the good properties of the spectral algorithm.
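For intuition, a generic spectral residual iteration of the kind the paper advocates can be written as follows. Here Psi would map a vector of conditional choice probabilities to the best-response probabilities implied by the pseudo-likelihood estimate of the structural parameters. This is a Barzilai-Borwein-style step applied to the fixed-point residual Psi(p) - p; it never forms or approximates the Jacobian of Psi. The exact step-size rule in the paper may differ, so treat this as a sketch:

```python
import numpy as np

def spectral_fixed_point(Psi, p0, tol=1e-10, max_iter=500):
    """Spectral (Barzilai-Borwein-type) iteration for a fixed point of Psi.
    Derivative-free: it can converge where the plain iteration p <- Psi(p)
    diverges, without ever computing a Jacobian."""
    p = np.asarray(p0, dtype=float)
    r = Psi(p) - p                      # residual of the fixed-point map
    alpha = 1.0
    for _ in range(max_iter):
        p_new = p + alpha * r
        r_new = Psi(p_new) - p_new
        if np.linalg.norm(r_new) < tol:
            return p_new
        # spectral step: ratio of successive differences, no Jacobian needed
        s, d = p_new - p, r_new - r
        alpha = float(s @ s) / float(s @ d) if abs(float(s @ d)) > 1e-15 else 1.0
        p, r = p_new, r_new
    return p
```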


2016 ◽  
Vol 5 (4) ◽  
pp. 9 ◽  
Author(s):  
Hérica P. A. Carneiro ◽  
Dione M. Valença

In some survival studies, part of the population may no longer be subject to the event of interest. The so-called cure rate models take this fact into account. They have been extensively studied by several authors, who have proposed extensions and applications to real lifetime data. Classic large-sample tests, especially the likelihood ratio test, are usually considered in these applications. Recently, a new test, called the gradient test, has been proposed. The gradient statistic shares the same asymptotic properties as the classic likelihood ratio statistic and does not require knowledge of the information matrix, which can be an advantage in survival models. Some simulation studies have explored the behavior of the gradient test in finite samples and compared it with the classic tests in different models. However, little is known about the properties of these large-sample tests in finite samples for cure rate models. In this work, we performed a simulation study based on the promotion time model with a Weibull distribution to assess the finite-sample performance of the likelihood ratio and gradient tests. An application is presented to illustrate the results.
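The gradient statistic itself is simple to compute: $T = U(\tilde\theta)^\top(\hat\theta - \tilde\theta)$, where $U$ is the score evaluated at the restricted estimate $\tilde\theta$ and $\hat\theta$ is the unrestricted estimate; under the null it is asymptotically chi-squared with as many degrees of freedom as restrictions. The sketch below computes the score by finite differences and, as a toy stand-in for the promotion time model, tests H0: shape = 1 in a plain Weibull likelihood; the cure-fraction machinery is omitted.

```python
import numpy as np
from scipy import optimize

def gradient_statistic(loglik, theta_hat, theta_tilde, eps=1e-6):
    """Gradient statistic T = U(theta_tilde)' (theta_hat - theta_tilde).
    Unlike the Wald and score tests, no information matrix is inverted;
    the score is computed here by central finite differences."""
    theta_tilde = np.asarray(theta_tilde, float)
    score = np.array([
        (loglik(theta_tilde + eps * e) - loglik(theta_tilde - eps * e)) / (2 * eps)
        for e in np.eye(len(theta_tilde))
    ])
    return float(score @ (np.asarray(theta_hat) - theta_tilde))

# Toy illustration: test H0: Weibull shape = 1 (exponential) on simulated data.
y = np.random.default_rng(1).weibull(1.5, size=200)

def ll(theta):          # Weibull log-likelihood, theta = (shape k, scale lam)
    k, lam = theta
    if k <= 0 or lam <= 0:
        return -np.inf
    return np.sum(np.log(k / lam) + (k - 1) * np.log(y / lam) - (y / lam) ** k)

theta_tilde = np.array([1.0, y.mean()])                 # restricted MLE under H0
theta_hat = optimize.minimize(lambda t: -ll(t), [1.0, 1.0],
                              method="Nelder-Mead").x   # unrestricted MLE
T = gradient_statistic(ll, theta_hat, theta_tilde)      # ~ chi2(1) under H0
```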


Biometrika ◽  
2019 ◽  
Vol 107 (1) ◽  
pp. 107-122 ◽  
Author(s):  
Xuan Wang ◽  
Layla Parast ◽  
Lu Tian ◽  
Tianxi Cai

In randomized clinical trials, the primary outcome, $Y$, often requires long-term follow-up and/or is costly to measure. For such settings, it is desirable to use a surrogate marker, $S$, to infer the treatment effect on $Y$, denoted $\Delta$. Identifying such an $S$ and quantifying the proportion of the treatment effect on $Y$ explained by the effect on $S$ are thus of great importance. Most existing methods for quantifying the proportion of treatment effect are model based and may yield biased estimates under model misspecification. Recently proposed nonparametric methods require strong assumptions to ensure that the proportion of treatment effect lies in the range $[0,1]$. Additionally, optimal use of $S$ to approximate $\Delta$ is especially important when $S$ relates to $Y$ nonlinearly. In this paper we identify an optimal transformation of $S$, $g_{\mathrm{opt}}(\cdot)$, such that the proportion of treatment effect explained can be inferred based on $g_{\mathrm{opt}}(S)$. In addition, we provide two novel model-free definitions of the proportion of treatment effect explained and simple conditions ensuring that it lies within $[0,1]$. We provide nonparametric estimation procedures and establish asymptotic properties of the proposed estimators. Simulation studies demonstrate that the proposed methods perform well in finite samples. We illustrate the proposed procedures using a randomized study of HIV patients.
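A stripped-down version of the residual-effect construction helps fix ideas: estimate $E(Y \mid S=s)$ on the control arm by kernel smoothing, evaluate it at the treated arm's surrogate values, and take the proportion explained as one minus the ratio of the residual effect to the overall effect. The Gaussian kernel and rule-of-thumb bandwidth are assumptions; the paper's optimal transformation $g_{\mathrm{opt}}$ and the conditions guaranteeing a value in $[0,1]$ are not implemented here.

```python
import numpy as np

def kernel_mean(s_grid, s_train, y_train, h):
    """Nadaraya-Watson estimate of E[Y | S = s], fitted on one arm."""
    w = np.exp(-0.5 * ((s_grid[:, None] - s_train[None, :]) / h) ** 2)
    return (w @ y_train) / w.sum(axis=1)

def pte(y1, s1, y0, s0, h=None):
    """Simplified nonparametric proportion of treatment effect explained:
    residual effect = treated-arm mean of Y minus the control-arm regression
    E[Y | S] evaluated at the treated arm's surrogate values."""
    h = h or 1.06 * s0.std() * len(s0) ** (-1 / 5)   # rule-of-thumb bandwidth
    delta = y1.mean() - y0.mean()                    # overall treatment effect
    m_at_s1 = kernel_mean(s1, s0, y0, h)             # E[Y | S=s, control] at treated S
    delta_res = y1.mean() - m_at_s1.mean()           # effect not captured by S
    return 1 - delta_res / delta
```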

