XENON100 exclusion limit without considering $L_{\rm eff}$ as a nuisance parameter

2012 ◽  
Vol 86 (1) ◽  
Author(s):  
Jonathan H. Davis ◽  
Céline Bœhm ◽  
Niels Oppermann ◽  
Torsten Ensslin ◽  
Thomas Lacroix

2020 ◽
Vol 499 (3) ◽  
pp. 4054-4067
Author(s):  
Steven Cunnington ◽  
Stefano Camera ◽  
Alkistis Pourtsidou

ABSTRACT Potential evidence for primordial non-Gaussianity (PNG) is expected to lie in the largest scales mapped by cosmological surveys. Forthcoming 21 cm intensity mapping experiments will aim to probe these scales by surveying neutral hydrogen (H i) within galaxies. However, foreground signals dominate the 21 cm emission, so foreground cleaning is required to recover the cosmological signal. This cleaning damps the H i power spectrum on the largest scales, especially along the line of sight. Whilst there is agreement that this contamination is potentially problematic for probing PNG, it has yet to be fully explored and quantified. In this work, we carry out the first forecasts on fNL that incorporate simulated foreground maps removed using techniques employed on real data. Using a Markov Chain Monte Carlo analysis of an SKA1-MID-like survey, we demonstrate that foreground-cleaned data recover a biased value [$f_{\rm NL}= -102.1_{-7.96}^{+8.39}$ (68 per cent CL)] for our fNL = 0 fiducial input. Introducing a model with fixed parameters for the foreground contamination allows us to recover unbiased results ($f_{\rm NL}= -2.94_{-11.9}^{+11.4}$). However, it is not clear that we will have sufficient understanding of the foreground contamination to allow for such rigid models. Treating the main parameter $k_\parallel ^\text{FG}$ in our foreground model as a nuisance parameter and marginalizing over it still recovers unbiased results, but at the expense of larger errors ($f_{\rm NL}= 0.75^{+40.2}_{-44.5}$), which can only be reduced by imposing the Planck 2018 prior. Our results show that significant progress on understanding and controlling foreground removal effects is necessary for studying PNG with H i intensity mapping.
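To make the quantities in the abstract concrete, here is a minimal Python sketch (not the paper's pipeline) combining the standard scale-dependent halo bias induced by local-type PNG with a toy line-of-sight damping factor controlled by the foreground nuisance parameter $k_\parallel^\text{FG}$. The transfer function, growth factor, matter power spectrum, and H i bias values below are all illustrative placeholders.

import numpy as np

c = 299792.458        # speed of light [km/s]
H0 = 67.7             # Hubble constant [km/s/Mpc], assumed fiducial value
Om = 0.31             # matter density parameter, assumed fiducial value
delta_c = 1.686       # critical collapse overdensity

def delta_b_png(k, fNL, b1, Tk, Dz):
    # Standard scale-dependent bias from local-type PNG; k in Mpc^-1,
    # Tk = transfer function (-> 1 on large scales), Dz = growth factor.
    return 3.0 * fNL * (b1 - 1.0) * delta_c * Om * H0**2 / (c**2 * k**2 * Tk * Dz)

def fg_damping(kpar, kpar_FG):
    # Toy foreground-removal damping: line-of-sight power is suppressed
    # for kpar well below kpar_FG. Assumed functional form, for illustration only.
    return 1.0 - np.exp(-(kpar / kpar_FG) ** 2)

k = np.logspace(-3, -1, 50)           # Mpc^-1, the large scales of interest
b_HI = 1.5                            # assumed H i bias
P_m = 2.0e4 * (k / 0.01) ** -1.5      # placeholder matter power spectrum
for fNL in (0.0, -100.0):
    b_eff = b_HI + delta_b_png(k, fNL, b_HI, Tk=1.0, Dz=0.8)
    P_HI = b_eff ** 2 * P_m * fg_damping(k, kpar_FG=0.005)
    print(f"fNL = {fNL:6.1f}:  P_HI(k_min) = {P_HI[0]:.3e}")

The 1/k^2 growth of the PNG term and the foreground suppression both peak on the same largest scales, which is why the cleaning biases fNL unless the damping is modelled or marginalized over.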


2019 ◽  
Vol 2019 ◽  
pp. 1-8
Author(s):  
Jiaqi Song ◽  
Haihong Tao

Noncircular signals are widely used in radar, sonar, and wireless communication array systems, as they can offer more accurate estimates and allow more sources to be detected. In this paper, noncircular signals are employed to improve source localization accuracy and identifiability. Firstly, an extended real-valued covariance matrix is constructed to transform complex-valued computation into real-valued computation. Based on the properties of noncircular signals and a symmetric uniform linear array (SULA) consisting of dual-polarization sensors, the array steering vectors can be separated into the source position parameters and the nuisance parameter. The rank reduction (RARE) estimators are therefore adopted to estimate the source localization parameters in sequence. By utilizing the polarization information of the sources and real-valued computation, the maximum number of resolvable sources, the estimation accuracy, and the resolution can all be improved. Numerical simulations demonstrate that the proposed method outperforms existing methods in both resolution and estimation accuracy.
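As a rough illustration of the rank-reduction idea (a generic RARE scan, not the paper's specific real-valued SULA construction), suppose the steering vector factors as a(theta) = V(theta) g, where g collects nuisance parameters such as unknown subarray gains or polarization states. The source positions can then be found without ever estimating g, because at a true direction the matrix V^H Pi_n V drops rank:

import numpy as np

def rare_spectrum(R, V_of_theta, thetas, n_sources):
    # R: sample covariance; V_of_theta(theta) -> (M, p) known basis matrix.
    # Returns the RARE pseudo-spectrum over the scan grid `thetas`.
    _, eigvecs = np.linalg.eigh(R)                   # ascending eigenvalue order
    Un = eigvecs[:, : R.shape[0] - n_sources]        # noise subspace
    Pi_n = Un @ Un.conj().T                          # noise-subspace projector
    spec = np.empty(len(thetas))
    for i, th in enumerate(thetas):
        Q = V_of_theta(th).conj().T @ Pi_n @ V_of_theta(th)
        spec[i] = 1.0 / np.linalg.eigvalsh(Q)[0]     # peak where Q loses rank
    return spec

# Toy demo: 8-element half-wavelength ULA split into two subarrays with
# unknown relative gains (the nuisance g). All values are illustrative.
rng = np.random.default_rng(1)
M, pos = 8, np.arange(8)

def V_of_theta(th):
    v = np.exp(1j * np.pi * pos * np.sin(th))
    V = np.zeros((M, 2), dtype=complex)
    V[0::2, 0] = v[0::2]                             # subarray 1 (even sensors)
    V[1::2, 1] = v[1::2]                             # subarray 2 (odd sensors)
    return V

g_true = np.array([1.0, 0.7 * np.exp(1j * 0.5)])    # unknown gains (never estimated)
angles = np.deg2rad([-20.0, 25.0])
N = 500
S = (rng.standard_normal((2, N)) + 1j * rng.standard_normal((2, N))) / np.sqrt(2)
A = np.column_stack([V_of_theta(th) @ g_true for th in angles])
X = A @ S + 0.1 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))
R = X @ X.conj().T / N

thetas = np.deg2rad(np.linspace(-60, 60, 241))
spec = rare_spectrum(R, V_of_theta, thetas, n_sources=2)
print(np.rad2deg(thetas[np.argsort(spec)[-2:]]))    # crude peak picking, near -20 and 25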


2015 ◽  
Vol 448 (2) ◽  
pp. 1389-1401 ◽  
Author(s):  
L. Clerkin ◽  
D. Kirk ◽  
O. Lahav ◽  
F. B. Abdalla ◽  
E. Gaztañaga

2019 ◽  
Vol 36 (2) ◽  
pp. 347-366 ◽  
Author(s):  
José Luis Montiel Olea

This article studies a classical problem in statistical decision theory: a hypothesis test of a sharp null in the presence of a nuisance parameter. The main contribution of this article is a characterization of two finite-sample properties often deemed reasonable in this environment: admissibility and similarity. Admissibility means that a test cannot be improved uniformly over the parameter space. Similarity requires the null rejection probability to be unaffected by the nuisance parameter.

The characterization result has two parts. The first part—established by Chernozhukov, Hansen, and Jansson (2009, Econometric Theory 25, 806–818)—states that maximizing weighted average power (WAP) subject to a similarity constraint suffices to generate admissible, similar tests. The second part—hereby established—states that constrained WAP maximization is (essentially) a necessary condition for a test to be admissible and similar. The characterization result shows that choosing an admissible, similar test is tantamount to selecting a particular weight function to report weighted average power. This result applies to full-vector inference with a nuisance parameter, not to subvector inference.

The article also revisits the theory of testing in the instrumental variables model. Specifically—and in light of the relevance of the weighted average power criterion in the main theoretical result—the article suggests a weight function for the structural parameters of the homoskedastic instrumental variables model, based on the priors proposed by Chamberlain (2007). The corresponding test is, by construction, admissible and similar. In addition, the test is shown to have finite- and large-sample properties comparable to those of the conditional likelihood ratio test.
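In standard notation (assumed here, not quoted from the article), the constrained program at the heart of the characterization can be written as

$$
\max_{\phi}\ \int \mathbb{E}_{\theta,\eta}\!\left[\phi(X)\right]\, \mathrm{d}W(\theta,\eta)
\quad \text{subject to} \quad
\mathbb{E}_{\theta_0,\eta}\!\left[\phi(X)\right] = \alpha \ \ \text{for all } \eta,
$$

where $\phi$ is the test's rejection rule, $W$ is a weight function over alternatives $(\theta,\eta)$, and the constraint enforces similarity at level $\alpha$ under the sharp null $\theta = \theta_0$ for every value of the nuisance parameter $\eta$. The characterization says that, up to essentially negligible exceptions, the admissible similar tests are exactly the solutions of this program as $W$ varies.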


Author(s):  
Dylan J. Foster ◽  
Vasilis Syrgkanis

We provide excess risk guarantees for statistical learning in a setting where the population risk with respect to which we evaluate a target parameter depends on an unknown parameter that must be estimated from data (a "nuisance parameter"). We analyze a two-stage sample-splitting meta-algorithm that takes as input two arbitrary estimation algorithms: one for the target parameter and one for the nuisance parameter. We show that if the population risk satisfies a condition called Neyman orthogonality, the impact of the nuisance estimation error on the excess risk bound achieved by the meta-algorithm is of second order. Our theorem is agnostic to the particular algorithms used for the target and nuisance and only makes an assumption on their individual performance. This enables the use of a plethora of existing results from the statistical learning and machine learning literature to give new guarantees for learning with a nuisance component. Moreover, by focusing on excess risk rather than parameter estimation, we can give guarantees under weaker assumptions than in previous works and accommodate the case where the target parameter belongs to a complex nonparametric class. We characterize conditions on the metric entropy under which oracle rates (rates of the same order as if the nuisance parameter were known) are achieved. We also analyze the rates achieved by specific estimation algorithms such as variance-penalized empirical risk minimization, neural network estimation, and sparse high-dimensional linear model estimation. We highlight the applicability of our results in four settings of central importance in the literature: 1) heterogeneous treatment effect estimation, 2) offline policy optimization, 3) domain adaptation, and 4) learning with missing data.
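A minimal sketch of the two-stage sample-splitting scheme, specialized (as an assumed example) to the first setting listed above: a constant treatment effect in a partially linear model. The learners and the data-generating process are illustrative choices; the second-stage residual-on-residual loss is Neyman-orthogonal, which is why first-order errors in the nuisance fits do not propagate into the target estimate.

import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n, theta_true = 4000, 2.0
X = rng.normal(size=(n, 5))
T = X[:, 0] + rng.normal(size=n)                  # treatment, confounded by X
Y = theta_true * T + np.sin(X[:, 0]) + rng.normal(size=n)

half = n // 2
# Stage 1: estimate nuisances m(x) = E[Y|X] and e(x) = E[T|X] on the first fold.
# Any estimation algorithm can be plugged in here; forests are one choice.
m_hat = RandomForestRegressor(n_estimators=200).fit(X[:half], Y[:half])
e_hat = RandomForestRegressor(n_estimators=200).fit(X[:half], T[:half])

# Stage 2: fit the target on the held-out fold with the orthogonal loss,
# i.e. regress outcome residuals on treatment residuals.
Y_res = Y[half:] - m_hat.predict(X[half:])
T_res = T[half:] - e_hat.predict(X[half:])
theta_hat = np.sum(T_res * Y_res) / np.sum(T_res ** 2)
print(f"theta_hat = {theta_hat:.3f} (truth {theta_true})")

Swapping the forests for neural networks or sparse linear models changes only Stage 1; the second-order dependence on their errors is what the meta-theorem guarantees.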

