Testing KiDS cross-correlation redshifts with simulations

2020 ◽  
Vol 642 ◽  
pp. A200 ◽  
Author(s):  
J. L. van den Busch ◽  
H. Hildebrandt ◽  
A. H. Wright ◽  
C. B. Morrison ◽  
C. Blake ◽  
...  

Measuring cosmic shear in wide-field imaging surveys requires accurate knowledge of the redshift distribution of all sources. The clustering-redshift technique exploits the angular cross-correlation between a target galaxy sample with unknown redshifts and a reference sample with known redshifts. It represents an attractive alternative to colour-based methods of redshift calibration. Here we test the performance of such clustering-redshift measurements using mock catalogues that resemble the Kilo-Degree Survey (KiDS). These mocks are created from the MICE simulation and closely mimic the properties of the KiDS source sample and the overlapping spectroscopic reference samples. We quantify the performance of the clustering redshifts by comparing the cross-correlation results with the true redshift distributions in each of the five KiDS photometric redshift bins. Such a comparison to an informative model is necessary because of the incompleteness of the reference samples at high redshifts. Under these conditions the clustering mean redshifts are unbiased at |Δz| < 0.006. The redshift evolution of the galaxy bias of the reference and target samples is one of the most important systematic errors when estimating clustering redshifts. It can be reliably mitigated at this level of precision using auto-correlation measurements and self-consistency relations, and will not become a dominant source of systematic error until the arrival of Stage-IV cosmic shear surveys. Using redshift distributions from a direct colour-based estimate instead of the true redshift distributions as a model for comparison with the clustering redshifts increases the biases in the mean redshift to as much as |Δz| ∼ 0.04. This indicates that the interpretation of clustering redshifts in real-world applications will require more sophisticated (parameterised) models of the redshift distribution in the future. If such better models become available, the clustering-redshift technique promises to be a highly complementary alternative to other methods of redshift calibration.
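The recovery described above follows the standard clustering-redshift logic: the reference sample is sliced into narrow redshift bins, and the unknown sample's redshift distribution is taken to be proportional to the cross-correlation amplitude in each slice, with the reference auto-correlation used to mitigate the reference bias evolution. A minimal Python sketch of that estimator (the function name and the square-root bias correction are illustrative assumptions, not the exact KiDS pipeline):

```python
import numpy as np

def clustering_nz(w_ur, w_rr, dz):
    """Recover an (unnormalised) redshift distribution of the unknown sample.

    w_ur : cross-correlation amplitude (unknown x reference) per reference slice
    w_rr : reference auto-correlation amplitude per slice, used to mitigate
           the redshift evolution of the reference galaxy bias
    dz   : width of the reference redshift slices
    """
    nz = w_ur / np.sqrt(np.clip(w_rr, 1e-12, None))  # simple bias mitigation
    nz = np.clip(nz, 0.0, None)                      # clip unphysical negatives
    return nz / np.sum(nz * dz)                      # normalise to unit integral
```

The target-sample bias evolution, which the abstract flags as a leading systematic, is handled in the paper via self-consistency relations and is not captured by this sketch.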

Author(s):  
Ellie Kitanidis ◽  
Martin White

Abstract Cross-correlations between the lensing of the cosmic microwave background (CMB) and other tracers of large-scale structure provide a unique way to reconstruct the growth of dark matter, break degeneracies between cosmology and galaxy physics, and test theories of modified gravity. We detect a cross-correlation between DESI-like luminous red galaxies (LRGs) selected from DECaLS imaging and CMB lensing maps reconstructed with the Planck satellite at a significance of S/N = 27.2 over scales ℓmin = 30, ℓmax = 1000. To correct for magnification bias, we determine the slope of the LRG cumulative magnitude function at the faint limit as s = 0.999 ± 0.015, and find corresponding corrections on the order of a few percent for $C^{\kappa g}_{\ell }, C^{gg}_{\ell }$ across the scales of interest. We fit the large-scale galaxy bias at the effective redshift of the cross-correlation zeff ≈ 0.68 using two different bias evolution agnostic models: a HaloFit times linear bias model where the bias evolution is folded into the clustering-based estimation of the redshift kernel, and a Lagrangian perturbation theory model of the clustering evaluated at zeff. We also determine the error on the bias from uncertainty in the redshift distribution; within this error, the two methods show excellent agreement with each other and with DESI survey expectations.
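The magnification-bias correction quoted above enters through the standard weak-lensing convention in which the observed galaxy overdensity picks up a lensing term controlled by the faint-end slope s of the cumulative magnitude function (a generic reminder of the convention, not the paper's exact expressions):

$$
\delta_g^{\rm obs} = \delta_g + (5s - 2)\,\kappa_{\rm gal},
$$

where $\kappa_{\rm gal}$ is the convergence due to matter in front of the LRGs. The induced corrections therefore enter $C^{\kappa g}_{\ell}$ linearly and $C^{gg}_{\ell}$ both linearly and quadratically in $(5s-2)$; with the measured $s = 0.999 \pm 0.015$, the prefactor is $5s - 2 \approx 3.0$, producing the few-per-cent corrections quoted above.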


2020 ◽  
Vol 500 (2) ◽  
pp. 2250-2263
Author(s):  
Omar Darwish ◽  
Mathew S Madhavacheril ◽  
Blake D Sherwin ◽  
Simone Aiola ◽  
Nicholas Battaglia ◽  
...  

ABSTRACT We construct cosmic microwave background lensing mass maps using data from the 2014 and 2015 seasons of observations with the Atacama Cosmology Telescope (ACT). These maps cover 2100 square degrees of sky and overlap with a wide variety of optical surveys. The maps are signal dominated on large scales and have fidelity such that their correlation with the cosmic infrared background is clearly visible by eye. We also create lensing maps with thermal Sunyaev−Zel’dovich contamination removed using a novel cleaning procedure that only slightly degrades the lensing signal-to-noise ratio. The cross-spectrum between the cleaned lensing map and the BOSS CMASS galaxy sample is detected at 10σ significance, with an amplitude of A = 1.02 ± 0.10 relative to the Planck best-fitting Lambda cold dark matter cosmological model with fiducial linear galaxy bias. Our measurement lays the foundation for lensing cross-correlation science with current ACT data and beyond.
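The quoted 10σ detection is consistent with the amplitude fit itself, since $A/\sigma_A = 1.02/0.10 \approx 10$. For illustration, a maximum-likelihood single-amplitude fit with a diagonal covariance takes a simple closed form (the real analysis uses the full covariance; the function name is hypothetical):

```python
import numpy as np

def fit_amplitude(cl_data, cl_theory, sigma):
    """Fit a single amplitude A such that cl_data ~ A * cl_theory,
    assuming independent Gaussian errors sigma per bandpower."""
    w = cl_theory / sigma**2
    var_A = 1.0 / np.sum(cl_theory * w)   # 1 / sum(t^2 / sigma^2)
    A = var_A * np.sum(cl_data * w)       # sum(d*t / sigma^2) / sum(t^2 / sigma^2)
    return A, np.sqrt(var_A)
```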


Author(s):  
Maria Cristina Fortuna ◽  
Henk Hoekstra ◽  
Benjamin Joachimi ◽  
Harry Johnston ◽  
Nora Elisa Chisari ◽  
...  

Abstract Intrinsic alignments (IAs) of galaxies are an important contaminant for cosmic shear studies, but the modelling is complicated by the dependence of the signal on the source galaxy sample. In this paper, we use the halo model formalism to capture this diversity and examine its implications for Stage-III and Stage-IV cosmic shear surveys. We account for the different IA signatures at large and small scales, as well as for the different contributions from central/satellite and red/blue galaxies, and we use realistic mocks to account for the characteristics of the galaxy populations as a function of redshift. We inform our model using the most recent observational findings: we include a luminosity dependence at both large and small scales and a radial dependence of the signal within the halo. We predict the impact of the total IA signal on the lensing angular power spectra, including the current uncertainties from the IA best fits to illustrate the range of possible impact on the lensing signal; the lack of constraints for fainter galaxies is the main source of uncertainty in our predictions of the IA signal. We investigate how well effective models with limited degrees of freedom can account for the complexity of the IA signal. Although these lead to negligible biases for Stage-III surveys, we find that, for Stage-IV surveys, it is essential to at least include an additional parameter to capture the redshift dependence.
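The abstract does not specify the form of the effective models; for orientation, a commonly used large-scale (NLA-type) parameterisation with one extra redshift-scaling parameter $\eta$, of the kind the last sentence alludes to, reads

$$
P_{\delta \rm I}(k,z) = -A_{\rm IA}\left(\frac{1+z}{1+z_0}\right)^{\eta}\frac{C_1 \rho_{\rm cr}\Omega_{\rm m}}{D(z)}\,P_{\delta}(k,z),
$$

with $D(z)$ the linear growth factor and $C_1\rho_{\rm cr} \approx 0.0134$ by convention; whether this particular form matches the paper's effective models is an assumption here.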


2021 ◽  
Vol 22 (1) ◽  
Author(s):  
Wendell Jones ◽  
Binsheng Gong ◽  
Natalia Novoradovskaya ◽  
Dan Li ◽  
Rebecca Kusko ◽  
...  

Abstract
Background: Oncopanel genomic testing, which identifies important somatic variants, is increasingly common in medical practice and especially in clinical trials. Currently, there is a paucity of reliable genomic reference samples with a suitably large number of pre-identified variants for properly assessing oncopanel assay analytical quality and performance. The FDA-led Sequencing and Quality Control Phase 2 (SEQC2) consortium analyzes ten diverse cancer cell lines individually and as a pool, termed Sample A, to develop a reference sample with suitably large numbers of coding positions with known (variant) positives and negatives for properly evaluating oncopanel analytical performance.
Results: In reference Sample A, we identify more than 40,000 variants down to 1% allele frequency, more than 25,000 of which have allele frequencies below 20%, including 1653 variants in COSMIC-related genes. This is 5–100× more than existing commercially available samples. We also identify an unprecedented number of negative positions in coding regions, allowing statistical rigor in assessing limit of detection, sensitivity, and precision. Over 300 loci are randomly selected and independently verified via droplet digital PCR with 100% concordance. The Agilent normal reference Sample B can be admixed with Sample A to create new samples with a similar number of known variants at much lower allele frequencies than exist in Sample A natively, including known variants with allele frequencies of 0.02%, a range suitable for assessing liquid biopsy panels.
Conclusion: These new reference samples and their admixtures provide superior capability for oncopanel quality control, analytical accuracy assessment, and validation for small to large oncopanels and liquid biopsy assays.
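The admixture strategy relies on simple linear dilution of allele frequencies: if a variant present in Sample A at frequency f is absent from Sample B, mixing the two lowers its expected frequency in proportion to Sample A's share of the mixture. A minimal sketch under that assumption (function name hypothetical; equal ploidy and linear mixing of DNA are assumed):

```python
def diluted_allele_frequency(af_sample_a, fraction_a):
    """Expected allele frequency after admixing Sample A into normal
    reference Sample B, assuming the variant is absent from Sample B
    and the two DNA inputs mix linearly."""
    return af_sample_a * fraction_a

# Example: a 1% variant in Sample A mixed at a 2% fraction lands at
# roughly 0.02%, the regime quoted for liquid biopsy panels.
print(diluted_allele_frequency(0.01, 0.02))  # -> 0.0002
```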


2019 ◽  
Vol 489 (2) ◽  
pp. 2887-2906 ◽  
Author(s):  
S Lee ◽  
E M Huff ◽  
A J Ross ◽  
A Choi ◽  
C Hirata ◽  
...  

ABSTRACT We present a sample of galaxies with the Dark Energy Survey (DES) photometry that replicates the properties of the BOSS CMASS sample. The CMASS galaxy sample has been well characterized by the Sloan Digital Sky Survey (SDSS) collaboration and was used to obtain the most powerful redshift-space galaxy clustering measurements to date. A joint analysis of redshift-space distortions (such as those probed by CMASS from SDSS) and a galaxy–galaxy lensing measurement for an equivalent sample from DES can provide powerful cosmological constraints. Unfortunately, the DES and SDSS-BOSS footprints have only minimal overlap, primarily on the celestial equator near the SDSS Stripe 82 region. Using this overlap, we build a robust Bayesian model to select CMASS-like galaxies in the remainder of the DES footprint. The newly defined DES-CMASS (DMASS) sample consists of 117 293 effective galaxies covering $1244\,\deg ^2$. Through various validation tests, we show that the DMASS sample selected by this model matches well with the BOSS CMASS sample, specifically in the South Galactic cap (SGC) region that includes Stripe 82. Combining measurements of the angular correlation function and the clustering-z distribution of DMASS, we constrain the difference in mean galaxy bias and mean redshift between the BOSS CMASS and DMASS samples to be $\Delta b = 0.010^{+0.045}_{-0.052}$ and $\Delta z = \left(3.46^{+5.48}_{-5.55} \right) \times 10^{-3}$ for the SGC portion of CMASS, and $\Delta b = 0.044^{+0.044}_{-0.043}$ and $\Delta z= (3.51^{+4.93}_{-5.91}) \times 10^{-3}$ for the full CMASS sample. These values indicate that the mean bias of galaxies and mean redshift in the DMASS sample are consistent with both CMASS samples within 1σ.
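The Bayesian selection described above assigns each DES galaxy a probability of being CMASS-like from its photometry; those probabilities then act as weights, which is why the catalogue is quoted as an "effective" number of galaxies. A minimal illustration of the Bayes-rule step (the Gaussian-mixture densities and all names here are assumptions for illustration, not the paper's actual density model):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def train_density(features, n_components=10, seed=0):
    # Fit a colour-magnitude density, e.g. to spectroscopically matched
    # CMASS galaxies (or to the remaining DES galaxies) in the overlap region.
    return GaussianMixture(n_components=n_components, random_state=seed).fit(features)

def membership_probability(features, gmm_cmass, gmm_other, prior_cmass):
    """P(CMASS-like | photometry) via Bayes' rule."""
    log_like_cmass = gmm_cmass.score_samples(features)  # log P(x | CMASS)
    log_like_other = gmm_other.score_samples(features)  # log P(x | other)
    num = np.exp(log_like_cmass) * prior_cmass
    return num / (num + np.exp(log_like_other) * (1.0 - prior_cmass))
```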


2019 ◽  
Vol 492 (2) ◽  
pp. 2872-2896 ◽  
Author(s):  
Benjamin D Wibking ◽  
David H Weinberg ◽  
Andrés N Salcedo ◽  
Hao-Yi Wu ◽  
Sukhdeep Singh ◽  
...  

ABSTRACT We describe our non-linear emulation (i.e. interpolation) framework that combines the halo occupation distribution (HOD) galaxy bias model with N-body simulations of non-linear structure formation, designed to accurately predict the projected clustering and galaxy–galaxy lensing signals from luminous red galaxies in the redshift range 0.16 < z < 0.36 on comoving scales 0.6 < rp < 30 $h^{-1} \, \text{Mpc}$. The interpolation accuracy is ≲ 1–2 per cent across the entire physically plausible range of parameters for all scales considered. We correctly recover the true value of the cosmological parameter S8 = (σ8/0.8228)(Ωm/0.3107)0.6 from mock measurements produced via subhalo abundance matching (SHAM)-based light-cones designed to approximately match the properties of the SDSS LOWZ galaxy sample. Applying our model to Baryon Oscillation Spectroscopic Survey (BOSS) Data Release 14 (DR14) LOWZ galaxy clustering and galaxy-shear cross-correlation measurements made with Sloan Digital Sky Survey (SDSS) Data Release 8 (DR8) imaging, we perform a prototype cosmological analysis marginalizing over wCDM cosmological parameters and galaxy HOD parameters. We obtain a 4.4 per cent measurement of S8 = 0.847 ± 0.037, in 3.5σ tension with the Planck cosmological results of 1.00 ± 0.02. We discuss the possibility of underestimated systematic uncertainties or astrophysical effects that could explain this discrepancy.
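The headline numbers are internally consistent: the fractional precision is $0.037/0.847 \approx 4.4$ per cent, and combining the two uncertainties in quadrature (a simplifying assumption; the published tension presumably uses the full posteriors) gives

$$
\frac{|1.00 - 0.847|}{\sqrt{0.037^2 + 0.02^2}} \approx \frac{0.153}{0.042} \approx 3.6,
$$

in line with the quoted 3.5σ tension.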


2019 ◽  
Vol 11 (23) ◽  
pp. 2818
Author(s):  
Yingying Mei ◽  
Jingxiong Zhang ◽  
Wangle Zhang ◽  
Fengzhu Liu

As in conventional error matrix-based accuracy assessments, collocated reference sample data are often used to characterize per-pixel (local) accuracies in land-cover change maps, so that local accuracy predictions can be made with direct methods: correctness in "from-to" change categorization at sample pixels is assessed and modeled directly. To circumvent the issue of reference sample data being non-collocated, as is often the case for sample data collected independently for mono-temporal reference land-cover labeling or added later to reflect landscape changes, the PXCOV (Product rule with adjustment for cross-COVariance between single-date classification correctness) method was developed previously. However, the use of PXCOV becomes complicated when few or no collocated sample data are available, since cross-validation cokriging, a procedure involving non-trivial geostatistical modeling, must then be employed to estimate the cross-correlation. To overcome PXCOV's lack of practicality when using mostly non-collocated sample data, this paper presents a simple alternative built on a stratified approximation of cross-correlation that combines minimum and multiplication operators. Specifically, in this composite method (named Fuzzy+Product), the minimum operator (resembling the fuzzy-set "min" operator, hence Fuzzy) is applied over the no-change stratum, where maximum correlation is assumed, while the multiplication operator (i.e., the product rule, hence Product) is applied over the change stratum, where cross-correlation is assumed negligible (i.e., minimum correlation), without having to run cross-validation cokriging as in PXCOV. The proposed method was tested using datasets collected previously on GlobeLand30 2000 and 2010 land cover at five sites in China. For each site, five model-training samples (mostly non-collocated) of equal size and one independent, collocated model-testing sample were used. Logistic regression models fitted with the relevant sample data were applied to predict local accuracies in single-date classifications, using selected map class occurrence pattern indices quantified in optimized moving windows. The area under the curve (AUC) of the receiver operating characteristic was used to evaluate the alternative methods. Empirical results confirmed that Fuzzy+Product is generally more accurate than either Fuzzy or Product alone, and that it shows no statistically significant differences from PXCOV. This indicates that Fuzzy+Product is a relatively simple yet reasonably accurate method when reference data are non-collocated or mostly so. Its value is likely greatest for local and global accuracy characterization of multi-temporal change information (discrete and fractional).
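Based only on the description above, the Fuzzy+Product combination of the two single-date correctness probabilities can be sketched as follows (function name hypothetical; the stratification into change/no-change pixels comes from the map being assessed):

```python
import numpy as np

def change_accuracy_fuzzy_product(p_t1, p_t2, is_change):
    """Combine per-pixel correctness probabilities of two single-date
    classifications into the probability of correct "from-to" labelling:
    - no-change stratum: maximum correlation assumed -> min(p1, p2)
    - change stratum: negligible correlation assumed -> p1 * p2
    """
    p_t1, p_t2 = np.asarray(p_t1), np.asarray(p_t2)
    return np.where(is_change, p_t1 * p_t2, np.minimum(p_t1, p_t2))

# Example: single-date accuracies of 0.9 and 0.8 give 0.8 for a
# no-change pixel and 0.72 for a change pixel.
```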


2019 ◽  
Vol 2019 ◽  
pp. 1-9
Author(s):  
J. M. Olivares-Ramírez ◽  
A. Dector ◽  
A. Duarte-Moller ◽  
D. Ortega Díaz ◽  
Diana Dector ◽  
...  

Currently, the automotive industry has made great advances in the incorporation of materials such as carbon fiber in high-performance cars. One of the main problems with these vehicles is interior warming, generated by the heat transferred from solar radiation falling on the car, mainly on the roof. This research proposes a composite material containing henequen natural fiber as a thermal barrier to be used as the roof of the car. In this work, 35 different five-layer laminates were prepared, combining carbon fiber, henequen natural fiber, fiberglass, and additives such as resin + Al2O3 or resin + Al. Reference samples were taken from stainless steel, and one reference sample was extracted from the roof of the car. Considering the solar radiation and the heat transfer mechanisms, the temperature of the surface exposed to solar radiation was determined. The thermal conductivity of all 37 samples was measured; the experimental results showed that the thermal conductivity of the steel from which the car roof is manufactured was 13.43 W·m⁻¹·K⁻¹, while that of the proposed laminate was 5.22 W·m⁻¹·K⁻¹, a 61.13% decrease. Using the temperature and thermal conductivity data, a thermal simulation of the system was performed in ANSYS. The results showed that the temperature inside a car with the carbon steel roof currently used in high-performance cars would be 62.34°C, whereas inside a car with the proposed laminate it would be 44.96°C, a thermal barrier yielding a temperature difference of 17.38°C.
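The quoted figures are mutually consistent: the conductivity reduction is $(13.43 - 5.22)/13.43 \approx 0.6113$, i.e. 61.13%, and the simulated cabin temperatures differ by $62.34 - 44.96 = 17.38\,^{\circ}\mathrm{C}$.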


1967 ◽  
Vol 13 (7) ◽  
pp. 595-607 ◽  
Author(s):  
G N Bowers ◽  
M L Kelley ◽  
R B McComb

Abstract The precision of replicate analyses for alkaline phosphatase activity measured on a survey reference sample was extremely poor. The reference sample's enzyme itself became suspect and was demonstrated to be sensitive to alkaline denaturation, in sharp contrast to the stability of the alkaline phosphatases found in human serum. The stability and chemical reactivity of the phosphatases present in this reference sample and in pooled frozen human serum, as well as those found in 4 partially purified nonhuman preparations and 4 commercial serum control materials, were investigated with regard to heat and alkaline denaturation, electrophoretic migration, and inhibition by phosphate, EDTA, and L-phenylalanine. It was concluded that criteria of stability and chemical reactivity, as well as more detailed information concerning the source of enzymes utilized in reference samples and control materials, are needed. On the basis of these studies, reliance on commercial serum enzyme control materials as an enzyme "standard" cannot be endorsed.

