The importance of magnification effects in galaxy-galaxy lensing

2020 ◽  
Vol 638 ◽  
pp. A96 ◽  
Author(s):  
Sandra Unruh ◽  
Peter Schneider ◽  
Stefan Hilbert ◽  
Patrick Simon ◽  
Sandra Martin ◽  
...  

Magnification changes the observed local number density of galaxies on the sky. This biases the observed tangential shear profiles around galaxies: the so-called galaxy-galaxy lensing (GGL) signal. Inference of physical quantities, such as the mean mass profile of halos around galaxies, is correspondingly affected by magnification effects. We used simulated shear and galaxy data from the Millennium Simulation to quantify the effect on shear and mass estimates from the magnified lens and source number counts. The former is due to the large-scale matter distribution in the foreground of the lenses; the latter is caused by magnification of the source population by the matter associated with the lenses. The GGL signal is calculated from the simulations by an efficient fast Fourier transform, which can also be applied to real data. The numerical treatment is complemented by a leading-order analytical description of the magnification effects, which is shown to fit the numerical shear data well. We find that the magnification effect is strongest for steep galaxy luminosity functions and high redshifts. For a KiDS+VIKING+GAMA-like survey with lens galaxies at redshift zd = 0.36 and source galaxies in the last three redshift bins with a mean redshift of z̄s = 0.79, the magnification correction changes the shear profile by up to 2%, and the mass is biased by up to 8%. We further considered an even higher-redshift fiducial lens sample at zd = 0.83, with a limiting magnitude of 22 mag in the r-band and a source redshift of zs = 0.99. Here we find that the magnification correction changes the shear profile by up to 45% and that the mass is biased by up to 55%. As expected, the sign of the bias depends on the local slope of the lens luminosity function αd: the mass is biased low for αd < 1 and biased high for αd > 1. While the magnification effect of sources is rarely more than 1% of the measured GGL signal, the statistical power of future weak lensing surveys warrants correction for this effect.
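For intuition, the sketch below shows the direct, pair-based tangential-shear estimator that the paper's FFT approach accelerates: sources are binned in projected separation around each lens, and their tangential ellipticity components are averaged per bin. This is a minimal numpy illustration, not the authors' FFT estimator; the flat-sky geometry, array names, and binning are illustrative assumptions.

```python
import numpy as np

def tangential_shear_profile(lens_xy, src_xy, src_g1, src_g2, r_bins):
    """Mean tangential shear of sources in radial bins around lenses.

    lens_xy, src_xy : (N, 2) flat-sky positions
    src_g1, src_g2  : source ellipticity components
    r_bins          : radial bin edges, same units as the positions
    """
    gt_sum = np.zeros(len(r_bins) - 1)
    counts = np.zeros(len(r_bins) - 1)
    for lx, ly in lens_xy:
        dx, dy = src_xy[:, 0] - lx, src_xy[:, 1] - ly
        r, phi = np.hypot(dx, dy), np.arctan2(dy, dx)
        # tangential component: g_t = -(g1 cos 2phi + g2 sin 2phi)
        gt = -(src_g1 * np.cos(2 * phi) + src_g2 * np.sin(2 * phi))
        idx = np.digitize(r, r_bins) - 1
        ok = (idx >= 0) & (idx < len(r_bins) - 1)
        np.add.at(gt_sum, idx[ok], gt[ok])
        np.add.at(counts, idx[ok], 1)
    return gt_sum / np.maximum(counts, 1)
```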

2012 ◽  
Vol 6 ◽  
pp. BBI.S9966 ◽  
Author(s):  
Guanjie Chen ◽  
Ao Yuan ◽  
Yanxun Zhou ◽  
Amy R. Bentley ◽  
Jie Zhou ◽  
...  

Advances in technology and reduced costs are facilitating large-scale sequencing of genes and exomes as well as entire genomes. Recently, we described a haplotype-based approach called SCARVA [1] that enables the simultaneous analysis of the association between rare and common variants in disease etiology. Here, we describe an extension of SCARVA that evaluates individual markers instead of haplotypes. This modified method (SCARVAsnp) is implemented in four stages. First, all common variants in a pre-specified region (e.g., a gene) are evaluated individually. Second, a union procedure is used to combine all rare variants (RVs) in the index region, and the ratio of the log likelihood with one RV excluded to the log likelihood of a model with all the collapsed RVs is calculated. On the basis of previously reported simulation studies [1], a likelihood ratio ≥ 1.3 is considered statistically significant. Third, the direction of the association of the removed RV is determined by evaluating the change in λ values with the inclusion and exclusion of that RV. Lastly, significant common and rare variants, along with covariates, are included in a final regression model to evaluate the association between the trait and variants in that region. We apply simulated and real data sets to show that the method is simple to use, computationally efficient, and able to accurately identify both common and rare risk variants. This method overcomes several limitations of existing methods. For example, SCARVAsnp limits loss of statistical power by excluding variants that are not associated with the trait of interest from the final model. Also, SCARVAsnp takes into consideration the direction of association by effectively modelling positively and negatively associated variants.
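The leave-one-out likelihood-ratio step (stage two) can be sketched as follows. This is a toy illustration under an assumed logistic-regression likelihood and simulated genotypes; the actual SCARVAsnp likelihood, the λ statistic, and the calibration of the 1.3 threshold are as described in the paper and are not reproduced here.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n, n_rv = 500, 8
rare = rng.binomial(1, 0.02, size=(n, n_rv))       # rare-variant genotypes
p = 1 / (1 + np.exp(-(rare[:, 0] - 0.5)))          # only RV 0 affects risk
y = rng.binomial(1, p)                             # binary trait

def loglik(burden):
    """Log likelihood of a logistic model on the collapsed RV burden."""
    X = sm.add_constant(burden.astype(float))
    return sm.Logit(y, X).fit(disp=0).llf

ll_all = loglik(rare.sum(axis=1))                  # all RVs collapsed
for j in range(n_rv):
    ll_loo = loglik(np.delete(rare, j, axis=1).sum(axis=1))
    # Per the paper's criterion, a likelihood ratio >= 1.3 flags RV j;
    # its direction is then judged from the change in effect size.
    print(f"RV {j}: likelihood ratio {ll_loo / ll_all:.3f}")
```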


2020 ◽  
Vol 500 (1) ◽  
pp. 22-33
Author(s):  
Matteo Bonato ◽  
Isabella Prandoni ◽  
Gianfranco De Zotti ◽  
Marisa Brienza ◽  
Raffaella Morganti ◽  
...  

ABSTRACT We present a study of the 1173 sources brighter than $S_{1.4\, \rm GHz}= 120\, \mu$Jy detected over an area of $\simeq 1.4\, \hbox{deg}^{2}$ in the Lockman Hole field. Exploiting the multiband information available in this field for ∼79 per cent of the sample, sources have been classified into radio-loud (RL) active galactic nuclei (AGNs), star-forming galaxies (SFGs), and radio-quiet (RQ) AGNs, using a variety of diagnostics available in the literature. Exploiting the observed tight anticorrelations between IRAC band 1 or band 2 and the source redshift, we could assign a redshift to 177 sources missing a spectroscopic measurement or a reliable photometric estimate. A Monte Carlo approach was used to take into account the spread around the mean relation. The derived differential number counts and luminosity functions at several redshifts of each population show good consistency with models and with earlier estimates made using data from different surveys and applying different approaches. Our results confirm that below $\sim 300\, \mu$Jy SFGs+RQ AGNs overtake RL AGNs, which dominate at brighter flux densities. We also confirm earlier indications of a similar evolution of RQ AGNs and SFGs. Finally, we discuss the angular correlation function of our sources and highlight its sensitivity to the criteria used for the classification.
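The Monte Carlo redshift assignment can be illustrated with a short sketch: each source missing a redshift receives many draws from the mean IRAC-magnitude–redshift relation plus Gaussian scatter, and derived quantities are averaged over the realisations. The linear form, slope, intercept, and scatter below are placeholders, not the relation fitted in the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical linear mean relation between IRAC magnitude and redshift,
# with Gaussian scatter around it (placeholder coefficients).
SLOPE, INTERCEPT, SCATTER = 0.25, -3.5, 0.15

def draw_redshifts(irac_mag, n_draws=1000):
    """Monte Carlo redshift realisations for sources lacking a redshift."""
    irac_mag = np.asarray(irac_mag, dtype=float)
    z_mean = SLOPE * irac_mag + INTERCEPT
    z = rng.normal(z_mean[:, None], SCATTER, size=(irac_mag.size, n_draws))
    return np.clip(z, 0.0, None)          # keep redshifts non-negative

z_samples = draw_redshifts([18.0, 20.5])  # counts/LFs are then averaged
                                          # over the realisations
```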


Genetics ◽  
2003 ◽  
Vol 165 (4) ◽  
pp. 2269-2282
Author(s):  
D Mester ◽  
Y Ronin ◽  
D Minkov ◽  
E Nevo ◽  
A Korol

Abstract This article is devoted to the problem of ordering in linkage groups with many dozens or even hundreds of markers. The ordering problem belongs to the field of discrete optimization on a set of all possible orders, amounting to n!/2 for n loci; hence it is considered an NP-hard problem. Several authors attempted to employ the methods developed in the well-known traveling salesman problem (TSP) for multilocus ordering, using the assumption that for a set of linked loci the true order will be the one that minimizes the total length of the linkage group. A novel, fast, and reliable algorithm developed for the TSP and based on evolution-strategy discrete optimization was applied in this study for multilocus ordering on the basis of pairwise recombination frequencies. The quality of derived maps under various complications (dominant vs. codominant markers, marker misclassification, negative and positive interference, and missing data) was analyzed using simulated data with ∼50-400 markers. High performance of the employed algorithm allows systematic treatment of the problem of verification of the obtained multilocus orders on the basis of computing-intensive bootstrap and/or jackknife approaches for detecting and removing questionable marker scores, thereby stabilizing the resulting maps. Parallel calculation technology can easily be adopted for further acceleration of the proposed algorithm. Real data analysis (on maize chromosome 1 with 230 markers) is provided to illustrate the proposed methodology.
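A toy version of the TSP-style objective and an evolution-strategy search over marker orders might look as follows; it uses segment-reversal mutations and keeps non-worse offspring. This is a minimal (1+1)-ES sketch under the stated objective (total adjacent recombination distance), not the authors' algorithm.

```python
import numpy as np

def order_markers(rf, n_iter=20000, seed=0):
    """Toy (1+1) evolution strategy for multilocus ordering.

    rf : (n, n) symmetric matrix of pairwise recombination frequencies.
    As in the TSP analogy, the criterion is the total length of the
    linkage group, i.e. the summed distance between adjacent markers.
    """
    rng = np.random.default_rng(seed)
    n = rf.shape[0]
    order = rng.permutation(n)
    cost = rf[order[:-1], order[1:]].sum()
    for _ in range(n_iter):
        i, j = np.sort(rng.integers(0, n, size=2))
        if i == j:
            continue
        cand = order.copy()
        cand[i:j + 1] = cand[i:j + 1][::-1]    # segment-reversal mutation
        c = rf[cand[:-1], cand[1:]].sum()
        if c <= cost:                          # keep non-worse offspring
            order, cost = cand, c
    return order, cost
```

Bootstrap or jackknife verification, as in the article, then amounts to re-running this search on resampled data and flagging markers whose position is unstable.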


Author(s):  
Andrew Jacobsen ◽  
Matthew Schlegel ◽  
Cameron Linke ◽  
Thomas Degris ◽  
Adam White ◽  
...  

This paper investigates different vector step-size adaptation approaches for non-stationary, online, continual prediction problems. Vanilla stochastic gradient descent can be considerably improved by scaling the update with a vector of appropriately chosen step-sizes. Many methods, including AdaGrad, RMSProp, and AMSGrad, keep statistics about the learning process to approximate a second-order update: a vector approximation of the inverse Hessian. Another family of approaches uses meta-gradient descent to adapt the step-size parameters to minimize prediction error. These meta-descent strategies are promising for non-stationary problems, but have not been as extensively explored as quasi-second-order methods. We first derive a general, incremental meta-descent algorithm, called AdaGain, designed to be applicable to a much broader range of algorithms, including those with semi-gradient updates or even accelerations, such as RMSProp. We provide an empirical comparison of methods from both families. We conclude that methods from both families can perform well, but that in non-stationary prediction problems the meta-descent methods exhibit advantages. Our method is particularly robust across several prediction problems, and is competitive with the state-of-the-art method on a large-scale time-series prediction problem on real data from a mobile robot.
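The abstract does not spell out AdaGain's update rule, so as a representative of the meta-descent family here is a sketch of Sutton's IDBD, which adapts a per-weight log step-size by gradient descent on the prediction error of a linear predictor. This is an assumed stand-in for illustration, not the AdaGain algorithm itself.

```python
import numpy as np

class IDBD:
    """Sutton's IDBD: meta-gradient adaptation of per-weight step-sizes
    for linear prediction (a representative meta-descent method)."""

    def __init__(self, n, theta=0.01, init_alpha=0.05):
        self.w = np.zeros(n)                        # prediction weights
        self.beta = np.full(n, np.log(init_alpha))  # log step-sizes
        self.h = np.zeros(n)                        # gradient trace
        self.theta = theta                          # meta step-size

    def update(self, x, target):
        delta = target - self.w @ x                 # prediction error
        self.beta += self.theta * delta * x * self.h
        alpha = np.exp(self.beta)                   # per-weight step-sizes
        self.w += alpha * delta * x
        # decaying trace of recent weight updates
        self.h = self.h * np.clip(1 - alpha * x * x, 0, None) \
                 + alpha * delta * x
        return delta
```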



Complexity ◽  
2018 ◽  
Vol 2018 ◽  
pp. 1-16 ◽  
Author(s):  
Yiwen Zhang ◽  
Yuanyuan Zhou ◽  
Xing Guo ◽  
Jintao Wu ◽  
Qiang He ◽  
...  

The K-means algorithm is one of the ten classic algorithms in the area of data mining and has been studied by researchers in numerous fields for a long time. However, the value of the clustering number k in the K-means algorithm is not always easy to determine, and the selection of the initial centers is vulnerable to outliers. This paper proposes an improved K-means clustering algorithm called the covering K-means algorithm (C-K-means). The C-K-means algorithm not only acquires efficient and accurate clustering results but also self-adaptively provides a reasonable number of clusters based on the data features. It includes two phases: the initialization of the covering algorithm (CA) and the Lloyd iteration of K-means. The first phase executes the CA, which self-organizes and recognizes the number of clusters k based on the similarities in the data; it requires neither the number of clusters to be prespecified nor the initial centers to be manually selected. It therefore has a "blind" feature, that is, k is not preselected. The second phase performs the Lloyd iteration based on the results of the first phase. The C-K-means algorithm combines the advantages of CA and K-means. Experiments are carried out on the Spark platform, and the results verify the good scalability of the C-K-means algorithm, which can effectively solve the problem of large-scale data clustering. Extensive experiments on real data sets show that the C-K-means algorithm outperforms existing algorithms in both accuracy and efficiency under both sequential and parallel conditions.
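The abstract does not detail the covering algorithm, so the sketch below substitutes a simple greedy radius cover to show the two-phase shape: centers are added until every point lies within a chosen radius of some center (so k emerges from the data), and Lloyd iterations then refine them. The radius parameter and the greedy cover are our assumptions, not the paper's CA; float-valued data is assumed.

```python
import numpy as np

def covering_init(X, radius, rng):
    """Greedy cover: add centers until every point is within `radius`
    of some center; the number of centers plays the role of k."""
    centers, uncovered = [], np.ones(len(X), dtype=bool)
    while uncovered.any():
        i = rng.choice(np.flatnonzero(uncovered))
        centers.append(X[i])
        uncovered &= np.linalg.norm(X - X[i], axis=1) > radius
    return np.array(centers, dtype=float)

def c_k_means(X, radius, n_iter=50, seed=0):
    rng = np.random.default_rng(seed)
    centers = covering_init(X, radius, rng)       # phase 1: k emerges here
    for _ in range(n_iter):                       # phase 2: Lloyd iteration
        d = np.linalg.norm(X[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        for k in range(len(centers)):
            if (labels == k).any():
                centers[k] = X[labels == k].mean(axis=0)
    return centers, labels
```

Smaller radii yield more clusters; in this toy version the radius plays the role that the CA's similarity threshold would.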


2021 ◽  
Author(s):  
Cemanur Aydinalp ◽  
Sulayman Joof ◽  
Mehmet Nuri Akinci ◽  
Ibrahim Akduman ◽  
Tuba Yilmaz

In this manuscript, we propose a new technique for the determination of Debye parameters, which represent the dielectric properties of materials, from the reflection coefficient response of open-ended coaxial probes. The method retrieves the Debye parameters using a deep learning model designed through the utilization of numerically generated data. Unlike real data, synthetically generated input and output data for training provide representation of a wide variety of materials with rapid data generation. Furthermore, the proposed method provides design flexibility and can be applied to any desired probe with intended dimensions and material. Next, we experimentally verified the designed deep learning model using reflection coefficients measured when the probe was terminated with five different standard liquids, four mixtures, and a gel-like material, and compared the results with the literature. The obtained mean percent relative error ranged from 1.21 ± 0.06 to 10.89 ± 0.08. Our work also presents a large-scale statistical verification of the proposed dielectric property retrieval technique.
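The train-on-synthetic-data idea can be sketched as follows: random Debye parameters are pushed through a forward model to generate training spectra, and a small network learns the inverse map. For simplicity the sketch uses the single-pole Debye permittivity itself as the network input; the paper's pipeline instead uses the probe's simulated reflection coefficients, and all parameter ranges, frequencies, and the network size here are illustrative assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
freqs = np.linspace(0.5e9, 6e9, 64)   # assumed probing frequencies (Hz)
EPS0 = 8.854e-12                      # vacuum permittivity (F/m)

def debye_spectrum(eps_inf, d_eps, tau_ps, sigma):
    """Single-pole Debye model; returns stacked real/imag permittivity."""
    w = 2 * np.pi * freqs
    eps = (eps_inf + d_eps / (1 + 1j * w * tau_ps * 1e-12)
           + sigma / (1j * w * EPS0))
    return np.concatenate([eps.real, eps.imag])

# Synthetic training set: random Debye parameters -> spectra.
n = 5000
params = np.column_stack([
    rng.uniform(2, 10, n),    # eps_inf
    rng.uniform(5, 70, n),    # delta epsilon
    rng.uniform(5, 25, n),    # relaxation time tau (ps)
    rng.uniform(0, 2, n),     # static conductivity (S/m)
])
X = np.array([debye_spectrum(*p) for p in params])

# A small MLP learns the inverse map from spectrum to parameters.
model = MLPRegressor(hidden_layer_sizes=(128, 64), max_iter=500)
model.fit(X, params)
```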


2021 ◽  
Vol 2021 ◽  
pp. 1-12
Author(s):  
Yaser Abdulraheem ◽  
Moustafa Ghannam ◽  
Hariharsudan Sivaramakrishnan Radhakrishnan ◽  
Ivan Gordon

Photovoltaic devices based on amorphous silicon/crystalline silicon (a-Si:H/c-Si) heterojunction interfaces hold the highest efficiency to date among silicon-based devices, with efficiencies exceeding 26%, and are regarded as a promising technology for large-scale terrestrial PV applications. A detailed understanding of the operation of this type of device is crucial to improving and optimizing its performance. Silicon heterojunction (SHJ) solar cells have two main interfaces that play a major role in their operation: the transparent conductive oxide (TCO)/a-Si:H interface and the a-Si:H/c-Si heterojunction interface. In the work presented here, a detailed analytical description is provided for the impact of both interfaces on the performance of such devices, and especially on the device fill factor (FF). It is found that the TCO work function can dramatically impact the FF by introducing a series resistance element, in addition to limiting the forward-bias current under illumination, causing the well-known S-shape characteristic in the I-V curve of such devices. On the other hand, it is shown that the thermionic emission barrier at the heterojunction interface can play a major role in introducing an added series resistance factor due to the intrinsic a-Si:H buffer layer that is usually introduced to improve surface passivation. The theoretical explanation of the role of both interfaces in device operation, based on 1D device simulation, is verified experimentally: the I-V characteristics of fabricated devices were compared to the curves produced by simulation, and the observed degradation in the FF of fabricated devices was explained in light of the analytical findings from simulation.
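As a pointer to why the heterojunction barrier acts like a series resistance, the standard textbook thermionic-emission expressions (not equations quoted from the paper) are:

```latex
J_{\mathrm{TE}} = A^{*} T^{2}
  \exp\!\left(-\frac{q\,\phi_{B}}{k_{B}T}\right)
  \left[\exp\!\left(\frac{q\,V}{k_{B}T}\right) - 1\right],
\qquad
\rho_{c} = \frac{k_{B}}{q\,A^{*}T}\,
  \exp\!\left(\frac{q\,\phi_{B}}{k_{B}T}\right),
```

where A* is the effective Richardson constant and φB the barrier height. The specific contact resistivity ρc grows exponentially with φB, which is how a tall barrier at the a-Si:H/c-Si interface manifests as an added series resistance and, in the extreme, as an S-shaped I-V curve.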


2021 ◽  
Author(s):  
Yi Luan ◽  
Rui Ding ◽  
Wenshen Gu ◽  
Xiaofan Zhang ◽  
Xinliang Chen ◽  
...  

Abstract Since the end of 2019, the COVID-19 epidemic has swept the world. With the continued spread of COVID-19 and the ongoing emergence of mutant strains, the situation for epidemic prevention and control remains severe. On May 21, 2021, Guangzhou City, Guangdong Province, reported a new locally confirmed case, and Guangzhou became the first city in mainland China to contend with the Delta variant. As a local hospital with strong nucleic acid detection capabilities, Sun Yat-sen Memorial Hospital of Sun Yat-sen University took the lead in launching the construction and deployment of the Mobile Shelter Laboratories and large-scale screening work in Foshan and Zhanjiang, Guangdong Province. By summarizing practical experience and analyzing observational and comparative data, we use real data to demonstrate a feasible approach for rapidly expanding detection capacity in a short period of time. We hope these experiences will serve as a reference for other countries or regions, especially areas with underdeveloped medical and health care.


2016 ◽  
Author(s):  
Hieab HH Adams ◽  
Hadie Adams ◽  
Lenore J Launer ◽  
Sudha Seshadri ◽  
Reinhold Schmidt ◽  
...  

Joint analysis of data from multiple studies in collaborative efforts strengthens scientific evidence, with the gold-standard approach being the pooling of individual participant data (IPD). However, sharing IPD often faces legal, ethical, and logistic constraints for sensitive or high-dimensional data, such as in clinical trials, observational studies, and large-scale omics studies. Therefore, meta-analysis of study-level effect estimates is routinely done, but this compromises statistical power, accuracy, and flexibility. Here we propose a novel meta-analytical approach, named partial derivatives meta-analysis, that is mathematically equivalent to using IPD, yet only requires the sharing of aggregate data. It not only yields results identical to pooled IPD analyses, but also allows post-hoc adjustment for covariates and stratification without the need for site-specific re-analysis. Thus, when IPD cannot be shared, partial derivatives meta-analysis still produces gold-standard results, which can be used to better inform guidelines and policies on clinical practice.
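The core idea can be made concrete for ordinary least squares, where the partial derivatives of the squared-error objective depend on the data only through the cross-products X'X and X'y: if each site shares those aggregates, summing them and solving the normal equations reproduces the pooled-IPD estimate exactly. This is a minimal sketch of the linear case only; the paper's method covers more general settings.

```python
import numpy as np

def site_summaries(X, y):
    """Each site shares only aggregate cross-products, not raw data.
    For OLS these are the sufficient statistics of the score equations
    (the partial derivatives of the squared-error objective)."""
    return X.T @ X, X.T @ y

def pooled_fit(summaries):
    XtX = sum(s[0] for s in summaries)
    Xty = sum(s[1] for s in summaries)
    return np.linalg.solve(XtX, Xty)   # identical to pooled-IPD OLS

# Demo: two 'sites' reproduce the full-data estimate exactly.
rng = np.random.default_rng(7)
X = np.column_stack([np.ones(200), rng.normal(size=(200, 2))])
y = X @ np.array([1.0, 2.0, -0.5]) + rng.normal(size=200)
beta_ipd = np.linalg.lstsq(X, y, rcond=None)[0]
beta_meta = pooled_fit([site_summaries(X[:100], y[:100]),
                        site_summaries(X[100:], y[100:])])
assert np.allclose(beta_ipd, beta_meta)
```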

