TDCOSMO

2020 ◽  
Vol 639 ◽  
pp. A101 ◽  
Author(s):  
M. Millon ◽  
A. Galan ◽  
F. Courbin ◽  
T. Treu ◽  
S. H. Suyu ◽  
...  

Time-delay cosmography of lensed quasars has achieved 2.4% precision on the measurement of the Hubble constant, H0. As part of an ongoing effort to uncover and control systematic uncertainties, we investigate three potential sources: (1) stellar kinematics, (2) line-of-sight effects, and (3) the deflector mass model. To meet this goal in a quantitative way, we reproduced the H0LiCOW/SHARP/STRIDES (hereafter TDCOSMO) procedures on a set of real and simulated data, and we find the following. First, stellar kinematics cannot be a dominant source of error or bias, since a systematic change of 10% in the measured velocity dispersion leads to only a 0.7% shift in H0 from the seven lenses analyzed by TDCOSMO. Second, we find no bias arising from incorrect estimation of the line-of-sight effects. Third, we show that elliptical composite (stars + dark matter halo), power-law, and cored power-law mass profiles have the flexibility to yield a broad range of H0 values. However, the TDCOSMO procedures that model the data with both composite and power-law mass profiles are informative. If the models agree, as we observe in real systems owing to the “bulge-halo” conspiracy, H0 is recovered precisely and accurately by both models. If the two models disagree, as in the case of some pathological models illustrated here, the TDCOSMO procedure either discriminates between them through the goodness of fit, or it accounts for the discrepancy in the final error bars provided by the analysis. This conclusion is consistent with a reanalysis of six of the TDCOSMO (real) lenses: the composite model yields H0 = 74.0 (+1.7, −1.8) km s−1 Mpc−1, while the power-law model yields 74.2 (+1.6, −1.6) km s−1 Mpc−1. In conclusion, we find no evidence of bias or errors larger than the current statistical uncertainties reported by TDCOSMO.
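As a back-of-envelope check, the two quoted H0 values can be combined by inverse-variance weighting after symmetrizing the asymmetric error bars; this is an illustrative sketch, not the TDCOSMO joint inference, which is done at the posterior level.

```python
# Illustrative inverse-variance combination of the two H0 values quoted
# above (composite and power-law models). Asymmetric error bars are
# symmetrized by averaging; this is NOT the TDCOSMO joint inference.

def combine(measurements):
    """measurements: list of (value, err_minus, err_plus) tuples."""
    weights, weighted_vals = [], []
    for value, lo, hi in measurements:
        sigma = 0.5 * (lo + hi)          # symmetrized 1-sigma error
        w = 1.0 / sigma**2               # inverse-variance weight
        weights.append(w)
        weighted_vals.append(w * value)
    combined = sum(weighted_vals) / sum(weights)
    err = (1.0 / sum(weights)) ** 0.5
    return combined, err

h0_composite = (74.0, 1.8, 1.7)   # km/s/Mpc, from the abstract
h0_powerlaw  = (74.2, 1.6, 1.6)
print(combine([h0_composite, h0_powerlaw]))   # ≈ (74.1, 1.2)
```

The naive combined error understates the truth when the two models share systematics, which is precisely the point the abstract addresses.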

2019 ◽  
Vol 629 ◽  
pp. A59 ◽  
Author(s):  
Lorenzo Posti ◽  
Antonino Marasco ◽  
Filippo Fraternali ◽  
Benoit Famaey

In ΛCDM cosmology, to first order, galaxies form out of the cooling of baryons within the virial radius of their dark matter halo. The fractions of mass and angular momentum retained in the baryonic and stellar components of disc galaxies put strong constraints on our understanding of galaxy formation. In this work, we derive the fraction of angular momentum retained in the stellar component of spirals, fj, the global star formation efficiency, fM, and the ratio of the asymptotic circular velocity (Vflat) to the virial velocity, fV, together with their scatter, by simultaneously fitting the observed stellar mass–velocity (Tully–Fisher), size–mass, and mass–angular momentum (Fall) relations. We compare the goodness of fit of three models: (i) where the logarithms of fj, fM, and fV vary linearly with the logarithm of the observable Vflat; (ii) where these values vary as a double power law; and (iii) where these values also vary as a double power law, but with a prior imposed on fM such that it follows the expectations of widely used abundance-matching models. We conclude that the scatter in these fractions is particularly small (∼0.07 dex) and that the linear model is by far statistically preferred over that with abundance-matching priors. This indicates that the fundamental galaxy formation parameters are small-scatter, single-slope, monotonic functions of mass, rather than complicated non-monotonic functions. This incidentally confirms that the most massive spiral galaxies should have turned nearly all the baryons associated with their haloes into stars. We call this the failed feedback problem.
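Model (i) amounts to a straight-line fit in log-log space. A minimal sketch on synthetic data follows; the slope, intercept, and sample size are invented for illustration, and only the 0.07 dex scatter echoes the abstract.

```python
import numpy as np

# Model (i): log10(f_j) = a + b * log10(V_flat). Synthetic data with
# invented parameters, purely to illustrate the fit; the numbers are
# not taken from the paper.
rng = np.random.default_rng(0)
log_v = rng.uniform(1.7, 2.5, size=200)           # log10 V_flat [km/s]
a_true, b_true, scatter = -0.30, 0.10, 0.07       # 0.07 dex scatter
log_fj = a_true + b_true * log_v + rng.normal(0, scatter, size=200)

# Ordinary least squares via numpy's linear solver.
A = np.column_stack([np.ones_like(log_v), log_v])
(a_fit, b_fit), *_ = np.linalg.lstsq(A, log_fj, rcond=None)
resid_scatter = np.std(log_fj - A @ np.array([a_fit, b_fit]))
print(a_fit, b_fit, resid_scatter)
```

The paper fits all three scaling relations simultaneously with full error propagation; this sketch only shows the functional form being tested.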


2020 ◽  
Vol 496 (3) ◽  
pp. 3973-3990
Author(s):  
Sut-Ieng Tam ◽  
Richard Massey ◽  
Mathilde Jauzac ◽  
Andrew Robertson

ABSTRACT We quantify the performance of mass mapping techniques on mock imaging and gravitational lensing data of galaxy clusters. The optimum method depends upon the scientific goal. We assess measurements of clusters’ radial density profiles, departures from sphericity, and their filamentary attachment to the cosmic web. We find that mass maps produced by direct (KS93) inversion of shear measurements are unbiased, and that their noise can be suppressed via filtering with mrlens. Forward-fitting techniques, such as lenstool, suppress noise further, but at the cost of biased ellipticity in the cluster core and overestimation of mass at large radii. Interestingly, current searches for filaments are noise-limited by the intrinsic shapes of weakly lensed galaxies, rather than by the projection of line-of-sight structures. Therefore, space-based or balloon-based imaging surveys that resolve a high density of lensed galaxies could soon detect one or two filaments around most clusters.


1997 ◽  
Vol 06 (04) ◽  
pp. 425-447 ◽  
Author(s):  
Takeshi Fukuyama ◽  
Yuuko Kakigi ◽  
Takashi Okamura

Two nontransparent lens models, the multipole expansion model and the two point-mass model, are analyzed using catastrophe theory. The singularity behaviours of the 2n-pole moments are discussed. We apply these models to the triple quasar PG1115+080 and compare them with a typical transparent model, the softened power-law spheroid. The multipole expansion model gives the best fit among them.


1976 ◽  
Vol 159 (1) ◽  
pp. 105-120 ◽  
Author(s):  
J D Allen ◽  
J A Thoma

We have developed a depolymerase computer model that uses a minimization routine. The model is designed so that, given experimental bond-cleavage frequencies for oligomeric substrates and experimental Michaelis parameters as a function of substrate chain length, the optimum subsite map is generated. The minimized sum of the weighted-squared residuals of the experimental and calculated data is used as a criterion of the goodness-of-fit for the optimized subsite map. The application of the minimization procedure to subsite mapping is explored through the use of simulated data. A procedure is developed whereby the minimization model can be used to determine the number of subsites in the enzymic binding region and to locate the position of the catalytic amino acids among these subsites. The degree of propagation of experimental variance into the subsite-binding energies is estimated. The question of whether hydrolytic rate coefficients are constant or a function of the number of filled subsites is examined.
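The fitting criterion described above, the minimized sum of weighted squared residuals, can be sketched on a toy problem. The two-parameter exponential model below is invented purely for illustration and stands in for the subsite-map model that predicts cleavage frequencies and Michaelis parameters:

```python
import numpy as np
from scipy.optimize import minimize

# Toy illustration of the criterion: minimize the weighted sum of squared
# residuals between "experimental" and calculated values. The model here
# (a two-parameter exponential in chain length) is invented; it is NOT
# the depolymerase subsite model of the paper.
rng = np.random.default_rng(2)
chain_len = np.arange(3, 11)                       # substrate chain lengths
true = 2.0 * np.exp(-0.3 * chain_len)
obs = true * (1 + rng.normal(0, 0.05, chain_len.size))
weights = 1.0 / (0.05 * obs) ** 2                  # inverse-variance weights

def wssr(params):
    a, b = params
    calc = a * np.exp(-b * chain_len)
    return np.sum(weights * (obs - calc) ** 2)     # goodness-of-fit criterion

res = minimize(wssr, x0=[1.0, 0.1], method="Nelder-Mead")
print(res.x, res.fun)
```

As in the paper, the minimized value of the criterion serves both to select the best parameter set and, when compared across model variants, to decide questions such as the number of subsites.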


2019 ◽  
Vol 19 ◽  
pp. e00657 ◽  
Author(s):  
Peijian Shi ◽  
Lei Zhao ◽  
David A. Ratkowsky ◽  
Karl J. Niklas ◽  
Weiwei Huang ◽  
...  

2020 ◽  
Vol 12 (22) ◽  
pp. 9778
Author(s):  
Wei Zhu ◽  
Ding Ma ◽  
Zhigang Zhao ◽  
Renzhong Guo

Location-based social media have enabled us to bridge the gap between the virtual and physical worlds by exploring human online dynamics from a geographic perspective. This study uses a large collection of geotagged photos from Flickr to investigate the complexity of spatial interactions at the country level. We adopted three levels of administrative divisions in mainland China—province, city, and county—as basic geographic units and established three types of topology—province–province, city–city, and county–county networks—from the extracted user movement trajectories. We conducted a scaling analysis based on heavy-tailed distribution statistics, including power-law exponents, a goodness-of-fit index, and the ht-index, by which we characterized the great complexity of the trajectory lengths, the spatial distribution of geotagged photos, and the related metrics of the built networks. This complexity indicates a highly imbalanced ratio of populated to unpopulated areas, or of large to small flows between areas. More interestingly, all power-law exponents were around 2 for the networks at various spatial and temporal scales. Such a recurrence of scaling statistics at multiple resolutions can be regarded as statistical self-similarity and could thus help reveal the fractal nature of human mobility patterns.
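Power-law exponents of the kind cited above (all near 2) are commonly estimated by maximum likelihood rather than by regression on log-binned histograms. A minimal sketch of the continuous MLE, in the form popularized by Clauset et al., run on synthetic heavy-tailed data:

```python
import numpy as np

def powerlaw_alpha_mle(x, xmin):
    """Continuous maximum-likelihood estimate of a power-law exponent:
    alpha = 1 + n / sum(ln(x_i / xmin)) over all x_i >= xmin."""
    tail = x[x >= xmin]
    return 1.0 + tail.size / np.sum(np.log(tail / xmin))

# Synthetic Pareto sample with alpha = 2, mimicking the heavy-tailed flow
# distributions described above; the data here are simulated, not Flickr.
rng = np.random.default_rng(3)
alpha, xmin = 2.0, 1.0
u = rng.uniform(size=50_000)
x = xmin * (1 - u) ** (-1.0 / (alpha - 1.0))   # inverse-CDF sampling
print(powerlaw_alpha_mle(x, xmin))             # ~2.0
```

Choosing xmin and computing the goodness-of-fit index involve additional steps (e.g. Kolmogorov-Smirnov distance minimization) not shown here.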


2011 ◽  
Vol 22 (05) ◽  
pp. 495-503 ◽  
Author(s):  
ANTHONY LONGJAS ◽  
ERIKA FILLE LEGARA ◽  
CHRISTOPHER MONTEROLA

We investigate how humans visually perceive and approximate area or space allocation through visual area experiments. Participants are asked to draw a circle concentric to a reference circle on a monitor screen, using a computer mouse, with area measurements relative to the area of the reference circle. The activity is repeated for a triangle, square, and hexagon. The area estimated corresponds to a participant’s area estimate (perceived) for a corresponding requested area to be drawn (stimulus). The estimated area fits very well (goodness of fit R² > 0.97) to a power law given by r^(2α), where r is the radius of the circle or the distance to the edge for the triangle, square, and hexagon. The power-law fit demonstrates that, for all shapes sampled, participants underestimated area for stimuli less than ~100% of the reference area and overestimated area for stimuli greater than ~100% of the reference area. The value of α is smallest for the circle (α∘ ≈ 1.33) and largest for the triangle (α△ ≈ 1.56), indicating that, in the presence of a reference area of the same shape, the circle is perceived to be smallest among the figures considered when drawn bigger than the reference area, but largest when drawn smaller than the reference area. We also conducted experiments on length estimation and, consistent with the results of Dehaene et al. (Science, 2008), we recover a linear relationship between perceived length and stimulus. We show that, contrary to number mapping into space and/or length perception, humans’ perception of area is not corrected by the introduction of cultural interventions such as formal education.
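The reported relation, a power law anchored at the reference area so that areas below it are underestimated and areas above it overestimated (for α > 1), can be illustrated by a log-log fit on synthetic data. The noise level and sample size below are invented; only α = 1.33 echoes the circle result quoted above.

```python
import numpy as np

# Synthetic "perceived vs stimulus" areas following a power law through the
# reference point (100%, 100%) with alpha = 1.33, as for the circle above.
# Noise and sample size are invented for illustration.
rng = np.random.default_rng(4)
alpha = 1.33
stimulus = np.linspace(20, 300, 60)               # requested area (%)
perceived = 100.0 * (stimulus / 100.0) ** alpha
perceived *= np.exp(rng.normal(0, 0.05, stimulus.size))

# Fit log(perceived) = log(k) + alpha * log(stimulus) and compute R^2.
X = np.column_stack([np.ones(stimulus.size), np.log(stimulus)])
y = np.log(perceived)
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ coef
r2 = 1 - np.sum(resid**2) / np.sum((y - y.mean()) ** 2)
print(coef[1], r2)     # fitted alpha and goodness of fit
```

Because the curve passes through the reference point, α > 1 automatically produces the under/over-estimation pattern the abstract describes.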


2019 ◽  
Vol 628 ◽  
pp. A117 ◽  
Author(s):  
A. Bittner ◽  
J. Falcón-Barroso ◽  
B. Nedelchev ◽  
A. Dorta ◽  
D. A. Gadotti ◽  
...  

We present a convenient, all-in-one framework for the scientific analysis of fully reduced, (integral-field) spectroscopic data. The Galaxy IFU Spectroscopy Tool (GIST) is entirely written in Python 3 and conducts all the steps from the preparation of input data to the scientific analysis and to the production of publication-quality plots. In its basic set-up, it extracts stellar kinematics, performs an emission-line analysis, and derives stellar population properties from full spectral fitting and via the measurement of absorption line-strength indices by exploiting the well-known pPXF and GandALF routines, where the latter has now been implemented in Python. The pipeline is not specific to any instrument or analysis technique and provides easy means of modification and further development, thanks to its modular code architecture. An elaborate, Python-native parallelisation is implemented and tested on various machines. The software further features a dedicated visualisation routine with a sophisticated graphical user interface. This allows an easy, fully interactive plotting of all measurements, spectra, fits, and residuals, as well as star formation histories and the weight distribution of the models. The pipeline has been successfully applied to both low- and high-redshift data from MUSE, PPAK (CALIFA), and SINFONI, and to simulated data for HARMONI and WEAVE and is currently being used by the TIMER, Fornax3D, and PHANGS collaborations. We demonstrate its capabilities by applying it to MUSE TIMER observations of NGC 1433.


2020 ◽  
Vol 498 (4) ◽  
pp. 6013-6033
Author(s):  
Mario H Amante ◽  
Juan Magaña ◽  
V Motta ◽  
Miguel A García-Aspeitia ◽  
Tomás Verdugo

ABSTRACT Inspired by a new compilation of strong-lensing systems (SLS), which consists of 204 points in the redshift range 0.0625 < zl < 0.958 for the lens and 0.196 < zs < 3.595 for the source, we constrain three models that generate late cosmic acceleration: the ω-cold dark matter model and the Chevallier–Polarski–Linder and Jassal–Bagla–Padmanabhan parametrizations. Our compilation contains only those systems with early-type galaxies acting as lenses, with spectroscopically measured stellar velocity dispersions, estimated Einstein radii, and both the lens and source redshifts. We assume an axially symmetric mass distribution in the lens equation, using a correction to alleviate differences between the measured velocity dispersion (σ) and the dark matter halo velocity dispersion (σDM), as well as other systematic errors that may affect the measurements. We have considered different subsamples to constrain the cosmological parameters of each model. Additionally, we generate mock SLS data to assess the impact of the chosen mass profile on the accuracy of the Einstein radius estimation. Our results show that the cosmological constraints are very sensitive to the selected data: some cases show convergence problems in the estimation of cosmological parameters (e.g. systems with observed distance ratio Dobs < 0.5), while others yield high values of the χ2 function (e.g. systems with Dobs > 1 or high velocity dispersion σ > 276 km s−1). However, we obtained a fiducial sample of 143 systems, which improves the constraints on each tested cosmological model.
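The observed distance ratio used as a selection criterion above comes, for a singular isothermal sphere lens, from Dobs = Dls/Ds = c²θE/(4πσ²). A minimal sketch with invented lens values, not from the compilation:

```python
import math

C_KM_S = 299_792.458          # speed of light [km/s]

def distance_ratio(theta_e_arcsec, sigma_kms):
    """Observed distance ratio D_ls/D_s for a singular isothermal sphere:
    D = c^2 * theta_E / (4 * pi * sigma^2), with theta_E in radians."""
    theta_e_rad = theta_e_arcsec * math.pi / (180.0 * 3600.0)
    return C_KM_S**2 * theta_e_rad / (4.0 * math.pi * sigma_kms**2)

# Illustrative lens: theta_E = 1.0 arcsec, sigma = 250 km/s (invented).
print(distance_ratio(1.0, 250.0))   # ≈ 0.55
```

Since Dls < Ds physically, Dobs > 1 signals inconsistent measurements, which is why such systems inflate the χ2 in the analysis above.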

