Seismic glitchness at Sos Enattos site: impact on intermediate black hole binaries detection efficiency

2021 · Vol 136 (5)
Author(s): A. Allocca, A. Berbellini, L. Boschi, E. Calloni, G. L. Cardello, ...

Abstract: Third-generation gravitational wave observatories will extend the lower frequency limit of the observation band toward 2 Hz, where new sources of gravitational waves, in particular intermediate-mass black holes (IMBH), will be detected. In this frequency region, seismic noise will play an important role, mainly through the so-called Newtonian noise, i.e., the gravity-mediated coupling between ground motion and test-mass displacements. The signal lifetime of such sources in the detector is of the order of tens of seconds. In order to determine whether a candidate site to host the Einstein Telescope observatory is particularly suitable to observe such sources, it is necessary to estimate the probability that, over the characteristic time scale of the signal, the sensitivity of the detector is not perturbed by Newtonian noise. In this paper, a first analysis is presented, focused on the Sos Enattos site (Sardinia, Italy), a candidate to host the Einstein Telescope. Starting from a long data set of seismic noise, this probability distribution is evaluated considering both the presently designed triangular ET configuration and the classical "L" configuration.
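As a rough illustration of the kind of quantity involved (not the paper's pipeline), the sketch below estimates the empirical probability that band-limited seismic RMS stays below a threshold over windows comparable to an IMBH signal duration. The band, window length, threshold, and synthetic data are all assumptions for demonstration.

```python
# Illustrative sketch: fraction of signal-length windows in which band-limited
# seismic noise stays below a placeholder budget. Data are synthetic.
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 100.0                                      # sampling rate [Hz] (assumed)
rng = np.random.default_rng(0)
n_samples = int(6 * 3600 * fs)                  # six hours of stand-in data
ground = 1e-9 * rng.standard_normal(n_samples)  # stand-in for ground velocity [m/s]

# Band-pass to the low-frequency region relevant for Newtonian noise (assumed 2-10 Hz).
sos = butter(4, [2.0, 10.0], btype="bandpass", fs=fs, output="sos")
band = sosfiltfilt(sos, ground)

window_s = 30.0                                 # ~signal lifetime of an IMBH coalescence
n = int(window_s * fs)
n_win = band.size // n
rms = np.sqrt((band[: n_win * n].reshape(n_win, n) ** 2).mean(axis=1))

threshold = np.percentile(rms, 90)              # placeholder noise budget
p_quiet = np.mean(rms < threshold)
print(f"fraction of {window_s:.0f}-s windows below threshold: {p_quiet:.2f}")
```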

2020 · Vol 499 (4) · pp. 5641-5652
Author(s): Georgios Vernardos, Grigorios Tsagkatakis, Yannis Pantazis

Abstract: Gravitational lensing is a powerful tool for constraining substructure in the mass distribution of galaxies, be it from the presence of dark matter sub-haloes or due to physical mechanisms affecting the baryons throughout galaxy evolution. Such substructure is hard to model and is either ignored by traditional, smooth modelling approaches, or treated as well-localized massive perturbers. In this work, we propose a deep learning approach to quantify the statistical properties of such perturbations directly from images, where only the extended lensed source features within a mask are considered, without the need for any lens modelling. Our training data consist of mock lensed images assuming perturbing Gaussian Random Fields permeating the smooth overall lens potential, and, for the first time, using images of real galaxies as the lensed source. We employ a novel deep neural network that can handle arbitrary uncertainty intervals associated with the training data set labels as input, provides probability distributions as output, and adopts a composite loss function. The method succeeds not only in accurately estimating the actual parameter values, but also reduces the predicted confidence intervals by 10 per cent in an unsupervised manner, i.e. without having access to the actual ground truth values. Our results are invariant to the inherent degeneracy between mass perturbations in the lens and complex brightness profiles for the source. Hence, we can quantitatively and robustly quantify the smoothness of the mass density of thousands of lenses, including confidence intervals, and provide a consistent ranking for follow-up science.
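A minimal sketch of the "probability distributions as output" idea, assuming a heteroscedastic Gaussian head and a negative log-likelihood loss; this is not the authors' architecture, composite loss, or label-uncertainty handling.

```python
# Toy sketch: a small CNN predicting a mean and standard deviation for one
# perturbation-field parameter, trained with a Gaussian NLL. All shapes and
# data are placeholders.
import torch
import torch.nn as nn

class DistributionRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32, 2)        # predicts (mu, raw_sigma)

    def forward(self, x):
        mu, raw_sigma = self.head(self.features(x)).unbind(dim=-1)
        sigma = torch.nn.functional.softplus(raw_sigma) + 1e-4
        return mu, sigma

model = DistributionRegressor()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Toy batch of masked lensed images and parameter labels.
x = torch.randn(8, 1, 64, 64)
y = torch.randn(8)

mu, sigma = model(x)
loss = -torch.distributions.Normal(mu, sigma).log_prob(y).mean()
opt.zero_grad(); loss.backward(); opt.step()
```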


Author(s): Valentin Raileanu

The article briefly describes the history and fields of application of extreme value theory, including climatology. The data format, the Generalized Extreme Value (GEV) probability distributions with Block Maxima, the Generalized Pareto (GP) distributions with Peaks Over Threshold (POT), and the analysis methods are presented. The distribution parameters are estimated using the Maximum Likelihood Estimation (MLE) method. The installation of the free R software, the minimum set of required commands, and the GUI of the in2extRemes graphical package are described. As an example, the results of a GEV analysis of a simulated data set in in2extRemes are presented.
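The article works in R with in2extRemes; the same block-maxima GEV fit via maximum likelihood can be illustrated in Python with SciPy, as in the hedged sketch below (the simulated data and block size are assumptions).

```python
# Block-maxima GEV fit by MLE on simulated data, plus a 100-year return level.
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(1)
daily = rng.gumbel(loc=20.0, scale=5.0, size=50 * 365)   # 50 "years" of daily values
block_maxima = daily.reshape(50, 365).max(axis=1)        # annual (block) maxima

# SciPy's shape parameter c corresponds to -xi in the usual climatological convention.
c, loc, scale = genextreme.fit(block_maxima)
print(f"xi = {-c:.3f}, mu = {loc:.2f}, sigma = {scale:.2f}")

# 100-year return level: the quantile exceeded with probability 1/100 per block.
print("100-year return level:", genextreme.ppf(1 - 1 / 100, c, loc=loc, scale=scale))
```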


2014
Author(s): Andreas Tuerk, Gregor Wiktorin, Serhat Güler

Quantification of RNA transcripts with RNA-Seq is inaccurate due to positional fragment bias, which is not represented appropriately by current statistical models of RNA-Seq data. This article introduces the Mix2 (read "mixquare") model, which uses a mixture of probability distributions to model the transcript-specific positional fragment bias. The parameters of the Mix2 model can be efficiently trained with the Expectation Maximization (EM) algorithm, resulting in simultaneous estimates of the transcript abundances and transcript-specific positional biases. Experiments are conducted on synthetic data and the Universal Human Reference (UHR) and Brain (HBR) samples from the Microarray Quality Control (MAQC) data set. Comparing the correlation between qPCR and FPKM values to the state-of-the-art methods Cufflinks and PennSeq, we obtain an increase in R2 value from 0.44 to 0.6 and from 0.34 to 0.54. In the detection of differential expression between UHR and HBR, the true positive rate increases from 0.44 to 0.71 at a false positive rate of 0.1. Finally, the Mix2 model is used to investigate biases present in the MAQC data. This reveals 5 dominant biases which deviate from the common assumption of a uniform fragment distribution. The Mix2 software is available at http://www.lexogen.com/fileadmin/uploads/bioinfo/mix2model.tgz.
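The Mix2 model itself is not reproduced here; the sketch below only illustrates the underlying idea of fitting a mixture of distributions to relative fragment positions with EM (via scikit-learn), on simulated data with an artificial 3' bias.

```python
# EM-fitted mixture over simulated relative fragment start positions
# (0 = 5' end, 1 = 3' end). Purely illustrative of the mixture/EM idea.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
pos = np.concatenate([rng.beta(2, 5, 5000), rng.beta(8, 2, 3000)]).reshape(-1, 1)

gmm = GaussianMixture(n_components=3, random_state=0).fit(pos)
for w, m, v in zip(gmm.weights_, gmm.means_.ravel(), gmm.covariances_.ravel()):
    print(f"weight={w:.2f}  mean={m:.2f}  sd={np.sqrt(v):.2f}")
```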


Data in Brief · 2019 · Vol 27 · pp. 104753
Author(s): Guillermo Valencia Ochoa, José Núñez Alvarez, Marley Vanegas Chamorro

2018 · Vol 618 · pp. L4
Author(s): A. Mirhosseini, M. Moniez

Aims. The microlensing surveys MACHO, EROS, MOA, and OGLE (hereafter called MEMO) have searched for microlensing toward the Large Magellanic Cloud for a cumulative duration of 27 years. We study the potential of joining these databases to search for very massive objects, which produce microlensing events with a duration of several years. Methods. We identified the overlaps between the different catalogs and compiled their time coverage to identify common regions where a joint microlensing detection algorithm can operate. We extrapolated a conservative global microlensing detection efficiency based on simple hypotheses and estimated detection rates for multi-year events. Results. Compared with the individual survey searches, we show that a combined search for long-timescale microlensing should detect about ten more events caused by 100 M⊙ black holes if these objects have a major contribution to the Milky Way halo. Conclusions. Assuming that a common analysis is feasible, meaning that the difficulties that arise from using different passbands can be overcome, we show that the sensitivity of such an analysis might enable us to quantify the Galactic black hole component.
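A back-of-the-envelope sketch (not taken from the paper) of why 100 M⊙ lenses give multi-year events: the Einstein-radius crossing time for assumed typical lens distance, LMC source distance, and transverse velocity.

```python
# Einstein-radius crossing time t_E for a 100 M_sun halo lens toward the LMC.
# All distances and velocities are assumed typical values.
import numpy as np
from astropy import units as u, constants as const

M = 100 * u.Msun                 # lens mass
D_S = 50 * u.kpc                 # LMC source distance
x = 0.5                          # D_L / D_S
v_T = 200 * u.km / u.s           # lens transverse velocity

R_E = np.sqrt(4 * const.G * M * D_S * x * (1 - x) / const.c**2)
t_E = (R_E / v_T).to(u.yr)
print(f"R_E = {R_E.to(u.au):.1f},  t_E = {t_E:.2f}")
```

With these values t_E comes out at roughly two and a half years, consistent with the several-year event durations quoted in the abstract.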


2019 · Vol 3 · pp. 11-20
Author(s): Binod Kumar Sah, A. Mishra

Background: The exponential and the Lindley (1958) distributions occupy central places among the class of continuous probability distributions and play important roles in statistical theory. A Generalised Exponential-Lindley Distribution (GELD) was given by Mishra and Sah (2015), of which both the exponential and the Lindley distributions are particular cases. Mixtures of distributions form an important class of distributions in the domain of probability distributions. A mixture distribution arises when some or all of the parameters in a probability function vary according to a certain probability law. In this paper, a Generalised Exponential-Lindley Mixture of Poisson Distribution (GELMPD) has been obtained by mixing the Poisson distribution with the GELD. Materials and Methods: It is based on the concept of generalisations of some continuous mixtures of the Poisson distribution. Results: The probability mass function of the GELMPD has been obtained by mixing the Poisson distribution with the GELD. The first four moments about the origin of this distribution have been obtained. The estimation of its parameters has been discussed using the method of moments and also the method of maximum likelihood. The distribution has been fitted to a number of discrete data sets which are negative binomial in nature, and it has been observed that it gives a better fit than the Poisson-Lindley Distribution (PLD) of Sankaran (1970). Conclusion: The p-value of the GELMPD is found to be greater than that of the PLD. Hence, it is expected to be a better alternative to the PLD of Sankaran for similar discrete data sets which are negative binomial in nature.
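The GELD density is not reproduced here; the sketch below instead uses its special case, the Lindley distribution, to show how a mixed-Poisson pmf arises by integrating the Poisson pmf against a mixing density, checked against Sankaran's closed-form Poisson-Lindley pmf.

```python
# Mixed-Poisson pmf by numerical integration over a Lindley mixing density,
# compared with the closed-form Poisson-Lindley pmf of Sankaran (1970).
import numpy as np
from scipy.integrate import quad
from scipy.stats import poisson

theta = 1.5

def lindley_pdf(x, th=theta):
    return th**2 / (th + 1) * (1 + x) * np.exp(-th * x)

def mixed_poisson_pmf(k, th=theta):
    # P(X = k) = integral_0^inf Poisson(k | lam) * Lindley(lam) d lam
    val, _ = quad(lambda lam: poisson.pmf(k, lam) * lindley_pdf(lam, th), 0, np.inf)
    return val

def poisson_lindley_pmf(k, th=theta):
    # Sankaran (1970) closed form
    return th**2 * (k + th + 2) / (th + 1) ** (k + 3)

for k in range(5):
    print(k, round(mixed_poisson_pmf(k), 6), round(poisson_lindley_pmf(k), 6))
```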


Hydrology · 2019 · Vol 6 (4) · pp. 89
Author(s): De Luca, Galasso

In this work, the authors investigated the feasibility of calibrating a model suitable for the generation of continuous high-resolution rainfall series by using only data from annual maximum rainfall (AMR) series, which are usually longer than continuous high-resolution records, or are the only available data set for many locations. In detail, the basic version of the Neyman–Scott Rectangular Pulses (NSRP) model was considered, and numerical experiments were carried out in order to analyze which parameters most influence the extreme value frequency distributions, and whether the reproduction of heavy rainfall can be improved with respect to the usual calibration with continuous data. The obtained results were highly promising, as the authors found acceptable relationships between the extreme value distributions and the statistical properties of pulse intensity and duration. Moreover, the proposed procedure is flexible, and it is applicable to a generic rainfall generator in which the probability distributions and shape of the pulses, and the extreme value distributions, can assume any mathematical expression.
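To make the structure of the basic NSRP generator concrete, here is a hedged simulation sketch: Poisson storm origins, a random number of cells per storm with exponentially distributed lags, and rectangular pulses with exponential duration and intensity. The parameter values are arbitrary placeholders, not a calibration.

```python
# Basic NSRP-style rainfall simulation, aggregated to an hourly series.
import numpy as np

rng = np.random.default_rng(3)

lam  = 0.01      # storm arrival rate [1/h]
mu_c = 5.0       # mean number of cells per storm
beta = 0.5       # 1/mean cell lag after storm origin [1/h]
eta  = 1.0       # 1/mean cell duration [1/h]
mu_x = 2.0       # mean cell intensity [mm/h]
T    = 24 * 365  # simulated length [h]

# Storm origins from a homogeneous Poisson process.
n_storms = rng.poisson(lam * T)
storm_t = rng.uniform(0, T, n_storms)

pulses = []      # (start, end, intensity)
for t0 in storm_t:
    for _ in range(1 + rng.poisson(mu_c - 1)):     # at least one cell per storm
        start = t0 + rng.exponential(1 / beta)
        pulses.append((start, start + rng.exponential(1 / eta), rng.exponential(mu_x)))

# Aggregate overlapping rectangular pulses to hourly rainfall depths.
rain = np.zeros(T)
for s, e, x in pulses:
    for h in range(max(int(np.floor(s)), 0), min(int(np.ceil(e)), T)):
        overlap = min(e, h + 1) - max(s, h)
        if overlap > 0:
            rain[h] += x * overlap

print(f"mean hourly rainfall: {rain.mean():.3f} mm, max hourly depth: {rain.max():.1f} mm")
```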


Radiocarbon · 2012 · Vol 54 (2) · pp. 239-265
Author(s): Robert Z Selden

The East Texas Radiocarbon Database contributes to an analysis of tempo and place for Woodland era (∼500 BC–AD 800) archaeological sites within the region. The temporal and spatial distributions of calibrated 14C ages (n = 127) with a standard deviation (ΔT) of 61 from archaeological sites with Woodland components (n = 51) are useful in exploring the development and geographical continuity of the peoples in east Texas, and lead to a refinement of our current chronological understanding of the period. While analysis of summed probability distributions (SPDs) produces less than significant findings due to sample size, they are used here to illustrate the method of date combination prior to the production of site- and period-specific SPDs. Through the incorporation of this method, the number of 14C dates is reduced to 85 with a ΔT of 54. The resultant data set is then subjected to statistical analyses that conclude with the separation of the east Texas Woodland period into the Early Woodland (∼500 BC–AD 0), Middle Woodland (∼AD 0–400), and Late Woodland (∼AD 400–800) periods.
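A much simplified sketch of date combination and SPD construction: real analyses calibrate against IntCal (e.g. with OxCal or the R rcarbon package); here each date is approximated by a normal density on the calendar axis purely to illustrate the mechanics, and the dates are invented.

```python
# Ward & Wilson style pooling of replicate ages, followed by a toy SPD.
import numpy as np

def combine(dates):
    """Pool replicate (age BP, 1-sigma) measurements of one event."""
    ages = np.array([d[0] for d in dates], dtype=float)
    sigmas = np.array([d[1] for d in dates], dtype=float)
    w = 1 / sigmas**2
    return (ages * w).sum() / w.sum(), 1 / np.sqrt(w.sum())

# Hypothetical replicate measurements from two components.
site_a = [(1850, 60), (1820, 70)]
site_b = [(1450, 50), (1480, 55), (1500, 60)]
pooled = [combine(site_a), combine(site_b)]

cal = np.arange(1000, 2500)                 # calendar axis (yr BP), placeholder
spd = np.zeros_like(cal, dtype=float)
for mu, sd in pooled:
    spd += np.exp(-0.5 * ((cal - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))
spd /= spd.sum()

print("SPD peak at", cal[spd.argmax()], "yr BP")
```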


Aerospace · 2018 · Vol 5 (4) · pp. 109
Author(s): Michael Schultz, Sandro Lorenz, Reinhard Schmitz, Luis Delgado

Weather events have a significant impact on airport performance and cause delayed operations if the airport capacity is constrained. We provide a quantification of individual airport performance with regard to an aggregated weather-performance metric. Specific weather phenomena are categorized by the air traffic management airport performance weather algorithm, which aims to quantify weather conditions at airports based on aviation routine meteorological reports. Our results are computed from a data set of 20.5 million European flights of 2013 and local weather data. A methodology is presented to evaluate the impact of weather events on airport performance and to select the appropriate threshold for significant weather conditions. To provide an efficient method to capture the impact of weather, we modelled departure and arrival delays with probability distributions, which depend on airport size and meteorological impacts. These derived airport performance scores could be used in comprehensive air traffic network simulations to evaluate the network impact caused by weather-induced local performance deterioration.
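The specific delay distributions used in the paper are not restated here; the sketch below only shows the generic step of fitting and comparing candidate probability distributions on (simulated) positive delay samples.

```python
# Fit and compare candidate delay distributions on simulated delay minutes.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
delays = rng.gamma(shape=1.8, scale=12.0, size=2000)   # stand-in delay minutes

candidates = {"gamma": stats.gamma, "lognorm": stats.lognorm, "expon": stats.expon}
for name, dist in candidates.items():
    params = dist.fit(delays, floc=0)                  # fix location at zero
    ll = dist.logpdf(delays, *params).sum()
    aic = 2 * len(params) - 2 * ll                     # rough AIC for comparison
    print(f"{name:8s} AIC = {aic:.1f}")
```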


Proceedings · 2019 · Vol 33 (1) · pp. 14
Author(s): Martino Trassinelli

We present here Nested_fit, a Bayesian data analysis code developed for investigations of atomic spectra and other physical data. It is based on the nested sampling algorithm, with the implementation of an upgraded lawn mower robot method for finding new live points. For a given data set and a chosen model, the program provides the Bayesian evidence, for the comparison of different hypotheses/models, and the probability distributions of the different parameters. A large database of spectral profiles is already available (Gaussian, Lorentz, Voigt, log-normal, etc.) and additional ones can easily be added. It is written in Fortran, for optimized parallel computation, and is accompanied by a Python library for visualization of the results.
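For readers unfamiliar with the algorithm, here is a minimal textbook-style nested sampling sketch for a 1-D toy problem, showing how the Bayesian evidence is accumulated from the live points. It is not the Nested_fit code and does not implement its lawn mower robot search for new live points; plain rejection sampling from the prior is used instead.

```python
# Toy nested sampling: Gaussian likelihood, uniform prior on [0, 1].
import numpy as np

rng = np.random.default_rng(5)

def loglike(x, mu=0.5, sigma=0.05):
    return -0.5 * ((x - mu) / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi))

n_live, n_iter = 200, 1200
live_x = rng.uniform(0, 1, n_live)          # draws from the uniform prior
live_l = loglike(live_x)

Z, X_prev = 0.0, 1.0
for i in range(1, n_iter + 1):
    worst = np.argmin(live_l)
    X_i = np.exp(-i / n_live)               # expected prior-volume shrinkage
    Z += np.exp(live_l[worst]) * (X_prev - X_i)
    X_prev = X_i
    # Replace the worst live point by a new prior draw with higher likelihood
    # (simple rejection step; real codes use smarter proposals).
    threshold = live_l[worst]
    while True:
        x_new = rng.uniform(0, 1)
        if loglike(x_new) > threshold:
            live_x[worst], live_l[worst] = x_new, loglike(x_new)
            break

Z += np.exp(live_l).mean() * X_prev         # contribution of the remaining live points
print(f"evidence Z ~ {Z:.3f} (analytic value is ~1 for this setup)")
```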

