underlying distribution
Recently Published Documents


TOTAL DOCUMENTS: 318 (five years: 116)
H-INDEX: 27 (five years: 6)

2022, Vol 924 (2), pp. 93
Author(s): J. Andrew Casey-Clyde, Chiara M. F. Mingarelli, Jenny E. Greene, Kris Pardo, Morgan Nañez, ...

Abstract The nanohertz gravitational wave background (GWB) is believed to be dominated by GW emission from supermassive black hole binaries (SMBHBs). Observations of several dual-active galactic nuclei (AGN) strongly suggest a link between AGN and SMBHBs, given that these dual-AGN systems will eventually form bound binary pairs. Here we develop an exploratory SMBHB population model based on empirically constrained quasar populations, allowing us to decompose the GWB amplitude into an underlying distribution of SMBH masses, SMBHB number density, and volume enclosing the GWB. Our approach also allows us to self-consistently predict the number of local SMBHB systems from the GWB amplitude. Interestingly, we find the local number density of SMBHBs implied by the common-process signal in the NANOGrav 12.5-yr data set to be roughly five times larger than previously predicted by other models. We also find that at most ∼25% of SMBHBs can be associated with quasars. Furthermore, our quasar-based approach predicts ≳95% of the GWB signal comes from z ≲ 2.5, and that SMBHBs contributing to the GWB have masses ≳10⁸ M⊙. We also explore how different empirical galaxy–black hole scaling relations affect the local number density of GW sources, and find that relations predicting more massive black holes decrease the local number density of SMBHBs. Overall, our results point to the important role that a measurement of the GWB will play in directly constraining the cosmic population of SMBHBs, as well as their connections to quasars and galaxy mergers.
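For context, this kind of decomposition builds on the standard characteristic-strain integral for a circular, GW-driven SMBHB population (Phinney 2001; Sesana et al. 2008). The sketch below is that textbook form, written so the amplitude separates into a chirp-mass distribution, a number density, and the redshift (volume) range enclosing the signal; it is not necessarily the paper's exact parameterization.

```latex
% Characteristic strain of the GWB from circular, GW-driven SMBHBs, with
% \mathcal{M} the chirp mass and d^2 n / (dz d\mathcal{M}) the comoving
% number density of binaries per unit redshift and chirp mass:
h_c^2(f) = \frac{4\,G^{5/3}}{3\,\pi^{1/3} c^2}\, f^{-4/3}
  \int \mathrm{d}z \int \mathrm{d}\mathcal{M}\;
  \frac{\mathrm{d}^2 n}{\mathrm{d}z\,\mathrm{d}\mathcal{M}}\,
  (1+z)^{-1/3}\, \mathcal{M}^{5/3},
\qquad
h_c(f) = A_{\rm GWB}\left(\frac{f}{\mathrm{yr}^{-1}}\right)^{-2/3}.
```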


Universe, 2021, Vol 8 (1), pp. 19
Author(s): Giulia Cusin, Ruth Durrer, Irina Dvorkin

In this paper, we studied the gravitational lensing of gravitational wave events. The probability that an observed gravitational wave source has been (de-)amplified by a given amount is a detector-dependent quantity that depends on several ingredients: the lens distribution, the underlying distribution of sources, and the detector sensitivity. The main objective of the present work was to introduce a semi-analytic approach to study the distribution of the magnification of a given source population observed with a given detector. The advantage of this approach is that each ingredient can be individually varied and tested. We computed the expected magnification both as a function of redshift and as a function of the observed source luminosity distance, which is the only quantity one can access via observation in the absence of an electromagnetic counterpart. As a case study, we then focused on the LIGO/Virgo network and on strong lensing (μ>1).
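The quantities being convolved can be made concrete. Lensing rescales the GW amplitude by √μ, so the luminosity distance inferred from the waveform is the true one divided by √μ, and the magnification distribution of detected events folds together the lensing probability, the source population, and the detection probability. The following is only a schematic of that convolution, not the authors' exact expressions:

```latex
% Inferred luminosity distance of a lensed GW source, and a schematic
% magnification distribution for detected events (rho is the optimal SNR):
d_L^{\rm obs}(z,\mu) = \frac{d_L(z)}{\sqrt{\mu}},
\qquad
p_{\rm det}(\mu) \propto \int \mathrm{d}z\; p(\mu \mid z)\,
  \frac{\mathrm{d}N}{\mathrm{d}z}\,
  P_{\rm det}\!\big(\rho(z,\mu)\big),
\qquad
\rho(z,\mu) \propto \frac{\sqrt{\mu}}{d_L(z)}.
```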


Author(s): Alexandros Christos Chasoglou, Panagiotis Tsirikoglou, Anestis I. Kalfas, Reza S. Abhari

Abstract In the present study, an adaptive randomized quasi-Monte Carlo methodology is presented, combining Stein's two-stage adaptive scheme and low-discrepancy Sobol sequences. The method is used for the propagation and calculation of uncertainties related to aerodynamic pneumatic probes and high-frequency fast-response aerodynamic probes (FRAP). The proposed methodology allows the fast and accurate, in a probabilistic sense, calculation of uncertainties, ensuring that the total number of Monte Carlo (MC) trials is kept low for the desired numerical accuracy. The method is therefore well suited to aerodynamic pressure probes, where multiple points are evaluated across their calibration space. Complete and detailed measurement models are presented for both a pneumatic probe and FRAP. The models are segregated into sub-problems, allowing the evaluation and inspection of intermediate steps of the MC in a transparent manner and enabling the calculation of the relative contributions of the elemental uncertainties to the measured quantities. Various commonly used sampling techniques for MC simulation and different adaptive MC schemes are compared, using both theoretical toy distributions and actual examples from aerodynamic probes' measurement models. The robustness of Stein's two-stage scheme is demonstrated even in cases where significant deviation from normality is observed in the underlying distribution of the MC output. With regard to FRAP, two issues related to piezo-resistive sensors are addressed, namely temperature-dependent pressure hysteresis and temporal sensor drift, and their uncertainties are accounted for in the measurement model. These effects are the dominant factors affecting all flow quantities' uncertainties, with a significance that varies mainly with Mach number and operating temperature. This work highlights the need to construct accurate and detailed measurement models for aerodynamic probes; otherwise the final uncertainties are significantly underestimated, in most cases by more than 50%.
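To make the two-stage idea concrete, the sketch below combines a scrambled Sobol (randomized QMC) sampler with Stein's two-stage rule: a pilot run estimates the output spread, which then fixes the total number of trials needed for a requested confidence half-width on the MC mean. The pressure-ratio measurement model, the input uncertainties, and the tolerance are illustrative assumptions, not the probe models from the paper.

```python
# Stein-style two-stage adaptive Monte Carlo with scrambled Sobol sampling.
import numpy as np
from scipy.stats import qmc, norm, t


def measurement_model(p_total, p_static):
    """Hypothetical model: Mach number from total and static pressure (isentropic flow)."""
    return np.sqrt(5.0 * ((p_total / p_static) ** (2.0 / 7.0) - 1.0))


def two_stage_qmc(model, means, sigmas, half_width, confidence=0.95, n_pilot=64, seed=0):
    """Pilot run fixes the total number of trials so the confidence interval on the
    MC mean has the requested half-width (Stein's two-stage scheme)."""
    sobol = qmc.Sobol(d=len(means), scramble=True, seed=seed)

    def run(n):
        u = sobol.random(n)                    # scrambled Sobol points in [0, 1)^d
        x = norm.ppf(u) * sigmas + means       # map to (assumed) Gaussian inputs
        return model(*x.T)

    pilot = run(n_pilot)
    s = pilot.std(ddof=1)
    t_crit = t.ppf(0.5 + confidence / 2.0, df=n_pilot - 1)
    n_total = max(n_pilot, int(np.ceil((t_crit * s / half_width) ** 2)))
    samples = np.concatenate([pilot, run(n_total - n_pilot)]) if n_total > n_pilot else pilot
    return samples.mean(), samples.std(ddof=1), n_total


# Example: propagate 50 Pa (1-sigma) pressure uncertainties through the model.
mach, mach_std, n_used = two_stage_qmc(
    measurement_model,
    means=np.array([110e3, 100e3]),   # total and static pressure, Pa
    sigmas=np.array([50.0, 50.0]),
    half_width=1e-4)
print(f"Mach = {mach:.4f} +/- {mach_std:.4f}  ({n_used} trials)")
```

The appeal of the two-stage rule is that the number of trials adapts to the actual spread of the model output at each calibration point, rather than being fixed in advance.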


2021, Vol 922 (2), pp. L24
Author(s): Thomas Connor, Daniel Stern, Eduardo Bañados, Chiara Mazzucchelli

Abstract The z = 6.327 quasar SDSS J010013.02+280225.8 (hereafter J0100+2802) is believed to be powered by a black hole more massive than 10¹⁰ M⊙, making it the most massive black hole known in the first billion years of the universe. However, recent high-resolution ALMA imaging shows four structures at the location of this quasar, potentially implying that it is lensed with a magnification of μ ∼ 450 and thus that its black hole is significantly less massive. Furthermore, for the underlying distribution of magnifications of z ≳ 6 quasars to produce such an extreme value, theoretical models predict that a larger number of quasars in this epoch should be lensed, implying further overestimates of early black hole masses. To provide an independent constraint on the possibility that J0100+2802 is lensed, we reanalyzed archival XMM-Newton observations of the quasar and compared the expected ratios of X-ray luminosity to rest-frame UV and IR luminosities. In both cases, J0100+2802's X-ray flux is consistent with the no-lensing scenario; while this could be explained by J0100+2802 being X-ray faint, we find it does not have the X-ray or optical spectral features expected of an X-ray faint quasar. Finally, we compared the overall distribution of X-ray fluxes for known, typical z ≳ 6 quasars. We find a 3σ tension between the observed and predicted X-ray-to-UV flux ratios when adopting the magnification probability distribution required to produce a μ = 450 quasar.
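The X-ray-to-UV comparison works because lensing magnifies all bands equally, so the observed flux ratio is unchanged, while the ratio expected from empirical relations between X-ray output and UV luminosity shifts once the intrinsic luminosity is reduced by μ. That ratio is conventionally expressed through the optical-to-X-ray spectral index; the definition below is the standard one from the literature, not a quantity specific to this paper:

```latex
% Standard optical-to-X-ray spectral index, built from the monochromatic
% luminosities at rest-frame 2 keV and 2500 Angstrom:
\alpha_{\rm ox} \equiv 0.3838\,\log_{10}\!
  \left(\frac{L_{2\,\mathrm{keV}}}{L_{2500\,\text{\AA}}}\right).
```

Since α_ox is observed to decrease with increasing UV luminosity, an intrinsically much fainter (strongly demagnified) quasar would be expected to be relatively X-ray brighter than an unlensed one of the same apparent brightness, which is the lever arm exploited here.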


Author(s): Dawid Gondek, Rebecca E. Lacey, Dawid G. Blanchflower, Praveetha Patalay

Abstract Aims: The main objective of this study was to investigate the distributional shifts underlying observed age and cohort differences in mean levels of psychological distress in the 1958 and 1970 British birth cohorts. Methods: This study used data from the 1958 and 1970 British birth cohorts (n = 24,707). Psychological distress was measured by the Malaise Inventory at ages 23, 33, 42 and 50 in the 1958 cohort and 26, 34, 42 and 46–48 in the 1970 cohort. Results: The shifts in the distribution across age appear to be mainly due to a changing proportion of those with moderate symptoms, except in midlife (ages 42–50), when we observed a polarisation in distress: an increase in the proportions of people with no symptoms and with multiple symptoms. The elevated levels of distress in the 1970 cohort, compared with the 1958 cohort, appeared to be due to an increase in the proportion of individuals with both moderate and high symptoms. For instance, at age 33/34, 42.3% of the 1970 cohort endorsed at least two symptoms vs 24.7% of the 1958 cohort, resulting in a shift of the entire distribution of distress towards the more severe end of the spectrum. Conclusions: Our study demonstrates the importance of studying not only mean levels of distress over time, but also the underlying shifts in its distribution. Due to the large dispersion of distress scores at any given measurement occasion, understanding the underlying distribution provides a more complete picture of population trends.


Author(s): Sean M. S. Hayes, Jeffrey R. Sachs, Carolyn R. Cho

Abstract Network inference is a valuable approach for gaining mechanistic insight from high-dimensional biological data. Existing methods for network inference focus on ranking all possible relations (edges) among all measured quantities (features), such as genes, proteins and metabolites, which yields a dense network that is challenging to interpret. Identifying a sparse, interpretable network with these methods thus requires an error-prone thresholding step, which compromises their performance. In this article we propose a new method, DEKER-NET, that addresses this limitation by directly identifying a sparse, interpretable network without thresholding, improving real-world performance. DEKER-NET uses a novel machine learning method for feature selection in an iterative framework for network inference. DEKER-NET is extremely flexible, handling linear and nonlinear relations while making no assumptions about the underlying distribution of data, and is suitable for categorical or continuous variables. We test our method on the Dialogue for Reverse Engineering Assessments and Methods (DREAM) challenge data, demonstrating that it can directly identify sparse, interpretable networks without thresholding while maintaining performance comparable to the hypothetical best-case thresholded network of other methods.
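The general pattern of building a network by per-feature selection, rather than by thresholding a dense edge ranking, can be sketched as below. The selector used here (cross-validated Lasso) is only a stand-in placeholder; it is linear and distribution-dependent, unlike DEKER-NET's own selection method, which is not reproduced here.

```python
# Generic per-feature network inference by feature selection: each feature is
# regressed on the remaining ones, and the predictors the selector keeps become
# its incoming edges. No global threshold on a dense edge ranking is needed.
import numpy as np
from sklearn.linear_model import LassoCV


def infer_network(data, feature_names):
    """Return a sparse edge list {target: [selected predictors]}."""
    edges = {}
    for j, target in enumerate(feature_names):
        predictors = np.delete(data, j, axis=1)
        other_names = [name for k, name in enumerate(feature_names) if k != j]
        model = LassoCV(cv=5).fit(predictors, data[:, j])
        edges[target] = [other_names[k] for k in np.flatnonzero(model.coef_)]
    return edges


# Toy example: y depends on x, z is independent noise.
rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = 2.0 * x + rng.normal(scale=0.1, size=200)
z = rng.normal(size=200)
print(infer_network(np.column_stack([x, y, z]), ["x", "y", "z"]))
```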


Author(s):  
Nuri Celik

The arcsine distribution is a very important tool in the statistics literature, especially in studies of Brownian motion. However, modelling real data sets is complicated and difficult, even when the potential underlying distribution is pre-defined. For this reason, some flexibility in the underlying distribution is desirable. In this study, we propose a new distribution obtained by applying Azzalini's skewness procedure to the arcsine distribution. The main characteristics of the proposed distribution are derived both theoretically and through a simulation study.
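As an illustration, the sketch below applies the standard Azzalini construction g(x; λ) = 2 f(x) F(λx) to the symmetric arcsine density on (−1, 1); the paper's exact parameterization may differ, so the form of the skewing is an assumption here.

```python
# Azzalini-type skewing of the arcsine distribution on (-1, 1), assuming the
# standard construction g(x; lam) = 2 * f(x) * F(lam * x).
import numpy as np


def arcsine_pdf(x):
    """Symmetric arcsine density f(x) = 1 / (pi * sqrt(1 - x^2)) on (-1, 1)."""
    x = np.asarray(x, dtype=float)
    out = np.zeros_like(x)
    inside = np.abs(x) < 1.0
    out[inside] = 1.0 / (np.pi * np.sqrt(1.0 - x[inside] ** 2))
    return out


def arcsine_cdf(x):
    """Arcsine CDF F(x) = 1/2 + arcsin(x) / pi, clipped to the support."""
    return 0.5 + np.arcsin(np.clip(x, -1.0, 1.0)) / np.pi


def skew_arcsine_pdf(x, lam):
    """Skewed density 2 f(x) F(lam x); lam = 0 recovers the symmetric arcsine."""
    x = np.asarray(x, dtype=float)
    return 2.0 * arcsine_pdf(x) * arcsine_cdf(lam * x)


# lam > 0 shifts probability mass toward +1; the density still integrates to 1.
xs = np.linspace(-0.99, 0.99, 5)
print(skew_arcsine_pdf(xs, lam=2.0))
```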


2021
Author(s): Sione Paea

Coal pyrolysis is a complex process involving a large number of chemical reactions. The most accurate and up-to-date approach to modeling coal pyrolysis is to adopt the Distributed Activation Energy Model (DAEM), in which the reactions are assumed to consist of a set of irreversible first-order reactions that have different activation energies and a constant frequency factor. The differences in the activation energies have usually been represented by a Gaussian distribution. This thesis first compares the Simple First Order Reaction Model (SFOR) with the Distributed Activation Energy Model (DAEM), to explore why the DAEM may be a more appropriate approach to modeling coal pyrolysis. The second part of the thesis uses the inverse-problem approach together with a smoothing function (iterative method) to provide an improved estimate of the underlying distribution in the wide-distribution case of the DAEM. The present method significantly reduces the error due to differencing and smooths out the chopped-off parts of the underlying distribution curve.
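For reference, the forward DAEM described above can be evaluated numerically as a double integral over temperature (time) and activation energy. The sketch below uses a constant heating rate and illustrative parameter values, which are assumptions rather than values from the thesis.

```python
# Forward DAEM: a distribution of irreversible first-order reactions with a single
# frequency factor k0 and a Gaussian distribution f(E) of activation energies.
import numpy as np
from scipy.integrate import trapezoid

R = 8.314                  # gas constant, J/(mol K)
k0 = 1.0e13                # constant frequency factor, 1/s
E0, sigma = 220e3, 25e3    # mean and spread of the Gaussian f(E), J/mol
beta = 10.0 / 60.0         # constant heating rate, K/s (10 K/min)


def unreacted_fraction(T_end, T_start=300.0, n_T=2000, n_E=400):
    """1 - V/V*: fraction of volatiles not yet released by temperature T_end."""
    T = np.linspace(T_start, T_end, n_T)
    E = np.linspace(E0 - 5 * sigma, E0 + 5 * sigma, n_E)
    f_E = np.exp(-0.5 * ((E - E0) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))
    # Inner integral over temperature (dt = dT / beta) for each activation energy.
    inner = (k0 / beta) * trapezoid(np.exp(-E[:, None] / (R * T[None, :])), T, axis=1)
    # Outer integral over the activation-energy distribution.
    return trapezoid(np.exp(-inner) * f_E, E)


for T_end in (600.0, 800.0, 1000.0, 1200.0):
    print(f"T = {T_end:6.0f} K   1 - V/V* = {unreacted_fraction(T_end):.3f}")
```

The inverse problem treated in the thesis is the reverse direction: recovering f(E) from measured conversion curves, which is where the differencing error and the smoothing step come in.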


2021, Vol 11 (20), pp. 9644
Author(s): Honorius Gâlmeanu, Răzvan Andonie

Data classification in streams where the underlying distribution changes over time is known to be difficult. This problem—known as concept drift detection—involves two aspects: (i) detecting the concept drift and (ii) adapting the classifier. Online training only considers the most recent samples; they form the so-called shifting window. Dynamic adaptation to concept drift is performed by varying the width of the window. Defining an online Support Vector Machine (SVM) classifier able to cope with concept drift by dynamically changing the window size and avoiding retraining from scratch is currently an open problem. We introduce the Adaptive Incremental–Decremental SVM (AIDSVM), a model that adjusts the shifting window width using the Hoeffding statistical test. We evaluate AIDSVM performance on both synthetic and real-world drift datasets. Experiments show a significant accuracy improvement when encountering concept drift, compared with similar drift detection models defined in the literature. The AIDSVM is efficient, since it is not retrained from scratch after the shifting window slides.
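A minimal sketch of the window-width test described here is given below: the window of recent 0/1 prediction errors is split into an older and a newer half, and the window is shrunk when the two error rates differ by more than a Hoeffding bound allows by chance. This illustrates only the drift test; the incremental/decremental SVM updates of AIDSVM are not reproduced, and the exact statistic used by the authors may differ.

```python
# Hoeffding-test window adaptation: compare the error rates of the older and newer
# halves of the shifting window and drop the older half when the gap exceeds what
# the Hoeffding bound allows by chance.
from collections import deque
import math


def hoeffding_bound(n1, n2, delta=0.05):
    """Largest gap two means of [0, 1]-valued samples are likely to show by chance."""
    return math.sqrt(0.5 * (1.0 / n1 + 1.0 / n2) * math.log(2.0 / delta))


class AdaptiveWindow:
    def __init__(self, max_width=500, min_half=30, delta=0.05):
        self.errors = deque(maxlen=max_width)   # 0/1 errors of the online classifier
        self.min_half = min_half
        self.delta = delta

    def add(self, error):
        """Append a 0/1 prediction error; shrink the window and return True on drift."""
        self.errors.append(float(error))
        half = len(self.errors) // 2
        if half < self.min_half:
            return False
        window = list(self.errors)
        old, new = window[:half], window[half:]
        gap = abs(sum(new) / len(new) - sum(old) / len(old))
        if gap > hoeffding_bound(len(old), len(new), self.delta):
            for _ in range(half):               # drop the stale half of the window
                self.errors.popleft()
            return True
        return False


# Usage: feed the 0/1 errors of any online classifier.
win = AdaptiveWindow()
drifts = [t for t, err in enumerate([0] * 200 + [1] * 100) if win.add(err)]
print(drifts[:1])   # first detection point after the error rate jumps
```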

