The Over-estimation of the Flux Density of Weak Radio Sources

1969 ◽  
Vol 1 (5) ◽  
pp. 233-234
Author(s):  
H. S. Murdoch

The over-estimation of the flux density of radio sources near the lower limit of a survey has often been considered in the past. The use of digital recording and analysis techniques now enables a quantitative approach to the problem. Monte Carlo techniques may be used to determine the error distribution, including any systematic bias.
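The effect described here is now usually called Eddington bias, and it is straightforward to reproduce numerically. The sketch below is a minimal Monte Carlo experiment in the spirit of the abstract, assuming an illustrative power-law source count, Gaussian measurement noise, and a 5-sigma survey limit; all parameter values are placeholders rather than values from the paper.

    import numpy as np

    rng = np.random.default_rng(42)

    # Assumed power-law integral source counts N(>S) ~ S^-1.5:
    # draw true flux densities by inverse-transform sampling.
    s_min, slope = 50.0, 1.5            # mJy; survey parameters are illustrative
    n = 200_000
    true_flux = s_min * rng.uniform(size=n) ** (-1.0 / slope)

    # Add Gaussian measurement noise and apply the survey's detection cut.
    sigma = 20.0                        # mJy r.m.s. noise, assumed
    observed = true_flux + rng.normal(0.0, sigma, size=n)
    limit = 5.0 * sigma                 # an assumed 5-sigma flux-density limit
    detected = observed >= limit

    # Near the limit, upward-scattered weak sources outnumber downward-scattered
    # strong ones, so the mean observed flux exceeds the mean true flux.
    bias = observed[detected] - true_flux[detected]
    print(f"mean over-estimation of detected sources: {bias.mean():.1f} mJy")
    print(f"error-distribution percentiles (16/50/84): "
          f"{np.percentile(bias, [16, 50, 84]).round(1)}")

Because faint sources greatly outnumber bright ones, many more of them scatter upward across the cut than bright sources scatter downward, which is exactly the systematic bias the abstract proposes to quantify.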

2018 ◽  
Vol 616 ◽  
pp. A128 ◽  
Author(s):  
N. Herrera Ruiz ◽  
E. Middelberg ◽  
A. Deller ◽  
V. Smolčić ◽  
R. P. Norris ◽  
...  

We present very long baseline interferometry (VLBI) observations of 179 radio sources in the COSMOS field with extremely high sensitivity, using the Green Bank Telescope (GBT) together with the Very Long Baseline Array (VLBA+GBT) at 1.4 GHz, to explore the faint radio population in the flux density regime of tens of μJy. Here, the identification of active galactic nuclei (AGN) is based on the VLBI detection of the source, meaning that it is independent of X-ray or infrared properties. The milli-arcsecond resolution provided by the VLBI technique implies that the detected sources must be compact and have large brightness temperatures, and therefore they are most likely AGN (when the host galaxy is located at z ≥ 0.1). On the other hand, this technique can only positively identify a radio-active AGN when one is present; in other words, we cannot affirm that there is no AGN when the source is not detected. For this reason, the number of AGN identified using VLBI should always be treated as a lower limit. We present a catalogue containing the 35 radio sources detected with the VLBA+GBT, ten of which were not previously detected using the VLBA alone. We have constructed the radio source counts at 1.4 GHz using the samples of VLBA- and VLBA+GBT-detected sources in the COSMOS field to determine a lower limit for the AGN contribution to the faint radio source population. We found an AGN contribution of >40−75% at flux density levels between 150 μJy and 1 mJy. This flux density range is characterised by the upturn of the Euclidean-normalised radio source counts, which implies the contribution of a new population. This result supports the idea that the sub-mJy radio population contains a significant fraction of radio-emitting AGN, rather than consisting solely of star-forming galaxies, in agreement with previous studies.
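As a rough illustration of how the Euclidean-normalised counts mentioned above are constructed, the following sketch bins a list of flux densities and normalises by S^2.5; the flux values and the survey area are invented placeholders, not the VLBA+GBT catalogue.

    import numpy as np

    # Illustrative inputs only -- not the actual VLBA+GBT catalogue.
    flux_jy = 10 ** np.random.default_rng(1).uniform(-4.0, -3.0, size=35)  # 100 uJy - 1 mJy
    area_sr = 0.0006   # assumed effective survey area in steradians (~2 deg^2)

    # Differential counts in logarithmic flux-density bins.
    edges = np.logspace(-4.0, -3.0, 6)                       # Jy
    counts, _ = np.histogram(flux_jy, bins=edges)
    s_mid = np.sqrt(edges[:-1] * edges[1:])                  # geometric bin centres
    dn_ds = counts / np.diff(edges) / area_sr                # dN/dS per sr per Jy

    # Euclidean normalisation: S^2.5 dN/dS is flat for a static Euclidean universe,
    # so an upturn below ~1 mJy signals a new population, as the abstract discusses.
    euclid = s_mid ** 2.5 * dn_ds
    for s, e in zip(s_mid, euclid):
        print(f"S = {s * 1e6:7.1f} uJy   S^2.5 dN/dS = {e:8.2f} Jy^1.5 sr^-1")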


2002 ◽  
Vol 19 (1) ◽  
pp. 14-18 ◽  
Author(s):  
T. P. Krichbaum ◽  
A. Kraus ◽  
L. Fuhrmann ◽  
G. Cimò ◽  
A. Witzel

We summarise results from flux density monitoring campaigns performed with the 100 m radio telescope at Effelsberg and the VLA during the past 15 years. We briefly discuss some statistical properties of the now more than 40 high-declination sources (δ ≥ 30°) that show intraday variability (IDV). In general, IDV is more pronounced for sources with flat radio spectra and compact VLBI structures. For 0917+62, we present new VLBI images which suggest that the variability pattern is modified by the emergence of new jet components. For 0716+71, we show the first detection of IDV at millimetre wavelengths (32 GHz). For the physical interpretation of the IDV phenomenon, a complex source- and frequency-dependent superposition of interstellar scintillation and source-intrinsic variability should be considered.
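Variability strength in such monitoring campaigns is conventionally quantified by the modulation index m = 100 σ_S / ⟨S⟩. Below is a minimal sketch of that statistic applied to two synthetic light curves; the data and numbers are invented for illustration, not Effelsberg or VLA measurements.

    import numpy as np

    def modulation_index(flux):
        """Standard IDV statistic: m = 100 * rms / mean, in per cent."""
        flux = np.asarray(flux, dtype=float)
        return 100.0 * flux.std(ddof=1) / flux.mean()

    # Synthetic light curves (Jy) standing in for monitoring data.
    rng = np.random.default_rng(0)
    quiet = 1.0 + rng.normal(0.0, 0.005, size=96)                 # calibrator-like
    idv = 1.0 + 0.03 * np.sin(np.linspace(0, 6 * np.pi, 96)) \
              + rng.normal(0.0, 0.005, size=96)                   # IDV-like

    print(f"m(quiet) = {modulation_index(quiet):.2f}%")
    print(f"m(IDV)   = {modulation_index(idv):.2f}%")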


1977 ◽  
Vol 30 (2) ◽  
pp. 231 ◽  
Author(s):  
JG Robertson

Results are given for the second zone of a deep survey made at 408 MHz with the Molonglo cross. The catalogue lists positions and flux densities for 95 sources, none of which has been previously catalogued, in a solid angle of 5.51 × 10⁻³ sr. The right ascensions covered (with some excluded areas) are 18h 26m to 00h 06m, with a range in declination of 45′. The lower limit of flux density is 84 mJy. An upper limit of 1000 mJy has also been imposed. The position uncertainties are typically 12″ at 100 mJy and 6″ at 250 mJy.


1977 ◽  
Vol 30 (2) ◽  
pp. 209 ◽  
Author(s):  
JG Robertson

Results of a deep survey made at 408 MHz with the Molonglo cross are given. The catalogue lists positions and flux densities for a total of 373 radio sources, most of which have not previously been catalogued, in a solid angle of 0.0201 sr. This covers (with some excluded areas) right ascensions 01h 00m to 06h 44m and 13h 45m to 17h 19m, with a range in declination of 41′. Eighteen contour maps are given of sources that are extended or have very close companions. A thorough error analysis is given, as well as new operational definitions of completeness and reliability. The lower limit of flux density is 88 mJy, which is five times the r.m.s. error. An upper limit of 1000 mJy has also been imposed. Typical errors in positions are 15″ at 100 mJy and 6″ at 250 mJy.


Author(s):  
Edward P. Herbst ◽  
Frank Schorfheide

Dynamic stochastic general equilibrium (DSGE) models have become one of the workhorses of modern macroeconomics and are extensively used for academic research as well as forecasting and policy analysis at central banks. This book introduces readers to state-of-the-art computational techniques used in the Bayesian analysis of DSGE models. The book covers Markov chain Monte Carlo techniques for linearized DSGE models, novel sequential Monte Carlo methods that can be used for parameter inference, and the estimation of nonlinear DSGE models based on particle filter approximations of the likelihood function. The theoretical foundations of the algorithms are discussed in depth, and detailed empirical applications and numerical illustrations are provided. The book also gives invaluable advice on how to tailor these algorithms to specific applications and assess the accuracy and reliability of the computations. The book is essential reading for graduate students, academic researchers, and practitioners at policy institutions.
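As a flavour of the MCMC techniques the book covers, here is a minimal random-walk Metropolis sampler on a toy two-parameter posterior; a real DSGE application would replace log_post with a likelihood evaluated via the Kalman filter (for linearized models) or a particle filter. Everything in this sketch is illustrative, not the book's code.

    import numpy as np

    # Toy Gaussian log posterior standing in for a DSGE model's posterior.
    def log_post(theta):
        return -0.5 * np.sum((theta - np.array([0.5, 2.0])) ** 2
                             / np.array([0.04, 0.25]))

    rng = np.random.default_rng(7)
    theta = np.zeros(2)
    step = 0.3          # random-walk scale, tuned for a sensible acceptance rate
    draws, accepted = [], 0
    for _ in range(20_000):
        prop = theta + step * rng.normal(size=2)
        # Metropolis accept/reject using the log acceptance ratio.
        if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
            theta, accepted = prop, accepted + 1
        draws.append(theta)

    draws = np.array(draws[5_000:])     # discard burn-in
    print("posterior mean:", draws.mean(axis=0).round(3))
    print("acceptance rate:", accepted / 20_000)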


2014 ◽  
Vol 6 (1) ◽  
pp. 1006-1015
Author(s):  
Negin Shagholi ◽  
Hassan Ali ◽  
Mahdi Sadeghi ◽  
Arjang Shahvar ◽  
Hoda Darestani ◽  
...  

Medical linear accelerators, besides the clinically used high-energy electron and photon beams, produce secondary particles such as neutrons, which escalate the delivered dose. In this study the neutron dose at a 10 and 18 MV Elekta linac was obtained using TLD600 and TLD700 dosimeters as well as Monte Carlo simulation. For neutron dose assessment in a 20 × 20 cm² field, the TLDs were first calibrated: gamma calibration was performed with the 10 and 18 MV linac, and neutron calibration with a 241Am-Be neutron source. For the simulation, the MCNPX code was used, and the calculated neutron dose equivalent was compared with the measured data. The neutron dose equivalent at 18 MV was measured using TLDs on the phantom surface and at depths of 1, 2, 3.3, 4, 5 and 6 cm. The neutron dose at depths of less than 3.3 cm was zero and reached its maximum at a depth of 4 cm (44.39 mSv Gy⁻¹), whereas the calculation gave a maximum of 2.32 mSv Gy⁻¹ at the same depth. The neutron dose at 10 MV was measured using TLDs on the phantom surface and at depths of 1, 2, 2.5, 3.3, 4 and 5 cm. No photoneutron dose was observed at depths of less than 3.3 cm, and the maximum, at 4 cm, was 5.44 mSv Gy⁻¹, whereas the calculated data showed a maximum of 0.077 mSv Gy⁻¹ at the same depth. The comparison between the measured photoneutron dose and the calculated data along the beam axis at different depths shows that the measured values greatly exceed the calculated ones; it therefore seems that TLD600 and TLD700 pairs are not suitable dosimeters for neutron dosimetry on the linac central axis, owing to the high photon flux, whereas MCNPX Monte Carlo techniques remain a valuable tool for photonuclear dose studies.


2020 ◽  
Vol 25 (2) ◽  
pp. 111-122
Author(s):  
Aries Andrianto

Based on Bank Indonesia data, electronic money transactions have grown rapidly in the past 10 years. Throughout 2018, the volume of electronic money transactions was 2.92 billion transactions, growing 16,600 times compared to 2009. This study aims to analyze the factors that influence interest in using the LinkAja digital wallet, using the UTAUT 2 method. The objects of this study are LinkAja digital wallet users domiciled in Jakarta. The independent variables examined were Performance Expectancy, Effort Expectancy, Social Influence, Facilitating Conditions, Hedonic Motivation, Price Value, and Habit, with Behavioral Intention as the dependent variable, analyzed using PLS-SEM techniques. The results of this study indicate that Price Value has a positive effect on Behavioral Intention.


Entropy ◽  
2021 ◽  
Vol 23 (6) ◽  
pp. 662
Author(s):  
Mateu Sbert ◽  
Jordi Poch ◽  
Shuning Chen ◽  
Víctor Elvira

In this paper, we present order invariance theoretical results for weighted quasi-arithmetic means of a monotonic series of numbers. The quasi-arithmetic mean, or Kolmogorov–Nagumo mean, generalizes the classical mean and appears in many disciplines, from information theory to physics, from economics to traffic flow. Stochastic orders are defined on weights (or, equivalently, discrete probability distributions). They were introduced to study risk in economics and decision theory, and have recently found use in Monte Carlo techniques and in image processing. We show in this paper that, if two distributions of weights are ordered under first stochastic order, then for any monotonic series of numbers their weighted quasi-arithmetic means share the same order. This means, for instance, that the arithmetic and harmonic means for two different distributions of weights always have to be aligned if the weights are stochastically ordered; that is, either both means increase or both decrease. We explore the invariance properties when convex (concave) functions define both the quasi-arithmetic mean and the series of numbers, we show their relationship with the increasing concave and increasing convex orders, and we observe the important role played by a newly defined mirror property of stochastic orders. We also give some applications to entropy and cross-entropy, and present an example of the multiple importance sampling Monte Carlo technique that illustrates the usefulness and transversality of our approach. Invariance theorems are useful when a system is represented by a set of quasi-arithmetic means and we want to change the distribution of weights so that all means evolve in the same direction.
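A small numerical sketch of the main invariance result, using assumed weights and series: q dominates p in first stochastic order (its CDF lies below p's), the series x is increasing, and the arithmetic, harmonic, and geometric means, all quasi-arithmetic, move in the same direction when p is replaced by q.

    import numpy as np

    def quasi_arithmetic_mean(x, w, phi, phi_inv):
        """Weighted Kolmogorov-Nagumo mean: phi^{-1}(sum_i w_i * phi(x_i))."""
        w = np.asarray(w, dtype=float)
        return phi_inv(np.dot(w / w.sum(), phi(np.asarray(x, dtype=float))))

    # Increasing series and two weight vectors; q dominates p in first
    # stochastic order (q shifts mass toward later, larger entries).
    x = np.array([1.0, 2.0, 4.0, 8.0])
    p = np.array([0.4, 0.3, 0.2, 0.1])
    q = np.array([0.1, 0.2, 0.3, 0.4])

    means = {
        "arithmetic": (lambda t: t,        lambda t: t),
        "harmonic":   (lambda t: 1.0 / t,  lambda t: 1.0 / t),
        "geometric":  (np.log,             np.exp),
    }
    # The invariance result: every quasi-arithmetic mean moves the same way
    # when the stochastically smaller p is replaced by the larger q.
    for name, (phi, phi_inv) in means.items():
        mp = quasi_arithmetic_mean(x, p, phi, phi_inv)
        mq = quasi_arithmetic_mean(x, q, phi, phi_inv)
        print(f"{name:10s}: M(p) = {mp:.3f}  <=  M(q) = {mq:.3f}")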


Mathematics ◽  
2021 ◽  
Vol 9 (5) ◽  
pp. 580
Author(s):  
Pavel Shcherbakov ◽  
Mingyue Ding ◽  
Ming Yuchi

Various Monte Carlo techniques for random point generation over sets of interest are widely used in many areas of computational mathematics, optimization, data processing, etc. Whereas for regularly shaped sets such sampling is immediate to arrange, for nontrivial, implicitly specified domains these techniques are not easy to implement. We consider the so-called Hit-and-Run algorithm, a representative of the class of Markov chain Monte Carlo methods, which has become popular in recent years. To perform random sampling over a set, this method requires only the knowledge of the intersection of a line through a point inside the set with the boundary of this set. This component of the Hit-and-Run procedure, known as the boundary oracle, has to be computed quickly when the method is applied to the economical point representation of high-dimensional sets within the randomized approach to data mining, image reconstruction, control, optimization, etc. In this paper, we consider several vector and matrix sets typically encountered in control and specified by linear matrix inequalities. Closed-form solutions are proposed for finding the respective points of intersection, leading to efficient boundary oracles; these are generalized to robust formulations in which the system matrices contain norm-bounded uncertainty.
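To make the role of the boundary oracle concrete, here is a minimal Hit-and-Run sketch over the unit ball, a set where the line-boundary intersection has an obvious closed form; the LMI-defined sets treated in the paper require the paper's closed-form oracles instead. All names and parameters are illustrative.

    import numpy as np

    def ball_boundary_oracle(x, d):
        """Closed-form boundary oracle for the unit ball ||y|| <= 1:
        returns (t_min, t_max) with x + t*d on the boundary (||d|| = 1)."""
        b = np.dot(x, d)
        c = np.dot(x, x) - 1.0
        disc = np.sqrt(b * b - c)          # real because x lies inside the ball
        return -b - disc, -b + disc

    def hit_and_run(x0, n_samples, rng):
        """Hit-and-Run: random direction, then a uniform step on the chord."""
        x, out = np.array(x0, dtype=float), []
        for _ in range(n_samples):
            d = rng.normal(size=x.size)
            d /= np.linalg.norm(d)                 # uniform random direction
            t_lo, t_hi = ball_boundary_oracle(x, d)
            x = x + rng.uniform(t_lo, t_hi) * d    # uniform point on the chord
            out.append(x)
        return np.array(out)

    rng = np.random.default_rng(3)
    samples = hit_and_run(np.zeros(5), 10_000, rng)
    # Sanity checks: all samples stay inside the set; the mean approaches the centre.
    print("max norm:", np.linalg.norm(samples, axis=1).max())
    print("mean:    ", samples[2_000:].mean(axis=0).round(3))

The only set-specific ingredient is ball_boundary_oracle; swapping in a fast oracle for an LMI-defined set, as the paper proposes, leaves the rest of the sampler unchanged.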

