Bias of the Hubble Constant Value Caused by Errors in Galactic Distance Indicators

2021 ◽  
Vol 66 (11) ◽  
pp. 955
Author(s):  
S.L. Parnovsky

The bias in the determination of the Hubble parameter and the Hubble constant in the modern Universe is discussed. It can appear during the statistical processing of data on galaxy redshifts and distances estimated from statistical relations of limited accuracy. This gives rise to a number of effects that lead to either underestimation or overestimation of the Hubble parameter with any method of statistical processing, primarily the least squares method (LSM). The value of the Hubble constant is underestimated when processing a whole sample and overestimated when the sample is constrained by distance, especially when constrained from above. Moreover, it is significantly overestimated due to data selection. The bias significantly exceeds the error of the Hubble constant calculated by the LSM formulae. These effects are demonstrated both analytically and with Monte Carlo simulations, which add deviations in the velocities and estimated distances to an original dataset described by the Hubble law. The characteristics of the deviations are similar to real observations, with errors in the estimated distances of up to 20%. As a result, when processing the same mock sample using LSM, one can obtain an estimate of the Hubble constant ranging from 96% of the true value, when processing the entire sample, to 110%, when processing a subsample with distances limited from above. These effects can bias the Hubble constant obtained from real data and lead to an overestimation of the accuracy of its determination. This may call into question the accuracy of the Hubble constant and could significantly reduce the tension between the values obtained from observations of the early and modern Universe, which has been actively discussed during the last year.
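The direction of the effect described above can be sketched in a few lines. This is a minimal illustration, not the paper's setup: the sample size, distance range, distance cut, and the 20% lognormal scatter below are all illustrative assumptions, and the exact percentages depend on them. A least-squares fit of the Hubble law on distances with multiplicative errors pulls the slope down on the whole sample, while cutting the sample at an upper estimated distance pushes the estimate back up.

```python
import numpy as np

rng = np.random.default_rng(42)
H0_true = 70.0  # km/s/Mpc, assumed true value for the mock sample

# Mock sample obeying the Hubble law exactly, then perturbed by
# ~20% lognormal errors in the estimated distances
r_true = rng.uniform(10, 200, 20000)                    # Mpc
v = H0_true * r_true                                    # exact Hubble-law velocities
r_est = r_true * rng.lognormal(0.0, 0.2, r_true.size)   # noisy distance estimates

def h0_lsm(v, r):
    """Least-squares slope through the origin: H0 = sum(v*r) / sum(r^2)."""
    return np.sum(v * r) / np.sum(r * r)

h_all = h0_lsm(v, r_est)               # whole sample -> biased low
near = r_est < 100                     # subsample limited from above in distance
h_near = h0_lsm(v[near], r_est[near])  # truncation selects down-scattered
                                       # distances -> estimate pulled up

print(f"whole sample:     {100 * h_all / H0_true:.1f}% of true H0")
print(f"distance-limited: {100 * h_near / H0_true:.1f}% of true H0")
```

The two biases have opposite signs, which is why the same mock data can yield estimates on either side of the true value depending on how the sample is cut.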

2017 ◽  
Vol 14 (2) ◽  
pp. 55-68 ◽  
Author(s):  
Rita Bužinskienė

Abstract In accordance with generally accepted accounting standards, most intangibles are not accounted for and are not reflected in traditional financial accounting. For this reason, most companies account for intangible assets (IAs) as expenses. In the research, 57 sub-elements of IAs were applied, grouped into eight main elements of IAs. The classification of IAs consists of two parts: accounted and non-accounted assets. This classification can be successfully applied in different branches of enterprises to expand and supplement the theoretical and practical concepts of a company's financial management. The article proposes to evaluate not only the value of financial information on IAs (accounted) but also the value of non-financial information on IAs (non-accounted), thus revealing the true value of the IAs available to the companies of Lithuania, termed the value of general IAs. The results of the research confirmed the IA valuation methodology, which allows companies to calculate the fair value of an IA. The extended IA valuation information obtained may be valuable both to the owners of the company and to investors, as this value plays an important practical role in assessing the impact of IAs on the market value of companies.


1999 ◽  
Vol 183 ◽  
pp. 68-68
Author(s):  
Koichi Iwamoto ◽  
Ken'Ichi Nomoto

The large luminosity (MV ≈ −19 to −20) and the homogeneity in light curves and spectra of Type Ia supernovae (SNe Ia) have led to their use as distance indicators, ultimately to determine the Hubble constant (H0). However, a growing number of observed samples from intermediate- and high-z (z ∼ 0.1−1) SN Ia survey projects (Hamuy et al. 1996, Perlmutter et al. 1997) have shown that there is a significant dispersion in the maximum brightness (∼0.4 mag) and a brighter-slower correlation between the brightness and the post-maximum decline rate, first pointed out by Phillips (1993). By taking this correlation into account, Hamuy et al. (1996) gave an estimate of H0 with error bars half as large as previous ones.


2002 ◽  
Vol 12 ◽  
pp. 688 ◽  
Author(s):  
P.M. Garnavich ◽  
K. Stanek

Abstract The ideal distance indicator would be a standard candle abundant enough to provide many examples within reach of parallax measurements and sufficiently bright to be seen out to Local Group galaxies. The red clump stars closely match this description. These are the metal-rich equivalent of the better-known horizontal branch stars, and their brightness dispersion is only 0.2 mag (one sigma) in the Solar neighborhood. Using Hipparcos to calibrate a large local sample, the red clump method has been used to measure accurate distances to the Galactic center (Paczyński & Stanek 1998), M31 (Stanek & Garnavich 1998), the LMC (Udalski et al. 1998; Stanek et al. 1998; Udalski 1999), and some clusters in our Galaxy (e.g. 47 Tuc: Kaluzny et al. 1998). As with all distance indicators, the main worry lies in the possible systematics of the method, in particular the dependence of the brightness on stellar metallicity and age. These dependences have come under close scrutiny and, indeed, the population effects on the red clump brightness appear small and calibratable. Perhaps the most controversial result from the red clump method is the estimation of a “short” distance to the Large Magellanic Cloud (Udalski et al. 1998; Stanek, Zaritsky & Harris 1998; Udalski 2000). This distance to the LMC is shorter by 12% than the “standard” value and has very important implications for the Cepheid distance scale and the determination of the Hubble constant.
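The standard-candle logic behind the red clump method reduces to the distance modulus, m − M = 5 log10(d / 10 pc). A short sketch with purely illustrative magnitudes (the absolute and apparent magnitudes below are round LMC-like numbers chosen for the example, not values from the text):

```python
import math

def candle_distance_pc(m_app, M_abs):
    """Distance in parsecs from the distance modulus m - M = 5*log10(d / 10 pc)."""
    return 10 ** ((m_app - M_abs) / 5 + 1)

# Illustrative numbers: a red-clump-like candle with absolute magnitude
# M = -0.25 observed at apparent magnitude m = 18.2
d = candle_distance_pc(18.2, -0.25)
print(f"{d / 1000:.1f} kpc")  # roughly the LMC distance scale (~50 kpc)
```

A 0.2 mag brightness dispersion translates through the same formula into a ~10% distance scatter per star, which is why averaging many red clump stars gives such tight distances.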


Author(s):  
Л. І. Лєві

The technology considered in this work makes it possible to build multifactor dependences with continuous output by combining the advantages of soft computing and regression analysis, allowing both the degree of importance of the input variables and their interactions of the required order to be determined. However, when modelling objects with continuous output, where sufficient accuracy in determining a crisp output value is required, finding the parameters of the fuzzy regression equation by the least squares method, and the parameters of the membership functions by statistical processing of expert information, cannot fully provide the required accuracy. To achieve it, the fuzzy regression model must be tuned on a training set in accordance with the testing sample.


2020 ◽  
Vol 10 (21) ◽  
pp. 7559
Author(s):  
Mustapha Lhous ◽  
Omar Zakary ◽  
Mostafa Rachik ◽  
El Mostafa Magri ◽  
Abdessamad Tridane

This work investigates the optimal control of the second phase of the COVID-19 lockdown in Morocco. The model consists of susceptible, exposed, infected, recovered, and quarantine compartments (SEIRQD model), where we take into account contact tracing, social distancing, quarantine, and treatment measures during the nationwide lockdown in Morocco. First, we present the different components of the model and their interactions. Second, to validate our model, the nonlinear least-squares method is used to estimate the model’s parameters by fitting the model outcomes to real COVID-19 data from Morocco. Next, we investigate the impact of optimal control strategies on the pandemic in the country. Finally, we give numerical simulations to illustrate and compare the obtained results with the actual situation in Morocco.
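The parameter-fitting step can be sketched with a toy model. This is a plain discrete-time SEIR fitted to synthetic data, not the paper's SEIRQD model with control terms; every parameter value below (population, rates, noise level) is an illustrative assumption, and the least-squares minimization is done by a simple one-dimensional grid search over the transmission rate.

```python
import numpy as np

def seir(beta, days=60, N=36e6, E0=50.0, I0=10.0, sigma=1/5.2, gamma=1/7):
    """Discrete-day SEIR integration; returns the daily infected compartment I."""
    S, E, I, R = N - E0 - I0, E0, I0, 0.0
    out = []
    for _ in range(days):
        new_exposed = beta * S * I / N
        S, E, I, R = (S - new_exposed,
                      E + new_exposed - sigma * E,
                      I + sigma * E - gamma * I,
                      R + gamma * I)
        out.append(I)
    return np.array(out)

# Synthetic "observed" series generated with a known beta plus 5% noise
rng = np.random.default_rng(0)
beta_true = 0.35
data = seir(beta_true) * rng.normal(1.0, 0.05, 60)

# Nonlinear least squares via brute-force search over the single parameter
betas = np.linspace(0.1, 0.8, 701)
sse = [np.sum((seir(b) - data) ** 2) for b in betas]
beta_hat = betas[int(np.argmin(sse))]
print(f"recovered beta = {beta_hat:.3f} (true {beta_true})")
```

With several parameters and real data, a proper optimizer (e.g. Levenberg-Marquardt style nonlinear least squares) replaces the grid search, but the objective, the sum of squared residuals between model output and observed case counts, is the same.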


2021 ◽  
Vol 1 (2) ◽  
pp. 49-53
Author(s):  
Ikram Bensouf ◽  
Naceur M’Hamdi ◽  
Hatem Ouled Ahmed ◽  
Faten Lasfar ◽  
Belgacem Ben Aoun ◽  
...  

The aim of the study is to investigate the effects of age, sex, running distance, and origin of the horse on racing speed for Purebred Arabian horses in Tunisia. Although the occidental type is known to be more successful in racing than the Tunisian type, we undertook this study to try to confirm or deny this supremacy for a sample of racehorses born in Tunisia from occidental fathers. A total of 333 racing records were considered for race performance. The effects of environmental factors (sex, age, father’s origin, race distance, number of race seasons) on race performance were analyzed using the least-squares method (LSM). The racehorses studied were all Purebred Arabian horses in operation at the racecourse of Ksar Said from 2010 to 2020: 180 horses, 90 born of a Tunisian father and 90 born in Tunisia of an occidental father. These horses are the best and most successful in their category. The study revealed that the sex and age effects were statistically insignificant on racing performance. Race performance was significantly influenced by the distance and the origin of the father, which affirms the improving role of the occidental horse in the Tunisian population.


2008 ◽  
Vol 2008 ◽  
pp. 1-10 ◽  
Author(s):  
D. R. Novog ◽  
P. Sermer

This paper provides a novel and robust methodology for determination of nuclear reactor trip setpoints which accounts for uncertainties in input parameters and models, as well as accounting for the variations in operating states that periodically occur. Further it demonstrates that in performing best estimate and uncertainty calculations, it is critical to consider the impact of all fuel channels and instrumentation in the integration of these uncertainties in setpoint determination. This methodology is based on the concept of a true trip setpoint, which is the reactor setpoint that would be required in an ideal situation where all key inputs and plant responses were known, such that during the accident sequence a reactor shutdown will occur which just prevents the acceptance criteria from being exceeded. Since this true value cannot be established, the uncertainties in plant simulations and plant measurements as well as operational variations which lead to time changes in the true value of initial conditions must be considered. This paper presents the general concept used to determine the actuation setpoints considering the uncertainties and changes in initial conditions, and allowing for safety systems instrumentation redundancy. The results demonstrate unique statistical behavior with respect to both fuel and instrumentation uncertainties which has not previously been investigated.


2007 ◽  
Vol 178 (4) ◽  
pp. 275-291 ◽  
Author(s):  
Carine Lezin ◽  
Jacques Rey ◽  
Philippe Faure ◽  
René Cubaynes ◽  
Thierry Pelissie ◽  
...  

Abstract On the eastern edge of the Aquitaine Basin, the Lias–Dogger transition and the events that occurred during this time interval are studied in the Quercy sedimentary basin. Stratigraphic correlations are proposed using a biochronological calibration based on the determination of numerous ammonites and brachiopods. Facies analyses using statistical processing integrate the presence of faults and tectonic compartments and lead to the reconstruction of palaeoenvironments in space and time. The paper includes the description of systems tracts following Haq et al. [1987] and Vail et al. [1991], and twelve palaeogeographic maps of the area studied. The objectives are to distinguish the various allocyclic and autocyclic factors controlling sedimentation and to show the impact of the Mid-Cimmerian tectonic event on the evolution of the basin.


2019 ◽  
Vol 633 ◽  
pp. A19 ◽  
Author(s):  
Hans Böhringer ◽  
Gayoung Chon ◽  
Chris A. Collins

For precision cosmological studies it is important to know the local properties of the reference point from which we observe the Universe. Particularly for the determination of the Hubble constant with low-redshift distance indicators, the values observed depend on the average matter density within the distance range covered. In this study we used the spatial distribution of galaxy clusters to map the matter density distribution in the local Universe. The study is based on our CLASSIX galaxy cluster survey, which is highly complete and well characterised, where galaxy clusters are detected by their X-ray emission. In total, 1653 galaxy clusters outside the “zone of avoidance” fulfil the selection criteria and are involved in this study. We find a local underdensity in the cluster distribution of about 30–60% which extends about 85 Mpc to the north and ∼170 Mpc to the south. We study the density distribution as a function of redshift in detail in several regions in the sky. For three regions for which the galaxy density distribution has previously been studied, we find good agreement between the density distribution of clusters and galaxies. Correcting for the bias in the cluster distribution we infer an underdensity in the matter distribution of about −30 ± 15% (−20 ± 10%) in a region with a radius of about 100 (∼140) Mpc. Calculating the probability of finding such an underdensity through structure formation theory in a ΛCDM universe with concordance cosmological parameters, we find a probability characterised by σ-values of 1.3 − 3.7. This indicates low probabilities, but with values of around 10% at the lower uncertainty limit, the existence of an underdensity cannot be ruled out. 
Inside this underdensity, the observed Hubble parameter will be larger by about 5.5 (+2.1/−2.8)%, which explains part of the discrepancy between the locally measured value of H0 and the value of the Hubble parameter inferred from the Planck observations of cosmic microwave background anisotropies. If distance indicators outside the local underdensity are included, as in many modern analyses, this effect is diluted.


2020 ◽  
Vol 16 (6) ◽  
pp. 46-55
Author(s):  
A.V. Golubek ◽  
◽  
N.M. Dron' ◽  

Introduction. A constant increase in the amount of space debris already constitutes a significant threat to satellites in near-Earth orbits, starting with the trajectory of their launch vehicle injection. Problem Statement. The design and development of various modern methods of protection against space debris requires knowledge of the statistical characteristics of the distribution of the kinematic parameters of the simultaneous motion of a satellite-injecting launch vehicle and a group of space debris objects in the area of its trajectory. Purpose. Development of a mathematical model of a launch vehicle rendezvous with a group of observable orbital debris while injecting a satellite into near-Earth orbits with an altitude of up to 2100 km and an inclination from 45 to 90 degrees. Materials and Methods. The following methods are used in the research: analysis, synthesis, comparison, simulation modeling, statistical processing of experimental results, approximation, correlation analysis, and the least squares method. Results. The simultaneous motion of a launch vehicle and a group of space debris objects has been studied. The distributions of relative distance, relative velocity, angle of encounter, and moments of time of approach of a launch vehicle to a group of the observed space debris at a relative distance of less than 5 km have been obtained. The dependence of the average rendezvous concentration on the distribution of space debris across the average altitude of the orbit and the inclination of the target orbit of the launch vehicle has been determined. The dependence of the average probability of rendezvous in the launch on the inclination of the target orbit, the number of orbital debris, and the relative distance of the rendezvous has been determined. Conclusions. 
The obtained mathematical model of rendezvous of a launch vehicle with a group of observed orbital debris can be used while designing means of cleaning the near-Earth space and systems to protect modern satellite launch vehicles from orbital debris. In addition, the results of the research can be used to assess the impact of unobserved orbital debris on the flight of a launch vehicle.

