statistical error
Recently Published Documents

TOTAL DOCUMENTS: 525 (five years: 145)
H-INDEX: 33 (five years: 5)

2022 ◽  
Author(s):  
Zuoheng Zou ◽  
Yu Meng ◽  
Chuan Liu (刘川)

Abstract We perform a lattice QCD calculation of the $\chi_{c0} \rightarrow 2\gamma$ decay width using a model-independent method that does not require a momentum extrapolation of the corresponding off-shell form factors. The simulation is performed on ensembles of $N_f=2$ twisted mass lattice QCD gauge configurations with three different lattice spacings. After a continuum extrapolation, the decay width is found to be $\Gamma_{\gamma\gamma}(\chi_{c0})=3.65(83)_{\mathrm{stat}}(21)_{\mathrm{lat.syst}}(66)_{\mathrm{syst}}\, \textrm{keV}$. Despite the large statistical error, our result is compatible with the experimental results within 1.3$\sigma$. Potential future improvements of the lattice calculation are also discussed.
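As a quick illustration of how the quoted uncertainties combine, the sketch below adds the three quoted errors in quadrature and computes the tension with an assumed experimental average of 2.20 ± 0.22 keV (roughly the PDG-style value; this number is an illustrative assumption, not taken from the abstract):

```python
import math

# Lattice result from the abstract: 3.65(83)_stat(21)_lat.syst(66)_syst keV
gamma_lat = 3.65
err_lat = math.sqrt(0.83**2 + 0.21**2 + 0.66**2)  # quadrature sum of the three errors

# Assumed experimental average (illustrative; roughly the PDG value)
gamma_exp, err_exp = 2.20, 0.22

# Tension in units of the combined uncertainty
sigma = abs(gamma_lat - gamma_exp) / math.sqrt(err_lat**2 + err_exp**2)
print(f"combined lattice error: {err_lat:.2f} keV, tension: {sigma:.1f} sigma")
# prints ~1.3 sigma, matching the compatibility quoted in the abstract
```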


Electronics ◽  
2022 ◽  
Vol 11 (2) ◽  
pp. 256
Author(s):  
Yun Chen ◽  
Guoping Zhang ◽  
Hongbo Xu ◽  
Yinshuan Ren ◽  
Xue Chen ◽  
...  

Non-orthogonal multiple access (NOMA) is a multiple access method that has been considered for 5G cellular communications in recent years; it can provide better throughput than traditional orthogonal multiple access (OMA) and thus save communication bandwidth. Device-to-device (D2D) communication, as a key 5G technology, can reuse network resources to improve the spectrum utilization of the entire communication network. Combining NOMA with D2D is an effective way to improve mobile edge computing (MEC) communication throughput and user access density. Taking channel estimation error into account, we investigate the transmit power optimization problem for NOMA-based D2D networks under rate outage probability (OP) constraints for each individual user. Specifically, under a statistical channel error model, the total system transmit power is minimized subject to a rate OP constraint for each device. The resulting problem is non-convex and difficult to solve directly. After an equivalent transformation of the rate OP constraints via the Bernstein inequality, an algorithm based on semi-definite relaxation (SDR) solves this challenging non-convex problem efficiently. Numerical results show that channel estimation error increases the power consumption of the system. We also compare NOMA with the OMA mode, and the numerical results show that NOMA-based D2D offloading systems outperform their OMA counterparts.
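To make the SDR step concrete, here is a minimal sketch of the usual lifting trick in Python with CVXPY. The channel matrices and the constraint threshold are invented placeholders; the abstract's Bernstein-inequality reformulation of the outage constraints is not reproduced, only the generic relax-then-solve pattern:

```python
import numpy as np
import cvxpy as cp

# Toy setup: n transmit nodes; minimize total power trace(W), where W lifts
# the power/beamforming vector w (W = w w^T, ideally rank-1).
n = 4
rng = np.random.default_rng(0)
H = [np.outer(h, h) for h in rng.standard_normal((n, n))]  # placeholder channel matrices
gamma = 1.0                                                # placeholder rate/SINR threshold

W = cp.Variable((n, n), PSD=True)  # dropping the rank-1 constraint is the relaxation
constraints = [cp.trace(Hk @ W) >= gamma for Hk in H]  # stand-ins for the transformed OP constraints
prob = cp.Problem(cp.Minimize(cp.trace(W)), constraints)
prob.solve()

# If the solution is (numerically) rank-1, recover w from the top eigenvector;
# otherwise a randomization step is usually applied.
eigvals, eigvecs = np.linalg.eigh(W.value)
w = np.sqrt(eigvals[-1]) * eigvecs[:, -1]
print("total power:", prob.value)
```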


2021 ◽  
Vol 13 (4) ◽  
pp. 501-508
Author(s):  
Alla A. Kornilova ◽  
Vladimir I. Vysotskii ◽  
Sergey N. Gaydamaka ◽  
Marina A. Gladchenko ◽  
...  

During the research it was found that in the experimental and control bioreactors, which at the beginning of the experiments contained only cesium and strontium, yttrium and barium were present by the end of the experiments. These isotopes are formed as a result of low-energy nuclear reactions involving protons. In addition, in experimental bioreactors with an optimal composition, a two- to threefold increase in the concentration of yttrium was recorded in comparison with the non-optimal control experiments. Accumulation of strontium and cesium in the biomass was registered, which is explained by the process of biosorption. Biosorption is known to be the first step towards nuclear transformation (biotransmutation). At the same time, one of the main conditions for the nuclear transformation of elements by biomass is its maximally efficient growth. An unexpected finding of the experiment is that yttrium and barium were also detected in the control bioreactor, to which no biomass was added before the experiment, only deionized water, glucose, and the initial stable cesium and strontium salts. It is important to note that these elements were not detected in the analysis of the initial salts, substrates, and deionized water. Most likely, the presence of yttrium and barium is due to inoculation of the control bioreactor fluid (to which no biomass pellets were added) with microorganisms from the experimental bioreactors during their periodic opening for taking pH samples and adding glucose. The work also recorded decreases in the cesium and strontium content of the liquid of 20% and 55%, respectively, which exceed the statistical error.


MAUSAM ◽  
2021 ◽  
Vol 49 (2) ◽  
pp. 177-182
Author(s):  
P. GOYAL ◽  
T.V.B.P.S. RAMA KRISHNA

Two models, the IIT Line Source model (IITLS) and the HIWAY-2 model, have been used to estimate the concentrations of hydrocarbons (HC) and oxides of nitrogen (NOx) due to the transportation sector. An elaborate source inventory for the extrapolation of HC and NOx emissions from vehicular transport has been developed in the IITLS model. The models' predicted concentrations have been compared with the observed values at three receptors in Delhi, namely Mool Chand, Ashram, and AIIMS. A statistical error analysis of the models' results against the observed values has been made to evaluate their performance. In the present study, the IITLS model has been observed to perform better than the HIWAY-2 model.
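The abstract does not list which error statistics were computed; as an illustration only, the sketch below implements three measures conventional in air-quality model evaluation (fractional bias, normalized mean square error, and the factor-of-two fraction) on hypothetical receptor data:

```python
import numpy as np

def evaluation_stats(obs, pred):
    """Common statistical-error measures for dispersion-model evaluation."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    fb = 2.0 * (obs.mean() - pred.mean()) / (obs.mean() + pred.mean())  # fractional bias
    nmse = np.mean((obs - pred) ** 2) / (obs.mean() * pred.mean())      # normalized MSE
    fac2 = np.mean((pred / obs >= 0.5) & (pred / obs <= 2.0))           # fraction within a factor of 2
    return fb, nmse, fac2

# Hypothetical observed vs. predicted NOx concentrations (ppb) at one receptor
obs = [42.0, 55.0, 38.0, 61.0, 47.0]
pred = [39.0, 60.0, 30.0, 70.0, 50.0]
print("FB %.3f  NMSE %.3f  FAC2 %.2f" % evaluation_stats(obs, pred))
```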


2021 ◽  
Author(s):  
Mohammad Rasheed Khan ◽  
Shams Kalam ◽  
Asiya Abbasi

Abstract Accurate permeability estimation in tight carbonates is a key reservoir characterization challenge, made more pronounced by heterogeneous pore structures. Experiments on large volumes of core samples are required to precisely characterize permeability in such reservoirs, which means investing large amounts of time and capital. It is therefore imperative to have an integrated model that can predict field-wide permeability for un-cored sections to optimize reservoir strategies. Various studies address this challenge; however, most of them lack universality in application or do not consider important carbonate geometrical features. Accordingly, this work presents a novel correlation for determining the permeability of tight carbonates as a function of carbonate pore geometry, using a combination of machine learning and optimization algorithms. First, a deep learning neural network (NN) is constructed and then optimized to produce a data-driven permeability predictor. The model is customized to tight, heterogeneous pore-scale features by considering key carbonate geometrical topologies, porosity, formation resistivity, pore cementation representation, characteristic pore throat diameter, pore diameter, and grain diameter. Multiple realizations are conducted, spanning from a perceptron-based model to a multi-layered neural net with varying activation and transfer functions. Next, a physical equation is derived from the optimized model to provide a stand-alone equation for permeability estimation. The proposed model is validated by graphical and statistical error analysis of its performance on an unseen test dataset. A major outcome of this study is the development of a physical mathematical equation which can be used without diving into the intricacies of artificial intelligence algorithms. To evaluate the performance of the new correlation, an error metric comprising average absolute percentage error (AAPE), root mean squared error (RMSE), and correlation coefficient (CC) was used. The proposed correlation performs with low error values and gives a CC above 0.95. A possible reason for this outcome is that machine learning algorithms can construct relationships between various non-linear inputs (e.g., carbonate heterogeneity) and the output (permeability) through the complex interplay of their transfer and activation functions.
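For reference, a minimal sketch of the three scores in the error metric named above (AAPE, RMSE, CC), applied to a pair of hypothetical measured/predicted permeability arrays:

```python
import numpy as np

def error_metrics(y_true, y_pred):
    """AAPE, RMSE, and correlation coefficient, as named in the abstract."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    aape = np.mean(np.abs((y_true - y_pred) / y_true)) * 100.0  # average absolute percentage error
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))             # root mean squared error
    cc = np.corrcoef(y_true, y_pred)[0, 1]                      # correlation coefficient
    return aape, rmse, cc

# Hypothetical core-measured vs. model-predicted permeability (mD)
k_core = [0.10, 0.25, 0.60, 1.20, 3.50]
k_pred = [0.12, 0.22, 0.65, 1.10, 3.80]
print("AAPE %.1f%%  RMSE %.3f mD  CC %.3f" % error_metrics(k_core, k_pred))
```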


2021 ◽  
Vol 11 (23) ◽  
pp. 11491
Author(s):  
Laura Sofía Hoyos-Gomez ◽  
Belizza Janet Ruiz-Mendoza

Solar irradiance is an available resource that could support electrification in regions with low socio-economic indices, so it is increasingly important to understand its behavior and to have solar irradiance data. Some locations, especially those with low socio-economic populations, have no measured solar irradiance data, and where such data exist they are incomplete. There are different approaches to estimating solar irradiance, from learning models to empirical models. The latter have the advantage of low computational cost, allowing their wide use. Researchers estimate the solar energy resource using information from other meteorological variables, such as temperature. However, there is no broad analysis of these techniques in tropical and mountainous environments. To address this gap, our research analyzes the performance of three well-known empirical temperature-based models (Hargreaves and Samani, Bristow and Campbell, and Okundamiya and Nzeako) and proposes a new one for tropical and mountainous environments. The new empirical technique models daily solar irradiance in some areas better than the other three models. Statistical error comparison allows us to select the best model for each location and determines the data imputation model. The Hargreaves and Samani model performed best in the Pacific zone, with an average RMSE of 936.195 Wh/m2 day, SD of 36.01%, MAE of 748.435 Wh/m2 day, and U95 of 1836.325 Wh/m2 day. The newly proposed model performed best in the Andean and Amazon zones, with an average RMSE of 1032.99 Wh/m2 day, SD of 34.455 Wh/m2 day, MAE of 825.46 Wh/m2 day, and U95 of 2025.84 Wh/m2 day. Another result was the linear relationship between the new empirical model's constants and altitude up to 2500 MASL (meters above sea level).
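For context, the Hargreaves and Samani model referenced above estimates daily global irradiance from the diurnal temperature range alone. A minimal sketch under the usual formulation follows; the coefficient k_rs and the sample inputs are illustrative assumptions, not values from the paper:

```python
import math

def hargreaves_samani(t_max, t_min, ra, k_rs=0.17):
    """Daily global solar irradiance: Rs = k_rs * sqrt(Tmax - Tmin) * Ra.

    t_max, t_min : daily max/min air temperature (deg C)
    ra           : extraterrestrial radiation for the day (same units as Rs)
    k_rs         : empirical coefficient (~0.16 interior, ~0.19 coastal)
    """
    return k_rs * math.sqrt(t_max - t_min) * ra

# Illustrative values: Ra of 10,000 Wh/m2 day and a 12 degC diurnal range
rs = hargreaves_samani(t_max=28.0, t_min=16.0, ra=10_000.0)
print(f"estimated Rs: {rs:.0f} Wh/m2 day")  # ~5889 Wh/m2 day
```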


2021 ◽  
Vol 922 (2) ◽  
pp. 259
Author(s):  
M. Millea ◽  
C. M. Daley ◽  
T-L. Chou ◽  
E. Anderes ◽  
P. A. R. Ade ◽  
...  

Abstract We perform the first simultaneous Bayesian parameter inference and optimal reconstruction of the gravitational lensing of the cosmic microwave background (CMB), using 100 deg² of polarization observations from the SPTpol receiver on the South Pole Telescope. These data reach noise levels as low as 5.8 μK arcmin in polarization, which are low enough that the typically used quadratic estimator (QE) technique for analyzing CMB lensing is significantly suboptimal. Conversely, the Bayesian procedure extracts all lensing information from the data and is optimal at any noise level. We infer the amplitude of the gravitational lensing potential to be A_ϕ = 0.949 ± 0.122 using the Bayesian pipeline, consistent with our QE pipeline result, but with 17% smaller error bars. The Bayesian analysis also provides a simple way to account for systematic uncertainties, performing a similar job as frequentist “bias hardening” or linear bias correction, and reducing the systematic uncertainty on A_ϕ due to polarization calibration from almost half of the statistical error to effectively zero. Finally, we jointly constrain A_ϕ along with A_L, the amplitude of lensing-like effects on the CMB power spectra, demonstrating that the Bayesian method can be used to easily infer parameters both from an optimal lensing reconstruction and from the delensed CMB, while exactly accounting for the correlation between the two. These results demonstrate the feasibility of the Bayesian approach on real data, and pave the way for future analysis of deep CMB polarization measurements with SPT-3G, Simons Observatory, and CMB-S4, where improvements relative to the QE can reach 1.5 times tighter constraints on A_ϕ and seven times lower effective lensing reconstruction noise.


Author(s):  
Miwako Takahashi ◽  
Shuntaro Yoshimura ◽  
Sodai Takyu ◽  
Susumu Aikou ◽  
Yasuhiro Okumura ◽  
...  

Abstract Purpose To reduce postoperative complications, intraoperative lymph node (LN) diagnosis with 18F-fluoro-2-deoxy-D-glucose (FDG) is expected to optimize the extent of LN dissection, leading to less invasive surgery. However, such a diagnostic device has not yet been realized. We proposed the concept of coincidence detection wherein a pair of scintillation crystals formed the head of the forceps. To estimate the clinical impact of this detector, we determined the cut-off value using FDG as a marker for intraoperative LN diagnosis in patients with esophageal cancer, the specifications needed for the detector, and its feasibility using numerical simulation. Methods We investigated the dataset including pathological diagnosis and radioactivity of 1073 LNs resected from 20 patients who underwent FDG-positron emission tomography followed by surgery for esophageal cancer on the same day. The specifications for the detector were determined assuming that it should measure 100 counts (less than 10% statistical error) or more within the intraoperative measurement time of 30 s. The detector sensitivity was estimated using GEANT4 simulation and the expected diagnostic ability was calculated. Results The cut-off value was 620 Bq for intraoperative LN diagnosis. The simulation study showed that the detector had a radiation detection sensitivity of 0.96%, which was better than the estimated specification needed for the detector. Among the 1035 non-metastatic LNs, 815 were below the cut-off value. Conclusion The forceps-type coincidence detector can provide sufficient sensitivity for intraoperative LN diagnosis. Approximately 80% of the prophylactic LN dissections in esophageal cancer can be avoided using this detector.
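The 100-count specification follows from Poisson counting statistics: the relative statistical error of N detected counts is 1/√N, so 100 counts corresponds to the 10% figure quoted above:

$$\frac{\sigma_N}{N} = \frac{\sqrt{N}}{N} = \frac{1}{\sqrt{N}}, \qquad N = 100 \;\Rightarrow\; \frac{1}{\sqrt{100}} = 10\%.$$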


2021 ◽  
Vol 922 (2) ◽  
pp. 116
Author(s):  
Brian DiGiorgio ◽  
Kevin Bundy ◽  
Kyle B. Westfall ◽  
Alexie Leauthaud ◽  
David Stark

Abstract Kinematic weak lensing describes the distortion of a galaxy’s projected velocity field due to lensing shear, an effect recently reported for the first time by Gurri et al. based on a sample of 18 galaxies at z ∼ 0.1. In this paper, we develop a new formalism that combines the shape information from imaging surveys with the kinematic information from resolved spectroscopy to better constrain the lensing distortion of source galaxies and to potentially address systematic errors that affect conventional weak-lensing analyses. Using a Bayesian forward model applied to mock galaxy observations, we model distortions in the source galaxy’s velocity field simultaneously with the apparent shear-induced offset between the kinematic and photometric major axes. We show that this combination dramatically reduces the statistical uncertainty on the inferred shear, yielding statistical error gains of a factor of 2–6 compared to kinematics alone. While we have not accounted for errors from intrinsic kinematic irregularities, our approach opens kinematic lensing studies to higher redshifts where resolved spectroscopy is more challenging. For example, we show that ground-based integral-field spectroscopy of background galaxies at z ∼ 0.7 can deliver gravitational shear measurements with signal-to-noise ratio of ∼1 per source galaxy at 1 arcminute separations from a galaxy cluster at z ∼ 0.3. This suggests that even modest samples observed with existing instruments could deliver improved galaxy cluster mass measurements and well-sampled probes of their halo mass profiles to large radii.

