statistical noise
Recently Published Documents

TOTAL DOCUMENTS: 153 (five years: 24)
H-INDEX: 14 (five years: 2)

Author(s):  
Manuel Rodrigues ◽  
Gilles Metris ◽  
Judicael Bedouet ◽  
Joel Bergé ◽  
Patrice Carle ◽  
...  

Abstract Testing the Weak Equivalence Principle (WEP) to a precision of 10⁻¹⁵ requires a quantity of data that gives enough confidence in the final result: ideally, the longer the measurement, the better the rejection of the statistical noise. The science sessions had a maximum duration of 120 orbits and were regularly repeated and spaced out to accommodate operational constraints, but also to repeat the experiment in different conditions and to allow time to calibrate the instrument. Several science sessions were performed over the 2.5-year duration of the experiment. This paper describes how the data were produced on the basis of a mission scenario and a data-flow process driven by a tradeoff between the science objectives and the operational constraints. The mission was led by the Centre National d'Etudes Spatiales (CNES), which provided the satellite, the launch and the ground operations. The ground segment was distributed between CNES and the Office National d'Etudes et de Recherches Aérospatiales (ONERA). CNES provided the raw data through the Centre d'Expertise de Compensation de Traînée (CECT: drag-free expertise centre). The science was led by the Observatoire de la Côte d'Azur (OCA), and ONERA was in charge of the data processing. The latter also provided the instrument and the Science Mission Centre of MICROSCOPE (CMSM).
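As an illustration of the "longer measurement, better noise rejection" argument above, here is a minimal Python sketch under a simple white-noise assumption (not the MICROSCOPE pipeline): averaging N independent sessions shrinks the statistical scatter on the estimate as 1/√N.

```python
import numpy as np

rng = np.random.default_rng(0)
true_signal = 0.0     # hypothetical WEP-violation amplitude (zero here)
noise_sigma = 1.0     # per-session statistical noise, arbitrary units

for n_sessions in (1, 10, 100, 1000):
    # each session yields one noisy estimate of the signal
    sessions = true_signal + noise_sigma * rng.normal(size=(10_000, n_sessions))
    estimates = sessions.mean(axis=1)  # average over the sessions
    # empirical scatter of the averaged estimate ~ noise_sigma / sqrt(n_sessions)
    print(f"{n_sessions:5d} sessions: scatter {estimates.std():.4f} "
          f"(theory {noise_sigma / np.sqrt(n_sessions):.4f})")
```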


Author(s):  
Yidi Yao ◽  
Liang Li ◽  
Zhiqiang Chen

Abstract Multi-energy spectral CT has gained a broader range of applications with the recent development of photon-counting detectors. However, the number of photons counted in each energy bin decreases as the number of energy bins increases, which raises the statistical noise level of the CT image. In this work, we propose a novel iterative dynamic dual-energy CT algorithm to reduce the statistical noise. In the proposed algorithm, the multi-energy projections are estimated from the dynamic dual-energy CT data during the iterative process. The proposed algorithm is verified on extensive numerical simulations and a laboratory two-energy-threshold PCD system. With the same reconstruction algorithm applied, the final reconstructions from dynamic dual-energy CT have a much lower statistical noise level than those of conventional multi-energy CT. Moreover, based on an analysis of the simulation results, we explain why dynamic dual-energy CT has the lower statistical noise level: the multi-energy projections estimated with the proposed algorithm carry much less statistical noise than those of conventional multi-energy CT, which leads to less statistical noise in the dynamic dual-energy CT images.
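The premise that more energy bins means noisier bins follows from Poisson counting statistics. A hedged sketch (idealized Poisson counting with a hypothetical photon budget, not the authors' reconstruction code): splitting a fixed photon budget across more bins lowers the mean count per bin, so the relative noise per bin grows as 1/√(counts per bin).

```python
import numpy as np

rng = np.random.default_rng(0)
total_photons = 100_000  # hypothetical photon budget per measurement

for n_bins in (2, 4, 8, 16):
    mean_per_bin = total_photons / n_bins
    counts = rng.poisson(mean_per_bin, size=(n_bins, 10_000))
    # relative noise per bin grows as 1/sqrt(mean_per_bin)
    rel_noise = counts.std(axis=1).mean() / mean_per_bin
    print(f"{n_bins:2d} bins: relative noise {rel_noise:.4f} "
          f"(theory {1 / np.sqrt(mean_per_bin):.4f})")
```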


2021 ◽  
Vol 257 (2) ◽  
pp. 55
Author(s):  
Chinami Kato ◽  
Hiroki Nagakura ◽  
Taiki Morinaga

Abstract Neutrinos have a unique quantum feature: flavor conversion. Recent studies have suggested that collective neutrino oscillations play important roles in high-energy astrophysical phenomena. The quantum kinetic equation (QKE) is capable of describing neutrino flavor conversion, transport, and matter collisions self-consistently, but its numerical implementation presents many technical difficulties. In this paper, we present a new QKE solver based on a Monte Carlo (MC) approach. It is an upgraded version of our classical MC neutrino transport solver; in essence, a flavor degree of freedom, including the mixing state, is added to each MC particle. This extension requires updated numerical treatments of the collision terms, in particular for scattering processes. We deal with this technical problem by generating a new MC particle at each scattering event. To reduce the statistical noise inherent in MC methods, we develop the effective mean free path method, which suppresses sudden changes of flavor state due to collisions without increasing the number of MC particles. We present a suite of code tests to validate these new modules against results reported in previous studies. Our QKE-MC solver is developed with a philosophy and design fundamentally different from those of deterministic, mesh-based methods, suggesting that it will be complementary to them and can potentially provide new insights into the physical processes of neutrino dynamics.
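A toy sketch of the two ingredients the abstract names: an MC particle that carries a flavor state (here a 2×2 density matrix), and a scattering event that spawns a new particle instead of abruptly changing the parent. This is my own schematic under strong simplifications, not the authors' solver.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class MCParticle:
    weight: float
    direction: np.ndarray  # unit propagation vector
    rho: np.ndarray        # 2x2 flavor density matrix (the added flavor DOF)

def scatter(parent, scatter_prob, rng):
    """Schematic scattering: spawn a child carrying the scattered fraction of
    the weight with a redrawn direction; the flavor state is carried along."""
    new_dir = rng.normal(size=3)
    new_dir /= np.linalg.norm(new_dir)
    child = MCParticle(weight=parent.weight * scatter_prob,
                       direction=new_dir,
                       rho=parent.rho.copy())
    parent.weight *= 1.0 - scatter_prob  # parent keeps the unscattered part
    return child

rng = np.random.default_rng(0)
rho_e = np.array([[1.0, 0.0], [0.0, 0.0]])  # pure electron flavor
p = MCParticle(1.0, np.array([0.0, 0.0, 1.0]), rho_e)
child = scatter(p, 0.3, rng)
print(p.weight, child.weight)  # 0.7 and 0.3: weight is split, not discarded
```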


2021 ◽  
Vol 91 (3) ◽  
pp. 17-26
Author(s):  
V. N. Vasiliev ◽  
A. Yu. Smyslov

Purpose: To study the spatial resolution achievable by dose modulation in a water phantom using a multi-leaf collimator, jaws, and their combination; to estimate the power spectral density of the useful signal (the dose distribution) and of the statistical noise; and to evaluate the frequency interval containing the useful signal. Material and methods: Using Gafchromic EBT2 radiochromic film, nested-squares dose patterns formed in a water-equivalent phantom by 6 and 15 MV photon beams of a TrueBeam medical accelerator were measured, shaped by the jaws, a multi-leaf collimator, and a combination of these devices. Edge spread function (ESF) data were extracted from the penumbra, and the line spread function (LSF) of the photon source was calculated. To move to the frequency domain, a fast Fourier transform was performed over the obtained datasets, as well as over individual LSF peaks, and power spectral densities were then calculated. The Nyquist frequency associated with the data sampling was 1.42 mm⁻¹; the Hann window was used to minimize the leakage effect. Results: The shape of the obtained LSF peaks was approximated by a sum of two Gaussian distributions with the same centre positions but different widths. The LSF full width at half maximum (FWHM) was 1.7-3.9 mm depending on the modulation device. No significant difference was observed in the peak widths at 6 and 15 MV. In most cases, the peak was wider along the X axis than along the Y axis. The power spectrum of the useful signal had its maximum near zero frequency, its 50 % level near 0.09 mm⁻¹, and a high-frequency limit of about 0.4 mm⁻¹. Above this value, only the spectrum of the statistical noise was recorded, uniformly distributed over frequency. Conclusion: The obtained LSF peak widths of 1.7-3.9 mm characterize the dose-modulation capability of the considered devices or their combination, which can matter for the treatment of small targets (less than 3-4 cm) where these spatial-resolution limits can be reached. The obtained frequency-domain relationships can be used for the optimal removal of statistical noise from profiles or two-dimensional dose distributions with Wiener filters.
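A hedged sketch of the signal-processing chain described above (ESF differentiation to get the LSF, Hann windowing, FFT, power spectral density). The analytic edge below is a placeholder for the measured film penumbra, and the sampling step is back-computed from the quoted 1.42 mm⁻¹ Nyquist frequency.

```python
import numpy as np

dx = 0.352                    # mm; step implied by the 1.42 mm^-1 Nyquist frequency
x = np.arange(-20, 20, dx)    # mm, position across the field edge

# placeholder ESF: a smooth edge standing in for the film penumbra data
esf = 0.5 * (1 + np.tanh(x / 1.5))

lsf = np.gradient(esf, dx)            # the LSF is the derivative of the ESF
window = np.hanning(len(lsf))         # Hann window to minimize spectral leakage
psd = np.abs(np.fft.rfft(lsf * window)) ** 2
freqs = np.fft.rfftfreq(len(lsf), d=dx)  # cycles per mm

print(f"Nyquist frequency: {freqs.max():.2f} mm^-1")  # ~1.42 mm^-1
```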


Author(s):  
Yoann Launay ◽  
Jean-Michel Gillet

This article retraces the different methods that have been explored to account for atomic thermal motion in the reconstruction of one-electron reduced density matrices from experimental X-ray structure factors (XSF) and directional Compton profiles (DCP). Attention has been paid to proposing the simplest possible model that obeys the necessary N-representability conditions while accurately reproducing all available experimental data. The deconvolution of thermal effects makes it possible to obtain an experimental static density matrix, which can be compared directly with a theoretical 1-RDM (reduced density matrix). It is found that above a 1% statistical noise level the role played by the Compton scattering data becomes negligible and no accurate 1-RDM is reachable. Since no thermal 1-RDM is available as a reference, the quality of an experimentally derived temperature-dependent matrix is difficult to assess. However, the accuracy of the obtained static 1-RDM, through the performance of the refined observables, is strong evidence that the semidefinite programming method is robust and well adapted to the reconstruction of an experimental dynamical 1-RDM.
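For readers unfamiliar with the semidefinite side, here is a minimal sketch of the ensemble N-representability constraint set (0 ≼ Γ ≼ I, Tr Γ = N) fitted to a noisy target, using cvxpy as an assumed stand-in for the authors' solver; the target matrix and sizes are hypothetical.

```python
import cvxpy as cp
import numpy as np

n_orb, n_elec = 6, 2
rng = np.random.default_rng(0)

# hypothetical "measured" Hermitian matrix standing in for refined observables
target = rng.normal(size=(n_orb, n_orb))
target = (target + target.T) / 2

gamma = cp.Variable((n_orb, n_orb), symmetric=True)
constraints = [
    gamma >> 0,                     # Gamma positive semidefinite
    np.eye(n_orb) - gamma >> 0,     # occupation numbers bounded by 1
    cp.trace(gamma) == n_elec,      # fixed electron number
]
prob = cp.Problem(cp.Minimize(cp.norm(gamma - target, "fro")), constraints)
prob.solve()
# eigenvalues (natural occupations) land in [0, 1] and sum to n_elec
print(np.round(np.linalg.eigvalsh(gamma.value), 3))
```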


PeerJ ◽  
2021 ◽  
Vol 9 ◽  
pp. e11856
Author(s):  
Boudewijn F. Roukema

The noise in daily infection counts of an epidemic should be super-Poissonian due to intrinsic epidemiological and administrative clustering. Here, we use this clustering to classify the official national SARS-CoV-2 daily infection counts and check for infection counts that are unusually anti-clustered. We adopt a one-parameter model of $\phi_i^{\prime}$ infections per cluster, dividing any daily count $n_i$ into $n_i/\phi_i^{\prime}$ ‘clusters’, for ‘country’ $i$. We assume that $n_i/\phi_i^{\prime}$ on a given day $j$ is drawn from a Poisson distribution whose mean is robustly estimated from the four neighbouring days, and calculate the inferred Poisson probability $P_{ij}^{\prime}$ of the observation. The $P_{ij}^{\prime}$ values should be uniformly distributed. We find the value $\phi_i$ that minimises the Kolmogorov–Smirnov distance from a uniform distribution. We investigate the $(\phi_i, N_i)$ distribution, for total infection count $N_i$. We consider consecutive count sequences above a threshold of 50 daily infections. We find that most of the daily infection count sequences are inconsistent with a Poissonian model; most are consistent with the $\phi_i$ model. The 28-, 14- and 7-day least noisy sequences for several countries are best modelled as sub-Poissonian, suggesting a distinct epidemiological family. The 28-day least noisy sequence of Algeria has a preferred model that is strongly sub-Poissonian, with $\phi_i^{28} < 0.1$. Tajikistan, Turkey, Russia, Belarus, Albania, the United Arab Emirates and Nicaragua have preferred models that are also sub-Poissonian, with $\phi_i^{28} < 0.5$. A statistically significant ($P_{\tau} < 0.05$) correlation was found between the lack of media freedom in a country, as represented by a high Reporters sans frontières Press Freedom Index (PFI2020), and the lack of statistical noise in the country’s daily counts. The $\phi_i$ model appears to be an effective detector of suspiciously low statistical noise in national SARS-CoV-2 daily infection counts.
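A hedged sketch of the clustering statistic as I read it from the abstract: divide daily counts by a trial ϕ, estimate each day's Poisson mean from the four neighbouring days, and score the uniformity of the resulting Poisson tail probabilities with a Kolmogorov–Smirnov distance. The robust estimator (median) and edge handling below are my assumptions, not taken from the paper.

```python
import numpy as np
from scipy import stats

def ks_distance(counts, phi):
    """KS distance from uniformity of per-day Poisson tail probabilities,
    for a trial cluster size phi (sketch of the abstract's phi model)."""
    scaled = np.asarray(counts, float) / phi
    p_vals = []
    for j in range(2, len(scaled) - 2):
        # robust Poisson-mean estimate from the four neighbouring days
        mu = np.median(scaled[[j - 2, j - 1, j + 1, j + 2]])
        p_vals.append(stats.poisson.sf(scaled[j] - 1, mu))  # P(X >= n_j/phi)
    return stats.kstest(p_vals, "uniform").statistic

rng = np.random.default_rng(0)
counts = rng.poisson(120, size=60)  # synthetic daily infection counts
phis = np.linspace(0.2, 5.0, 49)
best_phi = min(phis, key=lambda f: ks_distance(counts, f))
print("phi minimising the KS distance:", round(best_phi, 2))
```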


2021 ◽  
Vol 92 (7) ◽  
pp. 073901
Author(s):  
Younsik Kim ◽  
Dongjin Oh ◽  
Soonsang Huh ◽  
Dongjoon Song ◽  
Sunbeom Jeong ◽  
...  

2021 ◽  
Vol 8 (1) ◽  
Author(s):  
Masatoshi Morimoto ◽  
Nobuyuki Kudomi ◽  
Yukito Maeda ◽  
Takuya Kobata ◽  
Akihiro Oishi ◽  
...  

Abstract Background The amount of signal decreases when the acquisition duration is shortened, but it is not clear how this affects quantitative values. This study aims to clarify the effect of shortening the acquisition time on the quantitative values in brain tumor PET/CT with 11C-methionine. Method This study was a retrospective analysis of 30 patients who underwent clinical 11C-methionine PET/CT examination. PET data were acquired in list mode for 10 min, and PET images with acquisition durations from 1 to 10 min in 1-min steps were reconstructed. We examined the effect of acquisition duration on the quantitative values. We placed a volume of interest covering the entire tumor and, as normal tissue, regions of interest shaped as large crescents in the contralateral hemisphere on 5 contiguous axial slices. The quantitative values examined were the maximum, peak, and mean standardized uptake values (SUVmax, SUVpeak, SUVmean), metabolic tumor volume (MTV), and maximum tumor-to-normal-tissue ratio (TNRmax), with each duration compared to 10 min. Results SUVmax, MTV, and TNRmax showed their highest values at an acquisition time of 1 min, due to the effects of statistical noise, and were stable when the acquisition duration was > 6 min. SUVpeak and SUVmean were mostly consistent regardless of duration. Conclusions SUVmax, MTV, and TNRmax are affected by acquisition time; with an acquisition duration > 6 min, their fluctuation could be kept within 5%. SUVpeak was suggested to be a robust index regardless of the acquisition duration.
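To make the three SUV summary statistics concrete, here is a schematic sketch on a hypothetical voxel array (not the clinical workflow): SUVmax is a single hottest voxel and hence noise-sensitive, SUVmean averages noise away, and SUVpeak averages a small neighbourhood around the maximum, which is why it sits between the two in robustness. The 3×3×3 cube below is my simplification of the usual ~1 cm³ spherical peak region.

```python
import numpy as np

rng = np.random.default_rng(0)
# hypothetical tumor VOI of SUV values
tumor_voi = rng.gamma(shape=4.0, scale=0.8, size=(20, 20, 20))

suv_max = tumor_voi.max()    # single hottest voxel: sensitive to noise
suv_mean = tumor_voi.mean()  # averages over the whole VOI: robust

# SUVpeak: mean over a small neighbourhood around the hottest voxel
i, j, k = np.unravel_index(tumor_voi.argmax(), tumor_voi.shape)
region = tuple(slice(max(c - 1, 0), c + 2) for c in (i, j, k))
suv_peak = tumor_voi[region].mean()

print(f"SUVmax={suv_max:.2f}  SUVpeak={suv_peak:.2f}  SUVmean={suv_mean:.2f}")
```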


Author(s):  
Patricio J. Valades-Pelayo ◽  
Manuel A. Ramirez-Cabrera

Abstract This manuscript analyzes the suitability of a recently proposed numerical method, the First-Order Scattering Method (FOS), for describing radiation transfer in a Solar Compound Parabolic Collector Photoreactor (CPCP). The study considers five different irradiance conditions, ranging from fully diffuse to fully direct solar radiation with rays angled at 90° and 45°. Three photocatalysts at different loadings were considered, Evonik P25, Graphene Oxide, and Goethite, selected for (1) their relevance in photocatalytic applications and (2) the availability of their optical transport properties in the open literature. The study shows that the method is efficient and free of statistical noise, while its accuracy is not affected by the complexity of the boundary conditions. The method’s accuracy is very high for photocatalysts with low to moderate albedos, such as Goethite and Graphene Oxide, displaying Normalized Absolute Mean Errors below 3%, i.e., comparable to the Monte Carlo (MC) method’s statistical fluctuations.


PeerJ ◽  
2021 ◽  
Vol 9 ◽  
pp. e10700
Author(s):  
Nikolas S. Williams ◽  
Genevieve M. McArthur ◽  
Nicholas A. Badcock

Background The use of consumer-grade electroencephalography (EEG) systems for research purposes has become more prevalent. In event-related potential (ERP) research, it is critical that these systems have precise and accurate timing. The aim of the current study was to investigate the timing reliability of event-marking solutions used with Emotiv commercial EEG systems. Method We conducted three experiments. In Experiment 1 we established a jitter threshold (i.e. the point at which jitter made an event-marking method unreliable). To do this, we introduced statistical noise into the temporal positions of the event marks of a pre-existing ERP dataset (recorded with a research-grade system, Neuroscan SynAmps2, at 1,000 Hz using parallel-port event-marking) and calculated the level at which the waveform peaks differed statistically from the original waveform. In Experiment 2 we established a method to identify ‘true’ events (i.e. when an event should appear in the EEG data). We did this by inserting 1,000 events into Neuroscan data using a custom-built event-marking system, the ‘Airmarker’, which marks events by triggering voltage spikes in two EEG channels. We used the lag between Airmarker events and events generated by Neuroscan as a reference for comparisons in Experiment 3. In Experiment 3 we measured the precision and accuracy of three types of Emotiv event-marking by generating 1,000 events, 1 s apart. We measured precision as the variability (standard deviation in ms) of Emotiv events, and accuracy as the mean difference between Emotiv events and true events. The three triggering methods we tested were: (1) parallel-port-generated TTL triggers; (2) Arduino-generated TTL triggers; and (3) serial-port triggers. In Methods 1 and 2 we used an auxiliary device, the Emotiv Extender, to incorporate triggers into the EEG data. We tested these event-marking methods across three configurations of Emotiv EEG systems: (1) Emotiv EPOC+ sampling at 128 Hz; (2) Emotiv EPOC+ sampling at 256 Hz; and (3) Emotiv EPOC Flex sampling at 128 Hz. Results In Experiment 1 we found that the smaller P1 and N1 peaks were attenuated at lower levels of jitter than the larger P2 peak (21 ms, 16 ms, and 45 ms for P1, N1, and P2, respectively). In Experiment 2, we found an average lag of 30.96 ms for Airmarker events relative to Neuroscan events. In Experiment 3, we found some lag in all configurations; however, all configurations exhibited precision of less than a single sample, with serial-port marking the most precise when paired with EPOC+ sampling at 256 Hz. Conclusion All Emotiv event-marking methods and configurations that we tested were precise enough for ERP research, as the precision of each method would provide ERP waveforms statistically equivalent to those of a research-standard system. Though all systems exhibited some level of inaccuracy, researchers could easily account for this during data processing.
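A hedged simulation sketch of Experiment 1's logic on a synthetic ERP (not the Neuroscan dataset): jittering event marks smears the trial average, so narrow peaks (P1, N1) attenuate at lower jitter levels than the broader P2, matching the ordering of thresholds reported above. The toy waveform shapes and amplitudes are my assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 1000                          # Hz, sampling rate
t = np.arange(-0.1, 0.5, 1 / fs)   # epoch time axis, seconds

def erp(t):
    """Toy ERP: narrow P1 and N1 peaks plus a broader P2."""
    g = lambda mu, sd, amp: amp * np.exp(-0.5 * ((t - mu) / sd) ** 2)
    return g(0.10, 0.010, 2.0) + g(0.17, 0.012, -2.5) + g(0.25, 0.040, 3.0)

for jitter_ms in (0, 8, 16, 24, 48):
    shifts = rng.normal(0, jitter_ms / 1000, size=200)  # per-trial mark error
    avg = np.stack([erp(t - s) for s in shifts]).mean(axis=0)
    # the N1 trough (narrow) collapses faster than the P2 peak (broad)
    print(f"jitter {jitter_ms:2d} ms -> P2 peak {avg.max():.2f} "
          f"(no-jitter 3.00), N1 trough {avg.min():.2f} (no-jitter -2.50)")
```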

