precision error
Recently Published Documents

TOTAL DOCUMENTS: 122 (five years: 42)
H-INDEX: 15 (five years: 4)

2021 ◽  
Vol 14 (12) ◽  
pp. 7999-8017
Author(s):  
Siraput Jongaramrungruang ◽  
Georgios Matheou ◽  
Andrew K. Thorpe ◽  
Zhao-Cheng Zeng ◽  
Christian Frankenberg

Abstract. Methane (CH4) is the second most important anthropogenic greenhouse gas with a significant impact on radiative forcing, tropospheric air quality, and stratospheric water vapor. Remote sensing observations enable the detection and quantification of local methane emissions across large geographical areas, which is a critical step for understanding local flux distributions and subsequently prioritizing mitigation strategies. Obtaining methane column concentration measurements with low noise and minimal surface interference has direct consequences for accurately determining the location and emission rates of methane sources. The quality of retrieved column enhancements depends on the choices of the instrument and retrieval parameters. Here, we studied the changes in precision error and bias as a result of different spectral resolutions, instrument optical performance, and detector exposure times by using a realistic instrument noise model. In addition, we formally analyzed the impact of spectrally complex surface albedo features on retrievals using the iterative maximum a posteriori differential optical absorption spectroscopy (IMAP-DOAS) algorithm. We built an end-to-end modeling framework that can simulate observed radiances from reflected solar irradiance through a simulated CH4 plume over several natural and artificial surfaces. Our analysis shows that complex surface features can alias into retrieved methane abundances, explaining the existence of retrieval biases in current airborne methane observations. The impact can be mitigated with higher spectral resolution and a larger polynomial degree to approximate surface albedo variations. 
Using a spectral resolution of 1.5 nm, an exposure time of 20 ms, and a polynomial degree of 25, a retrieval precision error below 0.007 mole m−2 or 1.0 % of total atmospheric CH4 column can be achieved for high albedo cases, while minimizing the bias due to surface interference such that the noise is uncorrelated among various surfaces. At coarser spectral resolutions, it becomes increasingly harder to separate complex surface albedo features from atmospheric absorption features. Our modeling framework provides the basis for assessing tradeoffs for future remote sensing instruments and algorithmic designs. For instance, we find that improving the spectral resolution beyond 0.2 nm would actually decrease the retrieval precision, as detector readout noise will play an increasing role. Our work contributes towards building an enhanced monitoring system that can measure CH4 concentration fields to determine methane sources accurately and efficiently at scale.
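The role of the surface-albedo polynomial described above can be sketched in a toy fit. This is a hypothetical illustration, not the authors' IMAP-DOAS code: a low-order polynomial absorbs the smooth surface structure in the fit window so that mainly the narrow gas absorption feature remains in the residual; all wavelengths, amplitudes, and noise levels are invented.

```python
import numpy as np

# Hypothetical sketch: smooth surface albedo is approximated by a
# low-order polynomial, leaving the narrow CH4-like absorption feature
# in the fit residual. All values below are invented illustration data.
rng = np.random.default_rng(0)
wl = np.linspace(2100.0, 2450.0, 500)                  # nm, SWIR fit window
x = 2.0 * (wl - wl[0]) / (wl[-1] - wl[0]) - 1.0        # scale to [-1, 1]

albedo = 0.30 + 0.05 * x + 0.02 * x**2                 # smooth surface term
absorption = 0.04 * np.exp(-((wl - 2300.0) / 2.0)**2)  # narrow CH4-like line
spectrum = albedo * np.exp(-absorption)
spectrum += rng.normal(0.0, 2e-4, wl.size)             # instrument noise

# Approximate the smooth albedo contribution with a Legendre polynomial.
coeffs = np.polynomial.legendre.legfit(x, spectrum, deg=4)
baseline = np.polynomial.legendre.legval(x, coeffs)
residual = spectrum - baseline

# The narrow absorption feature dominates the residual near its centre;
# surface structure sharper than the polynomial can follow would alias
# into this residual and bias the retrieved column.
line_position = wl[np.argmin(residual)]
```

A higher polynomial degree (the abstract uses degree 25) lets the baseline follow more complex surface spectra, at the cost of starting to absorb broad parts of the absorption signal itself.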


2021 ◽  
Vol 27 (6) ◽  
pp. 627-636
Author(s):  
Mikael Seabra Moraes ◽  
Priscila Custódio Martins ◽  
Diego Augusto Santos Silva

ABSTRACT Introduction: Bone mineral density (BMD) and bone mineral content (BMC) vary depending on the type of sport practiced and the body region, and their measurement can be an effective way to predict health risks throughout an athlete’s life. Objective: To describe the methodological aspects (measurement of bone parameters, body regions, precision errors and covariates) and to compare BMD and BMC by body region (total body, upper limbs, lower limbs and trunk) among university athletes practicing different sports. Methods: A search was performed on the databases PubMed, Web of Science, Scopus, ScienceDirect, EBSCOhost, SportDiscus, LILACS and SciELO. Studies were selected that: (1) compared the BMD and BMC of athletes practicing at least two different sports; (2) used dual-energy X-ray absorptiometry (DXA) to assess bone parameters; and (3) focused on university athletes. The extracted data were: place of study, participant selection, participants’ sex, sport practiced, type of study, bone parameters, DXA model, software used, scan and body regions, precision error, precision protocol, covariates, and comparison of bone parameters between different sports by body region. Results: The main results were: 1) BMD is the most investigated bone parameter; 2) the total body, lumbar spine and proximal femur (mainly the femoral neck) are the most studied body regions; 3) although not recommended, the coefficient of variation is the main indicator of precision error; 4) total body mass and height are the most commonly used covariates; 5) swimmers and runners have lower BMD and BMC values; and 6) it is speculated that basketball players and gymnasts have greater osteogenic potential. Conclusions: Swimmers and runners should include weight-bearing exercises in their training routines. In addition to body mass and height, other covariates are important when comparing bone parameters between athletes.
The results of this review can help guide intervention strategies focused on preventing diseases and health problems during and after the athletic career. Level of evidence II; Systematic Review.
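The precision indicators mentioned in the review can be made concrete with a small sketch. This is invented illustration data, not from any study in the review: the per-subject coefficient of variation (%CV) is the simple indicator the abstract says is most used, while densitometry guidelines favour a root-mean-square average across subjects (RMS %CV).

```python
import numpy as np

# Hypothetical sketch: short-term precision of repeated DXA BMD scans.
# Each row is one subject's repeat scans (g/cm^2); values are invented.
scans = np.array([
    [1.051, 1.063, 1.058],   # subject 1
    [0.982, 0.975, 0.980],   # subject 2
    [1.210, 1.198, 1.205],   # subject 3
])

# Simple per-subject coefficient of variation, in percent.
cv_percent = scans.std(axis=1, ddof=1) / scans.mean(axis=1) * 100

# Root-mean-square average across subjects, the recommended summary.
rms_cv = np.sqrt(np.mean(cv_percent**2))
```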


2021 ◽  
Author(s):  
Patrick Weber ◽  
Andreas Petzold ◽  
Oliver Felix Bischof ◽  
Benedikt Fischer ◽  
Marcel Berg ◽  
...  

Abstract. Aerosol intensive optical properties like the Ångström exponents for aerosol light extinction, scattering and absorption, or the single-scattering albedo, are indicators of aerosol size distribution, chemical composition and radiative behaviour, and also contain source information. The observation of these parameters requires the measurement of aerosol optical properties at multiple wavelengths, which usually implies the use of several instruments. Our study aims to quantify the uncertainties in the determination of multiple-wavelength intensive properties by an optical closure approach, using different test aerosols. In our laboratory study, we measured the full set of aerosol optical properties for a range of light-absorbing aerosols with different properties, mixed externally with ammonium sulphate to generate aerosols of controlled single-scattering albedo. The investigated aerosol types were: fresh combustion soot emitted by an inverted flame soot generator (SOOT, fractal aggregates), Aquadag (AQ, spherical shape), Cabot industrial soot (BC, compact clusters), and an acrylic paint (Magic Black, MB). One focus was on the validity of the Differential Method (DM: absorption = extinction minus scattering) for the determination of Ångström exponents for different particle loads and mixtures of light-absorbing aerosol with ammonium sulphate, in comparison to data obtained from single instruments. The instruments used in this study were two CAPS PMssa (Cavity Attenuated Phase Shift Single Scattering Albedo, λ = 450, 630 nm) for light extinction and scattering coefficients, one Integrating Nephelometer (λ = 450, 550, 700 nm) for the light scattering coefficient and one Tricolour Absorption Photometer (TAP, λ = 467, 528, 652 nm) for filter-based light absorption coefficient measurements. 
Our key finding is that the coefficients of light absorption σap, scattering σsp and extinction σep from the Differential Method agree with data from single reference instruments, and the slopes of the regression lines equal unity within the precision error. We found, however, that the relative precision error of the DM exceeds 100 % for σap values lower than 10–20 Mm−1 at atmospherically relevant single-scattering albedo. This increasing uncertainty with decreasing σap yields an absorption Ångström exponent (AAE) that is too uncertain for measurements in the range of atmospheric aerosol loadings. We therefore recommend using the DM to determine AAE values only for σap > 50 Mm−1. Ångström exponents for scattering and extinction are reliable for extinction coefficients from 20 up to 1000 Mm−1 and stay within 10 % deviation from the reference instruments, regardless of the chosen method. Single-scattering albedo (SSA) values at 450 nm and 630 nm agree with the reference method σsp (NEPH)/σep (CAPS PMssa) to within 10 % for all instrument combinations and sampled aerosol types, fulfilling the 10 % measurement-uncertainty goal proposed by Laj et al. (2020) for GCOS (Global Climate Observing System) applications.
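The DM's uncertainty amplification described above follows directly from taking a small difference of two large measured quantities. The sketch below is an illustration with assumed wavelengths, coefficients and noise levels, not the study's data.

```python
import numpy as np

# Hypothetical sketch of the Differential Method (DM): the absorption
# coefficient is the difference of two large measured quantities,
# sigma_ap = sigma_ep - sigma_sp, so its relative uncertainty grows
# as sigma_ap shrinks. All numbers below are assumed illustration values.
def absorption_dm(sigma_ep, sigma_sp):
    """Absorption coefficient (Mm^-1) from extinction minus scattering."""
    return sigma_ep - sigma_sp

def angstrom_exponent(sigma_1, sigma_2, wl_1, wl_2):
    """Angstrom exponent from coefficients at two wavelengths (nm)."""
    return -np.log(sigma_1 / sigma_2) / np.log(wl_1 / wl_2)

# Example: high single-scattering albedo aerosol at 450 and 630 nm.
sigma_ep = np.array([120.0, 90.0])   # extinction, Mm^-1
sigma_sp = np.array([108.0, 82.0])   # scattering, Mm^-1
sigma_ap = absorption_dm(sigma_ep, sigma_sp)

aae = angstrom_exponent(sigma_ap[0], sigma_ap[1], 450.0, 630.0)

# With 1 % independent noise on extinction and scattering, the relative
# error on sigma_ap is roughly 0.01 * hypot(sigma_ep, sigma_sp) / sigma_ap,
# i.e. already above 10 % here even though each input is known to 1 %.
rel_err = 0.01 * np.hypot(sigma_ep, sigma_sp) / sigma_ap
```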


2021 ◽  
Vol 67 (9) ◽  
pp. 433-444
Author(s):  
Youyu Liu ◽  
Yi Li ◽  
Xuyou Zhang ◽  
Bo Chen

To suppress the chattering of manipulators under heavy-load operations, a control method called fuzzy equivalence and terminal sliding mode (FETSM) was applied to the trajectory tracking of motion curves for manipulators. Based on the switching term of the equivalent sliding mode (ESM), a fuzzy parameter matrix processed by simple fuzzy rules was introduced to obtain a fuzzy switching term. By summing this fuzzy switching term and the equivalent term of the equivalence and terminal sliding mode (ETSM), the FETSM control law for manipulators was obtained. On this basis, the stability of the system was analysed and its finite arrival time was deduced. On the premise of ensuring system stability, fuzzy rules and membership functions were designed for the fuzzy constants in the fuzzy switching term. Simulation tests show that the proposed FETSM ensures sufficient trajectory-tracking precision, error convergence speed, and robustness. Compared with the ETSM, the proposed FETSM reduces the chattering time by 94.75 % on average; compared with the proportional-integral-derivative (PID) control method, it reduces the maximum chattering amplitude by at least 99.21 %. The proposed FETSM is therefore suitable for manipulators under heavy-load operations.
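The chattering mechanism that the FETSM targets can be shown on a toy system. This generic sketch is not the paper's FETSM: for a double integrator, a sliding surface with a hard sign() switching term chatters, while a smoothed switching term (a crude stand-in for the fuzzy softening idea) suppresses it; all gains are assumed values.

```python
import numpy as np

# Generic sliding-mode illustration (not the paper's FETSM): regulate a
# double integrator x'' = u to the origin with surface s = de + lam*e.
# A hard sign() switching term chatters; a smoothed term does not.
def simulate(switch, dt=1e-3, T=5.0, lam=5.0, k=20.0):
    x, v = 1.0, 0.0                    # start away from the target
    us = []
    for _ in range(int(T / dt)):
        e, de = x, v                   # tracking error w.r.t. x_d = 0
        s = de + lam * e               # sliding surface
        u = -lam * de - k * switch(s)  # equivalent term + switching term
        v += u * dt                    # semi-implicit Euler step
        x += v * dt
        us.append(u)
    return np.array(us)

u_sign = simulate(np.sign)                        # hard switching
u_smooth = simulate(lambda s: np.tanh(s / 0.05))  # smoothed switching

# Chattering measured as the total variation of the control signal.
chatter_sign = np.abs(np.diff(u_sign)).sum()
chatter_smooth = np.abs(np.diff(u_smooth)).sum()
```

On the sliding phase the hard controller flips its switching term nearly every step, so its total variation is orders of magnitude larger than the smoothed controller's.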


Informatics ◽  
2021 ◽  
Vol 8 (3) ◽  
pp. 54
Author(s):  
Constantinos Chalatsis ◽  
Constantin Papaodysseus ◽  
Dimitris Arabadjis ◽  
Athanasios Rafail Mamatsis ◽  
Nikolaos V. Karadimas

A first aim of the present work is the determination of the actual sources of “finite precision error” generation and accumulation in two important algorithms: Bernoulli’s map and the folded Baker’s map. These two computational schemes attract the attention of a growing number of researchers in connection with a wide range of applications. However, both Bernoulli’s and Baker’s maps, when implemented on a contemporary computing machine, suffer from a serious numerical error due to the finite word length, which causes both algorithms to fail after a relatively small number of iterations. In the present manuscript, novel methods for eliminating this numerical error are presented. The introduced approach succeeds in executing Bernoulli’s map and the folded Baker’s map on a computing machine for many hundreds of thousands of iterations, offering results practically free of finite precision error. These techniques are based on the determination and understanding of the substantial sources of finite precision (round-off) error, which is generated and accumulated in these two important chaotic maps.
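The failure mode described above is easy to reproduce. The sketch below illustrates the phenomenon, not the paper's correction method: in double precision the Bernoulli map shifts the finite 53-bit significand until nothing is left, while exact rational arithmetic keeps the orbit alive.

```python
from fractions import Fraction

# Illustration of the failure (not the paper's remedy): the Bernoulli
# map x -> 2x mod 1 shifts the binary expansion of x left by one bit
# per iteration. A double has a 53-bit significand, so the floating-
# point orbit collapses to exactly 0 after a few dozen iterations.
def bernoulli(x):
    return (2 * x) % 1

x = 0.1234567
steps = 0
while x != 0 and steps < 200:
    x = bernoulli(x)
    steps += 1
# steps is now well below 200: the chaotic orbit has died out.

# Exact rational arithmetic removes the round-off entirely: the same
# orbit computed with Fraction never collapses to 0.
y = Fraction(1234567, 10**7)
for _ in range(200):
    y = bernoulli(y)
```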


2021 ◽  
Vol 17 (7) ◽  
pp. 155014772110317
Author(s):  
Ershen Wang ◽  
Caimiao Sun ◽  
Chuanyun Wang ◽  
Pingping Qu ◽  
Yufeng Huang ◽  
...  

In this article, we propose a new particle swarm optimization-based satellite selection algorithm for BeiDou Navigation Satellite System/Global Positioning System receivers, which aims to reduce the computational complexity of receivers under the multi-constellation Global Navigation Satellite System. The influences of the key parameters of the algorithm, such as the inertia weighting factor, acceleration coefficient, and population size, on the performance of the particle swarm optimization satellite selection algorithm are discussed herein. In addition, the algorithm is improved using the adaptive simulated annealing particle swarm optimization (ASAPSO) approach to prevent convergence to a local minimum. The new approach takes advantage of the adaptive adjustment of the evolutionary parameters and particle velocity, improving its ability to escape local extrema. The theoretical derivations are discussed, and the experiments are validated using 3 h of real Global Navigation Satellite System observation data. The results show that the geometric dilution of precision error of the ASAPSO satellite selection algorithm is about 86 % smaller than that of the greedy satellite selection algorithm and about 80 % smaller than that of the particle swarm optimization satellite selection algorithm. In addition, the ASAPSO algorithm selects the satellite subset with the minimum geometric dilution of precision faster than the traditional traversal algorithm and the particle swarm optimization algorithm. Therefore, the proposed ASAPSO algorithm reduces the satellite selection time and improves the geometric dilution of precision of the selected satellites.
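The objective these selection algorithms minimize is the standard GDOP. The sketch below shows the quantity, not the ASAPSO search itself; the satellite directions are invented.

```python
import numpy as np

# Geometric dilution of precision from the geometry matrix G of
# receiver-to-satellite unit vectors plus a clock column:
# GDOP = sqrt(trace((G^T G)^-1)). Satellite selection searches for the
# visible-satellite subset minimizing this value.
def gdop(los):
    """los: (n, 3) array of receiver-to-satellite unit vectors."""
    G = np.hstack([los, np.ones((len(los), 1))])
    return float(np.sqrt(np.trace(np.linalg.inv(G.T @ G))))

# Four well-spread satellites: one overhead, three low, 120 deg apart...
spread = np.array([
    [ 0.00,  0.00, 1.00],
    [ 0.90,  0.00, 0.44],
    [-0.45,  0.78, 0.44],
    [-0.45, -0.78, 0.44],
])
# ...versus the same satellites dragged toward the zenith (clustered sky).
clustered = spread.copy()
clustered[1:] = 0.7 * spread[0] + 0.3 * spread[1:]
clustered /= np.linalg.norm(clustered, axis=1, keepdims=True)
# A clustered geometry dilutes precision far more than a spread one.
```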


2021 ◽  
Author(s):  
Tim Brandes ◽  
Stefano Scarso ◽  
Christian Koch ◽  
Stephan Staudacher

Abstract A numerical experiment of intentionally reduced complexity is used to demonstrate a method to classify flight missions in terms of the operational severity experienced by the engines. In this proof of concept, the general term severity is limited to the erosion of the leading edges of the core-flow compressor blades and vanes. A Monte Carlo simulation of varying operational conditions generates the required database of 10,000 flight missions, each sampled at a rate of 1 Hz. Eleven measurable or synthesizable physical parameters are deemed relevant to the problem. They are reduced to seven universal non-dimensional groups, which are averaged for each flight. Principal component analysis allows a further reduction to three principal components, which are used to run a support-vector machine model to classify the flights. A linear kernel function is chosen for the support-vector machine due to its low computation time compared with other functions. The robustness of the classification approach against measurement precision error is evaluated. In addition, the minimum number of flights required for training and a sensible number of severity classes are documented, and the importance of training the algorithms on a sufficiently wide range of operations is presented.


2021 ◽  
Vol 22 (S6) ◽  
Author(s):  
Xuan Zhang ◽  
Yuansheng Liu ◽  
Zuguo Yu ◽  
Michael Blumenstein ◽  
Gyorgy Hutvagner ◽  
...  

Abstract Background Genomic reads from sequencing platforms contain random errors. Global correction algorithms have been developed, aiming to rectify all possible errors in the reads using generic genome-wide patterns. However, non-uniform sequencing depths hinder the global approach from conducting effective error removal. As some genes may be under-corrected or over-corrected by the global approach, we conduct instance-based error correction for short reads of disease-associated genes or pathways. The paramount requirement is to ensure that the relevant reads, instead of the whole genome, are error-free, which provides significant benefits for single-nucleotide polymorphism (SNP) or variant calling studies on the specific genes. Results To rectify possible errors in the short reads of disease-associated genes, our novel idea is to exploit local sequence features and statistics directly related to these genes. Extensive experiments were conducted in comparison with state-of-the-art methods on both simulated and real datasets of lung cancer-associated genes (including single-end and paired-end reads). The results demonstrate the superiority of our method, with the best performance on precision, recall and gain rate, as well as on sequence assembly results (e.g., N50, contig length and contig quality). Conclusion The instance-based strategy makes it possible to explore fine-grained patterns focusing on specific genes, providing high-precision error correction and convincing gene sequence assembly. SNP case studies show that errors occurring at some traditional SNP areas can be accurately corrected, providing high precision and sensitivity for investigations of disease-causing point mutations.
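The three evaluation metrics named above have standard definitions in read error correction, sketched below with invented counts (not the paper's results): TP counts errors correctly fixed, FP new errors introduced by the corrector, FN errors left in the reads.

```python
# Hypothetical sketch of the evaluation metrics for read error
# correction. The gain rate summarizes whether the corrector removed
# more errors than it introduced; the counts below are invented.
def correction_metrics(tp, fp, fn):
    precision = tp / (tp + fp)   # fraction of corrections that are right
    recall = tp / (tp + fn)      # fraction of true errors that got fixed
    gain = (tp - fp) / (tp + fn) # net errors removed per true error
    return precision, recall, gain

p, r, g = correction_metrics(tp=950, fp=30, fn=50)
# A corrector that introduces many new errors can show high recall
# yet a low, even negative, gain.
```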


Mathematics ◽  
2021 ◽  
Vol 9 (11) ◽  
pp. 1199
Author(s):  
Constantin Papaodysseus ◽  
Dimitris Arabadjis ◽  
Fotios Giannopoulos ◽  
Athanasios Rafail Mamatsis ◽  
Constantinos Chalatsis

In the present paper, a novel approach is introduced for the study, estimation and exact tracking of the finite precision error generated and accumulated during any number of multiplications. It is shown that, as a rule, this operation is very “toxic”, in the sense that it may force the accumulated finite precision error to grow arbitrarily large, under specific conditions fully described here. First, an ensemble of definitions of general applicability is given for the rigorous determination of the number of erroneous digits accumulated in any quantity of an arbitrary algorithm. Next, the exact number of erroneous digits produced in a single multiplication is given as a function of the involved operands, together with formulae offering the corresponding probabilities. When the statistical properties of these operands are known, the aforementioned probabilities can be evaluated exactly. Subsequently, the statistical properties of the finite precision error accumulated during any number of successive multiplications are explicitly analyzed, and a method for the exact tracking of this accumulated error is presented, together with associated theorems. Moreover, numerous dedicated experiments are developed, and the corresponding results fully support the theoretical analysis. Finally, a number of possible applications are proposed, all based on the methodology and results introduced in the present work. The proposed methodology is expandable, so as to tackle round-off error analysis in all arithmetic operations.
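The phenomenon the paper analyzes can be observed directly by comparing a long floating-point product against exact arithmetic. The sketch below illustrates the accumulation, not the paper's tracking formulae; the factors are invented.

```python
from fractions import Fraction
import math

# Illustration: compare a long double-precision product against the
# same product in exact rational arithmetic, and estimate how many
# decimal digits of the float result are correct.
factors = [1 + Fraction(1, 10**6 + k) for k in range(2000)]

exact = Fraction(1)
approx = 1.0
for f in factors:
    exact *= f            # exact rational product
    approx *= float(f)    # one round-off per conversion and multiply

rel_err = abs(approx - float(exact)) / float(exact)
# Roughly -log10(rel_err) decimal digits of the float product are
# correct; the error grows with the number of multiplications performed.
digits_correct = -math.log10(rel_err) if rel_err else float("inf")
```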

