ADAPTIVE GUARANTEED ESTIMATION OF A CONSTANT SIGNAL UNDER UNCERTAINTY OF MEASUREMENT ERRORS

Author(s):  
D.V. Khadanovich ◽
V.I. Shiryaev

In guaranteed estimation problems under uncertainty with respect to disturbances and measurement errors, admissible sets of their possible values are specified. The solution is chosen from the conditions of optimizing guaranteed bounded estimates corresponding to the worst-case realization of disturbances and measurement errors. The result of guaranteed estimation is an unimprovable bounded estimate (information set), which turns out to be overly pessimistic (over-cautious) if the a priori admissible set of measurement errors is too large compared with their realized values. On a short observation interval, the admissible sets of disturbances and measurement errors may be only rough upper bounds. The goal of the research is to improve the accuracy of guaranteed estimation when measurement errors are not realized in the worst way, i.e. the environment in which the object operates does not behave as aggressively as the a priori description of the admissible set of error values assumes. Research design. The problem of adaptive guaranteed estimation of a constant signal from noisy measurements is considered. The adaptive filtering problem is to choose, based on the results of measurement processing, the realization from the whole set of possible error realizations that would generate the observed measurement sequence. Results. An adaptive guaranteed estimation algorithm is presented. The adaptive algorithm is constructed with a multi-alternative method based on a bank of Kalman filters. The method uses a set of filters, each of which is tuned to a specific hypothesis about the measurement error model. Filter residuals are used to compute estimates of the realized measurement errors. The choice among possible error realizations is performed using a function that has the meaning of the residual variance over a short time interval. Conclusion. The computational scheme of the adaptive algorithm, a numerical example, and a comparative analysis of the obtained estimates are presented.
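As an illustration of the selection scheme described in the abstract, the following sketch runs a small bank of scalar Kalman filters, each tuned to a different measurement-noise variance hypothesis, and picks the hypothesis whose empirical residual variance over a short window best matches the assumed one. It is a minimal sketch in Python; the function and parameter names (filter_bank_estimate, r_hypotheses, window) and the numerical values are illustrative assumptions, not the authors' algorithm or tuning.

```python
import numpy as np

def filter_bank_estimate(y, r_hypotheses, window=20, q=0.0):
    """Estimate a constant signal from noisy measurements y with a bank of
    scalar Kalman filters, each tuned to one measurement-noise variance
    hypothesis. The hypothesis whose empirical residual variance over the
    last `window` samples best matches its assumed variance is selected."""
    n, m = len(y), len(r_hypotheses)
    x = np.full(m, float(y[0]))          # state estimate per filter
    p = np.full(m, 1.0)                  # error covariance per filter
    residuals = np.zeros((m, n))

    for k in range(1, n):
        for i, r in enumerate(r_hypotheses):
            p[i] += q                    # constant signal: no dynamics (optional small q)
            gain = p[i] / (p[i] + r)     # Kalman gain for innovation covariance p + r
            residuals[i, k] = y[k] - x[i]
            x[i] += gain * residuals[i, k]
            p[i] *= (1.0 - gain)

    # score each hypothesis by the mismatch between the empirical residual
    # variance over a short window and the assumed measurement-noise variance
    start = max(1, n - window)
    emp_var = residuals[:, start:].var(axis=1)
    best = int(np.argmin(np.abs(emp_var - np.asarray(r_hypotheses, dtype=float))))
    return x[best], r_hypotheses[best]

# usage: constant signal 5.0, realized noise std 0.3, hypotheses with std 0.1/0.3/1.0
rng = np.random.default_rng(0)
y = 5.0 + 0.3 * rng.standard_normal(200)
est, r_sel = filter_bank_estimate(y, r_hypotheses=[0.01, 0.09, 1.0])
print(est, r_sel)
```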

2020 ◽  
Author(s):  
Frederik Tack ◽  
Alexis Merlaud ◽  
Marian-Daniel Iordache ◽  
Gaia Pinardi ◽  
Ermioni Dimitropoulou ◽  
...  

Abstract. Sentinel-5 Precursor (S-5P), launched in October 2017 and carrying the TROPOspheric Monitoring Instrument (TROPOMI) nadir-viewing spectrometer, is the first mission of the Copernicus Programme dedicated to the monitoring of air quality, climate, and ozone. In the presented study, the TROPOMI tropospheric nitrogen dioxide (NO2) L2 product (OFFL v1.03.01; 3.5 km × 7 km at nadir) has been validated over strongly polluted urban regions by comparison with coincident high-resolution Airborne Prism EXperiment (APEX) remote sensing observations (~75 m × 120 m). Satellite products can be optimally assessed with (APEX) airborne remote sensing observations, as a large number of satellite pixels can be fully mapped with high accuracy and in a relatively short time interval, reducing the impact of spatio-temporal mismatches. In the framework of the S5PVAL-BE campaign, the APEX imaging spectrometer was deployed during four mapping flights (26–29 June 2019) over the two largest urban regions in Belgium, i.e. Brussels and Antwerp, in order to map the horizontal distribution of tropospheric NO2. For each flight, 10 to 20 TROPOMI pixels were fully covered, with approximately 2800 to 4000 APEX measurements within each TROPOMI pixel. The TROPOMI and APEX NO2 vertical column density (VCD) retrieval schemes are similar in concept. Overall, for the ensemble of the four flights, the standard TROPOMI NO2 VCD product is well correlated (R = 0.92) but biased negatively by −1.2 ± 1.2 × 10^15 molec cm^-2, or −14 % ± 12 %, on average, with respect to coincident APEX NO2 retrievals. When the coarse 1° × 1° TM5-MP a priori NO2 profiles are replaced by NO2 profile shapes from the CAMS regional CTM ensemble at 0.1° × 0.1°, the slope increases by 11 % to 0.93, and the bias is reduced to −0.1 ± 1.0 × 10^15 molec cm^-2, or −1.0 % ± 12 %. When the absolute value of the difference is taken, the bias is 1.3 × 10^15 molec cm^-2 (16 %) and 0.7 × 10^15 molec cm^-2 (9 %) on average, when comparing APEX NO2 VCDs with TM5-MP-based and CAMS-based NO2 VCDs, respectively. Both sets of retrievals are well within the accuracy requirement of a maximum bias of 25–50 % for the TROPOMI tropospheric NO2 product for all individually compared pixels. Additionally, the APEX data set allows the study of TROPOMI subpixel variability and of the impact of signal smoothing due to the finite satellite pixel size, which is typically coarser than fine-scale gradients in the urban NO2 field. The underestimation of peak plume values and overestimation of urban background values in the TROPOMI data are on the order of 1–2 × 10^15 molec cm^-2 on average, or 10 %–20 %, in the case of an urban scene.
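The pixel-wise intercomparison statistics quoted above (correlation, regression slope, mean bias, mean absolute difference) can be computed along the following lines, assuming the APEX columns have already been averaged onto each TROPOMI pixel. This is a hedged sketch with hypothetical function names and synthetic numbers, not the study's actual processing code.

```python
import numpy as np

def compare_vcds(apex_vcd, tropomi_vcd):
    """Compare coincident NO2 vertical column densities (molec cm^-2):
    APEX values averaged over each TROPOMI pixel vs. the satellite values.
    Returns the kind of summary statistics reported in the paper
    (illustrative helper, not the authors' code)."""
    apex = np.asarray(apex_vcd, dtype=float)
    trop = np.asarray(tropomi_vcd, dtype=float)

    r = np.corrcoef(apex, trop)[0, 1]                 # Pearson correlation
    slope, intercept = np.polyfit(apex, trop, 1)      # ordinary least-squares fit
    diff = trop - apex
    bias = diff.mean()                                # mean bias (satellite - airborne)
    rel_bias = 100.0 * (diff / apex).mean()           # mean relative bias, %
    mad = np.abs(diff).mean()                         # mean absolute difference
    return {"R": r, "slope": slope, "intercept": intercept,
            "bias": bias, "rel_bias_%": rel_bias, "mad": mad}

# hypothetical example: 60 coincident pixels around 1e16 molec cm^-2
rng = np.random.default_rng(1)
apex = 1e16 + 3e15 * rng.standard_normal(60)
trop = 0.9 * apex + rng.normal(0, 1e15, 60)           # low-biased satellite values
print(compare_vcds(apex, trop))
```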


2020 ◽  
Author(s):  
Frederik Tack ◽  
Alexis Merlaud ◽  
Marian-Daniel Iordache ◽  
Gaia Pinardi ◽  
Ermioni Dimitropoulou ◽  
...  

Sentinel-5 Precursor (S-5P), launched in October 2017, is the first mission of the Copernicus Programme dedicated to the monitoring of air quality and climate. Its characteristics, such as the fine spatial resolution, introduce many new opportunities and challenges, requiring careful assessment of the quality and validity of the generated data products by comparison with independent reference observations.

In the presented study, the S-5P/TROPOMI tropospheric nitrogen dioxide (NO2) L2 product (3.5 × 7 km² at nadir) has been validated over strongly polluted urban regions by comparison with coincident high-resolution airborne remote sensing observations (~100 m²). Airborne imagers are able to map the horizontal distribution of tropospheric NO2, as well as its strong spatio-temporal variability, at high resolution and with high accuracy. Satellite products can be optimally assessed with airborne observations, as a large number of satellite pixels can be fully mapped in a relatively short time interval, reducing the impact of spatio-temporal mismatches. Additionally, such data sets allow the study of TROPOMI subpixel variability and of the impact of signal smoothing due to the finite satellite pixel size, typically coarser than fine-scale gradients in the urban NO2 field.

In the framework of the S5PVAL-BE campaign, the Airborne Prism EXperiment (APEX) imaging spectrometer was deployed during four mapping flights (26-29 June 2019) over the two largest urban regions in Belgium, i.e. Brussels and Antwerp, in order to map the horizontal distribution of tropospheric NO2. Per flight, 15 to 20 TROPOMI pixels were fully covered, with approximately 5000 APEX measurements for each TROPOMI pixel. Mapping flights and ancillary ground-based measurements (car mobile-DOAS, MAX-DOAS, CIMEL, ceilometer, etc.) were conducted in coincidence with the TROPOMI overpass (typically between noon and 2 PM UTC). The TROPOMI and APEX NO2 vertical column density (VCD) retrieval schemes are similar in concept. Retrieved NO2 VCDs were georeferenced, gridded and intercompared. As strongly polluted areas typically exhibit strong vertical NO2 gradients (in addition to the strong horizontal gradients), a custom TROPOMI tropospheric NO2 product was computed and compared with APEX as well, by replacing the coarse 1° × 1° a priori NO2 vertical profiles from TM5-MP with NO2 profile shapes from the CAMS regional CTM ensemble at 0.1° × 0.1°.

Overall, for the ensemble of the four flights, the standard TROPOMI NO2 VCD product is well correlated (R = 0.94) but biased low (slope = 0.73) with respect to APEX NO2 retrievals. When the TM5-MP a priori NO2 profiles are replaced by CAMS-based profiles, the slope increases to 0.88. In terms of NO2 VCD differences, the bias is on average −1.3 ± 1.2 × 10^15 molec cm^-2, or −16 % ± 11 %, for the difference between APEX NO2 VCDs and the standard TROPOMI NO2 VCD product. The bias is substantially reduced when the coarse TM5-MP a priori NO2 profiles are replaced by CAMS-based profiles, being −0.1 ± 1.1 × 10^15 molec cm^-2, or −0.1 % ± 11 %. Both sets of retrievals are well within the accuracy requirement of a maximum bias of 25-50 % for the TROPOMI tropospheric NO2 product for all individually compared pixels.


1970 ◽  
Vol 37 ◽  
pp. 134-137
Author(s):  
P. Gorenstein ◽  
E. M. Kellogg ◽  
H. Gursky

An X-ray observation of the Cassiopeia region by the ASE group from a sounding rocket on December 5, 1968, has resulted in the determination of locations for two sources that are precise to about 0.1 of a square degree. The positions of two well-known radio sources, Cas A and SN 1572 (Tycho's supernova), objects which are remnants of relatively recent galactic supernovae, are consistent with these locations. Inasmuch as that region of the galaxy does not appear to contain nearly as large a concentration of objects as the galactic center, it is reasonable to identify the X-ray sources with the supernova remnants on the basis of the small a priori probability of an accidental coincidence within 0.1 square degree. Cas A is almost certainly the same source as Cas XR-1, which the NRL group saw in an earlier survey [1]. During the December flight the Crab nebula was also observed for a short time interval.


Author(s):  
I. S. Kikin

A method of autonomous a posteriori estimation of the control target's state coordinates is demonstrated. The method's accuracy does not depend on the errors of the automatic control system sensors. An algorithmic implementation of the method is proposed: an algorithm for processing the array of data on the observed inputs and outputs of the control target, obtained by passive information accumulation during an observation interval of the control target's normal functioning. At the final stage of the estimation algorithm, the implemented control process is simulated with complete a priori information about the conditions of its implementation (simulation estimation method). The algorithm execution time should be negligible in relation to the duration of the observation interval (instantaneous a posteriori estimation of the control target's state). The proposed method makes it possible to cyclically correct the instrumental errors of automatic control and regulation systems without using external sources of information.
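One plausible reading of the simulation estimation step is sketched below: the recorded inputs are fed through the known model, and a constant sensor bias is estimated as the mean deviation between measured and simulated outputs. The scalar linear model, the constant-bias assumption, and the function names are illustrative simplifications, not taken from the paper.

```python
import numpy as np

def simulate_outputs(a, b, c, x0, u):
    """Simulate the known scalar model x[k+1] = a*x[k] + b*u[k], y[k] = c*x[k],
    driven by the recorded input sequence u."""
    x, y = x0, np.empty(len(u))
    for k, uk in enumerate(u):
        y[k] = c * x
        x = a * x + b * uk
    return y

def estimate_sensor_bias(a, b, c, x0, u_recorded, y_measured):
    """After the observation interval, re-simulate the control process with the
    complete model and the recorded inputs, and estimate a constant sensor bias
    as the mean deviation between measured and simulated outputs.
    (A hedged reading of the 'simulation estimation' idea.)"""
    y_sim = simulate_outputs(a, b, c, x0, u_recorded)
    return float(np.mean(np.asarray(y_measured) - y_sim))

# usage: recorded inputs, measurements corrupted by a constant 0.5 sensor bias
rng = np.random.default_rng(2)
u = rng.uniform(-1.0, 1.0, 500)
y_true = simulate_outputs(0.95, 0.1, 1.0, 0.0, u)
y_meas = y_true + 0.5 + 0.01 * rng.standard_normal(500)
print(round(estimate_sensor_bias(0.95, 0.1, 1.0, 0.0, u, y_meas), 3))
```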


Author(s):  
O. S. Galinina ◽  
S. D. Andreev ◽  
A. M. Tyurlikov

Introduction: Machine-to-machine communication assumes data transmission from various wireless devices and attracts the attention of cellular operators. In this regard, it is crucial to recognize and control overload situations when a large number of such devices access the network over a short time interval. Purpose: Analysis of radio network overload at the initial network entry stage in a machine-to-machine communication system. Results: A system is considered that features multiple smart meters, which may report alarms and autonomously collect energy consumption information. An analytical approach is proposed to study the operation of a large number of devices in such a system, as well as to model the settings of the random-access protocol in a cellular network and the overload control mechanisms with respect to the access success probability, network access latency, and device power consumption. A comparison between the obtained analytical results and simulation data is also offered.
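A textbook-style approximation of the access success probability, latency, and transmission count for many devices contending for a limited number of random-access preambles is sketched below. It is not the authors' analytical model; the parameter values (54 preambles, fixed backoff, retry limit) are assumptions chosen only to illustrate the overload effect.

```python
import numpy as np

def random_access_metrics(n_devices, n_preambles, max_attempts, backoff_slots):
    """Approximate initial network entry performance when n_devices contend
    with n_preambles random-access opportunities per attempt, retrying up to
    max_attempts times with a fixed mean backoff (in slots).
    Collision model: a tagged device succeeds in an attempt if no other
    contender picks the same preamble."""
    p_single = (1.0 - 1.0 / n_preambles) ** (n_devices - 1)    # per-attempt success
    attempts = np.arange(1, max_attempts + 1)
    p_by_attempt = p_single * (1.0 - p_single) ** (attempts - 1)
    p_access = p_by_attempt.sum()                              # access success probability
    mean_attempts = (p_by_attempt * attempts).sum() / p_access
    mean_latency = mean_attempts * backoff_slots               # access latency, in slots
    return p_access, mean_latency, mean_attempts               # attempts ~ energy proxy

# usage: moderate load vs. overload (e.g. many smart meters alarming at once)
print(random_access_metrics(100, 54, 10, backoff_slots=20))    # moderate load
print(random_access_metrics(3000, 54, 10, backoff_slots=20))   # overload situation
```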


2021 ◽  
Vol 13 (14) ◽  
pp. 2739
Author(s):  
Huizhong Zhu ◽  
Jun Li ◽  
Longjiang Tang ◽  
Maorong Ge ◽  
Aigong Xu

Although the ionosphere-free (IF) combination is usually employed in long-range precise positioning, using uncombined observations with proper ionospheric constraints is more beneficial, as it exploits knowledge of the spatiotemporal variations of the ionospheric delays and avoids the difficulty of choosing IF combinations in triple-frequency data processing. Yet, determining an appropriate power spectral density (PSD) of the ionospheric delays is one of the most important issues in uncombined processing, as empirical methods cannot account for the actual ionospheric activity. The ionospheric delays derived from actual dual-frequency phase observations contain not only the real-time variations of the ionospheric delays, but also observation noise, which can be much larger than the change in ionospheric delay over a very short time interval, so that the statistics of the ionospheric delays cannot be retrieved properly. Fortunately, the ionospheric delay variations and the observation noise behave in different ways, i.e., they can be represented by a random-walk and a white-noise process, respectively, so that they can be separated statistically. In this paper, we propose an approach to determine the PSD of the ionospheric delays for each satellite in real time by denoising the ionospheric delay observations. Based on the relationship between the PSD, the observation noise and the ionospheric observations, several aspects impacting the PSD calculation are investigated numerically and optimal values are suggested. The proposed approach with the suggested optimal parameters is applied to the processing of three long-range baselines of 103 km, 175 km and 200 km with triple-frequency BDS data in both static and kinematic modes. The improvement in the first ambiguity fixing time (FAFT), the positioning accuracy and the estimated ionospheric delays are analysed and compared with those obtained using an empirical PSD. The results show that the FAFT can be shortened by at least 8 % compared with using a single empirical PSD for all satellites, even when the latter is fine-tuned according to the actual observations, and by 34 % compared with using a PSD derived from ionospheric delay observations without denoising. Finally, the positioning performance with BDS three-frequency observations shows that the averaged FAFT is 226 s and 270 s, and the positioning accuracies after ambiguity fixing are 1 cm, 1 cm and 3 cm in the East, North and Up directions for the static mode and 3 cm, 3 cm and 6 cm for the kinematic mode, respectively.
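The statistical separation idea can be illustrated with a simplified sketch: for a random-walk ionospheric delay observed with white noise, the variance of the epoch-differenced observable is q*dt + 2*sigma^2, from which the random-walk PSD q can be recovered once the noise level is known. The sketch below uses this relation directly; the paper's actual denoising scheme and tuning constants are not reproduced, and the numerical values are assumptions.

```python
import numpy as np

def estimate_iono_psd(iono_obs, dt, noise_std):
    """Estimate the random-walk power spectral density q of the ionospheric
    delay from a geometry-free (dual-frequency phase) ionospheric observable.
    Model: iono[k+1] = iono[k] + w_k with Var(w_k) = q*dt, and white observation
    noise with std noise_std, so Var(diff(obs)) = q*dt + 2*noise_std**2."""
    d = np.diff(np.asarray(iono_obs, dtype=float))      # epoch-differenced observable
    q = (d.var() - 2.0 * noise_std**2) / dt             # remove the white-noise part
    return max(q, 0.0)                                   # clamp at zero for safety

# usage: simulate a random walk (q = 1e-4 m^2/s, dt = 1 s) plus 3 mm white noise
rng = np.random.default_rng(3)
dt, q_true, sigma = 1.0, 1e-4, 0.003
truth = np.cumsum(np.sqrt(q_true * dt) * rng.standard_normal(3600))
obs = truth + sigma * rng.standard_normal(3600)
print(estimate_iono_psd(obs, dt, sigma))                 # should be close to 1e-4
```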


2021 ◽  
Vol 7 (1) ◽  
Author(s):  
Christiane Schön ◽  
Claudia Reule ◽  
Katharina Knaub ◽  
Antje Micka ◽  
Manfred Wilhelm ◽  
...  

Abstract. Background: The assessment of improvement or maintenance of joint health in healthy subjects is a great challenge. The aim of the study was the evaluation of a joint stress test to assess joint discomfort in subjects with activity-related knee joint discomfort (ArJD). Results: Forty-five subjects were recruited to perform the single-leg-step-down (SLSD) test (15 subjects per group). Subjects with ArJD of the knee (age 22–62 years) were compared to healthy subjects (age 24–59 years) with no knee joint discomfort during daily-life sporting activity and to subjects with mild-to-moderate osteoarthritis of the knee joint (OA, Kellgren score 2–3, age 42–64 years). The subjects performed the SLSD test with two different protocols: (I) standardization for knee joint discomfort; (II) standardization for load on the knee joint. In addition, range of motion (ROM), a reach test, acute pain at rest and after a single-leg squat, and the Knee injury and Osteoarthritis Outcome Score (KOOS) were assessed. In OA and ArJD subjects, knee joint discomfort could be reproducibly induced within a short time interval of less than 10 min (200 steps). In healthy subjects, no pain was recorded. A clear differentiation between study groups was observed with the SLSD test (maximal step number) as well as with the KOOS questionnaire, ROM, and reach test. In addition, moderate to good intra-class correlation was shown for the investigated outcomes. Conclusions: These results suggest that the SLSD test is a reliable tool for the assessment of knee joint health function in ArJD and OA subjects and for studying improvements in their activities. Further, this model can be used as a stress model in intervention studies to study the impact of stress on knee joint health function.


1998 ◽  
Vol 1644 (1) ◽  
pp. 142-149 ◽  
Author(s):  
Gang-Len Chang ◽  
Xianding Tao

An effective method for estimating time-varying turning fractions at signalized intersections is described. By including approximate intersection delay, the proposed model can account for the impact of the signal setting on the dynamic distribution of intersection flows. To improve the estimation accuracy, turning fractions pre-estimated over a relatively long time interval are used as additional constraints on the same estimation over a short time interval. The results of extensive simulation experiments indicate that the proposed method yields sufficiently accurate and efficient estimates of dynamic turning fractions for signalized intersections.
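A sketch of one way to set up such constrained estimation is given below: exit counts are modelled as linear in the turning fractions, with the longer-interval pre-estimates entering as soft constraints alongside row-sum and bound constraints. The weights, matrix sizes and synthetic counts are illustrative assumptions, not the paper's formulation.

```python
import numpy as np
from scipy.optimize import lsq_linear

def estimate_turning_fractions(q_in, y_out, b_prior, lam=2.0, mu=50.0):
    """Estimate the turning-fraction matrix B (approach i -> exit j) from
    short-interval entry counts q_in (T x I) and exit counts y_out (T x J),
    using fractions pre-estimated over a longer interval (b_prior, I x J) as
    soft constraints, plus row-sum-to-one and [0, 1] bound constraints.
    lam and mu are illustrative weights for the soft constraints."""
    T, I = q_in.shape
    J = y_out.shape[1]
    A = np.kron(q_in, np.eye(J))                          # exit counts vs. vec(B)
    prior_rows = lam * np.eye(I * J)                      # pull toward long-interval estimate
    sum_rows = mu * np.kron(np.eye(I), np.ones((1, J)))   # each row of B sums to 1
    A_aug = np.vstack([A, prior_rows, sum_rows])
    y_aug = np.concatenate([y_out.ravel(), lam * b_prior.ravel(), mu * np.ones(I)])
    sol = lsq_linear(A_aug, y_aug, bounds=(0.0, 1.0))     # bounded least squares
    return sol.x.reshape(I, J)

# usage: 4 approaches x 4 exits, synthetic counts over 20 short intervals
rng = np.random.default_rng(4)
B_true = np.array([[0.0, 0.6, 0.1, 0.3],
                   [0.3, 0.0, 0.6, 0.1],
                   [0.1, 0.3, 0.0, 0.6],
                   [0.6, 0.1, 0.3, 0.0]])
Q = rng.poisson(30, size=(20, 4)).astype(float)
Y = Q @ B_true + rng.normal(0, 2, size=(20, 4))
B_hat = estimate_turning_fractions(Q, Y, b_prior=np.full((4, 4), 0.25))
print(np.round(B_hat, 2))
```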

