Quantifying the Error of Radar-Estimated Refractivity by Multiple Elevation and Dual-Polarimetric Data

2018 ◽  
Vol 35 (10) ◽  
pp. 1897-1911
Author(s):  
Ya-Chien Feng ◽  
Frédéric Fabry

Abstract To properly use radar refractivity data quantitatively, good knowledge of its errors is required. The data quality of refractivity critically depends on the phase measurements of ground targets that are used for the refractivity estimation. In this study, the observational error structure of refractivity is first estimated based on quantifying the uncertainties of phase measurements, data processing, and the refractivity estimation method. New correlations between the time series of phase measurements at different elevation angles and between polarizations are developed to assess the bulk phase variability of individual targets. Then, the observational error of refractivity is obtained by simulating the uncertainties of phase measurements through the original refractivity estimation method. Resulting errors in refractivity are found to be smaller than 1 N-unit in areas densely populated with reliable point-like stationary ground targets but grow as the target density becomes sparse.
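The correlation idea in the abstract can be illustrated with a small sketch: two phase time series that share a common refractivity signal (e.g., measurements of one target at two elevation angles, or in two polarizations) should correlate strongly when the target is reliable. The synthetic phases and noise levels below are invented for illustration, not data from the study.

```python
import numpy as np

# Synthetic example: two phase series sharing a common refractivity-driven
# signal, each with independent measurement noise. A reliable point-like
# target yields a high correlation between the two series.
rng = np.random.default_rng(1)
true_phase = np.cumsum(rng.normal(0, 2, 200))    # shared refractivity signal (deg)
phase_el1 = true_phase + rng.normal(0, 1, 200)   # e.g., lower elevation angle
phase_el2 = true_phase + rng.normal(0, 1, 200)   # e.g., higher elevation angle

r = np.corrcoef(phase_el1, phase_el2)[0, 1]
print(r > 0.9)   # high correlation flags a trustworthy target
```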

2017 ◽  
Author(s):  
Weikai Li ◽  
Lishan Qiao ◽  
Zhengxia Wang ◽  
Dinggang Shen

Abstract Functional brain network (FBN) estimation has become an increasingly important tool for exploring the cerebral working mechanism and for mining informative biomarkers to assist the diagnosis of neurodegenerative disorders. Despite its potential for discovering valuable patterns hidden in the brain, the estimated FBNs are often heavily influenced by the quality of the observed data (e.g., BOLD signal series). In practice, a preprocessing pipeline is usually employed to improve data quality prior to FBN estimation; even so, some data points in the time series are still not clean enough, possibly containing residual artifacts (e.g., micro head motion), non-resting-state functional disturbances (e.g., mind-wandering), and new “noises” introduced by the preprocessing pipeline itself. Therefore, not all data points in the time series should contribute to the subsequent FBN estimation. To address this issue, in this paper we propose a novel FBN estimation method that introduces a latent variable as an indicator of data quality, and we develop an alternating optimization algorithm for scrubbing the data and estimating the FBN simultaneously in a single framework. As a result, we can obtain more accurate FBNs with the self-scrubbed data. To illustrate the effectiveness of the proposed method, we conduct experiments on two publicly available datasets to identify mild cognitive impairment (MCI) patients from normal control (NC) subjects based on the estimated FBNs. Experimental results show that the proposed FBN modelling method achieves higher classification accuracy, significantly outperforming the baseline methods.


2015 ◽  
Vol 31 (2) ◽  
pp. 231-247 ◽  
Author(s):  
Matthias Schnetzer ◽  
Franz Astleithner ◽  
Predrag Cetkovic ◽  
Stefan Humer ◽  
Manuela Lenk ◽  
...  

Abstract This article contributes a framework for the quality assessment of imputations within a broader structure to evaluate the quality of register-based data. Four quality-related hyperdimensions examine the data processing from the raw-data level to the final statistics. Our focus lies on the quality assessment of different imputation steps and their influence on overall data quality. We suggest classification rates as a measure of accuracy of imputation and derive several computational approaches.
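As a minimal illustration of classification rates as an imputation-accuracy measure, the sketch below compares imputed categorical values against known true values, as one might do in a simulation study. The function name and toy data are hypothetical, not taken from the article.

```python
# Toy sketch: the classification rate is the share of imputed categorical
# values that agree with the (known) true values.
def classification_rate(imputed, truth):
    """Fraction of imputed values equal to the corresponding true values."""
    assert len(imputed) == len(truth)
    matches = sum(i == t for i, t in zip(imputed, truth))
    return matches / len(truth)

truth   = ["employed", "employed", "retired", "student"]
imputed = ["employed", "retired",  "retired", "student"]
print(classification_rate(imputed, truth))   # 0.75
```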


2020 ◽  
Author(s):  
Andras Fabian ◽  
Carine Bruyninx ◽  
Juliette Legrand ◽  
Anna Miglio

Global Navigation Satellite Systems (GNSS) are a widespread, cost-effective technique for geodetic applications and for monitoring the Earth’s atmosphere. Consequently, the density of GNSS networks has grown considerably over the last decade. Each network collects huge amounts of data from permanently operating GNSS stations. The quality of these data is variable, depending on the evaluated time period and satellite system. Conventionally, quality information is extracted from daily estimates of different types of GNSS parameters, such as the number of data gaps, multipath level, number of cycle slips, number of dual-frequency observations with respect to the expected number, and from their combinations.

The EUREF Permanent GNSS Network Central Bureau (EPN CB, Bruyninx et al., 2019) operationally collects and analyses the quality of more than 300 GNSS stations and investigates the main reason for any quality degradation. EPN CB currently operates a semi-automatic (followed by a manual) data-monitoring tool to detect quality degradations and investigate the source of the problems. In the upcoming years, this data-monitoring tool will also be used to monitor the GNSS component of the European Plate Observing System (EPOS), expected to include more than 3000 GNSS stations. This anticipated growth in the number of GNSS stations to be monitored will make it increasingly challenging to select high-quality GNSS data. EPN CB’s current system requires time-consuming semi-automatic inspection of data quality, and it is not designed to handle the larger amounts of data. In addition, the current system does not exploit correlations between the daily data quality time series and the GNSS station metadata (such as equipment type and receiver firmware) often common to many stations.

In this poster, we will first present the currently used method of GNSS data quality checking and its limitations. Based on more than 20 years of GNSS observations collected in the EPN, we will show typical cases of correlations between the time series of data quality metrics and GNSS station metadata. Then, we will set out the requirements for and the design of a new GNSS data quality monitoring system capable of handling more than 3000 stations. Based on the collected EPN samples and the typical cases, we will introduce ongoing improvements taking advantage of artificial intelligence techniques, show a possible design of the neural network, and present the supervised training of the neural network.

Bruyninx C., Legrand J., Fabian A., Pottiaux E. (2019) GNSS Metadata and Data Validation in the EUREF Permanent Network. GPS Sol., 23(4), https://doi.org/10.1007/s10291-019-0880-9


Author(s):  
Eaton E. Lattman ◽  
Thomas D. Grant ◽  
Edward H. Snell

Extracting information from scattering data is very sensitive to the quality of the data. In this chapter data quality characterization is described, including initial data processing procedures to alert the user to potential data quality issues. Accurate buffer subtraction is crucial for correct modeling and analysis of SAS data, and mechanisms for identifying buffer subtraction errors are discussed. Examining SAS parameters as a function of concentration or exposure is very useful for identifying concentration-dependent artifacts or radiation damage that, if unnoticed, can be very detrimental to further analysis, leading to misinterpretation of the results and erroneous conclusions. SAS is often used for analyzing flexible molecules in solution that may be difficult to study with other structural techniques. Qualitative and quantitative assessments of flexibility are described.


2016 ◽  
Vol 45 (2) ◽  
pp. 3-14 ◽  
Author(s):  
Eva-Maria Asamer ◽  
Franz Astleithner ◽  
Predrag Cetkovic ◽  
Stefan Humer ◽  
Manuela Lenk ◽  
...  

In 2011, Statistics Austria carried out the first register-based census. The use of administrative data for statistical purposes brings various advantages, such as a reduced burden for respondents and lower costs for the NSI. However, new challenges arise, such as the quality assessment of this kind of data. Therefore, Statistics Austria developed a comprehensive standardized framework for evaluating the data quality of register-based statistics. In this paper, we present the principles of the quality framework and detailed results from the quality evaluation of the 2011 Austrian census. For each attribute in the census, a quality measure is derived from four hyperdimensions. The first three hyperdimensions focus on the documentation of data, the usability of the records, and the comparison of data to an external source. The fourth hyperdimension assesses the quality of the imputations. In the framework, all the available information on each attribute can be combined into one final quality indicator. This procedure allows tracking changes in quality during data processing and comparing the quality of different census generations.


2008 ◽  
Vol 47 (4) ◽  
pp. 1006-1016 ◽  
Author(s):  
Guang-Yu Shi ◽  
Tadahiro Hayasaka ◽  
Atsumu Ohmura ◽  
Zhi-Hua Chen ◽  
Biao Wang ◽  
...  

Abstract Solar radiation is one of the most important factors affecting climate and the environment. Routine measurements of irradiance are valuable for climate change research because of long time series and areal coverage. In this study, a set of quality assessment (QA) algorithms is used to test the quality of daily solar global, direct, and diffuse radiation measurements taken at 122 observatories in China during 1957–2000. The QA algorithms include a physical threshold test (QA1), a global radiation sunshine duration test (QA2), and a standard deviation test applied to time series of annually averaged solar global radiation (QA3). The results show that the percentages of global, direct, and diffuse solar radiation data that fail to pass QA1 are 3.07%, 0.01%, and 2.52%, respectively; the percentages of global solar radiation data that fail to pass QA2 and QA3 are 0.77% and 0.49%, respectively. The method implemented by the Global Energy Balance Archive is also applied to check the data quality of solar radiation in China. Of the 84 stations with a time series longer than 20 yr, suspect data were found at 35 of the sites. Based on data that passed the QA tests, trends in ground solar radiation and the effect of the data quality assessment on the trends are analyzed. There is a decrease in ground solar global and direct radiation in China over the years under study. Although the quality assessment process has significant effects on the data from individual stations and/or time periods, it does not affect the long-term trends in the data.
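The flavor of the QA1 and QA3 tests can be sketched in a few lines of Python; the threshold values and toy data below are illustrative assumptions, not the actual criteria used in the study.

```python
import numpy as np

# Hypothetical QA sketch: flag daily global radiation values outside a
# physically plausible range (QA1) and annual means deviating by more than
# k standard deviations from the long-term mean (QA3).
def qa1_physical_threshold(daily_mj, lo=0.0, hi=45.0):
    """Boolean mask of daily values failing the plausibility range test."""
    daily_mj = np.asarray(daily_mj, dtype=float)
    return (daily_mj < lo) | (daily_mj > hi)

def qa3_std_dev(annual_means, k=3.0):
    """Flag annual means more than k standard deviations from the mean."""
    x = np.asarray(annual_means, dtype=float)
    z = np.abs(x - x.mean()) / x.std(ddof=1)
    return z > k

daily = [12.3, 18.7, -1.0, 50.2, 22.4]   # MJ m^-2 day^-1; two bad values
print(qa1_physical_threshold(daily).tolist())   # [False, False, True, True, False]
```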


2019 ◽  
Vol 19 (1) ◽  
pp. 9
Author(s):  
Dwi Anugrah Wibisono ◽  
Dian Anggraeni ◽  
Alfian Futuhul Hadi

Forecasting is a time series analysis technique used to anticipate future developments by taking past events as a reference. One forecasting model that can be used to predict a time series is the Kalman Filter method; a modification of its estimation method is the Ensemble Kalman Filter (EnKF). This research aims to evaluate the implementation of the EnKF algorithm on a SARIMA model. First, precipitation data are fitted to SARIMA models to obtain several candidate models. Next, the best SARIMA model is embedded in the Kalman Filter framework. Once the Kalman Filter models are created, forecasting can be done by applying past rainfall data to the models, which can then be used to predict rainfall intensity for the next year. The quality of the forecasts is assessed using the MAPE and RMSE values. This research shows that the EnKF method can improve on the SARIMA model, as evidenced by smaller MAPE and RMSE values that indicate more accurate predictions. Keywords: Ensemble Kalman Filter, Forecast, SARIMA
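A minimal sketch of the EnKF analysis step, assuming a scalar random-walk state as a stand-in for the paper's SARIMA-based state-space model; the noise variances, ensemble size, and observations are all illustrative assumptions.

```python
import numpy as np

# Minimal Ensemble Kalman Filter (EnKF) with perturbed observations on a
# scalar state. Each cycle: propagate the ensemble with model noise
# (forecast), then nudge each member toward a perturbed observation
# using the ensemble-estimated Kalman gain (analysis).
rng = np.random.default_rng(0)

def enkf_step(ensemble, obs, obs_var):
    """One EnKF analysis step with perturbed observations."""
    perturbed = obs + rng.normal(0.0, np.sqrt(obs_var), size=ensemble.shape)
    p = np.var(ensemble, ddof=1)   # forecast (ensemble) variance
    k = p / (p + obs_var)          # Kalman gain
    return ensemble + k * (perturbed - ensemble)

n_members, obs_var, model_var = 50, 0.5, 0.1
ensemble = rng.normal(0.0, 1.0, n_members)
observations = [1.2, 1.0, 1.4, 1.1]   # e.g., rainfall anomalies

for y in observations:
    ensemble = ensemble + rng.normal(0.0, np.sqrt(model_var), n_members)  # forecast
    ensemble = enkf_step(ensemble, y, obs_var)                            # analysis

print(round(float(ensemble.mean()), 2))   # analysis mean settles near the observations
```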


JAMIA Open ◽  
2021 ◽  
Vol 4 (3) ◽  
Author(s):  
Ali S Afshar ◽  
Yijun Li ◽  
Zixu Chen ◽  
Yuxuan Chen ◽  
Jae Hun Lee ◽  
...  

Abstract Physiological data, such as heart rate and blood pressure, are critical to clinical decision-making in the intensive care unit (ICU). Vital signs data, which are available from electronic health records, can be used to diagnose and predict important clinical outcomes. While there have been some reports on the data quality of nurse-verified vital sign data, little has been reported on the quality of the higher-frequency time-series vital signs acquired in ICUs that would enable such predictive modeling. In this study, we assessed data quality issues, defined as completeness, accuracy, and timeliness, of minute-by-minute time-series vital signs data within the MIMIC-III data set, captured from 16,009 patient-ICU stays corresponding to 9410 unique adult patients. We measured the data quality of four time-series vital signs data streams in the MIMIC-III data set: heart rate (HR), respiratory rate (RR), blood oxygen saturation (SpO2), and arterial blood pressure (ABP). Approximately 30% of patient-ICU stays did not have at least 1 min of data during the time frame of the ICU stay for HR, RR, and SpO2. The percentage of patient-ICU stays that did not have at least 1 min of ABP data was ∼56%. We observed ∼80% coverage of the total duration of the ICU stay for HR, RR, and SpO2. Finally, only 12.5%, 9.9%, 7.5%, and 4.4% of ICU lengths of stay had ≥99% data available for HR, RR, SpO2, and ABP, respectively, meeting the three data quality requirements we examined in this study. Our findings on data completeness, accuracy, and timeliness have important implications for data scientists and informatics researchers who use time-series vital signs data to develop predictive models of ICU outcomes.
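One of the completeness metrics described above, the fraction of an ICU stay's minutes covered by at least one vital-sign sample, can be sketched as follows. The function name, stay length, and timestamps are made-up toy values, not MIMIC-III data.

```python
# Toy sketch of a minute-level coverage metric: the fraction of distinct
# minutes within the stay that have at least one recorded sample.
def coverage(stay_minutes, sample_minutes):
    """Fraction of stay minutes covered by at least one sample."""
    covered = {m for m in sample_minutes if 0 <= m < stay_minutes}
    return len(covered) / stay_minutes

hr_samples = [0, 1, 2, 5, 6, 7, 8, 9]    # minutes with an HR reading
print(coverage(10, hr_samples))           # 0.8, i.e., 80% coverage
```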


2013 ◽  
Vol 401-403 ◽  
pp. 1278-1281
Author(s):  
Hui Liu ◽  
Lei Li ◽  
Chuan Mei Wang

Efforts to develop data processing or variable evaluation for component autocorrelation networks or interpersonal relationship networks have been hampered by many obstacles. One important reason is the lagging development of the autocorrelation network model (ANM). Autocorrelation network models are used to handle data that exhibit network autocorrelation. However, some data are also correlated with their lagged states, so it is necessary to introduce time series into the ANM. A network autocorrelation model with lagged and autocorrelated indicators or variables is put forward, based on an extension of the existing social network effect model. The estimation method of the network autocorrelation model is illustrated; the stationarity of the time series and the consistency and efficiency of the estimation method are discussed. The application of the model is also discussed, especially its practical significance for forecasting and control.


2006 ◽  
Vol 23 (10) ◽  
pp. 1340-1356 ◽  
Author(s):  
Jonathan J. Gourley ◽  
Pierre Tabary ◽  
Jacques Parent du Chatelet

Abstract The French operational radar network is being upgraded and expanded from 2002 to 2006 by Météo-France in partnership with the French Ministry of the Environment. A detailed examination of the quality of the raw polarimetric variables is reported here. The analysis procedures determine the precision of the measurements and quantify errors resulting from miscalibration, near-radome interference, and noise effects. Correction methods to remove biases resulting from effective noise powers in the horizontal and vertical channels, radar miscalibration, and the system offset in differential propagation phase measurements are presented and evaluated. Filtering methods were also required in order to remove azimuthal dependencies discovered with fields of differential reflectivity and differential propagation phase. The developed data quality analysis procedures may be useful to the agencies that are in the process of upgrading their radar networks with dual-polarization capabilities.

