A Preliminary Study of Three-Point Onboard External Calibration for Tracking Radiometric Stability and Accuracy

2019 ◽  
Vol 11 (23) ◽  
pp. 2790 ◽  
Author(s):  
Mustafa Aksoy ◽  
Paul E. Racette

Absolute calibration of radiometers is usually implemented onboard using one hot and one cold external calibration target. However, two-point calibration methods cannot differentiate calibration drifts and the associated errors from fluctuations in receiver gain and offset, and they are inadequate for characterizing the temporal calibration stability of radiometers. In this paper, a preliminary study with linear radiometer systems is presented to show that onboard external three-point calibration offers a means to quantify calibration drifts in radiometer systems and to characterize the associated errors, as well as the temporal stability, of Earth and space measurements. Radiometers with three external calibration reference targets operating two data processing paths, (1) a measurement path and (2) a calibration validation path, are introduced. In the calibration validation path, measurements of one known calibration target are calibrated using the other two calibration references, and the temporal calibration stability and possible calibration temperature drifts are analyzed. In the measurement path, the impact of the calibration drifts on Earth and space measurements is quantified and bounded by an upper limit. This two-path analysis is performed through calibration error analysis (CEA) diagrams introduced in this paper.
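
As a hedged illustration of the calibration validation path, the following Python sketch simulates a linear radiometer with three external reference targets and calibrates the third target using the other two. The target temperatures, receiver gain, noise level, and drift are invented for the example; the paper's CEA diagrams are not reproduced.

```python
import numpy as np

def two_point_calibrate(v_scene, v_cold, v_hot, t_cold, t_hot):
    """Linear two-point calibration: map raw counts to brightness temperature."""
    gain = (t_hot - t_cold) / (v_hot - v_cold)
    return t_cold + gain * (v_scene - v_cold)

# Known reference temperatures (K) of the three targets -- illustrative values.
T_COLD, T_WARM, T_HOT = 80.0, 180.0, 300.0

rng = np.random.default_rng(0)
true_gain = 0.05   # assumed receiver gain (K per count)
drift = 1.5        # assumed physical drift of the warm target (K)

# Simulated raw counts over 1000 calibration cycles.
v_cold = T_COLD / true_gain + rng.normal(0.0, 0.2, 1000)
v_hot = T_HOT / true_gain + rng.normal(0.0, 0.2, 1000)
v_warm = (T_WARM + drift) / true_gain + rng.normal(0.0, 0.2, 1000)

# Calibration validation path: calibrate the warm target with the cold/hot pair;
# the mean error exposes the drift, the spread tracks temporal stability.
t_warm_est = two_point_calibrate(v_warm, v_cold, v_hot, T_COLD, T_HOT)
print(f"mean calibration error: {t_warm_est.mean() - T_WARM:+.2f} K")
print(f"temporal stability (std): {t_warm_est.std():.3f} K")
```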

2014 ◽  
Vol 981 ◽  
pp. 647-652
Author(s):  
Ling Fei Zhang ◽  
Sheng Yu Wang

In quantitative analysis of a measured object using the equal precision measurement method, a certain difference remains between the measured result and the true value because of the measuring method, the measuring tools, and the measuring environment. To reduce this measurement error, the measured object is usually measured repeatedly under equal precision conditions, and the final result is obtained through theoretical calculation, error analysis, and processing of the measurement data. Done by hand, this data processing is complicated and error-prone. Here, we take the computer as a carrier and combine it with virtual instrument technology to build the data-processing system. It makes up for the shortcomings of manual computation, handles the measurement data in a user-friendly way, and displays the results intuitively in multiple forms.
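
The classical processing chain for repeated equal precision measurements (arithmetic mean, Bessel-corrected standard deviation, gross-error rejection, and uncertainty of the mean) can be sketched as follows. The readings and the 3-sigma rejection criterion are illustrative assumptions; the article does not list its exact algorithms.

```python
import numpy as np

def process_equal_precision(readings, k=3.0):
    """Mean, Bessel-corrected std, k-sigma gross-error rejection (Pauta
    criterion), and standard deviation of the mean."""
    x = np.asarray(readings, dtype=float)
    while True:
        mean = x.mean()
        s = x.std(ddof=1)              # Bessel-corrected standard deviation
        keep = np.abs(x - mean) <= k * s
        if keep.all():
            break
        x = x[keep]                    # discard gross errors and re-estimate
    s_mean = s / np.sqrt(len(x))       # standard deviation of the mean
    return mean, s, s_mean

# Illustrative repeated readings of one quantity; the last one is a gross error.
readings = [10.01, 9.99, 10.02, 10.00, 9.98, 10.01, 10.00, 10.02,
            9.99, 10.01, 10.00, 9.98, 10.02, 10.01, 9.99, 10.30]
mean, s, s_mean = process_equal_precision(readings)
print(f"result: {mean:.3f} +/- {3 * s_mean:.3f} (3-sigma limit of the mean)")
```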


2021 ◽  
Author(s):  
Lin Lawrence Guo ◽  
Stephen R Pfohl ◽  
Jason Fries ◽  
Alistair Johnson ◽  
Jose Posada ◽  
...  

Importance: Temporal dataset shift associated with changes in healthcare over time is a barrier to deploying machine learning-based clinical decision support systems. Algorithms that learn robust models by estimating invariant properties across time periods for domain generalization (DG) and unsupervised domain adaptation (UDA) might be suitable for proactively mitigating dataset shift. Objective: To characterize the impact of temporal dataset shift on clinical prediction models and to benchmark DG and UDA algorithms for improving model robustness. Design, Setting, and Participants: In this cohort study, intensive care unit patients from the MIMIC-IV database were categorized by year group (2008-2010, 2011-2013, 2014-2016, and 2017-2019). The tasks were predicting mortality, long length of stay, sepsis, and invasive ventilation. Feedforward neural networks were used as prediction models. The baseline experiment trained models using empirical risk minimization (ERM) on 2008-2010 (ERM[08-10]) and evaluated them on subsequent year groups. The DG experiment trained models using algorithms that estimated invariant properties across 2008-2016 and evaluated them on 2017-2019. The UDA experiment leveraged unlabelled samples from 2017-2019 for unsupervised distribution matching. DG and UDA models were compared to ERM[08-16] models trained on 2008-2016. Main Outcome(s) and Measure(s): The main performance measures were the area under the receiver operating characteristic curve (AUROC), the area under the precision-recall curve, and absolute calibration error. Threshold-based metrics, including false positives and false negatives, were used to assess the clinical impact of temporal dataset shift and its mitigation strategies. Results: In the baseline experiments, dataset shift was most evident for sepsis prediction (maximum AUROC drop, 0.090; 95% confidence interval (CI), 0.080-0.101). A scenario of 100 consecutively admitted patients showed that ERM[08-10] applied to 2017-2019 was associated with one additional false negative among 11 patients with sepsis, compared with the same model applied to 2008-2010. Compared with ERM[08-16], the DG and UDA experiments failed to produce more robust models (range of AUROC difference, -0.003 to 0.050). Conclusions and Relevance: DG and UDA failed to produce more robust models compared to ERM in the setting of temporal dataset shift. Alternative approaches are required to preserve model performance over time in clinical medicine.
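
A minimal sketch of the baseline ERM[08-10] experiment follows, using synthetic cohorts as a stand-in for MIMIC-IV (which requires credentialed access) and scikit-learn's MLPClassifier in place of the authors' exact feedforward networks; the shift mechanism is invented to make the AUROC degradation visible.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)

def make_cohort(n, shift=0.0):
    """Synthetic stand-in for a MIMIC-IV year group; `shift` rotates the
    feature-outcome relation to mimic temporal dataset shift."""
    X = rng.normal(0.0, 1.0, size=(n, 20))
    w = np.zeros(20)
    w[:5] = 1.0 - shift   # signal the 2008-2010 model learns
    w[5:10] = shift       # signal that emerges in later years
    y = (X @ w + rng.normal(0.0, 1.0, n) > 0).astype(int)
    return X, y

groups = {"2008-2010": make_cohort(4000, 0.0),
          "2011-2013": make_cohort(4000, 0.1),
          "2014-2016": make_cohort(4000, 0.2),
          "2017-2019": make_cohort(4000, 0.4)}

# Baseline ERM[08-10]: fit on the earliest group, evaluate on every year group.
X_tr, y_tr = groups["2008-2010"]
model = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=300, random_state=0)
model.fit(X_tr, y_tr)
for name, (X, y) in groups.items():
    auc = roc_auc_score(y, model.predict_proba(X)[:, 1])
    print(f"ERM[08-10] on {name}: AUROC = {auc:.3f}")
```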


2014 ◽  
Vol 1 (4) ◽  
pp. 9-13 ◽  
Author(s):  
Aqeel Ahmed ◽  
Muhammad Sehail Younis

This preliminary study attempts to link critical success factors to overall project success in public-sector organizations in Pakistan. It reflects that the major critical success factors (soundness of business and workforce, planning and control, quality performance, and past performance) can enhance the success of a project in Pakistan. The purpose of this preliminary study was to verify the reliability of a survey instrument that has been used in European countries. Planning and control showed the highest Cronbach's alpha value, and the values for the constructs in the present study ranged from 0.68 to 0.88. Therefore, based on the Cronbach's alpha scores, the proposed survey instrument fulfils the basic requirements of a valid instrument.
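
Cronbach's alpha, the reliability statistic reported above, is straightforward to compute from the item variances and the variance of the summed score. The sketch below uses invented Likert-scale responses, not the study's survey data.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) response matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Illustrative 5-point Likert responses for one construct (rows = respondents).
responses = np.array([[4, 5, 4, 4],
                      [3, 3, 2, 3],
                      [5, 5, 4, 5],
                      [2, 2, 3, 2],
                      [4, 4, 4, 5]])
print(f"Cronbach's alpha = {cronbach_alpha(responses):.2f}")
```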


Pharmaceutics ◽  
2021 ◽  
Vol 13 (7) ◽  
pp. 996
Author(s):  
Niels Lasse Martin ◽  
Ann Kathrin Schomberg ◽  
Jan Henrik Finke ◽  
Tim Gyung-min Abraham ◽  
Arno Kwade ◽  
...  

In pharmaceutical manufacturing, the utmost aim is to reliably produce high-quality products. Simulation approaches allow virtual experiments on processes in the planning phase and the implementation of digital twins in operation. The industrial processing of active pharmaceutical ingredients (APIs) into tablets requires the combination of discrete and continuous sub-processes with complex interdependencies regarding material structures and characteristics. The API and excipients are mixed, granulated if required, and subsequently tableted. The structure as well as the properties of the intermediate and final products are influenced by the raw materials, the parametrized processes, and environmental conditions, all of which are subject to certain fluctuations. In this study, for the first time, an agent-based simulation model is presented that enables the prediction, tracking, and tracing of the resulting structures and properties of the intermediates of an industrial tableting process. To this end, the methodology for identifying and developing product and process agents in an agent-based simulation is shown. Implemented physical models describe the impact of process parameters on material structures. Tablet production with a pilot-scale rotary press is characterized experimentally to provide calibration and validation data. Finally, the simulation results predicting the final structures are compared to the experimental data.
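
The paper's physical models are not reproduced here, but the agent-based pattern it describes, product agents carrying structure attributes through a chain of process agents, can be sketched as follows. The granulation and compaction laws below are toy placeholders, not the study's implemented models.

```python
from dataclasses import dataclass, field

@dataclass
class ProductAgent:
    """Product agent: carries the material structure through the process chain."""
    particle_size_um: float
    porosity: float
    history: list = field(default_factory=list)

class ProcessAgent:
    """Base process agent: records itself in the product's processing history."""
    name = "process"
    def apply(self, product: ProductAgent) -> ProductAgent:
        product.history.append(self.name)
        return product

class Granulator(ProcessAgent):
    name = "granulator"
    def __init__(self, growth_factor=3.0):
        self.growth_factor = growth_factor  # assumed size-enlargement factor
    def apply(self, product):
        product.particle_size_um *= self.growth_factor
        return super().apply(product)

class TabletPress(ProcessAgent):
    name = "tablet_press"
    def __init__(self, pressure_mpa=150.0):
        self.pressure_mpa = pressure_mpa
    def apply(self, product):
        # Toy compaction law: porosity shrinks with compaction pressure.
        product.porosity *= max(0.05, 1.0 - self.pressure_mpa / 400.0)
        return super().apply(product)

# Track one material lot through granulation and tableting.
lot = ProductAgent(particle_size_um=50.0, porosity=0.45)
for agent in (Granulator(), TabletPress(pressure_mpa=150.0)):
    lot = agent.apply(lot)
print(lot)  # final structure plus full process history for tracking and tracing
```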


2021 ◽  
pp. 000276422110216
Author(s):  
Kazimierz M. Slomczynski ◽  
Irina Tomescu-Dubrow ◽  
Ilona Wysmulek

This article proposes a new approach to analyzing protest participation measured in surveys of uneven quality. Because single international survey projects cover only a fraction of the world's nations in specific periods, researchers increasingly turn to ex-post harmonization of different survey data sets that were not a priori designed to be comparable. However, very few scholars systematically examine the impact of survey data quality on substantive results. We argue that variation in the source data, especially deviations from standards of survey documentation, data processing, and computer files—proposed by methodologists of Total Survey Error, Survey Quality Monitoring, and Fitness for Intended Use—is important for analyzing protest behavior. In particular, we apply the Survey Data Recycling framework to investigate the extent to which indicators of attending demonstrations and signing petitions in 1,184 national survey projects are associated with measures of data quality, controlling for variability in the questionnaire items. We demonstrate that the null hypothesis of no impact of survey quality measures on indicators of protest participation must be rejected. Measures of survey documentation, data processing, and computer records, taken together, explain over 5% of the intersurvey variance in the proportions of the population attending demonstrations or signing petitions.
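
As a hedged sketch of the kind of variance-explained analysis reported above (the actual Survey Data Recycling models are more elaborate), one can regress a survey-level protest indicator on quality measures and read off the share of intersurvey variance they jointly explain. All numbers below are synthetic.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 1184  # number of national survey projects analyzed in the article

# Hypothetical survey-level quality indicators: documentation, data processing,
# and computer-record quality (standardized).
quality = rng.normal(0.0, 1.0, size=(n, 3))

# Synthetic outcome: proportion of the population attending demonstrations,
# weakly associated with the quality indicators by construction.
protest = 0.15 + 0.02 * quality @ np.array([0.5, 0.3, 0.2]) \
          + rng.normal(0.0, 0.04, n)

# Share of intersurvey variance in the protest indicator explained by the
# three quality measures together (R^2 of a linear regression).
r2 = LinearRegression().fit(quality, protest).score(quality, protest)
print(f"variance explained by quality measures: {r2:.1%}")
```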


Atmosphere ◽  
2021 ◽  
Vol 12 (2) ◽  
pp. 190
Author(s):  
William Hicks ◽  
Sean Beevers ◽  
Anja H. Tremper ◽  
Gregor Stewart ◽  
Max Priestman ◽  
...  

This research quantifies current sources of non-exhaust particulate matter traffic emissions in London using simultaneous, highly time-resolved atmospheric particulate matter mass and chemical composition measurements. The measurement campaign ran at Marylebone Road (roadside) and Honor Oak Park (background) urban monitoring sites over a 12-month period between 1 September 2019 and 31 August 2020. The measurement data were used to determine the traffic increment (roadside minus background) and covered a range of meteorological conditions, seasons, and driving styles, as well as the influence of the COVID-19 “lockdown” on non-exhaust concentrations. Non-exhaust particulate matter (PM)10 concentrations were calculated using chemical tracer scaling factors for brake wear (barium), tyre wear (zinc), and resuspension (silicon), and average vehicle-fleet non-exhaust emission factors were derived using a CO2 “dilution approach”. The lockdown, which saw a 32% reduction in traffic volume and a 15% increase in average speed on Marylebone Road, resulted in lower PM10 and PM2.5 traffic increments and brake wear concentrations but similar tyre wear and resuspension concentrations, confirming that the factors that determine non-exhaust emissions are complex. Brake wear was found to be the largest average non-exhaust emission source. In addition, the results indicate that non-exhaust emission factors depended on speed and road surface wetness. Further statistical analysis incorporating wider variability in vehicle mix, speeds, and meteorological conditions, as well as advanced source apportionment of the PM measurement data, was undertaken to enhance our understanding of these important vehicle sources.
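
The two calculations named above, chemical tracer scaling and the CO2 dilution approach, can be sketched as follows. The scaling factors, fleet CO2 emission factor, and concentration increments are placeholder values for illustration, not those derived in the study.

```python
# Placeholder tracer scaling factors and fleet CO2 emission factor -- assumed
# values only, not the study's.
SCALING = {"brake_Ba": 9.0, "tyre_Zn": 100.0, "resusp_Si": 5.0}  # PM10 per tracer
FLEET_CO2_EF_G_KM = 180.0  # assumed fleet-average CO2 emission factor (g/km)

def increment(roadside, background):
    """Traffic increment: roadside minus urban-background concentration."""
    return roadside - background

# Illustrative increments: tracers in ug/m3, CO2 in mg/m3.
d_ba = increment(0.030, 0.004)   # barium  -> brake wear
d_zn = increment(0.015, 0.005)   # zinc    -> tyre wear
d_si = increment(0.900, 0.300)   # silicon -> resuspension
d_co2 = increment(480.0, 420.0)  # CO2, for the dilution approach

non_exhaust = {
    "brake wear":   SCALING["brake_Ba"] * d_ba,
    "tyre wear":    SCALING["tyre_Zn"] * d_zn,
    "resuspension": SCALING["resusp_Si"] * d_si,
}

# CO2 "dilution approach": the PM-to-CO2 mass ratio of the increments times the
# fleet CO2 emission factor gives a fleet-average emission factor in mg/(veh km).
for source, pm_ugm3 in non_exhaust.items():
    ef_mg_km = pm_ugm3 / (d_co2 * 1000.0) * FLEET_CO2_EF_G_KM * 1000.0
    print(f"{source}: {pm_ugm3:.2f} ug/m3 -> {ef_mg_km:.2f} mg/(veh km)")
```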


2014 ◽  
Vol 611-612 ◽  
pp. 452-459 ◽  
Author(s):  
Giovenco Axel ◽  
Frédéric Valiorgue ◽  
Cédric Courbon ◽  
Joël Rech ◽  
Ugo Masciantonio

The present work is motivated by the need to improve Finite Element (FE) modelling of cutting tool wear. As a first step, characterising the wear mechanisms and identifying a wear model are fundamental. The key idea of this work is to use a dedicated tribometer able to reproduce the relevant tribological conditions encountered in cutting (pressure, velocity). The tribometer can be used to estimate the evolution of wear over time for various tribological conditions (pressure, velocity, temperature). Based on this design of experiments, it becomes possible to identify a wear model analytically. As a preliminary study, this paper focuses on the impact of sliding speed at the contact interface between 304L stainless steel and a tungsten carbide (WC) pin coated with titanium nitride (TiN). The experiments reveal a change in wear phenomena between sliding speeds of 60 m/min and 180 m/min. Finally, the impact on macroscopic parameters is observed.
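
A hedged sketch of how a wear model might be identified analytically from such tribometer data: fit a wear rate at each sliding speed from the wear-versus-time records, then fit a model across speeds. The wear values and the power-law form assumed below are illustrative, not the study's results.

```python
import numpy as np

# Illustrative tribometer results: pin wear volume (mm^3) measured over time at
# several sliding speeds (m/min). Placeholder values, not the study's data.
speeds = np.array([60.0, 90.0, 120.0, 150.0, 180.0])
time_s = np.array([30.0, 60.0, 90.0, 120.0])
wear = np.array([[0.02, 0.04, 0.06, 0.08],    # row i: wear at speeds[i]
                 [0.04, 0.08, 0.12, 0.16],
                 [0.07, 0.14, 0.21, 0.28],
                 [0.12, 0.24, 0.36, 0.48],
                 [0.20, 0.40, 0.60, 0.80]])

# Step 1: wear rate at each speed as the least-squares slope of wear vs time.
rates = np.array([np.polyfit(time_s, w, 1)[0] for w in wear])

# Step 2: identify an analytical wear model, here an assumed power law
# rate = a * v^b, via a linear fit in log-log space.
b, log_a = np.polyfit(np.log(speeds), np.log(rates), 1)
print(f"wear rate ~ {np.exp(log_a):.2e} * v^{b:.2f}  (mm^3/s, v in m/min)")
```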

