calibration interval
Recently Published Documents

TOTAL DOCUMENTS: 34 (five years: 9)
H-INDEX: 4 (five years: 0)

Author(s):  
Александр Николаевич Теплых ◽  
Петр Сергеевич Гуляев

One of the main errors of turbine flow meters is the change in the conversion coefficient, which depends on many factors. The authors carried out experimental research to establish the influence of oil viscosity and temperature on the conversion coefficient of MVTM-type turbine flow converters (TFC). The work includes an analysis of the error components and a discussion of the statistical processing of measurement results by methods of probability theory and mathematical statistics. Experiments with three flow meters established a change in the conversion coefficient in the range of 0.2 to 0.3 %.
It is confirmed that changes in oil temperature and viscosity have a significant impact on the accuracy of measurements carried out with the MVTM-type TFC. The results can be applied to improve methods of measuring the mass of oil during metering operations using CQCS, in particular by creating new algorithms that correct the conversion coefficient to stabilize the metrological characteristics of the TFC over the calibration interval and thereby minimize the financial and time costs of unscheduled verifications.
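A conversion-coefficient correction of the kind proposed above might take the form of a meter factor re-evaluated from measured temperature and viscosity, followed by statistical processing of repeated determinations. The sketch below is only illustrative: the linear model, the sensitivity coefficients `k_temp` and `k_visc`, and the reference conditions are all assumptions, not the authors' actual algorithm.

```python
import statistics

def corrected_meter_factor(k_base, temp_c, visc_cst,
                           k_temp=-2.0e-5, k_visc=1.5e-4,
                           temp_ref=20.0, visc_ref=10.0):
    """Illustrative linear correction of a turbine-meter conversion
    coefficient for oil temperature (deg C) and kinematic viscosity
    (cSt). The sensitivity coefficients are hypothetical."""
    return k_base * (1.0
                     + k_temp * (temp_c - temp_ref)
                     + k_visc * (visc_cst - visc_ref))

# Statistical processing of repeated determinations: mean factor and
# relative spread, analogous to the 0.2-0.3 % range reported above.
factors = [corrected_meter_factor(1.0000, t, v)
           for t, v in [(18.0, 9.5), (22.0, 11.0), (25.0, 14.0)]]
mean_k = statistics.mean(factors)
spread_pct = 100.0 * (max(factors) - min(factors)) / mean_k
```

In a real CQCS algorithm the sensitivity coefficients would themselves be estimated from calibration data rather than fixed constants.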


2021 ◽  
Vol 14 (2) ◽  
pp. 1439-1455
Author(s):  
Shujiro Komiya ◽  
Fumiyoshi Kondo ◽  
Heiko Moossen ◽  
Thomas Seifert ◽  
Uwe Schultz ◽  
...  

Abstract. The recent development and improvement of commercial laser-based spectrometers have expanded in situ continuous observations of water vapour (H2O) stable isotope compositions (e.g. δ18O and δ2H) in a variety of sites worldwide. However, we still lack continuous observations in the Amazon, a region that significantly influences atmospheric and hydrological cycles on local to global scales. In order to achieve accurate on-site observations, commercial water isotope analysers require regular in situ calibration, which includes the correction of H2O concentration dependence ([H2O] dependence) of isotopic measurements. Past studies have assessed the [H2O] dependence for air with H2O concentrations of up to 35 000 ppm, a value that is frequently surpassed in tropical rainforest settings like the central Amazon where we plan continuous observations. Here we investigated the performance of two commercial analysers (L1102i and L2130i models, Picarro, Inc., USA) for measuring δ18O and δ2H in atmospheric moisture at four different H2O levels from 21 500 to 41 000 ppm. These H2O levels were created by a custom-built calibration unit designed for regular in situ calibration. Measurements on the newer analyser model (L2130i) had better precision for δ18O and δ2H and demonstrated less influence of H2O concentration on the measurement accuracy at each concentration level compared to the older L1102i. Based on our findings, we identified the most appropriate calibration strategy for [H2O] dependence, adapted to our calibration system. The best strategy required conducting a two-point calibration with four different H2O concentration levels, carried out at the beginning and end of the calibration interval. 
The smallest uncertainties in calibrating [H2O] dependence of isotopic accuracy of the two analysers were achieved using a linear surface fitting method and a 28 h calibration interval, except for the δ18O accuracy of the L1102i analyser for which the cubic fitting method gave the best results. The uncertainties in [H2O] dependence calibration did not show any significant difference using calibration intervals from 28 up to 196 h; this suggested that one [H2O] dependence calibration per week for the L2130i and L1102i analysers is sufficient. This study shows that the cavity ring-down spectroscopy (CRDS) analysers, appropriately calibrated for [H2O] dependence, allow the detection of natural signals of stable water vapour isotopes at very high humidity levels, which has promising implications for water cycle studies in areas like the central Amazon rainforest and other tropical regions.
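The linear-fit [H2O]-dependence calibration described above can be sketched as a least-squares fit of the measured isotope bias against H2O concentration, which is then subtracted from raw readings. The data below are synthetic, not instrument measurements, and the one-dimensional fit is a simplified stand-in for the linear surface fitting method the study uses.

```python
import numpy as np

# Synthetic calibration data: measured delta-18O bias (per mil) of a
# reference standard at four H2O levels (ppm), measured at the start
# and end of a calibration interval.
h2o_ppm = np.array([21500, 28000, 34500, 41000,
                    21500, 28000, 34500, 41000], dtype=float)
bias = np.array([0.10, 0.04, -0.03, -0.09,
                 0.12, 0.05, -0.02, -0.08])

# Fit bias(H2O) = a*H2O + b across both calibration runs.
a, b = np.polyfit(h2o_ppm, bias, 1)

def correct_delta(delta_measured, h2o):
    """Remove the fitted [H2O]-dependent bias from a raw delta value."""
    return delta_measured - (a * h2o + b)
```

The full method in the study fits a surface over both H2O concentration and the isotopic composition of the standards; this sketch shows only the concentration axis.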


2021 ◽  
Vol 13 (3) ◽  
pp. 468
Author(s):  
Rahim Aguejdad

The temporal non-stationarity of land use and cover change (LUCC) processes is one of the main sources of uncertainty that may influence the calibration and validation of spatial path-dependent LUCC models. This research therefore investigates the influence of the temporal non-stationarity of land change on urban growth modeling accuracy, using an empirical approach based on past LUCC. Urban development in Rennes Metropolitan (France) was simulated using fifteen past calibration intervals set from six training dates. The study used Idrisi's Cellular Automata-Markov model (CA-Markov), an inductive pattern-based LUCC software package. Land demand for the simulation year was estimated with the Markov chain method. Model validation assessed the accuracy of the quantity of change, its allocation, and its spatial patterns. The quantity disagreement was analyzed with respect to the temporal non-stationarity of the change rate over the calibration and prediction intervals, the model's ability to reproduce the past amount of change in the future, and the duration of the prediction interval. The results show that the calibration interval significantly influenced both the amount and the spatial allocation of the estimated change. Moreover, the spatial allocation of change by CA-Markov depended more on the base land cover image than on the transitions observed during the calibration period. The study thus provides useful insights into the role of training dates in the simulation of non-stationary LUCC.
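The Markov-chain land-demand step mentioned above can be illustrated with a toy two-class example (non-urban / urban): class shares at a future date are obtained by repeatedly applying a per-interval transition matrix. The transition probabilities and base shares below are hypothetical, not values from the Rennes study.

```python
import numpy as np

# Hypothetical per-interval transition matrix: rows = from-class,
# columns = to-class, classes = [non-urban, urban].
P = np.array([[0.95, 0.05],
              [0.00, 1.00]])  # urban assumed not to revert

state_base = np.array([0.80, 0.20])  # class shares at the base date

# Projected shares after 4 calibration-interval steps: s_n = s_0 @ P^n
state_future = state_base @ np.linalg.matrix_power(P, 4)
```

This is exactly why the calibration interval matters: the fitted matrix P encodes the change rate of the training period, so a non-stationary rate yields different projected demand for each choice of training dates.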


Metrologia ◽  
2020 ◽  
Vol 57 (6) ◽  
pp. 065007
Author(s):  
Seung-Nam Park ◽  
Hyung-Seok Shim ◽  
Hehree Cho ◽  
Mun-Seog Kim

2020 ◽  
pp. 166-169
Author(s):  
Олександр Володимирович Томашевський ◽  
Геннадій Валентинович Сніжной

The operational efficiency of measuring equipment (ME) is important in determining the cost of maintaining ME. To characterize it, an efficiency indicator has been introduced; increasing this indicator reduces the costs caused by releasing defective products due to the use of ME with unreliable readings. Over time, ME parameters change under the influence of external factors, and aging processes inevitably occur, so the parameters of the ME metrological service system change as well. In the general case, the parameters of the metrological maintenance system should therefore be treated as random variables. Accordingly, the efficiency indicator is also a random variable, which is best determined with the methods of mathematical statistics and computer simulation. The indicator depends on the parameters of the metrological maintenance system, such as the calibration interval, the time the ME spends in metrological maintenance, and the probability of failure-free operation. As a random variable, the efficiency indicator has a certain distribution function; a computer simulation method was used to determine this function and the corresponding statistical characteristics. The influence of the metrological maintenance system parameters (calibration interval, ME failure rate) on the efficiency indicator was studied, with both parameters varied over a wide range typical of real production. The time the ME spends in metrological service is treated as a random variable with a normal distribution law; the Box-Muller method is used to generate normally distributed random numbers. After modeling, the obtained results were processed statistically.
It is shown that, under conditions typical of real production, the efficiency indicator follows a normal distribution law, and its value remains practically unchanged as the calibration interval increases.
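The simulation scheme described above can be sketched as follows. The Box-Muller transform generates the normally distributed service time, and a hypothetical efficiency indicator is averaged over many trials; the functional form of the indicator and all parameter values are illustrative assumptions, not the authors' model.

```python
import math
import random

def box_muller(mu, sigma, rng=random.random):
    """Generate one normally distributed variate via the Box-Muller
    transform (the method named in the study)."""
    u1 = 1.0 - rng()  # shift into (0, 1] so log(u1) is defined
    u2 = rng()
    z = math.sqrt(-2.0 * math.log(u1)) * math.cos(2.0 * math.pi * u2)
    return mu + sigma * z

def efficiency_indicator(interval_h, service_time_h, p_no_failure):
    """Hypothetical efficiency: fraction of the calibration interval
    the instrument is both available and trustworthy."""
    return p_no_failure * (interval_h - service_time_h) / interval_h

random.seed(1)
# Monte Carlo: 1-year interval, service time ~ N(24 h, 4 h),
# probability of failure-free operation 0.98 (all values assumed).
samples = [efficiency_indicator(8760, box_muller(24.0, 4.0), 0.98)
           for _ in range(10000)]
mean_eff = sum(samples) / len(samples)
```

Collecting the samples into a histogram would reproduce the study's approach of estimating the indicator's distribution function empirically.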


Author(s):  
Mlađen Krndija ◽  
Marina Latinović ◽  
Gordana Broćeta ◽  
Gojko Savić

To optimize measurement procedures in laboratories, in terms of the balance between economics and risk, determining the optimal calibration interval for measuring equipment is of significant importance. This paper presents an approximate but effective method for determining the initial calibration interval, following ILAC guidelines and original recommendations based on the authors' experience. The method is adapted for equipment used in a laboratory for building materials and structural testing, and the results of its application are shown for several different instruments. Factors affecting calibration intervals are analyzed, and basic recommendations for revising the initial calibration intervals are given.
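Interval-revision schemes of the kind the ILAC guidance discusses often follow a simple reactive "staircase" rule: extend the interval after an in-tolerance calibration result, shorten it sharply after an out-of-tolerance one. The sketch below illustrates that general idea only; the step factors and bounds are invented for the example and are not the authors' recommendations.

```python
def adjust_interval(current_months, in_tolerance,
                    extend=1.25, shorten=0.5,
                    min_months=3, max_months=24):
    """Illustrative staircase revision of a calibration interval:
    multiply by `extend` when the instrument was found in tolerance,
    by `shorten` when it was found out of tolerance, then clamp."""
    factor = extend if in_tolerance else shorten
    return max(min_months, min(max_months, current_months * factor))

# Example history: two in-tolerance calibrations, then one failure.
interval = 12
for ok in (True, True, False):
    interval = adjust_interval(interval, ok)
```

A laboratory would tune the step factors and bounds per instrument class, weighing the cost of extra calibrations against the risk of operating with unreliable equipment.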


2020 ◽  
pp. 50-53
Author(s):  
V. R. Kozubovsky

Gas analyzers, especially those for toxic and explosive gases, are usually measuring instruments, so ensuring their metrological parameters is very important. For this purpose, their metrological certification is carried out periodically. However, this procedure is quite costly and is performed by metrological centers, which affix state verification seals and certify the device as suitable, or not suitable, for operation. The calibration interval is usually more than one year, and during this period the metrological parameters of the device change, so that it becomes unsuitable (from the point of view of the metrological centers) for operation. When device developers specify a calibration interval of, for example, one year, they guarantee the preservation of the metrological parameters for that year. If a longer period has elapsed, the instrument must be adjusted before calibration. However, such adjustment is performed by the developer or a qualified professional; as a rule, neither the owner of the device nor the state verifier has this capability. It is therefore very important that the owner of the device be able to check its performance independently. There are many methods of instrument calibration, for example, partial darkening of the working channel, or introduction of a sealed cuvette with a known concentration of the measured gas into the working channel [1-6]. But all of them have certain disadvantages, both in terms of large error and in the practicality of their implementation.

