International Journal of Metrology and Quality Engineering
Latest Publications

TOTAL DOCUMENTS: 278 (five years: 60)
H-INDEX: 10 (five years: 2)

Published by EDP Sciences
ISSN: 2107-6847, 2107-6839

Author(s): Jean-Pierre Fanton

The concepts of convolution and deconvolution are well known in the field of physical measurement. In particular, they are of interest in metrology, since they can positively influence measurement performance. Numerous mathematical models and software developments dedicated to convolution and deconvolution have emerged, enabling more efficient use of experimental data in sectors as diverse as biology, astronomy, manufacturing and the energy industries. The subject has gained new topicality because it has become accessible to a large public through applications such as the processing of photographic images. The purpose of this paper is to take into account recent developments such as the introduction of convolution methods in international test standards. Its first part therefore recalls the associated definitions, concerning the properties of linear systems and integral transforms. While convolution in most cases does not create major calculation problems, deconvolution on the contrary is an inverse problem, and as such requires more attention. The principles of some of the methods available today are presented. In the third part, illustrations are given of recent applications in the domains of electrical energy networks and photographic enhancement.
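As a minimal illustration of why deconvolution needs more care than convolution, the Python sketch below recovers an input from a smoothed, noisy measurement using frequency-domain division damped by a Tikhonov-style regularisation term; the impulse response, noise level and regularisation weight lam are illustrative assumptions, not values from the paper.

```python
# Illustrative sketch (not the paper's method): deconvolution as an inverse problem,
# regularised in the frequency domain to keep the inverse filter from amplifying noise.
import numpy as np

def convolve(signal, h):
    """Forward problem: output of a linear time-invariant system."""
    return np.convolve(signal, h, mode="full")[:len(signal)]

def deconvolve_regularised(measured, h, lam=1e-2):
    """Inverse problem: naive spectral division by H blows up where |H| is small,
    so the inverse filter is damped by lam (Tikhonov-style regularisation)."""
    n = len(measured)
    H = np.fft.rfft(h, n)
    Y = np.fft.rfft(measured, n)
    X_hat = Y * np.conj(H) / (np.abs(H) ** 2 + lam)
    return np.fft.irfft(X_hat, n)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = np.zeros(256); x[60:70] = 1.0               # "true" rectangular input
    h = np.exp(-np.arange(32) / 6.0); h /= h.sum()  # smoothing impulse response
    y = convolve(x, h) + 0.01 * rng.standard_normal(256)
    x_rec = deconvolve_regularised(y, h, lam=1e-2)
    print("relative reconstruction error:", np.linalg.norm(x - x_rec) / np.linalg.norm(x))
```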


Author(s): Yu Wang, Ruiwei Li, Lin Luo, Lin Ruan

Applying an elbow flowmeter in rotating equipment helps reduce pipeline complexity. However, the added centrifugal acceleration changes the metrological characteristics of the elbow flowmeter. Based on an analysis of how the environmental acceleration contributes to the differential pressure across the elbow flowmeter, the formula for flow-rate measurement with an elbow flowmeter in the rotating state is derived, and a fitting method for the discharge coefficient is put forward. The CFD method was used to analyse the internal flow field of the elbow flowmeter in the rotating state, to summarise the pressure distribution characteristics on the pipe wall, and to verify by simulation the feasibility of the discharge coefficient fitting strategy. The results show that, for elbow flowmeters with diameters of 10 mm and 15 mm and a radius-to-diameter ratio of 1.5, the measurement accuracy can be kept within 4% as long as the water flow velocity is between 1.5 m/s and 5 m/s.
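A minimal sketch of how a discharge coefficient could be fitted from calibration data is given below; it assumes the generic relation v = C·sqrt(2ΔP/ρ) and hypothetical reference points, not the rotating-state formula or data derived in the paper.

```python
# Illustrative discharge-coefficient fit (assumed model and data, not the paper's).
import numpy as np
from scipy.optimize import curve_fit

RHO = 998.0  # water density, kg/m^3

def elbow_model(dp, c):
    """Velocity predicted from differential pressure with discharge coefficient c."""
    return c * np.sqrt(2.0 * dp / RHO)

# Hypothetical calibration data: differential pressure in Pa, reference velocity in m/s.
dp_ref = np.array([1200.0, 2500.0, 4300.0, 6800.0, 9800.0])
v_ref = np.array([1.6, 2.3, 3.0, 3.8, 4.6])

c_fit, c_cov = curve_fit(elbow_model, dp_ref, v_ref, p0=[1.0])
print(f"fitted discharge coefficient C = {c_fit[0]:.3f} "
      f"(std. unc. {np.sqrt(c_cov[0, 0]):.3f})")
```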


Author(s): Ke Li, Bo Yu, Zhaoyao Shi, Zanhui Shu, Rui Li

With the development of gears towards high temperature, high pressure, high speed and high stress, gear measurement in which only static geometric accuracy is considered can no longer meet current application requirements, while low-precision, single-function gear testers constrain the measurement of gear dynamic performance. To resolve this problem, a gear dynamic testing machine has been developed based on the principles of gear system dynamics and several precision mechanical design techniques, providing a new instrument for gear testing. On the basis of research into the principle of the dynamic performance test, the primary measurement items of the testing machine have been determined, and the measuring principles of each item as well as the driving and loading form of the testing machine have been examined. The measurement and control system of the testing machine and its corresponding software have been developed. The instrument can obtain not only the static precision indices of a gear, but also its dynamic performance indices under variable working conditions. According to the actual tests, the uncertainty of the instrument is 3.8 μm and the external disturbance caused by shaft vibration is less than 0.6 μm, which meets the testing requirements for gears of precision grades 5–6.


Author(s): Yuanhong Zhao, Qingping Yang

Post-occupancy evaluation (POE) is a systematic method to evaluate the actual building performance against the theoretical design intent after the building has been occupied for some time, in order to understand how the building is performing and to capture lessons learned. POE offers an opportunity to investigate a building's actual performance based upon the occupants' satisfaction levels with the overall building design, indoor environmental quality, thermal comfort, etc. However, occupant satisfaction assessment (OSA), the key part of POE, is a missing link in the building performance evaluation (BPE) domain, and there is no systematic evaluation method for OSA. Moreover, conducting OSA manually is time-consuming and error-prone. This paper presents, from the end-user's satisfaction perspective, a semantic post-occupancy evaluation ontology (POEontology) to facilitate the occupant satisfaction assessment of buildings, with the ultimate aim of optimizing building operation guidelines and improving occupants' experience quality and well-being. An ontology-based knowledge model has been developed to capture the fragmented knowledge of building-use satisfaction assessment in the POE domain, with the benchmarking evaluation rules encoded in the Semantic Web Rule Language (SWRL) to enable automatic rule-based rating and reasoning. This ontology model also enables effective retrieval and sharing of OSA-related knowledge and promotes its implementation in the POE domain. A field study has been conducted based upon the Building Use Study (BUS) methodology to validate the proposed ontology framework.
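The paper's benchmarking rules are encoded in SWRL; the plain-Python stand-in below only sketches the idea of rule-based rating, mapping an assumed 7-point questionnaire score onto satisfaction bands with illustrative thresholds.

```python
# Simplified stand-in for rule-based rating (not the paper's ontology or SWRL rules).
from dataclasses import dataclass

@dataclass
class SurveyResponse:
    occupant_id: str
    aspect: str        # e.g. "thermal_comfort", "indoor_air_quality"
    score: float       # 1 (very dissatisfied) .. 7 (very satisfied), assumed scale

def rate(response: SurveyResponse) -> str:
    """Each branch plays the role of one benchmarking rule; thresholds are assumptions."""
    if response.score >= 5.5:
        return "satisfied"
    if response.score >= 4.0:
        return "neutral"
    return "dissatisfied"

responses = [
    SurveyResponse("occ-01", "thermal_comfort", 6.2),
    SurveyResponse("occ-02", "thermal_comfort", 3.1),
]
for r in responses:
    print(r.occupant_id, r.aspect, "->", rate(r))
```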


Author(s): Junbing Shi, Yingmin Wang, Xiaoyong Zhang, Libo Yang

In underwater acoustic exploration, tracking and positioning, the target signals collected by hydrophones are often submerged in strong intermittent noise and environmental noise. In this paper, an algorithm that combines empirical mode decomposition and the wavelet transform is proposed to achieve efficient extraction of target signals in environments with strong noise. The baseline drift of the signal is first corrected, and the signal is then decomposed into intrinsic mode functions via empirical mode decomposition. Wavelet threshold processing is applied according to the correlation coefficient between each mode component and the original signal, and finally the signal is reconstructed. Simulation and experimental results show that, compared with the conventional empirical mode decomposition method and the wavelet threshold method, the combined algorithm extracts the target signal better when the signal-to-noise ratio is low and high-frequency intermittent jamming and baseline drift are present, laying the foundation for subsequent direction-of-arrival estimation and target positioning.
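A simplified sketch of the combined approach is shown below: the signal is decomposed with EMD (EMD-signal package), IMFs that correlate weakly with the original signal are wavelet-thresholded with PyWavelets, and the result is reconstructed. The correlation threshold, wavelet and synthetic test signal are assumptions, and the paper's baseline-drift correction step is omitted here.

```python
# Sketch only: EMD decomposition + wavelet thresholding of noise-dominated IMFs.
import numpy as np
import pywt
from PyEMD import EMD

def denoise_emd_wavelet(x, corr_threshold=0.3, wavelet="db4"):
    emd = EMD()
    emd.emd(x)
    imfs, residue = emd.get_imfs_and_residue()    # intrinsic mode functions + trend
    cleaned = []
    for imf in imfs:
        rho = abs(np.corrcoef(imf, x)[0, 1])      # similarity to the original signal
        if rho < corr_threshold:                  # noise-dominated IMF: threshold it
            coeffs = pywt.wavedec(imf, wavelet)
            sigma = np.median(np.abs(coeffs[-1])) / 0.6745
            thr = sigma * np.sqrt(2 * np.log(len(imf)))   # universal threshold
            coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft")
                                    for c in coeffs[1:]]
            imf = pywt.waverec(coeffs, wavelet)[:len(imf)]
        cleaned.append(imf)
    return np.sum(cleaned, axis=0) + residue

if __name__ == "__main__":
    t = np.linspace(0, 1, 2048)
    target = np.sin(2 * np.pi * 50 * t)
    noisy = target + 0.8 * np.random.randn(t.size)
    denoised = denoise_emd_wavelet(noisy)
    print("residual noise reduced:", np.std(denoised - target) < np.std(noisy - target))
```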


Author(s): Yacine Koucha, QingPing Yang

The COVID-19 outbreak is of great concern due to the high rates of infection and the large number of deaths worldwide. In this paper, we consider Bayesian inference and failure mode and effects analysis of a modified susceptible-exposed-infectious-removed (SEIR) model for the transmission dynamics of COVID-19 with an exponentially distributed infectious period. We estimate the effective reproduction number based on laboratory-confirmed cases and death data using Bayesian inference, and analyse the impact of the community spread of COVID-19 across the United Kingdom. We use the failure mode and effects analysis tool to evaluate the effectiveness of the action measures taken to manage the COVID-19 pandemic; since we focus on COVID-19 infections, positive cases are taken as the failure mode. The model is applied to COVID-19 data, showing the effectiveness of the interventions adopted to control the epidemic by reducing the reproduction number of COVID-19. The results show that the combination of Bayesian inference, compartmental modelling and failure mode and effects analysis is effective in modelling and studying the risks of COVID-19 transmission, leading to a quantitative evaluation of the action measures and the identification of lessons learned from the governmental measures and actions taken in response to COVID-19 in the United Kingdom. Analytical and numerical methods are used to highlight the practical implications of our findings. The proposed methodology will find applications in current and future COVID-19-like pandemics and in quality engineering more widely.
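For readers unfamiliar with the compartmental part of the model, the sketch below integrates a basic SEIR system in which the exponentially distributed infectious period corresponds to a constant removal rate; the parameter values and initial conditions are illustrative assumptions, not the paper's fitted estimates.

```python
# Minimal SEIR sketch with illustrative parameters (not the paper's fitted values).
import numpy as np
from scipy.integrate import odeint

def seir(y, t, beta, sigma, gamma, n):
    s, e, i, r = y
    ds = -beta * s * i / n            # new exposures
    de = beta * s * i / n - sigma * e # exposed become infectious at rate sigma
    di = sigma * e - gamma * i        # exponentially distributed infectious period
    dr = gamma * i                    # removal (recovery or death)
    return ds, de, di, dr

N = 67e6                                  # UK-scale population (assumption)
beta, sigma, gamma = 0.4, 1 / 5.2, 1 / 7.0  # transmission, incubation, removal rates
y0 = (N - 100, 50, 50, 0)
t = np.linspace(0, 180, 181)              # days
s, e, i, r = odeint(seir, y0, t, args=(beta, sigma, gamma, N)).T

print(f"basic reproduction number R0 = beta/gamma = {beta / gamma:.2f}")
print(f"peak infectious prevalence: {i.max():.0f}")
```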


Author(s): Qijian Tang, Qingping Yang, Xiangjun Wang, Alistair B. Forbes

Pointing accuracy is an important indicator for electro-optical detection systems, as it significantly affects system performance. However, misalignment and non-perpendicularity introduced in the manufacturing and assembly processes, as well as sensor errors such as camera distortion and angular sensor error, significantly degrade the pointing accuracy, and these errors should be compensated before the system is used. Parametric models were first proposed to compensate for the errors, and semi-parametric models with an added nonlinear part have also been put forward; both approaches require the parametric part to be analysed first, which is a complicated and inaccurate process. This paper presents a nonparametric model that requires no prior information about mechanical dimensions or similar quantities and depends only on the test data. Gaussian process regression is used to represent the relationship in the data and to predict the compensated output. The test results show that the regression variances have decreased by more than an order of magnitude and the means have also been significantly reduced, with the pointing error markedly improved. The nonparametric model based on Gaussian process regression is thus demonstrated to be an effective and powerful tool for pointing error compensation.
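A minimal sketch of the nonparametric idea is given below, using scikit-learn's Gaussian process regressor to learn a mapping from a commanded angle to a pointing error purely from test data; the kernel choice and the synthetic error model are assumptions, not the paper's configuration.

```python
# Sketch: Gaussian Process regression of pointing error versus commanded azimuth.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)
az = np.sort(rng.uniform(0, 360, 80))                  # commanded azimuth, deg
err = (0.05 * np.sin(np.radians(az))                   # synthetic systematic error, deg
       + 0.02 * np.cos(2 * np.radians(az))
       + 0.005 * rng.standard_normal(az.size))         # measurement noise

kernel = RBF(length_scale=60.0) + WhiteKernel(noise_level=1e-4)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gp.fit(az.reshape(-1, 1), err)

az_new = np.array([[45.0], [180.0], [300.0]])
pred, std = gp.predict(az_new, return_std=True)
for a, p, s in zip(az_new.ravel(), pred, std):
    print(f"az {a:5.1f} deg: predicted error {p:+.4f} deg (±{s:.4f})")
```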


Author(s): José Joaquín Mesa-Jiménez, Lee Stokes, QingPing Yang, Valerie Livina

In the context of sensor data generated by building management systems (BMS), early warning signals are still an unexplored topic. The early detection of anomalies can help prevent malfunctions of key parts of a heating, ventilation and air conditioning (HVAC) system that may lead to a range of BMS problems, from significant energy waste to fatal errors in the worst case. We analyse early warning signals in BMS sensor data for early failure detection. In this paper, the studied failure is a malfunction of the control system of one specific air handling unit (AHU) that causes temperature spikes of up to 30 degrees Celsius, due to overreaction of the heating and cooling valves in response to an anomalous temperature change caused by the pre-heat coil during the winter period in a specific area of a manufacturing facility. For this purpose, variance, lag-1 autocorrelation function (ACF1), power spectrum (PS) and variational autoencoder (VAE) techniques are applied in both univariate and multivariate scenarios. The univariate scenario applies these techniques to the control variable only (the one that displays the failure), whereas the multivariate analysis also considers the variables affecting the control variable. The results show that anomalies can be detected up to 32 hours prior to failure, giving BMS engineers sufficient time to prevent the failure, so that a proactive rather than reactive approach to BMS failures can be adopted.
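The univariate indicators can be computed with a rolling window, as in the sketch below; the synthetic temperature series, window length and destabilising trend are assumptions, and only the variance and ACF1 indicators are shown.

```python
# Sketch of rolling early-warning indicators (variance, lag-1 autocorrelation)
# on a synthetic single-channel BMS temperature series.
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
n = 24 * 60                                      # one day of 1-minute samples
drift = np.linspace(0, 1.5, n) ** 3              # slowly destabilising component
temp = 21 + 0.2 * rng.standard_normal(n) + drift * rng.standard_normal(n)
series = pd.Series(temp, index=pd.date_range("2021-01-01", periods=n, freq="min"))

window = 120                                     # 2-hour rolling window
variance = series.rolling(window).var()
acf1 = series.rolling(window).apply(lambda w: pd.Series(w).autocorr(lag=1), raw=False)

print("variance growth (last / first valid value):",
      float(variance.dropna().iloc[-1] / variance.dropna().iloc[0]))
print("ACF1 near the end of the record:", float(acf1.dropna().iloc[-1]))
```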


Author(s): Hicham Mezouara, Latifa Dlimi, Abdelouahhab Salih, Mohamed Afechcar, Houcine Zniker

This study addresses the measurement uncertainties encountered in the stiffness modulus test of bituminous mixtures. We present all the sensors installed in the stiffness modulus measurement chain together with their uncertainty ranges. Several parameters influence the value of the stiffness modulus: some relate to the experimental conditions and others to the equipment specifications, namely the loading speed, the loading level, the temperature, the dimensions of the tested sample, the data acquisition, etc. All these factors have a great influence on the value of the stiffness modulus. To quantify the uncertainty factors, we used two approaches: the first follows the method described by the GUM (Guide to the expression of uncertainty in measurement), while the second is based on Monte Carlo numerical simulation. The two results are then compared for a confidence interval of 95%. The paper also shows the use of basic methods of statistical analysis, such as the comparison of two variances. Essential concepts in measurement uncertainty are compiled and the determination of the stiffness modulus parameters is discussed. It is demonstrated that the biggest source of error in the stiffness modulus measuring process is repeatability, with a contribution of around 45.23%.
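The two approaches can be contrasted on a simple model, as in the sketch below: GUM-style first-order propagation versus Monte Carlo sampling for a quantity of the form stress/strain; the input estimates and uncertainties are illustrative, not the paper's uncertainty budget.

```python
# Illustrative GUM vs Monte Carlo comparison for y = x1 / x2 (e.g. modulus = stress / strain).
import numpy as np

x1, u1 = 5.0e6, 0.05e6     # stress estimate and standard uncertainty (Pa), assumed
x2, u2 = 2.5e-3, 0.03e-3   # strain estimate and standard uncertainty, assumed

# GUM: first-order law of propagation of uncertainty for a quotient.
y = x1 / x2
u_gum = y * np.sqrt((u1 / x1) ** 2 + (u2 / x2) ** 2)

# Monte Carlo: propagate the assumed (normal) input distributions numerically.
rng = np.random.default_rng(3)
samples = rng.normal(x1, u1, 200_000) / rng.normal(x2, u2, 200_000)
lo, hi = np.percentile(samples, [2.5, 97.5])     # 95 % coverage interval

print(f"GUM:         y = {y:.4g}, U(k=2) = {2 * u_gum:.3g}")
print(f"Monte Carlo: y = {samples.mean():.4g}, 95 % interval = [{lo:.4g}, {hi:.4g}]")
```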


Author(s): Gouda M. Mahmoud, Seif M. Osman, Riham S. Hegazy

In the current version of ISO 376:2011, the classification of force transducers is based on the relative errors calculated from the calibration results. This classification approach does not take the uncertainty of measurement into consideration, even though uncertainty has become one of the most important factors to be used when making conformity decisions under ISO/IEC 17025:2017. In this study, an approach to the classification of force-proving instruments is proposed in which the uncertainty of the calibration results is taken into account as a decision rule for classification. Since the expanded uncertainty budget combines the different parameters that may affect the classification decision, this approach is more realistic and more accurate for decision making. The results of this paper support a recommendation that ISO 376 modify its classification criteria for force-proving instruments in the upcoming revision of the standard.
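A hedged sketch of such a decision rule is shown below: a class is assigned only if the relative error plus the expanded uncertainty stays within the class limit (guarded acceptance in the spirit of ISO/IEC 17025 decision rules); the class limits used here are illustrative placeholders, not values quoted from ISO 376.

```python
# Sketch of an uncertainty-aware classification decision rule (limits are assumed).
CLASS_LIMITS = {"00": 0.05, "0.5": 0.10, "1": 0.20, "2": 0.40}  # % relative error, illustrative

def classify(relative_error_pct: float, expanded_uncertainty_pct: float) -> str:
    """Return the best class whose limit covers |error| + U, else 'unclassified'."""
    for label, limit in sorted(CLASS_LIMITS.items(), key=lambda kv: kv[1]):
        if abs(relative_error_pct) + expanded_uncertainty_pct <= limit:
            return label
    return "unclassified"

# Same relative error, different calibration uncertainty -> different decision.
print(classify(0.04, 0.005))   # small uncertainty: best class
print(classify(0.04, 0.08))    # larger uncertainty: demoted to a lower class
```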

