measurement theory
Recently Published Documents

TOTAL DOCUMENTS: 726 (FIVE YEARS: 155)
H-INDEX: 35 (FIVE YEARS: 5)

2022 ◽ Author(s): Ye Xiaoming ◽ Ding Shijun ◽ Liu Haibo

Abstract: In traditional measurement theory, precision is defined as the dispersion of the measured value and is used as the basis for calculating weights when adjusting measurement data of differing quality, which leads to the problem that trueness is completely ignored in the weight allocation. In this paper, following the pure concepts of probability theory, the measured value (observed value) is regarded as a constant, the error as a random variable, and the variance as the dispersion of all possible values of the unknown error. A rigorous formula for weight calculation and variance propagation is then derived, resolving the theoretical difficulty of determining weights in the adjustment of multi-channel observation data of differing quality. The results show that the optimal weights are determined not only by the covariance matrix of the observation errors but also by the adjustment model.
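To make the weighting idea concrete: in the classical scheme this paper revisits, the weights for a common quantity observed through several channels come from the inverse of the error covariance matrix. The following is a minimal sketch of that textbook result only; the function name and numbers are illustrative, and this is not the authors' new formula, which additionally depends on the adjustment model.

```python
import numpy as np

# Minimal sketch: classical precision weighting for a common quantity
# observed through several channels with known error covariance.
# Illustrates the standard result (weights from the inverse covariance);
# NOT the authors' derivation, which also ties weights to the model.

def blue_estimate(y, cov):
    """Best linear unbiased estimate of a common mean from
    observations y with error covariance matrix cov."""
    cov_inv = np.linalg.inv(cov)
    ones = np.ones(len(y))
    weights = cov_inv @ ones / (ones @ cov_inv @ ones)  # weights sum to 1
    estimate = weights @ y
    variance = 1.0 / (ones @ cov_inv @ ones)            # propagated variance
    return estimate, variance, weights

y = np.array([10.02, 9.98, 10.05])          # three observation channels
cov = np.diag([0.01**2, 0.02**2, 0.05**2])  # hypothetical independent errors
est, var, w = blue_estimate(y, cov)
print(est, np.sqrt(var), w)  # more precise channels receive larger weights
```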


Entropy ◽ 2022 ◽ Vol 24 (1) ◽ pp. 106 ◽ Author(s): Abraham G. Kofman ◽ Gershon Kurizki

The consensus regarding quantum measurements rests on two statements: (i) von Neumann’s standard quantum measurement theory leaves undetermined the basis in which observables are measured, and (ii) the environmental decoherence of the measuring device (the “meter”) unambiguously determines the measuring (“pointer”) basis. The latter statement means that the environment monitors (measures) selected observables of the meter and (indirectly) of the system. Equivalently, a measured quantum state must end up in one of the “pointer states” that persist in the presence of the environment. We find that, unless we restrict ourselves to projective measurements, decoherence does not necessarily determine the pointer basis of the meter. Namely, generalized measurements commonly allow the observer to choose from a multitude of alternative pointer bases that provide the same information on the observables, regardless of decoherence. By contrast, the measured observable does not depend on the pointer basis, whether in the presence or in the absence of decoherence. These results grant further support to our notion of Quantum Lamarckism, whereby the observer’s choices play an indispensable role in quantum mechanics.
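For orientation, the von Neumann premeasurement scheme the abstract refers to can be written in textbook notation (assumed here, not the authors' own formulation), with |m_0⟩ the meter's ready state and {|m_n⟩} a candidate pointer basis:

```latex
\Big(\sum_n c_n \,\lvert s_n\rangle\Big)\otimes\lvert m_0\rangle
\;\longrightarrow\;
\sum_n c_n \,\lvert s_n\rangle\otimes\lvert m_n\rangle
```

Because the entangled state on the right admits many equivalent expansions, the scheme by itself does not single out the basis {|m_n⟩}; this is the undetermined-basis problem in statement (i).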


2022 ◽ pp. 306-323 ◽ Author(s): Victoria Konovalenko Slettli ◽ Elena Panteleeva

The study aims to examine whether an online national student survey (NSS) can contribute to the understanding of intellectual capital (IC) in higher education institutions (HEIs). The study adopts a performance management and measurement perspective towards the NSS and applies the lens of intellectual capital measurement theory, which distinguishes between human, relational, and structural capital. Adopting a conceptual and explorative research approach, the study is based on an intensive analysis of document sources related to the Norwegian online national student survey, the Study Barometer. The results suggest that the Norwegian national student survey reflects certain categories of the intellectual capital framework, including those of interest to university stakeholders. However, the scope of the intellectual capital categories in the survey is limited to a few items. The study concludes that a national online student survey can be used as a performance measurement tool and can assist our understanding of IC in HEIs, albeit to a limited degree.


2021 ◽ Author(s): Christopher J Fariss ◽ Therese Anders ◽ Jonathan Markowitz ◽ Miriam Barnum

Gross Domestic Product (GDP), GDP per capita, and population are central to the study of politics and economics broadly, and of conflict processes in particular. Despite the prominence of these variables in empirical research, existing data lack historical coverage and are assumed to be measured without error. We develop a latent variable modeling framework that expands data coverage (1500 AD to 2018 AD) and, by making use of multiple indicators for each variable, provides a principled framework for estimating the uncertainty of all country-year values relative to one another. The expanded temporal coverage of the estimates provides new insights into the relationship between development and democracy, conflict, repression, and health. We also demonstrate how to incorporate uncertainty into observational models. Results show that the relationship between repression and development is weaker than models that do not incorporate uncertainty suggest. Future extensions of the latent variable model can address other forms of systematic measurement error with new data, new measurement theory, or both.
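To illustrate the general idea only (not the authors' actual model, which is dynamic and estimated jointly across countries and years), here is a toy sketch of how multiple noisy indicators of one latent quantity yield both an estimate and a propagated uncertainty; all names and numbers are hypothetical:

```python
import numpy as np

# Toy illustration of the latent-variable idea: several noisy indicators
# of one underlying country-year quantity are combined into a posterior
# mean and standard deviation. This only shows why multiple indicators
# both sharpen the estimate and quantify its uncertainty.

def latent_posterior(indicators, sigmas, prior_mean=0.0, prior_sd=10.0):
    """Gaussian posterior for a latent value given independent
    indicators with known measurement-error SDs."""
    precisions = np.concatenate(([1 / prior_sd**2], 1 / np.asarray(sigmas)**2))
    values = np.concatenate(([prior_mean], indicators))
    post_var = 1.0 / precisions.sum()
    post_mean = post_var * (precisions * values).sum()
    return post_mean, np.sqrt(post_var)

# e.g., three log-GDP indicators for one country-year, with differing error
mean, sd = latent_posterior([7.2, 7.5, 7.1], sigmas=[0.1, 0.3, 0.2])
print(f"latent estimate {mean:.2f} ± {sd:.2f}")
```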


Author(s): Iryna Gryshanova

Control of water resources is becoming an important strategic issue, which is why authorities task water agencies with managing the availability of water and creating regulations for its rational use. The key element of water control is measurement. There are three important aspects of measuring water resources: at extraction from nature, at consumption, and at custody transfer. Control of water consumption is sometimes based not on measurement but on preliminary estimation, for example, from pumping data. Ultrasonic measurement technology, as a key enabler of automated resource control, has a potential role in this market. In contrast to mechanical (turbine) meters, ultrasonic meters have an advantage because they also make smart metering possible. In contrast to electromagnetic meters, which also measure with high accuracy and support smart functions, ultrasonic meters are much better suited to rough water, wastewater, and sewage. Such water resources are usually poorly controlled, which means that no one knows their exact cost. Measurement is mandatory for cost control and billing. Accuracy is an important issue, especially for measurements in large pipe diameters, where there is practically no alternative to ultrasonic flowmeters: the market for other meter types is concentrated on diameters under 400 mm, and for larger diameters only ultrasonic meters are used. They employ multiple chords and sophisticated data-processing algorithms and are therefore applicable over a very wide flow range. In this paper, we examine transit-time ultrasonic flowmeters to understand their measurement theory, taking into account all factors affecting their operation, and we describe the errors inherent in these flowmeters. Since accuracy is critically important for billing, the cost of a 1% measurement error in water consumption is evaluated and analyzed for small (DN50–DN150) and large (DN200–DN1200) pipe diameters, and the losses from installing low-quality metering devices are demonstrated and discussed.
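For context, the transit-time principle on which these meters rest can be sketched as follows: sound travels faster with the flow than against it, and because the speed of sound cancels out, the axial velocity follows from the two transit times alone. This is a single-path textbook sketch with made-up example numbers, not a model of a multi-chord meter:

```python
import math

# Single-path transit-time sketch. Real large-diameter meters average
# several chordal paths with weighting; that is not shown here.

def axial_velocity(t_up, t_down, path_length, path_angle_deg):
    """Flow velocity along the pipe axis from upstream/downstream
    transit times (s), acoustic path length (m), and path angle
    relative to the pipe axis (degrees)."""
    cos_a = math.cos(math.radians(path_angle_deg))
    return path_length * (t_up - t_down) / (2.0 * cos_a * t_up * t_down)

# Hypothetical example: DN200 pipe, 45-degree diametral path (~0.283 m),
# water with c ≈ 1480 m/s, true velocity 2.0 m/s.
L, theta, c, v_true = 0.283, 45.0, 1480.0, 2.0
t_down = L / (c + v_true * math.cos(math.radians(theta)))  # with the flow
t_up = L / (c - v_true * math.cos(math.radians(theta)))    # against the flow
print(axial_velocity(t_up, t_down, L, theta))  # recovers ~2.0 m/s
```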


2021 ◽ Author(s): Angély Loubert ◽ Antoine Regnault ◽ Véronique Sébille ◽ Jean-Benoit Hardouin

Abstract
Background: In the analysis of clinical trial endpoints, calibration of patient-reported outcome (PRO) instruments ensures that the resulting "scores" represent the same quantity of the measured concept across applications. Rasch measurement theory (RMT) is a psychometric approach that guarantees algebraic separation of person and item parameter estimates, allowing formal calibration of PRO instruments. In the RMT framework, calibration is performed using the item parameter estimates obtained from a previous "calibration" study. But if calibration is based on poorly estimated item parameters (e.g., because the calibration sample was small), this may hamper the ability to detect a treatment effect, and direct estimation of the item parameters from the trial data (non-calibration) may then be preferred. The objective of this simulation study was to assess the impact of calibration on the comparison of PRO results between treatment groups, using different analysis methods.
Methods: PRO results were simulated from a polytomous Rasch model for a calibration sample and a trial sample. Scenarios included varying sample sizes, instruments with varying numbers of items and response categories, and varying item parameter distributions. Different treatment effect sizes and distributions of the two patient samples were also explored. Treatment groups were compared using different methods based on a random-effects Rasch model. Calibrated and non-calibrated approaches were compared in terms of type-I error, power, bias, and variance of the estimates of the between-group difference.
Results: The calibration approach had no impact on type-I error, power, bias, or dispersion of the estimates. Among other findings, mistargeting between the PRO instrument and the patients in the trial sample (regarding the level of the measured concept) resulted in lower power and higher position bias than appropriate targeting.
Conclusions: Calibration of PROs in clinical trials does not compromise the ability to accurately assess a treatment effect and is essential for properly interpreting PRO results. Given its important added value, calibration should always be performed, within the RMT framework, when a PRO instrument is used as an endpoint in a clinical trial.
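As a concrete sketch of the data-generating step underlying such a simulation study, the snippet below simulates responses under the dichotomous Rasch model; the study itself used a polytomous Rasch model and more elaborate analysis methods, and the names and parameter values here are illustrative:

```python
import numpy as np

# Dichotomous Rasch model: P(X=1) = logistic(theta - beta), where theta
# is the person parameter (ability/level of the measured concept) and
# beta the item difficulty. Simulation only; no calibration shown.

rng = np.random.default_rng(42)

def simulate_rasch(thetas, betas):
    """Simulate 0/1 item responses for persons (rows) x items (cols)."""
    logits = thetas[:, None] - betas[None, :]
    probs = 1.0 / (1.0 + np.exp(-logits))
    return (rng.random(probs.shape) < probs).astype(int)

thetas = rng.normal(0.0, 1.0, size=200)  # person parameters
betas = np.linspace(-1.5, 1.5, 10)       # item difficulties
responses = simulate_rasch(thetas, betas)
print(responses.mean(axis=0))  # easier items -> higher endorsement rates
```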


Entropy ◽ 2021 ◽ Vol 24 (1) ◽ pp. 4 ◽ Author(s): Charis Anastopoulos ◽ Ntina Savvidou

Proposed quantum experiments in deep space will be able to explore quantum information issues in regimes where relativistic effects are important. In this essay, we argue that a proper extension of quantum information theory into the relativistic domain requires the expression of all informational notions in terms of quantum field theoretic (QFT) concepts. This task requires a working and practicable theory of QFT measurements. We present the foundational problems in constructing such a theory, especially in relation to longstanding causality and locality issues in the foundations of QFT. Finally, we present the ongoing Quantum Temporal Probabilities program for constructing a measurement theory that (i) works, in principle, for any QFT, (ii) allows for a first-principles investigation of all relevant issues of causality and locality, and (iii) can be directly applied to experiments of current interest.


Author(s): Lucio Flavio Campanile ◽ Stephanie Kirmse ◽ Alexander Hasse

Compliant mechanisms are alternatives to conventional mechanisms that exploit elastic strain to produce desired deformations instead of using movable parts. They are designed for a kinematic task (providing desired deformations) but do not possess kinematics in the strict sense, which leads to difficulties in assessing the quality of a compliant mechanism's design. The kinematics of a compliant mechanism can be seen as a fuzzy property: there is no unique kinematics, since every deformation needs a particular force system to act, yet certain deformations are easier to obtain than others. A parallel can be drawn with measurement theory: the measured value of a quantity is not unique, but exists as a statistical distribution of measures, and a representative measure of this distribution can be chosen to evaluate how far the measures deviate from a reference value. Based on this analogy, the concepts of accuracy and precision of compliant systems are introduced and discussed in this paper. A quantitative determination of these qualities, based on eigenvalue analysis of the hinge's stiffness, is proposed. This new approach removes most of the ambiguities in the state-of-the-art assessment criteria (usually based on the concepts of path deviation and parasitic motion).
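To illustrate the eigenvalue idea in the simplest terms, the sketch below decomposes a hypothetical hinge stiffness matrix and compares the intended soft mode with the stiff parasitic ones; the matrix and the "selectivity" ratio are made up for illustration and are not the paper's proposed accuracy/precision metrics:

```python
import numpy as np

# Eigen-decomposition of a (symmetric) hinge stiffness matrix: the
# softest eigenmode corresponds to the intended compliant motion, the
# stiff modes to parasitic motions. Values below are purely illustrative.

K = np.array([[ 2.0,  0.5,  0.0],   # hypothetical stiffness matrix with
              [ 0.5, 80.0,  1.0],   # one soft (desired) mode and two
              [ 0.0,  1.0, 95.0]])  # stiff (parasitic) modes

eigvals, eigvecs = np.linalg.eigh(K)   # real spectrum, ascending order
desired_stiffness = eigvals[0]         # softest mode = intended motion
parasitic_stiffness = eigvals[1:]
selectivity = parasitic_stiffness.min() / desired_stiffness
print(f"mode stiffnesses: {eigvals}")
print(f"selectivity (parasitic/desired): {selectivity:.1f}")
```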


2021 ◽ Author(s): Moin Syed

Psychological researchers have long sought to make universal claims about behavior and mental processes. The various crises in psychology (reproducibility, replication, measurement, theory, generalizability) have all demonstrated that such claims are premature, and perhaps impossible using mainstream theoretical and methodological approaches. Both the lack of diversity of samples and simplistic conceptualizations of diversity (e.g., WEIRD, individualism/collectivism) have contributed to an "inference crisis," in which researchers are ill-equipped to make sense of group variation in psychological phenomena, particularly with respect to race/ethnicity. This talk will highlight how the lack of sophisticated frameworks for understanding racial/ethnic differences is a major barrier to developing a reproducible, cumulative psychology.

