Patrick Suppes and Joseph L. Zinnes. Basic measurement theory. Handbook of mathematical psychology, Volume I, edited by R. Duncan Luce, Robert R. Bush, and Eugene Galanter, John Wiley and Sons, Inc., New York and London, 1963, pp. 1–76.

1971 ◽  
Vol 36 (2) ◽  
pp. 322-323 ◽  
Author(s):  
Robert L. Causey
Perception ◽  
1998 ◽  
Vol 27 (10) ◽  
pp. 1221-1228 ◽  
Author(s):  
Ernest Greene

Naito and Cole [1994, in Contributions to Mathematical Psychology: Psychometrics and Methodology Eds G H Fischer and D Laming (New York: Springer)] provide a configuration which they describe as the Gravity Lens illusion. In this configuration, four small dots are presented in proximity to four large disks, and one is asked to compare the slope of an imaginary line which connects one pair of dots with the slope of a line which connects the other pair. In fact the slopes are the same, i.e. their axes are parallel, but because of the positioning of the large disks they appear to be at different orientations. Naito and Cole propose that the perceptual bias is analogous to the effects of gravity on the metrics of physical space, such that mental projections in the vicinity of a disk (or an open circle) are distorted just as the path of light is bent as it passes a massive body such as a star. Here we provide a simple test of this concept by having subjects judge alignments of dots which lie near tangents to a circle. Subjects were asked to project straight lines through pairs of stimulus dots, selecting and marking points in open space which were collinear with each pair. As would be predicted by the Gravity Lens theory, the locations selected by subjects were displaced from straight lines. However, the error magnitudes were substantially larger for judgments of dot pairs which had an oblique alignment than for dot pairs which were aligned with a cardinal axis. This differential effect as a function of stimulus orientation is not predicted by the gravity concept.
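The error measure described in the abstract, the displacement of a subject's chosen point from the straight line through a dot pair, can be sketched as follows (the function name and coordinates are illustrative assumptions, not the authors' method):

```python
import math

def collinearity_error(p1, p2, chosen):
    """Perpendicular distance of a chosen point from the line through
    two stimulus dots -- a simple displacement measure for the
    alignment task described above."""
    (x1, y1), (x2, y2), (x0, y0) = p1, p2, chosen
    # Point-to-line distance via the cross-product formula.
    num = abs((x2 - x1) * (y1 - y0) - (x1 - x0) * (y2 - y1))
    return num / math.hypot(x2 - x1, y2 - y1)

# A perfectly collinear choice has zero error; a displaced one does not.
assert collinearity_error((0, 0), (1, 1), (2, 2)) == 0.0
print(collinearity_error((0, 0), (4, 0), (8, 1)))  # 1.0: displaced one unit
```

Comparing this error across oblique versus cardinal dot-pair orientations is the contrast the study reports.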


Author(s):  
Andrew Mackenzie

Abstract For qualitative probability spaces, monotone continuity and third-order atom-swarming are together sufficient for a unique countably additive probability measure representation that may have atoms (Mackenzie in Theor Econ 14:709–778, 2019). We provide a new proof by appealing to a theorem of Luce (Ann Math Stat 38:780–786, 1967), highlighting the usefulness of extensive measurement theory (Krantz et al. in Foundations of Measurement Volume I: Additive and Polynomial Representations. Academic Press, New York, 1971) for economists.
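Schematically, the kind of representation result at issue can be written as follows (the notation is generic for a qualitative probability order and is assumed here, not taken from the paper):

```latex
% A qualitative probability order $\succsim$ on an algebra of events
% is represented by a unique countably additive measure $\mu$ when
A \succsim B \iff \mu(A) \ge \mu(B), \qquad \mu(\Omega) = 1 .
```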


2006 ◽  
Vol 4 (1) ◽  
pp. 83-101 ◽  
Author(s):  
Colleen A. Redding ◽  
Jay E. Maddock ◽  
Joseph S. Rossi

Measurement theory and practice define how well we can measure the most important constructs in the health behavior field. This article reviews the sequential process of measurement development, which builds upon both theory and evidence, as well as building toward the future of measurement development. Some basic measurement theory and concepts are reviewed, including types of reliability and validity. The process of scale development and selection is described in some detail, with clear advice for choosing measures and criteria for selecting items and scales. Two different examples of theory-based measurement development are then described in detail: one of an alcohol expectancy scale grounded in Social Learning Theory, and the other of scales assessing confidence in remaining quit and temptation to smoke, grounded in the Transtheoretical Model conceptualization of self-efficacy. These examples illustrate two different ways that measurement development efforts can produce good scales, with different strengths. Finally, some future directions for the field are discussed within the context of health behavior measurement.


Author(s):  
Yu Wang

Measurement plays a fundamental role in our modern world, and measurement theory uses statistical tools to measure and to analyze data. In this chapter, we will examine several statistical techniques for measuring user behavior. We will first discuss the fundamental characteristics of user behavior, and then we will describe the scoring and profiling approaches to measuring user behavior. The fundamental idea of measurement theory is that measurements are not the same as the outcome being measured. Hence, if we want to draw conclusions about the outcome, we must take into account the nature of the correspondence between the outcome and the measurements. Our goal for measuring user behavior is to understand behavior patterns so we can profile users or groups correctly. Readers who are interested in basic measurement theory should refer to Krantz, Luce, Suppes & Tversky (1971), Suppes, Krantz, Luce & Tversky (1989), Luce, Krantz, Suppes & Tversky (1991), Hand (2004), and Shultz & Whitney (2005). Any measurement can involve two types of errors: systematic errors and random errors. A systematic error remains in the same direction throughout a set of measurements, shifting values consistently, whether all positive or all negative. Generally, a systematic error is difficult to identify and account for. Systematic errors originate in one of two ways: (1) errors of calibration, and (2) errors of use. An error of calibration occurs, for example, if network data is collected incorrectly. More specifically, if the allowable value for one variable should range from 1 to 1000 but we incorrectly limit the range to a maximum of 100, then all the collected traffic data corresponding to this variable will be affected in the same way, giving rise to a systematic error. An error of use occurs, for example, if the data is collected correctly but is somehow transferred incorrectly.
If we define a “byte” as the data type for a variable whose values can exceed 255, we expect incorrect results on observations with values greater than 255 for this variable. A random error varies from process to process and is equally likely to be positive or negative. Random errors arise because of either uncontrolled variables or specimen variations. In any case, the idea is to identify all variables that can influence the result of the measurement and to control them closely enough that the resulting random errors are no longer objectionable. Random errors can be addressed with statistical methods, and in most measurements only random errors will contribute to estimates of probable error. A common source of random error in measuring user behavior is variance. A robust profiling measurement has to take into account the variances in profiling patterns on both (1) the network system side, such as variances in network groups or domains, traffic volume, and operating systems, and (2) the user side, such as job responsibilities, working schedules, department categorization, security privileges, and computer skills. The profiling measurement must be able to separate such variances between the system and user sides, so that overhauling the network infrastructure or changing employment would have less of an impact on the overall profiling system. Recently, the hierarchical generalized linear model has been increasingly used to address such variances; we will discuss this modern technique later in this chapter.
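The two error types can be illustrated with a short sketch; the function names, the modular wraparound of the byte field (as with an unsigned 8-bit integer in C), and the noise parameters are illustrative assumptions, not taken from the chapter:

```python
import random
import statistics

# Systematic errors: an 8-bit field silently wraps values above 255,
# and a mis-calibrated collector caps another variable at 100.
def store_in_byte(value):
    """Simulate storing a reading in an unsigned 8-bit field (wraps at 256)."""
    return value & 0xFF

def miscalibrated(value, cap=100):
    """Simulate a collector whose allowable range was wrongly capped."""
    return min(value, cap)

assert store_in_byte(300) == 44   # 300 - 256: every large reading is biased the same way
assert miscalibrated(850) == 100  # all large readings collapse to the cap

# Random error: zero-mean noise around a true value; the mean of repeated
# measurements converges on the truth, unlike a systematic offset.
random.seed(0)
true_value = 500
readings = [true_value + random.gauss(0, 5) for _ in range(10_000)]
print(statistics.mean(readings))  # close to 500
```

The contrast shows why only the random component averages out with repetition, while the systematic component must be found and corrected at its source.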

