Representationalism as a Basis for Metaphysics

Author(s):  
J. E. Wolff

This chapter addresses two challenges for using the representational theory of measurement (RTM) as a basis for a metaphysics of quantities. The first is the dominant interpretation of representationalism as being committed to operationalism and empiricism. The chapter argues in favour of treating RTM itself as a mathematical framework open to different interpretations and proposes a more realist understanding of RTM, which treats the mapping between represented and representing structure as an isomorphism rather than a mere homomorphism. This adjustment then enables us to address the second challenge, which is the permissivism present in standard representationalism, according to which there is no special division into quantitative and non-quantitative attributes. Based on results in abstract measurement theory, the chapter argues that, on the contrary, RTM provides the means to draw such a distinction at an intuitively plausible place: only attributes representable on ‘super-ratio scales’ are quantitative.

Author(s):  
Daniel Lassiter

Most previous work on graded modality has relied on qualitative orderings, rather than degree semantics. This chapter introduces the Representational Theory of Measurement (RTM), a framework which makes it possible to translate between qualitative and degree-based scales. I describe a way of using RTM to extend the compositional degree semantics introduced in chapter 1 to qualitative scales. English data are used to motivate the application of the RTM distinction between ordinal, interval, and ratio scales to scalar adjectives, with special attention to the kinds of statements that are semantically interpretable relative to different scale types. I also propose and motivate empirically a distinction between ‘additive’ and ‘intermediate’ scales, which interact differently with the algebraic join operation (realizing sum formation or disjunction, depending on the domain). This distinction is reflected in inferential properties of non-modal adjectives in English, and is also important for the analysis of graded modality in later chapters.
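The additive/intermediate contrast can be illustrated with a small sketch. This is a reconstruction with hypothetical degree assignments, not Lassiter's formal definitions; the mean is used as one possible model of an intermediate join.

```python
# Hedged sketch of the additive vs. intermediate scale contrast.
# Degree values and the choice of mean for the intermediate case are
# hypothetical illustrations, not taken from the chapter.

def join_degree_additive(d1, d2):
    """Additive scale: the degree of a join is the sum of the parts'
    degrees (e.g. the weight of a sum of objects)."""
    return d1 + d2

def join_degree_intermediate(d1, d2):
    """Intermediate scale: the degree of a join lies between the
    joinands' degrees (the mean is one simple model of this)."""
    return (d1 + d2) / 2

w1, w2 = 3.0, 5.0
assert join_degree_additive(w1, w2) == 8.0            # exceeds both parts
d = join_degree_intermediate(w1, w2)
assert min(w1, w2) <= d <= max(w1, w2)                # stays between the parts
```

The inferential difference is that on an additive scale the join's degree can exceed both joinands, while on an intermediate scale it never can.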


Author(s):  
Patrick Suppes

A conceptual analysis of measurement can properly begin by formulating the two fundamental problems of any measurement procedure. The first problem is that of representation, justifying the assignment of numbers to objects or phenomena. We cannot literally take a number in our hands and ‘apply’ it to a physical object. What we can show is that the structure of a set of phenomena under certain empirical operations and relations is the same as the structure of some set of numbers under corresponding arithmetical operations and relations. Solution of the representation problem for a theory of measurement does not completely lay bare the structure of the theory, for there is often a formal difference between the kinds of assignment of numbers arising from different procedures of measurement. This is the second fundamental problem, determining the scale type of a given procedure. Counting is an example of an absolute scale. The number of members of a given collection of objects is determined uniquely. In contrast, the measurement of mass or weight is an example of a ratio scale. An empirical procedure for measuring mass does not determine the unit of mass. The measurement of temperature is an example of an interval scale. The empirical procedure of measuring temperature by use of a thermometer determines neither a unit nor an origin. In this sort of measurement, the ratio of any two intervals is independent of the unit and zero point of measurement. Still another type of scale is one which is arbitrary except for order. The Mohs hardness scale, according to which minerals are ranked in regard to hardness as determined by a scratch test, and the Beaufort wind scale, whereby the strength of a wind is classified as calm, light air, light breeze, and so on, are examples of ordinal scales. A distinction is made between those scales of measurement which are fundamental and those which are derived.
A derived scale presupposes and uses the numerical results of at least one other scale. In contrast, a fundamental scale does not depend on others. Another common distinction is that between extensive and intensive quantities or scales. For extensive quantities like mass or distance an empirical operation of combination can be given which has the structural properties of the numerical operation of addition. Intensive quantities do not have such an operation; typical examples are temperature and cardinal utility. A widespread complaint about this classical foundation of measurement is that it takes too little account of the analysis of variability in the quantity measured. One important source is systematic variability in the empirical properties of the object being measured. Another source lies not in the object but in the procedures of measurement being used. There are also random errors which can arise from variability in the object, the procedures or the conditions surrounding the observations.
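The scale types described above can be characterized by their admissible transformations. The following sketch (an illustration, not from the text, with hypothetical constants and values) shows that ratios of values are invariant on a ratio scale, whereas on an interval scale only ratios of intervals survive a change of unit and origin.

```python
# Illustrative sketch: scale types differ in which numerical
# transformations preserve the information a scale carries.
# Constants and example values below are hypothetical.

def close(u, v, eps=1e-9):
    """Floating-point comparison helper."""
    return abs(u - v) < eps

def ratio_transform(x, a=2.54):
    """Admissible for ratio scales: x -> a*x with a > 0 (change of unit)."""
    return a * x

def interval_transform(x, a=1.8, b=32.0):
    """Admissible for interval scales: x -> a*x + b (change of unit and origin)."""
    return a * x + b

# Ratios of values are meaningful on a ratio scale (e.g. mass) ...
m1, m2 = 10.0, 5.0
assert close(m1 / m2, ratio_transform(m1) / ratio_transform(m2))

# ... but not on an interval scale (e.g. temperature), where only
# the ratio of any two intervals is invariant, as the text notes.
t1, t2, t3, t4 = 30.0, 10.0, 25.0, 20.0
f = interval_transform
assert not close(t1 / t2, f(t1) / f(t2))
assert close((t1 - t2) / (t3 - t4), (f(t1) - f(t2)) / (f(t3) - f(t4)))
```

An absolute scale admits only the identity transformation, and an ordinal scale any monotone increasing one; the two cases above sit between these extremes.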


1975 ◽  
Vol 41 (1) ◽  
pp. 3-28 ◽  
Author(s):  
Lubomir S. Prytulak

Numerous observations are incongruous with Stevens-tradition theory of scale classification: (a) ratio scales, when available, are not necessarily preferred to interval scales; (b) the same scale changes classification depending on the use to which it is put; (c) a scale considered in isolation cannot be classified; (d) performing an inadmissible transformation on a scale entails no loss of information; and (e) the ratio scales of psychophysics do not qualify as interval scales. These incongruities result from such theorists' (a) belief that they classify scales when they really classify functions between scales; (b) belief that scientists seek new rules for assigning numbers to familiar events when, in fact, they seek new events to assign numbers to using familiar rules; and (c) confusion of function type with judgment type, leading to the erroneous claim that the ratio scales of psychophysics are distinct from other behavioral scales. Implications of the above interpretation of Stevens-tradition theory are that: (a) the ranking of scales according to desirability is situation-specific (a situation-free ranking clashes with scientists' frequent preference for “inferior” scales), and (b) proscriptions against mathematical manipulations or tests of statistical significance apply not to a single scale but to inferences from one scale to another.


2007 ◽  
Vol 1 (1) ◽  
pp. 122-196
Author(s):  
Thomas L. Saaty

Mathematics applications largely depend on scientific practice. In science, measurement depends on the use of scales, most frequently ratio scales. A ratio scale is applied to measure various physical attributes and assumes a zero and an arbitrary unit used uniformly throughout an application. Different ratio scales are combined by means of formulas. The formulas apply within structures involving variables and their relations under natural law. The meaning and use of the outcome is then interpreted according to the judgment of an expert as to how well it meets understanding and experience or satisfies laws of nature that are always there. Science derives results objectively, but interprets their significance subjectively. In decision making, there are no set laws to characterize structures in which relations are predetermined for every decision. Understanding is needed to structure a problem and then to use judgments to represent importance and preference quantitatively, so that a best outcome can be derived by combining and trading off different factors or attributes. From numerical representations of judgments, priority scales are derived and synthesized according to given rules of composition. In decision making, priority scales can only be derived objectively after subjective judgments are made. The process is the opposite of what we do in science. This paper summarizes a mathematical theory of measurement in decision making and applies it to real-life examples of complex decisions.
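The derivation of priority scales from pairwise judgments can be sketched as follows. This is a minimal power-iteration version of the eigenvector method associated with Saaty's Analytic Hierarchy Process; the judgment matrix is a hypothetical example, not data from the paper.

```python
# Sketch of deriving a priority scale from pairwise comparison judgments,
# in the spirit of Saaty's AHP. The judgment matrix below is hypothetical.

def priorities(M, iters=100):
    """Approximate the principal eigenvector of a positive reciprocal
    matrix by power iteration, normalized so the priorities sum to 1."""
    n = len(M)
    v = [1.0 / n] * n
    for _ in range(iters):
        w = [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]
        s = sum(w)
        v = [x / s for x in w]
    return v

# a_ij encodes how strongly alternative i is preferred to j; the matrix
# is reciprocal (a_ji = 1 / a_ij) with ones on the diagonal.
judgments = [
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
]
p = priorities(judgments)
assert p[0] > p[1] > p[2]            # first alternative gets the top weight
assert abs(sum(p) - 1.0) < 1e-9     # priorities form a normalized scale
```

The derived vector is a ratio scale of priorities: it is unique up to the normalization chosen, which is the sense in which objective priorities follow subjective judgments.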


Author(s):  
Masanao Ozawa ◽  
Andrei Khrennikov

We continue to analyze basic constraints on human decision making from the viewpoint of quantum measurement theory (QMT). As has been found, the conventional QMT based on the projection postulate cannot account for the combination of the question order effect (QOE) and the response replicability effect (RRE). This was an alarm signal for quantum-like modeling of decision making. Recently, it was shown that this objection to quantum-like modeling can be removed on the basis of the general QMT based on quantum instruments. In the present paper we analyze the problem of combining QOE, RRE, and the famous QQ-equality (QQE). This equality was derived by Busemeyer and Wang, and it was shown (in a joint paper with Solloway and Shiffrin) that statistical data from many social opinion polls satisfy it. Here, we construct quantum instruments satisfying QOE, RRE, and QQE. The general features of our approach are formalized in postulates that generalize the Wang-Busemeyer postulates for quantum-like modeling of decision making. Moreover, we show that our model closely reproduces the statistics of the famous Clinton-Gore poll data with a prior belief state independent of the question order. This model successfully removes the order effect from the data to determine the genuine distribution of opinions in the poll. The paper also provides a psychologist-friendly introduction to the theory of quantum instruments, the most general mathematical framework for quantum measurements. We hope that this theory will attract the attention of psychologists and stimulate further applications.
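The question order effect itself is easy to exhibit with ordinary projective measurements. The toy model below (an illustration, not the authors' quantum-instrument construction) uses a hypothetical 2-dimensional real belief state and two non-commuting yes/no questions.

```python
# Minimal illustration of a question order effect from projective
# measurements. The state and question directions are hypothetical
# 2-d real examples; directions are assumed to be unit vectors.
import math

def project(state, direction):
    """Project `state` onto unit vector `direction`; return the
    probability of a 'yes' answer and the post-measurement state."""
    amp = state[0] * direction[0] + state[1] * direction[1]
    prob = amp * amp
    post = (direction[0], direction[1]) if prob > 0 else state
    return prob, post

psi = (1.0, 0.0)                                    # prior belief state
A = (1.0, 0.0)                                      # question A: 'yes' direction
B = (math.cos(math.pi / 4), math.sin(math.pi / 4))  # question B, at 45 degrees

# Probability of 'yes' to A and then 'yes' to B ...
pA, post = project(psi, A)
pAB = pA * project(post, B)[0]

# ... versus 'yes' to B and then 'yes' to A.
pB, post = project(psi, B)
pBA = pB * project(post, A)[0]

assert abs(pAB - 0.5) < 1e-9 and abs(pBA - 0.25) < 1e-9
assert pAB != pBA                                   # question order effect
```

What projective measurement cannot do, per the abstract, is combine this order effect with response replicability; that is what motivates the move to general quantum instruments.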


Author(s):  
Saratiel Weszerai Musvoto

This study emphasises the fact that the objectives of the financial statements are not compatible with the principles that establish measurement in the social sciences, and that they therefore cannot be considered to be measurement objectives. The concept of measurement presupposes an understanding of its underlying principles; consequently, the objectives of a measurement discipline make measurement sense only in the presence of a theory of measurement in which they are contained. Currently, accounting is considered to be a measurement discipline with complete measurement objectives, even in the absence of a measurement theory that incorporates the objectives of the measurement process. In this study the principles of the representational theory of measurement (a theory that establishes measurement in the social sciences) are used to emphasise that the objectives of the financial statements are not measurement objectives unless they are supported by a theory of measurement. Hence the financial statements cannot contain measurement information until a theory of measurement is established that incorporates the objectives of the accounting measurement processes.


2018 ◽  
pp. 247-260
Author(s):  
Ivan Moscati

Chapter 15 offers a conclusion to the history of measurement theory by reconstructing the origins of the representational theory of measurement in the early work of Patrick Suppes. In particular, the chapter shows that Suppes’s superseding of the unit-based understanding of measurement that he had embraced in the early 1950s, his endorsement of a liberal definition of measurement à la Stanley Smith Stevens in the mid-1950s, his conceiving of the project of an axiomatic underpinning of this notion of measurement in the late 1950s, and the realization of this project during the 1960s all have their origins in the utility analysis research he conducted from 1953 to 1957 within the Stanford Value Theory Project. The representational theory of measurement received full-fledged expression in Foundations of Measurement (1971), a book coauthored by Suppes, Duncan Luce, David Krantz, and Amos Tversky, which quickly became the dominant theory of measurement.


2018 ◽  
pp. 147-162 ◽  
Author(s):  
Ivan Moscati

Chapter 9 discusses the axiomatic version of expected utility theory (EUT), a theory of decision-making under risk, put forward by John von Neumann and Oskar Morgenstern in their book Theory of Games and Economic Behavior (1944). EUT was a turning point in the history of utility measurement. While discussions of the measurability of utility before 1944 focused on the utility used to analyze decision-making between risk-free alternatives, after that year, discussions centered on the utility used to analyze decision-making between risky alternatives. In Theory of Games, the nature of the cardinal utility function u featured in von Neumann and Morgenstern’s EUT, and its relationship with the riskless utility function U of previous utility analysis, remained ambiguous. Von Neumann and Morgenstern also put forward an axiomatic theory of measurement, which presents some similarities with Stanley Smith Stevens’s measurement theory but had no immediate impact on utility analysis.


Author(s):  
Charmaine Scrimnger-Christian ◽  
S. Wedzerai Musvoto

The purpose of this study is to discuss a possible way forward in accounting measurement. It also highlights accounting researchers' lack of appreciation of the distinction between representational measurement theory and the axioms of quantity on which the classical theory of measurement is based. For a long time, research in measurement theory classified representational measurement as nothing but an application of the axioms of quantity; it was believed that a single approach to measurement theory exists. However, recent studies in measurement theory have shown that there are two sides to measurement theory: one at the interface with experimental science, which is emphasized in representational measurement, and the other at the interface with quantitative theory, which is emphasized in the classical measurement theory. Research in accounting measurement has concentrated on establishing a representationally based accounting measurement theory, under the premise that no measurement theory exists in the discipline. This viewpoint neglects the concepts of classical measurement theory that already exist in the accounting discipline, and it has created misunderstandings in accounting with regard to whether a theory of measurement exists in the discipline. This study highlights that the accounting concept of measurement was conceived under the principles of the classical measurement theory. For this reason, it is suggested that research on and improvements to the accounting measurement concept should be made in the light of the already existing principles of the classical theory of measurement in which the accounting concept of measurement was conceived.


Author(s):  
J. E. Wolff

This chapter introduces the representational theory of measurement as the relevant formal framework for a metaphysics of quantities. After presenting key elements of the representational approach, axioms for different measurement structures are presented and their representation and uniqueness theorems are compared. Particular attention is given to Hölder’s theorem, which in the first instance describes conditions for quantitativeness for additive extensive structures, but which can be generalized to more abstract structures. The last section discusses the relationship between uniqueness, the hierarchy of scales, and the measurement-theoretic notion of meaningfulness. This chapter provides the basis for Chapter 6, which makes use of more abstract results in measurement theory.
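The core of a representation theorem for additive extensive structures can be sketched concretely. The example below is an illustration of the general idea, not the chapter's own material: a representing map phi sends empirical concatenation to numerical addition, phi(a ∘ b) = phi(a) + phi(b), and the atomic rods and their assigned lengths are hypothetical.

```python
# Toy illustration of a representation theorem for an additive
# extensive structure: concatenation of empirical objects maps to
# addition of numbers. Objects and length assignments are hypothetical.
from collections import Counter

def concat(a, b):
    """Empirical concatenation, modeled as combining multisets of rods."""
    return a + b                      # Counters add component-wise

phi_atoms = {'r1': 2.0, 'r2': 3.5}    # lengths assigned to atomic rods

def phi(obj):
    """The representing homomorphism into the positive reals."""
    return sum(phi_atoms[k] * n for k, n in obj.items())

a = Counter({'r1': 1})
b = Counter({'r2': 2})
assert phi(concat(a, b)) == phi(a) + phi(b)   # phi(a o b) = phi(a) + phi(b)
```

The corresponding uniqueness theorem says any other adequate representation is a positive multiple of phi, which is what makes length a ratio-scale quantity; Hölder's theorem secures exactly this for additive extensive structures.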

