Thorndike’s Credo: Metaphysics in psychometrics

2020 ◽  
Vol 30 (3) ◽  
pp. 309-328
Author(s):  
Joel Michell

Endorsing a priori the conviction that any science worthy of the name must measure the attributes it investigates, psychometricians adopted a metaphysical paradigm (without acknowledging it as such) to secure the claim that mental tests measure psychological attributes, a claim that test data alone were inadequate to support. The fundamental axiom of this paradigm was Thorndike’s Credo (“All that exists, exists in some amount and can be measured”; 1918, p. 16), which entails its central lemma, the psychometrician’s fallacy (“All ordered attributes are quantitative”; Michell, 2009, p. 41), which in turn supplies psychometrics’ primary methodological principle (“interval scales can be derived from ordinal data”). Logically, this framework is flawed at every level: Thorndike’s Credo is metaphysical overreach; the psychometrician’s fallacy is just that, a logical fallacy; and the primary methodological principle is a piece of aprioristic thinking.

2019 ◽  
Vol 29 (1) ◽  
pp. 138-143 ◽  
Author(s):  
Joel Michell

Trendler’s (2019) critique of conjoint measurement fails because he neglects to distinguish standard sequences (human constructions) from series of equal magnitudes (features of quantitative structures). The latter, not the former, is presumed in conjoint measurement. Furthermore, in so far as some mental tests use humans as measuring instruments, the only questionable assumption involved is that the relevant psychological attributes are quantitative, and that assumption is potentially testable using conjoint measurement. Finally, contrary to Trendler, psychological phenomena can be captured and the structure of psychological attributes investigated using conjoint measurement.
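As an illustration of the claim that the quantitative-structure assumption is testable via conjoint measurement, one of its testable consequences is the double-cancellation condition, which can be checked directly against an ordered data matrix. A minimal sketch (not from either paper; names and data are illustrative):

```python
import itertools
import numpy as np

def satisfies_double_cancellation(P):
    """Check double cancellation, a necessary condition of additive
    conjoint structure, on a matrix P where P[i, j] is the observed
    order value for row level i and column level j (higher = greater).
    For every choice of three row and three column levels: if
    P[a2,b1] >= P[a1,b2] and P[a3,b2] >= P[a2,b3],
    then P[a3,b1] >= P[a1,b3] must hold."""
    n_rows, n_cols = P.shape
    for a1, a2, a3 in itertools.permutations(range(n_rows), 3):
        for b1, b2, b3 in itertools.permutations(range(n_cols), 3):
            if P[a2, b1] >= P[a1, b2] and P[a3, b2] >= P[a2, b3]:
                if not P[a3, b1] >= P[a1, b3]:
                    return False
    return True

# An additive structure (row effect + column effect) must pass:
additive = np.add.outer([0.0, 1.0, 2.5], [0.0, 0.7, 1.4])
print(satisfies_double_cancellation(additive))  # prints True
```

A matrix that is merely ordered but not additively structured can fail the check, which is what makes the condition an empirical test rather than an a priori guarantee.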


1982 ◽  
Vol 22 (06) ◽  
pp. 933-944 ◽  
Author(s):  
Naelah A. Mousli ◽  
Rajagopal Raghavan ◽  
Heber Cinco-Ley ◽  
Fernando Samaniego-V.

Abstract
This paper reviews pressure behavior at an observation well intercepted by a vertical fracture. The active well was assumed either unfractured or intercepted by a fracture parallel to the fracture at the observation well. We show that a vertical fracture at the observation well has a significant influence on the pressure response at that well, and therefore wellbore conditions at the observation well must be considered. The new type curves presented here can be used to determine the compass orientation of the fracture plane at the observation well. Conditions are delineated under which the fracture at the observation well may influence an interference test. This information should be useful in designing and analyzing tests. The pressure response curve at the observation well has no characteristic features that will reveal the existence of a fracture; the existence of the fracture would have to be known a priori or from independent measurements such as single-well tests.

Introduction
In this work, we examine interference test data for the influence of a vertical fracture located at the observation well. Previous studies of interference testing have been directed toward understanding the effects of reservoir heterogeneity or of wellbore conditions at the active (flowing) well. Several correspondents suggested our study because many field tests are conducted when the observation well is fractured. They also indicated that it is not uncommon for both wells (active and observation) to be fractured. To the best of our knowledge, this is the first study to examine the influence of a vertical fracture at the observation well on interference test data. Two conditions at the active well are examined: an active well that is unfractured (plane radial flow) and an active well that intercepts a vertical fracture parallel to the fracture at the observation well.

The parameters of interest include the distance between the two wells, the compass orientation of the fracture plane with respect to the line joining the two wellbores, and the ratio of the fracture lengths at the active and observation wells if both wells are fractured. The results given here should enable the analyst to:
- interpret the pressure response at the fractured observation well,
- interpret the pressure response when both the active and the observation wells are fractured,
- design tests to account for the existence of a fracture at one or both wells, and
- determine quantitatively the orientation and/or length of the fracture at an observation well.

We also show that one should not assume a priori that the effect of a fracture on the observation-well response will be similar to that of a concentric skin region around the wellbore; that is, idealizations introduced to incorporate the existence of the fracture, such as the effective wellbore radius concept, may not be applicable.

Mathematical Model and Assumptions
In this study, we consider the flow of a slightly compressible fluid of constant viscosity in a uniform and homogeneous porous medium of infinite extent. Fluid is produced at a constant surface rate at the active well. Wellbore storage effects are assumed negligible because the main objective of our work is to demonstrate the influence of the fractures. However, note that wellbore storage effects may mask the early-time response at the observation well. Refs. 1 and 2 discuss the influence of wellbore storage on interference test data. We obtained the solutions to the problems considered here by the method of sources and sinks. The fracture at the observation well was assumed to be a plane source of infinite conductivity.
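For background, when the active well is unfractured and the reservoir is homogeneous and infinite, the observation-well response reduces to the classical line-source (exponential-integral) solution. A minimal stdlib-only sketch of that baseline (illustrative context only, not the paper's fractured-well solutions):

```python
import math

EULER_GAMMA = 0.5772156649015329

def e1(x, terms=60):
    """Exponential integral E1(x), x > 0, via its convergent series:
    E1(x) = -gamma - ln(x) - sum_{n>=1} (-x)^n / (n * n!).
    Adequate for the small arguments typical of interference tests."""
    s = sum((-x) ** n / (n * math.factorial(n)) for n in range(1, terms + 1))
    return -EULER_GAMMA - math.log(x) - s

def line_source_pD(rD, tD):
    """Dimensionless pressure drop at dimensionless distance rD and
    dimensionless time tD for a constant-rate line source in an
    infinite, homogeneous reservoir: pD = (1/2) E1(rD^2 / (4 tD))."""
    return 0.5 * e1(rD ** 2 / (4.0 * tD))
```

At late time this approaches the familiar semilog form pD ≈ (1/2)(ln(4 tD / rD^2) - 0.5772), which is the basis of conventional interference-test analysis that a fracture at either well can distort.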


Author(s):  
Bryce A. Roth ◽  
David L. Doel ◽  
Jeffrey J. Cissell

This paper describes the development of an improved method for reliable, repeatable, and accurate matching of engine performance models to test data. The centerpiece of this approach is a minimum variance estimator algorithm with a priori estimates, which addresses both deterministic and probabilistic aspects of the problem. Specific probabilistic aspects include uncertainty in the measurements, prior expectations on model matching parameters, and noise in the power-setting parameters. The algorithm is able to produce optimal results using any number of measurements and model matching parameters and can therefore take advantage of all measured data to produce the best possible match. This improves on current matching algorithms, which require that the number of measured parameters equal the number of model matching parameters. This algorithm has been implemented in the Numerical Propulsion System Simulation (NPSS) and tested on a generic high-bypass turbofan model typical of those used in commercial service. The baseline engine model and simulated test data are described in detail. Several exercises are discussed to illustrate results available from this algorithm, including the matching of a typical power calibration data set and of a typical production engine data set.
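The abstract does not give the estimator's equations; for a linearized measurement model, a minimum-variance estimate with a priori parameter estimates takes the standard regularized least-squares (MAP) form sketched below. All names are illustrative, and the linear model is an assumption for the sketch:

```python
import numpy as np

def mv_estimate(H, z, R, x0, P0):
    """Minimum-variance (MAP) estimate for the linearized model
    z = H x + v, v ~ N(0, R), with a priori estimate x0 and prior
    covariance P0. Minimizes
    (z - H x)^T R^-1 (z - H x) + (x - x0)^T P0^-1 (x - x0),
    so the numbers of measurements and parameters need not match."""
    Rinv = np.linalg.inv(R)
    A = H.T @ Rinv @ H + np.linalg.inv(P0)   # posterior information matrix
    return x0 + np.linalg.solve(A, H.T @ Rinv @ (z - H @ x0))

# Underdetermined case: one measurement, two parameters. The prior
# regularizes the problem, so a unique estimate still exists.
H = np.array([[1.0, 1.0]])
x_hat = mv_estimate(H, z=np.array([2.0]), R=np.array([[0.1]]),
                    x0=np.zeros(2), P0=np.eye(2))
```

With an uninformative measurement the estimate stays at x0; as measurement noise R shrinks, the estimate moves toward the least-squares fit of the data, which is the trade-off the abstract's "prior expectations on model matching parameters" describes.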


2009 ◽  
Vol 1224 ◽  
Author(s):  
Antonio Rinaldi ◽  
Pedro Peralta ◽  
Cody Friesen ◽  
Dhiraj Nahar ◽  
Silvia Licoccia ◽  
...  

Abstract
The compressive plastic strength of nanosized single-crystal metallic pillars is known to depend on the diameter D, but little attention has been given to the pillar height h. The important role of h is analyzed here by observing the suppression of generalized crystal plasticity below a critical value h_CR that can be estimated a priori. Novel in-situ compression tests on regular pillars (D = 300-900 nm) as well as nanobuttons (i.e., very short pillars with h less than h_CR; here, D = 200 nm and h < 120 nm) show that the latter are markedly harder than ordinary Ni pillars, withstanding stresses greater than 2 GPa. This h-controlled transition in the plastic behaviour is accompanied by extrinsic plastic effects in the harder nanobuttons; such effects normally arise as Saint-Venant’s assumption ceases to be accurate. Some bias related to those effects is identified and removed from the test data. Our results underline that nanoscale testing is challenging when current methodology and technology are pushed to the limit.


2003 ◽  
Vol 42 (4) ◽  
pp. 515-534 ◽  
Author(s):  
Joel Michell

Five episodes in the history of quantitative science provided the occasions for changes in the understanding of measurement important for attempts at quantification in the social sciences. First, Euclid's generalization of the ancient concept of measure to the concept of ratio provided a clear rationale for the use of numbers in quantitative science, a rationale that has been important throughout the history of science and one that contradicts the definition of measurement currently fashionable within the social sciences. Second, Duns Scotus's modelling of qualitative change upon quantitative change provided the opportunity to extend measurement from extensive to intensive attributes, a shift that makes it clear that the possibility of measuring qualitative attributes in the social sciences is not one that can be ruled out a priori. Third, Hölder's specification of the character of quantitative attributes showed that quantitative structure is a specific kind of empirical structure, one that is not logically necessary; it follows that psychological attributes are not necessarily quantitative either. Taking the points emanating from Duns Scotus and Hölder together, the issue of whether psychological attributes are quantitative is shown to be an empirical one. Fourth, Campbell's delineation of the categories of fundamental and derived measurement, and his subsequent critique of psychophysical measurement, showed that attempts at psychological measurement raised new challenges for measurement theory. Fifth, the articulation of the theory of conjoint measurement by Luce and Tukey revealed one way in which those challenges might be met. Taken as a whole, these episodes show that attempts at measurement in the social sciences are continuous with the rest of science in the sense that the issue of whether social science attributes can be measured raises empirical questions that can be answered only in the light of scientific evidence.


Philosophy ◽  
1999 ◽  
Vol 74 (4) ◽  
pp. 587-594 ◽  
Author(s):  
Graham Bird

Anthony Quinton's ‘The Trouble with Kant’ (Philosophy, Vol. 72, no. 279, January, 1997, pp. 5–18) claims to expose radical faults in Kant's epistemology which are not pointed out ‘in the many commentators (he) has studied’ (p. 17). The faults are, initially, that Kant is a ‘wild and intellectually irresponsible arguer’ (p. 5), and finally that Kant's account of a priori intuitions and concepts is erroneous (p. 16). Quinton suggests that his objections are new, but the truth is that they, and the supposedly Kantian views against which they are directed, have formed a frequent response among commentators from Hegel to Strawson. The ‘wild and irresponsible arguer’ charge is, after all, a commonplace among the ‘fighting Kant tooth and nail’ commentators, though it remains to be shown that Kant is markedly worse than other philosophers of the same period. And the central claim against which Quinton directs his assault, that we impose a spatio-temporal-categorial framework on the manifold of sensation (pp. 5 and 7), has provoked fierce and extensive hostility. Throughout his article Quinton assumes an interpretation of Kant's claim in which his task is to show how our minds construct a common, objective, reality from a spatio-temporal-categorial synthesis ‘by applying a piece of mental apparatus ... to ... a manifold of sensation’ (p. 5). The familiar objections to a ‘coloured spectacles’ interpretation of Kant's account of space and time, and Strawson's initial rejection of the ‘imaginary subject of transcendental psychology’ (Bounds of Sense (London: Methuen, 1966) p. 32) belong to the same tradition.


Author(s):  
D. E. Luzzi ◽  
L. D. Marks ◽  
M. I. Buckett

As the HREM becomes increasingly used for the study of dynamic localized phenomena, the development of techniques to recover the desired information from a real image is important. Often, the important features scatter only weakly in comparison with the matrix material, in addition to being masked by statistical and amorphous noise. The desired information will usually involve accurate knowledge of the position and intensity of the contrast. In order to decipher this information from a complex image, cross-correlation (xcf) techniques can be utilized. Unlike other image-processing methods that rely on data massaging (e.g., high/low-pass filtering or Fourier filtering), the cross-correlation method is a rigorous data reduction technique with no a priori assumptions. We have examined basic cross-correlation procedures using images of discrete Gaussian peaks and have developed an iterative procedure to greatly enhance the capabilities of these techniques when the contrast from the peaks overlaps.
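As a minimal illustration of the idea (not the authors' implementation), an FFT-based cross-correlation can locate a weak Gaussian feature in a noisy image without any filtering assumptions; the peak position and data below are synthetic:

```python
import numpy as np

def wrapped_gaussian(shape, sigma):
    """Unit-amplitude Gaussian template centered at the origin with
    periodic (wrap-around) coordinates, matching circular correlation."""
    y = np.minimum(np.arange(shape[0]), shape[0] - np.arange(shape[0]))
    x = np.minimum(np.arange(shape[1]), shape[1] - np.arange(shape[1]))
    yy, xx = np.meshgrid(y, x, indexing="ij")
    return np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))

def xcf_peak(image, template):
    """Circular cross-correlation via FFT; returns the (row, col)
    shift of the best template match. Mean subtraction makes the
    correlation respond to shape rather than background level."""
    im = image - image.mean()
    tp = template - template.mean()
    corr = np.fft.ifft2(np.fft.fft2(im) * np.conj(np.fft.fft2(tp))).real
    return np.unravel_index(np.argmax(corr), corr.shape)

# Synthetic test image: a Gaussian peak placed at (40, 25) plus noise.
rng = np.random.default_rng(0)
clean = np.roll(wrapped_gaussian((64, 64), sigma=3.0), (40, 25), axis=(0, 1))
noisy = clean + 0.05 * rng.standard_normal((64, 64))
```

Because the correlation integrates over the whole template, the peak estimate degrades gracefully with noise, which is the robustness property the abstract appeals to.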


Author(s):  
H.S. von Harrach ◽  
D.E. Jesson ◽  
S.J. Pennycook

Phase contrast TEM has been the leading technique for high-resolution imaging of materials for many years, whilst STEM has been the principal method for high-resolution microanalysis. However, it was demonstrated many years ago that low-angle dark-field STEM imaging is a priori capable of almost 50% higher point resolution than coherent bright-field imaging (i.e., phase contrast TEM or STEM). This advantage was not exploited until Pennycook developed the high-angle annular dark-field (ADF) technique, which can provide an incoherent image showing both high image resolution and atomic number contrast. This paper describes the design and first results of a 300 kV field-emission STEM (VG Microscopes HB603U) which has improved ADF STEM image resolution towards the 1 angstrom target. The instrument uses a cold field-emission gun, generating a 300 kV beam of up to 1 μA from an 11-stage accelerator. The beam is focussed on to the specimen by two condensers and a condenser-objective lens with a spherical aberration coefficient of 1.0 mm.


2019 ◽  
Vol 4 (5) ◽  
pp. 878-892
Author(s):  
Joseph A. Napoli ◽  
Linda D. Vallino

Purpose
The 2 most commonly used operations to treat velopharyngeal inadequacy (VPI) are superiorly based pharyngeal flap and sphincter pharyngoplasty, both of which may result in hyponasal speech and airway obstruction. The purpose of this article is to (a) describe the bilateral buccal flap revision palatoplasty (BBFRP) as an alternative technique to manage VPI while minimizing these risks and (b) conduct a systematic review of the evidence of BBFRP on speech and other clinical outcomes. A report comparing the speech of a child with hypernasality before and after BBFRP is presented.

Method
A review of databases was conducted for studies of buccal flaps to treat VPI. Using the principles of a systematic review, the articles were read, and data were abstracted for study characteristics that were developed a priori. With respect to the case report, speech and instrumental data from a child with repaired cleft lip and palate and hypernasal speech were collected and analyzed before and after surgery.

Results
Eight articles were included in the analysis. The results were positive, and the evidence is in favor of BBFRP in improving velopharyngeal function, while minimizing the risk of hyponasal speech and obstructive sleep apnea. Before surgery, the child's speech was characterized by moderate hypernasality, and after surgery, it was judged to be within normal limits.

Conclusion
Based on clinical experience and results from the systematic review, there is sufficient evidence that the buccal flap is effective in improving resonance and minimizing obstructive sleep apnea. We recommend BBFRP as another approach in selected patients to manage VPI.

Supplemental Material: https://doi.org/10.23641/asha.9919352


Addiction ◽  
1997 ◽  
Vol 92 (12) ◽  
pp. 1671-1698 ◽  
Author(s):  
Project Match Research Group
Keyword(s):  
A Priori ◽  
