reduce measurement error ◽ Recently Published Documents

TOTAL DOCUMENTS: 36 (FIVE YEARS: 7)
H-INDEX: 13 (FIVE YEARS: 1)

2020 ◽ Author(s): Cassandra L. Boness, Kenneth J. Sher

Background: To accurately identify substance use disorders, we must be confident in our ability to define and measure the construct itself. To date, research has demonstrated that the ways in which substance use disorder criteria are operationalized or assessed can significantly affect the information we obtain from these diagnoses. For example, differing operationalizations of the same construct (e.g., impaired control over substance use) can result in markedly different prevalence estimates. This points to the need for approaches that improve the validity of diagnostic assessments during the measure development phase. Methods: We performed a scoping review of the literature on cognitive interviewing, a technique that provides a systematic way to identify and reduce measurement error associated with the structure and content of interview items. We also present an applied example using alcohol tolerance. Findings: We argue that cognitive interviewing is well suited to improving the validity of substance use disorder assessment items. Conclusions: We suggest that cognitive interviewing be incorporated into the item generation stage of measure development for substance use disorder assessments.


Author(s): Dawn Hall, Joshua D. Shackman

Unlike reflective measurement scales, the steps for developing formative measurement scales tend to be highly subjective and to rely largely on the judgment of the researcher; formative scales have been criticized for this reason. This paper extends Christophersen and Konradt's (2008) method of jointly developing a formative and a reflective scale to assess the mutual validity of each scale. We utilize a second-order method to reduce measurement error in the formative scale, as suggested by Edwards (2011), and test the efficacy of Generalized Structured Component Analysis (GeSCA) for this purpose. For illustrative purposes, we utilize a sample of formative and reflective job satisfaction survey data both to test our joint formative/reflective scale development technique and to assess which formative aspects of job satisfaction align with commonly used reflective job satisfaction scales.
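Because the abstract turns on how formative indicators are weighted and then checked against a reflective scale, a minimal sketch may help make the idea concrete. It is not GeSCA or the authors' second-order procedure: the item names and Likert responses are hypothetical, the reflective score is simulated so the example runs end to end, and ordinary least squares stands in for the formal estimation method.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 300

# Hypothetical formative indicators of job satisfaction (1-7 Likert responses).
formative_items = ["pay", "supervision", "coworkers", "workload"]
X = pd.DataFrame(rng.integers(1, 8, size=(n, len(formative_items))),
                 columns=formative_items)

# Toy reflective scale score; simulated from the same indicators purely for illustration.
reflective = X.mean(axis=1) + rng.normal(0, 0.5, size=n)

# Ordinary least squares gives one simple set of formative weights
# (a second-order GeSCA model would estimate these differently).
design = np.column_stack([np.ones(n), X.to_numpy()])
weights, *_ = np.linalg.lstsq(design, reflective.to_numpy(), rcond=None)

# The weighted composite's correlation with the reflective score is one
# crude indicator of mutual (convergent) validity.
composite = design @ weights
validity_r = np.corrcoef(composite, reflective)[0, 1]

print(dict(zip(["intercept", *formative_items], np.round(weights, 3))))
print(f"formative composite vs. reflective scale: r = {validity_r:.2f}")
```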


Epidemiology ◽ 2019 ◽ Vol 30, pp. S3-S9 ◽ Author(s): Neil J. Perkins, Jennifer Weck, Sunni L. Mumford, Lindsey A. Sjaarda, Emily M. Mitchell, ...

2019 ◽ pp. 174569161986379 ◽ Author(s): Ulrich Schimmack

In 1998, Greenwald, McGhee, and Schwartz proposed that the Implicit Association Test (IAT) measures individual differences in implicit social cognition. This claim requires evidence of construct validity. I review the evidence and show that it is insufficient to support this claim. Most importantly, I show that few studies were able to test the discriminant validity of the IAT as a measure of implicit constructs. I examine discriminant validity in several multimethod studies and find little or no evidence for it. I also show that the validity of the IAT as a measure of attitudes varies across constructs. Validity of the self-esteem IAT is low, but estimates vary across studies. About 20% of the variance in the race IAT reflects racial preferences. The highest validity is obtained for measuring political orientation with the IAT (64%). Most of this valid variance stems from a distinction between individuals with opposing attitudes, whereas reaction times contribute less than 10% of the variance in the prediction of explicit attitude measures. In all domains, explicit measures are more valid than the IAT, but the IAT can be used as a measure of sensitive attitudes to reduce measurement error within a multimethod measurement model.
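The closing point, that combining methods in a multimethod measurement model can reduce measurement error, can be illustrated with a small simulation in classical-test-theory terms. The validity values below (50% valid variance for an explicit measure, 20% for an IAT-like measure) are illustrative assumptions loosely echoing figures in the abstract, not a reanalysis, and the validity-based weights stand in for weights a fitted latent-variable model would estimate.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20_000
attitude = rng.normal(size=n)                 # latent "true" attitude

def noisy_measure(true, valid_variance):
    """Observed score whose variance is `valid_variance` construct + the rest error."""
    noise_sd = np.sqrt((1 - valid_variance) / valid_variance)
    return true + rng.normal(scale=noise_sd, size=true.size)

explicit = noisy_measure(attitude, 0.50)      # assumed validity values
implicit = noisy_measure(attitude, 0.20)

def zscore(x):
    return (x - x.mean()) / x.std()

# Weight each standardized method by its assumed validity; a fitted measurement
# model would estimate these weights from data instead.
composite = 0.50 * zscore(explicit) + 0.20 * zscore(implicit)

for name, m in [("explicit", explicit), ("implicit IAT", implicit), ("composite", composite)]:
    r = np.corrcoef(attitude, m)[0, 1]
    print(f"{name:>12}: r with latent attitude = {r:.3f} (valid variance ~ {r**2:.2f})")
```

Under these assumptions the composite tracks the latent attitude slightly better than the more valid single measure, which is the basic rationale for pooling methods.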


2019 ◽ Vol 30 (11), pp. 1592-1602 ◽ Author(s): Samantha M. W. Wood, Scott P. Johnson, Justin N. Wood

What mechanisms underlie learning in newborn brains? Recently, researchers reported that newborn chicks use unsupervised statistical learning to encode the transitional probabilities (TPs) of shapes in a sequence, suggesting that TP-based statistical learning can be present in newborn brains. Using a preregistered design, we attempted to reproduce this finding with an automated method that eliminated experimenter bias and allowed more than 250 times more data to be collected per chick. With precise measurements of each chick’s behavior, we were able to perform individual-level analyses and substantially reduce measurement error for the group-level analyses. We found no evidence that newborn chicks encode the TPs between sequentially presented shapes. None of the chicks showed evidence for this ability. In contrast, we obtained strong evidence that newborn chicks encode the shapes of individual objects, showing that this automated method can produce robust results. These findings challenge the claim that TP-based statistical learning is present in newborn brains.
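For readers unfamiliar with the statistic at issue, a minimal sketch of computing transitional probabilities, P(next shape | current shape), from a shape sequence follows. The shape labels and the example sequence are hypothetical, not the study's stimuli.

```python
from collections import Counter, defaultdict

sequence = ["circle", "square", "triangle", "circle", "square",
            "cross", "triangle", "circle", "square", "triangle"]

pair_counts = Counter(zip(sequence, sequence[1:]))   # counts of (current, next) pairs
first_counts = Counter(sequence[:-1])                # counts of each "current" shape

# TP(current -> next) = count(current, next) / count(current)
transitional_probability = defaultdict(dict)
for (current, nxt), count in pair_counts.items():
    transitional_probability[current][nxt] = count / first_counts[current]

for current, probs in sorted(transitional_probability.items()):
    print(current, {k: round(v, 2) for k, v in sorted(probs.items())})
```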


2019 ◽ Author(s): Samantha Wood, Justin Wood

The accuracy of science depends on the precision of its methods. When fields produce precise measurements, the scientific method can generate remarkable gains in knowledge. When fields produce noisy measurements, however, the scientific method is not guaranteed to work—in fact, noisy measurements are now regarded as a leading cause of the replication crisis in psychology. Scientists should therefore strive to improve the precision of their methods, especially in fields with noisy measurements. Here, we show that automation can reduce measurement error by ~60% in one domain of developmental psychology: controlled-rearing studies of newborn chicks. Automated studies produce measurements that are 3-4 times more precise, and effect sizes that are 3-4 times larger, than non-automated studies. Automation also eliminates experimenter bias and allows replications to be performed quickly and easily. We suggest that automation can be a powerful tool for improving measurement precision, producing high-powered experiments, and combating the replication crisis.
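The link between measurement precision and effect size can be made concrete with a short back-of-the-envelope calculation: holding the true group difference fixed, shrinking the error variance shrinks the observed standard deviation and therefore enlarges Cohen's d. The specific SDs and the assumed 60% error reduction below are illustrative numbers, not the paper's data.

```python
import math

mean_difference = 1.0     # true group difference, in raw units (assumed)
true_sd = 1.0             # between-subject SD of the behavior itself (assumed)
manual_error_sd = 3.0     # measurement error of a hand-coded method (assumed)
auto_error_sd = manual_error_sd * (1 - 0.60)   # ~60% less measurement error

def cohens_d(diff, true_sd, error_sd):
    # Observed SD combines true variability with measurement error.
    observed_sd = math.sqrt(true_sd**2 + error_sd**2)
    return diff / observed_sd

d_manual = cohens_d(mean_difference, true_sd, manual_error_sd)
d_auto = cohens_d(mean_difference, true_sd, auto_error_sd)
print(f"manual coding: d = {d_manual:.2f}")
print(f"automated:     d = {d_auto:.2f} ({d_auto / d_manual:.1f}x larger)")
```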


Sensors ◽ 2018 ◽ Vol 18 (12), pp. 4134 ◽ Author(s): Lander Urgoiti, David Barrenetxea, Jose Sánchez, Iñigo Pombo, Jorge Álvarez

Workpiece rejection caused by thermal damage is of great concern in high added-value industries such as automotive and aerospace, and controlling surface temperature is vital to avoid this kind of damage. Because surface temperatures are difficult to measure empirically in-process, measurements must be taken at points other than the ground surface, and indirect estimation of surface temperatures therefore requires thermal models. Among the numerous temperature measuring techniques, infrared measurement devices stand out for their speed and accuracy. With this in mind, the current work presents a novel temperature estimation system capable of accurate measurement below the surface and of correctly interpreting those measurements to estimate surface temperatures. The system was validated through a series of tests under different grinding conditions, which confirmed the hypothesized error incurred when temperatures are measured below the workpiece surface during grinding. This method provides a flexible and precise way of estimating surface temperatures in grinding processes and has been shown to reduce measurement error by up to 60%.
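A minimal sketch of the underlying correction problem: a sensor sitting a small depth below the ground surface reads a lower peak temperature than the surface itself, so the reading must be mapped back to a surface estimate. The toy model below assumes a simple exponential attenuation of the peak temperature with depth; it stands in for the authors' thermal model, and the decay length, measurement depth, ambient temperature, and readings are all assumed values.

```python
import math

decay_length_mm = 0.15        # assumed characteristic decay length of the peak temperature
measurement_depth_mm = 0.10   # assumed depth of the measurement point below the surface

def estimate_surface_temperature(measured_c, ambient_c=20.0):
    """Invert the toy attenuation model T(z) = T_surface * exp(-z / L) for the temperature rise."""
    rise = measured_c - ambient_c
    correction = math.exp(measurement_depth_mm / decay_length_mm)
    return ambient_c + rise * correction

for measured in (250.0, 400.0, 550.0):   # example subsurface readings, degrees C
    estimate = estimate_surface_temperature(measured)
    print(f"measured {measured:5.1f} C below surface -> estimated surface {estimate:6.1f} C")
```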


2018 ◽ Author(s): Björn Harink, Huy Nguyen, Kurt Thorn, Polly Fordyce

Multiplexed bioassays, in which multiple analytes of interest are probed in parallel within a single small volume, have greatly accelerated the pace of biological discovery. Bead-based multiplexed bioassays have many technical advantages, including near solution-phase kinetics, small sample volume requirements, many within-assay replicates to reduce measurement error, and, for some bead materials, the ability to synthesize analytes directly on beads via solid-phase synthesis. To allow bead-based multiplexing, analytes can be synthesized on spectrally encoded beads with a 1:1 linkage between analyte identity and embedded codes. Bead-bound analyte libraries can then be pooled and incubated with a fluorescently labeled macromolecule of interest, allowing downstream quantification of interactions between the macromolecule and all analytes simultaneously via imaging alone. Extracting quantitative binding data from these images poses several computational image processing challenges, requiring the ability to identify all beads in each image, quantify the bound fluorescent material associated with each bead, and determine their embedded spectral codes to reveal analyte identities. Here, we present a novel open-source Python software package (the mrbles analysis package) that provides the necessary tools to: (1) find encoded beads in a bright-field microscopy image; (2) quantify bound fluorescent material associated with bead perimeters; (3) identify embedded ratiometric spectral codes within beads; and (4) return data aggregated by embedded code and for each individual bead. We demonstrate the utility of this package by applying it to the analysis of data generated via multiplexed measurement of calcineurin protein binding to MRBLEs (Microspheres with Ratiometric Barcode Lanthanide Encoding) containing known and mutant binding peptide motifs. We anticipate that this flexible package should be applicable to a wide variety of assays, including simple bead- or droplet-finding analysis, quantification of binding to non-encoded beads, and analysis of multiplexed assays that use ratiometric, spectrally encoded beads.
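A rough sketch of the generic workflow the abstract describes (bead finding, per-bead intensity, ratiometric code identification, aggregation by code) using scikit-image and scikit-learn. It is not the mrbles API: it quantifies whole-bead mean intensity rather than a perimeter ring, assumes a single ratio of two lanthanide channels rather than full spectral unmixing, and the file names, channel layout, and number of codes are placeholders.

```python
import numpy as np
from skimage import io, filters, measure, morphology
from sklearn.cluster import KMeans

# (1) Find beads in the bright-field image (beads assumed darker than background).
bright_field = io.imread("brightfield.tif")                    # hypothetical file
binary = bright_field < filters.threshold_otsu(bright_field)
binary = morphology.remove_small_objects(binary, min_size=50)  # drop small debris
labels = measure.label(binary)

# (2) Quantify per-bead intensity in the assay (bound-fluorophore) channel.
assay = io.imread("assay_channel.tif")                         # hypothetical file
assay_props = measure.regionprops(labels, intensity_image=assay)
bound_signal = np.array([p.mean_intensity for p in assay_props])

# (3) Identify ratiometric codes: ratio of two lanthanide channels per bead,
#     clustered into an assumed number of codes (8 here).
code_ch1 = io.imread("lanthanide_1.tif")                       # hypothetical files
code_ch2 = io.imread("lanthanide_2.tif")
ratio = np.array([
    p1.mean_intensity / p2.mean_intensity
    for p1, p2 in zip(measure.regionprops(labels, intensity_image=code_ch1),
                      measure.regionprops(labels, intensity_image=code_ch2))
])
codes = KMeans(n_clusters=8, n_init=10).fit_predict(ratio.reshape(-1, 1))

# (4) Aggregate the bound signal by embedded code.
for code in range(8):
    mask = codes == code
    if mask.any():
        print(f"code {code}: n={mask.sum():3d}, median signal={np.median(bound_signal[mask]):.1f}")
```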

