Validity Evidence in Scale Development: The Application of Cross Validation and Classification-Sequencing Validation

Author(s):  
Tülin Acar


Methodology ◽  
2018 ◽  
Vol 14 (4) ◽  
pp. 156-164 ◽  
Author(s):  
Keith A. Markus

Abstract. Bollen and colleagues have advocated the use of formative scales despite the fact that formative scales lack an adequate underlying theory to guide development or validation, such as the theory that underlies reflective scales. Three conceptual impediments hinder the development of such theory: the redefinition of measurement restricted to the context of model fitting, the inscrutable notion of conceptual unity, and a systematic conflation of item scores with attributes. Setting aside these impediments opens the door to progress in developing the theory needed to support formative scale use. A broader perspective facilitates consideration of standard scale development concerns as applied to formative scales, including scale construction, item analysis, reliability, and item bias. While formative scales require a different pattern of emphasis, all five of the traditional sources of validity evidence apply to them. Responsible use of formative scales requires greater attention to developing the requisite underlying theory.
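
To make the reflective/formative contrast the abstract turns on more concrete, here is a minimal sketch, not drawn from the article, of how the two kinds of scores are typically computed; the item names, data, and weights are hypothetical.

```python
# Minimal sketch (not from the article): contrasting a reflective scale score
# with a formative composite. Items, data, and weights are hypothetical.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
items = pd.DataFrame(rng.integers(1, 6, size=(200, 4)),
                     columns=["x1", "x2", "x3", "x4"])  # four Likert-type items

# Reflective scoring: items are treated as interchangeable indicators of one
# attribute, so an unweighted mean is used and internal consistency matters.
reflective_score = items.mean(axis=1)

# Formative scoring: items jointly define the composite, so each item carries
# its own weight (here an assumed set, e.g. from a regression on a criterion)
# and high inter-item correlation is not required.
weights = np.array([0.40, 0.25, 0.20, 0.15])  # assumed, for illustration only
formative_score = items.to_numpy() @ weights
```

The practical point is that the unweighted reflective score presupposes interchangeable indicators, whereas the formative composite depends on item-specific weights, which is why item analysis and reliability receive a different pattern of emphasis for formative scales.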


2018 ◽  
Vol 2018 (1) ◽  
pp. 15998
Author(s):  
Karen Van Dam ◽  
Marjolein C.J. Caniels ◽  
Gladys Cools-Tummers ◽  
Heidi Lenearts

2016 ◽  
Vol 28 (4) ◽  
pp. 385-394 ◽  
Author(s):  
Samantha Halman ◽  
Nancy Dudek ◽  
Timothy Wood ◽  
Debra Pugh ◽  
Claire Touchie ◽  
...  

2020 ◽  
Vol 11 ◽  
Author(s):  
Rafael Alarcón ◽  
María J. Blanca

The aim of this research was to develop and validate the Questionnaire for Assessing Educational Podcasts (QAEP), an instrument designed to gather students’ views about four dimensions of educational podcasts: access and use, design and structure, content adequacy, and value as an aid to learning. In Study 1 we gathered validity evidence based on test content by asking a panel of experts to rate the clarity and relevance of items. Study 2 examined the psychometric properties of the QAEP, including confirmatory factor analysis with cross-validation to test the factor structure of the questionnaire, as well as item and reliability analysis. The results from Study 1 showed that the experts considered the items to be clearly worded and relevant in terms of their content. The results from Study 2 showed a factor structure consistent with the underlying dimensions, as well as configural and metric invariance across groups. The item analysis and internal consistency for scores on each factor and for total scores were also satisfactory. The scores obtained on the QAEP provide teachers with direct student feedback and highlight the aspects that need to be enhanced to improve the teaching/learning process.
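
As an illustration of the split-sample logic described in Study 2, the following is a hedged Python sketch, not the authors' code: it splits respondents into a calibration half and a holdout half and re-checks internal consistency per dimension. The dimension labels and item names are assumed for illustration, and the confirmatory factor analysis itself would be run with a dedicated SEM package.

```python
# Hedged sketch: split-sample cross-validation of internal consistency.
# Dimension and item names are hypothetical, not the actual QAEP items.
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a block of items (rows = respondents)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

def split_half_alphas(data: pd.DataFrame, dimensions: dict, seed: int = 42):
    """Alpha per dimension in a calibration half and a validation half."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(data))
    half = len(data) // 2
    calib, valid = data.iloc[idx[:half]], data.iloc[idx[half:]]
    return {
        name: (cronbach_alpha(calib[cols]), cronbach_alpha(valid[cols]))
        for name, cols in dimensions.items()
    }

# Example usage with hypothetical item labels per assumed dimension:
# dims = {"access_use": ["a1", "a2", "a3"], "design": ["d1", "d2", "d3"],
#         "content": ["c1", "c2", "c3"], "learning_value": ["v1", "v2", "v3"]}
# print(split_half_alphas(qaep_responses, dims))
```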


2009 ◽  
Vol 19 (3) ◽  
pp. 343-359 ◽  
Author(s):  
Jeffrey P. Bjorck ◽  
Robert W. Braese ◽  
Joseph T. Tadie ◽  
David D. Gililland

2006 ◽  
Vol 37 (3) ◽  
pp. 1-15 ◽  
Author(s):  
N. S. Terblanche ◽  
C. Boshoff

In this study an attempt is made to develop a generic instrument that could be used to measure customer satisfaction with the controllable elements of the in-store shopping experience. By closely following the most contemporary guidelines for scale development, and involving 11 063 respondents in four different surveys, the authors arrive at a 22-item instrument to measure satisfaction with the in-store shopping experience. The evidence offered here for the psychometric properties of the proposed ISE instrument is compelling in terms of its unidimensionality, within-method convergent validity, cross-validation of its dimensions in a holdout sample, reliability, discriminant validity, and nomological validity.
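
One common way to operationalize the cross-validation of dimensions reported here is to compare factor loadings estimated in the calibration sample with those re-estimated in the holdout sample using Tucker's congruence coefficient. The sketch below is illustrative only, with hypothetical loading values rather than the authors' results.

```python
# Illustrative sketch (not the authors' procedure): Tucker's congruence
# coefficient between loading vectors from two subsamples. Loadings are
# hypothetical placeholders.
import numpy as np

def tucker_congruence(loadings_a: np.ndarray, loadings_b: np.ndarray) -> float:
    """Tucker's phi between two loading vectors for the same factor."""
    num = np.sum(loadings_a * loadings_b)
    den = np.sqrt(np.sum(loadings_a ** 2) * np.sum(loadings_b ** 2))
    return float(num / den)

# Hypothetical loadings for one dimension in each subsample.
calibration = np.array([0.72, 0.68, 0.75, 0.61, 0.70])
holdout     = np.array([0.69, 0.71, 0.73, 0.58, 0.66])

# Values above roughly .95 are conventionally read as factor equivalence.
print(f"Tucker's phi = {tucker_congruence(calibration, holdout):.3f}")
```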

