Scale Development and Validation: Methodology and Recommendations

2020, Vol 27 (2), pp. 24-35
Author(s): Kevan Lamm, Alexa Lamm, Don Edgar

Valid and reliable data collection is fundamental to empirical research; however, approaches to creating robust scales capable of capturing both valid and reliable data remain inconsistent, particularly within international agricultural and extension education contexts. Robust scale development addresses five areas of validation: content, response process, internal structure, external structure, and consequential. The purpose of this guide was to provide methodological recommendations that improve the rigor and adoption of scale development, and to offer a set of functional principles for researchers and practitioners interested in capturing data through developed, or adapted, scales. Additionally, the information summarized provides a benchmark against which to evaluate the rigor and validity of reported scale results. A consistent framework should provide a common lexicon for examining scales and their associated results. Proper scale development and validation will help ensure that research findings accurately describe the intended underlying concepts, particularly within an international agricultural and extension education context.
Keywords: scale development, validity, quantitative analysis
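As a brief, hedged illustration of the internal-structure area named above (and not part of the article itself), the sketch below computes Cronbach's alpha, one commonly reported piece of internal-structure (reliability) evidence. The item names and simulated 5-point Likert responses are hypothetical; the code only shows the arithmetic involved.

# A minimal sketch, assuming hypothetical items and simulated responses:
# Cronbach's alpha as one piece of internal-structure (reliability) evidence.
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for item responses (rows = respondents, columns = items)."""
    k = items.shape[1]                         # number of items in the scale
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(42)
base = rng.integers(1, 6, size=(200, 1))       # shared "trait" component
noise = rng.integers(-1, 2, size=(200, 4))     # item-specific variation
responses = pd.DataFrame(np.clip(base + noise, 1, 5),
                         columns=["item1", "item2", "item3", "item4"])
print(f"Cronbach's alpha = {cronbach_alpha(responses):.2f}")

Values near or above .70 are often treated as acceptable for research scales, though thresholds vary by context and internal-structure evidence alone does not establish validity.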

2021, pp. 003022282110162
Author(s): Hakan Cengiz, Omer Torlak

Although the consumption aspect of death has been widely discussed in the literature, no scale has yet been developed to measure it. This study aims to develop a domain-specific death-related status consumption (DRSC) scale to bridge this gap in the field. Results reveal three dimensions of the scale: conspicuousness, planning, and showing respect. Across four studies that collate the views of 1,302 participants, both students and adults, the DRSC scale demonstrates internal consistency and validity across cultures (Turkey, the U.S., and a culturally diverse sample). The importance of such a scale for the field is discussed.
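As a hedged aside that is not drawn from the study, the sketch below illustrates how one might inspect whether item responses separate into three dimensions, analogous to the conspicuousness, planning, and showing-respect factors reported above, using exploratory factor analysis in scikit-learn. The simulated items and loadings are hypothetical.

# A minimal sketch, assuming hypothetical data: exploratory factor analysis to
# check whether nine simulated items recover a three-dimensional structure.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n = 300
latent = rng.normal(size=(n, 3))            # three latent dimensions
loadings = np.zeros((3, 9))
for f in range(3):
    loadings[f, f * 3:(f + 1) * 3] = 0.8    # three items load on each dimension
items = latent @ loadings + rng.normal(scale=0.5, size=(n, 9))

fa = FactorAnalysis(n_components=3, rotation="varimax", random_state=0)
fa.fit(items)
print(np.round(fa.components_, 2))          # rows = factors, columns = items

In practice, scale validation studies of this kind typically pair exploratory analysis with confirmatory factor analysis and invariance testing across samples.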


2021, pp. 026553222199405
Author(s): Ute Knoch, Bart Deygers, Apichat Khamboonruang

Rating scale development in the field of language assessment is often considered in dichotomous ways: it is assumed to be guided either by expert intuition or by drawing on performance data. Even though quite a few authors have argued that rating scale development is rarely so easily classifiable, this dichotomous view has dominated language testing research for over a decade. In this paper, we refine the dominant model of rating scale development by drawing on a corpus of 36 studies identified in a systematic review, and we present a model showing the different sources of the scale construct in the corpus. In the discussion, we argue that rating scale designers, just like test developers more broadly, need to start by determining the purpose of the test, the relevant policies that guide test development and score use, and the intended score use when considering the design choices available to them. These choices include considering the impact of such sources on the generalizability of the scores, the precision of the post-test predictions that can be made about test takers' future performances, and scoring reliability. The most important contribution of the model is that it gives rating scale developers a framework to consider before starting scale development and validation activities.

