Survey Item Validation

2020
Author(s):
Melissa Gordon Wolf
Elliott Daniel Ihm
Andrew Maul
Ann Taves

In the social sciences, validity refers to the adequacy of a survey (or other mode of assessment) for its intended purpose. Validation refers to the activities undertaken during and after the construction of the survey to evaluate and improve validity. Item validation refers here to procedures for evaluating and improving respondents’ understanding of the questions and response options included in a survey. Verbal probing techniques such as cognitive interviews can be used to understand respondents’ response process, that is, what they are thinking as they answer the survey items. Although cognitive interviews can provide evidence for the validity of survey items, they are time-consuming and thus rarely used in practice. The Response Process Evaluation (RPE) method is a newly developed technique that uses open-ended meta-surveys to rapidly collect evidence of validity across a population of interest, make quick revisions to items, and immediately test these revisions on new samples of respondents. Like cognitive interviews, the RPE method focuses on how participants interpret an item and select a response. The chapter demonstrates the process of validating one survey item taken from the Inventory of Non-Ordinary Experiences (INOE).



2020
Author(s):
Brent Donnellan
Andrew Rakhshani

The Rosenberg Self-Esteem Scale is the most frequently used measure of self-esteem in the social sciences, and it is administered with varying numbers of response options. However, it is unclear how the number of response options affects the psychometric properties of this measure. Across three experiments (Ns = 739, 2,358, and 1,461), we evaluated how different response formats of the Rosenberg influenced (1) internal consistency estimates, (2) distributions of scores, and (3) associations with criterion-related variables. Internal consistency estimates were lowest for a 2-point format compared to formats with more options. Formats with four or more response options better approximated a normal distribution. We found no consistent evidence that criterion-related correlations increased with more response options. Collectively, these results suggest that the Rosenberg should be administered with at least four response options, and we favor a 5-point Likert-type format for practical reasons.
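The internal consistency finding above can be illustrated with a short simulation. The sketch below computes Cronbach's alpha, the standard internal consistency estimate, for the same simulated latent responses coarsened into a 5-point and a 2-point format. The function name, sample sizes, and data-generating assumptions are illustrative only and are not taken from the study, so the resulting values will not match the paper's estimates; the point is simply that dichotomizing responses tends to lower alpha.

```python
import numpy as np

def cronbachs_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Illustrative simulation (not the study's data): 500 respondents answer
# 10 items driven by one latent trait; the same continuous responses are
# then coarsened into a 5-point and a 2-point (agree/disagree) format.
rng = np.random.default_rng(0)
trait = rng.normal(size=(500, 1))
raw = trait + rng.normal(size=(500, 10))          # continuous responses
likert5 = np.clip(np.round(raw * 1.2 + 3), 1, 5)  # 5-point format
likert2 = np.where(raw > 0, 2.0, 1.0)             # 2-point format

alpha5 = cronbachs_alpha(likert5)
alpha2 = cronbachs_alpha(likert2)
print(f"alpha (5-point): {alpha5:.2f}, alpha (2-point): {alpha2:.2f}")
```

Under these assumptions, the 2-point format discards more information about the latent trait than the 5-point format, and its alpha comes out lower, consistent with the direction of the abstract's result.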


Methodology
2019
Vol 15 (1)
pp. 19-30
Author(s):
Knut Petzold
Tobias Wolbring

Abstract. Factorial survey experiments are increasingly used in the social sciences to investigate behavioral intentions. The measurement of self-reported behavioral intentions with factorial survey experiments frequently assumes that the determinants of intended behavior affect actual behavior in a similar way. We critically investigate this fundamental assumption using the misdirected email technique. Student participants in a survey were randomly assigned to either a field experiment or a survey experiment. The email informed the recipient of a scholarship award, with varying stakes (full-time vs. book) and recipient names (German vs. Arabic). In the survey experiment, respondents saw an image of the same email. This validation design ensured a high level of correspondence between units, settings, and treatments across both studies. Results reveal that while the frequencies of self-reported intentions and actual behavior deviate, treatments show similar relative effects. Hence, although further research on this topic is needed, this study suggests that determinants of behavior might be inferred from behavioral intentions measured with survey experiments.


1984
Vol 29 (9)
pp. 717-718
Author(s):
Georgia Warnke
