interviewer effects
Recently Published Documents


TOTAL DOCUMENTS: 126 (five years: 38)
H-INDEX: 24 (five years: 2)

2021, pp. 301-314
Author(s): Ute Knoch

Achieving scores that adequately reflect test-takers' proficiency, as evidenced in spoken assessment tasks, has been the subject of a large body of research in second language assessment. In this chapter, the author outlines the work that has been undertaken on the scoring of spoken assessments by human raters and by automated scoring systems. The chapter focuses on research on rater effects, rater training and feedback, rater characteristics, interlocutor/interviewer effects, rating scales, and score resolution techniques. The section on automated scoring discusses research on the underlying construct and the limits this places on the types of tasks that can be used in the assessment. The chapter concludes by setting out some future directions for the scoring of spoken responses.


2021, Vol. 37(4), pp. 981-1007
Author(s): Darina N. Peycheva, Joseph W. Sakshaug, Lisa Calderwood

Coding respondent occupation is one of the most challenging aspects of survey data collection. Occupation coding is traditionally performed manually by office coders after the interview, but previous research has acknowledged the advantages of coding occupation during the interview, including reduced costs, shorter processing time, and fewer coding uncertainties of the kind that are difficult to resolve post-interview. However, a number of concerns have been raised as well, including the potential for interviewer effects, the challenge of implementing the coding system in a web survey, in which respondents perform the coding procedure themselves, and the feasibility of implementing the same standardized coding system in a mixed-mode self- and interviewer-administered survey. This study sheds light on these issues by presenting an evaluation of a new occupation coding method administered during the interview in a large-scale sequential mixed-mode (web, telephone, face-to-face) cohort study of young adults in the UK. Specifically, we assess the take-up rates of this new coding method across the different modes and report on several other performance measures thought to affect the quality of the collected occupation data. Furthermore, we identify factors that affect the coding of occupation during the interview, including interviewer effects. The results carry several implications for survey practice and directions for future research.


BMJ Open, 2021, Vol. 11(11), e047570
Author(s): Katy Footman

Objectives: The analysis aimed to assess the scale of interviewer effects on abortion survey responses, to compare interviewer effects between different question wordings and between direct and indirect approaches, and to identify interviewer and interview characteristics that explain interviewer effects on abortion reporting.

Setting: 2018 Performance Monitoring for Action nationally representative household surveys from Côte d’Ivoire, Nigeria and Rajasthan, India.

Participants: Survey data from 20,016 interviews with reproductive-age (15–49) women, selected using multistage stratified cluster sampling. Data from self-administered interviewer surveys and from a sample of health service delivery points that serve the female survey participants were also included.

Primary outcome measures: Outcomes were the respondent’s own experience of ever ‘removing a pregnancy’, their closest confidante’s experience of pregnancy removal, and the respondent’s own experience of period regulation.

Results: Substantial interviewer effects were observed, ranging from 7% in Côte d’Ivoire to 24% in Nigeria for pregnancy removal. Interviewer effects for survey questions designed to ask about abortion in a less stigmatising way were either similar to (9%–26% for confidante reporting) or higher than (17%–32% for a question about period regulation) those for the pregnancy removal question. Interviewer and interview characteristics associated with abortion reporting included respondent–interviewer familiarity, the language of interview, and the interviewer’s comfort asking questions about abortion.

Conclusion: This study highlights that questions designed to be less stigmatising may increase interviewer effects due to lower comprehension among respondents. Further work is needed to assess question wordings for different contexts. Selecting and training interviewers to ensure comfort asking questions about abortion is important for reproductive health surveys. Challenges for the use of ‘insider’ interviewers and the management of surveys in countries with high linguistic diversity are also identified.
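The interviewer-effect percentages quoted in abstracts like this one are usually intraclass correlations (ICCs) from a multilevel model. As a minimal sketch, assuming a standard two-level random-intercept logistic model for a binary report (not necessarily the exact specification used in the study above), the quantity works out as:

    % Two-level random-intercept logistic model: respondent i, interviewer j
    \operatorname{logit} \Pr(y_{ij} = 1) = \beta_0 + \mathbf{x}_{ij}^{\top}\boldsymbol{\beta} + u_j,
    \qquad u_j \sim \mathcal{N}(0, \sigma_u^2)
    % Share of response variance attributable to interviewers (the quoted percentage):
    \rho = \frac{\sigma_u^2}{\sigma_u^2 + \pi^2/3}

Here u_j is the interviewer random intercept and \pi^2/3 is the level-1 variance of the standard logistic distribution, so \rho is the share of variance in reporting attributable to interviewers, i.e. the 7%–32% figures reported above.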


2021, pp. 1532673X2110221
Author(s): Clinton Jenkins, Ismail White, Michael Hanmer, Antoine Banks

It is now a well-documented fact of survey research that Black survey respondents overreport turning out to vote at higher rates than many of their peers of other racial and ethnic backgrounds. We bring renewed attention to this phenomenon by investigating how the race of the interviewer might influence a Black respondent’s propensity to overreport turning out to vote. In this paper, we test two competing mechanisms for African American overreporting and race-of-interviewer effects: (1) racial group linked fate, and (2) conformity with norms of Black political behavior. We find evidence that social pressure to conform to group norms of political behavior drives Black respondents’ overreporting in the presence of a same-race interviewer. These results have significant implications for how we view, analyze, and interpret results from such studies.


Author(s): Ruben L. Bach

Panel conditioning refers to the phenomenon whereby respondents’ attitudes, behaviour, reporting of behaviour, and/or knowledge are changed by repeated participation in a panel survey. Uncovering such effects, however, is difficult due to three major methodological challenges. First, researchers need to disentangle changes in behaviour and attitudes from changes in the reporting of behaviour and attitudes, as panel conditioning may result in both, even at the same time and in opposite directions. Second, identifying the causal effect of panel participation on the various forms of change mentioned above is complicated, as it requires comparing panel respondents with control groups of people who have not been interviewed before. Third, other sources of error in (panel) surveys, such as panel attrition, mode effects, and interviewer effects, may easily be mistaken for panel conditioning if not properly accounted for. In this chapter, the challenges mentioned above are reviewed in detail, and a methodological framework for the analysis of panel conditioning effects is provided by identifying the strengths and weaknesses of the various designs that researchers have developed to address these challenges. The chapter concludes with a discussion of a future research agenda on panel conditioning effects in longitudinal surveys.


Field Methods, 2021, pp. 1525822X2199723
Author(s): Alexandru Cernat, Joseph W. Sakshaug

Increasingly, surveys are using interviewers to collect objective health measures, also known as biomeasures, to replace or supplement traditional self-reported health measures. However, the extent to which interviewers affect the (im)precision of biomeasurements is largely unknown. This article investigates interviewer effects on several biomeasures collected in three waves of the National Social Life, Health, and Aging Project (NSHAP). Overall, we find low levels of interviewer effects on average. This average nevertheless hides important variation: touch sensory tests show especially high interviewer variation (30%), while smell tests and timed balance/walk/chair stands show moderate interviewer variation of around 10%. Accounting for contextual variables that potentially interact with interviewer performance, including housing unit type and the presence of a third person, failed to explain the interviewer variation. A discussion of these findings, their potential causes, and their implications for survey practice is provided.
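As a minimal sketch of how such interviewer variance shares can be estimated for a continuous biomeasure, the snippet below fits a two-level random-intercept model with statsmodels. The input file and column names (nshap_biomeasures.csv, grip_strength, age, sex, interviewer_id) are hypothetical placeholders, and the authors' actual model specification will differ:

    # Estimate the share of biomeasure variance attributable to interviewers
    # with a two-level random-intercept model (a generic sketch; the column
    # names below are hypothetical placeholders, not the NSHAP variable names).
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("nshap_biomeasures.csv")  # hypothetical input file

    # Random intercept per interviewer; fixed effects adjust for respondent
    # characteristics that may correlate with interviewer assignment.
    model = smf.mixedlm("grip_strength ~ age + C(sex)", data=df,
                        groups=df["interviewer_id"])
    result = model.fit()

    var_interviewer = result.cov_re.iloc[0, 0]  # between-interviewer variance
    var_residual = result.scale                 # within-interviewer variance

    # Intraclass correlation: the "interviewer effect" quoted as a percentage
    icc = var_interviewer / (var_interviewer + var_residual)
    print(f"Interviewer variance share: {icc:.1%}")

For a binary biomeasure, the residual variance would be replaced by the logistic constant pi^2/3, as in the ICC formula sketched earlier in this listing.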


Author(s): David Johann, Sabrina J. Mayer

This study examines how interviewers' gender and education affect the measured level of factual political knowledge by drawing on two competing theoretical frameworks: stereotype threat theory and interviewer non-compliance with instructions. Testing these mechanisms using survey data from the Austrian National Election Study (AUTNES) and the German Longitudinal Election Study (GLES), we find no evidence for a stereotype threat effect, but we do find suggestive evidence of interviewer effects resulting from interviewer non-compliance. In Germany, respondents' measured level of knowledge was significantly higher when a male interviewer, regardless of his education, conducted the interview than when a low-educated female interviewer did. This finding has implications for survey-based studies that measure factual political knowledge; for example, attempts to limit such effects should be made during the interviewer briefing.


2021, pp. 089443932098525
Author(s): Jannes Jacobsen, Simon Kühne

Panel attrition poses a major threat to the survey quality of panel studies, and many features have been introduced to keep it as low as possible. Based on a random sample of refugees, a highly mobile population, we investigate whether using a mobile phone application improves address quality and response behavior. Various features are tested, including geo-tracking and the collection of email addresses and address changes. Additionally, we investigate respondent and interviewer effects on consent to download the app and to share GPS geo-positions. Our findings show that neither geo-tracking nor the provision of email addresses nor the collection of address changes through the app improves address quality substantially. We further show that interviewers play an important role in convincing respondents to install and use the app, whereas respondent characteristics are largely insignificant. Our findings provide new insights into the usability of mobile phone applications and help determine whether they are a worthwhile tool for decreasing panel attrition.

