Considerations for conducting and reporting digitally supported cognitive interviews with children and adults

2021 · Vol 5 (1)
Author(s): Abigail Fry, Sandra A. Mitchell, Lori Wiener

Abstract
Background: Cognitive interviewing is a well-established qualitative method used to develop and refine patient-reported outcome (PRO) measures. A range of digital technologies, including phone, web conferencing, and electronic survey platforms, can be leveraged to support the conduct of cognitive interviewing with both children and adults. These technologies offer a potential solution for enrolling underrepresented populations, including those with rare conditions, functional limitations, or geographic or socioeconomic barriers. In the aftermath of the COVID-19 pandemic, the use of digital technologies for qualitative interviewing will remain essential. However, there is limited guidance on adapting cognitive interviewing procedures for remote data capture, especially with children.
Methods: Synthesizing the literature and our research experiences during the COVID-19 pandemic, we examine considerations for implementing digitally supported cognitive interviews with children, adolescents, and adults. We offer recommendations to optimize data quality and empirical rigor and illustrate the application of these recommendations in an ongoing cognitive interviewing study to develop and refine a new pediatric PRO measure.
Results: Good research practices must address participant and researcher preparation for study-related procedures and should anticipate and pre-emptively manage technological barriers. Field notes should detail the interview context, audio/video cues, and any impact of technological difficulties on data quality. The approaches we recommend have been tested in an ongoing cognitive interviewing study enrolling children and adolescents ages 5–17 with chronic graft-versus-host disease (cGVHD) and their caregivers [NCT 04044365]. The combined use of telephone and videoconferencing to conduct cognitive interviews remotely is feasible and acceptable and yields meaningful data to improve the content validity of our new PRO measure of cGVHD symptom bother.
Conclusion: Digitally supported cognitive interviewing procedures will be increasingly employed. Remote data collection can accelerate accrual, particularly in multi-site studies, and may allow interviewer personnel and data management to be centralized within a coordinating center, thus conserving resources. Research is needed to further test and refine techniques for remote cognitive interviewing, particularly in traditionally underrepresented populations, including children and non-English speakers. Expansion of international standards to address digitally supported remote qualitative data capture appears warranted.

2021 · pp. 004912412110312
Author(s): Cornelia E. Neuert, Katharina Meitinger, Dorothée Behr

The method of web probing integrates cognitive interviewing techniques into web surveys and is increasingly used to evaluate survey questions. In a typical web probing scenario, probes are administered immediately after the question to be tested (concurrent probing), usually as open-ended questions. Probes can also be administered in a closed format, whereby the response categories for the closed probes are developed during previously conducted qualitative cognitive interviews. Closed probes have several benefits, such as reduced costs and time efficiency, because they do not require manual coding of open-ended responses. In this article, we investigate whether the insights into item functioning gained from closed probes are comparable to those gained from open-ended probes, and whether closed probes are equally suitable for capturing the cognitive processes that open-ended probes are traditionally intended to elicit. The findings reveal statistically significant differences with regard to the variety of themes, the patterns of interpretation, the number of themes per respondent, and nonresponse. No differences in the number of themes across formats by sex or educational level were found.


2018 · Vol 2 · pp. e26539
Author(s): Paul J. Morris, James Hanken, David Lowery, Bertram Ludäscher, James Macklin, ...

As curators of biodiversity data in natural science collections, we are deeply concerned with data quality, but quality is an elusive concept. An effective way to think about data quality is in terms of fitness for use (Veiga 2016). To use data to manage physical collections, the data must be able to accurately answer questions such as what objects are in the collections, where they are, and where they are from. Some research aggregates data across collections, which involves the exchange of data using standard vocabularies. Some research uses require accurate georeferences, collecting dates, and current identifications. It is well understood that the costs of data capture and data quality improvement increase with increasing time from the original observation. These factors point toward two engineering principles for software intended to maintain or enhance data quality: build small, modular data quality tests that can easily be assembled into suites to assess the fitness for use of data for some particular need; and produce tools that can be applied by users with a wide range of technical skill levels at different points in the data life cycle. In the Kurator project, we have produced code (e.g., Wieczorek et al. 2017, Morris 2016) consisting of small modules that can be incorporated into data management processes as small libraries, each addressing particular data quality tests. These modules can be combined into customizable data quality scripts, which can be run on single computers or on scalable architecture; they can be incorporated into other software, run as command-line programs, or run as suites of canned workflows through a web interface. Kurator modules can be integrated into early-stage data capture applications, run to help prepare data for aggregation by matching it to standard vocabularies, run for quality control or quality assurance on data sets, and can report on data quality in terms of a fitness-for-use framework (Veiga et al. 2017). One of our goals is to provide simple tests usable by anyone, anywhere.
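The small-modular-test design described in this abstract can be sketched as follows. This is a hypothetical illustration, not Kurator's actual API: the function names, the Darwin Core-style field names, and the COMPLIANT/NOT_COMPLIANT result labels are assumptions made for the example.

```python
# Hypothetical sketch of small, single-purpose data quality tests that can be
# assembled into a suite for one fitness-for-use need. Not Kurator's real API.
from dataclasses import dataclass

@dataclass
class TestResult:
    test: str
    status: str      # "COMPLIANT" or "NOT_COMPLIANT"
    comment: str

def validate_collecting_date(record):
    """Single-purpose test: is eventDate present and ISO-like (YYYY-MM-DD)?"""
    value = record.get("eventDate", "")
    if len(value) == 10 and value[4] == "-" and value[7] == "-":
        return TestResult("eventDate_format", "COMPLIANT", value)
    return TestResult("eventDate_format", "NOT_COMPLIANT", f"unparseable: {value!r}")

def validate_georeference(record):
    """Single-purpose test: are decimal coordinates inside valid ranges?"""
    try:
        lat = float(record["decimalLatitude"])
        lon = float(record["decimalLongitude"])
    except (KeyError, ValueError):
        return TestResult("georeference_range", "NOT_COMPLIANT", "missing or non-numeric")
    ok = -90 <= lat <= 90 and -180 <= lon <= 180
    return TestResult("georeference_range",
                      "COMPLIANT" if ok else "NOT_COMPLIANT", f"({lat}, {lon})")

def run_suite(record, tests):
    """Assemble small tests into a suite tailored to a particular need."""
    return [t(record) for t in tests]

record = {"eventDate": "1998-07-04",
          "decimalLatitude": "42.37", "decimalLongitude": "-71.11"}
for result in run_suite(record, [validate_collecting_date, validate_georeference]):
    print(result.test, result.status, result.comment)
```

Because each test takes a plain record and returns a uniform result, the same module could run at early-stage data capture, in a batch quality-control script, or behind a web interface, which is the reusability across the data life cycle that the abstract emphasizes.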


Field Methods · 2017 · Vol 29 (4) · pp. 317–332
Author(s): Stephanie L. Martin, Zewdie Birhanu, Moshood O. Omotayo, Yohannes Kebede, Gretel H. Pelto, ...

Cognitive interviewing is a method to develop culturally appropriate survey questions and scale items. We conducted two rounds of cognitive interviews with 24 pregnant women in Ethiopia and Kenya to assess the appropriateness, acceptability, and comprehension of general and micronutrient supplement adherence-specific social support scales. We stopped the first round of cognitive interviews after receiving negative feedback from interviewers and participants about their distressing and uncomfortable experiences with cognitive probes and challenges related to cultural perspectives on social support. Through an iterative process, we made substantial changes to the cognitive interview guides and items from both social support scales. In the second round, the revised cognitive interviewing process substantially improved interviewer and participant experiences and increased comprehension and appropriateness of both social support scales. This study confirms the importance of cultural adaptation of the cognitive interviewing process as well as social support scales.


SAGE Open · 2016 · Vol 6 (4) · pp. 215824401667177
Author(s): Jennifer Edgar, Joe Murphy, Michael Keating

Cognitive interviewing is a common method used to evaluate survey questions. This study compares traditional cognitive interviewing methods with crowdsourcing, or “tapping into the collective intelligence of the public to complete a task.” Crowdsourcing may provide researchers with access to a diverse pool of potential participants in a very timely and cost-efficient way. Exploratory work found that crowdsourcing participants, with self-administered data collection, may be a viable alternative, or addition, to traditional pretesting methods. Using three crowdsourcing designs (TryMyUI, Amazon Mechanical Turk, and Facebook), we compared the participant characteristics, costs, and quantity and quality of data with traditional laboratory-based cognitive interviews. Results suggest that crowdsourcing and self-administered protocols may be a viable way to collect survey pretesting information, as participants were able to complete the tasks and provide useful information; however, complex tasks may require the skills of an interviewer to administer unscripted probes.


2019 · Vol 35 (2) · pp. 353–386
Author(s): Jennifer Dykema, Dana Garbarski, Ian F. Wall, Dorothy Farrar Edwards

Abstract
While scales measuring subjective constructs have historically relied on agree-disagree (AD) questions, recent research demonstrates that construct-specific (CS) questions clarify underlying response dimensions that AD questions leave implicit, and CS questions often yield higher measures of data quality. Yet given the acknowledged issues with AD questions and certain established advantages of CS items, the evidence for the superiority of CS questions is more mixed than one might expect. We build on previous investigations by using cognitive interviewing to deepen understanding of AD and CS response processing and potential sources of measurement error. We randomized 64 participants to receive an AD or CS version of a scale measuring trust in medical researchers. We examine several indicators of data quality and cognitive response processing, including reliability, concurrent validity, recency, response latencies, and indicators of response processing difficulties (e.g., uncodable answers). Overall, results indicate that reliability is higher for the AD scale, neither scale is more valid, and the CS scale is more susceptible to recency effects for certain questions. Results for response latencies and behavioral indicators provide evidence that the CS questions promote deeper processing. Qualitative analysis reveals five sources of difficulty with response processing that shed light on under-examined reasons why AD and CS questions can produce different results, with CS not always yielding higher measures of data quality than AD.
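Reliability for multi-item scales of this kind is commonly summarized with an internal-consistency statistic such as Cronbach's alpha. The sketch below shows that computation on invented agreement ratings; the data, the three-item scale, and the use of alpha specifically are illustrative assumptions, not this study's actual analysis.

```python
# Cronbach's alpha: a common internal-consistency (reliability) indicator for
# multi-item scales. The item responses below are invented for illustration.
def cronbach_alpha(items):
    """items: one list of responses per scale item, aligned by respondent."""
    k = len(items)                       # number of items
    n = len(items[0])                    # number of respondents

    def var(xs):                         # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    item_variance_sum = sum(var(item) for item in items)
    totals = [sum(item[i] for item in items) for i in range(n)]  # per-respondent totals
    return (k / (k - 1)) * (1 - item_variance_sum / var(totals))

# Five respondents rating three hypothetical trust items on a 1-5 agreement scale.
ad_items = [
    [4, 5, 3, 4, 2],
    [4, 4, 3, 5, 2],
    [5, 4, 2, 4, 3],
]
print(round(cronbach_alpha(ad_items), 2))  # → 0.86
```

Higher alpha indicates that items covary strongly relative to total-score variance; a comparison such as the one in this study contrasts indicators like this between randomized question formats.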


2021 · Vol 937 (3) · pp. 032044
Author(s): M Belyaeva, O Kitova, A Popov, E Chernikova

Abstract
The article examines the development of the agro-industrial complex in light of innovations in the digitalization of food-industry technological processes. It describes the tasks of digitalization and the evolution of the industry, and surveys the digital technologies being introduced into food-industry processes, including smart production, Big Data, machine vision, and additive technologies. Examples of the introduction of digital technologies in different countries are given. Factors limiting adoption are identified: a shortage of personnel, an unstable economy, psychological and organizational barriers, and the absence of international standards for digital transformation. Production statistics are widely used to identify bottlenecks in production, search for hidden reserves, and determine the causes of reduced equipment efficiency. Blockchain technologies are beginning to be used in food production; distributed-ledger systems help increase the transparency of all stages, though successful implementation requires the commitment of all participants in the production chain, from farmer to consumer. Serious attention to digital transformation in Russia is relatively recent, and most projects are still being implemented. Various innovations introduced in food production and the food industry are reviewed.


2021 · Vol 30 (2) · pp. 100–111
Author(s): B. V. Markov

The article presents a philosophical analysis of the humanitarian impact of educational reform driven by new digital technologies. Post-industrial society has been characterized as a network society, and today's revolution in media defines both the technological and the substantive changes in the educational sector. In Russia, the modernization of education in the 1990s began with a discussion of the humanitarian mission of the classical university model. However, globalization and the mobility of education demanded the alignment and unification of national educational programs; the Bologna Process was the response. It was implemented in Russia as a two-tiered system, although society still needed specialists and engineers rather than bachelors and masters. The restructuring of the educational process to comply with international standards then made economic reform necessary: society needed educated specialists but could not support a large number of educational institutions with sizable staffs. In the manufacturing sector, automation leads to growth in workforce productivity, whereas in education traditional pedagogy still prevails. Digital technologies have opened up new opportunities for increasing the economic efficiency of educational centers, but faculty, especially in the humanities, have been strongly critical, arguing that digital technologies do not solve pedagogical problems. In the discussion of economic efficiency, the main substantive issue, the meaning and purpose of education, has been left aside. Hence the question of how an individual's education can be embedded into the overall educational process remains highly relevant. Resorting to philosophy may be appropriate and reasonable inasmuch as it has accumulated a range of caring and self-preservation practices aimed at developing the social skills of the individual. Philosophy can also provide an anthropological assessment of ongoing reforms, identifying their social and cultural implications.


Author(s): Stephen Farrall

What is a “snowball”? For some, a snowball is a drink made of advocaat and lemonade; for others, an injected mix of heroin and cocaine; for yet others, a handful of packed snow commonly thrown at objects or people; for gamblers, it refers to a cash prize that accumulates over successive games; for social scientists, it is a form of sampling. There are other uses of the term in the stock market, and further historical usages that refer to stealing things from washing lines or that are racist. Clearly, then, different people in different contexts and at different times have used the term “snowball” to refer to various activities or processes. Problems like this, whereby a particular word or phrase may have various meanings or may be interpreted variously, are just one of the issues for which cognitive interviews can offer insights (and possible solutions). Cognitive interviews can also help researchers designing surveys to identify problems with mistranslation of words, or near-translations that do not quite convey the intended meaning. They are also useful for ensuring that terms are understood in the same way by all sections of society and for assessing the degree to which organizational structures are similar in different countries (not all jurisdictions have traffic police, for example). They can also assess conceptual equivalence. Among the issues explored here are the following:
• What cognitive interviews are
• The background to their development
• Why they might be used in cross-national crime and victimization surveys
• Some of the challenges associated with cross-national surveys
• Ways cognitive interviews can help with these challenges
• Different approaches to cognitive interviewing (and the advantages of each)
• How to undertake cognitive interviews
• A “real-world” example of a cognitive interviewing exercise
• Whether different probing styles make any difference to the quality of the data derived

