question order
Recently Published Documents

TOTAL DOCUMENTS: 127 (FIVE YEARS: 23)
H-INDEX: 19 (FIVE YEARS: 1)
2022 ◽  
Author(s):  
Christoph Beuthner ◽  
Florian Keusch ◽  
Henning Silber ◽  
Bernd Weiß ◽  
Jette Schröder

As our modern world has become increasingly digitalized, data from a variety of domains are available to enrich survey data. Linking survey data to other sources requires consent from the survey respondents. This article compares consent to data-linkage requests across seven data domains: administrative data, smartphone usage data, bank data, biomarkers, Facebook data, health insurance data, and sensor data. We experimentally explore three factors of interest to survey designers seeking to maximize consent rates: consent question order, consent question wording, and incentives. The results of the study, based on a German online sample (n = 3,374), show that survey respondents are relatively likely to consent to sharing smartphone usage data, Facebook data, and biomarkers, while they are least likely to share their bank data in a survey. Of the three experimental factors, only the consent question order significantly affected consent rates. Additionally, the study investigated the interactions between the three experimental manipulations and the seven data domains; of these, only the interaction between the data domains and the consent question order showed a consistent significant effect.
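A between-subjects design like the one described can be sketched by independently randomizing each respondent over the three experimental factors. The factor levels below are illustrative placeholders, not the study's actual wording or incentive conditions:

```python
import random

# Illustrative factor levels; the study's actual wording and incentive
# conditions are not specified here.
FACTORS = {
    "question_order": ["order_a", "order_b"],
    "question_wording": ["wording_a", "wording_b"],
    "incentive": ["no_incentive", "incentive"],
}

def assign_conditions(rng):
    """Independently randomize one level of each experimental factor
    for a single respondent (between-subjects design)."""
    return {factor: rng.choice(levels) for factor, levels in FACTORS.items()}
```

Because the factors are assigned independently, main effects and interactions with the data domains can be estimated from the resulting condition assignments.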


2021 ◽  
Author(s):  
Gregory Franco

We know that students are more optimistic about their performance after taking a test that progresses from the easiest to the hardest questions than after taking one that progresses in the opposite order¹. In fact, these “Easy-Hard” students are more optimistic than “Hard-Easy” students even when the two groups perform equally. The literature explains this question-order bias as the result of students failing to sufficiently adjust, in the face of new information, their extreme initial impressions of the test. In the first two of six studies, we investigated the possibility that a biased memory for individual questions on the test is an alternative mechanism driving the question-order bias. The pattern of results was inconsistent with this mechanism but fit the established impression-based mechanism. In the next four studies, we addressed the role that the number of test questions plays in determining the size of the question-order bias, discovered that warning students is only a partially effective method for reducing the bias, and established a more precise estimate of the bias's size. Taken together, this work provides evidence that the question-order bias is a robust phenomenon, likely driven by insufficient adjustment from extreme initial impressions.

¹ Although the research in this thesis is my own, I conducted it in a lab and supervised a team composed of research assistants and honours students. I also received advice and direction from my supervisors. I therefore often use the word “we” in this thesis to reflect these facts.


2021 ◽  
pp. 1-17
Author(s):  
Kate Sollis ◽  
Patrick Leslie ◽  
Nicholas Biddle ◽  
Marisa Paterson

Question-order effects are known to occur in surveys, particularly those that measure subjective experiences. The presence of context effects will impair the comparability of results if questions have not been presented in a consistent manner. In this study, we examined the influence of question order on how people responded to two gambling scales in the Australian Capital Territory Gambling Prevalence Survey: the Problem Gambling Severity Index and the Short Gambling Harm Screen. The application of these scales in gambling surveys continues to grow, with results being compared across time and between jurisdictions, countries, and populations. Here we outline a survey experiment that randomized the question ordering of these two scales. The results show that question-order effects are present for these scales, demonstrating that their results may not be comparable across jurisdictions if the scales have not been presented consistently across surveys. These findings highlight the importance of testing for question-order effects, particularly for scales that measure subjective experiences, and of correcting for such effects where they exist by randomizing scale order.
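The corrective the authors recommend, randomizing which of the two scales a respondent sees first, can be sketched as follows. The per-respondent shuffle is an illustrative assumption, not the survey's actual implementation:

```python
import random

# Problem Gambling Severity Index and Short Gambling Harm Screen.
SCALES = ["PGSI", "SGHS"]

def randomized_scale_order(rng):
    """Return the two gambling scales in a randomly assigned order,
    so that order effects average out across respondents."""
    order = list(SCALES)
    rng.shuffle(order)
    return order
```

Recording the assigned order alongside each response also makes it possible to test for order effects directly, as the study above does.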


2021 ◽  
Vol 9 (2) ◽  
pp. 204-215
Author(s):  
Harina Fitriyani ◽  
Dwi Astuti

Abstract. The Covid-19 pandemic forces all elements of society to adapt to new activities that were not usually done before, including online learning. To carry out learning evaluations during online learning, some teachers still have not optimized the use of available platforms, such as the free Google Form platform. Therefore, community service activities in the form of training on the development of online evaluation instruments for teachers are necessary. The purpose of the training is to improve the professionalism of teachers in the use of the Google Form application with the ExtendedForms add-on. The training was carried out virtually via Zoom Meeting and was attended by 38 fifth-grade teachers of Elementary School/Madrasah Ibtidaiyah Muhammadiyah throughout Bantul Regency. The training methods included lectures, practice, and questions and answers. A speaker delivered material about online evaluation in online learning and an introduction to the Google Form platform, followed by practice composing quiz questions using Google Form. After participating in this training, participants understood which facilities can be optimized when developing online evaluation instruments with the Google Form platform, such as the shuffle option order, shuffle question order, and limit timer features of the ExtendedForms add-on. Participants also gained hands-on experience using Google Form. After attending the training, there was an increase in the teachers' ability to make the most of the Google Form application with the ExtendedForms add-on.

Keywords: Online Evaluation, Google Form, Online Learning


Author(s):  
Tobias Rettig ◽  
Annelies G. Blom

A key advantage of longitudinal data collections is the ability to measure change over time by repeatedly asking the same questions to the same respondents. Estimations based on such longitudinal data, as well as other designs that incorporate repetitions of the same questions, generally rely on the assumption that at each point of data collection, respondents answer the questions independently of their previous responses. This assumption implies that respondents either do not remember their previous responses, or that they at least do not use this information in forming their later responses. This is a strong assumption, given that data collections are becoming more and more frequent, giving respondents less time to forget earlier responses. If respondents do, however, remember both being asked the same question and their previous response, they may be influenced by this information. This form of bias is known as a memory effect. In this chapter, we conceptualize the potential role of respondents’ memory when answering survey questions and propose a model of the cognitive response process that takes potential memory effects into account. This is supplemented with the literature on the cognitive response process, the sparse existing research on memory effects, as well as adjacent literature on dependent interviewing and question order effects. We conclude the chapter by identifying gaps in this literature and highlighting areas that require additional research to further our understanding of memory effects in longitudinal survey research.


2021 ◽  
pp. 089443932199277
Author(s):  
Patricia Hadler

Cognitive pretesting is an essential method for piloting questionnaires and ensuring the quality of survey data. Web probing has emerged as an innovative method of cognitive pretesting, especially for cross-cultural and web surveys. The order in which questions are presented in cognitive pretesting can differ from the order of presentation in the later survey, yet empirical evidence on whether the order of presenting survey questions influences the answers to open-ended probing questions is lacking. The present study examines the effect of question order on web probing in the United States and Germany. Results indicate that probe responses are not strongly affected by question order. However, both the content and the consistency of probe responses may differ cross-culturally. Implications for cognitive pretesting are discussed.


2020 ◽  
Author(s):  
Marcus Bendtsen ◽  
Claire Garnett ◽  
Paul Toner ◽  
Gillian W Shorter

BACKGROUND A core outcome set (COS) for trials and evaluations of the effectiveness and efficacy of alcohol brief interventions (ABIs) has recently been established through international consensus to address the variability of outcomes evaluated. OBJECTIVE This is a protocol for studies to assess whether there are order effects among the questions included in the COS. METHODS The 10 items of the COS are organized into 4 clusters. A factorial design will be used with 24 arms, where each arm represents 1 order of the 4 clusters. Individuals searching online for help will be asked to complete a questionnaire, and consenting participants will be randomized to 1 of the 24 arms (double-blind with equal allocation). Participants will be included if they are 18 years or older. The primary analyses will (1) estimate how the order of the clusters of outcomes affects how participants respond and (2) investigate patterns of abandonment of the questionnaire. RESULTS Data collection is expected to commence in November 2020. A Bayesian group sequential design will be used with interim analyses planned for every 50 participants completing the questionnaire. Data collection will end no more than 24 months after commencement, and the results are expected to be published no later than December 2023. CONCLUSIONS Homogenizing the outcomes evaluated in studies of ABIs is important to support synthesis, and the COS is an important step toward this goal. Determining whether there may be issues with the COS question order may improve confidence in using it and speed up its dissemination in the research community. We encourage others to adopt the protocol as a study within their trial as they adopt the ORBITAL (Outcome Reporting in Brief Intervention Trials: Alcohol) COS to build a worldwide repository and provide materials to support such analysis. CLINICALTRIAL ISRCTN Registry ISRCTN17954645; http://www.isrctn.com/ISRCTN17954645
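The 24 arms of this protocol correspond to the 4! possible orderings of the four outcome clusters. A minimal sketch of the equal-allocation randomization, with placeholder cluster labels (the protocol's actual cluster contents are not reproduced here), could look like:

```python
import itertools
import random

# Placeholder labels for the four outcome clusters of the COS.
CLUSTERS = ["cluster_1", "cluster_2", "cluster_3", "cluster_4"]

# 4! = 24 arms, one per ordering of the four clusters.
ARMS = list(itertools.permutations(CLUSTERS))

def allocate(rng):
    """Randomize a consenting participant to one of the 24 arms
    with equal allocation."""
    return rng.choice(ARMS)
```

Enumerating the arms as permutations makes the equal-allocation claim concrete: each of the 24 cluster orderings is drawn with probability 1/24.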

