question formats
Recently Published Documents


TOTAL DOCUMENTS: 69 (five years: 16)

H-INDEX: 14 (five years: 1)

2021, Vol 36 (2), pp. 362-394
Author(s): Alexander Laube, Janina Rothmund

Abstract: The study investigates language attitudes in The Bahamas, addressing the current status of the local creole in society as well as attitudinal indicators of endonormative reorientation and stabilization. At the heart of the study is a verbal guise test which investigates covert language attitudes among educated Bahamians, mostly current and former university students; this was supplemented by a selection of acceptance rating scales and other direct question formats. The research instrument was specifically designed to look into the complex relationships between Bahamian Creole and local as well as non-local accents of standard English and to test associated solidarity and status effects in informal settings. The results show that the situation in The Bahamas mirrors what is found for other creole-speaking Caribbean countries in that the local vernacular continues to be ‘the language of solidarity, national identity, emotion and humour, and Standard the language of education, religion, and officialdom’ (Youssef 2004: 44). Notably, the study also finds that standard Bahamian English outranks the other metropolitan standards with regard to status traits, suggesting an increase in endonormativity.


Foods, 2021, Vol 10 (4), pp. 702
Author(s): Denis Richard Seninde, Edgar Chambers

Rate All That Apply (RATA) is a derivative of the popularly used Check-All-That-Apply (CATA) question format. For RATA, consumers select all terms or statements that apply from a given list and then rate those selected terms based on how much they apply. With Rate All Statements (RATING), a widely used standard format for testing, consumers are asked to rate all terms or statements according to how much they apply. Little is known about how the RATA and RATING question formats compare in terms of aspects such as attribute discrimination and sample differentiation. An online survey using either a RATA or RATING question format was conducted in five countries (Brazil, China, India, Spain, and the USA). Each respondent was randomly assigned one of the two question formats (n = 200 per country per format). Motivations for eating items that belong to five food groups (starch-rich, protein-rich, dairy, fruits and vegetables, and desserts) were assessed. More “apply” responses were found for all eating motivation constructs within RATING data than RATA data. Additionally, the standard indices showed that RATING discriminated more among motivations than RATA. Further, the RATING question format showed better discrimination ability among samples for all motivation constructs than RATA within all five countries. Generally, mean scores for motivations were higher when RATA was used, suggesting that consumers who might choose low numbers in the RATING method decide not to check the term in RATA. More investigation into the validity of RATA and RATING data is needed before either question format can be recommended over the other.
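The coding difference the abstract describes can be sketched in a few lines. This is an illustrative example with made-up data, not from the paper: in RATA, an unchecked statement carries no rating at all, while in RATING every statement receives a rating, so even a low rating still counts as an “apply” response.

```python
# Hypothetical responses for three motivation statements (scale 1-5).
# RATA: respondents check which statements apply, then rate only the
# checked ones; unchecked statements carry no rating (None).
rata = [
    {"taste": 5, "health": None, "price": 3},  # "health" not checked
    {"taste": 4, "health": 2, "price": None},  # "price" not checked
]

# RATING: every statement receives a rating, so low-but-nonzero
# agreement (e.g. a 1 or 2) is still recorded for that statement.
rating = [
    {"taste": 5, "health": 1, "price": 3},
    {"taste": 4, "health": 2, "price": 1},
]

def apply_rate(responses, term):
    """Share of respondents for whom the term registered at all."""
    n = sum(1 for r in responses if r[term] is not None)
    return n / len(responses)

print(apply_rate(rata, "health"))    # 0.5 - one respondent skipped it
print(apply_rate(rating, "health"))  # 1.0 - RATING forces a rating
```

This mirrors the finding that RATING yields more “apply” responses: a respondent who would give a 1 or 2 under RATING may simply leave the box unchecked under RATA.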


2021, pp. 1-5
Author(s): Karl Scheeres, Niruj Agrawal, Stephanie Ewen, Ian Hall

Many examinations are now delivered online using digital formats, the migration to which has been accelerated by the COVID-19 pandemic. The MRCPsych theory examinations have been delivered in this way since Autumn 2020. The multiple choice question formats currently in use are highly reliable, but other formats enabled by the digital platform, such as very short answer questions (VSAQs), may promote deeper learning. Trainees often ask for a focus on core knowledge, and the absence of cueing with VSAQs could help achieve this. This paper describes the background and evidence base for VSAQs, and how they might be introduced. Any new question formats would be thoroughly piloted before appearing in the examinations and are likely to have a phased introduction alongside existing formats.


Foods, 2020, Vol 9 (11), pp. 1566
Author(s): Denis Richard Seninde, Edgar Chambers

Check All That Apply (CATA) has become a popular type of questionnaire response in sensory/consumer research in recent years. However, some authors have pointed out potential problems with the method. An online survey using either a Check-All-That-Apply (CATA) or Check-All-Statements (CAS) question format was conducted to provide a deeper understanding of the response data produced by the two formats. With CATA, respondents select all terms or statements that apply from a given list, while, with CAS, respondents must respond (e.g., yes/no or agree/disagree) to each term or statement to indicate whether it applies. Respondents from five countries (Brazil, China, India, Spain, and the USA) were randomly assigned one of the two question formats (N = 200 per country per method). Motivations for eating items that belong to five food groups (starchy, protein, dairy, fruits, and desserts) were assessed. Results showed that CAS had higher percentages of “agree” responses than CATA. Also, the response ratio of CAS and CATA data differed, suggesting that interpretations of the data from each response type would also differ. Respondents in the USA, China, and Spain took longer to complete the CAS questionnaire, while respondents in Brazil and India had similar completion times for the two question formats. Overall, the CATA format was liked slightly more than the CAS format, and fewer respondents dropped out of the survey when using the CATA response type. These findings suggest that the CATA format is quick and relatively easy for consumers to complete. However, it provokes fewer “apply” responses, which some psychologists suggest underestimates the number of applicable terms or statements, and it yields a different interpretation of the data than the CAS format, which requires consumers to respond to every term or statement. Conversely, CAS may overestimate the applicable terms. Consumer insights collected using CATA and CAS can therefore lead to different decisions, owing to differences in how researchers (e.g., marketers, nutritionists, product developers, and sensory scientists) interpret the data. More investigation of the CATA and CAS question formats is needed.
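The underestimation problem described above comes down to how an unchecked box is coded. A minimal sketch with hypothetical data (not from the study): CATA records only the checked items, so an unchecked box conflates “does not apply” with “overlooked the item”, whereas CAS forces an explicit answer for every statement.

```python
# Hypothetical coding of one respondent's answers for three statements.
# CATA: only checked boxes are recorded; analysis must code every
# unchecked box as False, even if the respondent merely skipped it.
cata_checked = {"taste"}            # respondent checked only "taste"
statements = ["taste", "health", "price"]
cata_coded = {s: s in cata_checked for s in statements}

# CAS: every statement requires an explicit yes/no, so False is a
# deliberate answer rather than a possible omission.
cas_coded = {"taste": True, "health": False, "price": False}

# Both code to identical booleans here, but only CAS guarantees that
# False means "considered and rejected".
print(cata_coded)               # {'taste': True, 'health': False, 'price': False}
print(cas_coded == cata_coded)  # True
```

The two coded dictionaries are indistinguishable downstream, which is why the abstract stresses that the same checkmarks can support different interpretations depending on the format that produced them.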


2020, Vol 30 (6), pp. 1763-1781
Author(s): Louisa Ha, Chenjie Zhang, Weiwei Jiang

Purpose: Low response rates in web surveys and the use of different devices for entering web survey responses are the two main challenges to web survey response quality. The purpose of this study is to compare the effects of using interviewers to recruit participants in computer-assisted self-administered interviews (CASI) vs computer-assisted personal interviews (CAPI), and of smartphones vs computers, on participation rate and web survey response quality.
Design/methodology/approach: Two field experiments using two similar media-use studies of US college students were conducted to compare response quality across survey modes and response devices.
Findings: Response quality of computer entry was better than smartphone entry in both studies for open-ended and closed-ended question formats. The device effect on overall completion rate was significant only when interviewers were present.
Practical implications: Survey researchers are given guidance on how to conduct online surveys using different devices, and on the choice of question format, to maximize survey response quality. The benefits and limitations of using an interviewer to recruit participants and of smartphones as web survey response devices are discussed.
Social implications: The study shows how computer-assisted self-interviews and smartphones can improve response quality and participation among underprivileged groups.
Originality/value: This is the first study to compare response quality across question formats between CASI, email-delivered online surveys, and CAPI. It demonstrates the importance of the human factor in creating a sense of obligation that improves response quality.


2020, Vol 3 (3), pp. 49
Author(s): Denis Richard Seninde, Edgar Chambers

Question formats are critical to the collection of consumer health attitudes, food product characterizations, and perceptions. The information from those surveys provides important insights into the product development process. Four formats based on the same concept have been used in prior studies: Check-All-That-Apply (CATA), Check-All-Statements (CAS), Rate-All-That-Apply (RATA), and Rate-All-Statements (RAS). Data can vary depending on which question format is used, and this can affect the interpretation of the findings and subsequent decisions. This survey protocol compares the four question formats. Using a modified version of the Eating Motivation Survey (EMS) to test consumer eating motivations for five food items, each question format was translated and randomly assigned to respondents (N = 200 per country per format) from Brazil (Portuguese), China (Mandarin Chinese), India (Hindi or English), Spain (Spanish), and the USA (English). The results of this survey should provide a better understanding of the differences and similarities in the distribution of data across the four formats. Also, the translations and findings of this survey can guide marketers, sensory scientists, product developers, dieticians, and nutritionists when designing future consumer studies that will use these question formats.
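The between-subjects design described in this protocol (each respondent sees exactly one of the four formats, with a fixed quota per country per format) can be sketched as follows. The names and the small cell size are illustrative assumptions; the actual protocol uses n = 200 per country per format.

```python
import random

# Sketch of the protocol's between-subjects assignment: every country
# gets a fixed quota of respondents per question format, with the order
# of assignment randomized.
FORMATS = ["CATA", "CAS", "RATA", "RAS"]
COUNTRIES = ["Brazil", "China", "India", "Spain", "USA"]
N_PER_CELL = 5  # stand-in for the protocol's 200

def build_assignment(seed=0):
    rng = random.Random(seed)  # seeded for reproducibility
    cells = []
    for country in COUNTRIES:
        slots = FORMATS * N_PER_CELL  # exact quota of each format
        rng.shuffle(slots)            # randomize order of arrival
        cells.extend((country, fmt) for fmt in slots)
    return cells

assignment = build_assignment()
# Every country ends up with exactly N_PER_CELL respondents per format.
usa = [fmt for country, fmt in assignment if country == "USA"]
print(usa.count("RATA"))  # 5
```

Shuffling a pre-built quota list (rather than drawing formats independently per respondent) guarantees the balanced cells the protocol requires, instead of merely expecting them on average.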


Author(s):  
Abdullah Musleh Alahmadi

The presented study aims to identify the extent to which mathematics teachers at the middle and high school levels commit to quality in their achievement tests. This was done by sampling the evaluation forms completed by mathematics supervisors in the education offices of the Madinah teaching area when evaluating mathematics teachers' test questions for the academic year 1438/1439. A total of 235 evaluation forms were randomly chosen from 591, and all data were analyzed using the SPSS program. For middle school mathematics teachers, analysis of the questionnaire showed a modest overall quality of achievement tests (2.03). The general framework of the test received attention from middle school teachers, who obtained a high arithmetic mean for it (2.81). High means were also obtained, in descending order, for multiple-choice questions, true/false questions, and essay question formats (ranging from 1.18 to 2.65); in contrast, matching and fill-in question formats scored lower. For high school mathematics teachers, the analysis of the questionnaires showed an average overall quality of achievement tests (2.16). The general framework of the test likewise received attention from high school teachers, who obtained a high arithmetic mean for it (2.86). High means were obtained in the same order: multiple-choice questions, true/false questions, and essay question formats; conversely, matching and fill-in question formats scored lower (ranging from 1.22 to 2.83). Overall, the results indicated no statistically significant differences between teachers of the two stages (α = 0.05).
The study concludes with recommendations that may improve the quality of achievement testing by mathematics teachers.


2020, Vol 62 (1)
Author(s): Klaus B. Von Pressentin, Mergan Naidoo, Wilhelm J. Steinberg, Lushiku Nkombua, Tasleem Ras

The series, ‘Mastering your Fellowship’, provides examples of the different question formats encountered in the written and clinical examinations, that is, Part A of the Fellowship of the College of Family Physicians of South Africa (FCFP [SA]) examination. The series is aimed at helping family medicine registrars prepare for this examination.

