Gamification of Assessment Test through Multiple Question Paths to Facilitate Participants’ Autonomy and Competence

2020 ◽  
Vol 3 (1) ◽  
pp. 9-17
Author(s):  
Pratama Atmaja ◽  
Eka Mandyartha
2021 ◽  
Vol 25 (1) ◽  
Author(s):  
Alberto Iturbe Herrera ◽  
Noé Alejandro Castro Sánchez ◽  
Dante Mújica Vargas

Author(s):  
Megan Petryk ◽  
Tammy Hopper

Abstract
Purpose: The purpose of this study was to investigate the effects of asking open-ended episodic memory questions versus open-ended semantic memory questions on the conversational discourse of individuals with Alzheimer's disease (AD).
Methods: Four females diagnosed with probable AD participated in the study. A within-subjects experimental design was employed to assess the effects of the different question types on participants’ spoken language. Transcripts were analyzed using specific discourse measures used in previous research involving individuals with AD.
Results: Participants produced more meaningful and relevant statements, as measured by ratios of on-topic utterances, when responding to semantic memory questions than to episodic memory questions. Participants made few negative comments overall; however, more negative self-evaluative statements were made in the episodic memory condition. When considered in conjunction with previous research, the results support the use of multiple question types in conversation with individuals with mild and moderate AD. However, communication partners should limit their use of open-ended questions that primarily tax episodic memory.


ETRI Journal ◽  
2009 ◽  
Vol 31 (4) ◽  
pp. 419-428 ◽  
Author(s):  
Hyo-Jung Oh ◽  
Sung Hyon Myaeng ◽  
Myung-Gil Jang

2019 ◽  
Vol 3 (3) ◽  
pp. 222-248
Author(s):  
Qiong Bu ◽  
Elena Simperl ◽  
Adriane Chapman ◽  
Eddy Maddalena

Purpose: Ensuring quality is one of the most significant challenges in microtask crowdsourcing. Aggregating the data collected from the crowd is an important step in inferring the correct answer, but existing studies are largely limited to single-step tasks. This study examines aggregation in multiple-step classification tasks, which is useful for assessing classification quality.
Design/methodology/approach: The authors present a model that captures the workflow, questions and answers of both single- and multiple-question classification tasks. They propose an approach adapted from classic aggregation methods so that the model can handle tasks with several multiple-choice questions in general, rather than a specific domain or a specific hierarchical classification. They evaluate the approach on three representative tasks from existing citizen science projects for which an expert-created gold standard is available.
Findings: The results show that the approach provides significant improvements to overall classification accuracy. The authors’ analysis also demonstrates that all algorithms achieve higher accuracy on volunteer-generated than on paid-generated data sets for the same task. Furthermore, the authors observed interesting patterns in the relationship between algorithm performance and workflow-specific factors, including the number of steps and the number of available options in each step.
Originality/value: Due to the nature of crowdsourcing, aggregating the collected data is an important process for understanding the quality of crowdsourcing results. Different inference algorithms have been studied for simple microtasks consisting of a single question with two or more answers. However, as classification tasks typically contain many questions, the proposed method can be applied to a wide range of tasks, including both single- and multiple-question classification tasks.
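As a rough illustration of the aggregation step the abstract describes, the sketch below applies plain per-step majority voting across a multi-step classification workflow. The workflow steps, answer options, and data are invented for the example; the paper's adapted approach builds on classic inference algorithms and is more sophisticated than this baseline.

```python
from collections import Counter

def aggregate_workflow(responses):
    """Majority-vote aggregation for a multi-step classification task.

    `responses` maps each workflow step to the list of answers collected
    from different crowd workers for that step. This is a baseline sketch,
    not the adapted algorithm proposed in the paper.
    """
    aggregated = {}
    for step, answers in responses.items():
        counts = Counter(answers)
        # Take the most frequent answer as the inferred label for this step.
        aggregated[step] = counts.most_common(1)[0][0]
    return aggregated

# Three workers classify one image through a hypothetical two-step workflow.
responses = {
    "step1_animal_present": ["yes", "yes", "no"],
    "step2_species": ["zebra", "zebra", "wildebeest"],
}
print(aggregate_workflow(responses))
# → {'step1_animal_present': 'yes', 'step2_species': 'zebra'}
```

A real multi-step aggregator would also need to handle dependencies between steps (e.g., step 2 is only meaningful when step 1 was answered "yes"), which is part of what makes the multiple-question setting harder than the single-question one.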


2018 ◽  
Vol 8 (1) ◽  
Author(s):  
Raven Germain

Soufan, Ziad. STEM Buddies EN. Project Hikaya, 2018. Vers. 1.1.2. Google Play Store, https://play.google.com/store/apps/details?id=com.stem_buddies.en

This educational application uses a combination of multimedia elements such as video, audio, and text to create an engaging and interactive storytelling experience that teaches children about Science, Technology, Engineering, and Maths (STEM) topics. The app consists of three parts: an animated story, a short quiz, and downloadable colouring pages. Upon opening the app, the user is prompted to watch the animation first, with learning objectives presented for the chosen topic. After the short, subtitled, five-minute video, which the viewer can pause and rewind, the user is directed to either the quiz or the colouring pages, both of which reflect the material presented in the video. This intuitive and logical organization ensures that the informative video is a precursor to the interactive activities and consequently enables learning through reflection and repetition.

Through accessible language, the current module, “Water Cycle,” seamlessly integrates an original, engaging story and memorable characters with pedagogical elements that explain how rain forms (evaporation, condensation, precipitation), the importance of water, and the problems associated with a lack of rain. The simple, five-question quiz contains multiple question types and uses audio, text, and pictures to provide children with multiple avenues for identification and learning. Through the quiz, children are required to make decisions based on what they have learned. Feedback is given in the form of gamification: correct answers are positively reinforced with gold stars, and completion of the quiz results in a personalized certificate of achievement for that module.

A myriad of colouring pages, available for use within the app or for individual download, reflect familiar themes and characters and continue to provide some interactivity after the module has been completed. Available in English and Arabic, this new, free application currently contains only one subject module, with more scheduled to be released in the future. With superior graphic design, no ads, and no in-app purchases, the possibility of distractions and unintended purchases is removed. Despite these desirable features, the video and narrative could be more interactive at the textual and visual levels by incorporating hotspots for touching, swiping, and exploring. I would recommend it for use in public libraries and by teachers in elementary schools for children aged 5 to 9.

Recommended: 3 out of 4 stars
Reviewer: Raven Germain
Raven Germain is a second-year MLIS student at the University of Alberta with a love of children’s literature. When not studying, she enjoys travelling, playing piano, and immersing herself in fantasy novels.


Author(s):  
Parke Wilde

This article reviews food security measurement and its connection to policy responses in developed countries. It focuses on survey-based methods, sometimes called “third generation” measures of food security, and discusses examples drawn from across a range of developed countries wherever possible. It presents the relationship between definitions of food insecurity and hunger, and then discusses the advantages and disadvantages of the multiple-question approach. Countries address food security through general economic policies and through more specific food assistance programs. This article examines general economic policies, including anti-poverty programs and interventions to support the low-wage labor market, and concludes that developed countries associate food security with symptoms of material deprivation and social exclusion, for which the primary response is the income-based social safety net more broadly.
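The “multiple-question approach” mentioned above scores a battery of survey questions about food hardship and maps the count of affirmative responses to a food-security status. The sketch below shows the general shape of such a scoring rule; the category names follow common usage, but the thresholds are illustrative placeholders and do not correspond to the cut-offs of any official scale.

```python
def classify_food_security(affirmative_count, thresholds=(1, 3, 6)):
    """Map the number of affirmed survey items to a status label.

    Survey-based ("third generation") measures ask a series of questions
    about food hardship; households are classified by how many items they
    affirm. The thresholds here are hypothetical, for illustration only.
    """
    marginal, low, very_low = thresholds
    if affirmative_count < marginal:
        return "food secure"
    if affirmative_count < low:
        return "marginally food secure"
    if affirmative_count < very_low:
        return "low food security"
    return "very low food security"

print(classify_food_security(0))  # → food secure
print(classify_food_security(4))  # → low food security
```

One advantage of this multiple-question design is that the graded raw score distinguishes degrees of severity, rather than forcing a single yes/no hunger judgment.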


Author(s):  
Olga Zamaraeva ◽  
Guy Emerson

We present an analysis of multiple question fronting in a restricted variant of the HPSG formalism (DELPH-IN) where unification is the only natively defined operation. Analysing multiple fronting in this formalism is challenging, because it requires carefully handling list appends, something that HPSG analyses of question fronting heavily rely on. Our analysis uses the append list type to address this challenge. We focus the testing of our analysis on Russian, although we also integrate it into the Grammar Matrix customization system where it serves as a basis for cross-linguistic modeling. In this context, we discuss the relationship of our analysis to lexical threading and conclude that, while lexical threading has its advantages, modeling multiple extraction cross-linguistically is easier without the lexical threading assumption.
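The core idea of the append list the abstract relies on resembles a difference list: a list with an open tail, so that appending two lists amounts to binding one list's tail to the other. The Python sketch below is an informal illustration of that idea only; it is not the DELPH-IN implementation, and the example words are invented.

```python
class DiffList:
    """Difference-list sketch: a list with an open tail.

    Appending binds one list's open tail to another list, loosely
    mirroring how an append-list type lets unification alone express
    list append (the operation HPSG analyses of fronting rely on).
    """

    def __init__(self, items):
        self.items = list(items)
        self.tail = None  # open tail, later bound to another DiffList

    def append(self, other):
        # "Bind" our open tail to the other list; no copying of elements.
        assert self.tail is None, "tail already bound"
        self.tail = other
        return self

    def to_list(self):
        # Walk the chain of bound tails to read off the full list.
        out = list(self.items)
        if self.tail is not None:
            out.extend(self.tail.to_list())
        return out

# Two fronted wh-phrases collected into one list by binding tails.
fronted = DiffList(["who"]).append(DiffList(["what"]))
print(fronted.to_list())
# → ['who', 'what']
```

The point of the encoding is that append happens by a single binding (unification, in the grammar formalism) rather than by traversing and rebuilding lists, which is what makes it expressible in a formalism where unification is the only native operation.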

