Investigating Real-time Predictors of Engagement

Author(s):  
David Sharek ◽  
Eric Wiebe

Engagement is a worthwhile psychological construct to examine in the context of video games and online training. Previous research in this context suggests that the more engaged a person is, the more likely they are to experience overall positive affect while performing at a high level. This research builds on theories of engagement, Flow Theory, and Cognitive Load Theory to operationalize engagement in terms of cognitive load, affect, and performance. An adaptive algorithm was then developed to test the proposed operationalization of engagement. Using a puzzle-based video game, player performance and engagement were compared across three conditions: adaptive gameplay, traditional linear gameplay, and choice-based gameplay. Results show that those in the adaptive gameplay condition performed better than those in the other two conditions, without any degradation of overall affect or self-reported engagement.
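The abstract does not specify the adaptive algorithm itself; purely as a hypothetical illustration of the general idea, an adaptive difficulty loop for a puzzle game might look like the following sketch (the update rule, step sizes, and bounds are assumptions, not the authors' method):

```python
# Hypothetical sketch of an adaptive difficulty loop: step difficulty up after a
# success and down after a failure, keeping the player near their skill edge.

def adapt_difficulty(level, solved, min_level=1, max_level=10):
    """Return the next puzzle difficulty given the outcome of the last puzzle."""
    if solved:
        level += 1   # player is under-challenged: increase difficulty
    else:
        level -= 1   # player is overloaded: decrease difficulty
    return max(min_level, min(level, max_level))

# Simulate a player who can solve puzzles at or below difficulty 5.
level = 1
history = []
for _ in range(8):
    solved = level <= 5
    history.append((level, solved))
    level = adapt_difficulty(level, solved)
```

The loop settles into oscillation around the player's skill boundary, which is the behavior an adaptive condition would exploit to sustain challenge without overload.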

2013 ◽  
Vol 10 (1) ◽  
pp. 133-141 ◽  
Author(s):  
Karen L. Hessler ◽  
Ann M. Henderson

Abstract The purpose of this research was to investigate the effectiveness of an interactive, self-paced, computerized case study compared to a traditional hand-written paper case study on the outcomes of student knowledge, attitude, and retention of the content delivered. Cognitive load theory (CLT) provided the theoretical framework for the study. A quasi-experimental pre-test post-test design with random group assignment was used; student cognitive load and the interactivity level of the intervention were measured by self-report survey. Student scores on quizzes in semester 1 and on post-test follow-up quizzes in semester 3 were assessed for the intervention’s effects on knowledge retention. While no statistically significant differences were found between groups, the students in the interactive case study group rated their case study as more fun and interactive. These students also scored consistently higher on the post-test quiz items in their third semester, showing the viability of using CLT to improve student retention of nursing curriculum content.


2016 ◽  
Vol 4 ◽  
pp. 205031211668225 ◽  
Author(s):  
John Q Young ◽  
Christy K Boscardin ◽  
Savannah M van Dijk ◽  
Ruqayyah Abdullah ◽  
David M Irby ◽  
...  

Background: Advancing patient safety during handoffs remains a public health priority. The application of cognitive load theory offers promise, but is currently limited by the inability to measure cognitive load types. Objective: To develop and collect validity evidence for a revised self-report inventory that measures cognitive load types during a handoff. Methods: Based on prior published work, input from experts in cognitive load theory and handoffs, and a think-aloud exercise with residents, a revised Cognitive Load Inventory for Handoffs was developed. The Cognitive Load Inventory for Handoffs has items for intrinsic, extraneous, and germane load. Second- and sixth-year students recruited from a Dutch medical school participated in four simulated handoffs (two simple and two complex cases). At the end of each handoff, study participants completed the Cognitive Load Inventory for Handoffs, Paas’ Cognitive Load Scale, and one global rating item each for intrinsic load, extraneous load, and germane load. Factor and correlational analyses were performed to collect evidence for validity. Results: Confirmatory factor analysis yielded a single factor that combined intrinsic and germane loads. The extraneous load items performed poorly and were removed from the model. The score from the combined intrinsic and germane load items was associated, as predicted by cognitive load theory, with a commonly used measure of overall cognitive load (Pearson’s r = 0.83, p < 0.001), case complexity (beta = 0.74, p < 0.001), level of experience (beta = −0.96, p < 0.001), and handoff accuracy (r = −0.34, p < 0.001). Conclusion: These results offer encouragement that intrinsic load during handoffs may be measured via a self-report measure. Additional work is required to develop an adequate measure of extraneous load.
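The correlational evidence reported above (e.g., Pearson’s r = 0.83 against an overall load measure) rests on a standard product-moment correlation. As a minimal sketch with made-up numbers (not the study's data), the computation looks like this:

```python
# Illustrative sketch: Pearson's r between a self-report inventory score and an
# overall cognitive load rating, the kind of correlation used as validity
# evidence. The sample values below are invented for demonstration only.

import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical inventory scores and overall-load ratings for six handoffs.
inventory = [2.1, 3.4, 4.0, 5.2, 6.1, 7.3]
overall   = [2.0, 3.0, 4.5, 5.0, 6.5, 7.0]
r = pearson_r(inventory, overall)
```

A high positive r between the two instruments is what the theory predicts if both track the same underlying load construct.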


2020 ◽  
Vol 32 (4) ◽  
pp. 1003-1027 ◽  
Author(s):  
Katharina Scheiter ◽  
Rakefet Ackerman ◽  
Vincent Hoogerheide

Abstract A central factor in research guided by Cognitive Load Theory (CLT) is the mental effort people invest in performing a task. Mental effort is commonly assessed by asking people to report their effort throughout performing, learning, or problem-solving tasks. Although this measurement is considered reliable and valid in CLT research, metacognitive research provides robust evidence that self-appraisals of performance are often biased. In this review, we consider the possibility that mental effort appraisals may also be biased. In particular, we review signs of covariation and mismatch between subjective and objective measures of effort. Our review suggests that subjective and most objective effort measures appear reliable and valid when evaluated in isolation, because they discriminate among tasks of varying complexity. However, not much is known about their mutual correspondence, that is, whether subjective measures covary with objective measures. Moreover, there is evidence that people utilize heuristic cues when appraising their effort, similar to the heuristic cues underlying metacognitive appraisals of performance. These cues are identified by exposing biases: mismatches between the effects of cue variations on appraisals and on performance. The review concludes with a research agenda in which we suggest applying the well-established methodologies for studying biases in self-appraisals of performance in metacognitive research to the investigation of effort appraisals. One promising method could be to determine the covariation of effort appraisals and objective effort measures as an indicator of the resolution of effort appraisals.


2019 ◽  
Vol 1 (1) ◽  
Author(s):  
Jamie Lee Jensen ◽  
Andrea J. Phillips ◽  
Jace C. Briggs

Bloom’s taxonomy is widely used in educational research to categorize the cognitive skills required to answer exam questions. For this study, we analyzed how students categorize exam questions (as high-level or low-level), gathered data on their rationale for categorization, and compared their categorizations to those of experts. We found that students consistently categorize high-level questions incorrectly. When we analyzed students’ reasons for their categorizations, we found that for many of the incorrectly categorized questions, students cited reasons related to Cognitive Load Theory. This suggests that cognitive load prevents students from accurately assessing the cognitive level of an exam question. Thus, extraneous cognitive load in exam questions may prevent those questions from accurately measuring the skills and knowledge of the student. This points to the need for instructors to eliminate extraneous cognitive load from their exams.


Sensors ◽  
2020 ◽  
Vol 20 (11) ◽  
pp. 3219
Author(s):  
Ji Hyeok Jeong ◽  
Hyun-Jung Park ◽  
Sang-Hoon Yeo ◽  
Hyungmin Kim

This study aims to bridge the gap between the discrepant views of existing studies in different modalities on the cognitive effects of video game play. To this end, we conducted a set of tests in different modalities with each participant: (1) Self-Report Analyses (SRA) consisting of five popular self-report surveys, and (2) a standard Behavioral Experiment (BE) using pro- and antisaccade paradigms, and analyzed how their results vary between Video Game Player (VGP) and Non-Video Game Player (NVGP) participant groups. Our results showed that (1) VGPs scored significantly lower on the Behavioral Inhibition System (BIS) scale than NVGPs (p = 0.023), and (2) VGPs showed a significantly higher antisaccade error rate than NVGPs (p = 0.005), suggesting that the results of both SRA and BE support the existing view that video game play has a harmful impact on cognition by increasing impulsivity. However, a follow-up correlation analysis of the results across individual participants found no significant correlation between SRA and BE, indicating the complex nature of the cognitive effects of video game play.
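The group-level antisaccade comparison reduces to computing a per-participant error rate and contrasting group means. A minimal sketch with invented numbers (not the study's data, and without the significance test) illustrates the basic quantity being compared:

```python
# Illustrative sketch: antisaccade error rate per participant and the VGP vs.
# NVGP group-mean difference. All (errors, trials) counts below are made up.

def error_rate(errors, trials):
    """Fraction of antisaccade trials on which the gaze went toward the cue."""
    return errors / trials

# Hypothetical (errors, trials) per participant in each group.
vgp  = [(24, 80), (30, 80), (20, 80)]
nvgp = [(12, 80), (16, 80), (10, 80)]

vgp_mean  = sum(error_rate(e, t) for e, t in vgp)  / len(vgp)
nvgp_mean = sum(error_rate(e, t) for e, t in nvgp) / len(nvgp)
diff = vgp_mean - nvgp_mean   # positive if VGPs err more often
```

In the study, a difference like this would then be submitted to a significance test before drawing the reported conclusion.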


Author(s):  
Roland Brünken ◽  
Susan Steinbacher ◽  
Jan L. Plass ◽  
Detlev Leutner

Abstract. In two pilot experiments, a new approach for the direct assessment of cognitive load during multimedia learning, based on dual-task methodology, was tested. Using this approach, we obtained the same pattern of cognitive load as predicted by cognitive load theory when applied to multimedia learning: the audiovisual presentation of text-based and picture-based learning materials induced less cognitive load than the visual-only presentation of the same material. The findings confirm the utility of dual-task methodology as a promising approach for assessing the cognitive load induced by complex multimedia learning systems.
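The dual-task logic behind this approach is that reaction times on a secondary task index spare capacity: the more load the primary (learning) task imposes, the slower the secondary-task responses. As a hypothetical sketch (the RT values and analysis are invented, not the experiments' data):

```python
# Illustrative sketch of dual-task load measurement: slower secondary-task
# reaction times (RTs) imply higher cognitive load from the primary task.
# All RT values below are made up for demonstration.

def mean_rt(reaction_times_ms):
    """Mean secondary-task reaction time in milliseconds."""
    return sum(reaction_times_ms) / len(reaction_times_ms)

# Hypothetical secondary-task RTs under the two presentation formats.
audiovisual = [412, 430, 398, 420]   # predicted lower load
visual_only = [505, 540, 498, 517]   # predicted higher load

# Positive difference matches the predicted pattern: visual-only costs more.
load_difference_ms = mean_rt(visual_only) - mean_rt(audiovisual)
```

A positive difference of this kind is the pattern the pilot experiments report: audiovisual presentation leaves more spare capacity than visual-only presentation.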


2013 ◽  
Author(s):  
Lori B. Stone ◽  
Abigail Lundquist ◽  
Stefan Ganchev ◽  
Nora Ladjahasan
