Measuring Cognitive Load: Are There More Valid Alternatives to Likert Rating Scales?

2021 ◽  
Vol 6 ◽  
Author(s):  
Kim Ouwehand ◽  
Avalon van der Kroef ◽  
Jacqueline Wong ◽  
Fred Paas

Cognitive load researchers have used varying subjective techniques based on rating scales to quantify experienced cognitive load. Although it is generally assumed that subjects can introspect on their cognitive processes and have no difficulty assigning numerical values to the imposed cognitive load, little is known about how the visual characteristics of rating scales influence the validity of the cognitive load measure. In this study, we examined the validity of four subjective rating scales (within groups) differing in visual appearance, with participants rating perceived difficulty and invested mental effort in response to working on simple and complex weekday problems. We used two numerical scales (the nine-point Likert scale most often used in cognitive load theory research and a visual analogue scale ranging from 0–100%) and two pictorial scales (a scale consisting of emoticons ranging from a relaxed blue-colored face to a stressed red-colored face, and an "embodied" scale picturing nine depicted weights from 1–9 kg). Results suggest that numerical scales better reflect the cognitive processes underlying complex problem solving, while pictorial scales better reflect those underlying simple problem solving. This study adds to the discussion of the challenges of quantifying cognitive load through various measurement methods and whether subtleties in measurement could influence research findings.
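To make ratings from such visually different instruments comparable, each response can be linearly rescaled to a common 0–1 range. The mapping below is a minimal sketch of this idea, assuming only the scale ranges described above; it is not the authors' analysis procedure.

```python
# Hypothetical sketch: placing responses from the four scale formats on a
# common 0-1 range. Scale ranges follow the abstract; the linear mapping
# itself is an assumption for illustration.

def normalize_likert9(rating):
    """Nine-point Likert scale (1-9) -> 0-1."""
    return (rating - 1) / 8

def normalize_vas(percent):
    """Visual analogue scale (0-100%) -> 0-1."""
    return percent / 100

def normalize_pictorial9(level):
    """Nine-step pictorial scale (emoticons, or 1-9 kg weights) -> 0-1."""
    return (level - 1) / 8

print(normalize_likert9(5))   # midpoint of the Likert scale -> 0.5
print(normalize_vas(50))      # 50% on the VAS -> 0.5
```

With such a mapping, a rating of 5 on the Likert scale and 50% on the VAS land on the same point, which is one (debatable) way to compare scale formats directly.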

2014 ◽  
Vol 43 (1) ◽  
pp. 93-114 ◽  
Author(s):  
Annett Schmeck ◽  
Maria Opfermann ◽  
Tamara van Gog ◽  
Fred Paas ◽  
Detlev Leutner

2009 ◽  
Vol 23 (2) ◽  
pp. 129-138 ◽  
Author(s):  
Florian Schmidt-Weigand ◽  
Martin Hänze ◽  
Rita Wodzinski

How can worked examples be enhanced to promote complex problem solving? N = 92 eighth-grade students worked in pairs on a physics problem. Problem solving was supported by (a) a worked example given as a whole, (b) a worked example presented incrementally (i.e., only one solution step at a time), or (c) a worked example presented incrementally and accompanied by strategic prompts. In groups (b) and (c), students self-regulated when to attend to the next solution step. In group (c), each solution step was preceded by a prompt that suggested strategic learning behavior (e.g., note taking, sketching, communicating with the learning partner). Prompts and solution steps were given on separate sheets. The study revealed that incremental presentation led to a better learning experience (a higher feeling of competence and lower cognitive load) compared to conventional presentation of the worked example. However, students remembered the solution more accurately and reproduced more solution steps only when strategic learning behavior was additionally prompted.


Author(s):  
Slava Kalyuga ◽  
Jan L. Plass

This chapter provides an overview of our cognitive architecture and its implications for the design of game-based learning environments. Design of educational technologies should take into account how the human mind works and what its cognitive limitations are. Processing limitations of working memory, which becomes overloaded if more than a few chunks of information are processed simultaneously, represent a major factor influencing the effectiveness of learning in educational games. The chapter describes different types and sources of cognitive load and the specific demands of games on cognitive resources. It outlines information presentation design methods for dealing with potential cognitive overload, and presents some techniques (subjective rating scales, dual-task techniques, and concurrent verbal protocols) that could be used for evaluating cognitive load in electronic gaming in education.


Author(s):  
Slava Kalyuga

Availability of valid and usable measures of the cognitive load involved in learning is essential for supporting cognitive load-based explanations of the effects predicted and described in cognitive load theory, as well as for the general evaluation of learning conditions. In addition, the evaluation of cognitive load may provide another indicator of learner expertise alongside performance scores. As mentioned before, due to their available schematic knowledge base, more knowledgeable learners are expected to perform their tasks with lower mental effort than novices. Even though simple subjective rating scales remain the most often used measures of the cognitive load imposed by instructional materials, new, more sophisticated techniques are being developed, especially in multimodal environments associated with the performance of complex cognitive tasks. This chapter provides a brief overview of traditional as well as some novel methods for measuring and evaluating cognitive load. Some recently developed approaches to using these measures in estimating the instructional efficiency of learning environments are also discussed.
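The combination of performance and effort mentioned here is often operationalized with the classic instructional-efficiency measure of Paas and van Merriënboer, E = (z_performance − z_effort) / √2, where both variables are standardized across learners. A minimal sketch with invented data:

```python
# Sketch of the instructional-efficiency measure
#   E = (z_performance - z_effort) / sqrt(2)
# (Paas & van Merrienboer, 1993). The scores below are invented
# for illustration only.
from math import sqrt
from statistics import mean, stdev

def z_scores(values):
    m, s = mean(values), stdev(values)
    return [(v - m) / s for v in values]

def efficiency(performance, effort):
    """Per-learner efficiency: high performance at low effort -> E > 0."""
    zp, ze = z_scores(performance), z_scores(effort)
    return [(p - e) / sqrt(2) for p, e in zip(zp, ze)]

perf = [55, 70, 85, 90]   # hypothetical test scores
effort = [8, 6, 5, 3]     # hypothetical nine-point effort ratings
E = efficiency(perf, effort)
print([round(x, 2) for x in E])  # -> [-1.74, -0.39, 0.62, 1.52]
```

The fourth learner, who scored highest while reporting the lowest effort, gets the highest efficiency, matching the expectation that expertise shows up as good performance at low mental effort.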


Author(s):  
Hyun Joo ◽  
Jinju Lee ◽  
Dongsik Kim

This research investigated the effects of the focus (inference vs. inference followed by integration) and level (low vs. middle vs. high) of self-explanation prompts on both cognitive load and learning outcomes. To achieve this goal, a 2 × 3 experimental design was employed. A total of 199 South Korean high school students were randomly assigned to one of six conditions. A two-way MANOVA was used to analyse the effects of the self-explanation prompts on learning outcomes. Results showed an interaction effect between the focus and level of self-explanation prompts on delayed conceptual knowledge, suggesting that the focus of self-explanation prompts could be varied depending on their level. Second, learners who were given high-level prompts scored higher on the immediate conceptual knowledge test than those who received low-level prompts. A two-way ANOVA conducted to analyse the effects of the self-explanation prompts on cognitive load showed no significant interaction effect. However, there was a main effect of prompt level: high-level self-explanation prompts imposed a lower cognitive load than low-level prompts. In sum, the design and development of self-explanation prompts should consider both focus and level, especially to improve complex problem-solving skills.
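The interaction reported here can be pictured through cell means: in a 2 × 3 design, focus and level interact when the effect of level differs between the two focus conditions, i.e., when the mean profiles are non-parallel. A sketch with invented cell means (not the study's data):

```python
# Hypothetical 2 x 3 cell means (focus x prompt level). The values are
# invented; they only illustrate how an interaction shows up as
# non-parallel profiles across levels.
cell_means = {
    ("inference", "low"): 10.1,
    ("inference", "middle"): 11.5,
    ("inference", "high"): 14.2,
    ("inference+integration", "low"): 9.8,
    ("inference+integration", "middle"): 12.9,
    ("inference+integration", "high"): 12.5,
}

levels = ["low", "middle", "high"]
profile_a = [cell_means[("inference", lv)] for lv in levels]
profile_b = [cell_means[("inference+integration", lv)] for lv in levels]

# Gap between focus conditions at each level; if these gaps are not
# (roughly) constant across levels, focus and level interact.
gaps = [round(a - b, 2) for a, b in zip(profile_a, profile_b)]
print(gaps)  # -> [0.3, -1.4, 1.7]
```

Because the gap changes sign and size across levels, these (invented) profiles would indicate an interaction, which a formal two-way (M)ANOVA then tests against sampling error.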


Author(s):  
Montha Chumsukon

Problem solving is a necessary skill for the 21st century. Given rapid social change, traditional knowledge management that focuses on lecturing does not foster problem-solving thinking skills. Problem-based knowledge management is an instructional model that can enhance students' problem-solving skills. The objectives of this research were: 1) to develop students' problem-solving skill through Problem-Based Learning in the Economics in School Course, so that no fewer than 70% of students would score at or above the criterion of 70% of the full score, and 2) to study the students' satisfaction with Problem-Based Learning in the Economics in School Course. The target group was 32 second-year students who enrolled in the Economics in School Course during the first semester of the 2017 academic year. There were three kinds of research instruments: 1) instruments used for the action, comprising 9 problem-based lesson plans covering 27 hours; 2) instruments used for reflecting on the findings, comprising observations of the teacher's teaching behavior, observations of the students' learning behavior, and a 3-essay-item evaluation form on problem-solving skill at the end of each cycle; and 3) instruments used for evaluating the outcome, comprising 5 multiple-choice items on problem-solving skill and 15 five-point rating-scale items evaluating the students' satisfaction. Data were analysed using descriptive statistics, including percentage, mean, and standard deviation. The research found that, for problem-solving skill through problem-based learning in the Economics in School Course, 25 of the 32 students (78%) passed the specified criterion, exceeding the specified standard of 70%. The mean score was 42.79 points out of 60 (71.33%), passing the specified criterion of 70%.
For satisfaction with problem-based learning in the Economics in School Course, overall satisfaction was at a "High" level (very satisfied). This research was classroom action research, and it is beneficial for helping social studies teachers develop future students. The students can continuously learn by themselves, which gives them the opportunity to achieve the goal of life-long learning and to become people of quality for the 21st century.
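The criterion checks reported above can be reproduced directly from the figures in the abstract (small rounding differences aside):

```python
# Quick arithmetic check of the reported criterion results,
# using only the figures given in the abstract.
passed, total = 25, 32
full_score, criterion = 60, 0.70

pass_rate = passed / total
print(round(pass_rate * 100, 2))                # -> 78.12 (reported as 78%)

mean_score = 42.79
print(round(mean_score / full_score * 100, 2))  # -> 71.32 (reported as ~71.33%)
print(mean_score / full_score >= criterion)     # -> True: criterion met
```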


Author(s):  
Paramita Bhattacharya

Recent research findings indicate the need to transform the way human capital is utilized, given the technological disruption in the current business environment. This chapter discusses the fundamental prerequisites necessary to bring about this change, for instance higher-order critical thinking, complex problem solving, a focus on fluid intelligence, and adaptability, among others. The author also provides insights into how these changes can be successfully incorporated through cognitive diversity, hybrid competencies, and understanding millennials' changing values and integrating them into the learning process.


2021 ◽  
Vol 6 ◽  
Author(s):  
Michael Thees ◽  
Sebastian Kapp ◽  
Kristin Altmeyer ◽  
Sarah Malone ◽  
Roland Brünken ◽  
...  

Cognitive load theory is considered universally applicable to all kinds of learning scenarios. However, instead of a universal method for measuring cognitive load that suits different learning contexts or target groups, there is a great variety of assessment approaches. Particularly common are subjective rating scales, which even allow for measuring the three assumed types of cognitive load in a differentiated way. Although these scales have been proven to be effective for various learning tasks, they might not be an optimal fit for the learning demands of specific complex environments such as technology-enhanced STEM laboratory courses. The aim of this research was therefore to examine and compare the existing rating scales in terms of validity for this learning context and to identify options for adaptation, if necessary. For the present study, the two most common subjective rating scales that are known to differentiate between load types (the cognitive load scale by Leppink et al. and the naïve rating scale by Klepsch et al.) were slightly adapted to the context of learning through structured hands-on experimentation where elements such as measurement data, experimental setups, and experimental tasks affect knowledge acquisition. N = 95 engineering students performed six experiments examining basic electric circuits where they had to explore fundamental relationships between physical quantities based on the observed data. Immediately after the experimentation, the students answered both adapted scales. Various indicators of validity, which considered the scales’ internal structure and their relation to variables such as group allocation as participants were randomly assigned to two conditions with a contrasting spatial arrangement of the measurement data, were analyzed. For the given dataset, the intended three-factorial structure could not be confirmed, and most of the a priori-defined subscales showed insufficient internal consistency. 
A multitrait–multimethod analysis was used to examine convergent and discriminant evidence between the scales, but neither could be sufficiently confirmed. The two contrasted experimental conditions were expected to result in different ratings of extraneous load, which was detected by only one adapted scale. As a further step, two new scales were assembled based on the overall item pool and the given dataset. They revealed a three-factorial structure in accordance with the three types of load and seem to be promising new tools, although their subscales for extraneous load still suffer from low reliability scores.
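The internal consistency the study found lacking is conventionally indexed by Cronbach's alpha, α = k/(k−1) · (1 − Σ s²_item / s²_total), where k is the number of items in the subscale. A minimal sketch with invented ratings (the study's data are not available here):

```python
# Sketch of Cronbach's alpha, the internal-consistency index that the
# study reports as insufficient for several subscales. The ratings are
# invented: rows = respondents, columns = items of one subscale.
from statistics import variance

def cronbach_alpha(rows):
    k = len(rows[0])                              # number of items
    items = list(zip(*rows))                      # column-wise item scores
    item_vars = sum(variance(col) for col in items)
    total_var = variance([sum(r) for r in rows])  # variance of sum scores
    return k / (k - 1) * (1 - item_vars / total_var)

ratings = [
    [4, 5, 4],
    [2, 3, 2],
    [5, 5, 4],
    [3, 3, 3],
    [1, 2, 2],
]
print(round(cronbach_alpha(ratings), 3))  # -> 0.963
```

These invented items covary strongly, so alpha is high; the "low reliability" reported for the extraneous-load subscales would correspond to alpha values well below the usual 0.70 rule of thumb.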

