The Assessment of Workload: Dual Task Methodology

1983 ◽  
Vol 27 (3) ◽  
pp. 229-233 ◽  
Author(s):  
Arthur D. Fisk ◽  
William L. Derrick ◽  
Walter Schneider

The present paper outlines three major assumptions often made implicitly in dual task experiments conducted to assess operator workload. These assumptions are shown to be incorrect. Three criteria that should be met in dual task experiments that draw inferences from secondary task decrements are discussed. An experiment meeting the proposed criteria was conducted; it demonstrated that when the criteria are met, secondary task performance can be predictive of primary task difficulty. However, the data also indicate that a simple assessment of effort alone will not predict total task performance.

1992 ◽  
Vol 36 (18) ◽  
pp. 1398-1402
Author(s):  
Pamela S. Tsang ◽  
Tonya L. Shaner

The secondary task technique was used to test two alternative explanations of dual task decrement: outcome conflict and resource allocation. Subjects time-shared a continuous tracking task and a discrete Sternberg memory task. The memory probes were presented under three temporal predictability conditions. Dual task performance decrements in both the tracking and memory tasks suggested that the two tasks competed for some common resources, processes, or mechanisms. Although the performance decrements were consistent with both the outcome conflict and resource allocation explanations, the two explanations propose different mechanisms by which the primary task could be protected from interference from the concurrent secondary task: by resource allocation, or by strategically sequencing the processing of the two tasks to avoid outcome conflict. In addition to the global trial means, moment-by-moment tracking error time-locked to the memory probe was also analyzed. There was little indication that the primary task was protected by resequencing the processing of the two tasks. This, together with the finding that predictable memory probes led to better-protected primary task performance than less predictable probes, lends support to the resource explanation.
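The moment-by-moment analysis described above amounts to averaging the tracking-error signal in windows time-locked to probe onset, much like an ERP epoch average. A minimal sketch of such a probe-locked average in Python; the signal, probe positions, and window sizes are illustrative, not the authors' data or parameters:

```python
def probe_locked_average(error, probe_indices, pre, post):
    """Average a continuous tracking-error signal in windows
    time-locked to memory-probe onsets (an epoch average)."""
    epochs = [error[i - pre:i + post] for i in probe_indices
              if i - pre >= 0 and i + post <= len(error)]
    n = len(epochs)
    # average the epochs sample by sample
    return [sum(samples) / n for samples in zip(*epochs)]

# toy signal: constant error of 1.0 with a +0.5 bump for 10 samples after each probe
error = [1.0] * 1000
probes = [200, 500, 800]
for p in probes:
    for k in range(10):
        error[p + k] += 0.5

trace = probe_locked_average(error, probes, pre=50, post=50)
# trace[:50] sits at baseline; trace[50:60] shows the probe-locked bump
```

A systematic deviation of the averaged trace after probe onset, relative to the pre-probe baseline, is the kind of signature that would indicate the primary task was disrupted around the time the secondary task was processed.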


1982 ◽  
Vol 26 (1) ◽  
pp. 21-24
Author(s):  
David F. Johnson ◽  
Robert C. Haygood ◽  
William M. Olson

This paper describes two methodological innovations in the study of adaptive training. The first is the use of a yoked design to ensure that the average level of task difficulty for fixed-difficulty subjects is the same as the average level of difficulty reached by adaptive subjects. The second is the demonstration of the feasibility of using a secondary (subsidiary, non-loading) task to furnish the adaptive criterion for changing the difficulty level of the primary task. The results of two experiments are reported. Both experiments demonstrate the feasibility and utility of a yoked design and of adapting on secondary task performance in adaptive training.
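The two innovations can be illustrated with a small sketch: primary-task difficulty adapts trial by trial on secondary-task accuracy, and a yoked fixed-difficulty subject then receives the adaptive subject's mean level. The criterion, step size, and level range here are illustrative assumptions, not the authors' actual procedure:

```python
def adapt_difficulty(secondary_scores, start=5, lo=1, hi=10, criterion=0.8):
    """Adjust primary-task difficulty trial by trial from secondary-task accuracy.

    While secondary (non-loading) task accuracy stays at or above the
    criterion, spare capacity remains, so difficulty increases; otherwise
    it decreases. Returns the trial-by-trial difficulty schedule."""
    level, schedule = start, []
    for acc in secondary_scores:
        schedule.append(level)
        level = min(hi, level + 1) if acc >= criterion else max(lo, level - 1)
    return schedule

# adaptive subject's schedule from their secondary-task accuracies
adaptive = adapt_difficulty([0.9, 0.9, 0.7, 0.9, 0.6, 0.8])
# a yoked fixed-difficulty subject receives the adaptive subject's mean level
yoked_level = round(sum(adaptive) / len(adaptive))
```

Yoking each fixed-difficulty subject to an adaptive subject in this way equates average difficulty across groups, so any training benefit can be attributed to the adaptive scheduling itself rather than to a difference in overall difficulty.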


Author(s):  
Bradley Chase ◽  
Holly M. Irwin-Chase ◽  
Jaclyn T. Sonico

Individual differences in human performance are an issue that confounds many studies and that has not been properly controlled for in the ergonomics/human factors literature. This paper examines the concept of individual differences in performance primarily from the perspective of cognitive performance. A study was designed to test the effect of a secondary visual task on a primary visual task. In one condition, participants performed the dual task while assigning no weight to the secondary task; in the second condition, the primary task was performed simultaneously with the secondary task. The effect of the added workload was measured via its effect on primary task performance. In the baseline portion of the study, each participant's baseline level of performance (80–90% accuracy) was established by adjusting the stimulus duration. That individual stimulus duration was then used as the experimental stimulus duration, and the effect of secondary task performance on primary task performance was measured.
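The baseline procedure above, adjusting stimulus duration until each participant performs in an 80–90% accuracy band, can be sketched as a simple block-wise up/down calibration. The step size, block length, and the `respond` callable standing in for a participant are illustrative assumptions, not the authors' method:

```python
def calibrate_duration(respond, start_ms=500, step_ms=20, blocks=20,
                       block_size=10, target=(0.80, 0.90)):
    """Calibrate stimulus duration so accuracy lands in the target band.

    respond(duration_ms) -> True/False simulates one trial at that duration.
    After each block, shorten the duration if accuracy is above the band
    (too easy) and lengthen it if below (too hard)."""
    duration = start_ms
    for _ in range(blocks):
        acc = sum(respond(duration) for _ in range(block_size)) / block_size
        if acc > target[1]:
            duration = max(step_ms, duration - step_ms)   # too easy: shorten
        elif acc < target[0]:
            duration += step_ms                           # too hard: lengthen
    return duration

# toy observer: succeeds whenever the stimulus lasts at least 300 ms,
# so the procedure converges toward that threshold
calibrated = calibrate_duration(lambda d: d >= 300)
```

Fixing the calibrated duration per participant is what controls for individual differences: the secondary task's effect is then measured against a primary task of equated subjective difficulty rather than equated physical parameters.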


Perception ◽  
2020 ◽  
Vol 49 (5) ◽  
pp. 515-538
Author(s):  
Dacey Nguyen ◽  
James Brown ◽  
David Alais

This study examines dual-task performance in the tactile modality and tests whether dual-task cost depends on task type. Experiment 1 involved competing tasks of the same type, using a primary localisation task on the left hand and a secondary localisation task on the right hand. In Experiment 2, the primary task on the left hand remained the same, while an intensity discrimination task was used as the secondary task on the right hand. Subjects in both experiments completed three conditions: the primary task alone, a dual-task condition, and the primary task with the secondary stimulus present but no response required. Across both experiments, performance on the primary task was best when it was presented alone, and there was a performance decrement when the secondary stimulus was present but not responded to. Performance on the primary task was further decreased when participants had to respond to the secondary stimulus, and the decrease was larger when the secondary task was localisation rather than discrimination. This result indicates that task type in the tactile modality may modulate the attentional cost of dual-task performance and implies partially shared resources underlie localisation and intensity discrimination.


2019 ◽  
Author(s):  
Michal Olszanowski ◽  
Natalia Szostak

This study explored whether the control mechanisms recruited for optimising performance are similar in dual-task and interference-task settings. We tested whether the frequency of appearance of a secondary task resulted in an adjustment of anticipatory and reflexive forms of attentional control, as has been observed with other interference tasks (e.g. Stroop and flanker). The results of two experiments demonstrated a proportion congruency effect (PCE): when a secondary task frequently appeared, primary task performance was slower. Additionally, there was a relative slowdown of dual-task performance in blocks wherein the secondary task appeared infrequently compared to blocks wherein it appeared frequently. However, this slowdown occurred when the primary task entailed a low level of control (Experiment 1) but was absent when it demanded a high level of control (Experiment 2). Overall, the results suggest that the level of control can be adjusted to task demands related to the frequency of the secondary task.


Author(s):  
Robert S. Owen

The notion that the human information processing system has a limit in resource capacity has been used for over 100 years as the basis for the investigation of a variety of constructs and processes, such as mental workload, mental effort, attention, elaboration, and information overload. The dual task or secondary task technique presumes that the consumption of processing capacity by one task will leave less capacity available for the processing of a second concurrent task. When both tasks attempt to consume more capacity than is available, the performance of one or both tasks must suffer, and this will presumably result in the observation of degraded task performance. Consider, for example, the amount of mental effort devoted to solving a difficult arithmetic problem. If a person is asked to tap a pattern with a finger while solving the problem, we might be able to discover the more difficult parts of the problem solving process by observing changes in the performance of the secondary task of finger tapping. While a participant is reading a chapter of text in a book or on a Web browser, we might be able to use this same technique to find the more interesting, involving, or confusing passages of the text. Many implementations of the secondary task technique have been used for more than a century, such as the maintenance of hand pressure (Lechner, Bradbury, & Bradley, 1998; Welch, 1898), the maintenance of finger tapping patterns (Friedman, Polson, & Dafoe, 1988; Jastrow, 1892; Kantowitz & Knight, 1976), the performance of mental arithmetic (Bahrick, Noble, & Fitts, 1954; Wogalter & Usher, 1999), and the speed of reaction time to an occasional flash of light, a beep, or a clicking sound (e.g., Bourdin, Teasdale, & Nougier, 1998; Owen, Lord, & Cooper, 1995; Posner & Boies, 1971). In using the secondary task technique, the participant is asked to perform a secondary task, such as tapping a finger in a pattern, while performing the primary task of interest.
By tracking changes in secondary task performance (e.g., observing erratic finger tapping), we can track changes in processing resources being consumed by the primary task. This technique has been used in a wide variety of disciplines and situations. It has been used in advertising to study the effects of more or less suspenseful parts of a TV program on commercials (Owen et al., 1995) and in studying the effects of time-compressed audio commercials (Moore, Hausknecht, & Thamodaran, 1986). It has been used in sports to detect attention demands during horseshoe pitching (Prezuhy & Etnier, 2001) and rock climbing (Bourdin et al., 1998), while others have used it to study attention associated with posture control in patients who are older or suffering from brain disease (e.g., Maylor & Wing, 1996; Muller, Redfern, Furman, & Jennings, 2004). Murray, Holland, and Beason (1998) used a dual task study to detect the attention demands of speaking in people who suffer from aphasia after a stroke. Others have used the secondary task technique to study the attention demands of automobile driving (e.g., Baron & Kalsher, 1998), including the effects of distractions such as mobile telephones (Patten, Kircher, Ostlund, & Nilsson, 2004) and the potential of a fragrance to improve alertness (Schieber, Werner, & Larsen, 2000). Koukounas and McCabe (2001) and Koukounas and Over (1999) have used it to study the allocation of attention resources during sexual arousal. The notion of decreased secondary task performance due to a limited-capacity processing system is not simply a laboratory curiosity. Consider, for example, the crash of a Jetstream 3101 airplane as it was approaching for landing, killing all on board.
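The limited-capacity logic behind the technique can be sketched as a toy single-pool model. The linear sharing rule, the priority given to the primary task, and the demand values below are illustrative assumptions for exposition, not a validated model of human processing:

```python
def dual_task_performance(primary_demand, secondary_demand, capacity=1.0):
    """Toy single-pool capacity model: the protected primary task takes
    the capacity it needs first; the secondary task runs on whatever is
    left over.

    Returns (primary_perf, secondary_perf) as fractions of single-task
    performance, so secondary_perf < 1.0 signals primary-task load."""
    spare = max(0.0, capacity - primary_demand)
    primary = 1.0 if primary_demand <= capacity else capacity / primary_demand
    secondary = 1.0 if secondary_demand <= spare else spare / secondary_demand
    return primary, secondary

# an easy primary task leaves enough spare capacity for perfect tapping;
# a hard one degrades the secondary (finger-tapping) performance instead
easy = dual_task_performance(primary_demand=0.5, secondary_demand=0.3)
hard = dual_task_performance(primary_demand=0.9, secondary_demand=0.3)
```

In this reading, the finger-tapping and reading examples above are cases where `secondary_perf` is the observable: erratic tapping during a hard passage is the visible trace of `spare` shrinking as the primary task's demand grows.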


Author(s):  
Ricky E. Savage ◽  
Walter W. Wierwille ◽  
Richard E. Cordes

Problems have been encountered in previous research in developing a secondary task measure of mental workload that is both sensitive and stable. Ordinarily, a single measure of secondary task performance is analyzed as an indicator of differences in workload. The purpose of the experiment reported here was to determine whether alternative measures taken from a single secondary task might prove more sensitive. Twelve subjects participated in the experiment, which involved a primary task (meter pointer nulling) and a secondary task (reading random digits aloud). The independent variable (primary task difficulty level) was adjusted by changing the number of meters that had to be monitored (two, three, or four). Dependent measures were (1) the number of random digits spoken (the usual workload measure), (2) the longest interval between spoken responses, (3) the longest consecutive string of spoken digits, and (4) the number of “triplets” spoken. Results show that measures (1), (3), and (4) were significantly affected by primary task difficulty, with (1) being the most sensitive.
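The four dependent measures can be computed from response timestamps roughly as follows. The 1.5 s pause threshold, and the reading of "consecutive strings" and "triplets" as runs of closely spaced responses, are assumptions for illustration, since the abstract does not define them precisely:

```python
def digit_task_measures(times, gap_threshold=1.5):
    """Compute four secondary-task measures from response timestamps (s):
    (1) number of digits spoken,
    (2) longest interval between successive responses,
    (3) longest run of digits with no gap above the threshold,
    (4) number of 'triplets' (three closely spaced digits) in such runs."""
    count = len(times)                                        # measure (1)
    gaps = [t2 - t1 for t1, t2 in zip(times, times[1:])]
    longest_gap = max(gaps) if gaps else 0.0                  # measure (2)
    # split the responses into runs separated by long pauses
    runs, run = [], 1
    for g in gaps:
        if g <= gap_threshold:
            run += 1
        else:
            runs.append(run)
            run = 1
    runs.append(run)
    longest_run = max(runs)                                   # measure (3)
    triplets = sum(r // 3 for r in runs)                      # measure (4)
    return count, longest_gap, longest_run, triplets

# seven responses with one long pause (1.5 s -> 4.0 s) splitting them
result = digit_task_measures([0.0, 0.8, 1.5, 4.0, 4.6, 5.2, 5.9])
```

The appeal of deriving several measures from one task, as the experiment shows, is that the measures differ in sensitivity; pauses and broken runs may reveal transient workload peaks that a simple digit count averages away.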


1987 ◽  
Vol 31 (7) ◽  
pp. 847-851 ◽  
Author(s):  
Yili Liu ◽  
Christopher D. Wickens

We report here the first experiment of a series studying the effect of task structure and difficulty demand on time-sharing performance and workload in both automated and corresponding manual systems. The experimental task involves manual control time-shared with spatial and verbal decision tasks at two levels of difficulty and with two modes of response (voice or manual). The results provide strong evidence that tasks and processes competing for common processing resources are time-shared less effectively and incur higher workload than tasks competing for separate resources. Subjective measures and the structure of multiple resources are used in conjunction to predict dual task performance. The evidence comes from both single task and dual task performance.


1981 ◽  
Vol 25 (1) ◽  
pp. 43-47 ◽  
Author(s):  
Arthur Kramer ◽  
Christopher Wickens ◽  
Linda Vanasse ◽  
Earle Heffley ◽  
Emanuel Donchin

The utility of the Event-Related Brain Potential (ERP) for the evaluation of task load was investigated. Subjects performed a discrete step tracking task with either first- or second-order control dynamics. In different conditions, the subject covertly counted auditory probes, visual probes, or tracking target steps presented in a Bernoulli series. In a fourth experimental condition, subjects performed the primary tracking task without a secondary task. In the auditory condition, an increase in the difficulty of the primary task produced a decrease in the amplitude of the P300 elicited by the secondary count task. The introduction of the primary task in the visual condition resulted in an initial reduction in P300 amplitude, but increasing task difficulty failed to attenuate the P300 further. A positive relationship between primary task difficulty and P300 amplitude was obtained in the step conditions. Furthermore, this effect did not require that the step changes be counted. The results are addressed in terms of the relative advantages of primary and secondary ERP workload assessment techniques.


2009 ◽  
Vol 2009 ◽  
pp. 1-9 ◽  
Author(s):  
Peter Schmutz ◽  
Silvia Heinz ◽  
Yolanda Métrailler ◽  
Klaus Opwis

Guidelines for designing usable interfaces recommend reducing short-term memory load. Cognitive load, that is, working memory demands during problem solving, reasoning, or thinking, may affect users' general satisfaction and performance when completing complex tasks. Although design guidelines describe numerous ways of reducing cognitive load in interactive systems, few attempts have been made to measure cognitive load in Web applications, and few techniques exist. In this study, participants' cognitive load was measured while they searched for several products in four different online book stores. NASA-TLX and dual-task methodology were used to measure subjective and objective mental workload, respectively. The dual-task methodology involved searching for books as the primary task and a visual monitoring task as the secondary task. NASA-TLX scores differed significantly among the shops, whereas secondary task reaction times showed no significant differences between the four shops. Strong correlations between NASA-TLX, primary task completion time, and general satisfaction suggest that NASA-TLX can be used as a valuable additional measure of efficiency. Furthermore, strong correlations were found between browse/search preference and NASA-TLX, as well as between browse/search preference and user satisfaction. Thus, we suggest browse/search preference as a promising heuristic for assessing cognitive load.
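For reference, the raw (unweighted) NASA-TLX score used in studies like this is simply the mean of the six standard subscale ratings; the weighted variant additionally scales each subscale by tallies from 15 pairwise comparisons. A minimal sketch with made-up ratings:

```python
def raw_tlx(ratings):
    """Raw (unweighted) NASA-TLX: the mean of the six subscale ratings,
    each on a 0-100 scale."""
    scales = ("mental", "physical", "temporal", "performance",
              "effort", "frustration")
    assert set(ratings) == set(scales), "all six subscales are required"
    return sum(ratings[s] for s in scales) / len(scales)

# illustrative ratings for one participant in one shop (not study data)
shop_score = raw_tlx({"mental": 70, "physical": 10, "temporal": 55,
                      "performance": 40, "effort": 65, "frustration": 30})
```

Comparing such per-shop scores, alongside secondary-task reaction times and completion times, is the kind of analysis the study describes: a subjective workload index set against objective dual-task measures.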

