Measuring Collaboration Load With Pupillary Responses - Implications for the Design of Instructions in Task-Oriented HRI

2021 ◽  
Vol 12 ◽  
Author(s):  
Dimosthenis Kontogiorgos ◽  
Joakim Gustafson

In face-to-face interaction, speakers incrementally establish common ground, the mutual belief that they have understood each other. Instead of constructing “one-shot” complete utterances, speakers tend to package pieces of information in smaller fragments (what Clark calls “installments”). The aim of this paper was to investigate how speakers' fragmented construction of utterances affects the cognitive load of conversational partners during utterance production and comprehension. In a collaborative furniture assembly, participants instructed each other how to build an IKEA stool. Pupil diameter was measured as an index of effort and cognitive processing in the collaborative task. Pupillometry data and eye-gaze behaviour indicated that speakers required more cognitive resources to construct fragmented than non-fragmented utterances; this audience-designed construction of utterances was associated with higher cognitive load for speakers. We also found that the cognitive resources required of listeners decreased with each new speaker utterance, suggesting that speakers' efforts in the fragmented construction of utterances were successful in resolving ambiguities. The results indicate that speaking in fragments is beneficial for minimising collaboration load; however, adapting to listeners is itself a demanding task. We discuss implications for future empirical research on the design of task-oriented human-robot interactions, and how assistive social robots may benefit from producing fragmented instructions.
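The study treats pupil diameter as a moment-to-moment index of effort and cognitive processing. As a minimal sketch of how such a measure is often derived (not the authors' actual pipeline), the Python snippet below baseline-corrects pupil samples against a short pre-utterance window and averages them over a response window per utterance; the column names, window lengths, and synthetic data are illustrative assumptions.

```python
import numpy as np
import pandas as pd

def utterance_pupil_response(samples: pd.DataFrame,
                             utterance_onsets: list,
                             baseline_s: float = 0.5,
                             window_s: float = 2.0) -> pd.DataFrame:
    """Baseline-corrected mean pupil dilation per utterance.

    `samples` is assumed to have columns 't' (seconds) and 'pupil' (mm);
    the 0.5 s pre-onset baseline and 2 s response window are illustrative,
    not the values used in the study.
    """
    rows = []
    for onset in utterance_onsets:
        baseline = samples.loc[samples.t.between(onset - baseline_s, onset), "pupil"].mean()
        response = samples.loc[samples.t.between(onset, onset + window_s), "pupil"].mean()
        rows.append({"onset": onset, "dilation": response - baseline})
    return pd.DataFrame(rows)

# Illustrative usage with synthetic data
t = np.arange(0, 10, 0.01)
df = pd.DataFrame({"t": t, "pupil": 3.0 + 0.1 * np.sin(t)})
print(utterance_pupil_response(df, utterance_onsets=[2.0, 5.0, 8.0]))
```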

2020 ◽  
Vol 10 (5) ◽  
pp. 92
Author(s):  
Ramtin Zargari Marandi ◽  
Camilla Ann Fjelsted ◽  
Iris Hrustanovic ◽  
Rikke Dan Olesen ◽  
Parisa Gazerani

The affective dimension of pain contributes to pain perception. Cognitive load may influence pain-related feelings. Eye tracking has proven useful for detecting cognitive load effects objectively by using relevant eye movement characteristics. In this study, we investigated whether eye movement characteristics differ in response to pain-related feelings under low and high cognitive loads. A set of validated control and pain-related sounds was applied to provoke pain-related feelings. Twelve healthy young participants (six females) performed a cognitive task at two load levels, once with the control and once with the pain-related sounds, in randomized order. During the tasks, eye movements and task performance were recorded. Afterwards, the participants filled out questionnaires on their pain perception in response to the applied cognitive loads. Our findings indicate that increased cognitive load was associated with decreased saccade peak velocity, saccade frequency, and fixation frequency, and with increased fixation duration and pupil dilation range. Among the oculometrics, pain-related feelings were reflected only in the pupillary responses under low cognitive load. Task performance decreased, and perceived cognitive load increased, with the task load level; neither was influenced by the pain-related sounds. Pain-related feelings were lower when performing the task than when no task was being performed in an independent group of participants, which might be due to cognitive engagement during the task. This study demonstrates that cognitive processing can moderate the feelings associated with pain perception.
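The oculometrics named above (saccade peak velocity, saccade and fixation frequency, fixation duration, pupil dilation range) can be approximated from raw gaze samples with a simple velocity-threshold (I-VT) rule. The sketch below is an illustrative approximation, not the authors' processing pipeline; the 30 deg/s threshold, the units, and the synthetic data are assumptions.

```python
import numpy as np

def basic_oculometrics(t, x, y, pupil, sacc_vel_thresh=30.0):
    """Rough I-VT style oculometrics from gaze samples.

    t: timestamps in seconds; x, y: gaze position in degrees of visual angle;
    pupil: pupil diameter in mm. The 30 deg/s saccade threshold is an assumed,
    commonly used default, not the value from the study.
    """
    t, x, y, pupil = map(np.asarray, (t, x, y, pupil))
    dt = np.diff(t)
    vel = np.hypot(np.diff(x), np.diff(y)) / dt            # point-to-point velocity, deg/s
    is_sacc = vel > sacc_vel_thresh                         # saccade vs. fixation samples
    trans = np.diff(is_sacc.astype(int))
    n_sacc = int(np.sum(trans == 1) + is_sacc[0])           # saccade onsets
    n_fix = int(np.sum(trans == -1) + (not is_sacc[0]))     # fixation onsets
    duration = t[-1] - t[0]
    return {
        "saccade_peak_velocity": float(vel[is_sacc].max()) if is_sacc.any() else 0.0,
        "saccade_frequency": n_sacc / duration,             # saccades per second
        "fixation_frequency": n_fix / duration,             # fixations per second
        "mean_fixation_duration": float(dt[~is_sacc].sum() / max(n_fix, 1)),
        "pupil_dilation_range": float(pupil.max() - pupil.min()),
    }

# Illustrative usage with synthetic 100 Hz data
t = np.arange(0, 5, 0.01)
x = np.cumsum(np.random.default_rng(0).normal(0, 0.1, t.size))
y = np.cumsum(np.random.default_rng(1).normal(0, 0.1, t.size))
pupil = 3.0 + 0.2 * np.sin(t)
print(basic_oculometrics(t, x, y, pupil))
```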


Author(s):  
Virginia Clinton ◽  
Jennifer L. Cooper ◽  
Joseph E. Michaelis ◽  
Martha W. Alibali ◽  
Mitchell J. Nathan

Mathematics curricula are frequently rich with visuals, but these visuals are often not designed for optimal use of students' limited cognitive resources. The authors of this study revised the visuals in a mathematics lesson based on instructional design principles. The purpose of this study is to examine the effects of these revised visuals on students' cognitive load, cognitive processing, learning, and interest. Middle-school students (N = 62) read a lesson on early algebra with the original or the revised visuals while their eye movements were recorded. Students in the low prior knowledge group showed less cognitive load and less cognitive processing with the revised lesson than with the original lesson; the reverse was true for students in the middle prior knowledge group. The revisions had no effect on learning. The findings are discussed in the context of the expertise reversal effect as well as the cognitive theory of multimedia learning and cognitive load theory.


2019 ◽  
Author(s):  
Ester Navarro ◽  
Brooke N Macnamara ◽  
Sam Glucksberg ◽  
Andrew R. A. Conway

The underlying cognitive mechanisms explaining why speakers sometimes make communication errors are not well understood. Some scholars have theorized that audience design engages automatic processes when a listener is present; others argue that it relies on effortful resources, regardless of listener presence. We hypothesized that (a) working memory is engaged during communicative audience design and (b) the extent to which working memory is engaged depends on individual differences in cognitive abilities and on the amount of resources concurrently available. In Experiment 1, participants completed a referential task under high, low, or no cognitive load with a present listener whose perspective differed from the speaker's. Speakers made few referential errors under no and low load, but errors increased when cognitive load was highest. In Experiment 2, the listener was absent. Speakers again made few referential errors under no and low load, but errors increased when cognitive load was highest, suggesting that audience design remains effortful under high cognitive load, regardless of the presence of a listener. Experiment 3 tested whether cognitive abilities predicted communication performance: participants with higher fluid intelligence and working memory capacity made fewer communication errors. Our findings suggest that communication relies on available cognitive resources, and therefore errors occur as a function of factors like cognitive load and individual differences.


2014 ◽  
Vol 23 (3) ◽  
pp. 132-139 ◽  
Author(s):  
Lauren Zubow ◽  
Richard Hurtig

Children with Rett Syndrome (RS) are reported to use multiple modalities to communicate, although their intentionality is often questioned (Bartolotta, Zipp, Simpkins, & Glazewski, 2011; Hetzroni & Rubin, 2006; Sigafoos et al., 2000; Sigafoos, Woodyatt, Tucker, Roberts-Pennell, & Pittendreigh, 2000). This paper presents results of a study analyzing the unconventional vocalizations of a child with RS. The primary research question addresses the ability of familiar and unfamiliar listeners to interpret unconventional vocalizations as “yes” or “no” responses. This paper also addresses the acoustic analysis and perceptual judgments of these vocalizations. Pre-recorded isolated vocalizations of “yes” and “no” were presented to 5 listeners (mother, father, 1 unfamiliar, and 2 familiar clinicians), and the listeners were asked to rate the vocalizations as either “yes” or “no.” The ratings were compared to the original identification made by the child's mother during the face-to-face interaction from which the samples were drawn. Findings of this study suggest that, in this case, the child's vocalizations were intentional and could be interpreted by familiar and unfamiliar listeners as either “yes” or “no” without contextual or visual cues. The results suggest that communication partners should be trained to attend to eye gaze and vocalizations to ensure the child's intended choice is accurately understood.


Author(s):  
Bastien Trémolière ◽  
Marie-Ève Gagnon ◽  
Isabelle Blanchette

Abstract. Although the detrimental effect of emotion on reasoning has been evidenced many times, the cognitive mechanism underlying this effect remains unclear. In the present paper, we explore the cognitive load hypothesis as a potential explanation. In an experiment, participants solved syllogistic reasoning problems with either neutral or emotional contents. Participants were also presented with a secondary task, the difficult version of which requires the mobilization of cognitive resources to be solved correctly. Participants performed worse overall and took longer on emotional problems than on neutral problems. Performance on the difficult version of the secondary task was poorer when participants were reasoning about emotional rather than neutral contents, consistent with the idea that processing emotion requires more cognitive resources. Taken together, the findings afford evidence that the deleterious effect of emotion on reasoning is mediated by cognitive load.


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Isabell Hubert Lyall ◽  
Juhani Järvikivi

Research suggests that listeners' comprehension of spoken language is concurrently affected by linguistic and non-linguistic factors, including individual differences. However, there is no systematic research on whether general personality traits affect language processing. We correlated 88 native English-speaking participants' Big-5 traits with their pupillary responses to spoken sentences that included grammatical errors ("He frequently have burgers for dinner"), semantic anomalies ("Dogs sometimes chase teas"), and statements incongruent with gender-stereotyped expectations ("I sometimes buy my bras at Hudson's Bay", spoken by a male speaker). Generalized additive mixed models showed that the listener's Openness, Extraversion, Agreeableness, and Neuroticism traits modulated resource allocation to the three types of unexpected stimuli. No personality trait affected changes in pupil size across the board: less open participants showed greater pupil dilation when processing sentences with grammatical errors, and more introverted listeners showed greater pupil dilation in response to both semantic anomalies and socio-cultural clashes. Our study is the first to demonstrate that personality traits systematically modulate listeners' online language processing. Our results suggest that individuals with different personality profiles exhibit different patterns of cognitive resource allocation during real-time language comprehension.
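The analysis relates pupillary responses over time to personality scores with generalized additive mixed models. As a loose, simplified illustration (the study fit GAMMs, typically done in R; the sketch below uses the Python pygam library without random effects, and all variable names and data are synthetic assumptions), a smooth term over time can be combined with linear terms for a trait and a stimulus condition:

```python
import numpy as np
from pygam import LinearGAM, s, l

# Synthetic stand-in data: one row per pupil sample
rng = np.random.default_rng(0)
n = 2000
time = rng.uniform(0, 2, n)             # seconds after sentence onset
openness = rng.normal(0, 1, n)          # z-scored Openness score of the listener
is_anomaly = rng.integers(0, 2, n)      # 1 = sentence contains an anomaly
pupil = (0.1 * np.sin(3 * time)                     # baseline time course
         + 0.05 * is_anomaly * (1 - openness)       # trait-by-condition effect
         + rng.normal(0, 0.05, n))                  # noise

X = np.column_stack([time, openness, is_anomaly])
# Smooth term over time, linear terms for the trait and the condition
gam = LinearGAM(s(0) + l(1) + l(2)).fit(X, pupil)
gam.summary()
```

A full GAMM would additionally include by-participant and by-item random smooths, which pygam does not provide; this sketch only shows the shape of the fixed-effects part.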


Author(s):  
Ding Ding ◽  
Mark A Neerincx ◽  
Willem-Paul Brinkman

Virtual cognitions (VCs) are a stream of simulated thoughts people hear while immersed in a virtual environment, e.g. a simulated inner voice presented as a voice-over. As previous studies have shown, they can enhance people's self-efficacy and knowledge about, for example, social interactions. Ownership and plausibility of these VCs are regarded as important for their effect, and enhancing both might, therefore, be beneficial. A potential strategy for achieving this is synchronizing the VCs with people's eye fixations using eye-tracking technology embedded in a head-mounted display. This paper tests this idea in the context of a pre-therapy for spider and snake phobia to examine the ability to guide people's eye fixations. An experiment with 24 participants was conducted using a within-subjects design. Each participant was exposed to two conditions: one where the VCs were adapted to the participant's eye gaze, and a control condition where they were not adapted. The findings of a Bayesian analysis suggest that credibly more ownership was reported and more eye-gaze shift behaviour was observed in the eye-gaze-adapted condition than in the control condition. Compared with the alternative of no or negative mediation, the findings also lend some credibility to the hypothesis that ownership, at least partly, positively mediates the effect eye-gaze-adapted VCs have on eye-gaze shift behaviour. Only weak support was found for plausibility as a mediator. These findings help improve insight into how VCs affect people.
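The Bayesian analysis compares ownership ratings and gaze behaviour between the two within-subject conditions. As a minimal sketch of that kind of comparison (a hypothetical paired model in PyMC on synthetic ratings, not the authors' actual model or data), one can place a prior on the mean condition difference and read off the posterior probability that it is positive:

```python
import numpy as np
import pymc as pm

# Hypothetical paired ownership ratings for 24 participants in the two conditions
rng = np.random.default_rng(1)
adapted = rng.normal(5.2, 1.0, 24)   # eye-gaze-adapted condition (synthetic)
control = rng.normal(4.6, 1.0, 24)   # control condition (synthetic)
diff = adapted - control             # within-subject differences

with pm.Model():
    mu = pm.Normal("mu", mu=0, sigma=2)        # mean condition difference
    sigma = pm.HalfNormal("sigma", sigma=2)    # spread of the differences
    pm.Normal("obs", mu=mu, sigma=sigma, observed=diff)
    idata = pm.sample(2000, tune=1000, chains=4, random_seed=1)

# Posterior probability that the adapted condition yields higher ownership
print(float((idata.posterior["mu"].values > 0).mean()))
```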


Gesture ◽  
2005 ◽  
Vol 4 (2) ◽  
pp. 157-195 ◽  
Author(s):  
Jennifer Gerwing ◽  
Janet Bavelas

Hand gestures in face-to-face dialogue are symbolic acts, integrated with speech. Little is known about the factors that determine the physical form of these gestures. When a gesture depicts a previous nonsymbolic action, it obviously resembles this action; however, such gestures are not only noticeably different from the original action but, when they occur in a series, are also different from each other. This paper presents an experiment with two separate analyses (one quantitative, one qualitative) testing the hypothesis that the immediate communicative function is a determinant of the symbolic form of the gesture. First, we manipulated whether the speaker was describing the previous action to an addressee who had done the same actions and therefore shared common ground, or to one who had done different actions and therefore did not share common ground. The common-ground gestures were judged to be significantly less complex, precise, and informative than the gestures produced without common ground, a finding similar to the effects of common ground on words. In the qualitative analysis, we used the given-versus-new principle to analyze a series of gestures about the same actions by the same speaker. The speaker emphasized the new information in each gesture by making it larger, clearer, etc. When this information became given, a gesture for the same action became smaller or less precise, which is similar to findings for given versus new information in words. Thus the immediate communicative function (e.g., to convey information that is common ground or that is new) played a major role in determining the physical form of the gestures.

