multimodal feedback
Recently Published Documents

TOTAL DOCUMENTS: 108 (five years: 25)
H-INDEX: 13 (five years: 2)

2022, Vol 3
Author(s): Agnes Axelsson, Gabriel Skantze

Feedback is an essential part of all communication, and agents communicating with humans must be able to both give and receive feedback to ensure mutual understanding. In this paper, we analyse multimodal feedback given by humans towards a robot that is presenting a piece of art in a shared environment, similar to a museum setting. The data analysed contains video and audio recordings of 28 participants, richly annotated both in terms of multimodal cues (speech, gaze, head gestures, facial expressions, and body pose) and the polarity of any feedback (negative, positive, or neutral). We train statistical and machine learning models on the dataset and find that random forest models and multinomial regression models perform well at predicting the polarity of the participants' reactions. An analysis of the different modalities shows that most information is found in the participants' speech and head gestures, while much less is found in their facial expressions, body pose, and gaze. An analysis of the timing of the feedback shows that most feedback is given when the robot pauses (and thereby invites feedback), but that the exact timing of the feedback does not affect its meaning.
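The polarity-prediction setup described above can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the feature names, synthetic data, and labelling rule are invented for the example.

```python
# Sketch: predicting feedback polarity (negative/neutral/positive) from
# multimodal cue features with a random forest, in the spirit of the study.
# Features and labels here are synthetic and purely illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Toy features per feedback instance:
# [speech_positivity, head_nod, head_shake, gaze_on_robot, smile]
n = 300
X = rng.random((n, 5))
# Invented rule: head shakes -> negative (0), nods + smiles -> positive (2),
# otherwise neutral (1)
y = np.where(X[:, 2] > 0.7, 0, np.where(X[:, 1] + X[:, 4] > 1.2, 2, 1))

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X[:200], y[:200])
acc = clf.score(X[200:], y[200:])

# Feature importances hint at which modalities carry the signal, mirroring
# the paper's finding that speech and head gestures dominate.
print(acc, clf.feature_importances_.round(2))
```

Multinomial logistic regression (the paper's other well-performing model) would slot in via `sklearn.linear_model.LogisticRegression` with the same fit/score interface.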


2021
Author(s): Jun Momose, Yuta Koda, Hideki Mori, Morio Kakiuchi, Kotaro Imamura, ...
Keyword(s):

Author(s):  
Daniele Di Mitri ◽  
Jan Schneider ◽  
Hendrik Drachsler

Abstract: This paper describes the CPR Tutor, a real-time multimodal feedback system for cardiopulmonary resuscitation (CPR) training. The CPR Tutor automatically recognises and assesses the quality of chest compressions according to five CPR performance indicators, detecting training mistakes in real time with recurrent neural networks applied to a multimodal data stream of kinematic and electromyographic data. Based on this assessment, it provides audio feedback to correct the most critical mistakes and improve CPR performance. The mistake detection models were trained on a dataset collected from 10 experts. We then tested the validity of the CPR Tutor and the impact of its feedback functionality in a user study involving an additional 10 participants. The CPR Tutor pushes forward the current state of the art of real-time multimodal tutors by providing: (1) an architecture design, (2) a methodological approach for delivering real-time feedback using multimodal data, and (3) a field study on real-time feedback for CPR training. This paper details the results of that field study, quantitatively measuring the impact of the CPR Tutor's feedback on the performance indicators and qualitatively analysing the participants' questionnaire answers.
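The kind of recurrent model the CPR Tutor applies to its sensor stream can be sketched as a minimal forward pass over a compression window. The channel counts, weights, and window length below are invented; a real system would use a trained LSTM/GRU rather than random weights.

```python
# Minimal Elman-RNN sketch of per-indicator mistake detection over a window
# of multimodal (kinematic + EMG) samples. All sizes/weights are invented.
import numpy as np

def rnn_classify(window, Wx, Wh, Wo, bh, bo):
    """Forward pass: window is (timesteps, channels); returns one
    mistake probability per CPR performance indicator."""
    h = np.zeros(Wh.shape[0])
    for x_t in window:                      # unroll over the window
        h = np.tanh(Wx @ x_t + Wh @ h + bh)
    logits = Wo @ h + bo                    # one logit per indicator
    return 1.0 / (1.0 + np.exp(-logits))    # sigmoid -> probabilities

rng = np.random.default_rng(1)
channels, hidden, indicators = 6, 8, 5      # e.g. 3 kinematic + 3 EMG channels
Wx = rng.normal(0, 0.3, (hidden, channels))
Wh = rng.normal(0, 0.3, (hidden, hidden))
Wo = rng.normal(0, 0.3, (indicators, hidden))
bh, bo = np.zeros(hidden), np.zeros(indicators)

window = rng.normal(0, 1, (50, channels))   # ~1 s of sensor samples
probs = rnn_classify(window, Wx, Wh, Wo, bh, bo)
print(probs.round(2))
```

In a real-time setting this forward pass would run on a sliding window, and an indicator crossing a threshold would trigger the corresponding audio correction.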


2021, Vol 2021, pp. 1-10
Author(s): Yi-Qian Hu, Tian-Hao Gao, Jie Li, Jia-Chao Tao, Yu-Long Bai, ...

Background. Recently, the brain-computer interface (BCI) has seen rapid development, which may promote the recovery of motor function in chronic stroke patients. Methods. Twelve stroke patients with severe upper limb and hand motor impairment were enrolled and randomly assigned to two groups: motor imagery (MI)-based BCI training with multimodal feedback (BCI group, n = 7) and classical motor imagery training (control group, n = 5). Motor function and electrophysiology were evaluated before and after the intervention. The Fugl-Meyer assessment-upper extremity (FMA-UE) was the primary outcome measure. Secondary outcome measures included an increase in active wrist extension, surface electromyography (the amplitude and cocontraction of the extensor carpi radialis during movement), the action research arm test (ARAT), the motor status scale (MSS), and the Barthel index (BI). Time-frequency analysis and power spectral analysis were used to characterise electroencephalogram (EEG) changes before and after the intervention. Results. Compared with baseline, the FMA-UE score increased significantly in the BCI group (p = 0.006). MSS scores improved significantly in both groups, while ARAT did not improve significantly. In addition, before the intervention, none of the patients could actively extend their wrists, or they had muscle contractions only. After the intervention, four patients (two in each group) regained the ability to extend their paretic wrists. The amplitude and area under the curve of the extensor carpi radialis improved to some extent, but the difference between the groups was not statistically significant. Conclusion. MI-based BCI combined with sensory and visual feedback might improve severe upper limb and hand impairment in chronic stroke patients, showing potential for application in rehabilitation medicine.
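The power spectral analysis mentioned for the EEG data can be illustrated with a standard Welch estimate. The signal below is synthetic (a 10 Hz alpha-band oscillation plus noise), and the sampling rate and band limits are assumptions for the example, not values from the study.

```python
# Sketch of EEG power spectral analysis via Welch's method.
# Synthetic signal: 10 Hz alpha-band oscillation + noise, sampled at 250 Hz.
import numpy as np
from scipy.signal import welch

fs = 250.0                                  # assumed sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)                # 10 s of "EEG"
rng = np.random.default_rng(42)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.normal(size=t.size)

freqs, psd = welch(eeg, fs=fs, nperseg=512)

# Band power in the alpha band (8-13 Hz), a typical MI-BCI feature
band = (freqs >= 8) & (freqs <= 13)
df = freqs[1] - freqs[0]
alpha_power = psd[band].sum() * df
peak_freq = freqs[np.argmax(psd)]
print(round(peak_freq, 1), round(alpha_power, 3))
```

Comparing such band powers before and after an intervention (per channel, per band) is one common way to quantify the EEG changes the study reports.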


2021
Author(s): Cigdem Turan, Dorothea Koert, Karl David Neergaard, Rudolf Lioutikov

2021, Vol 8
Author(s): Joseph Bolarinwa, Iveta Eimontaite, Tom Mitchell, Sanja Dogramadzi, Praminda Caleb-Solly

A key challenge in achieving effective robot teleoperation is minimizing teleoperators' cognitive workload and fatigue. We set out to investigate the extent to which gaze tracking data can reveal how teleoperators interact with a system. In this study, we present an analysis of gaze tracking, captured as participants completed a multi-stage task: grasping and emptying the contents of a jar into a container. The task was repeated with different combinations of visual, haptic, and verbal feedback. Our aim was to determine whether teleoperation workload can be inferred by combining the gaze duration, fixation count, task completion time, and complexity of robot motion (measured as the sum of robot joint steps) at different stages of the task. Visual information was captured using four cameras positioned to view the robot workspace from different angles. These camera views (aerial, right, eye-level, and left) were displayed in the four quadrants (top-left, top-right, bottom-left, and bottom-right, respectively) of the participants' video feedback screen. We found that gaze duration and fixation count were highly dependent on the stage of the task and the feedback scenario utilized. The results revealed that combining feedback modalities reduced the cognitive workload (inferred by investigating the correlation between gaze duration, fixation count, task completion time, success or failure of task completion, and robot gripper trajectories), particularly in the task stages that require more precision. There was a significant positive correlation between gaze duration and the complexity of robot joint movements. Participants' gaze outside the areas of interest (distractions) was not influenced by the feedback scenarios. A learning effect was observed in the use of the controller for all participants as they repeated the task with different feedback combination scenarios.
To design a teleoperation system applicable in healthcare, we found that analysing teleoperators' gaze can help us understand how teleoperators interact with the system, making it possible to develop the system from the teleoperators' standpoint.
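The reported positive correlation between gaze duration and robot-motion complexity can be illustrated with a standard Pearson test. The data below is synthetic and the effect size is invented; it only shows the shape of the analysis.

```python
# Sketch: correlating gaze duration with robot-motion complexity
# (sum of joint steps). Synthetic data, invented effect size.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(7)
joint_steps = rng.integers(50, 500, size=40)               # per task stage
gaze_duration = 0.02 * joint_steps + rng.normal(0, 1, 40)  # seconds (toy model)

r, p = pearsonr(joint_steps, gaze_duration)
print(round(r, 2), p < 0.05)
```

The same pattern (a metric pair plus `pearsonr`) covers the other correlations the study examines, such as fixation count versus task completion time.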


2021, Vol 17 (33), pp. 66
Author(s): Nato Pachuashvili

Providing feedback on students' written work has always been a challenging experience for English as a foreign language (EFL) teachers and learners. High-quality feedback promotes students' engagement in learning processes and enhances writing performance. Traditional written corrective feedback has often been criticized for failing to achieve its purpose. Twenty-first-century technological developments have made it possible to provide audio and video feedback through screencast technology. The latter enables EFL teachers to provide multimodal feedback by recording the teacher's screen while commenting on a student's written work. Although some studies have been conducted in the field of oral feedback via screencast, video feedback is still relatively new in many educational settings. For this reason, the paper aims to provide a brief overview of screencast video feedback and the potential affordances and challenges faced by EFL teachers and learners. For this article, recent research studies have been collected to review the use of screencast feedback in EFL classes and discuss its implications for EFL students' writing. Furthermore, the paper provides an overview of the most widely used screencast software in educational settings and concludes with some practical guidelines for the effective implementation of screencast technology.


2021, Vol 5 (8), pp. 44
Author(s): Pekka Kallioniemi, Alisa Burova, John Mäkelä, Tuuli Keskinen, Kimmo Ronkainen, ...

Developments in sensor technology, artificial intelligence, and network technologies like 5G have made remote operation a valuable method of controlling various types of machinery. Among its benefits, remote operation offers access to hazardous environments. Its major limitation is the lack of proper sensory feedback from the machine, which negatively affects situational awareness and, consequently, may jeopardize remote operations. This article explores how to improve situational awareness via multimodal feedback (visual, auditory, and haptic) and studies how it can be utilized to communicate warnings to remote operators. To reach our goals, we conducted a controlled, within-subjects experiment with eight conditions and twenty-four participants on a simulated remote driving system. Additionally, we gathered further insights with a UX questionnaire and semi-structured interviews. The gathered data showed that the use of multimodal feedback positively affected situational awareness when driving remotely. Our findings indicate that the combination of added haptic and visual feedback was considered the best feedback combination for communicating the slipperiness of the road. We also found that the feeling of presence is an important aspect of remote driving tasks, and a requested one, especially among those with more experience operating real heavy machinery.
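One way to picture how warnings map onto feedback modalities is a small dispatcher. The channel names, severity threshold, and warning kinds below are hypothetical, not the study's implementation; they only illustrate combining visual, auditory, and haptic cues for a single warning.

```python
# Hypothetical sketch: routing a warning (e.g. road slipperiness) to a
# combination of feedback channels. All names and thresholds are invented.
from dataclasses import dataclass

@dataclass
class Warning:
    kind: str
    severity: float  # 0..1

def feedback_channels(w: Warning, enabled=("visual", "auditory", "haptic")):
    """Return the (modality, cue) pairs that should fire for this warning."""
    channels = []
    if "visual" in enabled:
        channels.append(("visual", f"icon:{w.kind}"))
    if "auditory" in enabled and w.severity >= 0.5:
        channels.append(("auditory", "alert_tone"))
    if "haptic" in enabled and w.kind == "slippery_road":
        # the study found added haptic + visual cues best for slipperiness
        channels.append(("haptic", "steering_vibration"))
    return channels

print(feedback_channels(Warning("slippery_road", 0.8)))
```

Restricting the `enabled` tuple reproduces the experiment's single- and dual-modality conditions against the same warning stream.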


2021, Vol 5 (3), pp. 12
Author(s): Léa Pillette, Bernard N’Kaoua, Romain Sabau, Bertrand Glize, Fabien Lotte

By performing motor-imagery tasks, for example, imagining hand movements, users of Motor-Imagery based Brain-Computer Interfaces (MI-BCIs) can control digital technologies, for example, neuroprostheses, using their brain activity alone. MI-BCI users need to train, usually with unimodal visual feedback, to produce brain activity patterns that are recognizable by the system. The literature indicates that multimodal vibrotactile and visual feedback is more effective than unimodal visual feedback, at least for short-term training. However, the influence of such multimodal feedback on multi-session MI-BCI user training remained unknown, as did the influence of the order of presentation of the feedback modalities. In our experiment, 16 participants trained to control an MI-BCI during five sessions with a realistic visual feedback and five others with both a realistic visual feedback and a vibrotactile one. Our results indicate that training benefits from multimodal feedback, in terms of performance and self-reported mindfulness. There is also a significant influence of the order of presentation of the modalities: participants who started training with visual feedback had higher performances than those who started with multimodal feedback. We recommend taking the order of presentation into account in future experiments assessing the influence of several feedback modalities.

