Dialogue Situation Recognition for Everyday Conversation Using Multimodal Information

Author(s):
Yuya Chiba,
Ryuichiro Higashinaka

2021, pp. 009365022199531

Author(s):
Tess van der Zanden,
Maria B. J. Mos,
Alexander P. Schouten,
Emiel J. Krahmer

This study investigates how online dating profiles, consisting of both pictures and texts, are visually processed, and how both components affect impression formation. The attractiveness of the profile picture was varied systematically, and the texts either did or did not contain language errors. By collecting eye-tracking and perception data, we investigated whether picture attractiveness determines attention to the profile text and whether the text plays a secondary role. Eye-tracking results revealed that pictures are more likely to attract initial attention and that more attractive pictures receive more attention. Texts received attention regardless of the picture's attractiveness. Moreover, the perception data showed that both pictorial and textual cues affect impression formation, but that they influence different dimensions of perceived attraction in different ways. Based on these results, a new multimodal information processing model is proposed, which suggests that pictures and texts are processed independently and lead to separate assessments of cue attractiveness before impression formation.
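The proposed model lends itself to a simple dual-route sketch. The Python snippet below is a minimal illustration, not the authors' implementation: the Profile fields, the two attraction dimensions, and all weights are hypothetical stand-ins for the model's claim that pictures and texts are assessed independently and merged only at impression formation.

```python
from dataclasses import dataclass

@dataclass
class Profile:
    picture_attractiveness: float  # normalized pictorial cue, 0..1 (hypothetical)
    text_quality: float            # normalized textual cue; error-free text -> 1.0 (hypothetical)

def assess_picture(profile: Profile) -> float:
    # Route 1: the pictorial cue is assessed on its own.
    return profile.picture_attractiveness

def assess_text(profile: Profile) -> float:
    # Route 2: the textual cue is assessed independently of the picture.
    return profile.text_quality

def form_impression(profile: Profile) -> dict:
    # The two separate cue assessments are combined only at the final,
    # impression-formation stage. The weights are purely illustrative:
    # the study reports that pictorial and textual cues affect different
    # dimensions of perceived attraction in different ways.
    pic = assess_picture(profile)
    txt = assess_text(profile)
    return {
        "physical_attraction": 0.8 * pic + 0.2 * txt,
        "social_attraction": 0.3 * pic + 0.7 * txt,
    }

# Example: an attractive picture paired with an error-laden text.
print(form_impression(Profile(picture_attractiveness=0.9, text_quality=0.4)))
```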


Author(s):
Krishnanand Kaipa,
Carlos Morato,
Boxuan Zhao,
Satyandra K. Gupta

This paper presents the design of an instruction generation system that automatically generates instructions for complex assembly operations performed by humans on factory shop floors. Multimodal information (text, graphical annotations, and 3D animations) is used to create easy-to-follow instructions, thereby reducing learning time and eliminating the possibility of assembly errors. An automated motion-planning subsystem computes a collision-free path for each part from its initial posture in a crowded scene to its final posture in the current subassembly; visualizing this computed motion yields the 3D animations. The system also includes an automated part identification module that enables the human to identify, and pick, the correct part from a set of similar-looking parts. The system's ability to automatically translate assembly plans into instructions significantly reduces the time needed to generate instructions and to update them in response to design changes.
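The following Python sketch illustrates the pipeline the abstract describes under stated assumptions: the names (Part, InstructionStep, plan_path) are hypothetical, and the straight-line interpolation merely stands in for the paper's collision-free motion planner.

```python
from dataclasses import dataclass, field

Pose = tuple  # hypothetical pose representation: (x, y, z, roll, pitch, yaw)

@dataclass
class Part:
    name: str
    initial_posture: Pose  # pose in the crowded scene
    final_posture: Pose    # target pose in the current subassembly

@dataclass
class InstructionStep:
    text: str
    annotations: list = field(default_factory=list)   # graphical annotations
    animation_path: list = field(default_factory=list)  # waypoints for the 3D animation

def plan_path(part: Part, n_waypoints: int = 10) -> list:
    # Stand-in for the motion-planning subsystem: straight-line interpolation
    # between postures. The actual system computes a collision-free path
    # through the crowded scene (e.g., with a sampling-based planner).
    a, b = part.initial_posture, part.final_posture
    return [tuple(ai + (bi - ai) * t / n_waypoints for ai, bi in zip(a, b))
            for t in range(n_waypoints + 1)]

def generate_instructions(assembly_plan: list) -> list:
    # Translate an ordered assembly plan into multimodal instruction steps:
    # text, graphical annotations, and an animation derived from the planned motion.
    steps = []
    for part in assembly_plan:
        steps.append(InstructionStep(
            text=f"Pick part '{part.name}' and install it in the subassembly.",
            annotations=[f"highlight '{part.name}' among similar-looking parts"],
            animation_path=plan_path(part),
        ))
    return steps

# Usage: a one-part plan; regenerating after a design change is just a re-run.
plan = [Part("bracket", (0.4, 0.2, 0.0, 0, 0, 0), (0.0, 0.0, 0.1, 0, 0, 0))]
for step in generate_instructions(plan):
    print(step.text, f"({len(step.animation_path)} animation waypoints)")
```

Because the instruction steps are derived entirely from the assembly plan and the planned motions, a design change only requires re-running the pipeline, which is the source of the claimed reduction in instruction-update time.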

