multimodal user interactions
Recently Published Documents


TOTAL DOCUMENTS: 5 (five years: 1)

H-INDEX: 1 (five years: 0)

Author(s): Alan L. V. Guedes, Sergio Colcher

Multimedia languages have traditionally focused on synchronizing a multimedia presentation (based on media and time abstractions) and on supporting user interactions for a single user, usually limited to keyboard and mouse input. Recent advances in recognition technologies, however, have given rise to a new class of multimodal user interfaces (MUIs). In short, MUIs process two or more combined user input modalities (e.g. speech, pen, touch, gesture, gaze, and head and body movements) in a coordinated manner with output modalities. An individual input modality corresponds to a specific type of user-generated information captured by input devices (e.g. speech, pen) or sensors (e.g. motion sensors). An individual output modality corresponds to information the user consumes through stimuli perceived by the human senses; the computer system produces those stimuli through audiovisual or actuation devices (e.g. tactile feedback). In this proposal, we aim to extend the NCL multimedia language to take advantage of multimodal features.
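The coordinated processing of two or more input modalities described above can be illustrated with a minimal temporal-fusion sketch. This is not the authors' NCL proposal; it is a hypothetical Python example of the classic "put-that-there" pattern, where a speech command is paired with a pointing gesture that occurs within a short time window. The event names, the `fuse` function, and the window size are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class InputEvent:
    modality: str    # e.g. "speech" or "gesture" (hypothetical labels)
    value: str       # recognized content, e.g. a command or a pointed-at target
    timestamp: float # capture time in seconds

def fuse(events, window=1.0):
    """Pair each speech event with any gesture event that occurs
    within `window` seconds, yielding combined multimodal commands."""
    speech = [e for e in events if e.modality == "speech"]
    gestures = [e for e in events if e.modality == "gesture"]
    fused = []
    for s in speech:
        for g in gestures:
            if abs(s.timestamp - g.timestamp) <= window:
                fused.append((s.value, g.value))
    return fused

# A spoken "delete" close in time to a pointing gesture is fused
# into one command; the later "save" has no nearby gesture.
events = [
    InputEvent("speech", "delete", 0.2),
    InputEvent("gesture", "point:item3", 0.5),
    InputEvent("speech", "save", 5.0),
]
print(fuse(events))  # [('delete', 'point:item3')]
```

A real multimodal runtime would add confidence scores from the recognizers and a disambiguation step, but the time-windowed pairing above captures the core idea of coordinating separately captured modalities into a single interaction.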


2016, Vol 76 (4), pp. 5691-5720

Author(s): Álan Lívio Vasconcelos Guedes, Roberto Gerson de Albuquerque Azevedo, Simone Diniz Junqueira Barbosa
