Multimodal User Interface
Recently Published Documents


TOTAL DOCUMENTS: 47 (last five years: 5)

H-INDEX: 7 (last five years: 0)

Electronics, 2020, Vol. 9 (12), pp. 2093
Author(s): Dmitry Ryumin, Ildar Kagirov, Alexandr Axyonov, Nikita Pavlyuk, Anton Saveliev, ...

This paper presents the research and development of a prototype of the assistive mobile information robot (AMIR). The main features of the prototype are voice- and gesture-based interfaces with Russian speech and sign language recognition and synthesis, and a high degree of robot autonomy. The AMIR prototype is intended to serve as a robotic cart for shopping in grocery stores and supermarkets. The main topics covered in this paper are the interface (three modalities), the single-handed gesture recognition system (based on a collected database of Russian sign language elements), and the technical description of the robotic platform (architecture and navigation algorithm). The use of multimodal interfaces, namely the speech and gesture modalities, makes human–robot interaction natural and intuitive, while sign language recognition allows hearing-impaired people to use the robotic cart. The AMIR prototype has promising prospects for real-world use in supermarkets, both due to its assistive capabilities and its multimodal user interface.
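The abstract does not disclose how the speech and gesture modalities are combined, but a simple late-fusion scheme of the kind such interfaces often use can be sketched as follows. This is a minimal illustration in Python; all names (`SpeechEvent`, `GestureEvent`, `fuse_commands`, the command tables) are hypothetical and not taken from the AMIR system.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SpeechEvent:
    text: str          # utterance from the speech recognizer
    confidence: float  # recognizer confidence in [0, 1]

@dataclass
class GestureEvent:
    label: str         # class label from the gesture recognizer
    confidence: float

# Hypothetical command vocabularies for each modality.
SPEECH_COMMANDS = {"follow me": "FOLLOW", "stop": "STOP", "go to checkout": "GOTO_CHECKOUT"}
GESTURE_COMMANDS = {"beckon": "FOLLOW", "open_palm": "STOP"}

def fuse_commands(speech: Optional[SpeechEvent],
                  gesture: Optional[GestureEvent],
                  threshold: float = 0.6) -> Optional[str]:
    """Late fusion: each modality votes for a cart command; the most
    confident recognition above the threshold wins."""
    candidates = []
    if speech is not None and speech.confidence >= threshold:
        cmd = SPEECH_COMMANDS.get(speech.text.lower())
        if cmd is not None:
            candidates.append((speech.confidence, cmd))
    if gesture is not None and gesture.confidence >= threshold:
        cmd = GESTURE_COMMANDS.get(gesture.label)
        if cmd is not None:
            candidates.append((gesture.confidence, cmd))
    return max(candidates)[1] if candidates else None

# Example: the two modalities disagree; the more confident one wins.
print(fuse_commands(SpeechEvent("stop", 0.9), GestureEvent("beckon", 0.7)))  # STOP
```

A real system would also have to align the two event streams in time; this sketch assumes the events arrive already synchronized.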


2020, Vol. 10 (17), pp. 6144
Author(s): Carlos Veiga Almagro, Giacomo Lunghi, Mario Di Castro, Diego Centelles Beltran, Raúl Marín Prades, ...

The use of remote robotic systems for inspection and maintenance in hazardous environments is a priority for all tasks potentially dangerous for humans. However, currently available robotic systems lack the level of usability that would allow inexperienced operators to accomplish complex tasks. Moreover, the task's complexity increases drastically when a single operator is required to control multiple remote agents (for example, when picking up and transporting large objects). In this paper, a system allowing an operator to prepare and configure cooperative behaviours for multiple remote agents is presented. The system is part of a human–robot interface that was designed at CERN, the European Organization for Nuclear Research, to perform remote interventions in its particle accelerator complex, as part of the CERNTAURO project. The modalities of interaction with the remote robots are presented in detail. The multimodal user interface enables the user to activate assisted cooperative behaviours according to a mission plan. The multi-robot interface has been validated at CERN in its Large Hadron Collider (LHC) mockup using a team of two mobile robotic platforms, each equipped with a robotic manipulator. Moreover, great similarities were identified between the CERNTAURO project and the TWINBOT project, which aims to create usable robotic systems for underwater manipulation. The cooperative behaviours were therefore also validated in a multi-robot pipe-transport scenario in a simulated underwater environment, experimenting with more advanced vision techniques. Cooperative teleoperation can be coupled with additional assistive tools such as vision-based tracking, grasp determination for metallic objects, and communication protocol design. The results show that the cooperative behaviours enable a single user to carry out a robotic intervention with more than one robot in a safer way.
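The abstract does not describe how a single operator command is distributed to the two platforms. One common scheme for cooperative transport is leader–follower control, in which the follower's velocity is derived from the leader's command through the rigid-body constraint imposed by the carried object. The sketch below illustrates that idea under stated assumptions; `Pose2D`, `Twist2D`, and `follower_twist` are hypothetical names, not the CERNTAURO API.

```python
import math
from dataclasses import dataclass

@dataclass
class Pose2D:
    x: float      # position in the leader frame, metres
    y: float
    theta: float  # heading relative to the leader, radians

@dataclass
class Twist2D:
    vx: float     # forward velocity, m/s
    vy: float     # lateral velocity, m/s
    wz: float     # angular velocity, rad/s

def follower_twist(leader_cmd: Twist2D, offset: Pose2D) -> Twist2D:
    """Map the operator's velocity command for the leader into the
    velocity the follower must execute so the grasped object (e.g. a
    pipe) stays rigid. `offset` is the follower's fixed pose in the
    leader's frame, captured when the cooperative behaviour is armed."""
    # Rigid-body kinematics in the leader frame: v_f = v_l + w x r.
    vx_l = leader_cmd.vx - leader_cmd.wz * offset.y
    vy_l = leader_cmd.vy + leader_cmd.wz * offset.x
    # Rotate the result into the follower's own frame.
    c, s = math.cos(-offset.theta), math.sin(-offset.theta)
    return Twist2D(vx=c * vx_l - s * vy_l,
                   vy=s * vx_l + c * vy_l,
                   wz=leader_cmd.wz)

# Example: leader drives at 0.5 m/s while turning at 0.1 rad/s;
# the follower trails 2 m behind with the same heading.
print(follower_twist(Twist2D(0.5, 0.0, 0.1), Pose2D(-2.0, 0.0, 0.0)))
# Twist2D(vx=0.5, vy=-0.2, wz=0.1)
```

Note that the follower command includes a lateral component, so this sketch implicitly assumes omnidirectional bases (or a local controller that resolves the lateral term into feasible wheel motion).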


2019, Vol. 23 (4), pp. 5–18
Author(s): Włodzimierz Kasprzak, Wojciech Szynkiewicz, Maciej Stefańczyk, Wojciech Dudek, Maksym Figat, ...

2019, Vol. 23 (3), pp. 41–54
Author(s): Włodzimierz Kasprzak, Wojciech Szynkiewicz, Maciej Stefańczyk, Wojciech Dudek, Maksym Figat, ...

2018, Vol. 1 (2), pp. e23
Author(s): Giuseppe La Tona, Antonio Petitti, Adele Lorusso, Roberto Colella, Annalisa Milella, ...
