Design of Multimodal User Interface using Speech and Gesture Recognition for Wearable Watch Platform

2015, Vol 21 (6), pp. 418-423. Author(s): Ki Eun Seong, Yu Jin Park, Soon Ju Kang
2021, Vol 297, pp. 01030. Author(s): Issam Elmagrouni, Abdelaziz Ettaoufik, Siham Aouad, Abderrahim Maizate

Gesture recognition technology based on visual detection acquires gesture information in a non-contact manner. There are two types of gesture recognition: isolated and continuous. The former aims to classify videos or other gesture sequences (e.g., RGB-D or skeleton data) that contain only one isolated gesture instance per sequence. In this study, we review existing methods of visual gesture recognition, grouped into the following families: static gestures, dynamic gestures, methods based on dedicated sensors (Kinect, Leap, etc.), work applying gesture recognition to robots, and work handling gesture recognition at the browser level. We then survey the most common JavaScript-based deep learning frameworks. Finally, we propose a process for improving user interface control based on gesture recognition, in order to streamline the implementation of this mechanism.
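To make the browser-level pipeline concrete, the sketch below shows a minimal rule-based static-gesture classifier operating on 2D hand landmarks in the 21-point layout used by libraries such as MediaPipe Hands and the TensorFlow.js hand-pose models. The landmark data here is synthetic for illustration; in a real browser pipeline it would come from a model running on webcam frames. The classification rule (fingertip farther from the wrist than its middle knuckle means "extended") is a simplified assumption, not the method of the reviewed paper.

```javascript
// Euclidean distance between two 2D landmarks.
const dist = (a, b) => Math.hypot(a.x - b.x, a.y - b.y);

// Classify "open" vs "fist": a finger counts as extended when its tip
// (e.g., landmark 8 for the index finger) lies farther from the wrist
// (landmark 0) than its middle knuckle (PIP joint, e.g., landmark 6).
function classifyStaticGesture(landmarks) {
  const wrist = landmarks[0];
  // [tip index, PIP index] pairs for index, middle, ring, pinky fingers.
  const fingers = [[8, 6], [12, 10], [16, 14], [20, 18]];
  const extended = fingers.filter(
    ([tip, pip]) => dist(landmarks[tip], wrist) > dist(landmarks[pip], wrist)
  ).length;
  return extended >= 3 ? "open" : "fist";
}

// Synthetic "open hand": fingertips (y = 1.0) beyond the knuckles (y = 0.5).
const openHand = Array.from({ length: 21 }, (_, i) => ({
  x: 0,
  y: [8, 12, 16, 20].includes(i) ? 1.0 : [6, 10, 14, 18].includes(i) ? 0.5 : 0.2,
}));

// Synthetic "fist": fingertips (y = 0.3) curled back inside the knuckles.
const fist = Array.from({ length: 21 }, (_, i) => ({
  x: 0,
  y: [8, 12, 16, 20].includes(i) ? 0.3 : [6, 10, 14, 18].includes(i) ? 0.5 : 0.2,
}));

console.log(classifyStaticGesture(openHand)); // "open"
console.log(classifyStaticGesture(fist));     // "fist"
```

In a full UI-control process, a deep model would supply the landmarks per frame and the classifier's output would be debounced over several frames before being mapped to an interface command.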


1998. Author(s): Su-Hwan Kim, Hyunil Choi, Ji-Beom Yoo, Phill-Kyu Rhee, Y. C. Park

2018, Vol 1 (2), pp. e23. Author(s): Giuseppe La Tona, Antonio Petitti, Adele Lorusso, Roberto Colella, Annalisa Milella, ...

2008, Vol 2 (2), pp. 105-116. Author(s): Savvas Argyropoulos, Konstantinos Moustakas, Alexey A. Karpov, Oya Aran, Dimitrios Tzovaras, ...
