Online Gesture Recognition for User Interface on Accelerometer Built-in Mobile Phones

Author(s): BongWhan Choe, Jun-Ki Min, Sung-Bae Cho

2015, Vol 77 (29)
Author(s): Ahmed Sheikh Abdullah Al-Aidaroos, Ariffin Abdul Mutalib

Nowadays, mobile phones provide not only voice call and messaging services but a plethora of other services. These computational capabilities allow mobile phones to serve people in various areas, including education, banking, commerce, travel, and other aspects of daily life. Meanwhile, the number of mobile phone users has increased dramatically in the last decade. At the same time, the usability of an application is usually assessed through its user interface. Therefore, this paper aims to design a measurement tool for evaluating the usability of mobile applications, based on the usability attributes and dimensions that must be considered in the interface. To obtain the appropriate attributes, a Systematic Literature Review (SLR) was conducted, and the Goal Question Metric (GQM) approach was used to design the tool. From 261 related works, only the 18 most relevant were selected through the four SLR stages. The SLR identified 25 dimensions, but some of them are synonymous with, or subsets of, other dimensions. Consequently, three dimensions must be included in any usability evaluation instrument, and these are broken down into ten sub-dimensions.
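As a rough illustration of how a GQM-based instrument can be organized, the sketch below encodes a goal, its questions, and candidate metrics as a TypeScript structure. The dimension names hinted at in the comments (effectiveness, efficiency, satisfaction) and all metric names are placeholders chosen for illustration; they are not the dimensions or sub-dimensions identified by the paper.

    // Hedged sketch of a Goal-Question-Metric (GQM) decomposition for a
    // mobile usability instrument. All dimension/sub-dimension/metric names
    // are hypothetical placeholders, not the paper's actual findings.
    interface Metric {
      name: string;          // what is measured
      unit: string;          // e.g. seconds, percent, 1-5 Likert score
    }

    interface Question {
      text: string;          // question refining the goal
      metrics: Metric[];     // metrics that answer the question
    }

    interface Goal {
      purpose: string;       // why we measure
      object: string;        // what we measure (the mobile app UI)
      questions: Question[];
    }

    const usabilityGoal: Goal = {
      purpose: "Evaluate usability of a mobile application's interface",
      object: "Mobile app user interface",
      questions: [
        {
          text: "How effectively can users complete core tasks?",   // placeholder dimension: effectiveness
          metrics: [{ name: "task completion rate", unit: "percent" }],
        },
        {
          text: "How efficiently are tasks completed?",             // placeholder dimension: efficiency
          metrics: [{ name: "time on task", unit: "seconds" }],
        },
        {
          text: "How satisfied are users with the interface?",      // placeholder dimension: satisfaction
          metrics: [{ name: "post-task rating", unit: "1-5 Likert score" }],
        },
      ],
    };

In such a structure, each question would map onto one of the instrument's dimensions and each metric onto a sub-dimension item.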


2021, Vol 297, pp. 01030
Author(s): Issam Elmagrouni, Abdelaziz Ettaoufik, Siham Aouad, Abderrahim Maizate

Gesture recognition technology based on visual detection acquires gesture information in a non-contact manner. There are two types of gesture recognition: isolated and continuous. The former aims to classify videos or other gesture sequences (e.g., RGB-D or skeleton data) that contain only one isolated gesture instance per sequence. In this study, we review existing visual gesture recognition methods, grouped into the following families: static approaches, dynamic approaches, approaches based on dedicated devices (Kinect, Leap, etc.), works that apply gesture recognition to robots, and works that handle gesture recognition at the browser level. Following that, we examine the most common JavaScript-based deep learning frameworks. Finally, we present the idea of defining a process for improving user interface control based on gesture recognition, in order to streamline the implementation of this mechanism.
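As a minimal sketch of browser-level gesture recognition with a JavaScript-based deep learning framework, the TypeScript fragment below builds and queries a small classifier with TensorFlow.js. The input layout (63 values for 21 hand keypoints, such as those produced by a separate hand-tracking step) and the gesture labels are assumptions made for illustration, not details taken from the reviewed works.

    // Hedged sketch: classifying a single hand pose in the browser with
    // TensorFlow.js. Input shape and gesture labels are illustrative assumptions.
    import * as tf from '@tensorflow/tfjs';

    const GESTURES = ['swipe_left', 'swipe_right', 'open_palm'];  // hypothetical labels

    // Small dense network: 21 keypoints x (x, y, z) = 63 input features.
    function buildModel(): tf.Sequential {
      const model = tf.sequential();
      model.add(tf.layers.dense({ inputShape: [63], units: 64, activation: 'relu' }));
      model.add(tf.layers.dense({ units: GESTURES.length, activation: 'softmax' }));
      model.compile({ optimizer: 'adam', loss: 'categoricalCrossentropy', metrics: ['accuracy'] });
      return model;
    }

    // Map one flattened keypoint vector to a gesture label.
    function classify(model: tf.Sequential, keypoints: number[]): string {
      return tf.tidy(() => {
        const input = tf.tensor2d([keypoints]);          // shape [1, 63]
        const probs = model.predict(input) as tf.Tensor; // shape [1, numGestures]
        const index = probs.argMax(-1).dataSync()[0];
        return GESTURES[index];
      });
    }

A real pipeline would first extract keypoints per video frame with a hand-tracking model, then either train this network on labeled samples or load pretrained weights (e.g., via tf.loadLayersModel); the sketch only shows the inference path.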


1998
Author(s): Su-Hwan Kim, Hyunil Choi, Ji-Beom Yoo, Phill-Kyu Rhee, Y. C. Park
