The classification of gesture interactions and the study of their ergonomic effect on hands

2021 ◽  
Author(s):  
Yu Wai Chau

To investigate gestural behavior during human–computer interactions, an investigation into the designs of current interaction methods is conducted. This information is then compared against emerging gesture databases to determine whether their gesture designs follow the guidelines identified in that investigation. The comparison also notes common trends across currently developed gesture databases, such as the use of similar gestures for specific commands. To investigate gestural behavior during interactions with computer interfaces, an experiment was devised to observe and record gestures in use for gesture databases by means of a hardware sensor device. It was found that opposing adjacent fingers and gestures that simulate object manipulation influence user comfort. The results of this study will inform guidelines for creating new gestures for hand-gesture interfaces.
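
Purely as an illustration of how such findings might be applied when screening candidate gestures, the sketch below encodes gestures by the two reported factors and ranks them with a toy comfort score. The encoding, the weights, and even the sign of each factor's contribution are hypothetical assumptions, not the study's model.

```python
# Hypothetical comfort-screening heuristic based on the two factors the
# study reports; all weights and their signs are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Gesture:
    name: str
    opposes_adjacent_fingers: bool       # factor reported to affect comfort
    simulates_object_manipulation: bool  # factor reported to affect comfort

def comfort_score(g: Gesture) -> int:
    """Toy comfort score; higher is assumed more comfortable."""
    score = 0
    if g.simulates_object_manipulation:
        score += 1   # assumed: grasp-like gestures feel natural
    if g.opposes_adjacent_fingers:
        score -= 1   # assumed: opposing adjacent fingers strains the hand
    return score

candidates = [
    Gesture("pinch-to-zoom", False, True),
    Gesture("cross middle over index", True, False),
]
for g in sorted(candidates, key=comfort_score, reverse=True):
    print(g.name, comfort_score(g))
```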


2014 ◽  
Vol 2014 ◽  
pp. 1-9 ◽  
Author(s):  
S. Mala ◽  
K. Latha

Activity recognition is needed in many applications, for example, surveillance systems, patient monitoring, and human–computer interfaces. Feature selection plays an important role in activity recognition, data mining, and machine learning. To select a subset of features, Differential Evolution (DE), an efficient evolutionary optimizer, is used to find informative features in eye movements recorded with electrooculography (EOG). Many researchers use EOG signals in human–computer interactions and analyze eye movements with various computational intelligence methods. The proposed system analyzes EOG signals using clearness-based features, minimum-redundancy maximum-relevance features, and DE-based features. This work concentrates on the DE-based feature selection algorithm in order to improve classification accuracy for reliable activity recognition.
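
As a concrete illustration of DE-based feature selection, the following sketch evolves continuous vectors that are thresholded into binary feature masks, scoring each mask by the cross-validated accuracy of a k-nearest-neighbors classifier. The synthetic data, the k-NN evaluator, and all DE hyperparameters are assumptions for demonstration, not the authors' setup.

```python
# Minimal DE/rand/1/bin wrapper for feature-subset selection (illustrative).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=200, n_features=30,
                           n_informative=8, random_state=0)

def fitness(mask):
    """Cross-validated accuracy of k-NN on the selected feature subset."""
    if mask.sum() == 0:
        return 0.0
    return cross_val_score(KNeighborsClassifier(), X[:, mask], y, cv=5).mean()

# DE over continuous vectors in [0, 1]; a feature is selected when its
# component exceeds 0.5.
NP, D, F, CR, GENS = 20, X.shape[1], 0.8, 0.9, 40
pop = rng.random((NP, D))
scores = np.array([fitness(ind > 0.5) for ind in pop])

for _ in range(GENS):
    for i in range(NP):
        r1, r2, r3 = pop[rng.choice([j for j in range(NP) if j != i],
                                    3, replace=False)]
        mutant = np.clip(r1 + F * (r2 - r3), 0, 1)   # differential mutation
        cross = rng.random(D) < CR
        cross[rng.integers(D)] = True                # guarantee one crossover
        trial = np.where(cross, mutant, pop[i])
        s = fitness(trial > 0.5)
        if s >= scores[i]:                           # greedy selection
            pop[i], scores[i] = trial, s

best = pop[scores.argmax()] > 0.5
print("selected features:", np.flatnonzero(best), "accuracy:", scores.max())
```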


2020 ◽  
Vol 17 (4) ◽  
pp. 497-506
Author(s):  
Sunil Patel ◽  
Ramji Makwana

Automatic classification of dynamic hand gestures is challenging due to the large diversity within each gesture class, low resolution, and the fact that gestures are performed with the fingers. Because of these challenges, many researchers focus on this area. Recently, deep neural networks have been used for implicit feature extraction, with a softmax layer for classification. In this paper, we propose a method based on a two-dimensional convolutional neural network that performs detection and classification of hand gestures simultaneously from multimodal Red, Green, Blue, Depth (RGBD) and optical-flow data, and passes these features to a Long Short-Term Memory (LSTM) recurrent network for frame-to-frame probability generation, with a Connectionist Temporal Classification (CTC) network for loss calculation. We calculate optical flow from the Red, Green, Blue (RGB) data to capture the motion information present in the video. The CTC model efficiently evaluates all possible alignments of a hand gesture via dynamic programming and checks frame-to-frame consistency of the visual similarity of the gesture in the unsegmented input stream. The CTC network finds the most probable sequence of frames for a gesture class; the frame with the highest probability value is selected by max decoding. The entire network is trained end-to-end with the CTC loss for gesture recognition. We evaluate on the challenging Vision for Intelligent Vehicles and Applications (VIVA) dataset for dynamic hand gesture recognition, captured with RGB and depth data. On this VIVA dataset, our proposed hand gesture recognition technique outperforms competing state-of-the-art algorithms, achieving an accuracy of 86%.
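
A minimal sketch of the described pipeline, assuming stacked RGB, depth, and optical-flow channels as input: a per-frame 2D CNN feeds an LSTM, whose per-frame log-probabilities are trained with PyTorch's CTC loss. The channel counts, the 19-class output (the number of VIVA gesture classes), and the toy tensors are illustrative assumptions, not the authors' implementation.

```python
# CNN -> LSTM -> CTC sketch for unsegmented gesture streams (illustrative).
import torch
import torch.nn as nn

class GestureCTC(nn.Module):
    def __init__(self, in_channels=7, n_classes=19):  # RGB+D+flow stacked
        super().__init__()
        self.cnn = nn.Sequential(                     # 2D CNN per frame
            nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.lstm = nn.LSTM(64, 128, batch_first=True)
        self.head = nn.Linear(128, n_classes + 1)     # +1 for the CTC blank

    def forward(self, video):                         # video: (B, T, C, H, W)
        B, T = video.shape[:2]
        feats = self.cnn(video.flatten(0, 1)).view(B, T, -1)
        out, _ = self.lstm(feats)                     # frame-wise probabilities
        return self.head(out).log_softmax(-1)         # (B, T, n_classes+1)

model = GestureCTC()
video = torch.randn(2, 40, 7, 64, 64)                 # 2 clips, 40 frames each
log_probs = model(video).permute(1, 0, 2)             # CTC expects (T, B, C)
targets = torch.tensor([3, 11])                       # one gesture per clip
loss = nn.CTCLoss(blank=19)(log_probs, targets.unsqueeze(1),
                            torch.full((2,), 40), torch.full((2,), 1))
print(loss.item())
```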


2021 ◽  
Vol 11 (11) ◽  
pp. 4922
Author(s):  
Tengfei Ma ◽  
Wentian Chen ◽  
Xin Li ◽  
Yuting Xia ◽  
Xinhua Zhu ◽  
...  

To explore whether the brain exhibits pattern differences during the rock–paper–scissors (RPS) imagery task, this paper attempts to classify the task using fNIRS and deep learning. In this study, we designed an RPS task with a total duration of 25 min and 40 s and recruited 22 volunteers for the experiment. We used an fNIRS acquisition device (FOIRE-3000) to record the cerebral neural activity of these participants during the RPS task. Time series classification (TSC) algorithms were introduced for classifying the time-domain fNIRS signals. Experiments show that CNN-based TSC methods can achieve 97% accuracy in RPS classification. CNN-based TSC methods are therefore suitable for classifying fNIRS signals in RPS motor imagery tasks and may open new application directions for the development of brain–computer interfaces (BCIs).
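
A hedged sketch of a CNN-based time-series classifier of the kind the study evaluates, applied to multichannel fNIRS trials. The channel count, window length, and three-class output (rock, paper, scissors) are assumptions, not the paper's architecture.

```python
# 1D-CNN time-series classifier for multichannel fNIRS (illustrative).
import torch
import torch.nn as nn

class FNIRSCNN(nn.Module):
    def __init__(self, n_channels=40, n_classes=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_channels, 64, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(64, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(64, n_classes),
        )

    def forward(self, x):        # x: (batch, channels, time)
        return self.net(x)

model = FNIRSCNN()
x = torch.randn(8, 40, 300)      # 8 trials, 40 channels, 300 time samples
logits = model(x)
print(logits.argmax(dim=1))      # predicted class per trial
```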


Leonardo ◽  
2009 ◽  
Vol 42 (5) ◽  
pp. 439-442 ◽  
Author(s):  
Eduardo R. Miranda ◽  
John Matthias

Music neurotechnology is a new research area emerging at the crossroads of neurobiology, engineering sciences and music. Examples of ongoing research into this new area include the development of brain-computer interfaces to control music systems and systems for automatic classification of sounds informed by the neurobiology of the human auditory apparatus. The authors introduce neurogranular sampling, a new sound synthesis technique based on spiking neuronal networks (SNN). They have implemented a neurogranular sampler using the SNN model developed by Izhikevich, which reproduces the spiking and bursting behavior of known types of cortical neurons. The neurogranular sampler works by taking short segments (or sound grains) from sound files and triggering them when any of the neurons fire.
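The triggering mechanism can be sketched directly from the published Izhikevich model equations: simulate the neuron and, each time the membrane potential spikes, mix a short grain from a source signal into the output at the spike time. The grain length, input current, and synthetic sine "sound file" below are assumptions for demonstration, not the authors' implementation.

```python
# Neurogranular-sampling sketch driven by an Izhikevich neuron (illustrative).
import numpy as np

fs = 44_100
source = np.sin(2 * np.pi * 220 * np.arange(fs) / fs)  # 1 s source signal
grain_len = int(0.03 * fs)                             # 30 ms grains
out = np.zeros(fs)
rng = np.random.default_rng(1)

# Izhikevich model: v' = 0.04v^2 + 5v + 140 - u + I,  u' = a(bv - u);
# on a spike (v >= 30 mV), reset v <- c and u <- u + d.
a, b, c, d = 0.02, 0.2, -65.0, 8.0   # regular-spiking cortical neuron
v, u, I, dt = c, b * c, 10.0, 1.0    # dt = 1 ms, so 1000 steps ~ 1 s
spikes = 0

for step in range(1000):
    v += dt * (0.04 * v * v + 5 * v + 140 - u + I)
    u += dt * a * (b * v - u)
    if v >= 30.0:                          # spike -> trigger a grain
        v, u = c, u + d
        spikes += 1
        t = int(step / 1000 * fs)          # spike time as a sample index
        g = rng.integers(0, len(source) - grain_len)  # random grain start
        end = min(t + grain_len, len(out))
        out[t:end] += source[g:g + (end - t)]

print(f"{spikes} grains triggered; peak amplitude {np.abs(out).max():.2f}")
```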


2017 ◽  
Vol 27 (08) ◽  
pp. 1750033 ◽  
Author(s):  
Alborz Rezazadeh Sereshkeh ◽  
Robert Trott ◽  
Aurélien Bricout ◽  
Tom Chau

Brain–computer interfaces (BCIs) for communication can be nonintuitive, often requiring the performance of hand motor imagery or some other conversation-irrelevant task. In this paper, electroencephalography (EEG) was used to develop two intuitive online BCIs based solely on covert speech. The goal of the first BCI was to differentiate between 10 s of mental repetition of the word "no" and an equivalent duration of unconstrained rest. The second BCI was designed to discern between 10 s each of covert repetition of the words "yes" and "no". Twelve participants used these two BCIs to answer yes-or-no questions. Each participant completed four sessions, comprising two offline training sessions and two online sessions, one for testing each of the BCIs. With a support vector machine and a combination of spectral and time-frequency features, an average accuracy of [Formula: see text] was reached across participants in the online classification of no versus rest, with 10 out of 12 participants surpassing the chance level (60.0% for [Formula: see text]). The online classification of yes versus no yielded an average accuracy of [Formula: see text], with eight participants exceeding the chance level. Task-specific changes in EEG beta and gamma power in language-related brain areas tended to provide discriminatory information. To our knowledge, this is the first report of online EEG classification of covert speech. Our findings support further study of covert speech as a BCI activation task, potentially leading to the development of more intuitive BCIs for communication.
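
As an illustration of the classification approach, the sketch below extracts Welch band-power features in the beta and gamma bands from multichannel EEG trials and cross-validates an RBF support vector machine. The band edges, channel count, and synthetic trials are assumptions, not the authors' exact feature set.

```python
# SVM on spectral band-power features for a two-class EEG task (illustrative).
import numpy as np
from scipy.signal import welch
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

fs, n_trials, n_channels, n_samples = 256, 60, 8, 10 * 256  # 10 s trials
rng = np.random.default_rng(0)
eeg = rng.standard_normal((n_trials, n_channels, n_samples))
labels = rng.integers(0, 2, n_trials)          # "yes" vs "no" (synthetic)

BANDS = {"beta": (13, 30), "gamma": (30, 45)}  # bands highlighted above

def band_powers(trial):
    """Mean Welch power in each band, per channel, as one feature vector."""
    f, pxx = welch(trial, fs=fs, nperseg=fs)   # pxx: (channels, freqs)
    return np.concatenate([pxx[:, (f >= lo) & (f < hi)].mean(axis=1)
                           for lo, hi in BANDS.values()])

X = np.array([band_powers(t) for t in eeg])
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
print("CV accuracy:", cross_val_score(clf, X, labels, cv=5).mean())
```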


Many hand-controlled robots are developed for visually impaired people in order to help them live confidently. This project proposes human–computer interaction through wireless gesture recognition to help physically handicapped persons move a robot in a desired direction. The work is framed in three stages. First, gesture capturing and recognition: gesture capturing uses a laptop or PC camera that takes input from the hands, and gesture recognition is based on a finger-count algorithm. Second, wireless transmission of data: a ZigBee module is used for serial transmission of the data. Finally, movement of the robot: the robot moves based on which fingers are opened or closed, and the laptop or PC displays the direction in which the robot is moving. This project can assist physically handicapped people in their daily lives. The entire system runs on an Arduino Uno, a ZigBee module, and an L293D motor driver.
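
A hypothetical sketch of the finger-count stage: count extended fingers from a binarized camera frame via convex-hull convexity defects (OpenCV) and map the count to a one-byte drive command, which on the real rig would be written to the ZigBee serial link (e.g. with pyserial). The command table, depth threshold, and port are assumptions, not the project's code.

```python
# Finger counting via convexity defects, mapped to drive commands (illustrative).
import cv2
import numpy as np

COMMANDS = {0: b"S", 1: b"F", 2: b"B", 3: b"L", 4: b"R"}  # stop/fwd/back/left/right

def count_fingers(mask):
    """Rough finger count from a binary hand mask using convexity defects."""
    cnts, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not cnts:
        return 0
    hand = max(cnts, key=cv2.contourArea)            # largest blob = hand
    hull = cv2.convexHull(hand, returnPoints=False)
    defects = cv2.convexityDefects(hand, hull)
    if defects is None:
        return 0
    # Each sufficiently deep defect is a gap between two extended fingers.
    gaps = sum(1 for i in range(defects.shape[0])
               if defects[i, 0, 3] / 256.0 > 20)     # depth threshold in px
    return min(gaps + 1, 5)

frame = np.zeros((240, 320), np.uint8)               # stand-in for a camera frame
cv2.circle(frame, (160, 120), 60, 255, -1)           # toy "hand" blob
n = count_fingers(frame)
cmd = COMMANDS.get(n, b"S")
print("fingers:", n, "command:", cmd)
# On the real rig, the byte would go out over the ZigBee serial link, e.g.:
# with serial.Serial("/dev/ttyUSB0", 9600) as link: link.write(cmd)
```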

