Classification of gaze preference decision for human-machine interaction using eye tracking device

Author(s): Sota Shimizu, Takumi Hashizume
2021, Vol 12, pp. 180-189
Author(s): Ata Jedari Golparvar, Murat Kaya Yapici

The study of eye movements and the measurement of the resulting biopotential, referred to as electrooculography (EOG), may find increasing use in activity recognition, context awareness, mobile human–computer and human–machine interaction (HCI/HMI), and personal medical devices, provided that seamless sensing and processing of eye activity can be achieved by a truly wearable, low-cost, and accessible technology. The present study demonstrates an alternative to bulky and expensive camera-based eye tracking systems and reports, for the first time, the development of a graphene textile-based personal assistive device. This self-contained wearable prototype comprises a headband with soft graphene textile electrodes that overcome the limitations of conventional "wet" electrodes, along with miniaturized, portable readout electronics with real-time signal processing capability that can stream data to a remote device over Bluetooth. The potential of graphene textiles in wearable eye tracking and eye-operated remote object interaction is demonstrated by controlling a mouse cursor on screen for typing with a virtual keyboard and by navigating a four-wheeled robot through a maze, all using five different eye motions acquired from a single-channel EOG. Typing speeds of up to six characters per minute without prediction algorithms, and guidance of the robot through a maze with four 180° turns, were achieved with pattern detection accuracies of 100% and 98%, respectively.
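
The abstract does not disclose the authors' detection pipeline, but the idea of mapping a handful of eye motions from a single EOG channel to discrete commands can be made concrete with a minimal sketch. Everything below is an illustrative assumption (sampling rate, threshold, and the five motion-to-command mappings are invented for the example), not the paper's method:

```python
import numpy as np

# Minimal sketch of single-channel EOG command detection (illustrative only).
# Assumes the signal is band-pass filtered and normalized, so saccades and
# blinks appear as signed supra-threshold pulses.

FS = 250        # assumed sampling rate in Hz
THRESH = 0.5    # assumed amplitude threshold after normalization

def detect_pulses(signal, fs=FS, thresh=THRESH, refractory=0.3):
    """Return (time_s, polarity) pairs for supra-threshold pulses,
    ignoring samples within a refractory period of the last pulse."""
    pulses, last_t = [], -np.inf
    for i, x in enumerate(signal):
        t = i / fs
        if abs(x) > thresh and (t - last_t) > refractory:
            pulses.append((t, 1 if x > 0 else -1))
            last_t = t
    return pulses

def classify_motion(pulses):
    """Map the pulse sequence of one segmented window to one of five
    assumed eye-motion commands."""
    polarities = [p for _, p in pulses]
    if polarities == [1]:
        return "look right"
    if polarities == [-1]:
        return "look left"
    if polarities == [1, -1]:
        return "right-left sweep"
    if polarities == [-1, 1]:
        return "left-right sweep"
    if len(polarities) >= 2 and all(p == 1 for p in polarities):
        return "double blink"  # e.g., a 'select' command
    return "no command"
```

In a real system the classified command would then be forwarded over Bluetooth to drive the cursor or the robot; the threshold logic here merely illustrates why a small, fixed vocabulary of eye motions can be detected with very lightweight on-device processing.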


2021, Vol 8
Author(s): Franz A. Van-Horenbeke, Angelika Peer

Recognizing the actions, plans, and goals of a person in an unconstrained environment is a key capability that future robotic systems will need in order to achieve natural human-machine interaction. Indeed, we humans constantly understand and predict the actions and goals of others, which allows us to interact in intuitive and safe ways. While action and plan recognition are tasks that humans perform naturally and with little effort, they remain an unresolved problem for artificial intelligence. The immense variety of possible actions and plans that may be encountered in an unconstrained environment leaves current approaches far from human-like performance. In addition, while very different types of algorithms have been proposed to tackle the problem of activity, plan, and goal (intention) recognition, these tend to focus on only one part of the problem (e.g., action recognition), and techniques that address the problem as a whole have not been explored as thoroughly. This review provides a general view of the problem of activity, plan, and goal recognition as a whole. It describes the problem from both the human and the computational perspective, and proposes a classification of the main types of approaches that have been proposed to address it (logic-based, classical machine learning, deep learning, and brain-inspired), together with a description and comparison of these classes. This general view of the problem can help identify research gaps and may also inspire the development of new approaches that address the problem in a unified way.
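
To make the computational problem statement concrete, one common formulation of goal (intention) recognition is Bayesian inference over a set of candidate goals given observed actions. The sketch below is not drawn from the review; the goals, actions, and probabilities are made up purely to illustrate the formulation:

```python
# Illustrative sketch: goal recognition as Bayesian inference.
# Infer a posterior over goals from a sequence of observed actions,
# assuming conditionally independent observations. All numbers invented.

goals = ["make_coffee", "make_tea"]
prior = {"make_coffee": 0.5, "make_tea": 0.5}

# Assumed likelihoods P(action | goal)
likelihood = {
    "make_coffee": {"boil_water": 0.4, "grind_beans": 0.5, "get_teabag": 0.1},
    "make_tea":    {"boil_water": 0.5, "grind_beans": 0.05, "get_teabag": 0.45},
}

def posterior(observed_actions):
    """Compute P(goal | actions) up to normalization, then normalize."""
    scores = {}
    for g in goals:
        p = prior[g]
        for a in observed_actions:
            p *= likelihood[g].get(a, 1e-6)  # small floor for unseen actions
        scores[g] = p
    z = sum(scores.values())
    return {g: p / z for g, p in scores.items()}

print(posterior(["boil_water", "grind_beans"]))
# Probability mass shifts toward "make_coffee" after observing grinding.
```

The approaches the review classifies (logic-based, classical machine learning, deep learning, brain-inspired) differ chiefly in how they represent the goal space and the action-to-goal likelihoods that this toy example hard-codes.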


2021, pp. 1-9
Author(s): Harshadkumar B. Prajapati, Ankit S. Vyas, Vipul K. Dabhi

Facial expression recognition (FER) has attracted considerable attention from researchers in computer vision because of its usefulness in security, robotics, and HMI (Human-Machine Interaction) systems. We propose a CNN (Convolutional Neural Network) architecture to address FER and evaluate its performance on the JAFFE dataset. We derive a concise CNN architecture for expression classification, with the objective of achieving convincing performance while reducing computational overhead. The proposed CNN model is very compact compared to other state-of-the-art models. We achieved a highest accuracy of 97.10% and an average accuracy of 90.43% over the top 10 runs without applying any pre-processing methods, which demonstrates the effectiveness of our model. Furthermore, we include visualizations of the CNN layers to observe what the network learns.
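
The abstract does not specify the layer configuration, so the following PyTorch sketch only illustrates what a compact CNN for 7-class expression recognition might look like; the input size (64x64 grayscale), channel counts, and dropout rate are assumptions, not the paper's architecture:

```python
import torch
import torch.nn as nn

# Minimal sketch of a compact CNN for 7-class facial expression
# recognition. JAFFE images are grayscale; we assume they are
# resized to 64x64. All layer sizes are illustrative assumptions.

class CompactFERNet(nn.Module):
    def __init__(self, num_classes=7):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 64 -> 32
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32 -> 16
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 16 -> 8
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 8 * 8, 128), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(128, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = CompactFERNet()
dummy = torch.randn(1, 1, 64, 64)   # one grayscale 64x64 image
print(model(dummy).shape)           # torch.Size([1, 7]) - logits per class
```

Keeping the channel counts small and using a single narrow fully connected layer is the usual way such models stay compact relative to large state-of-the-art networks, which matches the paper's stated goal of reducing computational overhead.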

