Real time eye tracking for human computer interfaces

Author(s):  
S. Amarnag ◽  
R.S. Kumaran ◽  
J.N. Gowdy
2014 ◽  
Vol 7 (3) ◽  
Author(s):  
Andreas Bulling ◽  
Roman Bednarik

Latest developments in remote and head-mounted eye tracking and automated eye movement analysis point the way toward unobtrusive eye-based human-computer interfaces that will become pervasively usable in everyday life. We call this new paradigm pervasive eye tracking: continuous eye monitoring and analysis 24/7. Pervasive Eye Tracking and Mobile Eye-Based Interaction (PETMEI) is a workshop series that revolves around the theme of pervasive eye tracking as a trailblazer for pervasive eye-based human-computer interaction and eye-based context-awareness. This special issue is composed of extended versions of the top-scoring papers from the 3rd workshop in the PETMEI series, held in 2013.


2021 ◽  
Author(s):  
Sai Chaitanya Cherukumilli

Human-computer interaction systems have provided new ways for amateurs to compose music using traditional computer peripherals as well as gesture interfaces. Vibro-tactile patterns, a vibrational art form similar to auditory music, can also be composed using human-computer interfaces. This thesis discusses the gesture interface system called Vibro-Motion, which facilitates the real-time composition of vibro-tactile patterns on an existing tactile sensory substitution system called the Emoti-Chair. Vibro-Motion allows users to control the pitch and magnitude of the vibration as well as its position. A usability evaluation of the Vibro-Motion system showed it to be intuitive, comfortable, and enjoyable for the participants.
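To make the described controls concrete, the following is a minimal sketch, assuming a hypothetical gesture tracker that reports normalized hand coordinates; it maps them to the three parameters the abstract mentions (vibration pitch, magnitude, and position across the actuators of a multi-channel tactile display). All names, ranges, and the channel count are illustrative assumptions and are not taken from the thesis.

```python
# Hypothetical sketch: mapping normalized gesture coordinates to
# vibro-tactile parameters (pitch, magnitude, channel position).
# Names, ranges, and channel count are assumptions for illustration only.

from dataclasses import dataclass


@dataclass
class VibroEvent:
    frequency_hz: float   # perceived "pitch" of the vibration
    amplitude: float      # magnitude, 0.0 to 1.0
    channel: int          # which actuator (position) to drive


def gesture_to_vibro(x: float, y: float, z: float,
                     n_channels: int = 8,
                     f_min: float = 40.0,
                     f_max: float = 400.0) -> VibroEvent:
    """Map normalized gesture coordinates (0..1) to a vibro-tactile event.

    x -> actuator position, y -> pitch, z -> magnitude.
    """
    channel = min(int(x * n_channels), n_channels - 1)
    frequency = f_min + y * (f_max - f_min)
    amplitude = max(0.0, min(1.0, z))
    return VibroEvent(frequency, amplitude, channel)


if __name__ == "__main__":
    # Example: hand near the left edge, mid height, half depth.
    print(gesture_to_vibro(0.1, 0.5, 0.5))
```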


2021 ◽  
Author(s):  
Tanoy Debnath ◽  
Md. Mahfuz Reza ◽  
Anichur Rahman ◽  
Shahab S. Band ◽  
Hamid Alinejad-Rokny

Emotion recognition, defined as identifying human emotion, is directly related to fields such as human-computer interfaces, human emotional processing, irrational analysis, medical diagnostics, data-driven animation, human-robot communication, and many more. The purpose of this study is to propose a new facial emotion recognition model based on a convolutional neural network. Our proposed model, “ConvNet”, detects seven specific emotions from image data: anger, disgust, fear, happiness, neutrality, sadness, and surprise. This research focuses on the model’s training accuracy over a small number of epochs, so that the authors can develop a real-time scheme that easily fits the model and senses emotions. Furthermore, this work focuses on a person’s mental or emotional state as reflected in behavioral aspects. To train the CNN model, we use the FER2013 database, and we test the system’s success by identifying facial expressions in real time. ConvNet consists of four convolutional layers together with two fully connected layers. The experimental results show that ConvNet achieves 96% training accuracy, which is much better than existing models. ConvNet also achieved a validation accuracy of 65% to 70% (depending on the dataset used in the experiments), which is higher than the classification accuracy of other existing models. We have made all materials publicly accessible to the research community at: https://github.com/Tanoy004/Emotion-recognition-through-CNN.
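The abstract describes the architecture only at a high level: four convolutional layers followed by two fully connected layers, with seven output classes for FER2013's 48x48 grayscale images. The following is a minimal PyTorch sketch of a network under those constraints; the filter counts, kernel sizes, pooling, and dropout are assumptions, not the authors' exact configuration (their actual implementation is in the linked repository).

```python
# Illustrative sketch of a 4-conv + 2-FC network for 7-class facial
# emotion recognition on 48x48 grayscale input (FER2013-sized).
# Filter counts, kernel sizes, pooling, and dropout are assumptions,
# not the authors' exact design.

import torch
import torch.nn as nn


class ConvNetSketch(nn.Module):
    def __init__(self, num_classes: int = 7):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                       # 48x48 -> 24x24
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                       # 24x24 -> 12x12
            nn.Conv2d(128, 128, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                       # 12x12 -> 6x6
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(128 * 6 * 6, 256), nn.ReLU(),
            nn.Dropout(0.5),
            nn.Linear(256, num_classes),           # logits for 7 emotions
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))


if __name__ == "__main__":
    model = ConvNetSketch()
    dummy = torch.randn(1, 1, 48, 48)   # one grayscale FER2013-sized image
    print(model(dummy).shape)           # torch.Size([1, 7])
```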


2011 ◽  
Author(s):  
Thomas Carroll ◽  
Aaron Rogers ◽  
Dimitrios Charalampidis ◽  
Huimin Chen
