Keystroke Dynamics Patterns While Writing Positive and Negative Opinions

Sensors ◽  
2021 ◽  
Vol 21 (17) ◽  
pp. 5963
Author(s):  
Agata Kołakowska ◽  
Agnieszka Landowska

This paper deals with the analysis of behavioural patterns in human–computer interaction. In the study, keystroke dynamics were analysed while participants were writing positive and negative opinions. A semi-experiment with 50 participants was performed. The participants were asked to recall their most negative and most positive learning experiences (subject and teacher) and to write an opinion about each. Keystroke dynamics were captured, and over 50 diverse features were calculated and checked for their ability to differentiate positive from negative opinions. Moreover, classification of the opinions was performed, yielding accuracy only slightly above the random-guess level. A second classification approach used self-report labels of pleasure and arousal and produced more accurate results. The study confirmed that it is possible to recognize positive and negative opinions from keystroke patterns with above-chance accuracy; however, combining keystroke dynamics with other modalities might produce more accurate results.
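The abstract does not list the specific features used, but typical keystroke-dynamics features include dwell times (how long each key is held) and flight times (the gap between releasing one key and pressing the next). A minimal sketch, assuming keystroke events arrive as `(key, press_time, release_time)` tuples (all names here are illustrative, not from the paper):

```python
def keystroke_features(events):
    """Compute common keystroke-dynamics features: dwell times (hold
    duration per key) and flight times (interval between releasing one
    key and pressing the next), plus overall typing speed."""
    dwell = [rel - press for _, press, rel in events]
    flight = [events[i + 1][1] - events[i][2] for i in range(len(events) - 1)]
    mean = lambda xs: sum(xs) / len(xs) if xs else 0.0
    return {
        "mean_dwell": mean(dwell),
        "mean_flight": mean(flight),
        "keys_per_second": len(events) / (events[-1][2] - events[0][1]),
    }

# Example: three keystrokes with timestamps in seconds
sample = [("h", 0.00, 0.08), ("i", 0.15, 0.22), ("!", 0.40, 0.47)]
feats = keystroke_features(sample)
```

Features of this kind, aggregated per opinion, would then be fed to a standard classifier to separate the positive and negative writing sessions.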

Photonics ◽  
2019 ◽  
Vol 6 (3) ◽  
pp. 90 ◽  
Author(s):  
Bosworth ◽  
Russell ◽  
Jacob

Over the past decade, the Human–Computer Interaction (HCI) Lab at Tufts University has been developing real-time, implicit Brain–Computer Interfaces (BCIs) using functional near-infrared spectroscopy (fNIRS). This paper reviews the work of the lab; we explore how we have used fNIRS to develop BCIs based on a variety of human states, including cognitive workload, multitasking, musical learning applications, and preference detection. Our work indicates that fNIRS is a robust tool for the real-time classification of brain states, which can provide programmers with useful information for developing interfaces that are more intuitive and beneficial for the user than is currently possible with today’s human-input devices (e.g., mouse and keyboard).


2021 ◽  
Author(s):  
Céline Jost ◽  
Brigitte Le Pévédic ◽  
Gérard Uzan

This paper aims to discuss the potential of multisensory technologies for human cognition training. First, it introduces multisensory interactions, focusing on advances in two fields: Human-Computer Interaction and mulsemedia. Second, it presents two different multisensory systems, resulting from the Robadom and StimSense projects, that could be adapted for the community. Then, the paper defines the concept of the scenagram and gives its application scope, boundaries and use cases, offering a first classification of this new concept.


2021 ◽  
Vol 2021 ◽  
pp. 1-10
Author(s):  
Junhao Huang ◽  
Zhicheng Zhang ◽  
Guoping Xie ◽  
Hui He

Noncontact human-computer interaction is of considerable value in wireless sensor networks. This work aims to achieve accurate computer interaction through automatic eye control, using a cheap webcam as the video source. A real-time, accurate human-computer interaction system based on eye state recognition, rough gaze estimation, and tracking is proposed. First, binary classification of the eye state (open or closed) is carried out using an SVM classifier with HOG features of the input eye image. Second, rough appearance-based gaze estimation is implemented with a simple CNN model. In addition, the head pose is estimated to judge whether the user is facing the screen. Based on these recognition results, noncontact mouse control and character input methods are designed and developed to replace the standard mouse and keyboard hardware. The accuracy and speed of the proposed interaction system were evaluated with four subjects. The experimental results show that, with only a common monocular camera, users can achieve gaze estimation and tracking and perform most real-time, precise human-computer interaction functions through automatic eye control.
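The eye-state step pairs a gradient-orientation descriptor (HOG) with a linear decision rule (the trained SVM). A minimal, self-contained sketch of that idea, using a simplified single-histogram descriptor and a stand-in linear classifier (a real pipeline would use per-cell histograms with block normalisation and an SVM fitted on labelled eye patches; all names here are illustrative):

```python
import math

def hog_like_features(img, n_bins=9):
    """Simplified HOG-style descriptor: one L1-normalised histogram of
    unsigned gradient orientations, weighted by gradient magnitude,
    over a grayscale image given as a 2-D list of intensities."""
    h, w = len(img), len(img[0])
    hist = [0.0] * n_bins
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = img[y][x + 1] - img[y][x - 1]   # central differences
            gy = img[y + 1][x] - img[y - 1][x]
            mag = math.hypot(gx, gy)
            ang = math.atan2(gy, gx) % math.pi   # unsigned orientation
            hist[min(int(ang / math.pi * n_bins), n_bins - 1)] += mag
    total = sum(hist) or 1.0
    return [v / total for v in hist]

def eye_state(features, weights, bias=0.0):
    """Linear decision stand-in for the trained SVM: the sign of the
    score picks 'open' or 'closed'."""
    score = sum(f * w for f, w in zip(features, weights)) + bias
    return "open" if score > 0 else "closed"

# Toy 4x4 "eye patch" dominated by a horizontal edge
patch = [[0, 0, 0, 0],
         [0, 0, 0, 0],
         [255, 255, 255, 255],
         [255, 255, 255, 255]]
feat = hog_like_features(patch)
state = eye_state(feat, weights=[0, 0, 0, 0, 1, 0, 0, 0, 0])
```

On this patch every interior gradient points straight down, so all the mass lands in the vertical-orientation bin; the toy weight vector simply keys on that bin.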


Sensors ◽  
2020 ◽  
Vol 20 (9) ◽  
pp. 2599
Author(s):  
Leire Francés-Morcillo ◽  
Paz Morer-Camo ◽  
María Isabel Rodríguez-Ferradas ◽  
Aitor Cazón-Martín

Wearable electronics make it possible to monitor human activity and behavior. Most of these devices have not taken human factors into account, focusing instead on technological issues. This could affect not only human–computer interaction and user experience but also the devices’ use cycle. Firstly, this paper presents a classification of wearable design requirements, developed by combining quantitative and qualitative methodologies. Secondly, we present evaluation procedures based on design methodologies and human–computer interaction measurement tools. This contribution thus aims to provide a roadmap for wearable designers and researchers, helping them find more efficient processes through a classification of design requirements and evaluation tools. These resources save time and effort: designers and researchers do not have to review the literature, nor will it be necessary to carry out exploratory studies to identify requirements or evaluation tools.


2013 ◽  
Vol 8 (1) ◽  
pp. 5-16 ◽  
Author(s):  
Martin Schels ◽  
Markus Kächele ◽  
Michael Glodek ◽  
David Hrabal ◽  
Steffen Walter ◽  
...  

2018 ◽  
Vol 18 ◽  
pp. 02001 ◽  
Author(s):  
Andrei Lukyanchikov ◽  
Alexei Melnikov ◽  
Oleg Lukyanchikov

One of the most accurate and effective ways to recognize gestures is to monitor muscle activity, which accompanies any movement. Electromyography (EMG) is used to record such activity. This article compares SVM classification, perceptron, random trees and probability-density methods applied to the EMG signal. An Arduino Leonardo with a single-channel EMG Shield is used to record the signal. The aim of this paper is to demonstrate the feasibility of a cheap and accessible biointerface based on the EMG signal.
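Whatever classifier is compared, a raw single-channel EMG stream is usually first reduced to per-window features. A minimal sketch of three features commonly fed to EMG classifiers, mean absolute value (MAV), root mean square (RMS) and zero-crossing count (the window size and sample data below are illustrative, not from the paper):

```python
import math

def emg_features(signal, win=50):
    """Split a raw EMG sample list into non-overlapping windows and
    compute per-window features: mean absolute value (MAV), root mean
    square (RMS) and the number of zero crossings."""
    feats = []
    for start in range(0, len(signal) - win + 1, win):
        w = signal[start:start + win]
        mav = sum(abs(s) for s in w) / win
        rms = math.sqrt(sum(s * s for s in w) / win)
        zc = sum(1 for a, b in zip(w, w[1:]) if a * b < 0)
        feats.append((mav, rms, zc))
    return feats

# A crude synthetic recording: a quiet resting baseline followed by a
# higher-amplitude "contraction" burst.
rest = [0.01, -0.01] * 25    # 50 low-amplitude samples
burst = [0.5, -0.5] * 25     # 50 high-amplitude samples
windows = emg_features(rest + burst)
```

The amplitude features cleanly separate the two windows here, which is the kind of input on which the compared classifiers (SVM, perceptron, random trees, probability-density methods) would then be trained.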

