Designing and Evaluating an Interface for the Composition of Vibro-Tactile Patterns Using Gestures

2021 ◽  
Author(s):  
Sai Chaitanya Cherukumilli

Human-computer interaction systems have been providing new ways for amateurs to compose music using traditional computer peripherals as well as gesture interfaces. Vibro-tactile patterns, a vibrational art form analogous to auditory music, can also be composed using human-computer interfaces. This thesis discusses a gesture interface system called Vibro-Motion, which facilitates the real-time composition of vibro-tactile patterns on an existing tactile sensory substitution system called the Emoti-Chair. Vibro-Motion allows users to control the pitch and magnitude of the vibration as well as its position. A usability evaluation of the Vibro-Motion system showed it to be intuitive, comfortable, and enjoyable for the participants.
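
The abstract does not give implementation details, but the core idea of mapping a gesture to the three controllable parameters can be sketched as follows. This is a minimal illustration, not the thesis code; the channel count and frequency range are assumptions.

```python
# Hypothetical sketch: map a normalized 3D gesture position to
# vibro-tactile parameters (channel, frequency, amplitude).
NUM_CHANNELS = 8            # assumed number of actuators on the chair
FREQ_RANGE = (40.0, 400.0)  # assumed usable vibration band in Hz

def gesture_to_vibration(x: float, y: float, z: float):
    """Map a gesture position (each axis normalized to 0..1) to vibration.

    x -> position:  which actuator channel carries the vibration
    y -> pitch:     vibration frequency within FREQ_RANGE
    z -> magnitude: vibration amplitude (0..1)
    """
    channel = min(int(x * NUM_CHANNELS), NUM_CHANNELS - 1)
    low, high = FREQ_RANGE
    # Exponential mapping so equal hand movements feel like equal pitch steps.
    frequency = low * (high / low) ** y
    magnitude = max(0.0, min(1.0, z))
    return channel, frequency, magnitude

print(gesture_to_vibration(0.5, 0.5, 0.8))  # e.g. (4, 126.5, 0.8)
```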


2019 ◽  
Author(s):  
Jamie E. Poole ◽  
Jhon P. C. Casas ◽  
Roberto A. Bolli ◽  
Hermano I. Krebs

Photonics ◽  
2019 ◽  
Vol 6 (3) ◽  
pp. 90 ◽  
Author(s):  
Bosworth ◽  
Russell ◽  
Jacob

Over the past decade, the Human–Computer Interaction (HCI) Lab at Tufts University has been developing real-time, implicit Brain–Computer Interfaces (BCIs) using functional near-infrared spectroscopy (fNIRS). This paper reviews the work of the lab; we explore how we have used fNIRS to develop BCIs based on a variety of human states, including cognitive workload, multitasking, musical learning applications, and preference detection. Our work indicates that fNIRS is a robust tool for real-time classification of brain states, which can provide programmers with useful information to develop interfaces that are more intuitive and beneficial for the user than is currently possible with today's human-input devices (e.g., mouse and keyboard).
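
The general pipeline such systems share can be sketched as below: windowed hemodynamic features are fed to a classifier whose output drives the adaptive interface. The channel count, window size, features, and choice of SVM here are illustrative assumptions, not the lab's actual implementation.

```python
# Hedged sketch of an fNIRS brain-state classification pipeline.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

N_CHANNELS = 16   # assumed number of fNIRS channels
WINDOW = 150      # samples per window (e.g. 15 s at 10 Hz)

def extract_features(window: np.ndarray) -> np.ndarray:
    """Simple per-channel features: mean and slope of the hemodynamic signal."""
    means = window.mean(axis=0)
    slopes = np.polyfit(np.arange(len(window)), window, deg=1)[0]
    return np.concatenate([means, slopes])

# Training data: windows labeled 0 = low workload, 1 = high workload.
# Random data stands in for real recordings in this sketch.
rng = np.random.default_rng(0)
X = np.array([extract_features(rng.normal(size=(WINDOW, N_CHANNELS)))
              for _ in range(40)])
y = np.array([0, 1] * 20)

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
clf.fit(X, y)

# At run time, the interface adapts based on the predicted state.
new_window = rng.normal(size=(WINDOW, N_CHANNELS))
state = clf.predict(extract_features(new_window).reshape(1, -1))
```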


2011 ◽  
Vol 2 (2) ◽  
pp. 1 ◽
Author(s):  
Andreas M. Kunz

The most common working situation is standing or sitting at a table while performing daily business work. Although this situation is very intuitive for the user, computer support is rarely found in this area, mainly because human-computer interfaces that meet users' requirements are missing or inadequate. However, ongoing research in this particular field of human-computer interaction addresses this problem. The following paper shows exemplary research progress and gives an outlook on future research.


2021 ◽  
Author(s):  
Mehdi Rahimi ◽  
Yantao Shen ◽  
Zhiming Liu ◽  
Fang Jiang

This paper presents our recent development of a portable and refreshable text reading and sensory substitution system for the blind or visually impaired (BVI), called Finger-eye. The system mainly consists of an opto-text processing unit and a compact electro-tactile display that delivers text-related electrical signals to the fingertip skin through a wearable, Braille-dot patterned electrode array, thus producing electro-stimulation based Braille touch sensations at the fingertip. To achieve the goal of helping BVI users read any text not written in Braille through this portable system, a Rapid Optical Character Recognition (R-OCR) method is first developed for real-time processing of text information, based on a fisheye imaging device mounted on the finger-wearable electro-tactile display. This allows real-time translation of printed text to electro-Braille along with the natural movement of the user's fingertip, as if reading any Braille display or book. More importantly, an electro-tactile neuro-stimulation feedback mechanism is proposed and incorporated with the R-OCR method, facilitating a new opto-electrotactile feedback based text-line tracking control approach that enables the user's fingertip to follow a text line while reading. Multiple experiments were designed and conducted to test the ability of blindfolded participants to read and follow a text line using the opto-electrotactile-feedback method. The experiments show that, as a result of the opto-electrotactile feedback, users were able to keep their fingertip within a 2 mm distance of the text while scanning a text line. This research is a significant step toward providing BVI users with a portable means to translate any printed text, whether digital or physical, on any surface, into Braille and follow it while reading.
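
The closed-loop line-tracking idea can be sketched as follows: estimate the vertical offset between the fingertip camera's center line and the detected text baseline, then cue the user to correct. All helper names, the calibration constant, and the use of the 2 mm figure as a tolerance are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch of opto-electrotactile text-line tracking.
MM_PER_PIXEL = 0.05   # assumed camera calibration
TOLERANCE_MM = 2.0    # paper reports users stayed within 2 mm of the line

def track_step(frame, ocr, display):
    """One control step: detect text, compute offset, cue or render Braille."""
    boxes = ocr.detect_text_boxes(frame)        # hypothetical R-OCR call
    if not boxes:
        display.cue("no_text")
        return
    # Average baseline height of detected text boxes, in pixels.
    baseline_px = sum(b.center_y for b in boxes) / len(boxes)
    offset_mm = (baseline_px - frame.height / 2) * MM_PER_PIXEL
    if offset_mm > TOLERANCE_MM:
        display.cue("move_up")     # electro-tactile pattern nudging upward
    elif offset_mm < -TOLERANCE_MM:
        display.cue("move_down")
    else:
        # Fingertip is on the line: emit electro-Braille for the text.
        display.render_braille(ocr.recognize(boxes))
```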


Author(s):  
Tanoy Debnath ◽  
Md. Mahfuz Reza ◽  
Anichur Rahman ◽  
Shahab Band ◽  
Hamid Alinejad Rokny

Emotion recognition, defined as identifying human emotions, is directly related to fields such as human-computer interfaces, human emotional processing, irrational analysis, medical diagnostics, data-driven animation, human-robot communication, and many more. The purpose of this study is to propose a new facial emotion recognition model using a convolutional neural network. Our proposed model, “ConvNet”, detects seven specific emotions from image data: anger, disgust, fear, happiness, neutrality, sadness, and surprise. This research focuses on the model's training accuracy over a small number of epochs, with which the authors can develop a real-time schema that can easily fit the model and sense emotions. Furthermore, this work focuses on a person's mental or emotional state as expressed through behavioral aspects. To train the CNN model, we use the FER2013 database, and we test the system's success by identifying facial expressions in real time. ConvNet consists of four convolutional layers together with two fully connected layers. The experimental results show that ConvNet is able to achieve 96% training accuracy, which is much better than current existing models. ConvNet also achieved a validation accuracy of 65% to 70% (considering the different datasets used for the experiments), resulting in higher classification accuracy compared to other existing models. We have made all materials publicly accessible to the research community at: https://github.com/Tanoy004/Emotion-recognition-through-CNN.
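
A model matching the abstract's stated shape (four convolutional layers plus two fully connected layers, seven classes, sized for FER2013's 48x48 grayscale images) might look like the sketch below. Filter counts, pooling, and dropout are assumptions; the authors' repository has the actual model.

```python
# Sketch of a ConvNet-style model: 4 conv layers + 2 fully connected layers.
from tensorflow.keras import layers, models

def build_convnet(num_classes: int = 7) -> models.Model:
    model = models.Sequential([
        layers.Input(shape=(48, 48, 1)),                      # FER2013 images
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.Conv2D(64, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(256, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(512, activation="relu"),                 # FC layer 1
        layers.Dropout(0.5),
        layers.Dense(num_classes, activation="softmax"),      # FC layer 2
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```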


Author(s):  
Rebecca A. Fiebrink ◽  
Baptiste Caramiaux

Machine learning is the capacity of a computational system to learn structure from data in order to make predictions on new data. This chapter draws on music, machine learning, and human-computer interaction to elucidate an understanding of machine learning algorithms as creative tools for music and the sonic arts. It motivates a new understanding of learning algorithms as human-computer interfaces: like other interfaces, learning algorithms can be characterized by the ways their affordances intersect with goals of human users. The chapter also argues that the nature of interaction between users and algorithms impacts the usability and usefulness of those algorithms in profound ways. This human-centred view of machine learning motivates a concluding discussion of what it means to employ machine learning as a creative tool.
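
The chapter's framing of a learning algorithm as an interface can be illustrated with a small interactive loop: the user demonstrates a few gesture-to-sound mappings, the model retrains immediately, and then generalizes in real time during performance. This is a hedged sketch of that interaction pattern; the feature and parameter shapes and the regressor choice are assumptions.

```python
# Sketch: interactive machine learning as a mapping interface.
import numpy as np
from sklearn.neural_network import MLPRegressor

class MappingInterface:
    """Interactive regression from gesture features to synthesis parameters."""

    def __init__(self):
        self.examples_x, self.examples_y = [], []
        self.model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000)

    def demonstrate(self, gesture: np.ndarray, sound_params: np.ndarray):
        # Giving examples is the user's primary act of interaction:
        # each demonstration reshapes the learned mapping.
        self.examples_x.append(gesture)
        self.examples_y.append(sound_params)
        self.model.fit(np.array(self.examples_x), np.array(self.examples_y))

    def perform(self, gesture: np.ndarray) -> np.ndarray:
        # Real-time use: new gestures are mapped through the learned model.
        return self.model.predict(gesture.reshape(1, -1))[0]

ui = MappingInterface()
ui.demonstrate(np.array([0.1, 0.2, 0.3]), np.array([220.0, 0.5]))  # freq, gain
ui.demonstrate(np.array([0.9, 0.8, 0.7]), np.array([880.0, 0.9]))
print(ui.perform(np.array([0.5, 0.5, 0.5])))
```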

