Eye Movement Signal Classification for Developing Human-Computer Interface Using Electrooculogram

2021 ◽  
Vol 2021 ◽  
pp. 1-11
Author(s):  
M. Thilagaraj ◽  
B. Dwarakanath ◽  
S. Ramkumar ◽  
K. Karthikeyan ◽  
A. Prabhu ◽  
...  

Human-computer interfaces (HCI) allow people to control electronic devices, such as computers, mice, wheelchairs, and keyboards, by bypassing the usual biochannel, that is, without relying on motor nervous system signals. These signals permit communication between people and electronically controllable devices. This communication, enabled by HCI, facilitates the lives of paralyzed patients whose cognitive functioning is intact. The major aim of this study was to test the feasibility of a nine-state HCI using modern techniques to overcome the problems faced by the paralyzed. An Analog Digital Instrument T26 with a five-electrode system was used. Twenty subjects participated voluntarily in this study. The recorded signals were preprocessed with a 50 Hz notch filter to remove external interference, and features were extracted by applying the convolution theorem. The extracted features were then classified using an Elman recurrent neural network (ERNN) and a distributed time delay neural network. Average classification accuracies of 90.82% and 90.56% were achieved with the two network models. The accuracy of the classifiers was analyzed by single-trial analysis, and their performance was assessed using the bit transfer rate (BTR) for all twenty subjects to check the feasibility of designing the HCI. The results showed that the ERNN model has greater potential to classify, identify, and recognize EOG signals than the distributed time delay network for most subjects. The control signals generated by the classifiers were applied to navigate assistive devices such as the mouse, keyboard, and wheelchair for disabled people.
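The preprocessing and feature-extraction stages described in the abstract can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code: the sampling rate, notch quality factor, and feature kernel are assumptions not stated in the abstract, and the "convolution theorem" step is shown here as FFT-domain multiplication (convolution in time equals pointwise multiplication of spectra).

```python
import numpy as np
from scipy.signal import iirnotch, filtfilt

FS = 256           # assumed EOG sampling rate in Hz (not given in the abstract)
F0, Q = 50.0, 30.0  # 50 Hz notch, as in the study; Q is an assumed value

def preprocess(eog, fs=FS):
    """Suppress 50 Hz mains interference with a zero-phase notch filter."""
    b, a = iirnotch(F0, Q, fs)
    return filtfilt(b, a, eog)

def convolution_features(eog, kernel):
    """Feature extraction via the convolution theorem: convolve the signal
    with a kernel by multiplying their spectra and transforming back."""
    n = len(eog) + len(kernel) - 1
    spec = np.fft.rfft(eog, n) * np.fft.rfft(kernel, n)
    return np.abs(np.fft.irfft(spec, n))

# Toy example: 2 s of a slow EOG-like component plus 50 Hz mains noise
t = np.arange(0, 2, 1 / FS)
signal = np.sin(2 * np.pi * 0.5 * t) + 0.5 * np.sin(2 * np.pi * 50 * t)
clean = preprocess(signal)
feats = convolution_features(clean, np.hanning(16))  # hypothetical kernel
```

The resulting feature vectors would then be fed to the Elman or distributed time delay network for classification.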

2019 ◽  
Vol 9 (1) ◽  
pp. 30-46
Author(s):  
William J. Gibbs

In this article, the author examines fundamental principles or characteristics (e.g., programmability, modularity, variability) of digital media that make much of today's digital innovation possible. These precepts offer context for understanding the rapid and pervasive innovation currently taking place in society and, more specifically, how this innovation shapes trends in human-computer interfaces. The focus of the article is news-oriented interfaces. The article contrasts traditional information sources, such as newspapers and television news, with digital interfaces. Finally, it makes several observations regarding technology innovation that bear on the interaction experience of news consumers, categorized broadly as rapid innovation, interaction, social interaction, scale, convergence, and the Internet of Things and data.


2019 ◽  
Vol 3 (1) ◽  
pp. 4
Author(s):  
Sharmila Sreetharan ◽  
Michael Schutz

Quality care for patients requires effective communication amongst medical teams. Increasingly, communication is required not only between team members themselves, but between members and the medical devices monitoring and managing patient well-being. Most human–computer interfaces use either auditory or visual displays, and despite significant experimentation, they still elicit well-documented concerns. Curiously, few interfaces explore the benefits of multimodal communication, despite extensive documentation of the brain’s sensitivity to multimodal signals. New approaches built on insights from basic audiovisual integration research hold the potential to improve future human–computer interfaces. In particular, recent discoveries regarding the acoustic property of amplitude envelope illustrate that it can enhance audiovisual integration while also lowering annoyance. Here, we share key insights from recent research with the potential to inform applications related to human–computer interface design. Ultimately, this could lead to a cost-effective way to improve communication in medical contexts—with significant implications for both human health and the burgeoning medical device industry.


1986 ◽  
Vol 30 (14) ◽  
pp. 1349-1353
Author(s):  
Deborah Hix

The goal of this research was to empirically evaluate the usefulness of an interactive environment for developing human-computer interfaces. In particular, it focused on a set of interactive tools, called the Author's Interactive Dialogue Environment (AIDE), for human-computer interface implementation. AIDE is used by an interface design specialist, called a dialogue author, to implement an interface by directly manipulating and defining its objects, rather than by the traditional method of writing source code. In a controlled experiment, a group of dialogue author subjects used AIDE 1.0 to implement a predefined interface, and a group of application programmer subjects implemented the identical interface using programming code. Dialogue author subjects performed the task more than three times faster than the application programmer subjects. This study empirically supports, possibly for the first time, the long-standing claim that interactive tools for interface development can improve productivity and reduce frustration in developing interfaces over traditional programming techniques for interface development.


Author(s):  
I. Scott Mackenzie

One enduring trait of computing systems is the presence of the human operator. At the human-computer interface, the nature of computing has witnessed dramatic transformations—from feeding punched cards into a reader to manipulating 3D virtual objects with an input glove. The technology at our fingertips today exceeds by orders of magnitude that of the behemoth calculators of the 1940s. Yet technology must co-exist with the human interface of the day. Not surprisingly, themes of keeping pace with advances in technology at the human-computer interface—and, hopefully, getting ahead—underlie many chapters in this book. The present chapter is no exception.

Input devices and interaction techniques are the human operator’s baton. They set, constrain, and elicit a spectrum of actions and responses, and in a large way impose a personality on the entire human-machine system. In this chapter, we present and explore the major issues in “input,” focusing on devices, their properties and parameters, and the possibilities for exploiting devices in advanced human-computer interfaces.

To place input devices in perspective, we illustrate a classical human-factors interpretation of the human-machine interface (e.g., Chapanis, 1965, p. 20). Figure 11-1 simplifies the human and the machine to three components each. The internal states of each interact in a closed-loop system through controls and displays (the machine interface) and motor-sensory behaviour (the human interface). The terms “input” and “output” are, by convention, defined with respect to the machine; thus input devices are inputs to the machine, controlled or manipulated by human “outputs.” Traditionally, human outputs are our limbs—the hands, arms, legs, feet, or head—but speech and eye motions can also act as human output. Other human output channels include breath and electrical body signals (important for disabled users).

Interaction takes place at the interface (dashed line in Figure 11-1) through an output channel—displays stimulating human senses—and the input channel. In the present chapter we are primarily interested in controls, or input devices; but, by necessity, the other components in Figure 11-1 will to some extent enter the discussion.


2021 ◽  
Vol 18 (3) ◽  
pp. 1-22
Author(s):  
Charlotte M. Reed ◽  
Hong Z. Tan ◽  
Yang Jiao ◽  
Zachary D. Perez ◽  
E. Courtenay Wilson

Stand-alone devices for tactile speech reception serve a need as communication aids for persons with profound sensory impairments, as well as in applications such as human-computer interfaces and remote communication when the normal auditory and visual channels are compromised or overloaded. The current research is concerned with perceptual evaluations of a phoneme-based tactile speech communication device in which a unique tactile code was assigned to each of the 24 consonants and 15 vowels of English. The tactile phonemic display was conveyed through an array of 24 tactors that stimulated the dorsal and ventral surfaces of the forearm. Experiments examined the recognition of individual words as a function of the inter-phoneme interval (Study 1) and of two-word phrases as a function of the inter-word interval (Study 2). Following an average training period of 4.3 hours on phoneme and word recognition tasks, mean scores for the recognition of individual words in Study 1 ranged from 87.7% correct to 74.3% correct as the inter-phoneme interval decreased from 300 to 0 ms. In Study 2, following an average of 2.5 hours of training on the two-word phrase task, both words in the phrase were identified with an accuracy of 75% correct using an inter-word interval of 1 s and an inter-phoneme interval of 150 ms. Effective transmission rates achieved on this task were estimated to be on the order of 30 to 35 words/min.
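A rough sense of how such a transmission rate arises can be sketched from the timing parameters. Only the 150 ms inter-phoneme and 1 s inter-word intervals come from the study; the per-phoneme code duration and the average word length below are hypothetical assumptions for illustration, which is why this estimate lands somewhat below the reported 30 to 35 words/min (shorter codes or shorter words raise it).

```python
# Back-of-the-envelope estimate of tactile speech transmission rate.
PHONEME_CODE_S = 0.25    # assumed duration of one tactile phoneme code
INTER_PHONEME_S = 0.150  # from Study 2
INTER_WORD_S = 1.0       # from Study 2
PHONEMES_PER_WORD = 4    # assumed rough average for English words

def words_per_minute(code_s, ipi_s, iwi_s, n_phonemes):
    # time to present one word, plus the gap before the next word begins
    word_s = n_phonemes * code_s + (n_phonemes - 1) * ipi_s + iwi_s
    return 60.0 / word_s

rate = words_per_minute(PHONEME_CODE_S, INTER_PHONEME_S,
                        INTER_WORD_S, PHONEMES_PER_WORD)
```

With these assumed values the estimate comes out near 25 words/min, the same order of magnitude as the reported rates.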

