Innovation, Programmable Media and the Human Computer Interface

2019 ◽  
Vol 9 (1) ◽  
pp. 30-46
Author(s):  
William J. Gibbs

In this article, the author examines fundamental principles or characteristics (e.g., programmability, modularity, variability) of digital media that make much of today's digital innovation possible. These precepts offer context for understanding the rapid and pervasive innovation currently taking place in society and, more specifically, how this innovation impacts trends in human-computer interfaces. The article focuses on news-oriented interfaces, contrasting traditional informational sources such as newspapers and television news with digital interfaces. Finally, it makes several observations regarding technology innovation that bear on the interaction experience of news consumers, categorized broadly as rapid innovation, interaction, social interaction, scale, convergence, and the Internet of Things and data.

2019 ◽  
Vol 3 (1) ◽  
pp. 4
Author(s):  
Sharmila Sreetharan ◽  
Michael Schutz

Quality care for patients requires effective communication amongst medical teams. Increasingly, communication is required not only between team members themselves, but between members and the medical devices monitoring and managing patient well-being. Most human–computer interfaces use either auditory or visual displays, and despite significant experimentation, they still elicit well-documented concerns. Curiously, few interfaces explore the benefits of multimodal communication, despite extensive documentation of the brain’s sensitivity to multimodal signals. New approaches built on insights from basic audiovisual integration research hold the potential to improve future human–computer interfaces. In particular, recent discoveries regarding the acoustic property of amplitude envelope illustrate that it can enhance audiovisual integration while also lowering annoyance. Here, we share key insights from recent research with the potential to inform applications related to human–computer interface design. Ultimately, this could lead to a cost-effective way to improve communication in medical contexts—with significant implications for both human health and the burgeoning medical device industry.
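The acoustic property at issue, amplitude envelope, is easy to illustrate: the same carrier tone can be given a flat envelope (typical of synthetic alarm tones) or a percussive, exponentially decaying one, the shape this line of research associates with better audiovisual integration and lower annoyance. A minimal sketch follows; all parameter values (sample rate, frequency, decay constant) are illustrative choices, not values taken from the paper.

```python
import numpy as np

SR = 44100       # audio sample rate (Hz)
DUR = 0.5        # tone duration (s)
FREQ = 1000.0    # carrier frequency (Hz); an illustrative choice

t = np.arange(int(SR * DUR)) / SR
carrier = np.sin(2 * np.pi * FREQ * t)

# Flat envelope: abrupt onset and offset, typical of synthetic alarm tones.
flat_tone = carrier.copy()

# Percussive envelope: exponential decay, like a struck marimba bar.
# The decay rate is an illustrative assumption.
percussive_tone = carrier * np.exp(-8.0 * t)
```

Played back, the two tones share pitch and duration and differ only in envelope, which is what makes the property attractive for redesigning device alerts without changing their meaning.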


1986 ◽  
Vol 30 (14) ◽  
pp. 1349-1353
Author(s):  
Deborah Hix

The goal of this research was to empirically evaluate the usefulness of an interactive environment for developing human-computer interfaces. In particular, it focused on a set of interactive tools, called the Author's Interactive Dialogue Environment (AIDE), for human-computer interface implementation. AIDE is used by an interface design specialist, called a dialogue author, to implement an interface by directly manipulating and defining its objects, rather than by the traditional method of writing source code. In a controlled experiment, a group of dialogue author subjects used AIDE 1.0 to implement a predefined interface, and a group of application programmer subjects implemented the identical interface using programming code. Dialogue author subjects performed the task more than three times faster than the application programmer subjects. This study empirically supports, possibly for the first time, the long-standing claim that interactive tools for interface development can improve productivity and reduce frustration compared with traditional programming techniques.


Author(s):  
I. Scott Mackenzie

One enduring trait of computing systems is the presence of the human operator. At the human-computer interface, the nature of computing has witnessed dramatic transformations—from feeding punched cards into a reader to manipulating 3D virtual objects with an input glove. The technology at our fingertips today exceeds by orders of magnitude that of the behemoth calculators of the 1940s. Yet technology must co-exist with the human interface of the day. Not surprisingly, themes on keeping pace with advances in technology in the human-computer interface and, hopefully, getting ahead, underlie many chapters in this book. The present chapter is no exception. Input devices and interaction techniques are the human operator’s baton. They set, constrain, and elicit a spectrum of actions and responses, and in a large way inject a personality into the entire human-machine system. In this chapter, we will present and explore the major issues in “input,” focusing on devices, their properties and parameters, and the possibilities for exploiting devices in advanced human-computer interfaces. To place input devices in perspective, we illustrate a classical human-factors interpretation of the human-machine interface (e.g., Chapanis, 1965, p. 20). Figure 11-1 simplifies the human and machine to three components each. The internal states of each interact in a closed-loop system through controls and displays (the machine interface) and motor-sensory behaviour (the human interface). The terms “input” and “output” are, by convention, with respect to the machine; so input devices are inputs to the machine controlled or manipulated by human “outputs.” Traditionally human outputs are our limbs—the hands, arms, legs, feet, or head—but speech and eye motions can also act as human output. Some other human output channels are breath and electrical body signals (important for disabled users). 
Interaction takes place at the interface (dashed line in Figure 11-1) through an output channel—displays stimulating human senses—and the input channel. In the present chapter, we are primarily interested in controls, or input devices; but, by necessity, the other components in Figure 11-1 will to some extent participate in our discussion.


2021 ◽  
Vol 5 (10) ◽  
pp. 64
Author(s):  
Miguel Angel Garcia-Ruiz ◽  
Bill Kapralos ◽  
Genaro Rebolledo-Mendez

This paper describes an overview of olfactory displays (human–computer interfaces that generate and diffuse an odor to a user to stimulate their sense of smell) that have been proposed and researched for supporting education and training. Past research has shown that olfaction (the sense of smell) can support memorization of information, stimulate information recall, and help immerse learners and trainees into educational virtual environments, as well as complement and/or supplement other human sensory channels for learning. This paper begins with an introduction to olfaction and olfactory displays, and a review of techniques for storing, generating and diffusing odors at the computer interface. The paper proceeds with a discussion on educational theories that support olfactory displays for education and training, and a literature review on olfactory displays that support learning and training. Finally, the paper summarizes the advantages and challenges regarding the development and application of olfactory displays for education and training.


2022 ◽  
pp. 165-182
Author(s):  
Emma Yann Zhang

With advances in HCI and AI, and the increasing prevalence of commercial social robots and chatbots, humans are communicating with computer interfaces for various applications in a wide range of settings. Kissenger is designed to bring HCI to the masses. To investigate the role of robotic kissing using the Kissenger device in HCI, the authors conducted a modified version of the imitation game described by Alan Turing that incorporates the kissing machine. Results show that robotic kissing has no effect on the winning rates of the male and female players during human-human communication, but it increases the winning rate of the female player when a chatbot is involved in the game.


Proceedings ◽  
2018 ◽  
Vol 2 (18) ◽  
pp. 1179 ◽  
Author(s):  
Francisco Laport ◽  
Francisco J. Vazquez-Araujo ◽  
Paula M. Castro ◽  
Adriana Dapena

A brain-computer interface for controlling elements commonly used at home is presented in this paper. It includes the electroencephalography device needed to acquire signals associated with brain activity, the algorithms for artefact reduction and event classification, and the communication protocol.
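The abstract leaves the event-classification algorithm unspecified. As a purely illustrative sketch (the sampling rate, band limits, threshold, and the `toggle_light` command are all assumptions, not details from the paper), classification for home control could be as simple as thresholding relative alpha-band (8–13 Hz) power, a signal commonly used in EEG-based switches:

```python
import numpy as np

FS = 250          # EEG sampling rate (Hz); illustrative assumption
WIN = FS * 2      # 2-second analysis window

def alpha_power(window):
    """Relative power in the 8-13 Hz alpha band of one EEG window."""
    spectrum = np.abs(np.fft.rfft(window)) ** 2
    freqs = np.fft.rfftfreq(len(window), d=1.0 / FS)
    band = (freqs >= 8) & (freqs <= 13)
    return spectrum[band].sum() / spectrum.sum()

def classify(window, threshold=0.4):
    """Map alpha power to a binary home-control command."""
    return "toggle_light" if alpha_power(window) > threshold else "idle"
```

A real system would precede this step with the artefact reduction the paper describes and send the resulting command over the communication protocol to the home device.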


Author(s):  
Yujia Peng

As a new way of implementing human-computer interfaces, brain-computer interfaces (BCI) have dramatically changed user experiences and have broad applications in cyber behavior research. This chapter aims to provide an overall picture of BCI science and its role in cyberpsychology. The chapter starts with an introduction of the concept, components, and the history and development of BCI. It is then followed by an overview of neuroimaging technologies and signals commonly used in BCI. Then, different applications of BCI to both clinical and general populations are summarized in connection with cyberpsychology. Specifically, applications include communication, rehabilitation, entertainment, learning, marketing, and authentication. The chapter concludes with the future directions of BCI.


2021 ◽  
Vol 2021 ◽  
pp. 1-11
Author(s):  
M. Thilagaraj ◽  
B. Dwarakanath ◽  
S. Ramkumar ◽  
K. Karthikeyan ◽  
A. Prabhu ◽  
...  

Human-computer interfaces (HCI) allow people to control electronic devices, such as computers, mice, wheelchairs, and keyboards, through a biosignal channel, without relying on motor nervous system signals. These signals permit communication between people and electronically controllable devices. This communication, enabled by HCI, eases the lives of paralyzed patients whose cognitive functioning is unimpaired. The main aim of this study is to test the feasibility of nine states of HCI by using modern techniques to overcome the problems faced by the paralyzed. An Analog Digital Instrument T26 with a five-electrode system was used in this method. Twenty subjects participated voluntarily in this study. The extracted signals were preprocessed by applying a 50 Hz notch filter to remove external interference, and features were extracted by applying the convolution theorem. The extracted features were then classified using an Elman recurrent neural network and a distributed time delay neural network, which achieved average classification accuracies of 90.82% and 90.56%, respectively. The accuracy of the classifiers was analyzed by single-trial analysis, and their performance was assessed using the bit transfer rate (BTR) for the twenty subjects to check the feasibility of designing the HCI. The results showed that the ERNN model has greater potential to classify, identify, and recognize the EOG signal than the distributed time delay network for most subjects. The control signals generated by the classifiers were applied to navigate assistive devices such as a mouse, keyboard, and wheelchair for disabled people.
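The preprocessing steps described (50 Hz notch filtering, then feature extraction via the convolution theorem) can be sketched in a few lines of numpy. The sampling rate below is an assumed value, and the FFT-bin notch is a deliberate simplification of the notch filter the study actually applied; the second function shows the convolution theorem itself, i.e., that convolution in time equals multiplication in frequency.

```python
import numpy as np

FS = 250  # sampling rate (Hz); illustrative assumption, not stated in the paper

def notch_50hz(signal, width=2.0):
    """Suppress 50 Hz mains interference by zeroing FFT bins near 50 Hz."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / FS)
    spectrum[np.abs(freqs - 50.0) <= width] = 0.0
    return np.fft.irfft(spectrum, n=len(signal))

def convolve_fft(x, h):
    """Linear convolution via the convolution theorem:
    multiply the spectra, then transform back."""
    n = len(x) + len(h) - 1
    return np.fft.irfft(np.fft.rfft(x, n) * np.fft.rfft(h, n), n=n)
```

The cleaned, feature-extracted windows would then be fed to the Elman or distributed time delay network for classification into the nine HCI states.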


