A Conceptual Framework for Interoperability of Mobile User Interfaces with Ambient Computing Environments

2010 ◽  
Vol 2 (3) ◽  
pp. 58-73 ◽  
Author(s):  
Andreas Lorenz

The use of mobile and hand-held devices is a desirable option for implementing user interaction with remote services from a distance, whereby the user should be able to select the input device according to personal preferences, capabilities, and the availability of interaction devices. Because of the heterogeneity of available devices and interaction styles, interoperability needs particular attention from the developer. This paper describes the design of a general solution that enables mobile devices to control services on remote hosts. The applied approach builds on the idea of separating the user interface from the application logic, leading to the definition of virtual or logical input devices that are physically separated from the services they control.
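
The separation the abstract describes, a logical input device on the mobile side driving a service on a remote host, lends itself to a short illustration. The following Python sketch is a hypothetical rendering of that general idea, not the paper's actual architecture; every class and method name here is invented.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass, field


@dataclass
class InputEvent:
    """Device-independent event emitted by a logical input device."""
    action: str                          # e.g. "select", "scroll", "point"
    payload: dict = field(default_factory=dict)


class LogicalInputDevice(ABC):
    """Uniform interface that hides the concrete device (phone, remote
    control, gesture sensor) from the service it controls."""

    @abstractmethod
    def next_event(self) -> InputEvent:
        ...


class PhoneTouchAdapter(LogicalInputDevice):
    """Hypothetical adapter that maps touch input to logical events."""

    def next_event(self) -> InputEvent:
        return InputEvent("select", {"x": 10, "y": 20})


class RemoteService:
    """Runs on the remote host and consumes normalized events only,
    so any LogicalInputDevice implementation can drive it."""

    def handle(self, event: InputEvent) -> None:
        print(f"service received {event.action}: {event.payload}")


device: LogicalInputDevice = PhoneTouchAdapter()
RemoteService().handle(device.next_event())
```

Swapping PhoneTouchAdapter for any other adapter leaves RemoteService untouched, which is exactly the kind of interoperability the separation is meant to buy.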


Author(s):  
Richard Pekelney ◽  
Robin Chu

The rapid growth of graphical user interfaces on personal computers has led to the mouse input device playing a prominent and central role in the control of computer applications. As mouse use increases, design and comfort issues are becoming increasingly critical. This report describes the ergonomic design criteria and resulting product attributes of a commercially successful computer mouse input device. Although well-founded ergonomic principles were incorporated into the design criteria, very little ergonomic research has been published on the design of mice. There is a need for additional research on the ergonomics of computer mouse input devices.


Author(s):  
John Fulcher

Much has changed in computer interfacing since the early days of computing—or has it? Admittedly, gone are the days of punched cards and/or paper tape readers as input devices; likewise, monitors (displays) have superseded printers as the primary output device. Nevertheless, the QWERTY keyboard shows little sign of falling into disuse—this is essentially the same input device as those used on the earliest (electromechanical) teletypewriters (TTYs), in which the “worst” key layout was deliberately chosen to slow down fast typists. The three major advances since the 1950s have been (1) the rise of low-cost (commodity off-the-shelf) CRT monitors in the 1960s (and, in more recent times, LCD ones), (2) the replacement of (text-based) command-line interfaces with graphical user interfaces in the 1980s, and (3) the rise of the Internet/World Wide Web during the 1990s. In recent times, while speech recognition (and synthesis) has made some inroads (e.g., McTear, 2002; O’Shaughnessy, 2003), the QWERTY keyboard and mouse remain the dominant input modalities.


2001 ◽  
Vol 10 (1) ◽  
pp. 96-108 ◽  
Author(s):  
Doug A. Bowman ◽  
Ernst Kruijff ◽  
Joseph J. LaViola ◽  
Ivan Poupyrev

Three-dimensional user interface design is a critical component of any virtual environment (VE) application. In this paper, we present a broad overview of 3-D interaction and user interfaces. We discuss the effect of common VE hardware devices on user interaction, as well as interaction techniques for generic 3-D tasks and the use of traditional 2-D interaction styles in 3-D environments. We divide most user-interaction tasks into three categories: navigation, selection/manipulation, and system control. Throughout the paper, our focus is on presenting not only the available techniques but also practical guidelines for 3-D interaction design and widely held myths. Finally, we briefly discuss two approaches to 3-D interaction design and some example applications with complex 3-D interaction requirements. We also present an annotated online bibliography as a reference companion to this article.
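
As a rough illustration of the paper's three task categories, the hypothetical Python sketch below tags a few well-known 3-D interaction techniques with the category they serve; the technique names are common examples from the 3-D UI literature, and the mapping is ours, not the authors'.

```python
from enum import Enum, auto


class VETask(Enum):
    """The three broad categories of user-interaction tasks."""
    NAVIGATION = auto()                 # travel and wayfinding
    SELECTION_MANIPULATION = auto()     # picking and moving objects
    SYSTEM_CONTROL = auto()             # menus, mode changes, commands


# Illustrative mapping of familiar techniques to task categories.
TECHNIQUES = {
    "gaze-directed steering": VETask.NAVIGATION,
    "ray casting": VETask.SELECTION_MANIPULATION,
    "arm extension (Go-Go)": VETask.SELECTION_MANIPULATION,
    "floating 3-D menu": VETask.SYSTEM_CONTROL,
}

for name, task in TECHNIQUES.items():
    print(f"{name:24s} -> {task.name}")
```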


2020 ◽  
Vol 6 (3) ◽  
pp. 127-130
Author(s):  
Max B. Schäfer ◽  
Kent W. Stewart ◽  
Nico Lösch ◽  
Peter P. Pott

Access to systems for robot-assisted surgery is limited due to high costs. To enable widespread use, numerous issues have to be addressed to improve and/or simplify their components. Current systems commonly use universal linkage-based input devices, and only a few application-oriented and specialized designs are used. A versatile virtual reality controller is proposed as an alternative input device for the control of a seven-degree-of-freedom articulated robotic arm. The real-time capabilities of the setup, replicating a system for robot-assisted teleoperated surgery, are investigated to assess suitability. Image-based assessment showed a considerable system latency of 81.7 ± 27.7 ms. However, due to its versatility, the virtual reality controller is a promising alternative to current input devices for research on medical telemanipulation systems.
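
For readers reproducing such measurements, the sketch below shows one plausible way to turn image-based frame offsets into a mean ± standard deviation latency figure. The frame rate and the offsets are invented for illustration and are not the authors' data.

```python
import statistics

# Assumed camera frame rate; the paper's actual capture setup may differ.
FRAME_INTERVAL_MS = 1000 / 120            # 120 fps -> 8.33 ms per frame

# Hypothetical per-trial offsets (in frames) between controller motion
# and the first visible motion of the robotic arm in the video.
frame_offsets = [9, 13, 7, 11, 12, 8]

latencies_ms = [n * FRAME_INTERVAL_MS for n in frame_offsets]
mean = statistics.mean(latencies_ms)
stdev = statistics.stdev(latencies_ms)
print(f"system latency: {mean:.1f} ± {stdev:.1f} ms")
```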


Author(s):  
Luis A. Leiva ◽  
Yunfei Xue ◽  
Avya Bansal ◽  
Hamed R. Tavakoli ◽  
Tuğçe Köroğlu ◽  
...  

Author(s):  
Shannon K. T. Bailey ◽  
Daphne E. Whitmer ◽  
Bradford L. Schroeder ◽  
Valerie K. Sims

Human-computer interfaces are changing to meet the evolving needs of users and overcome limitations of previous generations of computer systems. The current state of computers consists largely of graphical user interfaces (GUI) that incorporate windows, icons, menus, and pointers (WIMPs) as visual representations of computer interactions controlled via user input on a mouse and keyboard. Although this model of interface has dominated human-computer interaction for decades, WIMPs require an extra step between the user’s intent and the computer action, both imposing limitations on the interaction and introducing cognitive demands (van Dam, 1997). Alternatively, natural user interfaces (NUI) employ input methods such as speech, touch, and gesture commands. With NUIs, users can interact directly with the computer without using an intermediary device (e.g., mouse, keyboard). Using the body as an input device may be more “natural” because it allows the user to apply existing knowledge of how to interact with the world (Roupé, Bosch-Sijtsema, & Johansson, 2014). To utilize the potential of natural interfaces, research must first determine what interactions can be considered natural. For the purpose of this paper, we focus on the naturalness of gesture-based interfaces.

The purpose of this study was to determine how people perform natural gesture-based computer actions. To answer this question, we first narrowed down potential gestures that would be considered natural for an action. In a previous study, participants (n = 17) were asked how they would gesture to interact with a computer to complete a series of actions. After narrowing down the potential natural gestures by calculating the most frequently performed gestures for each action, we asked participants (n = 188) to rate the naturalness of the gestures in the current study. Participants each watched 26 videos of gestures (3-5 seconds each) and were asked how natural or arbitrary they interpreted each gesture for the series of computer commands (e.g., move object left, shrink object, select object, etc.). The gestures in these videos included the 17 gestures that were most often performed in the previous study, in which participants were asked what gesture they would naturally use to complete the computer actions. Nine gestures created arbitrarily were also included as a comparison to the natural gestures.

By analyzing the ratings on a continuum from “Completely Arbitrary” to “Completely Natural,” we found that the natural gestures people produced in the first study were also interpreted as the intended action by this separate sample of participants. All the gestures that were rated as either “Mostly Natural” or “Completely Natural” by participants corresponded to how the object manipulation would be performed physically. For example, the gesture video that depicts a fist closing was rated as “natural” by participants for the action of “selecting an object.” All of the gestures that were created arbitrarily were interpreted as “arbitrary” when they did not correspond to the physical action. Determining how people naturally gesture computer commands and how people interpret those gestures is useful because it can inform the development of NUIs and contributes to the literature on what makes gestures seem “natural.”
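
The two-step procedure the abstract describes, eliciting gestures and then rating the most frequent ones, can be summarized in a short sketch. The data below are invented stand-ins, and the 1-to-5 naturalness scale is an assumption; the study itself reports a continuum from “Completely Arbitrary” to “Completely Natural.”

```python
from collections import Counter
from statistics import mean

# Step 1 (elicitation, hypothetical data): (action, gesture) pairs
# collected by asking participants how they would gesture each action.
elicited = [
    ("select object", "close fist"), ("select object", "close fist"),
    ("select object", "tap"), ("move object left", "swipe left"),
    ("move object left", "swipe left"), ("move object left", "point left"),
]

counts: dict[str, Counter] = {}
for action, gesture in elicited:
    counts.setdefault(action, Counter())[gesture] += 1

# Keep the most frequently produced gesture per action as the candidate.
candidates = {a: c.most_common(1)[0][0] for a, c in counts.items()}
print(candidates)

# Step 2 (rating, hypothetical data): an independent sample rates each
# candidate gesture, here on an assumed 1 (arbitrary) to 5 (natural) scale.
ratings = {"close fist": [5, 4, 5, 4], "swipe left": [4, 5, 5, 5]}
for gesture, rs in ratings.items():
    print(f"{gesture}: mean naturalness {mean(rs):.2f}")
```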


1985 ◽  
Vol 29 (5) ◽  
pp. 470-474 ◽  
Author(s):  
Paul Green ◽  
Lisa Wei-Haas

The Wizard of Oz technique is an efficient way to examine user interaction with computers and facilitate rapid iterative development of dialog wording and logic. The technique requires two machines linked together, one for the subject and one for the experimenter. In this implementation, the experimenter (the “Wizard”), pretending to be a computer, types in complete replies to user queries or presses function keys to which common messages have been assigned (e.g., F1 = “Help is not available”). The software automatically records the dialog and its timing. This paper provides a detailed description of the first implementation of the Oz paradigm for the IBM Personal Computer. It also includes application guidelines, information which is currently missing from the literature.
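
A minimal Python sketch of the setup the abstract describes follows. It collapses the two linked machines into one console loop for brevity; the key bindings and messages are illustrative, with only the F1 example taken from the text above.

```python
import time

# Canned replies bound to "function keys" (typed here as plain tokens).
# F1 mirrors the example above; F2 is an invented placeholder.
CANNED = {
    "F1": "Help is not available",
    "F2": "Please rephrase your request",
}

transcript = []  # (seconds elapsed, speaker, text)


def record(t0: float, speaker: str, text: str) -> None:
    """Log each turn with its timing, as the original software did."""
    transcript.append((round(time.monotonic() - t0, 2), speaker, text))


def wizard_session() -> None:
    t0 = time.monotonic()
    while True:
        query = input("subject> ")
        if query.lower() in ("quit", "exit"):
            break
        record(t0, "subject", query)
        # The hidden experimenter types a full reply or a function-key token.
        raw = input("wizard (reply or F1/F2)> ")
        reply = CANNED.get(raw.upper(), raw)
        record(t0, "wizard", reply)
        print(reply)  # what the subject's screen would show
    for turn in transcript:
        print(turn)


if __name__ == "__main__":
    wizard_session()
```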

