Open user interface standards - towards coherent, task-oriented and scalable user interfaces in home environments

Author(s):  
G. Zimmermann


Author(s):  
Maura Mengoni ◽  
Lorenzo Cavalieri ◽  
Margherita Peruzzini ◽  
Damiano Raponi

Accessibility to graphical user interfaces for visually impaired persons is generally enabled through systems that reproduce the lexical structure of the user interface in a non-visual form, mainly employing 3D audio output techniques. Two main critical issues have been identified: (i) most interfaces address the needs and abilities of sighted users, so the reproduction is merely a translation from one language to another; (ii) blind users are generally not involved in the development stage because of the cost of prototyping. The present work proposes an interactive user interface to control a multi-sensory shower that is accessible to both sighted and blind users and can adapt its control knob to reproduce Braille text. This function is realized by integrating an electrotactile feedback device and adopting a soft-touch finish to better stimulate tactile sensation. Haptic technologies have been exploited to create a high-fidelity virtual prototype for assessing individual end users' responses during the user interface design process. The paper illustrates the designed interface, which assists blind users in home environments, and the virtual prototyping technique adopted to address the above-mentioned issues.
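To make the Braille-reproduction step concrete: a knob of this kind has to translate text into 6-dot cell patterns before driving its electrotactile pins. The sketch below is only an illustration, not code from the paper; the dot patterns follow standard 6-dot Braille numbering, but the raisePins device call is hypothetical.

```javascript
// Minimal sketch of the text-to-Braille step an adaptive knob would need.
// Dot numbering follows standard 6-dot Braille (1-3 left column, 4-6 right).
// The raisePins() call is hypothetical; the paper does not describe the API.
const BRAILLE_DOTS = {
  a: [1],       b: [1, 2],    c: [1, 4],
  d: [1, 4, 5], e: [1, 5],    h: [1, 2, 5],
  o: [1, 3, 5], t: [2, 3, 4, 5],
};

function textToBraille(text) {
  return [...text.toLowerCase()].map((ch) => BRAILLE_DOTS[ch] ?? []);
}

// Example: drive a (hypothetical) electrotactile pin array, one cell at a time.
for (const cell of textToBraille("hot")) {
  console.log("raise pins:", cell); // device.raisePins(cell) on real hardware
}
```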


Information ◽  
2021 ◽  
Vol 12 (4) ◽  
pp. 162
Author(s):  
Soyeon Kim ◽  
René van Egmond ◽  
Riender Happee

In automated driving, the user interface plays an essential role in guiding transitions between automated and manual driving. This literature review identified 25 studies that explicitly examined the effectiveness of user interfaces in automated driving. Our main selection criterion was how the user interface (UI) affected take-over performance at higher automation levels that allow drivers to take their eyes off the road (SAE Levels 3 and 4). We categorized UI factors from an automated vehicle-related information perspective. Short take-over times are consistently associated with take-over requests (TORs) initiated via the auditory modality with high urgency levels. On the other hand, take-over requests displayed directly on non-driving-related task devices or via augmented reality do not affect take-over time. Additional explanations of the take-over situation, information about the surroundings and the vehicle while driving, and guidance during the take-over were found to improve situational awareness. Hence, we conclude that advanced user interfaces can enhance the safety and acceptance of automated driving. Most studies showed positive effects of advanced UI, but a number of studies showed no significant benefits, and a few studies showed negative effects of advanced UI, which may be associated with information overload. The occurrence of both positive and negative results for similar UI concepts in different studies highlights the need for systematic UI testing across driving conditions and driver characteristics. Based on our findings, we propose that future UI studies of automated vehicles focus on trust calibration and on enhancing situation awareness in various scenarios.


Author(s):  
Randall Spain ◽  
Jason Saville ◽  
Barry Lui ◽  
Donia Slack ◽  
Edward Hill ◽  
...  

Because advances in broadband capabilities will soon allow first responders to access and use many forms of data when responding to emergencies, it is becoming critically important to design heads-up displays that present first responders with information in a manner that does not induce extraneous mental workload or cause undue interaction errors. Virtual reality (VR) offers a unique medium for envisioning and testing user interface concepts in a realistic and controlled environment. In this paper, we describe a VR-based emergency response scenario designed to support user experience research on the efficacy of intelligent user interfaces for firefighters. We describe the results of a usability test that captured firefighters' feedback and reactions to the VR scenario and to the prototype intelligent user interface that presented them with task-critical information through the VR headset. The paper concludes with lessons learned from our development process and a discussion of plans for future research.


2021 ◽  
Vol 5 (EICS) ◽  
pp. 1-29
Author(s):  
Arthur Sluÿters ◽  
Jean Vanderdonckt ◽  
Radu-Daniel Vatavu

Intra-platform plasticity typically assumes that the display of a computing platform remains fixed and rigid during interactions, in contrast to reconfigurable displays, which can change form depending on the context of use. In this paper, we present a model-based approach for designing and deploying graphical user interfaces that support intra-platform plasticity for reconfigurable displays. We instantiate the model for E3Screen, a new device that expands a conventional laptop with two slidable, rotatable, and foldable lateral displays, enabling slidable user interfaces. Based on a UML class diagram as a domain model and a SCRUD list as a task model, we define an abstract user interface as interaction units with a corresponding master-detail design pattern. We then map the abstract user interface to a concrete user interface by applying rules for reconfiguration, concrete interaction, unit allocation, and widget selection, and implement it in JavaScript. In a first experiment, we determine the display configurations most preferred by users, which we organize in the form of a state-transition diagram. In a second experiment, we address the reconfiguration and widget-selection rules. A third experiment provides insights into the impact of the lateral displays on a visual search task.
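As an illustration of what such widget-selection rules can look like, the following JavaScript sketch maps attribute types from a domain model to concrete widgets via an ordered rule list. The data model, rule set, and the lateral-display rationale in the comments are simplified assumptions for illustration, not the paper's implementation.

```javascript
// Illustrative rule-based widget selection: each rule either proposes a
// concrete widget for a domain-model attribute or returns null to pass.
const widgetRules = [
  // Enumerations with few options render as radio groups...
  (attr, config) =>
    attr.type === "enum" && attr.options.length <= 4 ? "radio-group" : null,
  // ...and with many options as a dropdown, to save lateral-display space.
  (attr) => (attr.type === "enum" ? "dropdown" : null),
  (attr) => (attr.type === "boolean" ? "checkbox" : null),
  (attr) => (attr.type === "string" ? "text-field" : null),
];

function selectWidget(attr, displayConfig) {
  for (const rule of widgetRules) {
    const widget = rule(attr, displayConfig);
    if (widget) return widget;
  }
  return "text-field"; // fallback for unmatched attribute types
}

console.log(selectWidget({ type: "enum", options: ["on", "off"] }, {}));
// -> "radio-group"
```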


Author(s):  
Henry Larkin

Purpose – The purpose of this paper is to investigate the feasibility of creating a declarative user interface language suitable for rapid prototyping of mobile and Web apps. Moreover, this paper presents a new framework for creating responsive user interfaces using JavaScript. Design/methodology/approach – Very little prior research exists on JavaScript-specific declarative user interface (UI) languages for mobile Web apps. This paper introduces a new framework, along with several case studies that create modern responsive designs programmatically. Findings – The fully implemented prototype verifies the feasibility of a JavaScript-based declarative user interface library. This paper demonstrates that existing solutions are unwieldy and cumbersome when dynamically creating and adjusting nodes within the visual syntax of program code. Originality/value – This paper presents the Guix.js platform, a declarative UI library for rapid development of Web-based mobile interfaces in JavaScript.
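The abstract does not show Guix.js syntax, but the following sketch illustrates the general idea of declaring a responsive interface programmatically in JavaScript. The el() helper and its property names are invented for illustration and are not the Guix.js API.

```javascript
// Hypothetical declarative UI tree built programmatically (browser context);
// the el() helper and property names are illustrative only, NOT Guix.js.
function el(tag, props = {}, ...children) {
  const node = document.createElement(tag);
  Object.assign(node.style, props.style ?? {});
  if (props.onClick) node.addEventListener("click", props.onClick);
  children.forEach((c) =>
    node.append(typeof c === "string" ? document.createTextNode(c) : c)
  );
  return node;
}

// A responsive two-element layout declared as nested data, not markup.
const app = el(
  "div",
  { style: { display: "flex", flexDirection: "column", gap: "8px" } },
  el("h1", {}, "Hello"),
  el("button", { onClick: () => alert("tapped") }, "Tap me")
);
document.body.append(app);
```

Declaring the tree as nested function calls is what lets program code create and adjust nodes dynamically, the point on which the paper finds markup-based solutions cumbersome.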


Robotica ◽  
2007 ◽  
Vol 25 (5) ◽  
pp. 521-527 ◽  
Author(s):  
Harsha Medicherla ◽  
Ali Sekmen

An understanding of how humans and robots can successfully interact to accomplish specific tasks is crucial in creating more sophisticated robots that may eventually become an integral part of human societies. A social robot needs to be able to learn the preferences and capabilities of the people with whom it interacts, so that it can adapt its behaviors for more efficient and friendly interaction. Advances in human–computer interaction technologies have been widely used in improving human–robot interaction (HRI). It is now possible to interact with robots via natural communication means such as speech. In this paper, an innovative approach for HRI via voice-controllable intelligent user interfaces is described, along with the design and implementation of such interfaces. The traditional approaches to human–robot user interface design are explained, and the advantages of the proposed approach are presented. The designed intelligent user interface, which learns user preferences and capabilities over time, can be controlled with voice. The system was successfully implemented and tested on a Pioneer 3-AT mobile robot. Twenty participants, who were assessed on spatial reasoning ability, directed the robot in spatial navigation tasks to evaluate the effectiveness of voice control in HRI. Time to complete the task, number of steps, and errors were collected. Results indicated that spatial reasoning ability and voice control were reliable predictors of the efficiency of robot teleoperation. Seventy-five percent of the subjects with high spatial reasoning ability preferred voice control over manual control. The effect of spatial reasoning ability on teleoperation was smaller with voice control than with manual control.
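As a rough illustration of the dispatch layer such a voice-controllable interface needs, the JavaScript sketch below maps recognized utterances to robot motion commands and keeps a trivial usage counter as a stand-in for the preference learning the paper describes. The command vocabulary and robot API are assumptions, not the authors' system.

```javascript
// Sketch of a voice-command dispatch layer for spatial navigation tasks.
// A simple usage counter stands in for user-preference learning; the
// command names and robot.move() API are illustrative assumptions.
const userPreferences = {}; // utterance -> how often this user issues it

const commands = {
  "go forward": (robot) => robot.move(1.0, 0),
  "turn left": (robot) => robot.move(0, Math.PI / 2),
  "turn right": (robot) => robot.move(0, -Math.PI / 2),
  "stop": (robot) => robot.move(0, 0),
};

function dispatch(utterance, robot) {
  const cmd = commands[utterance.trim().toLowerCase()];
  if (!cmd) return false; // unrecognized; fall back to manual control
  userPreferences[utterance] = (userPreferences[utterance] ?? 0) + 1;
  cmd(robot);
  return true;
}

// Usage with a stub robot standing in for the Pioneer 3-AT:
const robot = { move: (v, w) => console.log(`v=${v} m/s, w=${w} rad/s`) };
dispatch("turn left", robot);
```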


2021 ◽  
Vol 7 (1) ◽  
pp. 61-70
Author(s):  
Henderi Henderi ◽  
Praditya Aliftiar ◽  
Alwan Hibatullah

Information technology has developed rapidly over time. One technology that many people own today is the smartphone, running the Android or iOS platform. Aware of this, mobile developers compete to design applications with attractive user interfaces so that users are drawn to use them. Mobile application development starts with designing a user interface prototype. This stage aims to visualize user needs, improve the user experience, and simplify the coding process for programmers. In this study, the researchers applied the prototyping method, producing a high-fidelity prototype design for the user interface of an e-learning application.


2021 ◽  
Vol 17 (4) ◽  
pp. e1008887
Author(s):  
Alex Baranski ◽  
Idan Milo ◽  
Shirley Greenbaum ◽  
John-Paul Oliveria ◽  
Dunja Mrdjen ◽  
...  

Mass-based imaging (MBI) technologies such as Multiplexed Ion Beam Imaging by time-of-flight (MIBI-TOF) and Imaging Mass Cytometry (IMC) allow for the simultaneous measurement of the expression levels of 40 or more proteins in biological tissue, providing insight into cellular phenotypes and organization in situ. Imaging artifacts resulting from the sample, assay, or instrumentation complicate downstream analyses and require correction by domain experts. Here, we present MBI Analysis User Interface (MAUI), a series of graphical user interfaces that facilitate this data pre-processing, including the removal of channel crosstalk, noise, and antibody aggregates. Our software streamlines these steps and accelerates processing by enabling real-time and interactive parameter tuning across multiple images.
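As a minimal illustration of the kind of interactive cleanup such a tool exposes, the JavaScript sketch below (not MAUI code; the abstract describes the tool only at the interface level) zeroes out sub-threshold counts in one channel. Interactive parameter tuning then amounts to re-running the filter with a new threshold and redrawing the image.

```javascript
// Illustrative per-channel noise filter: zero out ion counts below a
// user-chosen threshold and report how many pixels were removed, so the
// user can judge the setting visually and numerically.
function denoiseChannel(pixels, threshold) {
  // pixels: flat array of ion counts for one protein channel
  let removed = 0;
  const cleaned = pixels.map((count) => {
    if (count < threshold) {
      removed += 1;
      return 0;
    }
    return count;
  });
  return { cleaned, removed };
}

// Re-running this on a slider change is cheap enough for real-time tuning.
const { cleaned, removed } = denoiseChannel([0, 1, 5, 12, 2, 30], 3);
console.log(cleaned, `${removed} pixels zeroed`);
```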


2016 ◽  
Vol 10 (2) ◽  
pp. 128-147
Author(s):  
Pavel Koukal

In this paper, the author addresses the collective administration of graphical user interfaces in light of the impact of the CJEU decision in BSA v. Ministry of Culture on the case-law of one EU Member State, the Czech Republic. The author analyses the decision of the Czech Supreme Court in which that court concluded that visitors to Internet cafés actively use the graphical user interface, which constitutes relevant use of copyrighted works within the meaning of Art. 18 of the Czech Copyright Act. In this paper, attention is first paid to the definition of the graphical user interface, its brief history, and the possible regimes of intellectual property protection. Subsequently, the author focuses on copyright protection of graphical user interfaces in Czech law and interprets the BSA decision from the perspective of collective administration of copyright. Although graphical user interfaces are independent objects of copyright protection, when they are used in the course of running a computer program, the legal regulation of computer programs takes priority. Based on the conclusions reached by the Supreme Administrative Court of the Czech Republic in the BSA case, the author claims that collective administration of graphical user interfaces is neither reasonable nor effective.

