Runtime Personalization of Multi-Device User Interfaces: Enhanced Accessibility for Media Consumption in Heterogeneous Environments by User Interface Adaptation

Author(s):
Carl Bruninx, Chris Raymaekers, Kris Luyten, Karin Coninx


Author(s):
Anthony Savidis, Margherita Antona, Constantine Stephanidis

In automatic user interface adaptation, developers pursue the delivery of best-fit user interfaces according to the runtime-supplied profiles of individual end users and usage contexts. Software engineering of automatic interface adaptability entails: (a) storage and processing of user and usage-context profiles; (b) design and implementation of alternative interface components, to optimally support the various user activities and interface operations for different users and usage contexts; and (c) runtime decision-making, to choose on the fly the most appropriate alternative interface components, given the particular user and context profile. In automatic interface adaptation, the decision-making process plays a key role in optimal on-the-fly interface assembly, encoding consolidated design wisdom in a computable form. A verifiable language has been designed and implemented that is particularly suited to the specification of adaptation-oriented decision-making logic, while also being easily deployable and usable by interface designers. This paper presents the language, its contextual role in adapted interface delivery, and the automatic verification method. The employment of the language in an adaptation-design support tool is discussed; the tool automatically generates language rules by relying on adaptation rule patterns. Finally, the deployment methodology of the language in supporting dynamic interface assembly is discussed, generalizing further towards dynamic software assembly by introducing architectural contexts and polymorphic architectural containment.
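For illustration, a minimal TypeScript sketch of the kind of profile-driven decision logic such a language expresses: rules test user and context attributes and nominate an alternative interface component for on-the-fly assembly. All names and attributes here are illustrative assumptions, not the paper's actual specification language.

```typescript
// Hypothetical adaptation-oriented decision making: given user and
// context profiles, select the best-fit component among design
// alternatives. Names are illustrative, not the paper's language.

interface UserProfile { expertise: "novice" | "expert"; visualAcuity: "low" | "normal"; }
interface ContextProfile { device: "desktop" | "kiosk"; ambientNoise: "quiet" | "noisy"; }

type Rule = {
  when: (u: UserProfile, c: ContextProfile) => boolean;
  activate: string; // identifier of the interface component to assemble
};

const rules: Rule[] = [
  { when: (u) => u.visualAcuity === "low", activate: "LargePrintMenu" },
  { when: (u, c) => u.expertise === "novice" && c.device === "kiosk", activate: "GuidedWizard" },
  { when: () => true, activate: "StandardMenu" }, // default alternative
];

// First matching rule wins; the chosen id drives on-the-fly assembly.
function decide(u: UserProfile, c: ContextProfile): string {
  return rules.find((r) => r.when(u, c))!.activate;
}

console.log(decide({ expertise: "novice", visualAcuity: "normal" },
                   { device: "kiosk", ambientNoise: "quiet" })); // "GuidedWizard"
```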


2019, Vol. 8, Issue 1, Special...
Author(s):  
Tanguy Giuffrida, Eric Céret, Sophie Dupuy-Chessa, Jean-Philippe Poli

With the massive spread of Internet use, the accessibility of user interfaces (UIs) is an ever more pressing need. Much work has addressed this subject, defining generic or situational accessibility recommendations and proposing tools for user interface adaptation. However, difficulties remain, particularly those related to the complexity of possible contexts of use: the multiplicity of characteristics of the context of use, the imprecision of the values assigned to these characteristics, and the combination of multiple adaptation rules. This article shows how a dynamic adaptation engine based on fuzzy logic can be used to implement accessibility recommendations. We show how this approach overcomes these difficulties thanks to fuzzy logic's capacity to manage combinations of rules, making it possible to take potentially complex contexts of use into account. The approach is illustrated with a concrete example.
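As a hedged illustration of the idea (assumed membership functions and rules, not the authors' engine), the sketch below combines two imprecise context characteristics with a fuzzy OR to drive a single adaptation, a text enlargement factor:

```typescript
// Illustrative fuzzy-logic adaptation sketch. Two context characteristics
// with imprecise values are combined to drive one UI property.

// Triangular membership function over a normalized 0..1 input.
const tri = (a: number, b: number, c: number) => (x: number) =>
  Math.max(0, Math.min((x - a) / (b - a), (c - x) / (c - b)));

const lowLight = tri(-0.5, 0, 0.5);  // "ambient light is low", peaks at 0
const poorAcuity = tri(0.3, 1, 1.7); // "visual acuity is poor", peaks at 1

// Rule: IF light is low OR acuity is poor THEN enlarge text.
// OR is max (AND would be min); the firing strength weights the output.
function fontScale(light: number, acuity: number): number {
  const strength = Math.max(lowLight(light), poorAcuity(acuity));
  return 1 + strength; // defuzzified as a scale factor in [1, 2]
}

console.log(fontScale(0.2, 0.5).toFixed(2)); // partial firing -> 1.60
```

Because rule strengths vary continuously with the inputs, partially satisfied conditions still contribute, which is what lets such an engine cope with imprecise context values rather than brittle threshold rules.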


Information, 2021, Vol. 12 (4), pp. 162
Author(s):  
Soyeon Kim, René van Egmond, Riender Happee

In automated driving, the user interface plays an essential role in guiding transitions between automated and manual driving. This literature review identified 25 studies that explicitly examined the effectiveness of user interfaces in automated driving. Our main selection criterion was how the user interface (UI) affected take-over performance at higher automation levels that allow drivers to take their eyes off the road (SAE Level 3 and 4). We categorized UI factors from the perspective of automated-vehicle-related information. Short take-over times are consistently associated with take-over requests (TORs) initiated through the auditory modality with high urgency levels. On the other hand, take-over requests displayed directly on non-driving-related task devices or in augmented reality do not affect take-over time. Additional explanations of the take-over situation, information about the surroundings and the vehicle while driving, and take-over guidance were found to improve situation awareness. Hence, we conclude that advanced user interfaces can enhance the safety and acceptance of automated driving. Most studies showed positive effects of advanced UIs, but a number showed no significant benefits, and a few showed negative effects, which may be associated with information overload. The occurrence of both positive and negative results for similar UI concepts across studies highlights the need for systematic UI testing across driving conditions and driver characteristics. Our findings suggest that future UI studies of automated vehicles should focus on trust calibration and on enhancing situation awareness in various scenarios.


2021, pp. 1-13
Author(s):  
Ana Dominguez, Julian Florez, Alberto Lafuente, Stefano Masneri, Inigo Tamayo, ...


Author(s):  
Randall Spain, Jason Saville, Barry Lui, Donia Slack, Edward Hill, ...

Because advances in broadband capabilities will soon allow first responders to access and use many forms of data when responding to emergencies, it is becoming critically important to design heads-up displays that present first responders with information in a manner that does not induce extraneous mental workload or cause undue interaction errors. Virtual reality offers a unique medium for envisioning and testing user interface concepts in a realistic and controlled environment. In this paper, we describe a virtual reality-based emergency response scenario that was designed to support user experience research for evaluating the efficacy of intelligent user interfaces for firefighters. We describe the results of a usability test that captured firefighters' feedback and reactions to the VR scenario and to the prototype intelligent user interface that presented them with task-critical information through the VR headset. The paper concludes with lessons learned from our development process and a discussion of plans for future research.


2021, Vol. 5 (EICS), pp. 1-29
Author(s):  
Arthur Sluÿters, Jean Vanderdonckt, Radu-Daniel Vatavu

Intra-platform plasticity typically assumes that the display of a computing platform remains fixed and rigid during interactions with the platform, in contrast to reconfigurable displays, which can change form depending on the context of use. In this paper, we present a model-based approach for designing and deploying graphical user interfaces that support intra-platform plasticity for reconfigurable displays. We instantiate the model for E3Screen, a new device that expands a conventional laptop with two slidable, rotatable, and foldable lateral displays, enabling slidable user interfaces. Based on a UML class diagram as a domain model and a SCRUD list as a task model, we define an abstract user interface as interaction units with a corresponding master-detail design pattern. We then map the abstract user interface to a concrete user interface by applying rules for reconfiguration, concrete interaction, unit allocation, and widget selection, and implement it in JavaScript. In a first experiment, we determine the display configurations most preferred by users, which we organize in the form of a state-transition diagram. In a second experiment, we evaluate the reconfiguration and widget-selection rules. A third experiment provides insights into the impact of the lateral displays on a visual search task.
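For illustration, a minimal TypeScript sketch of a widget-selection rule in the spirit of this mapping: the same abstract interaction unit yields different concrete widgets as the display configuration changes. The types, configurations, and rules are assumptions for illustration, not the authors' implementation:

```typescript
// Abstract-to-concrete UI mapping sketch. An abstract interaction unit
// from the task/domain models is mapped to a concrete widget depending
// on the current display configuration.

type DisplayConfig = "laptop-only" | "one-lateral" | "two-lateral";

interface AbstractUnit {
  role: "master" | "detail";                     // master-detail pattern
  attributeType: "text" | "enum" | "collection"; // from the domain model
}

// Widget-selection rule: the same abstract unit can yield different
// concrete widgets when lateral displays become available.
function selectWidget(unit: AbstractUnit, config: DisplayConfig): string {
  if (unit.role === "master" && unit.attributeType === "collection") {
    return config === "laptop-only" ? "CompactList" : "SidePanelList";
  }
  if (unit.attributeType === "enum") {
    return config === "two-lateral" ? "RadioGroup" : "DropDown";
  }
  return "TextField";
}

// A reconfiguration event (e.g., sliding a lateral display out)
// re-runs the mapping with the new configuration.
console.log(selectWidget({ role: "master", attributeType: "collection" }, "two-lateral"));
```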


Author(s):  
Henry Larkin

Purpose – The purpose of this paper is to investigate the feasibility of creating a declarative user interface language suitable for rapid prototyping of mobile and Web apps. Moreover, this paper presents a new framework for creating responsive user interfaces using JavaScript. Design/methodology/approach – Very little existing research has been done on JavaScript-specific declarative user interface (UI) languages for mobile Web apps. This paper introduces a new framework, along with several case studies that create modern responsive designs programmatically. Findings – The fully implemented prototype verifies the feasibility of a JavaScript-based declarative user interface library. This paper demonstrates that with existing solutions it is unwieldy and cumbersome to dynamically create and adjust nodes within the visual syntax of program code. Originality/value – This paper presents the Guix.js platform, a declarative UI library for rapid development of Web-based mobile interfaces in JavaScript.
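As a hedged illustration of the declarative style the paper advocates (this is not the actual Guix.js API), a UI can be described as a plain data structure and rendered by a small interpreter, so responsive adjustments mutate data rather than markup:

```typescript
// Hypothetical declarative UI sketch: the interface is a data structure,
// and a tiny renderer builds the live DOM from it (runs in a browser).

type UINode = {
  tag: string;
  style?: Partial<CSSStyleDeclaration>;
  text?: string;
  children?: UINode[];
};

const ui: UINode = {
  tag: "div",
  style: { display: "flex", flexDirection: "column" },
  children: [
    { tag: "h1", text: "Tasks" },
    { tag: "button", text: "Add task" },
  ],
};

// Walking the declaration builds the DOM, so a responsive change is a
// change to the data structure followed by a re-render, not manual
// node-by-node DOM surgery.
function render(node: UINode): HTMLElement {
  const el = document.createElement(node.tag);
  if (node.text) el.textContent = node.text;
  Object.assign(el.style, node.style ?? {});
  (node.children ?? []).forEach((c) => el.appendChild(render(c)));
  return el;
}

document.body.appendChild(render(ui));
```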


Robotica, 2007, Vol. 25 (5), pp. 521-527
Author(s):  
Harsha Medicherla, Ali Sekmen

SUMMARY: An understanding of how humans and robots can successfully interact to accomplish specific tasks is crucial in creating more sophisticated robots that may eventually become an integral part of human societies. A social robot needs to be able to learn the preferences and capabilities of the people with whom it interacts, so that it can adapt its behaviors for more efficient and friendly interaction. Advances in human–computer interaction technologies have been widely used to improve human–robot interaction (HRI). It is now possible to interact with robots via natural communication means such as speech. In this paper, an innovative approach for HRI via voice-controllable intelligent user interfaces is described, along with the design and implementation of such interfaces. The traditional approaches to human–robot user interface design are explained, and the advantages of the proposed approach are presented. The designed intelligent user interface, which learns user preferences and capabilities over time, can be controlled by voice. The system was successfully implemented and tested on a Pioneer 3-AT mobile robot. Twenty participants, who were assessed on spatial reasoning ability, directed the robot in spatial navigation tasks to evaluate the effectiveness of voice control in HRI. Time to complete the task, number of steps, and errors were recorded. Results indicated that spatial reasoning ability and voice control were reliable predictors of the efficiency of robot teleoperation. Seventy-five percent of the subjects with high spatial reasoning ability preferred voice control over manual control. The effect of spatial reasoning ability on teleoperation was lower with voice control than with manual control.
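For illustration, a minimal TypeScript sketch of voice-command dispatch for such teleoperation (an assumed command grammar; the paper's interface additionally learns user preferences and capabilities over time):

```typescript
// Voice-command dispatch sketch: a recognizer hypothesis is matched
// against a small command grammar and mapped to a robot action.

type Command = "forward" | "back" | "left" | "right" | "stop";

const grammar: Record<string, Command> = {
  "go forward": "forward", "move ahead": "forward",
  "go back": "back",
  "turn left": "left",
  "turn right": "right",
  "stop": "stop",
};

function interpret(utterance: string): Command | null {
  const phrase = utterance.trim().toLowerCase();
  return grammar[phrase] ?? null; // unrecognized speech is rejected, not guessed
}

console.log(interpret("Turn LEFT "));  // "left"
console.log(interpret("spin around")); // null -> ask the user to rephrase
```

Rejecting out-of-grammar phrases rather than guessing is one way to keep recognition errors from turning into navigation errors, which matters for the error counts collected in such studies.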

