An Abstract User Interface Framework for Mobile and Wearable Devices

2011 ◽  
Vol 3 (3) ◽  
pp. 28-35 ◽  
Author(s):  
Claas Ahlrichs ◽  
Michael Lawo ◽  
Hendrik Iben

In the future, mobile and wearable devices will increasingly be used to interact with surrounding technologies. When developing applications for such devices, one usually has to implement the same application separately for each device, so a unified framework could drastically reduce development effort. This paper presents a framework that facilitates the development of context-aware user interfaces (UIs) with reusable components for these devices. It is based on an abstract description of the envisioned UI, from which a context- and device-specific representation is generated at run-time. Rendition in various modalities and adaptation of the generated representation are also supported.
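A minimal sketch of the core idea, with hypothetical names (the paper does not publish this API): a UI is declared once in abstract terms, and a device- or modality-specific renderer produces the concrete representation at run-time.

```python
# Hypothetical sketch: one abstract UI description, several run-time renderers.
from dataclasses import dataclass

@dataclass
class AbstractWidget:
    kind: str        # e.g. "choice", "text", "trigger"
    label: str
    options: tuple = ()

class SmartwatchRenderer:
    def render(self, widget: AbstractWidget) -> str:
        # Small screen: collapse choices into a compact list.
        return f"[watch] {widget.label}: {'/'.join(widget.options) or widget.kind}"

class SpeechRenderer:
    def render(self, widget: AbstractWidget) -> str:
        # Auditory modality: read the label and options aloud.
        return f"[speech] For {widget.label}, say one of: {', '.join(widget.options)}"

def render_ui(ui, renderer):
    # The device-specific representation is generated at run-time.
    return [renderer.render(w) for w in ui]

ui = [AbstractWidget("choice", "Navigation", ("map", "compass"))]
print(render_ui(ui, SmartwatchRenderer()))  # device-specific representation
print(render_ui(ui, SpeechRenderer()))      # same abstract UI, other modality
```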


Sensors ◽  
2020 ◽  
Vol 20 (5) ◽  
pp. 1275 ◽  
Author(s):  
Hyoseok Yoon ◽  
Se-Ho Park

Current consumer wearable devices such as smartwatches rely mostly on touchscreen-based user interfaces. Even though touch-based user interfaces help smartphone users quickly adapt to wearable devices with touchscreens, they have several limitations. In this paper, we propose a non-touchscreen tactile wearable interface as an alternative to touchscreens on wearable devices. We designed and implemented a joystick-integrated smartwatch prototype to demonstrate this interface, iteratively refining the prototype to polish the interaction ideas and their integration. To show the feasibility of our approach, we compared the form factor of our prototype against nine of the latest commercial smartwatches in terms of their dimensions. We also report the response time and accuracy of our wearable interface to support our rationale for an alternative, usable wearable UI. With the proposed tactile wearable user interface, we believe our approach can serve as a cohesive single interaction device enabling various cross-device interaction scenarios and applications.
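A sketch of how such a non-touch input could drive a watch UI, under assumed details (the deadzone value and event names are illustrative, not the prototype's published design): analog joystick samples are mapped to discrete navigation events.

```python
# Hypothetical sketch: mapping raw 2-axis joystick samples to discrete
# navigation events for a touchscreen-free smartwatch menu.
from typing import Optional

DEADZONE = 0.3  # assumed threshold; the prototype's actual value is not published

def joystick_event(x: float, y: float) -> Optional[str]:
    """Map analog (x, y) in [-1, 1] to up/down/left/right, or None inside the deadzone."""
    if abs(x) < DEADZONE and abs(y) < DEADZONE:
        return None
    if abs(x) >= abs(y):
        return "right" if x > 0 else "left"
    return "up" if y > 0 else "down"

menu = ["alarm", "timer", "heart rate", "settings"]
index = 0
for sample in [(0.0, -0.9), (0.0, -0.8), (0.9, 0.1)]:
    event = joystick_event(*sample)
    if event == "up":
        index = (index - 1) % len(menu)
    elif event == "down":
        index = (index + 1) % len(menu)
    elif event == "right":
        print("select:", menu[index])
    print("focused:", menu[index])
```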


Author(s):  
Firas Bacha ◽  
Káthia Marçal de Oliveira ◽  
Mourad Abed

User interface (UI) personalization aims at providing the right information, at the right time, on the right device (tablet, smartphone, etc.). Personalization can be applied to the presentation of interface elements (e.g., layout, screen size, and resolution) and to the content provided (e.g., data, information, documents). While many existing approaches deal with the first type of personalization, this chapter explores content personalization. To that end, the authors define a context-aware Model Driven Architecture (MDA) approach in which the UI model is enriched by data from a domain model and its mapping to a context model. They conclude that this approach is best used in domains where several software applications and/or user interfaces are expected to be developed.
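A minimal sketch of content personalization in this spirit, with invented models (the chapter's actual MDA models and mappings are richer): the UI references a domain concept, and the context model selects which domain instances fill it at generation time.

```python
# Hypothetical sketch: a UI slot is bound to different domain-model content
# depending on the context model, rather than changing layout or presentation.
domain_model = {
    "timetable": [
        {"route": "A", "accessible": True},
        {"route": "B", "accessible": False},
    ]
}

context_model = {"user_needs_accessibility": True}

def personalize_content(concept: str) -> list:
    """Select domain instances compatible with the current context."""
    items = domain_model[concept]
    if context_model["user_needs_accessibility"]:
        items = [i for i in items if i["accessible"]]
    return items

# The UI model only names the concept; the content is bound at generation time.
print(personalize_content("timetable"))  # -> [{'route': 'A', 'accessible': True}]
```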


Author(s):  
Nikola Mitrovic ◽  
Eduardo Mena ◽  
Jose Alberto Royo

Mobility is a challenging problem for graphical user interfaces (GUIs), as different GUIs must be constructed for different device capabilities and for changing contexts, preferences, and user locations. GUI developers frequently create multiple versions of a user interface for different devices. The solution lies in a single, abstract user interface description that is later used to automatically generate user interfaces for different devices. Various techniques have been proposed to adapt GUIs from an abstract specification to a concrete interface. Design-time techniques can create better-performing GUIs but, in contrast to run-time techniques, lack flexibility and mobility. The mobility and autonomy of run-time techniques can be significantly improved by using mobile agent technology and an indirect GUI generation paradigm. Indirect generation enables the analysis of human-computer interaction and the application of artificial intelligence techniques at run-time, increasing GUI performance and usability.
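A rough sketch of the indirect-generation idea, with hypothetical names (the authors' agent platform is not reproduced here): a mediator sits between the abstract specification and the device renderers, so interaction can be logged and analyzed at run-time to drive adaptation.

```python
# Hypothetical sketch: indirect GUI generation through a mediator that can
# observe interaction and adapt the concrete GUI at run-time.
class Mediator:
    def __init__(self, renderers):
        self.renderers = renderers   # device name -> rendering function
        self.usage = {}              # interaction log for later analysis

    def generate(self, spec, device):
        # Concrete widgets are produced indirectly, per device.
        return [self.renderers[device](widget) for widget in spec]

    def record(self, widget_id):
        # Run-time interaction analysis could feed an adaptation policy,
        # e.g. reordering frequently used widgets.
        self.usage[widget_id] = self.usage.get(widget_id, 0) + 1

spec = [("btn_save", "Save"), ("btn_load", "Load")]
mediator = Mediator({
    "phone": lambda w: f"<button id={w[0]}>{w[1]}</button>",
    "watch": lambda w: f"({w[1]})",
})
print(mediator.generate(spec, "watch"))  # ['(Save)', '(Load)']
mediator.record("btn_save")
print(mediator.usage)                    # {'btn_save': 1}
```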


2009 ◽  
Author(s):  
Alark Joshi ◽  
Dustin Scheinost ◽  
Hirohito Okuda ◽  
Isabella Murphy ◽  
Lawrence Staib ◽  
...  

Developing both graphical and command-line user interfaces for image analysis algorithms requires considerable effort, and developers generally provide only rudimentary user interface controls. Image analysis algorithms can only meet their potential if their intended users can use them easily and frequently, and deploying a large suite of such algorithms on multiple platforms requires stable, appropriately tested software. We present a novel framework that allows for the rapid development of image analysis algorithms along with graphical user interface controls. Additionally, our framework allows for simplified nightly testing of the algorithms to ensure stability and cross-platform interoperability. It supports the development of complex algorithms through custom pipelines in which the output of one algorithm serves as the input of another. All of the functionality is encapsulated in the algorithm object, requiring no separate source code for user interfaces, testing, or deployment. This makes our framework ideal for developing novel, stable, and easy-to-use algorithms for computer-assisted interventions (CAI). The framework has been deployed at the Magnetic Resonance Research Center at Yale University and has been released for public use.
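A minimal sketch of the single-object idea under assumed names (this is not the framework's actual API): an algorithm declares its parameters once, a command-line interface is derived from that declaration, and outputs chain into the next pipeline stage.

```python
# Hypothetical sketch: parameters declared once on the algorithm object;
# the CLI is generated from them, and algorithms chain into a pipeline.
import argparse

class Algorithm:
    name = "base"
    params = {}  # parameter name -> (type, default)

    def run(self, image, **kwargs):
        raise NotImplementedError

    def cli(self, argv, image):
        # The command-line interface is generated from the declaration,
        # so no separate UI source code is needed.
        parser = argparse.ArgumentParser(prog=self.name)
        for pname, (ptype, default) in self.params.items():
            parser.add_argument(f"--{pname}", type=ptype, default=default)
        return self.run(image, **vars(parser.parse_args(argv)))

class Threshold(Algorithm):
    name = "threshold"
    params = {"level": (float, 0.5)}
    def run(self, image, level):
        return [[1 if px >= level else 0 for px in row] for row in image]

class Invert(Algorithm):
    name = "invert"
    def run(self, image):
        return [[1 - px for px in row] for row in image]

# Pipeline: the output of one algorithm serves as the input of another.
image = [[0.2, 0.7], [0.9, 0.1]]
stage1 = Threshold().cli(["--level", "0.6"], image)
print(Invert().run(stage1))  # [[1, 0], [0, 1]]
```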


2010 ◽  
Vol 6 (1) ◽  
pp. 29-43 ◽  
Author(s):  
Károly Tilly ◽  
Zoltán Porkoláb

Semantic User Interfaces (SUIs) are sets of interrelated, static, domain-specific documents with layout and content whose interpretation is defined through semantic decoration. SUIs are declarative in nature and allow programs to be composed by the users themselves at the user-interface level. SUI-based applications operate in a service-oriented manner: SUI elements referenced in user requests are automatically mapped to reusable service provider components whose contracts are specified in domain ontologies. This ensures the semantic separation of user interface components from the underlying application infrastructure, which allows a full separation of concerns during system development; real, application-independent, reusable components; user-editable applications; and generic learnability. This article presents the architecture and components of a SUI framework, the basic elements of SUI documents, and the relevant properties of domain ontologies for SUI documents. The representation and operation of SUI applications are explained through a motivating example.
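A minimal sketch of the decoration-to-service mapping, with invented concept names and a plain dictionary standing in for the ontology-based contract registry (the article's actual machinery is richer): the document element names only an ontology concept, and a registered provider fulfills it.

```python
# Hypothetical sketch: SUI elements carry semantic decorations; a contract
# registry maps each decoration to a reusable service provider, keeping the
# document independent of the implementation.
contracts = {}  # ontology concept -> service provider

def provides(concept):
    """Register a service provider as fulfilling an ontology contract."""
    def register(fn):
        contracts[concept] = fn
        return fn
    return register

@provides("travel:BookTicket")
def book_ticket(origin, destination):
    return f"booked {origin} -> {destination}"

# A SUI document element references only the ontology concept:
sui_element = {"decoration": "travel:BookTicket",
               "inputs": {"origin": "Budapest", "destination": "Vienna"}}

service = contracts[sui_element["decoration"]]
print(service(**sui_element["inputs"]))  # booked Budapest -> Vienna
```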


Information ◽  
2021 ◽  
Vol 12 (4) ◽  
pp. 162 ◽  

Author(s):  
Soyeon Kim ◽  
René van Egmond ◽  
Riender Happee

In automated driving, the user interface plays an essential role in guiding transitions between automated and manual driving. This literature review identified 25 studies that explicitly examined the effectiveness of user interfaces in automated driving. Our main selection criterion was how the user interface (UI) affected take-over performance at higher automation levels that allow drivers to take their eyes off the road (SAE Levels 3 and 4). We categorized UI factors from the perspective of automated-vehicle-related information. Short take-over times are consistently associated with take-over requests (TORs) initiated in the auditory modality with high urgency levels; take-over requests displayed directly on non-driving-related task devices or in augmented reality, on the other hand, do not affect take-over time. Additional explanations of the take-over situation, surrounding and vehicle information while driving, and take-over guidance were found to improve situational awareness. Hence, we conclude that advanced user interfaces can enhance the safety and acceptance of automated driving. Most studies showed positive effects of advanced UIs, but a number showed no significant benefit, and a few showed negative effects, which may be associated with information overload. The occurrence of both positive and negative results for similar UI concepts across studies highlights the need for systematic UI testing across driving conditions and driver characteristics. Our findings suggest that future UI studies of automated vehicles should focus on trust calibration and on enhancing situation awareness in various scenarios.
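A small sketch motivated by the review's central finding (urgency thresholds and modality choices below are illustrative assumptions, not values from the reviewed studies): since high-urgency auditory TORs are associated with the shortest take-over times, a dispatcher could escalate modality with urgency.

```python
# Hypothetical sketch: choosing take-over-request (TOR) modalities by urgency,
# escalating toward the auditory channel that the review associates with
# short take-over times.
def dispatch_tor(urgency: float) -> list:
    """Choose TOR modalities for a given urgency in [0, 1] (assumed thresholds)."""
    modalities = ["visual"]            # baseline: icon on the cluster display
    if urgency >= 0.4:
        modalities.append("auditory")  # urgent tone plus spoken prompt
    if urgency >= 0.7:
        modalities.append("haptic")    # seat or steering-wheel vibration
    return modalities

print(dispatch_tor(0.2))  # ['visual']
print(dispatch_tor(0.8))  # ['visual', 'auditory', 'haptic']
```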


Author(s):  
Mohamed A. Amasha ◽  
Marwa F. Areed ◽  
Salem Alkhalaf ◽  
Rania A. Abougalala ◽  
Safaa M. Elatawy ◽  
...  
