COOPERATIVE RECONFIGURATION OF USER INTERFACE MODELS FOR LEARNING CRYPTOGRAPHIC PROTOCOLS

2012 ◽  
Vol 11 (06) ◽  
pp. 1127-1154 ◽  
Author(s):  
BENJAMIN WEYERS ◽  
WOLFRAM LUTHER ◽  
NELSON BALOIAN

Cooperative work in learning environments has been shown to be a successful extension to traditional learning systems due to the great impact of cooperation on students' motivation and learning success. A recent evaluation study confirmed our hypothesis that students who cooperatively constructed their roles in a cryptographic protocol as a sequence of actions in a user interface found a correct solution faster than students who worked on their own. In that study, students in a cooperation group collaboratively modeled a user interface for the simulation of a cryptographic protocol using interactive modeling tools on a shared touch screen. In this paper, we describe an extended approach to the cooperative construction of cryptographic protocols. Using a formal language for modeling and reconfiguring user interfaces, students describe a protocol step by step, modeling successive situations and thereby the actions of the protocol. The system automatically generates a colored Petri net, which is matched against an existing action logic specifying the protocol, thus allowing formal validation of the construction process. The formal approach to user interface modeling covers a much broader field than simple cryptographic protocol simulation. Nevertheless, this paper investigates the use of such a formal modeling approach in the context of cooperative learning of cryptographic protocols and develops a basis for more complex learning scenarios.
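The core idea, modeling a protocol as a sequence of user-interface actions and validating it against a predefined action logic, can be illustrated with a minimal sketch. The type and function names below are hypothetical illustrations only; the authors' system derives a colored Petri net from the modeled interface rather than comparing flat lists.

```typescript
// Hypothetical sketch: validating a student-constructed action sequence
// against an expected protocol specification (the "action logic").
// Names and structure are illustrative; the paper's system instead matches
// an automatically generated colored Petri net against the action logic.

type Action = { actor: string; operation: string };

// Expected action logic for a toy key-exchange protocol step.
const expectedActions: Action[] = [
  { actor: "Alice", operation: "choose-secret" },
  { actor: "Alice", operation: "send-public-value" },
  { actor: "Bob", operation: "choose-secret" },
  { actor: "Bob", operation: "send-public-value" },
  { actor: "Alice", operation: "compute-shared-key" },
  { actor: "Bob", operation: "compute-shared-key" },
];

// Check the modeled sequence step by step and report the first divergence,
// giving the learner immediate, formally grounded feedback.
function validate(modeled: Action[], expected: Action[]): string {
  for (let i = 0; i < expected.length; i++) {
    const m = modeled[i];
    const e = expected[i];
    if (!m || m.actor !== e.actor || m.operation !== e.operation) {
      return `Mismatch at step ${i + 1}: expected ${e.actor}/${e.operation}`;
    }
  }
  return modeled.length > expected.length
    ? "Extra actions beyond the protocol specification"
    : "Sequence matches the protocol";
}

console.log(validate(expectedActions, expectedActions)); // "Sequence matches the protocol"
```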

2020 ◽  
pp. 44-55
Author(s):  
Halyna A. Pidnebesna ◽  
Andrii V. Pavlov ◽  
Volodymyr S. Stepashko ◽  
...

This paper analyzes sources in the field of developing and building intelligent user interfaces. Particular attention is paid to presenting an ontology-based approach to constructing the architecture of the interface, the tasks arising during development, and ways of solving them. An example of the construction of an intelligent user interface for inductive-modeling software tools is given, based on a detailed analysis of the knowledge structures in this domain.
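The flavor of an ontology-driven interface architecture can be sketched as domain concepts annotated with metadata that a generator uses to select widgets. All names below are illustrative assumptions, not the paper's ontology for inductive-modeling software.

```typescript
// Minimal sketch, assuming a toy ontology: domain concepts carry metadata
// that an interface generator uses to pick widgets. Names are illustrative.

interface Concept {
  name: string;
  dataType: "numeric" | "categorical" | "file";
  range?: [number, number];
  options?: string[];
}

// A fragment of a hypothetical inductive-modeling ontology.
const ontology: Concept[] = [
  { name: "maxPolynomialDegree", dataType: "numeric", range: [1, 10] },
  { name: "selectionCriterion", dataType: "categorical", options: ["AIC", "BIC", "cross-validation"] },
  { name: "trainingData", dataType: "file" },
];

// Map each concept to a widget description based on its data type.
function widgetFor(c: Concept): string {
  switch (c.dataType) {
    case "numeric":
      return `slider(${c.name}, ${c.range?.[0]}..${c.range?.[1]})`;
    case "categorical":
      return `dropdown(${c.name}, [${c.options?.join(", ")}])`;
    case "file":
      return `filePicker(${c.name})`;
  }
}

ontology.forEach((c) => console.log(widgetFor(c)));
```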


Information ◽  
2021 ◽  
Vol 12 (4) ◽  
pp. 162
Author(s):  
Soyeon Kim ◽  
René van Egmond ◽  
Riender Happee

In automated driving, the user interface plays an essential role in guiding transitions between automated and manual driving. This literature review identified 25 studies that explicitly examined the effectiveness of user interfaces in automated driving. Our main selection criterion was how the user interface (UI) affected take-over performance at higher automation levels that allow drivers to take their eyes off the road (SAE Levels 3 and 4). We categorized UI factors from an automated vehicle-related information perspective. Short take-over times are consistently associated with take-over requests (TORs) initiated in the auditory modality with high urgency levels. On the other hand, take-over requests displayed directly on non-driving-related task devices or in augmented reality do not affect take-over time. Additional explanations of the take-over situation, surrounding and vehicle information presented while driving, and take-over guidance information were found to improve situational awareness. Hence, we conclude that advanced user interfaces can enhance the safety and acceptance of automated driving. Most studies showed positive effects of advanced UIs, but a number of studies showed no significant benefits, and a few showed negative effects, which may be associated with information overload. The occurrence of both positive and negative results for similar UI concepts in different studies highlights the need for systematic UI testing across driving conditions and driver characteristics. Based on our findings, we propose that future UI studies of automated vehicles focus on trust calibration and on enhancing situation awareness in various scenarios.


Author(s):  
Randall Spain ◽  
Jason Saville ◽  
Barry Lui ◽  
Donia Slack ◽  
Edward Hill ◽  
...  

Because advances in broadband capabilities will soon allow first responders to access and use many forms of data when responding to emergencies, it is becoming critically important to design heads-up displays that present first responders with information in a manner that does not induce extraneous mental workload or cause undue interaction errors. Virtual reality offers a unique medium for envisioning and testing user interface concepts in a realistic and controlled environment. In this paper, we describe a virtual reality-based emergency response scenario that was designed to support user experience research for evaluating the efficacy of intelligent user interfaces for firefighters. We describe the results of a usability test that captured firefighters' feedback and reactions to the VR scenario and to the prototype intelligent user interface that presented them with task-critical information through the VR headset. The paper concludes with lessons learned from our development process and a discussion of plans for future research.


2021 ◽  
Vol 5 (EICS) ◽  
pp. 1-29
Author(s):  
Arthur Sluÿters ◽  
Jean Vanderdonckt ◽  
Radu-Daniel Vatavu

Intra-platform plasticity regularly assumes that the display of a computing platform remains fixed and rigid during interactions with the platform, in contrast to reconfigurable displays, which can change form depending on the context of use. In this paper, we present a model-based approach for designing and deploying graphical user interfaces that support intra-platform plasticity for reconfigurable displays. We instantiate the model for E3Screen, a new device that expands a conventional laptop with two slidable, rotatable, and foldable lateral displays, enabling slidable user interfaces. Based on a UML class diagram as a domain model and a SCRUD list as a task model, we define an abstract user interface as interaction units with a corresponding master-detail design pattern. We then map the abstract user interface to a concrete user interface by applying rules for reconfiguration, concrete interaction, unit allocation, and widget selection, and we implement it in JavaScript. In a first experiment, we determine the display configurations most preferred by users, which we organize in the form of a state-transition diagram. In a second experiment, we address reconfiguration rules and widget selection rules. A third experiment provides insights into the impact of the lateral displays on a visual search task.
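The abstract-to-concrete mapping described above can be roughly sketched as a rule that assigns widgets to interaction units and distributes a master-detail layout across the available displays. The data shapes and rule names below are assumptions for illustration, not the E3Screen implementation.

```typescript
// Rough sketch of mapping an abstract UI (interaction units derived from a
// domain model and a SCRUD task model) to a concrete master-detail layout.
// Data shapes and rule names are illustrative assumptions.

type DisplayConfig = "laptop-only" | "left-extended" | "both-extended";

interface InteractionUnit {
  entity: string;                                                 // from the UML domain model
  tasks: ("search" | "create" | "read" | "update" | "delete")[];  // SCRUD task model
}

interface ConcreteUI {
  master: { display: string; widget: string };
  detail: { display: string; widget: string };
}

// Reconfiguration rule: place the master list and the detail form on
// different physical displays when a lateral display is available.
function mapToConcrete(unit: InteractionUnit, config: DisplayConfig): ConcreteUI {
  const masterWidget = unit.tasks.includes("search") ? "searchableList" : "list";
  const detailWidget = unit.tasks.includes("update") ? "editableForm" : "readOnlyForm";
  const detailDisplay = config === "laptop-only" ? "main" : "left-lateral";
  return {
    master: { display: "main", widget: `${masterWidget}<${unit.entity}>` },
    detail: { display: detailDisplay, widget: `${detailWidget}<${unit.entity}>` },
  };
}

console.log(mapToConcrete({ entity: "Customer", tasks: ["search", "read", "update"] }, "left-extended"));
```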


Author(s):  
Henry Larkin

Purpose – The purpose of this paper is to investigate the feasibility of creating a declarative user interface language suitable for rapid prototyping of mobile and Web apps. Moreover, this paper presents a new framework for creating responsive user interfaces using JavaScript. Design/methodology/approach – Very little existing research has been done on JavaScript-specific declarative user interface (UI) languages for mobile Web apps. This paper introduces a new framework, along with several case studies that create modern responsive designs programmatically. Findings – The fully implemented prototype verifies the feasibility of a JavaScript-based declarative user interface library. This paper demonstrates that existing solutions are unwieldy and cumbersome when nodes must be created and adjusted dynamically within a visual syntax of program code. Originality/value – This paper presents the Guix.js platform, a declarative UI library for rapid development of Web-based mobile interfaces in JavaScript.
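The declarative style the paper argues for can be illustrated with a small sketch in which the interface is described as data rather than built imperatively. This is not the Guix.js API; all names here are hypothetical.

```typescript
// Hypothetical sketch of a declarative UI description; NOT the Guix.js API.

interface UiNode {
  type: "column" | "row" | "label" | "button";
  text?: string;
  onTap?: () => void;
  children?: UiNode[];
}

// The interface is declared as data, which makes it easy to generate,
// adjust, and re-render programmatically at runtime.
const appScreen: UiNode = {
  type: "column",
  children: [
    { type: "label", text: "Welcome" },
    {
      type: "row",
      children: [
        { type: "button", text: "Sign in", onTap: () => console.log("sign in") },
        { type: "button", text: "Register", onTap: () => console.log("register") },
      ],
    },
  ],
};

// A trivial renderer that walks the declaration; a real framework would
// emit DOM nodes and responsive CSS instead of text.
function render(node: UiNode, indent = ""): void {
  console.log(`${indent}${node.type}${node.text ? `: ${node.text}` : ""}`);
  node.children?.forEach((c) => render(c, indent + "  "));
}

render(appScreen);
```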


Robotica ◽  
2007 ◽  
Vol 25 (5) ◽  
pp. 521-527 ◽  
Author(s):  
Harsha Medicherla ◽  
Ali Sekmen

SUMMARY
An understanding of how humans and robots can successfully interact to accomplish specific tasks is crucial in creating more sophisticated robots that may eventually become an integral part of human societies. A social robot needs to be able to learn the preferences and capabilities of the people with whom it interacts so that it can adapt its behaviors for more efficient and friendly interaction. Advances in human–computer interaction technologies have been widely used in improving human–robot interaction (HRI). It is now possible to interact with robots via natural communication means such as speech. In this paper, an innovative approach for HRI via voice-controllable intelligent user interfaces is described, along with the design and implementation of such interfaces. The traditional approaches to human–robot user interface design are explained and the advantages of the proposed approach are presented. The designed intelligent user interface, which learns user preferences and capabilities over time, can be controlled by voice. The system was successfully implemented and tested on a Pioneer 3-AT mobile robot. Twenty participants, who were assessed on spatial reasoning ability, directed the robot in spatial navigation tasks to evaluate the effectiveness of voice control in HRI. Time to complete the task, number of steps, and errors were collected. Results indicated that spatial reasoning ability and voice control were reliable predictors of the efficiency of robot teleoperation. Of the subjects with high spatial reasoning ability, 75% preferred voice control over manual control. The effect of spatial reasoning ability on teleoperation with voice control was lower than with manual control.
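One ingredient of such a voice-controllable interface is a mapping from recognized utterances to navigation commands, with simple per-user adaptation. The sketch below is a hedged illustration under assumed names; it is not the authors' Pioneer 3-AT implementation.

```typescript
// Illustrative sketch: mapping recognized utterances to navigation commands
// and adapting a per-user preference (default travel distance). All names
// are assumptions, not the authors' implementation.

type Command = { action: "forward" | "turn-left" | "turn-right" | "stop"; meters?: number };

class VoiceTeleoperation {
  private defaultDistance = 1.0; // learned per user over time

  interpret(utterance: string): Command | null {
    const text = utterance.toLowerCase();
    if (text.includes("stop")) return { action: "stop" };
    if (text.includes("left")) return { action: "turn-left" };
    if (text.includes("right")) return { action: "turn-right" };
    if (text.includes("forward")) {
      const m = text.match(/(\d+(\.\d+)?)\s*meter/);
      const meters = m ? parseFloat(m[1]) : this.defaultDistance;
      // Crude preference update: drift the default toward recent requests.
      this.defaultDistance = 0.8 * this.defaultDistance + 0.2 * meters;
      return { action: "forward", meters };
    }
    return null; // unrecognized: the interface would ask the user to rephrase
  }
}

const teleop = new VoiceTeleoperation();
console.log(teleop.interpret("move forward two meters")); // no digits: falls back to the learned default
console.log(teleop.interpret("move forward 3 meters"));
```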


2021 ◽  
Vol 7 (1) ◽  
pp. 61-70
Author(s):  
Henderi Henderi ◽  
Praditya Aliftiar ◽  
Alwan Hibatullah

Information technology has developed rapidly over time. One of the technologies many people own today is the smartphone, running the Android or iOS platform. Knowing this, mobile developers compete with each other to design applications with attractive user interfaces so that users are interested in using them. Mobile application development therefore starts with designing a user interface prototype. This stage aims to visualize user needs, improve the user experience, and simplify the coding process for programmers. In this study, the researchers applied the prototyping method. The research produces a high-fidelity prototype design for the user interface of an e-learning application.


2021 ◽  
Vol 17 (4) ◽  
pp. e1008887
Author(s):  
Alex Baranski ◽  
Idan Milo ◽  
Shirley Greenbaum ◽  
John-Paul Oliveria ◽  
Dunja Mrdjen ◽  
...  

Mass Based Imaging (MBI) technologies such as Multiplexed Ion Beam Imaging by time of flight (MIBI-TOF) and Imaging Mass Cytometry (IMC) allow for the simultaneous measurement of the expression levels of 40 or more proteins in biological tissue, providing insight into cellular phenotypes and organization in situ. Imaging artifacts resulting from the sample, assay, or instrumentation complicate downstream analyses and require correction by domain experts. Here, we present the MBI Analysis User Interface (MAUI), a series of graphical user interfaces that facilitate this data pre-processing, including the removal of channel crosstalk, noise, and antibody aggregates. Our software streamlines these steps and accelerates processing by enabling real-time and interactive parameter tuning across multiple images.
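The kind of pre-processing step exposed for interactive tuning can be sketched as a simple thresholding pass over one channel image; the code below is a hedged illustration of the idea, not MAUI's actual API.

```typescript
// Hedged illustration of one pre-processing step: suppressing low-intensity
// noise in a single channel image with a tunable threshold. This is not
// MAUI's API; it only sketches the kind of parameter a domain expert would
// adjust interactively while previewing the result.

type ChannelImage = number[][]; // counts per pixel for one protein channel

function removeNoise(image: ChannelImage, threshold: number): ChannelImage {
  return image.map((row) => row.map((v) => (v < threshold ? 0 : v)));
}

// Interactive tuning amounts to re-running the step with a new threshold
// and re-rendering the preview until the expert is satisfied.
const channel: ChannelImage = [
  [0, 2, 15],
  [1, 40, 3],
  [22, 0, 5],
];
console.log(removeNoise(channel, 10)); // [[0,0,15],[0,40,0],[22,0,0]]
```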

