Ontology Application to Construct Inductive Modeling Tools with Intelligent Interface

2020 ◽ pp. 44-55
Author(s): Halyna A. Pidnebesna ◽ Andrii V. Pavlov ◽ Volodymyr S. Stepashko ◽ ...

This paper is devoted to the analysis of sources in the field of development and building intelligent user interfaces. Particular attention is paid to presenting an ontology-based approach to constructing the architecture of the interface, the tasks arising during the development, and ways for solving them. An example of the construction of the intelligent user interface is given for software tools of inductive modeling based on the detailed analysis of knowledge structures in this domain.
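The abstract does not give implementation details; as a purely illustrative sketch of the ontology-based approach it describes, a toy domain ontology (all concept names below are invented, not taken from the paper) can drive the menu structure an intelligent interface offers to the user:

```python
# Hypothetical sketch: a toy domain ontology (concept -> subconcepts) used to
# generate interface choices for an inductive-modeling tool. All names here
# are illustrative assumptions, not the authors' actual knowledge structures.
ONTOLOGY = {
    "InductiveModeling": ["ModelClass", "SelectionCriterion", "Algorithm"],
    "ModelClass": ["Linear", "Polynomial"],
    "SelectionCriterion": ["Regularity", "Unbiasedness"],
    "Algorithm": ["Combinatorial", "Multilayered"],
}

def interface_options(concept: str) -> list:
    """Return the choices the interface should offer for a given concept."""
    return ONTOLOGY.get(concept, [])

def expand(concept: str) -> dict:
    """Recursively expand a concept into the menu tree shown to the user."""
    return {sub: expand(sub) for sub in interface_options(concept)}
```

The point of such a design is that the interface layout is derived from the domain knowledge rather than hard-coded, so extending the ontology automatically extends the interface.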

Author(s): Randall Spain ◽ Jason Saville ◽ Barry Lui ◽ Donia Slack ◽ Edward Hill ◽ ...

Because advances in broadband capabilities will soon allow first responders to access and use many forms of data when responding to emergencies, it is becoming critically important to design heads-up displays to present first responders with information in a manner that does not induce extraneous mental workload or cause undue interaction errors. Virtual reality offers a unique medium for envisioning and testing user interface concepts in a realistic and controlled environment. In this paper, we describe a virtual reality-based emergency response scenario that was designed to support user experience research for evaluating the efficacy of intelligent user interfaces for firefighters. We describe the results of a usability test that captured firefighters’ feedback and reactions to the VR scenario and the prototype intelligent user interface that presented them with task critical information through the VR headset. The paper concludes with lessons learned from our development process and a discussion of plans for future research.


Robotica ◽ 2007 ◽ Vol 25 (5) ◽ pp. 521-527
Author(s): Harsha Medicherla ◽ Ali Sekmen

SUMMARY: An understanding of how humans and robots can successfully interact to accomplish specific tasks is crucial in creating more sophisticated robots that may eventually become an integral part of human societies. A social robot needs to learn the preferences and capabilities of the people with whom it interacts so that it can adapt its behaviors for more efficient and friendly interaction. Advances in human–computer interaction technologies have been widely used to improve human–robot interaction (HRI). It is now possible to interact with robots via natural communication means such as speech. In this paper, an innovative approach to HRI via voice-controllable intelligent user interfaces is described, together with the design and implementation of such interfaces. The traditional approaches to human–robot user interface design are explained, and the advantages of the proposed approach are presented. The designed intelligent user interface, which learns user preferences and capabilities over time, can be controlled by voice. The system was successfully implemented and tested on a Pioneer 3-AT mobile robot. Twenty participants, who were assessed on spatial reasoning ability, directed the robot in spatial navigation tasks to evaluate the effectiveness of voice control in HRI. Time to complete the task, number of steps, and errors were collected. Results indicated that spatial reasoning ability and voice control were reliable predictors of the efficiency of robot teleoperation. 75% of the subjects with high spatial reasoning ability preferred voice control over manual control. The effect of spatial reasoning ability on teleoperation was lower with voice control than with manual control.
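The paper does not describe its interface at code level; as a minimal, purely illustrative sketch (the command vocabulary and the usage-tally "preference model" below are assumptions of this example, not the authors' design), a voice-command dispatcher that learns which commands a user favors might look like:

```python
from collections import Counter

# Illustrative only: map recognized command words to motion deltas on a grid.
COMMANDS = {"forward": (1, 0), "back": (-1, 0), "left": (0, -1), "right": (0, 1)}

class VoiceTeleop:
    """Toy voice-controlled teleoperation interface with a crude
    per-user preference model (a tally of issued commands)."""

    def __init__(self):
        self.position = (0, 0)
        self.usage = Counter()  # how often each command has been issued

    def handle(self, utterance: str) -> tuple:
        """Apply a recognized voice command; ignore unknown words."""
        word = utterance.strip().lower()
        if word in COMMANDS:
            dx, dy = COMMANDS[word]
            x, y = self.position
            self.position = (x + dx, y + dy)
            self.usage[word] += 1
        return self.position

    def preferred_command(self) -> str:
        """The command this user issues most often, e.g. to offer as a default."""
        return self.usage.most_common(1)[0][0]
```

A real system would sit behind a speech recognizer and a far richer preference model; the sketch only shows where learned preferences could plug into the command loop.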


2012 ◽ Vol 11 (06) ◽ pp. 1127-1154
Author(s): Benjamin Weyers ◽ Wolfram Luther ◽ Nelson Baloian

Cooperative work in learning environments has been shown to be a successful extension to traditional learning systems due to the great impact of cooperation on students' motivation and learning success. A recent evaluation study has confirmed our hypothesis that students who cooperatively constructed their roles in a cryptographic protocol as a sequence of actions in a user interface were faster in finding a correct solution than students who worked on their own. Here, students of a cooperation group collaboratively modeled a user interface for simulation of a cryptographic protocol using interactive modeling tools on a shared touch screen. In this paper, we describe an extended approach to cooperative construction of cryptographic protocols. Using a formal language for modeling and reconfiguring user interfaces, students describe a protocol step by step, modeling successive situations and thereby the actions of the protocol. The system automatically generates a colored Petri net, which is matched against an existing action logic specifying the protocol, thus allowing formal validation of the construction process. The formal approach to modeling user interfaces covers a much broader field than a simple cryptographic protocol simulation. Still, this paper seeks to investigate the use of such a formal modeling approach in the context of cooperative learning of cryptographic protocols and to develop a basis for more complex learning scenarios.
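The core validation idea (a modeled action sequence checked against an action logic specifying the protocol) can be sketched in simplified form. This is a loose illustration, not the paper's colored-Petri-net machinery: the states and actions below are invented, and the Petri net is collapsed into a plain transition relation:

```python
# Hedged sketch: an "action logic" as a transition relation
# (state, action) -> next state. The protocol steps here are invented
# placeholders, not a real cryptographic protocol specification.
ACTION_LOGIC = {
    ("start", "send_key"): "key_sent",
    ("key_sent", "encrypt"): "encrypted",
    ("encrypted", "send_message"): "done",
}

def validate(actions: list, initial: str = "start") -> bool:
    """Return True iff the modeled action sequence is a complete,
    valid run of the protocol under ACTION_LOGIC."""
    state = initial
    for action in actions:
        nxt = ACTION_LOGIC.get((state, action))
        if nxt is None:
            return False  # action not permitted in the current state
        state = nxt
    return state == "done"  # the run must also reach the final state
```

In the paper's setting, the analogous check runs over the Petri net generated from the students' interface model, so an out-of-order or missing protocol step is flagged formally rather than by inspection.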


2021
Author(s): Jide Ebenezer Taiwo Akinsola ◽ Samuel Akinseinde ◽ Olamide Kalesanwo ◽ Moruf Adeagbo ◽ Kayode Oladapo ◽ ...

In recent years, Cyber Security threat modeling has been shown to be capable of combatting and mitigating online threats. To minimize the associated risk, these threats need to be modeled with an appropriate Intelligent User Interface (IUI) design, followed by the development and evaluation of threat metrics. Artificial Intelligence (AI) has revolutionized every facet of our daily lives, and building a responsive Cyber Security threat model requires an IUI. Current threat models lack an IUI, hence they cannot deliver convenience and efficiency. Moreover, as User Interface (UI) functionality and User Experience (UX) continue to grow and deliver more impressive possibilities, present threat models lack predictive capacity, so Machine Learning paradigms must be incorporated. This deficiency can only be handled through an AI-enabled UI that applies baseline principles to the design of interfaces for effective Human-Machine Interaction (HMI) with a lasting UX. An IUI helps developers and designers enhance flexibility, usability, and the relevance of the interaction, improving communication between computer and human. Baseline principles must be applied when developing threat models to ensure an engaging UI-UX. Applying AI to UI design for Cyber Security threat modeling reduces critical design time and leads to better threat modeling applications and solutions.


2021
Author(s): Nauman Jalil

This chapter provides an overview of the subject of Intelligent User Interfaces (IUIs). The outline includes the basic concepts and terminology, a review of current technologies and recent developments in the field, common architectures used for the design of IUI systems, and finally IUI applications. IUIs attempt to address human-computer interaction issues by offering innovative communication approaches and by listening to the user. Virtual reality is also an emerging IUI area that could become the dominant interface of the future by integrating technology into the environment, making the interface at once more realistic and less obtrusive. The ultimate computer interface is more like a dialog with the computer, an interactive virtual-reality environment in which the user can communicate. The chapter also explores a methodology for the design of situation-aware frameworks for the user interface that use user and context inputs to provide details customized to the user's activities in particular circumstances. To adapt to a new situation, the user interface reconfigures itself automatically. Adjusting the user interface to the actual situation and providing a reusable list of tasks in a given situation decreases the operator's memory load. Current user interface design does not address the challenge of pulling together the details needed by situation-aware decision support systems in a way that minimizes cognitive workload.
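The situation-aware reconfiguration idea admits a very small sketch. This is illustrative only: the chapter describes the methodology in general terms, and the situation names and task lists below are invented for the example:

```python
# Illustrative sketch: map a detected situation to the task list the
# interface should expose, hiding irrelevant actions to lower the
# operator's memory load. Situation and task names are hypothetical.
SITUATION_TASKS = {
    "normal":   ["browse", "search", "configure"],
    "alarm":    ["acknowledge", "locate_source", "notify_team"],
    "shutdown": ["save_state", "confirm_exit"],
}

def reconfigure(situation: str) -> list:
    """Return the reduced task set for the current situation;
    unknown situations fall back to the normal task set."""
    return SITUATION_TASKS.get(situation, SITUATION_TASKS["normal"])
```

A fuller framework would fuse multiple context inputs (sensor data, user role, task history) before selecting the task set; the sketch only shows the reconfiguration step itself.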


Author(s):  
Dina Goren-Bar

Intelligent systems are served by Intelligent User Interfaces aimed at improving the efficiency, effectiveness, and adaptivity of the interaction between the user and the computer by representing, understanding, and implementing models. The Intelligent User Interface Model (IUIM) helps in designing and developing intelligent systems by considering their architecture and behavior. It places the interaction and dialogue between user and system at the heart of an intelligent interactive system. The IUIM comprises an architectural model, which defines the components of the model, and a conceptual model, which relates to its contents and behavior. The conceptual model defines three elements: an Adaptive User Model (including components for building and updating the user model), a Task Model (including general and domain-specific knowledge), and an Adaptive Discourse Model (assisted by an intelligent help and a learning module). We show the implementation of the model by describing an application named Stigma - A STereotypical Intelligent General Matching Agent for Improving Search Results on the Internet. Finally, we compare the new model with others, stating the differences and the advantages of the proposed model.


Author(s):  
Agnes Kukulska-Hulme

So many user interfaces have the appearance of a collection of labels, stuck onto invisible boxes whose contents remain a mystery to users until they have made the effort of opening up each box in turn and sifting through its contents. In order to explore what might be called "the language of labeling," we must first make some observations about the relationship between terms and concepts. Terms are words with special subject meanings; a term may consist of one or more "units" (e.g., user interface). As has been pointed out by Sager (1990), concepts are notoriously difficult to define; it is, however, possible to group them into four basic types:

• class concepts or entities, generally corresponding to nouns
• property concepts or qualities, for the most part corresponding to adjectives
• relation concepts, realized through various parts of speech, such as prepositions
• function concepts or activities, corresponding to nouns and verbs

Looking at the relationship between terms and concepts will help us to think about whether terms can be used to label various types of knowledge and also whether they can properly represent users' knowledge needs. The present book is structured around linguistic "concepts" in the broad sense, whereas in this chapter, when we refer to concepts, it is in the narrower terminological sense indicated above. "We can use any names we wish as labels for concepts so long as we use them consistently. The only other criterion is convenience." In special subject areas, these same criteria apply, except that communication of specialized knowledge obliges us to take account of how concepts have been labeled by others and how the concepts we are handling fit into a wider scheme. We can draw up systems of concepts and try to specify relationships between them, uncovering along the way the knowledge structures that bind them together. However, we cannot do the same with terms. Terms are existential in nature; that is to say, they signal the existence of an entity, a relationship, an activity, or a quality.


2018 ◽ Vol 2 (4) ◽ pp. 62
Author(s): Peter Ruijten ◽ Jacques Terken ◽ Sanjeev Chandramouli

Autonomous vehicles use sensors and artificial intelligence to drive themselves. Surveys indicate that people are fascinated by the idea of autonomous driving but are hesitant to relinquish control of the vehicle. Lack of trust seems to be the core reason for these concerns. To address this, an intelligent agent approach was implemented, as it has been argued that human traits increase trust in interfaces. Where other approaches mainly use anthropomorphism to shape appearances, the current approach uses anthropomorphism to shape the interaction, applying Gricean maxims (i.e., guidelines for effective conversation). The contribution of this approach was tested in a simulator that employed both a graphical and a conversational user interface, which were rated on likability, perceived intelligence, trust, and anthropomorphism. Results show that the conversational interface was trusted, liked, and anthropomorphized more, and was perceived as more intelligent, than the graphical user interface. Additionally, an interface that was portrayed as confident in making decisions scored higher on all four constructs than one that was portrayed as having low confidence. Together these results indicate that equipping autonomous vehicles with interfaces that mimic human behavior may help increase people's trust in, and consequently their acceptance of, such vehicles.

