ACM Transactions on Interactive Intelligent Systems
Latest Publications


TOTAL DOCUMENTS: 280 (FIVE YEARS: 90)

H-INDEX: 27 (FIVE YEARS: 4)

Published by Association for Computing Machinery
ISSN: 2160-6455

2021 ◽ Vol 11 (3-4) ◽ pp. 1-22
Author(s): Qiang Yang

With the rapid advances of Artificial Intelligence (AI) technologies and applications, there is increasing concern about the development and application of responsible AI technologies. Building AI technologies or machine-learning models often requires massive amounts of data, which may include sensitive, private user information collected from different sites or countries. Privacy, security, and data governance constraints rule out a brute-force process for acquiring and integrating these data. It is thus a serious challenge to protect user privacy while achieving high-performance models. This article reviews recent progress of federated learning in addressing this challenge in the context of privacy-preserving computing. Federated learning allows global AI models to be trained and used among multiple decentralized data sources with high security and privacy guarantees, as well as sound incentive mechanisms. This article presents the background, motivations, definitions, architectures, and applications of federated learning as a new paradigm for building privacy-preserving, responsible AI ecosystems.
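As a rough illustration of the paradigm the abstract describes, the following is a minimal federated-averaging sketch in Python. The linear model, learning rate, and size-weighted aggregation are illustrative assumptions, not the specific architectures surveyed in the article; real deployments add secure aggregation, encryption, and incentive mechanisms on top.

```python
# Minimal sketch of a federated-averaging round: clients train on private
# local data and share only model weights, never raw records.
import numpy as np

def local_update(global_weights, local_data, lr=0.01, epochs=1):
    """Each client refines the global model on its own private data."""
    w = global_weights.copy()
    for _ in range(epochs):
        for x, y in local_data:
            grad = (w @ x - y) * x   # squared-error gradient for a linear model
            w -= lr * grad
    return w

def federated_round(global_weights, clients):
    """Aggregate client updates, weighted by local dataset size."""
    updates = [local_update(global_weights, data) for data in clients]
    sizes = np.array([len(data) for data in clients], dtype=float)
    return np.average(updates, axis=0, weights=sizes / sizes.sum())

# Example: three clients, each holding private (x, y) pairs for y = 2*x1 + 1*x2.
rng = np.random.default_rng(0)
clients = []
for _ in range(3):
    xs = rng.normal(size=(20, 2))
    clients.append([(x, x @ np.array([2.0, 1.0])) for x in xs])

w = np.zeros(2)
for _ in range(50):
    w = federated_round(w, clients)
print(w)  # approaches [2.0, 1.0] without pooling any raw data
```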


2021 ◽ Vol 11 (3-4) ◽ pp. 1-35
Author(s): Sam Hepenstal ◽ Leishi Zhang ◽ Neesha Kodagoda ◽ B. L. William Wong

The adoption of artificial intelligence (AI) systems in environments that involve high-risk and high-consequence decision-making is severely hampered by critical design issues. These issues include system transparency and brittleness: transparency relates to (i) the explainability of results and (ii) the ability of a user to inspect and verify system goals and constraints, while brittleness relates to (iii) the ability of a system to adapt to new user demands. Transparency is a particular concern for criminal intelligence analysis, where significant ethical and trust issues arise when algorithmic and system processes are not adequately understood by a user. This prevents the adoption of potentially useful technologies in policing environments. In this article, we present a novel approach to designing a conversational agent (CA) AI system for intelligence analysis that tackles these issues. We discuss the results and implications of three different studies: a Cognitive Task Analysis to understand analyst thinking when retrieving information in an investigation, an Emergent Themes Analysis to understand the explanation needs of different system components, and an interactive experiment with a prototype conversational agent. Our prototype conversational agent, named Pan, demonstrates transparency provision and mitigates brittleness by evolving new CA intentions. We encode interactions with the CA with human-factors principles for situation recognition and use interactive visual analytics to support analyst reasoning. Our approach enables complex AI systems, such as Pan, to be used in sensitive environments, and our research has broader application than the use case discussed.
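To make the brittleness-mitigation idea concrete, here is a hypothetical sketch of an agent that registers a new candidate intention when no existing one matches a query well enough, rather than failing. This is not Pan's implementation; the string-similarity measure, threshold, and intention names are all invented for illustration.

```python
# Hypothetical sketch: evolve a new intention when no existing one matches.
from difflib import SequenceMatcher

class ConversationalAgent:
    def __init__(self, threshold=0.6):
        self.intentions = {"find_person": "find records about a person"}
        self.threshold = threshold

    def handle(self, query):
        # Score the query against each known intention's description.
        scored = {name: SequenceMatcher(None, query.lower(), desc).ratio()
                  for name, desc in self.intentions.items()}
        best = max(scored, key=scored.get)
        if scored[best] >= self.threshold:
            return f"matched intention: {best}"
        # No adequate match: propose a new intention for analyst review.
        new_name = f"intent_{len(self.intentions)}"
        self.intentions[new_name] = query.lower()
        return f"no match; proposed new intention: {new_name}"

agent = ConversationalAgent()
print(agent.handle("find records about a person of interest"))
print(agent.handle("show all vehicles linked to this address"))
```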


2021 ◽ Vol 11 (3-4) ◽ pp. 1-29
Author(s): Andreas Hinterreiter ◽ Christian Steinparz ◽ Moritz Schöfl ◽ Holger Stitz ◽ Marc Streit

In problem-solving, a path towards a solution can be viewed as a sequence of decisions. The decisions, made by humans or computers, describe a trajectory through a high-dimensional representation space of the problem. By means of dimensionality reduction, these trajectories can be visualized in a lower-dimensional space. Such embedded trajectories have previously been applied to a wide variety of data, but analysis has focused almost exclusively on the self-similarity of single trajectories. In contrast, we describe patterns emerging from drawing many trajectories—for different initial conditions, end states, and solution strategies—in the same embedding space. We argue that general statements about problem-solving tasks and solving strategies can be made by interpreting these patterns. We explore and characterize such patterns in trajectories resulting from human and machine-made decisions in a variety of application domains: logic puzzles (Rubik's cube), strategy games (chess), and optimization problems (neural network training). We also discuss the importance of suitably chosen representation spaces and similarity metrics for the embedding.
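The core mechanics can be sketched in a few lines: collect many trajectories of high-dimensional states, fit one embedding on all states so the trajectories share a common space, then draw each as a path. In this sketch PCA stands in for whatever projection method a real analysis would choose, and a noisy walk toward a goal stands in for actual puzzle, game, or optimization states.

```python
# Sketch: embed many solution trajectories into one shared 2-D space.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)

def solve(start, goal, steps=30):
    """Toy 'solver': a noisy walk from a start state toward a goal state."""
    traj = [start]
    for _ in range(steps):
        drift = (goal - traj[-1]) * 0.2
        traj.append(traj[-1] + drift + rng.normal(scale=0.05, size=start.shape))
    return np.stack(traj)

dim, goal = 20, np.zeros(20)
trajectories = [solve(rng.normal(size=dim), goal) for _ in range(10)]

# Fit one embedding on ALL states so trajectories are comparable,
# then split the embedded points back into per-trajectory paths.
all_states = np.vstack(trajectories)
embedded = PCA(n_components=2).fit_transform(all_states)
paths = np.split(embedded, len(trajectories))

for p in paths[:3]:
    print(p[0], "->", p[-1])  # different starts converging on the shared goal
```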


2021 ◽ Vol 11 (3-4) ◽ pp. 1-32
Author(s): Alain Starke ◽ Martijn Willemsen ◽ Chris Snijders

How can recommender interfaces help users to adopt new behaviors? In the behavioral change literature, social norms and other nudges are studied to understand how people can be convinced to take action (e.g., towel re-use is boosted when stating that "75% of hotel guests" do so), but most of these nudges are not personalized. In contrast, recommender systems know what to recommend in a personalized way, but not much human-computer interaction (HCI) research has considered how personalized advice should be presented to help users change their current habits. We examine the value of depicting normative messages (e.g., "75% of users do X"), based on actual user data, in a personalized energy recommender interface called "Saving Aid." In a study among 207 smart thermostat owners, we compared three different normative explanations ("Global," "Similar," and "Experienced" norm rates) to a non-social baseline ("kWh savings"). Although none of the norms increased the total number of chosen measures directly, we show that depicting high peer adoption rates alongside energy-saving measures increased the likelihood that they would be chosen from a list of recommendations. In addition, we show that depicting social norms positively affects a user's evaluation of a recommender interface.
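A small sketch of how such norm messages could be computed from user data follows. The "Global" and "Similar" framings follow the study's naming, but the adoption records, the similarity grouping, and the measure names are invented here for illustration.

```python
# Sketch: derive "X% of users chose Y" norm messages from adoption data.
adoption = {  # measure -> set of user ids who adopted it (hypothetical data)
    "lower_thermostat_1C": {1, 2, 3, 5, 8, 9},
    "insulate_pipes":      {2, 5},
    "smart_schedule":      {1, 2, 4, 6, 7, 8},
}
all_users = set(range(1, 11))
similar_to_target = {2, 5, 8, 9}  # e.g., same household profile (assumed)

def norm_message(measure, group, label):
    rate = len(adoption[measure] & group) / len(group)
    return f'"{rate:.0%} of {label} chose {measure}"'

for m in adoption:
    print(norm_message(m, all_users, "users"),                 # Global norm
          norm_message(m, similar_to_target, "similar users")) # Similar norm
```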


2021 ◽ Vol 11 (3-4) ◽ pp. 1-34
Author(s): Yu Zhang ◽ Bob Coecke ◽ Min Chen

In many applications, while machine learning (ML) can be used to derive algorithmic models to aid decision processes, it is often difficult to learn a precise model when the number of similar data points is limited. One example of such applications is data reconstruction from historical visualizations, many of which encode precious data whose numerical records are lost. On the one hand, there is not enough similar data for training an ML model. On the other hand, manual reconstruction of the data is both tedious and arduous. Hence, a desirable approach is to train an ML model dynamically using interactive classification, so that, after some training, the model can hopefully complete the data reconstruction tasks with less human intervention. For this approach to be effective, the number of annotated data objects used for training the ML model should be as small as possible, while the number of data objects to be reconstructed automatically should be as large as possible. In this article, we present a novel technique for the machine to initiate intelligent interactions to reduce the user's interaction cost in interactive classification tasks. The technique of machine-initiated intelligent interaction (MI3) builds on a generic framework featuring active sampling and default labeling. To demonstrate the MI3 approach, we use the well-known cholera map visualization by John Snow as an example, as it features three instances of MI3 pipelines. The experiment confirmed the merits of the MI3 approach.
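The generic loop behind active sampling plus default labeling can be sketched as follows: the machine labels items it is confident about itself and asks the human only about the most uncertain one each round. The classifier, confidence threshold, and toy data are illustrative assumptions, not the MI3 paper's exact pipeline.

```python
# Sketch: interactive classification with default labeling + active sampling.
import numpy as np
from sklearn.linear_model import LogisticRegression

def mi3_style_loop(X, oracle, seed_idx, confidence=0.9, rounds=5):
    labels = {i: oracle(i) for i in seed_idx}  # small annotated seed set
    for _ in range(rounds):
        clf = LogisticRegression().fit(X[list(labels)],
                                       [labels[i] for i in labels])
        proba = clf.predict_proba(X).max(axis=1)
        # Default labeling: accept the model's confident predictions.
        for i in range(len(X)):
            if i not in labels and proba[i] >= confidence:
                labels[i] = int(clf.predict(X[i:i + 1])[0])
        # Active sampling: ask the human about the least confident item.
        unlabeled = [i for i in range(len(X)) if i not in labels]
        if not unlabeled:
            break
        ask = min(unlabeled, key=lambda i: proba[i])
        labels[ask] = oracle(ask)
    return labels

# Hypothetical usage: ground truth known only to the "oracle" (the human).
rng = np.random.default_rng(2)
X = rng.normal(size=(200, 2))
truth = (X[:, 0] + X[:, 1] > 0).astype(int)
seed = [int(truth.argmax()), int(truth.argmin())]  # seed covers both classes
labels = mi3_style_loop(X, oracle=lambda i: truth[i], seed_idx=seed)
print(f"{len(labels)} of {len(X)} objects labeled, "
      f"{sum(labels[i] == truth[i] for i in labels)} correct")
```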


2021 ◽ Vol 11 (3-4) ◽ pp. 1-46
Author(s): Oswald Barral ◽ Sébastien Lallé ◽ Alireza Iranpour ◽ Cristina Conati

We study the effectiveness of adaptive interventions at helping users process textual documents with embedded visualizations, a form of multimodal documents known as Magazine-Style Narrative Visualizations (MSNVs). The interventions are meant to dynamically highlight in the visualization the datapoints that are described in the textual sentence currently being read by the user, as captured by eye-tracking. These interventions were previously evaluated in two user studies that involved 98 participants reading excerpts of real-world MSNVs during a 1-hour session. Participants' outcomes included their subjective feedback about the guidance, as well as their reading time and score on a set of comprehension questions. Results showed that the interventions can increase comprehension of the MSNV excerpts for users with lower levels of a cognitive skill known as visualization literacy. In this article, we aim to further investigate this result by leveraging eye-tracking to analyze in depth how the participants processed the interventions depending on their levels of visualization literacy. We first analyzed summative gaze metrics that capture how users process and integrate the key components of the narrative visualizations. Second, we mined the salient patterns in the users' scanpaths to contextualize how users sequentially process these components. Results indicate that the interventions succeed in guiding attention to salient components of the narrative visualizations, especially by generating more transitions between key components of the visualization (i.e., datapoints, labels, and legend), as well as between the two modalities (text and visualization). We also show that the interventions help users with lower levels of visualization literacy to better map datapoints to the legend, which likely contributed to their improved comprehension of the documents. These findings shed light on how adaptive interventions help users with different levels of visualization literacy, informing the design of personalized narrative visualizations.
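One of the summative gaze metrics mentioned above, transitions between areas of interest (AOIs), can be computed from a fixation sequence as sketched below. The fixation sequence and AOI names are invented for illustration; real data would come from an eye tracker mapped to the document's AOIs.

```python
# Sketch: count AOI transitions and cross-modal (text <-> visualization) moves.
from collections import Counter
from itertools import pairwise  # Python 3.10+

# A fixation sequence, each fixation already mapped to an AOI (assumed).
fixations = ["text", "datapoint", "legend", "datapoint", "label",
             "text", "datapoint", "legend", "text"]

transitions = Counter((a, b) for a, b in pairwise(fixations) if a != b)
for (src, dst), n in transitions.most_common():
    print(f"{src} -> {dst}: {n}")

# Cross-modal transitions as a single summary number: exactly one
# endpoint of the transition is the text modality.
cross = sum(n for (a, b), n in transitions.items()
            if (a == "text") != (b == "text"))
print("text<->visualization transitions:", cross)
```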


2021 ◽ Vol 11 (3-4) ◽ pp. 1-23
Author(s): Linhao Meng ◽ Yating Wei ◽ Rusheng Pan ◽ Shuyue Zhou ◽ Jianwei Zhang ◽ ...

Federated Learning (FL) provides a powerful solution to distributed machine learning on a large corpus of decentralized data. It ensures privacy and security by performing computation on devices (which we refer to as clients) based on local data to improve the shared global model. However, the inaccessibility of the data and the invisibility of the computation make it challenging to interpret and analyze the training process, especially to distinguish potential client anomalies. Identifying these anomalies can help experts diagnose and improve FL models. For this reason, we propose a visual analytics system, VADAF, to depict the training dynamics and facilitate the analysis of potential client anomalies. Specifically, we design a visualization scheme that supports massive training dynamics in the FL environment. Moreover, we introduce an anomaly detection method to detect potential client anomalies, which are further analyzed based on both visual inspection and objective estimation of the client models. Three case studies demonstrate the effectiveness of our system in understanding the FL training process and supporting abnormal client detection and analysis.
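In the spirit of the client anomaly detection described above (the paper's actual method and thresholds are not reproduced here), a simple baseline is to flag clients whose model updates deviate strongly from the average update. The z-score threshold and simulated updates below are illustrative assumptions.

```python
# Sketch: flag clients whose updates are outliers relative to the mean update.
import numpy as np

def flag_anomalous_clients(updates, z_threshold=2.5):
    """updates: dict of client_id -> flattened model-update vector."""
    ids = list(updates)
    U = np.stack([updates[i] for i in ids])
    dists = np.linalg.norm(U - U.mean(axis=0), axis=1)
    z = (dists - dists.mean()) / dists.std()
    return [cid for cid, score in zip(ids, z) if score > z_threshold]

rng = np.random.default_rng(3)
updates = {f"client_{i}": rng.normal(scale=0.1, size=100) for i in range(20)}
updates["client_7"] += 5.0              # simulate a faulty or malicious client
print(flag_anomalous_clients(updates))  # -> ['client_7']
```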


2021 ◽ Vol 11 (3-4) ◽ pp. 1-31
Author(s): Vinícius Segura ◽ Simone D. J. Barbosa

Nowadays, we have access to data of unprecedented volume, high dimensionality, and complexity. To extract novel insights from such complex and dynamic data, we need effective and efficient strategies. One such strategy is to combine data analysis and visualization techniques, which are the essence of visual analytics applications. After the knowledge discovery process, a major challenge is to filter the essential information that has led to a discovery and to communicate the findings to other people, explaining the decisions they may have made based on the data. We propose to record and use the trace left by exploratory data analysis, in the form of a user interaction history, to aid this process. With the trace, users can choose the desired interaction steps and create a narrative, sharing the acquired knowledge with readers. To achieve our goal, we have developed the BONNIE (Building Online Narratives from Noteworthy Interaction Events) framework. BONNIE comprises a log model to register the interaction events, auxiliary code to help developers instrument their own code, and an environment to view users' own interaction history and build narratives. This article presents our proposal for communicating discoveries in visual analytics applications, the BONNIE framework, and the studies we conducted to evaluate our solution. After two user studies (the first focused on history visualization and the second on narrative creation), our solution proved promising, with mostly positive feedback and results from a Technology Acceptance Model (TAM) questionnaire.
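A hypothetical sketch of the kind of interaction-log model the abstract describes follows: each user action is recorded as an event, and a narrative is assembled from a user-chosen subset of the history. The field names and API are assumptions for illustration; BONNIE's actual log schema is defined in the article.

```python
# Sketch: log interaction events, then build a narrative from chosen steps.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class InteractionEvent:
    timestamp: datetime
    action: str   # e.g., "filter", "zoom", "select"
    target: str   # which view or widget was acted on
    params: dict = field(default_factory=dict)

class InteractionHistory:
    def __init__(self):
        self.events: list[InteractionEvent] = []

    def log(self, action, target, **params):
        self.events.append(InteractionEvent(datetime.now(), action, target, params))

    def build_narrative(self, chosen_indices):
        """Turn the selected steps of the history into a shareable story."""
        return [f"{self.events[i].action} on {self.events[i].target} "
                f"with {self.events[i].params}" for i in chosen_indices]

history = InteractionHistory()
history.log("filter", "sales_chart", region="EMEA")
history.log("zoom", "sales_chart", range=(2019, 2021))
history.log("select", "outlier_table", row=42)
print(history.build_narrative([0, 2]))  # keep only the noteworthy steps
```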


2021 ◽ Vol 11 (3-4) ◽ pp. 1-45
Author(s): Sina Mohseni ◽ Niloofar Zarei ◽ Eric D. Ragan

The need for interpretable and accountable intelligent systems grows along with the prevalence of artificial intelligence (AI) applications used in everyday life. Explainable AI (XAI) systems are intended to self-explain the reasoning behind system decisions and predictions. Researchers from different disciplines work together to define, design, and evaluate explainable systems. However, scholars from different disciplines focus on different objectives and fairly independent topics of XAI research, which poses challenges for identifying appropriate design and evaluation methodology and consolidating knowledge across efforts. To this end, this article presents a survey and framework intended to share knowledge and experiences of XAI design and evaluation methods across multiple disciplines. To support diverse design goals and evaluation methods in XAI research, after a thorough review of XAI-related papers in the fields of machine learning, visualization, and human-computer interaction, we present a categorization of XAI design goals and evaluation methods. Our categorization maps design goals for different XAI user groups to their evaluation methods. From our findings, we develop a framework with step-by-step design guidelines paired with evaluation methods to close the iterative design and evaluation cycles in multidisciplinary XAI teams. Further, we provide summarized, ready-to-use tables of evaluation methods and recommendations for different goals in XAI research.

