Explaining in Time

2021 ◽  
Vol 10 (3) ◽  
pp. 1-23
Author(s):  
Thomas Arnold ◽  
Daniel Kasenberg ◽  
Matthias Scheutz

Explainability has emerged as a critical AI research objective, but the breadth of proposed methods and application domains suggests that criteria for explanation vary greatly. In particular, what counts as a good explanation, and what kinds of explanation are computationally feasible, have become trickier in light of opaque “black box” systems such as deep neural networks. Explanation in such cases has drifted from what many philosophers stipulated as having to involve deductive and causal principles to mere “interpretation,” which approximates what happened in the target system to varying degrees. However, such post hoc constructed rationalizations are highly problematic for social robots that operate interactively in spaces shared with humans. For in such social contexts, explanations of behavior, and, in particular, justifications for violations of expected behavior, should make reference to socially accepted principles and norms. In this article, we show how a social robot’s actions can face explanatory demands for how it came to act on its decision, what goals, tasks, or purposes its design had those actions pursue, and what norms or social constraints the system recognizes in the course of its action. As a result, we argue that explanations for social robots will need to be accurate representations of the system’s operation along causal, purposive, and justificatory lines. These explanations will need to generate appropriate references to principles and norms; explanations based on mere “interpretability” will ultimately fail to connect the robot’s behaviors to its appropriate determinants. We then lay out the foundations for a cognitive robotic architecture for HRI, together with particular component algorithms, for generating explanations and engaging in justificatory dialogues with human interactants. Such explanations track the robot’s actual decision-making and behavior, which themselves are determined by normative principles the robot can describe and use for justifications.
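To make the three explanatory dimensions concrete, the following Python sketch renders a single logged decision into causal, purposive, and justificatory statements. It illustrates the distinction only, not the authors' cognitive architecture; the DecisionTrace fields, the norms, and the scenario are all invented for the example.

```python
from dataclasses import dataclass, field

@dataclass
class DecisionTrace:
    """Hypothetical record of one robot decision (illustrative only)."""
    action: str                                              # what the robot did
    triggering_state: str                                     # causal antecedent in the world model
    goal: str                                                 # task or purpose the action served
    norms_applied: list = field(default_factory=list)        # norms the robot followed
    norms_overridden: list = field(default_factory=list)     # norms set aside, with reasons

def explain(trace: DecisionTrace) -> dict:
    """Produce causal, purposive, and justificatory explanations from the same
    decision record, so each explanation tracks what actually determined the
    behaviour rather than a post hoc rationalization."""
    causal = (f"I performed '{trace.action}' because I registered "
              f"'{trace.triggering_state}'.")
    purposive = (f"Performing '{trace.action}' was my step toward the goal "
                 f"'{trace.goal}'.")
    justificatory = (
        f"This was permissible because I followed: {', '.join(trace.norms_applied)}."
        + (f" I set aside: {', '.join(trace.norms_overridden)}."
           if trace.norms_overridden else ""))
    return {"causal": causal, "purposive": purposive, "justificatory": justificatory}

# Example: justifying a violation of an expected behaviour.
trace = DecisionTrace(
    action="blocking the doorway",
    triggering_state="a person approaching a wet, slippery floor",
    goal="prevent injury",
    norms_applied=["avoid foreseeable harm to humans"],
    norms_overridden=["do not obstruct people's paths (overridden for safety)"])
for kind, text in explain(trace).items():
    print(f"{kind}: {text}")
```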

Author(s):  
Vignesh Prasad ◽  
Ruth Stock-Homburg ◽  
Jan Peters

For some years now, the use of social, anthropomorphic robots in various situations has been on the rise. These are robots developed to interact with humans and are equipped with corresponding extremities. They already support human users in various industries, such as retail, gastronomy, hotels, education and healthcare. During such Human-Robot Interaction (HRI) scenarios, physical touch plays a central role in the various applications of social robots, as interactive non-verbal behaviour is a key factor in making the interaction more natural. Shaking hands is a simple, natural interaction used commonly in many social contexts and is seen as a symbol of greeting, farewell and congratulations. In this paper, we take a look at the existing state of Human-Robot Handshaking research, categorise the works based on their focus areas, and draw out the major findings of these areas while analysing their pitfalls. We mainly see that some form of synchronisation exists during the different phases of the interaction. In addition to this, we also find that additional factors like gaze, voice, and facial expressions can affect the perception of a robotic handshake, and that internal factors like personality and mood can affect the way in which handshaking behaviours are executed by humans. Based on the findings and insights, we finally discuss possible ways forward for research on such physically interactive behaviours.


2012 ◽  
Vol 9 (4) ◽  
pp. 378-393 ◽  
Author(s):  
Alice Marwick

People create profiles on social network sites and Twitter accounts against the background of an audience. This paper argues that closely examining content created by others and looking at one’s own content through other people’s eyes, a common part of social media use, should be framed as social surveillance. While social surveillance is distinguished from traditional surveillance along three axes (power, hierarchy, and reciprocity), its effects, including behavior modification, are common to traditional surveillance. Drawing on ethnographic studies of United States populations, I look at social surveillance, how it is practiced, and its impact on people who engage in it. I use Foucault’s concept of capillaries of power to demonstrate that social surveillance assumes the power differentials evident in everyday interactions rather than the hierarchical power relationships assumed in much of the surveillance literature. Social media involves a collapse of social contexts and social roles, complicating boundary work but facilitating social surveillance. Individuals strategically reveal, disclose, and conceal personal information to create connections with others and tend social boundaries. These processes are normal parts of day-to-day life in communities that are highly connected through social media.


2021 ◽  
Author(s):  
J. Eric T. Taylor ◽  
Graham Taylor

Artificial intelligence powered by deep neural networks has reached a level of complexity where it can be difficult or impossible to express how a model makes its decisions. This black-box problem is especially concerning when the model makes decisions with consequences for human well-being. In response, an emerging field called explainable artificial intelligence (XAI) aims to increase the interpretability, fairness, and transparency of machine learning. In this paper, we describe how cognitive psychologists can make contributions to XAI. The human mind is also a black box, and cognitive psychologists have over one hundred and fifty years of experience modeling it through experimentation. We ought to translate the methods and rigour of cognitive psychology to the study of artificial black boxes in the service of explainability. We provide a review of XAI for psychologists, arguing that current methods possess a blind spot that can be complemented by the experimental cognitive tradition. We also provide a framework for research in XAI, highlight exemplary cases of experimentation within XAI inspired by psychological science, and provide a tutorial on experimenting with machines. We end by noting the advantages of an experimental approach and invite other psychologists to conduct research in this exciting new field.
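One way to read the proposed experimental approach is as psychophysics applied to a model: hold a baseline stimulus fixed, sweep a single input dimension, and record the response curve. The sketch below is a minimal illustration of such a probe, not the paper's tutorial; the scikit-learn classifier, the synthetic dataset, and the choice of which feature to sweep are arbitrary placeholders.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

# Train a small "black box" model to probe.
X, y = make_classification(n_samples=2000, n_features=8, random_state=0)
model = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=500,
                      random_state=0).fit(X, y)

# Psychophysics-style probe: fix a baseline stimulus at the feature means,
# sweep one feature over its observed range, and record the model's response.
baseline = X.mean(axis=0)
feature_to_vary = 3                      # arbitrary choice for illustration
levels = np.linspace(X[:, feature_to_vary].min(),
                     X[:, feature_to_vary].max(), num=25)

stimuli = np.tile(baseline, (len(levels), 1))
stimuli[:, feature_to_vary] = levels
response_curve = model.predict_proba(stimuli)[:, 1]  # P(class 1) at each level

for level, p in zip(levels, response_curve):
    print(f"feature[{feature_to_vary}] = {level:+.2f} -> P(class 1) = {p:.3f}")
```

The same controlled-manipulation logic extends to factorial designs and repeated measures, which is where the experimental tradition the authors invoke has the most to offer.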


2020 ◽  
Vol 34 (07) ◽  
pp. 10901-10908 ◽  
Author(s):  
Abdullah Hamdi ◽  
Matthias Mueller ◽  
Bernard Ghanem

One major factor impeding more widespread adoption of deep neural networks (DNNs) is their lack of robustness, which is essential for safety-critical applications such as autonomous driving. This has motivated much recent work on adversarial attacks for DNNs, which mostly focus on pixel-level perturbations void of semantic meaning. In contrast, we present a general framework for adversarial attacks on trained agents, which covers semantic perturbations to the environment of the agent performing the task as well as pixel-level attacks. To do this, we re-frame the adversarial attack problem as learning a distribution of parameters that always fools the agent. In the semantic case, our proposed adversary (denoted as BBGAN) is trained to sample parameters that describe the environment with which the black-box agent interacts, such that the agent performs its dedicated task poorly in this environment. We apply BBGAN on three different tasks, primarily targeting aspects of autonomous navigation: object detection, self-driving, and autonomous UAV racing. On these tasks, BBGAN can generate failure cases that consistently fool a trained agent.
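The core idea of learning a distribution over environment parameters under which a black-box agent fails can be sketched far more simply than the paper's GAN-based adversary. The toy example below substitutes a plain score-function (REINFORCE-style) update for BBGAN and uses an invented agent score; it only illustrates the "distribution that fools the agent" framing, not the actual method or tasks.

```python
import numpy as np

rng = np.random.default_rng(0)

def agent_score(env_params):
    """Toy stand-in for a trained black-box agent: it performs well when the
    environment parameters sit near 1.0 (a 'nominal' environment) and degrades
    as they move away. A real use would query the agent in simulation."""
    return float(np.exp(-np.sum((env_params - 1.0) ** 2)))

# Adversary: a Gaussian over 3 environment parameters (e.g. lighting, pose, speed).
# We adapt its mean so that sampled environments make the agent fail, using a
# simple score-function gradient estimate in place of the paper's GAN.
mu, sigma, lr = np.full(3, 0.7), 0.5, 0.1
for step in range(200):
    samples = mu + sigma * rng.standard_normal((64, 3))    # candidate environments
    scores = np.array([agent_score(s) for s in samples])   # agent performance per sample
    advantages = scores.mean() - scores                     # large when the agent does badly
    grad = ((advantages[:, None] * (samples - mu)) / sigma**2).mean(axis=0)
    mu += lr * grad                                          # shift the distribution toward failures
    if step % 50 == 0:
        print(f"step {step:3d}  mean agent score {scores.mean():.3f}")

print("learned adversarial environment mean:", np.round(mu, 2))
```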


Author(s):  
Daniel E. Adkins ◽  
Kelli M. Rasmussen ◽  
Anna R. Docherty

It is well established that extreme social adversity can lead to negative health outcomes decades after the resolution of the precipitating environmental insult. Although the underlying mechanisms through which such adversity gets “under the skin” to become biologically embedded have long been considered a black box, recent research has indicated an important mediating role for epigenetic mechanisms—molecular modifications that regulate gene activity without changing the DNA sequence. With technical and scientific developments now enabling genome-wide epigenetic studies in humans, behavioral researchers have an unprecedented opportunity to empirically map the ways in which social dynamics become epigenetically embedded, influencing downstream gene expression, health, and behavior. This chapter examines the current state of social epigenetics research and discusses the opportunities and challenges facing this emerging field.


Author(s):  
Michael A. Lones ◽  
Andy M. Tyrrell

Programming is a process of optimization: taking a specification, which tells us what we want, and transforming it into an implementation, a program, which causes the target system to do exactly what we want. Conventionally, this optimization is achieved through manual design. However, manual design can be slow and error-prone, and recently there has been increasing interest in automatic programming: using computers to semiautomate the process of refining a specification into an implementation. Genetic programming is a developing approach to automatic programming, which, rather than treating programming as a design process, treats it as a search process. However, the space of possible programs is infinite, and finding the right program requires a powerful search process. Fortunately for us, we are surrounded by a monotonous search process capable of producing viable systems of great complexity: evolution. Evolution is the inspiration behind genetic programming. Genetic programming copies the process and genetic operators of biological evolution but does not take any inspiration from the biological representations to which they are applied. It can be argued that the program representation that genetic programming does use is not well suited to evolution. Biological representations, by comparison, are a product of evolution and, a fact to which this book is testament, describe computational structures. This chapter is about enzyme genetic programming, a form of genetic programming that mimics biological representations in an attempt to improve the evolvability of programs. Although it would be an advantage to have a familiarity with both genetic programming and biological representations, concise introductions to both these subjects are provided. According to modern biological understanding, evolution is solely responsible for the complexity we see in the structure and behavior of biological organisms. Nevertheless, evolution itself is a simple process that can occur in any population of imperfectly replicating entities where the right to replicate is determined by a process of selection. Consequently, given an appropriate model of such an environment, evolution can also occur within computers.
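The claim that evolution needs only imperfectly replicating entities plus selection can be made concrete with a minimal evolutionary loop. The sketch below is a generic illustration of that loop (truncation selection and bitwise mutation on bitstrings), not enzyme genetic programming itself; the target, population size, and mutation rate are arbitrary.

```python
import random

random.seed(0)

TARGET = [1] * 32                        # the "specification" the search should satisfy
POP_SIZE, MUTATION_RATE = 50, 0.02

def fitness(genome):
    """How close an individual is to the target behaviour."""
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome):
    """Imperfect replication: each bit may flip with a small probability."""
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

# Random initial population of candidate "programs" (here, just bitstrings).
population = [[random.randint(0, 1) for _ in range(len(TARGET))]
              for _ in range(POP_SIZE)]

for generation in range(200):
    # Selection: the fitter half earns the right to replicate.
    population.sort(key=fitness, reverse=True)
    parents = population[: POP_SIZE // 2]
    # Replication with variation: each parent produces two imperfect copies.
    population = [mutate(p) for p in parents for _ in range(2)]
    best = max(population, key=fitness)
    if fitness(best) == len(TARGET):
        print(f"perfect solution found at generation {generation}")
        break
else:
    print("best fitness after 200 generations:",
          fitness(max(population, key=fitness)))
```

Genetic programming replaces the bitstrings with executable program structures; the chapter's enzyme representation changes how those structures are encoded, not the evolutionary loop itself.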


2020 ◽  
Vol 12 (1) ◽  
pp. 113-121
Author(s):  
Carla Piazzon Ramos Vieira ◽  
Luciano Antonio Digiampietri

The technologies supporting Artificial Intelligence (AI) have advanced rapidly over the past few years, and AI is becoming commonplace in every aspect of life, from self-driving cars to earlier health diagnosis. For this to happen soon, the community must overcome the barrier of explainability, an inherent problem of the latest models (e.g. Deep Neural Networks) that was not present in the previous wave of AI (linear and rule-based models). Most of these recent models are used as black boxes, with little or no understanding of how different features influence the model's predictions, which undermines algorithmic transparency. In this paper, we focus on how much we can understand of the decisions made by an SVM classifier using a post-hoc, model-agnostic approach. Furthermore, we train a tree-based model (inherently interpretable) on labels produced by the SVM, called secondary training data, to provide explanations, and we compare the permutation importance method to more commonly used measures such as accuracy, showing that our methods are both more reliable and more meaningful. We also outline the main challenges for such methods and conclude that model-agnostic interpretability is a key component in making machine learning more trustworthy.
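The surrogate pipeline the abstract describes (train an SVM, relabel the training inputs with its predictions as "secondary training data", fit an interpretable tree on those labels, and inspect permutation importance) can be sketched directly with scikit-learn. The dataset, hyperparameters, and depth limit below are placeholders rather than the paper's actual setup.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.inspection import permutation_importance

# Black-box model: an SVM with an RBF kernel.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
svm = SVC(kernel="rbf", gamma="scale").fit(X_train, y_train)

# "Secondary training data": the same inputs relabelled with the SVM's own
# predictions, so the surrogate imitates the black box rather than the ground truth.
secondary_labels = svm.predict(X_train)
surrogate = DecisionTreeClassifier(max_depth=4, random_state=0)
surrogate.fit(X_train, secondary_labels)

# Fidelity: how often the interpretable surrogate agrees with the SVM on held-out data.
fidelity = np.mean(surrogate.predict(X_test) == svm.predict(X_test))
print(f"surrogate fidelity to the SVM: {fidelity:.3f}")

# Permutation importance of the original SVM, a model-agnostic check on which
# features actually drive its decisions.
result = permutation_importance(svm, X_test, y_test, n_repeats=10, random_state=0)
feature_names = load_breast_cancer().feature_names
for i in np.argsort(result.importances_mean)[::-1][:5]:
    print(f"{feature_names[i]:<25} {result.importances_mean[i]:.4f}")
```

Reporting fidelity alongside the surrogate's explanations is what distinguishes this from simply trusting accuracy on the original labels, which is the comparison the abstract draws.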

