Learning tractable probabilistic models for moral responsibility and blame

2021 ◽  
Vol 35 (2) ◽  
pp. 621-659
Author(s):  
Lewis Hammond ◽  
Vaishak Belle

Abstract Moral responsibility is a major concern in autonomous systems, with applications ranging from self-driving cars to kidney exchanges. Although there have been recent attempts to formalise responsibility and blame, among similar notions, the problem of learning within these formalisms has so far gone unaddressed. From the viewpoint of such systems, the urgent questions are: (a) How can models of moral scenarios and blameworthiness be extracted and learnt automatically from data? (b) How can judgements be computed effectively and efficiently, given the split-second decision points faced by some systems? By building on constrained tractable probabilistic learning, we propose and implement a hybrid (between data-driven and rule-based methods) learning framework for inducing models of such scenarios automatically from data and reasoning tractably from them. We report on experiments that compare our system with human judgement in three illustrative domains: lung cancer staging, teamwork management, and trolley problems.
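To make the learning-to-blame pipeline concrete, here is a minimal, purely illustrative Python sketch (not the authors' framework): it scores the blameworthiness of an action as its expected harm relative to the least harmful available alternative, one common cost-based formalisation. All variable names and probabilities below are hypothetical.

```python
# Toy cost-based blameworthiness over a two-action moral scenario.
# Everything here is an illustrative assumption, not the paper's model.

actions = ["swerve", "stay"]

# P(harm | action): hypothetical likelihoods of a harmful outcome,
# as might be induced from data by a tractable probabilistic model.
p_harm = {"swerve": 0.2, "stay": 0.7}

cost = 1.0  # cost if the harmful outcome occurs (illustrative scale)

def expected_harm(action: str) -> float:
    return p_harm[action] * cost

def blameworthiness(action: str) -> float:
    # Blame for an action = its expected harm minus that of the best
    # available alternative, floored at zero.
    best = min(expected_harm(a) for a in actions)
    return max(0.0, expected_harm(action) - best)

for a in actions:
    print(a, blameworthiness(a))  # swerve 0.0, stay 0.5
```

In the paper's setting, the conditional probabilities would be learnt from data rather than specified by hand, and inference over the resulting model would remain tractable by construction.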

2019 ◽  
Vol 25 (1) ◽  
pp. 21-27
Author(s):  
Joey Lee ◽  
Benedikt Groß ◽  
Raphael Reimann

Abstract Self-driving cars and autonomous transportation systems are projected to create radical societal changes, yet public understanding and trust of self-driving cars and autonomous systems remain limited. The authors present a new mixed-reality experience designed to give its users insight into how self-driving cars operate. A single-person vehicle equipped with sensors provides its user with data-driven visual feedback in a virtual-reality headset as they navigate physical space. The authors explore how immersive experiences might provide ‘conceptual affordances’ that lower the entry barrier for diverse audiences to discuss complex topics.


2021 ◽  
Vol 31 (2) ◽  
pp. 021105
Author(s):  
Vipin Agarwal ◽  
Rui Wang ◽  
Balakumar Balachandran

Sensors ◽  
2021 ◽  
Vol 21 (23) ◽  
pp. 8009
Author(s):  
Abdulmajid Murad ◽  
Frank Alexander Kraemer ◽  
Kerstin Bach ◽  
Gavin Taylor

Data-driven forecasts of air quality have recently achieved more accurate short-term predictions. However, despite their success, most current data-driven solutions lack proper quantification of model uncertainty to communicate how much the forecasts can be trusted. Recently, several practical tools for estimating uncertainty have been developed in probabilistic deep learning, but they have not yet been applied and compared empirically in the domain of air quality forecasting. This work therefore applies state-of-the-art uncertainty quantification techniques in a real-world setting of air quality forecasts. Through extensive experiments, we describe how to train probabilistic models and evaluate their predictive uncertainties in terms of empirical performance, reliability of confidence estimates, and practical applicability. We also propose improving these models using “free” adversarial training and by exploiting the temporal and spatial correlation inherent in air quality data. Our experiments demonstrate that the proposed models quantify uncertainty in data-driven air quality forecasts better than previous work. Overall, Bayesian neural networks provide the most reliable uncertainty estimates but can be challenging to implement and scale. Other, more scalable methods, such as deep ensembles, Monte Carlo (MC) dropout, and stochastic weight averaging-Gaussian (SWAG), can perform well if applied correctly, though with different tradeoffs and slight variations in performance metrics. Finally, our results show the practical impact of uncertainty estimation and demonstrate that probabilistic models are indeed more suitable for making informed decisions.
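Of the methods compared, MC dropout is perhaps the simplest to retrofit onto an existing network. The PyTorch sketch below is a generic illustration (the architecture, layer sizes, and dropout rate are assumptions, not the paper's configuration): dropout is left stochastic at prediction time, so repeated forward passes yield a predictive mean and a spread that serves as the uncertainty estimate.

```python
# Generic MC-dropout sketch for a regression forecaster (illustrative only).
import torch
import torch.nn as nn

class Forecaster(nn.Module):
    def __init__(self, n_features: int, p_drop: float = 0.1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(),
            nn.Dropout(p_drop),  # kept active at inference for MC dropout
            nn.Linear(64, 1),
        )

    def forward(self, x):
        return self.net(x)

@torch.no_grad()
def mc_dropout_predict(model: Forecaster, x: torch.Tensor, n_samples: int = 50):
    model.train()  # keep dropout stochastic at prediction time
    preds = torch.stack([model(x) for _ in range(n_samples)])
    return preds.mean(dim=0), preds.std(dim=0)  # forecast and uncertainty

# Usage with dummy inputs standing in for air-quality features:
model = Forecaster(n_features=8)
x = torch.randn(16, 8)
mean, std = mc_dropout_predict(model, x)
```

The per-input standard deviation is what a downstream decision-maker would inspect before trusting a forecast; the paper's evaluation of reliability and calibration goes well beyond this minimal recipe.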


AI Magazine ◽  
2013 ◽  
Vol 34 (3) ◽  
pp. 93-98 ◽  
Author(s):  
Vita Markman ◽  
Georgi Stojanov ◽  
Bipin Indurkhya ◽  
Takashi Kido ◽  
Keiki Takadama ◽  
...  

The Association for the Advancement of Artificial Intelligence was pleased to present the AAAI 2013 Spring Symposium Series, held Monday through Wednesday, March 25-27, 2013. The titles of the eight symposia were Analyzing Microtext, Creativity and (Early) Cognitive Development, Data Driven Wellness: From Self-Tracking to Behavior Change, Designing Intelligent Robots: Reintegrating AI II, Lifelong Machine Learning, Shikakeology: Designing Triggers for Behavior Change, Trust and Autonomous Systems, and Weakly Supervised Learning from Multimedia. This report contains summaries of the symposia, written, in most cases, by the cochairs of the symposium.


2021 ◽  
Vol 65 ◽  
pp. 40-49
Author(s):  
Joosep Hook ◽  
Seif El-Sedky ◽  
Varuna De Silva ◽  
Ahmet Kondoz

2020 ◽  
Vol 34 (07) ◽  
pp. 12701-12708
Author(s):  
Yingruo Fan ◽  
Jacqueline Lam ◽  
Victor Li

Estimating the intensity of facial action units (AUs) is challenging due to the subtlety of changes in a person's facial appearance. Previous approaches mainly rely on probabilistic models or predefined rules to model co-occurrence relationships among AUs, leading to limited generalization. In contrast, we present a new learning framework that automatically learns the latent relationships of AUs by establishing semantic correspondences between feature maps. In the heatmap-regression-based network, feature maps preserve rich semantic information associated with AU intensities and locations. Moreover, an AU co-occurrence pattern can be reflected in the activation of a set of feature channels, where each channel encodes a specific visual pattern of an AU. This motivates us to model the correlation among feature channels, which implicitly represents the co-occurrence relationship of AU intensity levels. Specifically, we introduce a semantic correspondence convolution (SCC) module that dynamically computes correspondences from deep, low-resolution feature maps, thereby enhancing the discriminability of the features. The experimental results on two benchmark datasets demonstrate the effectiveness and superior performance of our method.
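To make the channel-correlation idea concrete, the following PyTorch sketch computes pairwise cosine affinities between feature channels. This is a rough rendering of the general mechanism, not the authors' exact SCC module.

```python
# Channel-wise correspondence sketch (an assumption about the mechanism,
# not the paper's SCC implementation).
import torch
import torch.nn.functional as F

def channel_correspondence(feats: torch.Tensor) -> torch.Tensor:
    """feats: (B, C, H, W) feature maps -> (B, C, C) channel affinities."""
    b, c, h, w = feats.shape
    flat = feats.view(b, c, h * w)
    flat = F.normalize(flat, dim=2)               # unit-norm each channel
    return torch.bmm(flat, flat.transpose(1, 2))  # cosine similarity matrix

feats = torch.randn(2, 32, 16, 16)                # dummy feature maps
affinity = channel_correspondence(feats)          # shape (2, 32, 32)
```

Each entry of the affinity matrix measures how strongly two channels co-activate, which is one way the co-occurrence of AU intensity levels could be represented implicitly.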


IEEE Access ◽  
2019 ◽  
Vol 7 ◽  
pp. 137656-137667 ◽  
Author(s):  
Bilal Hussain ◽  
Qinghe Du ◽  
Sihai Zhang ◽  
Ali Imran ◽  
Muhammad Ali Imran

2020 ◽  
Vol 95 ◽  
pp. 104235 ◽  
Author(s):  
Qingchao Jiang ◽  
Shifu Yan ◽  
Xuefeng Yan ◽  
Shutian Chen ◽  
Jinggao Sun

2016 ◽  
Vol 28 (5) ◽  
pp. 826-848 ◽  
Author(s):  
Arunava Banerjee

We derive a synaptic weight update rule for learning temporally precise spike train–to–spike train transformations in multilayer feedforward networks of spiking neurons. The framework, aimed at seamlessly generalizing error backpropagation to the deterministic spiking neuron setting, is based strictly on spike timing and avoids invoking concepts pertaining to spike rates or probabilistic models of spiking. The derivation is founded on two innovations. First, an error functional is proposed that compares the spike train emitted by the output neuron of the network to the desired spike train by way of their putative impact on a virtual postsynaptic neuron. This formulation sidesteps the need for spike alignment and leads to closed-form solutions for all quantities of interest. Second, virtual assignment of weights to spikes rather than synapses enables a perturbation analysis of individual spike times and synaptic weights of the output, as well as all intermediate neurons in the network, which yields the gradients of the error functional with respect to the said entities. Learning proceeds via a gradient descent mechanism that leverages these quantities. Simulation experiments demonstrate the efficacy of the proposed learning framework. The experiments also highlight asymmetries between synapses on excitatory and inhibitory neurons.
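As a toy rendering of the first innovation (the kernel shape, time constant, and spike times below are illustrative assumptions, not taken from the paper), one can compare an emitted and a desired spike train by convolving each with an exponential postsynaptic-potential kernel and integrating the squared difference of the resulting virtual traces:

```python
# Spike-train comparison via a virtual postsynaptic neuron (illustrative).
import numpy as np

def psp_trace(spike_times, t_grid, tau=10.0):
    """Summed exponential postsynaptic-potential kernel on a time grid (ms)."""
    trace = np.zeros_like(t_grid)
    for s in spike_times:
        lag = t_grid - s
        trace += np.where(lag >= 0, np.exp(-lag / tau), 0.0)
    return trace

t = np.linspace(0.0, 100.0, 1001)
emitted = psp_trace([12.0, 40.5, 71.0], t)   # hypothetical output spikes
desired = psp_trace([10.0, 42.0, 70.0], t)   # hypothetical target spikes

# Error functional: integrated squared difference of the two virtual traces.
step = t[1] - t[0]
error = float(np.sum((emitted - desired) ** 2) * step)
print(error)
```

Because the traces are smooth functions of the spike times, an error of this form admits gradients with respect to individual spike times, which is what lets learning proceed by gradient descent without any spike alignment step.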


2020 ◽  
Author(s):  
Zhe Xu

Despite the fact that artificial intelligence boosted with data-driven methods (e.g., deep neural networks) has surpassed human-level performance in various tasks, its application to autonomous systems still faces fundamental challenges such as lack of interpretability, an intensive need for data, and lack of verifiability. In this overview paper, I survey some attempts to address these fundamental challenges by explaining, guiding, and verifying autonomous systems, taking into account the limited availability of simulated and real data, the expressivity of high-level knowledge representations, and the uncertainties of the underlying model. Specifically, the paper covers learning high-level knowledge from data for interpretable autonomous systems, guiding autonomous systems with high-level knowledge, and verifying and controlling autonomous systems against high-level specifications.
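As a tiny, hypothetical illustration of the last item, the sketch below checks a scalar trajectory against a bounded-time reachability specification ("the state reaches the goal region within the first `horizon` steps"); real systems would use richer temporal-logic specifications and formal verification tools.

```python
# Minimal specification check on a trajectory (hypothetical example).
from typing import Sequence

def satisfies_eventually(trajectory: Sequence[float], goal: float, horizon: int) -> bool:
    """True if some state within the first `horizon` steps reaches `goal`."""
    return any(x >= goal for x in trajectory[:horizon])

trajectory = [0.1, 0.4, 0.8, 1.2, 0.9]
print(satisfies_eventually(trajectory, goal=1.0, horizon=5))  # True
```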

