An Initial Study of Machine Learning Underspecification Using Feature Attribution Explainable AI Algorithms: A COVID-19 Virus Transmission Case Study

2021, pp. 323-335
Author(s): James Hinns, Xiuyi Fan, Siyuan Liu, Veera Raghava Reddy Kovvuri, Mehmet Orcun Yalcin, ...
2020, Vol 28 (4), pp. 415-439
Author(s): Philipp Hacker, Ralf Krestel, Stefan Grundmann, Felix Naumann

Abstract: This paper shows that the law, in subtle ways, may set hitherto unrecognized incentives for the adoption of explainable machine learning applications. In doing so, we make two novel contributions. First, on the legal side, we show that to avoid liability, professional actors, such as doctors and managers, may soon be legally compelled to use explainable ML models. We argue that the importance of explainability reaches far beyond data protection law, and crucially influences questions of contractual and tort liability for the use of ML models. To this effect, we conduct two legal case studies, in medical and corporate merger applications of ML. As a second contribution, we discuss the (legally required) trade-off between accuracy and explainability and demonstrate the effect in a technical case study in the context of spam classification.
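As a rough illustration of the accuracy-explainability trade-off the abstract refers to, the sketch below compares an interpretable linear classifier with a less transparent ensemble on a synthetic stand-in for spam features. The dataset, models, and feature indices are assumptions chosen for illustration only; the paper's actual case study is not reproduced here.

```python
# Hypothetical sketch of an accuracy-vs-explainability comparison for spam
# classification. Synthetic features stand in for bag-of-words counts; the
# paper's actual models and data are not reproduced here.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

# Synthetic stand-in for a spam/ham feature matrix (e.g. token counts).
X, y = make_classification(n_samples=2000, n_features=50, n_informative=10,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Interpretable model: per-feature coefficients can be read off directly.
lr = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# Less interpretable ensemble model, often somewhat more accurate.
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

print("logistic regression accuracy:", accuracy_score(y_te, lr.predict(X_te)))
print("random forest accuracy:     ", accuracy_score(y_te, rf.predict(X_te)))
print("top-weighted features (interpretable model):",
      np.argsort(np.abs(lr.coef_[0]))[-5:][::-1])
```

The point of the comparison is simply that the linear model exposes its decision rule through its coefficients, while the ensemble's higher accuracy (when it occurs) comes without such a direct explanation.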


2021, Vol 13 (1)
Author(s): Weizhong Yan, Zhaoyuan Yang, Jianwei Qiu

In almost all PHM applications, driving the highest possible performance (prediction accuracy and robustness) of PHM models (fault detection, fault diagnosis and prognostics) has been the top development priority, since PHM models' performance directly impacts how much business value the models can bring. However, recent research in other domains, e.g., computer vision (CV), has shown that machine learning (ML) models, especially deep learning models, are vulnerable to adversarial attacks; that is, small, deliberately designed perturbations to the original samples can cause the model to make false predictions with high confidence. In fact, adversarial machine learning (AML), which targets the security of ML algorithms against adversaries, has become an emerging ML topic and has attracted tremendous research attention in CV and NLP. Yet, in the PHM community, not much attention has been paid to the adversarial vulnerability or security of PHM models. We contend that the economic impact of adversarial attacks on a PHM model might be even bigger than that on hard perceptual problems, and thus securing PHM models from adversarial attacks is as important as the PHM models themselves. Moreover, because the data used by PHM models are primarily streaming time-series sensor measurements, these models have their own unique characteristics and deserve special attention when securing them. In this paper, we attempt to explore the adversarial vulnerability of PHM models by conducting an initial case study. More specifically, we consider several unique characteristics associated with streaming time-series sensor measurement data in developing attack strategies for attacking PHM models. We hope our initial study can shed some light on, and stimulate more research interest in, the security of PHM models.
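To make the notion of a "small, deliberately designed perturbation" concrete, the sketch below applies an FGSM-style gradient-sign perturbation to a toy time-series window fed to a hypothetical linear fault detector. The detector, data, and perturbation budget are all assumptions for illustration; the attack strategies developed in the paper itself are not reproduced here.

```python
# Minimal FGSM-style sketch of an adversarial perturbation on a time-series
# window, assuming a simple logistic-regression "fault detector". The paper's
# actual PHM models and attack strategies are not reproduced here.
import numpy as np

rng = np.random.default_rng(0)

# Toy sensor window (e.g. 64 consecutive vibration readings) and a linear
# detector with weights w and bias b that flags a fault when p > 0.5.
x = rng.normal(size=64)
w = rng.normal(size=64)
b = 0.0

def predict_proba(x):
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

y_true = 1.0                       # the window really does contain a fault
p = predict_proba(x)

# FGSM: step each sample in the direction that increases the loss, i.e.
# sign of d(loss)/dx = (p - y) * w for the logistic loss.
epsilon = 0.05                     # small per-sample perturbation budget
x_adv = x + epsilon * np.sign((p - y_true) * w)

print("clean prediction:      ", predict_proba(x))
print("adversarial prediction:", predict_proba(x_adv))
```

Even with a per-sample change of only epsilon, the perturbation accumulates across the window and can push the detector's output well away from the correct label, which is the vulnerability the authors argue PHM models need to be protected against.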


2020
Author(s): Markus Jaeger, Stephan Krügel, Dimitri Marinelli, Jochen Papenbrock, Peter Schwendner

i-com, 2021, Vol 20 (1), pp. 19-32
Author(s): Daniel Buschek, Charlotte Anlauff, Florian Lachner

Abstract: This paper reflects on a case study of a user-centred concept development process for a Machine Learning (ML) based design tool, conducted at an industry partner. The resulting concept uses ML to match graphical user interface elements in sketches on paper to their digital counterparts to create consistent wireframes. A user study (N=20) with a working prototype shows that this concept is preferred by designers, compared to the previous manual procedure. Reflecting on our process and findings, we discuss lessons learned for developing ML tools that respect practitioners' needs and practices.
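The matching idea described in the abstract can be pictured, very loosely, as a nearest-neighbour lookup from a sketched element's embedding to a library of digital widget templates. Everything in the sketch below (embedding size, widget names, random vectors) is a hypothetical stand-in; the tool's actual model is not described here in enough detail to reproduce.

```python
# Hypothetical sketch of the matching step: a sketched UI element is embedded
# and matched to the nearest digital widget template by cosine similarity.
# Random vectors stand in for learned embeddings.
import numpy as np

rng = np.random.default_rng(1)

widget_library = ["button", "checkbox", "text_field", "dropdown", "slider"]
widget_embeddings = rng.normal(size=(len(widget_library), 128))

# Embedding of a scanned paper sketch of (say) a button, simulated here as a
# noisy copy of the button template embedding.
sketch_embedding = widget_embeddings[0] + 0.1 * rng.normal(size=128)

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

scores = [cosine(sketch_embedding, w) for w in widget_embeddings]
print("matched widget:", widget_library[int(np.argmax(scores))])
```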

