The Particle Number Counter as a “Black Box” - A Novel Approach to a Universal Particle Number Calibration Standard for Automotive Exhaust

2020 ◽  
Author(s):  
Alexander Terres ◽  
Heinz Bacher ◽  
Volker Ebert


2022 ◽  
Author(s):  
Rutger R van de Leur ◽  
Max N Bos ◽  
Karim Taha ◽  
Arjan Sammani ◽  
Stefan van Duijvenboden ◽  
...  

Background Deep neural networks (DNNs) show excellent performance in interpreting electrocardiograms (ECGs), both for conventional ECG interpretation and for novel applications such as detection of reduced ejection fraction and prediction of one-year mortality. Despite these promising developments, clinical implementation is severely hampered by the lack of trustworthy techniques to explain the decisions of the algorithm to clinicians. In particular, the currently employed heatmap-based methods have been shown to be inaccurate. Methods We present a novel approach that is inherently explainable and uses an unsupervised variational auto-encoder (VAE) to learn the underlying factors of variation of the ECG (the FactorECG) in a database with 1.1 million ECG recordings. These factors are subsequently used in a pipeline with common and interpretable statistical methods. As the ECG factors are explainable by generating and visualizing ECGs on both the model- and individual patient-level, the pipeline becomes fully explainable. The performance of the pipeline is compared to a state-of-the-art black box DNN in three tasks: conventional ECG interpretation with 35 diagnostic statements, detection of reduced ejection fraction and prediction of one-year mortality. Results The VAE was able to compress the ECG into 21 generative ECG factors, which are associated with physiologically valid underlying anatomical and (patho)physiological processes. When applying the novel pipeline to the three tasks, the explainable FactorECG pipeline performed similarly to state-of-the-art black box DNNs in conventional ECG interpretation (AUROC 0.94 vs 0.96), detection of reduced ejection fraction (AUROC 0.90 vs 0.91) and prediction of one-year mortality (AUROC 0.76 vs 0.75). In contrast to the state-of-the-art DNNs, our pipeline provided inherent explainability on which morphological ECG features were important for prediction or diagnosis. Conclusion Future studies should employ DNNs that are inherently explainable to facilitate clinical implementation by building confidence in artificial intelligence and, more importantly, by making it possible to identify biased or inaccurate models.
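
To make the two-stage design concrete, the following is a minimal sketch, not the authors' code, of a FactorECG-style pipeline: a VAE encoder compresses each ECG into 21 latent factors (the number reported above), and an interpretable logistic regression is then fitted on those factors. The network layout, the input shape, and the names `ECGEncoder`, `N_FACTORS` and `N_LEADS` are illustrative assumptions.

```python
# Hypothetical sketch of a FactorECG-style pipeline: a VAE compresses each ECG
# into 21 latent factors, then an interpretable classifier is fitted on them.
# Layer sizes, variable names and the downstream task are illustrative only.
import torch
import torch.nn as nn
from sklearn.linear_model import LogisticRegression

N_FACTORS = 21                   # number of generative ECG factors reported in the abstract
N_LEADS, N_SAMPLES = 12, 600     # assumed input shape (leads x time samples)

class ECGEncoder(nn.Module):
    """Encoder half of a VAE: maps an ECG to mean/log-variance of the factors."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(N_LEADS, 32, kernel_size=7, stride=2), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=7, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
        )
        self.mu = nn.Linear(64, N_FACTORS)
        self.logvar = nn.Linear(64, N_FACTORS)

    def forward(self, x):
        h = self.net(x)
        return self.mu(h), self.logvar(h)

# Encode a batch of ECGs into factors (posterior means), then fit an
# interpretable model whose coefficients refer to individual ECG factors.
encoder = ECGEncoder()                        # assume weights were trained beforehand
ecgs = torch.randn(256, N_LEADS, N_SAMPLES)   # placeholder ECG batch
labels = torch.randint(0, 2, (256,))          # placeholder diagnostic labels
with torch.no_grad():
    factors, _ = encoder(ecgs)

clf = LogisticRegression(max_iter=1000).fit(factors.numpy(), labels.numpy())
print("per-factor coefficients:", clf.coef_)  # each coefficient maps to one visualisable factor
```

Because every coefficient refers to a single generative factor, and each factor can be visualised by decoding ECGs along it, the downstream model stays inspectable in the way the abstract describes.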


2018 ◽  
Vol 66 (9) ◽  
pp. 704-713 ◽  
Author(s):  
Tobias Münker ◽  
Timm J. Peter ◽  
Oliver Nelles

Abstract The problem of modeling a linear dynamic system is discussed and a novel approach for automatically combining black-box and white-box models is introduced. The solution proposed in this contribution is based on the use of regularized finite-impulse-response (FIR) models. In contrast to classical gray-box modeling, which often only optimizes the parameters of a given model structure, our approach is also able to handle the problem of undermodeling. To this end, the amount of trust in the white-box or gray-box model is optimized based on a generalized cross-validation criterion. The feasibility of the approach is demonstrated with a pendulum example. Furthermore, it is investigated which level of prior knowledge is best suited for the identification of the process.
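
As an illustration of the idea, not the authors' implementation, the sketch below forms a regularized FIR estimate that is shrunk toward a white-box prior impulse response `g0`; the regularization weight plays the role of the "amount of trust" in the prior and is selected with a simple generalized cross-validation score. The simulated first-order system (a stand-in for the pendulum example), the FIR order and the lambda grid are all assumptions.

```python
# Illustrative sketch: regularized FIR identification that shrinks the estimated
# impulse response toward a white-box prior g0. The weight lam encodes the "trust"
# in the prior; here it is scanned over a grid and scored with a simple GCV criterion.
import numpy as np

rng = np.random.default_rng(0)
n, m = 400, 50                       # data length, FIR order (assumed values)

# Simulated SISO data from a first-order system (placeholder for the real process)
g_true = 0.8 ** np.arange(m)
u = rng.standard_normal(n)
Phi = np.column_stack([np.convolve(u, np.eye(m)[k])[:n] for k in range(m)])  # lagged inputs
y = Phi @ g_true + 0.1 * rng.standard_normal(n)

g0 = 0.7 ** np.arange(m)             # imperfect white-box prior impulse response

def gcv_score(lam):
    """Generalized cross-validation score for the ridge estimate centred on g0."""
    A = Phi.T @ Phi + lam * np.eye(m)
    g_hat = np.linalg.solve(A, Phi.T @ y + lam * g0)
    H = Phi @ np.linalg.solve(A, Phi.T)          # influence ("hat") matrix
    resid = y - Phi @ g_hat
    return (np.sum(resid**2) / n) / (1 - np.trace(H) / n) ** 2, g_hat

lams = np.logspace(-3, 3, 25)
scores = [gcv_score(l)[0] for l in lams]
best_lam = lams[int(np.argmin(scores))]
print(f"GCV-selected trust in the white-box prior: lambda = {best_lam:.3g}")
```

A small selected lambda indicates that the data contradict the prior (undermodeling in the white-box part), while a large lambda indicates the prior can be trusted, which is the trade-off the abstract describes.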


Cybersecurity ◽  
2021 ◽  
Vol 4 (1) ◽  
Author(s):  
Jianhua Wang ◽  
Xiaolin Chang ◽  
Yixiang Wang ◽  
Ricardo J. Rodríguez ◽  
Jianan Zhang

Abstract Adversarial Malware Example (AME)-based adversarial training can effectively enhance the robustness of Machine Learning (ML)-based malware detectors against AMEs, and AME quality is a key factor in this robustness enhancement. Generative Adversarial Networks (GANs) are one class of AME generation methods, but existing GAN-based approaches suffer from inadequate optimization, mode collapse and training instability. In this paper, we propose a novel approach (denoted LSGAN-AT) to enhance the robustness of ML-based malware detectors against adversarial examples; it consists of an LSGAN module and an AT module. The LSGAN module generates more effective and smoother AMEs by using new network structures and a Least Squares (LS) loss to optimize boundary samples. The AT module performs adversarial training with the AMEs generated by the LSGAN module to produce an ML-based Robust Malware Detector (RMD). Extensive experimental results validate the improved transferability of the generated AMEs in attacking 6 ML detectors, as well as the transferability of the RMD in resisting the MalGAN black-box attack. The results also verify the performance of the generated RMD in terms of its recognition rate of AMEs.
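
For orientation only, the sketch below shows a least-squares GAN objective of the kind the abstract refers to, applied to a toy binary feature representation of malware in which the generator may only add features. The architectures, the feature dimension and the add-only perturbation rule are simplifying assumptions, not the LSGAN-AT implementation; the AT module, which would retrain a detector on the generated examples, is omitted.

```python
# Simplified sketch of a least-squares GAN loss for adversarial malware examples.
# Malware is represented as a binary feature vector and the generator may only ADD
# features, so functionality is nominally preserved. Sizes and training details are
# illustrative placeholders, not LSGAN-AT itself.
import torch
import torch.nn as nn

N_FEATURES, NOISE_DIM = 128, 16

G = nn.Sequential(nn.Linear(N_FEATURES + NOISE_DIM, 256), nn.ReLU(),
                  nn.Linear(256, N_FEATURES), nn.Sigmoid())
D = nn.Sequential(nn.Linear(N_FEATURES, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1))            # raw scores: the LS loss needs no sigmoid

opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)
mse = nn.MSELoss()

for step in range(3):                           # toy loop on random placeholder data
    malware = (torch.rand(64, N_FEATURES) > 0.7).float()
    benign = (torch.rand(64, N_FEATURES) > 0.9).float()
    noise = torch.randn(64, NOISE_DIM)

    # Adversarial candidate: original malware features plus generated additions
    added = G(torch.cat([malware, noise], dim=1))
    adv = torch.clamp(malware + added, 0.0, 1.0)

    # Discriminator step: least-squares loss pushes benign scores to 1, AME scores to 0
    loss_d = 0.5 * (mse(D(benign), torch.ones(64, 1)) +
                    mse(D(adv.detach()), torch.zeros(64, 1)))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator step: least-squares loss pulls AME scores toward the benign target of 1
    loss_g = 0.5 * mse(D(adv), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```

Replacing the usual cross-entropy with this squared-error objective penalises samples by their distance from the decision boundary, which is the property the abstract credits for smoother, more stable AME generation.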


2020 ◽  
Author(s):  
Julian Hatwell ◽  
Mohamed Medhat Gaber ◽  
R.M. Atif Azad

Abstract Background Computer Aided Diagnostics (CAD) can support medical practitioners in making critical decisions about their patients' disease conditions. Practitioners require access to the chain of reasoning behind CAD to build trust in the CAD advice and to supplement their own expertise. Yet, CAD systems might be based on black box machine learning (ML) models and high dimensional data sources (electronic health records, MRI scans, cardiotocograms, etc.). These foundations make interpretation and explanation of the CAD advice very challenging. This challenge is recognised throughout the machine learning research community. eXplainable Artificial Intelligence (XAI) is emerging as one of the most important research areas of recent years because it addresses the interpretability and trust concerns of critical decision makers, including those in clinical and medical practice. Methods In this work, we focus on AdaBoost, a black box ML model that has been widely adopted in the CAD literature. We address the challenge of explaining AdaBoost classification with a novel algorithm that extracts simple, logical rules from AdaBoost models. Our algorithm, Adaptive-Weighted High Importance Path Snippets (Ada-WHIPS), makes use of AdaBoost's adaptive classifier weights. Using a novel formulation, Ada-WHIPS uniquely redistributes the weights among the individual decision nodes of the internal decision trees (DT) of the AdaBoost model. Then, a simple heuristic search of the weighted nodes finds a single rule that dominated the model's decision. We compare the explanations generated by our novel approach with the state of the art in an experimental study. We evaluate the derived explanations with simple statistical tests of the well-known quality measures precision and coverage, and of a novel measure, stability, that is better suited to the XAI setting.
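
As a rough illustration of the redistribution step described above, the sketch below, built on scikit-learn's AdaBoostClassifier, spreads each base tree's classifier weight over the decision nodes on the path a given instance follows. The equal split of the weight along the path is a placeholder assumption; the paper's actual redistribution formula is not reproduced here.

```python
# Illustrative sketch (not the Ada-WHIPS formulation itself): spread each base tree's
# AdaBoost weight over the decision nodes on the path a given instance follows, so that
# individual split conditions acquire importance scores.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
ada = AdaBoostClassifier(DecisionTreeClassifier(max_depth=3), n_estimators=50,
                         random_state=0).fit(X, y)

def weighted_path_snippets(model, x):
    """Collect (feature, threshold, direction, weight) for every decision node on
    the instance's path through every tree, weighted by that tree's alpha."""
    snippets = []
    for tree, alpha in zip(model.estimators_, model.estimator_weights_):
        t = tree.tree_
        node_ids = tree.decision_path(x.reshape(1, -1)).indices
        split_nodes = [n for n in node_ids if t.children_left[n] != -1]  # drop leaves
        share = alpha / max(len(split_nodes), 1)      # placeholder: equal split of alpha
        for n in split_nodes:
            direction = "<=" if x[t.feature[n]] <= t.threshold[n] else ">"
            snippets.append((int(t.feature[n]), float(t.threshold[n]), direction, float(share)))
    return snippets

snips = weighted_path_snippets(ada, X[0])
top = sorted(snips, key=lambda s: -s[3])[:5]
print("highest-weight decision-node conditions:", top)
```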


2004 ◽  
Vol 91 (3) ◽  
pp. 229-244 ◽  
Author(s):  
A.H. Geeraerd ◽  
V.P. Valdramidis ◽  
F. Devlieghere ◽  
H. Bernaert ◽  
J. Debevere ◽  
...  

Legal Studies ◽  
2021 ◽  
pp. 1-20
Author(s):  
Rebecca Schmidt ◽  
Colin Scott

Abstract Discretion gives decision makers choices as to how resources are allocated, or how other aspects of state largesse or coercion are deployed. Discretionary state power challenges aspects of the rule of law, first by transferring decisions from legislators to departments, agencies and street-level bureaucrats, and secondly by risking the uniform application of key fairness and equality norms. Concerns to find alternative and decentred forms of regulation gave rise to new types of regulation, sometimes labelled ‘regulatory capitalism’. Regulatory capitalism highlights the roles of a wider range of actors exercising powers and a wider range of instruments. It also includes new forms of discretion, for example over automated decision-making processes, over the formulation and dissemination of league tables, or over the use of behavioural measures. This paper takes a novel approach by linking and extending the significant literature on these changing patterns of regulatory administration with consideration of the changing modes of deployment of discretion. Using this specific lens, we observe two potentially contradictory trends: an increase in determining and structuring administrative decisions, leading to a more transparent use of discretion; and the increased use of automated decision-making processes, which have the potential to produce a less transparent, black box scenario.


Author(s):  
Julian Hatwell ◽  
Mohamed Medhat Gaber ◽  
R. Muhammad Atif Azad

Abstract Background Computer Aided Diagnostics (CAD) can support medical practitioners in making critical decisions about their patients’ disease conditions. Practitioners require access to the chain of reasoning behind CAD to build trust in the CAD advice and to supplement their own expertise. Yet, CAD systems might be based on black box machine learning models and high dimensional data sources such as electronic health records, magnetic resonance imaging scans, cardiotocograms, etc. These foundations make interpretation and explanation of the CAD advice very challenging. This challenge is recognised throughout the machine learning research community. eXplainable Artificial Intelligence (XAI) is emerging as one of the most important research areas of recent years because it addresses the interpretability and trust concerns of critical decision makers, including those in clinical and medical practice. Methods In this work, we focus on AdaBoost, a black box model that has been widely adopted in the CAD literature. We address the challenge of explaining AdaBoost classification with a novel algorithm that extracts simple, logical rules from AdaBoost models. Our algorithm, Adaptive-Weighted High Importance Path Snippets (Ada-WHIPS), makes use of AdaBoost’s adaptive classifier weights. Using a novel formulation, Ada-WHIPS uniquely redistributes the weights among the individual decision nodes of the internal decision trees of the AdaBoost model. Then, a simple heuristic search of the weighted nodes finds a single rule that dominated the model’s decision. We compare the explanations generated by our novel approach with the state of the art in an experimental study. We evaluate the derived explanations with simple statistical tests of the well-known quality measures precision and coverage, and of a novel measure, stability, that is better suited to the XAI setting. Results Experiments on 9 CAD-related data sets showed that Ada-WHIPS explanations consistently generalise better (mean coverage 15%-68%) than the state of the art while remaining competitive for specificity (mean precision 80%-99%). A very small trade-off in specificity is shown to guard against over-fitting, which is a known problem in state-of-the-art methods. Conclusions The experimental results demonstrate the benefits of using our novel algorithm for explaining CAD AdaBoost classifiers widely found in the literature. Our tightly coupled, AdaBoost-specific approach outperforms model-agnostic explanation methods and should be considered by practitioners looking for an XAI solution for this class of models.
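
To illustrate the rule-assembly step described in the Methods, the sketch below greedily builds a single conjunctive rule from a ranked list of decision-node conditions, accepting a condition only if it improves the rule's precision with respect to the model's own predictions, and then reports precision and coverage. The placeholder conditions, the greedy stopping criterion and the omission of the stability measure are all assumptions, not the Ada-WHIPS search itself.

```python
# Hypothetical sketch of a greedy rule search over ranked decision-node conditions.
# In Ada-WHIPS the conditions come from the weighted decision nodes of the boosted
# trees (see the earlier sketch); here placeholder conditions stand in for them.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
ada = AdaBoostClassifier(DecisionTreeClassifier(max_depth=3), n_estimators=50,
                         random_state=0).fit(X, y)
x = X[0]
target = ada.predict(x.reshape(1, -1))[0]      # explain the model's own prediction
preds = ada.predict(X)

# Placeholder ranked conditions (feature index, threshold, direction)
conditions = [(22, 105.0, "<="), (27, 0.13, "<="), (7, 0.05, "<=")]

def covers(rule, data):
    """Boolean mask of the rows that satisfy every condition in the rule."""
    mask = np.ones(len(data), dtype=bool)
    for f, th, d in rule:
        mask &= (data[:, f] <= th) if d == "<=" else (data[:, f] > th)
    return mask

rule, best_precision = [], 0.0
for cond in conditions:            # greedy: keep a condition only if precision improves
    mask = covers(rule + [cond], X)
    if mask.sum() == 0:
        continue
    precision = np.mean(preds[mask] == target)
    if precision > best_precision:
        rule, best_precision = rule + [cond], precision

coverage = covers(rule, X).mean()
print(f"rule: {rule}  precision={best_precision:.2f}  coverage={coverage:.2f}")
```

Precision here is measured against the AdaBoost model's predictions rather than the ground truth, matching the XAI goal of being faithful to the model being explained.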

