Box models
Recently Published Documents


TOTAL DOCUMENTS: 473 (five years: 162)
H-INDEX: 29 (five years: 7)

Electronics, 2021, Vol. 11 (1), pp. 106
Author(s): Irfan Ahmed, Indika Kumara, Vahideh Reshadat, A. S. M. Kayes, Willem-Jan van den Heuvel, ...

Travel time information is used as input or auxiliary data for tasks such as dynamic navigation, infrastructure planning, congestion control, and accident detection. Various data-driven Travel Time Prediction (TTP) methods have been proposed in recent years. One of the most challenging tasks in TTP is developing and selecting the most appropriate prediction algorithm. Existing studies that empirically compare TTP models use only a few models with specific features, and there is a lack of research on explaining the predictions made by black-box TTP models, even though such explanations can help to tune and apply TTP methods successfully. To fill these gaps in the TTP literature, we compare three types of TTP methods (ensemble tree-based learning, deep neural networks, and hybrid models), ten prediction algorithms in total, on three data sets. Furthermore, we apply XAI (Explainable Artificial Intelligence) methods, SHAP and LIME, to understand and interpret the models' predictions. The prediction accuracy and reliability of all models are evaluated and compared. We observe that the ensemble learning methods, XGBoost and LightGBM, are the best-performing models across the three data sets, and that the XAI methods can adequately explain how various spatial and temporal features influence travel time.
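A minimal sketch of the kind of pipeline the study describes: fit a gradient-boosted travel-time model and explain it with SHAP. The feature names and synthetic data below are illustrative assumptions, not the data sets or features used in the paper.

```python
# Minimal sketch: train a gradient-boosted TTP model and explain it with SHAP.
# Feature names (hour_of_day, segment_length_km, ...) are illustrative only.
import numpy as np
import pandas as pd
import shap
import xgboost as xgb
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
X = pd.DataFrame({
    "hour_of_day": rng.integers(0, 24, n),
    "segment_length_km": rng.uniform(0.5, 10.0, n),
    "mean_speed_kmh": rng.uniform(20, 110, n),
    "is_weekend": rng.integers(0, 2, n),
})
# Synthetic travel time in minutes (stand-in for real trajectory data).
y = 60 * X["segment_length_km"] / X["mean_speed_kmh"] + rng.normal(0, 0.5, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = xgb.XGBRegressor(n_estimators=200, max_depth=4).fit(X_train, y_train)

# TreeExplainer gives exact SHAP values for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)
print(pd.Series(np.abs(shap_values).mean(axis=0), index=X.columns)
      .sort_values(ascending=False))  # global feature importance ranking
```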


Author(s): Leopoldo Bertossi

Abstract. We propose answer-set programs that specify and compute counterfactual interventions on entities that are input to a classification model. In relation to the outcome of the model, the resulting counterfactual entities serve as a basis for the definition and computation of causality-based explanation scores for the feature values in the entity under classification, namely responsibility scores. The approach and the programs can be applied to black-box models, and also to models that can be specified as logic programs, such as rule-based classifiers. The main focus of this study is the specification and computation of best counterfactual entities, that is, those that lead to maximum responsibility scores; from them one can read off the explanations as maximum-responsibility feature values in the original entity. We also extend the programs to bring semantic or domain knowledge into the picture. We show how the approach could be extended by means of probabilistic methods, and how the underlying probability distributions could be modified through the use of constraints. Several examples of programs written in the syntax of the DLV ASP solver, and run with it, are shown.
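As a conceptual, non-ASP illustration of the responsibility scores involved, the sketch below brute-forces counterfactual interventions on a toy binary entity under a hypothetical black-box classifier: a feature value's responsibility is 1/(1 + k), where k is the size of the smallest contingency set of other changes under which flipping that value changes the label. This is only a Python stand-in for the idea, not the authors' DLV programs.

```python
# Brute-force counterfactual search with responsibility scoring.
# Entity, features, and classifier are hypothetical toy examples.
from itertools import combinations

def classifier(e):            # stand-in black-box model
    return int(e["income"] == 0 and e["debt"] == 1)

entity = {"income": 0, "debt": 1, "owns_home": 0}
label = classifier(entity)

def flipped(e, feats):
    out = dict(e)
    for f in feats:
        out[f] = 1 - out[f]   # binary feature domain for simplicity
    return out

responsibility = {}
for f in entity:
    best = None
    others = [g for g in entity if g != f]
    for k in range(len(others) + 1):
        for gamma in combinations(others, k):
            e_gamma = flipped(entity, gamma)
            # f's value is an actual cause w.r.t. contingency gamma if the
            # label survives gamma alone but flips once f is also changed
            if (classifier(e_gamma) == label
                    and classifier(flipped(e_gamma, (f,))) != label):
                best = k
                break
        if best is not None:
            break
    responsibility[f] = 0.0 if best is None else 1.0 / (1 + best)

print(responsibility)  # income and debt score 1.0; owns_home is irrelevant
```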


Sports, 2021, Vol. 10 (1), pp. 5
Author(s): Alessio Rossi, Luca Pappalardo, Paolo Cintia

In the last decade, the number of studies applying machine learning to sports, e.g., injury forecasting and athlete performance prediction, has rapidly increased. Given the number of works and experiments already present in the state of the art on machine-learning techniques in sport science, the aim of this narrative review is to provide a guideline describing a correct approach for training, validating, and testing machine learning models to predict events in sports science. The main contribution of this narrative review is to highlight the possible strengths and limitations at every stage of model development, i.e., training, validation, testing, and interpretation, in order to limit errors that could induce misleading results. In particular, the paper works through an injury forecasting example: it describes the features that could be used to predict injuries, the possible pre-processing approaches for time series analysis, how to correctly split the dataset to train and test the predictive models, and the importance of explaining the decision-making process of white- and black-box models.
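One of the review's central warnings, splitting time-ordered sports data chronologically rather than randomly, can be sketched as follows; the features and labels are synthetic placeholders, not the review's data.

```python
# Chronological cross-validation for sports time series: shuffle-based splits
# leak future information into training, so each fold trains on the past only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import TimeSeriesSplit

rng = np.random.default_rng(0)
n_sessions = 300
X = rng.normal(size=(n_sessions, 4))   # e.g. GPS load, HR, workload ratio...
y = rng.integers(0, 2, n_sessions)     # injury in following week (synthetic)

tscv = TimeSeriesSplit(n_splits=5)     # train sets grow forward in time
for fold, (train_idx, test_idx) in enumerate(tscv.split(X)):
    clf = RandomForestClassifier(random_state=0).fit(X[train_idx], y[train_idx])
    acc = clf.score(X[test_idx], y[test_idx])
    print(f"fold {fold}: train ends at session {train_idx[-1]}, acc={acc:.2f}")
```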


Risks, 2021, Vol. 10 (1), pp. 3
Author(s): Spencer Matthews, Brian Hartman

Two-part models are important to, and used throughout, insurance and actuarial science. Since insurance is required for registering a car, obtaining a mortgage, and participating in certain businesses, it is especially important that the models that price insurance policies are fair and non-discriminatory. Black-box models can make it very difficult to know which covariates are influencing the results, creating model risk and bias. SHAP (SHapley Additive exPlanations) values enable the interpretation of various black-box models, but little progress has been made on applying them to two-part models. In this paper, we propose mSHAP (multiplicative SHAP), a method for computing SHAP values of two-part models from the SHAP values of the individual models, allowing the predictions of two-part models to be explained at the level of an individual observation. After developing mSHAP, we perform an in-depth simulation study. Although the kernelSHAP algorithm is also capable of computing approximate SHAP values for a two-part model, a comparison with our method demonstrates that mSHAP is exponentially faster. Finally, we apply mSHAP to a two-part ratemaking model for personal auto property damage insurance coverage. An R package (mshap) is available to easily implement the method in a wide variety of applications.
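To illustrate the shape of the problem: if a two-part prediction is the product of a frequency model and a severity model, each with its own SHAP decomposition, the product expands into main terms plus a cross term that must be allocated across features. The sketch below uses a naive equal split of the cross term purely for illustration; the paper's mSHAP defines a specific distribution rule, implemented in the authors' mshap R package.

```python
# Toy combination of per-model SHAP values for a two-part (frequency x
# severity) model. The cross-term allocation here is a naive even split,
# standing in for the paper's exact mSHAP distribution rule.
import numpy as np

def multiplicative_shap(phi1, mu1, phi2, mu2):
    """phi1, phi2: per-feature SHAP values for one observation;
    mu1, mu2: the models' expected values (SHAP base values)."""
    s1, s2 = phi1.sum(), phi2.sum()
    # product = mu1*mu2 + mu2*sum(phi1) + mu1*sum(phi2) + sum(phi1)*sum(phi2)
    main = mu2 * phi1 + mu1 * phi2
    cross = s1 * s2 / len(phi1)          # naive equal share of the cross term
    return main + cross, mu1 * mu2       # per-feature values, new base value

phi_freq = np.array([0.10, -0.05, 0.02])   # SHAP values from frequency model
phi_sev = np.array([30.0, 5.0, -12.0])     # SHAP values from severity model
vals, base = multiplicative_shap(phi_freq, 0.08, phi_sev, 950.0)
print(vals, base, base + vals.sum())       # reconstructs freq * sev prediction
```

Whatever allocation rule is used, the combined values must still sum (with the new base value) to the product prediction, as the final print verifies.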


AI and Ethics, 2021
Author(s): Christian Herzog

Abstract. This Comment explores the implications of a lack of tools that facilitate an explicable utilization of epistemologically richer, but also more involved, white-box approaches in AI. In contrast, advances in explainable artificial intelligence for black-box approaches have led to the availability of semi-standardized and attractive toolchains that offer a seemingly competitive edge over inherently interpretable white-box models in terms of intelligibility towards users. Consequently, there is a need for research on efficient tools for rendering interpretable white-box approaches in AI explicable, in order to facilitate their responsible use.


2021, Vol. 2021 (12), pp. 124007
Author(s): Christoph Feinauer, Carlo Lucibello

Abstract. Pairwise models like the Ising model or the generalized Potts model have found many successful applications in fields like physics, biology, and economics. Closely connected is the problem of inverse statistical mechanics, where the goal is to infer the parameters of such models from observed data. An open problem in this field is how to train these models when the data contain additional higher-order interactions that are not present in the pairwise model. In this work, we propose an approach based on energy-based models and pseudolikelihood maximization to address these complications: we show that hybrid models, which combine a pairwise model and a neural network, can lead to significant improvements in the reconstruction of pairwise interactions. We show these improvements to hold consistently when compared to a standard approach using only the pairwise model and to an approach using only a neural network. This is in line with the general idea that simple interpretable models and complex black-box models are not necessarily a dichotomy: interpolating between these two classes of models can retain some advantages of both.
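For context, the pseudolikelihood route for the plain pairwise (Ising) case reduces to one logistic regression per spin, since each spin's conditional distribution given the others is logistic in the couplings. The sketch below uses random spins as a stand-in for real samples and does not include the paper's neural-network energy term.

```python
# Pseudolikelihood maximization for an Ising model: each spin's conditional
# P(s_i | rest) is logistic, so couplings come from per-site logistic fits
# (a standard inverse-Ising trick; the paper augments it with an NN term).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_samples, n_spins = 5000, 8
S = rng.choice([-1, 1], size=(n_samples, n_spins))  # stand-in for MCMC samples

J = np.zeros((n_spins, n_spins))
for i in range(n_spins):
    others = np.delete(np.arange(n_spins), i)
    # P(s_i = +1 | s_others) = sigmoid(2 * (h_i + sum_j J_ij * s_j)),
    # so the fitted logistic coefficients equal 2 * J_ij.
    lr = LogisticRegression(C=10.0).fit(S[:, others], S[:, i] > 0)
    J[i, others] = lr.coef_[0] / 2.0

J = (J + J.T) / 2.0   # symmetrize the per-row estimates
print(J.round(3))      # near zero here, since the toy samples are independent
```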


Energies, 2021, Vol. 14 (23), pp. 7865
Author(s): Saeid Shahpouri, Armin Norouzi, Christopher Hayduk, Reza Rezaei, Mahdi Shahbakhti, ...

Emission standards for diesel engines are becoming more stringent, and accurate emission modeling is crucial for controlling the engine to meet them. Soot emissions are formed through a complex process and are challenging to model. A comprehensive analysis of diesel engine soot emissions modeling for control applications is presented in this paper. Physical, black-box, and gray-box models are developed for soot emissions prediction. Additionally, different feature sets, based on the least absolute shrinkage and selection operator (LASSO) feature selection method and on physical knowledge, are examined to develop computationally efficient soot models with good precision. The physical model is a virtual engine modeled in GT-Power software and parameterized using a portion of the experimental data. Different machine learning methods, including Regression Trees (RT), Ensembles of Regression Trees (ERT), Support Vector Machines (SVM), Gaussian Process Regression (GPR), Artificial Neural Networks (ANN), and Bayesian Neural Networks (BNN), are used to develop the black-box models. The gray-box models combine the physical and black-box models. A total of five feature sets and eight machine learning methods are tested. An analysis of the accuracy, training time, and test time of the models is performed using the K-means clustering algorithm, which provides a systematic way to categorize the feature sets and methods by performance and to select the best method for a specific application. According to this analysis, the black-box model consisting of GPR with LASSO feature selection performs best, with a test R² of 0.96. The best gray-box model combines an SVM-based method with the physical-insight feature set and LASSO feature selection, reaching a test R² of 0.97.
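A hedged sketch of the best-performing black-box recipe named in the abstract: LASSO picks informative features, then Gaussian Process Regression is fit on the survivors. The feature columns and synthetic soot target are placeholders, not the measured engine channels.

```python
# LASSO feature selection followed by GPR, on synthetic stand-in data.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 10))              # e.g. rail pressure, EGR rate, ...
y = 2 * X[:, 0] - 1.5 * X[:, 3] + rng.normal(0, 0.1, 400)  # synthetic soot

Xs = StandardScaler().fit_transform(X)
lasso = LassoCV(cv=5).fit(Xs, y)
selected = np.flatnonzero(np.abs(lasso.coef_) > 1e-3)  # surviving features
print("selected feature indices:", selected)

gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gpr.fit(Xs[:, selected], y)
print("train R^2:", gpr.score(Xs[:, selected], y))
```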


2021
Author(s): Matthew L. Dawson, Christian Guzman, Jeffrey H. Curtis, Mario Acosta, Shupeng Zhu, ...

Abstract. A flexible treatment for gas- and aerosol-phase chemical processes has been developed for models of diverse scale, from box models up to global models. At the core of this novel framework is an "abstracted aerosol representation" that allows a given chemical mechanism to be solved in atmospheric models with different aerosol representations (e.g., sectional, modal, or particle-resolved). This is accomplished by treating aerosols as a collection of condensed phases that are implemented according to the aerosol representation of the host model. The framework also allows multiple chemical processes (e.g., gas- and aerosol-phase chemical reactions, emissions, deposition, photolysis, and mass transfer) to be solved simultaneously as a single system. The flexibility of the model is achieved by (1) an object-oriented design that facilitates extensibility to new types of chemical processes and new ways of representing aerosol systems; (2) runtime model configuration using JSON input files, which permits changes to any part of the chemical mechanism without recompiling the model; this widely used, human-readable format allows entire gas- and aerosol-phase chemical mechanisms to be described with as much complexity as necessary; and (3) automated comprehensive testing that ensures stability of the code as new functionality is introduced. Together, these design choices enable users to build a customized multiphase mechanism without having to handle pre-processors, solvers, or compilers. Removing these hurdles makes this type of modeling accessible to a much wider community, including modelers, experimentalists, and educators. The new treatment compiles as a stand-alone library and has been deployed in the particle-resolved PartMC model and in the MONARCH chemical weather prediction system for use at regional and global scales. Results from the initial deployment to box models of different complexity and to MONARCH will be discussed, along with future extension to more complex gas-aerosol systems and the integration of GPU-based solvers.
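To make the runtime-configuration idea concrete, the toy snippet below parses a JSON mechanism description into rate computations without any recompilation. The schema, reaction set, and parameters are invented for illustration; they are not the framework's actual input format.

```python
# Toy runtime JSON configuration of a chemical mechanism. The schema below is
# hypothetical and only illustrates the "edit the JSON, not the code" idea.
import json
import math

mechanism_json = """
{
  "reactions": [
    {"type": "ARRHENIUS", "reactants": {"NO": 1, "O3": 1},
     "products": {"NO2": 1, "O2": 1}, "A": 3.0e-12, "Ea_over_R": 1500.0},
    {"type": "PHOTOLYSIS", "reactants": {"NO2": 1},
     "products": {"NO": 1, "O": 1}, "rate_key": "jNO2"}
  ]
}
"""

def arrhenius_rate(rxn, T):
    # k = A * exp(-(Ea/R) / T); a minimal two-parameter Arrhenius form
    return rxn["A"] * math.exp(-rxn["Ea_over_R"] / T)

mech = json.loads(mechanism_json)
for rxn in mech["reactions"]:
    if rxn["type"] == "ARRHENIUS":
        print(rxn["reactants"], "->", rxn["products"],
              "k(298 K) =", arrhenius_rate(rxn, 298.0))
```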


Author(s): Ivars Namatēvs, Kaspars Sudars, Kaspars Ozols

Model understanding is critical in many domains, particularly those involving high-stakes decisions, e.g., medicine, criminal justice, and autonomous driving. Explainable AI (XAI) methods are essential for working with black-box models such as convolutional neural networks. This paper evaluates the explainability of the traffic sign classifier, a Deep Neural Network (DNN) from the Programmable Systems for Intelligence in Automobiles (PRYSTINE) project. The resulting explanations were then used to compress the vague kernels of the CNN PRYSTINE classifier, and the precision of the classifier was evaluated under different pruning scenarios. The proposed methodology was realised by creating original traffic sign and traffic light classification and explanation code. First, the status of the kernels of the network was evaluated for explainability: a post-hoc, local, meaningful perturbation-based forward explanation method was integrated into the model to evaluate the status of each kernel, making it possible to distinguish high- and low-impact kernels in the CNN. Second, the vague kernels of the last layer before the fully connected layer were excluded by withdrawing them from the network. Third, the network's precision was evaluated at different kernel compression levels. It is shown that, using this XAI approach to network kernel compression, pruning 5% of the kernels leads to only a 1% loss in traffic sign and traffic light classification precision. The proposed methodology is valuable wherever constraints on execution time and processing capacity prevail.
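A schematic version of this perturbation-based kernel ranking: silence one convolutional kernel at a time, record the accuracy drop, then permanently prune the least impactful ones. The tiny model and random data below are stand-ins, not the PRYSTINE classifier or its explanation method.

```python
# Perturbation-based kernel importance and pruning, on a toy CNN.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 4),
)
X = torch.randn(64, 3, 32, 32)          # stand-in for traffic-sign images
y = torch.randint(0, 4, (64,))

def accuracy():
    with torch.no_grad():
        return (model(X).argmax(1) == y).float().mean().item()

conv = model[0]
base = accuracy()
impact = []
for c in range(conv.out_channels):
    saved_w = conv.weight.data[c].clone()
    saved_b = conv.bias.data[c].clone()
    conv.weight.data[c].zero_()          # perturb: silence one kernel
    conv.bias.data[c] = 0.0
    impact.append(base - accuracy())     # accuracy drop = kernel importance
    conv.weight.data[c] = saved_w
    conv.bias.data[c] = saved_b

# prune roughly 5% of the 16 kernels (here: the single least important one)
least = sorted(range(len(impact)), key=lambda c: impact[c])[:1]
for c in least:
    conv.weight.data[c].zero_()
    conv.bias.data[c] = 0.0
print("pruned channels:", least, "accuracy after pruning:", accuracy())
```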


2021, Vol. 2069 (1), pp. 012101
Author(s): Xiang Zhang, Katia Ritosa, Dirk Saelens, Staf Roels

Abstract. The combination of in-situ collected data and statistical modelling techniques has proved to be a promising approach for assessing the actual energy performance of buildings, for instance for evaluating the heat loss coefficient (HLC). In this study, based on datasets from co-heating and pseudo-random binary sequence heating tests on a portable site office, the performance of three types of statistical models for HLC determination is examined: multiple linear regression (MLR), autoregressive models with exogenous terms (ARX), and grey-box models. All three types of models yield similar HLC estimates (about 115 W/K) but with different confidence intervals (CIs): the 95% CIs of MLR (±3.1%) and ARX (±2.4%) are relatively narrow, while those of the grey-box models are somewhat wider (around ±9%). Moreover, for the current case study building, whose glazed envelope is evenly distributed over the orientations, integrating B-splines into the grey-box model to characterize the solar aperture (gA) and solar gain dynamics more precisely had an insignificant effect on the HLC estimates and the corresponding 95% CIs, compared to a grey-box model with a constant gA assumption.
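As a minimal illustration of the MLR route, one can regress heating power on the indoor-outdoor temperature difference and solar radiation: the coefficient on the temperature difference estimates the HLC, and the regression supplies its confidence interval. The data below are synthetic, generated around the roughly 115 W/K figure from the study.

```python
# MLR estimate of the heat loss coefficient, with a 95% confidence interval.
# Synthetic data; in practice these would be averaged in-situ measurements.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500                                   # e.g. hourly averages
dT = rng.uniform(5, 25, n)                # indoor - outdoor temperature [K]
I_sol = rng.uniform(0, 600, n)            # solar radiation [W/m2]
HLC_true, gA_true = 115.0, 2.5
power = HLC_true * dT - gA_true * I_sol + rng.normal(0, 50, n)  # heating [W]

X = sm.add_constant(np.column_stack([dT, I_sol]))
fit = sm.OLS(power, X).fit()
print("HLC estimate [W/K]:", fit.params[1])
print("95% CI:", fit.conf_int()[1])       # row 1 = dT coefficient
```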

