Looking inside the black box: assessing model-based learning and inquiry in BioLogica™

2010 ◽  
Vol 5 (2) ◽  
pp. 166 ◽  
Author(s):  
Barbara C. Buckley ◽  
Janice D. Gobert ◽  
Paul Horwitz ◽  
Laura M. O'Dwyer
2021 ◽  
Author(s):  
Junjie Shi ◽  
Jiang Bian ◽  
Jakob Richter ◽  
Kuan-Hsun Chen ◽  
Jörg Rahnenführer ◽  
...  

Abstract: The predictive performance of a machine learning model depends strongly on its hyper-parameter settings, so hyper-parameter tuning is often indispensable. Such tuning normally requires the machine learning model to be trained and evaluated on centralized data to obtain a performance estimate. In a distributed machine learning scenario, however, it is not always possible to collect all the data from all nodes, owing to privacy concerns or storage limitations; moreover, transferring data over low-bandwidth connections reduces the time available for tuning. Model-Based Optimization (MBO) is a state-of-the-art method for tuning hyper-parameters, but its application to distributed machine learning models and federated learning remains under-researched. This work proposes MODES, a framework for deploying MBO on resource-constrained distributed embedded systems. Each node trains an individual model on its local data, and the goal is to optimize the combined prediction accuracy. The framework offers two optimization modes: (1) MODES-B treats the whole ensemble as a single black box and optimizes the hyper-parameters of all individual models jointly, and (2) MODES-I treats all models as clones of the same black box, which allows the optimization to be parallelized efficiently in a distributed setting. We evaluate MODES by optimizing the hyper-parameters of a random forest and a multi-layer perceptron. The experimental results demonstrate that MODES outperforms the baseline (tuning with MBO on each node individually using its local sub-data set), improving mean accuracy (MODES-B), run-time efficiency (MODES-I), and statistical stability in both modes.
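The abstract describes the MBO loop that MODES builds on: fit a surrogate model to the configurations evaluated so far, use an acquisition function to pick the next configuration, evaluate it, and repeat. Below is a minimal, generic Python sketch of that loop, tuning a random forest with a Gaussian-process surrogate and expected improvement; the dataset, search space, and budget are illustrative assumptions, not the MODES implementation.

```python
# Minimal sketch of model-based optimization (MBO) for hyper-parameter tuning.
# The search space and dataset are illustrative assumptions.
import numpy as np
from scipy.stats import norm
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern
from sklearn.model_selection import cross_val_score

X, y = load_digits(return_X_y=True)

def objective(params):
    """Cross-validated accuracy of a random forest for one configuration."""
    n_estimators, max_depth = int(params[0]), int(params[1])
    clf = RandomForestClassifier(n_estimators=n_estimators,
                                 max_depth=max_depth, random_state=0)
    return cross_val_score(clf, X, y, cv=3).mean()

rng = np.random.default_rng(0)
bounds = np.array([[10, 200], [2, 20]])          # n_estimators, max_depth

# Initial design: a few random configurations.
H = rng.uniform(bounds[:, 0], bounds[:, 1], size=(5, 2))
scores = np.array([objective(h) for h in H])

for _ in range(10):                              # MBO iterations
    # Surrogate model over the evaluated configurations.
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5),
                                  normalize_y=True).fit(H, scores)
    # Expected improvement over a random candidate pool.
    cand = rng.uniform(bounds[:, 0], bounds[:, 1], size=(256, 2))
    mu, sigma = gp.predict(cand, return_std=True)
    best = scores.max()
    z = (mu - best) / np.maximum(sigma, 1e-9)
    ei = (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)
    h_next = cand[np.argmax(ei)]                 # most promising configuration
    H = np.vstack([H, h_next])
    scores = np.append(scores, objective(h_next))

print("best accuracy:", scores.max(), "config:", H[np.argmax(scores)])
```

Read against the abstract: in MODES-B the parameter vector would concatenate the hyper-parameters of every node's model and the objective would be the combined ensemble accuracy, while in MODES-I each node would run this loop on a shared configuration in parallel.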


2000 ◽  
Author(s):  
Chengyu Gan ◽  
Kourosh Danai

Abstract: The utility of a model-based recurrent neural network (MBRNN) is demonstrated in fault diagnosis. The MBRNN can be formatted according to a state-space model, so it can take model-based fault detection and isolation (FDI) solutions as a starting point and improve them via training, adapting them to plant nonlinearities. In this paper, the application of the MBRNN to the IFAC Benchmark Problem is explored and its performance is compared with 'black box' neural network solutions. For this problem, the MBRNN is formulated according to the Eigen-Structure Assignment (ESA) residual generator developed by Jorgensen et al. [1]. The results indicate that the MBRNN provides better results than 'black box' neural networks, and that with training it can outperform the ESA residual generator.
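To make the construction concrete, here is a minimal NumPy sketch of the MBRNN idea: a recurrent cell whose weights are initialized from a state-space residual generator and could subsequently be refined by training. The two-state plant, its matrices, and the observer gain below are toy assumptions; the paper's actual formulation follows the ESA residual generator of Jorgensen et al., which is not reproduced here.

```python
import numpy as np

class ModelBasedRNNCell:
    """Recurrent cell initialized from a state-space residual generator:
    x_{k+1} = A x_k + B u_k + L r_k,   r_k = y_k - C x_k.
    In the MBRNN idea, A, B, C, L become trainable weights."""

    def __init__(self, A, B, C, L):
        self.A, self.B, self.C, self.L = A, B, C, L

    def step(self, x, u, y):
        r = y - self.C @ x                  # residual: measurement minus prediction
        x_next = self.A @ x + self.B @ u + self.L @ r
        return x_next, r

# Toy two-state plant (hypothetical matrices, not the benchmark model).
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
B = np.array([[1.0],
              [0.5]])
C = np.array([[1.0, 0.0]])
L = np.array([[0.2],
              [0.1]])                       # hypothetical observer gain

cell = ModelBasedRNNCell(A, B, C, L)
xp = np.zeros(2)                            # true plant state
xo = np.zeros(2)                            # observer / network state
for k in range(60):
    u = np.array([np.sin(0.1 * k)])
    xp = A @ xp + B @ u                     # fault-free plant dynamics
    y = C @ xp + (0.5 if k > 40 else 0.0)   # inject a sensor fault at k = 40
    xo, r = cell.step(xo, u, y)
    # |r| stays near zero until the fault appears, then jumps, so thresholding
    # the residual detects the fault; training would refine A, B, C, L.
```

Starting from the model-based weights rather than a random initialization is what distinguishes this architecture from the 'black box' networks it is compared against.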


2021 ◽  
Author(s):  
Matteo Corno ◽  
Stefano Dattilo ◽  
Sergio Savaresi

2021 ◽  
Vol 117 ◽  
pp. 104950
Author(s):  
Gianluca Papa ◽  
Mara Tanelli ◽  
Giulio Panzani ◽  
Sergio M. Savaresi

2020 ◽  
Vol 53 (2) ◽  
pp. 14775-14780
Author(s):  
Gianluca Papa ◽  
Mara Tanelli ◽  
Giulio Panzani ◽  
Sergio M. Savaresi
