Case studies of neural networks in engineering applications

Author(s):  
Zhaoying Zhou ◽  
Shenshu Xiong


2003 ◽  
Vol 7 (5) ◽  
pp. 693-706 ◽  
Author(s):  
E. Gaume ◽  
R. Gosset

Abstract. Recently, Feed-Forward Artificial Neural Networks (FNN) have been gaining popularity for stream flow forecasting. However, despite the promising results presented in recent papers, their use is questionable. In theory, their "universal approximator" property guarantees that, if a sufficient number of neurons is selected, good model performance for interpolation purposes can be achieved. But the choice of a more complex model does not ensure a better prediction: models with many parameters have a high capacity to fit the noise and the particularities of the calibration dataset, at the cost of a diminished generalisation capacity. In support of the principle of model parsimony, a model selection method based on the validation performance of the models, "traditionally" used in the context of conceptual rainfall-runoff modelling, was adapted to the choice of an FNN structure. This method was applied to two different case studies: river flow prediction based on knowledge of upstream flows, and rainfall-runoff modelling. The predictive powers of the selected neural networks are compared with the results obtained with a linear model and a conceptual model (GR4j). In both case studies, the method leads to the selection of neural network structures with a limited number of neurons in the hidden layer (two or three). Moreover, the validation results of the selected FNN and of the linear model are very close. The conceptual model, specifically dedicated to rainfall-runoff modelling, appears to outperform the other two approaches. These conclusions, drawn from specific case studies using a particular evaluation method, add to the debate on the usefulness of Artificial Neural Networks in hydrology.
Keywords: forecasting; stream-flow; rainfall-runoff; Artificial Neural Networks
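The selection procedure the abstract describes, choosing the FNN structure by validation rather than calibration performance, can be sketched in a few lines. The synthetic data, candidate hidden-layer sizes, and plain gradient-descent training below are illustrative assumptions, not the paper's actual datasets or fitting method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "upstream flow -> downstream flow" data: a smooth nonlinear
# relation plus noise (purely illustrative, not the paper's datasets).
x = rng.uniform(0, 1, size=(200, 1))
y = np.sin(2 * np.pi * x[:, 0]) + 0.1 * rng.normal(size=200)

# Split into a calibration (training) set and a validation set.
x_cal, y_cal = x[:120], y[:120]
x_val, y_val = x[120:], y[120:]

def train_ffnn(x, y, n_hidden, epochs=2000, lr=0.05, seed=1):
    """Train a one-hidden-layer tanh network by plain gradient descent."""
    r = np.random.default_rng(seed)
    w1 = r.normal(scale=0.5, size=(x.shape[1], n_hidden))
    b1 = np.zeros(n_hidden)
    w2 = r.normal(scale=0.5, size=n_hidden)
    b2 = 0.0
    for _ in range(epochs):
        h = np.tanh(x @ w1 + b1)      # hidden-layer activations
        pred = h @ w2 + b2            # single linear output neuron
        err = pred - y
        # Backpropagate the mean-squared-error gradient.
        g2 = h.T @ err / len(y)
        gb2 = err.mean()
        dh = np.outer(err, w2) * (1 - h ** 2)
        g1 = x.T @ dh / len(y)
        gb1 = dh.mean(axis=0)
        w2 -= lr * g2; b2 -= lr * gb2
        w1 -= lr * g1; b1 -= lr * gb1
    return w1, b1, w2, b2

def rmse(params, x, y):
    w1, b1, w2, b2 = params
    pred = np.tanh(x @ w1 + b1) @ w2 + b2
    return float(np.sqrt(np.mean((pred - y) ** 2)))

# Parsimony in action: pick the structure by *validation* performance,
# not by how well it fits the calibration set.
sizes = [1, 2, 3, 5, 8]
val_errors = {n: rmse(train_ffnn(x_cal, y_cal, n), x_val, y_val) for n in sizes}
best = min(val_errors, key=val_errors.get)
```

A large network would typically achieve the lowest calibration error, but the validation criterion penalises the overfitting that comes with it, which is how the paper arrives at structures with only two or three hidden neurons.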


2002 ◽  
Vol 124 (3) ◽  
pp. 364-374 ◽  
Author(s):  
Alexander G. Parlos ◽  
Sunil K. Menon ◽  
Amir F. Atiya

On-line filtering of stochastic variables that are difficult or expensive to measure directly has been widely studied. In this paper a practical algorithm is presented for adaptive state filtering when the underlying nonlinear state equations are only partially known. The unknown dynamics are constructively approximated using neural networks. The proposed algorithm is based on the two-step prediction-update approach of the Kalman Filter. The algorithm accounts for the unmodeled nonlinear dynamics and makes no assumptions about the system noise statistics. The proposed filter is implemented using static and dynamic feedforward neural networks. Both off-line and on-line learning algorithms are presented for training the filter networks. Two case studies are considered and comparisons with Extended Kalman Filters (EKFs) are performed. For one of the case studies, the EKF converges but results in higher state estimation errors than the equivalent neural filter with on-line learning. For the other, more complex case study, the developed EKF does not converge. In both case studies, the off-line trained neural state filters converge quite rapidly and exhibit acceptable performance. On-line training further enhances filter performance, decoupling the eventual filter accuracy from the accuracy of the assumed system model.
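A heavily simplified, scalar sketch of the two-step prediction-update idea: a small network stands in for the partially known state-transition map and is trained on-line, while a fixed blending gain replaces a full Kalman gain computation. The dynamics, noise levels, network size, and gain value below are all illustrative assumptions, not the paper's algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

# True (unknown to the filter) scalar dynamics; measurements are direct but noisy.
def f_true(x):
    return 0.9 * x + 0.5 * np.sin(x)

# A tiny tanh network approximating the unknown state-transition map.
n_h = 8
w1 = rng.normal(scale=0.5, size=n_h); b1 = np.zeros(n_h)
w2 = rng.normal(scale=0.5, size=n_h); b2 = 0.0

def nn(x):
    return np.tanh(x * w1 + b1) @ w2 + b2

def nn_step(x, target, lr=0.05):
    """One on-line gradient step pulling nn(x) toward target."""
    global w1, b1, w2, b2
    h = np.tanh(x * w1 + b1)
    err = (h @ w2 + b2) - target
    w2 -= lr * err * h; b2 -= lr * err
    dh = err * w2 * (1 - h ** 2)
    w1 -= lr * dh * x; b1 -= lr * dh

# Predict/update loop with a fixed gain (Kalman-style blending).
K = 0.5
x_true, x_est = 1.0, 0.0
pred_errors = []
for t in range(500):
    x_prev_est = x_est
    x_true = f_true(x_true) + 0.02 * rng.normal()   # process noise
    z = x_true + 0.1 * rng.normal()                 # measurement noise
    x_pred = nn(x_prev_est)                         # 1. prediction step
    x_est = x_pred + K * (z - x_pred)               # 2. update step
    nn_step(x_prev_est, z)                          # on-line learning
    pred_errors.append(abs(x_pred - x_true))

early = float(np.mean(pred_errors[:50]))
late = float(np.mean(pred_errors[-50:]))
```

As the network absorbs the unmodeled dynamics, the one-step prediction error shrinks, which is the sense in which on-line training decouples filter accuracy from the assumed system model.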


2021 ◽  
Author(s):  
Chih-Kuan Yeh ◽  
Been Kim ◽  
Pradeep Ravikumar

Understanding complex machine learning models such as deep neural networks through explanations is crucial in many applications. Many explanations stem from the model's perspective and may not communicate why the model is making its predictions at the right level of abstraction. For example, assigning importance weights to individual pixels can only express which parts of a particular image are important to the model, whereas humans may prefer an explanation framed in terms of concepts. In this work, we review the emerging area of concept-based explanations. We start by introducing concept explanations, including the class of Concept Activation Vectors (CAVs), which characterize concepts as vectors in appropriate spaces of neural activations, and discuss the properties of useful concepts and approaches to measuring the usefulness of concept vectors. We then discuss approaches to automatically extract concepts and to address some of their caveats. Finally, we discuss case studies that showcase the utility of such concept-based explanations in synthetic settings and real-world applications.
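A minimal sketch of the CAV construction: fit a linear probe separating the activations of concept examples from random examples, take the unit normal of its decision boundary as the concept vector, and score conceptual sensitivity as a directional derivative along it. The synthetic "activations" and the stand-in gradient vector below are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "neural activations": concept examples are shifted along a
# hypothetical concept direction; negatives are drawn at random.
d = 16
true_dir = np.zeros(d); true_dir[0] = 1.0
concept_acts = rng.normal(size=(100, d)) + 2.0 * true_dir
random_acts = rng.normal(size=(100, d))

X = np.vstack([concept_acts, random_acts])
y = np.concatenate([np.ones(100), np.zeros(100)])

# Fit a linear probe (logistic regression by gradient descent); the CAV
# is the unit normal of its decision boundary.
w = np.zeros(d); b = 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    grad_w = X.T @ (p - y) / len(y)
    grad_b = (p - y).mean()
    w -= 0.5 * grad_w; b -= 0.5 * grad_b
cav = w / np.linalg.norm(w)

# TCAV-style conceptual sensitivity: directional derivative of a class
# logit along the CAV; the gradient here is a hypothetical stand-in for
# d(logit)/d(activation) from a real model.
logit_grad = rng.normal(size=d) + true_dir
sensitivity = float(logit_grad @ cav)
```

A positive sensitivity means nudging the activations toward the concept increases the class logit; aggregating the sign of this quantity over many inputs is the usual way such concept scores are reported.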


2020 ◽  
Vol 32 (16) ◽  
pp. 12241-12242 ◽  
Author(s):  
Elias Pimenidis ◽  
Chrisina Jayne

2016 ◽  
Vol 27 (5) ◽  
pp. 1075-1076 ◽  
Author(s):  
Chrisina Jayne ◽  
Lazaros Iliadis ◽  
Valeri Mladenov
