Expectation Propagation with Factorizing Distributions: A Gaussian Approximation and Performance Results for Simple Models

2011 ◽  
Vol 23 (4) ◽  
pp. 1047-1069 ◽  
Author(s):  
Fabiano Ribeiro ◽  
Manfred Opper

We discuss the expectation propagation (EP) algorithm for approximate Bayesian inference using a factorizing posterior approximation. For neural network models, we use a central limit theorem argument to make EP tractable when the number of parameters is large. For two types of models, we show that EP can achieve optimal generalization performance when data are drawn from a simple distribution.
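The factorizing-Gaussian EP scheme the abstract refers to can be illustrated on a deliberately tiny stand-in problem. The sketch below is not the paper's neural-network setting: it runs EP with Gaussian sites on a 1-D posterior p(x) ∝ N(x; 0, v0) Π_i Φ(y_i x), computing the tilted moments by brute-force quadrature rather than analytically. The function name `ep_probit`, the grid bounds, and the iteration count are all illustrative choices.

```python
import numpy as np
from math import erf, sqrt

def Phi(z):
    """Standard normal CDF via erf (avoids a SciPy dependency)."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

PhiV = np.vectorize(Phi)

def ep_probit(y, prior_var=1.0, iters=20):
    """EP with factorizing Gaussian sites for p(x) ∝ N(x; 0, v0) Π_i Φ(y_i x).

    Sites are stored in natural parameters (precision r_i, precision-mean b_i).
    Each sweep removes one site, forms the tilted distribution on a grid,
    and moment-matches the site. A toy 1-D illustration of the algorithm,
    not the paper's large-parameter neural network setting.
    """
    n = len(y)
    r = np.zeros(n)                        # site precisions
    b = np.zeros(n)                        # site precision-means
    grid = np.linspace(-8.0, 8.0, 4001)
    dx = grid[1] - grid[0]
    prior_prec = 1.0 / prior_var
    for _ in range(iters):
        for i in range(n):
            # cavity = full Gaussian approximation with site i removed
            cav_prec = prior_prec + r.sum() - r[i]
            cav_b = b.sum() - b[i]
            cav_mean, cav_var = cav_b / cav_prec, 1.0 / cav_prec
            # tilted distribution: cavity Gaussian times the exact factor
            tilt = np.exp(-0.5 * (grid - cav_mean) ** 2 / cav_var) * PhiV(y[i] * grid)
            tilt /= tilt.sum() * dx
            m = (grid * tilt).sum() * dx
            v = ((grid - m) ** 2 * tilt).sum() * dx
            # moment match: new site = tilted / cavity, in natural parameters
            r[i] = 1.0 / v - cav_prec
            b[i] = m / v - cav_b
    post_prec = prior_prec + r.sum()
    return b.sum() / post_prec, 1.0 / post_prec
```

With observations `y = [1, 1, -1]` the approximate posterior mean comes out positive and the variance shrinks below the prior variance, as the two positive probit factors outweigh the negative one.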

Author(s):  
Sergio Cavalieri ◽  
Paolo Maccarrone ◽  
Roberto Pinto

Estimating the production cost per unit of a product during its design phase can be extremely difficult, especially if information about previous similar products is missing. On the other hand, most of the costs that will be sustained during production are implicitly determined in the design phase, depending on the characteristics and performance chosen for the new product. Hence, the earlier cost information becomes available, the better the trade-off between costs and product performance can be managed. These considerations have led to the development of different design rules and techniques, such as Design to Cost, which


2021 ◽  
Author(s):  
Dipanwita Sinha Mukherjee ◽  
Naveen Yeri

<div>Weight initialization is important for the fast convergence and performance of Artificial Neural Network models. This study proposes a heuristic method for initializing Neural Network weights with the Fibonacci sequence. Experiments were carried out with different network structures and datasets, and the results were compared with other initialization techniques such as Zero, Random, Xavier, and He. For small datasets, the Fibonacci initialization technique reaches 94% test accuracy, which is better than Random (85%) and close to Xavier (93%) and He (96%). For a medium-sized dataset, the performance of Fibonacci weight initialization is comparable with that of the Random, Xavier, and He techniques.</div>
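The abstract does not specify how the Fibonacci sequence is mapped to weight values, so the sketch below is one plausible guess, not the authors' scheme: consecutive Fibonacci numbers are log-transformed to tame their exponential growth, standardized, and rescaled to a Xavier-like magnitude. The function name `fibonacci_init` and every design choice in it are assumptions.

```python
import numpy as np

def fibonacci_init(fan_in, fan_out, scale=None):
    """Hypothetical Fibonacci-based weight initializer.

    Fills a (fan_in, fan_out) matrix with consecutive Fibonacci numbers,
    takes logs (Fibonacci numbers grow exponentially, so logs grow roughly
    linearly), standardizes to zero mean and unit variance, then rescales
    to a Xavier-like magnitude sqrt(2 / (fan_in + fan_out)).
    """
    n = fan_in * fan_out
    fib = np.empty(n)
    a, b = 1.0, 1.0
    for i in range(n):
        fib[i] = a
        a, b = b, a + b
    w = np.log(fib)
    w = (w - w.mean()) / (w.std() + 1e-12)   # zero mean, unit variance
    if scale is None:
        scale = np.sqrt(2.0 / (fan_in + fan_out))
    return (w * scale).reshape(fan_in, fan_out)
```

Unlike Random, Xavier, or He initialization, the result is deterministic: the same layer shape always yields the same weights, which may be part of the appeal of a sequence-based heuristic.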


1996 ◽  
Vol 07 (02) ◽  
pp. 203-212 ◽  
Author(s):  
M. ZAKI ◽  
A. GHALWASH ◽  
A.A. ELKOUNY

The main emphasis of this paper is an approach that combines supervised and unsupervised neural network models for speaker recognition. To enhance overall recognition performance, the proposed strategy integrates the two techniques into one global model, called the cascaded model. We first present a simple conventional technique based on the distance measured between a test vector and a reference vector for each speaker in the population. This distance metric has the property of weighting down the components in those directions along which the intraspeaker variance is large. The reason for presenting this method is to clarify the discrepancy in performance between the conventional and neural network approaches. We then introduce an unsupervised learning technique, represented by the winner-take-all model, as a means of recognition. Based on several tests, and to enhance this model's performance on noisy patterns, we precede it with a supervised learning model, the pattern association model, which acts as a filtration stage. This work includes the design and implementation of both the conventional and neural network approaches to recognize the speaker templates, which are introduced to the system via a voice master card and preprocessed before the features used in recognition are extracted. The conclusion indicates that the neural network system outperforms the conventional one, degrading smoothly on noisy patterns and achieving higher performance on noise-free patterns.
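The conventional distance metric the abstract describes can be sketched directly: each squared component difference is divided by that component's intraspeaker variance, so directions in which a speaker's own utterances vary widely contribute less. This assumes a diagonal (per-component) variance model; the function names are illustrative.

```python
import numpy as np

def variance_weighted_distance(test, ref, intra_var):
    """Distance between a test vector and a speaker's reference vector,
    with each component down-weighted by that speaker's intraspeaker
    variance along the same component."""
    return float(np.sum((test - ref) ** 2 / intra_var))

def recognize(test, refs, intra_vars):
    """Return the index of the reference speaker at minimum distance."""
    dists = [variance_weighted_distance(test, r, v)
             for r, v in zip(refs, intra_vars)]
    return int(np.argmin(dists))
```

This is a diagonal Mahalanobis distance; the neural network cascade in the paper replaces this hand-crafted metric with learned pattern association and winner-take-all stages.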


2020 ◽  
Vol 5 ◽  
pp. 140-147 ◽  
Author(s):  
T.N. Aleksandrova ◽  
E.K. Ushakov ◽  
A.V. Orlova ◽  
...  

The series of neural network models used in the development of an aggregated digital twin of equipment as a cyber-physical system is presented. The twins of machining accuracy, chip formation, and tool wear are examined in detail. On their basis, systems for stabilizing the chip formation process during cutting and for diagnosing cutting tool wear are developed. Keywords: cyber-physical system; neural network model of equipment; big data; digital twin of chip formation; digital twin of tool wear; digital twin of nanostructured coating choice


Energies ◽  
2021 ◽  
Vol 14 (14) ◽  
pp. 4242
Author(s):  
Fausto Valencia ◽  
Hugo Arcos ◽  
Franklin Quilumba

The purpose of this research is to evaluate artificial neural network models for predicting the stresses in a 400 MVA power transformer winding conductor caused by the circulation of fault currents. The models were compared in terms of the behavior of the training, validation, and test errors. Different combinations of hyperparameters were analyzed, varying the architecture, optimizer, and activation function. The data for the process were created from finite element simulations performed in the FEMM software. The Artificial Neural Network was designed using the Keras framework. As a result, a model with one hidden layer, the Adam optimizer, and the ReLU activation function was the architecture best suited to the problem at hand. The final model's predictions were compared with the Finite Element Method results, showing good agreement with a much shorter solution time.
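The winning architecture is simple enough to write out. Rather than reproduce the authors' Keras code (which is not given in the abstract), the sketch below is a plain NumPy forward pass of the same shape: one ReLU hidden layer and a linear output for stress regression. The layer sizes and the He-style initialization are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    return np.maximum(0.0, z)

class OneHiddenLayerNet:
    """Minimal NumPy version of the architecture the abstract describes:
    one ReLU hidden layer and a linear output. Layer sizes are
    illustrative, not taken from the paper."""

    def __init__(self, n_in, n_hidden, n_out):
        self.W1 = rng.normal(0.0, np.sqrt(2.0 / n_in), (n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0.0, np.sqrt(1.0 / n_hidden), (n_hidden, n_out))
        self.b2 = np.zeros(n_out)

    def forward(self, X):
        """X has shape (batch, n_in); returns (batch, n_out) predictions."""
        return relu(X @ self.W1 + self.b1) @ self.W2 + self.b2
```

In the paper's workflow this regressor is trained with Adam on stress values generated by FEMM finite element runs; once trained, a forward pass replaces a full finite element solve, which is where the reported speedup comes from.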


2021 ◽  
Vol 11 (3) ◽  
pp. 908
Author(s):  
Jie Zeng ◽  
Panagiotis G. Asteris ◽  
Anna P. Mamou ◽  
Ahmed Salih Mohammed ◽  
Emmanuil A. Golias ◽  
...  

Buried pipes are extensively used for oil transportation from offshore platforms. Under unfavorable loading combinations, the pipe’s uplift resistance may be exceeded, which may result in excessive deformations and significant disruptions. This paper presents findings from a series of small-scale tests performed on pipes buried in geogrid-reinforced sands, with the measured peak uplift resistance being used to calibrate advanced numerical models employing neural networks. Multilayer perceptron (MLP) and Radial Basis Function (RBF) primary structure types have been used to train two neural network models, which were then further developed using bagging and boosting ensemble techniques. Correlation coefficients in excess of 0.954 between the measured and predicted peak uplift resistance have been achieved. The results show that the design of pipelines can be significantly improved using the proposed novel, reliable and robust soft computing models.
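The bagging step the abstract mentions can be sketched independently of the base learner. The paper bags MLP and RBF networks; the sketch below swaps in ordinary least squares as the base model to stay short and dependency-free, and the function name `bagged_fit_predict` is illustrative. The ensemble mechanics (bootstrap resampling, then averaging predictions) are the same.

```python
import numpy as np

rng = np.random.default_rng(42)

def bagged_fit_predict(X, y, X_test, n_models=25):
    """Bagging sketch: fit each base model on a bootstrap resample of the
    training data and average the predictions. The paper's base models are
    MLP/RBF networks; ordinary least squares stands in here for brevity."""
    n = len(X)
    Xb = np.column_stack([np.ones(n), X])            # add a bias column
    Tb = np.column_stack([np.ones(len(X_test)), X_test])
    preds = []
    for _ in range(n_models):
        idx = rng.integers(0, n, size=n)             # bootstrap sample
        w, *_ = np.linalg.lstsq(Xb[idx], y[idx], rcond=None)
        preds.append(Tb @ w)
    return np.mean(preds, axis=0)                    # ensemble average
```

Boosting differs in that the resampling (or reweighting) of each round is driven by the previous round's errors instead of being uniform, so the base models are fit sequentially rather than independently.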

