Feedforward Neural Network Models for FPGA Routing Channel Width Estimation

2016 ◽ Vol 25 (1) ◽ pp. 71-76 ◽ Author(s): Qiang Liu, Ming Gao, Qijun Zhang, Tao Zhang

2017 ◽ Vol 32 (1) ◽ pp. 83-103 ◽ Author(s): Muhammad Shoaib, Asaad Y. Shamseldin, Sher Khan, Mudasser Muneer Khan, Zahid Mahmood Khan, ...

1989 ◽ Vol 1 (2) ◽ pp. 161-172 ◽ Author(s): Fernando J. Pineda

Error backpropagation in feedforward neural network models is a popular learning algorithm with roots in nonlinear estimation and optimization. It is routinely used to calculate error gradients in nonlinear systems with hundreds of thousands of parameters. However, the classical backpropagation architecture has severe restrictions. The extension of backpropagation to networks with recurrent connections is reviewed. It is now possible to compute error gradients efficiently for networks with temporal dynamics, which opens applications to a host of problems in system identification and control.
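The error-gradient computation that this review builds on can be sketched for the classical feedforward case in a few lines of NumPy. This is a minimal illustration (not the paper's recurrent extension); the network sizes, random data, and function names are arbitrary:

```python
import numpy as np

# Minimal sketch: error backpropagation for a one-hidden-layer
# feedforward network with quadratic loss L = 0.5 * ||y - t||^2.
rng = np.random.default_rng(0)

def forward(x, W1, W2):
    h = np.tanh(W1 @ x)   # hidden activations
    y = W2 @ h            # linear output
    return h, y

def backprop(x, t, W1, W2):
    """Return dL/dW1 and dL/dW2 via the chain rule."""
    h, y = forward(x, W1, W2)
    e = y - t                             # output error
    dW2 = np.outer(e, h)                  # gradient at the output layer
    dh = W2.T @ e                         # error propagated back to hidden layer
    dW1 = np.outer(dh * (1 - h**2), x)    # tanh'(a) = 1 - tanh(a)^2
    return dW1, dW2

W1 = rng.standard_normal((4, 3)) * 0.5
W2 = rng.standard_normal((2, 4)) * 0.5
x = rng.standard_normal(3)
t = rng.standard_normal(2)
dW1, dW2 = backprop(x, t, W1, W2)
```

For recurrent networks, the review's subject, the same chain rule is applied through the network's temporal dynamics rather than through a fixed layer stack.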


2005 ◽ Vol 15 (05) ◽ pp. 323-338 ◽ Author(s): Ralf Kretzschmar, Nicolaos B. Karayiannis, Fritz Eggimann

This paper proposes a framework for training feedforward neural network models capable of handling class overlap and imbalance by minimizing an error function that compensates for such imperfections of the training set. A special case of the proposed error function can be used for training variance-controlled neural networks (VCNNs), which are developed to handle class overlap by minimizing an error function involving the class-specific variance (CSV) computed at their outputs. Another special case of the proposed error function can be used for training class-balancing neural networks (CBNNs), which are developed to handle class imbalance by relying on class-specific correction (CSC). VCNNs and CBNNs are compared with conventional feedforward neural networks (FFNNs), quantum neural networks (QNNs), and resampling techniques. The properties of VCNNs and CBNNs are illustrated by experiments on artificial data. Various experiments involving real-world data reveal the advantages offered by VCNNs and CBNNs in the presence of class overlap and class imbalance.
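The general idea of an error function that compensates for class imbalance can be illustrated with a simple inverse-frequency weighting. This is a generic sketch in the spirit of class-specific correction; the paper's exact CSV/CSC formulations are not reproduced here, and the function name is illustrative:

```python
import numpy as np

# Hedged sketch: a class-balanced squared-error function in which each
# sample's error is weighted by the inverse frequency of its class, so
# errors on rare classes contribute more to the total. Illustrative only;
# not the paper's exact CSV/CSC error functions.
def class_balanced_mse(outputs, targets, labels):
    labels = np.asarray(labels)
    classes, counts = np.unique(labels, return_counts=True)
    freq = dict(zip(classes, counts / labels.size))
    w = np.array([1.0 / freq[c] for c in labels])   # rare classes weigh more
    per_sample = np.sum((outputs - targets) ** 2, axis=1)
    return np.mean(w * per_sample)

# A 3:1 imbalanced toy set with identical per-sample errors: the weighted
# mean exceeds the unweighted one because the minority sample is upweighted.
loss = class_balanced_mse(np.zeros((4, 1)), np.ones((4, 1)), [0, 0, 0, 1])
```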


2020 ◽ Vol 19 (3) ◽ pp. 13-25 ◽ Author(s): Paweł Kaczmarczyk

The presented research focuses on the construction of a model to effectively forecast demand for connection services; it is thus relevant to the Prediction System (PS) of telecom operators. The article contains results of comparative studies on the effectiveness of neural network models and regressive-neural (integrated) models in terms of their short-term forecasting abilities for multi-sectional demand for telecom services. A feedforward neural network was used as the neural network model. The regressive-neural model was constructed by fusing dichotomous linear regression of multi-sectional demand with a feedforward neural network used to model the residuals of the regression model (i.e. the residual variability). The response variable was the hourly count of seconds of outgoing calls within the selected operator's network. The calls were analysed by day type (e.g. weekday/weekend), connection category, and subscriber group. For both compared models, 35 explanatory variables were specified and used in the estimation process. The results show that the regressive-neural model has higher approximation and predictive capabilities than the non-integrated neural model.
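The two-stage structure of such an integrated model can be sketched as follows: a linear regression captures the systematic part of the response, and a small feedforward network is then fitted to the regression residuals. The data, network size, and training loop below are stand-ins, not the paper's actual 35-variable telecom setup:

```python
import numpy as np

# Illustrative sketch of a regressive-neural (integrated) model:
# Stage 1 fits a linear regression, Stage 2 fits a one-hidden-layer
# network to the residual variability. All data are synthetic.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 5))                 # stand-in regressors
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + np.sin(2 * X[:, 0])

# Stage 1: linear regression of the response on the regressors.
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta                              # residual variability

# Stage 2: feedforward network fitted to the residuals by gradient descent.
W1 = rng.standard_normal((8, 5)) * 0.5
w2 = rng.standard_normal(8) * 0.01
lr, n = 0.05, len(y)
for _ in range(2000):
    H = np.tanh(X @ W1.T)                         # hidden activations (200, 8)
    e = H @ w2 - resid                            # error on the residuals
    w2 -= lr * (H.T @ e) / n
    W1 -= lr * ((e[:, None] * w2) * (1 - H**2)).T @ X / n

# Integrated forecast: linear part plus the network's residual correction.
hybrid = X @ beta + np.tanh(X @ W1.T) @ w2
```

On data with a nonlinear component, the integrated forecast should leave a smaller error than the linear stage alone, which mirrors the comparison reported in the article.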


2005 ◽ Vol 17 (9) ◽ pp. 2034-2059 ◽ Author(s): Kosuke Hamaguchi, Masato Okada, Michiko Yamana, Kazuyuki Aihara

We report on deterministic and stochastic evolutions of firing states through a feedforward neural network with Mexican-hat-type connectivity. The prevalence of columnar structures in the cortex implies spatially localized connectivity between neural pools. Although feedforward neural network models with homogeneous connectivity have been intensively studied within the context of the synfire chain, the effect of local connectivity has not yet been studied as thoroughly. When each neuron fires independently, the dynamics of the macroscopic state variables (a firing rate and the spatial eccentricity of a firing pattern) are deterministic by the law of large numbers. The possible stable firing states derived from the deterministic evolution equations are uniform, localized, and nonfiring. Multistability of these three states is obtained when the excitatory and inhibitory interactions among neurons are balanced. When presynapse-dependent variance in connection efficacies is incorporated into the network, this variance generates common noise. The evolution of the macroscopic state variables then becomes stochastic, and neurons begin to fire in a correlated manner due to the common noise. The correlation structure generated by the common noise exhibits a nontrivial bimodal distribution. The development of a firing state through the neural layers does not converge to a fixed point but keeps fluctuating.
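Mexican-hat-type connectivity means short-range excitation combined with broader, weaker inhibition. A minimal sketch of such a kernel on a ring of neural pools, with a localized pattern fed forward through identical layers, looks as follows (all amplitudes, widths, and sizes are illustrative, not the paper's parameters):

```python
import numpy as np

# Sketch: Mexican-hat connectivity on a ring of n neural pools --
# a narrow excitatory Gaussian minus a wider, weaker inhibitory one.
def mexican_hat(n, a_exc=1.0, s_exc=3.0, a_inh=0.6, s_inh=9.0):
    idx = np.arange(n)
    d = np.abs(idx[:, None] - idx[None, :])
    d = np.minimum(d, n - d)                      # wrap-around ring distance
    return (a_exc * np.exp(-d**2 / (2 * s_exc**2))
            - a_inh * np.exp(-d**2 / (2 * s_inh**2)))

W = mexican_hat(60)

# Propagate a spatially localized firing pattern through feedforward layers
# that all share the same connectivity; rates are kept nonnegative.
rate = np.zeros(60)
rate[28:33] = 1.0                                 # localized input bump
for _ in range(5):
    rate = np.clip(np.tanh(W @ rate), 0.0, None)
```

The kernel is positive at short distances and negative at intermediate ones, which is what allows uniform, localized, and nonfiring states to coexist when excitation and inhibition are balanced.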


2020 ◽ Vol 5 ◽ pp. 140-147 ◽ Author(s): T.N. Aleksandrova, E.K. Ushakov, A.V. Orlova, ...

A series of neural network models used in the development of an aggregated digital twin of equipment as a cyber-physical system is presented. The twins of machining accuracy, chip formation, and tool wear are examined in detail. On their basis, systems for stabilizing the chip formation process during cutting and diagnosing cutting tool wear are developed.

Keywords: cyber-physical system; neural network model of equipment; big data; digital twin of the chip formation; digital twin of the tool wear; digital twin of nanostructured coating choice

