Author(s): Yuanchen Fang, Huyang Xu, Nasser Fard

For systems with multiple redundancies, reliability evaluation in the redundancy allocation problem (RAP) is computationally complex. It has been demonstrated that neural network training provides an efficient way to estimate the complex system reliability function. When executing the neural network algorithm, many parameters must be set to achieve good training performance; robust experimental design methods can therefore be used to determine the neural network parameters. Traditional robust design methods are intended for a single response variable, whereas the application of the neural network method involves more than one performance measure, such as estimation accuracy and time efficiency. In this paper, the utility function is first estimated by neural network training, in which the algorithm parameters are determined by weighted principal component analysis (WPCA)-based multi-response optimization, which simultaneously optimizes more than one training performance measure. Moreover, it is often desirable to optimize several objectives simultaneously when designing a system, such as reliability and cost. Therefore, continuous WPCA-based multi-response design is then applied to obtain the best allocation of redundancies in the RAP, simultaneously optimizing multiple objectives while taking the correlations between them into account.
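WPCA-based multi-response optimization of the kind referenced here typically aggregates several standardized responses into a single performance index, weighting each principal component by its explained-variance proportion so that correlations between the responses are respected. The following is a minimal Python sketch of such an index under those assumptions; the function name `wpca_mpi`, the two response columns (accuracy and time efficiency), and all numerical values are illustrative, not the authors' implementation.

```python
import numpy as np

def wpca_mpi(responses):
    """WPCA-style multi-response performance index (MPI).

    responses: (n_runs, n_responses) array of normalized quality
    measures (larger-the-better, scaled to [0, 1]).
    Returns one MPI value per experimental run.
    """
    # Standardize each response column before extracting components.
    Z = (responses - responses.mean(axis=0)) / responses.std(axis=0)
    # Principal components of the responses' correlation structure.
    eigvals, eigvecs = np.linalg.eigh(np.cov(Z, rowvar=False))
    order = np.argsort(eigvals)[::-1]            # sort by explained variance
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    eigvecs = eigvecs * np.sign(eigvecs.sum(axis=0))  # fix sign convention
    scores = Z @ eigvecs                         # PC scores for each run
    weights = eigvals / eigvals.sum()            # explained-variance weights
    return scores @ weights                      # weighted sum = MPI per run

# Hypothetical example: two training-performance measures
# (estimation accuracy, time efficiency) over five parameter settings.
resp = np.array([[0.91, 0.40],
                 [0.88, 0.75],
                 [0.95, 0.55],
                 [0.85, 0.90],
                 [0.93, 0.65]])
mpi = wpca_mpi(resp)
print(mpi, np.argmax(mpi))   # setting with the highest combined index
```

Summing components weighted by explained variance is what lets a single index reflect both responses without assuming they are independent.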


2022, Vol. 15
Author(s): Chaeun Lee, Kyungmi Noh, Wonjae Ji, Tayfun Gokmen, Seyoung Kim

Recent progress in novel non-volatile memory-based synaptic device technologies and their feasibility for matrix-vector multiplication (MVM) has ignited active research on implementing analog neural network training accelerators with resistive crosspoint arrays. While significant performance boosts as well as area and power efficiency are theoretically predicted, the realization of such analog accelerators is largely limited by the non-ideal switching characteristics of the crosspoint elements. One of the most performance-limiting non-idealities is conductance update asymmetry, which is known to distort the actual weight changes away from those calculated by error back-propagation and therefore significantly deteriorates neural network training performance. To address this issue algorithmically, the Tiki-Taka algorithm was proposed and shown to be effective for neural network training with asymmetric devices. However, a systematic analysis of the asymmetry specification required to guarantee neural network performance has been lacking. Here, we quantitatively analyze the impact of update asymmetry on neural network training performance under the Tiki-Taka algorithm by exploring the space of asymmetry and hyper-parameters and measuring the classification accuracy. We find that the update asymmetry level of the auxiliary array affects how the optimizer weights previous gradients, whereas that of the main array affects how frequently those gradients are accepted. We propose a novel calibration method to find the optimal operating point in terms of device and network parameters. By searching over the hyper-parameter space of the Tiki-Taka algorithm using interpolation and Gaussian filtering, we find the optimal hyper-parameters efficiently and reveal the optimal range of asymmetry, namely the asymmetry specification. Finally, we show that the analysis and calibration method are also applicable to spiking neural networks.
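Update asymmetry is commonly modeled with a soft-bounds-style rule in which the effective step depends on the sign of the update and the current conductance state, and Tiki-Taka splits training between an auxiliary array that accumulates gradients and a main array that receives periodic transfers. The Python sketch below illustrates that two-array structure on a toy scalar problem; the update rule, the parameter names (`beta_A`, `beta_C`, `transfer_lr`), and all constants are simplifying assumptions rather than the paper's exact formulation.

```python
import numpy as np

def asymmetric_update(w, dw, beta):
    """Device-level update with soft-bounds-style asymmetry.

    beta = 0 gives the ideal (symmetric) update; larger beta makes
    potentiation and depression increasingly unbalanced.
    """
    return w + dw * (1.0 - beta * np.sign(dw) * w)

def tiki_taka_step(A, C, grad, lr, beta_A, beta_C, transfer_lr):
    """One simplified Tiki-Taka-style step.

    Gradients accumulate on the auxiliary array A through the asymmetric
    update; A is then partially transferred into the main array C, which
    holds the effective weights.
    """
    A = asymmetric_update(A, -lr * grad, beta_A)       # accumulate gradient on A
    C = asymmetric_update(C, transfer_lr * A, beta_C)  # transfer A into C
    return A, C

# Hypothetical toy run: scalar quadratic loss L(w) = 0.5 * (w - 1)^2.
rng = np.random.default_rng(0)
A, C = 0.0, rng.normal()
for _ in range(500):
    grad = C - 1.0                       # dL/dw at the effective weight C
    A, C = tiki_taka_step(A, C, grad, lr=0.1,
                          beta_A=0.3, beta_C=0.1, transfer_lr=0.05)
print(C)   # should drift toward the minimum at w = 1
```

In this simplified picture, `beta_A` damps how strongly past gradients persist on the auxiliary array and `beta_C` modulates how transfers land on the main array, which loosely mirrors the roles the paper attributes to the two arrays' asymmetry levels.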


Entropy, 2021, Vol. 23 (6), pp. 711
Author(s): Mina Basirat, Bernhard C. Geiger, Peter M. Roth

Information plane analysis, describing the mutual information between the input and a hidden layer and between a hidden layer and the target over time, has recently been proposed as a tool for analyzing the training of neural networks. Since the activations of a hidden layer are typically continuous-valued, this mutual information cannot be computed analytically and must be estimated, which has led to apparently inconsistent or even contradictory results in the literature. The goal of this paper is to demonstrate how information plane analysis can still be a valuable tool for analyzing neural network training. To this end, we complement the prevailing binning estimator for mutual information with a geometric interpretation. With this geometric interpretation in mind, we evaluate the impact of regularization and interpret phenomena such as underfitting and overfitting. In addition, we investigate neural network learning in the presence of noisy data and noisy labels.
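The binning estimator referenced above discretizes the continuous activations into a finite number of bins and computes mutual information on the resulting empirical joint distribution, which is why the estimate depends strongly on the binning choice. A minimal NumPy sketch for two one-dimensional variables follows; the bin count, the tanh toy activation, and all constants are illustrative assumptions.

```python
import numpy as np

def binned_mutual_information(x, t, n_bins=30):
    """Binning estimator of I(X; T) between two 1-D samples.

    Both variables are discretized into equal-width bins and the mutual
    information of the resulting empirical joint distribution is
    returned, in nats.
    """
    joint, _, _ = np.histogram2d(x, t, bins=n_bins)
    p_xt = joint / joint.sum()              # empirical joint distribution
    p_x = p_xt.sum(axis=1, keepdims=True)   # marginal over T
    p_t = p_xt.sum(axis=0, keepdims=True)   # marginal over X
    mask = p_xt > 0                         # skip empty bins (log 0)
    return float(np.sum(p_xt[mask] * np.log(p_xt[mask] / (p_x @ p_t)[mask])))

# Hypothetical example: T is a noisy "activation" of X, so I(X; T) > 0.
rng = np.random.default_rng(1)
x = rng.normal(size=10_000)
t = np.tanh(x) + 0.1 * rng.normal(size=10_000)   # stand-in hidden unit
print(binned_mutual_information(x, t))
```

Because the estimate changes with `n_bins`, comparing information-plane trajectories across papers that bin differently is exactly where the inconsistencies mentioned above arise.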


2020, pp. 106878
Author(s): H. M. Dipu Kabir, Abbas Khosravi, Abdollah Kavousi-Fard, Saeid Nahavandi, Dipti Srinivasan
