TAG: A Neural Network Model for Large-Scale Optical Implementation

1991 ◽  
Vol 3 (1) ◽  
pp. 135-143 ◽  
Author(s):  
Hyuek-Jae Lee ◽  
Soo-Young Lee ◽  
Sang-Yung Shin ◽  
Bo-Yun Koh

TAG (Training by Adaptive Gain) is a new adaptive learning algorithm developed for optical implementation of large-scale artificial neural networks. For fully interconnected single-layer neural networks with N input and M output neurons, TAG contains two different types of interconnections: M × N global fixed interconnections and N + M adaptive gain controls. For two-dimensional input patterns the former may be realized by multifacet holograms, and the latter by spatial light modulators (SLMs). For the same numbers of input and output neurons, TAG requires far fewer adaptive elements than the perceptron and thus makes large-scale optical implementation feasible at some sacrifice in performance. The training algorithm is based on gradient descent and error backpropagation, and is easily extensible to multilayer architectures. Computer simulations demonstrate that TAG performs reasonably well compared with the perceptron. An electrooptical implementation of TAG is also proposed.
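
As a rough illustration of the adaptive-gain idea, here is a minimal NumPy sketch: the M × N interconnection matrix stays fixed (as a multifacet hologram would be) and only the N input gains and M output gains are trained by gradient descent. The placement of the gains, the activation function, and the update rule are assumptions for illustration, not the authors' exact formulation.

```python
import numpy as np

# Sketch of TAG: W is fixed (the hologram); only the N + M gains adapt.
rng = np.random.default_rng(0)
N, M = 16, 4                        # input / output neurons
W = rng.standard_normal((M, N))     # M x N global fixed interconnections
g_in = np.ones(N)                   # N adaptive input gains (SLM)
g_out = np.ones(M)                  # M adaptive output gains (SLM)

def forward(x):
    s = W @ (g_in * x)              # fixed weights see gain-scaled inputs
    return np.tanh(g_out * s), s    # smooth activation (assumed)

def train_step(x, target, lr=0.01):
    """Gradient-descent update of the gains only; W is never changed."""
    global g_in, g_out
    y, s = forward(x)
    err = y - target
    dpre = err * (1.0 - y**2)                  # tanh derivative
    grad_out = dpre * s                        # dE/dg_out
    grad_in = (W.T @ (dpre * g_out)) * x       # dE/dg_in (backpropagated)
    g_out -= lr * grad_out
    g_in -= lr * grad_in
    return float(np.mean(err**2))
```

Note that only N + M scalars are updated per step, versus M × N weights in a perceptron of the same size, which is exactly the trade-off the abstract describes.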

1994 ◽  
Author(s):  
T. C. B. Yu ◽
Robert J. Mears ◽  
Anthony B. Davey ◽  
William A. Crossland ◽  
M. W. Snook ◽  
...  

BMC Genomics ◽  
2019 ◽  
Vol 20 (S9) ◽  
Author(s):  
Yang-Ming Lin ◽  
Ching-Tai Chen ◽  
Jia-Ming Chang

Abstract

Background: Tandem mass spectrometry allows biologists to identify and quantify protein samples in the form of digested peptide sequences. When performing peptide identification, spectral library search is more sensitive than traditional database search but is limited to peptides that have been previously identified. An accurate tandem mass spectrum prediction tool is thus crucial for expanding the peptide space and increasing the coverage of spectral library search.

Results: We propose MS2CNN, a non-linear regression model based on deep convolutional neural networks. The features for our model are amino acid composition, predicted secondary structure, and physical-chemical features such as isoelectric point, aromaticity, helicity, hydrophobicity, and basicity. MS2CNN was trained with five-fold cross-validation on a three-way data split of the large-scale human HCD MS2 dataset of Orbitrap LC-MS/MS downloaded from the National Institute of Standards and Technology. It was then evaluated on a publicly available independent test dataset of human HeLa cell lysate from LC-MS experiments. On average, our model shows better cosine similarity and Pearson correlation coefficient (0.690 and 0.632) than MS2PIP (0.647 and 0.601) and is comparable with pDeep (0.692 and 0.642). Notably, for the more complex MS2 spectra of 3+ peptides, MS2CNN is significantly better than both MS2PIP and pDeep.

Conclusions: We showed that MS2CNN outperforms MS2PIP for 2+ and 3+ peptides and pDeep for 3+ peptides. This implies that MS2CNN, the proposed convolutional neural network model, generates highly accurate MS2 spectra for LC-MS/MS experiments using Orbitrap machines, which can be of great help in protein and peptide identification. The results suggest that incorporating more data into the deep learning model may further improve performance.
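
To make the regression setup concrete, here is a hypothetical PyTorch sketch of an MS2CNN-style predictor: a 1-D convolutional network mapping per-residue peptide features to fragment-ion intensities. The layer sizes, feature count, and output dimension are assumptions for illustration; the paper's exact architecture may differ.

```python
import torch
import torch.nn as nn

MAX_LEN = 30                    # assumed maximum peptide length
N_FEATS = 24                    # assumed per-residue feature channels
N_IONS = 4 * (MAX_LEN - 1)      # assumed: b/y ions at two charge states

class MS2CNNSketch(nn.Module):
    """1-D CNN regressor from peptide features to MS2 ion intensities."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(N_FEATS, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv1d(64, 128, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(128 * MAX_LEN, N_IONS),  # non-linear regression head
        )

    def forward(self, x):       # x: (batch, N_FEATS, MAX_LEN)
        return self.net(x)

# Training would minimize e.g. MSE against library spectra; the paper
# evaluates predictions by cosine similarity and Pearson correlation.
model = MS2CNNSketch()
pred = model(torch.randn(8, N_FEATS, MAX_LEN))   # -> (8, N_IONS)
```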


2014 ◽  
Vol 2014 ◽  
pp. 1-9 ◽  
Author(s):  
Syed Saad Azhar Ali ◽  
Muhammad Moinuddin ◽  
Kamran Raza ◽  
Syed Hasan Adil

Radial basis function neural networks (RBFNNs) are used in a variety of applications such as pattern recognition, nonlinear identification, control, and time series prediction. In this paper, the learning algorithm of radial basis function neural networks is analyzed in a feedback structure. The robustness of the learning algorithm is discussed in the presence of uncertainties that might be due to noisy perturbations at the input or to modeling mismatch. An intelligent adaptation rule is developed for the learning rate of the RBFNN that yields faster convergence via an estimate of the error energy while guaranteeing l2 stability through an upper bound obtained from the small gain theorem. Simulation results are presented to support our theoretical development.
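
The following NumPy sketch shows the general shape of such a scheme: an RBF network whose output-weight update uses an error-energy-normalized (NLMS-style) learning rate, a standard stability-motivated choice. The paper derives its specific adaptation rule via the small gain theorem; the normalized step below is a stand-in under assumed centers and widths, not the authors' exact rule.

```python
import numpy as np

rng = np.random.default_rng(1)
centers = rng.uniform(-1, 1, size=10)   # assumed RBF centers
width = 0.5                             # assumed common width
w = np.zeros(10)                        # output weights

def phi(x):
    """Gaussian RBF activations for a scalar input."""
    return np.exp(-((x - centers) ** 2) / (2 * width**2))

def train_step(x, d, mu_bar=0.5, eps=1e-6):
    """One update; the step size shrinks with the regressor energy
    ||phi||^2, which keeps the weight-error feedback loop l2-stable."""
    global w
    p = phi(x)
    e = d - w @ p
    w += (mu_bar / (eps + p @ p)) * e * p   # adaptive learning rate
    return e

# Identify a simple nonlinearity from noisy samples.
for _ in range(2000):
    x = rng.uniform(-1, 1)
    d = np.sin(np.pi * x) + 0.01 * rng.standard_normal()
    train_step(x, d)
```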


2013 ◽  
Vol 411-414 ◽  
pp. 1660-1664
Author(s):  
Yan Jun Zhao ◽  
Li Liu

This paper introduces fuzzy neural network technology into the adaptive filter and investigates its structure and algorithms further. First, fuzzy rules are determined and the network structure is built by dividing the input space into fuzzy subspaces. Second, membership functions are chosen, the layers are defined, and the network is trained by an adaptive learning algorithm. Third, the training error is minimized through repeated tuning. Finally, the link weights and the center values and widths of the network membership functions are adjusted using expert knowledge. On this basis, the optimal performance of an adaptive Wiener filter is realized with fuzzy neural networks.
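
A minimal NumPy sketch of this kind of fuzzy-neural adaptive filter follows: Gaussian membership functions partition the input space into fuzzy subspaces (one rule each), the link weights are trained by a gradient-descent adaptive rule, and the membership centers and widths are held fixed here as if pre-tuned by expert knowledge. The rule count, widths, and one-dimensional setup are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
R = 7                              # assumed number of fuzzy rules
c = np.linspace(-1, 1, R)          # membership centers (expert-tuned)
s = np.full(R, 0.4)                # membership widths (expert-tuned)
w = np.zeros(R)                    # link weights, trained adaptively

def infer(x):
    mu = np.exp(-((x - c) ** 2) / (2 * s**2))   # rule firing strengths
    mu_n = mu / mu.sum()                        # normalized membership layer
    return w @ mu_n, mu_n

def train_step(x, d, lr=0.2):
    """Gradient step on the link weights; dy/dw_i = normalized mu_i."""
    global w
    y, mu_n = infer(x)
    e = d - y
    w += lr * e * mu_n
    return e

# Wiener-style use: learn to estimate the clean signal from a noisy input.
for _ in range(3000):
    x = rng.uniform(-1, 1)
    noisy = x + 0.1 * rng.standard_normal()
    train_step(noisy, x)
```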


2021 ◽  
Author(s):  
Justin Sirignano ◽  
Konstantinos Spiliopoulos

We prove that a single-layer neural network trained with the Q-learning algorithm converges in distribution to a random ordinary differential equation as the size of the model and the number of training steps become large. Analysis of the limit differential equation shows that it has a unique stationary solution that is the solution of the Bellman equation, thus giving the optimal control for the problem. In addition, we study the convergence of the limit differential equation to the stationary solution. As a by-product of our analysis, we obtain the limiting behavior of single-layer neural networks when trained on independent and identically distributed data with stochastic gradient descent under the widely used Xavier initialization.
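To fix ideas, here is a NumPy sketch of the kind of scheme the analysis covers: a single-hidden-layer network for Q(s, a), Xavier-initialized and updated by semi-gradient Q-learning. The environment, network width, and step sizes are illustrative assumptions; the paper studies the limit of such updates as the width and number of training steps grow.

```python
import numpy as np

rng = np.random.default_rng(3)
D, H, A = 4, 256, 2                          # state dim, hidden width, actions
# Xavier initialization: weight variance scales as 1 / fan_in.
W = rng.standard_normal((H, D)) / np.sqrt(D)
C = rng.standard_normal((A, H)) / np.sqrt(H)

def q_values(s):
    h = np.tanh(W @ s)                       # single hidden layer
    return C @ h, h

def q_learning_step(s, a, r, s_next, gamma=0.99, lr=1e-2):
    """Semi-gradient Q-learning: move Q(s, a) toward r + gamma * max_a' Q(s', a')."""
    global W, C
    q, h = q_values(s)
    target = r + gamma * np.max(q_values(s_next)[0])
    delta = target - q[a]                    # TD error (target held fixed)
    grad_C_a = delta * h
    grad_W = delta * np.outer(C[a] * (1 - h**2), s)
    C[a] += lr * grad_C_a                    # SGD on the squared TD error
    W += lr * grad_W
    return delta

# One illustrative update on a synthetic transition:
s0, s1 = rng.standard_normal(D), rng.standard_normal(D)
q_learning_step(s0, a=0, r=1.0, s_next=s1)
```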

