Statistical analysis of the single-layer backpropagation algorithm for noisy training data

Author(s):  
N.J. Bershad ◽  
N. Cubaud ◽  
J.J. Shynk


2018 ◽
Vol 215 ◽  
pp. 01011
Author(s):  
Sitti Amalia

This research proposed the design and implementation of a voice pattern recognition system for spoken numbers with offline pronunciation. An artificial neural network trained with the backpropagation algorithm was used in the simulation tests. The tests were run on 100 voice files obtained from 10 speakers, each pronouncing 10 different numbers, the digits 0 to 9. The trials varied artificial neural network parameters such as the tolerance value and the number of neurons. The best result was obtained when the tolerance value was varied and the number of neurons was fixed: with the optimal architecture and network parameters, the recognition rates were 82.2% for the training data and 53.3% for new data. When the tolerance value was fixed and the number of neurons was varied, the rates were 82.2% for the training data and 54.4% for new data.
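As a hedged illustration (not the paper's actual code), the sketch below trains a one-hidden-layer network by backpropagation with the two parameters the study varies: a tolerance on the training error and the number of hidden neurons. The feature vectors, layer sizes, and learning rate are all illustrative assumptions standing in for the real voice features.

```python
# Minimal sketch of the experimental setup: a one-hidden-layer network
# trained by backpropagation until a tolerance on the training error is
# met or a maximum number of epochs is reached. Inputs are random
# stand-ins for the voice features; all sizes are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_mlp(X, Y, n_hidden=20, tolerance=0.01, lr=0.5, max_epochs=5000):
    """Train a one-hidden-layer MLP with plain gradient-descent backprop."""
    n_in, n_out = X.shape[1], Y.shape[1]
    W1 = rng.normal(scale=0.5, size=(n_in, n_hidden))
    W2 = rng.normal(scale=0.5, size=(n_hidden, n_out))
    mse = np.inf
    for _ in range(max_epochs):
        H = sigmoid(X @ W1)              # hidden-layer activations
        O = sigmoid(H @ W2)              # output-layer activations
        err = Y - O
        mse = float(np.mean(err ** 2))
        if mse < tolerance:              # stop once the tolerance is met
            break
        dO = err * O * (1 - O)           # backpropagate through the output
        dH = (dO @ W2.T) * H * (1 - H)   # ...and through the hidden layer
        W2 += lr * H.T @ dO / len(X)
        W1 += lr * X.T @ dH / len(X)
    return W1, W2, mse

# 100 synthetic "voice feature" vectors: 10 speakers x digits 0-9.
X = rng.random((100, 64))
digits = np.tile(np.arange(10), 10)
Y = np.eye(10)[digits]                   # one-hot targets for the 10 digits
W1, W2, mse = train_mlp(X, Y)
pred = np.argmax(sigmoid(sigmoid(X @ W1) @ W2), axis=1)
print(f"final MSE {mse:.4f}, training accuracy {np.mean(pred == digits):.1%}")
```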


Author(s):  
Raj Dabre ◽  
Atsushi Fujita

In encoder-decoder based sequence-to-sequence modeling, the most common practice is to stack a number of recurrent, convolutional, or feed-forward layers in the encoder and decoder. While each added layer improves sequence generation quality, it also significantly increases the number of parameters. In this paper, we propose sharing parameters across all layers, leading to a recurrently stacked sequence-to-sequence model. We report an extensive case study on neural machine translation (NMT) using the proposed method, experimenting with a variety of datasets. We empirically show that the translation quality of a model that recurrently stacks a single layer six times, despite having significantly fewer parameters, approaches that of a model that stacks six distinct layers. We also show how our method benefits from a prevalent technique for improving NMT: extending the training data with pseudo-parallel corpora generated by back-translation. We then analyze the effects of recurrently stacked layers by visualizing the attention of models with and without recurrent stacking. Finally, we explore the limits of parameter sharing by also sharing parameters between the encoder and decoder, in addition to the recurrent stacking of layers.
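A minimal PyTorch sketch of the central idea, under illustrative assumptions: one encoder layer's parameters are reused at every depth, in contrast to a conventional stack of six distinct layers. The class name RecurrentlyStackedEncoder and all dimensions are hypothetical choices for illustration, not the authors' implementation.

```python
# Recurrent stacking: apply ONE layer's parameters six times, instead of
# stacking six independently parameterized layers. Sizes are illustrative.
import torch
import torch.nn as nn

class RecurrentlyStackedEncoder(nn.Module):
    def __init__(self, d_model=512, nhead=8, num_passes=6):
        super().__init__()
        # A single layer's worth of parameters, reused for every pass.
        self.layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.num_passes = num_passes

    def forward(self, x):
        for _ in range(self.num_passes):   # same weights at every depth
            x = self.layer(x)
        return x

shared = RecurrentlyStackedEncoder()
# nn.TransformerEncoder deep-copies the layer, so this has 6x the parameters.
vanilla = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(512, 8, batch_first=True), num_layers=6)

count = lambda m: sum(p.numel() for p in m.parameters())
print(f"shared: {count(shared):,} params; 6 distinct layers: {count(vanilla):,}")

x = torch.randn(2, 10, 512)                # (batch, sequence, d_model)
print(shared(x).shape)                     # torch.Size([2, 10, 512])
```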


Author(s):  
George Leal Jamil ◽  
Alexis Rocha da Silva

Users' personal, highly sensitive data, such as photos and voice recordings, are kept indefinitely by the companies that collect them. Users can neither delete these data nor restrict the purposes for which they are used. Learning how to do machine learning in a way that protects privacy could make a huge difference in addressing many social problems, such as curing disease. Deep neural networks are susceptible to various inference attacks because they memorize information about their training data. In this chapter, the authors introduce differential privacy, which ensures that various kinds of statistical analysis do not compromise privacy, and federated learning, which trains a machine learning model on data to which we do not have direct access.
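The following numpy sketch illustrates, under simplifying assumptions, the two ideas the chapter introduces: the Laplace mechanism, a basic form of differential privacy that adds calibrated noise to a count, and a FedAvg-style server step for federated learning in which only model weights, never raw data, leave the clients. All data and weight vectors here are toy values.

```python
# Two toy illustrations (not the chapter's code): a differentially
# private count via the Laplace mechanism, and a federated-averaging
# server step that aggregates client models without seeing their data.
import numpy as np

rng = np.random.default_rng(0)

def laplace_count(data, predicate, epsilon=0.5):
    """DP count: a counting query has sensitivity 1, so noise ~ Lap(1/eps)."""
    true_count = sum(predicate(x) for x in data)
    noise = rng.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

ages = rng.integers(18, 90, size=1000)
print("DP count of ages > 65:", round(laplace_count(ages, lambda a: a > 65)))

def federated_average(client_weights, client_sizes):
    """Server step: weight each client's model by its local dataset size."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Three clients hold private data; only their (toy) weight vectors travel.
clients = [rng.normal(size=4) for _ in range(3)]
sizes = [100, 300, 600]
print("aggregated model:", federated_average(clients, sizes))
```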


1994 ◽  
Vol 05 (03) ◽  
pp. 159-163
Author(s):  
R. LENDE ◽  
L.P. CSERNAI ◽  
D. KAMP

A backpropagation algorithm is used to train a neural net to distinguish between two groups of biological species, prokaryotes and eukaryotes, based on the frequencies of all 16 doublets in their DNA sequences. An improvement of about 15% is obtained compared to a statistical analysis based on a single doublet. This is done by first presenting sequences of species with known classification to the network (the training phase), and then showing it species it has never seen before and examining its response. A brief discussion of the training speed is given.
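A small sketch of the feature extraction implied above: computing the 16 overlapping-doublet (dinucleotide) frequencies of a DNA sequence, which would form the network's 16-dimensional input vector. The example sequence is made up; classification training would then proceed with any backpropagation network.

```python
# Count the 16 dinucleotide ("doublet") frequencies of a DNA sequence.
# These 16 values are the input features described in the abstract.
from itertools import product

DOUBLETS = ["".join(p) for p in product("ACGT", repeat=2)]  # AA, AC, ..., TT

def doublet_frequencies(seq):
    """Return the 16 overlapping-doublet frequencies of a DNA sequence."""
    counts = {d: 0 for d in DOUBLETS}
    for i in range(len(seq) - 1):
        pair = seq[i:i + 2]
        if pair in counts:
            counts[pair] += 1
    total = max(sum(counts.values()), 1)   # avoid division by zero
    return [counts[d] / total for d in DOUBLETS]

features = doublet_frequencies("ATGCGCGATATTAGCGCGTATA")  # made-up sequence
for d, f in zip(DOUBLETS, features):
    if f:
        print(f"{d}: {f:.3f}")
```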

