AEM-DEDUPE: A Novel Implementation of Active Supervisory Feedforward Networks for Detection of Data Deduplication

2021
Vol 12 (4)
pp. 1102-1111
Author(s):
N. Lakshmi Narayana
B. Tirapathi Reddy

2013
Vol 33 (9)
pp. 2493-2496
Author(s):
Xueqiong LIU
Gang WU
Houping DENG

2014
Vol 143
pp. 182-196
Author(s):
Sartaj Singh Sodhi
Pravin Chandra

Author(s):  
B. Tirapathi Reddy ◽  
Maddireddy Vaishnavi ◽  
Makireddy Lalitha ◽  
Papineni Poojitha ◽  
Vakalapudi Bhavya Sri Kanthi

2019
Vol 116 (16)
pp. 7723-7731
Author(s):
Dmitry Krotov
John J. Hopfield

It is widely believed that end-to-end training with the backpropagation algorithm is essential for learning good feature detectors in early layers of artificial neural networks, so that these detectors are useful for the task performed by the higher layers of that neural network. At the same time, the traditional form of backpropagation is biologically implausible. In the present paper we propose an unusual learning rule, which has a degree of biological plausibility and which is motivated by Hebb’s idea that change of the synapse strength should be local—i.e., should depend only on the activities of the pre- and postsynaptic neurons. We design a learning algorithm that utilizes global inhibition in the hidden layer and is capable of learning early feature detectors in a completely unsupervised way. These learned lower-layer feature detectors can be used to train higher-layer weights in a usual supervised way so that the performance of the full network is comparable to the performance of standard feedforward networks trained end-to-end with a backpropagation algorithm on simple tasks.
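As a concrete illustration of local, Hebbian-style learning under competition, the sketch below trains hidden-layer features without labels and without backpropagation. It is a minimal NumPy sketch, not the authors' exact rule: the paper's global inhibition is approximated here by winner-take-all competition with an Oja-style decay, and the input size, number of hidden units, learning rate, and stand-in data are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: 784-dimensional inputs (e.g., flattened 28x28 images)
# feeding 100 competing hidden units.
n_in, n_hidden = 784, 100
W = rng.normal(scale=0.1, size=(n_hidden, n_in))

def local_hebbian_step(W, x, lr=0.01):
    """One unsupervised update. Global inhibition is approximated by
    winner-take-all competition: only the most strongly driven hidden unit
    learns, with an Oja-style decay term keeping its weights bounded. The
    rule is local: it uses only the pre-synaptic activity x and the
    post-synaptic activity y of the unit being updated."""
    currents = W @ x                 # input current to every hidden unit
    winner = np.argmax(currents)     # unit that wins the competition
    y = currents[winner]
    W[winner] += lr * y * (x - y * W[winner])
    return W

# Unsupervised phase on unlabeled inputs (random stand-in data here).
for x in rng.normal(size=(5000, n_in)):
    W = local_hebbian_step(W, x / (np.linalg.norm(x) + 1e-8))

# The frozen feature detectors (rows of W) can then feed a conventional
# supervised readout layer, as described in the abstract.
```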


2002
Vol 14 (7)
pp. 1755-1769
Author(s):
Robert M. French
Nick Chater

In error-driven distributed feedforward networks, new information typically interferes, sometimes severely, with previously learned information. We show how noise can be used to approximate the error surface of previously learned information. By combining this approximated error surface with the error surface associated with the new information to be learned, the network's retention of previously learned items can be improved and catastrophic interference significantly reduced. Further, we show that the noise-generated error surface is produced using only first-derivative information and without recourse to any explicit error information.
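One simple way to realize this idea in code is the pseudopattern scheme sketched below: pass random noise inputs through the already-trained network, keep the resulting input-output pairs as a surrogate for the old knowledge, and train on them interleaved with the new items. The tiny two-layer network, the shapes, and the training loop are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(1)

def net_forward(W1, W2, X):
    """Tiny two-layer tanh network; purely illustrative."""
    return np.tanh(X @ W1) @ W2

def make_pseudopatterns(W1, W2, n, n_in):
    """Approximate the function the trained network already computes:
    feed random noise inputs through the frozen network and keep the
    resulting input-output pairs as surrogate training data. No stored
    training data or explicit error information is required."""
    X_noise = rng.uniform(-1.0, 1.0, size=(n, n_in))
    return X_noise, net_forward(W1, W2, X_noise)

def train_step(W1, W2, X, Y, lr=0.01):
    """One gradient step on mean-squared error (manual backprop)."""
    H = np.tanh(X @ W1)
    err = H @ W2 - Y                              # output error
    dW2 = H.T @ err / len(X)
    dW1 = X.T @ ((err @ W2.T) * (1 - H**2)) / len(X)
    return W1 - lr * dW1, W2 - lr * dW2

# Assume W1, W2 were already trained on an old task; here they are random.
n_in, n_hid, n_out = 10, 20, 3
W1 = rng.normal(scale=0.3, size=(n_in, n_hid))
W2 = rng.normal(scale=0.3, size=(n_hid, n_out))

# Mix pseudopatterns with the new task's data so the combined error
# surface retains the old mapping while the new items are learned.
X_new = rng.uniform(-1, 1, size=(50, n_in))
Y_new = rng.uniform(-1, 1, size=(50, n_out))
X_ps, Y_ps = make_pseudopatterns(W1, W2, n=50, n_in=n_in)

X_mix = np.vstack([X_new, X_ps])
Y_mix = np.vstack([Y_new, Y_ps])
for _ in range(200):
    W1, W2 = train_step(W1, W2, X_mix, Y_mix)
```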


1991
Vol 3 (2)
pp. 246-257
Author(s):
J. Park
I. W. Sandberg

There have been several recent studies concerning feedforward networks and the problem of approximating arbitrary functionals of a finite number of real variables. Some of these studies deal with cases in which the hidden-layer nonlinearity is not a sigmoid. This was motivated by successful applications of feedforward networks with nonsigmoidal hidden-layer units. This paper reports on a related study of radial-basis-function (RBF) networks, and it is proved that RBF networks having one hidden layer are capable of universal approximation. Here the emphasis is on the case of typical RBF networks, and the results show that a certain class of RBF networks with the same smoothing factor in each kernel node is broad enough for universal approximation.
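The class of networks the result covers, one hidden layer of Gaussian kernel units sharing a single smoothing factor and feeding a linear output layer, is easy to instantiate. The sketch below fits such a network to a one-dimensional target by least squares; the centers, smoothing factor, and fitting procedure are illustrative choices (the theorem guarantees that good approximations exist within this class, it does not prescribe this recipe).

```python
import numpy as np

def rbf_features(X, centers, sigma):
    """Gaussian kernel units with the same smoothing factor sigma in every
    node, matching the class of RBF networks covered by the result."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2 * sigma**2))

# Approximate a target function with a one-hidden-layer RBF network.
X = np.linspace(-3, 3, 200).reshape(-1, 1)
y = np.sin(2 * X[:, 0]) + 0.5 * X[:, 0]          # target to approximate

centers = np.linspace(-3, 3, 25).reshape(-1, 1)  # fixed kernel centers
sigma = 0.4                                      # shared smoothing factor
Phi = rbf_features(X, centers, sigma)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)      # linear output weights

max_err = np.abs(Phi @ w - y).max()
print(f"max approximation error: {max_err:.4f}")
```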

