Depth of General Scenes from Defocused Images Using Multilayer Feedforward Networks
Author(s): Veysel Aslantas, Mehmet Tunckanat

2014, Vol. 143, pp. 182-196
Author(s): Sartaj Singh Sodhi, Pravin Chandra

2019, Vol. 116 (16), pp. 7723-7731
Author(s): Dmitry Krotov, John J. Hopfield

It is widely believed that end-to-end training with the backpropagation algorithm is essential for learning good feature detectors in early layers of artificial neural networks, so that these detectors are useful for the task performed by the higher layers of that neural network. At the same time, the traditional form of backpropagation is biologically implausible. In the present paper we propose an unusual learning rule, which has a degree of biological plausibility and which is motivated by Hebb’s idea that change of the synapse strength should be local—i.e., should depend only on the activities of the pre- and postsynaptic neurons. We design a learning algorithm that utilizes global inhibition in the hidden layer and is capable of learning early feature detectors in a completely unsupervised way. These learned lower-layer feature detectors can be used to train higher-layer weights in a usual supervised way so that the performance of the full network is comparable to the performance of standard feedforward networks trained end-to-end with a backpropagation algorithm on simple tasks.
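
As a rough, hedged illustration of the kind of rule described above (a purely local Hebbian update combined with global inhibition among hidden units), the following NumPy sketch updates only the most active hidden unit toward the input and pushes a runner-up away. The rule, the parameter values, and the runner-up penalty are assumptions for illustration, not the paper's exact algorithm.

```python
# A minimal sketch of a local, competition-based Hebbian update with a global
# winner-take-most interaction (illustrative; not the paper's exact rule).
import numpy as np

rng = np.random.default_rng(0)

n_inputs, n_hidden = 784, 100
lr, delta = 0.02, 0.4                 # learning rate and anti-Hebbian strength (assumed)
W = rng.normal(scale=0.1, size=(n_hidden, n_inputs))

def local_update(W, x):
    """One unsupervised step: the most active hidden unit moves toward the input
    (Hebbian), while the runner-up is pushed away (a crude stand-in for global
    inhibition). Each update uses only pre- and postsynaptic quantities."""
    h = W @ x                                   # hidden-unit pre-activations
    ranked = np.argsort(h)                      # ascending order of activation
    winner, runner_up = ranked[-1], ranked[-2]
    W[winner]    += lr * (x - h[winner] * W[winner])                # Oja-style Hebbian step
    W[runner_up] -= lr * delta * (x - h[runner_up] * W[runner_up])  # anti-Hebbian step
    return W

# Toy data: random input vectors standing in for training images.
for _ in range(1000):
    W = local_update(W, rng.random(n_inputs))
```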


2002, Vol. 14 (7), pp. 1755-1769
Author(s): Robert M. French, Nick Chater

In error-driven distributed feedforward networks, new information typically interferes, sometimes severely, with previously learned information. We show how noise can be used to approximate the error surface of previously learned information. By combining this approximated error surface with the error surface associated with the new information to be learned, the network's retention of previously learned items can be improved and catastrophic interference significantly reduced. Further, we show that the noise-generated error surface is produced using only first-derivative information and without recourse to any explicit error information.
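
A hedged sketch of the general idea follows: random inputs are passed through the already-trained network, the resulting input/output pairs stand in for the previously learned error surface, and the new items are then learned jointly with them. The linear layer, the loss, and all sizes below are illustrative assumptions, not the networks or procedure used in the paper.

```python
# Sketch: approximate the error surface of old items by recording the trained
# network's responses to random (noise) inputs, then learn new items jointly
# with these pseudo-items. A linear layer and mean-squared error are used only
# for brevity.
import numpy as np

rng = np.random.default_rng(1)

def train(W, X, Y, lr=0.05, steps=500):
    """Plain gradient descent on mean-squared error for a linear map y = W x."""
    for _ in range(steps):
        err = X @ W.T - Y                        # (N, d_out) residuals
        W = W - lr * (err.T @ X) / len(X)        # gradient of MSE w.r.t. W
    return W

d_in, d_out = 10, 3
W = np.zeros((d_out, d_in))

# 1) Learn the "old" items.
X_old, Y_old = rng.normal(size=(50, d_in)), rng.normal(size=(50, d_out))
W = train(W, X_old, Y_old)

# 2) Approximate the old error surface with noise-generated pseudo-items.
X_noise = rng.normal(size=(200, d_in))
Y_noise = X_noise @ W.T                          # the network's own responses to noise

# 3) Learn the new items together with the pseudo-items.
X_new, Y_new = rng.normal(size=(20, d_in)), rng.normal(size=(20, d_out))
W = train(W, np.vstack([X_new, X_noise]), np.vstack([Y_new, Y_noise]))

print("retention error on old items:", np.mean((X_old @ W.T - Y_old) ** 2))
```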


1991, Vol. 3 (2), pp. 246-257
Author(s): J. Park, I. W. Sandberg

There have been several recent studies concerning feedforward networks and the problem of approximating arbitrary functionals of a finite number of real variables. Some of these studies deal with cases in which the hidden-layer nonlinearity is not a sigmoid. This was motivated by successful applications of feedforward networks with nonsigmoidal hidden-layer units. This paper reports on a related study of radial-basis-function (RBF) networks, and it is proved that RBF networks having one hidden layer are capable of universal approximation. Here the emphasis is on the case of typical RBF networks, and the results show that a certain class of RBF networks with the same smoothing factor in each kernel node is broad enough for universal approximation.
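
For concreteness, the sketch below constructs the kind of network covered by the result: a single hidden layer of Gaussian kernel nodes sharing one smoothing factor, combined linearly at the output, fitted here by least squares to a 1-D target. The centers, the value of the smoothing factor, and the target function are illustrative choices.

```python
# Sketch of the network form discussed above: one hidden layer of Gaussian
# kernel nodes, all sharing the same smoothing factor, combined linearly.
import numpy as np

def rbf_features(x, centers, sigma):
    """Gaussian kernel activations exp(-(x - c)^2 / (2 sigma^2)) for each center c."""
    return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2.0 * sigma ** 2))

x = np.linspace(-3.0, 3.0, 200)
target = np.sin(2.0 * x)                        # any continuous function on a compact set

centers = np.linspace(-3.0, 3.0, 25)            # hidden-layer kernel nodes
sigma = 0.4                                     # one smoothing factor for every node

Phi = rbf_features(x, centers, sigma)           # (200, 25) design matrix
weights, *_ = np.linalg.lstsq(Phi, target, rcond=None)  # output-layer weights

print("max abs approximation error:", np.max(np.abs(Phi @ weights - target)))
```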


2005, Vol. 17 (10), pp. 2139-2175
Author(s): Naoki Masuda, Brent Doiron, André Longtin, Kazuyuki Aihara

Oscillatory and synchronized neural activities are commonly found in the brain, and evidence suggests that many of them are caused by global feedback. Their mechanisms and roles in information processing have been discussed often using purely feedforward networks or recurrent networks with constant inputs. On the other hand, real recurrent neural networks are abundant and continually receive information-rich inputs from the outside environment or other parts of the brain. We examine how feedforward networks of spiking neurons with delayed global feedback process information about temporally changing inputs. We show that the network behavior is more synchronous as well as more correlated with and phase-locked to the stimulus when the stimulus frequency is resonant with the inherent frequency of the neuron or that of the network oscillation generated by the feedback architecture. The two eigenmodes have distinct dynamical characteristics, which are supported by numerical simulations and by analytical arguments based on frequency response and bifurcation theory. This distinction is similar to the class I versus class II classification of single neurons according to the bifurcation from quiescence to periodic firing, and the two modes depend differently on system parameters. These two mechanisms may be associated with different types of information processing.
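
As a rough illustration of the architecture described (a population of spiking neurons driven by a time-varying stimulus and by delayed global feedback), here is a minimal leaky integrate-and-fire sketch. The single-population simplification, the form of the feedback, and all parameter values are assumptions made for illustration; they are not the models analyzed in the paper.

```python
# Minimal leaky integrate-and-fire population with a periodic stimulus and a
# delayed global feedback term (illustrative parameters only).
import numpy as np

rng = np.random.default_rng(2)

n, dt, T = 200, 0.1, 1000.0            # neurons, time step (ms), duration (ms)
tau, v_th, v_reset = 10.0, 1.0, 0.0    # membrane time constant (ms), threshold, reset
delay_steps = int(15.0 / dt)           # global feedback delay of 15 ms
g_fb, f_stim = 3.0, 40.0               # feedback gain, stimulus frequency (Hz)

steps = int(T / dt)
v = rng.random(n) * v_th               # initial membrane potentials
activity = np.zeros(steps)             # fraction of neurons spiking per step

for t in range(steps):
    stim = 0.5 + 0.5 * np.sin(2.0 * np.pi * f_stim * t * dt / 1000.0)
    fb = g_fb * activity[t - delay_steps] if t >= delay_steps else 0.0
    noise = 0.2 * rng.standard_normal(n)
    v += (dt / tau) * (-v + 1.2 * stim + fb + noise)
    spiked = v >= v_th
    v[spiked] = v_reset
    activity[t] = spiked.mean()

print("mean fraction of neurons spiking per step:", activity.mean())
```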


2019
Author(s): Hedyeh Rezaei, Ad Aertsen, Arvind Kumar, Alireza Valizadeh

Transient oscillations in the network activity upon sensory stimulation have been reported in different sensory areas. These evoked oscillations are the generic response of networks of excitatory and inhibitory neurons (EI-networks) to a transient external input. Recently, it has been shown that this resonance property of EI-networks can be exploited for communication in modular neuronal networks by enabling the transmission of sequences of synchronous spike volleys (‘pulse packets’), despite the sparse and weak connectivity between the modules. The condition for successful transmission is that the pulse packet (PP) intervals match the period of the modules’ resonance frequency. Hence, the mechanism was termed communication through resonance (CTR). This mechanism has three severe constraints, though. First, it needs periodic trains of PPs, whereas single PPs fail to propagate. Second, the inter-PP interval needs to match the network resonance. Third, transmission is very slow, because in each module the network resonance needs to build up over multiple oscillation cycles. Here, we show that, by adding appropriate feedback connections to the network, the CTR mechanism can be improved and the aforementioned constraints relaxed. Specifically, we show that adding feedback connections between two upstream modules, called the resonance pair, in an otherwise feedforward modular network can support successful propagation of a single PP throughout the entire network. The key condition for successful transmission is that the sum of the forward and backward delays in the resonance pair matches the period of the network modules’ resonance frequency. The transmission is much faster, by more than a factor of two, than in the original CTR mechanism. Moreover, it distinctly lowers the threshold for successful communication by synchronous spiking in modular networks with weak coupling between modules. Thus, our results suggest a new functional role of bidirectional connectivity for communication in cortical networks.

Author summary: The cortex is organized as a modular system, with the modules (cortical areas) communicating via weak long-range connections. It has been suggested that the intrinsic resonance properties of population activities in these areas might contribute to enabling successful communication. A module’s intrinsic resonance appears in the damped oscillatory response to an incoming spike volley, enabling successful communication during the peaks of the oscillation. Such communication can be exploited in feedforward networks, provided the participating networks have similar resonance frequencies. This, however, is not necessarily true for cortical networks. Moreover, the communication is slow, as it takes several oscillation cycles to build up the response in the downstream network. Also, only periodic trains of spike volleys (and not single volleys) with matching intervals can propagate. Here, we present a novel mechanism that alleviates these shortcomings and enables propagation of synchronous spiking across weakly connected networks with not necessarily identical resonance frequencies. In this framework, an individual spike volley can propagate by local amplification through reverberation in a loop between two successive networks, connected by feedforward and feedback connections: the resonance pair. This overcomes the need for activity build-up in downstream networks, causing the volley to propagate distinctly faster and more reliably.
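
The one quantitative condition stated above, that the summed forward and backward delay of the resonance pair should match the period of the modules’ resonance frequency, can be checked directly; the numbers below are illustrative assumptions, not values from the study.

```python
# Check the stated resonance-pair condition: forward + backward delay should
# equal the period corresponding to the modules' resonance frequency.
resonance_freq_hz = 50.0                            # assumed module resonance (illustrative)
resonance_period_ms = 1000.0 / resonance_freq_hz    # 20 ms

def matches_resonance(forward_delay_ms, backward_delay_ms, tol_ms=1.0):
    """True if the loop delay of the resonance pair matches the module period."""
    loop_delay = forward_delay_ms + backward_delay_ms
    return abs(loop_delay - resonance_period_ms) <= tol_ms

print(matches_resonance(10.0, 10.0))   # 20 ms loop delay -> True
print(matches_resonance(10.0, 4.0))    # 14 ms loop delay -> False
```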

