Feedforward Networks
Recently Published Documents

TOTAL DOCUMENTS: 269 (FIVE YEARS: 29)
H-INDEX: 36 (FIVE YEARS: 2)

Sensors, 2022, Vol. 22(1), pp. 329
Author(s): Congming Tan, Shuli Cheng, Liejun Wang

Recently, many deep-learning-based super-resolution (SR) feedforward networks have been proposed, and the images they reconstruct achieve convincing results. However, because of their heavy computation and large parameter counts, SR technology is greatly limited on devices with restricted computing power. To trade off network performance against network size, in this paper we propose SCFFN, an efficient image super-resolution network based on Self-Calibrated Feature Fuse, built by constructing the self-calibrated feature fuse block (SCFFB). Specifically, to recover as much of the image's high-frequency detail as possible, the SCFFB performs self-transformation and self-fusion of features. In addition, to accelerate network training while reducing the network's computational complexity, we design the reconstruction part of the network around an attention mechanism, called U-SCA. Compared with the conventional transposed convolution, it greatly reduces the computational burden of the network without degrading the reconstruction quality. We have conducted full quantitative and qualitative experiments on public datasets, and the results show that the network achieves performance comparable to other networks while requiring fewer parameters and less computation.
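The abstract includes no code, but the general idea it describes, attention-modulated sub-pixel upsampling in place of a transposed convolution, can be sketched briefly. The Keras snippet below is a minimal illustration only; the layer sizes and the `attention_upsample` helper are assumptions, not the authors' U-SCA design.

```python
# Minimal sketch (assumed sizes, not the authors' U-SCA): channel attention
# followed by sub-pixel (pixel-shuffle) upsampling instead of a transposed conv.
import tensorflow as tf
from tensorflow.keras import layers

def attention_upsample(x, scale=2, channels=64):
    # Channel attention: squeeze features to per-channel weights, then rescale.
    w = layers.GlobalAveragePooling2D()(x)
    w = layers.Dense(channels // 4, activation="relu")(w)
    w = layers.Dense(channels, activation="sigmoid")(w)
    x = layers.Multiply()([x, layers.Reshape((1, 1, channels))(w)])
    # Sub-pixel upsampling: all convolutions stay at the low resolution.
    x = layers.Conv2D(channels * scale ** 2, 3, padding="same")(x)
    return layers.Lambda(lambda t: tf.nn.depth_to_space(t, scale))(x)

inp = layers.Input((None, None, 64))
model = tf.keras.Model(inp, attention_upsample(inp))
```

Keeping the convolutions at the low resolution is where the computational savings over a transposed convolution typically come from.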


Author(s): Osval Antonio Montesinos López, Abelardo Montesinos López, Jose Crossa

Abstract: We provide the fundamentals of convolutional neural networks (CNNs) and include several examples using the Keras library. We give a formal motivation for using CNNs that clearly shows the advantages of this topology over feedforward networks for processing images. Several practical examples with plant breeding data are provided using CNNs under two scenarios: (a) one-dimensional input data and (b) two-dimensional input data. The examples also illustrate how to tune the hyperparameters to increase the probability of a successful application. Finally, we comment on the advantages and disadvantages of deep neural networks in general compared with many other statistical machine learning methodologies.
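As a rough companion to the two scenarios mentioned above, here is a minimal Keras sketch of (a) a 1D-input CNN and (b) a 2D-input CNN. The input shapes and layer sizes are assumptions for illustration, not the chapter's plant-breeding examples.

```python
# Toy Keras models for the two input scenarios (shapes are assumed).
from tensorflow import keras
from tensorflow.keras import layers

# (a) one-dimensional input, e.g. a sequence of 1000 markers, 1 channel
cnn_1d = keras.Sequential([
    layers.Input((1000, 1)),
    layers.Conv1D(32, 7, activation="relu"),
    layers.MaxPooling1D(4),
    layers.Flatten(),
    layers.Dense(1),          # regression head for a continuous trait (assumed)
])

# (b) two-dimensional input, e.g. 64x64 single-channel images
cnn_2d = keras.Sequential([
    layers.Input((64, 64, 1)),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(2),
    layers.Flatten(),
    layers.Dense(1),
])

cnn_1d.compile(optimizer="adam", loss="mse")
cnn_2d.compile(optimizer="adam", loss="mse")
```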


2021
Author(s): Laura Bella Naumann, Joram Keijser, Henning Sprekeler

Sensory systems reliably process incoming stimuli in spite of changes in context. Most recent models attribute this context invariance to the extraction of increasingly complex sensory features in hierarchical feedforward networks. Here, we study how context-invariant representations can be established by feedback rather than feedforward processing. We show that feedforward neural networks modulated by feedback can dynamically generate invariant sensory representations. The required feedback can be implemented as a slow and spatially diffuse gain modulation. The invariance is not present at the level of individual neurons but emerges only at the population level. Mechanistically, the feedback modulation dynamically reorients the manifold of neural activity and thereby maintains an invariant neural subspace in spite of contextual variations. Our results highlight the importance of population-level analyses for understanding the role of feedback in flexible sensory processing.
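The mechanism described, slow and diffuse gain modulation driven by feedback, can be illustrated with a toy NumPy example (an illustration of the principle, not the authors' model): a contextual input scaling perturbs a feedforward response, and a low-dimensional feedback signal slowly adjusts per-neuron gains until the population readout is restored.

```python
# Toy gain-modulation sketch (not the authors' model): feedback adapts
# per-neuron gains g until the population readout is context-invariant.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(50, 10)) / np.sqrt(10)    # feedforward weights
readout = rng.normal(size=50) / np.sqrt(50)    # population readout
x = rng.normal(size=10)                        # stimulus
target = readout @ np.tanh(W @ x)              # response in the reference context

context = 2.0                                  # contextual change: inputs scaled
g = np.ones(50)                                # feedback-controlled gains
h = np.tanh(W @ (context * x))                 # feedforward drive in new context
for _ in range(500):                           # slow gain adaptation via feedback
    err = target - readout @ (g * h)
    g += 0.1 * err * readout * h               # low-dimensional, diffuse update
print(f"restored: {readout @ (g * h):.4f}  target: {target:.4f}")
```

Note that in this toy the individual responses g * h generally differ from the reference responses; only the readout, a population-level quantity, becomes invariant, echoing the abstract's point.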


2021, Vol. 24(67), pp. 40-50
Author(s): Jean Phelipe de Oliveira Lima, Carlos Maurício Seródio Figueiredo

In modern smart cities, there is a push toward the highest possible level of service integration and automation. In the surveillance sector, one of the main challenges is to automate the analysis of videos in real time to identify critical situations. This paper presents intelligent models based on convolutional neural networks (using the MobileNet, InceptionV3, and VGG16 networks), LSTM networks, and feedforward networks for the task of classifying videos into the classes "Violence" and "Non-Violence", using the RLVS database. Different data representations were used, according to the temporal fusion technique applied. The best result achieved was an accuracy and F1-score of 0.91, higher than the results found in similar studies conducted on the same database.
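A minimal Keras sketch of the architecture family described, frame-level CNN features fused over time by an LSTM, is given below. The clip length, backbone settings, and layer sizes are assumptions, not the paper's exact configuration.

```python
# Sketch of a CNN+LSTM video classifier (assumed shapes and sizes):
# per-frame MobileNet features are fused over time by an LSTM.
from tensorflow import keras
from tensorflow.keras import layers

frames = layers.Input((16, 224, 224, 3))             # 16 frames per clip (assumed)
backbone = keras.applications.MobileNet(
    include_top=False, pooling="avg", weights=None)  # weights="imagenet" in practice
feats = layers.TimeDistributed(backbone)(frames)     # (16, 1024) features per clip
x = layers.LSTM(64)(feats)                           # temporal fusion
out = layers.Dense(1, activation="sigmoid")(x)       # P("Violence")
model = keras.Model(frames, out)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```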


Author(s): Tianshi Gao, Bin Deng, Jixuan Wang, Jiang Wang, Guosheng Yi

The regularity of the inter-spike intervals (ISIs) gives a critical window into how information is coded temporally in the cortex. Previous studies have mostly adopted pure feedforward networks (FFNs) to examine how network structure affects the propagation of spiking regularity, ignoring the role of local dynamics within each layer. In this paper, we construct an FFN with recurrent connections and investigate the propagation of spiking regularity. We argue that an FFN with recurrent connections serves as a basic circuit for explaining why regularity increases as spikes propagate from middle temporal visual areas to higher cortical areas. We find that the reduction of regularity is related to the decreased complexity of the shared activity co-fluctuations. We show in simulations that there is an appropriate excitation-inhibition ratio that maximizes the regularity of deeper layers. Furthermore, we demonstrate that collective temporal regularity in deeper layers exhibits resonance-like behavior with respect to both synaptic connection probability and synaptic weight. Our work provides a critical link between cortical circuit structure and realistic spiking regularity.
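For readers unfamiliar with the measure, spiking regularity is commonly quantified by the coefficient of variation (CV) of the ISIs, CV = std(ISI) / mean(ISI): about 1 for a Poisson process and near 0 for a clock-like train. A small NumPy helper follows (the standard measure, not necessarily the paper's exact metric).

```python
# CV of inter-spike intervals: the standard spiking-regularity measure.
import numpy as np

def isi_cv(spike_times):
    isi = np.diff(np.sort(spike_times))
    return isi.std() / isi.mean()

rng = np.random.default_rng(1)
poisson = np.cumsum(rng.exponential(0.01, 1000))   # irregular train, CV ~ 1
regular = np.arange(1000) * 0.01                   # clock-like train, CV ~ 0
print(isi_cv(poisson), isi_cv(regular))
```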


2021
Author(s): Brett W. Larsen, Shaul Druckmann

Abstract: Lateral and recurrent connections are ubiquitous in biological neural circuits. The strong computational abilities of feedforward networks have been extensively studied; on the other hand, while certain roles for lateral and recurrent connections in specific computations have been described, a more complete understanding of the role and advantages of recurrent computations that might explain their prevalence remains an important open challenge. Previous key studies by Minsky and later by Roelfsema argued that the sequential, parallel computations for which recurrent networks are well suited can be highly effective approaches to complex computational problems. Such "tag propagation" algorithms perform repeated, local propagation of information and were introduced in the context of detecting connectedness, a task that is challenging for feedforward networks. Here, we advance the understanding of the utility of lateral and recurrent computation by first performing a large-scale empirical study of neural architectures for the computation of connectedness to explore feedforward solutions more fully and establish robustly the importance of recurrent architectures. In addition, we highlight a tradeoff between computation time and performance and demonstrate hybrid feedforward/recurrent models that perform well even in the presence of varying computational time limitations. We then generalize tag propagation architectures to multiple, interacting propagating tags and demonstrate that these are efficient computational substrates for more general computations by introducing and solving an abstracted biologically inspired decision-making task. More generally, our work clarifies and expands the set of computational tasks that can be solved efficiently by recurrent computation, yielding hypotheses for structure in population activity that may be present in such tasks.

Author Summary: Lateral and recurrent connections are ubiquitous in biological neural circuits; intriguingly, this stands in contrast to the majority of current-day artificial neural network research, which primarily uses feedforward architectures except in the context of temporal sequences. This raises the possibility that part of the difference in computational capabilities between real neural circuits and artificial neural networks is accounted for by the role of recurrent connections, and as a result a more detailed understanding of the computational role played by such connections is of great importance. Making effective comparisons between architectures is a subtle challenge, however, and in this paper we leverage the computational capabilities of large-scale machine learning to robustly explore how differences in architectures affect a network's ability to learn a task. We first focus on the task of determining whether two pixels are connected in an image, which has an elegant and efficient recurrent solution: propagate a connected label or tag along paths. Inspired by this solution, we show that it can be generalized in many ways, including propagating multiple tags at once and changing the computation performed on the result of the propagation. To illustrate these generalizations, we introduce an abstracted decision-making task related to foraging, in which an animal must determine whether it can avoid predators in a random environment. Our results shed light on the set of computational tasks that can be solved efficiently by recurrent computation and how these solutions may appear in neural activity.
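The tag-propagation idea is simple enough to capture in a few lines. The sketch below (an illustration of the algorithmic principle, not the paper's networks) decides connectedness between two pixels by repeatedly propagating a tag to 4-connected foreground neighbours; each iteration is a local, parallel update of exactly the kind a recurrent layer can implement.

```python
# Tag propagation for connectedness: repeated local, parallel updates.
import numpy as np

def connected(img, src, dst):
    tag = np.zeros(img.shape, dtype=bool)
    tag[src] = bool(img[src])          # seed the tag at the source pixel
    fg = img.astype(bool)
    while True:
        grown = tag.copy()
        grown[1:, :] |= tag[:-1, :]    # propagate down
        grown[:-1, :] |= tag[1:, :]    # propagate up
        grown[:, 1:] |= tag[:, :-1]    # propagate right
        grown[:, :-1] |= tag[:, 1:]    # propagate left
        grown &= fg                    # tags only live on foreground pixels
        if (grown == tag).all():       # fixed point reached
            return bool(grown[dst])
        tag = grown

img = np.array([[1, 1, 0, 1],
                [0, 1, 0, 1],
                [0, 1, 1, 1]])
print(connected(img, (0, 0), (0, 3)))  # True: a foreground path exists
```

The number of iterations needed grows with the path length, which is the computation-time/performance tradeoff the abstract highlights for fixed-depth feedforward alternatives.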


2021
Author(s): Aran Nayebi, Javier Sagastuy-Brena, Daniel M. Bear, Kohitij Kar, Jonas Kubilius, ...

The ventral visual stream (VVS) is a hierarchically connected series of cortical areas known to underlie core object recognition behaviors, enabling humans and non-human primates to effortlessly recognize objects across a multitude of viewing conditions. While recent feedforward convolutional neural networks (CNNs) provide quantitatively accurate predictions of temporally-averaged neural responses throughout the ventral pathway, they lack two ubiquitous neuroanatomical features: local recurrence within cortical areas and long-range feedback from downstream to upstream areas. As a result, such models are unable to account for the temporally-varying dynamical patterns thought to arise from recurrent visual circuits, nor can they provide insight into the behavioral goals that these recurrent circuits might help support. In this work, we augment CNNs with local recurrence and long-range feedback, developing convolutional RNN (ConvRNN) network models that more faithfully mimic the gross neuroanatomy of the ventral pathway. Moreover, when the form of the recurrent circuit is chosen properly, ConvRNNs with comparatively few layers can achieve performance on a core recognition task comparable to that of much deeper feedforward networks. We then compared these models with temporally fine-grained neural and behavioral recordings from primates viewing thousands of images. We found that ConvRNNs matched these data better than alternative models, including the deepest feedforward networks, on two metrics: (1) neural dynamics in V4 and inferotemporal (IT) cortex at late timepoints after stimulus onset, and (2) the varying times at which object identity can be decoded from IT, including more challenging images that take longer to decode. Moreover, these results differentiate within the class of ConvRNNs, suggesting that there are strong functional constraints on the recurrent connectivity needed to match these phenomena. Finally, we find that the recurrent circuits most consistent with these data are those that attain high task performance with a small network size, measured by the number of units rather than by another metric such as the number of parameters. Taken together, our results evince the role of recurrence and feedback in enabling the ventral pathway to reliably perform core object recognition while subject to a strong constraint on total network size.
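Local recurrence of the kind described can be sketched with off-the-shelf Keras layers: present a static image for several timesteps and let a convolutional recurrent layer evolve its activity. This is only a cartoon; it omits the long-range feedback connections and the specific recurrent cell forms the paper compares.

```python
# Cartoon of local recurrence in a convolutional model (assumed shapes):
# a static image is repeated over time so a ConvLSTM2D layer's activity
# can evolve dynamically within the layer.
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

img = layers.Input((32, 32, 3))
# Present the same image for 8 "timesteps".
seq = layers.Lambda(lambda t: tf.repeat(tf.expand_dims(t, 1), 8, axis=1))(img)
x = layers.ConvLSTM2D(16, 3, padding="same")(seq)  # local recurrence in the layer
x = layers.GlobalAveragePooling2D()(x)
out = layers.Dense(10, activation="softmax")(x)
model = keras.Model(img, out)
```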


2021, Vol. 31(02), pp. 2150030
Author(s): Tyler Levasseur, Antonio Palacios

A feedforward network is a unidirectionally coupled chain of dynamical systems in which the first cell is coupled to itself and each successive cell is coupled to the next one. Feedforward networks have gained considerable interest because of their potential to enhance signal amplification and to manipulate the frequency of oscillations. Indeed, it has been shown that the growth rate of the bifurcation undergone by the final cell is much larger than the expected square-root growth rate associated with the standard Hopf bifurcation. In this paper, we present a new approach to studying this growth rate phenomenon. We employ a two-time-scale analysis and asymptotic approximations to detect behavior associated with the growth rate phenomenon that has not been previously observed. In particular, we show that the Hopf bifurcation is not the only bifurcation capable of exhibiting this large growth rate behavior. Using asymptotic methods, we show that it is not a special property of the Hopf bifurcation that allows for this accelerated growth rate; it is the combination of the unidirectional coupling and the higher-degree nonlinearities that causes this effect. Furthermore, we show that this large growth rate need not persist away from the bifurcation. In fact, the growth rate is asymptotic to the standard square-root growth rate as the bifurcation parameter increases.
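The amplification mechanism named here, unidirectional coupling feeding higher-degree nonlinearities, can be seen in a toy steady-state computation (a caricature, not the paper's two-time-scale asymptotics). In the chain r_j' = lam * r_j + r_{j-1} - r_j^3, the first cell bifurcates with the usual sqrt(lam) amplitude, and each downstream cell roughly cube-roots its input, so the amplitudes scale like lam^{1/2}, lam^{1/6}, lam^{1/18}:

```python
# Toy amplitude chain illustrating the amplified growth rate near bifurcation.
import numpy as np

def chain_amplitudes(lam, cells=3):
    r = [np.sqrt(lam)]                 # cell 1: standard square-root amplitude
    for _ in range(cells - 1):
        # steady state of r' = lam*r + r_prev - r**3, i.e. the real positive
        # root of r**3 - lam*r - r_prev = 0
        roots = np.roots([1.0, 0.0, -lam, -r[-1]])
        r.append(max(rt.real for rt in roots if abs(rt.imag) < 1e-8))
    return r

for lam in (1e-4, 1e-6):
    print(lam, chain_amplitudes(lam),
          [lam ** 0.5, lam ** (1 / 6), lam ** (1 / 18)])
```

As lam shrinks, the downstream amplitudes dwarf sqrt(lam), matching the accelerated growth rate described in the abstract; for larger lam the lam * r term dominates and the scaling reverts toward the standard square root.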

