feedback connections
Recently Published Documents

TOTAL DOCUMENTS: 119 (five years: 22)
H-INDEX: 32 (five years: 3)

2021 ◽  
Vol 13 (22) ◽  
pp. 4505
Author(s):  
Weisheng Li ◽  
Minghao Xiang ◽  
Xuesong Liang

To meet the need for multispectral images with high spatial resolution in practical applications, we propose a dense encoder–decoder network with feedback connections for pan-sharpening. Our network consists of four parts. The first part contains two identical subnetworks that extract features from the PAN and MS images, respectively. The second part is an efficient feature-extraction stage: to let the network focus on features at different scales, we propose multiscale feature-extraction blocks that fully extract effective features across various network depths and widths, using three such blocks and two long skip connections. The third part is the feature-fusion and recovery network; inspired by work on U-Net improvements, we propose a new encoder–decoder structure with dense connections that improves performance through effective connections between encoders and decoders at different scales. The fourth part is a continuous feedback connection that feeds high-level information back to refine shallow features, enabling the network to acquire good reconstruction capability earlier in training. To demonstrate the effectiveness of our method, we performed several experiments. Experiments on various satellite datasets show that the proposed method outperforms existing methods, with significant improvements over other models in the multiple objective indices used to measure the spectral quality and spatial detail of the generated images.
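For illustration only, here is a minimal sketch (assuming PyTorch) of how dual feature-extraction branches and an iterative feedback loop of this kind could fit together; the module names, channel widths, and number of feedback iterations are hypothetical and are not taken from the paper:

```python
import torch
import torch.nn as nn

class FeatureBranch(nn.Module):
    """Shallow convolutional feature extractor (one per input modality)."""
    def __init__(self, in_channels, width=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_channels, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.body(x)

class PanSharpenNet(nn.Module):
    """Dual-branch fusion network with a simple feedback loop (illustrative only)."""
    def __init__(self, ms_bands=4, width=32, feedback_steps=2):
        super().__init__()
        self.pan_branch = FeatureBranch(1, width)        # PAN: single band
        self.ms_branch = FeatureBranch(ms_bands, width)  # MS: multiple bands
        self.fuse = nn.Conv2d(2 * width + ms_bands, width, 3, padding=1)
        self.decode = nn.Conv2d(width, ms_bands, 3, padding=1)
        self.feedback_steps = feedback_steps

    def forward(self, pan, ms_upsampled):
        # ms_upsampled: low-resolution MS image interpolated to the PAN grid
        out = ms_upsampled
        for _ in range(self.feedback_steps):
            # feed the previous estimate back in alongside the shallow branch features
            feats = torch.cat([self.pan_branch(pan),
                               self.ms_branch(ms_upsampled), out], dim=1)
            out = ms_upsampled + self.decode(torch.relu(self.fuse(feats)))
        return out

pan = torch.randn(1, 1, 64, 64)
ms = torch.randn(1, 4, 64, 64)
print(PanSharpenNet()(pan, ms).shape)  # torch.Size([1, 4, 64, 64])
```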


2021 ◽  
Author(s):  
Anand P Singh ◽  
Ping Wu ◽  
Sergey Ryabichko ◽  
Joao Raimundo ◽  
Michael Swan ◽  
...  

Developmental patterning networks are regulated by multiple inputs and feedback connections that rapidly reshape gene expression, limiting the information that can be gained from slow genetic perturbations alone. Here we show that fast optogenetic stimuli, real-time transcriptional reporters, and a simplified genetic background can be combined to reveal quantitative regulatory dynamics from a complex genetic network in vivo. We engineer light-controlled variants of the Bicoid transcription factor and study their effects on downstream gap genes in embryos. Our results recapitulate known relationships, including rapid Bicoid-dependent expression of giant and hunchback and delayed repression of Kruppel. In contrast, we find that the posterior pattern of knirps exhibits a quick but inverted response to Bicoid perturbation, suggesting a previously unreported role for Bicoid in suppressing knirps expression. Acute modulation of transcription-factor concentration while simultaneously recording output gene activity is a powerful approach for studying how gene-circuit elements are coupled to cell identity and complex body-pattern formation in vivo.


Entropy ◽  
2021 ◽  
Vol 23 (9) ◽  
pp. 1218
Author(s):  
Adrian Moldovan ◽  
Angel Caţaron ◽  
Răzvan Andonie

Recently, there has been growing interest in applying Transfer Entropy (TE) to quantify the effective connectivity between artificial neurons. In a feedforward network, TE can be used to quantify the relationships between pairs of neuron outputs located in different layers. Our focus is on how to include TE in the learning mechanisms of a Convolutional Neural Network (CNN) architecture. We introduce a novel training mechanism for CNN architectures that integrates TE feedback connections. Adding the TE feedback parameter accelerates training, as fewer epochs are needed; on the other hand, it adds computational overhead to each epoch. According to our experiments on CNN classifiers, a reasonable overhead–accuracy trade-off is achieved by considering only the inter-neural information transfer between pairs of neurons in the last two fully connected layers. The TE acts as a smoothing factor, generating stability and becoming active only periodically rather than after every input sample. We can therefore regard the TE in our model as a slowly changing meta-parameter.
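As a rough illustration of the quantity involved (not the authors' implementation), transfer entropy between the activation traces of a source and a target neuron could be estimated as sketched below; binarizing the activations with a fixed threshold and using a history of one step are simplifying assumptions:

```python
import numpy as np

def transfer_entropy_binary(source, target, threshold=0.0):
    """Estimate TE(source -> target) from two activation traces recorded over a
    sequence of training samples, after binarizing with a fixed threshold.
    Uses a history length of one step."""
    x = (np.asarray(source) > threshold).astype(int)
    y = (np.asarray(target) > threshold).astype(int)
    # joint counts over (y_{t+1}, y_t, x_t)
    counts = np.zeros((2, 2, 2))
    for t in range(len(x) - 1):
        counts[y[t + 1], y[t], x[t]] += 1
    p = counts / counts.sum()
    te = 0.0
    for y_next in (0, 1):
        for y_prev in (0, 1):
            for x_prev in (0, 1):
                p_joint = p[y_next, y_prev, x_prev]
                if p_joint == 0:
                    continue
                p_cond_full = p_joint / p[:, y_prev, x_prev].sum()        # p(y_{t+1} | y_t, x_t)
                p_cond_reduced = p[y_next, y_prev, :].sum() / p[:, y_prev, :].sum()  # p(y_{t+1} | y_t)
                te += p_joint * np.log2(p_cond_full / p_cond_reduced)
    return te

rng = np.random.default_rng(0)
src = rng.standard_normal(1000)
tgt = np.roll(src, 1) + 0.5 * rng.standard_normal(1000)  # target driven by the source
print(transfer_entropy_binary(src, tgt))
```

In the setting described above, such pairwise estimates between neurons of the last two fully connected layers would periodically modulate the weight updates; here the value is only computed and printed.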


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Thiago Leiros Costa ◽  
Johan Wagemans

We review and revisit the predictive-processing-inspired "Gestalts as predictions" hypothesis. The study of Gestalt phenomena at and below threshold can help clarify the role of higher-order object-selective areas and feedback connections in mid-level vision. In two psychophysical experiments manipulating contrast and configurality, we showed that (1) Gestalt phenomena are robust against saliency manipulations across the psychometric function, even below threshold (the accuracy gains and higher saliency associated with Gestalts are present even around chance performance); and (2) peak differences between Gestalt and control conditions occur around the point where responses to Gestalts start to saturate (mimicking the differential contrast-response profile of striate vs. extrastriate visual neurons). In addition, Gestalts are associated with steeper psychometric functions in all experiments. We propose that these results reflect the differential engagement of object-selective areas in Gestalt phenomena and, more generally, of information- or percept-based processing as opposed to energy- or stimulus-based processing. The presence of nonlinearities in the psychometric functions further suggests differential top-down modulation of the early visual cortex. We treat this as a proof-of-principle study, illustrating that classic psychophysics can help assess the possible involvement of hierarchical predictive processing in Gestalt phenomena.
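To make the notion of a "steeper psychometric function" concrete, here is a small illustrative fit (assuming SciPy); the accuracy values are invented for the example and are not the study's data:

```python
import numpy as np
from scipy.optimize import curve_fit

def psychometric(contrast, threshold, slope, guess=0.5, lapse=0.02):
    """Logistic psychometric function for a 2AFC task: accuracy as a function
    of stimulus contrast. 'slope' controls how steeply accuracy rises."""
    return guess + (1 - guess - lapse) / (1 + np.exp(-slope * (contrast - threshold)))

# Hypothetical accuracy data for a Gestalt and a control condition
contrasts = np.array([0.02, 0.04, 0.08, 0.16, 0.32, 0.64])
acc_gestalt = np.array([0.52, 0.58, 0.75, 0.93, 0.97, 0.98])
acc_control = np.array([0.50, 0.54, 0.63, 0.78, 0.90, 0.96])

for label, acc in [("gestalt", acc_gestalt), ("control", acc_control)]:
    (thr, slope), _ = curve_fit(psychometric, contrasts, acc, p0=[0.1, 10.0])
    print(f"{label}: threshold={thr:.3f}, slope={slope:.1f}")
```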


2021 ◽  
Vol 4 (1) ◽  
Author(s):  
Polina Iamshchinina ◽  
Daniel Kaiser ◽  
Renat Yakupov ◽  
Daniel Haenelt ◽  
Alessandro Sciarra ◽  
...  

Primary visual cortex (V1) in humans is known to represent both veridically perceived external input and internally generated contents underlying imagery and mental rotation. However, it is unknown how the brain keeps these contents separate, avoiding a mixture of the perceived and the imagined that could have detrimental consequences. Inspired by neuroanatomical studies showing that feedforward and feedback connections in V1 terminate in different cortical layers, we hypothesized that this anatomical compartmentalization underlies the functional segregation of external and internally generated visual contents. We used high-resolution, layer-specific fMRI to test this hypothesis in a mental rotation task. We found that rotated contents were predominant at the outer cortical depth bins (superficial and deep), while perceived contents were represented more strongly at the middle cortical depth bin. These results show how, through compartmentalization across cortical depth, V1 functionally segregates rather than confuses external and internally generated visual contents, and they indicate that feedforward and feedback processing manifest in distinct subdivisions of the early visual cortex, reflecting a general strategy for implementing multiple cognitive functions within a single brain region.


2021 ◽  
Author(s):  
Tiberiu Teşileanu ◽  
Siavash Golkar ◽  
Samaneh Nasiri ◽  
Anirvan M. Sengupta ◽  
Dmitri B. Chklovskii

The brain must extract behaviorally relevant latent variables from the signals streamed by the sensory organs. Such latent variables are often encoded in the dynamics that generated the signal rather than in the specific realization of the waveform, so one problem the brain faces is to segment time series based on their underlying dynamics. We present two algorithms for this segmentation task that are biologically plausible, which we define as operating in a streaming setting with all learning rules local. One algorithm is model-based and can be derived from an optimization problem involving a mixture of autoregressive processes; it relies on feedback in the form of a prediction error and can also be used to forecast future samples. In some brain regions, such as the retina, the feedback connections needed to use the prediction error for learning are absent. For this case, we propose a second, model-free algorithm that uses a running estimate of the signal's autocorrelation structure to perform the segmentation. Both algorithms perform well when tasked with segmenting signals drawn from autoregressive models with piecewise-constant parameters; in particular, their segmentation accuracy is similar to that of oracle-like methods in which the ground-truth parameters of the autoregressive models are known. Implementations of our algorithms are available at https://github.com/ttesileanu/bio-time-series.
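A toy illustration of the model-free idea, segmenting a piecewise AR(1) signal by tracking a running estimate of its lag-1 autocorrelation (this is not the authors' algorithm, which is available at the repository above; the prototype coefficients are assumed known here, making the comparison oracle-like):

```python
import numpy as np

rng = np.random.default_rng(1)

# Piecewise AR(1) signal: the coefficient switches from 0.9 to -0.5 halfway through
def ar1(n, coeff):
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = coeff * x[t - 1] + rng.standard_normal()
    return x

signal = np.concatenate([ar1(2000, 0.9), ar1(2000, -0.5)])

# Streaming, model-free segmentation: exponential running estimates of the
# lag-1 autocorrelation, compared against two candidate prototypes
alpha = 0.01                         # learning rate of the running estimates
var_est, cov_est = 1.0, 0.0
prototypes = np.array([0.9, -0.5])   # assumed-known regime dynamics
labels = np.zeros(len(signal), dtype=int)
for t in range(1, len(signal)):
    var_est = (1 - alpha) * var_est + alpha * signal[t] ** 2
    cov_est = (1 - alpha) * cov_est + alpha * signal[t] * signal[t - 1]
    rho = cov_est / var_est          # running lag-1 autocorrelation
    labels[t] = np.argmin(np.abs(prototypes - rho))

# Fraction of samples assigned to the correct regime
truth = np.concatenate([np.zeros(2000, dtype=int), np.ones(2000, dtype=int)])
print("segmentation accuracy:", (labels == truth).mean())
```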


2021 ◽  
pp. 1-29
Author(s):  
Shanshan Qin ◽  
Nayantara Mudur ◽  
Cengiz Pehlevan

We propose a novel, biologically plausible solution to the credit assignment problem, motivated by observations in the ventral visual pathway and in trained deep neural networks. In both, representations of objects in the same category become progressively more similar, while objects belonging to different categories become less similar. We use this observation to motivate a layer-specific learning goal in a deep network: each layer aims to learn a representational similarity matrix that interpolates between those of the previous and later layers. We formulate this idea using a contrastive similarity matching objective function and derive from it deep neural networks with feedforward, lateral, and feedback connections, and with neurons that exhibit biologically plausible Hebbian and anti-Hebbian plasticity. Contrastive similarity matching can be interpreted as an energy-based learning algorithm, but it differs significantly from other such algorithms in how the contrastive function is constructed.
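A highly simplified sketch of the layer-wise target described above (not the authors' contrastive, Hebbian/anti-Hebbian derivation): each hidden layer's representational similarity matrix is compared against an interpolation of the similarity matrices of the layers below and above. The interpolation weight gamma and the toy data are illustrative assumptions.

```python
import numpy as np

def similarity_matrix(z):
    """Representational similarity matrix of a batch of representations
    (rows = samples): pairwise inner products."""
    return z @ z.T

def interpolation_loss(z_prev, z_curr, z_next, gamma=0.5):
    """Squared Frobenius distance between the current layer's similarity
    matrix and an interpolation of the neighboring layers' matrices."""
    target = (1 - gamma) * similarity_matrix(z_prev) + gamma * similarity_matrix(z_next)
    diff = similarity_matrix(z_curr) - target
    return np.sum(diff ** 2)

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 20))                   # batch of inputs
h = np.tanh(x @ rng.standard_normal((20, 10)))     # hidden-layer representation
y = np.eye(4)[rng.integers(0, 4, 8)]               # one-hot labels as the top "layer"
print(interpolation_loss(x, h, y))
```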


2021 ◽  
Vol 17 (1) ◽  
pp. e1008629
Author(s):  
Victor Boutin ◽  
Angelo Franciosini ◽  
Frederic Chavane ◽  
Franck Ruffier ◽  
Laurent Perrinet

Both neurophysiological and psychophysical experiments have pointed out the crucial role of recurrent and feedback connections in processing context-dependent information in the early visual cortex. While numerous models have accounted for feedback effects at either the neural or the representational level, none has been able to bridge these two levels of analysis. Is it possible to describe feedback effects at both levels using the same model? We answer this question by combining Predictive Coding (PC) and Sparse Coding (SC) into a hierarchical convolutional framework applied to realistic problems. In the Sparse Deep Predictive Coding (SDPC) model, the SC component models the recurrent processing within each layer, and the PC component describes the interactions between layers through feedforward and feedback connections. Here, we train a two-layer SDPC on two different image databases and interpret it as a model of the early visual system (V1 and V2). We first demonstrate that, once training has converged, the SDPC exhibits oriented and localized receptive fields in V1 and more complex features in V2. Second, we use interaction maps to analyze the effects of feedback on the neural organization beyond the classical receptive field of V1 neurons; these maps are similar to association fields and reflect the Gestalt principle of good continuation. We demonstrate that feedback signals reorganize the interaction maps and modulate neural activity to promote contour integration. Third, we demonstrate at the representational level that the SDPC feedback connections are able to overcome noise in input images. The SDPC therefore captures the association-field principle at the neural level, which results in better reconstruction of noisy images at the representational level.
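A minimal, non-convolutional sketch of the kind of two-layer inference loop that combines sparse coding within layers with predictive feedback between layers (a toy illustration under simplifying assumptions, not the published SDPC implementation; dictionary sizes, step size, and the feedback weight k_fb are arbitrary):

```python
import numpy as np

def soft_threshold(x, lam):
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def two_layer_inference(image, D1, D2, n_iter=100, lr=0.05, lam=0.1, k_fb=0.5):
    """Toy hierarchical sparse/predictive coding: layer 1 explains the image,
    layer 2 explains layer 1, and layer 2's prediction feeds back into the
    layer-1 update."""
    a1 = np.zeros(D1.shape[1])
    a2 = np.zeros(D2.shape[1])
    for _ in range(n_iter):
        err1 = image - D1 @ a1   # bottom-up prediction error (input vs. layer 1)
        err2 = a1 - D2 @ a2      # prediction error between layers 1 and 2
        # layer 1: driven by the input error, regularized by layer-2 feedback
        a1 = soft_threshold(a1 + lr * (D1.T @ err1 - k_fb * err2), lr * lam)
        # layer 2: driven by layer 1's activity
        a2 = soft_threshold(a2 + lr * (D2.T @ err2), lr * lam)
    return a1, a2

rng = np.random.default_rng(0)
D1 = rng.standard_normal((64, 128)); D1 /= np.linalg.norm(D1, axis=0)
D2 = rng.standard_normal((128, 32)); D2 /= np.linalg.norm(D2, axis=0)
image = rng.standard_normal(64)
a1, a2 = two_layer_inference(image, D1, D2)
print("active units per layer:", (a1 != 0).sum(), (a2 != 0).sum())
```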

