The color phi phenomenon: Not so special, after all?

2021 ◽  
Vol 17 (9) ◽  
pp. e1009344
Author(s):  
Lars Keuninckx ◽  
Axel Cleeremans

We show how anomalous time reversal of stimuli and their associated responses can exist in very small connectionist models. These networks are built from dynamical toy model neurons which adhere to a minimal set of biologically plausible properties. The appearance of a “ghost” response, temporally and spatially located in between responses caused by actual stimuli, as in the phi phenomenon, is demonstrated in a similar small network, where it is caused by priming and long-distance feedforward paths. We then demonstrate that the color phi phenomenon can be present in an echo state network, a recurrent neural network, without explicitly training for the presence of the effect, such that it emerges as an artifact of the dynamical processing. Our results suggest that the color phi phenomenon might simply be a feature of the inherent dynamical and nonlinear sensory processing in the brain and in and of itself is not related to consciousness.
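The echo state network mentioned here follows the standard reservoir-computing recipe: a fixed random recurrent reservoir with spectral radius below one, driven by the stimuli, with only a linear readout trained. A minimal NumPy sketch of that recipe follows; the network sizes, leak rate, and two-channel stimulus encoding are illustrative assumptions, not the paper's parameters.

```python
# Minimal echo state network (ESN) sketch -- the generic reservoir-computing
# setup, not the authors' exact model. Sizes and encoding are assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_res = 2, 200                       # two stimulus channels (assumed)

W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.normal(0, 1, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius below 1

def run_reservoir(u_seq, leak=0.3):
    """Drive the fixed reservoir with an input sequence; return all states."""
    x = np.zeros(n_res)
    states = []
    for u in u_seq:
        x = (1 - leak) * x + leak * np.tanh(W @ x + W_in @ u)
        states.append(x.copy())
    return np.array(states)

# Two brief flashes on different channels, separated in time, as in a
# phi-like stimulus; only a linear readout on the states would be trained.
u_seq = np.zeros((100, n_in))
u_seq[20:25, 0] = 1.0                      # first flash
u_seq[40:45, 1] = 1.0                      # second flash
states = run_reservoir(u_seq)

# Ridge-regression readout on targets Y (not shown):
# W_out = Y.T @ states @ np.linalg.inv(states.T @ states + 1e-3 * np.eye(n_res))
```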

2020 ◽  
Author(s):  
Lars Keuninckx ◽  
Axel Cleeremans

We show how anomalous time reversal of stimuli and their associated responses can exist in very small connectionist models. These networks are built from a dynamical toy model neuron and adhere to a minimal set of biologically plausible properties. The appearance of a "ghost" response, temporally and spatially located in between responses caused by actual stimuli, as in the phi phenomenon, is demonstrated in a similar small network, where it is caused by priming and long-distance feedforward paths. We then demonstrate that the color phi phenomenon can be present in an echo state network, a dynamical recurrent neural network, notably without explicit training for the presence of the effect. Our results suggest that similar illusions are likely obtainable in any system, artificial or biological in nature, with minimal neuron-like properties, and are thus merely artifacts of the inherent dynamical and nonlinear behavior of such systems.
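The "dynamical toy model neuron" is described only as minimally biologically plausible; a leaky, threshold-linear rate unit is one common reading. Below is a hedged sketch of a small chain of such units with an extra long-distance feedforward path, the ingredient the abstract credits for the ghost response; the time constant, threshold, and wiring weights are all assumptions.

```python
# Sketch of leaky rate units in a chain with a long-distance feedforward
# path -- one plausible reading of the "toy model neuron", not the paper's
# exact equations. All parameters are assumed.
import numpy as np

def step(v, inp, tau=5.0, dt=1.0):
    """Leaky integration toward the input; output rate is threshold-linear."""
    v = v + dt / tau * (-v + inp)
    rate = np.maximum(v - 0.2, 0.0)
    return v, rate

# Chain A -> B, plus a direct path A -> C, so a transient at A can prime C
# before B's relayed response arrives (ghost-like intermediate timing).
T = 60
vA = vB = vC = 0.0
stim = np.zeros(T)
stim[5:10] = 1.0
for t in range(T):
    vA, rA = step(vA, stim[t])
    vB, rB = step(vB, 1.5 * rA)
    vC, rC = step(vC, 0.8 * rA + 1.2 * rB)   # direct + relayed input
```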


2019 ◽  
Author(s):  
Daniel Miner ◽  
Christian Tetzlaff

In the course of everyday life, the brain must store and recall a huge variety of representations of stimuli which are presented in an ordered or sequential way. The processes by which the ordering of these various things is stored and recalled are moderately well understood. We use here a computational model of a cortex-like recurrent neural network adapted by a multitude of plasticity mechanisms. We first demonstrate the learning of a sequence. Then, we examine the influence of different types of distractors on the network dynamics during recall of the encoded sequential information. We broadly identify two distinct effect categories for distractors, arrive at a basic understanding of why this is so, and predict which distractors will fall into each category.
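The abstract does not spell out its plasticity mechanisms, but the core setup of sequence storage and recall with an injected distractor can be illustrated with the textbook asymmetric Hebbian construction, where each stored pattern drives the next. The pattern count, network size, distractor strength, and timing below are assumptions.

```python
# Sketch of sequence recall with asymmetric Hebbian weights and a distractor
# pulse -- a textbook reduction, not the paper's multi-plasticity model.
import numpy as np

rng = np.random.default_rng(1)
N, P = 400, 5
xi = rng.choice([-1.0, 1.0], size=(P, N))            # sequence of P patterns

# Asymmetric Hebbian rule: pattern mu drives pattern mu+1 on the next step.
W = sum(np.outer(xi[m + 1], xi[m]) for m in range(P - 1)) / N

x = xi[0].copy()
for t in range(P - 1):
    drive = W @ x
    if t == 2:                                       # distractor at one step
        drive += 0.8 * rng.choice([-1.0, 1.0], N)
    x = np.sign(drive)
    # overlaps with each stored pattern show whether recall stays on track
    print(t, [float(x @ xi[m]) / N for m in range(P)])
```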


2019 ◽  
Author(s):  
Zhewei Zhang ◽  
Huzi Cheng ◽  
Tianming Yang

The brain makes flexible and adaptive responses in a complicated and ever-changing environment to ensure the organism's survival. To achieve this, the brain needs to choose appropriate actions flexibly in response to sensory inputs. Moreover, the brain also has to understand how its actions affect future sensory inputs and what reward outcomes should be expected, and it adapts its behavior based on the actual outcomes. A modeling approach that takes into account the combined contingencies between sensory inputs, actions, and reward outcomes may be the key to understanding the underlying neural computation. Here, we train a recurrent neural network model based on sequence learning to predict future events from past event sequences that combine sensory, action, and reward events. We use four exemplary tasks that have been used in previous animal and human experiments to study different aspects of decision making and learning. We first show that the model reproduces the animals' choice and reaction time patterns in a probabilistic reasoning task, and that its units' activities mimic the classical ramping pattern of parietal neurons that reflects the evidence-accumulation process during decision making. We further demonstrate that the model carries out Bayesian inference and may support meta-cognition, such as confidence, with additional tasks. Finally, we show how the network model achieves adaptive behavior with an approach distinct from reinforcement learning. Our work pieces together many experimental findings in decision making and reinforcement learning and provides a unified framework for the flexible and adaptive behavior of the brain.
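The training setup described, predicting the next event in a token stream that mixes sensory, action, and reward events, is standard next-token sequence learning. A minimal PyTorch sketch follows; the vocabulary size, layer widths, and the random stand-in sequences are assumptions, not the paper's task encodings.

```python
# Next-event prediction over mixed sensory/action/reward tokens -- a minimal
# sketch of the setup the abstract describes; sizes and data are assumptions.
import torch
import torch.nn as nn

vocab, emb, hid = 16, 32, 64     # assumed token vocabulary and layer sizes

class EventRNN(nn.Module):
    """Predict the next event token (stimulus, action, or reward) from the past."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.rnn = nn.GRU(emb, hid, batch_first=True)
        self.out = nn.Linear(hid, vocab)

    def forward(self, tokens):               # tokens: (batch, time)
        h, _ = self.rnn(self.embed(tokens))
        return self.out(h)                   # next-event logits at each step

model = EventRNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

seq = torch.randint(0, vocab, (8, 20))       # stand-in event sequences
logits = model(seq[:, :-1])                  # predict token t+1 from tokens <= t
loss = nn.functional.cross_entropy(logits.reshape(-1, vocab),
                                   seq[:, 1:].reshape(-1))
loss.backward(); opt.step()
```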


Brain tumors are among the leading causes of cancer death because the brain is a highly sensitive, complex, and central part of the body. Proper and timely diagnosis can save a patient's life. Therefore, in this paper we introduce a brain tumor detection system based on combining wavelet statistical texture features and a recurrent neural network (RNN). The system consists of four phases: (i) feature extraction, (ii) feature selection, (iii) classification, and (iv) segmentation. First, noise removal is performed as a preprocessing step on the brain MR images. Texture features (both dominant run-length and co-occurrence texture features) are then extracted from these noise-free MR images. The large number of features is reduced using a sparse principal component analysis (SPCA) approach. The next step is to classify the brain image using the RNN. After classification, the proposed system extracts the tumor region from the MRI images using a modified region growing segmentation algorithm (MRG). The technique has been tested on datasets from different patients received from Muthu Neuro Center Hospital. The experimental results show that the proposed system achieves better results than existing approaches.
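The front half of this pipeline, co-occurrence texture features followed by sparse PCA reduction, maps directly onto standard scikit-image and scikit-learn calls, sketched below. The distances, angles, feature properties, and component count are assumptions, the random arrays stand in for denoised MR slices, and the RNN classification and MRG segmentation stages are only noted in comments.

```python
# Sketch of the texture-feature + sparse-PCA front end described above.
# Parameters are assumptions; random arrays stand in for denoised MR slices.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.decomposition import SparsePCA

def texture_features(img_u8):
    """Gray-level co-occurrence (GLCM) features for one MR slice."""
    glcm = graycomatrix(img_u8, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

imgs = [np.random.randint(0, 256, (64, 64), dtype=np.uint8) for _ in range(10)]
X = np.array([texture_features(im) for im in imgs])

X_red = SparsePCA(n_components=4, random_state=0).fit_transform(X)
# X_red would then feed the RNN classifier; tumor regions are afterwards
# delineated by the modified region growing (MRG) segmentation step.
```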


2021 ◽  
Author(s):  
Nimrod Shaham ◽  
Jay Chandra ◽  
Gabriel Kreiman ◽  
Haim Sompolinsky

Humans have the remarkable ability to continually store new memories while maintaining old memories for a lifetime. How the brain avoids catastrophic forgetting due to interference between encoded memories is an open problem in computational neuroscience. Here we present a model for continual learning in a recurrent neural network that combines Hebbian learning, synaptic decay, and a novel memory consolidation mechanism. Memories undergo stochastic rehearsals at rates proportional to each memory's basin of attraction, causing self-amplified consolidation that gives rise to memory lifetimes extending far beyond the synaptic decay time, and to a capacity proportional to a power of the number of neurons. Perturbations to the circuit model cause temporally graded retrograde and anterograde deficits, mimicking observed memory impairments following neurological trauma.
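The three named ingredients, Hebbian storage, synaptic decay, and stochastic rehearsal, can be reduced to a few lines in a binary Hopfield-style network. The sketch below is that reduction, not the authors' full model: the decay constant, rehearsal probability, and learning rates are assumptions, and the uniform rehearsal rule stands in for the paper's basin-of-attraction-weighted rates.

```python
# Hebbian storage + synaptic decay + stochastic rehearsal in a binary
# Hopfield-style network -- an illustrative reduction, not the full model.
import numpy as np

rng = np.random.default_rng(2)
N, steps = 200, 500
W = np.zeros((N, N))
memories = []

def hebb(W, xi, eta=1.0):
    """Hebbian outer-product update for one binary pattern."""
    return W + eta * np.outer(xi, xi) / N

for t in range(steps):
    W *= np.exp(-1.0 / 100.0)               # synaptic decay every step
    if t % 25 == 0:                         # occasionally encode a new memory
        memories.append(rng.choice([-1.0, 1.0], N))
        W = hebb(W, memories[-1])
    if memories and rng.random() < 0.5:     # stochastic rehearsal: re-encode
        W = hebb(W, memories[rng.integers(len(memories))], eta=0.3)
        # (in the paper, rehearsal rates scale with the basin of attraction,
        #  which is what self-amplifies consolidation)
```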


2019 ◽  
Author(s):  
Eli Pollock ◽  
Mehrdad Jazayeri

Many cognitive processes involve transformations of distributed representations in neural populations, creating a need for population-level models. Recurrent neural network models fulfill this need, but there are many open questions about how their connectivity gives rise to dynamics that solve a task. Here, we present a method for finding the connectivity of networks for which the dynamics are specified to solve a task in an interpretable way. We apply our method to a working memory task by synthesizing a network that implements a drift-diffusion process over a ring-shaped manifold. We also use our method to demonstrate how inputs can be used to control network dynamics for cognitive flexibility and explore the relationship between representation geometry and network capacity. Our work fits within the broader context of understanding neural computations as dynamics over relatively low-dimensional manifolds formed by correlated patterns of neurons.

Author Summary: Neurons in the brain form intricate networks that can produce a vast array of activity patterns. To support goal-directed behavior, the brain must adjust the connections between neurons so that network dynamics can perform desirable computations on behaviorally relevant variables. A fundamental goal in computational neuroscience is to provide an understanding of how network connectivity aligns the dynamics in the brain to the dynamics needed to track those variables. Here, we develop a mathematical framework for creating recurrent neural network models that can address this problem. Specifically, we derive a set of linear equations that constrain the connectivity to afford a direct mapping of task-relevant dynamics onto network activity. We demonstrate the utility of this technique by creating and analyzing a set of network models that can perform a simple working memory task. We then extend the approach to show how additional constraints can furnish networks whose dynamics are controlled flexibly by external inputs. Finally, we exploit the flexibility of this technique to explore the robustness and capacity limitations of recurrent networks. This network synthesis method provides a powerful means for generating and validating hypotheses about how task-relevant computations can emerge from network dynamics.
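The target dynamics here, memory stored as position on a ring-shaped manifold, can be illustrated with the classic hand-built ring attractor, even though the paper derives its connectivity from linear constraints rather than writing it down directly. The sketch below is that classic construction, not the paper's synthesis method; the network size, cosine connectivity gains, and input level are assumptions.

```python
# Classic ring-attractor sketch: cosine-tuned recurrent weights sustain a
# bump of activity whose position encodes a circular variable. Illustrates
# the target dynamics only; not the paper's constraint-based synthesis.
import numpy as np

N = 128
theta = np.linspace(0, 2 * np.pi, N, endpoint=False)
J = (-0.5 + 1.5 * np.cos(theta[:, None] - theta[None, :])) / N  # ring weights

r = np.maximum(np.cos(theta - np.pi), 0)       # initial bump at angle pi
for _ in range(200):
    r = np.maximum(J @ r + 0.1, 0)             # rectified-linear rate dynamics

bump_position = theta[np.argmax(r)]            # decoded memory on the ring
```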


2021 ◽  
pp. 1-14
Author(s):  
A. Karthika ◽  
R. Subramanian ◽  
S. Karthik

Focal cortical dysplasia (FCD) is an inborn anomaly of brain growth, a morphological deformation in brain lesions that induces focal seizures. FCD is treated neurosurgically, and the surgical outcome depends largely on the output of the presurgical evaluation of epilepsy. In preprocessing, increasing true positives while decreasing false negatives leads to an effective outcome. MRI (magnetic resonance imaging) is effective for predicting FCD lesions; efficient output can be obtained with T1-MPRAGE and T2-FLAIR sequences. In our proposed work we extract the S2 features by testing the T1 and T2 images. Using an RNN-LSTM (recurrent neural network with long short-term memory), the test images are trained and the FCD lesions segmented. The proposed work yields better results than existing systems such as the artificial neural network (ANN), support vector machine (SVM), and convolutional neural network (CNN): in comparison with RNN-LSTM, the accuracy rates differ by 0.195% (ANN), 0.20% (SVM), and 0.14% (CNN); the specificity rates by 0.23% (ANN), 0.15% (SVM), and 0.13% (CNN); and the sensitivity rates by 0.22% (ANN), 0.14% (SVM), and 0.08% (CNN), respectively.
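The abstract does not specify how the LSTM is applied to the MR images; one common pattern is to treat each image row as one step of a sequence and classify the whole slice. The PyTorch sketch below uses that pattern as an assumption, along with assumed sizes, random stand-in slices, and a binary lesion label; it is not the paper's exact architecture.

```python
# Hedged sketch of an LSTM slice classifier, reading an MR slice row by row.
# The row-as-timestep framing, sizes, and labels are all assumptions.
import torch
import torch.nn as nn

class LesionLSTM(nn.Module):
    def __init__(self, width=128, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(width, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)        # lesion vs. no lesion

    def forward(self, slices):                  # slices: (batch, rows, width)
        _, (h, _) = self.lstm(slices)
        return self.head(h[-1]).squeeze(-1)     # one logit per slice

model = LesionLSTM()
x = torch.randn(4, 128, 128)                    # stand-in T1/T2 slices
logits = model(x)
loss = nn.functional.binary_cross_entropy_with_logits(
    logits, torch.tensor([1.0, 0.0, 1.0, 0.0]))
# Pixel-level lesion segmentation would require a per-pixel output head
# rather than the single slice-level logit used in this sketch.
```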

