Hey, look over there: Distraction effects on rapid sequence recall

2019 ◽  
Author(s):  
Daniel Miner ◽  
Christian Tetzlaff

In the course of everyday life, the brain must store and recall a huge variety of stimulus representations that are presented in an ordered or sequential way. The processes by which the ordering of these items is stored and recalled are only moderately well understood. Here we use a computational model of a cortex-like recurrent neural network shaped by a multitude of plasticity mechanisms. We first demonstrate the learning of a sequence. We then examine the influence of different types of distractors on the network dynamics during recall of the encoded sequence. We identify two broad categories of distractor effects, develop a basic understanding of why they arise, and predict which distractors will fall into each category.
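The derailing effect of a distractor can be illustrated with a toy sketch (this is not the paper's plasticity-based model): a chain-structured recurrent network recalls a learned sequence by winner-take-all propagation, and a sufficiently strong distractor input knocks the recall off its learned order.

```python
def recall_order(n_units=5, distractor_step=None, distractor_unit=None):
    """Recall a sequence stored as a feedforward chain 0 -> 1 -> ... -> n-1.

    Each step, activity propagates one link along the chain and the most
    strongly driven unit wins (winner-take-all). An optional distractor
    injects extra drive into one unit at one time step.
    """
    activity = [0.0] * n_units
    activity[0] = 1.0                      # cue: the first sequence element
    order = [0]
    for step in range(1, n_units):
        drive = [0.0] * n_units
        for i in range(n_units - 1):       # chain weights i -> i+1
            drive[i + 1] += activity[i]
        if step == distractor_step and distractor_unit is not None:
            drive[distractor_unit] += 2.0  # distractor out-drives the chain
        winner = max(range(n_units), key=drive.__getitem__)
        activity = [0.0] * n_units
        activity[winner] = 1.0             # winner-take-all recall step
        order.append(winner)
    return order
```

Without a distractor the units fire in the stored order `[0, 1, 2, 3, 4]`; with a strong distractor at step 2 the recall jumps to the distracted unit instead, after which the chain dynamics no longer follow the learned sequence.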


Author(s):  
Veeraraghavan Jagannathan

Question Answering (QA) has become one of the most significant information-retrieval applications. Most question answering systems, however, focus on improving the user experience in finding relevant results. Because web content grows continuously, retrieving relevant results is a challenging issue for a Question Answering System (QAS). This research therefore proposes an effective Question Classification (QC) and retrieval approach, a Bayesian probability and Tanimoto-based Recurrent Neural Network (RNN), to differentiate question types more efficiently. The work analyzes different types of questions with respect to their grammatical structures; various patterns are identified from the questions, and the RNN classifier is used to classify them. The results obtained with the proposed Bayesian probability and Tanimoto-based RNN show that syntactic categories related to domain-specific proper nouns, numerals, and common nouns enable the RNN classifier to distinguish different question types more accurately. The proposed approach obtained better performance in terms of precision, recall, and F-measure, with values of 90.14, 86.301, and 90.936 on dataset-2.
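The Tanimoto coefficient named in the approach is the extended Jaccard similarity between two feature vectors; a generic sketch of the measure (not the authors' full retrieval pipeline) is:

```python
def tanimoto(a, b):
    """Tanimoto (extended Jaccard) similarity between two feature vectors:
    dot(a, b) / (|a|^2 + |b|^2 - dot(a, b)). Equals 1.0 for identical
    nonzero vectors and 0.0 for vectors with no shared support."""
    dot = sum(x * y for x, y in zip(a, b))
    denom = sum(x * x for x in a) + sum(y * y for y in b) - dot
    return dot / denom if denom else 0.0
```

In a QC setting the vectors would typically be term or pattern counts for two questions, so the score measures how much grammatical/lexical structure they share.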



2021 ◽  
Vol 17 (9) ◽  
pp. e1009344
Author(s):  
Lars Keuninckx ◽  
Axel Cleeremans

We show how anomalous time reversal of stimuli and their associated responses can exist in very small connectionist models. These networks are built from dynamical toy model neurons which adhere to a minimal set of biologically plausible properties. The appearance of a “ghost” response, temporally and spatially located in between responses caused by actual stimuli, as in the phi phenomenon, is demonstrated in a similar small network, where it is caused by priming and long-distance feedforward paths. We then demonstrate that the color phi phenomenon can be present in an echo state network, a recurrent neural network, without explicitly training for the presence of the effect, such that it emerges as an artifact of the dynamical processing. Our results suggest that the color phi phenomenon might simply be a feature of the inherent dynamical and nonlinear sensory processing in the brain and in and of itself is not related to consciousness.
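An echo state network's defining feature is that only a linear readout (omitted here) is trained, while the recurrent reservoir is fixed and random; this is why effects like color phi can appear without being trained for. A minimal reservoir sketch, with hypothetical parameters and a crude 1/sqrt(n) weight scaling that only roughly bounds the spectral radius:

```python
import math
import random

def esn_states(inputs, n_res=20, spectral_scale=0.9, seed=0):
    """Run a scalar input sequence through a fixed random reservoir.

    x(t+1) = tanh(W x(t) + W_in u(t)); with the recurrent weights scaled
    below unit spectral radius, the state is a fading memory of the input
    (the echo state property). No weights are trained here.
    """
    rng = random.Random(seed)
    W_in = [rng.uniform(-1, 1) for _ in range(n_res)]
    W = [[rng.uniform(-1, 1) * spectral_scale / n_res ** 0.5
          for _ in range(n_res)] for _ in range(n_res)]
    x = [0.0] * n_res
    states = []
    for u in inputs:
        x = [math.tanh(sum(W[i][j] * x[j] for j in range(n_res)) + W_in[i] * u)
             for i in range(n_res)]
        states.append(x)
    return states
```

A readout trained by linear regression on these states would then map the reservoir's nonlinear mixture of past inputs to the desired response.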



1999 ◽  
Vol 10 (05) ◽  
pp. 815-821 ◽  
Author(s):  
DANIEL VOLK

A discrete model of a neural network of excitatory and inhibitory neurons is presented which yields oscillations of its global activity. Different types of dynamics occur depending on the selection of parameters: oscillating population activity as well as randomly fluctuating but mainly constant activity. For certain sets of parameters the model also shows temporary transitions from apparently random to periodic behavior in one run, similar to an epileptic seizure.
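A two-population caricature of such a discrete model (illustrative parameters, not those of the paper) shows how inhibition that lags excitation by one step yields oscillating global activity:

```python
def ei_oscillation(steps, w_ee=1.0, w_ei=2.0, ext=0.6, theta=0.5):
    """Binary excitatory/inhibitory population map.

    e(t+1) = step(w_ee*e(t) - w_ei*i(t) + ext - theta), i(t+1) = e(t):
    inhibition tracks excitation with a one-step delay, so excitation
    builds, gets suppressed, recovers, and so on -- here a period-4
    oscillation of the excitatory population activity.
    """
    e, i = 1, 0
    trace = []
    for _ in range(steps):
        trace.append(e)
        e, i = (1 if w_ee * e - w_ei * i + ext > theta else 0), e
    return trace
```

Changing the weights or external drive moves this toy map between oscillatory and constant-activity regimes, loosely mirroring the parameter-dependent dynamics described in the abstract.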



It is a well-known fact that all the Artificial Intelligence (AI)researches happening across multiple verticals such as Neuro Imaging, Computer Vision, Deep learning etc point to one master goal of modelling the human brain function by understanding how each part of the brain works. The Convolution neural network (CNN) is one of best deep architecture suitable to handle variety of inputs. In this paper we explore the different types of input data the CNN deep architecture can process and some of the CNN configuration changes that has proved good Accuracy. We have highlighted those specialized CNN architectures along with different types of data inputs they handle including the Functional Magnetic Resonance (fMRI) Neuro Image brain data input.



2019 ◽  
Author(s):  
Zhewei Zhang ◽  
Huzi Cheng ◽  
Tianming Yang

The brain makes flexible and adaptive responses in a complicated and ever-changing environment to ensure the organism's survival. To achieve this, the brain needs to choose appropriate actions flexibly in response to sensory inputs. Moreover, the brain also has to understand how its actions affect future sensory inputs and what reward outcomes should be expected, and it adapts its behavior based on the actual outcomes. A modeling approach that takes into account the combined contingencies between sensory inputs, actions, and reward outcomes may be the key to understanding the underlying neural computation. Here, we train a recurrent neural network model based on sequence learning to predict future events from past event sequences that combine sensory, action, and reward events. We use four exemplary tasks that have been used in previous animal and human experiments to study different aspects of decision making and learning. We first show that the model reproduces the animals' choice and reaction-time patterns in a probabilistic reasoning task, and that its units' activities mimic the classical ramping pattern of parietal neurons that reflects the evidence-accumulation process during decision making. We further demonstrate that the model carries out Bayesian inference and may support meta-cognition such as confidence with additional tasks. Finally, we show how the network model achieves adaptive behavior with an approach distinct from reinforcement learning. Our work pieces together many experimental findings in decision making and reinforcement learning and provides a unified framework for the flexible and adaptive behavior of the brain.
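The key representational idea, folding sensory, action, and reward events into one token stream that the network predicts over, can be sketched as follows (the event names are hypothetical, not the paper's task vocabulary):

```python
# Sensory, action, and reward events are interleaved in a single ordered
# stream; the sequence-learning model is trained to predict the next
# token from the past, which forces it to learn the contingencies
# linking stimuli, actions, and outcomes.
EVENTS = ["fixation", "left_cue", "right_cue",
          "choose_left", "choose_right", "reward", "no_reward"]
TOKEN = {name: idx for idx, name in enumerate(EVENTS)}

def one_hot(name):
    """One-hot vector for a single event token."""
    v = [0] * len(EVENTS)
    v[TOKEN[name]] = 1
    return v

def encode_trial(events):
    """Turn one trial's ordered events into the model's input sequence."""
    return [one_hot(e) for e in events]

trial = encode_trial(["fixation", "left_cue", "choose_left", "reward"])
```

Because actions and rewards sit in the same predicted stream as stimuli, next-event prediction alone can drive outcome-sensitive behavior, which is what lets the model adapt without an explicit reinforcement learning rule.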



Brain tumor is one of the major causes of cancer death because the brain is a very sensitive, complex, and central part of the body. Proper and timely diagnosis can save a patient's life to some extent. Therefore, in this paper we introduce a brain tumor detection system based on combining wavelet statistical texture features with a recurrent neural network (RNN). The system consists of four phases: (i) feature extraction, (ii) feature selection, (iii) classification, and (iv) segmentation. First, noise removal is performed as a preprocessing step on the brain MR images. Texture features (both dominant run-length and co-occurrence texture features) are then extracted from these noise-free MR images. The large number of features is reduced using a sparse principal component analysis (SPCA) approach. The next step is to classify the brain image using the RNN. After classification, the proposed system extracts the tumor region from the MRI images using a modified region growing (MRG) segmentation algorithm. The technique has been tested against datasets of different patients received from Muthu Neuro Center Hospital. The experimental results show that the proposed system achieves better results than existing approaches.
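A simplified sketch of the co-occurrence step of such a feature-extraction phase, computing only two Haralick-style features (contrast and energy) for a single pixel offset; the paper's full dominant run-length feature set and SPCA reduction are omitted:

```python
def cooccurrence_features(img, levels, dx=1, dy=0):
    """Grey-level co-occurrence matrix (GLCM) for one offset (dx, dy),
    normalized to probabilities, reduced to two texture features:
    contrast = sum P[i][j]*(i-j)^2, energy = sum P[i][j]^2.
    `img` is a 2-D list of integer grey levels in [0, levels).
    """
    P = [[0] * levels for _ in range(levels)]
    h, w = len(img), len(img[0])
    for y in range(h - dy):
        for x in range(w - dx):
            P[img[y][x]][img[y + dy][x + dx]] += 1
    total = sum(map(sum, P)) or 1
    P = [[c / total for c in row] for row in P]
    contrast = sum(P[i][j] * (i - j) ** 2
                   for i in range(levels) for j in range(levels))
    energy = sum(P[i][j] ** 2
                 for i in range(levels) for j in range(levels))
    return contrast, energy
```

A perfectly uniform patch gives zero contrast and maximal energy, while alternating grey levels drive contrast up; feature vectors built from several offsets and statistics would then feed the selection and classification phases.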



2021 ◽  
Author(s):  
Yogesh Deshmukh ◽  
Samiksha Dahe ◽  
Tanmayeeta Belote ◽  
Aishwarya Gawali ◽  
Sunnykumar Choudhary

Brain tumor detection using a Convolutional Neural Network (CNN) is used to discover and classify types of tumor. Over the years, many researchers have studied and proposed methods in this area. We propose a technique capable of detecting and classifying different types of tumor. For detection and classification we use MRI, because MRI images give the complete structure of the human brain: the brain is scanned without any operation, which aids image processing for tumor detection. Prediction of tumors from MRI images by humans is prone to misclassification. This motivates us to construct an algorithm for brain tumor detection. Machine learning plays a vital role in detecting tumors. In this paper, we use one of the machine learning algorithms, the Convolutional Neural Network (CNN), as CNNs are powerful in image processing; with the help of a CNN and MRI images, we design a framework for detecting brain tumors and classifying their different types.
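The core operation a CNN layer applies to an MRI slice is a 2-D convolution, implemented in most deep learning libraries as cross-correlation; a minimal sketch of one "valid" filter pass:

```python
def conv2d(img, kernel):
    """'Valid' 2-D cross-correlation (the CNN convention) of a 2-D list
    `img` with a 2-D list `kernel`: slide the kernel over every position
    where it fits entirely and take the elementwise-product sum."""
    kh, kw = len(kernel), len(kernel[0])
    oh, ow = len(img) - kh + 1, len(img[0]) - kw + 1
    return [[sum(kernel[i][j] * img[y + i][x + j]
                 for i in range(kh) for j in range(kw))
             for x in range(ow)]
            for y in range(oh)]
```

With a horizontal-difference kernel such as `[[-1, 1]]`, the output responds only where intensity changes, which is the kind of learned edge/texture response that lets stacked CNN layers pick out tumor boundaries in MRI slices.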



2021 ◽  
Author(s):  
Nimrod Shaham ◽  
Jay Chandra ◽  
Gabriel Kreiman ◽  
Haim Sompolinsky

Humans have the remarkable ability to continually store new memories, while maintaining old memories for a lifetime. How the brain avoids catastrophic forgetting of memories due to interference between encoded memories is an open problem in computational neuroscience. Here we present a model for continual learning in a recurrent neural network combining Hebbian learning, synaptic decay and a novel memory consolidation mechanism. Memories undergo stochastic rehearsals with rates proportional to the memory's basin of attraction, causing self-amplified consolidation, giving rise to memory lifetimes that extend much longer than synaptic decay time, and capacity proportional to a power of the number of neurons. Perturbations to the circuit model cause temporally-graded retrograde and anterograde deficits, mimicking observed memory impairments following neurological trauma.
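The Hebbian-learning-plus-synaptic-decay substrate of such a model can be sketched as a palimpsest Hopfield network, where each new memory is written on top of exponentially fading older ones (illustrative only; the paper's rehearsal-based consolidation mechanism is omitted):

```python
def hebbian_store(patterns, decay=0.9):
    """Hebbian outer-product weights with multiplicative synaptic decay:
    W <- decay * W + p p^T / n for each stored +/-1 pattern p, so older
    memories fade as new ones are written (a palimpsest scheme)."""
    n = len(patterns[0])
    W = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                W[i][j] = decay * W[i][j] + (p[i] * p[j] / n if i != j else 0.0)
    return W

def recall(W, cue, steps=10):
    """Synchronous recall: iterate s <- sign(W s) from a (possibly
    corrupted) cue; the state falls into the nearest basin of attraction."""
    s = list(cue)
    n = len(s)
    for _ in range(steps):
        s = [1 if sum(W[i][j] * s[j] for j in range(n)) >= 0 else -1
             for i in range(n)]
    return s
```

In the paper's model, stochastic rehearsal re-applies Hebbian updates at rates tied to each memory's basin size, so well-consolidated attractors resist the decay that would otherwise erase them.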



PLoS ONE ◽  
2017 ◽  
Vol 12 (9) ◽  
pp. e0184561 ◽  
Author(s):  
WenBo Xiao ◽  
Gina Nazario ◽  
HuaMing Wu ◽  
HuaMing Zhang ◽  
Feng Cheng

