Dynamical Analysis of Bayesian Inference Models for the Eriksen Task

2009 · Vol 21 (6) · pp. 1520-1553
Author(s): Yuan Sophie Liu, Angela Yu, Philip Holmes

The Eriksen task is a classical paradigm that explores the effects of competing sensory inputs on response tendencies and the nature of selective attention in controlling these processes. In this task, conflicting flanker stimuli interfere with the processing of a central target, especially on short reaction time trials. This task has been modeled by neural networks and, more recently, by a normative Bayesian account. Here, we analyze the dynamics of the Bayesian models, which are nonlinear, coupled, discrete-time dynamical systems, by considering simplified, approximate systems that are linear and decoupled. Analytical solutions of these allow us to describe how posterior probabilities and psychometric functions depend on model parameters. We compare our results with numerical simulations of the original models and derive fits to experimental data, showing good agreement. We also investigate continuum limits of these simplified dynamical systems and demonstrate that Bayesian updating is closely related to a drift-diffusion process, whose implementation in neural network models has been extensively studied. This provides insight into how neural substrates can implement Bayesian computations.
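The link between sequential Bayesian updating and drift-diffusion can be illustrated with a toy two-hypothesis example (a minimal sketch, not the paper's Eriksen-task model; the observation means mu1, mu2 and noise level sigma are illustrative assumptions). The log posterior odds accumulate log-likelihood ratios additively, so the update is a biased random walk whose continuum limit is a drift-diffusion process:

```python
import numpy as np

# Toy example: two hypotheses H1, H2 with i.i.d. Gaussian observations.
# Each Bayesian update adds one log-likelihood ratio to the log posterior
# odds, i.e. the posterior evolves as a discrete drift-diffusion process.
rng = np.random.default_rng(0)

mu1, mu2, sigma = 0.5, -0.5, 1.0      # hypothetical observation models
n_steps = 200
x = rng.normal(mu1, sigma, n_steps)   # observations generated under H1

log_odds = np.zeros(n_steps + 1)      # log P(H1|data) - log P(H2|data)
for t in range(n_steps):
    # log N(x; mu1, sigma) - log N(x; mu2, sigma)
    llr = ((x[t] - mu2)**2 - (x[t] - mu1)**2) / (2 * sigma**2)
    log_odds[t + 1] = log_odds[t] + llr   # additive update = drift + noise

posterior_h1 = 1.0 / (1.0 + np.exp(-log_odds))
print(f"final posterior P(H1|data) = {posterior_h1[-1]:.4f}")
```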

2020
Author(s): Vysakh S Mohan

Edge processing for computer vision systems enables incorporating visual intelligence into mobile robotics platforms. Demand for low-power, low-cost, small-form-factor devices is on the rise. This work proposes a unified platform to generate deep learning models compatible with edge devices from Intel, NVIDIA, and XaLogic. The platform enables users to create custom data annotations, train neural networks, and generate edge-compatible inference models. As a testimony to the tool's ease of use and flexibility, we explore two use cases: a vision-powered prosthetic hand and drone vision. Neural network models for these use cases will be built using the proposed pipeline and will be open-sourced. Online and offline versions of the tool, along with the corresponding inference modules for edge devices, will also be made public so that users can create custom computer vision use cases.
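The abstract does not name an export format, but a common route to both Intel and NVIDIA edge runtimes is the vendor-neutral ONNX format (a hedged sketch, not the platform's actual pipeline; the model choice and file name are illustrative, and torch/torchvision are assumed available):

```python
import torch
import torchvision

# Hypothetical example: export a classifier to ONNX, a format that both
# Intel OpenVINO and NVIDIA TensorRT toolchains can ingest for edge inference.
model = torchvision.models.mobilenet_v2(weights=None)  # stand-in for a trained model
model.eval()

dummy_input = torch.randn(1, 3, 224, 224)  # one RGB image at 224x224
torch.onnx.export(model, dummy_input, "edge_model.onnx",
                  input_names=["image"], output_names=["logits"])
```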




2019
Author(s): Dat Duong, Ankith Uppunda, Lisa Gai, Chelsea Ju, James Zhang, ...

Protein functions can be described by Gene Ontology (GO) terms, allowing us to compare the functions of two proteins by measuring the similarity of the terms assigned to them. Recent works have applied neural network models to derive vector representations for GO terms and compute similarity scores for these terms by comparing their vector embeddings. There are two typical ways to embed GO terms into vectors: a model can embed either the definitions of the terms or the topology of the terms in the ontology. In this paper, we design three tasks to critically evaluate the GO embeddings of two recent neural network models, and further introduce additional models for embedding GO terms, adapted from three popular neural network frameworks: Graph Convolutional Network (GCN), Embeddings from Language Models (ELMo), and Bidirectional Encoder Representations from Transformers (BERT), which have not yet been explored in previous works. Task 1 studies edge cases where the GO embeddings may not provide meaningful similarity scores for GO terms. We find that all neural network based methods fail to produce high similarity scores for related terms when these terms have low Information Content values. Task 2 is a canonical task which estimates how well GO embeddings can compare functions of two orthologous genes or two interacting proteins. The best neural network methods for this task are those that embed GO terms using their definitions, and the differences among such methods are small. Task 3 evaluates how GO embeddings affect the performance of GO annotation methods, which predict whether a protein should be labeled by certain GO terms. When the annotation datasets contain many samples for each GO label, GO embeddings do not improve the classification accuracy. Machine learning GO annotation methods often remove rare GO labels from the training datasets so that the model parameters can be efficiently trained. We evaluate whether GO embeddings can improve prediction of rare labels unseen in the training datasets, and find that GO embeddings based on the BERT framework achieve the best results in this setting. We present our embedding methods and three evaluation tasks as the basis for future research on this topic.
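Comparing two GO terms via their embeddings typically reduces to a vector similarity such as cosine similarity (a minimal sketch; the embedding vectors here are random placeholders, not output of the models evaluated in the paper, and the GO identifier is only illustrative):

```python
import numpy as np

def cosine_similarity(u, v):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Placeholder vectors standing in for learned GO term embeddings.
rng = np.random.default_rng(42)
emb_go_a = rng.normal(size=256)  # e.g. embedding of GO:0008150 (hypothetical)
emb_go_b = rng.normal(size=256)

print(f"term similarity = {cosine_similarity(emb_go_a, emb_go_b):.3f}")
```

Protein-level comparisons then aggregate such pairwise term scores over the two proteins' annotation sets, for example by a best-match average.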


2018
Author(s): Simen Tennøe, Geir Halnes, Gaute T. Einevoll

Computational models in neuroscience typically contain many parameters that are poorly constrained by experimental data. Uncertainty quantification and sensitivity analysis provide rigorous procedures to quantify how the model output depends on this parameter uncertainty. Unfortunately, the application of such methods is not yet standard within the field of neuroscience.

Here we present Uncertainpy, an open-source Python toolbox tailored to perform uncertainty quantification and sensitivity analysis of neuroscience models. Uncertainpy aims to make it easy and quick to get started with uncertainty analysis, without any need for detailed prior knowledge. The toolbox allows uncertainty quantification and sensitivity analysis to be performed on already existing models without needing to modify the model equations or model implementation. Uncertainpy bases its analysis on polynomial chaos expansions, which are more efficient than the more standard Monte Carlo based approaches.

Uncertainpy is tailored for neuroscience applications by its built-in capability for calculating characteristic features in the model output. The toolbox does not merely perform a point-to-point comparison of the "raw" model output (e.g., membrane voltage traces), but can also calculate the uncertainty and sensitivity of salient model response features such as spike timing, action potential width, mean interspike interval, and other features relevant for various neural and neural network models. Uncertainpy comes with several common models and features built in, and including custom models and new features is easy.

The aim of the current paper is to present Uncertainpy for the neuroscience community in a user-oriented manner. To demonstrate its broad applicability, we perform an uncertainty quantification and sensitivity analysis on three case studies relevant for neuroscience: the original Hodgkin-Huxley point-neuron model for action potential generation, a multi-compartmental model of a thalamic interneuron implemented in the NEURON simulator, and a sparsely connected recurrent network model implemented in the NEST simulator.

Significance statement: A major challenge in computational neuroscience is to specify the often large number of parameters that define the neuron and neural network models. Many of these parameters have an inherent variability, and some may even be actively regulated and change with time. It is important to know how the uncertainty in model parameters affects the model predictions. To address this need, we here present Uncertainpy, an open-source Python toolbox tailored to perform uncertainty quantification and sensitivity analysis of neuroscience models.
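A typical entry point follows the pattern of the toolbox's documented quickstart (a sketch under the assumption that uncertainpy and chaospy are installed; the toy cooling model and parameter ranges are illustrative, not one of the paper's neuroscience case studies):

```python
import numpy as np
import chaospy as cp
import uncertainpy as un
from scipy.integrate import odeint

def cooling_model(kappa, T_env):
    """Toy exponential-cooling model: returns (time, temperature)."""
    time = np.linspace(0, 200, 150)
    T_0 = 95.0

    def dTdt(T, t):
        return -kappa * (T - T_env)

    temperature = odeint(dTdt, T_0, time)[:, 0]
    return time, temperature

# Wrap the model and declare uncertain parameters as chaospy distributions.
model = un.Model(run=cooling_model, labels=["Time (min)", "Temperature (C)"])
parameters = {"kappa": cp.Uniform(0.025, 0.075),
              "T_env": cp.Uniform(15, 25)}

# Polynomial-chaos-based uncertainty quantification and sensitivity analysis.
UQ = un.UncertaintyQuantification(model=model, parameters=parameters)
data = UQ.quantify()
```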


2008 · Vol 20 (4) · pp. 1065-1090
Author(s): Wenlian Lu, Tianping Chen

We use the concept of the Filippov solution to study the dynamics of a class of delayed dynamical systems with discontinuous right-hand side, which contains the widely studied delayed neural network models with almost periodic self-inhibitions, interconnection weights, and external inputs. We prove that diagonal-dominant conditions can guarantee the existence and uniqueness of an almost periodic solution, as well as its global exponential stability. As special cases, we derive a series of results on the dynamics of delayed dynamical systems with discontinuous activations and periodic or constant coefficients, respectively. From the proof of the existence and uniqueness of the solution, we prove that the solution of a delayed dynamical system with high-slope activations approximates the Filippov solution of the dynamical system with discontinuous activations.
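The high-slope approximation result can be visualized numerically (a minimal sketch, not the paper's general system; the scalar network, weights, input, and delay below are illustrative assumptions). A discontinuous activation sign(x) is replaced by the smooth surrogate tanh(k*x), and as the slope k grows the trajectory approaches the Filippov solution:

```python
import numpy as np

# Scalar delayed network  x'(t) = -x(t) + a*f(x(t - tau)) + I,
# with f(x) = tanh(k*x) as a high-slope stand-in for sign(x).
a, I, tau = -1.5, 0.8, 1.0     # hypothetical weight, input, and delay
dt, T = 0.001, 20.0
n = int(T / dt)
d = int(tau / dt)              # delay expressed in time steps

def simulate(k):
    x = np.zeros(n + d)
    x[:d] = 0.1                # constant initial history on [-tau, 0]
    for t in range(d, n + d - 1):
        f_delayed = np.tanh(k * x[t - d])          # high-slope activation
        x[t + 1] = x[t] + dt * (-x[t] + a * f_delayed + I)  # Euler step
    return x[d:]

# Increasing slopes: trajectories settle toward a common (Filippov) limit.
for k in (1, 10, 1000):
    print(f"slope k={k:5d}: x(T) = {simulate(k)[-1]: .4f}")
```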

