computable model
Recently Published Documents

TOTAL DOCUMENTS: 40 (five years: 11)
H-INDEX: 6 (five years: 2)

2022 · Vol 32 (1) · pp. 1-27
Author(s): Damian Vicino, Gabriel A. Wainer, Olivier Dalle

Uncertainty propagation methods are well established in modeling and simulation formalisms such as differential equations. Nevertheless, until now there have been no such methods for Discrete-Event Dynamic Systems. Uncertainty-Aware Discrete-Event System Specification (UA-DEVS) is a formalism for modeling Discrete-Event Dynamic Systems that includes uncertainty quantification in messages, states, and event times. UA-DEVS models provide a theoretical framework to describe the models' uncertainty and their properties. Because UA-DEVS models can include continuous variables and non-computable functions, their simulation may be non-computable. For this reason, we also introduce Interval-Approximated Discrete-Event System Specification (IA-DEVS), a formalism that approximates UA-DEVS models using a set of order and bounding functions to obtain a computable model. The computable approximation produces a tree containing all trajectories that the original model can traverse, together with some erroneous trajectories introduced by the approximation process. We also introduce abstract simulation algorithms for IA-DEVS and present a case study of a UA-DEVS model, its IA-DEVS approximation, and its simulation results using the algorithms defined.
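To make the idea of interval bounding concrete, here is a minimal Python sketch, entirely hypothetical and not the paper's algorithms: an uncertain time advance is represented as a closed interval, and a toy DEVS-like atomic model accumulates interval-bounded event times.

```python
# Minimal sketch (not the paper's algorithms): approximating an uncertain
# event time by a closed interval and advancing a toy DEVS-like atomic model.
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    """Closed interval [lo, hi] bounding an uncertain quantity."""
    lo: float
    hi: float

    def __add__(self, other: "Interval") -> "Interval":
        # Interval addition propagates the bounds of both operands.
        return Interval(self.lo + other.lo, self.hi + other.hi)

@dataclass
class ToyAtomicModel:
    """A DEVS-like atomic model whose time advance is only known up to an interval."""
    state: int = 0

    def time_advance(self) -> Interval:
        # Hypothetical uncertainty: the next internal event occurs
        # between 1.0 and 1.2 time units from now.
        return Interval(1.0, 1.2)

    def internal_transition(self) -> None:
        self.state += 1

def simulate(model: ToyAtomicModel, horizon: float) -> list[Interval]:
    """Enumerate interval-bounded event times whose earliest bound is within `horizon`."""
    now = Interval(0.0, 0.0)
    schedule = []
    while now.lo + model.time_advance().lo <= horizon:
        now = now + model.time_advance()
        model.internal_transition()
        schedule.append(now)
    return schedule

if __name__ == "__main__":
    for k, t in enumerate(simulate(ToyAtomicModel(), horizon=5.0), start=1):
        print(f"event {k}: time in [{t.lo:.2f}, {t.hi:.2f}]")
```

The widening bounds after each transition hint at why such an approximation yields a tree of possible trajectories rather than a single trajectory.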


2021 · Vol 17 (6) · pp. e1008981
Author(s): Yaniv Morgenstern, Frieder Hartmann, Filipp Schmidt, Henning Tiedemann, Eugen Prokott, ...

Shape is a defining feature of objects, and human observers can effortlessly compare shapes to determine how similar they are. Yet, to date, no image-computable model can predict how visually similar or different shapes appear. Such a model would be an invaluable tool for neuroscientists and could provide insights into computations underlying human shape perception. To address this need, we developed a model (‘ShapeComp’), based on over 100 shape features (e.g., area, compactness, Fourier descriptors). When trained to capture the variance in a database of >25,000 animal silhouettes, ShapeComp accurately predicts human shape similarity judgments between pairs of shapes without fitting any parameters to human data. To test the model, we created carefully selected arrays of complex novel shapes using a Generative Adversarial Network trained on the animal silhouettes, which we presented to observers in a wide range of tasks. Our findings show that incorporating multiple ShapeComp dimensions facilitates the prediction of human shape similarity across a small number of shapes, and also captures much of the variance in the multiple arrangements of many shapes. ShapeComp outperforms both conventional pixel-based metrics and state-of-the-art convolutional neural networks, and can also be used to generate perceptually uniform stimulus sets, making it a powerful tool for investigating shape and object representations in the human brain.
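As a rough illustration of the kind of feature-based comparison the abstract describes, and not the actual ShapeComp implementation, the following sketch computes a small, hypothetical feature vector (area, compactness, a few Fourier descriptor magnitudes) for a closed contour and compares two shapes by Euclidean distance in that feature space.

```python
# Illustrative sketch only (not the ShapeComp implementation): a tiny feature
# vector of the kind the abstract mentions, computed from a closed 2D contour.
import numpy as np

def shoelace_area(contour: np.ndarray) -> float:
    """Area enclosed by an (N, 2) closed contour via the shoelace formula."""
    x, y = contour[:, 0], contour[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

def perimeter(contour: np.ndarray) -> float:
    diffs = np.diff(np.vstack([contour, contour[:1]]), axis=0)
    return float(np.linalg.norm(diffs, axis=1).sum())

def fourier_descriptors(contour: np.ndarray, n: int = 8) -> np.ndarray:
    """Magnitudes of the first n Fourier coefficients of the complex contour,
    normalized by the first harmonic for scale invariance."""
    z = contour[:, 0] + 1j * contour[:, 1]
    coeffs = np.fft.fft(z - z.mean())
    mags = np.abs(coeffs[1:n + 1])
    return mags / (mags[0] + 1e-12)

def feature_vector(contour: np.ndarray) -> np.ndarray:
    a, p = shoelace_area(contour), perimeter(contour)
    compactness = 4 * np.pi * a / (p ** 2)  # 1.0 for a circle, smaller otherwise
    return np.concatenate([[a, compactness], fourier_descriptors(contour)])

def shape_distance(c1: np.ndarray, c2: np.ndarray) -> float:
    """Euclidean distance in feature space as a crude similarity proxy."""
    return float(np.linalg.norm(feature_vector(c1) - feature_vector(c2)))

if __name__ == "__main__":
    t = np.linspace(0, 2 * np.pi, 256, endpoint=False)
    circle = np.c_[np.cos(t), np.sin(t)]
    blob = np.c_[(1 + 0.3 * np.cos(3 * t)) * np.cos(t),
                 (1 + 0.3 * np.cos(3 * t)) * np.sin(t)]
    print(f"circle vs. blob distance: {shape_distance(circle, blob):.3f}")
```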


2021
Author(s): Michael Jigo, David J. Heeger, Marisa Carrasco

Attention can facilitate or impair texture segmentation, altering whether objects are isolated from their surroundings in visual scenes. We simultaneously explain several empirical phenomena of texture segmentation and its attentional modulation with a single image-computable model. At the model’s core, segmentation relies on the interaction between sensory processing and attention, with different operating regimes for involuntary and voluntary attention systems. Model comparisons were used to identify computations critical for texture segmentation and attentional modulation. The model reproduced (i) the central performance drop, which is the parafoveal advantage for segmentation over the fovea, (ii) the peripheral improvements and central impairments induced by involuntary attention, and (iii) the uniform improvements across eccentricity produced by voluntary attention. The proposed model reveals distinct functional roles for involuntary and voluntary attention and provides a generalizable quantitative framework for predicting the perceptual impact of attention across the visual field.
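The following toy sketch is a schematic illustration of the qualitative pattern the abstract reports, not the authors' image-computable model: a baseline sensitivity curve that peaks parafoveally (the central performance drop), a roughly uniform hypothetical gain for voluntary attention, and a hypothetical involuntary-attention gain that helps the periphery while impairing the fovea. All curves and parameters are made up for illustration.

```python
# Conceptual toy only (not the authors' model): segmentation performance across
# eccentricity as a baseline sensitivity curve scaled by two hypothetical
# attentional gain profiles.
import numpy as np

ecc = np.linspace(0, 20, 81)          # eccentricity in degrees (assumed range)

# Baseline sensitivity peaking parafoveally: the "central performance drop".
baseline = np.exp(-((ecc - 6.0) ** 2) / (2 * 4.0 ** 2))

# Voluntary attention: a roughly uniform multiplicative gain at all eccentricities.
voluntary = baseline * 1.3

# Involuntary attention: a hypothetical gain that helps in the periphery but
# reduces sensitivity near the fovea (crossover around ~4 deg).
involuntary = baseline * (1.0 + 0.4 * np.tanh((ecc - 4.0) / 3.0))

for label, curve in [("neutral", baseline),
                     ("voluntary", voluntary),
                     ("involuntary", involuntary)]:
    print(f"{label:11s} peak at {ecc[np.argmax(curve)]:4.1f} deg, "
          f"foveal value {curve[0]:.2f}")
```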


Author(s): Yaniv Morgenstern, Frieder Hartmann, Filipp Schmidt, Henning Tiedemann, Eugen Prokott, ...

Shape is a defining feature of objects. Yet, no image-computable model accurately predicts how similar or different shapes appear to human observers. To address this, we developed a model (‘ShapeComp’), based on over 100 shape features (e.g., area, compactness, Fourier descriptors). When trained to capture the variance in a database of >25,000 animal silhouettes, ShapeComp predicts human shape similarity judgments almost perfectly (r² > 0.99) without fitting any parameters to human data. To test the model, we created carefully selected arrays of complex novel shapes using a Generative Adversarial Network trained on the animal silhouettes, which we presented to observers in a wide range of tasks. Our findings show that human shape perception is inherently multidimensional and optimized for comparing natural shapes. ShapeComp outperforms conventional metrics, and can also be used to generate perceptually uniform stimulus sets, making it a powerful tool for investigating shape and object representations in the human brain.
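One way to read "perceptually uniform stimulus sets" is as stimuli spread out evenly in the model's similarity space. The sketch below is a hypothetical illustration rather than the paper's procedure: it uses greedy farthest-point sampling over a feature matrix to pick such a subset.

```python
# Hypothetical illustration (not the paper's procedure): selecting a subset of
# stimuli that is roughly evenly spread in a perceptual feature space, using
# greedy farthest-point sampling on pairwise Euclidean distances.
import numpy as np

def farthest_point_subset(features: np.ndarray, k: int, seed: int = 0) -> list[int]:
    """Pick k row indices from `features` so each new pick is as far as
    possible from all previously picked items."""
    rng = np.random.default_rng(seed)
    chosen = [int(rng.integers(len(features)))]
    dists = np.linalg.norm(features - features[chosen[0]], axis=1)
    for _ in range(k - 1):
        nxt = int(np.argmax(dists))
        chosen.append(nxt)
        dists = np.minimum(dists, np.linalg.norm(features - features[nxt], axis=1))
    return chosen

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    candidate_features = rng.normal(size=(500, 20))   # e.g. 500 shapes x 20 features
    subset = farthest_point_subset(candidate_features, k=8)
    print("selected stimulus indices:", subset)
```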


eLife · 2019 · Vol 8
Author(s): Dora Hermes, Natalia Petridou, Kendrick N Kay, Jonathan Winawer

Gamma oscillations in visual cortex have been hypothesized to be critical for perception, cognition, and information transfer. However, observations of these oscillations in visual cortex vary widely; some studies report little to no stimulus-induced narrowband gamma oscillations, others report oscillations for only some stimuli, and yet others report large oscillations for most stimuli. To better understand this signal, we developed a model that predicts gamma responses for arbitrary images and validated this model on electrocorticography (ECoG) data from human visual cortex. The model computes variance across the outputs of spatially pooled orientation channels, and accurately predicts gamma amplitude across 86 images. Gamma responses were large for a small subset of stimuli, differing dramatically from fMRI and ECoG broadband (non-oscillatory) responses. We propose that gamma oscillations in visual cortex serve as a biomarker of gain control rather than being a fundamental mechanism for communicating visual information.
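Here is a minimal sketch of the kind of computation the abstract describes, with made-up filter parameters and no claim to match the published model: energy in a handful of orientation channels is pooled over space, and the variance across those pooled outputs serves as the hypothetical gamma-amplitude predictor.

```python
# Rough sketch in the spirit of the abstract (not the published model or its
# parameters): pool the energy of a few orientation channels over space, then
# take the variance across channels as a gamma-amplitude predictor.
import numpy as np
from scipy.signal import fftconvolve

def gabor(theta: float, size: int = 21, sf: float = 0.15, sigma: float = 4.0) -> np.ndarray:
    """A simple odd-symmetric Gabor filter at orientation `theta` (radians)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    env = np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))
    return env * np.sin(2 * np.pi * sf * xr)

def orientation_variance(image: np.ndarray, n_orient: int = 8) -> float:
    """Variance across spatially pooled orientation-channel energies."""
    pooled = []
    for i in range(n_orient):
        resp = fftconvolve(image, gabor(np.pi * i / n_orient), mode="same")
        pooled.append(np.mean(resp ** 2))      # squared response pooled over space
    return float(np.var(pooled))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    noise = rng.normal(size=(128, 128))                   # broadband noise image
    yy, xx = np.mgrid[0:128, 0:128]
    grating = np.sin(2 * np.pi * 0.15 * xx)               # single-orientation grating
    # A grating drives one channel much more than the others, so the variance
    # across channels (the hypothetical gamma predictor) is far larger.
    print("noise  :", orientation_variance(noise))
    print("grating:", orientation_variance(grating))
```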


2019
Author(s): Dora Hermes, Natalia Petridou, Kendrick N Kay, Jonathan Winawer

2019 · Vol 19 (10) · pp. 37c
Author(s): Yaniv Morgenstern, Filipp Schmidt, Frieder Hartmann, Henning Tiedemann, Eugen Prokott, ...

2019
Author(s): Dora Hermes, Natalia Petridou, Kendrick Kay, Jonathan Winawer

Gamma oscillations in visual cortex have been hypothesized to be critical for perception, cognition, and information transfer. However, observations of these oscillations in visual cortex vary widely; some studies report little to no stimulus-induced narrowband gamma oscillations, others report oscillations for only some stimuli, and yet others report large oscillations for most stimuli. To reconcile these findings and better understand this signal, we developed a model that predicts gamma responses for arbitrary images and validated this model on electrocorticography (ECoG) data from human visual cortex. The model computes variance across the outputs of spatially pooled orientation channels, and accurately predicts gamma amplitude across 86 images. Gamma responses were large for a small subset of stimuli, differing dramatically from fMRI and ECoG broadband (non-oscillatory) responses. We suggest that gamma oscillations in visual cortex serve as a biomarker of gain control rather than being a fundamental mechanism for communicating visual information.

