High-performing computational models of visual cortex are marked by high intrinsic dimensionality

2021 ◽  
Vol 21 (9) ◽  
pp. 2662
Author(s):  
Eric Elmoznino ◽  
Michael Bonner

2021 ◽  
Vol 7 (8) ◽  
pp. eabe9375
Author(s):  
J. J. Muldoon ◽  
V. Kandula ◽  
M. Hong ◽  
P. S. Donahue ◽  
J. D. Boucher ◽  
...  

Genetically engineering cells to perform customizable functions is an emerging frontier with numerous technological and translational applications. However, it remains challenging to systematically engineer mammalian cells to execute complex functions. To address this need, we developed a method enabling accurate genetic program design using high-performing genetic parts and predictive computational models. We built multifunctional proteins integrating both transcriptional and posttranslational control, validated models for describing these mechanisms, implemented digital and analog processing, and effectively linked genetic circuits with sensors for multi-input evaluations. The functional modularity and compositional versatility of these parts enable one to satisfy a given design objective via multiple synonymous programs. Our approach empowers bioengineers to predictively design mammalian cellular functions that perform as expected even at high levels of biological complexity.
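The transcriptional layer of such genetic programs is commonly modeled with Hill kinetics. As a minimal sketch (a generic textbook form with illustrative parameters, not the authors' specific model), high Hill cooperativity yields a switch-like "digital" dose-response, while low cooperativity yields a graded "analog" one:

```python
import numpy as np

def hill_output(inducer, k_half=1.0, n=2.0, basal=0.02, v_max=1.0):
    """Steady-state output of a Hill-type transcriptional unit; a generic
    textbook form, not the authors' specific model."""
    x = inducer ** n
    return basal + v_max * x / (k_half ** n + x)

dose = np.logspace(-2, 2, 9)           # inducer concentrations (a.u.)
analog = hill_output(dose, n=1.0)      # low cooperativity: graded response
digital = hill_output(dose, n=4.0)     # high cooperativity: switch-like

# Higher cooperativity sharpens the transition around k_half, approximating
# digital logic; n = 1 gives an analog, graded dose-response.
print(np.round(digital, 3))
```

Both curves cross half-maximal output at the same `k_half`; only the steepness of the transition differs, which is one way a single part family can serve both digital and analog processing.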


2015 ◽  
Vol 15 (12) ◽  
pp. 1001
Author(s):  
Catherine Olsson ◽  
Kendrick Kay ◽  
Jonathan Winawer

2021 ◽  
Author(s):  
Zedong Bi

According to analysis-by-synthesis theories of perception, the primary visual cortex (V1) reconstructs visual stimuli through the top-down pathway, and higher-order cortex reconstructs V1 activity. Experiments have also found that neural representations are generated in a top-down cascade during visual imagination. What code does V1 provide for higher-order cortex to reconstruct or simulate so as to improve perception or imaginative creativity? What unsupervised learning principles shape V1 for reconstructing stimuli, such that the eigenspectrum of V1 activity follows a power law with an exponent close to 1? Using computational models, we reveal that reconstructing the activities of V1 complex cells enables higher-order cortex to form representations that vary smoothly with shape morphing of stimuli, improving perception and creativity. The power-law eigenspectrum with an exponent close to 1 results from the constraints of sparseness and temporal slowness when V1 reconstructs stimuli, at a sparseness strength that best whitens the V1 code and makes the exponent most insensitive to the slowness strength. Our results provide fresh insights into V1 computation.
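The eigenspectrum exponent discussed here is typically estimated by a log-log fit of covariance eigenvalues against their rank. A minimal sketch on synthetic data (illustrative sizes and values, not the paper's method or data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "V1 activity" whose covariance eigenvalues follow a power law
# lambda_n ~ n^(-alpha) with alpha = 1 (illustrative, not real recordings).
n_neurons, n_stimuli, alpha_true = 200, 1000, 1.0
eigvals = np.arange(1, n_neurons + 1) ** (-alpha_true)
basis, _ = np.linalg.qr(rng.standard_normal((n_neurons, n_neurons)))
activity = basis @ np.diag(np.sqrt(eigvals)) @ rng.standard_normal((n_neurons, n_stimuli))

# Eigenspectrum of the sample covariance, sorted in descending order.
spectrum = np.sort(np.linalg.eigvalsh(np.cov(activity)))[::-1]

# Estimate the exponent by a least-squares fit on log-log axes over a
# mid-range of ranks (edge ranks are most distorted by sampling noise).
ranks = np.arange(1, n_neurons + 1)
slope, _ = np.polyfit(np.log(ranks[4:100]), np.log(spectrum[4:100]), 1)
alpha_hat = -slope
print(f"estimated exponent: {alpha_hat:.2f}")  # expected near 1
```

The recovered exponent approximates the one built into the synthetic covariance; on real data the same fit is applied to cross-validated eigenvalue estimates.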


2021 ◽  
pp. 1-36
Author(s):  
David Berga ◽  
Xavier Otazu

Lateral connections in the primary visual cortex (V1) have long been hypothesized to be responsible for several visual processing mechanisms such as brightness induction, chromatic induction, visual discomfort, and bottom-up visual attention (also named saliency). Many computational models have been developed to independently predict these and other visual processes, but no computational model has been able to reproduce all of them simultaneously. In this work, we show that a biologically plausible computational model of lateral interactions in V1 is able to simultaneously predict saliency and all the aforementioned visual processes. Our model's architecture (NSWAM) is based on Penacchio's neurodynamic model of lateral connections in V1. It is defined as a network of firing-rate neurons sensitive to visual features such as brightness, color, orientation, and scale. We tested NSWAM's saliency predictions using images from several eye-tracking data sets. We show that the accuracy of predictions obtained by our architecture, using shuffled metrics, is similar to that of other state-of-the-art computational methods, particularly with synthetic images (CAT2000-Pattern and SID4VAM) that mainly contain low-level features. Moreover, we outperform other biologically inspired saliency models that are specifically designed to reproduce saliency alone. We show that our biologically plausible model of lateral connections can simultaneously explain the different visual processes present in V1 (without applying any type of training or optimization, and keeping the same parameterization for all visual processes). This can be useful for defining a unified architecture of the primary visual cortex.
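A minimal sketch of the kind of firing-rate dynamics such models build on (a generic difference-of-Gaussians lateral kernel with illustrative parameters, not NSWAM's actual architecture or parameterization) shows a saliency-like border enhancement arising purely from lateral interactions:

```python
import numpy as np

def firing_rate_step(r, inp, W, dt=0.1, tau=1.0):
    """One Euler step of tau * dr/dt = -r + relu(inp + W @ r)."""
    return r + (dt / tau) * (-r + np.maximum(inp + W @ r, 0.0))

# 1-D "retina": a bright bar (units 20-29) on a dim background.
n = 50
inp = np.full(n, 0.2)
inp[20:30] = 1.0

# Difference-of-Gaussians lateral kernel (illustrative values, chosen
# net-inhibitory so the dynamics settle to a stable fixed point).
d = np.abs(np.arange(n)[:, None] - np.arange(n)[None, :])
W = 0.3 * np.exp(-d**2 / (2 * 1.5**2)) - 0.4 * np.exp(-d**2 / (2 * 6.0**2))
np.fill_diagonal(W, 0.0)

r = np.zeros(n)
for _ in range(500):
    r = firing_rate_step(r, inp, W)

# Lateral inhibition enhances the bar's borders relative to its interior,
# a simple saliency-like effect.
print(r[20] > r[25] and r[29] > r[25])
```

Interior units of the bar receive inhibition from both sides, while border units are inhibited mainly from one side, so the borders end up with higher rates; the full model applies this kind of interaction across brightness, color, orientation, and scale channels.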


2018 ◽  
Author(s):  
Kshitij Dwivedi ◽  
Gemma Roig

Abstract
Computational models such as deep neural networks (DNNs) trained for classification are often used to explain responses of the visual cortex. However, not all areas of the visual cortex are involved in object/scene classification. For instance, the scene-selective occipital place area (OPA) plays a role in mapping navigational affordances. Therefore, to explain the responses of such a task-specific brain area, we investigate whether a model that performs a related task can serve as a better computational model than a model that performs an unrelated task. We found that a DNN trained on a task (scene parsing) related to the function (navigational affordances) of a brain region (OPA) explains its responses better than a DNN trained on a task (scene classification) that is not explicitly related. In a subsequent analysis, we found that the DNNs that showed high correlation with a particular brain region were trained on a task consistent with the functions of that brain region reported in previous neuroimaging studies. Our results demonstrate that the task is paramount for selecting a computational model of a brain area. Further, explaining the responses of a brain area using a diverse set of tasks has the potential to shed light on its functions.
Author summary
Areas in the human visual cortex are specialized for specific behaviors, either through supervision and interaction with the world or through evolution. A standard way to gain insight into the function of these brain regions is to design experiments related to a particular behavior and localize the regions showing significant relative activity corresponding to that behavior. In this work, we investigate whether we can infer the function of a brain area in the visual cortex using computational vision models. From our results, we find that explaining the responses of a brain region using DNNs trained on a diverse set of possible vision tasks can help us gain insight into its function. The consistency of our DNN results with previous neuroimaging studies suggests that a brain region may be specialized for behavior similar to the tasks for which DNNs showed a high correlation with its responses.
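One standard way to compare task-trained model features with brain responses is representational similarity analysis (RSA). A hedged sketch on synthetic data (the RDM-correlation setup is a common technique in this literature; the paper's actual analysis pipeline may differ):

```python
import numpy as np

rng = np.random.default_rng(1)

def rdm(patterns):
    """Representational dissimilarity matrix over stimuli (rows):
    1 - Pearson correlation between response patterns."""
    return 1.0 - np.corrcoef(patterns)

def rsa(feats, responses):
    """Pearson correlation between the upper triangles of two RDMs
    (rank correlation is the more common choice in the literature)."""
    iu = np.triu_indices(feats.shape[0], k=1)
    return np.corrcoef(rdm(feats)[iu], rdm(responses)[iu])[0, 1]

# Toy setup: the "brain" region reads out task A features through a noisy
# linear map, while task B features are unrelated to it.
n_stim = 60
task_a = rng.standard_normal((n_stim, 40))
task_b = rng.standard_normal((n_stim, 40))
brain = task_a @ rng.standard_normal((40, 25)) + 0.5 * rng.standard_normal((n_stim, 25))

# Features from the related task should explain the region better.
print(rsa(task_a, brain) > rsa(task_b, brain))
```

Repeating this comparison across DNNs trained on many candidate tasks ranks the tasks by how well their representations match a region's responses, which is the logic behind using task relatedness to probe a region's function.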


2020 ◽  
Author(s):  
Joseph J. Muldoon ◽  
Viswajit Kandula ◽  
Mihe Hong ◽  
Patrick S. Donahue ◽  
Jonathan D. Boucher ◽  
...  

Abstract
Genetically engineering cells to perform customizable functions is an emerging frontier with numerous technological and translational applications. However, it remains challenging to systematically engineer mammalian cells to execute complex functions. To address this need, we developed a method enabling accurate genetic program design using high-performing genetic parts and predictive computational models. We built multi-functional proteins integrating both transcriptional and post-translational control, validated models for describing these mechanisms, implemented digital and analog processing, and effectively linked genetic circuits with sensors for multi-input evaluations. The functional modularity and compositional versatility of these parts enable one to satisfy a given design objective via multiple synonymous programs. Our approach empowers bioengineers to predictively design mammalian cellular functions that perform as expected even at high levels of biological complexity.

