A neural correlate of image memorability

2019 ◽  
Author(s):  
Andrew Jaegle ◽  
Vahid Mehrpour ◽  
Yalda Mohsenzadeh ◽  
Travis Meyer ◽  
Aude Oliva ◽  
...  

Some images are easy to remember while others are easily forgotten. While variation in image memorability is consistent across individuals, we lack a full account of its neural correlates. By analyzing data collected from inferotemporal cortex (IT) as monkeys performed a visual memory task, we demonstrate that a simple property of the visual encoding of an image, its population response magnitude, is strongly correlated with its memorability. These results establish a novel behavioral role for the magnitude of the IT response, which lies largely orthogonal to the coding scheme that IT uses to represent object identity. To investigate the origin of IT memorability modulation, we also probed convolutional neural network models trained to categorize objects. We found brain-analogous correlates of memorability that grew in strength across the hierarchy of these networks, suggesting that this memorability correlate is likely to arise from the optimizations required for visual as opposed to mnemonic processing.

eLife ◽  
2019 ◽  
Vol 8 ◽  
Author(s):  
Andrew Jaegle ◽  
Vahid Mehrpour ◽  
Yalda Mohsenzadeh ◽  
Travis Meyer ◽  
Aude Oliva ◽  
...  

Most accounts of image and object encoding in inferotemporal cortex (IT) focus on the distinct patterns of spikes that different images evoke across the IT population. By analyzing data collected from IT as monkeys performed a visual memory task, we demonstrate that variation in a complementary coding scheme, the magnitude of the population response, can largely account for how well images will be remembered. To investigate the origin of IT image memorability modulation, we probed convolutional neural network models trained to categorize objects. We found that, like the brain, different natural images evoked different magnitude responses from these networks, and in higher layers, larger magnitude responses were correlated with the images that humans and monkeys find most memorable. Together, these results suggest that variation in IT population response magnitude is a natural consequence of the optimizations required for visual processing, and that this variation has consequences for visual memory.
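To make the measurement concrete, here is a minimal Python sketch of the kind of analysis the abstract describes: computing a per-image population response magnitude (the L2 norm of a layer's activations) in a pretrained CNN and correlating it with memorability scores, layer by layer. The model choice (torchvision's AlexNet) and the placeholder images and scores are illustrative assumptions, not the authors' exact pipeline.

```python
# Sketch: correlate per-image CNN population response magnitude with
# memorability, layer by layer. Model and data are placeholder assumptions.
import numpy as np
import torch
import torchvision.models as models
from scipy.stats import spearmanr

model = models.alexnet(weights="DEFAULT").eval()

def layer_magnitudes(images):
    """Return {layer_index: per-image L2 norm of that layer's activations}."""
    mags, x = {}, images
    with torch.no_grad():
        for i, layer in enumerate(model.features):
            x = layer(x)
            # Population response magnitude: L2 norm over all units.
            mags[i] = x.flatten(1).norm(dim=1).numpy()
    return mags

# images: (N, 3, 224, 224) tensor; memorability: length-N scores (assumed given).
images = torch.rand(32, 3, 224, 224)   # placeholder batch
memorability = np.random.rand(32)      # placeholder memorability scores

for i, mag in layer_magnitudes(images).items():
    rho, _ = spearmanr(mag, memorability)
    print(f"layer {i}: Spearman rho = {rho:.2f}")
```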


2020 ◽  
Author(s):  
Rachel St. Clair ◽  
Michael Teti ◽  
Mirjana Pavlovic ◽  
William Hahn ◽  
Elan Barenholtz

Computer-aided rational vaccine design (RVD) and synthetic pharmacology are rapidly developing fields that leverage existing datasets to develop compounds of interest. Computational proteomics uses algorithms and models to probe proteins for functional prediction. A potentially strong target for such an approach is autoimmune antibodies, which result from broken tolerance in the immune system: unable to distinguish “self” from “non-self”, it attacks its own structures (mainly proteins and DNA). Information on the structure, function, and pathogenicity of autoantibodies may assist in engineering RVD against autoimmune diseases. Current computational approaches exploit large datasets curated with extensive domain knowledge, most demand substantial computational resources, and they have been applied only indirectly to problems of interest in DNA, RNA, and monomer protein binding. Here, we present a novel method for discovering potential binding sites. We trained long short-term memory (LSTM) models directly on FASTA primary sequences to predict protein binding in DNA-binding hydrolytic antibodies (abzymes), and we applied convolutional neural network (CNN) models to the same dataset. While the CNN model outperformed the LSTM on the primary task of binding prediction, analysis of the internal representations of both models showed that the LSTM highlighted sub-sequences more strongly correlated with sites known to be involved in binding. These results demonstrate that analysis of the internal processes of recurrent neural network models may serve as a powerful tool for primary sequence analysis.
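For illustration, a hedged Keras sketch of the kind of LSTM binding classifier the abstract describes, trained on integer-encoded primary sequences; the vocabulary size, sequence length, layer sizes, and placeholder data are assumptions, not the authors' code.

```python
# Sketch: LSTM classifier over integer-encoded primary sequences.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

VOCAB = 21      # 20 amino acids + a padding token (assumed encoding)
MAX_LEN = 200   # assumed maximum sequence length

model = keras.Sequential([
    layers.Embedding(input_dim=VOCAB, output_dim=32, mask_zero=True),
    layers.LSTM(64),                       # internal states can later be probed
    layers.Dense(1, activation="sigmoid")  # binds / does not bind
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# X: (n_seqs, MAX_LEN) integer-encoded FASTA sequences; y: binary labels.
X = np.random.randint(0, VOCAB, size=(100, MAX_LEN))  # placeholder data
y = np.random.randint(0, 2, size=(100,))
model.fit(X, y, epochs=2, batch_size=16, verbose=0)
```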


2019 ◽  
Author(s):  
Eli Pollock ◽  
Mehrdad Jazayeri

Many cognitive processes involve transformations of distributed representations in neural populations, creating a need for population-level models. Recurrent neural network models fulfill this need, but there are many open questions about how their connectivity gives rise to dynamics that solve a task. Here, we present a method for finding the connectivity of networks for which the dynamics are specified to solve a task in an interpretable way. We apply our method to a working memory task by synthesizing a network that implements a drift-diffusion process over a ring-shaped manifold. We also use our method to demonstrate how inputs can be used to control network dynamics for cognitive flexibility and explore the relationship between representation geometry and network capacity. Our work fits within the broader context of understanding neural computations as dynamics over relatively low-dimensional manifolds formed by correlated patterns of neurons.

Author Summary: Neurons in the brain form intricate networks that can produce a vast array of activity patterns. To support goal-directed behavior, the brain must adjust the connections between neurons so that network dynamics can perform desirable computations on behaviorally relevant variables. A fundamental goal in computational neuroscience is to provide an understanding of how network connectivity aligns the dynamics in the brain to the dynamics needed to track those variables. Here, we develop a mathematical framework for creating recurrent neural network models that can address this problem. Specifically, we derive a set of linear equations that constrain the connectivity to afford a direct mapping of task-relevant dynamics onto network activity. We demonstrate the utility of this technique by creating and analyzing a set of network models that can perform a simple working memory task. We then extend the approach to show how additional constraints can furnish networks whose dynamics are controlled flexibly by external inputs. Finally, we exploit the flexibility of this technique to explore the robustness and capacity limitations of recurrent networks. This network synthesis method provides a powerful means for generating and validating hypotheses about how task-relevant computations can emerge from network dynamics.
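As a toy illustration of the synthesis idea (our paraphrase, with stated assumptions), the sketch below embeds a ring of states into an N-neuron rate network dr/dt = -r + W·phi(r) and solves a linear least-squares problem for the connectivity W so that every sampled ring state is a fixed point; the paper's full method additionally specifies drift-diffusion dynamics and input control.

```python
# Sketch: synthesize connectivity W so that states on a ring manifold are
# fixed points of dr/dt = -r + W*phi(r). Tuning curves, nonlinearity, and
# sizes are illustrative assumptions.
import numpy as np

N, K = 100, 64  # neurons, sampled ring states
theta = np.linspace(0, 2 * np.pi, K, endpoint=False)
prefs = np.linspace(0, 2 * np.pi, N, endpoint=False)

# Desired activity on the ring: rectified cosine tuning curves, shape (N, K).
R = np.maximum(0, np.cos(theta[None, :] - prefs[:, None]))

phi = np.tanh  # assumed pointwise nonlinearity
# Fixed points of dr/dt = -r + W*phi(r) require W*phi(r_k) = r_k for all k;
# this is linear in W, so solve it by least squares.
W_T, *_ = np.linalg.lstsq(phi(R).T, R.T, rcond=None)
W = W_T.T

# Check: residual drift at the sampled states should be near zero.
print(np.abs(-R + W @ phi(R)).max())
```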


2015 ◽  
Author(s):  
Marc Halusic

Implicit motives are measured using a projective assessment, the Picture Story Exercise (PSE), which involves labor-intensive coding of participant-generated writing. The present research uses insights from previous attempts to automate coding, as well as advances in natural language processing and machine learning, to create a new method of automated coding for the achievement motive (NAch). In Part 1, I collected coded PSE sentences from implicit motive researchers. Two models were generated using multilayer perceptron neural networks to predict achievement motive imagery: one using the Linguistic Inquiry and Word Count (LIWC; Pennebaker, 2001) software, and one using a novel text processing system called Maximum Synset-to-Sentence Relatedness (MSSR). Part 2 sought to experimentally manipulate NAch and to produce two more neural network models similar to those of Part 1, except that these models predicted experimental condition. Further, human-generated NAch scores from the PSEs collected in this part were compared against computer-generated NAch scores produced by the Part 1 models, providing another test of the magnitude of the relation between human- and computer-generated NAch scores. Part 3 tested all four models' ability to predict achievement motive imagery in archival data collected by Ratliff (1979). Because these data were coded using a different NAch coding scheme and also included other variables theoretically related to NAch, these tests were used to search for evidence of convergent and predictive validity. Findings were promising for both models developed in Part 1, but further improvements will be necessary before they can replace human coders.
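A minimal sketch of the modeling step, assuming LIWC-style feature vectors and binary human codes as inputs; scikit-learn's MLPClassifier stands in for the multilayer perceptrons, and the dissertation's exact features and architecture are not reproduced here.

```python
# Sketch: multilayer perceptron predicting achievement imagery from
# LIWC-style sentence features. Feature count and labels are placeholders.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

# X: (n_sentences, n_liwc_features); y: 1 if a sentence was human-coded
# as containing achievement imagery, else 0.
X = np.random.rand(500, 80)             # placeholder LIWC features
y = np.random.randint(0, 2, size=500)   # placeholder human codes

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
clf.fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")
```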


2020 ◽  
Vol 5 ◽  
pp. 140-147 ◽  
Author(s):  
T.N. Aleksandrova ◽  
E.K. Ushakov ◽  
A.V. Orlova ◽  
...  

A series of neural network models used in the development of an aggregated digital twin of equipment as a cyber-physical system is presented. The twins of machining accuracy, chip formation, and tool wear are examined in detail. On their basis, systems for stabilizing the chip formation process during cutting and for diagnosing cutting tool wear are developed.

Keywords: cyber-physical system; neural network model of equipment; big data; digital twin of chip formation; digital twin of tool wear; digital twin of nanostructured coating choice
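As a rough illustration of one such twin, the sketch below fits a small feed-forward regressor mapping cutting parameters to predicted tool wear; the features, units, and architecture are assumptions, not the authors' models.

```python
# Sketch: feed-forward tool-wear model as one component of a digital twin.
# Inputs and targets are placeholder assumptions.
import numpy as np
from sklearn.neural_network import MLPRegressor

# Assumed features: cutting speed, feed rate, depth of cut, cutting time.
X = np.random.rand(200, 4)   # placeholder process data
y = np.random.rand(200)      # placeholder flank wear, mm

twin = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=1000, random_state=0)
twin.fit(X, y)
print(twin.predict(X[:3]))   # predicted wear for new cutting regimes
```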


Author(s):  
Ann-Sophie Barwich

How much does stimulus input shape perception? The common-sense view is that our perceptions are representations of objects and their features, and that the stimulus structures the perceptual object. The problem for this view is that perceptual biases are responsible for distortions and for the subjectivity of perceptual experience. Recent neuroscience increasingly studies these biases as constitutive factors of brain processes. In neural network models, the brain is said to cope with the plethora of sensory information by predicting stimulus regularities on the basis of previous experiences. Drawing on this development, this chapter analyses perceptions as processes. Looking at olfaction as a model system, it argues for the need to abandon a stimulus-centred perspective in which smells are thought of as stable percepts computationally linked to external objects such as odorous molecules. Perception is instead presented as a measure of changing signal ratios in an environment, informed by expectancy effects from top-down processes.


Energies ◽  
2021 ◽  
Vol 14 (14) ◽  
pp. 4242
Author(s):  
Fausto Valencia ◽  
Hugo Arcos ◽  
Franklin Quilumba

The purpose of this research is to evaluate artificial neural network models for predicting the stresses caused in a 400 MVA power transformer winding conductor by the circulation of fault currents. The models were compared on the behavior of their training, validation, and test errors. Different combinations of hyperparameters were analyzed by varying architectures, optimizers, and activation functions. The data for the process were created from finite element simulations performed in the FEMM software. The artificial neural network was designed using the Keras framework. A model with one hidden layer, the Adam optimizer, and the ReLU activation function proved best suited to the problem at hand. The final model's predictions were compared with the finite element method results, showing good agreement at a much shorter solution time.
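A minimal Keras sketch consistent with the abstract's stated choices (one hidden layer, ReLU activation, Adam optimizer); the input features, layer width, and placeholder data are assumptions, since the training data would come from the FEMM simulations.

```python
# Sketch: one-hidden-layer regression network predicting conductor stress.
# Features and sizes are placeholder assumptions.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    keras.Input(shape=(3,)),              # e.g. fault current, position, time (assumed)
    layers.Dense(32, activation="relu"),  # single hidden layer, as in the abstract
    layers.Dense(1)                       # predicted winding conductor stress
])
model.compile(optimizer="adam", loss="mse")

X = np.random.rand(1000, 3)   # placeholder FEMM-derived features
y = np.random.rand(1000)      # placeholder stress targets
model.fit(X, y, epochs=5, batch_size=32, verbose=0)
```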

