Recursive Processing of Nested Structures in Monkeys? Two Alternative Accounts

2021
Author(s):  
Yair Lakretz ◽  
Stanislas Dehaene

Ferrigno et al. [2020] introduced an ingenious task to investigate recursion in human and non-human primates. American adults, Tsimane adults, and 3-5-year-old children successfully performed the task. Macaque monkeys required additional training, but two out of three eventually showed good generalization and scored above many Tsimane and child participants. Moreover, when tested on sequences composed of new bracket signs, the monkeys still performed well. The authors thus concluded that recursive nesting is not unique to humans. Here, we dispute this claim by showing that at least two alternative interpretations remain tenable. We first examine the conclusion in light of recent findings on how modern artificial recurrent neural networks (RNNs) encode sequences. We show that although RNNs, like the monkeys, succeed on demanding generalization tasks such as that of Ferrigno et al., the underlying neural mechanisms are not recursive. Moreover, when the networks are tested on sequences with deeper center-embedded structures than those seen during training, they fail to generalize. We then discuss an additional interpretation of the results in light of a simple model of sequence memory.
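To make the depth-generalization point concrete, the sketch below generates center-embedded bracket sequences and splits them so that test sequences are strictly deeper than training ones. The bracket vocabulary, depths, and set sizes are illustrative assumptions, not the actual stimuli of Ferrigno et al. [2020] or the RNN experiments summarized above.

```python
import random

# Hypothetical bracket vocabulary; the stimuli used in the study differ.
BRACKETS = [("(", ")"), ("[", "]"), ("{", "}"), ("<", ">")]

def center_embedded(depth):
    """Build one center-embedded sequence of the given depth, e.g. depth 2 -> { ( ) }."""
    pairs = random.sample(BRACKETS, depth)
    opens = [o for o, _ in pairs]
    closes = [c for _, c in reversed(pairs)]
    return opens + closes

# Depth-generalization split: train on shallow embeddings, test on deeper ones.
train_set = [center_embedded(depth=2) for _ in range(1000)]
test_set = [center_embedded(depth=3) for _ in range(200)]

print(train_set[0], test_set[0])
```

A non-recursive learner can succeed on the training depth yet fail on the held-out deeper sequences, which is the pattern reported for the RNNs.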

2021
Author(s):  
Shiva Farashahi ◽  
Alireza Soltani

Abstract: Learning appropriate representations of the reward environment is extremely challenging in the real world, where there are many options to learn about and each option has many attributes or features. Despite the existence of alternative solutions to this challenge, the neural mechanisms underlying the emergence and adoption of value representations and learning strategies remain unknown. To address this, we measured learning and choice during a novel multi-dimensional probabilistic learning task in humans and trained recurrent neural networks (RNNs) to capture our experimental observations. We found that participants estimate stimulus-outcome associations by learning and combining estimates of reward probabilities associated with the informative feature, followed by those of informative conjunctions. By analyzing the representations, connectivity, and lesioning of the RNNs, we demonstrate that this mixed learning strategy relies on a distributed neural code and on distinct contributions of inhibitory and excitatory neurons. Together, our results reveal neural mechanisms underlying the emergence of complex learning strategies in naturalistic settings.
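The sketch below illustrates the kind of mixed feature-plus-conjunction learner described in the abstract: delta-rule estimates of reward probability are kept for individual features and for feature conjunctions, and choices combine the two. The stimulus dimensions, learning rate, and mixing weight are assumptions for illustration only, not the participants' or the RNNs' fitted strategy.

```python
import itertools
import random

# Illustrative stimulus dimensions; the task's actual dimensions differ.
COLORS, SHAPES = ["red", "blue"], ["circle", "square"]
ALPHA = 0.1  # learning rate (assumed)

# Reward-probability estimates for single features and for feature conjunctions.
feature_value = {f: 0.5 for f in COLORS + SHAPES}
conj_value = {c: 0.5 for c in itertools.product(COLORS, SHAPES)}

def estimate(color, shape, w_conj=0.5):
    """Mix feature-based and conjunction-based reward-probability estimates."""
    feat = 0.5 * (feature_value[color] + feature_value[shape])
    return (1 - w_conj) * feat + w_conj * conj_value[(color, shape)]

def update(color, shape, reward):
    """Delta-rule update of both sets of estimates after a binary reward."""
    for table, key in ((feature_value, color), (feature_value, shape),
                       (conj_value, (color, shape))):
        table[key] += ALPHA * (reward - table[key])

# Toy usage: the stimulus "red circle" is rewarded with probability 0.8.
for _ in range(200):
    update("red", "circle", int(random.random() < 0.8))
print(round(estimate("red", "circle"), 2))
```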


2020
Author(s):  
Dean Sumner ◽  
Jiazhen He ◽  
Amol Thakkar ◽  
Ola Engkvist ◽  
Esben Jannik Bjerrum

SMILES randomization, a form of data augmentation, has previously been shown to increase the performance of deep learning models compared to non-augmented baselines. Here, we propose a novel data augmentation method we call "Levenshtein augmentation", which considers local SMILES sub-sequence similarity between reactants and their respective products when creating training pairs. The performance of Levenshtein augmentation was tested using two state-of-the-art models: a transformer and a sequence-to-sequence recurrent neural network with attention. Levenshtein augmentation yielded an increase in performance over both non-augmented data and data augmented by conventional SMILES randomization when used to train the baseline models. Furthermore, Levenshtein augmentation seemingly results in what we define as attentional gain: an enhancement in the pattern-recognition capabilities of the underlying network with respect to molecular motifs.
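The abstract does not spell out the pairing procedure, but one plausible sketch of using string edit distance to select reactant-product training pairs is shown below: among several randomized SMILES of the same reactant, keep the one closest to the product string. The selection rule and the toy SMILES are assumptions for illustration, not the authors' published Levenshtein augmentation method.

```python
def levenshtein(a, b):
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def pick_training_pair(randomized_reactants, product_smiles):
    """From several randomized SMILES of one reactant, keep the string closest
    to the product SMILES (hypothetical selection rule)."""
    return min(randomized_reactants, key=lambda s: levenshtein(s, product_smiles))

# Toy usage: hand-written SMILES variants of ethanol paired with acetic acid.
candidates = ["CCO", "OCC", "C(O)C"]
print(pick_training_pair(candidates, "CC(=O)O"))
```

The intuition is that pairs whose strings already share local sub-sequences give the sequence model an easier alignment to learn, which may underlie the reported attentional gain.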


Author(s):  
Faisal Ladhak ◽  
Ankur Gandhe ◽  
Markus Dreyer ◽  
Lambert Mathias ◽  
Ariya Rastrow ◽  
...  
