Dissections of Larval, Pupal and Adult Butterfly Brains for Immunostaining and Molecular Analysis

2021 ◽  
Vol 4 (3) ◽  
pp. 53
Author(s):  
Yi Peng Toh ◽  
Emilie Dion ◽  
Antónia Monteiro

Butterflies possess impressive cognitive abilities, and investigations into the neural mechanisms underlying these abilities are increasingly being conducted. Exploring butterfly neurobiology may require the isolation of larval, pupal, and/or adult brains for further molecular and histological experiments. This procedure has been largely described in the fruit fly, but a detailed description of butterfly brain dissections is still lacking. Here, we provide a detailed written and video protocol for the removal of Bicyclus anynana adult, pupal, and larval brains. This species is gradually becoming a popular model because it uses a large set of sensory modalities, displays plastic and hormonally controlled courtship behaviour, and learns visual mate preference and olfactory preferences that can be passed on to its offspring. The extracted brain can be used for downstream analyses such as immunostaining and DNA or RNA extraction, and the procedure can easily be adapted to other lepidopteran species and life stages.

2001 ◽  
Vol 24 (5) ◽  
pp. 883-884 ◽  
Author(s):  
Paul Cisek ◽  
John F. Kalaska

A common code for integrating perceptions and actions was relevant for simple behavioral guidance well before the evolution of cognitive abilities. We review proposals that representation of to-be-produced events played important roles in early behavior, and evidence that the neural mechanisms supporting such rudimentary sensory predictions have been elaborated through evolution to support the cognitive codes addressed by TEC.


2020 ◽  
Vol 375 (1802) ◽  
pp. 20190467 ◽  
Author(s):  
Sara E. Miller ◽  
Michael J. Sheehan ◽  
H. Kern Reeve

Social interactions are mediated by recognition systems, meaning that the cognitive abilities or phenotypic diversity that facilitate recognition may be common targets of social selection. Recognition occurs when a receiver compares the phenotypes produced by a sender with a template. Coevolution between sender and receiver traits has been empirically reported in multiple species and sensory modalities, though the dynamics and relative exaggeration of traits from senders versus receivers have received little attention. Here, we present a coevolutionary dynamic model that examines the conditions under which senders and receivers should invest effort in facilitating individual recognition. The model predicts coevolution of sender and receiver traits, with the equilibrium investment dependent on the relative costs of signal production versus cognition. In order for recognition to evolve, initial sender and receiver trait values must be above a threshold, suggesting that recognition requires some degree of pre-existing diversity and cognitive abilities. The analysis of selection gradients demonstrates that the strength of selection on sender signals and receiver cognition is strongest when the trait values are furthest from the optima. The model provides new insights into the expected strength and dynamics of selection during the origin and elaboration of individual recognition, an important feature of social cognition in many taxa. This article is part of the theme issue ‘Signal detection theory in recognition systems: from evolving models to experimental tests’.
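The conditions described above can be illustrated with a minimal numerical sketch of coupled sender–receiver trait dynamics. The benefit function, the linear costs, and every parameter value below are assumptions chosen only to reproduce the qualitative behaviour reported in the abstract (a joint threshold for the origin of recognition and a cost-dependent equilibrium); this is not the authors' model.

```python
def recognition_benefit(s, r):
    """Assumed joint benefit of recognition: it requires both a distinctive
    sender signal (s) and receiver cognition (r), and saturates when both are large."""
    return (s * r) / (1.0 + s * r)

def simulate(s0, r0, cost_s=0.05, cost_r=0.10, lr=0.05, steps=5000, eps=1e-4):
    """Follow the selection gradients on sender (s) and receiver (r) traits,
    assuming fitness = benefit - linear trait cost."""
    s, r = s0, r0
    for _ in range(steps):
        # numerical selection gradients (partial derivatives of fitness)
        ds = (recognition_benefit(s + eps, r) - recognition_benefit(s - eps, r)) / (2 * eps) - cost_s
        dr = (recognition_benefit(s, r + eps) - recognition_benefit(s, r - eps)) / (2 * eps) - cost_r
        s = max(0.0, s + lr * ds)
        r = max(0.0, r + lr * dr)
    return s, r

# Below a joint threshold both traits decay to zero; above it they coevolve to an
# equilibrium whose position depends on the relative costs of signals vs cognition.
print(simulate(0.02, 0.02))   # recognition never gets off the ground
print(simulate(0.50, 0.50))   # coevolves to a stable, cost-dependent equilibrium
```

In this toy version, the receiver trait (with the higher assumed cost) settles at a lower equilibrium value than the sender trait, echoing the abstract's point that equilibrium investment depends on the relative costs of signal production versus cognition.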


Author(s):  
Rachel Sharkey ◽  
Thomas Nickl-Jockschat

There is a long-standing association between exceptional cognitive abilities of various sorts and neuropsychiatric illness, but historically it has largely been investigated in an exploratory, non-systematic way. One group in which this association has been studied with more rigor is individuals identified as twice exceptional, an educational term describing subjects who are both gifted and diagnosed with a neuropsychiatric disability. The term covers multiple conditions but is of particular interest in the study of autism spectrum disorder (ASD). Recent findings have led to the hypothesis that a certain degree of the neurobiology associated with autism might even be advantageous and lead to high giftedness, while becoming disadvantageous once a certain threshold is surpassed. In this model, the same neurobiological mechanisms confer an increasing advantage up to that threshold but become pathological past that point; twice-exceptional individuals would sit exactly at the inflection point, being highly gifted but also symptomatic. Here, we review how the existing neuroimaging literature on autism spectrum disorder can inform research on twice exceptionality specifically. We propose studying key neural networks robustly implicated in ASD to identify the neurobiology underlying twice exceptionality. A better understanding of the neural mechanisms of twice exceptionality should shed light on resilience and vulnerability to neurodevelopmental disorders and help to further support affected individuals.


2015 ◽  
Author(s):  
Abe Kazemzadeh ◽  
James Gibson ◽  
Panayiotis Georgiou ◽  
Sungbok Lee ◽  
Shrikanth Narayanan

We describe and experimentally validate a question-asking framework for machine-learned linguistic knowledge about human emotions. Using the Socratic method as a theoretical inspiration, we develop an experimental method and computational model for computers to learn subjective information about emotions by playing emotion twenty questions (EMO20Q), a game of twenty questions limited to words denoting emotions. Using human-human EMO20Q data we bootstrap a sequential Bayesian model that drives a generalized pushdown automaton-based dialog agent that further learns from 300 human-computer dialogs collected on Amazon Mechanical Turk. The human-human EMO20Q dialogs show the capability of humans to use a large, rich, subjective vocabulary of emotion words. Training on successive batches of human-computer EMO20Q dialogs shows that the automated agent is able to learn from subsequent human-computer interactions. Our results show that the training procedure enables the agent to learn a large set of emotion words. The fully trained agent successfully completes EMO20Q at 67% of human performance and 30% better than the bootstrapped agent. Even when the agent fails to guess the human opponent's emotion word in the EMO20Q game, the agent's behavior of searching for knowledge makes it appear human-like, which enables the agent to maintain user engagement and learn new, out-of-vocabulary words. These results lead us to conclude that the question-asking methodology and its implementation as a sequential Bayes pushdown automaton are a successful model for the cognitive abilities involved in learning, retrieving, and using emotion words by an automated agent in a dialog setting.
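As a rough illustration of the sequential Bayesian component of such an agent, the toy belief tracker below updates a distribution over candidate emotion words after each yes/no answer. The vocabulary, the questions, and the likelihood values are invented for illustration; they are not the trained EMO20Q agent or its data.

```python
# Toy EMO20Q-style belief tracker: a uniform prior over candidate emotion words
# is updated after each yes/no answer. Vocabulary, questions and likelihoods
# below are illustrative assumptions, not the trained agent's knowledge base.
VOCAB = ["joy", "anger", "fear", "sadness"]

# P(answer == "yes" | question, emotion word), assumed values
LIKELIHOOD = {
    "is it pleasant?":            {"joy": 0.95, "anger": 0.05, "fear": 0.05, "sadness": 0.05},
    "is it high arousal?":        {"joy": 0.70, "anger": 0.90, "fear": 0.90, "sadness": 0.10},
    "is it directed at someone?": {"joy": 0.30, "anger": 0.85, "fear": 0.40, "sadness": 0.20},
}

def update(belief, question, answer):
    """One sequential Bayesian step: multiply the prior by the answer likelihood, then renormalize."""
    posterior = {}
    for word, prior in belief.items():
        p_yes = LIKELIHOOD[question][word]
        posterior[word] = prior * (p_yes if answer == "yes" else 1.0 - p_yes)
    total = sum(posterior.values())
    return {word: p / total for word, p in posterior.items()}

belief = {word: 1.0 / len(VOCAB) for word in VOCAB}     # uniform prior
for question, answer in [("is it pleasant?", "no"),
                         ("is it high arousal?", "yes"),
                         ("is it directed at someone?", "yes")]:
    belief = update(belief, question, answer)

print(max(belief, key=belief.get))                      # most probable word: "anger"
```

In the full system described above, question selection and dialog control are handled by a generalized pushdown automaton; the sketch only shows how evidence from answers can sharpen the agent's belief over emotion words.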


2020 ◽  
Vol 71 (1) ◽  
pp. 193-219 ◽  
Author(s):  
Mark T. Wallace ◽  
Tiffany G. Woynaroski ◽  
Ryan A. Stevenson

During our everyday lives, we are confronted with a vast amount of information from several sensory modalities. This multisensory information needs to be appropriately integrated for us to effectively engage with and learn from our world. Research carried out over the last half century has provided new insights into the way such multisensory processing improves human performance and perception; the neurophysiological foundations of multisensory function; the time course for its development; how multisensory abilities differ in clinical populations; and, most recently, the links between multisensory processing and cognitive abilities. This review summarizes the extant literature on multisensory function in typical and atypical circumstances, discusses the implications of the work carried out to date for theory and research, and points toward next steps for advancing the field.


2012 ◽  
Vol 25 (0) ◽  
pp. 17
Author(s):  
Magdalena Chechlacz ◽  
Anna Terry ◽  
Pia Rotshtein ◽  
Wai-Ling Bickerton ◽  
Glyn Humphreys

Extinction is diagnosed when patients respond to a single contralesional item but fail to detect this item when an ipsilesional item is present concurrently. It is considered to be a disorder of attention characterized by a striking bias for the ipsilesional stimulus at the expense of the contralesional stimulus. Extinction has been studied mainly in the visual modality but it occurs also in other sensory modalities (touch, audition) and hence can be considered a multisensory phenomenon. The functional and neuroanatomical relations between extinction in different modalities are poorly understood. It could be hypothesised that extinction deficits in different modalities emerge after damage to both common (attention-specific) and distinct (modality-specific) brain regions. Here, we used voxel-based morphometry to examine the neuronal substrates of visual versus tactile extinction in a large group of stroke patients. We found that extinction deficits in the two modalities were significantly correlated. Lesions to the inferior parietal lobule and middle frontal gyrus were linked to visual extinction, while lesions involving the superior temporal gyrus were associated with tactile extinction. Damage within the middle temporal gyrus was linked to both types of deficits, but interestingly these lesions extended into the middle occipital gyrus in patients with visual but not tactile extinction. White matter damage within the temporal lobe was associated with both types of deficits, including lesions within long association pathways involved in spatial attention. Our findings indicate both common and distinct neural mechanisms of visual and tactile extinction.


2010 ◽  
Vol 365 (1542) ◽  
pp. 883-900 ◽  
Author(s):  
Tom V. Smulders ◽  
Kristy L. Gould ◽  
Lisa A. Leaver

Understanding the survival value of behaviour does not tell us how the mechanisms that control this behaviour work. Nevertheless, understanding survival value can guide the study of these mechanisms. In this paper, we apply this principle to understanding the cognitive mechanisms that support cache retrieval in scatter-hoarding animals. We believe it is too simplistic to predict that all scatter-hoarding animals will outperform non-hoarding animals on all tests of spatial memory. Instead, we argue that we should look at the detailed ecology and natural history of each species. This understanding of natural history then allows us to make predictions about which aspects of spatial memory should be better in which species. We use the natural hoarding behaviour of the three best-studied groups of scatter-hoarding animals to make predictions about three aspects of their spatial memory: duration, capacity and spatial resolution, and we test these predictions against the existing literature. Having laid out how ecology and natural history can be used to predict detailed cognitive abilities, we then suggest using this approach to guide the study of the neural basis of these abilities. We believe that this complementary approach will reveal aspects of memory processing that would otherwise be difficult to discover.


2016 ◽  
Vol 2 ◽  
pp. e40
Author(s):  
Abe Kazemzadeh ◽  
James Gibson ◽  
Panayiotis Georgiou ◽  
Sungbok Lee ◽  
Shrikanth Narayanan

We describe and experimentally validate a question-asking framework for machine-learned linguistic knowledge about human emotions. Using the Socratic method as a theoretical inspiration, we develop an experimental method and computational model for computers to learn subjective information about emotions by playing emotion twenty questions (EMO20Q), a game of twenty questions limited to words denoting emotions. Using human–human EMO20Q data we bootstrap a sequential Bayesian model that drives a generalized pushdown automaton-based dialog agent that further learns from 300 human–computer dialogs collected on Amazon Mechanical Turk. The human–human EMO20Q dialogs show the capability of humans to use a large, rich, subjective vocabulary of emotion words. Training on successive batches of human–computer EMO20Q dialogs shows that the automated agent is able to learn from subsequent human–computer interactions. Our results show that the training procedure enables the agent to learn a large set of emotion words. The fully trained agent successfully completes EMO20Q at 67% of human performance and 30% better than the bootstrapped agent. Even when the agent fails to guess the human opponent’s emotion word in the EMO20Q game, the agent’s behavior of searching for knowledge makes it appear human-like, which enables the agent to maintain user engagement and learn new, out-of-vocabulary words. These results lead us to conclude that the question-asking methodology and its implementation as a sequential Bayes pushdown automaton are a successful model for the cognitive abilities involved in learning, retrieving, and using emotion words by an automated agent in a dialog setting.


2010 ◽  
Vol 3 (2) ◽  
pp. 183-191 ◽  
Author(s):  
Cary Cherniss

The commentaries on my target article expand on it in many useful and enlightening ways, and some provide a glimpse at important new research. The commentaries also point to a few issues raised in the original article that require clarification or elaboration. In this response, I begin by recalling the “big idea” that initially led to interest in emotional intelligence (EI) as a concept, which is that success in life and work depends on more than just the basic cognitive abilities measured by IQ tests. I then clarify what I mean by emotional and social competence (ESC): It is not a single, unitary psychological construct but rather a very broad label for a large set of constructs. After considering whether we really need the ESC concept, I discuss whether the single, comprehensive definition of EI that I proposed in the target article is the best one in light of alternatives suggested in some of the commentaries. Next, I return to the issue of measurement and note new ideas and suggestions that emerge in the commentaries. I conclude by considering the question of how much EI or ESC adds conceptually or predictively to IQ or personality.


eLife ◽  
2020 ◽  
Vol 9 ◽  
Author(s):  
Pierre Enel ◽  
Joni D Wallis ◽  
Erin L Rich

Optimal decision-making requires that stimulus-value associations are kept up to date by constantly comparing the expected value of a stimulus with its experienced outcome. To do this, value information must be held in mind when a stimulus and outcome are separated in time. However, little is known about the neural mechanisms of working memory (WM) for value. Contradicting theories have suggested WM requires either persistent or transient neuronal activity, with stable or dynamic representations, respectively. To test these hypotheses, we recorded neuronal activity in the orbitofrontal and anterior cingulate cortex of two monkeys performing a valuation task. We found that features of all hypotheses were simultaneously present in prefrontal activity, and no single hypothesis was exclusively supported. Instead, mixed dynamics supported robust, time invariant value representations while also encoding the information in a temporally specific manner. We suggest that this hybrid coding is a critical mechanism supporting flexible cognitive abilities.
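A common way to probe stable versus dynamic population codes of this kind is cross-temporal decoding: train a decoder at one timepoint and test it at every other. The sketch below runs that analysis on simulated firing rates containing both a stable and a time-specific value signal; the data generation and all parameters are assumptions for illustration, not the authors' recordings or analysis pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Cross-temporal decoding on simulated data: generalization off the diagonal
# indicates a stable code; accuracy confined to the diagonal indicates a
# dynamic code. Mixed dynamics show both features at once.
rng = np.random.default_rng(0)
n_trials, n_neurons, n_time = 200, 50, 8
value = rng.integers(0, 2, n_trials)                     # high- vs low-value condition

stable_axis = rng.normal(size=n_neurons)                 # time-invariant value signal
X = np.empty((n_trials, n_time, n_neurons))
for t in range(n_time):
    dynamic_axis = rng.normal(size=n_neurons)            # time-specific value signal
    signal = np.outer(value - 0.5, stable_axis + dynamic_axis)
    X[:, t, :] = signal + rng.normal(scale=3.0, size=(n_trials, n_neurons))

train, test = slice(0, 100), slice(100, 200)             # held-out trials for testing
acc = np.zeros((n_time, n_time))
for t_train in range(n_time):
    clf = LogisticRegression(max_iter=1000).fit(X[train, t_train, :], value[train])
    for t_test in range(n_time):
        acc[t_train, t_test] = clf.score(X[test, t_test, :], value[test])

# Expect above-chance accuracy everywhere (stable component) with a peak on the
# diagonal (time-specific component), i.e. the hybrid pattern described above.
print(np.round(acc, 2))
```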

