Rule-based and stimulus-based cues bias auditory decisions via different computational and physiological mechanisms

2021 ◽  
Author(s):  
Nathan Tardiff ◽  
Lalitta Suriya-Arunroj ◽  
Yale E. Cohen ◽  
Joshua I. Gold

Abstract The varied effects of expectations on auditory perception are not well understood. For example, both top-down rules and bottom-up stimulus regularities generate expectations that can bias subsequent perceptual judgments. However, it is unknown whether these different sources of bias use the same or different computational and physiological mechanisms. We examined how rule-based and stimulus-based expectations influenced human subjects’ behavior and pupil-linked arousal, a marker of certain forms of expectation-based processing, during an auditory frequency-discrimination task. Rule-based cues biased choice and response times (RTs) toward the more-probable stimulus. In contrast, stimulus-based cues had a complex combination of effects, including choice and RT biases toward and away from the frequency of recently heard stimuli. These different behavioral patterns also had distinct computational signatures, including different modulations of key components of a novel form of a drift-diffusion model, and distinct physiological signatures, including substantial bias-dependent modulations of pupil size in response to rule-based but not stimulus-based cues. These results imply that different sources of expectations can modulate auditory perception via distinct mechanisms: one that uses arousal-linked, rule-based information and another that uses arousal-independent, stimulus-based information to bias the speed and accuracy of auditory perceptual decisions.
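The abstract's "novel form of a drift-diffusion model" is not specified here; for orientation, a minimal sketch of the standard drift-diffusion model, in which a cue might bias the starting point of evidence accumulation. All function names, parameters, and defaults below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def simulate_ddm(drift, bias=0.0, bound=1.0, noise=1.0,
                 dt=0.001, max_t=5.0, seed=0):
    """Simulate one trial of a standard drift-diffusion model.

    Evidence starts at `bias` (e.g., a cue-induced starting-point
    offset) and accumulates at rate `drift` with Gaussian noise
    until it reaches +bound or -bound, or time runs out.
    Returns (choice, response_time).
    """
    rng = np.random.default_rng(seed)
    x, t = bias, 0.0
    while abs(x) < bound and t < max_t:
        # Euler step of the diffusion: deterministic drift + noise
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return (1 if x >= bound else -1), t
```

In this toy form, a rule-based cue toward the upper choice would map onto a positive `bias`, shortening response times for that choice, which is one of several ways cue effects are commonly parameterized in such models.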

2020 ◽  
pp. 1-15
Author(s):  
Simon Lacey ◽  
James Nguyen ◽  
Peter Schneider ◽  
K. Sathian

Abstract The crossmodal correspondence between auditory pitch and visuospatial elevation (in which high- and low-pitched tones are associated with high and low spatial elevation respectively) has been proposed as the basis for Western musical notation. One implication of this is that music perception engages visuospatial processes and may not be exclusively auditory. Here, we investigated how music perception is influenced by concurrent visual stimuli. Participants listened to unfamiliar five-note musical phrases with four kinds of pitch contour (rising, falling, rising–falling, or falling–rising), accompanied by incidental visual contours that were either congruent (e.g., auditory rising/visual rising) or incongruent (e.g., auditory rising/visual falling) and judged whether the final note of the musical phrase was higher or lower in pitch than the first. Response times for the auditory judgment were significantly slower for incongruent compared to congruent trials, i.e., there was a congruency effect, even though the visual contours were incidental to the auditory task. These results suggest that music perception, although generally regarded as an auditory experience, may actually be multisensory in nature.


1965 ◽  
Vol 85 (4) ◽  
pp. 419-425 ◽  
Author(s):  
Walter D. Block ◽  
Richard W. Hubbard ◽  
Betty F. Steele

2003 ◽  
Vol 14 (2) ◽  
pp. 169-174 ◽  
Author(s):  
John G. Thomas ◽  
Haley R. Milner ◽  
Karl F. Haberlandt

How do people retrieve information in forward and backward recall? To address this issue, we examined response times in directional recall as a function of serial position and list length. Participants memorized lists of four to six words and entered responses at the keyboard. Recall direction was postcued. Response times exhibited asymmetry in terms of direction. In forward recall, response times peaked at the first position, leveling off for subsequent positions. Response times were slower in backward recall than in forward recall and exhibited an inverse U-shaped function with an initial slowdown followed by a continuous speedup. These asymmetries have implications for theoretical models of retrieval in serial recall, including temporal-code, rule-based, and network models. The response time pattern suggests that forward recall proceeds in equal steps across positions, whereas backward recall involves repeated covert cycles of forward recall. Thus, retrieval in both directions involves a forward search.


Author(s):  
Simone Siefer ◽  
Roland Wacker ◽  
Manfred Wilhelm ◽  
Christiane Schoen

1994 ◽  
Vol 78 (2) ◽  
pp. 197-209 ◽  
Author(s):  
Bryan E. Pfingst ◽  
Lisa A. Holloway ◽  
Natee Poopat ◽  
Arohan R. Subramanya ◽  
Melissa F. Warren ◽  
...  

2021 ◽  
Author(s):  
Kevin Tan ◽  
Amy Daitch ◽  
Pedro Pinheiro-Chagas ◽  
Kieran Fox ◽  
Josef Parvizi ◽  
...  

Abstract Hundreds of neuroimaging studies show that mentalizing (i.e., theory of mind) recruits default mode network (DMN) regions with remarkable consistency. Nevertheless, the social-cognitive functions of individual DMN regions remain unclear, perhaps due to the limited spatiotemporal resolution of neuroimaging. We used electrocorticography (ECoG) to record neuronal population activity while 16 human subjects judged the psychological traits of themselves and others. Self- and other-mentalizing recruited near-identical neuronal populations in a common spatiotemporal sequence: activations were earliest in visual cortex, followed by temporoparietal DMN regions, and finally medial prefrontal cortex. Critically, regions with later activations showed greater functional specificity for mentalizing, greater self/other differentiation, and stronger associations with behavioral response times. Moreover, other-mentalizing evoked slower and lengthier activations than self-mentalizing across successive DMN regions, suggesting temporally extended demands on higher-level processes. Our results reveal a common neurocognitive pathway for self- and other-mentalizing that follows a hierarchy of functional specialization across DMN regions.


Author(s):  
Alexandre L. S. Filipowicz ◽  
Jonathan Levine ◽  
Eugenio Piasini ◽  
Gaia Tavoni ◽  
Joseph W. Kable ◽  
...  

Abstract Different learning strategies are thought to fall along a continuum that ranges from simple, inflexible, and fast “model-free” strategies to more complex, flexible, and deliberative “model-based” strategies. Here we show that, contrary to this proposal, strategies at both ends of this continuum can be equally flexible, effective, and time-intensive. We analyzed the behavior of adult human subjects performing a canonical learning task used to distinguish between model-free and model-based strategies. Subjects using either strategy showed similarly high information complexity, a measure of strategic flexibility, and comparable accuracy and response times. This similarity was apparent despite the generally higher computational complexity of model-based algorithms and fundamental differences in how each strategy learned: model-free learning was driven primarily by observed past responses, whereas model-based learning was driven primarily by inferences about latent task features. Thus, model-free and model-based learning differ in the information they use to learn but can support comparably flexible behavior.

Statement of Relevance: The distinction between model-free and model-based learning is an influential framework that has been used extensively to understand individual- and task-dependent differences in learning by both healthy and clinical populations. A common interpretation of this distinction is that model-based strategies are more complex and therefore more flexible than model-free strategies. However, this interpretation conflates computational complexity, which relates to processing resources and is generally higher for model-based algorithms, with information complexity, which reflects flexibility but has rarely been measured. Here we use a metric of information complexity to demonstrate that, contrary to this interpretation, model-free and model-based strategies can be equally flexible, effective, and time-intensive, and are better distinguished by the nature of the information from which they learn. Our results counter common interpretations of model-free versus model-based learning and demonstrate the general usefulness of information complexity for assessing different forms of strategic flexibility.
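The abstract does not define its information-complexity metric; one common reading of such measures is the mutual information between what a strategy conditions on (past observations or latent features) and its responses. A hedged sketch of that quantity, computed from a joint count table (the interpretation and the function below are illustrative assumptions, not the authors' metric):

```python
import numpy as np

def mutual_information(joint_counts):
    """Mutual information I(X;Y) in bits from a joint count table.

    Rows index X (e.g., task history or inferred features),
    columns index Y (e.g., responses). Higher values mean the
    responses carry more information about X, one possible
    proxy for strategic flexibility.
    """
    p = joint_counts / joint_counts.sum()          # joint distribution
    px = p.sum(axis=1, keepdims=True)              # marginal over X
    py = p.sum(axis=0, keepdims=True)              # marginal over Y
    nz = p > 0                                     # avoid log(0)
    return float((p[nz] * np.log2(p[nz] / (px @ py)[nz])).sum())
```

For example, a strategy whose responses are independent of history yields 0 bits, while one that responds deterministically to each of two equally frequent histories yields 1 bit.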


2021 ◽  
Vol 6 (1) ◽  
pp. 27
Author(s):  
Muniba Nazir

Wheat is used as a staple food worldwide and ranks third among cereals. Its productivity at the global level is reduced by many stresses, chiefly salinity stress, which disrupts various physiological and biochemical processes in plants. To overcome these reductions in growth and yield, salinity resistance in wheat must be achieved. Introducing resistance to salinity-induced water stress and ion toxicity in wheat leads to more reliable results. Salt-tolerance mechanisms at the tissue and whole-plant levels, along with sequestration of toxic ions, can improve overall growth, yield, and salinity-resistance capability in wheat. Different sources and measurements of salinity play an important role in the production of salt-tolerant wheat. This article mainly reviews the physiological mechanisms, genetics, omics, and quantitative trait locus approaches for the production of salt-tolerant wheat.


eLife ◽  
2019 ◽  
Vol 8 ◽  
Author(s):  
Lingxi Lu ◽  
Qian Wang ◽  
Jingwei Sheng ◽  
Zhaowei Liu ◽  
Lang Qin ◽  
...  

The subjective inner experience of mental imagery is among the most ubiquitous human experiences in daily life. Elucidating the neural implementation underpinning the dynamic construction of mental imagery is critical to understanding high-order cognitive function in the human brain. Here, we applied a frequency-tagging method to isolate the top-down process of speech mental imagery from bottom-up sensory-driven activities and concurrently tracked the neural processing time scales corresponding to the two processes in human subjects. Notably, by estimating the source of the magnetoencephalography (MEG) signals, we identified isolated brain networks activated at the imagery-rate frequency. In contrast, more extensive brain regions in the auditory temporal cortex were activated at the stimulus-rate frequency. Furthermore, intracranial stereotactic electroencephalogram (sEEG) evidence confirmed the participation of the inferior frontal gyrus in generating speech mental imagery. Our results indicate that a dissociated neural network underlies the dynamic construction of speech mental imagery independent of auditory perception.
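The core idea of frequency tagging is that processes driven at different rates (here, a stimulus rate and a slower imagery rate) leave power peaks at their respective frequencies in the recorded signal's spectrum. A minimal synthetic illustration of that principle (the rates, amplitudes, and noise level are invented for the example and are not the study's parameters):

```python
import numpy as np

fs = 1000                       # sampling rate (Hz), assumed
t = np.arange(0, 10, 1 / fs)    # 10 s of "recording"

# Synthetic signal: a 4 Hz "stimulus-rate" component, a weaker
# 1 Hz "imagery-rate" component, and a little Gaussian noise.
rng = np.random.default_rng(0)
signal = (np.sin(2 * np.pi * 4 * t)
          + 0.5 * np.sin(2 * np.pi * 1 * t)
          + 0.1 * rng.standard_normal(t.size))

# Amplitude spectrum: tagged processes show up as peaks at
# exactly their driving frequencies.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(t.size, 1 / fs)
peak = freqs[np.argmax(spectrum)]   # dominant tagged frequency (4 Hz)
```

Source estimation then asks which brain regions contribute to the power at each tagged frequency, which is how the study separates imagery-rate from stimulus-rate networks.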

