processing stream
Recently Published Documents

Total documents: 61 (last five years: 19)
H-index: 17 (last five years: 2)

2021, pp. 1-38
Author(s): Samantha Wray, Linnaea Stockall, Alec Marantz

Abstract: Neuro- and psycholinguistic experimentation supports the early decomposition of morphologically complex words within the ventral processing stream, which MEG has localized to the M170 response in the (left) visual word form area (VWFA). Decomposition into an exhaustive parse of visual morpheme forms extends beyond words like "farmer" to those merely imitating complexity (e.g. "brother", Lewis et al. 2011), and to "unique" stems occurring in only one word but following the syntax and semantics of their affix (e.g. "vulnerable", Gwilliams & Marantz 2018). Evidence comes primarily from suffixation; other morphological processes have been under-investigated. This study explores circumfixation, infixation, and reduplication in Tagalog. In addition to investigating whether these are parsed like suffixation, we address an outstanding question concerning semantically empty morphemes. Some words in Tagalog resemble English "winter", for which decomposition (wint-er) is not supported; these apparently reduplicated words, pseudoreduplicates, lack the syntactic and semantic features of genuinely reduplicated forms. However, unlike "winter", these words exhibit phonological behavior predicted only if they involve a reduplicating morpheme. If they are decomposed, this provides evidence that words are analyzed as complex, like English "vulnerable", when the grammar demands it. In a lexical decision task with MEG, we find that VWFA activity correlates with stem:word transition probability for circumfixed, infixed, and reduplicated words. Furthermore, a Bayesian analysis suggests that pseudoreduplicates with reduplicate-like phonology are also decomposed, while other pseudoreduplicates are not. These findings are consistent with an interpretation on which decomposition is modulated by phonology in addition to syntax and semantics.
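The predictor here, stem:word transition probability, is standardly the conditional probability of encountering the full form given its stem, estimated from corpus counts. A minimal sketch with hypothetical counts for a real Tagalog stem and its infixed form (an illustration of the measure, not the authors' pipeline):

    # Sketch: stem-to-word transition probability, P(word | stem).
    # Corpus counts below are hypothetical, for illustration only.
    corpus_counts = {
        "takbo": 120,    # hypothetical frequency of the stem ('run')
        "tumakbo": 45,   # hypothetical frequency of the -um- infixed form
    }

    def transition_probability(stem_count: int, word_count: int) -> float:
        """How predictable the complex form is, given that its stem was seen."""
        return word_count / stem_count

    p = transition_probability(corpus_counts["takbo"], corpus_counts["tumakbo"])
    print(f"stem:word transition probability = {p:.3f}")  # 0.375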


2021, Vol 15, pp. e00671
Author(s): A. Karrech, M. Dong, J. Skut, M. Elchalakani, M.A. Shahin

2021, pp. 1-12
Author(s): Joonkoo Park, Sonia Godbole, Marty G. Woldorff, Elizabeth M. Brannon

Abstract: Whether and how the brain encodes discrete numerical magnitude differently from continuous nonnumerical magnitude is hotly debated. In a previous set of studies, we orthogonally varied numerical (numerosity) and nonnumerical (size and spacing) dimensions of dot arrays and demonstrated a strong modulation of early visual evoked potentials (VEPs) by numerosity and not by nonnumerical dimensions. Although very little is known about the brain's response to systematic changes in continuous dimensions of a dot array, some authors intuit that the visual processing stream must be more sensitive to continuous magnitude information than to numerosity. To address this possibility, we measured VEPs of participants viewing dot arrays that changed exclusively in one nonnumerical magnitude dimension at a time (size or spacing) while holding numerosity constant, and compared this to a condition where numerosity was changed while holding size and spacing constant. We found reliable but small neural sensitivity to exclusive changes in size and spacing; however, changing numerosity elicited a much more robust modulation of the VEPs. Together with previous work, these findings suggest that sensitivity to magnitude dimensions in early visual cortex is context dependent: The brain is moderately sensitive to changes in size and spacing when numerosity is held constant, but sensitivity to these continuous variables diminishes to a negligible level when numerosity is allowed to vary at the same time. Neurophysiological explanations for the encoding and context dependency of numerical and nonnumerical magnitudes are proposed within the framework of neuronal normalization.
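The closing appeal to neuronal normalization refers to a family of models in which a unit's response is divided by the pooled activity of its neighbors. A generic divisive-normalization sketch, assuming the textbook form rather than any model fitted in the paper, shows how a strong covarying dimension can mute sensitivity to another:

    import numpy as np

    def divisive_normalization(drive: np.ndarray, sigma: float = 1.0) -> np.ndarray:
        """Generic divisive normalization: each unit's drive is divided by a
        semisaturation constant plus the summed drive of the whole pool."""
        return drive / (sigma + drive.sum())

    # The same unit (drive 4.0) responds less when the surrounding pool is
    # stronger, one way a dominant dimension (e.g., varying numerosity) could
    # suppress sensitivity to continuous dimensions such as size or spacing.
    print(divisive_normalization(np.array([4.0, 1.0, 1.0]))[0])  # ~0.571
    print(divisive_normalization(np.array([4.0, 8.0, 8.0]))[0])  # ~0.190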


2021, Vol 17 (5), pp. e1008969
Author(s): Kristjan Kalm, Dennis Norris

We contrast two computational models of sequence learning. The associative learner posits that learning proceeds by strengthening existing association weights. Alternatively, recoding posits that learning creates new and more efficient representations of the learned sequences. Importantly, both models propose that humans act as optimal learners but capture different statistics of the stimuli in their internal model. Furthermore, these models make dissociable predictions as to how learning changes the neural representation of sequences. We tested these predictions by using fMRI to extract neural activity patterns from the dorsal visual processing stream during a sequence recall task. We observed that only the recoding account can explain the similarity of neural activity patterns, suggesting that participants recode the learned sequences using chunks. We show that associative learning can theoretically store only a very limited number of overlapping sequences, such as are common in ecological working memory tasks, and hence an efficient learner should recode initial sequence representations.
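A toy contrast between the two accounts, with an invented four-item sequence and deliberately simplified learning rules (the paper's actual models are optimal probabilistic learners):

    from collections import defaultdict

    sequence = ["A", "B", "C", "D"]  # hypothetical four-item stimulus sequence

    # Associative account: learning strengthens pairwise transition weights.
    weights = defaultdict(float)
    for first, second in zip(sequence, sequence[1:]):
        weights[(first, second)] += 1.0  # crude Hebbian-style increment

    # Recoding account: learning replaces the item-level code with chunks,
    # a new and more compact representation of the same sequence.
    chunks = [tuple(sequence[:2]), tuple(sequence[2:])]  # e.g. (A, B) and (C, D)

    # Overlapping sequences (e.g. A-B-C-D vs. C-D-A-B) would share, and thereby
    # corrupt, the same pairwise weights; distinct chunk codes keep them apart.
    print(dict(weights))
    print(chunks)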


2021, Vol 89 (9), pp. S267
Author(s): Katherine M. Soderberg, Tiffany A. Nash, Philip D. Kohn, J. Shane Kippenhan, Madeline R. Hamborg, ...

2020, Vol 7
Author(s): Focko L. Higgen, Philipp Ruppel, Michael Görner, Matthias Kerzel, Norman Hendrich, ...

The quality of crossmodal perception hinges on two factors: the accuracy of independent unimodal perception and the ability to integrate information from different sensory systems. In humans, the ability for cognitively demanding crossmodal perception diminishes from young to old age. Here, we propose a new approach to investigating the degree to which these different factors contribute to crossmodal processing and its age-related decline, by replicating a medical study on visuo-tactile crossmodal pattern discrimination with state-of-the-art tactile sensing technology and artificial neural networks (ANNs). We implemented two ANN models to focus specifically on the relevance of early integration of sensory information in the crossmodal processing stream, a mechanism proposed to underlie efficient processing in the human brain. Applying an adaptive staircase procedure, we brought unimodal classification performance for both modalities to comparable levels in the human participants as well as the ANNs. This allowed us to compare crossmodal performance between and within the systems, independent of the underlying unimodal processes. Our data show that the unimodal classification accuracies of the tactile sensing technology are comparable to those of humans. For crossmodal discrimination by the ANNs, integrating high-level unimodal features at earlier stages of the crossmodal processing stream yields higher accuracies than late integration of independent unimodal classifications. Compared with humans, the ANNs achieve higher accuracies than older participants in both the unimodal and the crossmodal condition, but lower accuracies than younger participants in the crossmodal task. Taken together, we show that state-of-the-art tactile sensing technology can perform a complex tactile recognition task at levels comparable to humans. For crossmodal processing, human-inspired early sensory integration seems to improve the performance of artificial neural networks. Still, younger participants seem to employ more efficient crossmodal integration mechanisms than those modeled in the proposed ANNs. Our work demonstrates how collaborative research in neuroscience and embodied artificial neurocognitive models can help derive models that inform the design of future neurocomputational architectures.
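The architectural contrast under test is where the two modalities are fused. A schematic sketch with random stand-in features and classifier weights; nothing below reproduces the authors' networks:

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical high-level unimodal feature vectors for one stimulus.
    visual = rng.normal(size=16)
    tactile = rng.normal(size=16)

    def classify(features: np.ndarray, n_classes: int = 3) -> np.ndarray:
        """Stand-in classifier head: a random linear readout plus softmax."""
        logits = rng.normal(size=(n_classes, features.size)) @ features
        exp = np.exp(logits - logits.max())
        return exp / exp.sum()

    # Early integration: fuse the unimodal feature codes, then classify the
    # joint code, so the decision can exploit crossmodal correlations.
    early = classify(np.concatenate([visual, tactile]))

    # Late integration: classify each modality independently, then average
    # the decisions, discarding any crossmodal feature interactions.
    late = (classify(visual) + classify(tactile)) / 2

    print(early, late)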


2020
Author(s): Yrian Derreumaux, Robin Bergh, Brent Hughes

America is increasingly ideologically polarized, fueling intergroup conflict and intensifying partisan biases in cognition and behavior. To date, research on intergroup bias has predominantly examined biases in how people search for information and how they interpret information, in isolation. Here, we integrate these two perspectives to elucidate how partisan biases manifest across the information processing stream, beginning with (1) a biased selection of information, leading to (2) skewed samples of information that interact with (3) motivated interpretations to produce evaluative biases. Across 3 empirical studies and 3 internal meta-analyses, participants freely sampled information about ingroup and outgroup members, or about ingroup and outgroup political candidates, until they felt confident enough to evaluate them. Replicating our results across these different sampling environments, we reliably find that the majority of participants begin by sampling information from their own group, a tendency associated with individual differences in group-based motives, and that participants sample more information overall from their own group. This, in turn, generates more variability in ingroup (relative to outgroup) experiences, which subsequently fall prey to motivated interpretations. We further demonstrate that participants employ different sampling strategies over time when the ingroup is de facto worse than the outgroup, and that they asymmetrically integrate information into their evaluations depending on the congeniality of their initial experiences. The proposed framework extends classic findings in psychology by connecting people's early experiences to downstream evaluative biases, and it has implications for intergroup bias interventions.
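The mechanism in steps (1) and (2), unequal sampling producing unequal evidence, can be illustrated with a toy simulation; the group qualities, sample sizes, and noise level below are all invented:

    import random
    import statistics

    random.seed(1)

    def sample_experiences(true_quality: float, n: int) -> list[float]:
        """Noisy experiences with members of one group."""
        return [random.gauss(true_quality, 1.0) for _ in range(n)]

    # Both groups are objectively identical; only the amount of sampling differs.
    ingroup = sample_experiences(true_quality=0.0, n=30)   # sampled more
    outgroup = sample_experiences(true_quality=0.0, n=10)  # sampled less

    # Larger samples typically span a wider range of experiences, giving
    # motivated interpretation more extreme ingroup observations to weight
    # selectively, while outgroup impressions rest on a thinner sample.
    print(max(ingroup) - min(ingroup), max(outgroup) - min(outgroup))
    print(statistics.mean(ingroup), statistics.mean(outgroup))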


2020, Vol 45 (7), pp. 601-608
Author(s): Fábio Silva, Nuno Gomes, Sebastian Korb, Gün R Semin

Abstract: Exposure to body odors (chemosignals) collected under different emotional states (i.e., emotional chemosignals) can modulate our visual system, biasing visual perception. Recent research has suggested that exposure to fear body odors results in generalized faster access to visual awareness of different emotional facial expressions (i.e., fear, happy, and neutral). In the present study, we aimed to replicate and extend these findings by exploring whether such effects are limited to fear odor, introducing a second negative body odor, namely disgust. We compared the time that 3 different emotional facial expressions (i.e., fear, disgust, and neutral) took to reach visual awareness during a breaking continuous flash suppression paradigm, across 3 body odor conditions (i.e., fear, disgust, and neutral). We found that fear body odors do not trigger overall faster access to visual awareness, but instead sped up access to awareness specifically for facial expressions of fear. Disgust odor, on the other hand, had no effect on awareness thresholds for facial expressions. These findings contrast with prior results, suggesting that the potential of fear body odors to induce visual processing adjustments is specific to fear cues. Furthermore, our results support a unique ability of fear body odors to induce such visual processing changes, compared with other negative emotional chemosignals (i.e., disgust). These conclusions raise interesting questions as to how fear odor might interact with the visual processing stream, while also opening avenues for future research.
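In a breaking continuous flash suppression (b-CFS) paradigm the dependent measure is how long a suppressed stimulus takes to break into awareness, so the reported effect is a cell-specific shortening of suppression times. A toy aggregation over invented trial data mirroring that pattern:

    from statistics import mean

    # Hypothetical suppression times in seconds per (odor condition, facial
    # expression) cell; the numbers are invented to mirror the reported result.
    trials = {
        ("fear odor", "fear face"): [2.0, 2.2, 1.9],     # selectively faster breakthrough
        ("fear odor", "neutral face"): [2.8, 2.7, 2.9],
        ("disgust odor", "fear face"): [2.7, 2.9, 2.8],  # no odor effect reported
        ("neutral odor", "fear face"): [2.8, 2.6, 2.9],
    }

    for cell, times in trials.items():
        print(cell, round(mean(times), 2))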

