perceptual representations
Recently Published Documents

TOTAL DOCUMENTS: 140 (five years: 37)
H-INDEX: 20 (five years: 1)

Author(s): Pieter M. Goltstein, Sandra Reinert, Tobias Bonhoeffer, Mark Hübener

Abstract: Associative memories are stored in distributed networks extending across multiple brain regions. However, it is unclear to what extent sensory cortical areas are part of these networks. Using a paradigm for visual category learning in mice, we investigated whether perceptual and semantic features of learned category associations are already represented at the first stages of visual information processing in the neocortex. Mice learned to categorize visual stimuli, discriminating between categories and generalizing within them. Inactivation experiments showed that categorization performance was contingent on neuronal activity in the visual cortex. Long-term calcium imaging in nine areas of the visual cortex identified changes in feature tuning and category tuning that occurred during this learning process, most prominently in the postrhinal area (POR). These results provide evidence for the view that associative memories form a brain-wide distributed network, with learning in early stages shaping perceptual representations and supporting semantic content downstream.


2021
Author(s): Ning Mei, Dobromir Rahnev, David Soto

Our perceptual system appears hardwired to exploit regularities of input features across space and time in seemingly stable environments. This can lead to serial dependence effects, whereby recent perceptual representations bias current perception. Serial dependence has also been demonstrated for more abstract representations, such as perceptual confidence. Here we ask whether temporal patterns in the generation of confidence judgments across trials generalize across observers and different cognitive domains. Data from the Confidence Database spanning perceptual, memory, and cognitive paradigms were re-analyzed. Machine learning classifiers were used to predict confidence on the current trial from the history of confidence judgments on the previous trials. Cross-observer and cross-domain decoding results showed that a model trained to predict confidence in the perceptual domain generalized across observers to predict confidence in the other cognitive domains. Intriguingly, these serial dependence effects also generalized across correct and incorrect trials, indicating that serial dependence in confidence generation is uncoupled from metacognition (i.e., how we evaluate the precision of our own behavior). We discuss the ramifications of these findings for the ongoing debate on domain-generality vs. specificity of metacognition.
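The core of this analysis can be sketched with a toy model: simulate serially dependent confidence ratings for two "observers", fit a predictor of current-trial confidence from the preceding trials' confidence for one observer, and test it on the other. Everything here (the AR(1) simulation, the lag count, and the least-squares predictor) is an illustrative assumption, not the study's actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_confidence(n, phi=0.7):
    # AR(1) series standing in for trial-by-trial confidence ratings:
    # each rating is pulled toward the previous one (serial dependence).
    c = np.zeros(n)
    for t in range(1, n):
        c[t] = phi * c[t - 1] + rng.normal()
    return c

def lagged(conf, k=3):
    # Features: the k previous ratings; target: the current rating.
    X = np.column_stack([conf[i:len(conf) - k + i] for i in range(k)])
    return X, conf[k:]

def fit(X, y):
    # Ordinary least squares with an intercept column.
    Xa = np.column_stack([np.ones(len(X)), X])
    w, *_ = np.linalg.lstsq(Xa, y, rcond=None)
    return w

def predict(w, X):
    return np.column_stack([np.ones(len(X)), X]) @ w

train = simulate_confidence(2000)  # "observer A"
test = simulate_confidence(2000)   # "observer B"
Xtr, ytr = lagged(train)
Xte, yte = lagged(test)
w = fit(Xtr, ytr)

# Cross-observer generalization: a history-based model beats a
# history-blind baseline (predicting observer B's mean rating).
mse_model = np.mean((predict(w, Xte) - yte) ** 2)
mse_base = np.mean((yte.mean() - yte) ** 2)
```

The point of the sketch is only the logic of the test: if confidence carries serial structure that is shared across observers, a model fit on one observer's history should outperform a baseline on another observer's data.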


2021
Author(s): Daniel A Stehr, Xiaojue Zhou, Mariel Tisby, Patrick T Hwu, John A Pyles, ...

Abstract: The posterior superior temporal sulcus (pSTS) is a brain region characterized by perceptual representations of human body actions that promote the understanding of observed behavior. Increasingly, action observation is recognized as being strongly shaped by the expectations of the observer (Kilner 2011; Koster-Hale and Saxe 2013; Patel et al. 2019). Therefore, to characterize top-down influences on action observation, we evaluated the statistical structure of multivariate activation patterns from the action observation network (AON) while observers attended to different dimensions of action vignettes (the action kinematics, goal, or identity of avatars jumping or crouching). Decoding accuracy varied as a function of attention instruction in the right pSTS and left inferior frontal cortex (IFC), with the right pSTS classifying actions most accurately when observers attended to the action kinematics and the left IFC classifying most accurately when observers attended to the actor’s goal. Functional connectivity also increased between the right pSTS and right IFC when observers attended to the actions portrayed in the vignettes. Our findings are evidence that the attentive state of the viewer modulates sensory representations in the pSTS, consistent with proposals that the pSTS occupies an interstitial zone mediating top-down context and bottom-up perceptual cues during action observation.
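The multivariate decoding logic can be illustrated with a toy leave-one-out nearest-centroid classifier on synthetic activation patterns, where an "attention" factor simply scales the separation between two action classes. The classifier choice, pattern dimensions, and effect size are illustrative assumptions, not the study's actual MVPA pipeline:

```python
import numpy as np

rng = np.random.default_rng(1)

def patterns(n_trials, n_vox, signal):
    # Two action classes; `signal` scales class separation,
    # standing in for an attention-driven sharpening of tuning.
    mu = rng.normal(size=n_vox)
    X0 = rng.normal(size=(n_trials, n_vox)) - signal * mu
    X1 = rng.normal(size=(n_trials, n_vox)) + signal * mu
    X = np.vstack([X0, X1])
    y = np.r_[np.zeros(n_trials), np.ones(n_trials)]
    return X, y

def loo_nearest_centroid(X, y):
    # Leave-one-out decoding: classify each trial by its distance
    # to class centroids computed from all remaining trials.
    correct = 0
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        c0 = X[mask & (y == 0)].mean(axis=0)
        c1 = X[mask & (y == 1)].mean(axis=0)
        pred = 0 if np.linalg.norm(X[i] - c0) < np.linalg.norm(X[i] - c1) else 1
        correct += pred == y[i]
    return correct / len(y)

# Decoding accuracy under strong vs. weak "attention" to the
# class-relevant dimension of the patterns.
acc_attend = loo_nearest_centroid(*patterns(40, 50, 0.4))
acc_other = loo_nearest_centroid(*patterns(40, 50, 0.05))
```

The sketch mirrors the inference in the abstract: if attending to a stimulus dimension sharpens the neural patterns that carry it, decoding accuracy for that dimension should rise in the attending condition relative to the others.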


2021
Author(s): Christian Xerri, Yoh’i Zennou-Azogui

Perceptual representations are built through multisensory interactions underpinned by dense anatomical and functional neural networks that interconnect primary and associative cortical areas. There is compelling evidence that primary sensory cortical areas do not operate in isolation, but play a role in early processes of multisensory integration. In this chapter, we first review earlier and recent literature showing how multimodal interactions between primary cortices may contribute to refining perceptual representations. Second, we discuss findings providing evidence that, following peripheral damage to a sensory system, multimodal integration may promote sensory substitution in deprived cortical areas and favor compensatory plasticity in the spared sensory cortices.


Author(s): Michael Rescorla

The representational theory of mind (RTM) holds that the mind is stocked with mental representations: mental items that represent. They can be stored in memory, manipulated during mental activity, and combined to form complex representations. RTM is widely presupposed within cognitive science, which offers many successful theories that cite mental representations. Nevertheless, mental representations are still viewed warily in some scientific and philosophical circles. This chapter develops a novel version of RTM: the capacities-based representational theory of mind (C-RTM). According to C-RTM, a mental representation is an abstract type that marks the exercise of a representational capacity. Talk about mental representations embodies an ontologically loaded way of classifying mental states through representational capacities that the states deploy. Complex mental representations mark the appropriate joint exercise of multiple representational capacities. The chapter supports C-RTM with examples drawn from cognitive science, including perceptual representations and cognitive maps, and applies C-RTM to long-standing debates over the existence, nature, individuation, structure, and explanatory role of mental representations.


2020
Vol 63 (11), pp. 3659-3679
Author(s): Julie D. Anderson, Stacy A. Wagovich, Levi Ofoe

Purpose: The purpose of this study was to examine cognitive flexibility for semantic and perceptual information in preschool children who stutter (CWS) and who do not stutter (CWNS). Method: Participants were 44 CWS and 44 CWNS between the ages of 3;0 and 5;11 (years;months). Cognitive flexibility was measured using semantic and perceptual categorization tasks. In each task, children were required to match a target object with two different semantic or perceptual associates. Main dependent variables were reaction time and accuracy. Results: CWS and CWNS shifted from one semantic or perceptual representation to another with similar accuracy, but the CWS did so significantly more slowly. Both groups of children had more difficulty switching between perceptual representations than semantic ones. Conclusion: CWS are less efficient (slower), though not less accurate, than CWNS in their ability to switch between different representations in both the verbal and nonverbal domains.


2020
Author(s): Neha Dhupia, D. Samuel Schwarzkopf, Derek H. Arnold

Abstract: Visual objects that extend across physiological blind spots seem to encapsulate the extent of blindness, due to a process commonly referred to as perceptual filling-in of spatial vision. It is unclear if temporal perception is similar, so we examined the temporal relationships governing causality perception across the blind spot. We found that the human brain does not allow for the time an object should take to traverse the blind spot when engaging in a causal interaction. We also used electroencephalography (EEG) to examine temporal signatures of elements flickering on and off in tandem, or in counter-phase. At a control site, more brain activity was entrained at the duty cycle by flicker than by counter-phase changes, whereas these conditions were indistinguishable about blind spots. Our data suggest a common pool of neurons might encode temporal properties on either side of physiological blind spots. This would explain the absence of any allowance for the extent of blindness in causality perception, and the weakened differences between temporal representations of flicker and counter-phased changes about the blind spot. Overall, our data suggest that, unlike spatial vision, there is no temporal filling-in for perceptual representations about physiological blind spots.


2020
Author(s): Casey L Roark, Lori L. Holt

Research suggests that the auditory system rapidly and efficiently encodes the statistical structure of acoustic information through passive exposure. We investigated how exposure to short-term acoustic regularities may change representations and categorization behavior in humans. In Experiment 1, we found that passive exposure to a correlation between two acoustic dimensions had limited influence on similarity-based representations. In Experiment 2, we found that, early in category learning, performance and decision-bound strategies did not differ based on prior exposure. Instead, there were large and persistent differences in performance based on whether the distinction between categories involved a positively sloped or a negatively sloped boundary in the two-dimensional acoustic input space. These experiments demonstrate that short-term passive exposure to acoustic regularities has limited impact on perceptual representations and behavior, and that other perceptual biases may place stronger constraints on the course of learning.
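The category structure described above can be sketched in a few lines: stimuli are points in a two-dimensional acoustic space, category membership is defined by a linear boundary with positive or negative slope, and a unidimensional rule (attending to only one dimension) is one of the suboptimal decision-bound strategies such analyses look for. The dimensions, units, and numbers are illustrative assumptions, not the study's stimuli:

```python
import numpy as np

rng = np.random.default_rng(2)

def sample_stimuli(n, slope):
    # Stimuli uniform in a 2-D acoustic space (two arbitrary dimensions);
    # the category is which side of the line dim2 = slope * dim1 a point is on.
    X = rng.uniform(-1, 1, size=(n, 2))
    y = (X[:, 1] > slope * X[:, 0]).astype(int)
    return X, y

X_pos, y_pos = sample_stimuli(200, slope=+1.0)  # positively sloped boundary
X_neg, y_neg = sample_stimuli(200, slope=-1.0)  # negatively sloped boundary

# A unidimensional decision rule ("respond 1 if dim2 > 0") ignores the
# correlation between dimensions; against a diagonal boundary it tops out
# around 75% correct, well short of the optimal two-dimensional bound.
uni_acc_pos = ((X_pos[:, 1] > 0).astype(int) == y_pos).mean()
uni_acc_neg = ((X_neg[:, 1] > 0).astype(int) == y_neg).mean()
```

Comparing listeners' responses against such candidate bounds (unidimensional vs. diagonal) is the standard way decision-bound strategies are inferred from categorization data.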

