Perceptual Category
Recently Published Documents


TOTAL DOCUMENTS: 73 (five years: 16)
H-INDEX: 15 (five years: 2)

2022
Author(s): Roland Pusch, Julian Packheiser, Charlotte Koenen, Fabrizio Iovine, Onur Güntürkün

Abstract: Pigeons are classic model animals for the study of perceptual category learning. To achieve a deeper understanding of the cognitive mechanisms of categorization, careful consideration of the stimulus material employed and a thorough analysis of the choice behavior are mandatory. In the present study, we combined “virtual phylogenesis”, an evolutionary algorithm that generates artificial yet naturalistic stimuli termed digital embryos, with a machine learning approach applied to the pigeons’ pecking responses, to gain insight into the animals’ underlying categorization strategies. In a forced-choice procedure, pigeons learned to categorize these stimuli and successfully transferred their knowledge to novel exemplars. We used peck tracking to identify where on the stimulus the animals pecked and investigated whether this behavior was indicative of the pigeons’ choice. Going beyond the classical analysis of the binary choice, we were able to predict the presented stimulus class from pecking location using a k-nearest neighbor classifier, indicating that pecks are related to features of interest. By analyzing error trials with this approach, we further identified potential strategies the pigeons used to discriminate between the stimulus classes. These strategies remained stable during category transfer but differed between individuals, indicating that categorization learning is not limited to a single learning strategy.
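
The peck-location analysis lends itself to a compact illustration. The sketch below shows the general idea of decoding stimulus class from 2D peck coordinates with a k-nearest neighbor classifier; the coordinates are synthetic, and the cluster centers, neighbor count, and cross-validation setup are assumptions for illustration, not the authors' pipeline.

import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

# Hypothetical peck locations (x, y in pixels): pecks on class A stimuli
# cluster around one feature region, pecks on class B around another.
pecks_a = rng.normal(loc=[120.0, 200.0], scale=25.0, size=(200, 2))
pecks_b = rng.normal(loc=[220.0, 140.0], scale=25.0, size=(200, 2))
X = np.vstack([pecks_a, pecks_b])
y = np.array([0] * 200 + [1] * 200)  # 0 = class A, 1 = class B

# If peck location carries class information, cross-validated decoding
# accuracy should exceed the 50% chance level.
clf = KNeighborsClassifier(n_neighbors=5)
scores = cross_val_score(clf, X, y, cv=5)
print(f"decoding accuracy: {scores.mean():.2f} (chance = 0.50)")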


2021
Author(s): Roland Pusch, Julian Packheiser, Charlotte Koenen, Fabrizio Iovine, Onur Güntürkün

Pigeons are classic model animals for the study of perceptual category learning. A theoretical understanding of the cognitive mechanisms of categorization requires careful consideration of the stimulus material employed. Ideally, stimuli should not consist of real-world objects that might be associated with prior experience. The number of exemplars should be theoretically unlimited and the stimuli easy to produce. In addition, the experimenter should have the freedom to produce 2D and 3D versions of the stimuli, and the stimulus set should provide the opportunity to identify the diagnostic elements the animals use. To this end, we used the approach of "virtual phylogenesis" of "digital embryos" to produce two stimulus sets of objects that meet these criteria. In our experiment, pigeons learned to categorize these stimuli in a forced-choice procedure. In addition, we used peck tracking to identify where on the stimulus the animals pecked to signal their choice. Pigeons learned the task and transferred successfully to novel exemplars. Using a k-nearest neighbor classifier, we were able to predict the presented stimulus class from pecking location, indicating that pecks are related to features of interest. Through this approach we further identified potential strategies of the pigeons, namely that they learned either one or both of the categories to discriminate between the stimulus classes. These strategies remained stable during category transfer but differed between individuals, indicating that categorization learning is not limited to a single learning strategy.
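
To make the "virtual phylogenesis" idea concrete, here is a deliberately simplified sketch: two stimulus classes grow by repeated mutation from a common ancestor, so arbitrarily many novel within-class exemplars can be generated. Real digital embryos are produced by simulating embryological growth of 3D shapes; the parameter-vector representation, mutation scheme, and all constants below are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(1)

def mutate(params, sigma=0.1):
    # Offspring = parent plus a small Gaussian mutation.
    return params + rng.normal(0.0, sigma, size=params.shape)

ancestor = rng.normal(size=16)              # hypothetical shape parameters
class_a_root = mutate(ancestor, sigma=0.5)  # lineages diverge early...
class_b_root = mutate(ancestor, sigma=0.5)

# ...then each lineage is expanded into arbitrarily many exemplars that
# share a family resemblance within their class.
class_a = [mutate(class_a_root) for _ in range(20)]
class_b = [mutate(class_b_root) for _ in range(20)]
print(len(class_a), "class A exemplars,", len(class_b), "class B exemplars")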


2020
Author(s): Casey L Roark, Giorgio Paulon, Abhra Sarkar, Bharath Chandrasekaran

Category learning is a fundamental process in human cognition that spans the senses. However, much remains unknown about the mechanisms that support learning in different modalities. In the current study, we directly compared auditory and visual category learning in the same individuals. Thirty participants (22 F; 18-32 years old) completed two unidimensional rule-based category learning tasks in a single day: one with auditory stimuli and another with visual stimuli. We replicated the results in a second experiment with a larger online sample (N = 99, 45 F, 18-35 years old). The categories were identically structured in the two modalities to facilitate comparison. We compared categorization accuracy, decision processes as assessed through drift-diffusion models, and the generalizability of the resulting category representations in a generalization test. We found that individuals learned auditory and visual categories to similar extents and that accuracies were highly correlated across the two tasks. Participants had similar evidence accumulation rates late in learning, but early on had slower rates for visual than for auditory learning. Participants also showed differences in decision thresholds across the modalities, and had more categorical, generalizable representations for visual than for auditory categories. These results suggest that some modality-general cognitive processes support category learning, but also that the modality of the stimuli may affect category learning behavior and outcomes.
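
The drift-diffusion account mentioned above can be illustrated with a few lines of simulation: noisy evidence accumulates at a drift rate v until it reaches a decision boundary a, and lower drift rates yield slower, less accurate decisions. The parameter values below are arbitrary stand-ins, not those fitted to the study's data.

import numpy as np

rng = np.random.default_rng(2)

def simulate_ddm(v, a, dt=0.001, noise=1.0, max_t=3.0):
    # One trial: accumulate noisy evidence at drift rate v until the
    # accumulator crosses +a (correct) or -a (error), or time runs out.
    x, t = 0.0, 0.0
    while abs(x) < a and t < max_t:
        x += v * dt + noise * np.sqrt(dt) * rng.normal()
        t += dt
    return (1 if x > 0 else 0), t

for v in (0.5, 1.5):  # hypothetical drift rates (evidence accumulation)
    trials = [simulate_ddm(v, a=1.0) for _ in range(500)]
    accuracy = np.mean([choice for choice, _ in trials])
    mean_rt = np.mean([rt for _, rt in trials])
    print(f"drift {v}: accuracy {accuracy:.2f}, mean RT {mean_rt:.2f} s")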


2020, Vol 10 (1)
Author(s): Lauri Kangassalo, Michiel Spapé, Tuukka Ruotsalo

Abstract: Brain–computer interfaces enable active communication and the execution of a pre-defined set of commands, such as typing a letter or moving a cursor. However, they have thus far not been able to infer more complex intentions or adapt more complex output based on brain signals. Here, we present neuroadaptive generative modelling, which uses a participant’s brain signals as feedback to adapt a boundless generative model and generate new information matching the participant’s intentions. We report an experiment validating the paradigm in generating images of human faces. In the experiment, participants were asked to focus on specific perceptual categories, such as old or young people, while being presented with computer-generated, photorealistic faces with varying visual features. Their EEG signals associated with the images were then used as a feedback signal to update a model of the user’s intentions, from which new images were generated using a generative adversarial network. A double-blind follow-up, in which participants evaluated the output, shows that neuroadaptive modelling can be utilised to produce images matching the perceptual category features. The approach demonstrates brain-based creative augmentation between computers and humans for producing new information matching the human operator’s perceptual categories.
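
As a rough sketch of the closed loop described above: generate candidates from a latent estimate, score each with a (here simulated) EEG relevance signal, and move the estimate toward relevance-weighted latents before generating again. Everything below is a stand-in that assumes a GAN-style latent space; there is no real EEG classifier or image generator in this toy.

import numpy as np

rng = np.random.default_rng(3)
DIM = 512                       # assumed GAN latent dimensionality

target = rng.normal(size=DIM)   # stands in for the intended category

def eeg_relevance(z):
    # Stand-in for the EEG classifier: scores are higher when the latent
    # is closer to the target category, plus measurement noise.
    return -np.linalg.norm(z - target) + rng.normal(0.0, 1.0)

estimate = np.zeros(DIM)
for _ in range(20):
    # "Generate" candidate images by perturbing the current estimate.
    candidates = estimate + rng.normal(0.0, 1.0, size=(32, DIM))
    scores = np.array([eeg_relevance(z) for z in candidates])
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    # Move the estimate toward relevance-weighted candidate latents.
    estimate = weights @ candidates
print(f"distance to target latent: {np.linalg.norm(estimate - target):.2f}")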


2020, pp. 3-13
Author(s): C.E.R. Edmunds, A.B. Inkster, P.M. Jones, F. Milton, A.J. Wills

Analogical transfer has previously been reported to occur between rule-based, but not information-integration, perceptual category structures (Casale, Roeder, & Ashby, 2012). The current study investigated whether a similar pattern of results would be observed in cross-modality transfer. Participants were trained on either a rule-based or an information-integration structure using visual stimuli. They were then tested on auditory stimuli that had the same underlying abstract category structure. Transfer performance was assessed relative to a control group that did not receive training on the visual stimuli. No cross-modality transfer was found, irrespective of the category structure employed.
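
The distinction between the two category structures is easy to make concrete. In the sketch below, rule-based (RB) categories are separable by a verbalizable rule on a single stimulus dimension, whereas information-integration (II) categories require combining both dimensions across a diagonal boundary. The dimensions and thresholds are illustrative assumptions, not the structures used in the study.

import numpy as np

rng = np.random.default_rng(4)
stimuli = rng.uniform(0.0, 1.0, size=(100, 2))  # e.g. (frequency, duration)

# RB: category follows a verbalizable rule on dimension 0 alone.
rb_labels = (stimuli[:, 0] > 0.5).astype(int)

# II: category follows a diagonal boundary; no single-dimension rule
# separates the classes, so the dimensions must be integrated.
ii_labels = (stimuli[:, 0] - stimuli[:, 1] > 0.0).astype(int)

print("RB class sizes:", np.bincount(rb_labels))
print("II class sizes:", np.bincount(ii_labels))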


2020, Vol 30 (5), pp. 3167-3183
Author(s): Joset A Etzel, Ya’el Courtney, Caitlin E Carey, Maria Z Gehred, Arpana Agrawal, et al.

Abstract: Pattern similarity analyses are increasingly used to characterize the coding properties of brain regions, but relatively few studies have focused on cognitive control processes in FrontoParietal regions. Here, we use the Human Connectome Project (HCP) N-back task functional magnetic resonance imaging (fMRI) dataset to examine individual differences and genetic influences on the coding of working memory load (0-back, 2-back) and perceptual category (Face, Place). Participants were grouped into 105 monozygotic twin, 78 dizygotic twin, 99 nontwin sibling, and 100 unrelated pairs. Activation pattern similarity was used to test the hypothesis that FrontoParietal regions would show higher similarity for same-load conditions, while Visual regions would show higher similarity for same-perceptual-category conditions. The results confirmed this highly robust regional double dissociation in neural coding, which also predicted individual differences in behavioral performance. In pair-based analyses, anatomically selective genetic relatedness effects were observed: relatedness predicted greater activation pattern similarity in FrontoParietal regions only for load coding and in Visual regions only for perceptual coding. Further, in related pairs, the similarity of load coding in FrontoParietal regions was uniquely associated with behavioral performance. Together, these results highlight the power of task fMRI pattern similarity analyses for detecting key coding and heritability features of brain regions.
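
The logic of the pattern similarity test is straightforward to sketch: correlate voxel-wise activation patterns between condition pairs, then compare same-load to same-category similarity. The synthetic patterns below build in a load signal stronger than the category signal, mimicking a FrontoParietal-like region; none of the values correspond to the HCP data or the authors' pipeline.

import numpy as np

rng = np.random.default_rng(5)
VOXELS = 200

def pattern(load_signal, category_signal):
    # Synthetic voxel pattern = load signal + category signal + noise.
    return load_signal + category_signal + rng.normal(0.0, 1.0, VOXELS)

# Load signal dominates category signal: a FrontoParietal-like region.
load = {"0back": rng.normal(0.0, 1.0, VOXELS),
        "2back": rng.normal(0.0, 1.0, VOXELS)}
cat = {"face": rng.normal(0.0, 0.3, VOXELS),
       "place": rng.normal(0.0, 0.3, VOXELS)}
patterns = {(l, c): pattern(load[l], cat[c]) for l in load for c in cat}

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

same_load = corr(patterns[("2back", "face")], patterns[("2back", "place")])
same_cat = corr(patterns[("2back", "face")], patterns[("0back", "face")])
print(f"same-load similarity:     {same_load:.2f}")
print(f"same-category similarity: {same_cat:.2f}")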

