Exemplar models
Recently Published Documents


TOTAL DOCUMENTS: 52 (five years: 14)

H-INDEX: 9 (five years: 1)

PLoS ONE, 2021, Vol 16 (10), pp. e0259230
Author(s): Adriana Hanulíková

An unresolved issue in social perception concerns the effect of perceived ethnicity on speech processing. Bias-based accounts assume conscious misunderstanding of native speech when a speaker is classified as nonnative, resulting in negative ratings and poorer comprehension. In contrast, exemplar models of socially indexed speech perception suggest that such negative effects arise only when a contextual cue to social identity is misleading, i.e., when ethnicity and speech clash with listeners’ expectations. To address these accounts, and to assess ethnicity effects across different age groups, three non-university populations (N = 172) were primed with photographs of Asian and white European women and asked to repeat and rate utterances spoken in three accents (Korean-accented German, a regional German accent, standard German), all embedded in background noise. In line with exemplar models, repetition accuracy increased when the expected and perceived speech matched, but the effect was limited to the foreign accent and—at the group level—to teens and older adults. In contrast, Asian speakers received the most negative accent ratings across all accents, consistent with a bias-based view, but group distinctions again came into play: the effect was most pronounced in older adults and limited to standard German for teens. Importantly, the effects varied across ages, with younger adults showing no effects of ethnicity in either task. The findings suggest that theoretical contradictions are a consequence of methodological choices, which reflect distinct aspects of social information processing.


2021
Author(s): Sarah Solomon, Anna Schapiro

Concepts contain rich structures that support flexible semantic cognition. These structures can be characterized by patterns of feature covariation: certain clusters of features tend to occur in the same items (e.g., feathers, wings, can fly). Existing computational models demonstrate how this kind of structure can be leveraged to slowly learn the distinctions between categories, on developmental timescales. It is not clear whether and how we leverage feature structure to quickly learn a novel category. We thus investigated how the internal structure of a new category is extracted from experience and what kinds of representations guide this learning. We predicted that humans can leverage feature clusters within an individual category to benefit learning and that this relies on the rapid formation of distributed representations. Novel categories were designed with patterns of feature associations determined by carefully constructed graph structures (Modular, Random, and Lattice). In Experiment 1, a feature inference task using verbal stimuli revealed that Modular categories—containing clusters of reliably covarying features—were more easily learned than non-Modular categories. Experiment 2 replicated this effect using visual categories. In Experiment 3, a temporal statistical learning paradigm revealed that this Modular benefit persisted even when category structure was incidental to the task. We found that a neural network model employing distributed representations was able to account for the effects, whereas prototype and exemplar models could not. The findings constrain theories of category learning and of structure learning more broadly, suggesting that humans quickly form distributed representations that reflect coherent feature structure.
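The kind of feature covariation described above can be made concrete with a toy generator (a hypothetical sketch with invented feature names, not the authors' stimulus code): features within a cluster are included or excluded as a unit, so they covary reliably, while the clusters themselves vary independently across items.

```python
import random

# Hypothetical "Modular" category structure: features within a cluster
# reliably co-occur (e.g., feathers, wings, can fly), while different
# clusters vary independently across items.
CLUSTERS = [
    ("feathers", "wings", "can_fly"),
    ("beak", "lays_eggs", "builds_nests"),
]

def sample_item(rng=random):
    """Generate one category member: each cluster is present or absent
    as a whole, so within-cluster features covary perfectly."""
    item = set()
    for cluster in CLUSTERS:
        if rng.random() < 0.7:  # cluster-level inclusion probability
            item.update(cluster)
    return item

items = [sample_item(random.Random(seed)) for seed in range(10)]
# Within-cluster covariation: "feathers" appears exactly when "wings" does.
print(all(("feathers" in it) == ("wings" in it) for it in items))  # True
```

The actual stimuli were built from carefully constructed graph structures; this sketch only illustrates the core idea that clusters of covarying features, rather than isolated features, give a category its internal structure.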


2021
Author(s): Caitlin Bowman, Dagmar Zeithamova

A major question for the study of learning and memory is how to tailor learning experiences to promote knowledge that generalizes to new situations. Using category learning as a representative domain, the present study tested two factors thought to influence acquisition of conceptual knowledge: the number of training examples (set size) and the similarity of training examples to the category average (set coherence). Across participants, size and coherence of category training sets were varied in a fully crossed design. After training, participants demonstrated the breadth of their category knowledge by categorizing novel examples varying in their distance from the category center. Results showed better generalization following more coherent training sets, even when categorizing items furthest from the category center. There was little effect of set size. We also tested the types of representations underlying categorization decisions by fitting formal prototype and exemplar models. Prototype models posit abstract category representations based on the category’s central tendency, whereas exemplar models posit that categories are represented by individual category members. We show that more participants relied on a prototype strategy following high-coherence training, suggesting that more coherent training sets facilitate extraction of the category average. Together, these results provide strong evidence for the benefit of training on examples that are similar to one another and to the category center.
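The prototype/exemplar contrast can be sketched as two decision rules (a minimal illustration with made-up two-feature stimuli and a simple exponential similarity kernel; the paper's fitted models are more elaborate):

```python
import math
from statistics import mean

def prototype_predict(item, categories):
    """Prototype rule: assign the item to the category whose central
    tendency (the mean of its training examples) is nearest."""
    def prototype(members):
        return [mean(dim) for dim in zip(*members)]
    dists = {label: math.dist(item, prototype(members))
             for label, members in categories.items()}
    return min(dists, key=dists.get)

def exemplar_predict(item, categories, c=1.0):
    """Exemplar rule: assign the item to the category with the greatest
    summed similarity to its individual stored members."""
    sims = {label: sum(math.exp(-c * math.dist(item, m)) for m in members)
            for label, members in categories.items()}
    return max(sims, key=sims.get)

# Toy two-feature categories (hypothetical data).
categories = {
    "A": [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]],
    "B": [[4.0, 4.0], [5.0, 4.0], [4.0, 5.0]],
}
print(prototype_predict([0.5, 0.5], categories))  # A
print(exemplar_predict([0.5, 0.5], categories))   # A
```

The two rules agree on easy cases like this one; they come apart for probes whose summed similarity to individual members conflicts with their distance to the category mean, which is what makes the model comparison diagnostic of the underlying representation.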


Author(s): David Izydorczyk, Arndt Bröder

Exemplar models are often used in research on multiple-cue judgments to describe the underlying process of participants’ responses. In these experiments, participants are repeatedly presented with the same exemplars (e.g., poisonous bugs) and instructed to memorize these exemplars and their corresponding criterion values (e.g., the toxicity of a bug). We propose that there are two possible outcomes when participants judge one of the already learned exemplars in some later block of the experiment: either they have memorized the exemplar and its criterion value and are thus able to recall the exact value, or they have not learned the exemplar and thus have to judge its criterion value as if it were a new stimulus. We argue that, psychologically, the judgments of participants in a multiple-cue judgment experiment are a mixture of these two qualitatively distinct cognitive processes: judgment and recall. However, the cognitive modeling procedure usually applied does not distinguish between these processes and the data generated by them. We investigated potential effects of disregarding this distinction on the parameter recovery and the model fit of one exemplar model. We present results of a simulation as well as a reanalysis of five experimental data sets, showing that the current combination of experimental design and modeling procedure can bias parameter estimates, impair their validity, and negatively affect the fit and predictive performance of the model. We also present a latent-mixture extension of the original model as a possible solution to these issues.
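The proposed two-process account can be sketched as a simple mixture (a hypothetical illustration with an invented recall-probability parameter; the paper's latent-mixture model is specified formally there): with some probability a memorized exemplar's criterion value is recalled exactly, and otherwise the response falls back on a similarity-weighted exemplar judgment, as for a new stimulus.

```python
import math
import random

def exemplar_judgment(probe, memory, c=1.0):
    """Judgment path: similarity-weighted average of the stored
    exemplars' criterion values, treating the probe as a new stimulus."""
    weights = [math.exp(-c * math.dist(probe, cues)) for cues, _ in memory]
    total = sum(weights)
    return sum(w * crit for w, (_, crit) in zip(weights, memory)) / total

def mixture_response(probe, memory, recalled, p_recall, rng=random):
    """With probability p_recall, recall the exact memorized criterion
    value; otherwise produce an exemplar-based judgment."""
    key = tuple(probe)
    if key in recalled and rng.random() < p_recall:
        return recalled[key]
    return exemplar_judgment(probe, memory)

# Two stored exemplars (cue vector, criterion value) -- hypothetical data.
memory = [([0.0, 0.0], 10.0), ([1.0, 1.0], 20.0)]
recalled = {(0.0, 0.0): 10.0}  # only the first exemplar was memorized

print(mixture_response([0.0, 0.0], memory, recalled, p_recall=1.0))  # 10.0
```

Fitting a single exemplar model to responses generated this way conflates the two paths, which is the source of the biased parameter estimates the authors report.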


Author(s): Whitney Chappell, Matthew Kanwit

Learners must develop the ability to perceive linguistic and social meaning in their second language (L2) to interact effectively, but relatively little is known about how learners link social meaning to a single phonetic variable. Using a matched-guise test targeting coda /s/ (realized as [s] or debuccalized [h]), we explore whether L2 Spanish learners identify native speakers’ social characteristics based on phonetic variants. Our results indicate that advanced learners were more sensitive to sociophonetic information: advanced listeners who had completed a phonetics course were significantly more likely to categorize /s/ reducers as Caribbean, and those who had studied abroad in aspirating regions recognized a relationship between coda /s/ and status. To account for the complex interplay among proficiency, explicit instruction, and dialectal exposure in the development of L2 sociophonetic perception, we suggest the union of the L2 Linguistic Perception Model with exemplar models of phonological representation and indexical meaning.


2020, Vol 40 (5-6), pp. 581-584
Author(s): Joshua K. Hartshorne

Ambridge argues that the existence of exemplar models for individual phenomena (words, inflection rules, etc.) suggests the feasibility of a unified, exemplars-everywhere model that eschews abstraction. The argument would be strengthened by a description of such a model. However, none is provided. I show that any attempt to do so would immediately run into significant difficulties – difficulties that illustrate the utility of abstractions. I conclude with a brief review of modern symbolic approaches that address the concerns Ambridge raises about abstractions.


2020, Vol 40 (5-6), pp. 631-635
Author(s): Kathryn D. Schuler, Jordan Kodner, Spencer Caplan

In ‘Against Stored Abstractions,’ Ambridge uses neural and computational evidence to make his case against abstract representations. He argues that storing only exemplars is more parsimonious – why bother with abstraction when exemplar models with on-the-fly calculation can do everything abstracting models can and more – and implies that his view is well supported by neuroscience and computer science. We argue that there is substantial neural, experimental, and computational evidence to the contrary: while both brains and machines can store exemplars, forming categories and storing abstractions is a fundamental part of what they do.


2020, Vol 40 (5-6), pp. 608-611
Author(s): Kyle Mahowald, George Kachergis, Michael C. Frank

Ambridge (2019) calls for exemplar-based accounts of language acquisition. Do modern neural networks such as transformers or word2vec – which have been extremely successful in modern natural language processing (NLP) applications – count? Although these models often have ample parametric complexity to store exemplars from their training data, they also go far beyond simple storage by processing and compressing the input via their architectural constraints. The resulting representations have been shown to encode emergent abstractions. If these models are exemplar-based then Ambridge’s theory only weakly constrains future work. On the other hand, if these systems are not exemplar models, why is it that true exemplar models are not contenders in modern NLP?
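For contrast, a "true" exemplar model in NLP might look like the following: every training sentence is stored verbatim, and a new sentence is labeled by its most similar stored exemplar, with no compression or learned abstraction (a toy sketch with invented data, not taken from the commentary):

```python
import math
from collections import Counter

def bag_of_words(text):
    """Store a sentence as raw word counts -- verbatim, nothing is learned."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    shared = set(a) & set(b)
    num = sum(a[w] * b[w] for w in shared)
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

def exemplar_classify(text, training):
    """Label of the most similar stored exemplar; the 'model' is the data."""
    probe = bag_of_words(text)
    sentence, label = max(training,
                          key=lambda ex: cosine(probe, bag_of_words(ex[0])))
    return label

training = [
    ("the movie was great", "pos"),
    ("the movie was terrible", "neg"),
]
print(exemplar_classify("a great movie", training))  # pos
```

Nearest-neighbor retrieval over stored instances of this kind does exist in NLP but is not competitive on its own, which is the contrast the closing question turns on.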


