The induction of implicit categories in an artificial language

2001 ◽  
Author(s):  
Roman Taraban
2014 ◽  
Vol 30 (3) ◽  
pp. 231-237 ◽  
Author(s):  
Markus Quirin ◽  
Regina C. Bode

Self-report measures for the assessment of trait or state affect are typically biased by social desirability or self-delusion. The present work provides an overview of research using a recently developed measure of the automatic activation of cognitive representations of affective experience, the Implicit Positive and Negative Affect Test (IPANAT). In the IPANAT, participants judge the extent to which nonsense words from an alleged artificial language express a number of affective states or traits. The test demonstrates appropriate factorial validity and reliability. We review findings that support its criterion validity and, additionally, present novel variants of this procedure for the assessment of discrete emotions such as happiness, anger, sadness, and fear.


2020 ◽  
Author(s):  
Laetitia Zmuda ◽  
Charlotte Baey ◽  
Paolo Mairano ◽  
Anahita Basirat

It is well known that individuals can identify novel words in a stream of an artificial language using statistical dependencies. While the underlying computations are thought to be similar from one stream to another (e.g., transitional probabilities between syllables), performance is not. According to the "linguistic entrenchment" hypothesis, this is because individuals have prior knowledge about the co-occurrences of elements in speech, and this knowledge intervenes during verbal statistical learning. Previous studies focused on task performance. The goal of the current study is to examine the extent to which prior knowledge impacts metacognition (i.e., the ability to evaluate one's own cognitive processes). Participants were exposed to two different artificial languages. Using a fully Bayesian approach, we estimated an unbiased measure of metacognitive efficiency and compared the two languages in terms of task performance and metacognition. While task performance was higher in one of the languages, metacognitive efficiency was similar in both. In addition, a model assuming no correlation between the two languages accounted for our results better than a model in which correlations were introduced. We discuss the implications of our findings for the computations that underlie the interaction between input and prior knowledge during verbal statistical learning.
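The transitional probabilities mentioned in this abstract are straightforward to compute. The sketch below (not the authors' code; the syllable inventory and word shapes are invented for illustration) shows the standard idea from statistical-learning work: within a trisyllabic "word" the probability of the next syllable given the current one is 1.0, while at word boundaries it drops, which is the cue listeners can exploit to segment the stream.

```python
import random
from collections import Counter

def transitional_probabilities(syllables):
    """TP(y | x) = count(x immediately followed by y) / count(x as a non-final syllable)."""
    pair_counts = Counter(zip(syllables, syllables[1:]))
    first_counts = Counter(syllables[:-1])
    return {(x, y): c / first_counts[x] for (x, y), c in pair_counts.items()}

# Hypothetical artificial language: three trisyllabic words, concatenated in
# random order with no pauses, as in typical statistical-learning streams.
random.seed(0)
words = [["bi", "da", "ku"], ["pa", "go", "la"], ["tu", "pi", "ro"]]
stream = [syl for word in random.choices(words, k=100) for syl in word]

tps = transitional_probabilities(stream)
print(tps[("bi", "da")])  # within-word transition: 1.0
print(tps.get(("ku", "pa"), 0.0))  # across a word boundary: well below 1.0
```

Because word order is randomized, every word-internal transition has TP 1.0 while boundary transitions hover around 1/3, so a learner tracking these statistics can locate word edges.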


2014 ◽  
Vol 41 (S1) ◽  
pp. 64-77 ◽  
Author(s):  
SUSAN GOLDIN-MEADOW

Young children are skilled language learners. They apply their skills to the language input they receive from their parents and, in this way, derive patterns that are statistically related to their input. But being an excellent statistical learner does not explain why children who are not exposed to usable linguistic input nevertheless communicate using systems containing the fundamental properties of language. Nor does it explain why learners sometimes alter the linguistic input to which they are exposed (input from either a natural or an artificial language). These observations suggest that children are prepared to learn language. Our task now, as it was in 1974, is to figure out what they are prepared with – to identify properties of language that are relatively easy to learn, the resilient properties, as well as properties of language that are more difficult to learn, the fragile properties. The new tools and paradigms for describing and explaining language learning that have been introduced into the field since 1974 offer great promise for accomplishing this task.


2020 ◽  
Vol 56 (07) ◽  
pp. 40-46
Author(s):  
Khayala Mugamat Mursaliyeva

The explosion of information and the ever-increasing number of international languages make the modern language situation very complex. The interaction of languages ultimately leads to the creation of international artificial languages that operate in parallel with the world's natural languages. The expansion of interlinguistic issues is a natural consequence of the growing complexity of the linguistic landscape of the modern world. Modern interlinguistics, defined as the field of linguistics that studies international languages as a means of communication, deals with the problem of overcoming the language barrier. The problem of international artificial languages is widely covered in the writings of I.A. Baudouin de Courtenay, V.P. Grigoryev, N.L. Gudskov, E.K. Drezen, A.D. Dulichenko, M.I. Isayev, S.N. Kuznetsov, A.D. Melnikov and many other scholars.
Key words: the concept of natural language, the concept of artificial language, the degree of artificiality of language, the authenticity of language


2021 ◽  
Author(s):  
Sara Finley

The present study explores morphological bootstrapping in cross-situational word learning. Adult, English-speaking participants were exposed to novel words from an artificial language drawn from three semantic categories: fruit, animals, and vehicles. In the Experimental conditions, the final CV syllable was consistent within each category (e.g., /-ke/ for fruits), while in the Control condition, the same endings were assigned to words randomly. After initial training on the morphology under various degrees of referential uncertainty, participants were given a cross-situational word learning task with high referential uncertainty. With poor statistical cues for learning the words across trials, participants were forced to rely on the morphological cues to word meaning. In Experiments 1-3, participants in the Experimental conditions repeatedly outperformed participants in the Control conditions. In Experiment 4, when referential uncertainty was high in both parts of the experiment, there was no evidence of learning or of making use of the morphological cues. These results suggest that learners apply morphological cues to word meaning only once those cues are reliably available.


2020 ◽  
Author(s):  
Merel Muylle ◽  
Sarah Bernolet ◽  
Robert Hartsuiker

Several studies have found cross-linguistic structural priming with various language combinations. Here, we investigated the role of two important domains of language variation: case marking and word order. We varied these features in an artificial language (AL) learning paradigm, using three different AL versions in a between-subjects design. Priming was assessed between Dutch (no case marking, SVO word order) and a) a baseline version with SVO word order, b) a case-marking version, and c) a version with SOV word order. Similar within-language and cross-linguistic priming was found in all versions for transitive sentences, indicating that cross-linguistic structural priming was not hindered. In contrast, for ditransitive sentences we found similar within-language priming in all versions, but no cross-linguistic priming. The finding that cross-linguistic priming is possible between languages that vary in morphological marking or word order is compatible with studies showing cross-linguistic priming between natural languages that differ on these dimensions.


Author(s):  
Tal Linzen ◽  
Gillian Gallagher

There is considerable evidence that speakers show sensitivity to the phonotactic patterns of their language. These patterns can involve specific sound sequences (e.g. the consonant combination b-b) or more general classes of sequences (e.g. two identical consonants). In some models of phonotactic learning, generalizations can only be formed once some of their specific instantiations have been acquired (the specific-before-general assumption). To test this assumption, we designed an artificial language with both general and specific phonotactic patterns, and gave participants different amounts of exposure to the language. Contrary to the predictions of specific-before-general models, the general pattern required less exposure to be learned than did its specific instantiations. These results are most straightforwardly predicted by learning models that learn general and specific patterns simultaneously. We discuss the importance of modeling learners’ sensitivity to the amount of evidence supporting each phonotactic generalization, and show how specific-before-general models can be adapted to accommodate the results.


2017 ◽  
Vol 146 (12) ◽  
pp. 1738-1748 ◽  
Author(s):  
Felix Hao Wang ◽  
Jason D. Zevin ◽  
Toben H. Mintz
