artificial languages
Recently Published Documents

TOTAL DOCUMENTS: 161 (FIVE YEARS: 34)
H-INDEX: 14 (FIVE YEARS: 2)

2021 ◽ Author(s): Georgia Loukatou, Sabine Stoll, Damián Ezequiel Blasi, Alejandrina Cristia

How can infants detect where words or morphemes start and end in the continuous stream of speech? Previous computational studies have investigated this question mainly for English, where morpheme and word boundaries are often isomorphic. Yet in many languages, words are often multimorphemic, such that word and morpheme boundaries do not align. Our study employed corpora of two languages that differ in the complexity of inflectional morphology, Chintang (Sino-Tibetan) and Japanese (in Experiment 1), as well as corpora of artificial languages ranging in morphological complexity, as measured by the ratio and distribution of morphemes per word (in Experiments 2 and 3). We used two baselines and three conceptually diverse word segmentation algorithms, two of which rely purely on sublexical information using distributional cues, and one that builds a lexicon. The algorithms’ performance was evaluated on both word- and morpheme-level representations of the corpora. Segmentation results were better for the morphologically simpler languages than for the morphologically more complex languages, in line with the hypothesis that languages with greater inflectional complexity could be more difficult to segment into words. We further show that the effect of morphological complexity is relatively small, compared to that of algorithm and evaluation level. We therefore recommend that infant researchers look for signatures of the different segmentation algorithms and strategies, before looking for differences in infant segmentation landmarks across languages varying in complexity.
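The abstract does not name the segmentation algorithms it compares, but a standard example of a purely sublexical, distributional-cue segmenter of the kind described is transitional-probability segmentation, which posits word boundaries at local dips in the probability of one syllable following another. The sketch below is a minimal, hypothetical illustration in Python (toy syllabified corpus, no smoothing), not the study's actual implementation.

```python
from collections import Counter

def train_transitional_probabilities(utterances):
    """Estimate P(next syllable | current syllable) from adjacent pairs.

    `utterances` are lists of syllables with word boundaries removed,
    e.g. [["pa", "do", "ti", "go", "la", "bu"], ...] (toy input).
    """
    bigrams, unigrams = Counter(), Counter()
    for utt in utterances:
        for a, b in zip(utt, utt[1:]):
            bigrams[(a, b)] += 1
            unigrams[a] += 1
    return {(a, b): n / unigrams[a] for (a, b), n in bigrams.items()}

def segment(utterance, tp):
    """Posit a boundary wherever transitional probability hits a local minimum."""
    probs = [tp.get((a, b), 0.0) for a, b in zip(utterance, utterance[1:])]
    words, current = [], [utterance[0]]
    for i in range(1, len(utterance)):
        left = probs[i - 1]                                 # TP into syllable i
        prev_p = probs[i - 2] if i >= 2 else float("inf")
        next_p = probs[i] if i < len(probs) else float("inf")
        if left < prev_p and left < next_p:                 # local dip -> boundary
            words.append(current)
            current = []
        current.append(utterance[i])
    words.append(current)
    return words

if __name__ == "__main__":
    # Toy corpus built from three recurring "words": pa-do, ti-go, la-bu
    corpus = [["pa", "do", "ti", "go", "la", "bu"],
              ["ti", "go", "pa", "do", "la", "bu"],
              ["la", "bu", "pa", "do", "ti", "go"],
              ["pa", "do", "la", "bu", "ti", "go"]]
    tp = train_transitional_probabilities(corpus)
    print(segment(["pa", "do", "ti", "go", "la", "bu"], tp))
    # -> [['pa', 'do'], ['ti', 'go'], ['la', 'bu']]
```

On this toy corpus the two-syllable "words" are recovered because within-word transitions remain highly predictable while transitions across word boundaries vary, producing the local dips the segmenter keys on.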


Entropy ◽ 2021 ◽ Vol 23 (10) ◽ pp. 1335 ◽ Author(s): Shane Steinert-Threlkeld

While the languages of the world vary greatly, they also exhibit systematic patterns. Semantic universals are restrictions on the variation in meaning exhibited cross-linguistically (e.g., that, in all languages, expressions of a certain type can only denote meanings with a certain special property). This paper pursues an efficient communication analysis to explain the presence of semantic universals in a domain of function words: quantifiers. Two experiments measure how well languages do in optimally trading off between the competing pressures of simplicity and informativeness. First, we show that artificial languages which more closely resemble natural languages are more optimal. Then, we introduce information-theoretic measures of degrees of semantic universals and show that these are not correlated with optimality in a random sample of artificial languages. These results suggest that efficient communication shapes semantic typology in both content and function word domains, and that semantic universals may not stand in need of independent explanation.
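The abstract does not spell out how optimality is scored, but efficient-communication analyses of this kind typically place each language in a simplicity/informativeness plane and score it by how close it lies to the Pareto frontier of that trade-off. The sketch below is a minimal illustration under assumed, normalised toy cost values; it is not the paper's actual measure.

```python
import math

def pareto_frontier(points):
    """Keep the points not dominated on (complexity, communicative cost);
    lower is better on both axes."""
    return [p for p in points
            if not any(q[0] <= p[0] and q[1] <= p[1] and q != p for q in points)]

def optimality(lang, frontier):
    """Score a language by closeness to the estimated frontier.

    Assumes both costs are normalised to [0, 1], so a score of 1.0 means
    the language sits on the frontier (an illustrative choice of metric).
    """
    return 1.0 - min(math.dist(lang, q) for q in frontier)

if __name__ == "__main__":
    # (complexity, communicative cost) for a handful of hypothetical languages
    langs = [(0.2, 0.9), (0.4, 0.5), (0.7, 0.2), (0.9, 0.6), (0.5, 0.8)]
    frontier = pareto_frontier(langs)
    for lang in langs:
        print(lang, round(optimality(lang, frontier), 3))
```

Languages on the estimated frontier score 1.0; the others are penalised by their Euclidean distance to the nearest non-dominated point, which is one common way of operationalising "how optimally a language trades off simplicity against informativeness."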


Author(s): Anna Bajer

Abstract The article discusses an attempt to understand source code through philosophical hermeneutics guided by language. Based on a confrontation between H.-G. Gadamer’s and Paul Ricoeur’s philosophy, our main goal is to search for the essence of source code in language. Thus, a closer look is taken at cultural symbols, natural language, and artificial languages. In particular, the problems of abstraction, linguistic community, self-forgetfulness, the vitality of formal languages, and the display of individuality are discussed. This is where the cultural layer of code can be traced; hence we may find our world-view to be verbal in nature. In line with the Critical Code Studies approach, in this article the source code is treated as text. Because of its complexity, the issue should be studied by combining philosophical inquiry with computer science knowledge. Hence, the perspective developed here goes back to origins and provides a philosophical foundation for Critical Code Studies thinking. The article presents academics with a philosophical challenge: how to understand source code by adapting a philosophy that rejects artificiality. Through philosophical reflection, source code gains additional meaning and experiences an increase in being. Understanding happens in language, which is realized as discourse.


2021 ◽ Vol 12 ◽ Author(s): Tingyu Huang, Youngah Do

This study investigates the hypothesis that tone alternation directionality becomes a basis of structural bias for tone alternation learning, where “structural bias” refers to a tendency to prefer uni-directional tone deletions to bi-directional ones. Two experiments were conducted. In the first, Mandarin speakers learned three artificial languages: one with bi-directional tone deletions, one with uni-directional, left-dominant deletions, and one with uni-directional, right-dominant deletions. The results showed a learning bias toward uni-directional, right-dominant patterns. Because Mandarin tone sandhi is right-dominant, while Cantonese tone change is lexically restricted and shows no directionality asymmetry, a follow-up experiment trained Cantonese speakers on either left- or right-dominant deletions to test whether the right-dominant preference was due to L1 transfer from Mandarin. The results of this experiment also showed a learning bias toward right-dominant patterns. We argue that structural simplicity affects tone deletion learning, but that this simplicity must be grounded in phonetic factors, such as syllables’ contour-tone-bearing ability. The experimental results are consistent with the findings of a survey of directionality in another type of tone alternation, namely tone sandhi, across 17 Chinese varieties. This suggests that the directionality asymmetry found across different tone alternations reflects a phonetically grounded structural learning bias.


2021 ◽ pp. 206-255 ◽ Author(s): Stefano Evangelista

This chapter explores the relationship between the proliferation of artificial languages and literary cosmopolitanism at the turn of the century: both strove to promote ideas of world citizenship, universal communication, and peaceful international relations. The two most successful artificial languages of this period, Volapük and Esperanto, employed literature, literary translation, and the periodical medium to create a new type of cosmopolitan literacy intended to quench divisive nationalisms and to challenge Herder’s theories on the link between national language and individual identity. Starting with Henry James’s lampooning of Volapük in his short story ‘The Pupil’ (1891), the chapter charts the uneasy relationship between literature and artificial language movements. Ludwik L. Zamenhof, the creator of Esperanto, stressed the importance of literary translation for his utopian ideal and used original literature to explore the complex affect of his cosmopolitan identity. The chapter closes with an analysis of the growth of the Esperanto movement in turn-of-the-century Britain, focusing on its overlap with literary, artistic, and radical circles, on contributions by Max Müller, W. T. Stead, and Felix Moscheles, and on the 1907 Cambridge Esperanto World Congress.


2021 ◽ pp. 1-11 ◽ Author(s): Youngah Do, Shannon Mooney

Abstract This article examines whether children alter a variable phonological pattern in an artificial language towards a phonetically natural form. We address the acquisition of a variable rounding harmony pattern through the use of two artificial languages: one with a dominant harmony pattern, and another with a dominant non-harmony pattern. Overall, children favor the harmony pattern in their production of the languages. In the language where harmony is non-dominant, children's subsequent production entirely reverses the pattern so that harmony predominates. This differs starkly from adults. Our results parallel the regularization found in children's learning of morphosyntactic variation, suggesting a role for naturalness in variable phonological learning.


2021 ◽ Vol 64 (3) ◽ pp. 854-869 ◽ Author(s): Jonah Katz, Michelle W. Moore

Purpose: The aim of the study was to investigate the effects of specific acoustic patterns on word learning and segmentation in 8- to 11-year-old children and in college students.
Method: Twenty-two children (ages 8;2–11;4 [years;months]) and 36 college students listened to synthesized “utterances” in artificial languages consisting of six iterated “words,” which followed either a phonetically natural lenition–fortition pattern or an unnatural (cross-linguistically unattested) antilenition pattern. A two-alternative forced-choice task tested whether they could discriminate between occurring and nonoccurring sequences. Participants were exposed to both languages, counterbalanced for order across subjects, in sessions spaced at least 1 month apart.
Results: Children showed little evidence of learning in either the phonetically natural or the unnatural condition, nor evidence of differences in learning across the two conditions. Adults showed the predicted (and previously attested) interaction between learning and phonetic condition: the phonetically natural language was learned better. The adults also showed a strong effect of session: subjects performed much worse during the second session than the first.
Conclusions: School-age children not only failed to demonstrate the phonetic asymmetry shown by adults in previous studies but also failed to show strong evidence of any learning at all. The fact that the phonetic asymmetry (and the general learning effect) was replicated with adults suggests that the child result is not due to inadequate stimuli or procedures. The strong carryover effect for adults also suggests that they retain knowledge about the sound patterns of an artificial language for over a month, longer than has been reported in laboratory studies of purely phonetic/phonological learning.
Supplemental Material: https://doi.org/10.23641/asha.13641284


2021 ◽ pp. 253-273 ◽ Author(s): Olga Burenina-Petrova

In the history of culture, projects of artificial languages were mainly associated with the search for some universal and, if possible, ideal means of communication, as evidenced, in particular, by the projects of René Descartes, John Wilkins, Johann Martin Schleyer, Ludwik Zamenhof, Edgar de Wahl, Jacob Linzbach, and others. In the late nineteenth and early twentieth centuries, not only scientists but also science fiction writers, the first of whom was H.G. Wells, offered illustrations and sketches of fictional artificial languages. The essay mainly examines cases of artificial languages employed for interplanetary communication in Russian science fiction novels (“The Red Star” by Alexander Bogdanov and “Aelita” by Alexey Tolstoy). In addition, it covers Konstantin Tsiolkovsky’s experiments with inventing an interplanetary language in a number of popular science and fiction works, as well as Wolf Gordin’s work on the pan-methodological language AO. In line with Roland Barthes’s philosophical ideas about the discourse of power, and considering two types of sociolects and, accordingly, two types of languages (encratic and acratic), artificial languages are classified as acratic, since they are usually created in order to confront the mechanisms of power as such. The projects of artificial and artistic (fictional) languages of the early twentieth century not only attempted to find a language of communication between the inhabitants of different planets; they also sought to invent a universal means of language communication that would bring together the people of the East and the West who had been separated by revolutions and wars.

