Learning Tier-based Strictly 2-Local Languages

Author(s):  
Adam Jardine ◽  
Jeffrey Heinz

The Tier-based Strictly 2-Local (TSL2) languages are a class of formal languages which have been shown to model long-distance phonotactic generalizations in natural language (Heinz et al., 2011). This paper introduces the Tier-based Strictly 2-Local Inference Algorithm (2TSLIA), the first nonenumerative learner for the TSL2 languages. We prove that the 2TSLIA is guaranteed to converge in polynomial time on a data sample whose size is bounded by a constant.

Author(s):  
Kevin McMullin ◽  
Gunnar Ólafur Hansson

This paper shows that the properties of locality observed for patterns of long-distance consonant agreement and disagreement belong to a well-defined and relatively simple class of subregular formal languages (stringsets) called the Tier-based Strictly 2-Local languages, and argues that analyzing them as such has desirable theoretical implications. Specifically, treating the two elements of a long-distance dependency as adjacent segments on the computationally defined notion of a tier allows for a unified account of locality that necessarily extends to the cross-linguistically variable behavior of neutral segments (transparency and blocking). This result is significant in light of the long-standing and persistent problems that long-distance dependencies have raised for phonological theory, with current approaches still predicting several pathological patterns that have little or no empirical support.
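The tier-based account in the two abstracts above can be made concrete with a small sketch: a TSL2 grammar projects a word onto a tier and then bans certain adjacent pairs on that projection, so a dependency that is long-distance in the string is strictly local on the tier. The tier and banned pairs below (ASCII stand-ins for a sibilant-harmony pattern) are hypothetical illustrations, not the grammars analyzed in the papers.

```python
# Sketch of a Tier-based Strictly 2-Local (TSL2) grammar: a word is
# well-formed iff, after erasing all non-tier symbols, no forbidden
# pair of now-adjacent tier symbols remains.

TIER = {"s", "S"}                      # hypothetical tier: sibilants
FORBIDDEN = {("s", "S"), ("S", "s")}   # ban disagreeing sibilants on the tier

def project(word, tier):
    """Erase every symbol not on the tier."""
    return [c for c in word if c in tier]

def tsl2_ok(word, tier=TIER, forbidden=FORBIDDEN):
    t = project(word, tier)
    return all((a, b) not in forbidden for a, b in zip(t, t[1:]))

# 'sotas' -> tier projection 'ss': agreeing sibilants, well-formed
# 'sotaS' -> tier projection 'sS': disagreement at any distance, ill-formed
print(tsl2_ok("sotas"))   # True
print(tsl2_ok("sotaS"))   # False
```

Note how transparency falls out for free: the intervening non-tier segments are simply erased by the projection, however many there are.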


Author(s):  
Rohan Pandey ◽  
Vaibhav Gautam ◽  
Ridam Pal ◽  
Harsh Bandhey ◽  
Lovedeep Singh Dhingra ◽  
...  

BACKGROUND The COVID-19 pandemic has uncovered the potential of digital misinformation in shaping the health of nations. The deluge of unverified information that spreads faster than the epidemic itself is an unprecedented phenomenon that has put millions of lives in danger. Mitigating this 'Infodemic' requires strong health messaging systems that are engaging, vernacular, scalable, and effective, and that continuously learn the new patterns of misinformation. OBJECTIVE We created WashKaro, a multi-pronged intervention for mitigating misinformation through conversational AI, machine translation and natural language processing. WashKaro provides the right information, matched against WHO guidelines through AI, and delivers it in the right format in local languages. METHODS We propose (i) an NLP-based AI engine that could continuously incorporate user feedback to improve the relevance of information, (ii) bite-sized audio in the local language to improve penetrance in a country with skewed gender literacy ratios, and (iii) conversational yet interactive AI engagement with users towards increased health awareness in the community. RESULTS A total of 5026 people downloaded the app during the study window; of these, 1545 were active users. Our study shows that 3.4 times more females than males engaged with the app in Hindi, that the relevance of AI-filtered news content doubled within 45 days of continuous machine learning, and that the prudence of the integrated AI chatbot "Satya" increased, demonstrating the usefulness of an mHealth platform for mitigating health misinformation. CONCLUSIONS We conclude that a multi-pronged machine learning application delivering vernacular bite-sized audios and conversational AI is an effective approach to mitigating health misinformation. CLINICALTRIAL Not Applicable


Author(s):  
Kit Fine

A number of philosophers have flirted with the idea of impossible worlds and some have even become enamored of it. But it has not met with the same degree of acceptance as the more familiar idea of a possible world. Whereas possible worlds have played a broad role in specifying the semantics for natural language and for a wide range of formal languages, impossible worlds have had a much more limited role; and there has not even been general agreement as to how a reasonable theory of impossible worlds is to be developed or applied. This chapter provides a natural way of introducing impossible states into the framework of truthmaker semantics and shows how their introduction permits a number of useful applications.


Triangle ◽  
2018 ◽  
pp. 1
Author(s):  
Leonor Becerra-Bonache

This paper is meant to be an introductory guide to Grammatical Inference (GI), i.e., the study of machine learning of formal languages. It is designed for non-specialists in Computer Science, but with a special interest in language learning. It covers basic concepts and models developed in the framework of GI, and tries to point out the relevance of these studies for natural language acquisition.


Author(s):  
Alexis G. Burgess ◽  
John P. Burgess

This chapter examines Saul Kripke's mathematically rigorous, paradox-free treatment of truth for certain formal languages. Kripke adds hints about how his formal construction might model some features of natural language, but his hints steer a path between an inconsistency view and a vindicationist one. The chapter first compares Kripke's notion of truth with that of Alfred Tarski before discussing what Kripke calls the minimum fixed point, the first level where no new sentences get classified as true that were not already so classified at some earlier level. It also considers the ungroundedness of a sentence, along with the concepts of transfinite construction, revision theories, and axiomatic theories of truth.


Author(s):  
Scott Soames

This chapter begins with a discussion of Kripke-style possible worlds semantics. It considers one of the most important applications of possible worlds semantics, the account of counterfactual conditionals given by Robert Stalnaker and David Lewis. It then goes on to examine the work of Richard Montague. Montague specified syntactic rules that generate English, or English-like, structures directly, while pairing each such rule with a truth-theoretic rule interpreting it. This close parallel between syntax and semantics is what makes the languages of classical logic so transparently tractable, and what they were designed to embody. Montague's bold contention is that we do not have to replace natural languages with formal substitutes to achieve such transparency. The same techniques employed to create formal languages can be used to describe natural languages in mathematically revealing ways.


2017 ◽  
Vol 28 (05) ◽  
pp. 583-601 ◽  
Author(s):  
Suna Bensch ◽  
Johanna Björklund ◽  
Martin Kutrib

We introduce and investigate stack transducers, which are one-way stack automata with an output tape. A one-way stack automaton is a classical pushdown automaton with the additional ability to move the stack head inside the stack without altering the contents. For stack transducers, we distinguish between a digging and a non-digging mode. In digging mode, the stack transducer can write on the output tape when its stack head is inside the stack, whereas in non-digging mode, the stack transducer is only allowed to emit symbols when its stack head is at the top of the stack. These stack transducers have a motivation from natural-language interface applications, as they capture long-distance dependencies in syntactic, semantic, and discourse structures. We study the computational capacity for deterministic digging and non-digging stack transducers, as well as for their non-erasing and checking versions. We finally show that even for the strongest variant of stack transducers the stack languages are regular.
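The digging mode described above can be pictured with a toy sketch: a machine pushes its input onto the stack and then walks its stack head down through the stack, emitting symbols without popping, so the stack survives intact. This is a hypothetical illustration of the digging idea only, not a construction from the paper.

```python
# Toy sketch of a one-way stack transducer in "digging" mode: after
# pushing the input, the stack head moves down inside the stack and
# emits each symbol it passes (producing the reversal of the input),
# leaving the stack contents unaltered.

def digging_reverse(inp):
    stack = list(inp)        # phase 1: push the whole input
    out = []
    head = len(stack) - 1    # phase 2: move the head inside the stack,
    while head >= 0:         # emitting as it goes (digging mode)
        out.append(stack[head])
        head -= 1
    return "".join(out)      # the stack itself was never popped

print(digging_reverse("abc"))  # "cba"
```

A non-digging transducer could not emit during phase 2, since its head is below the top of the stack there.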


1989 ◽  
Vol 1 (3) ◽  
pp. 372-381 ◽  
Author(s):  
Axel Cleeremans ◽  
David Servan-Schreiber ◽  
James L. McClelland

We explore a network architecture introduced by Elman (1988) for predicting successive elements of a sequence. The network uses the pattern of activation over a set of hidden units from time-step t−1, together with element t, to predict element t + 1. When the network is trained with strings from a particular finite-state grammar, it can learn to be a perfect finite-state recognizer for the grammar. When the network has a minimal number of hidden units, patterns on the hidden units come to correspond to the nodes of the grammar, although this correspondence is not necessary for the network to act as a perfect finite-state recognizer. We explore the conditions under which the network can carry information about distant sequential contingencies across intervening elements. Such information is maintained with relative ease if it is relevant at each intermediate step; it tends to be lost when intervening elements do not depend on it. At first glance this may suggest that such networks are not relevant to natural language, in which dependencies may span indefinite distances. However, embeddings in natural language are not completely independent of earlier information. The final simulation shows that long distance sequential contingencies can be encoded by the network even if only subtle statistical properties of embedded strings depend on the early information.
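The "perfect finite-state recognizer" that the trained network converges to can be stated concretely: it is just a deterministic transition table driven symbol by symbol. The Reber-style grammar below is a hypothetical stand-in, not the grammar used in the study.

```python
# A hypothetical Reber-style finite-state grammar as a transition table:
# state -> {symbol: next_state}. A string is accepted iff it drives the
# machine from state 0 to the accept state with no dead ends.

GRAMMAR = {
    0: {"B": 1},             # strings begin with B
    1: {"T": 2, "P": 3},
    2: {"S": 2, "X": 4},     # S may loop: a repeatable contingency
    3: {"T": 3, "V": 4},
    4: {"E": 5},             # strings end with E
}
ACCEPT = 5

def recognize(string, grammar=GRAMMAR, accept=ACCEPT):
    state = 0
    for sym in string:
        if sym not in grammar.get(state, {}):
            return False     # dead end: no transition on this symbol
        state = grammar[state][sym]
    return state == accept

print(recognize("BTSSXE"))   # True
print(recognize("BXE"))      # False
```

An Elman network trained on strings from such a grammar learns, in effect, to carry the current state in its hidden-unit pattern from one time-step to the next.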


2002 ◽  
Vol 3 (4) ◽  
pp. 521-541 ◽  
Author(s):  
Robert Givan ◽  
David Mcallester

1988 ◽  
Vol 53 (4) ◽  
pp. 1009-1026 ◽  
Author(s):  
J. P. Ressayre

Abstract: (i) We show for each context-free language L that, by considering each word of L as a structure in a natural way, one turns L into a finite union of classes which satisfy a finitary analog of the characteristic properties of complete universal first-order classes of structures equipped with elementary embeddings. We show this to hold for a much larger class of languages which we call free local languages. (ii) We define local languages, a class of languages between free local and context-sensitive languages. Each local language L has a natural extension L∞ to infinite words, and we prove a series of "pumping lemmas", analogs for each local language L of the "uvxyz theorem" of context-free languages: they relate the existence of large words in L or L∞ to the existence of infinite "progressions" of words included in L, and they imply the decidability of various questions about L or L∞. (iii) We show that the pumping lemmas of (ii) are independent of strong axioms, ranging from Peano arithmetic to ZF + Mahlo cardinals. We hope that these results are useful for a model-theoretic approach to the theory of formal languages.
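The "uvxyz theorem" invoked above is the classical context-free pumping lemma: every sufficiently long word of a context-free language factors as uvxyz with vy nonempty, such that u vⁱ x yⁱ z stays in the language for every i. A minimal sketch for the textbook language L = { aⁿbⁿ }, with a factorization chosen by hand for illustration:

```python
# Pumping-lemma (uvxyz) illustration for the context-free language
# L = { a^n b^n }: pumping v and y in lockstep keeps the word in L.

def in_L(w):
    n = len(w) // 2
    return w == "a" * n + "b" * n

def pump(u, v, x, y, z, i):
    return u + v * i + x + y * i + z

# For w = aaabbb, choose v = "a" and y = "b" straddling the midpoint.
u, v, x, y, z = "aa", "a", "", "b", "bb"
assert u + v + x + y + z == "aaabbb"

for i in range(4):
    w = pump(u, v, x, y, z, i)
    print(w, in_L(w))        # every pumped word remains in L
```

The "progressions" of words in the abstract generalize exactly this picture: one witness word guarantees an infinite family inside the language.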

