Quantifiers in Natural Language: A Survey of Some Recent Work

Author(s):  
Dag Westerståhl
2017 ◽  
Vol 54 (4) ◽  
pp. 475-488
Author(s):  
MATTHEW McKEEVER

Abstract In this article, I argue that recent work in analytic philosophy on the semantics of names and the metaphysics of persistence supports two theses in Buddhist philosophy, namely the impermanence of objects and a corollary about how referential language works. According to this latter package of views, the various parts of what we call one object (say, King Milinda) possess no unity in and of themselves. Unity comes rather from language, in that we have terms (say, ‘King Milinda’) which stand for all the parts taken together. Objects are mind- (or rather language-)generated fictions. I think this package can be cashed out in terms of two central contemporary views. The first is that there are temporal parts: just as an object is spatially extended by having spatial parts at different spatial locations, so it is temporally extended by having temporal parts at different temporal locations. The second is that names are predicates: rather than standing for any one thing, a name stands for a range of things. The natural language term ‘Milinda’ is not akin to a logical constant, but akin to a predicate. Putting this together, I'll argue that names are predicates with temporal parts in their extension, which parts have no unity apart from falling under the same predicate. ‘Milinda’ is a predicate which has in its extension all Milinda's parts. The result is an interesting and original synthesis of plausible positions in semantics and metaphysics, which makes good sense of a central Buddhist doctrine.
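The name-as-predicate view can be given a minimal set-theoretic sketch: a name is a predicate whose extension is a set of temporal parts, and the parts have no unity beyond satisfying that predicate. The classes and values below are illustrative inventions, not the article's formalism.

```python
# A minimal sketch: model a name as a predicate whose extension is a set
# of temporal parts. All names and structures here are invented for
# illustration, not taken from the article.
from dataclasses import dataclass

@dataclass(frozen=True)
class TemporalPart:
    label: str   # which ordinary object this part is a stage of
    time: int    # the moment at which the part exists

def make_name_predicate(extension):
    # The "name" is just a predicate: true of exactly the parts in its extension.
    return lambda part: part in extension

milinda_parts = {TemporalPart("Milinda", t) for t in range(3)}
is_milinda = make_name_predicate(milinda_parts)

# The parts share no unity beyond falling under the same predicate:
assert all(is_milinda(p) for p in milinda_parts)
assert not is_milinda(TemporalPart("Nagasena", 0))
```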


2015 ◽  
Vol 7 (1) ◽  
Author(s):  
António Teixeira ◽  
José Casimiro Pereira ◽  
Pedro Goucha Francisco ◽  
Nuno Almeida

Automatic translation is usually understood as conversion between human languages. Nevertheless, new forms of translation have emerged in human-machine interaction scenarios. This work presents two examples. The first, from the area of Natural Language Generation, is a data-to-text system in which data stored in a database regarding a medication plan is translated into Portuguese. The second is a system addressing the transmission of information from humans to computers, showing that automatic translation can be useful in developing systems that use voice commands for interaction and have multilingualism as a requirement. The examples presented, part of our recent work, demonstrate the growing range of application areas for automatic translation, an area that has received many valuable contributions from Belinda Maia.
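The data-to-text idea can be sketched as a template that "translates" a database record into a Portuguese sentence. The record schema and wording below are invented for illustration and are not the authors' system.

```python
# A hedged sketch of data-to-text generation: a medication-plan record is
# rendered as a Portuguese sentence. Field names and phrasing are assumptions.
def medication_to_pt(record):
    return (f"Tome {record['dose']} de {record['medicamento']} "
            f"{record['vezes_por_dia']} vezes por dia.")

plan = {"medicamento": "paracetamol", "dose": "500 mg", "vezes_por_dia": 2}
print(medication_to_pt(plan))
# → Tome 500 mg de paracetamol 2 vezes por dia.
```

A real system would add content selection and grammatical agreement on top of such surface templates.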


Author(s):  
Michaela Regneri ◽  
Marcus Rohrbach ◽  
Dominikus Wetzel ◽  
Stefan Thater ◽  
Bernt Schiele ◽  
...  

Recent work has shown that the integration of visual information into text-based models can substantially improve model predictions, but so far only visual information extracted from static images has been used. In this paper, we consider the problem of grounding sentences describing actions in visual information extracted from videos. We present a general-purpose corpus that aligns high-quality videos with multiple natural language descriptions of the actions portrayed in the videos, together with an annotation of how similar the action descriptions are to each other. Experimental results demonstrate that a text-based model of similarity between actions improves substantially when combined with visual information from videos depicting the described actions.
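The kind of combination the abstract describes can be sketched as interpolating a text-based similarity score with a video-based one. The linear weighting and the example scores below are assumptions for illustration, not the paper's model.

```python
# Illustrative sketch: combine text- and video-based action similarity by
# linear interpolation. The weighting scheme is an assumption, not the paper's.
def combined_similarity(text_sim, visual_sim, alpha=0.5):
    """Interpolate between a text similarity score and a visual one."""
    return alpha * text_sim + (1 - alpha) * visual_sim

# Two action descriptions that read differently but look alike on video:
text_sim = 0.2    # e.g. low word overlap between "dice" and "chop"
visual_sim = 0.9  # but near-identical motion in the videos
print(combined_similarity(text_sim, visual_sim))  # ≈ 0.55
```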


2019 ◽  
Vol 119 (2) ◽  
pp. 157-178 ◽  
Author(s):  
Nicholas K Jones

Abstract There are two broad approaches to theorizing about ontological categories. Quineans use first-order quantifiers to generalize over entities of each category, whereas type theorists use quantification on variables of different semantic types to generalize over different categories. Does anything of import turn on the difference between these approaches? If so, are there good reasons to go type-theoretic? I argue for positive answers to both questions concerning the category of propositions. I also discuss two prominent arguments for a Quinean conception of propositions, concerning their role in natural language semantics and apparent quantification over propositions within natural language. It will emerge that even if these arguments are sound, there need be no deep question about Quinean propositions’ true nature, contrary to much recent work on the metaphysics of propositions.


1993 ◽  
Vol 58 (1) ◽  
pp. 314-325 ◽  
Author(s):  
Edward L. Keenan

Abstract Recent work in natural language semantics leads to some new observations on generalized quantifiers. In §1 we show that English quantifiers of type ⟨1, 1⟩ are booleanly generated by their generalized universal and generalized existential members. These two classes also constitute the sortally reducible members of this type. Section 2 presents our main result, the Generalized Prefix Theorem (GPT). This theorem characterizes the conditions under which formulas of the form Q1x1…QnxnRx1…xn and q1x1…qnxnRx1…xn are logically equivalent for arbitrary generalized quantifiers Qi, qi. GPT generalizes, perhaps in an unexpectedly strong form, the Linear Prefix Theorem (appropriately modified) of Keisler & Walkoe (1973).
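Type ⟨1, 1⟩ quantifiers relate two sets (a restrictor and a scope), which lends itself to a small executable sketch. The Python encoding below is ours, for illustration only, and does not reproduce Keenan's boolean-generation result.

```python
# Type <1,1> generalized quantifiers as relations between sets.
# "every" is a generalized universal; "some" and "at_least(n)" are
# generalized existentials in the sense gestured at by the abstract.
def every(A, B):
    return A <= B          # every A is a B: A is a subset of B

def some(A, B):
    return bool(A & B)     # some A is a B: A and B overlap

def at_least(n):
    return lambda A, B: len(A & B) >= n   # a generalized existential family

dogs, barkers = {"fido", "rex"}, {"fido", "rex", "spot"}
assert every(dogs, barkers)
assert some(dogs, barkers)
assert at_least(2)(dogs, barkers)
assert not every(barkers, dogs)
```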


2010 ◽  
Vol 36 (3) ◽  
pp. 341-387 ◽  
Author(s):  
Nitin Madnani ◽  
Bonnie J. Dorr

The task of paraphrasing is inherently familiar to speakers of all languages. Moreover, the task of automatically generating or extracting semantic equivalences for the various units of language—words, phrases, and sentences—is an important part of natural language processing (NLP) and is being increasingly employed to improve the performance of several NLP applications. In this article, we attempt to conduct a comprehensive and application-independent survey of data-driven phrasal and sentential paraphrase generation methods, while also conveying an appreciation for the importance and potential use of paraphrases in the field of NLP research. Recent work done in manual and automatic construction of paraphrase corpora is also examined. We also discuss the strategies used for evaluating paraphrase generation techniques and briefly explore some future trends in paraphrase generation.
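One well-known family of data-driven methods in this space is bilingual pivoting: two phrases are treated as paraphrase candidates if they translate to the same foreign phrase. The toy phrase table below is invented for illustration.

```python
# A minimal sketch of paraphrase extraction by bilingual pivoting.
# The phrase table is a toy invention, not real aligned data.
from collections import defaultdict

phrase_table = [              # (English phrase, pivot/foreign phrase)
    ("under control", "unter kontrolle"),
    ("in check", "unter kontrolle"),
    ("thrown out", "hinausgeworfen"),
]

by_pivot = defaultdict(set)
for en, pivot in phrase_table:
    by_pivot[pivot].add(en)

# English phrases sharing a pivot are paraphrase candidates:
paraphrases = {frozenset(group) for group in by_pivot.values() if len(group) > 1}
print(paraphrases)  # {frozenset({'under control', 'in check'})}
```

Real systems additionally score candidates by translation probabilities rather than treating shared pivots as certain paraphrases.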


2013 ◽  
Vol 846-847 ◽  
pp. 1239-1242
Author(s):  
Yang Yang ◽  
Hui Zhang ◽  
Yong Qi Wang

This paper presents our recent work towards the development of a voice calculator based on speech error correction and natural language processing. The calculator enhances the accuracy of speech recognition by classifying and summarizing recognition errors in the domain of spoken numerical calculation, then constructing a Pinyin-text-mapping library and replacement rules, and combining a priority correction mechanism with a memory correction mechanism for Pinyin-text mapping. For correctly recognized expressions, the calculator uses a recursive-descent parsing algorithm and a synthesized-attribute computing algorithm to calculate the final result, which is output using a TTS engine. The implementation of this voice calculator makes the calculator more humane and intelligent.
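A recursive-descent expression evaluator of the kind the abstract mentions can be sketched briefly. This handles only + - * / and parentheses; the authors' grammar, Pinyin error correction, and TTS layers are not reproduced here.

```python
# A minimal recursive-descent arithmetic evaluator (illustrative sketch).
import re

def evaluate(expr):
    tokens = re.findall(r"\d+\.?\d*|[()+\-*/]", expr)
    pos = 0

    def peek():
        return tokens[pos] if pos < len(tokens) else None

    def take():
        nonlocal pos
        tok = tokens[pos]
        pos += 1
        return tok

    def parse_expr():                 # expr := term (('+'|'-') term)*
        value = parse_term()
        while peek() in ("+", "-"):
            op, rhs = take(), parse_term()
            value = value + rhs if op == "+" else value - rhs
        return value

    def parse_term():                 # term := factor (('*'|'/') factor)*
        value = parse_factor()
        while peek() in ("*", "/"):
            op, rhs = take(), parse_factor()
            value = value * rhs if op == "*" else value / rhs
        return value

    def parse_factor():               # factor := number | '(' expr ')'
        tok = take()
        if tok == "(":
            value = parse_expr()
            take()                    # consume ')'
            return value
        return float(tok)

    return parse_expr()

print(evaluate("3 + 4 * (2 - 1)"))  # 7.0
```

Each parse function returns the value of its subexpression, which is the "synthesized attribute" computation in miniature.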


2020 ◽  
Vol 46 (2) ◽  
pp. 487-497 ◽  
Author(s):  
Malvina Nissim ◽  
Rik van Noord ◽  
Rob van der Goot

Analogies such as man is to king as woman is to X are often used to illustrate the amazing power of word embeddings. Concurrently, they have also been used to expose how strongly human biases are encoded in vector spaces trained on natural language, with examples like man is to computer programmer as woman is to homemaker. Recent work has shown that analogies are in fact not an accurate diagnostic for bias, but this does not mean that they are no longer used, or that their legacy is fading. Instead of focusing on the intrinsic problems of the analogy task as a bias-detection tool, we discuss a series of issues involving implementation as well as subjective choices that might have yielded a distorted picture of bias in word embeddings. We stand by the fact that human biases are present in word embeddings and, of course, by the need to address them. But analogies are not an accurate tool to do so, and the way they have been most often used has exacerbated some possibly non-existent biases and perhaps hidden others. Because they are still widely popular, and some of them have become classics within and outside the NLP community, we deem it important to provide a series of clarifications that should put well-known, and potentially new, analogies into the right perspective.
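The analogy task the abstract critiques is the vector-offset method: answer "a is to b as c is to ?" by maximizing similarity to b − a + c. The tiny hand-made vectors below are illustrative; real work uses trained embeddings. Note the implementation choice, one of those the paper scrutinizes, of excluding the query words from the candidates.

```python
# Toy reproduction of the offset method for analogies. Vectors are invented.
import math

emb = {
    "man":   [1.0, 0.0, 0.1],
    "king":  [1.0, 1.0, 0.1],
    "woman": [0.0, 0.0, 0.1],
    "queen": [0.0, 1.0, 0.1],
}

def cos(u, v):
    dot = sum(x * y for x, y in zip(u, v))
    nu = math.sqrt(sum(x * x for x in u))
    nv = math.sqrt(sum(x * x for x in v))
    return dot / (nu * nv)

def analogy(a, b, c, exclude_inputs=True):
    # Answer "a : b :: c : ?" by maximizing cos(w, b - a + c).
    target = [bb - aa + cc for aa, bb, cc in zip(emb[a], emb[b], emb[c])]
    candidates = emb.keys() - ({a, b, c} if exclude_inputs else set())
    return max(candidates, key=lambda w: cos(emb[w], target))

print(analogy("man", "king", "woman"))  # queen
```

With `exclude_inputs=False`, real embeddings often return one of the query words themselves, which is one reason the exclusion convention can mask what the space actually encodes.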


2006 ◽  
Vol 13 (3) ◽  
pp. 191-233 ◽  
Author(s):  
I. ANDROUTSOPOULOS ◽  
J. OBERLANDER ◽  
V. KARKALETSIS

We present the source authoring facilities of a natural language generation system that produces personalised descriptions of objects in multiple natural languages starting from language-independent symbolic information in ontologies and databases as well as pieces of canned text. The system has been tested in applications ranging from museum exhibitions to presentations of computer equipment for sale. We discuss the architecture of the overall system, the resources that the authors manipulate, the functionality of the authoring facilities, the system's personalisation mechanisms, and how they relate to source authoring. A usability evaluation of the authoring facilities is also presented, followed by more recent work on reusing information extracted from existing databases and documents, and supporting the OWL ontology specification language.


2020 ◽  
Vol 8 ◽  
pp. 621-633
Author(s):  
Lifu Tu ◽  
Garima Lalwani ◽  
Spandana Gella ◽  
He He

Recent work has shown that pre-trained language models such as BERT improve robustness to spurious correlations in the dataset. Intrigued by these results, we find that the key to their success is generalization from a small amount of counterexamples where the spurious correlations do not hold. When such minority examples are scarce, pre-trained models perform as poorly as models trained from scratch. In the extreme minority case, we propose to use multi-task learning (MTL) to improve generalization. Our experiments on natural language inference and paraphrase identification show that MTL with the right auxiliary tasks significantly improves performance on challenging examples without hurting the in-distribution performance. Further, we show that the gain from MTL mainly comes from improved generalization from the minority examples. Our results highlight the importance of data diversity for overcoming spurious correlations.
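The shape of a multi-task objective can be sketched as the main-task loss plus a weighted sum of auxiliary-task losses. The fixed weighting and the example numbers below are our assumptions, not the paper's exact setup.

```python
# Illustrative multi-task learning objective (weighting is an assumption).
def mtl_loss(main_loss, aux_losses, aux_weight=0.5):
    """Total loss = main-task loss + weighted sum of auxiliary-task losses."""
    return main_loss + aux_weight * sum(aux_losses)

# e.g. NLI as the main task, with paraphrase identification as auxiliary:
total = mtl_loss(main_loss=0.8, aux_losses=[0.4, 0.2])
print(total)  # ≈ 1.1
```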

