The Value of Methodological Deductivism in Argument Construction

2018
Vol 38 (4)
pp. 471-501
Author(s):  
Fábio Perin Shecaira

“Deductivism” is a broad label for various theories that emphasize the importance of deductive argument in contexts of rational discussion. This paper makes a case for a very specific form of deductivism. The paper highlights the dialectical importance of advancing deductively valid arguments (with plausible premises) in natural-language reasoning. Sections 2 and 3 explain the various forms that deductivism has taken. Section 4 makes a case for a particular form of deductivism. Section 5 discusses the value of deductive argument in law. Section 6 concludes and acknowledges critical questions that need to be addressed more fully in future work. 

Author(s):  
Yixin Nie
Yicheng Wang
Mohit Bansal

Success in natural language inference (NLI) should require a model to understand both lexical and compositional semantics. However, through adversarial evaluation, we find that several state-of-the-art models with diverse architectures over-rely on the former and fail to use the latter. Further, this compositionality unawareness is not reflected in standard evaluation on current datasets. We show that removing RNNs from existing models or shuffling input words during training does not cause a large performance loss, despite the explicit removal of compositional information. We therefore propose a compositionality-sensitivity testing setup that analyzes models on natural examples from existing datasets that cannot be solved via lexical features alone (i.e., on which a bag-of-words model assigns high probability to one wrong label), hence revealing the models’ actual compositionality awareness. We show that this setup not only highlights the limited compositional ability of current NLI models, but also differentiates model performance based on design, e.g., separating shallow bag-of-words models from deeper, linguistically grounded tree-based models. Our evaluation setup is an important analysis tool: it complements existing adversarial and linguistically driven diagnostic evaluations and exposes opportunities for future work on evaluating models’ compositional understanding.
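The selection criterion described above can be sketched as follows. Everything here is a toy stand-in: the paper uses a trained bag-of-words classifier over real NLI datasets, whereas this sketch scores entailment by raw word overlap and uses invented examples.

```python
# Toy compositionality-sensitivity filter: keep only examples on which a
# purely lexical, bag-of-words baseline is confidently wrong.

def bow_predict(premise, hypothesis):
    """Toy lexical baseline: a score per NLI label from word overlap alone."""
    p, h = set(premise.lower().split()), set(hypothesis.lower().split())
    overlap = len(p & h) / max(len(h), 1)
    # High overlap -> entailment; low overlap -> contradiction (crude heuristic).
    return {"entailment": overlap,
            "neutral": 0.5 * (1 - abs(2 * overlap - 1)),
            "contradiction": 1 - overlap}

def is_compositionality_sensitive(example, threshold=0.7):
    """True if the bag-of-words baseline puts high probability on a wrong label."""
    probs = bow_predict(example["premise"], example["hypothesis"])
    label, p = max(probs.items(), key=lambda kv: kv[1])
    return label != example["gold"] and p >= threshold

examples = [
    # Word order flips the meaning; lexical overlap is total, so BoW is fooled.
    {"premise": "the dog chased the cat",
     "hypothesis": "the cat chased the dog",
     "gold": "contradiction"},
    # Identical sentences; BoW gets this right, so it is filtered out.
    {"premise": "a man sleeps",
     "hypothesis": "a man sleeps",
     "gold": "entailment"},
]

hard = [e for e in examples if is_compositionality_sensitive(e)]
```

Only the first example survives the filter: a model must use word order, not just vocabulary, to solve it.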


Author(s):  
Toru Sugimoto
Noriko Ito
Shino Iwashita
Michio Sugeno
...

We present a processing model of a natural language interface that accepts task specification texts consisting of more than one sentence. Such an interface enables users to specify complex requests easily as coherent texts, in other words, to write a program in everyday language in order to operate computing systems. Reflecting the characteristics of task specification texts, processing consists of paraphrasing, detection of loop structures, and executable program generation using rhetorical information. The algorithms have been fully implemented in our everyday language programming system, which handles personal email management tasks. In this paper, we explain our processing model using an example from the email management domain, give evaluation results, and discuss the model's effectiveness and future work.
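The loop-detection step can be illustrated with a toy sketch. The specification pattern, the `compile_spec` name, and the email action are invented for illustration; the system's actual algorithms (paraphrasing and rhetorical analysis) are far richer than one regular expression.

```python
# Toy sketch: a specification sentence of the form "For each X, <action> it"
# is recognized as a loop and unrolled into an executable program.
import re

def compile_spec(sentence, items):
    """Return a list of pseudo-instructions, or None if no loop is found."""
    m = re.match(r"for each (\w+), (\w+) it", sentence, re.IGNORECASE)
    if not m:
        return None
    action = m.group(2)
    # One instruction per item the loop ranges over.
    return [f"{action}({item})" for item in items]

program = compile_spec("For each message, archive it", ["m1", "m2"])
```

The point of the sketch is the shape of the mapping: one coherent everyday-language sentence becomes a loop over domain objects.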


2018
Vol 24 (3)
pp. 393-413
Author(s):  
Stella Frank
Desmond Elliott
Lucia Specia

Two studies on multilingual multimodal image description provide empirical evidence on two questions at the core of the task: (i) whether target language speakers prefer descriptions generated directly in their native language over descriptions translated from a different language; (ii) whether images improve human translation of descriptions. These results provide guidance for future work in multimodal natural language processing, first by showing that, on the whole, translations are not distinguished from native language descriptions, and second by delineating and quantifying the information gained from the image during the human translation task.


2020
pp. 095892872097392
Author(s):  
Mary Daly

This review focuses on the concept of care, a concept that has never been more popular as a focus of study. It undertakes a critical review, motivated by the breadth of the field and the lack of coherence and linkages across a diverse literature. The review concentrates first on organizing and reviewing the literature in terms of key focus and, second, on drawing out the strengths and weaknesses of existing work and making suggestions for how future work might proceed in COVID-19 times. While the existing literature offers many insights, some quite basic things need to be reconsidered, not least definition and conceptualization. Defining care as based on the meeting of perceived welfare-related need, I develop it as comprising need, relations/actors, resources, and ideas and values. Each of these dimensions has an inherent disposition towards the study of inequality, and it is possible, either by looking at them individually or all together, to identify care as situated in relations of relative power and inequality. The framework allows a set of critical questions to be posed in relation to COVID-19 and the policies and resources that have been mustered in response.


Author(s):  
Wenfeng Feng
Hankz Hankui Zhuo
Subbarao Kambhampati

Extracting action sequences from texts is challenging, as it requires commonsense inferences based on world knowledge. Although there has been work on extracting action scripts, instructions, navigation actions, etc., these approaches require either that the set of candidate actions be provided in advance, or that action descriptions be restricted to a specific form, e.g., description templates. In this paper we aim to extract action sequences from texts in free natural language, i.e., without any restricted templates and without the set of actions being provided in advance. We propose to extract action sequences from texts based on the deep reinforcement learning framework. Specifically, we view “selecting” or “eliminating” words from texts as “actions”, and texts associated with actions as “states”. We build Q-networks to learn policies for extracting actions and extract plans from the labeled texts. We demonstrate the effectiveness of our approach on several datasets in comparison with state-of-the-art approaches.
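The select/eliminate formulation can be sketched with tabular Q-learning standing in for the paper's Q-networks. The tokens, word-level labels, and reward scheme below are illustrative assumptions, not the paper's setup.

```python
# Toy sketch: each word is a state, the agent chooses "select" or "eliminate",
# and a labeled text supplies the reward. Tabular Q-learning replaces the
# paper's Q-networks for illustration.
import random

ACTIONS = ("select", "eliminate")

def train(tokens, labels, episodes=200, alpha=0.5, gamma=0.9, eps=0.2):
    """labels[i] == "select" marks tokens that belong to the action sequence."""
    rng = random.Random(0)
    q = {(t, a): 0.0 for t in tokens for a in ACTIONS}
    for _ in range(episodes):
        for i, tok in enumerate(tokens):
            # Epsilon-greedy action choice.
            if rng.random() < eps:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda x: q[(tok, x)])
            reward = 1.0 if a == labels[i] else -1.0
            nxt = 0.0
            if i + 1 < len(tokens):
                nxt = max(q[(tokens[i + 1], x)] for x in ACTIONS)
            # Standard Q-learning update.
            q[(tok, a)] += alpha * (reward + gamma * nxt - q[(tok, a)])
    return q

def extract(tokens, q):
    """A word is kept iff the learned policy prefers "select" for it."""
    return [t for t in tokens if q[(t, "select")] > q[(t, "eliminate")]]

tokens = "then pour the water into the bowl".split()
labels = ["eliminate", "select", "eliminate", "select",
          "eliminate", "eliminate", "select"]
q = train(tokens, labels)
plan = extract(tokens, q)
```

After training, the extracted plan keeps the action-bearing words ("pour", "water", "bowl") and drops function words.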


Information
2019
Vol 10 (6)
pp. 205
Author(s):  
Paulo Quaresma
Vítor Beires Nogueira
Kashyap Raiyani
Roy Bayot

Text information extraction is an important natural language processing (NLP) task, which aims to automatically identify, extract, and represent information from text. In this context, event extraction plays a relevant role, allowing actions, agents, objects, places, and time periods to be identified and represented. The extracted information can be represented by specialized ontologies, supporting knowledge-based reasoning and inference processes. In this work, we will describe, in detail, our proposal for event extraction from Portuguese documents. The proposed approach is based on a pipeline of specialized natural language processing tools; namely, a part-of-speech tagger, a named entities recognizer, a dependency parser, semantic role labeling, and a knowledge extraction module. The architecture is language-independent, but its modules are language-dependent and can be built using adequate AI (i.e., rule-based or machine learning) methodologies. The developed system was evaluated with a corpus of Portuguese texts and the obtained results are presented and analysed. The current limitations and future work are discussed in detail.
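The pipeline architecture can be sketched as a chain of stages, each taking and returning a document record. The stage bodies below are trivial placeholders, not the language-dependent tools (tagger, NER, parser, semantic role labeling) the paper actually composes.

```python
# Schematic sketch of a modular extraction pipeline: language-independent
# architecture, language-dependent stages.
from functools import reduce

def run_pipeline(text, stages):
    """Apply each stage in order; every stage takes and returns the doc dict."""
    return reduce(lambda doc, stage: stage(doc), stages, {"text": text})

def tokenize(doc):
    doc["tokens"] = doc["text"].split()
    return doc

def tag_entities(doc):
    # Placeholder NER: capitalized tokens that are not sentence-initial.
    doc["entities"] = [t for i, t in enumerate(doc["tokens"])
                       if i > 0 and t[0].isupper()]
    return doc

def extract_event(doc):
    # Placeholder event frame: first past-tense-looking token as the action.
    verbs = [t for t in doc["tokens"] if t.endswith("ed")]
    doc["event"] = {"action": verbs[0] if verbs else None,
                    "agent": doc["tokens"][0],
                    "place": doc["entities"][0] if doc["entities"] else None}
    return doc

doc = run_pipeline("Maria visited Lisbon", [tokenize, tag_entities, extract_event])
```

Swapping in real Portuguese tools at each stage, without touching `run_pipeline`, is what makes the architecture language-independent while its modules remain language-dependent.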


Author(s):  
David LaVergne
Judith Tiferes
Michael Jenkins
Geoff Gross
Ann Bisantz

Qualitative linguistic data provides unique, valuable information that can only come from human observers. Data fusion systems find it challenging to incorporate this “soft data” as they are primarily designed to analyze quantitative, hard-sensor data with consistent formats and qualified error characteristics. This research investigates how people produce linguistic descriptions of human physical attributes. Thirty participants were asked to describe seven actors’ ages, heights, and weights in two naturalistic video scenes, using both numeric estimates and linguistic descriptors. Results showed that not only were a large number of linguistic descriptors used, but they were also used inconsistently. Only 10% of the 189 unique terms produced were used by four or more participants. Especially for height and weight, we found that linguistic terms are poor devices for transmitting estimated values due to the large and overlapping ranges of numeric estimates associated with each term. Future work should attempt to better define the boundaries of inclusion for more frequently used terms and to create a controlled language lexicon to gauge whether or not that improves the precision of natural language terms.
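The range-overlap problem the authors report can be illustrated as follows. The height estimates are made-up numbers for illustration, not the study's data.

```python
# Sketch: map each linguistic term to the range of numeric estimates reported
# with it; overlapping ranges mean a term cannot pin down a value.

estimates = {  # term -> heights (cm) reported alongside it (hypothetical data)
    "tall":    [178, 183, 190, 175],
    "average": [170, 176, 180],
    "short":   [160, 168, 172],
}

def ranges(d):
    """Collapse each term's estimates to a (min, max) interval."""
    return {term: (min(v), max(v)) for term, v in d.items()}

def overlaps(r1, r2):
    """True if two closed intervals intersect."""
    return r1[0] <= r2[1] and r2[0] <= r1[1]

r = ranges(estimates)
pairs = [(a, b) for a in r for b in r if a < b and overlaps(r[a], r[b])]
```

Here "average" overlaps both neighbors, so knowing a speaker said "average" narrows the height far less than the term suggests, which is the transmission problem the study quantifies.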


Author(s):  
Lei Chen
Yong Zeng

In this paper, a novel approach is proposed to transform a requirement text described in natural language into two UML diagrams: use case and class diagrams. The transformation consists of two steps: from natural language to an intermediate graphic language called the recursive object model (ROM), and from ROM to UML. The ROM diagram corresponding to a text captures the main semantic information implied in the text by modeling the relations between the words in the text. Based on the semantics in the ROM diagram, a set of generation rules is proposed to generate UML diagrams from a ROM diagram. A software prototype, R2U, is presented as a proof of concept for this approach. A case study shows that the proposed approach is feasible. The proposed approach can be applied to requirements modeling in various engineering fields such as software engineering, automotive engineering, and aerospace engineering. Future work is outlined at the end of the paper.
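One generation rule of the kind described might look like this. The triple-list stand-in for ROM and the single rule are simplified assumptions, not the actual ROM notation or the R2U rule set.

```python
# Speculative sketch of one ROM-to-UML generation rule: a verb relating two
# nouns becomes a UML association between two classes.

def rom_to_uml(rom_triples):
    """rom_triples: (subject_noun, relating_verb, object_noun) tuples."""
    classes, associations = set(), []
    for subj, relation, obj in rom_triples:
        classes.update([subj, obj])          # each noun yields a class
        associations.append((subj, relation, obj))  # each verb an association
    return {"classes": sorted(classes), "associations": associations}

rom = [("Customer", "places", "Order"), ("Order", "contains", "Item")]
model = rom_to_uml(rom)
```

A real rule set would also have to distinguish attributes from classes and actors from system elements, which is where the semantics captured in the ROM diagram earns its keep.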


Author(s):  
Tom Williams
Matthias Scheutz

As robots become increasingly prevalent in our society, it becomes increasingly important to endow them with natural language capabilities, including the ability to both understand and generate so-called referring expressions. In recent work, we have sought to enable referring expression understanding capabilities by leveraging the Givenness Hierarchy (GH), which provides an elegant linguistic framework for reasoning about notions of reference in human discourse. This chapter first provides an overview of the GH and discusses previous GH-theoretic approaches to reference resolution. It then describes our own GH-theoretic approach, the GH-POWER algorithm, and suggests future refinements of our algorithm with respect to the theoretical commitments of the GH. Next, the chapter briefly surveys other prominent approaches to reference resolution in robotics, and discusses how these compare to our approach. Finally, it concludes with a discussion of possible directions for future work.
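A GH-theoretic search over tiers of cognitive status can be sketched as follows. The tier names follow the Givenness Hierarchy, but the memory structures and matcher are invented for illustration and do not reproduce the GH-POWER algorithm.

```python
# Sketch: resolve a referring expression by searching candidate referents
# tier by tier, most cognitively accessible status first.

TIERS = ("in_focus", "activated", "familiar")  # most to least accessible

def resolve(description, memory):
    """memory maps each tier name to a list of candidate entity dicts;
    description is a predicate over entities."""
    for tier in TIERS:
        matches = [e for e in memory.get(tier, []) if description(e)]
        if matches:
            return matches[0]  # first match at the most accessible tier wins
    return None

memory = {
    "in_focus":  [{"name": "mug1", "type": "mug", "color": "red"}],
    "activated": [{"name": "mug2", "type": "mug", "color": "blue"}],
}
it = resolve(lambda e: e["type"] == "mug", memory)          # resolves to mug1
blue = resolve(lambda e: e.get("color") == "blue", memory)  # resolves to mug2
```

The ordering encodes the GH intuition that a bare pronoun like "it" should prefer what is already in focus, while a fuller description can reach entities of lower cognitive status.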

