A Wide-Reflective-Equilibrium Conception of Reconstructive Formalization

2014 ◽  
Vol 17 (1) ◽  
pp. 130-151
Author(s):  
Winfried Löffler

I propose that a logical formalization of a natural language text (especially an argument) may be regarded as adequate if the following three groups of beliefs can be integrated into a wide reflective equilibrium: (1) our initial, spontaneous beliefs about the structure and logical quality of the text; (2) our beliefs about its structure and logical quality as reflected in the proposed formalization, and (3) our background beliefs about the original text’s author, his thought and other contextually relevant factors. Unlike a good part of the literature, I stress the indispensable role of initial beliefs in achieving such a wide reflective equilibrium. In the final sections I show that my approach does not succumb to undue subjectivism or the mere perpetuation of prejudice. The examples I use to illustrate my claims are chiefly taken from Anselm’s Proslogion 2–3 and the various attempts to formalize these texts.

2020 ◽  
Vol 9 (1) ◽  
pp. 19-41
Author(s):  
Manfred Stede

Argumentation mining is a subfield of Computational Linguistics that aims (primarily) at automatically finding arguments and their structural components in natural language text. We provide a short introduction to this field, intended for an audience with a limited computational background. After explaining the subtasks involved in this problem of deriving the structure of arguments, we describe two other applications that are popular in computational linguistics: sentiment analysis and stance detection. From the linguistic viewpoint, they concern the semantics of evaluation in language. In the final part of the paper, we briefly examine the roles that these two tasks play in argumentation mining, both in current practice and in possible future systems.
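Sentiment analysis, one of the two applications the abstract mentions, can be illustrated by its simplest form: lexicon-based polarity scoring. The lexicons below are toy examples invented for illustration; real systems rely on large polarity resources or trained classifiers.

```python
# Toy sentiment lexicons (illustrative only; real systems use
# large curated polarity lexicons or machine-learned models).
POSITIVE = {"good", "great", "convincing", "sound"}
NEGATIVE = {"bad", "weak", "flawed", "unconvincing"}

def polarity(text: str) -> int:
    """Score a text as (# positive words) - (# negative words)."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
```

A positive score suggests favorable evaluation, a negative score unfavorable; stance detection extends this idea by asking what the evaluation is directed at.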


Information ◽  
2018 ◽  
Vol 9 (12) ◽  
pp. 294 ◽  
Author(s):  
William Teahan

A novel compression-based toolkit for modelling and processing natural language text is described. The design of the toolkit adopts an encoding perspective: applications are considered to be problems in searching for the best encoding of different transformations of the source text into the target text. This paper describes a two-phase 'noiseless channel model' architecture that underpins the toolkit, which models text processing as lossless communication down a noise-free channel. The transformation and encoding performed in the first phase must be both lossless and reversible. The role of the second phase, verification and decoding, is to verify the correctness of the communication of the target text produced by the application. This paper argues that this encoding approach has several advantages over the decoding approach of the standard noisy channel model. The concepts abstracted by the toolkit's design are explained, together with details of the library calls. Pseudo-code is also given for the algorithms behind the applications that the toolkit implements, including encoding, decoding, classification, training (model building), parallel sentence alignment, word segmentation and language segmentation. Experimental results, implementation details, memory usage and execution speeds are also discussed for these applications.
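The two-phase architecture described above can be sketched as a round trip: phase one applies a lossless, reversible encoding, and phase two decodes and verifies that nothing was lost. This is only a minimal illustration of the noiseless-channel idea; zlib stands in for the toolkit's compression-based language models, which the sketch does not attempt to reproduce.

```python
import zlib

def encode(source: str) -> bytes:
    """Phase 1: a lossless, reversible transformation and encoding
    (zlib compression used here as a stand-in for the toolkit's
    compression-based models)."""
    return zlib.compress(source.encode("utf-8"))

def verify_and_decode(encoded: bytes, original: str) -> str:
    """Phase 2: decode and verify that the communication down the
    channel was indeed noise-free, i.e. the decoded text matches
    the original exactly."""
    decoded = zlib.decompress(encoded).decode("utf-8")
    assert decoded == original, "channel was not noise-free"
    return decoded
```

Because both phases are lossless, `verify_and_decode(encode(t), t)` must always return `t`; any mismatch signals a bug in the transformation rather than channel noise.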


Author(s):  
Matheus C. Pavan ◽  
Vitor G. Santos ◽  
Alex G. J. Lan ◽  
Joao Martins ◽  
Wesley Ramos Santos ◽  
...  

2012 ◽  
Vol 30 (1) ◽  
pp. 1-34 ◽  
Author(s):  
Antonio Fariña ◽  
Nieves R. Brisaboa ◽  
Gonzalo Navarro ◽  
Francisco Claude ◽  
Ángeles S. Places ◽  
...  

Author(s):  
S.G. Antonov

The article discusses aspects of applying the word forms of natural language text to solving the error-correction problem. The merits and demerits of two known approaches, deterministic and probability-based, are discussed. The construction principles of the natural language corpus used in the probabilistic approach are described. The article concludes that these approaches must be used in combination, depending on the properties of the texts.
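The probability-based approach the abstract refers to can be illustrated with a minimal word-frequency corrector in the style of classic spelling correction: generate candidate word forms within one edit of the misspelling and pick the one most frequent in a corpus. The toy corpus below is invented for illustration; it is not the corpus the article describes.

```python
from collections import Counter

# A toy corpus stands in for a real natural-language corpus;
# practical systems build frequency tables from large text collections.
corpus = "the cat sat on the mat and the cat ran".split()
freq = Counter(corpus)

def edits1(word: str) -> set[str]:
    """All strings one edit (delete/replace/insert/transpose) away."""
    letters = "abcdefghijklmnopqrstuvwxyz"
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [l + r[1:] for l, r in splits if r]
    replaces = [l + c + r[1:] for l, r in splits if r for c in letters]
    inserts = [l + c + r for l, r in splits for c in letters]
    transposes = [l + r[1] + r[0] + r[2:] for l, r in splits if len(r) > 1]
    return set(deletes + replaces + inserts + transposes)

def correct(word: str) -> str:
    """Return the known word form with the highest corpus frequency,
    i.e. the most probable correction; leave unknown words unchanged."""
    if word in freq:
        return word
    candidates = [w for w in edits1(word) if w in freq] or [word]
    return max(candidates, key=lambda w: freq[w])
```

A deterministic approach would instead apply fixed rewrite rules; as the article concludes, the two are complementary and their usefulness depends on the properties of the texts.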


2022 ◽  
Vol 40 (1) ◽  
pp. 1-43
Author(s):  
Ruqing Zhang ◽  
Jiafeng Guo ◽  
Lu Chen ◽  
Yixing Fan ◽  
Xueqi Cheng

Question generation is an important yet challenging problem in Artificial Intelligence (AI), which aims to generate natural and relevant questions from various input formats, e.g., natural language text, structured databases, knowledge bases, and images. In this article, we focus on question generation from natural language text, which has received tremendous interest in recent years due to widespread applications such as data augmentation for question answering systems. Over the past decades, many different question generation models have been proposed, from traditional rule-based methods to advanced neural network-based methods. Given the large variety of research works proposed, we believe it is the right time to summarize the current status, learn from existing methodologies, and gain some insights for future development. In contrast to existing reviews, in this survey we try to provide a more comprehensive taxonomy of question generation tasks from three different perspectives, i.e., the type of the input context text, the target answer, and the generated question. We take a deep look into existing models from different dimensions to analyze their underlying ideas, major design principles, and training strategies. We compare these models through benchmark tasks to obtain an empirical understanding of the existing techniques. Moreover, we discuss what is missing in the current literature and what the promising and desired future directions are.
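The traditional rule-based end of the spectrum the abstract mentions can be illustrated with a single toy transformation rule: rewrite a declarative sentence of the form "<subject> is <complement>." into a "What is ...?" question. The rule and example sentence are invented for illustration; real rule-based systems use rich syntactic patterns, and neural models learn such transformations from data.

```python
import re

def rule_based_question(sentence: str) -> str:
    """Toy question-generation rule: '<subject> is <complement>.'
    becomes 'What is <subject>?'. Sentences that do not match the
    pattern are returned unchanged."""
    m = re.match(r"(.+?) is (.+)\.", sentence)
    if m:
        subject = m.group(1)
        return f"What is {subject}?"
    return sentence
```

For instance, `rule_based_question("Photosynthesis is the process plants use to make food.")` yields a question whose answer span is the original complement, which is why such question-answer pairs are useful for data augmentation in question answering systems.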

