Accessible Visualization via Natural Language Descriptions: A Four-Level Model of Semantic Content

Author(s):  
Alan Lundgard ◽  
Arvind Satyanarayan

Author(s):  
Anton Dries ◽  
Angelika Kimmig ◽  
Jesse Davis ◽  
Vaishak Belle ◽  
Luc de Raedt

The ability to solve probability word problems, such as those found in introductory discrete mathematics textbooks, is an important cognitive and intellectual skill. In this paper, we develop a two-step, end-to-end, fully automated approach for solving such questions that is able to automatically provide answers to exercises about probability formulated in natural language. In the first step, a question formulated in natural language is analysed and transformed into a high-level model specified in a declarative language. In the second step, a solution to the high-level model is computed using a probabilistic programming system. On a dataset of 2160 probability problems, our solver is able to correctly answer 97.5% of the questions given a correct model. On the end-to-end evaluation, we are able to answer 12.5% of the questions (or 31.1% if we exclude examples not supported by design).
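As an illustration of the second step only, the following is a minimal sketch, not the authors' system (which uses a probabilistic programming language): the word problem, the model structure, and the enumeration-based solver below are invented for the example.

```python
from itertools import combinations
from fractions import Fraction

# Hypothetical declarative model extracted from the question:
# "A bag holds 3 red and 2 blue marbles. Two marbles are drawn at
#  random without replacement. What is the probability both are red?"
model = {
    "outcomes": ["red"] * 3 + ["blue"] * 2,  # items in the sample space
    "draw": 2,                               # draw size, without replacement
    "query": lambda drawn: all(c == "red" for c in drawn),
}

def solve(model):
    """Answer the query by enumerating all equally likely draws."""
    draws = list(combinations(range(len(model["outcomes"])), model["draw"]))
    favourable = sum(
        1 for idx in draws
        if model["query"]([model["outcomes"][i] for i in idx])
    )
    return Fraction(favourable, len(draws))

print(solve(model))  # 3/10
```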


Author(s):  
SONGSAK CHANNARUKUL ◽  
SUSAN W. MCROY ◽  
SYED S. ALI

We present a natural language realization component, called YAG, that is suitable for intelligent tutoring systems that use dialog. Dialog imposes unique requirements on a generation component, namely: dialog systems must interact in real-time; they must be capable of producing fragmentary output; and they may be re-deployed in a number of different domains. Our approach to real-time natural language realization combines a declarative, template-based approach for the representation of text structure with knowledge-based methods for representing semantic content. Possible text structures are defined in a declarative language that is easy to understand, maintain, and re-use. A dialog system can use YAG to realize text structures by specifying a template and content from its knowledge base. Content can be specified in one of two ways: (1) as a sequence of propositions along with some control features; or (2) as a set of feature-value pairs. YAG's template realization algorithm realizes text without any search (in contrast to systems that must find rules that unify with a feature structure).
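A minimal sketch of template-based realization in the spirit described above; the template format, template names, and slot names are invented for illustration and are not YAG's actual syntax.

```python
# Minimal template-based realizer: a template is a declarative structure
# whose slots are filled from feature-value pairs, with no search involved.
TEMPLATES = {
    # Hypothetical templates; YAG's real template language differs.
    "inform-grade": "{student}, you scored {score} on the {topic} quiz.",
    "confirm": "So you want to review {topic}, is that right?",
}

def realize(template_name, features):
    """Realize a text structure by direct slot filling (no unification search)."""
    return TEMPLATES[template_name].format(**features)

print(realize("inform-grade",
              {"student": "Alex", "score": "85%", "topic": "probability"}))
# -> Alex, you scored 85% on the probability quiz.
```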


2019 ◽  
Vol 31 (2) ◽  
pp. 571-574
Author(s):  
Ardian Fera

A preposition is a word or set of words that indicates location or some other relationship between a noun or pronoun and other parts of the sentence. It refers to the word or phrase which shows the relationship between one thing and another, linking nouns, pronouns, and phrases to other words in a sentence. Prepositions are abstract words with no concrete meaning; they merely show the relationships between groups of words, and a single preposition can convey many different shades of meaning. The proper interpretation of prepositions is an important issue for automatic natural language understanding. Although the complexity of preposition usage has been argued for and documented by various scholars in linguistics, psycholinguistics, and computational linguistics, very few studies have examined the function of prepositions in natural language processing (NLP) applications. The reason is that prepositions are probably the most polysemous category, so their linguistic realizations are difficult to predict and their cross-linguistic regularities difficult to identify. Prepositions play a major role in the syntactic structure of English, and they often make an essential contribution to sentence meaning by signifying temporal and spatial relationships, as well as abstract relations involving cause and purpose, agent and instrument, manner and accompaniment, support, and much more. They are sensitive linguistic elements, shaped by culture and well known to all members of the same linguistic community. According to cognitive semantics, the figurative senses of a preposition are extended from its spatial senses through conceptual metaphors. In a pedagogical context, it may therefore be useful to draw learners' attention to those aspects of a preposition's spatial sense that are especially relevant for its metaphorization processes. Prepositions impose type restrictions on their arguments, assign thematic roles, and carry semantic content, possibly underspecified; the only difference from other open-class categories such as nouns, verbs, or adjectives is that they have no morphology.


Author(s):  
Patrick Duffley ◽  
Maryse Arseneau

This study investigates temporal and control interpretations with verbs of risk followed by non-finite complements in English. It addresses two questions: Why does the gerund-participle show variation in the temporal relation between the event it denotes and that of the main verb whereas the to-infinitive manifests a constant temporal relation? Why does the gerund-participle construction allow variation in control while the to-infinitive shows constant subject control readings? The study is based on a corpus of 1345 attested uses. The explanation is framed in a natural-language semantics involving the meanings of the gerund-participle, the infinitive, the preposition to, and the meaning-relation between the matrix and its complement. Temporal and control interpretations are shown to arise as implications grounded in the semantic content of what is linguistically expressed. It is argued that the capacity of a natural-language semantic approach to account for the data obviates the need to have recourse to purely syntactic operations to account for control.


2021 ◽  
Author(s):  
Jayaraj Poroor

Formal verification provides strong guarantees of software correctness, which are especially important in safety- or security-critical systems. Hoare logic is a widely used formalism for rigorous verification of software against specifications in the form of precondition/postcondition assertions. Advances in semantic parsing techniques and greater computational capability enable us to extract semantic content from natural language text as formal logical forms, with increasing accuracy and coverage. This paper proposes a formal framework for Hoare logic-based verification of imperative programs using logical forms generated by compositional semantic parsing of natural language assertions. We call our reasoning approach Natural Hoare Logic. It enables formal verification of software directly against safety requirements specified by a domain expert in natural language. We consider both declarative assertions of program invariants and state change, and imperative assertions that specify commands which alter the program state. We discuss how the reasoning approach can be extended using domain knowledge, and we describe a practical approach for guarding against semantic parser errors.
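As a simple illustration of the kind of Hoare triple such a framework targets, here is a sketch under invented assumptions: the natural-language requirement, the parsed logical forms, and the bounded check below are placeholders for the example, not the paper's framework, and a real verifier would discharge the proof obligation symbolically rather than by testing.

```python
# Sketch: a Hoare triple {P} C {Q} for a single assignment, checked by
# brute force over a small domain. A real verifier would prove the
# implication P -> Q[x := x + 1] with a theorem prover, not by testing.

# Hypothetical natural-language requirement: "after the increment,
# the counter is at least one, provided it was non-negative before."
pre = lambda x: x >= 0           # P, parsed from the precondition text
post = lambda x: x >= 1          # Q, parsed from the postcondition text
command = lambda x: x + 1        # C: x := x + 1

def check_triple(pre, command, post, domain):
    """Return counterexamples to {pre} command {post} over the domain."""
    return [x for x in domain if pre(x) and not post(command(x))]

print(check_triple(pre, command, post, range(-10, 11)))  # [] -> no counterexample
```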


2021 ◽  
Vol 4 (2) ◽  
pp. 30-39
Author(s):  
Petr Kusliy

The paper is a reply to Alexander Nikiforov’s discussion of the notion of progress and a critical evaluation of that discussion. The author accepts Nikiforov’s arguments about the relativity of the denotation of the term “progress” but rejects his attempts to explain the term’s meaning by an appeal to a positive development in a concrete, albeit abstract, realm. The author argues that even though “progress” is an evaluative predicate whose denotation heavily depends on context, this does not mean that its semantics lacks an objective component. Building on some literature from formal semantics of natural language, the author outlines an approach to the semantics of “progress” that would not have the shortcomings of the approach suggested by Nikiforov.


2020 ◽  
Vol 34 (05) ◽  
pp. 8018-8025 ◽  
Author(s):  
Di Jin ◽  
Zhijing Jin ◽  
Joey Tianyi Zhou ◽  
Peter Szolovits

Machine learning algorithms are often vulnerable to adversarial examples that have imperceptible alterations from the original counterparts but can fool the state-of-the-art models. It is helpful to evaluate or even improve the robustness of these models by exposing the maliciously crafted adversarial examples. In this paper, we present TextFooler, a simple but strong baseline to generate adversarial text. By applying it to two fundamental natural language tasks, text classification and textual entailment, we successfully attacked three target models, including the powerful pre-trained BERT, and the widely used convolutional and recurrent neural networks. We demonstrate three advantages of this framework: (1) effective—it outperforms previous attacks by success rate and perturbation rate, (2) utility-preserving—it preserves semantic content, grammaticality, and correct types classified by humans, and (3) efficient—it generates adversarial text with computational complexity linear to the text length.
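A heavily simplified sketch of the word-substitution idea behind such attacks: the victim classifier, the synonym table, and the greedy loop below are placeholders invented for illustration; TextFooler itself ranks words by importance and filters candidates with semantic-similarity and part-of-speech checks.

```python
# Toy word-substitution attack: greedily swap words for hand-picked
# synonyms until a placeholder classifier changes its label.
SYNONYMS = {"terrible": ["awful", "dreadful"], "boring": ["dull", "tedious"]}

def toy_sentiment(text):
    """Placeholder victim model: 'negative' if it sees a flagged word."""
    flagged = {"terrible", "boring"}
    return "negative" if flagged & set(text.lower().split()) else "positive"

def attack(text, model):
    """Return a perturbed text that flips the model's label, or None."""
    words = text.split()
    original = model(text)
    for i, w in enumerate(words):
        for candidate in SYNONYMS.get(w.lower(), []):
            trial = " ".join(words[:i] + [candidate] + words[i + 1:])
            if model(trial) != original:     # label flipped: attack succeeded
                return trial
    return None                              # no adversarial example found

print(attack("the plot was terrible", toy_sentiment))
# -> "the plot was awful" (the placeholder model flips to 'positive')
```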


In the software industry, developers and testers spend more time on code maintenance than on developing or testing code, and even after spending a lot of time analyzing the code, the final analysis serves only 50% of its purpose. The analysis involves many complex tasks, such as code comprehension, code extensibility, and code portability. The traditional approach is to analyze the code manually, which is a tedious and time-consuming task. Existing techniques do not provide the required summaries; most of them are complex, and their output is not in natural language. In this paper we propose a novel summarization technique that summarizes source code as natural language text. The proposed system bases its summaries on the semantic content of the source code; when the source code does not contain semantic content, it uses Java stereotypes to obtain it. The delivered summaries simplify the developer's task and provide the required summary.
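A minimal sketch of the stereotype fallback idea: the stereotype table and the signature-based heuristics below are invented for illustration and are much simpler than the paper's actual technique.

```python
import re

# Hypothetical mapping from Java method stereotypes to summary phrases.
STEREOTYPE_PHRASES = {
    "get": "returns the value of",
    "set": "updates the value of",
    "is":  "checks whether",
}

def summarize_method(signature):
    """Produce a one-line natural-language summary from a Java signature."""
    name = re.search(r"\b(\w+)\s*\(", signature).group(1)
    for prefix, phrase in STEREOTYPE_PHRASES.items():
        if name.startswith(prefix) and len(name) > len(prefix):
            attribute = name[len(prefix):]
            return f"This method {phrase} '{attribute}'."
    return f"This method performs the '{name}' operation."

print(summarize_method("public int getBalance()"))
# -> This method returns the value of 'Balance'.
```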


Author(s):  
Akira Takagi ◽  
Hideki Asoh ◽  
Yukihiro Itoh ◽  
Makoto Kondo ◽  
...  

One of the biggest problems in natural language processing is that its processing target (i.e. the surface expressions of sentences) has a great deal of diversity. In order to reduce the difficulty, it is desirable to extract the semantic content denoted by a sentence in such a way that it does not depend on the surface expressions as much as possible. This paper proposes a new semantic representation and general interpretive procedures that enable us to obtain the result of semantic interpretation from a variety of surface expressions of the input independently of their dependency structures. In the semantic representation to be proposed, a variety of surface dependency relations are compressed into attribute nouns, and the meaning expressed by dependency relation is represented in a uniform style (i.e. attribute = value). This approach enables us to establish correspondence between meanings by using the attribute-value pair as a basic unit. With this semantic representation and the general interpretive procedures, the same interpretive result can be obtained from sentences with different dependency structures. We will further demonstrate that semantic contents of multiple sentences can be integrated by interpreting them based on the correspondence between meanings.
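A toy sketch of the attribute-value idea: the sentences, attribute names, and mapping rules below are invented for illustration, and show only that two surface expressions with different dependency structures can receive the same attribute-value representation.

```python
# Two surface expressions with different dependency structures that
# should receive the same attribute-value representation.
def interpret(sentence):
    """Toy rule-based mapping to an {entity, attribute, value} record."""
    s = sentence.lower().rstrip(".")
    if " costs " in s:                    # "The book costs 10 dollars."
        entity, _, value = s.partition(" costs ")
        return {"entity": entity.replace("the ", ""),
                "attribute": "price", "value": value}
    if "price of " in s:                  # "The price of the book is 10 dollars."
        rest = s.split("price of ")[1]
        entity, _, value = rest.partition(" is ")
        return {"entity": entity.replace("the ", ""),
                "attribute": "price", "value": value}
    return {}

a = interpret("The book costs 10 dollars.")
b = interpret("The price of the book is 10 dollars.")
print(a == b, a)
# -> True {'entity': 'book', 'attribute': 'price', 'value': '10 dollars'}
```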

