Linguistic Data Model for Natural Languages and Artificial Intelligence. Part 5. Introduction to Logic

Discourse ◽  
2020 ◽  
Vol 6 (3) ◽  
pp. 109-117
Author(s):  
O. M. Polyakov

Introduction. The article continues the series of publications on the linguistics of relations (hereinafter R-linguistics) and is devoted to an introduction to the logic of natural language within the approach considered in the series. The problem of natural language logic remains relevant, since this logic differs significantly from traditional mathematical logic; moreover, with the appearance of artificial intelligence systems, its importance only increases. The article analyzes the logical problems that prevent the application of classical logic methods to natural languages. This is possible because R-linguistics forms the semantics of a language as world-model structures in which language sentences are interpreted. Methodology and sources. The results obtained in the previous parts of the series are used as research tools. To develop the necessary mathematical representations in the field of logic and semantics, the previously formulated concept of the interpretation operator is used. Results and discussion. The problems that arise when studying the logic of natural language within R-linguistics are analyzed in three aspects: the logical aspect itself; the linguistic aspect; and the aspect of correlation with reality. A very general approach to language semantics is considered, and semantic axioms of the language are formulated. The problems of the language and its logic related to this most general view of semantics are shown. Conclusion. It is shown that the application of mathematical logic, regardless of its type, to the study of natural language logic faces significant problems. This is a consequence of the inconsistency of existing approaches with the world model; it is precisely coherence with the world model that allows us to build a new logical approach. Matching with the model means a semantic approach to logic.
Even the most general view of semantics allows us to formulate important results about the properties of languages that lack meaning. The simplest examples of semantic interpretation of traditional logic demonstrate its semantic problems (primarily related to negation).

Discourse ◽  
2020 ◽  
Vol 6 (2) ◽  
pp. 107-114
Author(s):  
O. M. Polyakov

Introduction. The paper continues a series of publications on the linguistics of relations (hereinafter R-linguistics) and is devoted to the formation of a language from a linguistic model of the world. The language is considered in its most general form, without its grammatical component, which allows us to focus on the general problems of language formation: why language adequately reflects the model of the world, and what characterizes the transition from model to language. This new approach to language is relevant to understanding the common core of all natural languages, as well as to the needs of artificial intelligence subsystems that interact with humans. Methodology and sources. The research methods consist in the formulation and proof of theorems about language spaces and their properties. The paper and the given proofs build on the previously stated ideas about linguistic spaces and their decompositions into signs. Results and discussion. The paper shows how, in the most general form, language structures are formed: why language adequately reflects the linguistic model, and how linguistic spaces differ from language spaces. The concepts of open and closed forms of the language are formulated, as well as the law of form, and examples of open and closed forms are given. It is shown that the formation of the language compensates for the lack of real signs in the surrounding world while preserving the prognostic properties of the model. Conclusion. Any natural language is a reflection of the human world model, and all natural languages are similar in the principles by which the core of the language (the language space) is formed.
Language spaces standardize models of the world by equalizing real and fictional signs of categories. In addition, the transition to language simplifies some problems of pattern recognition and opens the way to the logic of natural language.


Discourse ◽  
2021 ◽  
Vol 7 (2) ◽  
pp. 127-134
Author(s):  
O. M. Polyakov

Introduction. The article continues a series of publications on the linguistics of relations (hereafter R-linguistics) and is concerned with semantic interpretation in terms of the linguistic model, the initial stage in considering the logic of natural language (external logic). Methodology and sources. The results obtained in the previous parts of the series are used as research tools. In particular, the verbal categorization method is used to represent concepts and verbs. To develop the necessary mathematical representations in the field of logic and semantics of natural language, the previously formulated concept of the interpretation operator is used: the interpretation operator maps the sentences of the language into the model, taking into account the previously interpreted sentences. Results and discussion. The problems that arise during the operation of the natural language interpretation operator are analyzed using examples of text translation and utterance algebra. The source of these problems is the dependence of the interpretation of a sentence on the already accumulated results of interpretation. The features of the interpretation of negation and double negation in the language are analyzed. In particular, the negation of a sentence affects the interpretation of previous sentences, and double negation usually denotes a single negation with an indication of its scope. It is shown that even from the point of view of classical logic, linguistic negation is not unconditional, and the operation of concatenation is neither commutative nor associative. General rules of text interpretation, in the form of a step-by-step mapping of sentence elements into a linguistic model, are formulated. Conclusion. From the considered examples of the implementation of the interpretation operator, it follows that the negation of a sentence requires a change in the meaning of the operation of attributing sentences in the text.
For this reason, the negative particle "not" in the language is actually a label for changing the interpretation rule. The double negation rule of sentential logic does not hold, so sentences containing double negations are likely to carry information about the scope of the sentence negation in the text. Based on the analysis, the contours of the interpretation operator for the linguistic model are outlined.
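The abstract's claims that interpretation depends on previously interpreted sentences, that "not" acts as a label changing the interpretation rule, and that concatenation is not commutative can be illustrated with a toy sketch. All names and the model representation below are hypothetical stand-ins, not the paper's actual formalism.

```python
# Toy "interpretation operator": maps a sequence of sentences into an
# accumulated world-model state (fact -> truth value). The particle
# "not" does not add a new fact; it switches the rule used to attribute
# the sentence to the model, overriding earlier interpretations.

def interpret(sentences):
    model = {}  # accumulated results of interpretation
    for s in sentences:
        if s.startswith("not "):
            # "not" as a label changing the interpretation rule:
            # retract/deny the fact, affecting prior interpretation.
            model[s[4:]] = False
        else:
            model[s] = True
    return model

# Concatenation of sentences is not commutative: reversing the order
# of the same two sentences yields a different model state.
a = interpret(["door is open", "not door is open"])
b = interpret(["not door is open", "door is open"])
print(a)  # {'door is open': False}
print(b)  # {'door is open': True}
```

Even this minimal sketch shows why interpreting a text cannot be reduced to an order-independent conjunction of its sentences.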


Author(s):  
Massimo Mugnai

In his 1677 Dialogue, Leibniz answers the question of how it is possible that speakers of different languages agree on the same truths by postulating “a certain correspondence between characters and things”. In the mid-1680s, he arguably attempts to specify this “correspondence” by explaining how linguistic particles are connected to our perception of spatial relations among things in the world. Firstly, this paper focuses on the role that, according to Leibniz, signs and characters play in our knowledge. Secondly, it introduces the solution that can be found in the Dialogue to the problem of how the same truth can be expressed in different languages. After briefly expounding Leibniz’s theory of natural languages, the paper gives an account of Leibniz’s analysis of the nature of prepositions and of how they contribute, in a natural language, to determine the correspondence between characters and things that is mentioned in the Dialogue.


Author(s):  
أ.د. محمد أديب غنيمي

This paper gives an overview of Web intelligence, which will enable the current Web to reach the wisdom level by containing distributed, integrated, and active knowledge. The Web will then be capable of performing tasks such as problem solving and question answering, as well as processing and understanding natural languages. Web intelligence draws on results from a number of disciplines, including artificial intelligence, information technology, mathematics and physics, psychology, and linguistics. The paper covers the following topics: Web evolution and architecture, topics related to Web intelligence, the Deep Web, semantic computing and the Semantic Web, the Wisdom Web, and Precisiated Natural Language.


2018 ◽  
Vol 3 (1) ◽  
pp. 492
Author(s):  
Denis Cedeño Moreno ◽  
Miguel Vargas Lombardo

At present, the convergence of several areas of knowledge has led to the design and implementation of ICT systems that support the integration of heterogeneous tools, such as artificial intelligence (AI), statistics, and databases (BD), among others. Ontologies in computing belong to the world of AI and refer to formal representations of an area of knowledge or domain. The discipline in charge of studying and building tools to accelerate the creation of ontologies from natural language is ontological engineering. In this paper, we propose a knowledge management model based on the clinical histories of patients (HC) in Panama, built on information extraction (EI), natural language processing (PLN), and the development of a domain ontology (the abbreviations BD, HC, EI, and PLN follow the Spanish terms). Keywords: knowledge, information extraction, ontology, automatic population of ontologies, natural language processing.
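The pipeline the abstract describes, extracting information from free-text clinical notes to populate a domain ontology, can be sketched in miniature. The class names and extraction patterns below are invented for illustration; a real system would use the authors' ontology and a proper PLN toolchain rather than regular expressions.

```python
# Hedged sketch: pattern-based information extraction populating a
# toy clinical-domain ontology from free text. Illustration only.
import re

ontology = {"Diagnosis": set(), "Medication": set()}  # hypothetical classes

patterns = {
    "Diagnosis": re.compile(r"diagnosed with ([a-z0-9 ]+?)(?:\.|,|$)"),
    "Medication": re.compile(r"prescribed ([a-z0-9 ]+?)(?:\.|,|$)"),
}

def populate(note):
    """Extract instances from a clinical note and add them to the ontology."""
    for cls, pat in patterns.items():
        for match in pat.finditer(note.lower()):
            ontology[cls].add(match.group(1).strip())

populate("Patient diagnosed with type 2 diabetes, prescribed metformin.")
print(ontology["Diagnosis"])   # {'type 2 diabetes'}
print(ontology["Medication"])  # {'metformin'}
```

The point of the sketch is the shape of the task: unstructured text in, ontology instances out, with the ontology schema fixed in advance by the domain model.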


Author(s):  
Prakash Mondal

Logical form in logic and logical form (LF) in the Minimalist architecture of language are two different representational models of semantic facts. They are distinct in their form and in how they represent some natural language phenomena. This paper argues that the differences between logical form and LF have profound implications for the question of the nature of semantic interpretation. First, they can tell us whether semantic interpretation is computational and, if so, in what sense. Second, they can shed light on the ontology of semantic interpretation, in that the forms (that is, logical form and LF) in which semantic facts are expressed may also uncover where in the world semantic interpretation as such can be located. This can have surprising repercussions for reasoning in natural language as well.


Author(s):  
Konstantin Kolin

The capabilities of machine translation are closely related to improvements in modeling the processes of understanding and generating texts in natural language, which traditionally belong to the class of artificial intelligence problems. The article attempts to analyze the main approaches to the creation of machine translation technologies. It concludes that these approaches do not yet provide for the formation and use of dynamic models of the world, but move mainly in the direction of a grammatically consistent translation of word sequences.


Author(s):  
Jody Azzouni

It’s shown that the existence concept that we express in natural languages and that we use to think about what we—philosophers and non-philosophers—take to exist in the world is criterion-transcendent, transcendent, and univocal. That is, speakers use a notion that they take to be fixed in its extension across languages and to be the same one they’ve used in the past and will use in the future. Furthermore, the existence concept has no meaning entailments. We do not understand what exists to have certain properties (or not to have certain properties) on the basis of the meaning of the word “exist.” “Exist” and “there is,” when used to express or deny ontological commitments, are neither ambiguous nor polysemous. Language-usage evidence is presented that confirms these claims.


Author(s):  
Ruket Çakici

Annotated data have recently become more important, and thus more abundant, in computational linguistics. They are used as training material for machine learning systems for a wide variety of applications, from parsing to machine translation (Quirk et al., 2005). Dependency representation is preferred for many languages because linguistic and semantic information is easier to retrieve from the more direct dependency representation. Dependencies are relations defined on words or smaller units, where the sentence is divided into elements called heads and their arguments, e.g. verbs and objects. Dependency parsing aims to predict these dependency relations between lexical units in order to retrieve information, mostly in the form of semantic interpretation or syntactic structure. Parsing is usually considered the first step of Natural Language Processing (NLP). To train statistical parsers, a sample of data annotated with the necessary information is required. There are different views on how informative or functional a representation of natural language sentences should be, and different constraints on the design process, such as: 1) how intuitive (natural) it is; 2) how easy it is to extract information from; and 3) how appropriately and unambiguously it represents the phenomena that occur in natural languages. In this article, a review of statistical dependency parsing for different languages is made, and current challenges of designing dependency treebanks and of dependency parsing are discussed.
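The head/argument structure the abstract describes can be made concrete with a minimal sketch. The sentence, relation labels, and indices below are illustrative only, not drawn from any particular treebank's annotation scheme.

```python
# Minimal dependency representation: each word except the root is the
# dependent (argument) of exactly one head. Arcs are stored as
# (head_index, dependent_index, relation) triples.

sentence = ["She", "reads", "books"]
# "reads" (index 1) is the root; "She" is its subject, "books" its object.
dependencies = [(1, 0, "subject"), (1, 2, "object")]

def arguments_of(head, deps):
    """Retrieve the dependent indices (arguments) of a given head."""
    return [d for (h, d, _) in deps if h == head]

print(arguments_of(1, dependencies))  # [0, 2]
```

A statistical dependency parser's job is to predict exactly such a set of arcs for an unseen sentence, given a treebank of sentences annotated in this form.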


Author(s):  
Yashaswini S

To understand language, we need an understanding of the world around us. Language describes the world and provides symbols with which we represent meaning. Still, much knowledge about the world is so obvious that it is rarely explicitly stated. It is uncommon for people to state that chairs are usually on the floor and upright, and that you usually eat a cake from a plate on a table. Knowledge of such common facts provides the context within which people communicate with language. Therefore, to create practical systems that can interact with the world and communicate with people, we need to leverage such knowledge to interpret language in context. Scene generation can be used to generate 3D scenes on the basis of a text description. A model capable of learning natural language semantics, or the patterns behind scene composition, is therefore of particular interest [1]. Scene generation from text involves several fields, such as NLP, artificial intelligence, computer vision, and machine learning. This paper focuses on optimally arranging objects in a room, with emphasis on the orientation of the objects with respect to the floor, walls, and ceiling, along with textures. Our model suggests a novel framework that can be used as a tool for scene generation by anyone, without 3D modeling expertise.
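The kind of commonsense placement constraint the abstract appeals to (chairs on the floor and upright, paintings on walls) can be sketched as a simple lookup from object to supporting surface. The object list, surfaces, and placements below are invented stand-ins, not the paper's framework.

```python
# Toy placement rule: each object type has a supporting surface, which
# determines its height and orientation in a room. Illustration only.

SUPPORTS = {"chair": "floor", "painting": "wall", "lamp": "ceiling"}

def place(obj, room_height=2.5):
    """Return a (height_in_m, orientation) placement for an object."""
    support = SUPPORTS.get(obj, "floor")  # default: rests on the floor
    if support == "floor":
        return (0.0, "upright")
    if support == "wall":
        return (1.5, "flush-to-wall")
    return (room_height, "hanging")

print(place("chair"))     # (0.0, 'upright')
print(place("painting"))  # (1.5, 'flush-to-wall')
```

A text-to-scene system would combine many such constraints, learned or hand-coded, with the objects and relations extracted from the input description.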

