Linguistic Data Model for Natural Languages and Artificial Intelligence. Part 6. The External Logic

Discourse ◽  
2021 ◽  
Vol 7 (2) ◽  
pp. 127-134
Author(s):  
O. M. Polyakov

Introduction. The article continues a series of publications on the linguistics of relations (hereafter R-linguistics) and is concerned with semantic interpretation in terms of the linguistic model, which is the initial stage in considering the logic of natural language (external logic).
Methodology and sources. The results obtained in the previous parts of the series are used as research tools. In particular, the verbal categorization method is used to represent concepts and verbs. To develop the necessary mathematical representations in the field of logic and semantics of natural language, the previously formulated concept of the interpretation operator is used. The interpretation operator maps the sentences of the language into the model, taking into account the previously interpreted sentences.
Results and discussion. The problems that arise during the operation of the natural language interpretation operator are analyzed using examples of text translation and utterance algebra. The source of these problems is the dependence of the interpretation of a sentence on the already accumulated results of interpretation. The features of the interpretation of negation and double negation in language are analyzed. In particular, the negation of a sentence affects the interpretation of previous sentences, and double negation usually denotes a single negation with an indication of its scope. It is shown that, even from the point of view of classical logic, linguistic negation is not unconditional, and the operation of concatenation is neither commutative nor associative. General rules of text interpretation, in the form of a step-by-step mapping of sentence elements into a linguistic model, are formulated.
Conclusion. From the considered examples of the implementation of the interpretation operator, it follows that the negation of a sentence requires a change in the meaning of the operation of attributing sentences in the text. For this reason, the negative particle "not" in the language is actually a label for changing the interpretation rule. The double negation rule of sentential logic does not hold, so sentences containing double negations are likely to carry information about the scope of the sentence negation in the text. Based on the analysis, the contours of the interpretation operator for the linguistic model are outlined.
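As a toy illustration of the interpretation-operator idea described above (a minimal Python sketch under invented assumptions: the fact-dictionary model, the sentence format, and the treatment of "not" as a rule-switching label are all hypothetical, not the article's formal construction), the following folds sentences into a growing model, lets each step consult the accumulated state, and shows why concatenation fails to commute:

```python
# Illustrative sketch: an interpretation operator that maps each sentence
# into a world model while consulting the already-interpreted sentences.

def interpret(model: dict, sentence: str) -> dict:
    """One step of the operator: (model, sentence) -> updated model."""
    words = sentence.lower().split()
    negated = "not" in words
    content = [w for w in words if w != "not"]
    subject, attribute = content[0], content[-1]
    updated = dict(model)
    # "not" behaves as a label switching the interpretation rule: rather
    # than asserting a fact, the step retracts or overrides an earlier one.
    updated[(subject, attribute)] = not negated
    return updated

def interpret_text(sentences: list[str]) -> dict:
    """Fold the operator over a text; each step sees accumulated results."""
    model: dict = {}
    for s in sentences:
        model = interpret(model, s)
    return model

# Concatenation fails to commute: reversing sentence order changes the model.
print(interpret_text(["door closed", "door not closed"]))
# {('door', 'closed'): False}
print(interpret_text(["door not closed", "door closed"]))
# {('door', 'closed'): True}
```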

Discourse ◽  
2020 ◽  
Vol 6 (3) ◽  
pp. 109-117
Author(s):  
O. M. Polyakov

Introduction. The article continues the series of publications on the linguistics of relations (hereinafter R-linguistics) and is devoted to an introduction to the logic of natural language in relation to the approach considered in the series. The problem of natural language logic remains relevant, since this logic differs significantly from traditional mathematical logic; moreover, with the appearance of artificial intelligence systems, the importance of this problem only increases. The article analyzes the logical problems that prevent the application of classical logic methods to natural languages. This analysis is possible because R-linguistics forms the semantics of a language as world-model structures in which language sentences are interpreted.
Methodology and sources. The results obtained in the previous parts of the series are used as research tools. To develop the necessary mathematical representations in the field of logic and semantics, the previously formulated concept of the interpretation operator is used.
Results and discussion. The problems that arise when studying the logic of natural language in the framework of R-linguistics are analyzed. These issues are discussed in three aspects: the logical aspect itself; the linguistic aspect; and the aspect of correlation with reality. A very general approach to language semantics is considered, and semantic axioms of the language are formulated. The problems of the language and its logic related to this most general view of semantics are shown.
Conclusion. It is shown that the application of mathematical logic, regardless of its type, to the study of natural language logic faces significant problems. This is a consequence of the inconsistency of existing approaches with the world model, yet it is precisely coherence with the world model that allows a new logical approach to be built. Matching the model means a semantic approach to logic. Even the most general view of semantics allows us to formulate important results about the properties of languages that lack meaning. The simplest examples of semantic interpretation of traditional logic demonstrate its semantic problems (primarily related to negation).
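One simple way to see the negation problem mentioned in the conclusion (a toy contrast assuming a closed-world reading of the model; this illustrates the mismatch in general, not the article's own semantic axioms):

```python
# Toy contrast between classical negation (complement over a total valuation)
# and negation read against a partial world model that stores only facts.

valuation = {"p": True, "q": False}   # classical: every atom has a value
world_model = {"p"}                   # model-based: only facts are stored

def classical_not(atom: str) -> bool:
    return not valuation[atom]

def model_not(atom: str) -> bool:
    # Against a partial model, "not q" conflates "q is false" with
    # "q was never asserted" -- one source of the semantic problems
    # with negation that the abstract mentions.
    return atom not in world_model

print(classical_not("q"))  # True: q is explicitly false
print(model_not("q"))      # True: but only because q was never asserted
print(model_not("r"))      # True as well, though nothing was said about r
```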


Author(s):  
Ruket Çakici

Annotated data have recently become more important, and thus more abundant, in computational linguistics. They are used as training material for machine learning systems for a wide variety of applications, from parsing to machine translation (Quirk et al., 2005). Dependency representation is preferred for many languages because linguistic and semantic information is easier to retrieve from the more direct dependency representation. Dependencies are relations defined on words or smaller units, where a sentence is divided into elements called heads and their arguments, e.g. verbs and objects. Dependency parsing aims to predict these dependency relations between lexical units in order to retrieve information, mostly in the form of semantic interpretation or syntactic structure. Parsing is usually considered the first step of Natural Language Processing (NLP). To train statistical parsers, a sample of data annotated with the necessary information is required. There are different views on how informative or functional the representation of natural language sentences should be, and different constraints on the design process, such as: 1) how intuitive (natural) it is, 2) how easy it is to extract information from, and 3) how appropriately and unambiguously it represents the phenomena that occur in natural languages. This article reviews statistical dependency parsing for different languages and discusses current challenges in designing dependency treebanks and in dependency parsing itself.
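A minimal sketch of the dependency representation just described (the sentence, labels, and Token class are invented for illustration): each word records the index of its head, so head-argument relations such as verb-object are read directly off the structure.

```python
# Toy dependency representation: each token stores the index of its head
# (0 = artificial root) and a relation label.
from dataclasses import dataclass

@dataclass
class Token:
    index: int    # 1-based position in the sentence
    form: str     # the word itself
    head: int     # index of the head token (0 for the root)
    deprel: str   # dependency relation label

# "She reads books" -- "reads" heads both its subject and its object.
sentence = [
    Token(1, "She",   2, "subj"),
    Token(2, "reads", 0, "root"),
    Token(3, "books", 2, "obj"),
]

# Retrieving head-argument pairs is a direct traversal of the structure.
for tok in sentence:
    if tok.head != 0:
        head_form = sentence[tok.head - 1].form
        print(f"{head_form} --{tok.deprel}--> {tok.form}")
# reads --subj--> She
# reads --obj--> books
```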


2006 ◽  
Vol 3 (1-2) ◽  
pp. 63-74
Author(s):  
Gašper Ilc

Negation has a very long history of study. In the realm of logic, negation is seen as a simple operation that turns an affirmative into a negative. This assumption strongly affected the linguistic study of negation and led to some misconceptions. For example, negation in natural languages is seen as something unnatural, artificial, and syntactically as well as semantically dependent on affirmation. It is perceived as a logical/mathematical operation that turns affirmatives into negatives by way of syntactic transformation and semantic cancellation of multiple negatives. To refute some of these misconceptions, the paper investigates the nature of negation as a linguistic phenomenon and shows that negation in logic and linguistics should not and cannot be treated in the same fashion. Special attention is paid to the problems of structural complexity, the syntactic notion of multiple negation, and its different semantic interpretations. With regard to the semantic interpretation of multiple negation, languages by and large allow for two possibilities: negative concord and double negation. Negative concord, which interprets two negatives as a single negation, seems to represent the natural course of language development, while double negation, which allows the cancellation of two negatives resulting in an affirmative, was introduced into languages under the influence of logic in the 17th and 18th centuries.
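The two interpretation strategies can be stated as a toy computation (the marker list, example sentence, and parity rule below are illustrative simplifications, not the paper's analysis): negative concord collapses any number of negative markers into a single negation, while double negation cancels them pairwise.

```python
# Toy readings of multiple negation. Under negative concord (NC), any number
# of negative markers yields one semantic negation; under double negation
# (DN), markers cancel pairwise, so an even count flips back to affirmative.
NEG_MARKERS = {"not", "n't", "nothing", "never", "nobody"}

def count_negatives(sentence: str) -> int:
    return sum(1 for w in sentence.lower().split() if w in NEG_MARKERS)

def reading(sentence: str, strategy: str) -> str:
    n = count_negatives(sentence)
    if strategy == "NC":
        negated = n >= 1       # all markers express one semantic negation
    else:                      # "DN"
        negated = n % 2 == 1   # pairwise cancellation, parity decides
    return "negative" if negated else "affirmative"

s = "I did not see nothing"
print(reading(s, "NC"))   # negative: 'I saw nothing'
print(reading(s, "DN"))   # affirmative: 'I saw something'
```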


2020 ◽  
Author(s):  
Tony Seimon ◽  
Janeth Robinson

Computational Linguistics and Artificial Intelligence are increasingly demanding more effective contributions from language studies to Natural Language Processing. This fact has driven Applied Linguistics to produce knowledge that can offer reliable models of linguistic production, models not based solely on the formal rules of context-free grammars but taking natural language understanding as a processing parameter. In a complementary way, the scope of Applied Linguistics has grown to include the need to implement natural language processing in human-computer interaction, incorporating the machine into its research and application practices. Among these demands, the search for models that go beyond the level of the clause stands out, in particular by turning to the structure of texts and, consequently, to textual genres. Situated in this context, this article aims to contribute solutions to the demands relating to the study of conversational structures. It offers a linguistic model of the grammatical systems that realize the potential structures of conversations in various contexts. More specifically, it produces a model capable of describing how system networks are constructed and, consequently, how this dynamic explains the organization of conversations.
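A toy sketch of the kind of system network the article models (system names, options, and entry conditions are invented, loosely in the spirit of systemic-functional descriptions of dialogue): each system offers a choice, entry to a system depends on earlier choices, and one traversal yields one potential structure for a conversational move.

```python
# Toy system network for conversational moves. Systems, options, and entry
# conditions are invented; they only illustrate how a network of choices
# can generate the potential structures of a conversation.
NETWORK = {
    "MOVE":      {"options": ["give", "demand"],       "entry": None},
    "COMMODITY": {"options": ["information", "goods"], "entry": None},
    "RESPONSE":  {"options": ["expected", "discretionary"],
                  "entry": ("MOVE", "demand")},        # only demands open this
}

def traverse(choices: dict) -> dict:
    """Select an option in every system whose entry condition is met."""
    selection = {}
    for system, spec in NETWORK.items():
        entry = spec["entry"]
        if entry is None or selection.get(entry[0]) == entry[1]:
            selection[system] = choices[system]
    return selection

# A "demand information, expected response" move -- roughly, a question.
print(traverse({"MOVE": "demand", "COMMODITY": "information",
                "RESPONSE": "expected"}))
```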


Discourse ◽  
2020 ◽  
Vol 6 (2) ◽  
pp. 107-114
Author(s):  
O. M. Polyakov

Introduction. The paper continues a series of publications on the linguistics of relations (hereinafter R-linguistics) and is devoted to questions of the formation of a language from a linguistic model of the world. The language is considered in its most general form, without taking into account the grammatical component. This makes it possible to focus on the general problems of language formation: namely, to show why language adequately reflects the model of the world and what the features of the transition from model to language are. This new approach to language is relevant in connection with the emerging understanding of the common core of all natural languages, as well as with the need for artificial intelligence subsystems that interact with humans.
Methodology and sources. The research methods consist in the formulation and proof of theorems about language spaces and their properties. The material of the paper and the given proofs are based on the previously stated ideas about linguistic spaces and their decompositions into signs.
Results and discussion. The paper shows how, in the most general form, the formation of language structures takes place: why language adequately reflects the linguistic model, and what the difference between linguistic and language spaces is. The concepts of an open and a closed form of the language are formulated, as well as the law of form, and examples of open and closed forms are given. It is shown that the formation of the language makes it possible to compensate for the lack of real signs in the surrounding world while preserving the prognostic properties of the model.
Conclusion. Any natural language is a reflection of the human world model, and all natural languages are similar in the principles by which the core of the language (the language space) is formed. Language spaces standardize models of the world by equalizing real and fictional signs of categories. In addition, the transition to language simplifies some problems of pattern recognition and opens the way to the logic of natural language.


1979 ◽  
Vol 18 (01) ◽  
pp. 15-17 ◽  
Author(s):  
Ileana C. Johnson ◽  
S. L. Tsao ◽  
I. D. J. Bross ◽  
D. P. Shedd

A series of computer programs is now available for processing data whose basic form is narrative (natural language), numerical, or a combination of the two. The system was developed in the Department of Biostatistics of Roswell Park Memorial Institute to enable an investigator to have complete control of the data, from the initial stage of data entry to the final stage of analysis. Specialized knowledge of computers is not necessary in order to implement the different procedures: by following the instructions specified in the various manuals and pressing a button on a remote terminal, the procedures are activated and executed. The system is currently used for maintaining the data of a head and neck cancer project.


Author(s):  
Li Li ◽  
Honglai Liu ◽  
Qingshi Gao ◽  
Peifeng Wang

The sentences of several different natural languages can be produced congruently and synchronously by the new generating system USGS = {↔, G_I | G_I = (T_I, N, B-RISU, C-tree_I, S, P_I, F_I), I = 0, 1, 2, …, n}, based on Semantic Language (SL) theory, and all of them are legitimate and reasonable; here B-RISU is the set of basic-RISU, C-tree_I is the set of category trees, and F_I is the set of functions of the I-th natural language. The characteristics of this new generating system are that it is unified, synchronous, and one-to-one corresponding, that it is based on semantic unit theory, and that the number of rules is several million.
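A toy sketch of the synchronous, one-to-one generation idea (the two-language inventory, rule tables, and function names are invented; the actual grammars G_I, with their category trees and millions of rules, are far richer): one shared semantic unit is realized in lockstep through per-language rules, so the output sentences correspond one to one.

```python
# Toy synchronous generation: a single semantic representation is realized
# in several natural languages by per-language rule sets applied in lockstep.
# Rules and vocabulary are invented for illustration only.

# Shared semantics: a predicate with role-labelled arguments.
semantics = {"pred": "READ", "agent": "CHILD", "patient": "BOOK"}

# Per-language realization rules: word-order template + lexicon.
RULES = {
    "en": {"order": ["agent", "pred", "patient"],
           "lex": {"READ": "reads", "CHILD": "the child", "BOOK": "a book"}},
    "de": {"order": ["agent", "pred", "patient"],
           "lex": {"READ": "liest", "CHILD": "das Kind", "BOOK": "ein Buch"}},
}

def generate(sem: dict, lang: str) -> str:
    """Realize one semantic unit in one language."""
    r = RULES[lang]
    return " ".join(r["lex"][sem[slot]] for slot in r["order"])

# Synchronous: every language is generated from the same semantics,
# so the sentences correspond one to one.
for lang in RULES:
    print(lang, "->", generate(semantics, lang))
# en -> the child reads a book
# de -> das Kind liest ein Buch
```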


Traditional encryption systems and techniques have always been vulnerable to brute-force cyber-attacks. This is due to the byte encoding of characters (UTF-8, whose basic range coincides with ASCII). An opponent who intercepts a ciphertext and attempts to decrypt it by brute force with a faulty key can detect failed decryptions, because they yield a mixture of symbols that is not uniformly distributed and carries no meaningful significance. The honey encryption technique was suggested to curb this classical weakness by producing ciphertexts that, when decrypted with a false key, yield well-formed and evenly distributed but untrue plaintexts. The technique, however, is only suitable for passkeys and PINs; adapting it to encode natural language texts, such as electronic mail and human-generated records, has remained an open problem. Previously proposed schemes for extending the encryption of natural language messages expose fragments of the plaintext embedded with coded data, and are thus more prone to ciphertext attacks. In this paper, an amended honey encryption system is proposed to support natural language message encryption. The main aim is to create a framework that encrypts a message fully in binary form, so that most binary strings decode to semantically well-formed texts, tricking an opponent who tries a wrong key on the ciphertext. The security of the suggested system is assessed.
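A minimal sketch of the honey-encryption idea the paper builds on (the toy corpus, the hash-based keystream, and all names are illustrative assumptions, not the proposed scheme): a distribution-transforming encoder maps every seed to some plausible plaintext, so decryption under a wrong key yields a convincing decoy message rather than uniform noise.

```python
import hashlib
import secrets

# Toy corpus standing in for a distribution-transforming encoder (DTE):
# every seed maps to some plausible plaintext, so decryption with a wrong
# key still yields a convincing message. Purely illustrative.
MESSAGES = [
    "meet me at noon",
    "the invoice is attached",
    "call the office tomorrow",
    "the shipment arrives friday",
]

def _keystream(key: str, nonce: bytes, length: int) -> bytes:
    # Simple hash-based keystream; a real system would use a vetted cipher.
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key.encode() + nonce +
                              counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def encode(message: str) -> int:
    # DTE encode: map a message to a seed (its index in the corpus).
    return MESSAGES.index(message)

def decode(seed: int) -> str:
    # DTE decode: any seed maps back into the corpus.
    return MESSAGES[seed % len(MESSAGES)]

def encrypt(key: str, message: str) -> tuple[bytes, bytes]:
    nonce = secrets.token_bytes(8)
    seed_bytes = encode(message).to_bytes(2, "big")
    stream = _keystream(key, nonce, len(seed_bytes))
    return nonce, bytes(a ^ b for a, b in zip(seed_bytes, stream))

def decrypt(key: str, nonce: bytes, cipher: bytes) -> str:
    stream = _keystream(key, nonce, len(cipher))
    seed = int.from_bytes(bytes(a ^ b for a, b in zip(cipher, stream)), "big")
    return decode(seed)

nonce, ct = encrypt("correct-key", "meet me at noon")
print(decrypt("correct-key", nonce, ct))  # the true plaintext
print(decrypt("wrong-key", nonce, ct))    # a plausible decoy plaintext
```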

