Potential Structures for Conversations in Various Contexts

2020
Author(s):
Tony Seimon
Janeth Robinson

Computational Linguistics and Artificial Intelligence increasingly demand more effective contributions from language studies to Natural Language Processing. This demand has driven Applied Linguistics to produce knowledge that offers reliable models of linguistic production, models based not only on the formal rules of context-free grammars but ones that take natural language understanding as a processing parameter. In a complementary way, the scope of Applied Linguistics has widened: the need to implement natural language processing in human-computer interaction has incorporated the machine into its research and application practices. Among these demands, the search for models that go beyond the level of the clause stands out, in particular by turning to the structure of texts and, consequently, to textual genres. In this context, this article aims to contribute solutions to the demands concerning the study of conversational structures. It offers a linguistic model of the grammatical systems that realize the potential structures for conversations in various contexts. More specifically, it produces a model capable of describing how the system networks are constituted and, consequently, how this dynamic explains the organization of conversations.

Author(s):  
Al-Mahmud
Bishnu Sarker
K. M. Azharul Hasan

Parsing plays a prominent role in computational linguistics, and parsing Bangla sentences is a primary need in Bangla language processing. This chapter describes a Context-Free Grammar (CFG) for Bangla and, on that basis, proposes a Bangla parser. The approach is simple to apply to Bangla sentences, and the method is well established for grammar-based parsing. The parser introduced here is, by nature, a predictive parser, and a parse table is constructed for recognizing Bangla grammar. The parse table is also an important tool for detecting syntactic mistakes in Bangla sentences: when there is no entry for a terminal in the parse table, the input is rejected as ungrammatical. If a natural language can be successfully parsed, then grammar checking of that language becomes possible. The parsing scheme in this chapter is based on a top-down parsing method. Because grammars with left recursion are unsuitable for top-down parsing, the grammar is transformed (left recursion is removed, and left factoring is applied) to avoid the problem.
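The chapter's own grammar rules are not reproduced in this abstract, so the sketch below (Python) illustrates the general technique with a toy English-like grammar rather than actual Bangla rules: left recursion is eliminated, the grammar is encoded as an LL(1) parse table, and an input is rejected as soon as a table cell is empty.

```python
# Toy LL(1) grammar after left-recursion elimination.
# Original (left-recursive):  NP -> NP PP | Det N
# Transformed:                NP -> Det N NP' ;  NP' -> PP NP' | epsilon
GRAMMAR = {
    ("S",   "det"): ["NP", "VP"],
    ("NP",  "det"): ["det", "n", "NP'"],
    ("NP'", "p"):   ["PP", "NP'"],
    ("NP'", "v"):   [],            # epsilon: 'v' is in the follow set of NP'
    ("NP'", "$"):   [],
    ("PP",  "p"):   ["p", "NP"],
    ("VP",  "v"):   ["v", "NP"],
}
TERMINALS = {"det", "n", "v", "p", "$"}

def parse(tokens):
    """Table-driven predictive parse; returns True iff tokens are accepted."""
    stack = ["$", "S"]
    tokens = tokens + ["$"]
    i = 0
    while stack:
        top = stack.pop()
        look = tokens[i]
        if top in TERMINALS:
            if top != look:
                return False          # terminal mismatch
            i += 1
        else:
            rule = GRAMMAR.get((top, look))
            if rule is None:
                return False          # empty parse-table cell: syntax error
            stack.extend(reversed(rule))
    return i == len(tokens)

print(parse(["det", "n", "v", "det", "n"]))   # True
print(parse(["det", "v", "n"]))               # False
```

Rejection on an empty table cell is exactly the error-detection mechanism the chapter describes: the pair (nonterminal, lookahead) has no production, so the sentence cannot be grammatical.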


1981
Vol 11
pp. 7-16
Author(s):  
M. Boot

In computational linguistics, three classes of models have been developed for the automated treatment of texts in natural language. The first class is best characterized as a set model: language is defined as a set of words, and on these words normal arithmetic computations are performed. The set model has led to frequency counts, but frequency counts of natural language material have proved to be of little importance to language analysis and the study of language learning. The second class is best characterized as formal linguistic models: here language is defined not merely as a set of words but as a set of sentences, on which more than purely arithmetic operations can be performed. Important notions in these models are transformations, recursion, and grammars. This class of models has led to the adaptation of context-free grammars to natural language; its weak point is the inappropriateness of formal grammars for human language. The third class can be defined as artificial intelligence models: here the computer is used to simulate human verbal behavior, and language processes are defined as processes of understanding language. Linguistic knowledge is not defined outside the vocabulary or outside these processes. This class of models has led to the application of Minsky's frame theory to natural language processing; the lexicon is defined as a procedural fact device in the language processor itself. This class of models is the most promising for the study of language learning and the role of vocabulary in the language learning process.
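The set model's frequency counts can be made concrete in a few lines. A minimal sketch (Python; the example sentence is invented):

```python
from collections import Counter

# The "set model": a text is reduced to a bag of word tokens,
# and only arithmetic over those tokens (here, counting) is performed.
text = "the cat sat on the mat because the mat was warm"
tokens = text.split()
freq = Counter(tokens)

print(freq.most_common(2))   # [('the', 3), ('mat', 2)]
```

The sketch also illustrates the model's limitation noted above: the counts say nothing about sentence structure or meaning.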


Author(s):  
Ruket Çakici

Annotated data have recently become more important, and thus more abundant, in computational linguistics. They are used as training material for machine learning systems for a wide variety of applications, from parsing to machine translation (Quirk et al., 2005). Dependency representation is preferred for many languages because linguistic and semantic information is easier to retrieve from the more direct dependency representation. Dependencies are relations defined on words or smaller units, where sentences are divided into elements called heads and their arguments, e.g. verbs and their objects. Dependency parsing aims to predict these dependency relations between lexical units in order to retrieve information, mostly in the form of semantic interpretation or syntactic structure. Parsing is usually considered the first step of Natural Language Processing (NLP). To train statistical parsers, a sample of data annotated with the necessary information is required. There are different views on how informative or functional the representation of natural language sentences should be, and different constraints on the design process, such as: 1) how intuitive (natural) it is, 2) how easy it is to extract information from it, and 3) how appropriately and unambiguously it represents the phenomena that occur in natural languages. In this article, a review of statistical dependency parsing for different languages is presented, and current challenges of designing dependency treebanks and of dependency parsing are discussed.
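The head/argument relations described here can be encoded very simply. A minimal sketch (Python; the three-word sentence is invented, and the relation labels follow common dependency-annotation practice rather than any particular treebank discussed in the article):

```python
# A minimal encoding of dependency relations as (head, relation, dependent)
# triples for the sentence "She reads books". Numbers are token positions;
# 0 is a conventional ROOT node.
tokens = ["ROOT", "She", "reads", "books"]
dependencies = [
    (2, "nsubj", 1),   # "reads" -> "She"   (verb and its subject)
    (2, "obj",   3),   # "reads" -> "books" (verb and its object)
    (0, "root",  2),   # ROOT    -> "reads"
]

def dependents(head_index):
    """All words governed by the word at head_index."""
    return [tokens[d] for h, rel, d in dependencies if h == head_index]

print(dependents(2))   # ['She', 'books']
```

A statistical dependency parser's job is exactly to predict such triples for unseen sentences, given a treebank of sentences annotated this way.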


2012
pp. 2117-2124
Author(s):  
Ruket Çakici



Author(s):  
John Nerbonne

This article examines the application of natural language processing to computer-assisted language learning (CALL), including the history of work in this field over the last thirty-five years, and focuses on current developments and opportunities. CALL here always refers to programs designed to help people learn foreign languages. CALL is a large field, much larger than computational linguistics; this article outlines the areas of CALL to which computational linguistics (CL) can be applied. CL programs process natural languages such as English and Spanish, and the techniques are therefore often referred to as natural language processing (NLP). NLP is enlisted in CALL in several ways: to provide lemmatized access to corpora for advanced learners seeking subtleties unavailable in grammars and dictionaries; to provide morphological analysis and subsequent dictionary access for words unknown to readers; and to parse user input and diagnose morphological and syntactic errors.
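The first of these uses, lemmatized corpus access, can be sketched in a few lines (Python; the lemma table and corpus lines are invented for illustration, and a real system would use a full morphological analyser rather than a lookup table):

```python
# Sketch of lemmatized corpus access for a learner: surface forms are
# mapped to lemmas so that a query for "go" also retrieves "went", "gone".
LEMMAS = {"go": "go", "goes": "go", "went": "go", "gone": "go",
          "see": "see", "saw": "see", "seen": "see"}
CORPUS = ["She went home early", "I have seen it", "They go often"]

def concordance(lemma):
    """All corpus lines containing any surface form of the given lemma."""
    return [line for line in CORPUS
            if any(LEMMAS.get(w.lower()) == lemma for w in line.split())]

print(concordance("go"))   # ['She went home early', 'They go often']
```

This is the advantage lemmatization gives the learner: a single query retrieves every inflected form, which a plain string search would miss.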


2014
pp. 933-950
Author(s):  
Al-Mahmud
Bishnu Sarker
K. M. Azharul Hasan



Author(s):  
Shreyashi Chowdhury
Asoke Nath

Natural language processing (NLP) is a subfield of linguistics, computer science, and artificial intelligence concerned with the interactions between computers and human language, in particular how to program computers to process and analyse large amounts of natural language data. The goal is a computer capable of "understanding" the contents of documents, including the contextual nuances of the language within them. NLP combines computational linguistics (rule-based modelling of human language) with statistical, machine learning, and deep learning models. Together, these technologies enable computers to process human language in the form of text or voice data and to "understand" its full meaning, complete with the speaker's or writer's intent and sentiment. Challenges in natural language processing frequently involve speech recognition, natural language understanding, and natural language generation. This paper discusses the scope, challenges, current trends, and future directions of Natural Language Processing.


2021
Vol 11 (7)
pp. 3095
Author(s):  
Suhyune Son
Seonjeong Hwang
Sohyeun Bae
Soo Jun Park
Jang-Hwan Choi

Multi-task learning (MTL) approaches are actively used for various natural language processing (NLP) tasks. The Multi-Task Deep Neural Network (MT-DNN) has contributed significantly to improving the performance of natural language understanding (NLU) tasks. One drawback, however, is that confusion among the language representations of the various tasks arises during training of the MT-DNN model. Inspired by the internal-transfer weighting of MTL in medical imaging, we introduce a Sequential and Intensive Weighted Language Modeling (SIWLM) scheme. SIWLM consists of two stages: (1) sequential weighted learning (SWL), which trains the model on all tasks sequentially and concentrically, and (2) intensive weighted learning (IWL), which enables the model to focus on the central task. We apply this scheme to the MT-DNN model and call the result MTDNN-SIWLM. Our model achieves higher performance than the existing reference algorithms on six of the eight GLUE benchmark tasks and outperforms MT-DNN by 0.77 points on average across all tasks. Finally, we conduct a thorough empirical investigation to determine the optimal weight for each GLUE task.
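The abstract does not reproduce the paper's actual weighting schedule, so the sketch below (Python) shows only the general idea such schemes build on: per-task losses combined with weights that shift over training toward a single "central" task. The task names, schedule, and function are invented for illustration, not the SIWLM formulation itself.

```python
# Generic weighted multi-task loss: interpolate from uniform task weights
# toward weight 1.0 on the central task as training progresses.
def combined_loss(task_losses, central_task, step, total_steps):
    """Weighted sum of per-task losses; t=0 is uniform, t=1 is central-only."""
    n = len(task_losses)
    t = step / total_steps
    weights = {task: (1 - t) / n + (t if task == central_task else 0.0)
               for task in task_losses}
    return sum(weights[task] * loss for task, loss in task_losses.items())

losses = {"mnli": 0.9, "qnli": 0.7, "sst2": 0.5}
print(combined_loss(losses, "sst2", step=0, total_steps=100))    # uniform mix
print(combined_loss(losses, "sst2", step=100, total_steps=100))  # sst2 only
```

The design point is that early training still propagates gradients from all tasks (reducing representation confusion), while late training concentrates capacity on the task that matters most.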


Author(s):  
TIAN-SHUN YAO

Based on a word-based theory of natural language processing, a word-based Chinese language understanding system has been developed. In light of psychological language analysis and the features of the Chinese language, this theory is presented along with a description of the computer programs based on it. The heart of the system is the definition of a Total Information Dictionary and of the World Knowledge Source used in the system. The purpose of this research is to develop a system that can understand not only individual Chinese sentences but also whole texts.
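The abstract does not detail the dictionary's contents, but the idea of a word-based dictionary whose entries bundle all the information the processor needs can be sketched as follows (Python; the entries, field names, and compatibility check are invented for illustration and are far simpler than the article's Total Information Dictionary):

```python
# Each entry carries part of speech, a sense gloss, and selectional
# preferences, so that understanding is driven by word knowledge alone.
DICTIONARY = {
    "吃":   {"pos": "verb", "sense": "eat", "object": "food"},
    "苹果": {"pos": "noun", "sense": "apple", "category": "food"},
    "学生": {"pos": "noun", "sense": "student", "category": "animate"},
}

def compatible(verb, noun, role):
    """Check a selectional preference, e.g. whether the noun can fill 'object'."""
    need = DICTIONARY[verb].get(role)
    return need is not None and DICTIONARY[noun].get("category") == need

print(compatible("吃", "苹果", "object"))   # True: apples are food
print(compatible("吃", "学生", "object"))   # False
```

Checks like this, scaled up with world knowledge, are what let a word-based system choose sensible readings without a separate layer of sentence-level rules.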

