Natural Language Generation
Recently Published Documents

TOTAL DOCUMENTS: 465 (five years: 114)
H-INDEX: 20 (five years: 4)

2021 ◽ Author(s): Mahir Morshed

In the lead-up to the launch of Abstract Wikipedia, a sufficient body of linguistic information must be in place from which the text for a given language can be generated, so that different sets of functions, some working with concepts and others turning these into word sequences, can work together to produce something natural in that language. Developing that body of information requires more thorough consideration of a number of linguistic aspects sooner rather than later. This session will thus discuss aspects of language planning with respect to Wikidata lexicographical data and natural language generation, including the compositionality and manipulability of lexical units, the breadth and interconnectedness of units of meaning, and the treatment of variation among a language’s lects, broadly construed. Special reference will be made to the handling of each of these aspects for Bengali and the linguistic varieties often grouped with it.
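As a rough illustration of how such function layers might compose, here is a minimal sketch: one layer maps abstract concepts to lexical units drawn from Wikidata-style lexicographical data, another inflects and orders them into a sentence. The lexicon, concept identifiers, and function names are assumptions for illustration, not the actual Abstract Wikipedia or Wikifunctions interfaces.

```python
# Illustrative sketch only: concept-level and word-level functions composed
# into a sentence renderer. Not the real Abstract Wikipedia/Wikifunctions API.
from dataclasses import dataclass

@dataclass
class Lexeme:
    lemma: str
    forms: dict  # e.g. {"singular": "book", "plural": "books"}

# Hypothetical mini-lexicon keyed by concept identifier.
LEXICON = {
    "Q571": Lexeme("book", {"singular": "book", "plural": "books"}),
}

def realize_concept(concept_id: str, number: str) -> str:
    """Turn a concept reference into an inflected word form."""
    return LEXICON[concept_id].forms[number]

def render_existential(concept_id: str, count: int) -> str:
    """Compose the concept-level and word-level functions into a sentence."""
    number = "singular" if count == 1 else "plural"
    noun = realize_concept(concept_id, number)
    return f"There {'is' if count == 1 else 'are'} {count} {noun}."

print(render_existential("Q571", 3))  # -> "There are 3 books."
```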


Information ◽ 2021 ◽ Vol 12 (8) ◽ pp. 337
Author(s): Alessandro Mazzei ◽ Mattia Cerrato ◽ Roberto Esposito ◽ Valerio Basile

In natural language generation, word ordering is the task of putting the words composing the output surface form in the correct grammatical order. In this paper, we propose to apply general learning-to-rank algorithms to the task of word ordering in the broader context of surface realization. The major contributions of this paper are: (i) the design of three deep neural architectures implementing pointwise, pairwise, and listwise approaches for ranking; (ii) the testing of these neural architectures on a surface realization benchmark in five natural languages belonging to different typological families. Our experiments show promising results, in particular highlighting the performance of the pairwise approach and paving the way for more transparent surface realization from arbitrary tree- and graph-like structures.
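As a rough sketch of the pairwise idea: each candidate word is scored from a feature vector, and the model is trained so that a word that should come earlier in the surface order receives a higher score than one that should come later. The feature dimensions, network shape, and training loop below are illustrative assumptions, not the architectures evaluated in the paper.

```python
# Minimal pairwise learning-to-rank sketch for word ordering (PyTorch).
import torch
import torch.nn as nn

class PairwiseScorer(nn.Module):
    def __init__(self, feat_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )

    def forward(self, x):               # x: (batch, feat_dim)
        return self.net(x).squeeze(-1)  # higher score = earlier position

feat_dim = 32
model = PairwiseScorer(feat_dim)
loss_fn = nn.MarginRankingLoss(margin=1.0)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Toy training step: word_a should precede word_b in the surface order.
word_a = torch.randn(16, feat_dim)
word_b = torch.randn(16, feat_dim)
target = torch.ones(16)  # +1 means score(word_a) should exceed score(word_b)

opt.zero_grad()
loss = loss_fn(model(word_a), model(word_b), target)
loss.backward()
opt.step()

# At inference time, candidate words are sorted by their predicted scores.
```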


Sensors ◽ 2021 ◽ Vol 21 (16) ◽ pp. 5515
Author(s): Francisco de Arriba-Pérez ◽ Silvia García-Méndez ◽ Francisco J. González-Castaño ◽ Enrique Costa-Montenegro

We recently proposed a novel intelligent newscaster chatbot for digital inclusion. Its controlled dialogue stages (consisting of sequences of questions that are generated with hybrid Natural Language Generation techniques based on the content) support entertaining personalisation, where user interest is estimated by analysing the sentiment of their answers. A differential feature of our approach is its automatic and transparent monitoring of the abstraction skills of the target users. In this work we improve the chatbot by introducing enhanced monitoring metrics based on the distance of the user responses to an accurate characterisation of the news content. We then evaluate abstraction capabilities depending on user sentiment about the news and propose a Machine Learning model to detect users who experience discomfort, with precision, recall, F1 and accuracy levels over 80%.
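A minimal sketch of the underlying idea, assuming a plain TF-IDF cosine similarity as the distance between a user answer and a characterisation of the news item, with a simple classifier on top to flag possible discomfort. The features, thresholds, and model choice are illustrative, not the paper's exact pipeline.

```python
# Sketch: distance-based abstraction score plus a toy discomfort classifier.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.linear_model import LogisticRegression

def abstraction_score(news_text: str, user_answer: str) -> float:
    """Similarity between the news characterisation and the user's answer."""
    vec = TfidfVectorizer().fit([news_text, user_answer])
    tfidf = vec.transform([news_text, user_answer])
    return float(cosine_similarity(tfidf[0], tfidf[1])[0, 0])

# Hypothetical training data: [similarity, answer_length] -> discomfort label.
X = [[0.05, 3], [0.10, 4], [0.60, 25], [0.70, 30]]
y = [1, 1, 0, 0]
clf = LogisticRegression().fit(X, y)

score = abstraction_score("The council approved the new tram line.",
                          "I don't know what this is about.")
print(clf.predict([[score, 7]]))  # a prediction of 1 would flag likely discomfort
```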


Author(s): Fei Hao ◽ Jie Gao ◽ Carmen Bisogni ◽ Geyong Min ◽ Vincenzo Loia ◽ ...

2021 ◽ Vol 4
Author(s): Kai-Uwe Carstensen

Quantification is one of the central topics in language and computation, and the interplay of collectivity, distributivity, cumulativity, and plurality is at the heart of the semantics of quantification expressions. However, these aspects are usually discussed piecemeal and only from an interpretative perspective, with selected linguistic examples, which often blurs the overall picture. In this article, quantification phenomena are investigated from the perspective of natural language generation. Starting with a small-scale but realistic scenario, the necessary steps toward generating quantifier expressions for a perceived situation are explained. Together with the automatically generated descriptions of the scenario, the observations made are shown to offer new insights into this interplay and, more generally, into the semantics of quantification expressions and plurals. The results highlight the importance of taking different points of view in the field of language and computation.
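A toy sketch of the generation direction described here: from a perceived scene (which entities have a property) to a quantified plural description. The quantifier-choice thresholds and wording are illustrative assumptions, not the article's analysis.

```python
# Toy quantifier selection for describing a perceived scene.

def describe(entities: list, has_property, noun_plural: str, predicate: str) -> str:
    """Pick a quantifier from the proportion of entities with the property."""
    n_total = len(entities)
    n_true = sum(1 for e in entities if has_property(e))
    if n_true == 0:
        quantifier = "No"
    elif n_true == n_total:
        quantifier = "All"
    elif n_true > n_total / 2:
        quantifier = "Most"
    else:
        quantifier = "Some"
    return f"{quantifier} {noun_plural} {predicate}."

boxes = [{"id": 1, "open": True}, {"id": 2, "open": True}, {"id": 3, "open": False}]
print(describe(boxes, lambda b: b["open"], "boxes", "are open"))
# -> "Most boxes are open."
```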


Author(s): Asoke Nath ◽ Rupamita Sarkar ◽ Swastik Mitra ◽ Rohitaswa Pradhan

In the early days of Artificial Intelligence, it was observed that tasks which humans consider ‘natural’ and ‘commonplace’, such as Natural Language Understanding, Natural Language Generation and Vision, were among the most difficult to carry over to computers. Nevertheless, attempts to crack the proverbial NLP nut were made, initially with methods that fall under ‘Symbolic NLP’. One of the products of this era was ELIZA. At present the most promising forays into the world of NLP are provided by ‘Neural NLP’, which uses Representation Learning and Deep Neural Networks to model, understand and generate natural language. In the present paper, the authors tried to develop a Conversational Intelligent Chatbot, a program that can chat with a user about any conceivable topic without having domain-specific knowledge programmed into it. This is a challenging task, as it involves both ‘Natural Language Understanding’ (converting natural language user input into representations that a machine can understand) and subsequently ‘Natural Language Generation’ (generating an appropriate response to that input in natural language). Several approaches exist for building conversational chatbots. In the present paper, two models have been used and their performance has been compared and contrasted. The first model is purely generative and uses a Transformer-based architecture. The second model is retrieval-based and uses Deep Neural Networks.
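As a minimal sketch of the retrieval-based side of such a system: pick the stored response whose associated context is closest to the user's input. The corpus and the TF-IDF similarity below are illustrative stand-ins; the paper's retrieval model uses deep neural networks rather than TF-IDF.

```python
# Sketch of retrieval-based response selection over (context, response) pairs.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical (context, response) pairs standing in for a dialogue corpus.
pairs = [
    ("hello how are you", "I'm doing well, thanks for asking!"),
    ("what is your favourite book", "I enjoy science fiction novels."),
    ("tell me about the weather", "I can't see outside, but I hope it's sunny."),
]

contexts = [c for c, _ in pairs]
vectorizer = TfidfVectorizer().fit(contexts)
context_matrix = vectorizer.transform(contexts)

def retrieve_response(user_input: str) -> str:
    """Return the response attached to the most similar stored context."""
    sims = cosine_similarity(vectorizer.transform([user_input]), context_matrix)
    return pairs[int(sims.argmax())][1]

print(retrieve_response("hi, how are you doing today?"))
# -> "I'm doing well, thanks for asking!"
```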


2021 ◽ Author(s): Ben Buchanan ◽ Andrew Lohn ◽ Micah Musser ◽ Katerina Sedova

Growing popular and industry interest in high-performing natural language generation models has led to concerns that such models could be used to generate automated disinformation at scale. This report examines the capabilities of GPT-3, a cutting-edge AI system that writes text, to analyze its potential misuse for disinformation. A model like GPT-3 may be able to help disinformation actors substantially reduce the work necessary to write disinformation while expanding its reach and potentially also its effectiveness.

