language generation
Recently Published Documents


TOTAL DOCUMENTS: 709 (FIVE YEARS: 208)
H-INDEX: 25 (FIVE YEARS: 6)

Author(s): Justin Tonra, David Kelly

Eververse was a yearlong conceptual poetry project which used a poet’s biometric data as the basis for generating verse. This article describes the project’s conceptual contributions to the field of electronic literature and its technical development. Eververse operated by collecting biometric data from the poet with a commercial fitness-tracking device. This data was sent to a custom-built poetry generator, which deployed a number of processes from the domains of Natural Language Generation and Sentiment Analysis to generate poetry. The form and content of this poetry were designed to vary according to specific changes in the biometric data, resulting in poetry that conspicuously correlated with the poet’s daily activities. The poetry was published in real time on the project website, and the full poem and associated data have now been archived. In addition to providing details of the technical implementation of Eververse, this article situates the work within the tradition of electronic literature and analyses its unique inscription of biometric data. The article examines that feature in the contemporary context of the quantified self, but also in its engagement with historical poetic theories of composition, creativity, and the textualisation of the body.
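To make the described pipeline concrete, here is a minimal, hypothetical sketch of the kind of biometrics-to-verse mapping the abstract outlines: readings from a fitness tracker parameterise a simple generator. The function names, parameter mappings, and lexicon are invented for illustration and are not drawn from the Eververse codebase.

```python
# Hypothetical sketch: biometric readings drive formal parameters of
# generated verse. All names and mappings are illustrative assumptions.
import random

def poem_parameters(heart_rate: int, steps: int) -> dict:
    """Map biometric readings to formal parameters of the verse."""
    return {
        # Assumed mapping: faster heart rate -> shorter, more clipped lines.
        "line_length": max(3, 10 - heart_rate // 20),
        # Assumed mapping: more activity -> more stanzas.
        "stanzas": 1 + steps // 5000,
        # Sentiment target the generator tries to match.
        "valence": "restless" if heart_rate > 100 else "calm",
    }

def generate_stanza(params: dict, lexicon: dict) -> str:
    """Sample words from a sentiment-tagged lexicon to fill one stanza."""
    words = lexicon[params["valence"]]
    lines = [
        " ".join(random.choices(words, k=params["line_length"]))
        for _ in range(4)
    ]
    return "\n".join(lines)

lexicon = {
    "calm": ["drift", "slow", "breath", "tide", "evening", "still"],
    "restless": ["pulse", "spark", "race", "flicker", "quick", "burn"],
}

params = poem_parameters(heart_rate=112, steps=8400)
poem = "\n\n".join(generate_stanza(params, lexicon)
                   for _ in range(params["stanzas"]))
print(poem)
```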


2021, pp. 1-51
Author(s): Di Jin, Zhijing Jin, Zhiting Hu, Olga Vechtomova, Rada Mihalcea

Abstract: Text style transfer is an important task in natural language generation which aims to control certain attributes of the generated text, such as politeness, emotion, and humor. It has a long history in the field of natural language processing, and it has recently regained significant attention thanks to the promising performance brought by deep neural models. In this paper, we present a systematic survey of the research on neural text style transfer, spanning over 100 representative articles since the first neural text style transfer work in 2017. We discuss the task formulation, existing datasets and subtasks, evaluation, and the rich methodologies in the presence of parallel and non-parallel data. We also discuss a variety of important topics regarding the future development of this task.
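As a concrete illustration of the task formulation the survey covers, the toy sketch below rewrites a sentence so that one attribute (politeness) changes while the content is preserved. It is a naive lexicon-substitution baseline with an invented word list, not one of the neural methods surveyed.

```python
# Toy style-transfer task illustration: change the politeness attribute
# of a sentence while preserving its content. The marker list is an
# invented example, not from the survey.
IMPOLITE_TO_POLITE = {
    "give me": "could you please give me",
    "now": "when you have a moment",
    "wrong": "not quite right",
}

def transfer_style(sentence: str, mapping: dict) -> str:
    """Apply attribute-marker substitutions, leaving other content intact."""
    out = sentence.lower()
    for marker, replacement in mapping.items():
        out = out.replace(marker, replacement)
    return out

src = "Give me the report now."
print(transfer_style(src, IMPOLITE_TO_POLITE))
# -> could you please give me the report when you have a moment.
```

Neural approaches replace the hand-written substitution table with models learned from parallel or non-parallel data, but the input-output contract of the task is the same.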


Author(s): Hangu Yeo, Elahe Khorasani, Vadim Sheinin, Ngoc Phuoc An Vo, Octavian Popescu, ...

2021
Author(s): Philippe Blache, Matthis Houlès

This paper presents a dialogue system for training doctors to break bad news. The originality of this work lies in its knowledge representation. All information known before the dialogue (the universe of discourse, the context, the scenario of the dialogue), as well as the knowledge transferred from the doctor to the patient during the conversation, is represented in a shared knowledge structure called the common ground, which constitutes the core of the system. The Natural Language Understanding and Natural Language Generation modules of the system take advantage of this structure, and we present in this paper several original techniques that make it possible to implement them efficiently.
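A minimal sketch of what such a common-ground structure might look like, assuming a simple store of facts flagged by whether they have yet been conveyed to the patient. The class and field names and the update logic are illustrative guesses, not the authors’ implementation.

```python
# Hypothetical common-ground store shared by the NLU and NLG modules:
# facts known before the dialogue, each marked as grounded once conveyed.
from dataclasses import dataclass, field

@dataclass
class Fact:
    content: str
    grounded: bool = False  # has this been shared with the patient?

@dataclass
class CommonGround:
    facts: dict = field(default_factory=dict)

    def add(self, key: str, content: str) -> None:
        self.facts[key] = Fact(content)

    def ground(self, key: str) -> None:
        """Mark a fact as transferred from doctor to patient."""
        self.facts[key].grounded = True

    def next_to_convey(self):
        """NLG side: pick the next ungrounded fact to verbalise."""
        for key, fact in self.facts.items():
            if not fact.grounded:
                return key, fact.content
        return None

# Scenario knowledge known before the dialogue starts.
cg = CommonGround()
cg.add("diagnosis", "the biopsy shows a malignant tumour")
cg.add("treatment", "surgery followed by chemotherapy is recommended")

key, content = cg.next_to_convey()
print(f"System should now convey: {content}")
cg.ground(key)  # NLU side confirmed the patient acknowledged the news
```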


Complexity, 2021, Vol 2021, pp. 1-9
Author(s): Lanlan Jiang, Shengjun Yuan, Jun Li

Discourse coherence is strongly associated with text quality, making it important to natural language generation and understanding. However, existing coherence models focus on measuring individual aspects of coherence, such as lexical overlap, entity centralization, and rhetorical structure, and lack a measurement of the semantics of the text. In this paper, we propose a discourse coherence analysis method combining sentence embeddings with a dimension grid: we obtain sentence-level vector representations by deep learning, and we introduce a coherence model that captures the fine-grained semantic transitions in text. Our work is based on the hypothesis that each dimension of the embedding vector carries a specific, stable semantics. We treat every dimension as an equal grid and compute its transition probabilities, and we enrich the document feature vector to model coherence. Finally, the experimental results demonstrate that our method achieves excellent performance on two coherence-related tasks.
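The sketch below shows one plausible reading of the dimension-grid idea: discretise each embedding dimension into bins, estimate bin-to-bin transition probabilities between consecutive sentences, and score a document by the likelihood of its observed transitions. The binning scheme, smoothing, and scoring are assumptions for illustration, not the paper’s exact recipe.

```python
# Assumed reading of the dimension-grid coherence model: per-dimension
# bin transitions between consecutive sentence embeddings.
import numpy as np

def to_grid(embeddings: np.ndarray, n_bins: int = 10) -> np.ndarray:
    """Map each dimension's values into integer bins over [-1, 1]."""
    clipped = np.clip(embeddings, -1.0, 1.0)
    return np.floor((clipped + 1.0) / 2.0 * (n_bins - 1e-9)).astype(int)

def transition_probs(grid: np.ndarray, n_bins: int = 10) -> np.ndarray:
    """Per-dimension matrix of P(bin j at sentence t+1 | bin i at t)."""
    n_sents, n_dims = grid.shape
    counts = np.ones((n_dims, n_bins, n_bins))  # add-one smoothing
    for t in range(n_sents - 1):
        for d in range(n_dims):
            counts[d, grid[t, d], grid[t + 1, d]] += 1
    return counts / counts.sum(axis=2, keepdims=True)

def coherence_score(embeddings: np.ndarray, probs: np.ndarray) -> float:
    """Mean log-probability of a document's dimension-wise transitions."""
    grid = to_grid(embeddings)
    logps = [
        np.log(probs[d, grid[t, d], grid[t + 1, d]])
        for t in range(len(grid) - 1)
        for d in range(grid.shape[1])
    ]
    return float(np.mean(logps))

# Toy usage: five "sentence embeddings" of dimension 8 (random stand-ins
# for vectors a deep encoder would produce).
rng = np.random.default_rng(0)
doc = rng.uniform(-1, 1, size=(5, 8))
probs = transition_probs(to_grid(doc))
print(coherence_score(doc, probs))
```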


2021
Author(s): Mahir Morshed

In the lead-up to the launch of Abstract Wikipedia, a sufficient body of linguistic information must be in place, on the basis of which text for a given language can be generated, so that different sets of functions, some working with concepts and others turning these into word sequences, can work together to produce something natural in that language. Developing that body of information requires more thorough consideration of a number of linguistic aspects sooner rather than later. This session will therefore discuss aspects of language planning with respect to Wikidata lexicographical data and natural language generation, including the compositionality and manipulability of lexical units, the breadth and interconnectedness of units of meaning, and the treatment of variation among a language’s lects, broadly construed. Special reference will be made to the handling of each of these aspects for Bengali and the linguistic varieties often grouped with it.
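As a rough illustration of that division of labour, the sketch below separates a concept-level constructor from per-language renderers. The function names and the toy English and Bengali renderings are invented for illustration and do not reflect actual Wikifunctions code.

```python
# Illustrative split between concept-level functions and per-language
# renderers, as described in the abstract. All names are hypothetical.

def age_claim(person: str, age: int) -> dict:
    """Concept-level constructor: a language-independent statement."""
    return {"type": "age", "subject": person, "age": age}

RENDERERS = {
    # Language-specific functions turning a concept into a word sequence.
    "en": lambda c: f"{c['subject']} is {c['age']} years old.",
    "bn": lambda c: f"{c['subject']}-এর বয়স {c['age']} বছর।",
}

def render(claim: dict, lang: str) -> str:
    """Dispatch a concept to the renderer for the requested language."""
    return RENDERERS[lang](claim)

claim = age_claim("Rabindranath Tagore", 80)
print(render(claim, "en"))
print(render(claim, "bn"))
```

Real renderers would draw on lexicographical data (inflection, agreement, lect variation) rather than string templates, which is precisely why the session argues that this information must be in place early.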


2021
Author(s): Sara Thomas

How do you recover after a crisis? This session will reflect on the work done by and with the sco.wiki community to recover and rebuild after the negative international press attention that surrounded the wiki in 2020. I’ll talk about on- and off-wiki community development, partnership development, the challenges that still face the project, and hopes for the future. I’ll also reflect on care in volunteer management, and why we should always remember that there are real people behind keyboards. As Scotland Programme Coordinator for Wikimedia UK, I’ve been involved in supporting the community post-crisis, and have been impressed and heartened by the volume of work which has taken place since sco.wiki hit the headlines. I’d like to take this opportunity to tell the story of a group of editors and Scots speakers who are determined that the wiki should survive, grow, and thrive.


Author(s): Hima Yeldo

Abstract: Natural Language Processing is the study of the interplay between computers and human languages. NLP has spread its applications across various fields, such as email spam detection, machine translation, summarization, information extraction, and question answering. Natural Language Processing comprises two parts, Natural Language Generation and Natural Language Understanding, which cover the tasks of generating and understanding text, respectively.

