Story Realization: Expanding Plot Events into Sentences

2020 ◽  
Vol 34 (05) ◽  
pp. 7375-7382
Author(s):  
Prithviraj Ammanabrolu ◽  
Ethan Tien ◽  
Wesley Cheung ◽  
Zhaochen Luo ◽  
William Ma ◽  
...  

Neural network based approaches to automated story plot generation attempt to learn how to generate novel plots from a corpus of natural language plot summaries. Prior work has shown that a semantic abstraction of sentences called events improves neural plot generation and allows one to decompose the problem into: (1) the generation of a sequence of events (event-to-event) and (2) the transformation of these events into natural language sentences (event-to-sentence). However, typical neural language generation approaches to event-to-sentence can ignore the event details and produce grammatically-correct but semantically-unrelated sentences. We present an ensemble-based model that generates natural language guided by events. We provide results—including a human subjects study—for a full end-to-end automated story generation system showing that our method generates more coherent and plausible stories than baseline approaches.
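The event abstraction described above can be made concrete with a small sketch. Here an event is modelled as a tuple of subject, verb, object, and modifier; the tuple shape and the toy role-based extractor are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of the event abstraction used by event-to-event /
# event-to-sentence pipelines: a sentence collapses to one semantic tuple.
from typing import NamedTuple, Optional


class Event(NamedTuple):
    subject: str
    verb: str
    obj: Optional[str]
    modifier: Optional[str]


def sentence_to_event(tokens_with_roles):
    """Collapse role-labelled tokens into a single event tuple."""
    roles = dict(tokens_with_roles)
    return Event(
        subject=roles.get("subj", "EmptyParameter"),
        verb=roles["verb"],
        obj=roles.get("obj"),
        modifier=roles.get("mod"),
    )


event = sentence_to_event([("subj", "knight"), ("verb", "slay"), ("obj", "dragon")])
# event-to-sentence would then map this tuple back to a full sentence,
# e.g. "The knight slays the dragon."
```

The point of the abstraction is that event-to-event only needs to model sequences of such tuples, while event-to-sentence handles surface realization.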

Author(s):  
Lili Yao ◽  
Nanyun Peng ◽  
Ralph Weischedel ◽  
Kevin Knight ◽  
Dongyan Zhao ◽  
...  

Automatic storytelling is challenging since it requires generating long, coherent natural language to describe a sensible sequence of events. Despite considerable efforts on automatic story generation in the past, prior work is either restricted in plot planning or can only generate stories in a narrow domain. In this paper, we explore open-domain story generation that writes stories given a title (topic) as input. We propose a plan-and-write hierarchical generation framework that first plans a storyline, and then generates a story based on the storyline. We compare two planning strategies. The dynamic schema interweaves story planning and its surface realization in text, while the static schema plans out the entire storyline before generating stories. Experiments show that with explicit storyline planning, the generated stories are more diverse, coherent, and on topic than those generated without creating a full plan, according to both automatic and human evaluations.
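The difference between the two planning schemas can be sketched in a few lines. This is a toy rendering under stated assumptions (the `plan_next` and `write_sentence` stand-ins are hypothetical, not the authors' models); it only shows the control flow that distinguishes static from dynamic planning.

```python
# Static schema: plan the whole storyline, then write every sentence.
# Dynamic schema: interleave planning each keyword with writing each sentence.
def plan_next(title, storyline, story=None):
    # toy planner: emit the next storyline keyword
    return f"{title}-kw{len(storyline)}"


def write_sentence(title, storyline, i):
    # toy surface realizer: verbalise the i-th storyline keyword
    return f"A sentence about {storyline[i]}."


def static_schema(title, length=5):
    storyline = []
    for _ in range(length):
        storyline.append(plan_next(title, storyline))  # full plan first
    return [write_sentence(title, storyline, i) for i in range(length)]


def dynamic_schema(title, length=5):
    storyline, story = [], []
    for i in range(length):
        # the dynamic planner can condition on the story written so far
        storyline.append(plan_next(title, storyline, story))
        story.append(write_sentence(title, storyline, i))
    return story


print(static_schema("ocean", length=3))
```

With real neural planners the two schemas diverge, because the dynamic planner conditions on generated text; with these toy stand-ins they produce the same output.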


Author(s):  
Zhuoxuan Jiang ◽  
Jie Ma ◽  
Jingyi Lu ◽  
Guangyuan Yu ◽  
Yipeng Yu ◽  
...  

We propose a general framework for a goal-driven conversation assistant based on Planning methods. It aims to rapidly build a dialogue agent with less handcrafting and to make dialogue management more interpretable and efficient in various scenarios. By employing the Planning method, dialogue actions can be efficiently defined and reused, and transitions of the dialogue are managed by a Planner. The proposed framework consists of a pipeline of Natural Language Understanding (intent labeler), Planning of Actions (with a World Model), and Natural Language Generation (learned by an attention-based neural network). We demonstrate our approach by creating conversational agents for several independent domains.
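The planning stage of such a pipeline can be sketched with a small STRIPS-style action model: each dialogue action has preconditions and effects over the world state, and the planner chains actions until the goal holds. The action names, the booking domain, and the greedy forward search below are illustrative assumptions, not the paper's implementation.

```python
# Hedged sketch of Planner-driven dialogue management: NLU would map an
# utterance to a goal; the planner searches reusable dialogue actions
# against the world state; NLG would then verbalise each chosen action.
from dataclasses import dataclass


@dataclass(frozen=True)
class Action:
    name: str
    preconds: frozenset  # facts that must hold before the action applies
    effects: frozenset   # facts the action adds to the world state


def plan(state, goal, actions):
    """Greedy forward search: apply any action whose preconditions hold."""
    state, steps = set(state), []
    while not goal <= state:
        for a in actions:
            if a.preconds <= state and not a.effects <= state:
                state |= a.effects
                steps.append(a.name)
                break
        else:
            return None  # no applicable action: planning failed
    return steps


actions = [
    Action("ask_city", frozenset(), frozenset({"city_known"})),
    Action("ask_date", frozenset({"city_known"}), frozenset({"date_known"})),
    Action("book", frozenset({"city_known", "date_known"}), frozenset({"booked"})),
]
print(plan(set(), {"booked"}, actions))  # ['ask_city', 'ask_date', 'book']
```

Because actions are declared by preconditions and effects rather than hard-coded transitions, the same action definitions can be reused across scenarios, which is the interpretability and reuse argument the abstract makes.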


2006 ◽  
Vol 32 (2) ◽  
pp. 223-262 ◽  
Author(s):  
Diana Inkpen ◽  
Graeme Hirst

Choosing the wrong word in a machine translation or natural language generation system can convey unwanted connotations, implications, or attitudes. The choice between near-synonyms such as error, mistake, slip, and blunder—words that share the same core meaning, but differ in their nuances—can be made only if knowledge about their differences is available. We present a method to automatically acquire a new type of lexical resource: a knowledge base of near-synonym differences. We develop an unsupervised decision-list algorithm that learns extraction patterns from a special dictionary of synonym differences. The patterns are then used to extract knowledge from the text of the dictionary. The initial knowledge base is later enriched with information from other machine-readable dictionaries. Information about the collocational behavior of the near-synonyms is acquired from free text. The knowledge base is used by Xenon, a natural language generation system that shows how the new lexical resource can be used to choose the best near-synonym in specific situations.
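The pattern-extraction step can be illustrated with a toy sketch in the spirit of the decision-list approach: lexical patterns learned from the dictionary map its sentences to assertions about a near-synonym's nuance. The regex patterns, dimension labels, and strength values here are hypothetical stand-ins for the learned extraction patterns.

```python
# Toy sketch (assumptions, not the paper's algorithm) of extracting
# near-synonym difference assertions from dictionary-style text.
import re

# each pattern pairs a textual cue with a difference dimension and strength
PATTERNS = [
    (re.compile(r"(\w+) suggests (\w+)"), "suggestion", "medium"),
    (re.compile(r"(\w+) implies (\w+)"), "implication", "strong"),
]


def extract(sentence):
    """Return (near_synonym, dimension, nuance, strength) assertions."""
    facts = []
    for pat, dimension, strength in PATTERNS:
        for word, nuance in pat.findall(sentence):
            facts.append((word, dimension, nuance, strength))
    return facts


facts = extract("blunder implies stupidity while slip suggests inadvertence")
```

A real decision list would rank many such patterns by reliability and apply the most confident match first; the extracted assertions populate the knowledge base of near-synonym differences that a generator like Xenon consults.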


2006 ◽  
Vol 13 (3) ◽  
pp. 191-233 ◽  
Author(s):  
I. ANDROUTSOPOULOS ◽  
J. OBERLANDER ◽  
V. KARKALETSIS

We present the source authoring facilities of a natural language generation system that produces personalised descriptions of objects in multiple natural languages starting from language-independent symbolic information in ontologies and databases as well as pieces of canned text. The system has been tested in applications ranging from museum exhibitions to presentations of computer equipment for sale. We discuss the architecture of the overall system, the resources that the authors manipulate, the functionality of the authoring facilities, the system's personalisation mechanisms, and how they relate to source authoring. A usability evaluation of the authoring facilities is also presented, followed by more recent work on reusing information extracted from existing databases and documents, and supporting the OWL ontology specification language.
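The core generation idea, producing descriptions from language-independent symbolic facts combined with per-language templates and canned text, can be sketched minimally. The ontology entry, template, and canned string below are invented examples, not the system's actual resources.

```python
# Minimal sketch, under assumptions, of generating an object description
# from language-independent ontology facts plus per-language resources.
TEMPLATES = {"en": "{name} was created in {year}. {canned}"}
ONTOLOGY = {"exhibit-42": {"name": "Amphora", "year": "530 BC"}}
CANNED = {"exhibit-42": {"en": "It depicts a chariot race."}}


def describe(obj_id, lang="en"):
    """Fill the language's template with symbolic facts and canned text."""
    facts = ONTOLOGY[obj_id]
    return TEMPLATES[lang].format(canned=CANNED[obj_id][lang], **facts)


print(describe("exhibit-42"))
```

Adding another output language would mean authoring another template and canned-text entry while reusing the same symbolic facts, which is what makes the source resources language-independent.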


2019 ◽  
Vol 10 (1) ◽  
pp. 34-86
Author(s):  
Stephanie M. Lukin ◽  
Marilyn A. Walker

Storytelling is an integral part of daily life and a key part of how we share information and connect with others. The ability to use Natural Language Generation (NLG) to produce stories that are tailored and adapted to the individual reader could have large impact in many different applications. However, one reason that this has not become a reality to date is the NLG story gap, a disconnect between the plan-type representations that story generation engines produce, and the linguistic representations needed by NLG engines. Here we describe Fabula Tales, a storytelling system supporting both story generation and NLG. With manual annotation of texts from existing stories using an intuitive user interface, Fabula Tales automatically extracts the underlying story representation and its accompanying syntactically grounded representation. Narratological and sentence planning parameters are applied to these structures to generate different versions of the story. We show how our storytelling system can alter the story at the sentence level, as well as the discourse level. We also show that our approach can be applied to different kinds of stories by testing our approach on both Aesop’s Fables and first-person blogs posted on social media. The content and genre of such stories varies widely, supporting our claim that our approach is general and domain independent. We then conduct several user studies to evaluate the generated story variations and show that Fabula Tales’ automatically produced variations are perceived as more immediate, interesting, and correct, and are preferred to a baseline generation system that does not use narrative parameters.
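Sentence-level variation of the kind described above can be illustrated with a tiny sketch: one sentence-planning parameter, narrator point of view, is applied to the same underlying story representation to yield different surface versions. The event dictionary, the parameter, and the naive morphology are assumptions for illustration, not Fabula Tales' code.

```python
# Illustrative sketch of a narratological / sentence-planning parameter:
# the same story event realized in third- or first-person narration.
def realize(event, pov="third"):
    subj = "I" if pov == "first" else event["agent"]
    # naive morphology: add "-s" for third-person present tense
    verb = event["verb"] if pov == "first" else event["verb"] + "s"
    return f"{subj} {verb} {event['patient']}."


event = {"agent": "The crow", "verb": "drop", "patient": "the cheese"}
print(realize(event))           # "The crow drops the cheese."
print(realize(event, "first"))  # "I drop the cheese."
```

Because the variation operates on the story representation rather than on finished text, the same mechanism applies unchanged to fables and to blog narratives.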

