A Narrative Sentence Planner and Structurer for Domain Independent, Parameterizable Storytelling

2019 ◽  
Vol 10 (1) ◽  
pp. 34-86
Author(s):  
Stephanie M. Lukin ◽  
Marilyn A. Walker

Storytelling is an integral part of daily life and a key part of how we share information and connect with others. The ability to use Natural Language Generation (NLG) to produce stories that are tailored and adapted to the individual reader could have a large impact in many different applications. However, one reason this has not become a reality to date is the NLG story gap: a disconnect between the plan-type representations that story generation engines produce and the linguistic representations needed by NLG engines. Here we describe Fabula Tales, a storytelling system supporting both story generation and NLG. With manual annotation of texts from existing stories using an intuitive user interface, Fabula Tales automatically extracts the underlying story representation and its accompanying syntactically grounded representation. Narratological and sentence planning parameters are applied to these structures to generate different versions of the story. We show how our storytelling system can alter the story at both the sentence level and the discourse level. We also show that our approach can be applied to different kinds of stories by testing it on both Aesop’s Fables and first-person blogs posted on social media. The content and genre of such stories vary widely, supporting our claim that our approach is general and domain independent. We then conduct several user studies to evaluate the generated story variations and show that Fabula Tales’ automatically produced variations are perceived as more immediate, interesting, and correct, and are preferred to a baseline generation system that does not use narrative parameters.
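As a rough illustration of how a sentence planner might expose such narratological parameters, the sketch below renders a single story event under two point-of-view settings. The event fields and the point_of_view parameter are hypothetical and are not Fabula Tales' actual representations or interface.

```python
# Illustrative sketch only: the StoryEvent fields and the point_of_view
# parameter are hypothetical, not the Fabula Tales representation or API.
from dataclasses import dataclass

@dataclass
class StoryEvent:
    agent: str      # e.g. "the crow"
    verb: str       # base form, e.g. "drop"
    patient: str    # e.g. "the cheese"

def realize(event: StoryEvent, point_of_view: str = "third") -> str:
    """Render one event as a sentence under a narratological parameter."""
    if point_of_view == "first":
        # Retell the event from the agent's perspective.
        return f"I {event.verb} {event.patient}."
    # Naive third-person morphology (verb + "s"), adequate for this toy example.
    return f"{event.agent.capitalize()} {event.verb}s {event.patient}."

story = [StoryEvent("the crow", "drop", "the cheese")]
for pov in ("third", "first"):
    print(" ".join(realize(e, pov) for e in story))
# The crow drops the cheese.
# I drop the cheese.
```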

2020 ◽  
Vol 34 (05) ◽  
pp. 7375-7382
Author(s):  
Prithviraj Ammanabrolu ◽  
Ethan Tien ◽  
Wesley Cheung ◽  
Zhaochen Luo ◽  
William Ma ◽  
...  

Neural network based approaches to automated story plot generation attempt to learn how to generate novel plots from a corpus of natural language plot summaries. Prior work has shown that a semantic abstraction of sentences called events improves neural plot generation and allows one to decompose the problem into: (1) the generation of a sequence of events (event-to-event) and (2) the transformation of these events into natural language sentences (event-to-sentence). However, typical neural language generation approaches to event-to-sentence can ignore the event details and produce grammatically correct but semantically unrelated sentences. We present an ensemble-based model that generates natural language guided by events. We provide results—including a human subjects study—for a full end-to-end automated story generation system, showing that our method generates more coherent and plausible stories than baseline approaches.
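The decomposition into event-to-event and event-to-sentence stages can be pictured with the minimal sketch below. The 4-tuple event schema and the token-overlap selection rule are assumptions made for illustration; they stand in for the learned models and ensemble described in the paper.

```python
# Toy sketch of the event-to-event / event-to-sentence pipeline.
# The event schema and the overlap heuristic are illustrative stand-ins
# for the paper's learned models and ensemble.
from typing import List, Tuple

Event = Tuple[str, str, str, str]  # (subject, verb, object, modifier)

def event_to_event(history: List[Event]) -> Event:
    """Stand-in for a learned model that proposes the next abstract event."""
    subj, _, obj, _ = history[-1]
    return (obj, "flee", subj, "forest")  # hard-coded toy continuation

def event_to_sentence(event: Event, candidates: List[str]) -> str:
    """Ensemble-style selection: prefer the candidate sentence that keeps
    the most event tokens, penalizing fluent but unrelated text."""
    tokens = set(event)
    return max(candidates, key=lambda s: len(tokens & set(s.lower().split())))

history = [("knight", "attack", "dragon", "castle")]
nxt = event_to_event(history)
print(event_to_sentence(nxt, [
    "the dragon flees the knight into the forest",
    "it was a dark and stormy night",
]))  # -> "the dragon flees the knight into the forest"
```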


2021 ◽  
Vol 2 (1) ◽  
Author(s):  
Zhu Feng

Guided by the framework of structural construction and using experiments built in the E-Prime experiment generation system, this study applies the reading-time method and a detection task to investigate how native Chinese college students comprehend English conditional adverbial clauses presented in two positions: preposed and postposed. Significant differences in reading time and comprehension are found between preposed and postposed conditional adverbial clauses in English reading. The results show that, at both the sentence level and the text level, the postposed structure is more difficult to process than the preposed structure, which reduces reading speed and the accuracy of understanding. The study aims to inform the teaching of English reading.
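For readers unfamiliar with the reading-time method, the sketch below shows the kind of paired comparison such a design rests on; the per-participant means are invented, and this is not the study's actual data or analysis code.

```python
# Minimal sketch of a reading-time comparison (invented numbers):
# paired t-test on per-participant mean reading times for preposed
# vs. postposed conditional clauses.
import numpy as np
from scipy import stats

preposed  = np.array([3.1, 2.8, 3.4, 3.0, 2.9, 3.2])  # seconds per sentence
postposed = np.array([3.6, 3.3, 3.9, 3.5, 3.4, 3.8])

t, p = stats.ttest_rel(postposed, preposed)
print(f"mean slowdown = {np.mean(postposed - preposed):.2f} s, "
      f"t = {t:.2f}, p = {p:.4f}")
```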


First, this chapter introduces an idea that treats narrative phenomena as the integration of the individual level (a narrative generation and reception system) and the social level (a narrative production and consumption system); this idea is called the “multiple narrative structures model.” The chapter also describes a future vision of a human-machine symbiosis system in which narrators and receivers include artificial intelligences. Furthermore, based on the concepts of “visible narratives” and “invisible narratives,” the author analyzes narrative components or elements and considers methods for synthesizing the analyzed elements. This idea of the analysis and synthesis of various narrative elements will be systematized in the “integrated narrative generation system.”


Author(s):  
Daniel Oro

Complex social animal groups behave as self-organized, single structures: they feed together, they defend against predators together, they escape from perturbations, disperse, and migrate together, and they share information. It seems evident that many individuals sharing information about their environment may be more successful in coping with perturbations than solitary individuals gathering information on their own. The group exists for and by means of all the individuals, and these exist for and by means of the group. Social groups have emergent properties that cannot be easily explained by either selection or self-organization. Yet sociality has been shaped by both forces. How sociality has evolved by selection is also puzzling because it pits the benefits of the group against the benefits of the individual, a historically debated theme. There are many other open questions about sociality that I have explored in this book. But in the end, the process that has fascinated me the most is social copying. Despite the sophisticated mechanisms that have evolved to increase information in social groups—which have culminated in humans with language and technological interconnections—it is impressive how a simple behaviour such as social copying has maintained its strength when individuals make any kind of decision, from insignificant to transcendent....


Author(s):  
Jian Guan ◽  
Fei Huang ◽  
Zhihao Zhao ◽  
Xiaoyan Zhu ◽  
Minlie Huang

Story generation, namely generating a reasonable story from a leading context, is an important but challenging task. In spite of the success in modeling fluency and local coherence, existing neural language generation models (e.g., GPT-2) still suffer from repetition, logic conflicts, and lack of long-range coherence in generated stories. We conjecture that this is because of the difficulty of associating relevant commonsense knowledge, understanding causal relationships, and planning entities and events with proper temporal order. In this paper, we devise a knowledge-enhanced pretraining model for commonsense story generation. We propose to utilize commonsense knowledge from external knowledge bases to generate reasonable stories. To further capture the causal and temporal dependencies between the sentences in a reasonable story, we employ multi-task learning, combining the generation objective with a discriminative objective that distinguishes true from fake stories during fine-tuning. Automatic and manual evaluation shows that our model can generate more reasonable stories than state-of-the-art baselines, particularly in terms of logic and global coherence.
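The multi-task objective can be summarized as a weighted sum of a language-modeling loss and a true-versus-fake classification loss, roughly as in the sketch below; the weighting, tensor shapes, and corruption scheme are assumptions for illustration rather than the paper's exact formulation.

```python
# Sketch of a multi-task fine-tuning loss: next-token prediction on
# story text plus a discriminative objective separating true stories
# from corrupted ("fake") ones. Weighting and shapes are placeholders.
import torch
import torch.nn.functional as F

def multitask_loss(lm_logits, lm_targets, cls_logits, cls_targets, alpha=0.5):
    # lm_logits:   (batch, seq_len, vocab)  token predictions
    # lm_targets:  (batch, seq_len)         gold next tokens
    # cls_logits:  (batch, 2)               true-vs-fake scores per story
    # cls_targets: (batch,)                 1 = true story, 0 = corrupted
    lm_loss = F.cross_entropy(
        lm_logits.reshape(-1, lm_logits.size(-1)), lm_targets.reshape(-1)
    )
    cls_loss = F.cross_entropy(cls_logits, cls_targets)
    return lm_loss + alpha * cls_loss

# toy tensors just to exercise the function
lm_logits = torch.randn(2, 8, 100)
lm_targets = torch.randint(0, 100, (2, 8))
cls_logits = torch.randn(2, 2)
cls_targets = torch.tensor([1, 0])
print(multitask_loss(lm_logits, lm_targets, cls_logits, cls_targets))
```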


2020 ◽  
Author(s):  
Yarden Cohen ◽  
David Nicholson ◽  
Alexa Sanchioni ◽  
Emily K. Mallaber ◽  
Viktoriya Skidanova ◽  
...  

Songbirds have long been studied as a model system of sensory-motor learning. Many analyses of birdsong require time-consuming manual annotation of the individual elements of song, known as syllables or notes. Here we describe the first automated algorithm for birdsong annotation that is applicable to complex song such as canary song. We developed a neural network architecture, “TweetyNet”, that is trained with a small amount of hand-labeled data using supervised learning methods. We first show that TweetyNet achieves significantly lower error on Bengalese finch song than a similar method while using less training data, and that it maintains low error rates across days. Applied to canary song, TweetyNet achieves fully automated annotation, accurately capturing the complex statistical structure previously discovered in a manually annotated dataset. We conclude that TweetyNet will make it possible to ask a wide range of new questions focused on complex songs for which manual annotation was impractical.
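The general framing, treating annotation as frame-level classification of a spectrogram into syllable classes plus silence, can be sketched as below. This toy network is not the published TweetyNet architecture; it only shows the input and output shapes such a model works with.

```python
# Toy frame classifier for spectrograms: one syllable/silence label per
# time frame. NOT the published TweetyNet architecture; shapes only.
import torch
import torch.nn as nn

class FrameClassifier(nn.Module):
    def __init__(self, n_freq_bins=128, n_classes=10):
        super().__init__()
        self.conv = nn.Conv2d(1, 8, kernel_size=3, padding=1)  # local spectro-temporal features
        self.pool = nn.MaxPool2d((2, 1))                        # pool frequency, keep time resolution
        self.rnn = nn.LSTM(8 * (n_freq_bins // 2), 64,
                           batch_first=True, bidirectional=True)
        self.out = nn.Linear(128, n_classes)                    # one label per time frame

    def forward(self, spect):                  # spect: (batch, 1, freq, time)
        h = self.pool(torch.relu(self.conv(spect)))
        h = h.permute(0, 3, 1, 2).flatten(2)   # -> (batch, time, channels * freq)
        h, _ = self.rnn(h)
        return self.out(h)                     # (batch, time, n_classes)

x = torch.randn(1, 1, 128, 500)                # 500 spectrogram time frames
print(FrameClassifier()(x).shape)              # torch.Size([1, 500, 10])
```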

