Automatic Story Generation: State of the Art and Recent Trends

Author(s):  
Brian Daniel Herrera-González
Alexander Gelbukh
Hiram Calvo

Author(s):  
János Csaba Kun ◽  
Daniel Feszty

Recent trends in vehicle engineering require manufacturers to develop products with highly refined noise, vibration and harshness (NVH) levels. The use of trim elements, which can be described as poroelastic materials (PEM), is key to achieving quiet interiors. Finite Element Methods (FEM) provide established solutions to simple acoustic problems; however, the inclusion of poroelastic materials, especially at higher frequencies, remains a difficult issue to overcome. The goal of this paper is to summarize state-of-the-art solutions to acoustic challenges involving FEM-PEM simulation methods. This involves investigating measurement and simulation campaigns at both the industrial and the fundamental academic research level.
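As a point of reference for the "simple acoustic problems" that FEM handles well, the sketch below assembles and solves a one-dimensional Helmholtz problem for a rigid-walled duct driven by a prescribed pressure at one end. It is a minimal illustration only: the geometry, frequency, and mesh size are assumed values, and the poroelastic (Biot-type) trim models that make the industrial problem hard are deliberately left out.

```python
import numpy as np

# 1D acoustic FEM sketch: Helmholtz equation p'' + k^2 p = 0 on a duct,
# unit pressure prescribed at the left end, rigid wall at the right end.
# All parameter values below are illustrative assumptions.
L = 1.0                          # duct length [m]
n_el = 100                       # number of linear elements
c = 343.0                        # speed of sound [m/s]
k = 2.0 * np.pi * 500.0 / c      # wavenumber at 500 Hz

h = L / n_el
n_nodes = n_el + 1

# Element stiffness and mass matrices for linear (hat) shape functions.
Ke = (1.0 / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])
Me = (h / 6.0) * np.array([[2.0, 1.0], [1.0, 2.0]])

K = np.zeros((n_nodes, n_nodes))
M = np.zeros((n_nodes, n_nodes))
for e in range(n_el):
    dofs = [e, e + 1]
    K[np.ix_(dofs, dofs)] += Ke
    M[np.ix_(dofs, dofs)] += Me

A = K - k**2 * M                 # Helmholtz system matrix
b = np.zeros(n_nodes)

# Dirichlet condition: unit acoustic pressure at node 0.
A[0, :] = 0.0
A[0, 0] = 1.0
b[0] = 1.0

p = np.linalg.solve(A, b)        # nodal pressure along the duct
print(p[:5])
```

Extending such a model with poroelastic trim requires coupled Biot equations for the solid and fluid phases, which is precisely where the computational cost and higher-frequency difficulties reviewed in the paper arise.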


Author(s):  
Jian Guan
Fei Huang
Zhihao Zhao
Xiaoyan Zhu
Minlie Huang

Story generation, namely, generating a reasonable story from a leading context, is an important but challenging task. In spite of their success in modeling fluency and local coherence, existing neural language generation models (e.g., GPT-2) still suffer from repetition, logic conflicts, and a lack of long-range coherence in generated stories. We conjecture that this is because of the difficulty of associating relevant commonsense knowledge, understanding causal relationships, and planning entities and events in the proper temporal order. In this paper, we devise a knowledge-enhanced pretraining model for commonsense story generation, utilizing commonsense knowledge from external knowledge bases to generate reasonable stories. To further capture the causal and temporal dependencies between the sentences of a reasonable story, we employ multi-task learning, which combines the generation objective with a discriminative objective that distinguishes true from fake stories during fine-tuning. Automatic and manual evaluation shows that our model can generate more reasonable stories than state-of-the-art baselines, particularly in terms of logic and global coherence.
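A minimal sketch of the multi-task fine-tuning idea described above, assuming a GPT-2 backbone from the Hugging Face `transformers` library: the causal language-modeling loss is combined with a classification loss that distinguishes true from fake stories. The classification head `clf_head`, the last-token pooling, and the weight `alpha` are illustrative assumptions, and the knowledge-enhanced pretraining on external commonsense knowledge bases is omitted; this is not the authors' released implementation.

```python
import torch
import torch.nn as nn
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Hypothetical discriminative head: true-vs-fake story classification.
clf_head = nn.Linear(model.config.n_embd, 2)

def multitask_loss(story_ids, is_real, alpha=1.0):
    """Combine the generation (LM) loss with a discriminative loss."""
    out = model(input_ids=story_ids, labels=story_ids,
                output_hidden_states=True)
    lm_loss = out.loss                             # causal LM objective
    last_hidden = out.hidden_states[-1][:, -1, :]  # final token's state
    clf_logits = clf_head(last_hidden)
    clf_loss = nn.functional.cross_entropy(clf_logits, is_real)
    return lm_loss + alpha * clf_loss

# Usage: a true story gets label 1; a corrupted one would get label 0.
batch = tokenizer(["Once upon a time, a girl found a lost dog."],
                  return_tensors="pt")
loss = multitask_loss(batch["input_ids"], torch.tensor([1]))
loss.backward()
```

In the setting the abstract describes, the fake stories would be constructed (for example, by perturbing sentence order or substituting events) so that the discriminative objective pushes the model toward causally and temporally coherent continuations.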


Author(s):  
Kalim Deshmukh
Mohammad Talal Houkan
Mariam AlAli AlMaadeed
Kishor Kumar Sadasivuni

Author(s):  
Michele Bevilacqua
Tommaso Pasini
Alessandro Raganato
Roberto Navigli

Word Sense Disambiguation (WSD) aims at making explicit the semantics of a word in context by identifying the most suitable meaning from a predefined sense inventory. Recent breakthroughs in representation learning have fueled intensive WSD research, resulting in considerable performance improvements, breaching the 80% glass ceiling set by the inter-annotator agreement. In this survey, we provide an extensive overview of current advances in WSD, describing the state of the art in terms of i) resources for the task, i.e., sense inventories and reference datasets for training and testing, as well as ii) automatic disambiguation approaches, detailing their peculiarities, strengths and weaknesses. Finally, we highlight the current limitations of the task itself, but also point out recent trends that could help expand the scope and applicability of WSD, setting up new promising directions for the future.
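To make the task definition concrete, the snippet below disambiguates a single word against the WordNet sense inventory using the classic Lesk gloss-overlap heuristic that ships with NLTK. It is only an illustration of the WSD setup (a sense inventory plus a context), not one of the state-of-the-art neural systems surveyed here, which perform far better than this baseline.

```python
import nltk
from nltk.corpus import wordnet as wn
from nltk.wsd import lesk

nltk.download("wordnet", quiet=True)

# Context in which the ambiguous word "bank" must be disambiguated.
sentence = "I deposited the check at the bank before it closed."
context = sentence.lower().split()

# Lesk picks the WordNet synset whose gloss overlaps most with the context.
sense = lesk(context, "bank", pos=wn.NOUN)
print(sense, "->", sense.definition())
```

Swapping this heuristic for a supervised or knowledge-based neural disambiguator, while keeping the same sense inventory and reference datasets, is essentially the experimental setting the survey reviews.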

