Using English for commonsense knowledge

Author(s): Allan Ramsay, Debora Field
2021, pp. 1-19

Author(s): Ting-Ju Chen, Ronak Ranjitkumar Mohanty, Vinayak Krishnamurthy

Abstract: Mind-mapping is useful for externalizing ideas, and the relationships among them, surrounding a central problem. However, balancing the exploration of different aspects of the problem (breadth) against the detailed exploration of each aspect (depth) can be challenging, especially for novices. The goal of this paper is to investigate the notion of "reflection-in-design" through a novel interactive digital mind-mapping workflow that we call "QCue". The idea behind this workflow is to incorporate reflective thinking through two mechanisms: (1) offering suggestions in response to the user's queries (Q) to promote depth exploration, and (2) asking questions (Cue) to promote reflection for breadth exploration. This paper extends our prior work, which focused mainly on the algorithmic development and implementation of the cognitive support mechanism behind QCue, enabled by ConceptNet (a rich graph-based ontology of "commonsense" knowledge). In this extended work, we first present a detailed summary of how QCue facilitates the breadth-depth balance in a mind-mapping task. Second, we present a between-subjects user study comparing QCue with conventional digital mind-mapping (i.e., without our algorithm). Third, we present a new, detailed analysis of how the different cognitive mechanisms provided by QCue were used. We further consolidate our prior quantitative analysis and connect it with our observational analysis. Finally, we discuss in detail the different cognitive mechanisms through which QCue stimulates reflection in design.
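For readers unfamiliar with how ConceptNet-backed suggestions can be produced, the following is a minimal Python sketch of a depth-expansion query against ConceptNet's public web API (api.conceptnet.io). It is an illustration only: the function name suggest_related, the result limit, and the simple first-come ordering are our assumptions, not QCue's actual cueing logic.

```python
import requests

def suggest_related(concept: str, limit: int = 5) -> list[str]:
    """Return up to `limit` English concepts linked to `concept` in ConceptNet."""
    url = "https://api.conceptnet.io/c/en/" + concept.lower().replace(" ", "_")
    edges = requests.get(url, params={"limit": 50}, timeout=10).json().get("edges", [])
    seen, suggestions = set(), []
    for edge in edges:
        # Each edge links a "start" node to an "end" node; collect neighbor
        # labels, skipping the query concept itself and non-English nodes.
        for node in (edge.get("start", {}), edge.get("end", {})):
            label = node.get("label", "")
            if node.get("language") == "en" and label.lower() != concept.lower() and label not in seen:
                seen.add(label)
                suggestions.append(label)
    return suggestions[:limit]

if __name__ == "__main__":
    print(suggest_related("bicycle"))  # e.g., concepts such as 'wheel' or 'ride'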


2016, Vol 16 (5-6), pp. 800-816
Author(s): Daniela Inclezan

Abstract: This paper presents CoreALMlib, an $\mathscr{ALM}$ library of commonsense knowledge about dynamic domains. The library was obtained by translating part of the Component Library (CLib) into the modular action language $\mathscr{ALM}$. CLib consists of general, reusable, and composable commonsense concepts, selected based on a thorough study of ontological and lexical resources. Our translation targets CLib states (i.e., fluents) and actions. The resulting $\mathscr{ALM}$ library contains descriptions of 123 action classes grouped into 43 reusable modules that are organized into a hierarchy. It is available online and is of interest to researchers in the action-language, answer-set programming, and natural language understanding communities. We believe that our translation has two main advantages over its CLib counterpart: (i) it specifies axioms about actions in a more elaboration-tolerant and readable way, and (ii) it can be seamlessly integrated with ASP reasoning algorithms (e.g., for planning and postdiction). In contrast, CLib describes axioms using STRIPS-like operators, and its inference engine can handle neither planning nor postdiction.
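To make advantage (ii) concrete, here is a minimal, self-contained Python sketch of the kind of ASP integration the abstract alludes to, using the clingo Python API to solve a toy planning problem with generic inertia axioms. The toy domain (a single toggle action on a single fluent) and all identifiers are our own illustration; CoreALMlib's actual modules are written in $\mathscr{ALM}$ and are not reproduced here.

```python
import clingo

# Toy ASP planning encoding: one fluent (on) and one action (toggle).
# The generic inertia rules below illustrate elaboration tolerance:
# adding a new fluent requires no new frame axioms.
PROGRAM = """
#const horizon = 2.
step(0..horizon).

% Inertia: fluents keep their value unless an effect overrides it.
holds(F, T+1)  :- holds(F, T),  step(T), T < horizon, not -holds(F, T+1).
-holds(F, T+1) :- -holds(F, T), step(T), T < horizon, not holds(F, T+1).

% Direct effects of toggle.
holds(on, T+1)  :- occurs(toggle, T), -holds(on, T).
-holds(on, T+1) :- occurs(toggle, T), holds(on, T).

% Initial state, action generation, and goal.
-holds(on, 0).
{ occurs(toggle, T) } :- step(T), T < horizon.
:- not holds(on, horizon).

#show occurs/2.
"""

ctl = clingo.Control(["0"])            # "0": enumerate all plans
ctl.add("base", [], PROGRAM)
ctl.ground([("base", [])])
ctl.solve(on_model=lambda model: print("plan:", model))
```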


Author(s): Hai Wan, Jialing Ou, Baoyi Wang, Jianfeng Du, Jeff Z. Pan, ...

Author(s): Jian Guan, Fei Huang, Zhihao Zhao, Xiaoyan Zhu, Minlie Huang

Story generation, namely generating a reasonable story from a leading context, is an important but challenging task. Despite their success in modeling fluency and local coherence, existing neural language generation models (e.g., GPT-2) still suffer from repetition, logic conflicts, and a lack of long-range coherence in generated stories. We conjecture that this stems from the difficulty of associating relevant commonsense knowledge, understanding causal relationships, and planning entities and events in the proper temporal order. In this paper, we devise a knowledge-enhanced pretraining model for commonsense story generation, utilizing commonsense knowledge from external knowledge bases to generate reasonable stories. To further capture the causal and temporal dependencies between the sentences in a reasonable story, we use multi-task learning, which combines the generation objective with a discriminative objective that distinguishes true stories from fake ones during fine-tuning. Automatic and manual evaluations show that our model generates more reasonable stories than state-of-the-art baselines, particularly in terms of logic and global coherence.
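As a rough illustration of such a multi-task objective, the sketch below combines GPT-2's language-modeling loss with an auxiliary classification loss over true vs. fake stories, using the Hugging Face transformers library. The class name, the last-token pooling strategy, and the loss weight alpha are our assumptions; this is a sketch of the general technique, not the authors' released implementation.

```python
import torch
import torch.nn as nn
from transformers import GPT2LMHeadModel

class MultiTaskStoryModel(nn.Module):
    """GPT-2 fine-tuned with an auxiliary true-vs-fake story classifier."""

    def __init__(self, alpha: float = 0.5):
        super().__init__()
        self.lm = GPT2LMHeadModel.from_pretrained("gpt2")
        self.classifier = nn.Linear(self.lm.config.n_embd, 2)
        self.alpha = alpha  # weight of the discriminative loss (assumed value)

    def forward(self, input_ids, attention_mask, labels, story_is_true):
        out = self.lm(input_ids=input_ids,
                      attention_mask=attention_mask,
                      labels=labels,               # shifted LM loss computed internally
                      output_hidden_states=True)
        # Pool the final hidden state of the last non-padding token.
        last_hidden = out.hidden_states[-1]                  # (batch, seq, hidden)
        last_idx = attention_mask.sum(dim=1) - 1             # (batch,)
        pooled = last_hidden[torch.arange(last_hidden.size(0)), last_idx]
        clf_loss = nn.functional.cross_entropy(self.classifier(pooled),
                                               story_is_true)
        # Multi-task objective: generation loss + weighted discriminative loss.
        return out.loss + self.alpha * clf_loss
```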

