Retrieving Causally Related Functions From Natural-Language Text for Biomimetic Design

2014 ◽  
Vol 136 (8) ◽  
Author(s):  
Hyunmin Cheong ◽  
L. H. Shu

Identifying biological analogies is a significant challenge in biomimetic (biologically inspired) design. This paper builds on our previous work on finding biological phenomena in natural-language text. Specifically, a rule-based computational technique is used to identify biological analogies that contain causal relations. Causally related functions describe how one function is enabled by another and support the transfer of functional structure from analogies to design solutions. The causal-relation retrieval method uses patterns of syntactic information that represent causally related functions in individual sentences, and achieved F-measures of 0.73–0.85. In a user study, novice designers found that proportionally more of the matches obtained with the causal-relation retrieval method were relevant to design problems than of those obtained with a single verb-keyword search. In addition, matches obtained with the causal-relation retrieval method increased the likelihood of using functional association to develop design concepts. Finally, the causal-relation retrieval method enables automatic extraction of biological analogies at the sentence level from large quantities of natural-language sources, which could support other approaches to biologically inspired design that require the identification of interesting biological phenomena.
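As a rough illustration of what pattern-based retrieval of causally related functions might look like, the sketch below matches two surface patterns (an infinitive-of-purpose pattern and an explicit causal connective) against single sentences. The paper's actual method operates on syntactic parse information, not raw regexes; the patterns, function names, and example sentences here are illustrative assumptions only.

```python
import re

# Two hypothetical surface patterns suggesting causally related functions.
# The published method works over syntactic parses; plain regexes are a
# deliberately crude stand-in for illustration.
CAUSAL_PATTERNS = [
    # Infinitive of purpose: "X <verb>s ... to <verb> ...",
    # e.g. "Hairs trap air to insulate the body."
    re.compile(r"\b(\w+s)\b[^.]*?\bto\s+(\w+)\b"),
    # Explicit causal connective: "... enables/allows/causes ...",
    # e.g. "Trapping air enables insulation."
    re.compile(r"\b(\w+)\s+(enables|allows|causes)\s+(\w+)"),
]

def find_causal_matches(sentence):
    """Return tuples of words captured by any causal pattern, or []."""
    hits = []
    for pattern in CAUSAL_PATTERNS:
        match = pattern.search(sentence)
        if match:
            hits.append(match.groups())
    return hits

print(find_causal_matches("Hairs trap air to insulate the body."))
# [('Hairs', 'insulate')]
```

A sentence with no causal pattern, such as "The sky is blue.", yields an empty list, which is how such a filter would separate causal from merely descriptive sentences.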

2006 ◽  
Vol 13 (2) ◽  
pp. 165-183 ◽  
Author(s):  
STUART M. SHIEBER ◽  
RANI NELKEN

We address the problem of improving the efficiency of natural language text input under degraded conditions (for instance, on mobile computing devices or by disabled users), by taking advantage of the informational redundancy in natural language. Previous approaches to this problem have been based on the idea of prediction of the text, but these require the user to take overt action to verify or select the system's predictions. We propose taking advantage of the duality between prediction and compression. We allow the user to enter text in compressed form, in particular, using a simple stipulated abbreviation method that reduces characters by 26.4%, yet is simple enough that it can be learned easily and generated relatively fluently. We decode the abbreviated text using a statistical generative model of abbreviation, with a residual word error rate of 3.3%. The chief component of this model is an n-gram language model. Because the system's operation is completely independent from the user's, the overhead from cognitive task switching and attending to the system's actions online is eliminated, opening up the possibility that the compression-based method can achieve text input efficiency improvements where the prediction-based methods have not. We report the results of a user study evaluating this method.
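The decoding step can be caricatured in a few lines: assume a fixed abbreviation rule (dropping non-initial vowels, a stand-in for the paper's stipulated method) and a unigram frequency table standing in for the n-gram language model, then expand each abbreviation to its most probable source word. The vocabulary, counts, and function names below are illustrative assumptions; the real system uses n-gram context rather than isolated word frequencies.

```python
from collections import defaultdict

def abbreviate(word):
    """Toy abbreviation rule: keep the first letter, drop later vowels."""
    return word[0] + "".join(c for c in word[1:] if c not in "aeiou")

# Toy unigram counts standing in for the n-gram language model.
VOCAB = {"pen": 5, "pin": 9, "pan": 2, "input": 7}

# Index candidate expansions by their abbreviated form.
EXPANSIONS = defaultdict(list)
for word, count in VOCAB.items():
    EXPANSIONS[abbreviate(word)].append((count, word))

def decode(abbrev):
    """Expand an abbreviation to its most probable source word."""
    candidates = EXPANSIONS.get(abbrev)
    if not candidates:
        return abbrev  # unknown form: leave as typed
    return max(candidates)[1]

print(decode("pn"))  # "pin": the most frequent of pen/pin/pan
```

Because decoding happens entirely on the system side, the user never has to inspect or select among these candidates, which is the source of the cognitive-overhead savings the abstract describes.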


Author(s):  
Matheus C. Pavan ◽  
Vitor G. Santos ◽  
Alex G. J. Lan ◽  
Joao Martins ◽  
Wesley Ramos Santos ◽  
...  

2012 ◽  
Vol 30 (1) ◽  
pp. 1-34 ◽  
Author(s):  
Antonio Fariña ◽  
Nieves R. Brisaboa ◽  
Gonzalo Navarro ◽  
Francisco Claude ◽  
Ángeles S. Places ◽  
...  

Author(s):  
S.G. Antonov

This article discusses how the word forms of natural-language text can be applied to the problem of mistake correction. The merits and drawbacks of two known approaches to the problem are examined: a deterministic approach and a probability-based one. The principles for constructing the natural-language corpus used in the probabilistic approach are described. The article concludes that the two approaches should be used in combination, depending on the properties of the texts.
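The contrast between the two approaches can be sketched in miniature: a deterministic corrector accepts only exact dictionary word forms, while a probabilistic one generates edit-distance-1 candidates and ranks them by corpus frequency, in the spirit of a noisy-channel corrector. The dictionary, counts, and function names below are illustrative assumptions, not the article's actual implementation.

```python
import string

# Toy word-form counts standing in for a natural-language corpus.
CORPUS_FREQ = {"form": 50, "from": 400, "farm": 30}

def edits1(word):
    """All strings one edit (delete, transpose, substitute, insert) away."""
    letters = string.ascii_lowercase
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [a + b[1:] for a, b in splits if b]
    transposes = [a + b[1] + b[0] + b[2:] for a, b in splits if len(b) > 1]
    substitutes = [a + c + b[1:] for a, b in splits if b for c in letters]
    inserts = [a + c + b for a, b in splits for c in letters]
    return set(deletes + transposes + substitutes + inserts)

def correct_deterministic(word):
    """Accept the word only if the dictionary contains it exactly."""
    return word if word in CORPUS_FREQ else None

def correct_probabilistic(word):
    """Choose the most frequent known word within one edit."""
    candidates = edits1(word) | {word}
    known = [w for w in candidates if w in CORPUS_FREQ]
    return max(known, key=CORPUS_FREQ.get) if known else word

print(correct_probabilistic("frm"))  # "from" outranks "form" and "farm"
```

The deterministic check can only reject "frm"; the probabilistic ranking additionally proposes a repair, which illustrates why the article argues for combining the approaches depending on the text.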


2022 ◽  
Vol 40 (1) ◽  
pp. 1-43
Author(s):  
Ruqing Zhang ◽  
Jiafeng Guo ◽  
Lu Chen ◽  
Yixing Fan ◽  
Xueqi Cheng

Question generation is an important yet challenging problem in Artificial Intelligence (AI), which aims to generate natural and relevant questions from various input formats, e.g., natural language text, structured databases, knowledge bases, and images. In this article, we focus on question generation from natural language text, which has received tremendous interest in recent years due to widespread applications such as data augmentation for question answering systems. Over the past decades, many different question generation models have been proposed, from traditional rule-based methods to advanced neural network-based methods. Given the large variety of research works proposed, we believe it is the right time to summarize the current status, learn from existing methodologies, and gain insights for future development. In contrast to existing reviews, in this survey we try to provide a more comprehensive taxonomy of question generation tasks from three different perspectives, i.e., the types of the input context text, the target answer, and the generated question. We take a deep look into existing models from different dimensions to analyze their underlying ideas, major design principles, and training strategies. We compare these models on benchmark tasks to obtain an empirical understanding of the existing techniques. Moreover, we discuss what is missing in the current literature and what the promising and desired future directions are.
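To make the "traditional rule-based" end of the spectrum concrete, the sketch below applies a single template that turns simple "X is Y." statements into (question, answer) pairs. This is a minimal illustration under assumed rules, not any specific system from the survey; real rule-based generators use many templates driven by parse trees.

```python
import re

def generate_question(sentence):
    """Turn a simple 'X is Y.' statement into a (question, answer) pair.

    Naive template: matches the first ' is ' in the sentence and treats
    everything after it as the answer span.
    """
    m = re.match(r"(.+?) is (.+)\.$", sentence)
    if m is None:
        return None  # sentence does not fit the template
    subject, answer = m.group(1), m.group(2)
    subject = subject[0].lower() + subject[1:]  # de-capitalize sentence start
    return f"What is {subject}?", answer

print(generate_question("The capital of France is Paris."))
# ('What is the capital of France?', 'Paris')
```

A template like this also shows why the survey's taxonomy distinguishes the input context, the target answer, and the generated question: each corresponds to one slot of the rule.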

