A Comprehensive Survey of Cognitive Graphs: Techniques, Applications, Challenges

Author(s):  
Meiling Chen ◽  
Ye Tian ◽  
Zhaorui Wang ◽  
Hong Xu ◽  
Bo Jiang

Realizing third-generation artificial intelligence (AI) requires an evolution from perceptual intelligence to cognitive intelligence, a stage at which knowledge graphs alone may no longer meet practical needs. Based on dual-channel theory, cognitive graphs are constructed and developed by coordinating an implicit extraction module with an explicit reasoning module, and by integrating knowledge graphs, cognitive reasoning, and logical expressions; they have already achieved success in multi-hop question answering. Cognitive graphs are expected to be widely used in advanced AI applications such as large-scale knowledge representation and intelligent response, dramatically promoting the development of AI. This review discusses cognitive graphs systematically and in detail, including basic concepts, generation, theories, and technologies. We also offer a short-term outlook on the development of cognitive intelligence in order to inform and inspire further research.
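
The dual-channel design described above, an implicit (System 1) extraction step that proposes candidate entities while an explicit (System 2) reasoning step operates over a growing graph, can be pictured as a simple loop. The sketch below is only illustrative: `extract_candidates` and `reason_over_graph` are hypothetical placeholders for learned modules (e.g., a reader model and a graph-based reasoner), not taken from any surveyed system.

```python
# Minimal sketch of a dual-channel (System 1 / System 2) multi-hop QA loop.
# extract_candidates and reason_over_graph stand in for learned modules and
# are hypothetical placeholders, not the surveyed systems' implementations.

import networkx as nx

def extract_candidates(question, entity, corpus):
    """System 1: implicitly extract next-hop entity mentions from the
    passage associated with `entity` (toy placeholder logic)."""
    passage = corpus.get(entity, "")
    return [tok for tok in passage.split() if tok.istitle() and tok != entity]

def reason_over_graph(graph, question):
    """System 2: explicitly reason over the cognitive graph; here we simply
    pick the highest-degree node as a stand-in for a learned reasoner."""
    return max(graph.nodes, key=graph.degree) if graph.nodes else None

def answer(question, seed_entities, corpus, max_hops=2):
    graph = nx.DiGraph()
    frontier = list(seed_entities)
    for _ in range(max_hops):
        next_frontier = []
        for entity in frontier:
            graph.add_node(entity)
            for cand in extract_candidates(question, entity, corpus):
                graph.add_edge(entity, cand)   # grow the cognitive graph
                next_frontier.append(cand)
        frontier = next_frontier
    return reason_over_graph(graph, question)

corpus = {"Paris": "Paris is the capital of France",
          "France": "France borders Spain"}
print(answer("Which country borders Spain?", ["Paris"], corpus))  # France
```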

2020 ◽  
Vol 39 (5) ◽  
pp. 7281-7292
Author(s):  
Tongze He ◽  
Caili Guo ◽  
Yunfei Chu ◽  
Yang Yang ◽  
Yanjun Wang

Community Question Answering (CQA) websites have become an important channel for people to acquire knowledge. A key issue in CQA is recommending users with both high expertise and the willingness to answer given questions, i.e., expert recommendation. However, many existing methods treat expert recommendation as a static problem, ignoring that real-world CQA websites are dynamic, with users' interests and expertise changing over time. Although some methods that utilize time information have been proposed, their performance gains are limited because they fail to consider the dynamic change of both user interests and expertise. To address these problems, we propose a deep learning based framework for expert recommendation that exploits user interest and expertise in a dynamic environment. For user interest, we leverage Long Short-Term Memory (LSTM) to model users' short-term interests and thereby capture their dynamic change. For user expertise, we design a user expertise network that leverages feedback on users' historical behavior to estimate their expertise on new questions. We propose two variants of the user expertise network, depending on whether the dynamic property of expertise is considered. Experimental results on a large-scale dataset from a real-world CQA site demonstrate the superior performance of our method.
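
A minimal sketch of how the two components described above could fit together, assuming PyTorch; the layer sizes, the scoring head, and the scalar feedback feature are illustrative assumptions, not the authors' exact architecture.

```python
# Hedged sketch of LSTM-based short-term interest modelling plus a simple
# expertise head for expert recommendation; dimensions are illustrative.

import torch
import torch.nn as nn

class InterestExpertiseScorer(nn.Module):
    def __init__(self, embed_dim=64, hidden_dim=64):
        super().__init__()
        # LSTM over the embeddings of a user's recently answered questions
        # captures the dynamic, short-term component of user interest.
        self.interest_lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        # Small feed-forward head stands in for a "user expertise network",
        # fed here with an aggregated feedback feature (e.g., past votes).
        self.expertise_mlp = nn.Sequential(
            nn.Linear(hidden_dim + embed_dim + 1, 32), nn.ReLU(), nn.Linear(32, 1)
        )

    def forward(self, history_emb, question_emb, feedback):
        # history_emb: (batch, seq_len, embed_dim) recent question embeddings
        # question_emb: (batch, embed_dim) embedding of the new question
        # feedback:     (batch, 1) scalar feedback feature
        _, (h_n, _) = self.interest_lstm(history_emb)
        interest = h_n[-1]                        # (batch, hidden_dim)
        features = torch.cat([interest, question_emb, feedback], dim=-1)
        return self.expertise_mlp(features)       # relevance score per user

model = InterestExpertiseScorer()
score = model(torch.randn(4, 10, 64), torch.randn(4, 64), torch.rand(4, 1))
print(score.shape)   # torch.Size([4, 1])
```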


2020 ◽  
Vol 34 (05) ◽  
pp. 7367-7374
Author(s):  
Khalid Al-Khatib ◽  
Yufang Hou ◽  
Henning Wachsmuth ◽  
Charles Jochim ◽  
Francesca Bonin ◽  
...  

This paper studies the end-to-end construction of an argumentation knowledge graph that is intended to support argument synthesis, argumentative question answering, and fake news detection, among other tasks. The study is motivated by the proven effectiveness of knowledge graphs for interpretable and controllable text generation and exploratory search. What is original in our work is a proposed model of the knowledge encapsulated in arguments. Based on this model, we build a new corpus that comprises about 16k manual annotations of 4740 claims with instances of the model's elements, and we develop an end-to-end framework that automatically identifies all modeled types of instances. The experimental results show the potential of the framework for building a web-based argumentation graph that is of high quality and large scale.
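
To make the construction step concrete, the sketch below assembles annotated claims into a small graph with networkx; the element types used here ("concept", "relation", "stance") are illustrative assumptions rather than the paper's actual knowledge model.

```python
# Illustrative sketch of turning annotated claims into an argumentation
# knowledge graph; the annotation schema shown here is an assumption.

import networkx as nx

annotated_claims = [
    {"claim": "School uniforms reduce bullying.",
     "concepts": ["school uniforms", "bullying"],
     "relation": "decreases", "stance": "pro"},
    {"claim": "School uniforms limit self-expression.",
     "concepts": ["school uniforms", "self-expression"],
     "relation": "limits", "stance": "con"},
]

graph = nx.MultiDiGraph()
for ann in annotated_claims:
    src, dst = ann["concepts"]
    graph.add_edge(src, dst, relation=ann["relation"],
                   stance=ann["stance"], claim=ann["claim"])

for src, dst, data in graph.edges(data=True):
    print(f'{src} --{data["relation"]}/{data["stance"]}--> {dst}')
```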


Author(s):  
Peng Wang ◽  
Qi Wu ◽  
Chunhua Shen ◽  
Anthony Dick ◽  
Anton van den Hengel

We describe a method for visual question answering which is capable of reasoning about an image on the basis of information extracted from a large-scale knowledge base. The method not only answers natural language questions using concepts not contained in the image, but can explain the reasoning by which it developed its answer. It is capable of answering far more complex questions than the predominant long short-term memory-based approach, and outperforms it significantly in testing. We also provide a dataset and a protocol by which to evaluate general visual question answering methods.
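
A heavily simplified sketch of the general pattern: detected image concepts are linked to external facts, and the supporting fact doubles as the explanation. The detector, the tiny fact table, and the matching rule are all hypothetical stand-ins, not the authors' method.

```python
# Toy illustration of knowledge-base-augmented visual question answering
# with an explanation; all components below are hypothetical placeholders.

KB_FACTS = {
    ("umbrella", "UsedFor"): "protection from rain",
    ("dog", "IsA"): "domesticated animal",
}

def detect_concepts(image):
    # Placeholder for a visual detector; returns concept labels.
    return ["umbrella", "person"]

def answer_with_explanation(image, question):
    for concept in detect_concepts(image):
        if concept in question.lower():
            for (subj, rel), obj in KB_FACTS.items():
                if subj == concept:
                    explanation = f"{subj} {rel} {obj} (from the knowledge base)"
                    return obj, explanation
    return None, "no supporting fact found"

answer, why = answer_with_explanation("street.jpg", "What is the umbrella for?")
print(answer, "|", why)
```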


Information ◽  
2021 ◽  
Vol 12 (7) ◽  
pp. 271
Author(s):  
Mohammad Yani ◽  
Adila Alfa Krisnadhi

Simple questions are the most common type of question used to evaluate knowledge graph question answering (KGQA). A simple question is one whose answer can be captured by a factoid statement involving a single relation or predicate. KGQA systems aim to automatically answer natural language questions (NLQs) over knowledge graphs (KGs). A variety of approaches have been studied in this area, yet a comprehensive study addressing simple questions from all aspects has been lacking. In this paper, we present a comprehensive survey of answering simple questions: we classify available techniques and compare their advantages and drawbacks in order to provide better insight into existing issues and recommendations to guide future work.
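
As a concrete illustration of what a simple question is, the query below answers a single-relation question with one triple pattern; the DBpedia-style IRIs and the SPARQLWrapper call are an example setup, not tied to any particular system from the survey.

```python
# Illustration of a "simple question": one entity, one relation, so the
# answer is captured by a single triple pattern (DBpedia used as example KG).

from SPARQLWrapper import SPARQLWrapper, JSON

QUERY = """
PREFIX dbr: <http://dbpedia.org/resource/>
PREFIX dbo: <http://dbpedia.org/ontology/>
SELECT ?answer WHERE {
  # "Who wrote The Old Man and the Sea?" -> (entity, relation, ?answer)
  dbr:The_Old_Man_and_the_Sea dbo:author ?answer .
}
"""

endpoint = SPARQLWrapper("https://dbpedia.org/sparql")
endpoint.setQuery(QUERY)
endpoint.setReturnFormat(JSON)
for row in endpoint.query().convert()["results"]["bindings"]:
    print(row["answer"]["value"])  # e.g. http://dbpedia.org/resource/Ernest_Hemingway
```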


2020 ◽  
Author(s):  
Shiqi Liang ◽  
Kurt Stockinger ◽  
Tarcisio Mendes de Farias ◽  
Maria Anisimova ◽  
Manuel Gil

Knowledge graphs are a powerful concept for querying large amounts of data. They are typically enormous and are often not easily accessible to end-users because they require specialized knowledge of query languages such as SPARQL. Moreover, end-users need a deep understanding of the structure of the underlying data models, which are often based on the Resource Description Framework (RDF). This drawback has led to the development of Question Answering (QA) systems that enable end-users to express their information needs in natural language. While existing systems simplify user access, there is still room for improvement in their accuracy. In this paper we propose a new QA system for translating natural language questions into SPARQL queries. The key idea is to break the translation process into five smaller, more manageable sub-tasks and to use ensemble machine learning methods as well as Tree-LSTM-based neural network models to automatically learn to translate a natural language question into a SPARQL query. The performance of the proposed QA system is empirically evaluated on two well-known benchmarks: the 7th Question Answering over Linked Data Challenge (QALD-7) and the Large-Scale Complex Question Answering Dataset (LC-QuAD). Experimental results show that our QA system outperforms state-of-the-art systems by 15% on QALD-7 and by 48% on LC-QuAD. In addition, we make our source code available.
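
The sub-task decomposition can be pictured as a small pipeline. The five-way split below (question type classification, entity linking, relation prediction, query pattern ranking, query assembly) and its toy rule-based steps are illustrative assumptions only; in the paper each stage corresponds to learned ensemble or Tree-LSTM models.

```python
# Illustrative pipeline that breaks NL-to-SPARQL translation into small
# sub-tasks; the concrete split and toy implementations are assumptions.

def classify_question_type(question):          # sub-task 1
    return "ASK" if question.lower().startswith("is ") else "SELECT"

def link_entities(question):                   # sub-task 2
    return {"The Old Man and the Sea": "dbr:The_Old_Man_and_the_Sea"}

def predict_relations(question):               # sub-task 3
    return ["dbo:author"]

def rank_query_patterns(entities, relations):  # sub-task 4
    return [(e, r, "?answer") for e in entities.values() for r in relations]

def assemble_query(qtype, triple):             # sub-task 5
    s, p, o = triple
    body = f"WHERE {{ {s} {p} {o} . }}"
    return f"ASK {body}" if qtype == "ASK" else f"SELECT {o} {body}"

def translate(question):
    qtype = classify_question_type(question)
    triples = rank_query_patterns(link_entities(question),
                                  predict_relations(question))
    return assemble_query(qtype, triples[0])

print(translate("Who wrote The Old Man and the Sea?"))
# SELECT ?answer WHERE { dbr:The_Old_Man_and_the_Sea dbo:author ?answer . }
```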


2021 ◽  
Author(s):  
Renzo Arturo Alva Principe ◽  
Andrea Maurino ◽  
Matteo Palmonari ◽  
Michele Ciavotta ◽  
Blerina Spahiu

Processing large-scale and highly interconnected Knowledge Graphs (KGs) is becoming crucial for many applications such as recommender systems and question answering. Profiling approaches have been proposed to summarize large KGs with the aim of producing concise and meaningful representations so that they can be easily managed. However, constructing profiles and calculating statistics such as cardinality descriptors or inferences is resource-expensive. In this paper, we present ABSTAT-HD, a highly distributed profiling tool that supports users in profiling and understanding big and complex knowledge graphs. We demonstrate the impact of the new architecture of ABSTAT-HD by presenting a set of experiments that show its scalability with respect to three dimensions of the data to be processed: size, complexity and workload. The experiments show that our profiling framework provides informative and concise profiles and can process and manage very large KGs.
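
A toy, single-machine illustration of the kind of statistics such a profile aggregates (predicate frequencies and per-subject cardinality descriptors); ABSTAT-HD computes summaries like these in a distributed setting, and the triples below are made up for the example.

```python
# Toy illustration of KG profile statistics: predicate frequencies and
# min/max object cardinality per subject. Data and logic are illustrative.

from collections import Counter, defaultdict

triples = [
    ("alice", "worksFor", "acme"),
    ("alice", "knows", "bob"),
    ("bob",   "knows", "alice"),
    ("bob",   "knows", "carol"),
]

predicate_freq = Counter(p for _, p, _ in triples)

# Cardinality descriptor: distinct objects each subject has per predicate.
per_subject = defaultdict(set)
for s, p, o in triples:
    per_subject[(s, p)].add(o)

for pred in predicate_freq:
    counts = [len(objs) for (s, p), objs in per_subject.items() if p == pred]
    print(pred, "freq:", predicate_freq[pred],
          "min/max objects per subject:", min(counts), "/", max(counts))
```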


2005 ◽  
Vol 27 (1) ◽  
pp. 95
Author(s):  
Herbert Brauer

This survey research is the first large-scale study to provide a description of short-term overseas study programs implemented by private junior high schools in Tokyo. In addition to fundamental quantitative parameters, this comprehensive survey returned descriptive data from 84% of the private junior high schools in the Tokyo region with programs in 2001 and 2002. The descriptive data included types and details of activities undertaken, program and activity objectives, integration between the overseas study programs and school curricula, follow-up activities, and program evaluation. The survey revealed several innovative programs and activities implemented by these schools and identified areas that might benefit from further research.


2021 ◽  
Vol 8 (1) ◽  
Author(s):  
Shiqi Liang ◽  
Kurt Stockinger ◽  
Tarcisio Mendes de Farias ◽  
Maria Anisimova ◽  
Manuel Gil

Knowledge graphs are a powerful concept for querying large amounts of data. They are typically enormous and are often not easily accessible to end-users because they require specialized knowledge of query languages such as SPARQL. Moreover, end-users need a deep understanding of the structure of the underlying data models, which are often based on the Resource Description Framework (RDF). This drawback has led to the development of Question Answering (QA) systems that enable end-users to express their information needs in natural language. While existing systems simplify user access, there is still room for improvement in their accuracy. In this paper we propose a new QA system for translating natural language questions into SPARQL queries. The key idea is to break the translation process into five smaller, more manageable sub-tasks and to use ensemble machine learning methods as well as Tree-LSTM-based neural network models to automatically learn to translate a natural language question into a SPARQL query. The performance of the proposed QA system is empirically evaluated on two well-known benchmarks: the 7th Question Answering over Linked Data Challenge (QALD-7) and the Large-Scale Complex Question Answering Dataset (LC-QuAD). Experimental results show that our QA system outperforms state-of-the-art systems by 15% on QALD-7 and by 48% on LC-QuAD. In addition, we make our source code available.

