natural language question
Recently Published Documents


TOTAL DOCUMENTS: 81 (FIVE YEARS: 33)

H-INDEX: 11 (FIVE YEARS: 3)

2021 ◽  
Vol 12 (1) ◽  
pp. 369
Author(s):  
Da Ma ◽  
Xingyu Chen ◽  
Ruisheng Cao ◽  
Zhi Chen ◽  
Lu Chen ◽  
...  

Generating natural language descriptions for structured representations (e.g., a graph) is an important yet challenging task. In this work, we focus on SQL-to-text, a task that maps a SQL query into the corresponding natural language question. Previous work represents SQL as a sparse graph and utilizes a graph-to-sequence model to generate questions, where each node can only communicate with k-hop nodes. Such a model degenerates when adapted to more complex SQL queries due to its inability to capture long-term dependencies and its lack of SQL-specific relations. To tackle this problem, we propose a relation-aware graph transformer (RGT) that considers both the SQL structure and various relations simultaneously. Specifically, an abstract SQL syntax tree is constructed for each SQL query to provide the underlying relations. We also customize self-attention and cross-attention strategies to encode the relations in the SQL tree. Experiments on the WikiSQL and Spider benchmarks demonstrate that our approach yields improvements over strong baselines.
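The core idea of relation-aware attention can be sketched as follows: each query-key score receives an additive bias determined by the pair's relation in the SQL syntax tree (parent, sibling, same clause, and so on). The scalar features and all names below are illustrative simplifications, not the authors' implementation.

```python
import math

def relation_aware_attention(queries, keys, values, rel_bias):
    """One relation-aware self-attention head with scalar features.

    Unlike plain attention, each query-key pair (i, j) also receives a
    bias rel_bias[i][j] encoding their relation in the SQL syntax tree.
    """
    n = len(queries)
    out = []
    for i in range(n):
        # content score plus relation bias for every key
        scores = [queries[i] * keys[j] + rel_bias[i][j] for j in range(n)]
        m = max(scores)                       # stabilize the softmax
        exps = [math.exp(s - m) for s in scores]
        z = sum(exps)
        weights = [e / z for e in exps]
        out.append(sum(w * v for w, v in zip(weights, values)))
    return out

# Two nodes with identical content; a large relation bias pulls node 0's
# attention almost entirely onto node 1.
result = relation_aware_attention([1.0, 1.0], [1.0, 1.0], [0.0, 10.0],
                                  [[0.0, 5.0], [0.0, 0.0]])
```

With a zero bias row (node 1), attention falls back to plain content matching, which here splits evenly between the two values.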


2021 ◽  
Author(s):  
Taaniya Arora ◽  
Neha Prabhugaonkar ◽  
Ganesh Subramanian ◽  
Kathy Leake

Business users across enterprises today rely on reports and dashboards created by IT organizations to better understand the dynamics of their business and get insights into the data. In many cases, these users are underserved and do not possess the technical skillset to query the data source for the information they need. Users need to access information in the most natural way possible. AI-based business analysts are going to change the future of business analytics and business intelligence by providing a natural language interface between the user and the data. This interface can understand ambiguous questions from users, infer their intent, and convert them into database queries. One of the important elements of an AI-based business analyst is interpreting a natural language question. This also requires identifying the key business entities within the question, and the relationships between them, in order to generate insights. The Artificial Named Entity Classifier (ANEC) takes a huge step forward in that direction by not only identifying but also classifying entities with the help of the sequence-recognition prowess of BiLSTMs.
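A sequence tagger such as the one described typically emits BIO labels per token; a final decoding step turns those labels into typed business entities. The following is a generic sketch of that decoding step with made-up entity types, not the ANEC code.

```python
def bio_to_entities(tokens, tags):
    """Decode per-token BIO tags into (entity_text, entity_type) spans."""
    entities, current, etype = [], [], None
    for token, tag in zip(tokens, tags):
        if tag.startswith("B-"):              # a new entity begins
            if current:
                entities.append((" ".join(current), etype))
            current, etype = [token], tag[2:]
        elif tag.startswith("I-") and current and tag[2:] == etype:
            current.append(token)             # continue the open entity
        else:                                 # "O" or an inconsistent tag
            if current:
                entities.append((" ".join(current), etype))
            current, etype = [], None
    if current:                               # flush a trailing entity
        entities.append((" ".join(current), etype))
    return entities

entities = bio_to_entities(
    ["show", "sales", "for", "North", "America", "in", "Q3"],
    ["O", "B-METRIC", "O", "B-REGION", "I-REGION", "O", "B-PERIOD"])
```

The extracted spans ("sales" as a metric, "North America" as a region, "Q3" as a period) are exactly the key business entities and relationships a downstream query generator would consume.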


2021 ◽  
Author(s):  
Jing Sheng Lei ◽  
Shi Chao Ye ◽  
Sheng Ying Yang ◽  
Wei Song ◽  
Guan Mian Liang

The main purpose of an intelligent question answering system based on a knowledge graph is to accurately match a natural language question against the triple information in the knowledge graph. Entity recognition is one of the key steps: a wrong entity recognition result causes errors to propagate, so that the correct answer can never be reached. In recent years, the lexical-enhancement structure, which combines character nodes with word nodes, has been proved an effective method for Chinese named entity recognition. To address the above problems, this paper proposes a vocabulary-enhanced entity recognition algorithm (KGFLAT), based on FLAT, for intelligent question answering systems. The method uses a new dictionary that incorporates the entity information of the knowledge graph and, for the shallower network model, removes the residual connection and uses layer normalization only. The system is evaluated on the data provided by the NLPCC 2018 Task 7 KBQA task. The experimental results show that this method can effectively solve the entity recognition task in an intelligent question answering system and improves on the FLAT model, with an average F1 value of 94.72.
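Lexical enhancement in lattice-style models such as FLAT starts by matching a dictionary against the character sequence and recording each matched word with its head and tail character positions. A minimal sketch of that matching step follows; the tiny dictionary stands in for one enriched with knowledge-graph entity names and is illustrative only.

```python
def match_lexicon(chars, lexicon):
    """Return every lexicon word found in a character sequence together
    with its (head, tail) character positions, as a lattice model consumes."""
    spans = []
    n = len(chars)
    for i in range(n):
        for j in range(i + 1, n + 1):
            word = "".join(chars[i:j])
            if word in lexicon:
                spans.append((word, i, j - 1))
    return spans

# "北京大学" (Peking University) yields overlapping matches that the
# lattice keeps side by side instead of forcing a single segmentation.
lexicon = {"北京", "北京大学", "大学"}
spans = match_lexicon(list("北京大学"), lexicon)
```

Keeping all overlapping matches is what lets the downstream model resolve the ambiguity with attention rather than committing to one segmentation early.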


2021 ◽  
Vol 2021 ◽  
pp. 1-9
Author(s):  
Xu Zhang ◽  
DeZhi Han ◽  
Chin-Chen Chang

Visual question answering (VQA) is natural language question answering over visual images. A VQA model must produce answers to specific questions based on its understanding of the image, and the most important requirement is understanding the relationship between the image and the language. Therefore, this paper proposes a new model, Representation of Dense Multimodality Fusion Encoder Based on Transformer (RDMMFET for short), which can learn the knowledge relating vision and language. The RDMMFET model consists of three parts: a dense language encoder, an image encoder, and a multimodality fusion encoder. In addition, we designed three types of pretraining tasks: a masked language model, a masked image model, and a multimodality fusion task. These pretraining tasks help the model learn the fine-grained alignment between text and image regions. Simulation results on the VQA v2.0 data set show that the RDMMFET model outperforms the previous model. Finally, we conducted detailed ablation studies on the RDMMFET model and provide attention visualization results, which show that the RDMMFET model can significantly improve the effect of VQA.
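The masked-language-model pretraining task mentioned above corrupts a fraction of the input tokens and asks the model to reconstruct them. The sketch below shows only that corruption step in its generic form (mask rate, token names, and seed are illustrative, not the RDMMFET configuration).

```python
import random

def mask_tokens(tokens, mask_prob=0.3, mask_token="[MASK]", seed=0):
    """Replace a random fraction of tokens with a mask symbol and return
    both the corrupted sequence and the reconstruction targets."""
    rng = random.Random(seed)          # seeded for reproducibility
    masked, targets = [], {}
    for i, tok in enumerate(tokens):
        if rng.random() < mask_prob:
            masked.append(mask_token)
            targets[i] = tok           # the model must predict this token
        else:
            masked.append(tok)
    return masked, targets

masked, targets = mask_tokens(["a", "cat", "sits", "on", "the", "mat"] * 4)
```

The masked-image-model task is the visual analogue: region features are zeroed out and the model predicts them from the surrounding regions and the text.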


Author(s):  
Xinmeng Li ◽  
Mamoun Alazab ◽  
Qian Li ◽  
Keping Yu ◽  
Quanjun Yin

Knowledge graph question answering is an important technology in intelligent human–robot interaction, which aims at automatically answering a human natural language question over a given knowledge graph. For multi-relation questions of higher variety and complexity, the tokens of the question have different priorities for triple selection in the reasoning steps. Most existing models take the question as a whole and ignore the priority information in it. To solve this problem, we propose a question-aware memory network for multi-hop question answering, named QA2MN, which dynamically updates the attention over the question during the reasoning process. In addition, we incorporate graph context information into the knowledge graph embedding model to increase its ability to represent entities and relations. We use it to initialize the QA2MN model and fine-tune it during training. We evaluate QA2MN on PathQuestion and WorldCup2014, two representative datasets for complex multi-hop question answering. The results demonstrate that QA2MN achieves state-of-the-art Hits@1 accuracy on both datasets, which validates the effectiveness of our model.
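Multi-hop question answering amounts to following a chain of relations through the triple store, one hop per reasoning step; a memory network like the one described learns to perform this traversal softly with attention. A hard-traversal toy version, with illustrative triples in the spirit of the WorldCup2014 dataset, looks like this:

```python
def multi_hop_answer(kg, start_entity, relation_path):
    """Follow a sequence of relations through (head, relation, tail)
    triples, one hop per reasoning step, and return the final entities."""
    frontier = {start_entity}
    for relation in relation_path:
        frontier = {t for (h, r, t) in kg if h in frontier and r == relation}
    return frontier

kg = [
    ("Germany", "champion_of", "WorldCup2014"),
    ("WorldCup2014", "held_in", "Brazil"),
    ("Brazil", "capital", "Brasilia"),
]
# "Which country hosted the tournament Germany won?" is a two-hop question.
answer = multi_hop_answer(kg, "Germany", ["champion_of", "held_in"])
```

What QA2MN adds on top of this hard traversal is deciding, at each hop, which question tokens should drive the relation choice, rather than fixing the relation path in advance.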


Sensors ◽  
2021 ◽  
Vol 21 (9) ◽  
pp. 2911
Author(s):  
JaeYun Lee ◽  
Incheol Kim

Visual commonsense reasoning is an intelligent task performed to decide the most appropriate answer to a question while providing the rationale or reason for the answer when an image, a natural language question, and candidate responses are given. For effective visual commonsense reasoning, both the knowledge acquisition problem and the multimodal alignment problem need to be solved. Therefore, we propose a novel Vision–Language–Knowledge Co-embedding (ViLaKC) model that extracts knowledge graphs relevant to the question from an external knowledge base, ConceptNet, and uses them together with the input image to answer the question. The proposed model uses a pretrained vision–language–knowledge embedding module, which co-embeds multimodal data including images, natural language texts, and knowledge graphs into a single feature vector. To reflect the structural information of the knowledge graph, the proposed model uses the graph convolutional neural network layer to embed the knowledge graph first and then uses multi-head self-attention layers to co-embed it with the image and natural language question. The effectiveness and performance of the proposed model are experimentally validated using the VCR v1.0 benchmark dataset.
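The graph convolutional layer used to embed the knowledge graph propagates each node's features to its neighbors through a normalized adjacency matrix. A minimal scalar-feature sketch of one such propagation step follows; it illustrates the general GCN update, not the ViLaKC layer's exact parameterization.

```python
def gcn_layer(adj, features, weight):
    """One graph-convolution step: h_i' = ReLU(sum_j a_ij * h_j * w),
    where a is the row-normalized adjacency with self-loops added."""
    n = len(adj)
    # add self-loops so each node keeps part of its own feature
    a = [[adj[i][j] + (1 if i == j else 0) for j in range(n)]
         for i in range(n)]
    a = [[v / sum(row) for v in row] for row in a]   # row-normalize
    out = []
    for i in range(n):
        s = sum(a[i][j] * features[j] * weight for j in range(n))
        out.append(max(0.0, s))                      # ReLU
    return out

# Three concept nodes in a chain 0 - 1 - 2; the middle node averages in
# information from both endpoints.
adj = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
h = gcn_layer(adj, [1.0, 0.0, 1.0], weight=1.0)
```

Stacking such layers is what injects the graph's structural information into the node embeddings before the multi-head self-attention layers co-embed them with the image and question.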


Author(s):  
Tasmia Tasmia ◽  
Md Sultan Al Nahian ◽  
Brent Harrison

In this work, we propose a deep neural architecture with an attention mechanism that utilizes region-based image features, the natural language question asked, and semantic knowledge extracted from the regions of an image to produce open-ended answers for questions asked in a visual question answering (VQA) task. Combining region-based features with region-based textual information about the image enables a model to respond to questions more accurately, and potentially to do so with less training data. We evaluate our proposed architecture on a VQA task against a strong baseline and show that our method achieves excellent results on this task.
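Region-based attention of this kind scores each image region against the question representation, normalizes the scores with a softmax, and pools the region features by those weights. The toy dimensions below are illustrative, not the paper's architecture.

```python
import math

def attend_regions(question_vec, region_feats):
    """Question-guided attention pooling over image regions: dot-product
    scores, softmax weights, weighted sum of region features."""
    scores = [sum(q * r for q, r in zip(question_vec, feat))
              for feat in region_feats]
    m = max(scores)                          # stabilize the softmax
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    weights = [e / z for e in exps]
    dim = len(region_feats[0])
    pooled = [sum(w * feat[d] for w, feat in zip(weights, region_feats))
              for d in range(dim)]
    return weights, pooled

# Region 1 aligns with the question vector, so it dominates the pooled
# feature that the answer decoder receives.
weights, pooled = attend_regions([1.0, 0.0],
                                 [[0.0, 1.0], [5.0, 0.0], [0.0, 2.0]])
```

The same mechanism extends naturally to the region-level textual information: its features are simply concatenated to (or scored alongside) the visual ones before pooling.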


2021 ◽  
pp. 1-18
Author(s):  
Ziyu Liu ◽  
Ying Li ◽  
Lixia Zhao ◽  
Pengtao Guo

An intelligent inquiry system for metro electromechanical equipment faults based on a knowledge graph can effectively consolidate various semi-structured failure messages and provide users with quick, accurate, high-quality intelligent inquiry services, such as researching equipment fault causes and delivering solutions, which are highly relevant to this research field and its application areas. In this research, the recorded data related to metro electromechanical equipment failures were collected, consolidated, and converted so that these failures could be stored in our databases. On this basis, the various functions of the intelligent inquiry system have been implemented, including natural language question analysis, question-and-answer design based on the Cypher query language, Naive Bayesian classification based on characteristic core words, and the user interaction interface. The experimental results show that the system can effectively solve problems related to fault handling in metro mechanical and electrical equipment, thus improving the efficiency of equipment fault maintenance.
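Cypher-based question answering of this kind typically maps a classified question intent to a parameterized Cypher template over the fault knowledge graph. The node labels, relationship types, and intents below are hypothetical, not the actual schema of the system described above.

```python
def build_cypher(parsed_question):
    """Map a classified intent plus an extracted fault entity to a
    parameterized Cypher query (hypothetical schema)."""
    templates = {
        "cause": ("MATCH (f:Fault {name: $name})-[:CAUSED_BY]->(c:Cause) "
                  "RETURN c.name"),
        "solution": ("MATCH (f:Fault {name: $name})-[:SOLVED_BY]->(s:Solution) "
                     "RETURN s.name"),
    }
    intent = parsed_question["intent"]   # e.g. from the Naive Bayes classifier
    return templates[intent], {"name": parsed_question["fault"]}

query, params = build_cypher({"intent": "cause", "fault": "door jam"})
```

Passing the fault name as a query parameter rather than splicing it into the string keeps user input separated from the query text, which is the standard practice when executing Cypher against a graph database.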

