Vietnamese Legal Question Answering with combined features and deep learning

Author(s):  
Luu Hoai Linh ◽  
Nguyen Hai Long ◽  
Nguyen Hai Yen ◽  
Thi-Hai-Yen Vuong
2021 ◽

Author(s):  
Truong-Thinh Tieu ◽  
Chieu-Nguyen Chau ◽  
Nguyen-Minh-Hoang Bui ◽  
Truong-Son Nguyen ◽  
Le-Minh Nguyen

2021 ◽  
Vol 47 (05) ◽  
Author(s):  
NGUYỄN CHÍ HIẾU

Knowledge graphs have been applied in many fields in recent years, such as search engines, semantic analysis, and question answering. However, building knowledge graphs still faces many obstacles, including methodologies, data, and tools. This paper introduces a novel methodology for building a knowledge graph from heterogeneous documents, using natural language processing and deep learning techniques. The resulting knowledge graph can be used in question answering systems and information retrieval, especially in the computing domain.
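
The abstract gives no implementation details; as a rough illustration of the kind of pipeline it outlines (extract entities and relations from raw documents, then assemble them into a graph), the sketch below uses spaCy and networkx as stand-ins for the unspecified NLP and deep-learning components. Every model name, heuristic, and example document here is an assumption made for illustration, not taken from the paper.

```python
# Minimal sketch of a document-to-knowledge-graph pipeline.
# Assumptions (not from the paper): spaCy's small English model stands in for
# the NLP / deep-learning extraction step, and within-sentence co-occurrence of
# named entities stands in for relation extraction; edges are labelled with the
# first verb found in the sentence.
import itertools

import networkx as nx
import spacy

nlp = spacy.load("en_core_web_sm")  # replace with a Vietnamese / domain model as needed


def sentence_triples(sent):
    """Yield (head, relation, tail) triples from one parsed sentence."""
    ents = list(sent.ents)
    verbs = [t.lemma_ for t in sent if t.pos_ == "VERB"]
    relation = verbs[0] if verbs else "related_to"
    for head, tail in itertools.combinations(ents, 2):
        yield head.text, relation, tail.text


def build_graph(documents):
    """Build a knowledge graph (a networkx MultiDiGraph) from raw documents."""
    graph = nx.MultiDiGraph()
    for doc in nlp.pipe(documents):
        for sent in doc.sents:
            for head, rel, tail in sentence_triples(sent):
                graph.add_edge(head, tail, relation=rel)
    return graph


if __name__ == "__main__":
    docs = [
        "Alan Turing proposed the Turing test in 1950.",
        "The Turing test evaluates a machine's ability to exhibit intelligent behaviour.",
    ]
    kg = build_graph(docs)
    for head, tail, data in kg.edges(data=True):
        print(f"({head}) -[{data['relation']}]-> ({tail})")
```

A QA system on top of such a graph could then match entities mentioned in a question against graph nodes and return the connected facts.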


Author(s):  
Thara S. ◽  
Sampath E. ◽  
Venkata Sitarami Reddy B. ◽  
Vidhya Sai Bhagavan M. ◽  
Phanindra Reddy M.

IEEE Access ◽  
2020 ◽  
Vol 8 ◽  
pp. 94341-94356
Author(s):  
Zhen Huang ◽  
Shiyi Xu ◽  
Minghao Hu ◽  
Xinyi Wang ◽  
Jinyan Qiu ◽  
...  

Author(s):  
Sebastian Blank ◽  
Florian Wilhelm ◽  
Hans-Peter Zorn ◽  
Achim Rettinger

Almost all of today’s knowledge is stored in databases and can therefore only be accessed with the help of domain-specific query languages, strongly limiting the number of people who can access the data. In this work, we demonstrate an end-to-end trainable question answering (QA) system that allows a user to query an external NoSQL database using natural language. A major challenge for such a system is the non-differentiability of database operations, which we overcome by applying policy-based reinforcement learning. We evaluate our approach on Facebook’s bAbI Movie Dialog dataset and achieve a competitive score of 84.2% compared to several benchmark models. We conclude that our approach excels in real-world scenarios where knowledge resides in external databases and intermediate labels are too costly to gather for non-end-to-end trainable QA systems.
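
The core trick the abstract describes, training through a non-differentiable database call with policy-based reinforcement learning, can be illustrated with a small REINFORCE loop. The toy document store, bag-of-words encoder, and single-step "choose a field" action space below are illustrative assumptions, not the authors' architecture or the bAbI Movie Dialog setup.

```python
# Minimal sketch of policy-gradient training around a non-differentiable
# database call. The toy "NoSQL" store, encoder, and action space are
# illustrative assumptions, not the paper's actual system.
import torch
import torch.nn as nn

# Toy document store: each record maps field names to values.
STORE = [
    {"title": "Blade Runner", "director": "Ridley Scott", "year": "1982"},
    {"title": "Alien", "director": "Ridley Scott", "year": "1979"},
]
FIELDS = ["title", "director", "year"]  # action space: which field to return
VOCAB = sorted({w for r in STORE for v in r.values() for w in v.lower().split()}
               | {"who", "directed", "when", "was", "released", "what", "is"})
W2I = {w: i for i, w in enumerate(VOCAB)}


def encode(question: str) -> torch.Tensor:
    """Bag-of-words question encoding."""
    x = torch.zeros(len(VOCAB))
    for w in question.lower().split():
        if w in W2I:
            x[W2I[w]] = 1.0
    return x


def query(record_idx: int, field_idx: int) -> str:
    """Non-differentiable database operation."""
    return STORE[record_idx][FIELDS[field_idx]]


policy = nn.Sequential(nn.Linear(len(VOCAB), 32), nn.ReLU(), nn.Linear(32, len(FIELDS)))
opt = torch.optim.Adam(policy.parameters(), lr=1e-2)

# (question, record it refers to, gold answer) training triples.
DATA = [
    ("who directed blade runner", 0, "Ridley Scott"),
    ("when was alien released", 1, "1979"),
]

for epoch in range(200):
    for question, rec, gold in DATA:
        logits = policy(encode(question))
        dist = torch.distributions.Categorical(logits=logits)
        action = dist.sample()                   # pick a field to query
        answer = query(rec, action.item())       # non-differentiable step
        reward = 1.0 if answer == gold else 0.0  # terminal reward only
        # Vanilla REINFORCE objective (no baseline, for simplicity).
        loss = -dist.log_prob(action) * reward
        opt.zero_grad()
        loss.backward()
        opt.step()
```

With only a terminal reward, the gradient flows through the policy's log-probability rather than through the database call itself, which is what makes end-to-end training possible despite the non-differentiable query step.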


Author(s):  
Hedi Ben-younes ◽  
Remi Cadene ◽  
Nicolas Thome ◽  
Matthieu Cord

Multimodal representation learning is gaining more and more interest within the deep learning community. While bilinear models provide an interesting framework for finding subtle combinations of modalities, their number of parameters grows quadratically with the input dimensions, making their practical implementation within classical deep learning pipelines challenging. In this paper, we introduce BLOCK, a new multimodal fusion scheme based on the block-superdiagonal tensor decomposition. It leverages the notion of block-term ranks, which generalizes both the concepts of rank and mode ranks for tensors, already used for multimodal fusion. It makes it possible to define new ways of optimizing the tradeoff between the expressiveness and complexity of the fusion model, and is able to represent very fine interactions between modalities while maintaining powerful mono-modal representations. We demonstrate the practical interest of our fusion model by using BLOCK for two challenging tasks: Visual Question Answering (VQA) and Visual Relationship Detection (VRD), where we design end-to-end learnable architectures for representing relevant interactions between modalities. Through extensive experiments, we show that BLOCK compares favorably with state-of-the-art multimodal fusion models for both VQA and VRD tasks. Our code is available at https://github.com/Cadene/block.bootstrap.pytorch.
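
The released implementation is linked in the abstract; as a rough, simplified sketch of the block-superdiagonal idea (each modality is projected into chunks, and only matching chunks interact through small bilinear maps before being merged), one might write something like the following in PyTorch. The chunk count, ranks, and dimensions are illustrative assumptions, not the paper's settings.

```python
# Minimal sketch of a block-term-style bilinear fusion layer in the spirit of
# BLOCK. Chunk count, projection sizes, and ranks are illustrative assumptions;
# the authors' released implementation is at
# https://github.com/Cadene/block.bootstrap.pytorch.
import torch
import torch.nn as nn


class BlockFusion(nn.Module):
    def __init__(self, dim_x, dim_y, dim_out, n_chunks=4, chunk_rank=8):
        super().__init__()
        self.chunk_rank = chunk_rank
        # Project each modality into n_chunks blocks of size chunk_rank.
        self.proj_x = nn.Linear(dim_x, n_chunks * chunk_rank)
        self.proj_y = nn.Linear(dim_y, n_chunks * chunk_rank)
        # Block-superdiagonal structure: block i of x interacts only with
        # block i of y, through a small full bilinear map.
        self.bilinears = nn.ModuleList(
            nn.Bilinear(chunk_rank, chunk_rank, chunk_rank) for _ in range(n_chunks)
        )
        self.proj_out = nn.Linear(n_chunks * chunk_rank, dim_out)

    def forward(self, x, y):
        # Split each projection into per-block chunks.
        xs = self.proj_x(x).split(self.chunk_rank, dim=-1)
        ys = self.proj_y(y).split(self.chunk_rank, dim=-1)
        # Fuse each matching chunk pair, then merge and project.
        fused = torch.cat(
            [bil(xc, yc) for bil, xc, yc in zip(self.bilinears, xs, ys)], dim=-1
        )
        return self.proj_out(fused)


if __name__ == "__main__":
    fusion = BlockFusion(dim_x=2048, dim_y=300, dim_out=512)
    image_feat = torch.randn(16, 2048)    # e.g. CNN image features
    question_feat = torch.randn(16, 300)  # e.g. question embedding
    print(fusion(image_feat, question_feat).shape)  # torch.Size([16, 512])
```

Because each of the small bilinear maps acts only on one chunk pair, the interaction parameters grow with the number and size of blocks rather than quadratically with the full input dimensions, which is the expressiveness-versus-complexity tradeoff the abstract refers to.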

