Relation-Aware Graph Transformer for SQL-to-Text Generation

2021, Vol 12 (1), pp. 369
Author(s): Da Ma, Xingyu Chen, Ruisheng Cao, Zhi Chen, Lu Chen, ...

Generating natural language descriptions for a structured representation (e.g., a graph) is an important yet challenging task. In this work, we focus on SQL-to-text, a task that maps a SQL query into the corresponding natural language question. Previous work represents SQL as a sparse graph and utilizes a graph-to-sequence model to generate questions, where each node can only communicate with k-hop nodes. Such a model degenerates when adapted to more complex SQL queries due to its inability to capture long-distance dependencies and its lack of SQL-specific relations. To tackle this problem, we propose a relation-aware graph transformer (RGT) that considers both the SQL structure and various relations simultaneously. Specifically, an abstract SQL syntax tree is constructed for each SQL query to provide the underlying relations. We also customize self-attention and cross-attention strategies to encode the relations in the SQL tree. Experiments on the WikiSQL and Spider benchmarks demonstrate that our approach yields improvements over strong baselines.
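To make the relation-encoding step concrete, the sketch below (PyTorch, not the authors' released code) shows a common formulation of relation-aware self-attention used by models of this family: each pair of nodes (i, j) in the SQL syntax tree is assigned a relation type, and a learned embedding of that relation biases both the attention scores and the aggregated values. Whether RGT uses exactly this form is not stated in the abstract; the relation inventory (self, parent-of, other) and all tensor names are illustrative assumptions.

import math
import torch
import torch.nn as nn


class RelationAwareSelfAttention(nn.Module):
    def __init__(self, d_model: int, num_relations: int):
        super().__init__()
        self.d_model = d_model
        self.w_q = nn.Linear(d_model, d_model)
        self.w_k = nn.Linear(d_model, d_model)
        self.w_v = nn.Linear(d_model, d_model)
        # One learned embedding per relation type for keys and one for values.
        self.rel_k = nn.Embedding(num_relations, d_model)
        self.rel_v = nn.Embedding(num_relations, d_model)

    def forward(self, x: torch.Tensor, relations: torch.Tensor) -> torch.Tensor:
        # x:         (n_nodes, d_model) node representations of the SQL tree
        # relations: (n_nodes, n_nodes) integer relation type between node pairs
        q, k, v = self.w_q(x), self.w_k(x), self.w_v(x)
        r_k = self.rel_k(relations)   # (n, n, d)
        r_v = self.rel_v(relations)   # (n, n, d)

        # Attention logits: q_i · (k_j + r_ij^K) / sqrt(d)
        scores = (q.unsqueeze(1) * (k.unsqueeze(0) + r_k)).sum(-1)
        scores = scores / math.sqrt(self.d_model)
        alpha = scores.softmax(dim=-1)            # (n, n)

        # Output: sum_j alpha_ij * (v_j + r_ij^V)
        return (alpha.unsqueeze(-1) * (v.unsqueeze(0) + r_v)).sum(dim=1)


# Usage sketch: 4 SQL-tree nodes, 3 hypothetical relation types
# (0 = self, 1 = parent-of, 2 = other).
layer = RelationAwareSelfAttention(d_model=64, num_relations=3)
nodes = torch.randn(4, 64)
rel = torch.randint(0, 3, (4, 4))
out = layer(nodes, rel)   # (4, 64)

The key design choice this illustrates is that pairwise relations enter the attention computation directly, so two nodes that are far apart in the graph can still attend to each other as long as a relation is defined between them.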

2007, Vol 13 (2), pp. 185-189
Author(s): Robert Dale

“Powerset Hype to Boiling Point”, said a February headline on TechCrunch. In the last installment of this column, I asked whether 2007 would be the year of question-answering. My query was occasioned by a number of new attempts at natural language question-answering that were being promoted in the marketplace as the next advance upon search, and particularly by the buzz around the stealth-mode natural language search company Powerset. That buzz continued with a major news item in the first quarter of this year: in February, Xerox PARC and Powerset struck a much-anticipated deal whereby Powerset won exclusive rights to use PARC's natural language technology, as announced in a VentureBeat posting. Following the scoop, other news sources drew the battle lines with titles like “Can natural language search bring down Google?”, “Xerox vs. Google?”, and “Powerset and Xerox PARC team up to beat Google”. An April posting on Barron's Online noted that an analyst at Global Equities Research had cited Powerset in his downgrading of Google from Buy to Neutral. And all this on the basis of a product which, at the time of writing, very few people have actually seen. Indications are that the search engine is expected to go live by the end of the year, so we have a few more months to wait to see whether this really is a Google-killer. Meanwhile, another question that remains unanswered is what happened to the Powerset engineer who seemed less sure about the technology's capabilities: see the segment at the end of D7TV's PartyCrasher video from the Powerset launch party. For a more confident appraisal of natural language search, check out the podcast of Barney Pell, CEO of Powerset, giving a lecture at the University of California–Berkeley.


2010, Vol 23 (2-3), pp. 241-265
Author(s): Ulrich Furbach, Ingo Glöckner, Björn Pelzer

Author(s): Boris Galitsky

Whatever knowledge a database contains, one of the essential questions in its design and usability is how its users will interact with it. If these users are human agents, the most natural way to query a database is in natural language (Gazdar, 1999; Popescu, Etzioni, & Kautz, 2003; Sabourin, 1994). Natural language question answering (NL Q/A), wherein questions are posed in plain language, may be considered the most universal but not always the best (i.e., fastest) way to provide information access to a database. One should be aware that other approaches to data access, such as visualization, menus and multiple choice, and FAQ lists, were successfully employed long before NL Q/A systems came into play. In the following, I discuss situations in which a particular information access approach is optimal.


2001, Vol 7 (4), pp. 275-300
Author(s): L. Hirschman, R. Gaizauskas

As users struggle to navigate the wealth of on-line information now available, the need for automated question answering systems becomes more urgent. We need systems that allow a user to ask a question in everyday language and receive an answer quickly and succinctly, with sufficient context to validate the answer. Current search engines can return ranked lists of documents, but they do not deliver answers to the user. Question answering systems address this problem. Recent successes have been reported in a series of question-answering evaluations that started in 1999 as part of the Text Retrieval Conference (TREC). The best systems are now able to answer more than two thirds of factual questions in this evaluation.
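As a rough illustration of the gap the abstract describes, here is a toy retrieve-then-extract pipeline in the spirit of TREC-era factoid QA systems: retrieval alone returns ranked documents, while the extra extraction step returns a short answer string. The mini-corpus, the example question, and the crude year-matching heuristic are purely illustrative assumptions, not taken from the article.

import re

corpus = [
    "The first Text Retrieval Conference (TREC) was held in 1992.",
    "The TREC question-answering track started in 1999.",
    "Search engines return ranked lists of documents.",
]


def rank_documents(question, docs):
    """Step 1: what a search engine delivers, documents ordered by word overlap."""
    q_words = set(re.findall(r"\w+", question.lower()))
    def score(doc):
        return len(q_words & set(re.findall(r"\w+", doc.lower())))
    return sorted(docs, key=score, reverse=True)


def extract_answer(question, docs):
    """Step 2: what a QA system adds, a short answer instead of whole documents."""
    ranked = rank_documents(question, docs)
    if question.lower().startswith("when"):
        for doc in ranked:
            match = re.search(r"\b(1[89]\d{2}|20\d{2})\b", doc)  # look for a year
            if match:
                return match.group(0)
    return None


question = "When did the TREC question-answering track start?"
print(rank_documents(question, corpus))   # ranked documents
print(extract_answer(question, corpus))   # -> "1999"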

