Learning natural language interfaces with neural models

AI Matters ◽  
2021 ◽  
Vol 7 (2) ◽  
pp. 14-17
Author(s):  
Li Dong

Language is the primary and most natural means of communication for humans. The learning curve of interacting with various services (e.g., digital assistants and smart appliances) would be greatly reduced if we could talk to machines using human language. In most cases, however, computers can only interpret and execute formal languages.

AI Matters ◽  
2021 ◽  
Vol 7 (2) ◽  
pp. 3-4
Author(s):  
Iolanda Leite ◽  
Anuj Karpatne

Welcome to the second issue of this year's AI Matters Newsletter. We start with a report on upcoming SIGAI events by Dilini Samarasinghe and conference reports by Louise Dennis, our conference coordination officer. In our regular Education column, Carolyn Rosé discusses the role of AI in education in a post-pandemic reality. We then bring you our regular Policy column, where Larry Medsker covers interesting and timely discussions on AI policy, for example whether governments should play a role in reducing algorithmic bias. This issue closes with an article contribution from Li Dong, one of the runners-up in the latest AAAI/SIGAI dissertation award, on the use of neural models to build natural language interfaces.


1978 ◽  
Author(s):  
Howard Lee Morgan ◽  
Edgar F. Codd ◽  
William A. Martin ◽  
Larry Harris ◽  
Daniel Sagalowicz ◽  
...  

2021 ◽  
Vol 12 (5) ◽  
Author(s):  
Alexandre F. Novello ◽  
Marco A. Casanova

A Natural Language Interface to Database (NLIDB) refers to a database interface that translates a question asked in natural language into a structured query. Aggregation questions express aggregation functions, such as count, sum, average, minimum and maximum, and optionally a group by clause and a having clause. NLIDBs deliver good results for standard questions but usually do not deal with aggregation questions. The main contribution of this article is a generic module, called GLAMORISE (GeneraL Aggregation MOdule using a RelatIonal databaSE), that extends NLIDBs to cope with aggregation questions. GLAMORISE covers aggregations with ambiguities, timescale differences, aggregations in multiple attributes, the use of superlative adjectives, basic recognition of measurement units, and aggregations in attributes with compound names.
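To make the notion of an aggregation question concrete, the following sketch (a hypothetical illustration using sqlite3, not code from GLAMORISE) shows the kind of structured query an NLIDB would need to generate for a question like "What is the average salary per department with more than one employee?", combining an aggregation function (AVG), a GROUP BY clause, and a HAVING clause:

```python
import sqlite3

# Toy schema and data standing in for a database behind an NLIDB.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE employee (name TEXT, dept TEXT, salary REAL);
    INSERT INTO employee VALUES
        ('Ana', 'Sales', 50000),
        ('Bob', 'Sales', 70000),
        ('Eve', 'IT',    90000);
""")

# Target SQL for the natural language question:
# aggregation function (AVG) + GROUP BY + HAVING.
sql = """
    SELECT dept, AVG(salary)
    FROM employee
    GROUP BY dept
    HAVING COUNT(*) > 1
"""
rows = conn.execute(sql).fetchall()
print(rows)  # [('Sales', 60000.0)] -- 'IT' is filtered out by HAVING
```

A standard NLIDB would typically handle the SELECT/WHERE part of such a question; the aggregation function and the GROUP BY/HAVING clauses are exactly the parts a module like GLAMORISE is meant to supply.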


2019 ◽  
Author(s):  
Edward Gibson ◽  
Richard Futrell ◽  
Steven T. Piantadosi ◽  
Isabelle Dautriche ◽  
Kyle Mahowald ◽  
...  

Cognitive science applies diverse tools and perspectives to study human language. Recently, an exciting body of work has examined linguistic phenomena through the lens of efficiency in usage: what otherwise puzzling features of language find explanation in formal accounts of how language might be optimized for communication and learning? Here, we review studies that deploy formal tools from probability and information theory to understand how and why language works the way that it does, focusing on phenomena ranging from the lexicon through syntax. These studies show how a pervasive pressure for efficiency guides the forms of natural language and indicate that a rich future for language research lies in connecting linguistics to cognitive psychology and mathematical theories of communication and inference.
