Query Understanding
Recently Published Documents

TOTAL DOCUMENTS: 33 (five years: 13)
H-INDEX: 4 (five years: 0)

10.2196/30704 · 2021 · Vol 23 (11) · pp. e30704
Author(s):  
Timothy W Bickmore ◽  
Stefán Ólafsson ◽  
Teresa K O'Leary

Background: Prior studies have demonstrated the safety risks when patients and consumers use conversational assistants such as Apple’s Siri and Amazon’s Alexa to obtain medical information.

Objective: The aim of this study is to evaluate two approaches to reducing the likelihood that patients or consumers will act on potentially harmful medical information received from conversational assistants.

Methods: Participants were given medical problems to pose to conversational assistants that had previously been demonstrated to result in potentially harmful recommendations. Each conversational assistant’s response was randomly varied to include either a correct or an incorrect paraphrase of the query, and either to include or to omit a disclaimer message telling participants not to act on the advice without first talking to a physician. Participants were then asked what actions they would take based on the interaction, along with the likelihood of taking each action. The reported actions were recorded and analyzed, and participants were interviewed at the end of each interaction.

Results: A total of 32 participants completed the study, each interacting with 4 conversational assistants. Participants were on average aged 42.44 (SD 14.08) years, 53% (17/32) were women, and 66% (21/32) were college educated. Participants who heard a correct paraphrase of their query were significantly more likely to state that they would follow the medical advice provided by the conversational assistant (χ²₁=3.1; P=.04). Participants who heard a disclaimer message were significantly more likely to say that they would contact a physician or health professional before acting on the medical advice received (χ²₁=43.5; P=.001).

Conclusions: Designers of conversational systems should consider incorporating both disclaimers and feedback on query understanding in response to user queries for medical advice. Unconstrained natural language input should not be used in systems designed specifically to provide medical advice.
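As an illustrative sketch only, not code from the study, the two mitigations evaluated here, paraphrase feedback and a physician-referral disclaimer, could be combined in a response wrapper like the following; the function name and message text are invented for illustration:

```python
# Illustrative sketch: wrap a conversational assistant's answer with the two
# mitigations the study evaluated -- feedback on query understanding (a
# paraphrase of the understood query) and a physician-referral disclaimer.
# All names and wording here are assumptions, not the study's implementation.

DISCLAIMER = ("This is not medical advice. Please talk to a physician "
              "before acting on this information.")

def build_response(understood_query: str, answer: str) -> str:
    """Prepend a paraphrase confirmation and append a disclaimer."""
    paraphrase = f'I understood your question as: "{understood_query}".'
    return f"{paraphrase}\n{answer}\n{DISCLAIMER}"

print(build_response("Can I take ibuprofen with warfarin?",
                     "Combining these drugs can increase bleeding risk."))
```

The paraphrase lets the user catch a misheard query before trusting the answer, while the disclaimer nudges them toward professional confirmation, mirroring the two experimental conditions.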


2021 · pp. 016555152110221
Author(s):  
Usashi Chatterjee

Dietary practices are governed by a mix of ethnographic aspects, such as social, cultural and environmental factors, and these aspects need to be taken into consideration when analysing food-related queries. Queries are usually ambiguous, so it is essential to understand, analyse and refine them for better search and retrieval. This work focuses on identifying the explicit, implicit and hidden facets of a query, taking the context, the culinary domain, into consideration. This article proposes a technique for query understanding, analysis and refinement based on a domain-specific knowledge model. Queries are conceptualised by mapping query terms to concepts defined in the model, which allows the query to be understood from a semantic point of view and the meaning of its terms and their interrelatedness to be determined. The knowledge model acts as a backbone providing the context for query understanding, analysis and refinement, and it outperforms other models such as Schema.org, the BBC Food Ontology and the Recipe Ontology.
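A minimal sketch of concept-based query understanding along these lines, with a toy dictionary standing in for the article's knowledge model; every concept name and entry below is invented for illustration, not taken from the actual model:

```python
# Toy culinary "knowledge model": each query term maps to a concept with a
# few facets. These entries are illustrative assumptions only.
CONCEPT_MODEL = {
    "biryani": {"concept": "RiceDish", "cuisine": "Indian"},
    "vegan":   {"concept": "DietaryPractice",
                "excludes": ["dairy", "meat", "egg"]},
    "diwali":  {"concept": "Festival", "cuisine": "Indian"},
}

def understand_query(query: str) -> dict:
    """Map each recognized query term to its concept, exposing the
    implicit facets (cuisine, dietary restrictions) behind the words."""
    facets = {}
    for term in query.lower().split():
        if term in CONCEPT_MODEL:
            facets[term] = CONCEPT_MODEL[term]
    return facets

print(understand_query("vegan biryani"))
```

Mapping "vegan" to a DietaryPractice concept surfaces the hidden facet (exclude dairy, meat, egg) that a plain keyword match would miss, which is the kind of refinement the article argues for.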




Author(s):  
Wei Zhu ◽  
Yuan Ni ◽  
Xiaoling Wang ◽  
Guotong Xie

Author(s):  
Federico Tomasi ◽  
Rishabh Mehrotra ◽  
Aasish Pappu ◽  
Judith Bütepage ◽  
Brian Brost ◽  
...  

2020 · Vol 10 (3) · pp. 1127
Author(s):  
Yun Li ◽  
Yongyao Jiang ◽  
Justin C. Goldstein ◽  
Lewis J. Mcgibbney ◽  
Chaowei Yang

One longstanding complication in Earth data discovery is understanding a user’s search intent from the input query. Most geospatial data portals use keyword-based matching to search data. Little attention has focused on the spatial and temporal information in a query or on understanding the query with an ontology, and no research in the geospatial domain has investigated user queries in a systematic way. Here, we propose a query understanding framework that fills this gap by better interpreting a user’s search intent for Earth data search engines, adopting knowledge mined from metadata and user query logs. The proposed query understanding tool contains four components: spatial and temporal parsing; concept recognition; Named Entity Recognition (NER); and semantic query expansion. Spatial and temporal parsing detects the spatial bounding box and temporal range in a query. Concept recognition isolates clauses from free text and provides the search engine with phrases instead of a list of words. Named entity recognition detects entities in the query, informing the search engine which entities to query. The semantic query expansion module expands the original query by adding synonyms and acronyms, discovered from Web usage data and metadata, to phrases in the query. The four modules interact to parse a user’s query from multiple perspectives, with the goal of understanding the user’s search intent. As a proof of concept, the framework is applied to oceanographic data discovery, and it is demonstrated that the proposed framework accurately captures a user’s intent.
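The four-module pipeline described above can be sketched roughly as follows; the parsing rules, vocabularies, and function names are placeholder assumptions for illustration, not the authors' implementation:

```python
import re

# Placeholder vocabularies standing in for knowledge mined from metadata
# and query logs (assumptions for illustration only).
ACRONYMS = {"sst": "sea surface temperature"}
ENTITIES = {"modis", "pacific"}
PHRASES = {"sea surface temperature": "SeaSurfaceTemperature"}

def parse_spatiotemporal(query: str) -> dict:
    """Spatial/temporal parsing stand-in: pull out 4-digit years."""
    return {"years": re.findall(r"\b(?:19|20)\d{2}\b", query)}

def recognize_entities(query: str) -> list:
    """NER stand-in: match tokens against a known-entity list."""
    return [t for t in query.lower().split() if t in ENTITIES]

def expand_query(query: str) -> str:
    """Semantic expansion: replace acronyms with their long forms."""
    return " ".join(ACRONYMS.get(t, t) for t in query.lower().split())

def recognize_concepts(text: str) -> list:
    """Concept recognition stand-in: match known multi-word phrases."""
    return [c for phrase, c in PHRASES.items() if phrase in text.lower()]

def understand(query: str) -> dict:
    """Run the four modules together on one query."""
    expanded = expand_query(query)
    return {
        "spatiotemporal": parse_spatiotemporal(query),
        "entities": recognize_entities(query),
        "concepts": recognize_concepts(expanded),
        "expanded": expanded,
    }

print(understand("MODIS SST Pacific 2019"))
```

Running concept recognition on the expanded query (after "SST" becomes "sea surface temperature") illustrates why the paper has the modules interact rather than run in isolation.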

