query structure
Recently Published Documents

TOTAL DOCUMENTS: 23 (five years: 6)
H-INDEX: 3 (five years: 0)

Author(s):  
Lucas Woltmann ◽  
Claudio Hartmann ◽  
Dirk Habich ◽  
Wolfgang Lehner

Cardinality estimation is a fundamental task in database query processing and optimization. As shown in recent papers, machine learning (ML)-based approaches may deliver more accurate cardinality estimates than traditional approaches. However, a large number of training queries has to be executed during the model training phase to learn a data-dependent ML model, which makes training very time-consuming. Many of these training or example queries use the same base data, have the same query structure, and differ only in their selection predicates. To speed up the model training phase, our core idea is to determine a predicate-independent pre-aggregation of the base data and to execute the example queries over this pre-aggregated data. Based on this idea, we present a specific aggregate-based training phase for ML-based cardinality estimation approaches in this paper. As we show with different workloads in our evaluation, we achieve an average speedup of 90 with our aggregate-based training phase and thus outperform indexes.
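The core idea lends itself to a short sketch. Below is a minimal, hypothetical Python illustration (the function names and the pandas-based aggregate are our assumptions, not the paper's implementation): the base data is grouped once over the columns that appear in predicates, and the true cardinality label of each training query is then read from the much smaller aggregate instead of re-scanning the base data.

```python
# Hypothetical sketch of the aggregate-based training idea: all training queries
# share the same base table and query structure and differ only in their predicates
# on a fixed set of columns, so we pre-aggregate once and obtain each query's true
# cardinality label from the aggregate instead of the base data.
import pandas as pd

def build_preaggregation(base: pd.DataFrame, predicate_cols: list[str]) -> pd.DataFrame:
    """Group the base data by all columns that appear in predicates and count rows."""
    return base.groupby(predicate_cols, as_index=False).size().rename(columns={"size": "cnt"})

def true_cardinality(agg: pd.DataFrame, predicates: dict) -> int:
    """Evaluate equality predicates against the pre-aggregate and sum the counts."""
    mask = pd.Series(True, index=agg.index)
    for col, val in predicates.items():
        mask &= agg[col] == val
    return int(agg.loc[mask, "cnt"].sum())

# Example: cardinality labels for two training queries over the same structure.
base = pd.DataFrame({"a": [1, 1, 2, 2, 2], "b": ["x", "y", "x", "x", "y"]})
agg = build_preaggregation(base, ["a", "b"])
print(true_cardinality(agg, {"a": 2, "b": "x"}))  # 2
print(true_cardinality(agg, {"a": 1}))            # 2
```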


2021 ◽  
Author(s):  
Feroz Alam

As part of achieving specific targets, business decision making involves processing and analyzing large volumes of data, which causes enterprise databases to grow day by day. Considering the size and complexity of the databases used in today's enterprises, it is a major challenge for enterprises to re-engineer their applications to handle such large amounts of data. Compared to traditional relational databases, non-relational NoSQL databases are better suited in terms of dynamic provisioning, horizontal scaling, performance, distributed architecture, and developer agility. Based on the concept of Object Relational Mapping (ORM) and traditional ETL data migration techniques, this thesis proposes a methodology for migrating data from an RDBMS to NoSQL. The performance of the proposed solution is evaluated through a comparative analysis of RDBMS and NoSQL implementations in terms of query performance, query structure, and development agility.
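The migration pipeline described here can be sketched briefly. The following hedged Python example (the customer/orders schema and the MongoDB target are assumptions for illustration, not the thesis's actual setup) shows an ORM-style mapping of relational rows into nested documents, followed by a bulk load into a document store.

```python
# A minimal, hypothetical ETL sketch in the spirit of the thesis: rows from a
# relational source are mapped (ORM-style) to nested documents and bulk-loaded
# into a document store. Table/column names and the MongoDB target are assumptions.
import sqlite3

def extract_transform(conn: sqlite3.Connection) -> list[dict]:
    """Join each customer with its orders and emit one nested document per customer."""
    conn.row_factory = sqlite3.Row
    docs = []
    for c in conn.execute("SELECT id, name FROM customer"):
        orders = conn.execute(
            "SELECT id, total FROM orders WHERE customer_id = ?", (c["id"],)
        ).fetchall()
        docs.append({
            "_id": c["id"],
            "name": c["name"],
            # Embed the 1:N relation instead of keeping a foreign key.
            "orders": [{"id": o["id"], "total": o["total"]} for o in orders],
        })
    return docs

def load(docs: list[dict]) -> None:
    """Bulk-insert the documents; requires a running MongoDB instance."""
    from pymongo import MongoClient
    MongoClient("mongodb://localhost:27017")["shop"]["customers"].insert_many(docs)
```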


2021 ◽  
Vol 9 ◽  
Author(s):  
Peter Vrolijk ◽  
Lori Summa ◽  
Benjamin Ayton ◽  
Paraskevi Nomikou ◽  
Andre Hüpers ◽  
...  

Natural seeps occur at the seafloor as loci of fluid flow where the flux of chemical compounds into the ocean supports unique biologic communities and provides access to proxy samples of deep subsurface processes. Cold seeps accomplish this with minimal heat flux. While individual expertise is applied to locate seeps, such knowledge is nowhere consolidated in the literature, nor are there explicit approaches for identifying specific seep types to address discrete scientific questions. Moreover, autonomous exploration for seeps lacks any clear framework for efficient seep identification and classification. To address these shortcomings, we developed a Ladder of Seeps applied within new decision-assistance algorithms (Spock) to assist in seep exploration on the Costa Rica margin during the R/V Falkor 181210 cruise in December 2018. This Ladder of Seeps [derived from analogous astrobiology criteria proposed by Neveu et al. (2018)] was used to help guide human and computer decision processes for ROV mission planning. The Ladder of Seeps provides a methodical query structure to identify what information is required to confirm that a seep either: 1) supports seafloor life under extreme conditions, 2) supports that community with active seepage (possible fluid sample), or 3) taps fluids that reflect deep, subsurface geologic processes; the top rung may be modified to address other scientific questions. Moreover, this framework allows us to identify higher-likelihood seep targets based on existing incomplete or easily acquired data, including MBES (multi-beam echo sounder) water column data. The Ladder of Seeps framework is based on information about the instruments used to collect seep information (e.g., are seeps detectable by the instrument with little chance of false positives?) and contextual criteria about the environment in which the data are collected (e.g., temporal variability of seep flux). Finally, the assembled data are considered in light of a Last-Resort interpretation, which is only satisfied once all other plausible data interpretations are excluded by observation. When coupled with decision-making algorithms that incorporate expert opinion with data acquired during the Costa Rica experiment, the Ladder of Seeps proved useful for identifying seeps with deep-sourced fluids, as evidenced by geochemical analyses performed following the expedition.
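As a rough illustration only, the rung-based logic of such a framework could be sketched as follows; the rung names and evidence fields below are our assumptions, not the published Ladder of Seeps criteria.

```python
# Purely illustrative sketch of a "ladder" style decision structure: each rung names
# the evidence needed to confirm it, and a candidate seep site is ranked by the
# highest rung its available observations satisfy. Rung definitions and field names
# are assumptions made for this sketch.
LADDER = [
    ("supports seafloor life", {"chemosynthetic_community_observed"}),
    ("active seepage",         {"chemosynthetic_community_observed", "bubble_plume_in_mbes"}),
    ("deep-sourced fluids",    {"chemosynthetic_community_observed", "bubble_plume_in_mbes",
                                "fluid_sample_geochemistry"}),
]

def highest_rung(observations: set[str]) -> str:
    """Return the highest rung whose required evidence is fully present."""
    reached = "no seep confirmed"
    for name, required in LADDER:
        if required <= observations:
            reached = name
    return reached

print(highest_rung({"chemosynthetic_community_observed", "bubble_plume_in_mbes"}))
# -> "active seepage"
```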


2020 ◽  
Vol 1 (3) ◽  
pp. 1092-1108
Author(s):  
Caroline S. Armitage ◽  
Marta Lorenz ◽  
Susanne Mikki

Many research and higher education institutions are interested in their contribution to achieving the United Nations' Sustainable Development Goals (SDGs). Commercial services from Elsevier and Times Higher Education are addressing this by developing bibliometric queries for measuring SDG-related publications and SDG university rankings. However, such services should be evaluated carefully before use due to the challenging nature of interpreting the SDGs, delimiting relevance, and building queries. The aim of this bibliometric study was to build independent queries to find scholarly publications related to SDG 1, SDG 2, SDG 3, SDG 7, SDG 13, and SDG 14 using a consistent method based on SDG targets and indicators (the Bergen approach), and to compare the sets of publications retrieved by the Bergen and Elsevier approaches. Our results show that the choice of approach made a large difference, with little overlap in publications retrieved by the two approaches. We further demonstrate that different approaches can alter the resulting country rankings. The choice of search terms, how they are combined, and the query structure all play a role, reflecting differing interpretations of the SDGs and viewpoints on relevance. Our results suggest that SDG rankings and tools should be used with caution at their current stage of development.
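For readers unfamiliar with this kind of comparison, the following small Python sketch (with made-up publication IDs and countries; the actual Bergen and Elsevier queries are not reproduced) shows how the overlap between two retrieved publication sets and the resulting country rankings can be computed.

```python
# Illustrative comparison of two query approaches for one SDG: compute the overlap
# between the retrieved publication sets and show how a country ranking can differ.
# All data here is invented for the sketch.
from collections import Counter

def overlap(a: set[str], b: set[str]) -> float:
    """Share of the union retrieved by both approaches (Jaccard index)."""
    return len(a & b) / len(a | b) if a | b else 0.0

def country_ranking(pubs: set[str], country_of: dict[str, str]) -> list[tuple[str, int]]:
    """Rank countries by the number of retrieved publications affiliated with them."""
    return Counter(country_of[p] for p in sorted(pubs) if p in country_of).most_common()

bergen   = {"p1", "p2", "p3", "p5"}
elsevier = {"p2", "p4", "p5", "p6"}
country_of = {"p1": "NO", "p2": "NO", "p3": "DE", "p4": "DE", "p5": "US", "p6": "DE"}

print(overlap(bergen, elsevier))              # 0.333... (little overlap)
print(country_ranking(bergen, country_of))    # [('NO', 2), ('DE', 1), ('US', 1)]
print(country_ranking(elsevier, country_of))  # [('DE', 2), ('NO', 1), ('US', 1)]
```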


Author(s):  
Yongrui Chen ◽  
Huiying Li ◽  
Yuncheng Hua ◽  
Guilin Qi

Formal query building is an important part of complex question answering over knowledge bases. It aims to build correct executable queries for questions. Recent methods try to rank candidate queries generated by a state-transition strategy. However, this candidate generation strategy ignores the structure of queries, resulting in a considerable number of noisy queries. In this paper, we propose a new formal query building approach that consists of two stages. In the first stage, we predict the query structure of the question and leverage the structure to constrain the generation of the candidate queries. We propose a novel graph generation framework to handle the structure prediction task and design an encoder-decoder model to predict the argument of the predetermined operation in each generative step. In the second stage, we follow the previous methods to rank the candidate queries. The experimental results show that our formal query building approach outperforms existing methods on complex questions while staying competitive on simple questions.
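A minimal, hypothetical sketch of the structure-constrained candidate generation in the first stage is given below; the skeleton encoding and the toy relation/entity vocabularies are our simplifications, not the authors' model.

```python
# Hypothetical sketch of the two-stage idea: a predicted query structure (here a bare
# triple-pattern skeleton) constrains which candidate queries get instantiated, so
# structurally incompatible candidates are never generated. The skeleton format and
# the toy vocabularies are simplifications for illustration.
from itertools import product

# Stage 1 output (assumed): a skeleton with typed slots instead of concrete KB items.
skeleton = [("?x", "REL", "ENT"), ("?x", "REL", "?y")]  # two triple patterns

relations = ["dbo:director", "dbo:starring"]
entities  = ["dbr:Inception"]

def instantiate(skeleton, relations, entities):
    """Fill every REL/ENT slot with concrete KB items, keeping the structure fixed."""
    slots = [(i, j) for i, t in enumerate(skeleton) for j, s in enumerate(t) if s in ("REL", "ENT")]
    choices = [relations if skeleton[i][j] == "REL" else entities for i, j in slots]
    for combo in product(*choices):
        cand = [list(t) for t in skeleton]
        for (i, j), value in zip(slots, combo):
            cand[i][j] = value
        yield [tuple(t) for t in cand]

candidates = list(instantiate(skeleton, relations, entities))
print(len(candidates))  # 4 structure-conforming candidates instead of a free search
```

The second stage would then rank only these structure-conforming candidates, which is where the reduction in noisy queries comes from.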


Author(s):  
Muhammad Fahrurrozi ◽  
Azhari SN

The semantic web is a technology that allows us to build a knowledge base, or ontology, so that the information on a web page can be understood by computers. One piece of software for building ontology-based semantic web applications is Protégé. Protégé allows developers to develop an ontology with description-logic expressions and provides plugins such as DL-Query and SPARQL-Query to display information involving class, property, and individual expressions in the ontology. The problem that arises is that the DL-Query plugin can only process rules involving class expressions over object properties, despite being equipped with reasoning functionality, while the SPARQL-Query plugin lacks the reasoning abilities of DL-Query even though it can process queries involving classes, properties, and individuals. This research produced a new plugin based on SPARQL-DL with natural-language input, since Protégé does not provide a plugin with natural-language input for viewing the results of the combined expressions contained in an ontology. The plugin allows developers to view ontology information in a language that is easier to understand, without having to think about the complicated structure of SPARQL queries.
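To illustrate the general idea of hiding SPARQL syntax behind natural-language input, here is a small, purely illustrative Python sketch using template-based translation; the supported question patterns, the prefix, and the class/property names are assumptions, and the actual plugin is built on SPARQL-DL inside Protégé.

```python
# Purely illustrative: a tiny template-based translation from a restricted natural
# language pattern to a SPARQL query string, in the spirit of hiding SPARQL syntax
# from the ontology developer. Patterns, prefix, and names are assumptions.
import re

TEMPLATES = [
    # "list all instances of <Class>"
    (re.compile(r"list all instances of (\w+)", re.I),
     "SELECT ?x WHERE {{ ?x a :{0} }}"),
    # "what is the <property> of <Individual>"
    (re.compile(r"what is the (\w+) of (\w+)", re.I),
     "SELECT ?v WHERE {{ :{1} :{0} ?v }}"),
]

def to_sparql(question: str) -> str:
    """Return a SPARQL query for the first matching template, or raise ValueError."""
    for pattern, template in TEMPLATES:
        m = pattern.search(question)
        if m:
            return "PREFIX : <http://example.org/onto#>\n" + template.format(*m.groups())
    raise ValueError("question does not match any supported pattern")

print(to_sparql("List all instances of Student"))
print(to_sparql("What is the supervisor of Alice"))
```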


2017 ◽  
Vol 13 (2) ◽  
pp. 155-172 ◽  
Author(s):  
Keng Hoon Gan ◽  
Keat Keong Phang

Purpose: When accessing structured contents in XML form, information requests are formulated in special query languages such as NEXI, XQuery, etc. However, it is not easy for end users to compose such information requests in these special languages because of their complexity. Hence, the purpose of this paper is to automate the construction of such queries from common queries such as keyword or form-based queries. Design/methodology/approach: The authors address the problem of constructing queries for XML retrieval by proposing a semantic-syntax query model that can be used to construct different types of structured queries. First, a generic query structure known as the semantic query structure is designed to store the query contents given by the user. Then, a target language is generated by mapping the contents of the semantic query structure to query syntax templates stored in a knowledge base. Findings: Evaluations were carried out based on how well information needs are captured and transformed into a target query language. In summary, the proposed model is able to express information needs specified using queries like NEXI; XQuery records a lower percentage because of its language complexity. The authors also achieve a satisfactory query construction rate with an example-based method, i.e. 86 per cent (for NEXI IMDB topics) and 87 per cent (for NEXI Wiki topics), compared to the benchmark of 78 per cent by Sumita and Iida in language translation. Originality/value: The proposed semantic-syntax query model allows the flexibility of accommodating a new query language by separating the semantics of a query from its syntax.
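The separation of query semantics from query syntax can be illustrated with a short sketch. In the hedged Python example below, a generic semantic query structure is rendered through per-language syntax templates into NEXI and a simplified XQuery; the field names and templates are our assumptions, not the paper's knowledge base.

```python
# Illustrative sketch of the semantic-syntax separation: information needs are
# captured once in a generic "semantic query structure", and per-language syntax
# templates then render it as NEXI or XQuery. Fields and templates are assumptions.

semantic_query = {
    "target_element": "article",
    "support_element": "title",
    "keywords": "xml retrieval",
}

SYNTAX_TEMPLATES = {
    # NEXI: content-and-structure query
    "NEXI":   "//{target_element}[about(.//{support_element}, {keywords})]",
    # XQuery: a comparable (simplified) query over the same need
    "XQuery": "for $a in //{target_element} "
              'where $a//{support_element}[contains(., "{keywords}")] return $a',
}

def construct(semantic: dict, language: str) -> str:
    """Render the semantic query structure with the chosen language's syntax template."""
    return SYNTAX_TEMPLATES[language].format(**semantic)

print(construct(semantic_query, "NEXI"))
print(construct(semantic_query, "XQuery"))
```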


Episteme ◽  
2015 ◽  
Vol 12 (2) ◽  
pp. 249-268 ◽  
Author(s):  
Michael Williams

Unlike animal knowledge, mature human knowledge is not a natural phenomenon. This claim is defended by examining the concept of such knowledge and showing that it is best analysed in deontic terms. To be knowledgeable is to possess epistemic authority. Such authority is assessable in two dimensions. The first is contextually appropriate truth-reliability. The second is epistemic responsibility in three senses of “responsibility”: accountability, due diligence and liability to sanction. The fact that knowledge can be impugned by non-culpable unreliability shows that, with respect to loss of authority, liability is strict. This entails that mature human knowledge requires some degree of epistemic self-consciousness: in particular, the conceptual capacities for critically examining one's beliefs. Arguments advanced by Kornblith for the claim that there is a vicious regress involved in this requirement are shown to depend on under-described examples. When the examples are fleshed out, we see that the “critical reflection” requirement does not demand constant self-monitoring but only the capacity to recognize and respond to appropriate epistemic queries. Recognizing that the practice of justifying one's beliefs conforms to a default and query structure also dispels the illusion that insisting that epistemic subjects have some sense of their own epistemic powers involves an unpalatable form of epistemic circularity.

