database engine
Recently Published Documents

TOTAL DOCUMENTS: 116 (five years: 28)
H-INDEX: 10 (five years: 2)

Author(s):  
Juan Miguel Medina ◽  
Ignacio J. Blanco ◽  
Olga Pons

Vehicles ◽  
2022 ◽  
Vol 4 (1) ◽  
pp. 42-59
Author(s):  
Mikel García ◽  
Itziar Urbieta ◽  
Marcos Nieto ◽  
Javier González de Mendibil ◽  
Oihana Otaegui

The local dynamic map (LDM) is a key component in the future of autonomous and connected vehicles. An LDM serves as a local database with the tools needed to provide a common reference system for both static data (i.e., map information) and dynamic data (vehicles, pedestrians, etc.). The LDM should have a common, well-defined input system in order to be interoperable across multiple data sources such as sensor detections or V2X communications. In this work, we present an interoperable graph-based LDM (iLDM) that uses Neo4j as its database engine and OpenLABEL as a common data format. We report an analysis of data insertion and query times for the iLDM, implement a vehicle discovery service function to test its capabilities, and compare it against other LDM implementations; the proposed iLDM outperforms them on several relevant features, furthering its practical use in advanced driver assistance system development.
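To give a concrete sense of graph-based insertion into such an LDM, the sketch below builds a parameterised Cypher `MERGE` statement for one detected dynamic object. The node label and property names are hypothetical illustrations, not the schema used by the iLDM paper.

```python
# Illustrative sketch: prepare a Cypher MERGE statement for inserting one
# detected dynamic object into a graph-based LDM. The :DynamicObject label
# and its properties are hypothetical, not the iLDM paper's actual schema.

def make_insert_query(obj_id, obj_type, x, y, timestamp):
    """Build a parameterised Cypher statement plus its parameter map."""
    query = (
        "MERGE (o:DynamicObject {id: $id}) "
        "SET o.type = $type, o.x = $x, o.y = $y, o.ts = $ts"
    )
    params = {"id": obj_id, "type": obj_type, "x": x, "y": y, "ts": timestamp}
    return query, params

query, params = make_insert_query("veh-42", "car", 12.5, -3.1, 1650000000)
# With the official Neo4j Python driver, this would be executed roughly as
# session.run(query, **params) inside a driver session.
```

Using `MERGE` keyed on the object id makes repeated detections of the same object update one node rather than creating duplicates.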


2021 ◽  
Vol 22 (1) ◽  
Author(s):  
Christopher T. Lee ◽  
Manolis Maragkakis

Abstract Background The Sequence Alignment/Map Format Specification (SAM) is one of the most widely adopted file formats in bioinformatics and many researchers use it daily. Several tools, including most high-throughput sequencing read aligners, use it as their primary output, and many more tools have been developed to process it. However, despite its flexibility, SAM-encoded files can be difficult to query and understand even for experienced bioinformaticians. As genomic data grow rapidly, structured and efficient queries on data encoded in SAM/BAM files are becoming increasingly important. Existing tools are very limited in their query capabilities or are not efficient. Critically, new tools that address these shortcomings should not only be able to support existing large datasets, but should also do so without requiring massive data transformations and file infrastructure reorganizations. Results Here we introduce SamQL, an SQL-like query language for the SAM format with intuitive syntax that supports complex and efficient queries on top of SAM/BAM files and that can replace the Bash one-liners commonly employed by many bioinformaticians. SamQL has high expressive power with no upper limit on query size and, when parallelized, outperforms other substantially less expressive software. Conclusions SamQL is a complete query language that we envision as a step toward a structured database engine for genomics. SamQL is written in Go and is freely available as a standalone program and as an open-source library under an MIT license, https://github.com/maragkakislab/samql/.
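The kind of filtering such a query language expresses declaratively can be sketched over raw SAM records. The field layout below follows the SAM specification (QNAME, FLAG, RNAME, POS, MAPQ, ...), but the query itself is a generic illustration, not SamQL syntax.

```python
# Toy filter over SAM records, illustrating the sort of query a SAM query
# language replaces. Field positions follow the SAM spec; the predicate
# is an illustrative example, not actual SamQL syntax.

def parse_sam_line(line):
    """Parse the first five mandatory SAM fields from a tab-separated record."""
    f = line.rstrip("\n").split("\t")
    return {"qname": f[0], "flag": int(f[1]), "rname": f[2],
            "pos": int(f[3]), "mapq": int(f[4])}

def query(records, rname, min_mapq):
    """Roughly: SELECT * WHERE rname = ? AND mapq >= ?"""
    return [r for r in records if r["rname"] == rname and r["mapq"] >= min_mapq]

sam_lines = [
    "r1\t0\tchr1\t100\t60\t50M\t*\t0\t0\tACGT\t####",
    "r2\t0\tchr2\t200\t60\t50M\t*\t0\t0\tACGT\t####",
    "r3\t0\tchr1\t300\t10\t50M\t*\t0\t0\tACGT\t####",
]
records = [parse_sam_line(l) for l in sam_lines]
hits = query(records, "chr1", 30)  # keeps only r1
```

The equivalent Bash one-liner would interleave `awk` column indices and flag arithmetic, which is exactly the opacity a structured query language removes.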


2021 ◽  
Vol 14 (11) ◽  
pp. 2419-2431
Author(s):  
Tarique Siddiqui ◽  
Surajit Chaudhuri ◽  
Vivek Narasayya

Data analysis often involves comparing subsets of data across many dimensions to find unusual trends and patterns. While such comparisons can be expressed using SQL, the resulting queries tend to be complex to write and suffer from poor performance over large, high-dimensional datasets. In this paper, we propose a new logical operator, COMPARE, for relational databases that concisely captures the enumeration and comparison of subsets of data and greatly simplifies the expression of a large class of comparative queries. We extend the database engine with optimization techniques that exploit the semantics of COMPARE to significantly improve the performance of such queries. We have implemented these extensions inside Microsoft SQL Server, a commercial DBMS engine. Our extensive evaluation on synthetic and real-world datasets shows that COMPARE yields a significant speedup over existing approaches, including physical plans generated by today's database systems, user-defined functions (UDFs), and middleware solutions that compare subsets outside the database.
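The verbose SQL pattern that a comparative operator condenses can be demonstrated with a small in-memory SQLite example: comparing two subsets requires aggregating each subset in its own subquery and joining the results. The table and data are invented for illustration; the paper's actual COMPARE syntax is not reproduced here.

```python
# Demonstrates the self-join-of-aggregates pattern that a comparative
# operator like COMPARE condenses. Table and data are illustrative.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE sales(product TEXT, month INTEGER, revenue REAL);
INSERT INTO sales VALUES
 ('A', 1, 100), ('A', 2, 120),
 ('B', 1, 90),  ('B', 2, 200);
""")

# Compare monthly revenue of product A against product B: each subset is
# aggregated separately, then the two aggregates are joined on month.
rows = con.execute("""
SELECT a.month, a.rev - b.rev AS diff
FROM (SELECT month, SUM(revenue) AS rev FROM sales
      WHERE product = 'A' GROUP BY month) AS a
JOIN (SELECT month, SUM(revenue) AS rev FROM sales
      WHERE product = 'B' GROUP BY month) AS b
  ON a.month = b.month
ORDER BY a.month
""").fetchall()
# rows -> [(1, 10.0), (2, -80.0)]
```

With many dimensions and many subset pairs, this pattern multiplies subqueries and joins, which is the cost both in readability and in plan quality that a dedicated operator targets.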


Author(s):  
Filipe Sá ◽  
Pedro Martins ◽  
Maryam Abbasi
Keyword(s):  

The selected database engine significantly impacts the performance of electronic services, including governmental e-services. Based on several inquiries sent to all Portuguese city halls, this study ranks the available database engines (commercial and non-commercial) in use as of 2019.


2021 ◽  
Author(s):  
Zainab Al-Zanbouri

Information Technology uses up to 10% of the world's electricity generation, contributing to CO2 emissions and high energy costs. Data centers consume up to 23% of this energy, and a large fraction of that is consumed by databases. Therefore, building an energy-efficient (green) database engine would reduce the associated energy consumption and CO2 emissions. To understand the factors driving database energy consumption and execution time over the course of their evolution, we conducted an empirical case study of the energy consumption of two MySQL database engines, InnoDB and MyISAM, across 12 releases. Moreover, we examined the relation between four software metrics and energy consumption and execution time, to determine which software metrics affect the greenness and performance of a database. Our analysis shows that database engines' energy consumption and execution time increase as the databases evolve. Moreover, the Lines of Code metric is strongly correlated with energy consumption and execution time.
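A correlation claim of this kind is typically quantified with a Pearson coefficient over per-release measurements. The sketch below computes it from first principles on invented numbers; the figures are hypothetical and are not the study's data.

```python
# Pearson correlation between Lines of Code and energy per release,
# computed from the definition. The measurements below are hypothetical
# illustrations, not the case study's actual data.
import math

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

loc    = [50_000, 55_000, 61_000, 70_000, 82_000, 90_000]  # per release
joules = [120, 131, 140, 158, 175, 190]                    # per workload run
r = pearson(loc, joules)  # close to 1.0 for this monotone data
```

A coefficient near 1 on real per-release data would support the "strongly correlated" finding; on noisy measurements one would also report a significance test.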



2021 ◽  
Author(s):  
Pablo Fuchs ◽  
Javier Mendoza

We present a numerical and geographical database for the Tarija Glacier in the Tropical Andes (68.2° W, 16.2° S, 4820-5380 m.a.s.l.). The database consists of meteorological data, mass balance observations, and variations in glacier front positions. Meteorological data were obtained from an automatic weather station (AWS) located on the glacier surface and include the following variables: precipitation, temperature, incoming shortwave radiation, relative humidity, wind speed, and wind direction. Mass balance for this glacier was observed monthly in an ablation stake network and annually in a snow pit at 5230 m.a.s.l. The glacier front topography was monitored annually using a DGPS survey. We set up the database using the relational database engine PostgreSQL, which manages geospatial data through the PostGIS extension. The SAGA system was used for image analysis and mapping. Data quality control and further processing were carried out in the R environment, which has interfaces to the PostgreSQL database system and SAGA, as well as several additional packages for statistical analyses and modelling. The database contains data spanning the 2011-2018 period and should be useful for multiple applications including environmental and ecological modeling, water resources assessment, and climate change studies.
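Storing georeferenced observations in PostGIS typically means writing geometry as WKT and registering it in a spatial reference system. The sketch below prepares one AWS observation for insertion using the standard `ST_GeomFromText` function with SRID 4326 (WGS84); the table and column names are hypothetical, not the database's actual schema.

```python
# Sketch: prepare a parameterised INSERT for a PostGIS-enabled table.
# ST_GeomFromText and SRID 4326 are standard PostGIS usage; the table
# aws_observation and its columns are hypothetical.

def observation_sql(station, ts, temp_c, lon, lat):
    """Build an INSERT statement plus parameters, geometry as WKT POINT."""
    wkt = f"POINT({lon} {lat})"  # WKT order is lon lat
    sql = ("INSERT INTO aws_observation(station, ts, temp_c, geom) "
           "VALUES (%s, %s, %s, ST_GeomFromText(%s, 4326))")
    return sql, (station, ts, temp_c, wkt)

sql, params = observation_sql("tarija_aws", "2015-07-01T00:00:00Z",
                              -4.2, -68.2, -16.2)
# With a PostgreSQL driver such as psycopg2, roughly: cur.execute(sql, params)
```

Keeping geometry in a real spatial type (rather than plain lon/lat columns) is what enables spatial indexing and distance queries later.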


Author(s):  
José A. Herrera-Ramírez ◽  
Marlen Treviño-Villalobos ◽  
Leonardo Víquez-Acuña

The design and implementation of services that handle geospatial data involves thinking about storage engine performance and optimization for the intended use. NoSQL and relational databases each bring their own advantages, so one of these options must normally be chosen according to the requirements of the solution. Those requirements can change, or some operations may run more efficiently on another database engine, so committing to a single engine means being tied to its features and work model. This paper presents a hybrid (NoSQL-SQL) approach that stores geospatial data in MongoDB and replicates and maps them onto a PostgreSQL database using an open-source tool called ToroDB Stampede; solutions can then draw on either NoSQL or SQL features to satisfy most requirements associated with storage engine performance. A descriptive analysis explains the replication and synchronization workflow across both engines, followed by a quantitative analysis showing that a native PostgreSQL database answers queries faster than the same query run in PostgreSQL against the hybrid database. In addition, the geometry type increases the update response time of a materialized view.
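The core idea behind document-to-relational replication of this kind can be sketched as a flattening step: scalar fields of a document go to a root table and nested objects go to child tables keyed by the document id. This is a much-simplified illustration in the spirit of such tools, not ToroDB Stampede's actual mapping algorithm.

```python
# Much-simplified sketch of document-to-relational mapping: scalars land in
# a root table, nested objects in child tables keyed by the document id.
# Illustrative only -- not ToroDB Stampede's actual algorithm.

def flatten(doc_id, doc, table="root", rows=None):
    """Flatten one document into {table_name: [row_dicts]}."""
    rows = rows if rows is not None else {}
    row = {"did": doc_id}
    for key, value in doc.items():
        if isinstance(value, dict):
            flatten(doc_id, value, f"{table}_{key}", rows)  # child table
        else:
            row[key] = value
    rows.setdefault(table, []).append(row)
    return rows

doc = {"name": "park", "geometry": {"type": "Point", "coords": "[-84.1, 10.2]"}}
rows = flatten(1, doc)
# rows has a "root" table and a "root_geometry" child table, both keyed by did=1
```

Once documents are laid out as rows like this, ordinary SQL (including PostGIS functions, in the geospatial case) can be applied to the replicated side.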

