Dynamic Compilation of User-Defined Functions in PL/pgSQL Language

Author(s):  
Vladislav Muratovich Dzhidzhoyev ◽  
Ruben Arturovich Buchatskiy ◽  
Michael Vyacheslavovich Pantilimonov ◽  
Alexander Nikolaevich Tomilin

Many modern RDBMSs provide procedural extensions to SQL that allow users to perform complex computations on the server side. Such extensions improve modularity and code reuse, simplify programming of application logic, and help developers avoid network overhead and enhance performance. SQL queries and procedural-extension code are mostly executed by interpretation, which incurs significant computational overhead from indirect function calls and generic checks. Moreover, most RDBMSs use separate engines for executing SQL queries and procedural-extension code, so additional work is required to switch between them. Thus, interpretation of SQL queries combined with interpretation of procedural-extension code may drastically degrade RDBMS performance. One solution is dynamic compilation. In this paper, we describe a technique for dynamic compilation of the PL/pgSQL procedural language for the PostgreSQL database system using the LLVM compiler infrastructure. The dynamic compiler for PL/pgSQL is developed as part of a dynamic compiler for PostgreSQL queries. The proposed technique eliminates the computational overhead caused by interpretation. Synthetic performance tests show that the developed solution speeds up SQL query execution by several times.
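The gain from compiling out interpretive dispatch can be illustrated with a small, self-contained Python sketch. This is purely illustrative, using a toy expression language rather than PostgreSQL's executor or LLVM: a tree-walking interpreter pays a tag dispatch at every node, while a one-time compilation step emits specialized straight-line code that is then reused on every call.

```python
# Hypothetical sketch (not PostgreSQL/LLVM code): contrasts tree-walking
# interpretation with dynamic compilation for a tiny expression language.
# AST nodes: ("const", v), ("var", name), ("add", l, r), ("mul", l, r)

def interpret(node, env):
    """Tree-walking interpreter: every node costs a dispatch plus checks."""
    tag = node[0]
    if tag == "const":
        return node[1]
    if tag == "var":
        return env[node[1]]
    if tag == "add":
        return interpret(node[1], env) + interpret(node[2], env)
    if tag == "mul":
        return interpret(node[1], env) * interpret(node[2], env)
    raise ValueError(f"unknown node {tag!r}")

def compile_expr(node):
    """'Dynamic compilation': emit specialized source once, reuse many times."""
    def emit(n):
        tag = n[0]
        if tag == "const":
            return repr(n[1])
        if tag == "var":
            return f"env[{n[1]!r}]"
        if tag == "add":
            return f"({emit(n[1])} + {emit(n[2])})"
        if tag == "mul":
            return f"({emit(n[1])} * {emit(n[2])})"
        raise ValueError(f"unknown node {tag!r}")
    code = compile(emit(node), "<expr>", "eval")
    return lambda env: eval(code, {"env": env})

# (x + 2) * y
ast = ("mul", ("add", ("var", "x"), ("const", 2)), ("var", "y"))
run = compile_expr(ast)
assert interpret(ast, {"x": 3, "y": 4}) == run({"x": 3, "y": 4}) == 20
```

The compiled closure contains no per-node tag dispatch; this is the same effect, in miniature, that LLVM-based compilation achieves for PL/pgSQL statements.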



2021 ◽  
Vol 2094 (3) ◽  
pp. 032045
Author(s):  
A Y Unger

Abstract A new design pattern intended for distributed cloud-based information systems is proposed. The pattern is based on the traditional client-server architecture. The server side is divided into three principal components: data storage, application server, and cache server. Each component can be used to deploy parts of several independent information systems, thus realizing a shared-resource approach. A strategy for separating competencies between the client and the server is proposed: the client side is responsible for application logic, while the server side is responsible for data-storage consistency and data-access control. Data protection is ensured at two levels: the entity level and the transaction level. The application programming interface for data access is presented at the level of identified transaction descriptors.
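A minimal sketch of the described competency split, with all class and method names invented for illustration: the client keeps the application logic, while the server exposes data only through transaction descriptors and enforces access control at both the entity level (per-entity ACL checks) and the transaction level (atomic commit of buffered writes).

```python
# Hypothetical sketch of the client/server competency split; every name
# here is illustrative, not taken from the paper.
import itertools

class Server:
    def __init__(self):
        self._store = {}                 # entity-level storage
        self._acl = {}                   # entity -> set of allowed users
        self._txns = {}                  # descriptor -> (user, pending writes)
        self._ids = itertools.count(1)

    def begin(self, user):
        """Open a transaction; the client receives only a descriptor."""
        txn_id = next(self._ids)
        self._txns[txn_id] = (user, {})
        return txn_id

    def write(self, txn_id, key, value):
        user, pending = self._txns[txn_id]
        # entity-level protection: per-entity access control
        if key in self._acl and user not in self._acl[key]:
            raise PermissionError(f"{user} may not write {key!r}")
        pending[key] = value

    def commit(self, txn_id):
        # transaction-level protection: buffered writes apply atomically
        user, pending = self._txns.pop(txn_id)
        self._store.update(pending)

    def read(self, user, key):
        if key in self._acl and user not in self._acl[key]:
            raise PermissionError(f"{user} may not read {key!r}")
        return self._store[key]

server = Server()
t = server.begin("alice")            # client-side application logic ...
server.write(t, "balance", 100)      # ... talks to data only via descriptor t
server.commit(t)
assert server.read("alice", "balance") == 100
```

The client never touches storage directly, matching the proposed strategy in which consistency and access control remain server-side responsibilities.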


2021 ◽  
Vol 7 ◽  
pp. e548
Author(s):  
Martin Grambow ◽  
Christoph Laaber ◽  
Philipp Leitner ◽  
David Bermbach

Performance problems in applications should ideally be detected as soon as they occur, i.e., directly when the causing code modification is added to the code repository. To this end, complex and cost-intensive application benchmarks or lightweight but less relevant microbenchmarks can be added to existing build pipelines to ensure performance goals. In this paper, we show how the practical relevance of microbenchmark suites can be improved and verified based on the application flow during an application benchmark run. We propose an approach to determine the overlap of common function calls between application benchmarks and microbenchmarks, describe a method that identifies redundant microbenchmarks, and present a recommendation algorithm that reveals relevant functions not yet covered by microbenchmarks. A microbenchmark suite optimized in this way can easily test all functions determined to be relevant by application benchmarks after every code change, thus significantly reducing the risk of undetected performance problems. Our evaluation using two time series databases shows that, depending on the specific application scenario, application benchmarks cover different functions of the system under test. The respective microbenchmark suites cover between 35.62% and 66.29% of the functions called during the application benchmark, offering substantial room for improvement. Through two use cases, removing redundancies in the microbenchmark suite and recommending not-yet-covered functions, we decrease the total number of microbenchmarks and increase the practical relevance of both suites. Removing redundancies can significantly reduce the number of microbenchmarks (and thus the execution time as well) to ~10% and ~23% of the original microbenchmark suites, whereas recommendation identifies up to 26 and 14 not-yet-covered functions to benchmark, improving relevance. By utilizing the differences and synergies of application benchmarks and microbenchmarks, our approach potentially enables effective software performance assurance with performance tests of multiple granularities.
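The three steps can be sketched with plain set arithmetic over recorded call sets. Function and benchmark names below are hypothetical; the paper's actual analysis works on call data collected during benchmark runs, and its recommendation step ranks by relevance rather than alphabetically.

```python
# Illustrative sketch (all names invented): set arithmetic behind the three
# steps - coverage overlap, redundant microbenchmarks, and recommendation.

def coverage(app_calls, micro_suites):
    """Fraction of functions hit by the application benchmark that at
    least one microbenchmark also covers."""
    covered = set().union(*micro_suites.values()) & app_calls
    return len(covered) / len(app_calls)

def redundant(micro_suites):
    """Microbenchmarks whose covered functions are a strict subset of
    another microbenchmark's coverage."""
    names = list(micro_suites)
    return {a for a in names for b in names
            if a != b and micro_suites[a] < micro_suites[b]}

def recommend(app_calls, micro_suites, k=3):
    """Up to k functions exercised by the application benchmark but covered
    by no microbenchmark (sorted here only for determinism)."""
    uncovered = app_calls - set().union(*micro_suites.values())
    return sorted(uncovered)[:k]

app = {"parse", "insert", "flush", "compact", "query"}
micros = {"bench_insert": {"parse", "insert"},
          "bench_parse":  {"parse"},
          "bench_query":  {"query"}}

assert coverage(app, micros) == 3 / 5        # parse, insert, query covered
assert redundant(micros) == {"bench_parse"}  # subset of bench_insert
assert recommend(app, micros) == ["compact", "flush"]
```

Dropping `bench_parse` shrinks the suite without losing coverage, and the two recommended functions close the remaining gap, mirroring the paper's two use cases.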


2010 ◽  
Vol 1 (1) ◽  
pp. 1-15 ◽  
Author(s):  
Obiniyi Ayodele Afolayan ◽  
Ezugwu El-Shamir Absalom

This paper identifies the causes of delays in processing and releasing results in tertiary institutions. An enhanced computer program for result computation, integrated with a database for storing processed results, simplifies a university grading system and overcomes the shortcomings of existing packages. The system takes interdepartmental collaboration and alliances into consideration, over a network that speeds up collection of processed results from designated departments through an improved centralized database system. An empirical evaluation shows that the system expedites processing of results and transcripts at various levels, as well as online management of and access to student results. The implementation is based on open-source solutions: Apache serves as the web server, extended with PHP for server-side processing. In recognition of the confidentiality of the data in the system, communications are protected with the OpenSSL library for data encryption, along with role-based authentication. The system improves service-delivery efficiency and benefits both administration and students.
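At its core, result computation of this kind reduces to a credit-weighted grade-point average. A minimal sketch, under an assumed 5-point grading scale (the institution's actual scale and rounding rules may differ):

```python
# Minimal sketch of a result-computation core, under an ASSUMED 5-point
# grading scale (A=5 ... F=0); real institutional scales may differ.

GRADE_POINTS = {"A": 5, "B": 4, "C": 3, "D": 2, "E": 1, "F": 0}

def gpa(results):
    """results: list of (grade, credit_units). Returns credit-weighted GPA."""
    total_units = sum(units for _, units in results)
    total_points = sum(GRADE_POINTS[g] * units for g, units in results)
    return round(total_points / total_units, 2)

semester = [("A", 3), ("B", 2), ("C", 3)]     # course grades and unit loads
assert gpa(semester) == round((15 + 8 + 9) / 8, 2)   # 4.0
```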


2021 ◽  
Vol 2021 ◽  
pp. 1-11
Author(s):  
Wenming Yong

In this paper, an intelligent English electronic dictionary system is designed and implemented, exploiting the advantages of the Internet of Things. The software architecture, the design and implementation of the client and server sides, and the related technologies used during development are discussed to give a comprehensive view of the development process. The client and server follow a C/S architecture; the server side is a standard Maven web project, with dependencies managed by Maven to avoid conflicts. Spring MVC is used to build a model-view-controller framework that separates the user interface from the application logic; Spring dependency injection produces a loosely coupled project, which helps separate project components; and Spring Data JPA provides the persistence layer, simplifying data access and letting the framework generate much of the data-access logic automatically. Overall performance testing shows that the system performs well on the target platform, achieves intelligent trilingual word lookup, and meets the requirements for responsiveness and ease of use.


2021 ◽  
Vol 2021 ◽  
pp. 1-13
Author(s):  
Zijian Li ◽  
Chuqiao Xiao

In distributed database systems, efficiency and availability become critical considerations as cluster scale grows. A common approach to high availability in a cluster is replication, but this is inefficient because of its low storage utilization. Erasure coding can provide data reliability while ensuring high storage utilization; however, because of the large number of encoding and decoding operations it demands from the CPU, it is not suitable for frequently updated data. To optimize the storage efficiency of data in a distributed system without affecting its availability, this paper proposes a data-temperature recognition algorithm that classifies data tablets as cold, warm, or hot according to access frequency. Combining three-way replication with erasure coding, we propose ER-store, a hybrid storage mechanism for the different data types. We also design the data-temperature conversion cycle around the read-write separation architecture of the distributed database system, which reduces the computational overhead caused by frequent updates under erasure coding. We implemented this design on the CBase database system, which is based on a read-write separation architecture, and experimental results show that it saves 14.6%–18.3% of storage space while meeting the system's access-performance requirements.
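The temperature split and the resulting storage savings can be sketched as follows. The thresholds, the hot/warm/cold-to-redundancy mapping, and the Reed-Solomon RS(4,2) parameters are assumptions for illustration, not the paper's exact values:

```python
# Illustrative sketch of the ER-store idea; thresholds and the RS(4,2)
# erasure-code parameters are ASSUMPTIONS, not the paper's exact values.

def classify(access_counts, hot=100, warm=10):
    """Label each tablet hot/warm/cold by access frequency."""
    return {t: ("hot" if n >= hot else "warm" if n >= warm else "cold")
            for t, n in access_counts.items()}

def storage_cost(labels, tablet_size=1.0):
    """One possible mapping: hot/warm tablets keep 3 replicas (3x);
    cold tablets use RS(4,2) - 6 fragments for 4 data fragments (1.5x)."""
    factor = {"hot": 3.0, "warm": 3.0, "cold": 6 / 4}
    return sum(factor[lab] * tablet_size for lab in labels.values())

counts = {"t1": 500, "t2": 50, "t3": 2, "t4": 1}   # accesses per cycle
labels = classify(counts)
assert labels == {"t1": "hot", "t2": "warm", "t3": "cold", "t4": "cold"}
# 3 + 3 + 1.5 + 1.5 = 9.0 units, vs 12.0 under pure three-way replication
assert storage_cost(labels) == 9.0
```

Re-running the classification once per conversion cycle, rather than on every access, is what keeps erasure-coding churn away from frequently updated tablets.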


2000 ◽  
Vol 42 (4) ◽  
pp. 220-227 ◽  
Author(s):  
U M Fietzek ◽  
F Heinen ◽  
S Berweck ◽  
S Maute ◽  
A Hufschmidt ◽  
...  

Author(s):  
Leland van den Daele ◽  
Ashley Yates ◽  
Sharon Rae Jenkins

Abstract. This project compared the relative performance of professional dancers and nondancers on the Music Apperception Test (MAT; van den Daele, 2014 ), then compared dancers’ performance on the MAT with that on the Thematic Apperception Test (TAT; Murray, 1943 ). The MAT asks respondents to “tell a story to the music” in compositions written to represent basic emotions. Dancers had significantly shorter response latency and were more fluent in storytelling than a comparison group matched for gender and age. Criterion-based evaluation of dancers’ narratives found narrative emotion consistent with music written to portray the emotion, with the majority integrating movement, sensation, and imagery. Approximately half the dancers were significantly more fluent on the MAT than the TAT, while the other half were significantly more fluent on the TAT than the MAT. Dancers who were more fluent on the MAT had a higher proportion of narratives that integrated movement and imagery compared with those more fluent on the TAT. The results were interpreted as consistent with differences observed in neurological studies of auditory and visual processing, educational studies of modality preference, and the cognitive style literature. The MAT provides an assessment tool to complement visually based performance tests in personality appraisal.

