String processing model for knowledge-driven systems

Doklady BGUIR ◽  
2020 ◽  
Vol 18 (6) ◽  
pp. 33-40
Author(s):  
V. P. Ivashenko

The purpose of this work is to confirm experimentally the theoretical estimates of the time complexity of operations of a string processing model linked with a metric space, for solving data processing problems in knowledge-driven systems, including the study and comparison of the performance characteristics of these operations with those of similar operations on the most relevant data structures. Integration and unit testing were used to obtain the results of the computational experiments and to verify their correctness. The C/C++ implementation of the operations of the string processing model was tested. The paper defines the concepts needed to calculate metric features over strings. The experiments confirmed the theoretical estimates of the computational complexity of the implemented operations and the validity of the chosen parameters of the underlying data structures, which ensures near-optimal throughput and operation times. According to the obtained results, the advantage of the model is the ability to guarantee a time complexity of the string processing operations no higher than O at all stages of the life cycle of the data structures used to represent strings, from creation to destruction, which allows for high data-processing throughput and responsiveness in systems built on the implemented operations. For particular string processing problems where more suitable data structures such as vector or map can be used, the implemented operations are at a disadvantage: they are inferior in the amount of data processed per unit of time. The string processing model is aimed at application in knowledge-driven systems at the data management level.
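The abstract does not reproduce the model's formal definitions. As a hedged illustration of a metric feature computed over strings, the classic Levenshtein edit distance satisfies the metric-space axioms (non-negativity, symmetry, triangle inequality) and runs in O(|a|·|b|) time; it is a stand-in example, not the paper's own metric:

```python
def levenshtein(a: str, b: str) -> int:
    """Edit distance between a and b -- a valid metric over strings."""
    # prev[j] holds the distance between the processed prefix of a and b[:j]
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]
```

Two rows of the dynamic-programming table suffice, so the space cost is O(|b|).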

2016 ◽  
Vol 20 (3) ◽  
pp. 125-134
Author(s):  
Aleksander Marek ◽  
Piotr Kardasz ◽  
Mikolaj Karpinski ◽  
Volodymyr Pohrebennyk

This paper presents the logistic system of the fuel life cycle, covering diesel oil and a mixture of rapeseed oil and butanol (2:3 ratio), using the Life-Cycle Assessment (LCA) method. This method is a management technique for assessing potential environmental hazards. Our intention was to compare the energy consumption needed to produce each of the test fuels and the emissions of selected substances generated during the production process. The study involved 10,000 liters of diesel and the same amount of the rapeseed oil and butanol mixture (2:3 ratio). On the basis of measurements, the following results were obtained. To produce a functional unit of diesel oil (i.e. 10,000 liters), it is necessary to extract 58.8 m3 of crude oil. The entire life cycle covering the consumption of 10,000 liters of diesel consumes 475.668 GJ of energy and causes the emission to air of the following substances: 235.376 kg of COx, 944.921 kg of NOx, 83.287 kg of SOx. In the case of a functional unit, producing the mixture of rapeseed oil and butanol (2:3 ratio) requires 10,000 kg of rapeseed and 20,350 kg of straw. The entire life cycle of 10,000 liters of the mixture of rapeseed oil and butyl alcohol (2:3 ratio) absorbs 370.616 GJ of energy while emitting the following air pollutants: 105.14832 kg of COx, 920.03124 kg of NOx, 0.162 kg of SOx. Analysis of the results leads to the conclusion that oil refining is the most energy-intensive and polluting process in the life cycle of diesel. The process consumes 41.4 GJ of energy and causes a significant emission of sulfur oxides (50 kg). In the production of the rapeseed oil and butyl alcohol mixture (2:3 ratio), rape cultivation is the most energy-intensive process (absorbing 53.856 GJ of energy). This is due to the long operating time of the farm tractor and combine harvester.
The operation of these machines leads also to the emission of a significant amount of pollution in the form of COx (2.664 kg) and NOx (23.31 kg).
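The quoted totals allow a direct per-functional-unit comparison. A minimal check of the figures above (all values taken from the abstract; the percentage is derived, not reported by the authors):

```python
DIESEL_GJ = 475.668    # life-cycle energy for 10,000 L of diesel
MIXTURE_GJ = 370.616   # life-cycle energy for 10,000 L of the 2:3 mixture
LITRES = 10_000

per_litre_diesel = DIESEL_GJ * 1000 / LITRES   # MJ per litre
per_litre_mix = MIXTURE_GJ * 1000 / LITRES     # MJ per litre

# Relative life-cycle energy saving of the mixture over diesel
saving = 1 - MIXTURE_GJ / DIESEL_GJ
print(f"{per_litre_diesel:.2f} MJ/L vs {per_litre_mix:.2f} MJ/L, "
      f"saving {saving:.1%}")
```

On these numbers the mixture's life cycle uses roughly 22% less energy per functional unit.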


2018 ◽  
Vol 1 (2) ◽  
pp. 14-24
Author(s):  
Dame Christine Sagala ◽  
Ali Sadikin ◽  
Beni Irawan

A data processing system is a necessary means of turning data into useful information. Such a system integrates data storage, addition, modification, scheduling and reporting, helping departments exchange information and make decisions quickly. The problem faced by GKPI Pal Merah Jambi is that it still uses Microsoft Office Word and disseminates information such as worship schedules, church activities and other worship routines on paper and wall bulletins. Printing worship bulletins and reports requires substantial operational funds; in addition, data collection and storage still have deficiencies, including recording data in books, difficulty in processing large amounts of data, and storage in only one place, which is passive. Based on the above problems, the authors conducted research titled Designing a Web-Based Data Processing System for the GKPI Pal Merah Church in Jambi. The purpose of this study is to design and produce a data processing system for the church; using this system can facilitate data processing in the GKPI Pal Merah Jambi Church. The study uses the waterfall development method, which provides a systematic and sequential approach to requirements analysis, design, implementation and unit testing, system testing and maintenance. The application is built for the web with a MySQL database, the PHP programming language and Laravel.


Author(s):  
André M. de Roos ◽  
Lennart Persson

This chapter shows that overcompensation and cohort cycles are also found in demand-driven systems, and that shifts in overcompensation patterns and cycle types can, as for supply-driven systems, be related to whether development or reproduction is more limited and controls the population at equilibrium. Furthermore, it considers whether dynamical phenomena like cohort cycles have also been reported to occur in unicellular species, which have a limited change in size over their life cycle. Finally, the principles of development versus reproduction control and the concept of ontogenetic asymmetry have formed major cornerstones throughout this whole book. The chapter returns to these topics and sets them in the context of contemporary—and future—ecological theory.


2020 ◽  
Vol 10 (8) ◽  
pp. 2888
Author(s):  
Bojun Sun ◽  
Xiaogang Sun ◽  
Meisheng Luan ◽  
Jingmin Dai ◽  
Shuanglong Cui

This paper develops a two-dimensional array pyrometer that can measure the true temperature field of a two-dimensional array. The pyrometer consists of an optical part, a circuit part and a software part. In the optical part, the radiation energy of the two-dimensional array target is obtained by scanning with a rotating mirror. The radiation signal is then converted and amplified by the circuit part. The software component implements pyrometer calibration, signal acquisition and data processing. The data processing adopts the secondary measurement method to calculate the true temperature and uses multi-threading to improve operational efficiency. Experiments show that the uncertainty of the two-dimensional array pyrometer reaches 1.43%. Compared with the single-threaded method, the true-temperature computation time of the two-dimensional array pyrometer is reduced by 77%, verifying that the software's operational efficiency is greatly improved.
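The abstract does not give the secondary measurement formulas, but the parallelization pattern it describes can be sketched: split the two-dimensional signal array into rows and convert each row to temperature on a worker pool. Here `true_temperature` is a placeholder linear model, not the paper's method, and in CPython real speedups for CPU-bound work would need processes or a compiled kernel rather than threads:

```python
from concurrent.futures import ThreadPoolExecutor

def true_temperature(signal: float) -> float:
    # Placeholder radiometric model: the paper's secondary measurement
    # method is not reproduced in the abstract.
    return 300.0 + 0.5 * signal

def process_field(field: list[list[float]], workers: int = 4) -> list[list[float]]:
    """Convert a 2-D array of radiation signals to temperatures row by row,
    dispatching rows to a pool of workers."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(lambda row: [true_temperature(s) for s in row],
                             field))
```

`pool.map` preserves row order, so the output field aligns with the input scan.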


2013 ◽  
Vol 312 ◽  
pp. 714-718
Author(s):  
Zi Qi Zhao ◽  
Xiao Jun Ye ◽  
Chun Ping Li

Among multidimensional clustering analysis algorithms, the class of cell-based clustering methods is noted for fast processing and time efficiency, with CLIQUE as its main representative. Combining the time-efficient CLIQUE algorithm with multidimensional k-anonymity yields the KLIQUE algorithm. Built on CLIQUE, KLIQUE retains CLIQUE's time complexity characteristics and can exploit CLIQUE's advantage in processing large amounts of multidimensional data.
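CLIQUE's core idea, which KLIQUE inherits, is to partition each dimension into intervals and keep only the grid cells that are dense. A minimal, hedged sketch of that density phase (cell size and threshold here are illustrative parameters, and the full CLIQUE subspace search is omitted):

```python
from collections import Counter

def dense_cells(points, cell_size=1.0, min_pts=2):
    """Map each point to its grid cell and keep cells meeting the density
    threshold -- the first phase of CLIQUE-style cell-based clustering."""
    counts = Counter(
        tuple(int(coord // cell_size) for coord in p) for p in points
    )
    return {cell for cell, n in counts.items() if n >= min_pts}
```

A single pass over the data suffices, which is the source of the class's speed on large inputs.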


2018 ◽  
Vol 12 (11) ◽  
pp. 387
Author(s):  
Evon Abu-Taieh ◽  
Issam AlHadid

Multimedia is a highly competitive world; one of the properties in which this is reflected is the speed of download and upload of multimedia elements: text, sound, pictures and animation. This paper presents the CRUSH algorithm, a lossless compression algorithm that can be used to compress files. The CRUSH method is fast and simple, with time complexity O(n), where n is the number of elements being compressed. Furthermore, the compressed file is independent of the algorithm and requires no auxiliary data structures. The paper compares CRUSH with other compression algorithms: Shannon-Fano coding, Huffman coding, Run-Length Encoding (RLE), Arithmetic Coding, Lempel-Ziv-Welch (LZW), the Burrows-Wheeler Transform, the Move-to-Front (MTF) Transform, Haar, wavelet trees, Delta Encoding, Rice & Golomb Coding, Tunstall coding, the DEFLATE algorithm, and Run-Length Golomb-Rice (RLGR).
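CRUSH's internals are not detailed in the abstract. For a concrete sense of an O(n) lossless baseline it is compared against, run-length encoding (one of the listed algorithms) can be sketched as:

```python
def rle_encode(data: str) -> list[tuple[str, int]]:
    """Collapse runs of repeated characters into (char, count) pairs -- O(n)."""
    runs: list[tuple[str, int]] = []
    for ch in data:
        if runs and runs[-1][0] == ch:
            runs[-1] = (ch, runs[-1][1] + 1)  # extend the current run
        else:
            runs.append((ch, 1))              # start a new run
    return runs

def rle_decode(runs: list[tuple[str, int]]) -> str:
    """Invert rle_encode exactly -- losslessness means a perfect round trip."""
    return "".join(ch * n for ch, n in runs)
```

RLE only wins on inputs with long runs; more elaborate schemes such as Huffman or LZW trade simplicity for broader applicability.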


2021 ◽  
Vol 9 ◽  
pp. 113-124
Author(s):  
Hadas Chassidim ◽  
Dani Almog ◽  
Shlomo Mark

With the Agile development approach, the software industry has moved to a more flexible and continuous Software Development Life Cycle (SDLC), which integrates the stages of development, delivery and deployment. This trend has exposed an increasing reliance on both unit testing and test automation as the fundamental quality activities during code development. To implement Continuous Software Engineering (CSE), it is vital to ensure that unit-testing activities are an integral and well-defined part of a continuous process. This paper focuses on the initial role of actual testing, viewing unit testing as a quality indicator during the development life cycle. We review the definition of unit testing in the CSE world and describe a qualitative study in which we examined the implementation of unit testing in three software companies that recently migrated to a CSE methodology. The results of the qualitative study corroborate our argument that under the continuous approach, quality-based development practices such as unit testing are of increasing importance, yet lack a common set of measurements and KPIs. A possible explanation for this may be the role given to continuous practices, and unit testing in particular, in the software engineering curriculum.
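As a reminder of the practice under discussion, a unit test in the common pytest style isolates one unit and asserts on its observable behavior; the function under test here is purely illustrative:

```python
def normalize(s: str) -> str:
    """Hypothetical unit under test: trim whitespace and lowercase an identifier."""
    return s.strip().lower()

# Each test exercises one behavior of the unit, so a failure pinpoints
# the broken behavior -- the property that makes unit tests useful as a
# quality indicator in a continuous pipeline.
def test_strips_and_lowercases():
    assert normalize("  ABC ") == "abc"

def test_empty_is_preserved():
    assert normalize("") == ""
```

In a CSE pipeline such tests run on every commit, turning pass rates and coverage into candidate quality measurements of the kind the study finds lacking a common standard.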


2021 ◽  
Vol 22 (2) ◽  
Author(s):  
Vinod Prasad

A fundamental problem in computational biology is dealing with circular patterns. The problem consists of finding substrings of at least a certain length of a pattern and its rotations in the database. In this paper, a novel method is presented to deal with circular patterns. The problem is solved in two incremental steps. First, an algorithm is provided that reports all substrings of a given linear pattern in an online text. Next, without losing efficiency, the algorithm is extended to process all circular rotations of the pattern. For a given pattern P of size M and a text T of size N, the algorithm reports all locations in the text where a substring of Pc is found, where Pc is one of the rotations of P. For an alphabet of size σ, using O(M) space, the desired goals are achieved in an average O(MN/σ) time, which is O(N) for all patterns of length M ≤ σ. Traditional string processing algorithms make use of advanced data structures such as suffix trees and automata. We show that basic data structures such as arrays can be used in text processing algorithms without compromising efficiency.

