PROBLEMS IN PROGRAMMING
Latest Publications


TOTAL DOCUMENTS

252
(FIVE YEARS 109)

H-INDEX

3
(FIVE YEARS 2)

Published by Ukrinformnauka Co. Ltd.

ISSN: 1727-4907

2021 ◽  
pp. 027-039
Author(s):  
S.Ya. Svistunov ◽  
◽  
P.I. Perkonos ◽  
S.V. Subotin ◽  
Ya.M. Tverdochlib ◽  
...  

Modern science is characterized by a significant increase in the volume of information and therefore requires new approaches to computational methods and to information processing. The article considers approaches to creating conditions and tools, based on cloud technologies, for more effective cooperation of research teams working on similar scientific problems. It analyzes the stages of development of the European Open Science Cloud and presents a strategy for building the National Open Science Cloud. Finally, it presents the main results of the development of a common information resource for scientific research at the National Academy of Sciences of Ukraine, which can be considered a prototype of the National Open Science Cloud that integrates into the European Open Science Cloud.


2021 ◽  
pp. 040-071
Author(s):  
V.A. Reznichenko ◽  

The article provides an overview of database research and development from their appearance in the 1960s to the present. The following stages are distinguished: emergence and formation, rapid development, the era of relational databases, extended relational databases, post-relational databases, and big data. For the formation stage, the systems IDS, IMS, Total and Adabas are described. For the rapid development stage, the ANSI/X3/SPARC database architecture, the CODASYL proposals, and the concepts and languages of conceptual modeling are highlighted. For the era of relational databases, the results of E. Codd's scientific activity, the theory of dependencies and normal forms, query languages, experimental research and development, optimization and standardization, and transaction management are covered. The extended relational databases stage is devoted to temporal, spatial, deductive, active, object, distributed and statistical databases, array databases, database machines and data warehouses. At the next stage, the problems of post-relational databases are discussed, namely NoSQL, NewSQL and ontological databases. The sixth stage covers the causes of emergence, characteristic properties, classification, principles of operation, methods and technologies of big data. Finally, the last section provides a brief overview of database research and development in the Soviet Union.


2021 ◽  
pp. 016-026
Author(s):  
O.V. Zakharova ◽  

Establishing the semantic similarity of information is an integral part of solving any information retrieval task, including tasks related to big data processing, discovery of semantic web services, categorization and classification of information, etc. Special functions that determine quantitative indicators of the degree of semantic similarity of information allow ranking the found information by its semantic proximity to the purpose or search request/template. Forming such measures should take into account many aspects, from the meanings of the matched concepts to the specifics of the business task in which they are used. Usually, to construct such similarity functions, semantic approaches are combined with structural ones, which provide syntactic comparison of concept descriptions. This allows concept descriptions to be more detailed, and the impact of syntactic matching can be significantly reduced by using more expressive description logics to represent information and by moving the focus to semantic properties. Today, DL ontologies are the most developed tools for representing semantics, and the reasoning mechanisms of description logics (DL) provide the possibility of logical inference. Most of the measures presented in this paper are based on basic DLs that support only the intersection constructor, but the described approaches can be applied to any DL that provides basic reasoning services. This article analyzes existing approaches, models and measures based on description logics. A classification of the estimation methods both by the level at which similarity is defined and by the matching type is proposed. The main attention is paid to establishing the similarity between concepts (conceptual-level models). The task of establishing the similarity between instances, or between a concept and an instance, reduces to finding the most specific concept for the instance(s) and evaluating the similarity between the concepts. The term existential similarity is introduced. Examples of applying certain types of measures to evaluate the degree of semantic similarity of notions and/or knowledge are demonstrated on a geometry ontology.
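
Where the abstract relies on intersection-based similarity over basic DL concepts, a minimal sketch in Python may help fix ideas. It assumes concepts are plain conjunctions of primitive concepts and uses a Jaccard-style overlap measure; the geometry concepts below are illustrative, not the paper's ontology or its actual measures.

    # Concepts as sets of primitive concepts (C = P1 ⊓ P2 ⊓ ...); the names
    # and the Jaccard-style measure are assumptions for illustration.
    def concept_similarity(a: frozenset, b: frozenset) -> float:
        """Overlap between two conjunctive concept descriptions."""
        if not a and not b:
            return 1.0
        return len(a & b) / len(a | b)

    square    = frozenset({"polygon", "quadrilateral", "equilateral", "equiangular"})
    rectangle = frozenset({"polygon", "quadrilateral", "equiangular"})
    rhombus   = frozenset({"polygon", "quadrilateral", "equilateral"})

    print(concept_similarity(square, rectangle))   # 0.75
    print(concept_similarity(rectangle, rhombus))  # 0.5

    def most_specific_concept(instance_types: frozenset, ontology: dict) -> str:
        """Concept-instance similarity reduces to finding the most specific
        concept subsuming the instance, then comparing concepts."""
        fitting = {n: c for n, c in ontology.items() if c <= instance_types}
        return max(fitting, key=lambda n: len(fitting[n]))

    ontology = {"square": square, "rectangle": rectangle, "rhombus": rhombus}
    print(most_specific_concept(square, ontology))  # square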


2021 ◽  
pp. 003-015
Author(s):  
I.Z. Achour ◽  
◽  
A.Yu. Doroshenko ◽  
◽  

Despite the strengths of the neuroevolution of augmenting topologies (NEAT) method, such as its applicability in cases where the formula for a cost function and the topology of the neural network are difficult to determine, one of the main problems of such methods is slow convergence towards optimal results, especially in complex and challenging environments. This paper proposes a novel distributed implementation of the NEAT method which, given sufficient computational resources, allows the search for an optimal neural network configuration to be sped up drastically. Batch genome evaluation was implemented to optimize the performance of the proposed solution and to use computational resources fairly and evenly. Benchmarking of the proposed distributed implementation shows that the evaluation process for the generated neural networks yields a manifold increase in efficiency on the demonstrated task and computational environment.
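
Since the contribution is a distributed implementation with batch genome evaluation, a minimal Python sketch may illustrate the batching idea. The fitness function, batch size and process pool below are assumptions for illustration, not the paper's actual environment or implementation.

    # Split the population into batches and farm them out to worker processes,
    # so evaluation load is spread fairly and evenly across available cores.
    from multiprocessing import Pool
    import random

    def evaluate_genome(genome):
        """Stand-in fitness; a real NEAT setup would build the encoded
        network and run it in the target environment."""
        return sum(genome)

    def evaluate_batch(batch):
        return [evaluate_genome(g) for g in batch]

    def evaluate_population(population, workers=4, batch_size=8):
        batches = [population[i:i + batch_size]
                   for i in range(0, len(population), batch_size)]
        with Pool(workers) as pool:
            results = pool.map(evaluate_batch, batches)
        return [fitness for batch in results for fitness in batch]

    if __name__ == "__main__":
        population = [[random.random() for _ in range(10)] for _ in range(64)]
        print(max(evaluate_population(population)))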


2021 ◽  
pp. 085-094
Author(s):  
A.A. Triantafillu ◽  
◽  
M.A. Mateshko ◽  
V.L. Shevchenko ◽  
I.P. Sinitsyn ◽  
...  

One of the needs of the music business is quick classification of a song's genre by means of widely available tools. This work focuses on improving the accuracy of song genre determination based on lyrics through the development of software that uses new features, namely the rhythm of the text and its morpho-syntactic structure. In the research, a Bayes classifier and logistic regression were used to classify song genres; a systematic approach and the principles of invention theory were used to summarize and analyze the results. New features were proposed in the paper to improve the accuracy of the classification, namely features indicating rhythm and parts of speech in the song.
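
As a rough illustration of augmenting lexical features with rhythm features, the following Python sketch combines a bag-of-words model with a crude line-length rhythm proxy in scikit-learn; part-of-speech ratios could be added the same way with a tagger. The feature definitions and toy data are assumptions, not the paper's actual features or corpus.

    # Combine lexical and rhythm features for genre classification.
    import numpy as np
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import FeatureUnion, Pipeline
    from sklearn.preprocessing import FunctionTransformer

    def rhythm_features(lyrics):
        """Crude rhythm proxy: mean words per line and its variance."""
        rows = []
        for text in lyrics:
            lengths = [len(l.split()) for l in text.splitlines() if l.strip()] or [0]
            rows.append([np.mean(lengths), np.var(lengths)])
        return np.array(rows)

    pipeline = Pipeline([
        ("features", FeatureUnion([
            ("bow", CountVectorizer(max_features=5000)),
            ("rhythm", FunctionTransformer(rhythm_features)),
        ])),
        ("clf", LogisticRegression(max_iter=1000)),
    ])

    songs = ["la la la\nla la la", "fire in the streets\nno retreat no surrender"]
    pipeline.fit(songs, ["pop", "rap"])
    print(pipeline.predict(["la la love\nla la light"]))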


2021 ◽  
pp. 034-041
Author(s):  
A.Y. Gladun ◽  
◽  
K.A. Khala ◽  

With the growing complexity of cybersecurity threats, it is becoming clear that one of the most important resources for combating cyberattacks is the processing of large amounts of data in the cyber environment. In order to process a huge amount of data and to make decisions, there is a need to automate the tasks of searching, selecting and interpreting Big Data to solve operational information security problems. Big data analytics, complemented by semantic technology, can improve cybersecurity, allowing large amounts of information in the cyber environment to be processed and interpreted. Semantic modeling methods are needed in Big Data analytics for the selection and combination of heterogeneous Big Data sources and for the recognition of network attack patterns and other cyber threats, which must occur quickly so that countermeasures can be taken. Therefore, to analyze Big Data metadata, the authors propose pre-processing metadata at the semantic level. As an analysis tool, it is proposed to create a thesaurus of the problem based on the domain ontology, which should provide a terminological basis for the integration of ontologies of different levels. To build the thesaurus, it is proposed to use the standards of open information resources, dictionaries and encyclopedias. The development of an ontology hierarchy formalizes the relationships between data elements that will later be used by machine learning and artificial intelligence algorithms to adapt to changes in the environment, which in turn will increase the efficiency of big data analytics for the cybersecurity domain.
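
To make the thesaurus idea concrete, a minimal sketch using SKOS concepts in rdflib is shown below. The namespace, concept names and hierarchy are illustrative assumptions, not the ontology developed by the authors.

    # Build a tiny problem thesaurus: concepts linked by broader/narrower
    # relations give analytics pipelines a shared terminological basis.
    from rdflib import Graph, Namespace, Literal
    from rdflib.namespace import RDF, SKOS

    CYBER = Namespace("http://example.org/cyber#")  # hypothetical namespace
    g = Graph()
    g.bind("skos", SKOS)

    def add_term(name, broader=None):
        term = CYBER[name]
        g.add((term, RDF.type, SKOS.Concept))
        g.add((term, SKOS.prefLabel, Literal(name)))
        if broader is not None:
            g.add((term, SKOS.broader, CYBER[broader]))

    add_term("network_attack")
    add_term("denial_of_service", broader="network_attack")
    add_term("phishing", broader="network_attack")

    # Normalize raw alert labels to thesaurus concepts before pattern
    # recognition runs over heterogeneous Big Data sources.
    for concept in g.subjects(SKOS.broader, CYBER["network_attack"]):
        print(concept)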


2021 ◽  
pp. 063-075
Author(s):  
M. Kosovets ◽  
◽  
L. Tovstenko ◽  
◽  

The problem of developing the architecture of modern radar systems using artificial intelligence technology is considered. The main difference is the use of a neural network in the form of a set of heterogeneous neuro-multimicroprocessor modules, which the operating system systematically reconfigures in real time while the problem is being solved. This architecture promotes the implementation of cognitive technologies that take into account the requirements for the purpose and the influence of external and internal factors. The concept of a resource in general, and of an abstract reliability resource in particular, is introduced, together with its role in the design of a neuro-multimicroprocessor with fault-tolerance properties. It is shown how, when the reliability resource is scarce, the operating system varies the ratio of performance to reliability of a fault-tolerant real-time neuro-multimicroprocessor at the system level, dynamically changing the architectural shape of the system through structural redundancy, fault-tolerant technologies and dependable computing.
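
As a back-of-the-envelope illustration of trading performance for reliability through structural redundancy, the following Python sketch uses the textbook parallel-redundancy formula; the module counts and per-module reliability are assumptions, not the paper's reliability-resource model.

    # With a fixed pool of processor modules, each extra redundant copy
    # raises survival probability but leaves fewer modules for computation.
    def system_reliability(module_reliability: float, copies: int) -> float:
        """Probability that at least one of `copies` redundant modules survives."""
        return 1.0 - (1.0 - module_reliability) ** copies

    TOTAL_MODULES = 16
    MODULE_RELIABILITY = 0.95  # assumed per-mission survival probability

    for copies in range(1, 5):
        workers = TOTAL_MODULES // copies  # modules left for computation
        r = system_reliability(MODULE_RELIABILITY, copies)
        print(f"redundancy x{copies}: ~{workers} compute modules, reliability {r:.6f}")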


2021 ◽  
pp. 054-062
Author(s):  
D.V. Rahozin ◽  
◽  
A.Yu. Doroshenko ◽  

Modern workloads, parallel or sequential, usually suffer from insufficient memory and computing performance. Common trends for improving workload performance include the use of complex functional units or coprocessors, which are able not only to provide accelerated computations but also to independently fetch data from memory, generating complex address patterns, with or without support for control flow operations. Such coprocessors are usually not supported by optimizing compilers and must be driven by hand through special application interfaces. On the other hand, memory bottlenecks may be avoided with proper use of processor prefetch capabilities, which load necessary data ahead of the actual time of use; prefetching, too, is handled automatically only in simple cases, so programmers usually have to do it by hand. As workloads rapidly migrate to embedded applications, the problem arises of how to utilize all hardware capabilities to speed up workloads with moderate effort. This requires precise analysis of memory access patterns at program run time and the marking of hot spots where the vast majority of memory accesses are issued. A precise memory access model can be analyzed via simulators such as Valgrind, which is capable of running really big workloads, for example neural network inference, in reasonable time. But simulators and hardware performance analyzers fail to attribute the full set of memory references and cache misses to particular modules, as this requires analysis of the program call graph. We extend the cache simulator of the Valgrind tool to account for memory accesses per software module and render a realistic distribution of hot spots in a program. Additionally, analysis of address sequences in the simulator allows array access patterns to be recovered and effective prefetching schemes to be proposed. Motivating samples are provided to illustrate the use of the Valgrind tool.
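
To illustrate how address-sequence analysis can recover an array access pattern and suggest a prefetching scheme, here is a minimal Python sketch over a recorded address trace. The trace format and the stride heuristic are assumptions for illustration; they are not the authors' Valgrind extension.

    # Detect a dominant stride in a memory address trace and, if the
    # pattern is regular, propose prefetching several iterations ahead.
    from collections import Counter

    def dominant_stride(addresses):
        deltas = [b - a for a, b in zip(addresses, addresses[1:])]
        if not deltas:
            return None
        stride, hits = Counter(deltas).most_common(1)[0]
        return stride if hits / len(deltas) > 0.8 else None

    def propose_prefetch(addresses, iterations_ahead=4):
        stride = dominant_stride(addresses)
        if stride is None:
            return "irregular pattern: no simple prefetch scheme"
        return f"stride {stride} bytes: prefetch address + {iterations_ahead * stride}"

    # e.g. a float64 array scanned sequentially: stride of 8 bytes
    trace = [0x1000 + 8 * i for i in range(100)]
    print(propose_prefetch(trace))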


2021 ◽  
pp. 076-084
Author(s):  
V.L. Shevchenko ◽  
◽  
Y.S. Lazorenko ◽  
O.M. Borovska ◽  
◽  
...  

As the amount of media content increases, there is a need for automated voicing of it with the most widely accessible built-in and mobile means. The article analyzes the factors influencing the formation of different intonations and mathematically describes how sound characteristics change in accordance with intonation. In the course of the study, the numerical analysis of sentences was improved using the moving average method for smoothing the audio recording, approximation lines for the approximate generalization of emotions as mathematical functions, and the Fourier transform for volume control. The obtained dependences make it possible to synthesize the necessary intonations according to the punctuation of a sentence, the presence of emotionally colored vocabulary, and the psycho-emotional mood of a speaker reading such a text. As a result of our study, software for the emotional voicing of texts was developed, which makes the perception of audio information easier, clearer and more comfortable, based on the use of the built-in processors of mobile devices.
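
Two of the signal-processing steps named above lend themselves to a short sketch: moving-average smoothing and volume control via the Fourier transform. The window size and gain below are illustrative assumptions, not the paper's values.

    # Smooth a sampled signal, then scale its loudness in the frequency domain.
    import numpy as np

    def moving_average(signal, window=5):
        kernel = np.ones(window) / window
        return np.convolve(signal, kernel, mode="same")

    def scale_volume(signal, gain=1.5):
        """A flat gain applied to the spectrum is equivalent to time-domain
        scaling, but the spectral form allows frequency-dependent shaping."""
        spectrum = np.fft.rfft(signal)
        return np.fft.irfft(spectrum * gain, n=len(signal))

    t = np.linspace(0, 1, 8000)
    tone = np.sin(2 * np.pi * 220 * t)           # 220 Hz test tone
    voiced = scale_volume(moving_average(tone))  # smoothed, then louder
    print(voiced[:5])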

