Radio Electronics, Computer Science, Control
Latest Publications


TOTAL DOCUMENTS: 739 (FIVE YEARS: 237)
H-INDEX: 4 (FIVE YEARS: 1)

Published By Zaporizhzhia National Technical University

2313-688X, 1607-3274

Author(s): A. A. Stenin, V. P. Pasko, M. A. Soldatova, I. G. Drozdovich

Context. The article proposes a latent-semantic technology for extracting information from Internet resources, which allows processing information in natural language, as well as a multi-agent search algorithm based on it. The relevance of this approach to the search for subject-oriented information is determined by the fact that direct lexical comparison of queries with document indexes currently does not fully satisfy the developer. The object of the study is a multi-agent latent-semantic algorithm for searching for subject-oriented information. Objective. The objective of the work is to increase the efficiency of forming a knowledge model that is adequate for a given subject area. Method. A latent-semantic technology based on the weighted descriptor method developed by the authors is proposed. The main difference from existing methods is that the analysis of words occurring in the text, both by frequency and with account for semantics, is carried out by selecting the appropriate descriptors, which improves the quality of the information found. Results. The developed latent-semantic technology of information search was tested in the task of constructing a knowledge model of automated decision support systems for operational and dispatching control of urban engineering networks. The modeling of the search for subject-oriented information in this subject area showed the effectiveness of the developed approach. Conclusions. The improved efficiency of search and semantic content of subject-oriented information in the knowledge model of this subject area is achieved by using the weighted descriptor method based on Zipf's laws. The prospects for further research are to build evolutionary models of knowledge and improve the quality of updated information.
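
As an illustration of the weighted-descriptor idea, the following minimal Python sketch filters terms by a Zipf-style frequency cut and weights descriptors by the frequency of their synonym terms; the thresholds, weighting formula, and descriptor dictionary are invented for this example and do not reproduce the authors' exact method.

```python
# A minimal sketch of descriptor weighting with a Zipf-based frequency filter.
import math
from collections import Counter

def zipf_filter(tokens, head=0.05, tail=2):
    """Drop the most frequent (Zipf head) and rarest (tail) terms."""
    ranked = Counter(tokens).most_common()
    cut = max(1, int(len(ranked) * head))
    return {t for t, c in ranked[cut:] if c >= tail}

def descriptor_weights(doc_tokens, descriptors):
    """Weight each descriptor by the share of its terms among kept tokens."""
    kept = zipf_filter(doc_tokens)
    counts = Counter(t for t in doc_tokens if t in kept)
    total = sum(counts.values()) or 1
    return {d: sum(counts[t] for t in terms) / total
            for d, terms in descriptors.items()}

def rank(query_w, doc_w):
    """Cosine similarity between query and document descriptor vectors."""
    keys = set(query_w) | set(doc_w)
    dot = sum(query_w.get(k, 0) * doc_w.get(k, 0) for k in keys)
    nq = math.sqrt(sum(v * v for v in query_w.values()))
    nd = math.sqrt(sum(v * v for v in doc_w.values()))
    return dot / (nq * nd) if nq and nd else 0.0
```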


Author(s): A. I. Kosolap, T. M. Dubovik

Context. In this paper, we consider the well-known university scheduling problem. Such tasks are solved several times a year in every educational institution. The problem of constructing an optimal schedule remains open despite numerous studies in this area. This is due to the complexity of the corresponding optimization problem, in particular its significant dimension, which complicates its numerical solution with existing optimization methods. Scheduling models also need improvement. Thus, schedule optimization is a complex computational problem and requires the development of new methods for solving it. Objective. Improvement of optimization models for timetabling at the university and the use of new effective methods to solve them. Method. We use the exact quadratic regularization method to solve timetabling optimization problems. Exact quadratic regularization transforms complex optimization models with Boolean variables into the problem of maximizing a vector norm on a convex set. We use the efficient direct dual interior point method and the dichotomy method to solve this problem. This method has shown significantly better results in solving many complex multimodal problems, as confirmed by many comparative computational experiments. The exact quadratic regularization method is even more effective in solving timetabling problems. This optimization method is used for the first time for this class of problems, so it required the development of adequate algorithmic support. Results. We propose a new, simpler timetabling optimization model that can easily be implemented in Excel with OpenSolver, RoskSolver, and other add-ins. We give a small example of building a schedule and describe step-by-step instructions for obtaining the optimal solution. Conclusions. An efficient new technology has been developed for university timetabling that is simple to implement and does not require the development of special software. The efficiency of the technology is ensured by the use of the new method of exact quadratic regularization.
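
For readers who want to reproduce the base model, the sketch below sets up a tiny Boolean timetabling program of the kind the exact quadratic regularization method transforms into a norm-maximization problem. The data, the preference weights, and the use of the PuLP package are assumptions for illustration; this is not the paper's software.

```python
# A toy Boolean timetabling model: assign lessons to timeslots subject to
# "each lesson once" and "no teacher/group clash" constraints.
import pulp

lessons = ["math_g1", "phys_g1", "math_g2"]          # (subject, group) pairs
slots = ["mon1", "mon2"]
teacher = {"math_g1": "T1", "phys_g1": "T2", "math_g2": "T1"}
group = {"math_g1": "G1", "phys_g1": "G1", "math_g2": "G2"}
pref = {(l, s): 1 for l in lessons for s in slots}   # invented preferences

prob = pulp.LpProblem("timetable", pulp.LpMaximize)
x = pulp.LpVariable.dicts("x", (lessons, slots), cat="Binary")

prob += pulp.lpSum(pref[l, s] * x[l][s] for l in lessons for s in slots)
for l in lessons:                                    # each lesson held once
    prob += pulp.lpSum(x[l][s] for s in slots) == 1
for s in slots:                                      # no teacher/group clash
    for t in set(teacher.values()):
        prob += pulp.lpSum(x[l][s] for l in lessons if teacher[l] == t) <= 1
    for g in set(group.values()):
        prob += pulp.lpSum(x[l][s] for l in lessons if group[l] == g) <= 1

prob.solve()
print([(l, s) for l in lessons for s in slots if x[l][s].varValue == 1])
```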


Author(s): P. S. Nosov, I. S. Popovych, S. M. Zinchenko, V. M. Kobets, A. F. Safonova, ...

Context. The article proposes an approach for automated identification of the navigator's motivational model during the control of water transport. Algorithms are proposed for extracting data from the man-machine interaction of the navigator with the electronic control systems of the vessel while performing navigation operations of increased complexity. Objective. The purpose of the research is to apply formal and algorithmic approaches to extracting data on the motivational model of the navigator in order to prevent accidents in water transport. Method. Identification of the determinants of navigators' mental activity by means of the visual concepts of geometric group theory is proposed. This approach provides a visual, systematic-logical combination of diagnostic methods aimed at determining navigators' motivational centers and the processes of professional activity such as maneuver performance. The key identification indicator is the navigator-activity parameter "rpm_port", which affects the vessel's speed and serves as a marker of intensified physiological activity of the navigator. Such an approach is beneficial for identifying time phases during maneuvering, pointing explicitly to the intensification of the navigator's physiological and motivational state. Its correctness was confirmed by results obtained with Ward's dendrogram, several statistical methods, and applied software. The obtained research results support the prediction of the navigator's motivational states in critical situations. Results. To validate the proposed formal-algorithmic approach, an experiment was carried out using the navigation simulator Navi Trainer 5000. Automated analysis of the experimental data made it possible to form a motivational map of the navigator and determine the decision-making model affecting the processes of vessel control in difficult situations. Conclusions. The proposed research approaches made it possible to automate the extraction of data indicating the navigator's decision-making principles. The effectiveness of the proposed approach was substantiated by the results of automated processing of the experimental data and the constructed tree-like decision-making spaces.
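
The clustering step mentioned above can be sketched as follows: per-window features of a simulated "rpm_port" channel are grouped with Ward's method via SciPy. The feature set, window size, and simulated data are assumptions for illustration, not the paper's pipeline.

```python
# Grouping maneuver phases by features of the rpm_port channel (Ward linkage).
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
rpm_port = rng.normal(60, 5, 600)            # simulated rpm_port time series
rpm_port[300:] += 25                         # intensified phase of the maneuver

win = 50
windows = rpm_port.reshape(-1, win)          # non-overlapping windows
feats = np.column_stack([windows.mean(axis=1),              # level
                         windows.std(axis=1),               # variability
                         np.abs(np.diff(windows, axis=1)).mean(axis=1)])

Z = linkage(feats, method="ward")            # Ward's dendrogram
phases = fcluster(Z, t=2, criterion="maxclust")
print(phases)                                # time-phase label per window
```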


Author(s): V. V. Moskalenko, M. O. Zaretsky, A. S. Moskalenko, A. O. Panych, V. V. Lysyuk

Context. A model and training method for observational context classification in CCTV sewer inspection video frames was developed and researched. The object of research is the process of detection of temporal-spatial context during CCTV sewer inspections. The subjects of the research are the machine learning model and training method for classification analysis of CCTV video sequences under the constraint of a limited and imbalanced training dataset. Objective. The stated research goal is to develop an efficient context classifier model and training algorithm for CCTV sewer inspection video frames under the constraint of a limited and imbalanced labeled training set. Methods. A four-stage training algorithm for the classifier is proposed. The first stage involves training with a soft triplet loss and a regularization component which penalizes the rounding error of the network's binary output code. The next stage determines the binary code for each class according to the principles of error-correcting output codes, accounting for intra- and inter-class relationships. The resulting reference vector for each class is then used as a sample label for further training with a joint binary cross-entropy loss. The last machine learning stage optimizes the decision-rule parameters according to an information criterion, determining the boundaries of deviation of the binary representation of observations for each class from the corresponding reference vector. A 2D convolutional frame feature extractor combined with a temporal network for inter-frame dependency analysis is considered. Variants with a 1D dilated regular convolutional network, a 1D dilated causal convolutional network, an LSTM network, and a GRU network are considered. Model efficiency is compared on the basis of the micro-averaged F1 score calculated on the test dataset. Results. Results obtained on the dataset provided by Ace Pipe Cleaning, Inc. confirm the suitability of the model and method for practical use; the resulting accuracy equals 92%. Comparison of the training outcome of the proposed method against conventional methods indicated a 4% advantage in micro-averaged F1 score. Further analysis of the confusion matrix showed that the most significant increase in accuracy over conventional methods is achieved for complex classes which combine both camera orientation and sewer pipe construction features. Conclusions. The scientific novelty of the work lies in the new models and methods of classification analysis of temporal-spatial context when automating CCTV sewer inspections under imbalanced and limited training dataset conditions. Training results obtained with the proposed method were compared with those of the conventional method; the proposed method showed a 4% advantage in micro-averaged F1 score. It was empirically shown that the regular convolutional temporal network architecture is the most efficient at utilizing inter-frame dependencies. The resulting accuracy is suitable for practical use, as additional error correction can be made using the odometer data.
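
The third training stage (fitting the network's binary output code to per-class reference vectors) might look like the following PyTorch sketch. The code length, the random reference codes, and the toy backbone are placeholders, and the remaining stages (triplet pretraining, code assignment, decision-rule tuning) are omitted.

```python
# Joint binary cross-entropy training against per-class reference codes.
import torch
import torch.nn as nn

n_bits, n_classes = 16, 4
# Reference codes per class (assumed random here; the paper derives them via
# error-correcting output codes after triplet pretraining).
codes = torch.randint(0, 2, (n_classes, n_bits)).float()

backbone = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, n_bits))
loss_fn = nn.BCEWithLogitsLoss()               # joint binary cross-entropy
opt = torch.optim.Adam(backbone.parameters(), lr=1e-3)

features = torch.randn(32, 128)                # stand-in frame features
labels = torch.randint(0, n_classes, (32,))

logits = backbone(features)
loss = loss_fn(logits, codes[labels])          # match each sample's class code
loss.backward()
opt.step()

# Inference: pick the class whose reference code is nearest in Hamming distance.
bits = (torch.sigmoid(logits) > 0.5).float()
pred = (bits.unsqueeze(1) != codes.unsqueeze(0)).sum(-1).argmin(-1)
```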


Author(s): L. V. Sukhostat

Context. The problem of detecting anomalies in signals of cyber-physical systems based on spectrogram and scalogram images is considered. The object of the research is complex industrial equipment with heterogeneous sensory systems of different nature. Objective. The goal of the work is the development of a method for signal anomaly detection based on transfer learning with the extreme gradient boosting algorithm. Method. An approach based on transfer learning and the extreme gradient boosting algorithm, developed for detecting anomalies in acoustic signals of cyber-physical systems, is proposed. Little research has been done in this area, and therefore various pre-trained deep neural model architectures were studied to improve anomaly detection. Transfer learning uses weights from a deep neural model pre-trained on a large dataset and can be applied to a small dataset to provide convergence without overfitting. The classic approach to this problem usually involves signal processing techniques that extract valuable information from sensor data. This paper performs the anomaly detection task using a deep learning architecture that works with acoustic signals preprocessed into spectrograms and scalograms. The SPOCU activation function was considered to improve the accuracy of the proposed approach. The extreme gradient boosting algorithm was used because it offers high performance and requires few computational resources during the training phase. This algorithm can significantly improve the detection of anomalies in industrial equipment signals. Results. The developed approach is implemented in software and evaluated on the anomaly detection task for acoustic signals of cyber-physical systems on the MIMII dataset. Conclusions. The conducted experiments confirmed the efficiency of the proposed approach and allow us to recommend it for practical use in diagnosing the state of industrial equipment. Prospects for further research may lie in the application of ensemble approaches based on transfer learning to various real datasets to improve the performance and fault tolerance of cyber-physical systems.
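
A hedged sketch of such a pipeline is given below: a pretrained CNN serves as a frozen feature extractor over spectrogram images, and extreme gradient boosting classifies the extracted features. The backbone choice, input shapes, and random stand-in data are assumptions, and the SPOCU activation and MIMII-specific preprocessing are omitted.

```python
# Transfer-learning features + extreme gradient boosting for anomaly detection.
import numpy as np
import torch
import torchvision.models as models
from xgboost import XGBClassifier

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()            # expose 512-d features
backbone.eval()

def extract(spectrograms):
    """spectrograms: (N, 3, 224, 224) tensor of scaled spectrogram images."""
    with torch.no_grad():
        return backbone(spectrograms).numpy()

X = extract(torch.randn(64, 3, 224, 224))    # stand-in spectrogram batch
y = np.random.randint(0, 2, 64)              # 0 = normal, 1 = anomaly

clf = XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1)
clf.fit(X, y)
print(clf.predict(X[:4]))
```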


Author(s): V. Vysotska

Context. Timely and correct analysis of the process of visiting Internet resources, which drives the overall conversion of an e-business, is fundamental for successful website management. Accurate analysis of the traffic that brings both successful and unsuccessful conversions makes it possible to identify what affects conversion metrics and criteria, and to measure the effectiveness of changes made to the site to increase traffic conversion. To solve these problems and achieve the relevant goals of e-commerce, it is necessary to collect information on the activities of system users on the website and determine specific performance indicators of the website in order to further improve the e-business strategy. Thus, it is necessary to develop and implement an analytical method of text content support for e-commerce Internet resources based on the analysis of key performance indicators of the website, paying particular attention to determining the set of relevant keywords used by regular users that led to an increase in e-business conversions. Objective. The objective of the study is to develop a technology for promoting e-commerce Internet resources based on the results of Web analytics of key page indicators such as KPI and KSI, through forming a relevant set of keywords as feedback on the activity of the regular audience. Method. An analytical method for promoting Internet resources based on the analysis of key performance indicators of the website is proposed, built on three main algorithms: an algorithm for identifying problem areas of the site structure for further optimization, an algorithm for optimizing search engine marketing (SEM) activities, and an algorithm for site promotion and calculation of its efficiency. General recommendations for the design of information resource processing systems have been developed; they differ from existing ones by the presence of additional modules that significantly affect website promotion on the Internet and thereby further the success of e-commerce or improve the values of these indicators. Among them are the online shopping module, the marketing module, the copywriter module, and the webmaster module, each with its own KPIs. This allows information resource processing to be implemented effectively at the level of system developers (reducing resources and time for development, improving the quality of information processing systems). Results. Based on the results of Web analytics, the paper develops and describes in detail the parameters and criteria for assessing the level of success of an e-business. Software tools for monitoring the textual content of Internet resources based on the analysis of key performance indicators of the website have also been developed. For a detailed analysis of the functioning and promotion of Internet e-commerce systems such as an Internet newspaper and an Internet magazine, 12 different methods have been developed and implemented, each supporting a different number of stages of the content life cycle. A computer experiment analyzing the key performance indicators of the website was conducted. The service for keeping statistics of visits to the Web resource makes it possible to estimate the increase in sales of textual content in direct proportion to the rise in the number of visits to the Web resource, the number of regular users, and the prospects of marketing activities. Conclusions. It was found that the presence of the appropriate modules in information resource processing systems increases sales of textual content to regular users by 9%, increases the active involvement of unique visitors and potential users and expands the target and regional audience by 11%, increases pages viewed by 12%, and resources viewed by 7%.
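
For illustration, the kind of KPIs the method analyses (conversion rate, share of returning visitors, pages per visit) can be computed from a raw visit log as in the toy sketch below; the log layout and field names are invented.

```python
# Toy KPI computation from a visit log: (user_id, pages, purchased, returning).
visits = [
    ("u1", 5, True, True), ("u2", 2, False, False),
    ("u3", 7, True, True), ("u4", 1, False, False),
]

total = len(visits)
conversion = sum(1 for _, _, bought, _ in visits if bought) / total
returning = sum(1 for *_, ret in visits if ret) / total
pages_per_visit = sum(p for _, p, _, _ in visits) / total

print(f"conversion={conversion:.0%} returning={returning:.0%} "
      f"pages/visit={pages_per_visit:.1f}")
```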


Author(s): S. D. Leoshchenko, A. O. Oliinyk, S. A. Subbotin, Ye. O. Gofman, O. V. Korniienko

Context. The problem of structural modification of pre-synthesized models based on artificial neural networks to ensure interpretability when working with big data is considered. The object of the study is the process of structural modification of artificial neural networks using adaptive mechanisms. Objective. The objective of the work is to develop a method for the structural modification of neural networks to increase their speed and reduce resource consumption when processing big data. Method. A method of structural adjustment of neural networks based on adaptive mechanisms borrowed from neuroevolutionary synthesis methods is proposed. First, the method uses a system of indicators to evaluate the existing structure of an artificial neural network; the assessment is based on the structural features of the neuromodel. The obtained indicator estimates are then compared with criterion values to choose the type of structural change. Mutational changes from the group of neuroevolutionary methods for modifying the topology and weights of a neural network are used as the variants of structural change. The method reduces resource consumption during the operation of neuromodels by accelerating the processing of big data, which expands the field of practical application of artificial neural networks. Results. The developed method is implemented and investigated on the example of a recurrent artificial network of the long short-term memory type solving a classification problem. The use of the developed method sped up the neuromodel on a test sample by 25.05%, depending on the computing resources used. Conclusions. The conducted experiments confirmed the operability of the proposed mathematical software and allow us to recommend it for practical use in the structural adjustment of pre-synthesized neuromodels for further solving problems of diagnosis, forecasting, evaluation, and pattern recognition using big data. Prospects for further research may consist in finer tuning of the indicator system to identify the connections encoding noisy data, in order to further improve the accuracy of models based on neural networks.
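
The evaluate-then-modify loop might be sketched as follows, with magnitude pruning standing in for the paper's neuroevolutionary mutation operators; the indicator, threshold, and pruning amount are invented for illustration.

```python
# Score an LSTM's structure with a simple indicator; if the criterion fires,
# apply a structural change (here: prune the weakest connections).
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.LSTM(input_size=32, hidden_size=64, batch_first=True)

def weak_weight_share(module, eps=1e-2):
    """Indicator: fraction of connections with near-zero magnitude."""
    w = torch.cat([p.detach().abs().flatten() for p in module.parameters()])
    return (w < eps).float().mean().item()

indicator = weak_weight_share(model)
print(f"weak-weight share: {indicator:.2%}")

if indicator > 0.05:                          # criterion for structural change
    names = [n for n, _ in model.named_parameters() if n.startswith("weight")]
    for n in names:
        prune.l1_unstructured(model, name=n, amount=0.2)  # drop weakest 20%
        prune.remove(model, n)                # make the change permanent
```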


Author(s): I. F. Povkhan, O. V. Mitsa, O. Y. Mulesa, O. O. Melnyk

Context. In this paper, the problem of approximating a discrete data array by a set of elementary geometric algorithms, and of representing the recognition model in the form of an algorithmic classification tree, has been solved. The object of the present study is the concept of a classification tree in the form of an algorithm tree. The subjects of this study are the relevant models, methods, algorithms, and schemes for constructing different classification trees. Objective. The goal of this work is to create a simple and efficient method and algorithmic scheme for building tree-like recognition and classification models on the basis of algorithm trees for large-volume discrete training selections, characterized by a modular structure of independent recognition algorithms assessed according to the initial training selection data, for a wide class of applied tasks. Method. A scheme of classification tree (algorithm tree) synthesis is suggested, based on approximating the data array by a set of elementary geometric algorithms, which constructs a tree-like structure (the ACT model) for a preset initial training selection of arbitrary size. The structure consists of a set of autonomous classification/recognition algorithms assessed at each step of the ACT construction against the initial selection. A method of algorithmic classification tree construction has been developed whose basic idea is step-by-step approximation of an initial selection of arbitrary volume and structure by a set of elementary geometric classification algorithms. When forming the current algorithm tree vertex, node, and generalized attribute, this method selects the most effective, high-quality elementary classification algorithms from the initial set and fully constructs only those paths in the ACT structure where most classification errors occur. The developed scheme for synthesizing the resulting classification tree and the ACT model allows one to reduce the tree size and complexity considerably. The structural complexity of the ACT construction is assessed on the basis of the number of transitions, vertices, and tiers of the ACT structure, which improves the quality of its further analysis, provides an efficient decomposition mechanism, and allows the ACT structure to be built under fixed sets of limitations. The algorithm tree synthesis method allows one to construct tree-like recognition models of different types with various sets of elementary classifiers at a preset accuracy for a wide class of artificial intelligence problems. Results. The method of discrete training selection approximation by a set of elementary geometric algorithms developed and presented in this work has been implemented in software and was studied and compared with logical tree classification based on elementary attribute selection in solving a real geological data recognition problem. Conclusions. Both the general analysis and the experiments carried out in this work confirmed the capability of the developed mechanism for constructing algorithm tree structures and demonstrate the possibility of its promising use for solving a wide spectrum of applied recognition and classification problems. The outlook for further studies and approbations may be related to creating other types of algorithmic classification tree methods with other initial sets of elementary classifiers, optimizing their program realizations, and experimentally studying this method on a wider circle of applied problems.
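
A minimal sketch of the construction idea is given below: at each node the best elementary classifier from a fixed set is chosen, and the tree is extended only along the samples it misclassifies. The classifier set, stopping rule, and data are assumptions; the paper's vertex/tier evaluation scheme is richer than this.

```python
# Greedy algorithm-tree construction: pick the best elementary classifier at
# each node, then recurse only on its misclassified samples.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

def build_act(X, y, depth=0, max_depth=3, min_samples=5):
    if depth == max_depth or len(y) < min_samples or len(set(y)) < 2:
        return {"leaf": int(np.bincount(y).argmax())}
    candidates = [LogisticRegression(max_iter=200),
                  DecisionTreeClassifier(max_depth=1),
                  GaussianNB()]
    best = max(candidates, key=lambda c: c.fit(X, y).score(X, y))
    wrong = best.predict(X) != y
    node = {"clf": best}
    if wrong.any():                       # extend the tree only along errors
        node["next"] = build_act(X[wrong], y[wrong], depth + 1,
                                 max_depth, min_samples)
    return node

X = np.random.randn(200, 4)
y = (X[:, 0] + X[:, 1] ** 2 > 0.5).astype(int)
tree = build_act(X, y)
```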


Author(s): A. Hahanova, V. Hahanov, S. Chumachenko, E. Litvinova, D. Rakhlis

Context. It is known that data structures are decisive for the creation of efficient parallel algorithms and high-performance computing devices. Therefore, the development of mathematically perfect and technologically simple data structures takes about 80 percent of the design time, while about 20 percent of time and material resources are spent on algorithms and their hardware-software coding. This leads to the search for such data-structure primitives as would significantly simplify the parallel high-performance algorithms working on them. Models and methods for testing and simulation of digital systems are proposed which carry certain advantages of quantum computing, in terms of implementing vector qubit data structures, into the technology of classical computational processes. Objective. The goal of the work is the development of an innovative technology for qubit-vector synthesis and deductive analysis of verification tests based on vector data structures that greatly simplify the algorithms, which can be embedded as BIST components in digital systems on chips. Method. Deductive fault simulation is used to obtain analytical expressions focused on transporting fault lists through a functional or logical element based on the XOR operation, which serves as a measure of similarity-difference between a test, a function, and faults, each specified in the same way in one of the formats: a table, a graph, or an equation. A binary vector is proposed as the most technologically advanced primitive of data structures for specifying logical functionality for the purpose of parallel synthesis and analysis of digital systems. The parallelism of solving combinatorial problems is a physical property of quantum computing; in classical computing, for parallel simulation and fault diagnosis, it is provided by unitary-coded data structures at the cost of extra memory. Results. 1) A method of analytical synthesis of deductive logic for functional elements at the gate level and register transfer level has been developed. 2) A deductive processor for fault simulation based on transporting input fault lists or fault vectors to the external outputs of digital circuits is proposed. 3) The qubit-vector form of logic specification and methods of qubit synthesis of deductive equations for fault simulation are described. 4) A qubit-vector method for test synthesis using derivatives calculated over the vector coverage of the logic has been developed. 5) Models and methods are verified on test examples in a software implementation of the structures and algorithms. Conclusions. The scientific novelty lies in the new paradigm of a technology for the synthesis of deductive RTL logic based on a metric test equation. A vector form of structure description is introduced, which makes it possible to apply well-known technologies for the synthesis and analysis of logic circuit tests to effectively solve the problems of testing graph structures and state machine models of digital devices. The practical significance is reflected in the examples of analytical synthesis of deductive logic for functional elements at the gate level and register transfer level. A deductive processor for fault simulation is proposed, focused on implementation as a BIST tool for online testing, simulation, and fault diagnosis in digital systems on chips. A qubit-vector form of digital system description is proposed, which surpasses existing methods of computing device development in terms of manufacturability, compactness, speed, and quality. A software application has been developed that implements the main testing, simulation, and diagnosis services; it is used in the educational process to study the advantages of qubit-vector data structures and algorithms. The computational complexity of the synthesis processes and of the deductive formulas for logic, together with their usage in fault simulation, is given.
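
One deductive-simulation step can be sketched as follows for a 2-input AND gate, with fault lists as Python sets and set operations selecting the faults that flip the output; the netlist machinery and the paper's qubit-vector encoding are omitted.

```python
# Classical deductive fault simulation through a 2-input AND gate.
def and_gate(a, b, La, Lb, out_line):
    """a, b: logic values; La, Lb: fault lists; returns (value, fault list)."""
    out = a & b
    if a == 1 and b == 1:
        Lout = La | Lb                 # any listed fault flips a 1 to 0
    elif a == 0 and b == 0:
        Lout = La & Lb                 # both zeros must flip to raise the output
    elif a == 0:                       # a is controlling: faults on a, not on b
        Lout = La - Lb
    else:                              # b is controlling
        Lout = Lb - La
    # add the output line's own stuck-at fault (stuck at the opposite value)
    return out, Lout | {(out_line, 1 - out)}

val, faults = and_gate(1, 0, {("n1", 0)}, {("n2", 1)}, "n3")
print(val, faults)   # 0, faults that would make line n3 read 1
```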


Author(s): I. Prots'ko, N. Kryvinska, O. Gryshchuk

Context. The problem of fast calculation of modular exponentiation requires the development of effective algorithmic methods using the latest information technologies. Fast computation of modular exponentiation is essential for efficient computations in theoretical-numerical transforms, for providing high cryptographic strength of information data, and in many other applications. Objective. The runtime analysis of software functions for computing modular exponentiation in the developed programs, based on the parallel organization of computation using multithreading. Method. Modular exponentiation is implemented using a 2^k-ary sliding window algorithm, where k is chosen according to the size of the exponent. Parallelization of the computation consists in calculating the remainders of numbers raised to the power 2^i modulo the modulus, followed by their parallel multiplication modulo the modulus. Results. The runtimes of three variants of functions for computing modular exponentiation are compared. The parallel multithreaded algorithm computes modular exponentiation faster than the modular exponentiation function of the MPIR library for exponents larger than 1K binary digits. The MPIR library, with an integer data type of 256 to 2048 binary digits, is used to develop the multithreaded algorithm for computing modular exponentiation. Conclusions. The developed software implementation of modular exponentiation on general-purpose computer systems has been considered and analysed. One way to speed up the computation of modular exponentiation is to develop algorithms that can use multithreading technology on multi-core microprocessors. The multithreaded software implementation of modular exponentiation shows an improvement in computation time, compared with the modular exponentiation function of the MPIR library, as the exponent grows beyond 1024 binary digits.
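
The serial core of the method, 2^k-ary sliding-window modular exponentiation, can be sketched as below; the window-size heuristic and the multithreaded multiplication layer described in the paper are not reproduced.

```python
# Left-to-right 2^k-ary sliding-window modular exponentiation.
def modexp_sliding(base, exp, mod, k=4):
    if exp == 0:
        return 1 % mod
    # Precompute the odd powers base^1, base^3, ..., base^(2^k - 1) mod mod.
    b2 = base * base % mod
    odd = {1: base % mod}
    for i in range(3, 1 << k, 2):
        odd[i] = odd[i - 2] * b2 % mod
    bits = bin(exp)[2:]
    result, i = 1, 0
    while i < len(bits):
        if bits[i] == "0":
            result = result * result % mod    # square on a zero bit
            i += 1
        else:
            # Take the longest window of at most k bits ending in a 1.
            j = min(i + k, len(bits))
            while bits[j - 1] == "0":
                j -= 1
            w = int(bits[i:j], 2)             # odd window value, w < 2^k
            for _ in range(j - i):
                result = result * result % mod
            result = result * odd[w] % mod
            i = j
    return result

assert modexp_sliding(7, 123456789, 10**9 + 7) == pow(7, 123456789, 10**9 + 7)
```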

