RADIOELECTRONIC AND COMPUTER SYSTEMS
Latest Publications


TOTAL DOCUMENTS

173
(FIVE YEARS 137)

H-INDEX

2
(FIVE YEARS 1)

Published By National Aerospace University - Kharkiv Aviation Institute

ISSN: 1814-4225

Author(s): Dmytro Chumachenko, Ievgen Meniailov, Andrii Hrimov, Vladislav Lopatka, Olha Moroz, et al.

Today's global COVID-19 pandemic has affected the spread of influenza. COVID-19 and influenza are respiratory infections and share several symptoms. They are, however, caused by different viruses; there are also some differences in the categories of people most at risk of severe forms of these diseases, and the strategies for their treatment differ as well. Mathematical modeling is an effective tool for controlling the epidemic process of influenza in specified territories. The results of modeling and the forecasts obtained with simulation models make it possible to develop timely, justified anti-epidemic measures that reduce influenza incidence. The study aims to develop a seasonal autoregressive integrated moving average (SARIMA) model for influenza epidemic process simulation and to investigate the experimental results of the simulation. The work targets the influenza epidemic process and its dynamics in the territory of Ukraine. The subjects of the research are methods and models of epidemic process simulation, including machine learning methods, in particular the SARIMA model. To achieve the aim of the research, we used forecasting methods and built a SARIMA model of the influenza epidemic process. Experiments with the developed model produced the predicted dynamics of the influenza epidemic process for 10 weeks. Such a forecast can be used by decision-makers to implement anti-epidemic and deterrent measures if the forecast exceeds the epidemic morbidity thresholds. Conclusions. The paper describes experimental research on applying the SARIMA model to influenza epidemic process simulation. The models were verified against influenza morbidity in the Kharkiv region (Ukraine) in the epidemic seasons 2017-18, 2018-19, 2019-20, and 2020-21.
Data were provided by the Kharkiv Regional Centers for Disease Control and Prevention of the Ministry of Health of Ukraine. The forecasting results show a downward trend in the dynamics of the influenza epidemic process in the Kharkiv region. This is due to the introduction of anti-epidemic measures aimed at combating COVID-19: activities such as wearing masks, social distancing, and lockdowns also contribute to reducing seasonal influenza epidemics.
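The core SARIMA ingredients (seasonal differencing plus autoregression) can be sketched in a few lines. This is a toy sketch, not the authors' fitted model: the series, the season length, and the SARIMA(1,0,0)(0,1,0)_s simplification are all assumptions for demonstration; a production model would be fitted with a full SARIMA implementation such as the one in statsmodels.

```python
# Minimal sketch of the core SARIMA ideas: seasonal differencing + autoregression.
# Illustrative only; the paper's model was fitted to weekly influenza morbidity
# data for the Kharkiv region, which is not reproduced here.

def seasonal_ar_forecast(y, s, horizon):
    """Forecast via seasonal differencing plus an AR(1) fitted by least squares.

    Mimics a SARIMA(1,0,0)(0,1,0)_s model: d_t = y_t - y_{t-s} is assumed
    to follow d_t = phi * d_{t-1} + noise.
    """
    d = [y[i] - y[i - s] for i in range(s, len(y))]      # seasonal differences
    num = sum(d[i] * d[i - 1] for i in range(1, len(d)))
    den = sum(d[i - 1] ** 2 for i in range(1, len(d)))
    phi = num / den if den else 0.0                      # AR(1) coefficient
    hist = list(y)
    last_d = d[-1]
    for _ in range(horizon):
        last_d = phi * last_d                            # propagate AR(1)
        hist.append(hist[-s] + last_d)                   # undo the differencing
    return hist[len(y):]

# toy example: a period-4 seasonal pattern with mild growth
series = [10, 20, 30, 20, 12, 22, 32, 22, 14, 24, 34, 24]
print(seasonal_ar_forecast(series, s=4, horizon=4))   # → [16.0, 26.0, 36.0, 26.0]
```

With a weekly season (s = 52) and horizon = 10, the same shape of computation yields the 10-week-ahead forecast discussed in the abstract.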


Author(s): Дмитро Вячеславович Грецьких, Василь Олександрович Алєксєєв, Андрій Володимирович Гомозов, Віктор Олександрович Катрич, Михайло Васильович Нестеренко

The paper presents a mathematical model of radio-electronic systems (RES) that include antennas and their excitation paths with nonlinear characteristics. The model provides acceptable accuracy in analyzing RES quality indicators and electromagnetic compatibility (EMC) for further practical design. General purpose: development of a mathematical model of a transmitting multi-input radiating structure with nonlinear characteristics in the Fresnel zone. Objectives: justifying the choice of a structural scheme of a radiating multi-input system with a radiator that has a distributed nonlinear surface impedance; obtaining the nonlinear integral equations (NIE) for the current density of radiators with distributed nonlinearity, excited by an arbitrary field distribution, to solve the general analysis problem; and obtaining relations for calculating the focused electromagnetic fields (EMF) created by multi-input radiating structures with nonlinear characteristics in the Fresnel zone. The methods used in the paper are mathematical methods of electrodynamics, the theory of antennas with nonlinear elements (ANE), and the theory of microwave circuits and multipoles. The following results were obtained. An electrodynamic approach is proposed to analyze the entire set of nonlinear effects arising in transmitting multi-input radiating structures with nonlinear characteristics. It makes it possible to consider both the mutual influence of the transmitting and receiving antennas with nonlinear characteristics within the system itself and the electrodynamic interaction of the transmitting antenna with nonlinear characteristics with RES serving other purposes. Component equations (NIE) of multi-input radiating structures are obtained that relate the amplitude-phase distribution at the inputs of radiators with distributed nonlinearity to the amplitude-phase distribution on their surfaces. 
A mathematical model of multi-input radiating structures with nonlinear characteristics in the Fresnel zone has been developed for analysis purposes. Conclusions. The scientific novelty of the obtained results is as follows: a generalized theory of transmitting antennas of arbitrary configuration with nonlinear characteristics in the Fresnel zone has been developed, which makes it possible to analyze the characteristics of these antennas considering the positive and negative (beneficial and adverse) nonlinear effects that arise in them.


2021, pp. 132-144
Author(s): Ігор Ігорович Фурсов, Олександр Віталійович Шматко

The active introduction of intelligent systems that closely interact with physical processes to solve a wide range of tasks of human life increases the relevance of analyzing the risks associated with the functioning of such systems. Such hybrid complex intelligent systems belong to the class of cyber-physical systems (CPS). Violations of CPS security caused by outside interference in the information flow can lead to economic losses, environmental threats, and threats to human life and health. A significant increase in security incidents involving wind-turbine CPS makes research on methods for protecting such systems relevant. The subject matter of the study is the process of detecting violations of the information security of a wind generator's CPS based on the analysis of the statistical indicators of variance, skewness, and kurtosis of the input parameter "Power" collected by CPS sensors. The goal is to develop an algorithm for detecting violations of the information security of the CPS using methods for analyzing the statistical indicators of variance, skewness, and kurtosis. The tasks to be solved are: to formalize the process of identifying falsified data in the information flow of the CPS; to determine the advantages and disadvantages of existing methods for ensuring the information security of the CPS; to determine the degree of change in the statistical indicators of variance, skewness, and kurtosis of the sample of the wind generator's "Power" parameter in the presence of misinformation in the information flow; and to analyze the possibility of supplementing and further improving the proposed algorithm. The methods used are the analysis of the statistical indicators of variance, skewness, and kurtosis of the sample of the wind generator's "Power" parameter. 
The following results are obtained: the general characteristics of CPS and the features of the functioning of the wind turbine's CPS, the object of research of this work, are considered; an initial algorithm for detecting violations of the information security of a wind turbine's CPS based on the statistical indicators of variance, skewness, and kurtosis is developed; the fact of artificial substitution of data for the "Power" parameter in the information flow of a wind turbine's CPS is detected; and ways to improve the developed algorithm using one-way analysis of variance and bootstrap methods are proposed. Conclusions. The scientific novelty of the obtained results consists in the development of an improved algorithm for detecting data falsification in the CPS information flow based on the analysis of the variance, skewness, and kurtosis indicators; the use of a statistical method for detecting CPS security violations; and the analysis of the shortcomings of existing methods for detecting CPS security violations and the possibility of their comprehensive improvement. The possibility of improving the developed method and testing it in the future is also considered.
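The statistical check at the heart of such an algorithm can be sketched as follows: compare the variance, skewness, and kurtosis of a suspect window of the "Power" parameter against a trusted baseline. This is a minimal illustration with a simple fixed-threshold rule; the 30 % tolerance and the sample data are hypothetical assumptions, not values from the paper.

```python
# Sketch of the moment-based falsification check: flag a window of "Power"
# readings whose variance, skewness, or kurtosis deviates too far from a
# trusted baseline. Threshold and data are illustrative assumptions.

def moments(xs):
    """Return (variance, skewness, excess kurtosis) of a sample."""
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n
    m3 = sum((x - mean) ** 3 for x in xs) / n
    m4 = sum((x - mean) ** 4 for x in xs) / n
    return m2, m3 / m2 ** 1.5, m4 / m2 ** 2 - 3.0

def looks_falsified(baseline, window, tol=0.30):
    """Flag the window if any moment deviates from the baseline by more than tol."""
    ref, cur = moments(baseline), moments(window)
    return any(abs(c - r) > tol * max(abs(r), 1e-9) for r, c in zip(ref, cur))

baseline = [1, 2, 3, 4, 5, 4, 3, 2]          # normal fluctuating output
print(moments(baseline))                      # variance 1.5, skewness 0.0, kurtosis -1.0
print(looks_falsified(baseline, [3, 3, 3, 3.2, 3, 3, 2.8, 3]))  # suspiciously flat
```

An artificially substituted (too-smooth) window collapses the variance, so the rule fires; the paper's proposed improvements (one-way ANOVA, bootstrap) would replace this fixed threshold with a proper significance test.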


2021, pp. 157-165
Author(s): Anatoliy Gorbenko, Andrii Karpenko, Olga Tarasyuk

A concept of distributed replicated NoSQL data storages such as Cassandra, HBase, and MongoDB has been proposed to effectively manage Big Data sets whose volume, velocity, and variability are difficult to handle with traditional Relational Database Management Systems. Tradeoffs between consistency, availability, partition tolerance, and latency are intrinsic to such systems. Although the relations between these properties have been identified by the well-known CAP and PACELC theorems in qualitative terms, it is still necessary to quantify how different consistency settings, deployment patterns, and other properties affect system performance. This experience report analyzes the performance of a Cassandra NoSQL database cluster and studies the tradeoff between data consistency guarantees and performance in distributed data storages. The primary focus is on investigating the quantitative interplay between Cassandra's response time, throughput, and consistency settings, considering different single- and multi-region deployment scenarios. The study uses the YCSB benchmarking framework and reports the results of read and write performance tests of a three-replica Cassandra cluster deployed in Amazon AWS. In this paper, we also put forward a notation that can be used to formally describe the distributed deployment of a Cassandra cluster and its nodes relative to each other and to a client application. We present quantitative results showing how different consistency settings and deployment patterns affect Cassandra's performance under different workloads. In particular, our experiments show that strong consistency costs up to 22 % of performance in the case of a centralized Cassandra cluster deployment and can cause a 600 % increase in read/write request times if Cassandra replicas and their clients are globally distributed across different AWS Regions.
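The consistency/latency tradeoff measured here follows from Cassandra's replica-counting rules: reads see the latest write whenever the read and write replica counts together exceed the replication factor (R + W > N), and stricter levels must wait for more, possibly remote, replicas. A back-of-the-envelope sketch, assuming a replication factor of 3 and hypothetical replica round-trip times (not measurements from the paper):

```python
# Quorum arithmetic behind Cassandra consistency levels, for N = 3 replicas.
# The RTT values are invented for illustration.

LEVELS = {"ONE": 1, "QUORUM": 2, "ALL": 3}   # replicas that must acknowledge

def strongly_consistent(read_level, write_level, n=3):
    """Reads observe the latest write iff R + W > N."""
    return LEVELS[read_level] + LEVELS[write_level] > n

def request_latency(level, replica_rtts_ms):
    """A request completes when the slowest of the required replicas answers."""
    needed = LEVELS[level]
    return sorted(replica_rtts_ms)[needed - 1]

# all replicas in one region vs. one replica in a remote AWS Region
local = [2, 3, 4]
cross_region = [2, 3, 140]
print(strongly_consistent("QUORUM", "QUORUM"))   # True: 2 + 2 > 3
print(request_latency("ONE", cross_region))      # fastest replica answers
print(request_latency("ALL", cross_region))      # dominated by the remote replica
```

This is why QUORUM/QUORUM gives strong consistency at a moderate cost in a single region, while globally distributed replicas make strict levels pay the full cross-region round trip, consistent with the large latency increases the report observes.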


Author(s): Valeriy Mygal, Galyna Mygal, Stanislav Mygal

The article is devoted to the systemic problems of studying the human factor, which are associated with the cognitive aspects of human-computer interaction. The rapid development of mathematical modeling has created systemic problems in the safety, control, and forecasting of the functioning of dynamic transport systems in difficult conditions. The accumulation of latent contradictions and interdisciplinary conflict are the main reasons for the systemic complexity of the problems of education and science, which have increased the importance of the human factor. The main goal of the work is the further development of a convergent approach to studying the problems of human-factor safety on a transdisciplinary basis. The key cause of systemic security problems and manifestations of the human factor is self-organized criticality, whose manifestation in information transmission lines causes nonlinearity and instability of fractal signals of various natures. The work establishes a connection between the transitional functional states of a person and the individuality of his cognitive activity. A toolkit is proposed for identifying induced spatial and temporal inhomogeneities of information transmission media, which generate hidden spatio-temporal relationships at different scale levels. These interconnections are determined by the individuality of the cognitive graphic images of fractal and multifractal signals of various natures. The creation of a knowledge base of cognitive graphic images of the dynamic structure of fractal and multifractal signals of various natures will make it possible to find solutions not yet available to the brain and the computer separately. The application of the transdisciplinary convergent approach, and of tools based on it, to the electrophysiological signals of a human operator demonstrates its advantages and new possibilities. 
In particular, it reveals hidden spatio-temporal relationships that determine the manifestation of the human factor in difficult conditions. The innovative potential of the convergent approach for training operators (pilots, dispatchers, etc.) and forecasting their activities is discussed.


2021, pp. 92-102
Author(s): Sergiy Rassomakhin, Olha Melkozerova, Oleksii Nariezhnii

The subject matter of the paper is the development of fingerprint local structures based on a new method of minutia vicinity decomposition (MVD) for solving the task of fingerprint verification. This task is essential because attempts are being made to introduce biometric technology into different areas of social and state life: criminology, access control systems, mobile device applications, and banking. The goal is to develop real-number vectors that meet the criteria for biometric template protection schemes, such as irreversibility, with a corresponding accuracy in terms of equal error rate (EER). The problem to be solved is the accuracy of verification, because there are false minutiae, true minutiae can disappear, and there are also linear and angular deformations. The method is a new variant of MVD that uses graph levels with the number of points from 7 down to 3. This decomposition scheme is presented in the paper; such a variant of decomposition has not been used in scientific articles before. The following results were obtained: a description of a new method for fingerprint verification. A new metric for creating vectors of real numbers was suggested: the minimal path over the points of a graph. An algorithm for finding minimal paths over the points of a graph was also proposed, because the classic algorithm has problems in some cases when the number of points reaches 6: crossing arcs and arcs excluded from the path. A way of resolving these problems was suggested, and examples are given for 20 points. The results for the false rejection rate (FRR), false acceptance rate (FAR), and EER are shown in the paper. The level of EER achieved with a full search is 33 %. 78,400 false and 1,400 true tests were conducted. The method does not use metrics such as distances and angles, which are used in the classical MVD method and will be used in future papers. 
This result is obtained for exact coincidences of real numbers rather than the similarity measures usually used in verification. It is a good result for this setting, because the corresponding result for the index-of-max method is 40 %.
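The minimal-path metric can be illustrated by brute force, which is feasible for the small vicinities the method uses (3 to 7 points): the shortest open path visiting every minutia of a vicinity once. The point coordinates and the Euclidean distance here are assumptions for illustration; the paper's own path-finding algorithm, which handles crossing and excluded arcs, is not reproduced.

```python
# Brute-force "minimal path" over a small set of minutia points: the shortest
# open route visiting all points exactly once. Exponential in the number of
# points, so only suitable for small vicinities, as in the described method.
from itertools import permutations
from math import dist

def minimal_path_length(points):
    best = float("inf")
    for order in permutations(points):
        length = sum(dist(order[i], order[i + 1]) for i in range(len(order) - 1))
        best = min(best, length)
    return best

square = [(0, 0), (1, 0), (1, 1), (0, 1)]   # hypothetical minutia coordinates
print(minimal_path_length(square))           # → 3.0 (three unit edges, no diagonal)
```

Because the path length is invariant to translation and rotation of the vicinity, a vector of such lengths is a plausible basis for the real-number templates the paper builds.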


Author(s): Heorhii Kuchuk, Andrii Podorozhniak, Nataliia Liubchenko, Daniil Onischenko

The system of automatic license plate recognition (ALPR) is a combination of software and hardware technologies implementing ALPR algorithms. The goal may seem easy to achieve, but license plate recognition requires solving several non-trivial tasks. If the license plate is oriented horizontally, uniformly lit, has a clean surface and clearly distinguishable characters, then recognizing it is not too difficult. The reality, however, is much worse: the lighting of the plate is uneven and the picture from the camera is noisy. Besides, the license plate can be at a large angle relative to the camera and can be dirty. These obstacles make it difficult to recognize the license plate characters and determine their location in the image. For instance, recognition accuracy is much worse at large camera angles. To solve these problems, the developers of automatic license plate recognition systems use different approaches to image processing and analysis. This work presents an automatic license plate recognition system that increases recognition accuracy at large camera angles. The system is based on image recognition technology using highly accurate convolutional neural networks. The proposed system improves the normalization and segmentation stages of license plate image processing, taking large camera angles into account; the goal of these improvements is to increase recognition accuracy. At the normalization stage, an affine transformation of the image is performed before histogram equalization. For segmentation and recognition, Mask R-CNN is used, with selective search as the main segment-search algorithm. A combined loss function is used to speed up the training and classification of the network. An additional module is added to the convolutional neural network to solve interclass segmentation; its input is the generated feature tensor. 
Its output is segmented data for semantic processing. The developed system was compared with well-known systems (SeeAuto.USA and Nomeroff.Net) and achieved better results at large camera shooting angles.
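The normalization stage combines an affine transform with histogram equalization. The equalization step can be sketched in pure Python for an 8-bit grayscale image given as a flat pixel list; this illustrates the standard equalization formula, not the paper's code, and the affine transform and the real CNN pipeline are omitted.

```python
# Histogram equalization for an 8-bit grayscale image (flat list of pixels):
# spread the cumulative distribution of intensities over the full 0..255 range
# so that unevenly lit plates gain contrast before segmentation.

def equalize(pixels, levels=256):
    n = len(pixels)
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)                       # cumulative histogram
    cdf_min = next(c for c in cdf if c)          # first nonzero CDF value
    # classic equalization formula: map each intensity through the scaled CDF
    return [round((cdf[p] - cdf_min) / (n - cdf_min) * (levels - 1))
            for p in pixels]

# a low-contrast 4-pixel "image" is stretched across the full range
print(equalize([100, 100, 101, 102]))   # → [0, 0, 128, 255]
```

In an OpenCV-based implementation the same effect is obtained with `cv2.equalizeHist`, applied after `cv2.warpAffine` has deskewed the plate.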


2021, pp. 183-198
Author(s): Konstantin Dergachov, Leonid Krasnov, Vladislav Bilozerskyi, Anatoly Zymovin

The subject of study in the article is the formulation of a modern concept for improving the quality of optical recognition systems by letting the user apply a set of various algorithms for preprocessing document images. The research synthesizes algorithms that compensate for negative external influences (an unfavorable geometric factor, poor lighting conditions when photographing, the effect of noise, etc.). The methods used imply a certain sequence of data preprocessing stages: geometric transformation of the original images; processing with a set of various filters; image equalization, without increasing the noise level, to increase image contrast; and binarization with adaptive conversion thresholds to eliminate the influence of uneven photo illumination. The following results were obtained. A package of algorithms for the preliminary processing of photographs of documents has been created; to increase the functionality of data identification, it also includes a face detection algorithm intended for subsequent face recognition. A number of service procedures are provided to ensure convenient data processing and information protection. In particular, interactive procedures for text segmentation with the possibility of anonymizing individual fragments are proposed, which helps ensure the confidentiality of the processed documents. The structure of the listed algorithms is described, and the stability of their operation under various conditions is investigated. Based on the results of the research, text recognition software was developed using the Tesseract version 4.0 optical character recognition (OCR) engine. The "HQ Scanner" program is written in Python using the OpenCV library. An original technique for evaluating the effectiveness of the algorithms, using the criterion of the maximum probability of correct text recognition, has been implemented in the software. 
A large number of examples of system operation and software testing results are provided. Conclusions. The results of the conducted research form a basis for developing cost-effective and easy-to-use OCR systems for commercial use.
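One preprocessing stage named above, binarization with adaptive thresholds, can be sketched as a local-mean threshold: each pixel is compared with the mean of its neighborhood, which suppresses uneven illumination that a single global threshold cannot handle. The window radius and offset below are illustrative choices; the actual program presumably uses OpenCV's adaptive thresholding rather than this pure-Python sketch.

```python
# Adaptive (locally thresholded) binarization: threshold each pixel against the
# mean of its (2*radius+1)-square neighborhood, clipped at the image borders.
# Radius and offset are illustrative parameters.

def adaptive_binarize(img, radius=1, offset=0):
    """img: 2D list of grayscale values; returns a 2D list of 0/1 values."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[ny][nx]
                    for ny in range(max(0, y - radius), min(h, y + radius + 1))
                    for nx in range(max(0, x - radius), min(w, x + radius + 1))]
            local_mean = sum(vals) / len(vals)
            out[y][x] = 1 if img[y][x] > local_mean - offset else 0
    return out

# a dark "ink" pixel on a bright background stays 0, the background becomes 1
page = [[100, 100, 100],
        [100,  10, 100],
        [100, 100, 100]]
print(adaptive_binarize(page))
```

The OpenCV equivalent is `cv2.adaptiveThreshold` with `ADAPTIVE_THRESH_MEAN_C`, which computes the same local-mean comparison far more efficiently.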


Author(s): В’ячеслав Васильович Москаленко, Микола Олександрович Зарецький, Альона Сергіївна Москаленко, Артем Геннадійович Коробов, Ярослав Юрійович Ковальський

A semi-supervised machine learning method was developed for the classification analysis of defects on the surface of sewer pipes based on CCTV video inspection images. The object of the research is the process of defect detection on the surface of sewage pipes. The subject of the research is a machine learning method for the classification analysis of sewage pipe defects in video inspection images under conditions of a limited and unbalanced set of labeled training data. A five-stage algorithm for classifier training is proposed. In the first stage, contrastive training occurs using an instance-prototype contrast loss function, where the normalized Euclidean distance is used to measure the similarity of the encoded samples. The second stage considers two variants of regularized loss functions: a triplet NCA function and a contrast-center loss function. The regularizing component in the second stage of training penalizes the rounding error of the output feature vector to a discrete form and ensures that the information bottleneck principle is implemented. The next stage is to calculate the binary code of each class to implement error-correcting codes while considering the structure of the classes and the relationships between their features. The resulting prototype vector of each class is used as the image label for training with the cross-entropy loss function. The last stage of training optimizes the parameters of the decision rules using an information criterion that accounts for the variance of the class distribution in binary Hamming space. A micro-averaged F1 metric, calculated on test data, is used to compare learning outcomes at different stages and between different approaches. The results obtained on the open Sewer-ML dataset confirm the suitability of the training method for practical use, with an F1 metric value of 0.977. 
The proposed method provides a 9 % increase in the value of the micro-averaged F1 metric compared to the results obtained using the traditional method.
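The first training stage can be sketched as an instance-prototype contrastive loss in which similarity is the negative normalized Euclidean distance between an L2-normalized embedding and class prototypes, followed by a softmax cross-entropy over prototypes. The vectors and the temperature tau below are toy assumptions; the paper's actual loss functions and training loop are not reproduced.

```python
# Instance-prototype contrastive loss sketch: L2-normalize embedding and
# prototypes, score each prototype by negative Euclidean distance (scaled by a
# temperature), then take cross-entropy against the target class.
from math import exp, log, sqrt

def l2_normalize(v):
    n = sqrt(sum(x * x for x in v)) or 1.0
    return [x / n for x in v]

def contrast_loss(embedding, prototypes, target, tau=0.5):
    z = l2_normalize(embedding)
    logits = []
    for p in prototypes:
        q = l2_normalize(p)
        d = sqrt(sum((a - b) ** 2 for a, b in zip(z, q)))
        logits.append(-d / tau)               # closer prototype => larger logit
    m = max(logits)                           # stabilized log-sum-exp
    denom = sum(exp(s - m) for s in logits)
    return -(logits[target] - m) + log(denom) # cross-entropy over prototypes

protos = [[1.0, 0.0], [0.0, 1.0]]             # toy class prototypes
print(contrast_loss([1.0, 0.1], protos, target=0))  # small: near its prototype
print(contrast_loss([1.0, 0.1], protos, target=1))  # large: far from prototype 1
```

Minimizing this loss pulls each embedding toward its own class prototype and away from the others, which is the behavior the first training stage relies on.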


2021, pp. 119-131
Author(s): Володимир Вікторович Бараннік, Наталія Вячеславівна Бараннік, Олександр Олексійович Ігнатьєв, Вікторія Вікторівна Хіменко

It is substantiated that steganographic systems should be used to protect special information resources under conditions of prompt delivery. Here, steganographic technologies are an integral part of complex information protection systems. At the same time, for steganographic systems there is a contradiction between the density of embedded data and the level of information compaction of the video container (the degree to which the bit volume of the compactly presented video image is reduced relative to the bit volume of the initial video image). This leads to the fact that, under the conditions of the required quality (reliability) of digital video information, the bit rate of the covert channel is insufficient. Consequently, there is a relevant scientific and applied problem: the need to increase the integrity (the degree of correspondence of the hidden information before its embedding in a video container and after its extraction) and the bit rate of the hidden channel for transmitting special information. The described problem can be solved in the field of steganographic transformations using two different approaches. The first approach is based on methods of direct message embedding. However, this approach introduces distortions into the video images used as containers, thereby changing structural and statistical patterns in the syntactic description of the video container and reducing the potential for its compaction. The second approach to creating steganographic transformation methods is based on hiding information using an indirect embedding technique. Here, the embedding process exploits a functional dependency between the elements of the video container and the elements of the embedded message: setting a specific dependency between the elements of the video container corresponds to an embedded element with a value of "0" or "1". 
However, the existing indirect steganographic transformation methods have a disadvantage: an insufficient embedded-data density. To eliminate this disadvantage, it is proposed to develop an approach that exploits not only the psychovisual but also the structural redundancy of the video container for concealment. Therefore, the research objective of this paper is to develop a method of indirect information hiding in the video container compression process that increases the bit rate of the hidden message channel. In the course of the research, a steganographic multiadic system is constructed that allows embedding hidden message elements without loss of information, based on the indirect approach, by modifying the active bases of the multiadic basis considering their uncertainty. To select transformants (data sets) as containers for information embedding, the requirement that a base system with all active bases exists is taken into account. The number of embedded bits of the hidden message is equal to the number of active bases in the base system of the multiadic space. The experiments yielded the following results: the developed method introduces no distortions into the video container during message embedding, and it increases the hidden channel bit rate by a factor of 5 to 7 on average.
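The indirect-embedding idea, encoding a bit in a dependency between container elements rather than overwriting them, can be illustrated with a toy pairwise scheme: the *order* of two coefficients carries the bit, so the set of container values is unchanged. This is a simplification for illustration only; the paper embeds via the active bases of a multiadic basis system, which is not shown here, and the coefficients below are invented.

```python
# Toy indirect embedding: a bit is encoded in the relation between a pair of
# coefficients (a <= b encodes 0, a > b encodes 1). The multiset of container
# values is preserved, unlike direct-embedding schemes that overwrite values.

def embed(pairs, bits):
    """Swap each (a, b) pair, if needed, so its ordering encodes the bit.

    Note: pairs with equal values can only encode 0 in this toy scheme.
    """
    out = []
    for (a, b), bit in zip(pairs, bits):
        if (a > b) != bool(bit):
            a, b = b, a                 # swap so the relation matches the bit
        out.append((a, b))
    return out

def extract(pairs):
    return [1 if a > b else 0 for a, b in pairs]

coeffs = [(3, 7), (9, 2), (5, 5), (1, 4)]     # hypothetical container coefficients
stego = embed(coeffs, [1, 0, 0, 1])
print(extract(stego))                          # → [1, 0, 0, 1]
```

Because only relations change, the embedded message survives exactly, and no new values are introduced into the container, mirroring the distortion-free property reported for the proposed method.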

