Applied Aspects of Information Technology
Latest Publications


Total documents: 70 (five years: 65)
H-index: 1 (five years: 1)
Published by: Odessa National Polytechnic University
ISSN: 2617-4316

2021, Vol 4 (4), pp. 377-385
Author(s): Volodymyr M. Lucenko, Dmytro O. Progonov

Reliable protection of confidential data processed in critical information infrastructure elements of public institutions and private organizations is a topical task today. Of particular interest are methods that prevent the leakage of confidential data by localizing informative (dangerous) signals, i.e. signals that both carry an informative component and have a level above a predefined threshold. The growth in signal energy emitted by personal computers is caused by the increasing switching speed of their transistors. Modern passive shielding methods for secured computers, such as those of the well-known TEMPEST program, require either costly and bulky shielding units or a technological simplification that uses low-cost fragmentary shielding of the computer's individual elements. Localization of the side electromagnetic radiation produced by a personal computer is therefore needed. The paper presents a cost-effective passive approach to reducing the level of a computer's electromagnetic radiation. The radiation is localized and measured directly at the personal computer's elements, namely the unshielded communication lines between the video processor and the monitor, fragments of electric traces on motherboards, etc. In the experiments the authors used ad hoc miniature electric (ball antenna) and magnetic (Hall sensor) probes connected to selective voltmeters. This approach significantly reduces the cost of equipment and measurements, as well as the qualification requirements for the analysts improving the computer's protection. An alternative approach to computer protection is also proposed. It is based on protecting the image content by distorting the image on the monitor instead of reducing the electromagnetic radiation caused by the monitor signals. The protection includes image scrambling with the Arnold transform, which pseudo-randomly "shuffles" the lines in each frame.
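
For illustration only, a minimal NumPy sketch of the kind of Arnold-transform scrambling the abstract mentions; the classical cat map is shown, and the iteration count standing in for the scrambling key is an assumption, not the authors' parameter.

import numpy as np

def arnold_cat_map(img: np.ndarray, iterations: int = 1) -> np.ndarray:
    """Scramble a square image with the classical Arnold cat map:
    pixel (x, y) moves to ((x + y) mod n, (x + 2y) mod n).
    The map is a bijection, so iterating it eventually restores
    the image; the iteration count plays the role of a key."""
    n = img.shape[0]
    assert img.shape[1] == n, "the Arnold map is defined for square images"
    x, y = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    out = img
    for _ in range(iterations):
        scrambled = np.empty_like(out)
        scrambled[(x + y) % n, (x + 2 * y) % n] = out[x, y]
        out = scrambled
    return out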


2021, Vol 4 (4), pp. 311-328
Author(s): Vitaliy F. Sivokobylenko, Andrey P. Nikiforov, Ivan V. Zhuravlov

When implementing development concepts in the electric power industry (such as "Smart Grid", "Digital substation" and "Outsourcing of services"), ensuring the stable operation of relay protection and automation devices is an urgent task. The problem is solved with the developed structural-information (SI) method. A method for the selective search of the optimal amount of structured information for automatic decision-making is proposed. The article discusses an algorithm, included in the SI-method, for recognizing scenarios of the development of semantic events. The algorithm is applied uniformly at all hierarchical levels of recognition, based on the decision-making goals of the senior level. The sequence of information events is controlled in the dynamics of their passage along one path, out of all the relationships in the structural-information model. Part 1 presents a joint structural-information model consisting of a shaping tree in a dynamic object and a recognition tree in the devices. A theoretical description of the algorithm is given using the amplitude and time (Ξ, Η) selectivity windows in the general structural scheme of S-detection. The application of the method at different hierarchical levels of recognition is shown. The decision-making results are presented in two forms: a single semantic signal indicating a group of results, and a table of the sequence in which the recognized elementary information components occur. Part 2 shows the application of the SI-method at different hierarchical levels of recognition for the synthesis of a selective relay that implements an algorithm for finding a damaged network section with single-phase ground faults in 6-35 kV distribution networks with a Petersen coil. The reasons for the unstable operation of the algorithms of known selective relays are indicated on the basis of the scenario-recognition concepts. An improvement is considered of the structure of a selective relay operating on the criterion of monitoring the coincidence of the first half-waves of the mid-frequency components in the signals of transient processes. Examples are given of the synthesis of elementary detectors of absolute, relative and cumulative action for the selective relay, which make it possible to accumulate the amount of information needed for overall S-detection. The operation of the synthesized S-detector is simulated both on the signals of real emergency records of the natural development of phase-insulation damage and on artificial event scenarios simulated in the mathematical SI-model.
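
The half-wave coincidence criterion lends itself to a compact sketch. Below is a simplified, assumption-laden illustration (the mid-frequency band edges, the onset threshold and the polarity convention are all placeholders, not the authors' settings) of comparing the polarity of the first half-waves of the zero-sequence voltage and of each feeder's zero-sequence current:

import numpy as np
from scipy.signal import butter, sosfiltfilt

def first_halfwave_sign(signal, fs, band=(150.0, 1000.0)):
    """Polarity of the first half-wave of the mid-frequency transient
    component (band edges and onset threshold are illustrative)."""
    sos = butter(4, band, btype="bandpass", fs=fs, output="sos")
    x = sosfiltfilt(sos, signal)
    onset = np.argmax(np.abs(x) > 0.1 * np.max(np.abs(x)))
    return np.sign(x[onset])

def suspect_feeders(u0, feeder_i0, fs):
    """Flag feeders whose zero-sequence current half-wave coincides in
    polarity with the zero-sequence voltage (convention assumed here)."""
    u_sign = first_halfwave_sign(u0, fs)
    return [k for k, i0 in enumerate(feeder_i0)
            if first_halfwave_sign(i0, fs) == u_sign]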


2021, Vol 4 (4), pp. 354-365
Author(s): Vitaliy S. Yakovyna, Ivan I. Symets

This article focuses on improving static models of software reliability by using machine learning methods to select the software code metrics that most strongly affect reliability. The study used a merged dataset from the PROMISE Software Engineering repository containing data on the testing of the software modules of five programs and twenty-one code metrics. For the prepared sample, the most important features affecting software code quality were selected using the following feature-selection methods: Boruta, Stepwise selection, Exhaustive Feature Selection, Random Forest Importance, LightGBM Importance, Genetic Algorithms, Principal Component Analysis, and the Xverse Python package. Based on voting over the results of these feature-selection methods, a static (deterministic) model of software reliability was built that establishes the relationship between the probability of a defect in a software module and the metrics of its code. It is shown that this model includes such code metrics as the branch count of a program, McCabe's lines of code and cyclomatic complexity, and Halstead's total number of operators and operands, intelligence, volume, and effort. The effectiveness of the different feature-selection methods was compared, in particular by studying the effect of the feature-selection method on classification accuracy with the following classifiers: Random Forest, Support Vector Machine, k-Nearest Neighbors, Decision Tree, AdaBoost, and Gradient Boosting. It is shown that using any feature-selection method increases classification accuracy by at least ten percent compared to the original dataset, which confirms the importance of this procedure for predicting software defects from metric datasets that contain a significant number of highly correlated software code metrics. The best forecast accuracy for most classifiers was reached using the set of features obtained from the proposed static model of software reliability. In addition, it is shown that separate methods, such as Autoencoder, Exhaustive Feature Selection and Principal Component Analysis, can also be used with an insignificant loss of classification and prediction accuracy.
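
As a rough sketch of the voting step, assuming scikit-learn in place of the paper's full toolchain (three selectors stand in for the eight listed, synthetic data stands in for the PROMISE sample, and k=7 is an assumption):

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE, SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression

# Toy stand-in for the metrics table: 21 metrics, binary defect label.
X, y = make_classification(n_samples=500, n_features=21, n_informative=7,
                           random_state=0)
k = 7  # number of metrics each selector is asked to keep (assumption)

votes = np.zeros(X.shape[1], dtype=int)

# Selector 1: univariate F-test.
votes[np.argsort(SelectKBest(f_classif, k=k).fit(X, y).scores_)[-k:]] += 1

# Selector 2: random forest importance.
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
votes[np.argsort(rf.feature_importances_)[-k:]] += 1

# Selector 3: recursive feature elimination.
rfe = RFE(LogisticRegression(max_iter=1000), n_features_to_select=k).fit(X, y)
votes[rfe.support_] += 1

# Keep the metrics chosen by a majority of the selectors.
selected = np.where(votes >= 2)[0]
print("metrics selected by vote:", selected)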


2021, Vol 4 (4), pp. 366-376
Author(s): Oleg N. Galchonkov, Mykola I. Babych, Andrey V. Plachinda, Anastasia R. Majorova

The migration of more and more companies from their own computing infrastructure to the clouds is driven by the lower cost of maintenance, practically unlimited scalability, and the large number of tools for automating activities. Accordingly, cloud providers offer a growing number of different computing resources and tools for working in the clouds, which in turn raises the problem of the rational choice of cloud service types according to the peculiarities of the tasks being solved. One of the most popular directions of effort for cloud consumers is reducing rental costs, and its main basis is the use of spot resources. The article proposes a method for reducing the cost of renting computing resources in the cloud by dynamically managing the placement of computational tasks; the method takes into account the possible underutilization of planned resources and the forecast of the appearance of spot resources and their cost. For each task, a state vector is generated that takes into account the duration of the task and the required deadline. Accordingly, for a suitable set of computing resources, availability forecast vectors are formed over a given time interval, counted from the current moment. The technique calculates, at each discrete moment of time, the most rational option for placing the task on one of the resources together with the delay in starting the task on it. The placement option and launch delays are determined by minimizing the rental cost function over the time interval using a genetic algorithm. One feature of spot resources is the auction mechanism by which the cloud provider grants them: if another consumer offers a more preferable rental price, the provider may warn of the disconnection of the resource and perform this disconnection after the announced time. To minimize the consequences of such a shutdown, the technique involves preliminary preparation of tasks by dividing them into sub-stages with the ability to quickly save intermediate results in memory and then restart from the point of the stop. In addition, to increase the likelihood that a task will not be interrupted, a price forecast for the resource types used is made, and a price slightly higher than the forecast is offered at the cloud provider's auction. The effectiveness of the proposed method is shown using the example of the Elastic Compute Cloud (EC2) environment of the cloud provider AWS.
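
A toy sketch of the genetic-algorithm search over placements and launch delays; the horizon, the price forecast and the GA parameters below are invented for illustration and are not the paper's values:

import numpy as np

rng = np.random.default_rng(0)

T = 48                                          # planning horizon, hours
n_tasks, n_res = 5, 3
duration = rng.integers(2, 8, n_tasks)          # task lengths, hours
deadline = rng.integers(12, T, n_tasks)         # required completion times
price = rng.uniform(0.02, 0.12, (n_res, T))     # forecast spot price, $/hour

def cost(genome):
    """Genome = (resource index, start delay) per task; deadline
    violations are pushed out with a large penalty."""
    total = 0.0
    for t in range(n_tasks):
        res, start = genome[2 * t], genome[2 * t + 1]
        end = start + duration[t]
        if end > deadline[t]:
            return 1e9
        total += price[res, start:end].sum()
    return total

def random_genome():
    g = []
    for t in range(n_tasks):
        g += [rng.integers(n_res), rng.integers(0, T - duration[t])]
    return np.array(g)

pop = [random_genome() for _ in range(60)]
for _ in range(200):                      # generations
    pop.sort(key=cost)
    elite = pop[:20]
    children = []
    for _ in range(40):                   # crossover + mutation
        a, b = rng.choice(20, 2, replace=False)
        child = np.where(rng.random(2 * n_tasks) < 0.5, elite[a], elite[b])
        t = rng.integers(n_tasks)         # mutate one task's gene pair
        child[2 * t] = rng.integers(n_res)
        child[2 * t + 1] = rng.integers(0, T - duration[t])
        children.append(child)
    pop = elite + children
pop.sort(key=cost)
print("best cost:", cost(pop[0]), "plan:", pop[0])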


2021, Vol 4 (4), pp. 338-353
Author(s): Oleksii B. Kungurtsev, Nataliia O. Novikova, Svitlana L. Zinovatna, Nataliia O. Komleva

It is shown that most technologies for creating information systems are based on an object-oriented approach and present functional requirements in the form of use cases. However, there is no general agreement on the format of use cases or on the rules for describing scenario items. Based on an analysis of a great number of existing descriptions from different subject areas, the work improves the classification of use-case items. New rules have been introduced, and existing ones clarified, for describing use cases, which made it possible to further formalize and automate the process of describing them. It is also proposed to automate the formation of a program class model by introducing additional information linking each class with use cases; the resulting class model thus contains significantly more information for coding than the existing models in UML diagrams. A method for constructing the program class model has been developed, and the methods for the automated description of use cases and for the construction of the class model are linked into a single process. The information richness of the class model also makes it possible to automate the debugging process associated with changing requirements. Since the decisions made cover most steps of the software module creation process, together they represent a new technology. The proposed model, methods and technology were implemented in the ModelEditor and UseCaseEditor software products. Approbation of the automated use-case description method demonstrated more than a twofold decrease in the number of errors compared to the traditional description method, and more than a one-and-a-half-fold reduction in time. Testing the method for constructing the program class model showed its advantage over the existing technology: an almost one-and-a-half-fold reduction in errors and time. The proposed technology can be used in the development of any information systems.
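
A minimal sketch, under assumed names, of the kind of linkage the abstract proposes between classified use-case items and the enriched class model (the item kinds and fields here are illustrative, not the paper's full classification):

from dataclasses import dataclass, field

@dataclass
class ScenarioItem:
    """One classified step of a use-case scenario."""
    number: str                 # e.g. "2.1"
    kind: str                   # e.g. "actor action" or "system response"
    text: str

@dataclass
class ProgramClass:
    """A class-model entry enriched with links back to the
    use-case items it realizes."""
    name: str
    methods: list[str] = field(default_factory=list)
    realizes: list[ScenarioItem] = field(default_factory=list)

step = ScenarioItem("2.1", "system response", "System shows the order form")
cls = ProgramClass("OrderForm", methods=["show"], realizes=[step])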


2021, Vol 4 (2), pp. 299-310
Author(s): Vadim Yu. Skobtsov

The paper presents solutions to the actual problem of the intelligent analysis of small-satellite telemetry data in order to detect the technical states of the satellites. Neural network models based on modern deep learning architectures have been developed and investigated to solve the binary classification problem on telemetry data, which makes it possible to determine the normal or abnormal state of a small satellite or some of its subsystems. The computer analysis used functioning data of the small-satellite navigation subsystem: a time series with a dimension of 121690 × 9. A comparative analysis of fully connected, one-dimensional convolutional and recurrent (GRU, LSTM) neural networks was carried out. Hybrid neural network models of various depths were analyzed, built as sequential combinations of all three layer types, including with residual connections of the ResNet family. The achieved results were compared with the widespread neural network models AlexNet, LeNet, Inception, Xception, MobileNet, ResNet and YOLO, modified for time-series classification. The best result, in terms of classification accuracy at the training, validation and testing stages and the execution time of one training-and-validation epoch, was obtained by the developed hybrid models with three types of layers (one-dimensional convolutional, recurrent GRU, and fully connected classification layers) with added residual connections; the input data were normalized in this case. The classification accuracy obtained at the training, validation and testing stages was 0.9821, 0.9665 and 0.9690, respectively, and one training-and-validation epoch took twelve seconds. The modified Inception model showed the best alternative result in terms of accuracy: 0.9818, 0.9694 and 0.9675, with one epoch taking twenty-seven seconds. That is, adapting the well-known neural network models used for image analysis did not increase the classification accuracy, while the training and validation time for the best of them, Inception, more than doubled. Thus, the proposed hybrid neural network model showed the highest accuracy and the minimum training and validation time for the considered problem compared with a number of developed and widely used deep neural network models.
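
A Keras sketch of a hybrid model of the winning type (one-dimensional convolution, recurrent GRU and fully connected layers with a residual connection); the layer sizes and window length are assumptions, not the paper's configuration:

import tensorflow as tf
from tensorflow.keras import layers

def hybrid_model(window=32, channels=9):
    """Conv1D -> residual Conv1D block -> GRU -> dense classifier
    for windows of the 9-channel navigation telemetry."""
    inp = layers.Input(shape=(window, channels))
    x = layers.Conv1D(32, 5, padding="same", activation="relu")(inp)
    # Residual connection around a second convolutional block.
    y = layers.Conv1D(32, 5, padding="same", activation="relu")(x)
    x = layers.Add()([x, y])
    x = layers.GRU(64)(x)                 # recurrent summary of the window
    x = layers.Dense(32, activation="relu")(x)
    out = layers.Dense(1, activation="sigmoid")(x)  # normal vs. abnormal
    return tf.keras.Model(inp, out)

model = hybrid_model()
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.summary()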


2021, Vol 4 (4), pp. 329-337
Author(s): Georgy V. Derevyanko, Vladimir I. Mescheryakov

A mathematical model of a system consisting of three heating devices connected in series is considered. A system of equations based on the energy conservation law is constructed, which turns out to be incomplete: it is shown that when the requirements for the system are expressed only through its efficiency, the formalization of the design often becomes unsolvable. The system of equations is therefore supplemented with expressions following the hypothesis of the proportionality of the amount of energy in an element, and is presented in matrix form; the design task is reduced to determining the elements of the matrix from the values of the determinants. Analysis of the mathematical model yields an expression for the efficiency of the system as a function of the energy exchange in its elements, which makes it possible to obtain solutions for the flows and their relationships in the elements of the system. In addition, the efficiencies of inter-network and intra-network energy exchange are determined, which satisfy the principles of equilibrium and of minimum uncertainty in the values of the average parameters of the system. As an application, one of the main parameters, the number of transfer units (NTU), is considered; it determines the heat exchange area with the external environment and the mass and dimensional characteristics of the heat exchange system. Models of direct-flow and counter-flow connection of the flows are considered, with variations of the flows and of the device surface while meeting the efficiency requirements for the system. The results of comparing the design process with the iterative calculation method are presented, and the advantages of the proposed approach are shown.
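
For reference, the standard textbook effectiveness-NTU relations that connect the NTU parameter discussed in the abstract to heat-exchanger effectiveness for the two flow arrangements (these are the classical formulas, not the authors' matrix model):

\mathrm{NTU} = \frac{UA}{C_{\min}}, \qquad C_r = \frac{C_{\min}}{C_{\max}},

\varepsilon_{\text{counter-flow}} = \frac{1 - e^{-\mathrm{NTU}\,(1 - C_r)}}{1 - C_r\, e^{-\mathrm{NTU}\,(1 - C_r)}}, \qquad \varepsilon_{\text{parallel-flow}} = \frac{1 - e^{-\mathrm{NTU}\,(1 + C_r)}}{1 + C_r}.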


2021, Vol 4 (2), pp. 152-167
Author(s): Vasily Petrovich Larshin, Anatoly M. Gushchin

The article focuses on a new way to solve the problem of machining by cutting that has arisen with the appearance of a wide range of superhard and hard-to-machine structural materials for aircraft, automobile, ship and engine construction, as well as for spacecraft, medicine (orthopedics, dentistry), nuclear and military equipment. Such materials have an organized regular structure, high strength and super hardness, which raises the problem of machining them without defects and without damaging their balanced structure. The article describes a new approach and formulates innovative principles for creating a new class of mechatronic technological systems for the precision machining of parts made of these materials, using the example of drilling small-diameter deep holes. The core of the mechatronic technological system is a mechatronic parametric stabilizer of the power load on the cutting tool. The system provides programmed setting, automatic stabilization, and tracking-mode maintenance of the power load on the cutting tool with "disturbance control". In the technological cycle of drilling small-diameter holes, for example, such a system protects the drill bits from breakage. An integrated technological system with three levels of control is proposed: intelligent (upper), adaptive (middle) and robust (lower). The basis of the multi-level system is a high-speed robust automatic control system acting "on the disturbance". The disturbance is the load torque, which is either automatically stabilized, tracked while executing a task set from a computer, or changed according to the program that defines the operating algorithm of the mechatronic technological system. This algorithm can vary widely across different methods of machining parts by cutting (grinding), including the shaping of free 3D surfaces according to their digital models. The proposed mechatronic technological system is easily integrated into the cutting (grinding) system of CNC machines, expanding their capabilities by transferring the standard CNC control program to a higher level of the control hierarchy. This allows machining any complex-shaped parts, including "double curvature" parts, namely impellers, turbine blades, propellers, etc.
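
A schematic sketch of the robust lower control level: a PI loop that trims the feed rate so that the measured load torque (the "disturbance") follows a programmed setpoint. The gains, limits and setpoint are illustrative assumptions, not the authors' tuning:

class TorqueStabilizer:
    """PI regulation of the load torque on the tool via the feed rate.
    The setpoint can be re-programmed at run time, which corresponds
    to the tracking mode described in the abstract."""

    def __init__(self, setpoint, kp=0.5, ki=5.0, dt=1e-3,
                 feed_min=0.0, feed_max=2.0):
        self.setpoint, self.kp, self.ki, self.dt = setpoint, kp, ki, dt
        self.feed_min, self.feed_max = feed_min, feed_max
        self.integral = 0.0

    def update(self, measured_torque):
        """One control step: returns the corrected feed rate (mm/s)."""
        error = self.setpoint - measured_torque
        self.integral += error * self.dt
        feed = self.kp * error + self.ki * self.integral
        return min(max(feed, self.feed_min), self.feed_max)

ctl = TorqueStabilizer(setpoint=0.15)        # N*m, assumed safe drill torque
feed = ctl.update(measured_torque=0.22)      # overload -> feed is reduced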


2021, Vol 4 (2), pp. 168-177
Author(s): Oleksandr V. Drozd, Andrzej Rucinski, Kostiantyn V. Zashcholkin, Myroslav O. Drozd, Yulian Yu. Sulima

The article is devoted to the problem of improving FPGA (Field Programmable Gate Array) components developed for safety-related systems. FPGA components are improved in the checkability of their circuits and the trustworthiness of the results calculated on them in order to support fault-tolerant solutions, which are basic to ensuring the functional safety of critical systems. Fault-tolerant solutions need protection from sources of multiple failures, which include hidden faults: these can accumulate in significant quantities during long normal operation and disrupt the functionality of fault-tolerant circuits at the onset of the most responsible, emergency mode. Protection against hidden faults is ensured by the checkability of the circuits, which is aimed at the manifestation of faults and must therefore be supported in conjunction with the trustworthiness of the results, taking into account the decrease in trustworthiness when faults manifest themselves. The problem of increasing the checkability of the FPGA component in normal operation and the trustworthiness of the results calculated in emergency mode is solved by using the natural version redundancy inherent in the LUT-oriented (Look-Up Table) architecture. This redundancy manifests itself in the existence of many versions of the program code that preserve the functionality of the FPGA component on the same hardware implementation. The checkability of the FPGA component and the trustworthiness of the calculated results are considered with respect to the typical faults of the LUT-oriented architecture. These faults are investigated from the standpoint of the consistency of their manifestation and masking in normal and emergency modes, respectively, across versions of the program code. Faults are identified with bit distortions in the memory of the LUT units. Bits that are observed only in emergency mode are potentially dangerous, because they can hide faults in normal mode. Moving potentially dangerous bits to checkable positions observed in normal mode is performed by choosing appropriate versions of the program code and organizing the operation of the FPGA component on several versions. Experiments carried out with an FPGA component, using the example of an iterative array multiplier of binary codes, have shown the effectiveness of using the natural version redundancy of the LUT-oriented architecture to solve the problem of hidden faults.
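
One concrete source of such version redundancy is input permutation: permuting the LUT inputs and re-ordering the truth table accordingly yields a different program code with identical functionality, so different memory bits become observable. A Python sketch over an arbitrary example function (the mask value is an assumption, and this illustrates only the idea, not the authors' full method):

from itertools import permutations

def permuted_version(mask: int, k: int, perm) -> int:
    """Re-order a k-input LUT truth table (its 'program code') so that
    input i of the new version carries the signal of old input perm[i].
    The implemented function is unchanged; only the placement of bits
    in LUT memory differs."""
    new_mask = 0
    for x in range(2 ** k):                       # every input assignment
        xbits = [(x >> i) & 1 for i in range(k)]
        addr = sum(xbits[perm[i]] << i for i in range(k))
        new_mask |= ((mask >> x) & 1) << addr
    return new_mask

mask = 0x6E24                                     # arbitrary 4-input function
codes = {permuted_version(mask, 4, p) for p in permutations(range(4))}
print(f"{len(codes)} distinct program codes implement one function")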


2021, Vol 4 (2), pp. 192-201
Author(s): Denys Valeriiovych Petrosiuk, Olena Oleksandrivna Arsirii, Oksana Yurievna Babilunha, Anatolii Oleksandrovych Nikolenko

The application of deep learning convolutional neural networks to the problem of automated facial expression recognition and determination of a person's emotions is analyzed. It is proposed to use the advantages of the transfer approach to training deep convolutional neural networks to overcome the insufficient volume of data in sets of images with different facial expressions; most of these datasets are labeled in accordance with a facial action coding system based on units of human facial movement. The developed technology of transfer learning for the public deep convolutional network families DenseNet and MobileNet, with subsequent "fine tuning" of the network parameters, reduced the training time and computational resources needed for the facial expression recognition task without losing the reliability of motor unit recognition. In developing this deep learning technology, the following tasks were solved. First, the choice of publicly available convolutional neural networks of the DenseNet and MobileNet families, pre-trained on the ImageNet dataset, was substantiated, taking into account the peculiarities of transfer learning for the task of recognizing facial expressions and determining emotions. Second, a deep convolutional neural network model and a method for its training were developed for the problems of recognizing facial expressions and determining human emotions, taking into account the specifics of the selected pre-trained networks. Third, the developed deep learning technology was tested, and, finally, the resource intensity and the reliability of motor unit recognition on the DISFA set were assessed. The proposed technology can be used in the development of systems for the automatic recognition of facial expressions and the determination of human emotions for both stationary and mobile devices. Further modification of systems for recognizing the motor units of human facial activity, in order to increase recognition reliability, is possible using augmentation techniques.
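
A Keras sketch of the two-stage transfer scheme the abstract describes (frozen ImageNet backbone, then "fine tuning" at a low learning rate); MobileNetV2, the head sizes, the number of unfrozen layers and the 12-output multi-label head for DISFA action units are assumptions of this illustration:

import tensorflow as tf
from tensorflow.keras import layers

# Stage 1: frozen ImageNet backbone, train only the new head.
base = tf.keras.applications.MobileNetV2(include_top=False,
                                         weights="imagenet",
                                         input_shape=(224, 224, 3))
base.trainable = False

inputs = layers.Input(shape=(224, 224, 3))
x = base(inputs, training=False)
x = layers.GlobalAveragePooling2D()(x)
x = layers.Dropout(0.3)(x)
outputs = layers.Dense(12, activation="sigmoid")(x)  # multi-label AU head
model = tf.keras.Model(inputs, outputs)
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="binary_crossentropy", metrics=["binary_accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)

# Stage 2: fine tuning -- unfreeze the top of the backbone, low LR.
base.trainable = True
for layer in base.layers[:-30]:           # keep the early layers frozen
    layer.trainable = False
model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
              loss="binary_crossentropy", metrics=["binary_accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=5)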

