Dynamics of Complex Systems - XXI century
Latest Publications


TOTAL DOCUMENTS: 12 (five years: 12)

H-INDEX: 0 (five years: 0)

Published by Publishing House "Radiotekhnika"

ISSN: 1999-7493

Author(s):  
M. Kiwan ◽  
D.V. Berezkin ◽  
M. Raad ◽  
B. Rasheed

Statement of the problem. One of the main tasks today is preventing accidents in complex systems, which requires determining their causes. To this end, several theories and models of accident causality are being developed. Traditional approaches to accident modeling are insufficient for analyzing accidents that occur in complex environments such as socio-technical systems, since an accident is not simply the result of an individual component failure or human error. More systematic methods for investigating and modeling accidents are therefore needed. Purpose. To conduct a comparative analysis of accident models in complex systems, identify the strengths and weaknesses of each model, and study the feasibility of their use for risk management in socio-technical systems. Results. The paper analyzes the main approaches to accident modeling and their limitations in capturing the cause-and-effect relationships and dynamics of modern complex systems. Methodologies for safety and accident modeling in socio-technical systems based on systems theory are discussed. The complexity of socio-technical systems requires new methodologies for modeling the development of emergencies; the socio-technical system must be considered as a whole, with simultaneous attention to its social and technical aspects. Accident models must take into account social structures and processes of social interaction, the cultural environment, individual human characteristics such as abilities and motivation, as well as the engineering design and technical aspects of the systems. Practical importance. Based on an analysis of various accident-modeling techniques and of examples of their application to several past accidents, it is concluded that these techniques need to be improved.
The result has been the emergence of hybrid models of risk management in socio-technical systems, which we will consider in detail in our next work.


Author(s):  
O.B. Rogova ◽  
V.Yu. Stroganov ◽  
D.V. Stroganov

The article analyzes the behavior of controlled simulation models used to find extreme values of a functional defined through an average integral estimate. It is assumed that the search optimization algorithm is embedded directly in the model. Of interest is the problem of estimating the duration of the control interval, i.e., the time the system is simulated under different parameters in order to select the search direction. The smaller the control interval, the lower the accuracy of the estimates of the functional and, accordingly, the lower the probability of choosing the correct search direction. However, under an overall limit on simulation time, a shorter interval lets the search algorithm perform more steps, which increases the rate of convergence to the extreme value. The choice of the control interval duration is therefore nontrivial. The aim of the work is to build a model of the controlled process, i.e., the process of changing the controlled parameters, in order to estimate the rate of convergence of the optimization algorithm as a function of the control interval duration. Analyzing the convergence of the optimization process directly on the simulation model is practically impossible due to the nonstationary nature of all ongoing processes. The article therefore introduces a class of conditionally nonstationary Gaussian processes on which the efficiency of a controlled simulation model is evaluated. It is assumed that a symmetric design is used to choose the direction and that all realizations of the nonstationary process at the current point share the same initial state. Analysis of this model yields analytical expressions for estimating the accuracy of the extremum position as a function of the control interval duration.
The results obtained make it possible, under an overall limit on the time available for experiments with the simulation model, to construct a sequential analysis plan that improves the accuracy of the optimization solution.
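
The trade-off described above can be illustrated with a toy sketch (a hypothetical one-dimensional model, not the authors' system): a search on a noisy functional J(x) = -(x - 2)^2, observed through an average-integral estimate whose noise shrinks as the control interval T grows, under a fixed total simulation budget.

```python
import random
import statistics

random.seed(0)

def noisy_estimate(x, T):
    """Estimate of J(x) over a control interval of length T (more time, less noise)."""
    return -(x - 2.0) ** 2 + random.gauss(0.0, 1.0 / T ** 0.5)

def search(T, total_time=400.0, step=0.25, delta=0.1):
    """Symmetric design: estimate J at x +/- delta, step toward the larger value."""
    x = 0.0
    for _ in range(int(total_time / (2 * T))):  # each step costs 2T of model time
        if noisy_estimate(x + delta, T) > noisy_estimate(x - delta, T):
            x += step
        else:
            x -= step
    return x

# A short interval yields many inaccurate steps, a long one few accurate steps;
# a moderate interval typically ends closest to the optimum x* = 2.
errors = {T: statistics.mean(abs(search(T) - 2.0) for _ in range(200))
          for T in (0.5, 4.0, 32.0)}
for T, err in errors.items():
    print(f"T = {T:5.1f}   mean |x - x*| = {err:.2f}")
```

With T = 32 the budget allows only six steps, so the search cannot even reach the optimum; this is the qualitative effect the article quantifies analytically.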


Author(s):  
D.V. Semenov ◽  
D.S. Gudilin

Formulation of the problem. When designing waveguides, spatial solutions are often required; however, from a methodological (including educational) point of view, mostly linearly extended structures with various cross-sectional shapes are considered. The aim of this work is to treat a waveguide as a structure composed of segments bent in a plane with a certain radius. This solution is general for a plane-oriented waveguide path and, in the limit of an infinitely large radius, converges to the solution for a straight waveguide. Practical significance. The presented solution of the Helmholtz equation for electromagnetic waves in an annular (segmented-annular) waveguide can serve as a methodological basis for calculating a spatially oriented rectangular waveguide path. A step-by-step solution of the Helmholtz equation for a bent rectangular waveguide is presented, and a methodology for determining the parameters of the electromagnetic field in a bent homogeneous waveguide is given. Expressions are derived for the electromagnetic field components of E- and H-type waves. General solutions are obtained that, at an infinitely large bending radius, converge to the harmonic functions characteristic of solutions for rectilinear waveguides. The technique can be applied to analytical evaluation, numerical calculation, and spatial modeling of waveguide parameters, as well as to the design of the waveguide path as a whole. The availability of relatively simple analytical expressions greatly facilitates the analysis and optimization of waveguide paths and the construction of software and computing systems for their assessment, modeling, and development.
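
The structure of such a solution can be sketched in standard notation (the textbook form for an annular geometry, not a reproduction of the authors' derivation). Separating variables in the Helmholtz equation in cylindrical coordinates (r, φ, z):

```latex
% Helmholtz equation and separated ansatz in cylindrical coordinates
\nabla^{2}\psi + k^{2}\psi = 0, \qquad
\psi(r,\varphi,z) = R(r)\, e^{i\nu\varphi}\, e^{-i k_z z}

% The radial part reduces to Bessel's equation, with general solution
R(r) = A\, J_{\nu}(k_r r) + B\, Y_{\nu}(k_r r), \qquad
k_r^{2} = k^{2} - k_z^{2}
```

As the bending radius R₀ grows (with arc length s = R₀φ held fixed), the cylinder functions J_ν and Y_ν pass over into trigonometric functions of k_r s, recovering the harmonic solutions of the straight rectangular waveguide, consistent with the limiting behavior stated in the abstract.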


Author(s):  
E.Yu. Silantieva ◽  
V.A. Zabelina ◽  
G.A. Savchenko ◽  
I.M. Chernenky

This study presents an analysis of autoencoder models for detecting anomalies in network traffic. Training results were assessed using open-source software on the UNB ICS IDS 2017 dataset. As deep learning models, we considered standard and variational autoencoders, as well as Deep SSAD approaches for a standard autoencoder (AE-SAD) and a variational autoencoder (VAE-SAD). The constructed models demonstrated different levels of anomaly detection accuracy; the best result, an AUC of 98%, was achieved with the VAE-SAD model. In the future, we plan to continue analyzing the characteristics of neural network models in cybersecurity problems. One direction is to study the influence of the structure of network traffic on the performance of deep learning models. Based on the results, we plan to develop an approach for robust identification of security events based on deep learning methods.
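
The principle behind all four models is reconstruction-error thresholding. A minimal sketch (a *linear* autoencoder, i.e. PCA, on synthetic data, rather than the paper's deep models or dataset): train on normal traffic only, then flag records whose reconstruction error exceeds a threshold.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "normal traffic": 2 latent factors embedded in 10 features.
latent = rng.normal(size=(500, 2))
mixing = rng.normal(size=(2, 10))
normal = latent @ mixing + 0.05 * rng.normal(size=(500, 10))

# Synthetic "attack traffic": does not follow the normal factor structure.
attacks = 2.0 * rng.normal(size=(50, 10))

# "Train": fit a rank-2 linear encoder/decoder on normal data only.
mean = normal.mean(axis=0)
_, _, vt = np.linalg.svd(normal - mean, full_matrices=False)
components = vt[:2]                           # encoder: top-2 directions

def reconstruction_error(x):
    code = (x - mean) @ components.T          # encode
    recon = code @ components + mean          # decode
    return np.square(x - recon).sum(axis=1)   # per-sample squared error

# Threshold at the 99th percentile of training-set errors.
threshold = np.percentile(reconstruction_error(normal), 99)
flagged = reconstruction_error(attacks) > threshold
print(f"flagged {flagged.mean():.0%} of attack records as anomalous")
```

The deep and variational models in the study replace the linear encoder/decoder with neural networks, but the detection rule is the same.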


Author(s):  
D.V. Berezkin ◽  
Shi Runfang ◽  
Li Tengjiao

This experiment compared the performance of four machine learning algorithms in detecting bank card fraud. The strong class imbalance of the training sample was taken into account, as well as the differences in transaction amounts, and the ability of different machine learning methods to recognize fraudulent behavior was assessed in light of these features. It was found that a method that scores well on classification metrics is not necessarily the best in terms of economic losses; logistic regression is a good illustration of this. The results show that detecting bank card fraud cannot be treated as a simple classification problem, and that AUC is not the most appropriate metric for fraud detection tasks. The final choice of model depends on the needs of the bank, i.e., on which of the two types of errors (FN, FP) leads to greater economic losses. If the bank considers the loss from identifying fraudulent transactions as regular ones to be dominant, it should choose the algorithm with the lowest FN value, which in this experiment is AdaBoost. If the bank considers the negative impact of identifying regular transactions as fraudulent to be equally important, it should choose an algorithm with relatively small FN and FP values; in this experiment, the random forest performs better overall. Further, by evaluating the economic losses caused by false positives (identifying an ordinary transaction as fraudulent), a quantitative analysis of the losses incurred by each algorithm can be used to select the optimal model.
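
The selection rule described above can be sketched as follows (all counts and per-error costs are hypothetical, not the paper's results): score each model by the expected economic loss implied by its confusion matrix rather than by AUC.

```python
# cost_fn: loss when a fraudulent transaction passes as legitimate (missed fraud);
# cost_fp: loss when a legitimate transaction is blocked (customer friction).
def economic_loss(fn, fp, cost_fn=500.0, cost_fp=20.0):
    """Total loss implied by a model's confusion-matrix counts."""
    return fn * cost_fn + fp * cost_fp

# Hypothetical confusion-matrix counts for two classifiers on the same test set.
models = {
    "adaboost_like":      {"fn": 12, "fp": 400},  # fewest missed frauds, many alarms
    "random_forest_like": {"fn": 20, "fp": 90},   # balanced FN/FP
}

for name, cm in models.items():
    print(name, economic_loss(cm["fn"], cm["fp"]))

best = min(models, key=lambda m: economic_loss(**models[m]))
print("lower expected loss:", best)
```

Changing the cost ratio changes the winner, which is exactly the paper's point: the "best" model depends on which error type the bank deems more expensive.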


Author(s):  
M. Kiwan ◽  
D.V. Berezkin ◽  
A. Hamed

Statement of the problem. The increasing complexity of high-tech systems leads to potentially disastrous failure modes and new kinds of safety issues, which has driven the development of new approaches to accident modeling and risk management. In recent years, extended and hybrid approaches have been gaining popularity due to their effectiveness in decision-making for the design and operation of socio-technical systems. The proliferation of these approaches makes it difficult to select the appropriate one for a particular system. Purpose. To conduct a comparative analysis of various hybrid approaches to accident modeling in complex systems, identify the strengths and weaknesses of each, and study the feasibility of their use for risk management in socio-technical systems. Results. The paper analyzes the main approaches to accident modeling (FRAM, STAMP, fault tree, AcciMap) and their limitations in determining the cause-and-effect relationships and dynamics of modern complex systems. New approaches to safety and accident modeling in socio-technical systems are discussed; these approaches combine several models into one hybrid approach: FuzzyFTA, FRAM-ANP, ACAT-FRAM, STAMP-HFACS, AcciMap-ANP, and SD-ET-FT-ANN. A review of hybrid accident-modeling approaches in complex systems is given, identifying the weaknesses, strengths, and application field of each. Practical importance. This study can serve as a guide for researchers in accident modeling and risk management in socio-technical systems. It also concludes that different risk management approaches should be used depending on the type of risk and the complexity of the system.
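
Of the approaches compared, the fault tree is the most mechanical, so it makes a compact illustration (a hypothetical system with assumed probabilities, not an example from the paper): AND gates multiply independent basic-event probabilities, OR gates combine them as 1 - Π(1 - p).

```python
def and_gate(*probs):
    """All inputs must fail (independent events)."""
    result = 1.0
    for p in probs:
        result *= p
    return result

def or_gate(*probs):
    """At least one input fails (independent events)."""
    survive = 1.0
    for p in probs:
        survive *= 1.0 - p
    return 1.0 - survive

# Assumed annual basic-event probabilities.
sensor_fail, backup_fail, operator_miss = 0.01, 0.05, 0.1

# Top event: some sensor channel fails (OR) AND the operator misses the
# manual check -- a purely component-level causal chain.
top = and_gate(or_gate(sensor_fail, backup_fail), operator_miss)
print(f"top-event probability: {top:.4%}")
```

The hybrid approaches surveyed (FuzzyFTA, STAMP-HFACS, etc.) exist precisely because such component-level chains cannot represent systemic interactions, feedback, and organizational factors.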


Author(s):  
R.A. Dorokhin ◽  
O.A. Bezrodnykh ◽  
S.N. Smirnov ◽  
V.A. Maystrenko

The paper considers the task of studying the features of the protection system of the Astra Linux 1.6 SE operating system (hereinafter OS Astra 1.6 SE). The basic principles of access control, the functional features of the protection modules, the settings of some operating system configuration files, and the types and features of classification labels are described. The result of this work is a proposal for configuring the basic access control mechanisms without using a graphical shell, a study of the operating principles of these mechanisms, and the use of kernel modules and configuration files for designing a security system for computer facilities by information protection units. This operating system has a distinctive security architecture, since it includes mechanisms for mandatory access control that deny or grant access depending on the user's authority. Information is exchanged and processed using classification labels, which make it possible to separate information flows of different mandatory contexts. These labels are written in accordance with GOST R 58256-2018 "Information security. Information flow control in the information system. Format of classification marks". The paper analyzes traffic in different mandatory-context sessions and examines the behavior of information flows during network interaction between computers running OS Astra Linux 1.6 SE with the configured security system. Data is exchanged both between users in the same session context and between different contexts on different computers.
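
The mandatory access control principle behind the classification labels can be sketched schematically (a generic lattice model; this is not Astra Linux's actual implementation or API): a subject may read an object only if its level dominates the object's level and its category set contains the object's categories.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Label:
    level: int               # hierarchical classification level
    categories: frozenset    # non-hierarchical compartments

def dominates(subject: Label, obj: Label) -> bool:
    """Lattice 'dominates' relation used to grant or deny read access."""
    return subject.level >= obj.level and obj.categories <= subject.categories

secret_ab = Label(2, frozenset({"a", "b"}))
secret_a = Label(2, frozenset({"a"}))
public = Label(0, frozenset())

print(dominates(secret_ab, secret_a))  # higher context may read lower
print(dominates(secret_a, secret_ab))  # compartment "b" missing -> denied
print(dominates(public, secret_a))     # lower level -> denied
```

Network exchange between computers, as analyzed in the paper, applies the same comparison to the labels carried by the traffic of each mandatory-context session.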


Author(s):  
M.A. Basarab ◽  
B.S. Lunin ◽  
E.A. Chumankin

Wave solid-state gyroscopes (WSG) are among the most modern navigation devices. Based on the precession of elastic waves in thin-walled axisymmetric bodies, WSGs have a simple design with only two or three fixed parts and offer a number of advantages over other types of gyroscopes: long service life; small random error; resistance to severe operating conditions (overload, vibration, gamma radiation); relatively small dimensions, weight, and power consumption; and preservation of inertial information during short-term power outages. From the point of view of practical application and the technologies used, three main groups of WSG can be distinguished. High-precision wave solid-state gyroscopes use high-quality quartz resonators (with a Q-factor above 1·10⁷), contactless sensors and actuators, and complex electronic control systems; their application today is, for various reasons, limited to space technology, which requires both high precision and a long working life. Low-accuracy micromechanical devices are intended for mass use (laptop computers, toys, industrial equipment, etc.); integrating micromechanical WSGs with satellite systems makes it possible to create small, inexpensive navigation systems for widespread use. This market segment is developing very quickly, but producing such devices requires a highly developed microelectronic industry. An intermediate group consists of general-purpose sensors with metal resonators. Although these devices are larger than micromechanical ones, their production technology is much simpler: metal resonators with a Q-factor of (3…5)·10⁴ can be manufactured on universal metal-cutting equipment, the devices have a simple design, do not require a high vacuum in the housing, and can use widely available radio components in the control units.
As a result, devices of this group, with their low power consumption and long working life, have a low production cost. Moreover, the comparatively large dimensions of the resonator allow precise tuning, which makes it possible to sharply increase the accuracy of the gyro instruments. From this point of view, a general-purpose WSG with a metal resonator is the most promising device to replace the rotary-type electromechanical gyroscopes used today, and its production can be quickly mastered by the domestic industry. The development of such sensors requires solving a number of scientific and technical problems. Since all the main characteristics of such a device are determined by the properties of the resonator, special attention must be paid to its design and production technology. One of the most difficult and expensive operations in WSG technology is balancing the resonator to eliminate the mass imbalance that arises during manufacture due to inevitable deviations from the ideal axisymmetric shape (nonuniform wall thickness, displacement of the centers of the outer and inner surfaces, etc.). A nonzero 4th harmonic of the mass imbalance splits the natural frequency of the resonator, leading to random errors in the WSG; a number of technologies for eliminating this mass defect are described in the literature [3-5]. Balancing the resonator with respect to the first three harmonics of the mass defect is much more difficult: these harmonics cause oscillations of the resonator's center of mass during gyroscope operation and additional dissipation of oscillation energy at the attachment points. This makes the Q-factor of the resonator depend on the orientation of the standing wave and, consequently, produces a systematic error of the device.
Thus, the aim of this work is to develop a technique and equipment for balancing metal resonators with respect to the first three harmonics of the mass defect, suitable for use in the production of general-purpose WSGs.
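
The circumferential harmonics of the mass defect discussed above can be extracted from rim measurements by a Fourier transform. A sketch with assumed numbers (illustrating the decomposition only, not the authors' balancing equipment):

```python
import numpy as np

rng = np.random.default_rng(1)

n = 64                                   # measurement points around the circumference
theta = 2 * np.pi * np.arange(n) / n
# Hypothetical wall-thickness deviation, micrometres: 1st + 4th harmonic + noise.
# The 4th harmonic splits the natural frequency; harmonics 1-3 couple to the
# mounting and degrade the Q-factor, as described in the abstract.
deviation = (2.0 * np.cos(theta - 0.3)
             + 0.8 * np.cos(4 * theta + 1.1)
             + 0.05 * rng.normal(size=n))

spectrum = np.fft.rfft(deviation) / n
amplitudes = 2.0 * np.abs(spectrum)      # amplitude of each circumferential harmonic

for k in (1, 2, 3, 4):
    print(f"harmonic {k}: {amplitudes[k]:.2f} um")
```

Balancing then consists of removing material so that the amplitudes of the offending harmonics (here the 1st and 4th) vanish; the phase of each FFT coefficient indicates where on the rim to remove it.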


Author(s):  
D.V. Stroganov ◽  
V.M. Chernenky

The formal formulation of the problem of evaluating the effectiveness of search optimization procedures on simulation models of regenerating processes under strict time constraints is completed. A procedure for parametric tuning of the optimization algorithm is developed, which sequentially refines the values of the functional specified by the model and redistributes the remaining regeneration cycles of the model among the investigated values of the controlled parameter. The problem of maximizing the probability of correct choice is posed and solved, i.e., selecting, from the results of the simulation experiment on the model of the regenerating process, the value of the controlled parameter that delivers the true maximum of the investigated functional. By passing to the Lagrangian, the constrained optimization problem is reduced to an unconstrained one, and analytical expressions are obtained for the optimal distribution of regeneration cycles. It is shown that the simulation model with the embedded search optimization algorithm provides solutions that are quite efficient in terms of computational cost. As a result, a method is proposed for simply extending the developed simulation models by including a search optimization algorithm, which makes it possible to move from modeling a system to optimizing its objective function over a given region of controlled parameters.
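
The cycle-redistribution idea can be sketched as follows (a toy sequential allocation with hypothetical means, not the paper's analytical optimum): spend a pilot budget on every candidate value, then give each remaining regeneration cycle to the candidate whose mean estimate is currently least certain, and finally select the best estimated value.

```python
import random
import statistics

random.seed(2)

CANDIDATES = {0.2: 1.0, 0.5: 1.4, 0.8: 1.3}   # hypothetical true means J(x)

def run_cycle(x):
    """One regeneration cycle: an independent noisy observation of J(x)."""
    return random.gauss(CANDIDATES[x], 0.5)

budget, pilot = 300, 20
samples = {x: [run_cycle(x) for _ in range(pilot)] for x in CANDIDATES}
remaining = budget - pilot * len(CANDIDATES)

# Each extra cycle goes to the candidate with the largest standard error of
# its mean estimate, sequentially sharpening the comparison.
for _ in range(remaining):
    x = max(samples,
            key=lambda c: statistics.stdev(samples[c]) / len(samples[c]) ** 0.5)
    samples[x].append(run_cycle(x))

best = max(samples, key=lambda c: statistics.mean(samples[c]))
print("selected parameter:", best)
```

The paper derives the optimal (rather than heuristic) distribution of cycles analytically via the Lagrangian; this sketch only shows the mechanics of redistributing a fixed budget.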


Author(s):  
I.S. Markov ◽  
N.V. Pivovarova

Formulation of the problem. The problem of using large neural networks with complex architectures on modern devices is considered. Such networks work well, but their speed is sometimes unacceptably low, and the memory required to place them on a device is not always available. Approaches to these problems based on pruning and quantization are briefly described. An unconventional type of neural network is proposed that can meet requirements on memory footprint, speed, and quality of work, together with approaches to training networks of this type. The aim of the work is to describe modern approaches to reducing the size of neural networks with minimal loss of quality and to propose an alternative type of network of small size and high accuracy. Results. The proposed type of neural network has many advantages in terms of size and the flexibility of layer settings. By varying the parameters of the layers, one can control the size, speed, and quality of the network; however, the greater the accuracy, the greater the memory footprint. To train such a small network, it is proposed to use specific techniques that allow it to learn complex dependencies from a larger and more complex network. After this training procedure, only the small network is used, which can then be deployed on low-power devices with little memory. Practical significance. The described methods make it possible to reduce the size of networks with minimal loss of quality. The proposed architecture makes it possible to train simpler networks without applying size-reduction techniques to them. These networks can work with various data, be it images, text, or other information encoded as a numerical vector.
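
One standard technique matching the description of learning from a larger network is knowledge distillation; whether it is the paper's exact procedure is an assumption. A sketch of the combined loss (soft targets at temperature T plus the hard label), with hypothetical logits:

```python
import numpy as np

def softmax(z, temperature=1.0):
    z = np.asarray(z, dtype=float) / temperature
    e = np.exp(z - z.max())
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, true_label, T=4.0, alpha=0.7):
    """alpha-weighted mix of cross-entropy against the teacher's softened
    outputs (scaled by T^2) and ordinary cross-entropy against the hard label."""
    soft_teacher = softmax(teacher_logits, T)
    soft_student = softmax(student_logits, T)
    soft_ce = -np.sum(soft_teacher * np.log(soft_student))
    hard_ce = -np.log(softmax(student_logits)[true_label])
    return alpha * (T ** 2) * soft_ce + (1 - alpha) * hard_ce

teacher = [6.0, 1.0, -2.0]            # confident large network
good_student = [5.0, 0.5, -1.5]       # mimics the teacher
bad_student = [0.0, 4.0, -1.0]        # disagrees with teacher and label

print(distillation_loss(good_student, teacher, true_label=0))
print(distillation_loss(bad_student, teacher, true_label=0))
```

The loss penalizes deviation from the teacher's full output distribution, not just the hard label, which is what lets a small network absorb the complex dependencies learned by the large one.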

