THE TECHNIQUE OF SUBSTANTIATING THE RATIONAL COMPOSITION OF INFORMATION SECURITY TOOLS WITH CONSTRAINED RESOURCES

Author(s):  
Iurii I. Sineshchuk ◽  
Tatiana I. Davydova ◽  

Finding the optimal structure of an information security system is an important task, complicated by its uncertain, stochastic and nonlinear nature, especially if resources are constrained. The article considers a mathematical model for determining the cost of the damage prevented by information security tools and the cost of their installation and maintenance. The optimization criterion is the minimum cost of the prevented damage. The task variables are the numbers of security tools of different types installed in the security system. The authors propose a methodology for substantiating a rational composition of information security tools that takes economic constraints into account.
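To make the setting concrete, the following minimal sketch (not the authors' model) enumerates compositions of a few hypothetical tool types, discards those that exceed an assumed budget, and keeps the composition that minimizes the sum of ownership cost and unprevented damage; all tool names, unit costs and damage figures are illustrative placeholders.

```python
# Illustrative sketch only: choose how many units of each security tool type to
# deploy so that residual damage plus ownership cost is minimal while the total
# purchase/maintenance spend stays within the budget. All numbers are hypothetical.
from itertools import product

# Hypothetical tool types: unit cost, damage prevented per unit, max useful units
tools = {
    "firewall": {"unit_cost": 4.0, "prevented_per_unit": 9.0,  "max_units": 3},
    "ids":      {"unit_cost": 6.0, "prevented_per_unit": 11.0, "max_units": 2},
    "dlp":      {"unit_cost": 8.0, "prevented_per_unit": 13.0, "max_units": 2},
}
BUDGET = 20.0        # economic constraint on installation and maintenance
TOTAL_DAMAGE = 60.0  # expected damage with no protection at all

def evaluate(counts):
    """Return (spend on tools, residual damage) for a given composition."""
    spend = sum(n * tools[t]["unit_cost"] for t, n in counts.items())
    prevented = sum(n * tools[t]["prevented_per_unit"] for t, n in counts.items())
    return spend, max(TOTAL_DAMAGE - prevented, 0.0)

best = None
names = list(tools)
for combo in product(*[range(tools[t]["max_units"] + 1) for t in names]):
    counts = dict(zip(names, combo))
    spend, residual = evaluate(counts)
    if spend > BUDGET:                  # violates the economic constraint
        continue
    objective = spend + residual        # total cost: tools plus unprevented damage
    if best is None or objective < best[0]:
        best = (objective, counts)

print("rational composition:", best[1], "objective:", round(best[0], 2))
```

For realistically sized tool catalogs the exhaustive enumeration would be replaced by integer programming or a heuristic search, but the objective and the budget constraint keep the same shape.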

Author(s):  
N. Koshevoy ◽  
E. Kostenko ◽  
V. Muratov

The planning of an experiment makes it possible to obtain a mathematical model with minimal expenditure of cost and time. The cost of implementing an experiment is significantly affected by the order in which factor levels are changed, so it is necessary to find an order of implementing the experiments that provides the minimum cost (time) of conducting a multivariate experiment. This task is especially relevant when studying long and expensive processes. The purpose of this article is the further development of a methodology for cost- (time-) optimal experiment planning, which includes a set of methods for optimizing experimental designs together with hardware and software for their implementation. Object of study: processes of cost optimization of three-level plans for multivariate experiments. Subject of research: a method for optimizing the cost and time of experimental designs based on the jumping frog method. Experimental research methods are widely used to optimize production processes. One of the main goals of an experiment is to obtain the maximum amount of information about the influence of the studied factors on the production process, after which a mathematical model of the object under study is built; these models must be obtained at minimal cost and time. The design of the experiment makes this possible. For this purpose, a method and software were developed for optimizing three-level plans using the jumping frog method. Three-level plans are used in constructing mathematical models of the objects and systems under study. Known methods for synthesizing three-level plans that are optimal in cost and time are analyzed. The operability of the algorithm was tested in a study of silicon surface roughness during deep plasma-chemical etching of MEMS elements. Its effectiveness is shown in comparison with the particle swarm, tabu search, and branch-and-bound methods. Using the developed method and software for optimizing three-level plans with the jumping frog method, one can achieve a large gain compared with the initial experimental plan, results that are optimal or close to optimal compared with the particle swarm, tabu search, and branch-and-bound methods, and a high speed of solving the optimization problem in comparison with previously developed optimization methods for three-level experimental designs.
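As an illustration of the cost model being optimized, the sketch below (an assumed formulation, not the authors' software) scores a run order of a full 3^3 plan by the cost of level changes between consecutive runs and reduces that cost with a simple pairwise-swap descent; the jumping frog (shuffled frog leaping) search described in the article would replace the swap loop with a population-based one. The per-factor change costs are hypothetical.

```python
# Sketch of run-order cost optimization for a three-level plan (assumed numbers).
from itertools import product
import random

# Hypothetical cost of changing each factor's level between two consecutive runs;
# factor 1 is the most expensive to re-adjust.
CHANGE_COST = [5.0, 2.0, 1.0]

def plan_cost(order, runs):
    """Total cost of executing the runs in the given order."""
    cost = 0.0
    for prev, cur in zip(order, order[1:]):
        for f, c in enumerate(CHANGE_COST):
            if runs[prev][f] != runs[cur][f]:
                cost += c
    return cost

runs = list(product(range(3), repeat=3))   # full 3^3 three-level plan
order = list(range(len(runs)))
random.seed(1)
random.shuffle(order)                      # an arbitrary initial run order

initial = plan_cost(order, runs)
current = initial
improved = True
while improved:                            # simple pairwise-swap descent
    improved = False
    for i in range(len(order) - 1):
        for j in range(i + 1, len(order)):
            order[i], order[j] = order[j], order[i]
            new_cost = plan_cost(order, runs)
            if new_cost < current:
                current = new_cost
                improved = True
            else:
                order[i], order[j] = order[j], order[i]   # revert the swap

print(f"run-order cost: initial {initial:.1f} -> optimized {current:.1f}")
```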


Author(s):  
Bogdan Korniyenko ◽  
Liliya Galata

This article presents simulation modeling as a way to study the behavior of an information security system. Graphical Network Simulator (GNS3) is used for modeling such a system, and Kali Linux is used for penetration testing and security audit. GNS3 was selected to implement the project: it is a graphical network emulator that allows simulating, on a local computer, a virtual network of devices from more than 20 different manufacturers, connecting the virtual network to a real one, and adding a full computer to the network; third-party applications for network packet analysis are also supported. Depending on the hardware platform on which GNS3 is run, it is possible to build complex projects consisting of Cisco routers, Cisco ASA devices and Juniper routers, as well as servers running network operating systems. Using modeling in the design of computing systems, one can: estimate the bandwidth of the network and its components; identify vulnerabilities in the structure of the computing system; compare different organizations of a computing system; forecast the prospective development of the computer system; predict future requirements for network bandwidth; estimate the performance and the required number of servers in the network; compare various options for upgrading the computing system; and estimate the impact of software upgrades, workstation or server capacity, and changes of network protocols on the computing system. Studying the parameters of a computing system with different characteristics of its individual components makes it possible to select network and computing equipment with regard to its performance, quality of service, reliability and cost, since the cost of a single port in active network equipment varies considerably depending on the manufacturer, the technology used, reliability and manageability. Modeling can minimize the cost of equipment for the computing system. Modeling becomes effective when the number of workstations is 50-100, and when it exceeds 300 the total savings can reach 30-40% of the project cost.


Author(s):  
V. Martynov ◽  
O. Martynova ◽  
S. Makarova ◽  
O. Vietokh ◽  
...  

An analysis of existing methods for calculating concrete compositions was carried out. The characteristics and sequence of the calculation-experimental and experimental-calculation methods for selecting concrete compositions are provided, and the advantages and disadvantages of each method are described. These methods are generalized by the general systemic PDCA cycle (Deming cycle), determined by the sequence of actions P (plan) ‒ D (do) ‒ C (check) ‒ A (act). It was established that for calculating the compositions of cellular concrete there is no universal method that would simultaneously ensure the achievement of the required strength and average density. On this basis the aim of the work was formulated: to develop a method for calculating the composition of cellular concrete, based on experimental-statistical models, that ensures the production of concrete with the required properties while minimizing the cost of raw materials. A calculation algorithm, a block diagram and a computer program for designing cellular concrete compositions based on experimental-statistical modeling were developed. Using the block diagram for calculating concrete compositions as an example, the sequence of calculations is described in detail. The essence of the calculations is that the three-factor mathematical model of the property of cellular concrete that is to be guaranteed is reduced to a second-order equation; the roots of the equation are determined, substituted into the mathematical model, and the composition of concrete is determined in natural values of the variable factors. The cost of the composition is then calculated and entered into a data array, after which one of the factors is changed by the set step and the cycle repeats. At the last stage, the formed data array is processed and the composition with the minimum cost of materials is determined. The developed software was tested by processing and calculating a three-factor experiment. As a result, the composition of cellular concrete that provides the required strength of foam concrete at a minimum cost of materials was determined.
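The described calculation scheme can be sketched as follows; the regression coefficients, the required strength, the coded-to-natural factor mapping and the unit costs below are made-up placeholders, not the experimental-statistical model from the study. The sketch fixes the quadratic strength model at the required value, solves the resulting second-order equation in the first factor for every grid point of the other two factors, converts each feasible root to natural values and keeps the cheapest composition.

```python
# Sketch of the minimum-cost composition search (hypothetical coefficients).
import math

# Hypothetical coefficients of the three-factor quadratic (ES) strength model
b = {"0": 30.0, "1": 4.0, "2": 3.0, "3": -2.0,
     "11": -1.5, "22": -1.0, "33": -0.5,
     "12": 0.8, "13": -0.4, "23": 0.6}

R_REQ = 32.0                                 # required strength (illustrative)
UNIT_COST = (80.0, 15.0, 5.0)                # cost per unit of each component dosage
CENTER, STEP = (400.0, 0.40, 1.2), (50.0, 0.05, 0.3)   # coded -> natural mapping

def natural(x):
    """Convert coded factor values (-1..1) to natural units."""
    return tuple(c + s * xi for c, s, xi in zip(CENTER, STEP, x))

best = None
for i in range(-10, 11):
    for j in range(-10, 11):
        x2, x3 = i / 10.0, j / 10.0
        # reduce R(x1, x2, x3) = R_REQ to the quadratic A*x1^2 + B*x1 + C = 0
        A = b["11"]
        B = b["1"] + b["12"] * x2 + b["13"] * x3
        C = (b["0"] + b["2"] * x2 + b["3"] * x3 + b["22"] * x2 ** 2
             + b["33"] * x3 ** 2 + b["23"] * x2 * x3) - R_REQ
        disc = B * B - 4 * A * C
        if disc < 0:
            continue                         # required strength unreachable here
        for x1 in ((-B + math.sqrt(disc)) / (2 * A), (-B - math.sqrt(disc)) / (2 * A)):
            if not -1.0 <= x1 <= 1.0:
                continue                     # stay inside the experimental region
            comp = natural((x1, x2, x3))
            cost = sum(q * p for q, p in zip(comp, UNIT_COST))
            if best is None or cost < best[0]:
                best = (cost, comp)

print("minimum-cost composition (natural values):", best)
```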


Author(s):  
Raghda Salam Al mahdawi ◽  
Huda M. Salih

The world is entering the era of Big Data, in which computer networks are an essential part. However, the current network architecture is not well suited to such a leap. Software defined networking (SDN) is a new network architecture that advocates separating the control and data planes of network devices by centralizing the former in high-level, efficient supervising devices called controllers. This paper proposes a mathematical model that helps optimize the locations of the controllers within the network while minimizing the overall cost under realistic constraints. Our method finds the minimum cost of placing the controllers; the cost components are network latency, controller processing power and link bandwidth. Different types of network topologies have been adopted to consider the data profile of the controllers, the links of the controllers and the locations of the switches. The results showed that as the size of the input data increased, the time to find the optimal solution also increased in a non-polynomial manner, while the cost of the solution increased linearly with the input size. Furthermore, when the number of possible controller locations was increased for the same number of switches, the cost was found to be lower.
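A toy brute-force version of the placement problem is sketched below; it is not the paper's exact formulation, and the latency matrix, processing costs, bandwidth weight and controller count are hypothetical. It picks k controller sites from the candidate locations so that the sum of switch-to-controller latency, controller processing cost and a bandwidth proxy is minimal.

```python
# Brute-force controller placement sketch (hypothetical topology and costs).
from itertools import combinations

LATENCY = [            # LATENCY[s][c]: switch s to candidate controller site c (ms)
    [2, 7, 9, 4],
    [6, 1, 8, 5],
    [9, 6, 2, 7],
    [4, 5, 7, 1],
    [8, 3, 6, 9],
]
PROC_COST = [10.0, 12.0, 9.0, 11.0]   # fixed processing cost per opened controller site
BW_COST_PER_MS = 0.5                  # proxy: longer control paths consume more bandwidth
K = 2                                 # number of controllers to place

def placement_cost(sites):
    """Total cost of a set of controller sites: processing + latency + bandwidth."""
    cost = sum(PROC_COST[c] for c in sites)
    for row in LATENCY:
        nearest = min(row[c] for c in sites)   # each switch attaches to its nearest controller
        cost += nearest + BW_COST_PER_MS * nearest
    return cost

best = min(combinations(range(len(PROC_COST)), K), key=placement_cost)
print("controller sites:", best, "cost:", round(placement_cost(best), 2))
```

The exhaustive enumeration over candidate subsets is what makes the exact solution time grow non-polynomially with the input size, consistent with the behavior reported above.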


2018 ◽  
Vol 2018 ◽  
pp. 1-8
Author(s):  
Hongge Peng ◽  
Ding Zhang

When two adjacent surface mines are mined simultaneously in the same direction with a certain spatial and temporal relationship, the mining arrangement of the leading mine greatly affects the mining plan and economic benefit of the subsequent surface mine. In this paper, an optimized mathematical model is established that takes the mining conditions and economic benefits into account. The minimum of the cost objective function is analyzed while considering the different transportation distances associated with different dumping amounts and secondary stripping amounts. The conclusion is drawn that, based on the location of the annual planning project and the stripping amount, the reasonable dumping level is 1130, which reduces the production cost and allows smooth coal mining. Moreover, an example verifies that the mathematical model can be used to solve similar problems of tracing mining or adjacent districts mined in surface mines.
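The trade-off behind that objective function can be sketched numerically as follows; the haul-distance and secondary-stripping relations and all coefficients below are assumed shapes with made-up numbers, not the paper's data, and serve only to show how a minimum-cost dumping level emerges from the two opposing cost terms.

```python
# Sketch of the dumping-level trade-off (all relations and numbers are assumed):
# raising the dumping level shortens the haul distance for the overburden, but
# more of the dumped material re-enters the advancing pit shell and has to be
# stripped a second time by the neighbouring mine.

HAUL_COST_PER_M3_KM = 1.8      # transportation cost per m3 per km (illustrative)
STRIP_COST_PER_M3 = 2.5        # cost of re-excavating dumped material (illustrative)
DUMP_VOLUME = 4.0e6            # annual overburden to be dumped, m3 (illustrative)

def haul_distance_km(level):
    # assumed: each extra metre of lift above 1100 shortens the round trip slightly
    return max(3.5 - 0.01 * (level - 1100), 0.5)

def secondary_stripping_m3(level):
    # assumed: secondary stripping grows quadratically above the 1100 m bench
    return 400.0 * max(level - 1100, 0) ** 2

def total_cost(level):
    haul = DUMP_VOLUME * haul_distance_km(level) * HAUL_COST_PER_M3_KM
    restrip = secondary_stripping_m3(level) * STRIP_COST_PER_M3
    return haul + restrip

best_level = min(range(1100, 1201, 10), key=total_cost)
print("cheapest dumping level:", best_level, "cost:", f"{total_cost(best_level):,.0f}")
```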


2021 ◽  
Vol 6 (2 (114)) ◽  
pp. 19-29
Author(s):  
Yuliia Tatarinova ◽  
Olga Sinelnikova

One of the key processes in software development and information security management is the evaluation of vulnerability risks. The analysis and evaluation of vulnerabilities is a resource-intensive process that requires high qualifications and a great deal of technical information. The main capabilities and drawbacks of existing systems for evaluating vulnerability risks in software were analyzed, including their failure to take into account the impact of trends and the degree of popularity of a vulnerability on the final evaluation. During the study, the following information was analyzed in structured form: the Common Vulnerability Scoring System vector, the threat type, the attack vector, the existence of source code with patches, exploitation programs, and trends. The obtained result made it possible to determine the main independent characteristics, the existence of correlation between the parameters, and the order and schemes of the relationships between the basic quantities that affect the final evaluation of a vulnerability's impact on a system. A dataset with formalized characteristics, as well as expert evaluations for the further construction of a mathematical model, was generated. Various machine learning approaches and methods for constructing the target model of dynamic risk evaluation were analyzed: neuro-fuzzy logic, regression analysis algorithms, and neural network modeling. A mathematical model of dynamic evaluation of vulnerability risk in software, based on the dynamics of spreading information about a vulnerability in open sources and on a multidimensional model, was developed with an accuracy of 88.9 %. Using the obtained model makes it possible to reduce the analysis time from several hours to several minutes, to make more effective decisions on the order of patch prioritization, to unify the actions of experts, and to reduce the cost of managing information security risks.
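A compact sketch of the modelling step is shown below; it is not the paper's model, and the feature set, the toy dataset and the expert ratings are entirely made up. Each vulnerability is described by formalized features (base score, exploit availability, patch availability, a popularity/trend score), and an ordinary least-squares regression is fitted against expert risk ratings; a neuro-fuzzy or neural-network model would be trained on the same kind of feature matrix.

```python
# Toy regression of expert risk ratings on formalized vulnerability features.
import numpy as np

# columns: base score, exploit published (0/1), patch available (0/1), trend score 0..1
X = np.array([
    [9.8, 1, 0, 0.9],
    [7.5, 1, 1, 0.6],
    [5.3, 0, 1, 0.1],
    [8.8, 0, 0, 0.4],
    [6.1, 0, 1, 0.2],
    [9.1, 1, 1, 0.8],
])
y = np.array([9.5, 7.0, 3.0, 6.5, 3.5, 8.5])   # hypothetical expert risk ratings

# ordinary least squares with an intercept column appended to the feature matrix
A = np.hstack([X, np.ones((X.shape[0], 1))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# score a freshly disclosed, trending vulnerability with no patch (intercept term = 1)
new_vuln = np.array([8.2, 1, 0, 0.7, 1.0])
print("predicted dynamic risk:", round(float(new_vuln @ coef), 2))
```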


Author(s):  
Yury Shcheblanin ◽  
Dmytro Rabchun

To provide information security in automated control systems and to build an effective information security system, it is not enough to identify the channels of information leakage, analyze the possible threats and the consequences of their implementation, and estimate the losses; it is also necessary to have a clear picture of the offender. An offender model is one of the most important components of a possible scenario of unlawful actions to gain access to information. Having such a model of the security violator, constantly corrected on the basis of new knowledge about the offender's capabilities and of changes in the security system, and grounded in an analysis of the causes of violations, makes it possible to influence those causes and to define more precisely the requirements imposed on the information security system by this type of violation. A correctly constructed (adequate to reality) model of the information security violator, reflecting his practical and theoretical capabilities, a priori knowledge, time and place of action and other characteristics, is an important part of a successful risk analysis and of defining the requirements for the composition and characteristics of the protection system. The paper considers the difficulties of mathematical modeling in the study of information confrontation, which are caused, on the one hand, by the uncertainty of the opponent's actions and, on the other, by the complexity of creating a conditional image that corresponds most closely to the branched protective structure. When creating a mathematical model, one of the main tasks is to determine the parameters and characteristics that form the target function; this task is the subject of the present work. A model is considered in which the target function determines the proportion of information lost during an attack and is expressed through the dynamic vulnerability of the system, which depends on the ratio of attack and protection resources as well as on the probability of such a ratio being realized. The form of these dependencies is considered. The vulnerability is expressed by a fractional-power function in which the exponent is determined by the nature of the information system and its structure, and the probability density of the attacker allocating resources against a given number of defense resources is given by a two-parameter distribution law. By selecting the exponents and parameters in both dependencies, it is possible to achieve their maximum approximation to the statistical curves and eventually to form an explicit form of the target function.
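The structure of that target function can be illustrated with the following numerical sketch; the fractional-power form of the vulnerability, the choice of a gamma law as the two-parameter distribution and all parameter values are assumptions made for illustration, not the values identified in the paper.

```python
# Sketch of the target function: expected fraction of information lost, obtained
# by integrating the dynamic vulnerability (a fractional-power function of the
# attack-to-defence resource ratio) against an assumed two-parameter (gamma)
# density of the attacker's resource allocation. All parameters are illustrative.
import math

ALPHA = 0.6                 # fractional power set by the structure of the system
K_SHAPE, THETA = 2.0, 3.0   # parameters of the assumed two-parameter law

def vulnerability(attack, defence):
    """Fraction of information lost in an attack with the given resource ratio."""
    return min((attack / defence) ** ALPHA, 1.0)

def attack_density(x):
    """Gamma(k, theta) density of the attacker committing x resource units."""
    return x ** (K_SHAPE - 1) * math.exp(-x / THETA) / (math.gamma(K_SHAPE) * THETA ** K_SHAPE)

def expected_loss(defence, upper=100.0, steps=10000):
    """Midpoint-rule integral of vulnerability weighted by the attack density."""
    dx = upper / steps
    return sum(vulnerability((i + 0.5) * dx, defence) * attack_density((i + 0.5) * dx) * dx
               for i in range(steps))

for d in (2.0, 5.0, 10.0):
    print(f"defence resources {d:>4}: expected lost fraction {expected_loss(d):.3f}")
```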

