Bulletin of V.N. Karazin Kharkiv National University, series «Mathematical modeling. Information technology. Automated control systems»

Published by V. N. Karazin Kharkiv National University
ISSN: 2524-2601, 2304-6201
Total documents: 60 (53 in the last five years); h-index: 2 (1 in the last five years)

Latest Publications

Knowing probability distributions for calculating expected values is routinely required in engineering practice and other fields. However, probability distributions are not always available, and the distribution type may not be reliably determinable. In this case, an empirical distribution should be built directly from the observations. The goal, therefore, is to develop a methodology for accumulating and processing observation data so that the resulting empirical distribution is close enough to the unknown real distribution. For this, criteria for the sufficiency of observations and for the validity of the distribution must be substantiated. As a result, a methodology is presented (О.М. Мелкозьорова, С.Г. Рассомахін) that assesses the validity of the empirical probability distribution with respect to the parameter's expected value. Values of the parameter are registered during a period of observations or measurements. On this basis, empirical probabilities are calculated, where every next period also reuses the registration data of all previous periods. Every period yields an approximation to the parameter's expected value using those empirical probabilities. Using moving averages and root-mean-square deviations, the methodology asserts that the empirical distribution is valid (i.e., sufficiently close to the unknown real distribution) if the approximations of the parameter's expected value show very little scatter over at least three successive windows of multiple-of-2 widths. This criterion also implies the sufficiency of observation periods, although the sufficiency of observations per period is not claimed; the validity strongly depends on the volume of observations per period.
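As an illustration, the accumulation-and-validation loop described above can be sketched in Python; the window width, the tolerance, and the Gaussian test distribution are illustrative assumptions, not the paper's actual data:

```python
import random
import statistics

def expectation_estimates(periods):
    """Cumulative empirical expected-value approximation after each period.

    periods: list of lists; period k reuses all registrations from
    periods 1..k, as in the accumulation scheme described above.
    """
    pooled = []
    estimates = []
    for obs in periods:
        pooled.extend(obs)            # previous registration data are reused
        estimates.append(statistics.fmean(pooled))
    return estimates

def is_valid(estimates, window=4, tol=1e-2):
    """Hypothetical scatter criterion: accept once the root-mean-square
    deviation inside each of the last three windows stays below tol."""
    if len(estimates) < 3 * window:
        return False
    for k in range(3):
        chunk = estimates[-(k + 1) * window : len(estimates) - k * window]
        if statistics.pstdev(chunk) > tol:
            return False
    return True

random.seed(1)
# 24 periods of 200 observations each, drawn from an "unknown" distribution
periods = [[random.gauss(5.0, 2.0) for _ in range(200)] for _ in range(24)]
est = expectation_estimates(periods)
print(round(est[-1], 2), is_valid(est, window=4, tol=0.05))
```

With enough pooled observations the successive approximations settle near the true expected value (5.0 here), at which point the scatter criterion accepts the empirical distribution.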


Different stances of the human body are studied in medicine and biology for the quantitative estimation and clinical diagnostics of impairments and diseases of the musculoskeletal, nervous, and vestibular systems and functions. The human body is composed of ~200 bones and ~600 muscles, and its upright position is unstable due to the high complexity of the system and its control mechanisms. Among the different techniques for recording body sway, stabilography is one of the simplest and cheapest. It is based on a force platform that measures the reaction forces over the contact areas between the feet and the platform. The platform is portable and can be connected to any laptop via a USB port. In this study, the functions controlling the vertical stance of a person are examined accounting for the nonlinear dynamics of oscillations of the projection (XC, YC) of the body's center of mass (CM) onto the horizontal plane. The time series {XC(t), YC(t)} have been measured on 28 healthy volunteers (age 21-42, height 156-182 cm, body mass 48-84.8 kg). The volunteers were asked to keep a quiet stance on two feet, and then similar stances with the body mass shifted onto the left and then onto the right leg. Each stance was maintained for 30 s with open and then with closed eyes. After a short break, a test with balancing on the left and then on the right leg was performed. For each case, based on the mathematical model of the inverted pendulum, the control functions u(t) were calculated in the form u(t) = k1(r(t) − r0) + k2(r′(t) − r′0), where r(t) is the radius vector of the CM, r0 is its value averaged over time, and (·)′ denotes the time derivative. Statistical analysis showed the absence of correlations between the control functions both for different subjects and for different body positions of the same volunteer.
Based on calculations of the Lyapunov exponent, the individuals have been classified into groups with stable, weakly unstable, and highly unstable control of the vertical position of the body. The modeling of such systems in the framework of nondeterministic chaos models with nonlinear control is discussed.
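A minimal sketch of reconstructing the control function u(t) = k1(r(t) − r0) + k2(r′(t) − r′0) from a recorded CM trajectory; the gains k1, k2, the sampling rate, and the synthetic sway signal are hypothetical placeholders, not the measured stabilograms:

```python
import math

def control_function(xc, yc, dt, k1=1.0, k2=0.5):
    """Magnitude of u(t) = k1*(r - r0) + k2*(r' - r0') computed from a
    CM-projection trajectory, per the inverted-pendulum model above."""
    n = len(xc)
    # central-difference estimate of the time derivative of the radius vector
    dx = [(xc[i + 1] - xc[i - 1]) / (2 * dt) for i in range(1, n - 1)]
    dy = [(yc[i + 1] - yc[i - 1]) / (2 * dt) for i in range(1, n - 1)]
    x0 = sum(xc) / n; y0 = sum(yc) / n                 # time-averaged r0
    dx0 = sum(dx) / len(dx); dy0 = sum(dy) / len(dy)   # time-averaged r0'
    return [math.hypot(k1 * (xc[i] - x0) + k2 * (dx[i - 1] - dx0),
                       k1 * (yc[i] - y0) + k2 * (dy[i - 1] - dy0))
            for i in range(1, n - 1)]

# synthetic 30 s stabilogram sampled at 100 Hz: small quasi-periodic sway
dt = 0.01
t = [i * dt for i in range(3000)]
xc = [0.3 * math.sin(2 * math.pi * 0.4 * s) for s in t]
yc = [0.2 * math.cos(2 * math.pi * 0.7 * s) for s in t]
u = control_function(xc, yc, dt)
print(len(u), round(max(u), 3))
```

Correlations between such u(t) series from different recordings can then be tested exactly as the study describes.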


Basic approaches to creating hardware and software for radiation-monitoring information systems are developed in this article. A modern information system for radiation monitoring and control, which requires a comprehensive approach and an iterative creation process, has been developed. The proposed approach of integrating local measuring devices with cloud services is highly promising: it uses M2M/IoT technology for remote measurements, advanced semiconductor sensors based on CdTe and CdZnTe radiation detectors, and modern microcontroller and communication chips. The developed hardware and software solutions demonstrate increased accuracy due to hardware and software correction of the measurement results. A variant of the architectural solution for building a platform for remote access to dosimetric and radiometric measurements is being developed. The solution lies in improving the parameters of the detectors and the characteristics of the electronic modules of the detecting systems, and in creating software for controlling the detection process, collecting and digitally processing information, and presenting it adequately to users online. The architecture and structural diagram of a dosimetric system, a sequence diagram, and a diagram of a dosimetric system with a subsystem for data exchange over the Internet have been created. A new algorithm for measuring the exposure dose rate of ionizing radiation has been proposed. A block diagram of a microcontroller dosimeter has been developed. An algorithm for correcting the dependence of the sensitivity of a CdZnTe-based detector on the energy of the detected gamma quanta has also been proposed; it significantly reduces the uncertainty of measuring the radiation dose rate. The architecture and block diagram of the dosimetric system with remote access and remote control of the main functions have been presented as well.
The calculation of the exposure dose of gamma radiation and of the exposure dose rate with the energy-dependence correction has been used. The system elements have proved useful for students' remote laboratory work during the quarantine.
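The energy-dependence correction can be illustrated as follows; the relative-sensitivity table for CdZnTe below is purely illustrative (the real curve is detector-specific and calibrated), as is the conversion constant k:

```python
import bisect

# Hypothetical CdZnTe relative-sensitivity table: (energy keV, sensitivity),
# normalized to 1.0 at 662 keV. Illustrative points only.
SENS = [(60, 2.4), (120, 1.6), (300, 1.1), (662, 1.0), (1250, 0.8)]

def sensitivity(e_kev):
    """Piecewise-linear interpolation of the relative sensitivity."""
    energies = [p[0] for p in SENS]
    if e_kev <= energies[0]:
        return SENS[0][1]
    if e_kev >= energies[-1]:
        return SENS[-1][1]
    i = bisect.bisect_left(energies, e_kev)
    (e0, s0), (e1, s1) = SENS[i - 1], SENS[i]
    return s0 + (s1 - s0) * (e_kev - e0) / (e1 - e0)

def corrected_dose_rate(count_rate, e_kev, k=1.0):
    """Count rate -> dose rate with the energy-dependence correction:
    dividing by the relative sensitivity flattens the detector response."""
    return k * count_rate / sensitivity(e_kev)

print(round(corrected_dose_rate(100.0, 662), 2))  # reference energy, s = 1
print(round(corrected_dose_rate(100.0, 60), 2))   # low-energy over-response corrected down
```

The same count rate at 60 keV is scaled down because the detector over-responds there, which is the essence of flattening the energy dependence.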


As an example, a mathematical model of the thermal process in an electrical machine was built, represented as a three-layer cylinder in which internal heat sources operate in one of the layers and heat is transferred to the other two by means of heat conduction. A method is proposed for solving boundary-value problems for the heat conduction equation in a complex area: a multi-layered cylinder with internal heat sources operating in one part of the layers and external ones in another part. A method of solving the problem under uncertainty of one of the boundary conditions at the layers' interface, with conductive heat exchange between the layers, is reviewed. The principle of the method lies in averaging the temperature distributions radially in the internal layers. As a result of the transformations, a boundary condition of impedance-type conjugation appears at the layers' interface. Analytical and numeric-analytical solutions of simplified problems were obtained.
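A hedged numerical sketch of the radial conduction problem: a finite-volume solve of steady conduction in a cylinder where the conductivity and source are callables, so a three-layer machine model is just piecewise-constant kappa with the source confined to one layer. The grid size and material values are illustrative, and the sanity check uses the closed-form uniform-cylinder solution rather than the article's model:

```python
def solve_radial(N, R, kappa, source, T_out):
    """Steady radial conduction (1/r) d/dr(kappa r dT/dr) = -q,
    with T(R) = T_out and symmetry at r = 0, by finite volumes."""
    h = R / N
    r = [i * h for i in range(N + 1)]
    # tridiagonal system a[i]*T[i-1] + b[i]*T[i] + c[i]*T[i+1] = d[i]
    a = [0.0] * (N + 1); b = [0.0] * (N + 1)
    c = [0.0] * (N + 1); d = [0.0] * (N + 1)
    for i in range(N):
        rw = max(r[i] - h / 2, 0.0); re = r[i] + h / 2
        kw = kappa(rw) * rw / h; ke = kappa(re) * re / h
        a[i], b[i], c[i] = kw, -(kw + ke), ke
        vol = (re * re - rw * rw) / 2      # annular control cell around node i
        d[i] = -source(r[i]) * vol
    b[N], d[N] = 1.0, T_out                # Dirichlet condition at the outer wall
    # Thomas algorithm for the tridiagonal solve
    for i in range(1, N + 1):
        m = a[i] / b[i - 1]
        b[i] -= m * c[i - 1]
        d[i] -= m * d[i - 1]
    T = [0.0] * (N + 1)
    T[N] = d[N] / b[N]
    for i in range(N - 1, -1, -1):
        T[i] = (d[i] - c[i] * T[i + 1]) / b[i]
    return T

# uniform-cylinder sanity check: T(0) - T(R) = q R^2 / (4 kappa) = 25
T = solve_radial(200, R=1.0, kappa=lambda r: 1.0,
                 source=lambda r: 100.0, T_out=20.0)
print(round(T[0] - T[-1], 2))
```

Swapping in a piecewise kappa and a source that is nonzero only in one layer reproduces the three-layer setting described above.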


The conventional division of verification methods is analyzed. It is concluded that synthetic methods of software verification can be considered the most relevant, useful, and productive ones. The implementation of formal verification methods for computer-system software, which supplement the traditional methods of testing and debugging and make it possible to improve the uptime and security of programs, is noted to be relevant. Formal verification methods can guarantee that the verified properties hold on the system model. Nowadays, these methods are being actively developed in the direction of reducing the total cost of formal verification, supporting modern programming concepts, and minimizing "manual" work in the transition from the system model to its implementation. Their main feature is the ability to search for errors using a mathematical model, without recourse to an existing realization of the software, which is very convenient and economical. There are several specific techniques used for the analysis of formal models, such as deductive analysis, model checking, and consistency checking. Each verification method is used in particular cases, depending on the goal. Synthetic methods of software verification are considered the most relevant, useful, and efficient, as they try to combine the advantages of different verification approaches while getting rid of their drawbacks. Significant progress has currently been made in the development of such methods and in their implementation in the practice of industrial software development.
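Model checking, one of the techniques listed, can be sketched in a few lines: an explicit-state breadth-first search for a safety violation over a toy two-process lock model. The model and the property are invented for illustration; real model checkers add symbolic representations and temporal-logic properties on top of exactly this reachability core:

```python
from collections import deque

def check_safety(initial, successors, is_bad):
    """Explicit-state model checking of a safety property: breadth-first
    search of the reachable state space. Returns a counterexample path
    to a bad state, or None if the property holds on the model."""
    parent = {initial: None}
    queue = deque([initial])
    while queue:
        s = queue.popleft()
        if is_bad(s):
            path = []
            while s is not None:        # rebuild the counterexample trace
                path.append(s)
                s = parent[s]
            return list(reversed(path))
        for t in successors(s):
            if t not in parent:         # visit each state once
                parent[t] = s
                queue.append(t)
    return None

# toy model: two processes sharing a lock; a state is (pc1, pc2, lock)
def successors(state):
    pc1, pc2, lock = state
    out = []
    if pc1 == 0 and lock == 0: out.append((1, pc2, 1))   # P1 acquires
    if pc1 == 1:               out.append((0, pc2, 0))   # P1 releases
    if pc2 == 0 and lock == 0: out.append((pc1, 1, 1))   # P2 acquires
    if pc2 == 1:               out.append((pc1, 0, 0))   # P2 releases
    return out

# safety property: the two processes are never both in the critical section
cex = check_safety((0, 0, 0), successors,
                   lambda s: s[0] == 1 and s[1] == 1)
print(cex)   # None: mutual exclusion holds in this model
```

The search explores the model, never the program's implementation, which is exactly the "errors found on the mathematical model" feature described above.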


The problem of finding the lengths of Hamiltonian cycles on complex graphs is considered. The task has such practical applications as determining optimal routes (the traveling salesman problem), identifying graph structures (recognizing the characteristics of local features of biometric objects), etc. When solving the task of verifying biometric samples, the problems of the addition or disappearance of reference points, the deformation of the distances between them, and the appearance of linear and angular displacements of the whole sample emerge. Using the method described in the article, the problem of displacements can be eliminated, as the solution is stable when the points are shuffled. Moreover, it is possible to obtain reference plans with the same stability; obtaining them requires less computational complexity and provides greater recognition accuracy. A detailed description of the problem solution is proposed, based on applying the branch-and-bound method to the symmetric matrices of graphs that describe the distribution of local features in fingerprint images. It is known that a guaranteed solution for finding the length of the Hamiltonian cycle for an arbitrary graph of a planar distribution of points is possible only by exhaustive search; however, the computational complexity of such a search is not acceptable. The branch-and-bound method, like all existing methods of directional search, does not guarantee finding a solution for an arbitrarily large dimension of the graph. Therefore, a method of decomposing graphs is proposed, which reduces a complex problem to a set of simpler ones and thereby allows for a significant reduction in computational complexity. The relative invariance of the metrics of Hamiltonian cycles to the probabilistic shifts characteristic of biometric pattern-recognition problems has been shown.
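A minimal branch-and-bound sketch for the Hamiltonian-cycle length on a symmetric distance matrix; the lower bound used here (cheapest remaining outgoing edge per node) is one simple admissible choice, not necessarily the bound the article uses:

```python
import math

def shortest_hamiltonian_cycle(dist):
    """Branch-and-bound search for the minimal Hamiltonian cycle length on
    a symmetric distance matrix: exhaustive in the worst case, but subtrees
    that cannot beat the incumbent are pruned by an admissible lower bound."""
    n = len(dist)
    best = [math.inf]

    def bound(path, length, remaining):
        # the completion leaves the current endpoint once and every
        # remaining node once (finally returning to the start), so each
        # such departure costs at least its cheapest admissible edge
        lb = length + min(dist[path[-1]][v] for v in remaining)
        targets = remaining | {path[0]}
        for v in remaining:
            lb += min(dist[v][u] for u in targets if u != v)
        return lb

    def branch(path, length, remaining):
        if not remaining:
            best[0] = min(best[0], length + dist[path[-1]][path[0]])
            return
        if bound(path, length, remaining) >= best[0]:
            return                       # prune this subtree
        for v in sorted(remaining, key=lambda u: dist[path[-1]][u]):
            branch(path + [v], length + dist[path[-1]][v], remaining - {v})

    branch([0], 0.0, set(range(1, n)))
    return best[0]

# 5 points on a circle: the optimal cycle visits them in angular order,
# with length 5 * 2*sin(pi/5) (five sides of a regular pentagon)
pts = [(math.cos(2 * math.pi * k / 5), math.sin(2 * math.pi * k / 5))
       for k in range(5)]
dist = [[math.dist(p, q) for q in pts] for p in pts]
print(round(shortest_hamiltonian_cycle(dist), 3))
```

Because the cycle length is invariant under relabeling of the points, shuffling the rows and columns of the matrix leaves the result unchanged, which is the stability property exploited above.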


The respiratory ducts of animals and humans are curved tubes with complex geometries. The open areas in such structures are filled with moving air driven by a pressure drop between the inlet and outlet of the duct. The complex structures, formed by thin walls and warmed by constant blood flow at body temperatures T = 36-39 °C, serve for fast and efficient warming of the inhaled air to body temperature and its moistening up to 100% humidity. Arctic animals possess the most efficient nasal ducts, allowing the inhaled air to be heated from T = -30...-60 °C to T = 38-39 °C along a duct of length L = 8-15 only. The detailed geometry of the nasal ducts of some Arctic animals has been studied on computed tomography (CT) scans of the animals' heads found in open databases and published in the literature. The highly porous structures seen on some slices are formed by fractal-like divisions of the walls protruding into the nasal lumen. Since fractal structures are characterized by their fractal dimension D, the relationships between the hydrodynamic properties and the fractal dimensions of the porous structures of the upper respiratory tract of some Arctic animals have been studied. The dimensions D of the cross sections of the tract have been calculated by the box-counting method. The porosities of the samples, the tortuosity of the pores, and the equivalent hydraulic diameter Dh of the channel have been calculated. Sierpinski fractals of various types have been used as models of the porous structures, for which the above-listed parameters, as well as the hydraulic resistance to a stationary flow, have also been computed. A number of statistical dependencies between the calculated parameters were revealed, but the absence of their correlations with D was shown. It was found that structures with different porosities and hydraulic resistances can have the same value of D.
Therefore, choosing a model based on the D value alone introduces significant errors into the calculations of air heating along the upper respiratory tract. The statistical dependences inherent in the natural samples studied can be obtained only on the basis of multifractal models, in which the number and shape of the channels, as well as the scale of their decrease, change in a certain way at each generation.
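The box-counting method mentioned above can be sketched as follows; the scales and the test point set (a filled square, whose dimension should come out near 2) are illustrative, standing in for the CT cross sections:

```python
import math

def box_counting_dimension(points, scales):
    """Box-counting estimate of the fractal dimension D: count occupied
    boxes N(s) at each box size s and fit log N(s) ~ D log(1/s)."""
    logs, logn = [], []
    for s in scales:
        boxes = {(math.floor(x / s), math.floor(y / s)) for x, y in points}
        logs.append(math.log(1 / s))
        logn.append(math.log(len(boxes)))
    # least-squares slope of log N versus log(1/s)
    n = len(scales)
    mx = sum(logs) / n
    my = sum(logn) / n
    return (sum((a - mx) * (b - my) for a, b in zip(logs, logn))
            / sum((a - mx) ** 2 for a in logs))

# sanity check on a non-fractal set: points filling the unit square
pts = [(i / 100, j / 100) for i in range(100) for j in range(100)]
D = box_counting_dimension(pts, [1 / 4, 1 / 8, 1 / 16, 1 / 32])
print(round(D, 2))
```

For a binarized CT slice the same routine would be run on the pixel coordinates of the lumen (or wall) set, exactly as for this synthetic square.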


The paper presents a model of computational workflows based on end-user understanding and provides an overview of various computational architectures, such as computing clusters, Grid, Cloud Computing, and SOA, for building workflows in a distributed environment. A comparative analysis of the capabilities of these architectures for implementing computational workflows has shown that workflows should be implemented on the basis of SOA, since it meets all the requirements for the basic infrastructure and provides a high degree of compute-node distribution, as well as node migration and integration with other systems in a heterogeneous environment. Using the Cloud Computing architecture may be efficient when building a basic information infrastructure for organizing distributed high-performance computing, since it supports the shared and coordinated usage of dynamically allocated distributed resources and allows high-performance computing systems to be created and virtualized in geographically dispersed data centers, systems that can independently support the necessary QoS level and, if necessary, use the Software as a Service (SaaS) model for end-users. Even so, the advantages of the Cloud Computing architecture do not allow the end user to design business processes automatically, "on the fly". At the same time, there is an obvious need to create semantically oriented computing workflows based on a service-oriented architecture using a microservices approach, ontologies, and metadata structures, which will make it possible to create workflows "on the fly" in accordance with the requirements of the current request.
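A toy sketch of composing a workflow "on the fly" from service metadata; the registry, its semantic types, and the chaining rule are invented for illustration and stand in for a real ontology-driven composer over microservices:

```python
from typing import Dict, List

# Hypothetical microservice registry: each service is annotated with the
# semantic type it consumes and produces, as its metadata would describe.
SERVICES: Dict[str, dict] = {
    "fetch":   {"in": "query",    "out": "raw_data", "fn": lambda q: q + ":raw"},
    "clean":   {"in": "raw_data", "out": "dataset",  "fn": lambda d: d + ":clean"},
    "analyze": {"in": "dataset",  "out": "report",   "fn": lambda d: d + ":report"},
}

def compose(goal: str, start: str) -> List[str]:
    """Build a workflow 'on the fly': chain services whose input/output
    types connect the starting type to the requested goal type."""
    chain, current = [], start
    while current != goal:
        step = next((name for name, s in SERVICES.items()
                     if s["in"] == current), None)
        if step is None:
            raise ValueError(f"no service consumes {current!r}")
        chain.append(step)
        current = SERVICES[step]["out"]
    return chain

def run(chain: List[str], payload):
    """Execute the composed workflow by piping the payload through it."""
    for name in chain:
        payload = SERVICES[name]["fn"](payload)
    return payload

wf = compose(goal="report", start="query")
print(wf, run(wf, "q1"))
```

A production composer would match on ontology concepts rather than string-equal types, but the request-driven chaining is the same idea.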


Statistical relationships between the pressure curves Pa(t), Pd(t) and the blood flow velocity Va(t), recorded in vivo in the coronary arteries of patients upstream and downstream of the stenosis as part of the standard clinical procedure for calculating the dynamic indices FFR, HSR, CFR, and a number of other indices generally accepted in surgical practice, are studied. It is shown that in the case of insignificant stenosis that does not require surgical intervention, there is a correlation between the curves, and their spectrum is represented by three main harmonics. In the case of significant stenosis requiring immediate stenting, the positive correlation between Pa(t) and Pd(t) is less pronounced, and there is a negative correlation with the Va(t) curve; the spectrum of the curves is much more complex and contains high-frequency harmonics. For patients from the so-called "gray zone", an expert decision on the need for stenting can be made based on the appearance of additional harmonics in the spectrum and a negative correlation between the Pa(t), Pd(t), and Va(t) curves. The proposed approach can be used for automatic decision-making based on machine learning and for the development of appropriate mathematical models.
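The correlation-and-spectrum analysis can be sketched on synthetic curves; the signals below mimic the "three main harmonics" mild-stenosis case and are not clinical data:

```python
import cmath
import math

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / math.sqrt(sum((a - mx) ** 2 for a in x)
                           * sum((b - my) ** 2 for b in y))

def dominant_harmonics(signal, count=3):
    """Indices of the strongest positive-frequency DFT harmonics."""
    n = len(signal)
    amp = [abs(sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
               for t in range(n))) for k in range(1, n // 2)]
    order = sorted(range(len(amp)), key=lambda i: amp[i], reverse=True)
    return sorted(k + 1 for k in order[:count])

# synthetic "mild stenosis" pair: Pd tracks Pa; three-harmonic spectrum
n = 256
t = [i / n for i in range(n)]
pa = [math.sin(2 * math.pi * s) + 0.5 * math.sin(4 * math.pi * s)
      + 0.2 * math.sin(6 * math.pi * s) for s in t]
pd = [0.9 * v for v in pa]
print(round(pearson(pa, pd), 3), dominant_harmonics(pa))
```

On real recordings, a drop of the Pa/Pd correlation together with extra high-frequency harmonics would flag the significant-stenosis pattern described above.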


Many methods for solving boundary value problems on arbitrary grids, such as SDI (scattered data interpolation) and SPH (smoothed particle hydrodynamics), use families of atomic radial basis functions that depend on parameters to improve the accuracy of calculations. Functions of this kind are commonly called "shape functions". When polynomials or polynomial splines are used as such functions, they are called "basis functions". The term "radial" means that the carrier of the function is a disk or a layer. The term "atomic" means that the support of the function is limited, i.e., the function is finite; in most English-language publications the term "finite" is used. The article presents an algorithm for constructing such a function as the solution of a functional-differential equation posed on a circle of radius r (the equation itself is given in the article). The function generated by this equation has two parameters, one of which is the radius r; varying these parameters allows the error in calculations of the Poisson boundary value problem to be reduced several-fold. The theorem on the existence and uniqueness of such a function is proved in the article, and the proof makes it possible to construct the one-dimensional Fourier transform of this function. Previously, the function was calculated using its Taylor approximation for small arguments and the asymptotic Hankel approximation for large ones; near the boundary between these two regimes a fairly large error was found. Therefore, the function is calculated in that range by its Chebyshev approximation. The Chebyshev coefficients (calculated in the Maple 18 system with an accuracy of 26 decimal digits) and the approximation range were chosen by an experiment aimed at minimizing the overall error of calculating the function. Thanks to the Chebyshev approximation, the obtained function has less than half the error of the previous algorithm.
Arbitrary values of the function are calculated using a six-point Aitken scheme, which can to some extent be considered a smoothing filter. Aitken's six-point scheme introduces an error equal to 6% of the total error of calculating the function, but saves considerable time in forming the ARBF when solving boundary value problems by the collocation method.
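Aitken's iterated scheme for six nodes can be sketched as follows; this is the classic Aitken-Neville recurrence, and the tabulated function (a cubic, which a degree-5 interpolant reproduces exactly) is illustrative:

```python
def aitken(xs, ys, x):
    """Aitken's iterated interpolation: successively combine neighbouring
    interpolants; with six nodes it evaluates the degree-5 interpolating
    polynomial at x without forming its coefficients."""
    p = list(ys)
    n = len(xs)
    for level in range(1, n):
        for i in range(n - level):
            # combine the interpolants over xs[i..i+level-1] and xs[i+1..i+level]
            p[i] = ((x - xs[i + level]) * p[i] - (x - xs[i]) * p[i + 1]) \
                   / (xs[i] - xs[i + level])
    return p[0]

# six-point evaluation of a tabulated function, here f(x) = x**3
xs = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5]
ys = [v ** 3 for v in xs]
print(round(aitken(xs, ys, 1.2), 6))
```

Evaluating the tabulated function this way at arbitrary arguments is cheap, which is the time saving the scheme provides when forming the ARBF for collocation.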

