Кибернетика и программирование (Cybernetics and Programming)
Latest Publications


TOTAL DOCUMENTS: 34 (FIVE YEARS: 34)
H-INDEX: 1 (FIVE YEARS: 1)

Published By Aurora Group, S.R.O.

ISSN: 2644-5522

Author(s):  
Aleksei Aleksandrovich Rumyantsev ◽  
Farkhad Mansurovich Bikmuratov ◽  
Nikolai Pavlovich Pashin

The subject of this research is medical chest X-ray images. After fundamental pre-processing, the accumulated database of such images can be used for training deep convolutional neural networks, which have become one of the most significant innovations of recent years. The trained network carries out a preliminary binary classification of incoming images and serves as an assistant to the radiologist. For this purpose, the neural network must be trained to carefully minimize type I and type II errors. A possible way to improve the effectiveness of neural networks, by the criteria of reduced computational complexity and image classification quality, is auxiliary processing: image pre-processing and preliminary calculation of the entropy of fragments. The article provides an algorithm for X-ray image pre-processing, fragmentation of the image, and calculation of the entropy of individual fragments. During pre-processing, the region of the lungs and spine is selected, which comprises approximately 30-40% of the entire image. The image is then divided into a matrix of fragments, and the entropy of each fragment is calculated according to Shannon's formula based on the analysis of individual pixels. Determining the rate of occurrence of each of the 255 gray levels allows calculating the total entropy. The use of entropy for detecting pathologies rests on the assumption that its values for individual fragments, and the overall picture of its distribution, differ between normal and pathological images. The article analyzes the statistical measures: standard deviation of the error and variance. A fully connected neural network is used to determine patterns in the distribution of entropy and its statistical characteristics across fragments of the chest X-ray image.
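
As an illustration of the approach, the following is a minimal Python sketch of per-fragment Shannon entropy computation. The fragment grid size and the 8-bit grayscale assumption are illustrative choices, since the article does not specify them.

```python
import numpy as np

def fragment_entropies(image, rows=8, cols=8):
    """Split a grayscale image into a rows x cols matrix of fragments
    and compute the Shannon entropy of each fragment from its
    pixel-intensity histogram."""
    h, w = image.shape
    fh, fw = h // rows, w // cols
    entropies = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            frag = image[i * fh:(i + 1) * fh, j * fw:(j + 1) * fw]
            counts = np.bincount(frag.ravel(), minlength=256)
            p = counts[counts > 0] / frag.size   # occurrence rates of gray levels
            entropies[i, j] = -np.sum(p * np.log2(p))  # Shannon's formula
    return entropies

# Toy usage on a random 8-bit "image"; a real pipeline would instead load
# a pre-processed X-ray crop of the lung/spine region.
img = np.random.randint(0, 256, size=(512, 512), dtype=np.uint8)
print(fragment_entropies(img).round(2))
```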


Author(s):  
Ilkhom Izatovich Bakaev

The automatic processing of unstructured texts in natural languages is one of the relevant problems of computer text analysis and synthesis. Within this problem, the author singles out the task of text normalization, which usually includes tokenization, stemming, and lemmatization. Existing stemming algorithms are for the most part oriented towards synthetic languages with inflectional morphemes. The Uzbek language is an example of an agglutinative language, characterized by the polysemy of affixal and auxiliary morphemes. Uzbek differs greatly from, for example, English, which is successfully processed by stemming algorithms. There are virtually no examples of effective implementation of stemming algorithms for the Uzbek language; this question is therefore of scientific interest and defines the goal of this work. In the course of this research, the author solved the task of reducing given Uzbek texts, tokenized and cleared of stop words at a preliminary stage, to normal form. The author developed a method for normalizing Uzbek texts based on a stemming algorithm. The development of the stemming algorithm employed a hybrid approach combining an algorithmic method, a lexicon of linguistic rules, and a database of normal word forms of the Uzbek language. The precision of the proposed algorithm depends on the precision of the tokenization algorithm. At the same time, the article does not explore finding the roots of paired words separated by spaces, as this task is solved at the tokenization stage. The algorithm can be integrated into various automated systems for machine translation, information extraction, information retrieval, etc.
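
The following is a minimal Python sketch of the hybrid normalization idea: a lookup in a database of normal word forms, followed by rule-based affix stripping. The tiny lexicon and suffix list are illustrative placeholders, not the actual linguistic resources described in the article.

```python
# Hypothetical lookup table of normal word forms
NORMAL_FORMS = {"kitoblarimiz": "kitob"}
# Sample agglutinative suffixes, longest stripped first
SUFFIXES = ["larimiz", "lar", "imiz", "im", "ni", "da"]

def stem(token: str) -> str:
    token = token.lower()
    if token in NORMAL_FORMS:               # 1) database of normal forms
        return NORMAL_FORMS[token]
    for suf in sorted(SUFFIXES, key=len, reverse=True):
        # 2) algorithmic suffix stripping, keeping a root of >= 3 letters
        if token.endswith(suf) and len(token) - len(suf) >= 3:
            return token[:-len(suf)]
    return token                             # 3) fall back to the token itself

print(stem("kitoblarimiz"))  # -> "kitob" ("our books" -> "book")
```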


Author(s):  
Vladimir Viktorovich Pekunov

This article explores certain aspects of the numerical solution of continuum mechanics problems in the presence of ongoing chemical reactions. Such problems are usually characterized by multiple local areas of elevated temperature whose position in space is relatively unstable. In these conditions, rigidly stable integration methods with step control spend more time in the "elevated temperature" areas than elsewhere. Under geometric parallelism, this leads to a substantial imbalance of CPU load, which reduces the overall effectiveness of parallelization. The article therefore examines the problem of CPU load balancing in the parallel solution of such problems. The author offers a new modification of the algorithm of large-block distributed balancing with improved prediction of the time of numerical integration of the chemical kinetics equations, which is most effective when the "elevated temperature" areas drift. The improvement consists in applying a linear perceptron that analyzes several previous time-integration values (the basic version of the algorithm uses only one previous point from the time-integration history). This allows the algorithm to work under both fast and slow drift of the "elevated temperature" areas. The effectiveness of this approach is demonstrated on the task of modeling the flow around a building with high-temperature combustion on its roof. The application of the modified algorithm increases the effectiveness of parallelization by 2.1% compared to the original algorithm.
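
The following is a minimal Python sketch of the improved time prediction: a linear model (a perceptron with no nonlinearity) is fitted over the last few measured integration times of a block and extrapolated one step ahead. The history length of four values is an assumption; the article states only that several previous values are analyzed.

```python
import numpy as np

def predict_next_time(history, k=4):
    """Predict the next integration time of a block from its last k times."""
    t = np.asarray(history[-k:], dtype=float)
    x = np.arange(len(t))
    # Least-squares line t ~ w*x + b captures both slow and fast drift
    w, b = np.polyfit(x, t, 1)
    return w * len(t) + b

times = [1.00, 1.05, 1.12, 1.20]  # seconds spent on a block in past steps
print(predict_next_time(times))   # extrapolated cost for the next step
```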


Author(s):  
Vladimir Viktorovich Pekunov

The subject of this article is the numerical optimization techniques used in training neural networks that serve as predictive components in certain modern eddy viscosity models. A qualitative solution to the training problem (minimization of the functional of neural network residuals) often requires significant computational costs, which makes it necessary to speed up such training by combining numerical methods with parallelization of calculations. The Marquardt method is of particular interest, as it contains a parameter that allows accelerating the solution by switching the method from descent, far from the solution, to Newton's method near the solution. The article offers a modification of the Marquardt method that uses a limited series of random samples to improve the current point and to calculate the parameter of the method. The author demonstrates the descent characteristics of the method in numerical experiments, both on the Himmelblau and Rosenbrock test functions and on the actual task of training a neural network predictor applied in modeling turbulent flows. The use of this method can significantly speed up the training of the neural network predictor in corrective eddy viscosity models. The method is less time-consuming than random search, particularly on a small number of compute kernels; at the same time, it provides a solution close to the result of random search and better than that of the original Marquardt method.
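
The following is a minimal Python sketch of a Marquardt (Levenberg-Marquardt) step on the Rosenbrock function written as a least-squares problem, with the damping parameter chosen from a small set of random candidates. The candidate-sampling scheme is an illustrative stand-in for the modification described in the article, not its exact algorithm.

```python
import numpy as np

def residuals(p):
    """Rosenbrock as least squares: f = r1^2 + r2^2."""
    x, y = p
    return np.array([10.0 * (y - x**2), 1.0 - x])

def jacobian(p):
    x, _ = p
    return np.array([[-20.0 * x, 10.0],
                     [-1.0, 0.0]])

def lm_step(p, rng, n_samples=5):
    """One Marquardt step; damping lambda picked from random candidates."""
    r, J = residuals(p), jacobian(p)
    best_p, best_cost = p, r @ r
    for lam in rng.uniform(1e-4, 1.0, size=n_samples):
        delta = np.linalg.solve(J.T @ J + lam * np.eye(2), -J.T @ r)
        cand = p + delta
        cost = residuals(cand) @ residuals(cand)
        if cost < best_cost:                  # keep the best improving sample
            best_p, best_cost = cand, cost
    return best_p

rng = np.random.default_rng(0)
p = np.array([-1.2, 1.0])                     # classic Rosenbrock start
for _ in range(50):
    p = lm_step(p, rng)
print(p)                                      # approaches the minimum (1, 1)
```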


Author(s):  
Vladlena Andreevna Larkina

This article explores walking robots that are used, or can assist, in nondeterministic environments such as rescue operations. This required an in-depth analysis of existing models to acquire relevant information on robots with walking mechanisms according to several criteria: parameters, weight, degrees of freedom, speed of movement, input energy, runtime, maximum load, and the advantages and disadvantages of the reviewed models. Using the methods of comparative and critical analysis, the author analyzed the materials available for the past five years, which made it possible to accomplish the set tasks. The results consist of a summary table for broadly generalized groups of walking robots and three tables for the models under review, which summarize their strengths and weaknesses. Such data will broaden the knowledge of researchers dealing with walking robots, help them analyze both national and foreign studies on the topic, apply this experience in further work, focus on the solution of open problems, and emphasize the uniqueness and relevance of their developments. This article focuses on the peculiarities of using walking robots specifically in rescue operations; however, their scope of applicability can be broader. The author also describes nontraditional locomotion of walking robots that is widely known from media, video hosting platforms, and other information sources.


Author(s):  
Vladimir Viktorovich Pekunov

This article examines the problem of numerical simulation of the interaction between gaseous sulfur dioxide emitted by road transport and fog under conditions of high humidity. For this purpose, the author applies a multi-factor two-phase mathematical model that takes into account the dynamics of the turbulent main phase; the dynamics and kinetics of the multi-sectional droplet phase; the presence of thermal inhomogeneities formed by direct and diffuse solar radiation in various ranges; the diffusion of sulfur dioxide; and its absorption by fog droplets. A numerical calculation of the corresponding problem is carried out within the AirEcology-P environmental process modeling system, which allows generating optimal calculation code for a particular mathematical model. The proposed complex mathematical model describing the interaction between emitted sulfur dioxide and fog droplets is new; it refines the calculation of the droplet-phase kinetics by taking into account the additional factor of droplet fusion characteristic of fog. The droplet-phase submodel was tested in numerical simulation (the results were compared with the data of direct Lagrangian modeling of an ensemble of 1,000 droplets) and showed decent accuracy. The article presents the results of numerical simulation of the interaction between the emitted SO2 and the droplets. The author demonstrates the self-cleaning ability of the atmosphere, the degree of which correlates with the initial concentration of the smallest droplets and with the height above the surface.
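
The following is a minimal Python sketch of discrete Smoluchowski-type coagulation, the droplet-fusion mechanism mentioned above. The constant collision kernel, section count, and units are illustrative assumptions; the article's model uses a full multi-sectional droplet phase coupled to the flow.

```python
import numpy as np

def coagulation_step(n, K=0.5, dt=0.01):
    """One explicit Euler step of discrete coagulation.

    n[i] is the number density of droplets of size class i
    (a droplet of class i consists of i + 1 monomers).
    """
    m = len(n)
    dn = np.zeros(m)
    for i in range(m):
        for j in range(m):
            rate = K * n[i] * n[j]
            dn[i] -= rate                    # droplet of class i consumed
            if i + j + 1 < m:
                dn[i + j + 1] += 0.5 * rate  # fused droplet created
            # fused droplets beyond the largest class are dropped
            # (a simplification of this sketch)
    return n + dt * dn

n = np.array([1.0, 0.0, 0.0, 0.0])  # initially only the smallest droplets
for _ in range(200):
    n = coagulation_step(n)
print(n)  # number density shifts toward larger size classes as droplets fuse
```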


Author(s):  
Ivan Alekseevich Selishchev ◽  
Svetlana Aleksandrovna Oleinikova

The object of this research is service systems that receive at their input a stream of requests, each representing a set of mutually dependent operations with "finish-start" dependencies. The duration of an individual operation is a random variable, and its execution requires one or several types of resources. It is assumed that there are deadlines for processing each request. The goal of this research is to develop a database structure for storing information on incoming projects, operations, their mutual dependencies, and the resources and specialists used. The logical structure of the database was designed using the entity-relationship methodology, which defines data values in the context of their relationships with other data. An analysis of the specifics of the object of research revealed a set of requirements imposed on the database. Based on these requirements, and taking into account the normalization of relations used in the theory of relational databases, the author designed a structure that is universal from the perspective of application, supports the analysis of the scheduling process, and reflects all the peculiarities of the object of research. Such a database structure can be used, without major modification, in different fields that allow decomposing a project into multiple separate interdependent tasks. The article provides examples of using the database in information systems for the construction sector, as well as for the development of IT projects.
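
The following is a minimal sketch of such a structure, rendered as SQLite DDL executed from Python. All table and column names are illustrative assumptions; the article does not publish the exact schema.

```python
import sqlite3

ddl = """
CREATE TABLE project   (id INTEGER PRIMARY KEY, name TEXT, deadline TEXT);
CREATE TABLE operation (id INTEGER PRIMARY KEY,
                        project_id INTEGER REFERENCES project(id),
                        name TEXT,
                        mean_duration REAL,     -- duration is a random variable
                        duration_variance REAL);
-- "finish-start" dependencies between operations
CREATE TABLE dependency (predecessor_id INTEGER REFERENCES operation(id),
                         successor_id   INTEGER REFERENCES operation(id),
                         PRIMARY KEY (predecessor_id, successor_id));
CREATE TABLE resource  (id INTEGER PRIMARY KEY, name TEXT, kind TEXT);
-- many-to-many: an operation may need several resources or specialists
CREATE TABLE operation_resource (operation_id INTEGER REFERENCES operation(id),
                                 resource_id  INTEGER REFERENCES resource(id),
                                 PRIMARY KEY (operation_id, resource_id));
"""
conn = sqlite3.connect(":memory:")
conn.executescript(ddl)
print([row[0] for row in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table'")])
```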


Author(s):  
Vasilii Andreevich Rudometkin

The subject of the research is the problem of monitoring and troubleshooting in distributed high-load systems. The most common mistakes in design and development, along with methods of forecasting and solving them, are described. The author covers the most popular tools currently used in the development of high-load systems and the main mistakes made when working with them from a developer's point of view. The article describes a set of tools whose implementation can significantly reduce the time spent searching for vulnerabilities, discusses the difficulty of choosing between the ELK and EFK logging stacks, and describes their advantages and disadvantages. Analogs of the tools used are analyzed in detail. The main conclusions of the work are:
- the monitoring infrastructure should be developed from the beginning of the project, which makes it possible to manage the project's high complexity already at the development stage;
- the most popular tools should be used, for which a large amount of information is available in open sources such as the Internet; this approach reduces the time spent fixing errors caused by a specific set of tools;
- companies should not skimp on highly qualified personnel, which in the future saves considerable time on fixing problems, reduces the time for developing new functionality, and minimizes the time needed to support and test functionality that has already been developed;
- when analyzing problems, it is worth paying attention to public resources in which other companies have most likely already solved similar problems; for example, Facebook has dealt with the monitoring problem for a long time and has developed a large number of tools to solve it, and also collects a large number of system records for analyzing the system's behavior under any circumstances.
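
As a small illustration of log collection with ELK/EFK stacks, the following Python sketch emits one JSON object per log line, the format these stacks ingest most easily. The field names and service name are hypothetical; real deployments typically ship such lines via a log shipper such as Filebeat or Fluentd.

```python
import json
import logging
import sys
import time

class JsonFormatter(logging.Formatter):
    """Render each log record as a single JSON line."""
    def format(self, record):
        return json.dumps({
            "ts": time.strftime("%Y-%m-%dT%H:%M:%S"),
            "level": record.levelname,
            "service": "orders-api",     # hypothetical service name
            "msg": record.getMessage(),
        })

handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(JsonFormatter())
log = logging.getLogger("app")
log.addHandler(handler)
log.setLevel(logging.INFO)

log.info("request processed")  # -> {"ts": "...", "level": "INFO", ...}
```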


Author(s):  
Konstantin Alekseev

The relevance of this article lies in the fact that today's databases are the basis of numerous information systems. The information accumulated in them is extremely valuable material, and methods of database processing are now widely used for extracting additional knowledge from it, which involves generalization and various further methods of information processing. The object of research in this work is relational databases and DBMSs; the subject of research is the features of their use in applied programming. In accordance with the set goal, the following tasks must be solved:
1) to consider the concept and essence of a relational database;
2) to analyze the problematic aspects of relational databases in modern conditions.
Relational databases are among the most widespread due to their simplicity and clarity both at the creation stage and at the user level. The main advantage of an RDB is its compatibility with SQL, the principal query language, which is intuitive for users. Nevertheless, with all the variety of approaches, there are still certain canons whose violation greatly affects both the design of a database and its operation. For example, the problem of database normalization is highly relevant: neglecting normalization makes the database structure confusing and the database itself unreliable. Promising directions include the construction of queries to a relational database using heuristic methods, as well as the method of accumulating previously optimized queries and then checking whether the current query can be derived from the accumulated ones. Finally, a very slow decline of relational databases is probably under way. While they are still the primary storage medium, especially in large enterprise projects, they are gradually being displaced by non-relational solutions, which may become the majority over time.
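
As a small illustration of the normalization point, the following Python sketch contrasts an unnormalized table, which repeats the customer's city in every order row, with a normalized pair of relations. All names are illustrative.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
-- unnormalized: the city is repeated (and may become inconsistent)
-- in every order row
CREATE TABLE orders_flat (order_id INTEGER, customer TEXT, city TEXT);

-- normalized: each fact is stored exactly once
CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT, city TEXT);
CREATE TABLE orders   (order_id INTEGER PRIMARY KEY,
                       customer_id INTEGER REFERENCES customer(id));
""")
print("schema created")
```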


Author(s):  
Pavel Sechenov ◽  
Inna Rybenko ◽  
Valentin Tsymbal

The simulation model of the column jet-emulsion reactor previously assumed that the temperature does not change over the height of the reactor and remains constant over time. Assessing temperature changes in the reactor requires knowing the amount of heat needed to heat the particles, the heat absorbed or released in the course of chemical reactions, and the speed of heat transfer in space. The possibility of calculating these parameters online for each floating particle is limited by the speed of the computer system. To accelerate the calculations, the author builds a database of these parameters for all substances involved in the reactions. Enthalpies and entropies are expressed through the specific heat capacity, calculated as a fifth-degree polynomial; the polynomial coefficients and the phase-transition data were taken from reference books. The article provides an algorithm, in the form of a logic diagram, for calculating the specific enthalpy of a particle. Based on the developed algorithm, the author creates software for calculating the thermodynamic functions. The interaction between the classes is shown in a UML class diagram. The research presents calculations of specific enthalpy and entropy for substances in the temperature interval of 298-1850 K. Deviations of the enthalpy and entropy values at a temperature of 1700 K from the reference values do not exceed 1.2%.
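
The following is a minimal Python sketch of the specific-enthalpy calculation described above: the specific heat capacity is a fifth-degree polynomial in temperature, and the enthalpy is obtained by integrating it from 298 K, adding the latent heat at each phase transition crossed. The coefficients and transition data are illustrative placeholders, not reference-book values.

```python
import numpy as np

# Hypothetical coefficients of cp(T) = c0 + c1*T + ... + c5*T^5, J/(kg*K)
CP_COEFF = [4.5e2, 1.2e-1, -3.0e-5, 1.0e-8, -2.0e-12, 1.0e-16]
# Hypothetical phase transitions: (transition temperature K, latent heat J/kg)
TRANSITIONS = [(1700.0, 2.7e5)]

def cp(T):
    """Specific heat capacity as a fifth-degree polynomial in T."""
    return sum(c * T**k for k, c in enumerate(CP_COEFF))

def enthalpy(T, T0=298.0, n=2000):
    """h(T) = integral of cp(t) dt from T0 to T, plus latent heats crossed."""
    grid = np.linspace(T0, T, n)
    vals = cp(grid)
    # trapezoidal numerical integration of cp over the temperature grid
    h = float(np.sum((vals[1:] + vals[:-1]) / 2.0 * np.diff(grid)))
    for T_tr, latent in TRANSITIONS:
        if T0 < T_tr <= T:
            h += latent
    return h

print(f"h(1850 K) = {enthalpy(1850.0):.3e} J/kg")
```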

