Vestnik komp'iuternykh i informatsionnykh tekhnologii
Latest Publications


TOTAL DOCUMENTS: 662 (FIVE YEARS: 210)
H-INDEX: 3 (FIVE YEARS: 2)
Published by: Izdatel'skii Dom Spektr, LLC
ISSN: 1810-7206

Author(s):  
S. S. Vasiliev ◽  
D. M. Korobkin ◽  
S. A. Fomenkov

To solve the problem of information support for the synthesis of new technical solutions, a method of extracting structured data from an array of Russian-language patents is presented. The key features of the invention, such as the structural elements of the technical object and the relationships between them, serve as information support. The data source is the main claim of the invention in a device patent. The unit of extraction is the semantic structure Subject-Action-Object (SAO), which semantically describes the structural elements. The extraction method is based on shallow parsing and claim segmentation that takes the specifics of patent texts into account: the excessive length of claim sentences and the peculiarities of patent language often make it difficult to apply off-the-shelf data extraction tools effectively. The full pipeline comprises four steps: segmentation of the claim sentences; extraction of primary SAO structures; construction of the graph of the structural elements of the invention; and integration of the data into the domain ontology. This article deals with the first two stages. Segmentation is carried out according to a number of heuristic rules, and several natural language processing tools are combined to reduce analysis errors. The primary SAO elements are extracted using the valences of a predefined semantic group of verbs, as well as information about the type of the processed segment. The result of the work is a domain ontology that can be used to find alternative designs for nodes of a technical object. The second part of the article covers the algorithm for constructing the graph of structural elements of an individual technical object, the evaluation of the system's effectiveness, and the organization of the resulting ontology.
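To make the SAO extraction step concrete, here is a minimal sketch that pulls Subject-Action-Object triples from one claim segment via a dependency parse. It is an illustration only, not the authors' shallow-parsing pipeline: the spaCy model, the toy segment, and the dependency labels used are assumptions.

```python
import spacy

# Assumes the Russian pipeline is installed:
#   python -m spacy download ru_core_news_sm
nlp = spacy.load("ru_core_news_sm")

def extract_sao(segment):
    """Pull Subject-Action-Object triples from one claim segment.
    A dependency-parse stand-in for the authors' shallow parser."""
    triples = []
    for token in nlp(segment):
        if token.pos_ == "VERB":
            subjects = [t for t in token.children if t.dep_ == "nsubj"]
            objects = [t for t in token.children if t.dep_ in ("obj", "obl")]
            for s in subjects:
                for o in objects:
                    triples.append((s.text, token.lemma_, o.text))
    return triples

# Toy claim segment: "the housing contains a flange".
print(extract_sao("Корпус содержит фланец"))
```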


Author(s):  
S. P. Sokolovsky

The use of known protection tools in information systems, including cryptographic ones, does not ensure the confidentiality of information about a system's composition, structure, and operating algorithms, because modern network technologies require addressing information in the service headers of transmitted packets. The strict dependence of information system configurations on architectural quality requirements, as well as on security policies established by regulators, makes their network parameters static, homogeneous, and deterministic. This gives the adversary a number of indisputable advantages: network reconnaissance can be conducted without compromise, its results remain highly reliable over a long period of time, and an optimal set of tools for computer attacks can be formed and deployed in advance. Hence there is a need for security technologies that replace the static parameters of information systems with dynamic ones. An analysis of existing technologies in this subject area revealed a number of inherent disadvantages: high resource intensity, insufficient performance, and a narrow scope of application. To solve this problem, the author proposes a new technical solution that overcomes the disadvantages of known analogues and surpasses them by a number of criteria. The technical design of the suite, which consists of three interconnected subsystems and makes it possible to mask information directions and local area network parameters and to manage the parameters of network connections established with network reconnaissance tools, is presented and justified.
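The core idea, replacing static network parameters with dynamic ones, can be illustrated with a toy rotation loop. This is a conceptual sketch only, not the proposed suite: the parameter pool, rotation intervals, and all names are invented, and no real network reconfiguration is performed.

```python
import random
import time

def rotate_parameters(pool, period_range=(5.0, 30.0)):
    """Periodically re-draw the externally visible network parameters
    (address, port) from a pool at randomized intervals, so that the
    adversary's reconnaissance results go stale quickly.
    Pure simulation: prints the schedule, opens no sockets."""
    while True:
        addr = random.choice(pool["addresses"])
        port = random.choice(pool["ports"])
        lifetime = random.uniform(*period_range)
        print(f"expose {addr}:{port} for {lifetime:.1f} s")
        time.sleep(lifetime)

pool = {"addresses": ["10.0.0.5", "10.0.0.17", "10.0.0.23"],  # made-up pool
        "ports": [443, 8443, 9443]}
# rotate_parameters(pool)  # runs indefinitely; uncomment to simulate
```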


Author(s):  
V. A. Konovalov

The paper assesses the prospects of applying the big data paradigm in socio-economic systems by analyzing the factors that distinguish it from the well-known scientific ideas of data synthesis and decomposition. The idea of extracting knowledge directly from big data is analyzed. The article compares approaches to extracting knowledge from big data: algebraic analysis and the multidimensional data analysis used in OLAP (OnLine Analytical Processing) systems. An intermediate conclusion is drawn that it is advisable to divide systems for working with big data into two main classes: automatic and non-automatic. To assess the result of extracting knowledge from big data, it is proposed to use the well-known scientific criteria of reliability and efficiency, with two components of reliability considered: methodical and instrumental. The main goals of knowledge extraction in socio-economic systems are forecasting and support for management decision-making. The distinguishing factors of big data (volume, variety, and velocity) are analyzed as applied to the study of socio-economic systems, and the expediency of introducing into big data processing systems a universe that describes the variety of big data and source protocols is examined. The impact of the properties of samples drawn from big data (incompleteness, heterogeneity, and non-representativeness) on the choice of mathematical methods for processing big data is analyzed. The conclusion is drawn that a systemic, comprehensive, and cautious approach is needed when using the big data paradigm to inform fundamental socio-economic decisions based on the study of individual socio-economic subsystems.


Author(s):  
D. I. Kukushkin ◽  
V. A. Antonenko

The serverless computing model is becoming quite widespread. It allows developers to create flexible and fault-tolerant applications with an attractive billing model. The increasing complexity of serverless functions has led to the need for serverless workflows: serverless functions invoking other serverless functions. However, this concept imposes certain requirements on serverless functions that perform distributed computations. The overhead of transferring data between serverless functions can significantly increase the execution time of a program using this approach. One way to reduce the overhead is to improve serverless scheduling techniques. This paper discusses an approach to scheduling serverless computations based on data dependency analysis. We propose dividing the problem of scheduling the computation of a composite serverless function into three stages and describe each stage with a mathematical model. We review algorithms used for resource scheduling by compilers and in parallel computing on multiprocessor systems to determine the best algorithm to implement in a prototype scheduler, and for each algorithm we specify how it could be used for resource scheduling in serverless platforms. We describe a prototype developed on top of the Fission serverless platform that implements the critical-path heuristic. The improvements are shown to reduce execution time significantly, by up to a factor of two for some types of serverless functions.
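As an illustration of the critical-path heuristic, the sketch below ranks the functions of a toy composite workflow by the length of the longest path to the exit, counting both execution times and inter-function data-transfer costs. The DAG, its costs, and the function names are invented; this is not the Fission prototype.

```python
from functools import cache

# Toy DAG of serverless functions: name -> (execution time,
# list of (child, data-transfer cost)). All names and costs are invented.
dag = {
    "parse":  (5, [("enrich", 2), ("filter", 1)]),
    "enrich": (8, [("merge", 3)]),
    "filter": (3, [("merge", 1)]),
    "merge":  (4, []),
}

@cache
def rank(node):
    """Length of the critical path from `node` to the workflow exit,
    counting execution times and inter-function transfer costs."""
    exec_time, children = dag[node]
    if not children:
        return exec_time
    return exec_time + max(cost + rank(child) for child, cost in children)

# Dispatch ready functions in order of decreasing critical-path rank,
# so the longest chain is never the one left waiting.
for name in sorted(dag, key=rank, reverse=True):
    print(name, rank(name))
```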


Author(s):  
Yu. V. Dubenko ◽  
E. E. Dyshkant ◽  
N. N. Timchenko ◽  
N. A. Rudeshko

The article presents a hybrid algorithm for forming the shortest trajectory for the intelligent agents of a multi-agent system, based on a synthesis of reinforcement learning methods and the heuristic search algorithm A*, with functions for the exchange of experience and the automatic formation of agent subgroups based on their visibility areas. The developed algorithm was evaluated experimentally by simulating the task of finding a target state in a maze in the Unity environment. The results of the experiment showed that the developed hybrid algorithm reduced the time for solving the problem by an average of 12.7 % compared with analogues. The proposed new “hybrid algorithm for the formation of the shortest trajectory based on the use of multi-agent reinforcement learning, the search algorithm A*, and the exchange of experience” differs from analogues as follows:
– application of an algorithm for forming subgroups of subordinate agents based on the “scope” of the leader agent, implementing a multi-level hierarchical system for managing a group of agents;
– combination of the principles of reinforcement learning and the search algorithm A*.
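For reference, a minimal A* component in the spirit of the hybrid algorithm is sketched below on a 4-connected grid maze. The reinforcement learning, experience exchange, and subgrouping parts are omitted, and the maze is invented.

```python
import heapq

def astar(grid, start, goal):
    """A* on a 4-connected grid with a Manhattan heuristic.
    grid[y][x] == 1 marks an obstacle; cells are (x, y) tuples."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_set = [(h(start), 0, start, None)]   # (f, g, node, parent)
    came, g_best = {}, {start: 0}
    while open_set:
        _, g, cur, parent = heapq.heappop(open_set)
        if cur in came:                        # already expanded
            continue
        came[cur] = parent
        if cur == goal:                        # rebuild path via parents
            path = [cur]
            while came[path[-1]] is not None:
                path.append(came[path[-1]])
            return path[::-1]
        x, y = cur
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= ny < len(grid) and 0 <= nx < len(grid[0]) and not grid[ny][nx]:
                ng = g + 1
                if ng < g_best.get((nx, ny), float("inf")):
                    g_best[(nx, ny)] = ng
                    heapq.heappush(open_set, (ng + h((nx, ny)), ng, (nx, ny), cur))
    return None

maze = [[0, 0, 0, 1],
        [1, 1, 0, 1],
        [0, 0, 0, 0]]
print(astar(maze, (0, 0), (3, 2)))  # shortest path as a list of cells
```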


Author(s):  
I. V. Sgibnev ◽  
B. V. Vishnyakov

This paper is devoted to the problem of image semantic segmentation for the machine vision system of an off-road autonomous robotic vehicle. Most modern convolutional neural networks require large computing resources that exceed the capabilities of many robotic platforms, so the main drawback of such models is the extremely high complexity of the convolutional neural network used, whereas tasks in real applications must be performed on devices with limited resources in real time. This paper focuses on the practical application of modern lightweight architectures to the task of semantic segmentation on mobile robotic systems. The article discusses backbones based on ResNet18, ResNet34, MobileNetV2, ShuffleNetV2, and EfficientNet-B0; decoders based on U-Net, DeepLabV3, and DeepLabV3+; and additional components that can increase segmentation accuracy and reduce inference time. We propose a model with a ResNet34 encoder and a DeepLabV3+ decoder with Squeeze-and-Excitation blocks, which proved optimal in terms of inference time and accuracy. We also present our off-road dataset and a simulated dataset for semantic segmentation. Using weights pretrained on the simulated dataset, we increased the mIoU metric on our off-road dataset by 2.6 % compared with weights pretrained on Cityscapes. Moreover, we achieved 76.1 % mIoU on the Cityscapes validation set and 85.4 % mIoU on our off-road validation set at 37 FPS (frames per second) for an input image of size 1024×1024 on a single NVIDIA GeForce RTX 2080 card using the NVIDIA TensorRT inference framework.
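The Squeeze-and-Excitation block mentioned above can be written compactly. The PyTorch sketch below is the generic formulation, not the authors' exact configuration; the reduction ratio of 16 is an assumption.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-Excitation: reweight channels by globally pooled statistics."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                      # squeeze: global average
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),                                 # per-channel gates in (0, 1)
        )

    def forward(self, x):
        return x * self.gate(x)                           # excite: rescale channels

x = torch.randn(1, 64, 32, 32)
print(SEBlock(64)(x).shape)  # torch.Size([1, 64, 32, 32])
```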


Author(s):  
A. A. Dubanov

This article discusses a kinematic model of the problem of group pursuit of a set of targets. A variant of the model in which all targets are reached simultaneously is discussed, as well as the possibility of reaching the targets at appointed times. In this model, the direction of the pursuer's velocity can be arbitrary, in contrast to the method of parallel approach, where the velocity vectors of the pursuer and the target are directed to a point on the Apollonius circle. The proposed pursuit model is based on the pursuer trying to follow a predicted trajectory of movement, which is rebuilt at each moment of time. This trajectory is a compound curve that respects curvature constraints and consists of a circular arc and a straight line segment. The pursuer's velocity vector, applied at the point where the pursuer is located, is tangent to the given circle, and the straight line segment passes through the target point and is tangent to the same circle. The radius of the circle in the model is taken equal to the minimum radius of curvature of the trajectory. The resulting compound line serves as an analogue of the line of sight in the parallel approach method. The iterative process of calculating the points of the pursuer's trajectory is as follows: the next position is the point of intersection of a circle centered at the pursuer's current position with the sight line corresponding to the target's next position. The radius of this circle equals the product of the pursuer's speed and the time step of the iterative process. The time for each pursuer to reach its target is a function of its speed and the minimum radius of curvature of its trajectory. Multivariate analysis of the speeds and minimum radii of curvature of the trajectories of the pursuers required for the simultaneous achievement of their targets is based on the methods of multidimensional descriptive geometry. To do this, projection planes are introduced on the Radishchev diagram: radius of curvature of the trajectory versus speed, and radius of curvature of the trajectory versus time to reach the target. On the first projection plane, a one-parameter set of level lines corresponding to the range of speeds is constructed; on the second, the dependence of the time to reach the target on the radius of curvature is plotted for the given range of speeds. The prescribed time for reaching the target and the prescribed pursuer speed are the optimizing factors. This method of constructing pursuer trajectories that reach a set of targets at given times may be in demand by developers of autonomous unmanned aerial vehicles.
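A simplified sketch of the iterative stepping is given below. It replaces the compound sight-line construction with a direct heading toward the current target point, keeping only the curvature constraint (minimum turning radius); all numeric values are invented.

```python
import math

def pursue(p, heading, target_fn, v, r_min, dt, t_end):
    """Kinematic pursuit with a minimum turning radius r_min.

    At each step the pursuer advances v*dt along its heading, turning
    toward the current target point by at most v*dt/r_min radians.
    A simplified stand-in for the compound sight-line model."""
    x, y = p
    traj = [(x, y)]
    max_turn = v * dt / r_min                 # curvature constraint per step
    for k in range(int(t_end / dt)):
        tx, ty = target_fn(k * dt)
        desired = math.atan2(ty - y, tx - x)  # bearing to the target
        turn = (desired - heading + math.pi) % (2 * math.pi) - math.pi
        heading += max(-max_turn, min(max_turn, turn))
        x += v * dt * math.cos(heading)
        y += v * dt * math.sin(heading)
        traj.append((x, y))
    return traj

# Target moves along a straight line; pursuer starts at the origin heading east.
path = pursue((0, 0), 0.0, lambda t: (50 + 5 * t, 30),
              v=12, r_min=10, dt=0.1, t_end=20)
print(path[-1])  # final pursuer position
```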


Author(s):  
N. I. Tikhonov

Collections of scientific publications are growing rapidly, and scientists have access to portals containing a large number of documents. Such a large amount of data is difficult to investigate. Document visualization methods are used to reduce labor costs, search for needed and similar documents, evaluate the scientific contribution of particular publications, and reveal hidden links between documents. These methods can be based on various models of document representation. In recent years, word embedding methods for natural language processing have become extremely popular, and following them, methods for analyzing text collections began to appear that obtain vector representations of whole documents. Although many document analysis systems exist, new methods can give new insights into collections, perform better on large collections of documents, or find new relationships between documents. This article discusses two methods, Paper2vec and Cite2vec, that obtain vector representations of documents using citation information. The text gives a brief description of the considered methods for analyzing collections of scientific publications and describes experiments with them, including visualization of the results and a discussion of the problems that arise.
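To give a flavor of citation-based document embeddings, the sketch below embeds papers by running random walks over a toy citation graph and training a skip-gram model on the walks. This DeepWalk-style construction is only in the spirit of Paper2vec and Cite2vec, not either method itself; the graph and all identifiers are invented.

```python
import random
from gensim.models import Word2Vec

# Toy citation graph: paper -> papers it cites (made-up IDs).
cites = {
    "p1": ["p2", "p3"], "p2": ["p3"], "p3": [],
    "p4": ["p1", "p3"], "p5": ["p4"],
}
undirected = {p: set() for p in cites}
for p, refs in cites.items():
    for r in refs:
        undirected[p].add(r)
        undirected[r].add(p)

def walk(start, length=10):
    """One uniform random walk over the undirected citation graph."""
    node, path = start, [start]
    for _ in range(length - 1):
        nbrs = list(undirected[node])
        if not nbrs:
            break
        node = random.choice(nbrs)
        path.append(node)
    return path

walks = [walk(p) for p in cites for _ in range(20)]
model = Word2Vec(walks, vector_size=32, window=3, min_count=1, sg=1, epochs=10)
print(model.wv.most_similar("p1", topn=2))  # papers near p1 in citation space
```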


Author(s):  
K. V. Obrosov ◽  
V. Ya. Kim ◽  
V. M. Lisitsyn

The problems of using two-beam Laser Locators (LL) on Unmanned Vehicles (UV) are analyzed. The article discusses the solution to the problem of automatically assessing the possibility of a collision between a UV and other traffic participants based on information generated by the LL. The LL performs controlled scanning of the road surface at a given distance from the vehicle. To generate the error signal in the tilt-angle control loop, a special filtering of the correction sequence is applied; such filtering eliminates numerous outliers and forms a sample of correction values that do not lead to abrupt changes in the road sensing range. The system was modeled, and the adequacy of the model is supported by the results of field experiments with a real LL. It is shown that a threat of collision arises if the vehicle speed lies in a certain (dangerous) interval whose boundaries are functions of the following arguments:
– the angle between the tangents to the trajectories of the UV and the vehicle at the time of the LL “angle-angle-range” measurements;
– the distance between the UV and the vehicle over the same time interval.
The tasks solved are:
– estimating the angle and distances between the UV and the vehicle from the current LL “angle-angle-range” measurements;
– determining the boundaries of the dangerous range of vehicle speeds given the UV speed and the dimensions of the UV and the vehicle;
– estimating the vehicle speed from the LL “angle-angle-range” measurements.
Simulation methods were used to determine the accuracy of the estimates of the boundaries of the dangerous vehicle-speed range, which made it possible to create an algorithm for warning about a possible collision.
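As a small illustration of the last task, the sketch below estimates a vehicle's speed by converting two “angle-angle-range” measurements to Cartesian coordinates and taking a finite difference. The measurement values and the sensor-centered frame are assumptions, not the article's estimator.

```python
import math

def aar_to_xyz(az, el, rng):
    """Convert an angle-angle-range measurement (radians, meters)
    to Cartesian coordinates in a sensor-centered frame."""
    return (rng * math.cos(el) * math.cos(az),
            rng * math.cos(el) * math.sin(az),
            rng * math.sin(el))

def speed_estimate(m0, m1, dt):
    """Finite-difference speed from two LL measurements taken dt seconds apart."""
    p0, p1 = aar_to_xyz(*m0), aar_to_xyz(*m1)
    return math.dist(p0, p1) / dt

# Two made-up measurements (azimuth, elevation, range) taken 0.5 s apart.
print(speed_estimate((0.10, 0.02, 80.0), (0.11, 0.02, 78.5), 0.5))  # m/s
```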


Author(s):  
V. E. Makhov ◽  
V. M. Petrushenko ◽  
A. V. Emel'yanov ◽  
V. V. Shirobokov ◽  
A. I. Potapov

The issues of constructing algorithms for obtaining coordinate and non-coordinate information used to solve the problem of multiplexing images obtained from several optoelectronic systems are considered. An original mathematical method for finding corresponding points in images, based on algorithms for the continuous wavelet transform of the brightness structure of an image, is proposed. The technology of developing algorithms for multi-position optoelectronic systems for monitoring remote objects, based on National Instruments software, is considered. A technique for constructing information-acquisition software is proposed that ensures high accuracy in determining the coordinates of corresponding fragments in images. It is shown that the parallel use of several methods makes it possible to assess the reliability of the information obtained under changing observation conditions. Computational experiments confirmed that the double wavelet transform method provides a more accurate search for image alignment regions by increasing the number of extrema of the continuous wavelet transform coefficient curves, expanding the localization area, and applying additional filtering. An example of the practical implementation of the developed algorithms in a two-channel optoelectronic system is presented.
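To illustrate the underlying operation, the sketch below finds candidate correspondence points on a 1-D brightness profile as local extrema of continuous-wavelet-transform coefficients aggregated across scales. It uses the PyWavelets library and a synthetic profile; the wavelet choice, scales, and data are assumptions, and this is not the article's double-transform method.

```python
import numpy as np
import pywt

def cwt_keypoints(profile, scales=(2, 4, 8), wavelet="mexh"):
    """Candidate matching points of a 1-D brightness profile: local maxima
    of continuous-wavelet-transform magnitude, summed across scales."""
    coeffs, _ = pywt.cwt(profile, scales, wavelet)
    response = np.abs(coeffs).sum(axis=0)          # aggregate across scales
    interior = response[1:-1]
    peaks = np.where((interior > response[:-2]) & (interior > response[2:]))[0] + 1
    return peaks

# Synthetic brightness profile (e.g., one image row) with mild noise.
row = np.sin(np.linspace(0, 6 * np.pi, 256)) + 0.1 * np.random.randn(256)
print(cwt_keypoints(row)[:10])  # indices of candidate correspondence points
```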

