Computer Science and Cybersecurity
Latest Publications


TOTAL DOCUMENTS: 23 (five years: 23)

H-INDEX: 1 (five years: 1)

Published by: V. N. Karazin Kharkiv National University

ISSN: 2519-2310

Countering a quantum computer in the process of illegal ultra-high-speed decryption of messages is technically feasible. The information owner must confront the competitor's computer with tasks whose solution requires an infinite number of operations during decryption, for example, functions that depend on an infinite number of informative features. The owner encrypts by integrating functions; the recipient decrypts by solving the integral equations. It is not a discrete but an analog approach that prevails here. The basis for implementing this approach was created by Polish scientists: the mathematician Stefan Banach (1892-1945), who created modern functional analysis, and Marian Mazur (1909-1983), the author of "The Qualitative Theory of Information". Their theory was created in contrast to the quantitative theory of information. Cryptologists who have devoted their whole lives to improving the "discrete" theory and have found themselves close to power (and finance) try not to recall that Claude Shannon, in his fundamental work "Communication Theory of Secrecy Systems", repeatedly emphasized the discrete focus of his developments, anticipating future research into the specific limitations of a theory adapted to communication. Forgetting about the unlimited speed and memory of quantum computers, the orthodox talk about redundancy and other purely technical issues, including administrative levers for counteracting opponents. The progress of science cannot be stopped. Experiments have shown that creating cryptographic systems of such a post-quantum level is realistic.


The research investigates the main characteristics of Internet harassment (cyberbullying) and considers the main features of this phenomenon. Existing types of cyberbullying and their individual characteristics are analyzed. From examples of legislative acts of different countries, it is concluded that relevant rules of law are lacking. It is emphasized that anyone can become a victim of cyberbullying in the modern world, and the risk of becoming a victim does not depend on factors such as the victim's financial position, age, sex, or social standing. It is noted that communication in cyberspace allows users to choose carefully and in advance which information they make public. In most cases this helps people show their strengths (for example, when communicating in chats). As a result, false sympathy often arises between online interlocutors, and they come to trust each other. The partner thus becomes idealized, and any information from him or her is perceived more keenly than during direct communication. This effect is exploited in cyberbullying, when one person first inspires the trust of another and then changes communication tactics, becoming faithless and aggressive. It is emphasized that the phenomenon of cyberbullying is greatly underestimated, which makes it a serious problem. A brief overview of existing technologies and means of counteracting this phenomenon is given, and their effectiveness is compared. The requirements that a modern and effective anti-cyberbullying technology must meet are systematized. Examples of successful implementation of user protection in the most popular social networks are given. It is noted that most current protection technologies amount to localizing undesirable content when cyberbullying is detected. Based on the results of this research, it is expected that cyberbullying will continue to spread, owing to the constant growth in the number of users of new network services and online communication platforms. Effective defense against cyberbullying requires the introduction of organizational and technical measures. Finally, a general assessment of the further development of cyberbullying is proposed, along with ways to improve the corresponding countermeasures.


This article discusses techniques for hiding information messages in a cover image using direct-sequence spread spectrum technology. The technology is based on weakly correlated pseudorandom (noise) sequences. After the information data are modulated with such signals, the message takes a noise-like form, which makes it very difficult to detect. Hiding consists of adding the modulated message to the cover image. If this image is interpreted as noise in a communication channel, then hiding the user's data is equivalent to transmitting a noise-like modulated message over a noisy channel. It is assumed that the noise-like signals are weakly correlated both with each other and with the cover image (or its fragment). However, the latter assumption may not hold, because a realistic image is not a realization of a random process: its pixels are strongly correlated. Obviously, the selection of pseudorandom spreading signals must take this feature into account. We investigate various ways of forming spreading sequences while assessing the bit error rate (BER) of the information data as well as the cover-image distortion by mean squared error (MSE) and peak signal-to-noise ratio (PSNR). The experimental dependencies obtained clearly confirm the advantage of Walsh sequences, which yielded the lowest BER values in our experiments. Even at low signal power of the spreading sequences (P ≈ 5), the BER in most cases did not exceed 0.01, the best result among all sequences considered in this work. The PSNR values obtained with orthogonal Walsh sequences are, in most cases, comparable to the other options considered; however, for a fixed PSNR, the Walsh transform yields significantly lower BER. A promising direction is the use of adaptively generated discrete sequences: for example, if the rule for generating the spreading signals takes the statistical properties of the container into account, the BER can be reduced significantly; another useful outcome would be a higher PSNR at a fixed (given) BER. The purpose of our work is to justify the choice of spreading sequences that reduce BER and MSE (i.e., increase PSNR).
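The following is a minimal sketch (not the authors' exact scheme) of direct-sequence spreading with Walsh (Hadamard) rows: each message bit is spread over one orthogonal ±1 row and added to a block of the cover image. For simplicity the extractor here is informed (it subtracts the known cover); a blind extractor would correlate the stego block directly and would depend on the weak image/sequence correlation discussed above. The parameters (gain, block length n) are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import hadamard

def embed(cover, bits, gain=5, n=64):
    """Spread each bit over one Walsh row and add it to a flattened image block."""
    walsh = hadamard(n)                       # rows are orthogonal +/-1 sequences
    stego = cover.astype(float).flatten()
    for i, b in enumerate(bits):
        seq = walsh[(i % (n - 1)) + 1]        # skip the all-ones row
        stego[i * n:(i + 1) * n] += gain * (1 if b else -1) * seq
    return stego.reshape(cover.shape)

def extract(stego, cover, num_bits, n=64):
    """Informed extraction: correlate the embedding residual with each Walsh row."""
    walsh = hadamard(n)
    diff = (stego.astype(float) - cover.astype(float)).flatten()
    return [int(np.dot(diff[i * n:(i + 1) * n], walsh[(i % (n - 1)) + 1]) > 0)
            for i in range(num_bits)]

def psnr(a, b):
    mse = np.mean((a.astype(float) - b.astype(float)) ** 2)
    return 10 * np.log10(255.0 ** 2 / mse)

# Usage with a synthetic cover and message, reporting BER and PSNR as above.
rng = np.random.default_rng(1)
cover = rng.integers(0, 256, size=(128, 128))
msg = rng.integers(0, 2, size=100)
stego = embed(cover, msg)
rec = extract(stego, cover, len(msg))
print("BER:", np.mean(np.array(rec) != msg), "PSNR:", round(psnr(cover, stego), 2), "dB")
```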


The article is devoted to the study of the properties of code-based cryptosystems. They provide a high level of security even under quantum cryptanalysis, i.e. they belong to the new generation of cryptosystems for post-quantum applications. The main disadvantage of the known code-based digital signature schemes is the long signature generation time, caused by the large number of attempts to decode a randomly generated vector (interpreted as a syndrome vector). The high complexity of this procedure motivates the search for new mechanisms and algorithms that would accelerate the formation of code-based electronic signatures. The article presents results along two research directions. First, we propose a new code-based digital signature scheme built on the one-way function of the classical McEliece cryptosystem; it not only provides an adequate level of resistance to classical cryptanalysis and to cryptanalysis using quantum computers but also, compared with known alternatives, provides protection against special types of attacks, such as simultaneous forgery attacks. Quantitative estimates of the reliability and speed of the new cryptographic algorithm, obtained by experimental verification on BCH codes, are also given. The second direction concerns a new approach associated with modifying the decoder by artificially increasing the error-correcting capability of the code. Thanks to the improved decoder scheme, the signature generation time can be reduced significantly. The paper confirms the effectiveness of the proposed decoder modification in the implementation of the new digital signature scheme in comparison with the classic Peterson-Gorenstein-Zierler decoder, in terms of signature generation speed and the number of required decoding attempts.
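As a hedged illustration of why decoding attempts dominate signing time, the sketch below shows the hash-and-retry loop typical of code-based (CFS-style) signatures. The function syndrome_decode is a hypothetical stand-in for the scheme's private-key decoder (e.g. a BCH decoder); this is not the authors' algorithm.

```python
import hashlib

def hash_to_syndrome(message: bytes, counter: int, syndrome_bits: int) -> int:
    """Hash message || counter and truncate the digest to the syndrome length."""
    digest = hashlib.sha256(message + counter.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") >> (256 - syndrome_bits)

def sign(message: bytes, syndrome_decode, syndrome_bits: int):
    """Retry with an incrementing counter until the hashed syndrome is decodable."""
    counter = 0
    while True:
        s = hash_to_syndrome(message, counter, syndrome_bits)
        error_vector = syndrome_decode(s)     # returns None if s is not decodable
        if error_vector is not None:
            return error_vector, counter      # signature = (error pattern, counter)
        counter += 1                          # expected number of retries is large,
                                              # hence the interest in faster decoders
```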


In today's world, the problem of losses from the actions of malicious software (or ordinary software that exhibits undeclared functionality) remains extremely relevant. Therefore, the creation and modification of anti-virus solutions for protection and malware analysis is a relevant and promising area of research. This is due to the lack of a single, universal method that guarantees detection of malicious code in all cases. The paper considers the composition and main components of static analysis. The main methods of static analysis are identified, and relevant examples are given for almost all of them. It is concluded that the main advantage of static analysis is that, using a relatively simple set of commands and tools, it is possible to analyze malware and partially understand how it works. Attention is drawn to the fact that static analysis does not give complete certainty that the investigated software is malicious. With this in mind, to provide a more meaningful analysis, as much data as possible should be collected about the structure of the file, its possible functions, and so on. Files are checked for the possible presence of malicious code with the help of appropriate programs that display their structure and composition. A more informative approach is analysis of the Portable Executable (PE) format, which consists of examining the various code sections, fields, and resources. Since static analysis does not always provide the required level of assurance, it is better to use machine learning algorithms at the stage of the final classification decision (malicious or not). This approach makes it possible to process large data sets with greater accuracy in determining the nature of the software being analyzed. The main purpose of this work is to analyze the existing methods of static malware analysis and to review the prospects for their further development.
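Below is a minimal sketch of one static-analysis step described above: listing PE sections and computing their byte entropy (packed or encrypted sections tend to have entropy close to 8 bits per byte). It assumes the third-party pefile package and is an illustrative example, not the paper's tooling.

```python
import math
from collections import Counter

import pefile

def byte_entropy(data: bytes) -> float:
    """Shannon entropy of a byte string, in bits per byte."""
    if not data:
        return 0.0
    counts = Counter(data)
    total = len(data)
    return -sum(c / total * math.log2(c / total) for c in counts.values())

def describe_sections(path: str):
    """Print name, raw size, and entropy for every section of a PE file."""
    pe = pefile.PE(path)
    for section in pe.sections:
        name = section.Name.rstrip(b"\x00").decode(errors="replace")
        entropy = byte_entropy(section.get_data())
        print(f"{name:10s} raw_size={section.SizeOfRawData:8d} entropy={entropy:.2f}")

# Usage (the path is a placeholder): describe_sections("sample.exe")
```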


The role and main tasks of various network traps (honeypots) in the construction of integrated security systems are defined. The basic classification features and the peculiarities of the initial configuration of several commercial software solutions are considered. It is concluded that the main advantages of honeypot technology are, among other things, its flexibility and scalability. It is emphasized that at present there are no perfect methods for identifying and rapidly compromising network traps. Attention is drawn to the fact that network reconnaissance tactics and methods of network attacks are constantly progressing. Given this fact, the ongoing audit of honeypot data and prompt response to identified network incidents is one of the main areas of work for staff responsible for compliance with corporate information security policy. It is noted that the architecture of various traps is, in general, quite well known and therefore potentially vulnerable. Therefore, by providing traps with a more flexible (variable) scenario context and reducing their exposure time, their protective potential can be maintained at an adequate level. Both of these directions (detailed analysis of log-file data and adjustment of the behavioral avatar algorithms of the created trap) require closer attention from staff and constant maintenance of their professional competencies. Based on a review of the capabilities of existing honeypots and a generalization of the typical network activity of the most characteristic nodes (in this case a file server), the features of synthesizing the corresponding behavioral profiles (avatars) are considered. It is claimed that systematizing honeypot avatar rules (as a set of behavioral algorithms) and timely correcting the existing databases of behavioral profiles is a task that is difficult to formalize. This is caused by the potential variety of network activity patterns specific to each network and by the individual settings of existing network nodes. In this sense, excessive unification (narrowing of the possible range of behavioral reactions) of honeypot behavioral profiles can make it much easier for an attacker to monitor and subsequently identify the created trap. Therefore, the formation of a basic set of relevant network avatars should be considered as a basis for further modification to the specific task, topology, and other features of each individual IT structure (or of its individual elements). It is emphasized that the introduction of trap technology does not replace other security technologies and tools, but effectively expands the existing arsenal for countering new security threats (primarily as a tool for operational intelligence and rapid response). Therefore, integrating network traps with other security solutions is the most balanced way to further improve the overall security of network resources.
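A purely illustrative sketch of how a behavioral profile (avatar) for a file-server trap could be expressed as data is given below: which services to expose, a plausible activity pattern, and jittered response delays so the trap's timing does not look scripted. The field names and values are assumptions, not any product's configuration format.

```python
import random

FILE_SERVER_AVATAR = {
    "hostname": "fs-archive-02",                # placeholder name
    "services": {
        445: {"banner": "SMB", "shares": ["reports", "backup", "public"]},
        21:  {"banner": "FTP server ready", "anonymous": False},
    },
    "activity_profile": {
        "business_hours": (8, 18),              # emulate traffic mostly in work hours
        "mean_sessions_per_hour": 12,
        "response_delay_ms": (20, 150),         # min/max, jittered per response
    },
}

def response_delay(avatar) -> float:
    """Pick a jittered response delay so timing does not fingerprint the trap."""
    low, high = avatar["activity_profile"]["response_delay_ms"]
    return random.uniform(low, high) / 1000.0

print(f"simulated delay: {response_delay(FILE_SERVER_AVATAR):.3f} s")
```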


The article considers the problems of creating a virtual private network in the modern world. A method for finding the optimal virtual private network solution based on the analytic hierarchy process is proposed, and its advantages and disadvantages are given. Owing to the availability of a wide range of software and hardware, solutions based on five current VPN protocols are considered: PPTP, IPsec, L2TP + IPsec, SSTP, and OpenVPN. The experts involved in the study introduced six criteria for determining the best VPN solution: speed, data encryption, setup and configuration, ports, stability, and client compatibility. A hierarchy of decision levels for the case study is provided. Priorities were assessed on the basis of expert judgments of specialists in this field, and a consistency check was performed to identify possible contradictions. The global priority is determined using the eigenvalue method, which calculates not only the priorities but also the degree of inconsistency. The simulation results emphasize that the choice of a VPN protocol is a difficult task: solving it requires market analysis, the definition of comparison criteria, and prioritization. Attention is drawn to the fact that all these steps are carried out in conditions where there is no complete information about the system in which these processes occur. In this case, decision-making methods under uncertainty are needed. At present there are a large number of such methods, but in this situation the experts proposed the analytic hierarchy process. Under the above criteria and priorities, the calculation shows that the OpenVPN protocol is the optimal solution for creating a virtual private network.
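The sketch below shows the eigenvalue step of the analytic hierarchy process mentioned above: deriving a priority vector from a pairwise comparison matrix and checking its consistency. The matrix values are illustrative, not the experts' actual judgments.

```python
import numpy as np

# Random index values for the consistency ratio (standard Saaty table, n <= 6).
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24}

def ahp_priorities(pairwise: np.ndarray):
    """Return (priority vector, consistency ratio) for a reciprocal comparison matrix."""
    eigvals, eigvecs = np.linalg.eig(pairwise)
    k = np.argmax(eigvals.real)                # principal eigenvalue
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()                               # normalized priorities
    n = pairwise.shape[0]
    ci = (eigvals.real[k] - n) / (n - 1)       # consistency index
    return w, ci / RI[n]                       # consistency ratio (should be < 0.1)

# Example: three alternatives compared on a single criterion (made-up judgments).
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
weights, cr = ahp_priorities(A)
print("priorities:", np.round(weights, 3), "CR:", round(cr, 3))
```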


The article presents an example of verifying a fingerprint database by solving the traveling salesman problem over a decomposition of the neighborhoods of the nearest minutiae. The resulting solution is resistant to linear and angular deformations and to the reordering of points. The basic method gives a correct solution for a small number of points; for a large number of points the contours intersect and the solution is not optimal. Therefore, to reduce processing time and compute the metric, a modified branch-and-bound algorithm is used, namely the alignment and exclusion of arcs on each cycle of the optimal route. Verification is based on creating local structures for each minutia of the print, because it is the local structures that are resistant to deformation. Building global structures very often does not lead to good quality indicators, since centering the entire sample is a problem. A complete set of tests of the fingerprint database templates was carried out for verification by this method. The use of the decomposition of characteristic features provides greater stability when false minutiae are added and true minutiae are erased. The article reports the values of pairwise comparisons of two templates for true and false tests. The false rejection rate (FRR), false acceptance rate (FAR), and equal error rate (EER) were studied.
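As a hedged sketch of the idea described above, the code below takes the k nearest neighbors of one minutia, orders them as a short closed tour (here with a simple nearest-neighbor heuristic rather than the paper's branch-and-bound), and keeps the sorted edge lengths as a local descriptor that is insensitive to rotation, translation, and point reordering. The value of k and the comparison score are assumptions.

```python
import numpy as np

def local_tour_descriptor(minutiae: np.ndarray, index: int, k: int = 5) -> np.ndarray:
    """Edge-length profile of a greedy closed tour over the k nearest neighbors of one minutia."""
    center = minutiae[index]
    dists = np.linalg.norm(minutiae - center, axis=1)
    neighbors = minutiae[np.argsort(dists)[1:k + 1]]      # skip the point itself

    # Greedy nearest-neighbor tour starting from the central minutia.
    tour, current, remaining = [center], center, list(range(len(neighbors)))
    while remaining:
        j = min(remaining, key=lambda i: np.linalg.norm(neighbors[i] - current))
        current = neighbors[j]
        tour.append(current)
        remaining.remove(j)
    tour.append(center)                                   # close the cycle

    edges = np.linalg.norm(np.diff(np.array(tour), axis=0), axis=1)
    return np.sort(edges)                                 # order-free comparison

def match_score(desc_a: np.ndarray, desc_b: np.ndarray) -> float:
    """Smaller means more similar; descriptors are compared minutia by minutia."""
    return float(np.mean(np.abs(desc_a - desc_b)))

# Usage with synthetic minutiae coordinates (illustrative only):
pts = np.random.default_rng(3).uniform(0, 300, size=(30, 2))
print(local_tour_descriptor(pts, index=0))
```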


Currently, attempts are being made to introduce biometric technologies into various spheres of public and state life: forensics, access control systems, mobile applications, banking, etc. The problem of accuracy remains open, because verification of biometric samples faces the addition or disappearance of reference points, deformation of the distances between them, and linear and angular displacements of the whole sample. In addition, the biometric systems developed so far do not meet all information security requirements, namely integrity, availability, authenticity, non-repudiation, observability, and confidentiality. The article presents an analysis of the method of minutiae decomposition during fingerprint verification and describes its advantages and disadvantages compared with other methods. The method is based on creating local structures for each minutia of the print, because it is the local structures that are resistant to mixing and to angular and linear displacement of points. Building global structures often does not lead to good accuracy, since centering the entire sample is a problem. A complete set of tests of fingerprint database samples was carried out for verification by this method. An algorithm for constructing a code for an arbitrary minutia and an algorithm for comparing two sample templates are described. The article reports the values of pairwise comparisons of two templates for true and false tests. The false rejection rate (FRR), false acceptance rate (FAR), and equal error rate (EER) were studied.
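The following is a minimal sketch of how the error rates reported above (FAR, FRR, EER) can be computed from matcher scores: genuine scores come from true (same-finger) comparisons and impostor scores from false ones. A higher score meaning a better match is an assumption here, and the score lists in the usage example are synthetic placeholders.

```python
import numpy as np

def far_frr(genuine, impostor, threshold):
    """FAR and FRR at one decision threshold."""
    genuine, impostor = np.asarray(genuine), np.asarray(impostor)
    frr = np.mean(genuine < threshold)      # true pairs wrongly rejected
    far = np.mean(impostor >= threshold)    # false pairs wrongly accepted
    return far, frr

def equal_error_rate(genuine, impostor, steps=1000):
    """Sweep thresholds and return the error rate where FAR and FRR are closest."""
    lo = min(np.min(genuine), np.min(impostor))
    hi = max(np.max(genuine), np.max(impostor))
    best_gap, best_eer = 1.0, None
    for t in np.linspace(lo, hi, steps):
        far, frr = far_frr(genuine, impostor, t)
        if abs(far - frr) < best_gap:
            best_gap, best_eer = abs(far - frr), (far + frr) / 2
    return best_eer

# Usage with synthetic score distributions (illustrative only):
rng = np.random.default_rng(2)
eer = equal_error_rate(rng.normal(0.8, 0.1, 500), rng.normal(0.4, 0.1, 500))
print(f"EER ≈ {eer:.3f}")
```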

