A New Circle based Symmetric key Encryption Technique for Text Data

Author(s):  
Sailaja K L ◽  
Author(s):  
Yegireddi Ramesh ◽  
Kiran Kumar Reddi

With the enormous growth of the Internet and networking, data security has become an unavoidable concern for any organization, and it has long attracted considerable attention from network researchers. Many cryptographic algorithms address this problem, each with its own strengths and costs. As society moves into the digital information age, we need strong standard algorithms that also compute quickly when the data size is large. A survey shows that many sequential symmetric key algorithms operating on 128-bit blocks are found to be insecure and slow. Since commodity multi-core processors now make heavy parallelism practical for computational problems, we accordingly propose a parallel symmetric key based algorithm to encrypt/decrypt large data for secure conveyance. The algorithm takes 64 characters (512 bits) of plaintext, processes each 16-character block separately by applying parallelism, and finally combines the 16-character cipher blocks to form the 64-character ciphertext. The round function employed in the algorithm is very complex, which improves its efficacy.


2020 ◽  
Vol 2 (3) ◽  
pp. 137-144


1976 ◽  
Vol 15 (01) ◽  
pp. 21-28 ◽  
Author(s):  
Carmen A. Scudiero ◽  
Ruth L. Wong

A free text data collection system has been developed at the University of Illinois utilizing single-word, syntax-free dictionary lookup to process data for retrieval. The source document for the system is the Surgical Pathology Request and Report form. To date 12,653 documents have been entered into the system. The free text data was used to create an IRS (Information Retrieval System) database. A program to interrogate this database has been developed to numerically code operative procedures. A total of 16,519 procedure records were generated. One and nine tenths percent of the procedures could not be fitted into any procedure category; 6.1% could not be specifically coded, while 92% were coded into specific categories. A system of PL/1 programs has been developed to facilitate manual editing of these records, which can be performed in a reasonable length of time (1 week). This manual check reveals that these 92% were coded with precision = 0.931 and recall = 0.924. Correction of the readily correctable errors could improve these figures to precision = 0.977 and recall = 0.987. Syntax errors were relatively unimportant in the overall coding process, but did introduce significant error in some categories, such as when a right-left-bilateral distinction was attempted. The coded file that has been constructed will be used as an input file to a gynecological disease/PAP smear correlation system. The outputs of this system will include retrospective information on the natural history of selected diseases and a patient log providing information to the clinician on patient follow-up. Thus a free text data collection system can be utilized to produce numerically coded files of reasonable accuracy. Further, these files can be used as a source of useful information both for the clinician and for the medical researcher.
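The precision and recall figures quoted above follow the standard definitions. A minimal sketch, with hypothetical counts (the paper reports only the resulting ratios, not the raw counts):

```python
# Standard precision/recall computation over coding outcomes.
# true_pos: procedures coded to the correct category,
# false_pos: coded to a wrong category,
# false_neg: correct category exists but was missed.
def precision_recall(true_pos: int, false_pos: int, false_neg: int):
    precision = true_pos / (true_pos + false_pos)
    recall = true_pos / (true_pos + false_neg)
    return precision, recall
```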


Author(s):  
I. G. Zakharova ◽  
Yu. V. Boganyuk ◽  
M. S. Vorobyova ◽  
E. A. Pavlova

The goal of this article is to demonstrate the possibilities of an approach to diagnosing the level of IT graduates' professional competence, based on the analysis of the student's digital footprint and the content of the corresponding educational program. We describe methods for extracting indicators of a student's professional level from digital footprint text data: course descriptions and graduation qualification works. We show methods of comparing these indicators with the formalized requirements of employers, reflected in the texts of vacancies in the field of information technology. The proposed approach was applied at the Institute of Mathematics and Computer Science of the University of Tyumen. We performed diagnostics using a data set that included texts of course descriptions for IT areas of undergraduate study, 542 graduation qualification works in these areas, 879 descriptions of job requirements, and information on graduate employment. The presented approach allows us to evaluate the relevance of the educational program as a whole and the level of professional competence of each student based on objective data. The results were used to update the content of some major courses and to include new elective courses in the curriculum.
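One simple way to compare extracted student indicators with employer requirements is bag-of-words cosine similarity between the two texts. This is an illustrative sketch under that assumption; the article does not specify this exact measure, and the sample texts are hypothetical.

```python
# Cosine similarity between two texts as bags of words.
# Used here to score how well a student's footprint text
# matches a vacancy description.
import math
from collections import Counter

def cosine_similarity(text_a: str, text_b: str) -> float:
    a = Counter(text_a.lower().split())
    b = Counter(text_b.lower().split())
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0
```

In practice one would weight terms (e.g. TF-IDF) and normalize vocabulary, but the ranking idea is the same: vacancies most similar to a student's footprint indicate the competences that student can demonstrate.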


2019 ◽  
Vol 7 (4) ◽  
pp. 220-224
Author(s):  
J. Lenin ◽  
B. Sundaravadivazhagan ◽  
M. Sulthan Ibrahim

Author(s):  
Kaldius Ndruru ◽  
Putri Ramadhani

Security of data stored on computers is now an absolute requirement, because all data has high enough value for its users, readers, and owners. To prevent misuse of the data by other parties, data security is needed. Data security is the protection of data in a system against unauthorized access, modification, or destruction. The science that explains the ways of securing data is known as cryptography, while the procedures used in cryptography are called cryptographic algorithms. At this time, many cryptographic algorithms have weak keys, especially symmetric key algorithms, because they have only one key: the key for encryption is the same as the key for decryption, so the scheme needs to be modified to confuse cryptanalysts trying to access important data. The Word Auto Key Encryption (WAKE) cipher is one method that has been used to secure data; in this case the authors want to strengthen the encryption and decryption keys of the WAKE algorithm produced during key formation. One way is to apply the Pascal's triangle method, using the numbers contained in the columns and rows of the triangle to shift the encryption and decryption keys of the WAKE algorithm.
Keywords: Cryptography, WAKE, Pascal's triangle
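The Pascal's triangle shifting idea above can be sketched as follows. This is a minimal illustration, assuming each key byte is shifted by the corresponding entry of one triangle row modulo 256; the exact shift rule used by the authors is not specified in the abstract, so this construction is hypothetical.

```python
# Sketch: derive shift amounts from a row of Pascal's triangle
# and apply them to the bytes of a symmetric key. The choice of
# row and the mod-256 addition are illustrative assumptions.
from math import comb

def pascal_row(n: int) -> list:
    # Row n of Pascal's triangle: C(n, 0) .. C(n, n).
    return [comb(n, k) for k in range(n + 1)]

def shift_key(key: bytes) -> bytes:
    # Shift byte i of the key by the i-th entry of row len(key)-1.
    row = pascal_row(len(key) - 1)
    return bytes((b + row[i]) % 256 for i, b in enumerate(key))
```

The point of such a transformation is that even if the base key leaks, the effective key fed to the WAKE key schedule differs from it in a structured but non-obvious way.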


Author(s):  
Aleksey Klokov ◽  
Evgenii Slobodyuk ◽  
Michael Charnine

The object of the research was a corpus of text data, collected together with the scientific advisor, and the natural language processing algorithms used for its analysis. A stream of hypotheses has been tested against computer science publications through a series of simulation experiments described in this dissertation. The subject of the research is the algorithms, and their results, aimed at predicting promising topics and terms that emerge over time in the scientific environment. The result of this work is a set of machine learning models with whose help experiments were carried out to identify promising terms and semantic relationships in the text corpus. The resulting models can be used for semantic processing and analysis of other subject areas.
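One plausible reading of the prediction task above is flagging terms whose yearly frequency in the corpus is growing. The sketch below fits a least-squares slope to per-year counts and keeps fast-growing terms; the criterion, threshold, and data are illustrative assumptions, not the dissertation's actual models.

```python
# Toy sketch: rank terms as "promising" by the linear trend of
# their yearly occurrence counts in a publication corpus.
def slope(years, counts):
    # Ordinary least-squares slope of counts against years.
    n = len(years)
    my, mc = sum(years) / n, sum(counts) / n
    num = sum((y - my) * (c - mc) for y, c in zip(years, counts))
    den = sum((y - my) ** 2 for y in years)
    return num / den

def promising_terms(term_counts, years, threshold=1.0):
    # Keep terms whose frequency grows by more than `threshold`
    # occurrences per year on average.
    return [t for t, counts in term_counts.items()
            if slope(years, counts) > threshold]
```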

