hash code
Recently Published Documents


TOTAL DOCUMENTS

75
(FIVE YEARS 45)

H-INDEX

9
(FIVE YEARS 3)

2022 ◽  
Vol 1 (13) ◽  
pp. 71-79
Author(s):  
Hoàng Thái Hổ ◽  
Nguyễn Thế Hùng ◽  
Nguyễn Tuấn Minh

Abstract—This paper presents a solution that harnesses the computing power of a distributed computer network for the cryptanalysis of block ciphers. The system is built from three software components. The management software handles input data entry, partitions the keyspace, and analyzes the results. CPU and GPU cryptanalysis software is installed on the client computers of the distributed network and performs cryptanalysis on the data supplied by the management software; the results are returned to the management software for analysis and decryption. Cryptanalysis runs simultaneously on all computers in the network during their idle time, without affecting users' daily activities. Including GPU-equipped computers in the system increases cryptanalysis performance by a factor of 11. The solution has been applied to recovering Windows passwords from LAN Manager hash codes.
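A minimal sketch of the keyspace-partitioning idea described in this abstract, under stated assumptions: the worker and management roles are collapsed into one script, the alphabet and maximum length are tiny for illustration, and SHA-256 from `hashlib` stands in for the DES-based LAN Manager hash, which is not implemented here. The "management" side splits the candidate space into per-worker ranges; each "worker" exhausts its own range.

```python
import hashlib
import itertools
import string

ALPHABET = string.ascii_uppercase + string.digits  # LM passwords are case-insensitive
MAX_LEN = 4                                        # kept tiny for illustration

def candidate_keys(start, count):
    """Enumerate `count` candidate passwords beginning at global index `start`."""
    gen = itertools.chain.from_iterable(
        itertools.product(ALPHABET, repeat=n) for n in range(1, MAX_LEN + 1)
    )
    return ("".join(t) for t in itertools.islice(gen, start, start + count))

def split_keyspace(total, workers):
    """Management role: divide `total` candidates into per-worker ranges."""
    chunk = total // workers
    return [(i * chunk, chunk) for i in range(workers)]

def worker_search(target_hash, start, count):
    """Worker role: test the assigned range and report a hit, if any."""
    for pw in candidate_keys(start, count):
        # SHA-256 stands in for the real LM hash routine here.
        if hashlib.sha256(pw.encode()).hexdigest() == target_hash:
            return pw
    return None

if __name__ == "__main__":
    target = hashlib.sha256(b"AB12").hexdigest()   # pretend this came from a password dump
    for start, count in split_keyspace(total=2_000_000, workers=8):
        hit = worker_search(target, start, count)
        if hit:
            print("recovered:", hit)
            break
```

In the system described above, each range would instead be dispatched to a different client machine and executed only during idle time, with results reported back to the management software.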


Author(s):  
Serhii Yevseiev ◽  
Alla Havrylova ◽  
Olha Korol ◽  
Oleh Dmitriiev ◽  
Oleksii Nesmiian ◽  
...  

The transfer of information over telecommunication channels is accompanied by message hashing to control data integrity and confirm data authenticity. With a reliable hash function it is computationally difficult to create a fake message matching a pre-existing hash code; however, weaknesses of specific hashing algorithms can make this threat feasible. Existing ways of creating hash codes to increase the cryptographic strength of messages transmitted over telecommunication channels are, according to practical research, imperfect in terms of both generation speed and degree of cryptographic strength. The collision properties of hash functions formed with the modified UMAC algorithm are investigated using a methodology for assessing the universality and strict universality of hash codes. Based on the results, the impact of the proposed modifications at the last stage of authentication-code generation on the universal hashing properties is assessed. The advantages and disadvantages of previously known methods of forming the hash code are analyzed. The scheme of cascading generation of data integrity and authenticity control codes using the UMAC algorithm on crypto-code constructions has been improved. Algorithms for checking hash codes against the requirements of universality and strict universality were developed. Collision search over the set of generated hash codes was calculated and analyzed against the requirements of the universal and strictly universal classes of hash codes.
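The universality notion assessed above can be illustrated with a toy construction. The sketch below is not the UMAC algorithm or the paper's methodology; it draws random members of a simple multiply-mod-prime hash family and empirically counts collisions between distinct messages, which for a universal family should stay near 1/|range|. All constants are illustrative assumptions.

```python
import random

P = 2**61 - 1          # a Mersenne prime, convenient modulus for the toy family
M = 2**16              # size of the hash-code range (16-bit codes)

def hash_member(a, b, x):
    """One member h_{a,b}(x) = ((a*x + b) mod P) mod M of a universal family."""
    return ((a * x + b) % P) % M

def collision_rate(trials=200_000):
    """Estimate Pr[h(x) == h(y)] for x != y over randomly chosen family members."""
    collisions = 0
    for _ in range(trials):
        a = random.randrange(1, P)
        b = random.randrange(P)
        x, y = random.sample(range(P), 2)   # two distinct messages
        if hash_member(a, b, x) == hash_member(a, b, y):
            collisions += 1
    return collisions / trials

if __name__ == "__main__":
    rate = collision_rate()
    print(f"observed collision rate: {rate:.6f}, bound 1/M = {1/M:.6f}")
```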


Author(s):  
Weiwei Song ◽  
Zhi Gao ◽  
Renwei Dian ◽  
Pedram Ghamisi ◽  
Yongjun Zhang ◽  
...  

2021 ◽  
Vol 2115 (1) ◽  
pp. 012034
Author(s):  
P Arul ◽  
S Renuka

Abstract An Electronic Health Record (EHR) is a database for storing patients' medical information collected from different sources such as smart wearable devices, smart sensors, and diagnostic imaging equipment. An EHR contains sensitive private information about patients and the treatment of their diseases, and it is often shared among different parties, including healthcare providers, insurance companies, medical researchers, and others. The main difficulty in EHR information management lies in gathering, storing, and sharing patient healthcare data without compromising privacy and security. Blockchain has recently been proposed as an efficient way to manage EHR data. This paper provides a hybrid architecture for EHR data management that uses a Hyperledger blockchain network on-chain and edge nodes off-chain. The architecture authenticates users without exposing sensitive patient information and also authenticates the encrypted EHR information held on edge nodes. In the on-chain approach, EHR activities and patient authentication activities are recorded on the blockchain for accountability and traceability; in the off-chain approach, edge nodes store the encrypted EHR data. The combination of on-chain and off-chain approaches therefore allows only an authorized data user who satisfies the EHR access requirements to decrypt the EHR data. If the EHR information is altered by an unauthorized user, the newly generated hash code differs from the old hash code stored in the blockchain, so the user can easily detect that the EHR information has been tampered with.
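A minimal sketch of the on-chain/off-chain split described above, under stated assumptions: Python dictionaries stand in for the Hyperledger ledger and the edge-node store, and Fernet symmetric encryption from the `cryptography` package stands in for the paper's encryption scheme. Only the SHA-256 hash of the encrypted record goes "on-chain"; tampering with the off-chain copy changes its hash and is detected on verification.

```python
import hashlib
from cryptography.fernet import Fernet

ledger = {}        # mock on-chain store: record_id -> hash of encrypted EHR
edge_store = {}    # mock off-chain edge node: record_id -> encrypted EHR bytes

key = Fernet.generate_key()
fernet = Fernet(key)

def store_ehr(record_id: str, ehr_plaintext: bytes) -> None:
    """Encrypt the EHR, keep the ciphertext off-chain, anchor its hash on-chain."""
    ciphertext = fernet.encrypt(ehr_plaintext)
    edge_store[record_id] = ciphertext
    ledger[record_id] = hashlib.sha256(ciphertext).hexdigest()

def fetch_ehr(record_id: str) -> bytes:
    """Verify the off-chain copy against the on-chain hash before decrypting."""
    ciphertext = edge_store[record_id]
    if hashlib.sha256(ciphertext).hexdigest() != ledger[record_id]:
        raise ValueError("EHR record has been altered: hash mismatch")
    return fernet.decrypt(ciphertext)

if __name__ == "__main__":
    store_ehr("patient-42", b"diagnosis: ...; prescription: ...")
    print(fetch_ehr("patient-42"))
    edge_store["patient-42"] += b"tamper"          # simulate an unauthorized edit
    try:
        fetch_ehr("patient-42")
    except ValueError as e:
        print(e)
```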


2021 ◽  
Vol 11 (18) ◽  
pp. 8769
Author(s):  
Jun Long ◽  
Longzhi Sun ◽  
Liujie Hua ◽  
Zhan Yang

Cross-modal hashing is a key technology for real-time retrieval of large-scale multimedia data in real-world applications. Although existing cross-modal hashing methods have achieved impressive results, some limitations remain: (1) some methods do not fully consider the rich semantic information and the noise in labels, resulting in a large semantic gap, and (2) some methods adopt relaxation-based or discrete cyclic coordinate descent algorithms to handle the discrete constraint, resulting in large quantization error or high time consumption. To address these limitations, this paper proposes a novel method named Discrete Semantics-Guided Asymmetric Hashing (DSAH). Specifically, DSAH leverages both label information and the similarity matrix to enhance the semantic information of the learned hash codes, and the ℓ2,1 norm is used to increase the sparsity of the matrix so as to counter the inevitable noise and subjective factors in labels. Meanwhile, an asymmetric hash learning scheme is proposed to perform hash learning efficiently. In addition, a discrete optimization algorithm is proposed to solve for the hash codes directly and discretely at low cost. During optimization, hash code learning and hash function learning interact: the learned hash codes guide the learning of the hash function, and the hash function in turn guides hash code generation. Extensive experiments on two benchmark datasets highlight the superiority of DSAH over several state-of-the-art methods.
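A short numpy sketch of two ingredients mentioned above, not the DSAH optimization itself: the ℓ2,1 norm that encourages row sparsity in a projection matrix, and Hamming-distance ranking between discrete query codes and database codes produced by a sign function. The dimensions, the random projection, and the two-modality labeling are illustrative assumptions.

```python
import numpy as np

def l21_norm(W: np.ndarray) -> float:
    """ℓ2,1 norm: sum of the ℓ2 norms of the rows, promoting row sparsity."""
    return float(np.sum(np.linalg.norm(W, axis=1)))

def binarize(features: np.ndarray, projection: np.ndarray) -> np.ndarray:
    """Map real-valued features to ±1 hash codes with a sign function."""
    return np.sign(features @ projection)

def hamming_rank(query_code: np.ndarray, db_codes: np.ndarray) -> np.ndarray:
    """Rank database items by Hamming distance to the query code."""
    bits = db_codes.shape[1]
    # For ±1 codes, Hamming distance = (bits - inner product) / 2.
    distances = (bits - db_codes @ query_code) / 2
    return np.argsort(distances)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    W = rng.normal(size=(128, 32))                 # feature dim 128 -> 32-bit codes
    db = binarize(rng.normal(size=(1000, 128)), W) # image-modality database codes
    q = binarize(rng.normal(size=(1, 128)), W)[0]  # text-modality query code
    print("l2,1 norm of W:", l21_norm(W))
    print("top-5 neighbours:", hamming_rank(q, db)[:5])
```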


Author(s):  
Lizbardo Orellano Benancio ◽  
◽  
Ricardo Muñoz Canales ◽  
Paolo Rodriguez Leon ◽  
Enrique Lee Huamaní

Abstract—During various court hearings, the thesis that every authentic digital file carries precise metadata recording its creation date was questioned. This raises the problem of whether the metadata of a digital image file, whose tags record the creation date assigned by the recording device, are accurate and reliable. For this reason, the forensic analysis carried out in this work records the metadata of five (05) digital image files from known sources and details their characteristics; it also records the metadata of these images after they were manipulated with image-editing software, and compares the two sets of metadata to show which tags were modified. Finally, HASH codes of the original and edited files are obtained with the SHA-256 algorithm for digital assurance; comparing them reveals the changes in content at the binary level. Keywords—Crime; Cybercrime; Digital Image; HASH; Metadata
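The binary-level comparison described above reduces to hashing both files and comparing digests. A minimal sketch with Python's `hashlib`; the file paths are placeholders, not the files used in the study.

```python
import hashlib

def sha256_of_file(path: str, chunk_size: int = 65536) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    original = sha256_of_file("IMG_0001_original.jpg")   # placeholder paths
    edited = sha256_of_file("IMG_0001_edited.jpg")
    print("original:", original)
    print("edited:  ", edited)
    print("identical content" if original == edited else "content differs at binary level")
```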


2021 ◽  
Author(s):  
Xiaojun Bai ◽  
Xin Jin ◽  
Feihu Jiang ◽  
Zongxin Wang

2021 ◽  
Vol 7 (8) ◽  
pp. 134
Author(s):  
Miki Tanaka ◽  
Sayaka Shiota ◽  
Hitoshi Kiya

SNS providers are known to recompress and resize uploaded images, but most conventional methods for detecting fake or tampered images are not robust against such operations. In this paper, we propose a novel method for detecting fake images that is robust to distortion caused by image operations such as compression and resizing. We select a robust hashing method, which retrieves images similar to a query image, for fake-/tampered-image detection, and hash values extracted from both reference and query images are used to robustly detect fake images for the first time. When an original hash code from a reference image is available for comparison, the proposed method detects fake images more robustly than conventional methods. One practical application of this method is monitoring images, including synthetic ones, sold by a company. In experiments, the proposed fake-image detection is demonstrated to outperform state-of-the-art methods on various datasets, including fake images generated with GANs.
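As an illustration of comparing hash values from a reference and a query image, the sketch below uses a simple average hash computed with Pillow and a Hamming-distance threshold. This is a generic perceptual hash, not the robust hashing method selected in the paper, and the file names and threshold are assumptions.

```python
from PIL import Image

def average_hash(path: str, size: int = 8) -> int:
    """64-bit average hash: downscale, convert to grayscale, threshold at the mean."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p >= mean else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two hash codes."""
    return bin(a ^ b).count("1")

def looks_tampered(reference_path: str, query_path: str, threshold: int = 10) -> bool:
    """Flag the query as fake/tampered if it strays too far from the reference hash."""
    return hamming(average_hash(reference_path), average_hash(query_path)) > threshold

if __name__ == "__main__":
    # Placeholder file names; in practice the reference hash would come from a database.
    print(looks_tampered("reference.png", "query.png"))
```

Because the hash is computed from coarse image structure, mild recompression or resizing leaves it largely unchanged, which is the property the detection scheme above relies on.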


Information ◽  
2021 ◽  
Vol 12 (7) ◽  
pp. 285
Author(s):  
Wenjing Yang ◽  
Liejun Wang ◽  
Shuli Cheng ◽  
Yongming Li ◽  
Anyu Du

Recently, deep learning to hash has been extensively applied to image retrieval owing to its low storage cost and fast query speed. However, existing hashing methods that use a convolutional neural network (CNN) to extract image semantic features suffer from insufficient and imbalanced features: the extracted features lack contextual information and relevance among one another. Furthermore, relaxing the hash codes during optimization leads to an inevitable quantization error. To address these problems, this paper proposes deep hashing with improved dual attention for image retrieval (DHIDA), whose main contributions are as follows: (1) an improved dual attention mechanism (IDA), built on a pre-trained ResNet18 module and consisting of a position attention module and a channel attention module, is introduced to extract the feature information of the image; (2) when computing the spatial attention matrix and the channel attention matrix, the average and maximum values of the columns of the feature map matrix are integrated to promote the feature representation ability and fully exploit the features of each position; and (3) to reduce quantization error, a new piecewise function is designed to directly guide the discrete binary codes. Experiments on CIFAR-10, NUS-WIDE, and ImageNet-100 show that the DHIDA algorithm achieves better performance.
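The role of a piecewise function that guides real-valued network outputs toward discrete ±1 codes can be illustrated as below. This is a generic clipped-linear relaxation assumed for illustration, not the exact function designed in DHIDA.

```python
import numpy as np

def piecewise_binarize(u: np.ndarray, beta: float = 5.0) -> np.ndarray:
    """Piecewise approximation of sign(u): linear with slope beta near 0, clipped to ±1.

    As beta grows, the output approaches the discrete code sign(u), so the gap
    between the relaxed code and the binary code (the quantization error) shrinks.
    """
    return np.clip(beta * u, -1.0, 1.0)

def quantization_error(u: np.ndarray, beta: float) -> float:
    """Mean squared gap between the relaxed code and the discrete code."""
    relaxed = piecewise_binarize(u, beta)
    return float(np.mean((relaxed - np.sign(u)) ** 2))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    u = rng.normal(size=(1000, 48))     # raw network outputs for 48-bit codes
    for beta in (1.0, 5.0, 20.0):
        print(f"beta={beta:5.1f}  quantization error={quantization_error(u, beta):.4f}")
```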


Sensors ◽  
2021 ◽  
Vol 21 (14) ◽  
pp. 4679
Author(s):  
Yoon-Su Jeong

As IoT (Internet of Things) devices are used in increasingly diverse fields (manufacturing, health, medical, energy, home, automobile, transportation, etc.), it is becoming important to analyze and process the data sent and received by IoT devices connected to the Internet. Data collected from IoT devices depends heavily on secure storage in databases located in cloud environments. However, storing the data directly in a cloud database not only makes it difficult to control IoT data directly, but also fails to guarantee its integrity because of the many hazards (errors and error handling, security attacks, etc.) that can arise from natural disasters and management neglect. In this paper, we propose an optimized hash-processing technique that enables hierarchical distributed processing with an n-bit blockchain to minimize the loss of data generated by IoT devices deployed in distributed cloud environments. The proposed technique minimizes IoT data integrity errors and strengthens the role of intermediate media acting as gateways by interactively authenticating n-bit blockchains with the n + 1 and n − 1 layers, so that the IoT data sent and received are validated normally. In particular, the proposed technique ensures the reliability of IoT information by validating hash values of the IoT data when the index information of IoT data distributed across different locations is stored in a blockchain to maintain data integrity. Furthermore, the proposed technique ensures the linkage of IoT data by tolerating only minimal errors in the collected IoT data while grouping their linkage information, thus optimizing the load balance after hash processing. In the performance evaluation, the proposed technique reduced IoT data processing time by an average factor of 2.54. Blockchain generation time improved on average by 17.3% when linking IoT data. The asymmetric storage efficiency of IoT data according to hash-code length improved by 6.9% on average over existing techniques. The asymmetric storage speed according to the hash-code length of IoT data blocks was on average 10.3% faster than existing techniques. The integrity accuracy of IoT data improved by 18.3% on average over existing techniques.
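A minimal sketch of chaining index blocks of IoT data with hash links and validating the chain, under stated assumptions: SHA-256 digests truncated to n bits stand in for the paper's n-bit blockchain, the device and location fields are hypothetical, and the interaction between the n + 1 and n − 1 layers is not modeled.

```python
import hashlib
import json
import time

N_BITS = 128  # hash-code length per block, truncated from SHA-256 for illustration

def block_hash(block: dict) -> str:
    """n-bit hash of a block's canonical JSON encoding."""
    raw = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(raw).hexdigest()[: N_BITS // 4]

def append_block(chain: list, iot_index: dict) -> None:
    """Append an IoT index record, linking it to the previous block's hash."""
    prev = chain[-1]["hash"] if chain else "0" * (N_BITS // 4)
    block = {"timestamp": time.time(), "index": iot_index, "prev_hash": prev}
    block["hash"] = block_hash(block)
    chain.append(block)

def validate_chain(chain: list) -> bool:
    """Recompute every hash and check each link to detect integrity errors."""
    for i, block in enumerate(chain):
        body = {k: v for k, v in block.items() if k != "hash"}
        if block_hash(body) != block["hash"]:
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

if __name__ == "__main__":
    chain = []
    append_block(chain, {"device": "sensor-01", "location": "edge-a", "offset": 0})
    append_block(chain, {"device": "sensor-02", "location": "edge-b", "offset": 1})
    print("valid:", validate_chain(chain))
    chain[0]["index"]["device"] = "sensor-99"      # simulate tampering
    print("valid after tampering:", validate_chain(chain))
```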

