Random String
Recently Published Documents

Total documents: 32 (five years: 9)
H-index: 6 (five years: 1)

2021 ◽  
Author(s):  
Nour Zaarour ◽  
Nadir Hakem ◽  
Nahi Kandil

In wireless sensor networks (WSN), high-accuracy localization is crucial both for WSN management and for numerous location-based applications. Only a subset of nodes in a WSN is deployed as anchor nodes, whose locations are known a priori and which are used to localize the unknown sensor nodes. The accuracy of the estimated positions depends on the number of anchor nodes: increasing the number or ratio of anchors undoubtedly increases localization accuracy, but it severely constrains the flexibility of WSN deployment while raising cost and energy consumption. This paper aims to drastically reduce the number or ratio of anchors in a WSN deployment while ensuring a good trade-off with localization accuracy. Hence, this work presents an approach that decreases the number of anchor nodes without compromising localization accuracy. Assuming a random string WSN topology, results in terms of anchor rates and localization accuracy are presented and show a significant reduction in anchor deployment rates, from 32% to 2%.
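The abstract does not specify the estimator, but anchor-based localization is typically done by least-squares multilateration from range measurements. A minimal sketch of that general technique (NumPy, hypothetical function names, not the paper's method):

```python
# Minimal sketch of anchor-based least-squares multilateration.
# Illustrates the general technique only; the paper's estimator and
# topology handling are not specified in the abstract.
import numpy as np

def multilaterate(anchors, distances):
    """Estimate a 2-D position from >= 3 anchor positions and ranges."""
    anchors = np.asarray(anchors, dtype=float)
    d = np.asarray(distances, dtype=float)
    # Subtract the last equation to linearize ||x - a_i||^2 = d_i^2.
    ref, d_ref = anchors[-1], d[-1]
    A = 2.0 * (anchors[:-1] - ref)
    b = (d_ref**2 - d[:-1]**2
         + np.sum(anchors[:-1]**2, axis=1) - np.sum(ref**2))
    # Ordinary least squares for the unknown position.
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Example: three anchors, node truly at (2, 1).
anchors = [(0.0, 0.0), (5.0, 0.0), (0.0, 5.0)]
true = np.array([2.0, 1.0])
dists = [np.linalg.norm(true - np.array(a)) for a in anchors]
print(multilaterate(anchors, dists))  # ~ [2. 1.]
```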


2021 ◽  
Vol 2021 ◽  
pp. 1-15
Author(s):  
Hangchao Ding ◽  
Han Jiang ◽  
Qiuliang Xu

We propose a post-quantum universally composable (UC) cut-and-choose oblivious transfer (CCOT) protocol under the malicious adversary model. In secure two-party computation, we construct s copies of garbled circuits, half of which serve as check circuits and half as evaluation circuits. The sender can transfer the keys to the receiver via the CCOT protocol. Compared to the PVW-OT [6] framework, we invoke the WQ-OT [35] framework, which offers reusability of the common random string (crs) and better security. Relying on the LWE assumption and the properties of the rounding function, we construct a UC-CCOT protocol that can resist quantum attacks in secure two-party computation.
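The abstract does not detail how the s circuits are partitioned; in a standard cut-and-choose step the receiver opens a random half as check circuits and evaluates the rest. A minimal sketch of that selection step, assuming an even s (illustrative, not the paper's protocol):

```python
# Sketch of the cut-and-choose split over s garbled circuits:
# a random half is opened and checked, the rest are evaluated.
# Illustrative only; the paper's concrete UC-CCOT steps differ.
import secrets

def cut_and_choose(s):
    """Return (check_set, eval_set) index sets, each of size s // 2."""
    assert s % 2 == 0, "s is assumed even: half check, half evaluation"
    indices = list(range(s))
    # Fisher-Yates shuffle driven by a cryptographic RNG.
    for i in range(s - 1, 0, -1):
        j = secrets.randbelow(i + 1)
        indices[i], indices[j] = indices[j], indices[i]
    return set(indices[: s // 2]), set(indices[s // 2 :])

check, evaluate = cut_and_choose(8)
print(sorted(check), sorted(evaluate))
```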


2021 ◽  
Vol 30 (2) ◽  
Author(s):  
Tom Gur ◽  
Yang P. Liu ◽  
Ron D. Rothblum

Abstract. Interactive proofs of proximity allow a sublinear-time verifier to check that a given input is close to the language, using a small amount of communication with a powerful (but untrusted) prover. In this work, we consider two natural minimally interactive variants of such proof systems, in which the prover only sends a single message, referred to as the proof. The first variant, known as MA-proofs of proximity (MAPs), is fully non-interactive, meaning that the proof is a function of the input only. The second variant, known as AM-proofs of proximity (AMPs), allows the proof to additionally depend on the verifier's (entire) random string. The complexity of both MAPs and AMPs is the total number of bits that the verifier observes, namely the sum of the proof length and the query complexity. Our main result is an exponential separation between the power of MAPs and AMPs. Specifically, we exhibit an explicit and natural property $\Pi$ that admits an AMP with complexity $O(\log n)$, whereas any MAP for $\Pi$ has complexity $\tilde{\Omega}(n^{1/4})$, where n denotes the length of the input in bits. Our lower bound also yields an alternate proof, which is more general and arguably much simpler, for a recent result of Fischer et al. (ITCS, 2014). Also, Aaronson (Quantum Information & Computation, 2012) has shown an $\Omega(n^{1/6})$ lower bound for the same property $\Pi$.

Lastly, we also consider the notion of oblivious proofs of proximity, in which the verifier's queries are oblivious to the proof. In this setting, we show that AMPs can only be quadratically stronger than MAPs. As an application of this result, we show an exponential separation between the power of public and private coins for oblivious interactive proofs of proximity.
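In the abstract's own terms, the complexity measure for both variants can be written as a one-line definition; the symbols $|\pi|$ and $q$ below are our notation, chosen for illustration:

```latex
% Complexity of a MAP or AMP: all bits the verifier observes,
% i.e. the proof length plus the query complexity.
\[
  \mathrm{complexity} \;=\; |\pi| + q,
\]
% where $|\pi|$ is the length of the prover's single message (the
% proof) and $q$ is the number of input bits the verifier queries.
```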


2021 ◽  
Vol 10 (4) ◽  
pp. 2144-2151
Author(s):  
Rogel L. Quilala ◽  
Theda Flare G. Quilala

Abstract—Recently, a modified SHA-1 (MSHA-1) was proposed and claimed to offer better security than SHA-1; however, its hashing time was shown to be slower. In this research, an improved version of MSHA-1 was analyzed using the avalanche effect and hashing time as performance measures, applying a 160-bit output and a mixing method to improve the diffusion rate. The diffusion results showed that the improved MSHA-1 algorithm achieved an avalanche effect of 51.88%, above the 50% threshold required to be considered secure. MSHA-1 attained a 50.53% avalanche effect and SHA-1 only 47.03%, so the improved MSHA-1 offers better security performance, with an improvement of 9.00% over the original SHA-1 and 3.00% over MSHA-1. The improvement was also tested using 500 random strings over ten trials. The improved MSHA-1 also has better hashing time performance, with a 31.03% improvement. A hash test program was used to test the effectiveness of the algorithm by producing 1000 hashes from random input strings, and it reported zero (0) duplicate hashes.
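MSHA-1 is not publicly available here, so the sketch below measures the avalanche effect of standard SHA-1 from Python's hashlib as a stand-in: flip one input bit, hash both versions, and report the percentage of output bits that change, which is the usual way the avalanche effect is measured.

```python
# Sketch: measuring the avalanche effect of a hash function.
# Uses SHA-1 from hashlib as a stand-in for MSHA-1, which is not
# publicly available; the measurement procedure is the same.
import hashlib
import os
import random

def bit_diff_percent(a: bytes, b: bytes) -> float:
    """Percentage of differing bits between two equal-length digests."""
    diff = sum(bin(x ^ y).count("1") for x, y in zip(a, b))
    return 100.0 * diff / (len(a) * 8)

def avalanche(trials: int = 500, msg_len: int = 32) -> float:
    """Average avalanche effect over random messages, one bit flipped."""
    total = 0.0
    for _ in range(trials):
        msg = bytearray(os.urandom(msg_len))
        h1 = hashlib.sha1(msg).digest()
        # Flip a single random bit of the input.
        pos = random.randrange(msg_len * 8)
        msg[pos // 8] ^= 1 << (pos % 8)
        h2 = hashlib.sha1(msg).digest()
        total += bit_diff_percent(h1, h2)
    return total / trials

print(f"avalanche effect: {avalanche():.2f}%")  # ~50% for a strong hash
```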


2020 ◽  
Vol 8 (6) ◽  
pp. 01-15
Author(s):  
István Vajda

It is known that most interesting multiparty cryptographic tasks cannot be implemented securely without a trusted setup in a general concurrent network environment like the Internet; an appropriate trusted third party is needed to solve this problem. An important trusted setup is a public random string shared by the parties. We present a practical n-bit coin-toss protocol for a provably secure implementation of such a setup. Our idea is to invite external peers into the execution of the protocol to establish an honest majority among the parties. We guarantee security in the presence of an unconditional, static, malicious adversary. Additionally, we present an original, practical idea of using live public radio broadcast channels for the generation of a common physical random source.
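The protocol itself is not reproduced in the abstract; the textbook way to toss n public coins among mutually distrusting parties is commit-then-reveal XOR, sketched below with hash-based commitments. All details here are our assumptions, and the paper's honest-majority protocol with invited external peers is more involved.

```python
# Sketch of an n-bit multiparty coin toss via commit-then-reveal XOR.
# A classic textbook construction shown for illustration only.
import hashlib
import os

N_BYTES = 16  # a 128-bit shared public random string

def commit(share: bytes) -> tuple[bytes, bytes]:
    """Hash-based commitment; returns (commitment, opening nonce)."""
    nonce = os.urandom(16)
    return hashlib.sha256(nonce + share).digest(), nonce

def verify(commitment: bytes, nonce: bytes, share: bytes) -> bool:
    return hashlib.sha256(nonce + share).digest() == commitment

# Phase 1: each party picks a random share and broadcasts a commitment.
shares = [os.urandom(N_BYTES) for _ in range(5)]
commitments = [commit(s) for s in shares]

# Phase 2: parties open; everyone verifies every opening.
assert all(verify(c, nonce, s)
           for (c, nonce), s in zip(commitments, shares))

# The shared string is the XOR of all opened shares: as long as one
# party is honest and commitments are binding, the output is uniform.
result = bytes(0 for _ in range(N_BYTES))
for s in shares:
    result = bytes(a ^ b for a, b in zip(result, s))
print(result.hex())
```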


2020 ◽  
Vol 29 (6) ◽  
pp. 1263-1285
Author(s):  
Robert Lasch ◽  
Ismail Oukid ◽  
Roman Dementiev ◽  
Norman May ◽  
Suleyman S. Demirsoy ◽  
...  

Abstract. String dictionaries constitute a large portion of the memory footprint of database applications. While strong string dictionary compression algorithms exist, these come with impractical access and compression times. Therefore, lightweight algorithms such as plain front coding (PFC) are favored in practice. This paper endeavors to make strong string dictionary compression practical. We focus on Re-Pair Front Coding (RPFC), a grammar-based compression algorithm, since it consistently offers better compression ratios than other algorithms in the literature. To accelerate compression times, we propose block-based RPFC (BRPFC), which independently compresses small blocks of the dictionary. For further accelerated compression, especially on large string dictionaries, we also propose an alternative version of BRPFC that uses sampling to speed up compression. Moreover, to accelerate access times, we devise a vectorized access method using Intel® Advanced Vector Extensions 512 (Intel® AVX-512). Our experimental evaluation shows that sampled BRPFC offers compression times up to 190× faster than RPFC, and random string lookups 2.3× faster than RPFC on average. These results move our modified RPFC into a practical range for use in database systems, because the overhead of Re-Pair-based compression on access times can be reduced by 2×.
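The front-coding half of RPFC is simple enough to sketch. Below is a minimal plain-front-coding (PFC) encoder for a sorted dictionary, storing each entry as a (shared-prefix length, suffix) pair; this is illustrative only, and the paper's RPFC additionally compresses the suffixes with Re-Pair.

```python
# Minimal sketch of plain front coding (PFC) for a sorted string
# dictionary: each entry stores the length of the prefix shared with
# its predecessor plus the remaining suffix.

def pfc_encode(sorted_strings):
    """Encode a lexicographically sorted list as (lcp, suffix) pairs."""
    encoded, prev = [], ""
    for s in sorted_strings:
        lcp = 0
        while lcp < min(len(prev), len(s)) and prev[lcp] == s[lcp]:
            lcp += 1
        encoded.append((lcp, s[lcp:]))
        prev = s
    return encoded

def pfc_decode(encoded):
    """Reconstruct the original strings from (lcp, suffix) pairs."""
    out, prev = [], ""
    for lcp, suffix in encoded:
        prev = prev[:lcp] + suffix
        out.append(prev)
    return out

words = sorted(["data", "database", "datum", "date"])
enc = pfc_encode(words)
print(enc)  # [(0, 'data'), (4, 'base'), (3, 'e'), (3, 'um')]
assert pfc_decode(enc) == words
```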


2019 ◽  
Vol 8 (3) ◽  
pp. 6230-6235

In this paper, the author has performed an experimental analysis of the Jumbling-Salting (JS) algorithm for larger text sizes. In earlier research work, the JS algorithm was a symmetric password-encryption algorithm. The JS algorithm consists of two prominent cryptographic processes, namely jumbling and salting. Jumbling consists of three major randomized processes, viz. addition, selection, and reverse; the jumbling process inserts random characters into the password string. The salting process appends a random string based on a timestamp value. The output of the jumbling and salting processes is given to a predefined AES block that performs 128-bit key encryption to keep the ciphertext size uniform. In this research, the capability of the JS algorithm is enhanced. The paper therefore shows the performance of the JS algorithm regarding ciphertext length with respect to the AES and DES algorithms. This extended research shows that the JS algorithm is suitable not only for smaller texts like passwords, PINs, and passcodes, but also for larger texts.
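The abstract names the stages but not their internals; the following sketch only mirrors the described pipeline, with every internal detail (insertion counts, character sets, seeding) being our assumption. The final AES step is indicated by a comment rather than implemented.

```python
# Sketch of the described JS pipeline: jumbling inserts random
# characters, salting appends a timestamp-derived random string.
# All internals here are assumptions; the paper does not specify them.
import random
import string
import time

def jumble(text: str, n_insert: int = 4) -> str:
    """Insert n_insert random characters at random positions."""
    chars = list(text)
    for _ in range(n_insert):
        pos = random.randrange(len(chars) + 1)
        chars.insert(pos, random.choice(string.ascii_letters + string.digits))
    return "".join(chars)

def salt(text: str, length: int = 8) -> str:
    """Append a random string seeded from the current timestamp."""
    rng = random.Random(time.time_ns())
    tail = "".join(rng.choice(string.ascii_letters + string.digits)
                   for _ in range(length))
    return text + tail

staged = salt(jumble("hunter2"))
print(staged)
# The staged string would then be fed to a predefined AES block with a
# 128-bit key (omitted here) to keep the ciphertext size uniform.
```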


2019 ◽  
Vol 11 (2) ◽  
pp. 171
Author(s):  
V. I. Ilyevsky

In this paper, for the first time, the properties of the probability of detecting a word in a random string have been investigated. Previously known methods led only to numerical evaluation of the probabilities under study. The present work derives a simple algorithm for calculating the probability that a word is detected at least once in a random string. A recursive formula that accounts for possible overlaps has been deduced for the probability under study. This formula is used to prove a proposition comparing the detection probabilities in a random string of words with different periods. The result makes it possible to determine the structure of the words that have the maximum and minimum detection probabilities. In particular, words having an equal number of alphabetic characters have been studied. It has been established that, among the words in question, the detection probability is minimal for ideally symmetrical words with an irreducible period, and maximal for words devoid of self-overlap. These results will be useful for molecular genetics, as well as for students studying discrete mathematics, probability theory, and molecular biology.
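The paper's recursive formula is not given in the abstract, but the quantity itself, the probability that a word occurs at least once in a uniformly random string of length n, can be computed exactly by a standard dynamic program over the word's KMP automaton, which handles overlaps automatically. A sketch (our construction, not the paper's formula):

```python
# Probability that a given word appears at least once in a uniformly
# random string of length n, via a dynamic program on the KMP
# automaton (overlaps are handled automatically).
from functools import lru_cache

def detection_probability(word: str, n: int, alphabet: str) -> float:
    m = len(word)
    # KMP failure function.
    fail = [0] * m
    for i in range(1, m):
        k = fail[i - 1]
        while k and word[i] != word[k]:
            k = fail[k - 1]
        fail[i] = k + 1 if word[i] == word[k] else 0

    @lru_cache(maxsize=None)
    def step(state: int, c: str) -> int:
        """Next automaton state after reading character c."""
        while state and c != word[state]:
            state = fail[state - 1]
        return state + 1 if c == word[state] else 0

    # dist[s] = probability of being in state s with no match so far.
    dist = [0.0] * m
    dist[0] = 1.0
    p_char = 1.0 / len(alphabet)
    matched = 0.0
    for _ in range(n):
        nxt = [0.0] * m
        for s, p in enumerate(dist):
            if p == 0.0:
                continue
            for c in alphabet:
                t = step(s, c)
                if t == m:
                    matched += p * p_char  # word completed here
                else:
                    nxt[t] += p * p_char
        dist = nxt
    return matched

print(detection_probability("aa", 3, "ab"))  # 0.375
print(detection_probability("ab", 3, "ab"))  # 0.5
```

Note how the self-overlapping word "aa" is less likely to occur in a random binary string of length 3 than the non-overlapping "ab", in line with the abstract's claim about words with and without the overlap feature.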


Author(s):  
Cristian S. Calude

The standard definition of randomness as considered in probability theory and used, for example, in quantum mechanics, allows one to speak of a process (such as tossing a coin, or measuring the diagonal polarization of a horizontally polarized photon) as being random. It does not allow one to call a particular outcome (or string of outcomes, or sequence of outcomes) 'random', except in an intuitive, heuristic sense. Information-theoretic complexity makes this possible. An algorithmically random string is one which cannot be produced by a description significantly shorter than itself; an algorithmically random sequence is one whose initial finite segments are almost random strings. Gödel's incompleteness theorem states that every axiomatizable theory which is sufficiently rich and sound is incomplete. Chaitin's information-theoretic version of Gödel's theorem goes a step further, revealing the reason for incompleteness: a set of axioms of complexity N cannot yield a theorem asserting that a specific object is of complexity substantially greater than N. This suggests that incompleteness is not only natural but pervasive; it can no longer be ignored by everyday mathematics. It also provides theoretical support for a quasi-empirical and pragmatic attitude to the foundations of mathematics. Information-theoretic complexity is also relevant to physics and biology. For physics it is convenient to reformulate it as the size of the shortest message specifying a microstate, uniquely up to the assumed resolution. In this way we get a rigorous, entropy-like measure of the disorder of an individual, microscopic, definite state of a physical system. The regulatory genes of a developing embryo can ultimately be conceived of as a program for constructing an organism. The information contained in this biochemical computer program can be measured by information-theoretic complexity.
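Kolmogorov complexity is uncomputable, but a lossless compressor gives a computable upper bound on description length, which makes the definition tangible: a string looks algorithmically random to the extent that it resists compression. A small illustration using zlib (our example, not from the text):

```python
# Illustration of incompressibility as a computable stand-in for
# algorithmic randomness: the compressed size upper-bounds the length
# of one kind of description of the string.
import os
import zlib

def compressed_ratio(data: bytes) -> float:
    """Compressed size over original size; near 1.0 means incompressible."""
    return len(zlib.compress(data, 9)) / len(data)

structured = b"ab" * 5000       # short description: repeat "ab" 5000 times
random_ish = os.urandom(10000)  # no description much shorter than itself
print(f"structured: {compressed_ratio(structured):.3f}")  # tiny ratio
print(f"random:     {compressed_ratio(random_ish):.3f}")  # ~1.0 or above
```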

