A NOTE ON “ON CIPHERTEXT UNDETECTABILITY”

2013 ◽  
Vol 57 (1) ◽  
pp. 119-121
Author(s):  
Angsuman Das ◽  
Avishek Adhikari

ABSTRACT The notion of ciphertext undetectability was introduced in [Gaži, P. - Stanek, M.: On ciphertext undetectability, Tatra Mt. Math. Publ. 41 (2008), 133-151] as a steganographic property of an encryption scheme. While investigating the relationship between ciphertext undetectability and indistinguishability of encryptions, the authors showed that ciphertext undetectability does not imply indistinguishability. Though the proposition is correct, the proof is not. In this note, we provide a correct proof of the above-mentioned result by slightly modifying the construction used in the original paper cited above.
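
To make the indistinguishability notion referred to above concrete, here is a minimal sketch of the standard indistinguishability experiment for a symmetric scheme. The toy XOR-with-fixed-key cipher and all names below are hypothetical illustrations, not the construction from either paper.

```python
import secrets

KEY = secrets.token_bytes(16)  # toy fixed-length key (illustrative only)

def toy_encrypt(message: bytes) -> bytes:
    """Deliberately weak cipher: XOR with a fixed key (used only to show the game structure)."""
    return bytes(m ^ k for m, k in zip(message, KEY))

def ind_experiment(adversary) -> bool:
    """One run of the indistinguishability game: the adversary submits two equal-length
    messages, receives the encryption of a randomly chosen one, and wins if it guesses
    which message was encrypted."""
    m0, m1 = adversary.choose_messages()
    assert len(m0) == len(m1)
    b = secrets.randbelow(2)
    challenge = toy_encrypt(m1 if b else m0)
    return adversary.guess(challenge) == b

class RandomGuesser:
    def choose_messages(self):
        return b"attack at dawn!!", b"retreat at dusk!"
    def guess(self, challenge):
        return secrets.randbelow(2)

if __name__ == "__main__":
    wins = sum(ind_experiment(RandomGuesser()) for _ in range(10_000))
    print(f"win rate ~ {wins / 10_000:.3f}  (about 0.5 means no advantage)")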

2018 ◽  
Vol 2018 ◽  
pp. 1-12 ◽  
Author(s):  
Tan Ping Zhou ◽  
Ning Bo Li ◽  
Xiao Yuan Yang ◽  
Li Qun Lv ◽  
Yi Tao Ding ◽  
...  

The decline in genome sequencing costs has broadened the population that can afford sequencing and has also raised concerns about genetic privacy. Kim et al. present a practical solution for securely searching gene data on a semitrusted business cloud. However, there are three errors in their scheme, and we make three improvements to correct them. (1) They truncate the gene variation encodings to 21 bits, which causes the LPCE error: more than 5% of the entries in the database cannot be queried integrally. We decompose these large encodings by 44 bits and handle the components separately to avoid the LPCE error. (2) We abandon the hash function used in Kim's scheme, which may cause the HCE error with probability 2^-22, and instead decompose the position encoding of a gene into three parts with base 2^11 to avoid the HCE error. (3) We analyze the relationship between the parameters and the CCE error and specify the condition that the parameters must satisfy to avoid the CCE error. Experiments show that our scheme can search all entries and that the probability of a search error is reduced to below 2^-37.4.
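
As a rough illustration of the decomposition idea described above, the sketch below splits a large position encoding into three base-2^11 digits and reassembles it. The three 11-bit parts follow the abstract, but the function names and everything else are assumed simplifications, not the authors' actual scheme.

```python
BASE = 1 << 11  # 2^11, the basis used to split a position encoding into three parts

def decompose_position(pos: int) -> tuple[int, int, int]:
    """Split a position encoding into three digits p0 + p1*2^11 + p2*2^22."""
    assert 0 <= pos < BASE ** 3, "position must fit in 33 bits"
    p0 = pos % BASE
    p1 = (pos // BASE) % BASE
    p2 = pos // (BASE * BASE)
    return p0, p1, p2

def recompose_position(p0: int, p1: int, p2: int) -> int:
    """Inverse of decompose_position."""
    return p0 + p1 * BASE + p2 * BASE * BASE

# Round-trip check on an example position
example = 123_456_789
assert recompose_position(*decompose_position(example)) == example
```

Working on small components rather than one large encoding is what keeps each encrypted comparison within range; the exact way the components are matched on the cloud side is not reproduced here.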


2020 ◽  
Vol 36 (5) ◽  
pp. 961-981 ◽  
Author(s):  
Hiroshi Yamada

In recent decades, the research community in macroeconometric time series analysis has shown growing interest in the smoothing method known as the Hodrick–Prescott (HP) filter. This article examines the properties of an alternative smoothing method that resembles the HP filter but is much less well known. We show that, although it is obtainable through a slight modification of the HP filter, it is actually closer to the exponential smoothing filter than to the HP filter. We also show that it resembles the low-frequency projection of Müller and Watson (2018, Econometrica 86, 775–804). We point out that these results stem from the fact that all three smoothing methods can be regarded as a type of graph spectral filter whose graph Fourier transform is the discrete cosine transform. We then theoretically establish the relationship between these similar smoothing methods and provide a way of specifying the smoothing parameter necessary for their application. An empirical examination illustrates the results.
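
For reference, here is a minimal sketch of the standard HP filter as penalized least squares, tau = (I + lambda * D'D)^{-1} y with D the second-difference matrix. This is textbook material, not the lesser-known alternative filter or the graph spectral formulation studied in the article.

```python
import numpy as np

def hp_filter(y: np.ndarray, lam: float = 1600.0) -> np.ndarray:
    """Return the HP trend: argmin_tau ||y - tau||^2 + lam * ||D2 tau||^2,
    where D2 is the (T-2) x T second-difference matrix."""
    T = len(y)
    D2 = np.zeros((T - 2, T))
    for t in range(T - 2):
        D2[t, t:t + 3] = [1.0, -2.0, 1.0]
    return np.linalg.solve(np.eye(T) + lam * (D2.T @ D2), y)

# Example: smooth a noisy linear trend (lam = 1600 is the usual quarterly-data choice)
rng = np.random.default_rng(0)
y = np.linspace(0.0, 10.0, 200) + rng.normal(scale=1.0, size=200)
trend = hp_filter(y, lam=1600.0)
```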


2016 ◽  
Vol 2016 ◽  
pp. 1-14 ◽  
Author(s):  
Shu-Ying Wang ◽  
Jian-Feng Zhao ◽  
Xian-Feng Li ◽  
Li-Tao Zhang

In view of the security of digital image transmission, a novel image encryption scheme based on laser chaos synchronization and the Arnold cat map is proposed. A parameter generated from the pixel values of the plain image is used to perturb the secret key. The sequences of the drive system and the response system are pretreated in the same way and used to build a block encryption scheme for the plain image. Finally, pixel positions are scrambled by a generalized Arnold transformation. In the decryption process, the accuracy of chaotic synchronization is fully taken into account and the relationship between the synchronization quality and decryption is analyzed; the scheme features high precision, high efficiency, simplicity, flexibility, and good controllability. Experimental results show that the image encryption algorithm has high security and good antijamming performance.
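
As background for the scrambling step, here is a minimal sketch of the classical Arnold cat map applied to the pixel coordinates of an N x N image. The plain (non-generalized) form and the parameter choices are assumptions for illustration, not the exact transformation used in the proposed scheme.

```python
import numpy as np

def arnold_cat_map(img: np.ndarray, iterations: int = 1) -> np.ndarray:
    """Scramble an N x N image with the classical Arnold cat map:
    (x, y) -> ((x + y) mod N, (x + 2y) mod N)."""
    n = img.shape[0]
    assert img.shape[0] == img.shape[1], "image must be square"
    out = img.copy()
    for _ in range(iterations):
        scrambled = np.empty_like(out)
        for x in range(n):
            for y in range(n):
                scrambled[(x + y) % n, (x + 2 * y) % n] = out[x, y]
        out = scrambled
    return out
```

Because the map matrix [[1, 1], [1, 2]] has determinant 1 modulo N, the transformation is a permutation of the pixel grid, so the scrambling is invertible (by applying the inverse matrix modulo N, or by iterating forward to the map's period).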


2020 ◽  
Author(s):  
Weipeng Cao

This paper comments on our recently published conference paper entitled "An Initial Study on the Relationship Between Meta Features of Dataset and the Initialization of NNRW". We point out that the above-mentioned article contains a typographical error in the proof that using the Gamma distribution to initialize NNRW is not a good choice, and we give the corresponding correct proof.


Filomat ◽  
2017 ◽  
Vol 31 (8) ◽  
pp. 2219-2229 ◽  
Author(s):  
Min-Jie Luo ◽  
R.K. Raina

In view of the relationship with the Krätzel function, we derive a new series representation for the λ-generalized Hurwitz-Lerch Zeta function introduced by H.M. Srivastava [Appl. Math. Inf. Sci. 8 (2014) 1485-1500] and determine the monotonicity of its coefficients. An integral representation of the Mathieu (a;λ)-series is rederived by applying Abel's summation formula (which provides a slight modification of the result given by Pogány [Integral Transforms Spec. Funct. 16 (8) (2005) 685-689]), and this modified form of the result is then used to obtain a new integral representation for Srivastava's λ-generalized Hurwitz-Lerch Zeta function. Finally, by making use of the various results presented in this paper, we establish two sets of two-sided inequalities for Srivastava's λ-generalized Hurwitz-Lerch Zeta function.
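
For orientation, the classical Hurwitz-Lerch Zeta function that underlies the λ-generalization is recalled below; this is the standard definition only, and the exact form of Srivastava's λ-generalized version (which carries additional parameters) is not reproduced here.

```latex
% Classical Hurwitz-Lerch Zeta function (standard definition; Srivastava's
% lambda-generalization extends it with additional parameters)
\Phi(z, s, a) = \sum_{n=0}^{\infty} \frac{z^{n}}{(n+a)^{s}},
\qquad a \notin \{0, -1, -2, \dots\}, \quad |z| < 1 \ \ (\text{or } |z| = 1,\ \Re(s) > 1).
```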


2015 ◽  
Vol 9 (1) ◽  
pp. 23-43 ◽  
Author(s):  
ALAN WEIR

ABSTRACT Increases in the use of automated theorem-provers have renewed focus on the relationship between the informal proofs normally found in mathematical research and fully formalised derivations. Whereas some claim that any correct proof will be underwritten by a fully formal proof, sceptics demur. In this paper I look at the relevance of these issues for formalism, construed as an anti-platonistic metaphysical doctrine. I argue that there are strong reasons to doubt that all proofs are fully formalisable, if formal proofs are required to be finitary, but that, on a proper view of the way in which formal proofs idealise actual practice, this restriction is unjustified and formalism is not threatened.


2015 ◽  
Vol 13 (07) ◽  
pp. 1550057 ◽  
Author(s):  
Marcos Gaudiano ◽  
Omar Osenda

The relationship between entanglement and anisotropy is studied in small spin chains with periodic boundary conditions. The Hamiltonian of the spin chains is given by a slight modification of the dipolar Hamiltonian. The effect of the anisotropy is analyzed using the concurrence shared by spin pairs, but the study is not restricted to nearest-neighbor (NN) entanglement. It is shown that, under rather general conditions, the inclusion of anisotropic terms diminishes the entanglement shared between the spins of the chain irrespective of its range or its magnetic character.
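
For context on the entanglement measure used above, here is a minimal sketch of Wootters' concurrence for a two-qubit density matrix. It is the standard formula only, not code from the paper, and the numpy-based implementation details are assumptions.

```python
import numpy as np

SY = np.array([[0, -1j], [1j, 0]])
SYY = np.kron(SY, SY)

def concurrence(rho: np.ndarray) -> float:
    """Wootters' concurrence of a two-qubit density matrix rho (4x4, Hermitian, trace one):
    C = max(0, l1 - l2 - l3 - l4), where the l_i are the square roots of the eigenvalues of
    rho * (sy x sy) rho^* (sy x sy), sorted in decreasing order."""
    rho_tilde = SYY @ rho.conj() @ SYY
    eigvals = np.linalg.eigvals(rho @ rho_tilde)
    lams = np.sort(np.sqrt(np.abs(eigvals)))[::-1]
    return max(0.0, lams[0] - lams[1] - lams[2] - lams[3])

# Example: a Bell state has concurrence 1, the maximally mixed state has 0
bell = np.zeros((4, 4), dtype=complex)
bell[0, 0] = bell[0, 3] = bell[3, 0] = bell[3, 3] = 0.5
print(concurrence(bell), concurrence(np.eye(4) / 4))
```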


Author(s):  
Wolfgang Breuer ◽  
Benjamin Quinten ◽  
Astrid J. Salzmann

This chapter enhances the growing research field of Cultural Finance by analyzing the relationship between cultural value types—in particular, Autonomy and Embeddedness—and the corporate debt choice of either bank or bond financing. The authors derive their hypotheses from a slight modification and re-interpretation of the Chemmanur and Fulghieri (1994) approach of “relationship lending.” Referring to the importance of specific human capital investments and individuals' future orientation, they show that firms in autonomy cultures tend toward bank finance, whereas firms in embeddedness cultures show a preference for financing by issuing bonds. In a cross-country analysis with 71 countries, the authors find empirical evidence for their established hypotheses.


Author(s):  
A. V. Babash

In 1917, Gilbert Vernam patented an encryption scheme which was at first called the one-time pad and later the Vernam cipher. At the time Vernam proposed this scheme there was no evidence that it was perfectly secret; indeed, the very notion of perfect secrecy of a cipher did not yet exist. About 25 years later, Claude Shannon introduced the definition of perfect secrecy (the perfect cipher) and demonstrated that the random gamming cipher, i.e. encryption with a random keystream (gamma), reaches this level of security. Cryptographers believe that there are no effective attacks on random gamming and, in particular, no effective attacks on the Vernam cipher. Objective: to show that this proposition is false by building effective attacks. Methods: analysis of the relationship between the cipher key and the received ciphertext. Results: an attack recovering the plaintext of a random gamming cipher from a given ciphertext was developed, and a further attack on the contents of the plaintext based on the ciphertext was proposed. For all attacks, complexity parameters are calculated. These results are new: previously no attack on the random gamming cipher was available, and they disprove the opinion that there are no attacks on this cipher. Practical relevance: first, it has become possible to carry out attacks on the random gamming cipher; second, when using this cipher the length of the message must be strictly limited. Discussion: the idea that there is an effective attack on a random gamming cipher arose in 2002, owing to the possibility of introducing a similar concept in which, in the definition of the perfect cipher, the roles of the plaintext and the key are interchanged. The first idea behind the attacks is that when the key is long its elements are repeated. The second idea concerns attacks on two plaintexts encrypted with one key. The main idea is that the mathematical model of the Shannon cipher needs to be refined: when interpreting the concept of the perfect cipher, one should speak of the perfection of the cipher model. The publication place: in the Yandex search engine, the query "perfect ciphers" returned 22 million links, the query "perfectly secret schemes" returned 43 million links, and the query "random gamming cipher" returned 13 million results.
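
To fix terminology, here is a minimal sketch of encryption by random gamming as described above, i.e. XOR of the plaintext with a random keystream (gamma) of the same length. This is the textbook one-time pad / Vernam cipher, not the attacks proposed in the paper; all function names are illustrative.

```python
import secrets

def encrypt_with_random_gamma(plaintext: bytes) -> tuple[bytes, bytes]:
    """One-time pad / Vernam cipher: XOR the plaintext with a fresh random gamma
    (keystream) of the same length. Returns (ciphertext, gamma)."""
    gamma = secrets.token_bytes(len(plaintext))
    ciphertext = bytes(p ^ g for p, g in zip(plaintext, gamma))
    return ciphertext, gamma

def decrypt_with_gamma(ciphertext: bytes, gamma: bytes) -> bytes:
    """Decryption is the same XOR operation with the stored gamma."""
    return bytes(c ^ g for c, g in zip(ciphertext, gamma))

ct, key = encrypt_with_random_gamma(b"strictly limit the message length")
assert decrypt_with_gamma(ct, key) == b"strictly limit the message length"
```

Shannon's perfect-secrecy argument applies when the gamma is uniformly random, used only once, and as long as the message, which is why the abstract's practical recommendation is to strictly limit the message length.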

