Numerical and Non-Asymptotic Analysis of Elias’s and Peres’s Extractors with Finite Input Sequences

Entropy ◽  
2018 ◽  
Vol 20 (10) ◽  
pp. 729 ◽  
Author(s):  
Amonrat Prasitsupparote ◽  
Norio Konno ◽  
Junji Shikata

Many cryptographic systems require random numbers, and the use of weak random numbers leads to insecure systems. In the modern world, there are several techniques for generating random numbers, of which the most fundamental and important methods are the deterministic extractors proposed by von Neumann, Elias, and Peres. Elias’s extractor achieves the optimal rate (i.e., the information-theoretic upper bound) h(p) as the block size tends to infinity, where h(·) is the binary entropy function and p is the probability that each input bit equals 1. Peres’s extractor achieves the optimal rate h(p) as the input length and the number of iterations tend to infinity. Previous research on both extractors has made no reference to practical aspects, including running time and memory size with finite input sequences. In this paper, based on some heuristics, we derive a lower bound on the maximum redundancy of Peres’s extractor, and we show that Elias’s extractor is better than Peres’s extractor in terms of the maximum redundancy (or the rates) if time and space complexity are ignored. In addition, we perform a numerical and non-asymptotic analysis of both extractors on finite input sequences of arbitrary bias under the same environments. To do so, we implemented both extractors on a general-purpose PC in simple environments. Our empirical results show that Peres’s extractor is much better than Elias’s extractor for given finite input sequences at very similar running times. As a consequence, Peres’s extractor would be more suitable for generating uniformly random sequences in practice, in applications such as cryptographic systems.
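For a concrete picture of the procedures compared in this abstract, here is a minimal Python sketch of von Neumann's extractor and Peres's iterated extractor; the recursion-depth parameter and the bias p = 0.7 are our own illustrative choices, not the paper's.

```python
import random

def von_neumann(bits):
    """Von Neumann's extractor: map pairs 01 -> 0, 10 -> 1, discard 00/11."""
    return [a for a, b in zip(bits[::2], bits[1::2]) if a != b]

def peres(bits, depth):
    """Peres's extractor: iterate von Neumann's procedure `depth` times.

    Besides the von Neumann output bits, two derived sequences are reused:
    u = XOR of each pair, v = first bit of each discarded (equal) pair.
    """
    if depth == 0 or len(bits) < 2:
        return []
    pairs = list(zip(bits[::2], bits[1::2]))
    out = [a for a, b in pairs if a != b]   # von Neumann output
    u = [a ^ b for a, b in pairs]           # XOR sequence
    v = [a for a, b in pairs if a == b]     # bits from discarded pairs
    return out + peres(u, depth - 1) + peres(v, depth - 1)

# Example: a biased source with p = Pr[bit = 1] = 0.7
random.seed(0)
source = [int(random.random() < 0.7) for _ in range(2**14)]
for k in (1, 2, 4, 8):
    # output rate grows with the iteration depth, approaching h(0.7) ~ 0.881
    print(k, len(peres(source, k)) / len(source))
```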



2020 ◽  
Vol 14 ◽  
Author(s):  
Vasu Mehra ◽  
Dhiraj Pandey ◽  
Aayush Rastogi ◽  
Aditya Singh ◽  
Harsh Preet Singh

Background: People suffering from hearing and speech disabilities have few ways of communicating with other people; one of these is sign language. Objective: Developing a sign language recognition system is essential for deaf and mute people. The recognition system acts as a translator between disabled and able-bodied people, removing hindrances to the exchange of ideas. Most existing systems are poorly designed, with limited support for users' day-to-day needs. Methods: The proposed system, embedded with gesture recognition capability, extracts signs from a video sequence and displays them on screen. Speech-to-text and text-to-speech components are also included to further assist affected users. To get the best out of the human-computer relationship, the proposed solution combines several cutting-edge technologies with machine-learning-based sign recognition models trained using the TensorFlow and Keras libraries. Result: The proposed architecture works better than gesture recognition techniques such as background elimination and conversion to HSV because a sharply defined image is provided to the model for classification. Testing indicates a reliable, high-accuracy recognition system that covers most of the features a deaf or mute person needs in day-to-day tasks. Conclusion: Current technological advances call for reliable solutions that can be deployed to help deaf and mute people adjust to normal life. Instead of focusing on a standalone technology, the proposed work combines several. The proposed sign recognition system is based on feature extraction and classification; the trained model identifies the different gestures.
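For concreteness, a minimal sketch of the kind of Keras classifier the abstract describes; the input resolution (64x64 grayscale frames), the class count (26), and the layer sizes are illustrative assumptions, not the authors' architecture.

```python
# Hypothetical minimal gesture classifier in the spirit of the abstract's
# TensorFlow/Keras models; all sizes below are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_sign_model(num_classes=26, input_shape=(64, 64, 1)):
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),                         # regularization
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_sign_model()
model.summary()
```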


2005 ◽  
Vol 15 (12) ◽  
pp. 3999-4006 ◽  
Author(s):  
FENG-JUAN CHEN ◽  
FANG-YUE CHEN ◽  
GUO-LONG HE

Several image processing tasks are restudied via CNN genes with five variables, including edge detection, corner detection, center-point extraction, and horizontal-vertical line detection. Although these tasks were originally implemented with nine variables, computer simulations show that the five-variable genes perform identically to, or better than, the nine-variable ones.
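For context, here is a sketch of the textbook uncoupled cellular neural network (CNN) edge-detection template; the paper's five-variable genes are its own construction and are not reproduced here, so only the standard template (feedforward matrix B plus bias z) is shown.

```python
# Steady-state output of the classic uncoupled CNN edge-detection template.
# Binary images are coded as +1 (black) / -1 (white), per CNN convention.
import numpy as np
from scipy.signal import convolve2d

B = np.array([[-1, -1, -1],
              [-1,  8, -1],
              [-1, -1, -1]])          # feedforward (control) template
z = -1                                # bias

def cnn_edge(u):
    # For an uncoupled template the steady-state output is sign(B * u + z);
    # outside the image is treated as white (fillvalue=-1).
    return np.sign(convolve2d(u, B, mode="same", fillvalue=-1) + z)

img = -np.ones((8, 8))
img[2:6, 2:6] = 1                     # a black square on a white background
print(cnn_edge(img))                  # +1 only on the square's border
```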


2013 ◽  
Vol 22 (12) ◽  
pp. 1342030 ◽  
Author(s):  
KYRIAKOS PAPADODIMAS ◽  
SUVRAT RAJU

We point out that nonperturbative effects in quantum gravity are sufficient to reconcile the process of black hole evaporation with quantum mechanics. In ordinary processes, these corrections are unimportant because they are suppressed by e^{-S}. However, they gain relevance in information-theoretic considerations because their small size is offset by the corresponding largeness of the Hilbert space. In particular, we show how such corrections can cause the von Neumann entropy of the emitted Hawking quanta to decrease after the Page time, without modifying the thermal nature of each emitted quantum. We also show that exponentially suppressed commutators between operators inside and outside the black hole are sufficient to resolve paradoxes associated with the strong subadditivity of entropy without any dramatic modifications of the geometry near the horizon.
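The Page-curve behaviour referred to above can be illustrated numerically: for a random pure state, the von Neumann entropy of a subsystem rises and then falls once the subsystem exceeds half the total system. The sketch below is our illustration (Page's classic calculation, not the paper's computation).

```python
# Entanglement entropy of the "radiation" factor of a random pure state,
# plotted against the number of emitted qubits: it peaks near n/2 (Page time).
import numpy as np

def radiation_entropy(d_rad, d_hole, rng):
    """Von Neumann entropy of one factor of a random bipartite pure state."""
    psi = rng.normal(size=(d_rad, d_hole)) + 1j * rng.normal(size=(d_rad, d_hole))
    psi /= np.linalg.norm(psi)
    s = np.linalg.svd(psi, compute_uv=False)   # Schmidt coefficients
    p = s**2
    p = p[p > 1e-12]
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(1)
n = 10                                          # total of n qubits
for k in range(1, n):                           # k qubits emitted so far
    print(k, round(radiation_entropy(2**k, 2**(n - k), rng), 3))
```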


2006 ◽  
Vol 71 (9) ◽  
pp. 1270-1277 ◽  
Author(s):  
Ali Reza Ashrafi

In this paper, a new algorithm is presented that is useful for computing the automorphism group of chemical graphs. We compare our algorithm with those of Druffel, Schmidt, and Wang. It is proved that the running time of the present algorithm is better than that of the aforementioned algorithms. Also, the automorphism groups of the Euclidean graphs of the fullerene isomers C180, C240, C260, C320, C500 and C720 are computed.
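As a point of reference (not the paper's algorithm), the automorphism group of a small graph can be enumerated by matching the graph against itself, e.g. with networkx; for a Euclidean (edge-weighted) graph, weights could additionally be respected via an edge_match predicate.

```python
# Enumerate the automorphism group of a small molecular-style graph by
# finding all isomorphisms of the graph onto itself.
import networkx as nx
from networkx.algorithms.isomorphism import GraphMatcher

G = nx.cycle_graph(6)      # stand-in for a chemical graph, e.g. a benzene ring
autos = list(GraphMatcher(G, G).isomorphisms_iter())
print(len(autos))          # 12: the dihedral group D6
```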


2013 ◽  
Vol 753-755 ◽  
pp. 2908-2911
Author(s):  
Yao Yuan Zeng ◽  
Wen Tao Zhao ◽  
Zheng Hua Wang

Multilevel hypergraph partitioning is a significant and extensively researched problem in combinatorial optimization. In this paper, we present a multilevel hypergraph partitioning algorithm based on a simulated annealing approach for global optimization. Experiments on a benchmark suite of several unstructured meshes show that, for 2-, 4-, 8-, 16- and 32-way partitioning, although more running time is required, the partitions produced by our algorithm are on average 14% and at most 22% better than those produced by the partitioning software hMETIS in terms of the SOED metric.
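To make the general approach concrete, here is a minimal simulated-annealing sketch for 2-way partitioning of an ordinary graph under the plain edge-cut metric; the paper's algorithm targets hypergraphs and the SOED metric within a multilevel scheme, and all parameters below are illustrative.

```python
# Simulated annealing for 2-way graph partitioning (toy edge-cut version).
import math, random

def cut(edges, part):
    """Number of edges crossing the bipartition."""
    return sum(1 for u, v in edges if part[u] != part[v])

def sa_bipartition(n, edges, T=2.0, cooling=0.995, steps=20000, seed=0):
    rnd = random.Random(seed)
    part = [rnd.randrange(2) for _ in range(n)]
    best, best_cut = part[:], cut(edges, part)
    cur = best_cut
    for _ in range(steps):
        v = rnd.randrange(n)              # propose moving one vertex
        part[v] ^= 1
        new = cut(edges, part)
        # accept all downhill moves; accept uphill moves with Boltzmann prob.
        if new <= cur or rnd.random() < math.exp((cur - new) / T):
            cur = new
            if cur < best_cut:
                best, best_cut = part[:], cur
        else:
            part[v] ^= 1                  # reject: undo the move
        T *= cooling                      # cool the temperature
    return best, best_cut

edges = [(i, (i + 1) % 10) for i in range(10)] + [(0, 5)]
print(sa_bipartition(10, edges))
```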


2013 ◽  
Vol 2013 ◽  
pp. 1-9 ◽  
Author(s):  
Lihong Guo ◽  
Gai-Ge Wang ◽  
Heqi Wang ◽  
Dinan Wang

A hybrid metaheuristic, obtained by hybridizing harmony search (HS) and the firefly algorithm (FA) and named HS/FA, is proposed for function optimization. In HS/FA, the exploration of HS and the exploitation of FA are fully exerted, so HS/FA converges faster than either HS or FA. A top-fireflies scheme is introduced to reduce running time, and HS is used to mutate fireflies during the update step. The HS/FA method is verified on various benchmarks. The experiments show that HS/FA outperforms the standard FA and eight other optimization methods.
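For orientation, here is a schematic firefly-algorithm loop on a benchmark function. This is the generic FA update, not the authors' exact HS/FA hybrid; beta0, gamma, and alpha are the standard FA parameters, with values chosen purely for illustration.

```python
# Generic firefly algorithm minimizing the sphere benchmark.
import numpy as np

def sphere(x):
    return float(np.sum(x**2))

def firefly_minimize(f, dim=5, n=20, iters=200,
                     beta0=1.0, gamma=1.0, alpha=0.1, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(-5, 5, size=(n, dim))          # initial swarm
    for _ in range(iters):
        F = np.array([f(x) for x in X])
        for i in range(n):
            for j in range(n):
                if F[j] < F[i]:                    # j is brighter: move i toward j
                    r2 = np.sum((X[i] - X[j])**2)
                    beta = beta0 * np.exp(-gamma * r2)   # attractiveness decays with distance
                    X[i] += beta * (X[j] - X[i]) + alpha * rng.uniform(-0.5, 0.5, dim)
                    F[i] = f(X[i])
        alpha *= 0.98                              # slowly damp the random walk
    return X[np.argmin([f(x) for x in X])]

print(sphere(firefly_minimize(sphere)))            # should approach 0
```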


2019 ◽  
Vol 8 (4) ◽  
pp. 10051-10056

In recent years, big data has come to mean huge amounts of data that must be analyzed to uncover hidden attributes. Today's technologies make it possible to analyze such data almost immediately. Why is big data important? Because Hadoop enables cost reduction and faster, better decision making. For example, social media platforms such as Twitter, LinkedIn, and Facebook, hubs of person-to-person communication, generate terabytes of data daily. Big data poses three major challenges: Volume, Variety, and Velocity. In this paper we study the performance of the Traditional Distributed File System (TDFS) and the Hadoop Distributed File System (HDFS). One benefit of HDFS over TDFS is its support for Hadoop's Flume tool. Memory block size, data-retrieval time, and security are used as metrics in evaluating the performance of TDFS and HDFS. The results show that HDFS performs better than TDFS on these metrics, and that HDFS is more suitable than TDFS for big data analysis.
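As a toy illustration of one of the metrics above, the following sketch times sequential reads of a local file at several block sizes. It is an analogy only (no Hadoop cluster involved), and the file name, file size, and block sizes are arbitrary choices of ours.

```python
# Time sequential reads of a local file at different block sizes.
import os, time

path = "sample.bin"
with open(path, "wb") as f:                      # create a 64 MiB test file
    f.write(os.urandom(64 * 1024 * 1024))

for block in (4 * 1024, 64 * 1024, 1024 * 1024):
    t0 = time.perf_counter()
    with open(path, "rb") as f:
        while f.read(block):                     # read until EOF in fixed blocks
            pass
    print(f"block={block:>8} bytes  read time={time.perf_counter() - t0:.3f}s")

os.remove(path)
```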


Author(s):  
E. M. Bozhko ◽  
M. V. Spornik
Drawing on relevant and informative sources on modern fine art and the catalogs of various art exhibitions, the article considers, from a practical point of view, questions and problems associated with the creation of architectural and landscape compositions. The architectural landscape, as a genre variety, plays a significant role in art. Perspective views of cities, vedute (A. Canaletto, V. Bellotto), became a distinct type of architectural landscape. The veduta is a genre of painting that developed in eighteenth-century Venice: depictions of views of a city and its environs. Vedute amaze with their accuracy; at the time, such images served in place of photographs, and the requirements placed on the paintings matched that purpose: accuracy in depicting objects, down to the smallest detail. With the advent of photography, these requirements lost their relevance: the camera can capture an object precisely and render small details better than the artist. The changes taking place in modern realistic painting are connected precisely with the appearance of photography. Many modern impressionists, trying to convey the impression of the landscape they saw, paint sketches with wide, broad strokes; for the sake of this technique they ignore many important elements of the landscape in order to maximize the expressiveness of their work. Modern artists working in the realistic tradition of the architectural landscape pay attention to color reproduction and the coloring of the painting, while giving due attention to drawing, linear perspective, and construction. Painting and photography today differ fundamentally from each other. Painting lives up to its name, "living writing": generalization, typification, and stylization of forms, and an impression of lightness, airiness, and illumination for the viewer. Modern realistic painting has changed relative to the painting of the eighteenth and nineteenth centuries; this process is driven by the technical development of the modern world, the advent of digital photography, and new art materials. The language of painting becomes a language of color. Professional art education plays a fundamental role in understanding the landscape as a genre of painting: it allows the artist to combine composition with painterly effect, an innovation in realistic landscape painting, leaving the viewer with a complete, deep impression.


2020 ◽  
Author(s):  
Ravishankar Ramanathan ◽  
Michal Horodecki ◽  
Hammad Anwer ◽  
Stefano Pironio ◽  
Karol Horodecki ◽  
...  

Abstract Device-Independent (DI) security is the gold standard of quantum cryptography, providing information-theoretic security based on the very laws of nature. In its highest form, security is guaranteed against adversaries limited only by the no-superluminal signalling rule of relativity. The task of randomness amplification, to generate secure fully uniform bits starting from weakly random seeds, is of both cryptographic and foundational interest, being important for the generation of cryptographically secure random numbers as well as bringing deep connections to the existence of free-will. DI no-signalling proof protocols for this fundamental task have thus far relied on esoteric proofs of non-locality termed pseudo-telepathy games, complicated multi-party setups or high-dimensional quantum systems, and have remained out of reach of experimental implementation. In this paper, we construct the first practically relevant no-signalling proof DI protocols for randomness amplification based on the simplest proofs of Bell non-locality and illustrate them with an experimental implementation in a quantum optical setup using polarised photons. Technically, we relate the problem to the vast field of Hardy paradoxes, without which it would be impossible to achieve amplification of arbitrarily weak sources in the simplest Bell non-locality scenario consisting of two parties choosing between two binary inputs. Furthermore, we identify a deep connection between proofs of the celebrated Kochen-Specker theorem and Hardy paradoxes that enables us to construct Hardy paradoxes with the non-zero probability taking any value in (0,1]. Our methods enable us, under the fair-sampling assumption of the experiment, to realize up to 25 bits of randomness in 20 hours of experimental data collection from an initial private source of randomness 0.1 away from uniform.
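To put the figure "0.1 away from uniform" in perspective, here is a small calculation under the assumption (our reading of the abstract, not stated in it) that the source is of the Santha-Vazirani type with bias epsilon = 0.1, i.e., each bit's conditional probability stays within [0.4, 0.6]; the guaranteed min-entropy per bit then follows directly.

```python
# Worst-case min-entropy per bit of a Santha-Vazirani source with bias 0.1.
from math import log2

epsilon = 0.1                                # assumed SV bias: Pr[bit] in [0.4, 0.6]
h_min_per_bit = -log2(0.5 + epsilon)         # every bit maximally biased
print(f"min-entropy per source bit >= {h_min_per_bit:.3f}")   # ~0.737
```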

