Quantum Advantage for Shared Randomness Generation

Quantum ◽  
2021 ◽  
Vol 5 ◽  
pp. 569
Author(s):  
Tamal Guha ◽  
Mir Alimuddin ◽  
Sumit Rout ◽  
Amit Mukherjee ◽  
Some Sankar Bhattacharya ◽  
...  

Sharing correlated random variables is a resource for a number of information-theoretic tasks such as privacy amplification, simultaneous message passing, secret sharing and many more. In this article, we show that quantum systems provide an advantage over their classical counterparts in establishing such a resource, called shared randomness. Precisely, we show that appropriate albeit fixed measurements on a shared two-qubit state can generate correlations which cannot be obtained from any possible state of two classical bits. In a resource-theoretic setup, this feature of quantum systems can be interpreted as an advantage in winning a two-player cooperative game, which we call the 'non-monopolize social subsidy' game. It turns out that the quantum states leading to the desired advantage must possess non-classicality in the form of quantum discord. On the other hand, when such sources of shared randomness are distributed between two parties via noisy channels, quantum channels with zero capacity, as well as ones with classical capacity strictly less than unity, perform more efficiently than the perfect classical channel. The protocols presented here are noise-robust and hence should be realizable with state-of-the-art quantum devices.
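
To make the setup concrete, here is a minimal numerical sketch, not the paper's exact protocol: it computes the joint outcome distribution p(a,b) = Tr[(A_a ⊗ B_b) ρ] produced when two parties apply fixed local projective measurements to a shared two-qubit state. The state, noise level, and measurement angles are illustrative assumptions.

```python
import numpy as np

def projector(theta):
    """Rank-1 projector onto cos(theta)|0> + sin(theta)|1>."""
    v = np.array([np.cos(theta), np.sin(theta)])
    return np.outer(v, v)

# Shared two-qubit state: a noisy singlet (Werner-like; p is an assumption).
psi = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)
p = 0.9
rho = p * np.outer(psi, psi) + (1 - p) * np.eye(4) / 4

# Fixed local two-outcome measurements (angles chosen arbitrarily for the sketch).
A = [projector(0.0), projector(np.pi / 2)]                     # Alice
B = [projector(np.pi / 8), projector(np.pi / 8 + np.pi / 2)]   # Bob

joint = np.array([[np.trace(np.kron(A[a], B[b]) @ rho).real
                   for b in (0, 1)] for a in (0, 1)])
print(joint)  # 2x2 joint distribution over outcome pairs (a, b)
```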

2021 ◽  
Vol 12 (1) ◽  
Author(s):  
Bartosz Regula ◽  
Ryuji Takagi

Quantum channels underlie the dynamics of quantum systems, but in many practical settings it is the channels themselves that require processing. We establish universal limitations on the processing of both quantum states and channels, expressed in the form of no-go theorems and quantitative bounds for the manipulation of general quantum channel resources under the most general transformation protocols. Focusing on the class of distillation tasks, which can be understood either as the purification of noisy channels into unitary ones or as the extraction of state-based resources from channels, we develop fundamental restrictions on the error incurred in such transformations, and comprehensive lower bounds for the overhead of any distillation protocol. In the asymptotic setting, our results yield broadly applicable bounds for rates of distillation. We demonstrate our results through applications to fault-tolerant quantum computation, where we obtain state-of-the-art lower bounds for the overhead cost of magic state distillation, as well as to quantum communication, where we recover a number of strong converse bounds for quantum channel capacity.
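
As a toy illustration of the distillation viewpoint, not of the paper's bounds, the sketch below quantifies how far a noisy qubit channel is from the identity via the fidelity of its Choi state with the maximally entangled state; the depolarizing channel and noise levels are assumptions.

```python
import numpy as np

# |Phi+> = (|00> + |11>) / sqrt(2): the Choi state of the identity channel.
phi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
Phi = np.outer(phi, phi)

def choi_depolarizing(p):
    """Choi state of the qubit depolarizing channel with noise strength p."""
    return (1 - p) * Phi + p * np.eye(4) / 4

for p in (0.0, 0.1, 0.5):
    # For a pure target state, fidelity reduces to <Phi+|C|Phi+> = 1 - 3p/4.
    F = phi @ choi_depolarizing(p) @ phi
    print(f"p = {p}: Choi fidelity with the identity channel = {F:.3f}")
```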


2020 ◽  
Vol 2020 (4) ◽  
pp. 76-1-76-7
Author(s):  
Swaroop Shankar Prasad ◽  
Ofer Hadar ◽  
Ilia Polian

Image steganography can have legitimate uses, for example augmenting an image with a watermark for copyright reasons, but it can also be utilized for malicious purposes. We investigate the detection of malicious steganography using neural network-based classification when images are transmitted through a noisy channel. Noise makes detection harder because the classifier must not only detect perturbations in the image but also decide whether they are due to malicious steganographic modifications or to natural noise. Our results show that reliable detection is possible even for state-of-the-art steganographic algorithms that insert stego bits without affecting an image's visual quality. The detection accuracy is high (above 85%) if the payload, i.e., the amount of steganographic content in an image, exceeds a certain threshold. At the same time, noise critically affects the steganographic information being transmitted, both through desynchronization (destroying the information about which bits of the image carry steganographic content) and by flipping these bits themselves. This forces the adversary to use a redundant encoding with a substantial number of error-correction bits for reliable transmission, making detection feasible even for small payloads.
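
The redundancy argument in the last sentence can be made concrete with a toy simulation, a sketch assuming i.i.d. bit flips and simple repetition coding rather than any particular stego system: correcting channel errors forces the adversary to embed many more bits, which is exactly what helps the detector.

```python
import random

def encode(bits, r):
    """r-fold repetition encoding."""
    return [b for b in bits for _ in range(r)]

def channel(bits, flip_prob):
    """Flip each bit independently with probability flip_prob."""
    return [b ^ (random.random() < flip_prob) for b in bits]

def decode(bits, r):
    """Majority vote over each block of r repetitions."""
    return [int(sum(bits[i:i + r]) > r // 2) for i in range(0, len(bits), r)]

random.seed(0)
payload = [random.randint(0, 1) for _ in range(1000)]
for r in (1, 3, 7):
    received = decode(channel(encode(payload, r), flip_prob=0.05), r)
    errors = sum(a != b for a, b in zip(payload, received))
    print(f"repetition r={r}: {len(payload) * r} embedded bits, "
          f"{errors} payload errors")
```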


Author(s):  
Rajnikant Kumar

NSDL was registered by SEBI on June 7, 1996 as India's first depository, set up to facilitate trading and settlement of securities in dematerialized form and to cater to the demanding needs of the Indian capital markets. NSDL commenced operations on November 8, 1996. It was promoted by a number of companies, the most prominent being IDBI, UTI, NSE, SBI and HDFC Bank Ltd. The initial paid-up capital of NSDL was Rs. 105 crore, which was reduced to Rs. 80 crore during 2000-2001 through a buy-back programme in which 2.5 crore shares were bought back at Rs. 12 per share. This was done to bring the size of its capital into better alignment with its financial operations and to provide a return to shareholders by gainfully deploying the excess cash available with NSDL. NSDL carries out its activities through service providers such as depository participants (DPs), issuing companies and their registrars and share transfer agents, and the clearing corporations/clearing houses of stock exchanges. These entities are NSDL's business partners and are integrated into the NSDL depository system to provide various services to investors and clearing members. The investor can get depository services through NSDL's depository participants: an investor needs to open a depository account with a depository participant to avail of depository facilities. The depository system essentially aims at eliminating the voluminous and cumbersome paperwork involved in the scrip-based system and offers scope for 'paperless' trading through state-of-the-art technology. A depository can be compared to a bank: it holds securities of investors in the form of electronic accounts, in the same way as a bank holds money in a savings account. Besides holding securities, a depository also provides services related to transactions in securities.


Author(s):  
Michael Withnall ◽  
Edvard Lindelöf ◽  
Ola Engkvist ◽  
Hongming Chen

We introduce Attention and Edge Memory schemes into the existing Message Passing Neural Network framework for graph convolution, and benchmark our approaches against eight different physical-chemical and bioactivity datasets from the literature. By using only fundamental graph-derived properties, we remove the need for a priori knowledge of the task and for chemical descriptor calculation. Our models consistently perform on par with other state-of-the-art machine learning approaches, and set a new standard on sparse multi-task virtual screening targets. We also investigate model performance as a function of dataset preprocessing, and make some suggestions regarding hyperparameter selection.
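
For readers unfamiliar with the framework, here is a minimal numpy sketch of a single message passing step with attention over incoming neighbour messages; it illustrates the general idea only, not the paper's specific Attention and Edge Memory schemes, and all dimensions and weights are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_nodes, d = 4, 8
h = rng.normal(size=(n_nodes, d))             # node hidden states
adj = np.array([[0, 1, 1, 0],                 # undirected toy graph
                [1, 0, 1, 0],
                [1, 1, 0, 1],
                [0, 0, 1, 0]])
W_msg = rng.normal(size=(d, d))               # message transformation
a_att = rng.normal(size=d)                    # attention scoring vector

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

h_new = np.empty_like(h)
for v in range(n_nodes):
    nbrs = np.nonzero(adj[v])[0]
    msgs = h[nbrs] @ W_msg                    # per-neighbour messages
    att = softmax(msgs @ a_att)               # attention over neighbours
    h_new[v] = np.tanh(h[v] + att @ msgs)     # weighted aggregation + update
print(h_new.shape)
```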


2020 ◽  
Vol 34 (07) ◽  
pp. 11693-11700 ◽  
Author(s):  
Ao Luo ◽  
Fan Yang ◽  
Xin Li ◽  
Dong Nie ◽  
Zhicheng Jiao ◽  
...  

Crowd counting is an important yet challenging task due to large variations in scale and density. Recent investigations have shown that distilling rich relations among multi-scale features and exploiting useful information from the auxiliary task, i.e., localization, are vital for this task. Nevertheless, how to comprehensively leverage these relations within a unified network architecture is still a challenging problem. In this paper, we present a novel network structure called Hybrid Graph Neural Network (HyGnn), which aims to address this problem by interweaving the multi-scale features for crowd density and for its auxiliary task (localization), and performing joint reasoning over a graph. Specifically, HyGnn integrates a hybrid graph to jointly represent the task-specific feature maps of different scales as nodes, and two types of relations as edges: (i) multi-scale relations capturing the feature dependencies across scales and (ii) mutually beneficial relations building bridges for the cooperation between counting and localization. Thus, through message passing, HyGnn can capture and distill richer relations between nodes to obtain more powerful representations, providing robust and accurate results. HyGnn performs strongly on four challenging datasets: ShanghaiTech Part A, ShanghaiTech Part B, UCF_CC_50 and UCF_QNRF, outperforming state-of-the-art algorithms by a large margin.
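
The following toy sketch conveys the hybrid-graph idea in a few lines, treating per-scale counting and localization feature vectors as nodes and alternating messages along the two edge types; it is an illustration under assumed shapes and update rules, not HyGnn itself.

```python
import numpy as np

rng = np.random.default_rng(1)
d, scales = 16, 3
count_nodes = rng.normal(size=(scales, d))   # counting features per scale
loc_nodes = rng.normal(size=(scales, d))     # localization features per scale
W_scale = rng.normal(size=(d, d)) * 0.1      # multi-scale relation weights
W_task = rng.normal(size=(d, d)) * 0.1       # mutual (cross-task) weights

for _ in range(2):                           # a couple of propagation rounds
    # (i) multi-scale edges: each node hears from other scales of its task
    cross_scale_c = count_nodes.mean(0) @ W_scale
    cross_scale_l = loc_nodes.mean(0) @ W_scale
    # (ii) mutual edges: counting and localization exchange messages
    count_nodes = np.tanh(count_nodes + cross_scale_c + loc_nodes @ W_task)
    loc_nodes = np.tanh(loc_nodes + cross_scale_l + count_nodes @ W_task)

print(count_nodes.shape, loc_nodes.shape)
```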


2021 ◽  
Author(s):  
Phongsathorn Kittiworapanya ◽  
Kitsuchart Pasupa ◽  
Peter Auer

We assessed several state-of-the-art deep learning algorithms and computer vision techniques for estimating the particle size of mixed commercial waste from images. In waste management, the first step is often coarse shredding, and the particle size is used to set up the shredder machine. The difficulty is that the waste particles in an image cannot be separated reliably. This work therefore focused on estimating size from the texture of the input image, captured at a fixed height from the camera lens to the ground. We found that EfficientNet achieved the best performance, with an F1-score of 0.72 and an accuracy of 75.89%.
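
A minimal sketch of this kind of pipeline, under stated assumptions, since the exact EfficientNet variant, preprocessing, and number of size classes are not given here: fine-tune torchvision's EfficientNet-B0 to classify a fixed-height texture patch into particle-size classes.

```python
import torch
from torchvision import models, transforms

N_CLASSES = 5  # assumed number of particle-size classes (not from the text)

# Pretrained EfficientNet-B0 with a replaced classification head.
model = models.efficientnet_b0(weights=models.EfficientNet_B0_Weights.IMAGENET1K_V1)
model.classifier[1] = torch.nn.Linear(model.classifier[1].in_features, N_CLASSES)

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),               # texture patch at a fixed scale
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# x = preprocess(img).unsqueeze(0)            # img: a PIL image of the waste
# pred = model(x).argmax(1)                   # predicted size class
```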


Author(s):  
George Dasoulas ◽  
Ludovic Dos Santos ◽  
Kevin Scaman ◽  
Aladin Virmaux

In this paper, we show that a simple coloring scheme can improve, both theoretically and empirically, the expressive power of Message Passing Neural Networks (MPNNs). More specifically, we introduce a graph neural network called Colored Local Iterative Procedure (CLIP) that uses colors to disambiguate identical node attributes, and show that this representation is a universal approximator of continuous functions on graphs with node attributes. Our method relies on separability, a key topological characteristic that makes it possible to extend well-chosen neural networks into universal representations. Finally, we show experimentally that CLIP is capable of capturing structural characteristics that traditional MPNNs fail to distinguish, while remaining state-of-the-art on benchmark graph classification datasets.
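
The coloring scheme can be illustrated in a few lines. The sketch below is a simplified single coloring, not the full CLIP procedure: nodes sharing the same attribute vector receive distinct one-hot colors, which are concatenated to the attributes before message passing; the concatenation layout is an assumption.

```python
import numpy as np

def color_nodes(attrs):
    """attrs: (n, d) array; returns (n, d + k) array with one-hot colors."""
    attrs = np.asarray(attrs, dtype=float)
    n = len(attrs)
    # Group nodes by identical attribute rows.
    groups = {}
    for i, row in enumerate(map(tuple, attrs)):
        groups.setdefault(row, []).append(i)
    k = max(len(g) for g in groups.values())   # number of colors needed
    colors = np.zeros((n, k))
    for members in groups.values():
        for c, i in enumerate(members):        # distinct color within a group
            colors[i, c] = 1.0
    return np.concatenate([attrs, colors], axis=1)

attrs = [[1.0], [1.0], [2.0]]                  # two nodes share an attribute
print(color_nodes(attrs))                      # identical rows now differ
```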


Author(s):  
CHANG-HWAN LEE

In spite of its simplicity, naive Bayesian learning has been widely used in many data mining applications. However, the unrealistic assumption that all features are equally important negatively impacts the performance of naive Bayesian learning. In this paper, we propose a new method that uses a Kullback–Leibler measure to calculate the weights of the features analyzed in naive Bayesian learning. Its performance is compared to that of other state-of-the-art methods over a number of datasets.
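
A hedged sketch of one way such weights can be computed, in the spirit of the description rather than necessarily the paper's exact formula: weight each feature by the prior-weighted average KL divergence between its class-conditional distribution and its marginal distribution, so that features whose distribution barely changes across classes receive weight near zero.

```python
import numpy as np

def kl(p, q, eps=1e-12):
    """KL divergence between two discrete distributions."""
    p, q = np.asarray(p) + eps, np.asarray(q) + eps
    return float(np.sum(p * np.log(p / q)))

def feature_weights(X, y):
    """X: (n, f) array of discrete features, y: (n,) class labels."""
    n, f = X.shape
    classes, counts = np.unique(y, return_counts=True)
    priors = counts / n
    w = np.zeros(f)
    for j in range(f):
        vals = np.unique(X[:, j])
        marg = np.array([(X[:, j] == v).mean() for v in vals])
        for c, pc in zip(classes, priors):
            cond = np.array([(X[y == c, j] == v).mean() for v in vals])
            w[j] += pc * kl(cond, marg)        # prior-weighted divergence
    return w

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 1, 1])
print(feature_weights(X, y))  # feature 0 is informative, feature 1 is not
```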


2011 ◽  
Vol 83 (6) ◽  
Author(s):  
Patrick J. Coles ◽  
Li Yu ◽  
Vlad Gheorghiu ◽  
Robert B. Griffiths

2019 ◽  
Vol 116 (20) ◽  
pp. 9735-9740 ◽  
Author(s):  
Tran Ngoc Huan ◽  
Daniel Alves Dalla Corte ◽  
Sarah Lamaison ◽  
Dilan Karapinar ◽  
Lukas Lutz ◽  
...  

Conversion of carbon dioxide into hydrocarbons using solar energy is an attractive strategy for storing this renewable energy in the form of chemical energy (a fuel). This can be achieved in a system coupling a photovoltaic (PV) cell to an electrochemical cell (EC) for CO2 reduction. To be beneficial and applicable, such a system should use low-cost and easily processable photovoltaic cells and display minimal energy losses associated with the catalysts at the anode and cathode and with the electrolyzer device. In this work, we have considered all of these parameters together to set up a reference PV–EC system for CO2 reduction to hydrocarbons. By using the same original and efficient Cu-based catalysts at both electrodes of the electrolyzer, and by minimizing all possible energy losses associated with the electrolyzer device, we have achieved CO2 reduction to ethylene and ethane with a 21% energy efficiency. Coupled with a state-of-the-art, low-cost perovskite photovoltaic minimodule, this system reaches a 2.3% solar-to-hydrocarbon efficiency, setting a benchmark for an inexpensive all-earth-abundant PV–EC system.
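
As a quick consistency check on the quoted figures: to first approximation, the solar-to-hydrocarbon efficiency of a series-coupled PV–EC system is the product of the PV conversion efficiency and the electrolyzer energy efficiency (coupling losses are neglected in this sketch).

```python
# Quoted in the abstract: 21% electrolyzer energy efficiency and 2.3%
# overall solar-to-hydrocarbon efficiency. Assuming simple series coupling,
# the implied PV module efficiency follows directly.
eta_ec = 0.21                  # CO2-to-hydrocarbon energy efficiency
eta_sth = 0.023                # solar-to-hydrocarbon efficiency
eta_pv = eta_sth / eta_ec      # implied perovskite minimodule efficiency
print(f"implied PV efficiency = {eta_pv:.1%}")  # about 11%
```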

