Noisy Measurements
Recently Published Documents

TOTAL DOCUMENTS: 350 (five years: 76)
H-INDEX: 24 (five years: 4)
Entropy ◽  
2022 ◽  
Vol 24 (1) ◽  
pp. 110
Author(s):  
Onur Günlü

The problem of reliable function computation is extended by imposing privacy, secrecy, and storage constraints on a remote source whose noisy measurements are observed by multiple parties. The main additions to the classic function computation problem include (1) privacy leakage to an eavesdropper is measured with respect to the remote source rather than the transmitting terminals’ observed sequences; (2) the information leakage to a fusion center with respect to the remote source is considered a new privacy leakage metric; (3) the function computed is allowed to be a distorted version of the target function, which allows the storage rate to be reduced compared to a reliable function computation scenario, in addition to reducing secrecy and privacy leakages; (4) two transmitting node observations are used to compute a function. Inner and outer bounds on the rate regions are derived for lossless and lossy single-function computation with two transmitting nodes, which recover previous results in the literature. For special cases, including invertible and partially invertible functions, and degraded measurement channels, exact lossless and lossy rate regions are characterized, and one exact region is evaluated as an example scenario.


2022 ◽  
Vol 22 (1&2) ◽  
pp. 1-16
Author(s):  
Artur Czerwinski

In this article, we investigate the problem of entanglement characterization by polarization measurements combined with maximum likelihood estimation (MLE). A realistic scenario is considered with measurement results distorted by random experimental errors. In particular, by imposing unitary rotations acting on the measurement operators, we can test the performance of the tomographic technique versus the amount of noise. Then, dark counts are introduced to explore the efficiency of the framework in a multi-dimensional noise scenario. The concurrence is used as a figure of merit to quantify how well entanglement is preserved through noisy measurements. Quantum fidelity is computed to quantify the accuracy of state reconstruction. The results of numerical simulations are depicted on graphs and discussed.
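The concurrence used above as the figure of merit has a closed form for two-qubit states (Wootters' formula): C(ρ) = max(0, λ₁ − λ₂ − λ₃ − λ₄), where the λᵢ are the square roots of the eigenvalues of ρρ̃ in decreasing order and ρ̃ = (σy⊗σy)ρ*(σy⊗σy) is the spin-flipped state. As an illustrative sketch (not the article's code), it can be evaluated numerically like this:

```python
import numpy as np

def concurrence(rho):
    """Wootters concurrence of a two-qubit density matrix."""
    sy = np.array([[0, -1j], [1j, 0]])
    yy = np.kron(sy, sy)
    # Spin-flipped state: (sigma_y x sigma_y) rho* (sigma_y x sigma_y)
    rho_tilde = yy @ rho.conj() @ yy
    # Square roots of the eigenvalues of rho @ rho_tilde, sorted descending
    lam = np.sqrt(np.maximum(np.linalg.eigvals(rho @ rho_tilde).real, 0.0))
    lam = np.sort(lam)[::-1]
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

# Maximally entangled Bell state |Phi+> has concurrence 1
phi_plus = np.array([1, 0, 0, 1]) / np.sqrt(2)
rho_bell = np.outer(phi_plus, phi_plus.conj())
print(concurrence(rho_bell))  # ≈ 1.0
```

A noisy reconstruction (e.g., after dark counts) would simply feed the MLE estimate of ρ into the same function, and the drop in concurrence quantifies how much entanglement the noise obscures.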


2021 ◽  
Vol 11 (3) ◽  
Author(s):  
Ryan McKenna ◽  
Gerome Miklau ◽  
Daniel Sheldon

We propose a general approach for differentially private synthetic data generation that consists of three steps: (1) select a collection of low-dimensional marginals, (2) measure those marginals with a noise addition mechanism, and (3) generate synthetic data that preserves the measured marginals well. Central to this approach is Private-PGM, a post-processing method used to estimate a high-dimensional data distribution from noisy measurements of its marginals. We present two mechanisms, NIST-MST and MST, that are instances of this general approach. NIST-MST was the winning mechanism in the 2018 NIST differential privacy synthetic data competition, and MST is a new mechanism that works in more general settings while still performing comparably to NIST-MST. We believe our general approach should be of broad interest and can be adopted in future mechanisms for synthetic data generation.
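Step (2) of the select-measure-generate recipe can be illustrated with a toy one-way marginal measured under Gaussian noise. This is only a minimal sketch of the measure step followed by naive post-processing; it is not Private-PGM itself, which estimates a full graphical-model distribution consistent with all noisy marginals:

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_marginal(column, domain_size, sigma):
    """Measure a one-way marginal (histogram) with additive Gaussian noise,
    as in step (2) of the select-measure-generate approach."""
    counts = np.bincount(column, minlength=domain_size).astype(float)
    return counts + rng.normal(0.0, sigma, size=domain_size)

# Toy data: one categorical attribute with domain {0, 1, 2, 3}
data = rng.integers(0, 4, size=1000)
measured = noisy_marginal(data, 4, sigma=5.0)

# Naive post-processing (Private-PGM does this far more carefully):
# clip negatives and normalise back to a valid distribution.
est = np.maximum(measured, 0.0)
est = est / est.sum()
print(est)
```

Synthetic records could then be sampled from `est`; the point of Private-PGM is to do this jointly over many overlapping marginals rather than one at a time.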


2021 ◽  
Vol 7 (1) ◽  
Author(s):  
Gaoyan Zhu ◽  
Daniel Dilley ◽  
Kunkun Wang ◽  
Lei Xiao ◽  
Eric Chitambar ◽  
...  

The Clauser–Horne–Shimony–Holt (CHSH) inequality test is widely used as a means of invalidating local deterministic theories. Most attempts to experimentally test nonlocality have presumed unphysical idealizations that do not hold in real experiments, namely noiseless measurements. We demonstrate an experimental violation of the CHSH inequality that is free of this idealization and rules out local models with high confidence. We show that the CHSH inequality can always be violated for any nonzero noise parameter of the measurement. Intriguingly, less entanglement exhibits more nonlocality in the CHSH test with noisy measurements. Furthermore, we theoretically propose and experimentally demonstrate how the CHSH test with noisy measurements can be used to detect weak entanglement in two-qubit states. Our results offer deeper insight into the relation between entanglement and nonlocality.
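For reference, the ideal (noiseless) CHSH value S = E(a₀,b₀) + E(a₀,b₁) + E(a₁,b₀) − E(a₁,b₁) reaches the Tsirelson bound 2√2 on a maximally entangled state with the standard measurement angles. A minimal numerical check (an illustration of the textbook setting, not the noisy-measurement protocol of this paper):

```python
import numpy as np

sz = np.array([[1, 0], [0, -1]], dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)

def obs(theta):
    """Spin observable in the x-z plane at angle theta."""
    return np.cos(theta) * sz + np.sin(theta) * sx

def chsh(rho, a0, a1, b0, b1):
    """CHSH value S for the given state and measurement settings."""
    E = lambda A, B: np.trace(rho @ np.kron(A, B)).real
    return E(a0, b0) + E(a0, b1) + E(a1, b0) - E(a1, b1)

# Maximally entangled |Phi+> with the standard optimal angles
phi = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
rho = np.outer(phi, phi.conj())
S = chsh(rho, obs(0), obs(np.pi / 2), obs(np.pi / 4), obs(-np.pi / 4))
print(S)  # ≈ 2*sqrt(2) ≈ 2.828, above the local bound of 2
```

Modelling measurement noise would replace the projective observables here with noisy POVM elements, which is where the paper's weak-entanglement detection comes in.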


Author(s):  
Howard Heaton ◽  
Samy Wu Fung ◽  
Aviv Gibali ◽  
Wotao Yin

Inverse problems consist of recovering a signal from a collection of noisy measurements. These problems can often be cast as feasibility problems; however, additional regularization is typically necessary to ensure accurate and stable recovery with respect to data perturbations. Hand-chosen analytic regularization can yield desirable theoretical guarantees, but such approaches have limited effectiveness in recovering signals, due to their inability to leverage large amounts of available data. To this end, this work fuses data-driven regularization and convex feasibility in a theoretically sound manner. This is accomplished using feasibility-based fixed point networks (F-FPNs). Each F-FPN defines a collection of nonexpansive operators, each of which is the composition of a projection-based operator and a data-driven regularization operator. Fixed point iteration is used to compute fixed points of these operators, and the weights of the operators are tuned so that the fixed points closely represent the available data. Numerical examples demonstrate performance gains of F-FPNs over standard TV-based recovery methods for CT reconstruction and over a comparable neural network based on algorithm unrolling. Code is available on GitHub: github.com/howardheaton/feasibility_fixed_point_networks.
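The classical backbone of this construction is fixed point iteration on a composition of projections (projections onto convex sets, POCS). A minimal sketch under a toy feasibility problem, where an F-FPN would replace one projection with a learned regularization operator:

```python
import numpy as np

def proj_hyperplane(x, a, b):
    """Project x onto the hyperplane {z : a.z = b}."""
    return x - (a @ x - b) / (a @ a) * a

def proj_nonneg(x):
    """Project x onto the nonnegative orthant."""
    return np.maximum(x, 0.0)

def T(x, a, b):
    """Nonexpansive operator: composition of two projections.
    In an F-FPN, one factor would be a data-driven operator instead."""
    return proj_nonneg(proj_hyperplane(x, a, b))

a = np.array([1.0, 2.0, 3.0])
b = 6.0
x = np.zeros(3)
for _ in range(200):  # fixed point iteration x_{k+1} = T(x_k)
    x = T(x, a, b)
print(x, a @ x)  # converges to a feasible point: x >= 0 and a.x ≈ 6
```

Tuning the weights of the data-driven factor so that the fixed points match training data is what distinguishes an F-FPN from this purely analytic iteration.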


Sensors ◽  
2021 ◽  
Vol 21 (22) ◽  
pp. 7675
Author(s):  
Angel L. Cedeño ◽  
Ricardo Albornoz ◽  
Rodrigo Carvajal ◽  
Boris I. Godoy ◽  
Juan C. Agüero

Filtering and smoothing algorithms are key tools for developing decision-making strategies and parameter identification techniques in different areas of research, such as economics, financial data analysis, communications, and control systems. These algorithms are used to estimate the system state from the sequentially available noisy measurements of the system output. In a real-world system, the noisy measurements can suffer a significant loss of information due to, among other causes: (i) the reduced resolution of cost-effective sensors typically used in practice, or (ii) a digitalization process for storing or transmitting the measurements through a communication channel using a minimum amount of resources. Thus, obtaining suitable state estimates in this context is essential. In this paper, Gaussian sum filtering and smoothing algorithms are developed to deal with noisy measurements that are also subject to quantization. In this approach, the probability mass function of the quantized output given the state is characterized by an integral equation. This integral is approximated using a Gauss–Legendre quadrature, which yields a model with a Gaussian mixture structure. This model is then used to develop the filtering and smoothing algorithms. The benefits of this proposal, in terms of estimation accuracy and computational cost, are illustrated via numerical simulations.
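The core numerical step above, approximating the probability mass of a quantized output by integrating a Gaussian density over a quantization bin with Gauss–Legendre quadrature, can be sketched as follows (a toy illustration of the quadrature idea, not the paper's full filter):

```python
import numpy as np
from math import erf, sqrt

def gauss_pdf(y, mu, sigma):
    """Density of N(mu, sigma^2) evaluated at y."""
    return np.exp(-0.5 * ((y - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def quantized_prob(lo, hi, mu, sigma, order=10):
    """P(lo <= y < hi) for y ~ N(mu, sigma^2), approximated with
    Gauss-Legendre quadrature over the quantization bin [lo, hi)."""
    nodes, weights = np.polynomial.legendre.leggauss(order)
    # Map nodes from [-1, 1] to [lo, hi]
    y = 0.5 * (hi - lo) * nodes + 0.5 * (hi + lo)
    return 0.5 * (hi - lo) * np.sum(weights * gauss_pdf(y, mu, sigma))

# Probability that a unit Gaussian output falls in the bin [0, 1)
approx = quantized_prob(0.0, 1.0, mu=0.0, sigma=1.0)
exact = 0.5 * (erf(1 / sqrt(2)) - erf(0.0))
print(approx, exact)  # both ≈ 0.3413
```

Evaluating this quadrature at several nodes is what turns the exact integral into a finite Gaussian mixture, which the filtering and smoothing recursions can then propagate in closed form.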


2021 ◽  
Author(s):  
Quan Sun ◽  
Fei‐Yun Wu ◽  
Kunde Yang ◽  
Chunlong Huang
