local detectors
Recently Published Documents


TOTAL DOCUMENTS: 19 (FIVE YEARS: 3)

H-INDEX: 4 (FIVE YEARS: 1)

Author(s): Kai Yu, Chun Hui Zhang, Xing Yu Zhou, Qin Wang

Abstract In quantum key distribution (QKD), the passive decoy-state method simplifies intensity modulation and reduces side-channel information leakage and modulation errors. It is usually implemented with a heralded single-photon source. In [Physical Review A 96, 032312 (2016)], Wang et al. proposed a novel passive decoy-state method that uses two local detectors to generate more detection events for tightly estimating channel parameters. However, the original scheme assumes the two local detectors are identical, with the same detection efficiency and dark count rate, which is often not satisfied in realistic experiments. In this paper, we therefore construct a model of this passive decoy-state QKD scheme with two mismatched detectors and explore how the mismatch affects QKD performance under given parameters. We also take the finite-size effect into account, showing the performance under statistical fluctuations. The results show that the efficiencies of the local detectors affect the key rate more strongly than the dark count rates do. Our work provides a reference for realistic QKD systems.
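A minimal sketch of why detector mismatch matters, under an assumed textbook threshold-detector model (not the authors' exact formulas): a detector with efficiency eta and dark-count rate d clicks on an n-photon pulse with probability 1 - (1 - d)(1 - eta)^n, so two mismatched local detectors report different click statistics for the same source.

```python
import numpy as np

# Click probability for a threshold detector: an n-photon pulse is missed
# with probability (1 - eta)**n, and a dark count fires with probability d.
def click_prob(n, eta, dark):
    return 1.0 - (1.0 - dark) * (1.0 - eta) ** n

# Hypothetical mismatched local detectors monitoring the heralding arm.
eta1, dark1 = 0.60, 1e-6   # D1: higher efficiency, quieter
eta2, dark2 = 0.45, 1e-5   # D2: lower efficiency, noisier

for n in (0, 1, 2):
    p1 = click_prob(n, eta1, dark1)
    p2 = click_prob(n, eta2, dark2)
    print(f"n={n}: P(D1 clicks)={p1:.3e}  P(D2 clicks)={p2:.3e}")
```

For n = 0 the click probability reduces to the dark-count rate, while for n >= 1 the efficiency term dominates, which is consistent with the abstract's observation that efficiency mismatch influences the key rate more than dark-count mismatch.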


2019, Vol 128 (2), pp. 420-437
Author(s): Christoph Feichtenhofer, Axel Pinz, Richard P. Wildes, Andrew Zisserman

Abstract As the success of deep models has led to their deployment in all areas of computer vision, it is increasingly important to understand how these representations work and what they are capturing. In this paper, we shed light on deep spatiotemporal representations by visualizing the internal representation of models that have been trained to recognize actions in video. We visualize multiple two-stream architectures to show that local detectors for appearance and motion objects arise to form distributed representations for recognizing human actions. Key observations include the following. First, cross-stream fusion enables the learning of true spatiotemporal features rather than simply separate appearance and motion features. Second, the networks can learn local representations that are highly class specific, but also generic representations that can serve a range of classes. Third, throughout the hierarchy of the network, features become more abstract and show increasing invariance to aspects of the data that are unimportant to desired distinctions (e.g. motion patterns across various speeds). Fourth, visualizations can be used not only to shed light on learned representations, but also to reveal idiosyncrasies of training data and to explain failure cases of the system.
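A toy illustration of the visualization idea behind such studies (a generic activation-maximization sketch, not the authors' method): gradient ascent on the input finds the pattern that most excites a chosen unit. Here the "network" is a single random linear filter, so the optimum is known in closed form; deep spatiotemporal models just replace the filter with a learned hierarchy.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=64)           # weights of the (hypothetical) unit to visualize
x = rng.normal(size=64) * 0.01    # start from a small random input

lr = 0.1
for _ in range(200):
    grad = w                      # d(w @ x)/dx for a linear unit
    x = x + lr * grad             # ascend the unit's activation
    x = x / np.linalg.norm(x)     # norm constraint keeps the input bounded

# For a linear unit, the optimal input aligns with w / ||w||.
alignment = float(x @ w) / np.linalg.norm(w)
print(f"alignment with filter: {alignment:.4f}")
```

The recovered input converges to the filter itself, which is the sense in which such visualizations expose what a unit "detects".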


2017, Vol 59, pp. 150-167
Author(s): Song Wu, Ard Oerlemans, Erwin M. Bakker, Michael S. Lew

2015, Vol 7 (10), pp. 12-18
Author(s): Faten A. Khalifa, Noura A. Semary, Hatem M. El-Sayed, Mohiy M. Hadhoud

2015, Vol 44, pp. 1-13
Author(s): S. Bianco, D. Mazzini, D.P. Pau, R. Schettini
