Autoscaling Bloom filter: controlling trade-off between true and false positives

2019 ◽  
Vol 32 (8) ◽  
pp. 3675-3684 ◽  
Author(s):  
Denis Kleyko ◽  
Abbas Rahimi ◽  
Ross W. Gayler ◽  
Evgeny Osipov
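
Only the title of this entry survives in the listing. As background for the trade-off it names, here is a minimal sketch of a standard Bloom filter together with the usual false-positive approximation p ≈ (1 - e^(-k*n/m))^k; the paper's autoscaling mechanism is not reproduced, and all names below are illustrative.

```python
import hashlib
import math

class BloomFilter:
    """Minimal standard Bloom filter (illustrative, not the paper's variant)."""

    def __init__(self, m_bits: int, k_hashes: int):
        self.m = m_bits          # size of the bit array
        self.k = k_hashes        # number of hash functions
        self.bits = [False] * m_bits

    def _positions(self, item: str):
        # Derive k indices by salting one hash with the function index.
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.m

    def add(self, item: str) -> None:
        for pos in self._positions(item):
            self.bits[pos] = True

    def query(self, item: str) -> bool:
        # True for every added item (no false negatives in the plain filter);
        # may also be True for items never added (false positives).
        return all(self.bits[pos] for pos in self._positions(item))

def expected_fp_rate(m: int, k: int, n: int) -> float:
    # Standard approximation after n insertions: (1 - e^(-k*n/m))**k
    return (1.0 - math.exp(-k * n / m)) ** k
```

Because the plain filter never returns a false negative, its true-positive rate is fixed at one; a variant that can trade true positives against false positives, as the title describes, has to relax that guarantee.
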
2020 ◽  
Vol 2020 (14) ◽  
pp. 378-1-378-7
Author(s):  
Tyler Nuanes ◽  
Matt Elsey ◽  
Radek Grzeszczuk ◽  
John Paul Shen

We present a high-quality sky segmentation model for depth refinement and investigate residual architecture performance to inform optimal shrinking of the network. We describe a model that runs in near real-time on a mobile device, present a new high-quality dataset, and detail a unique weighting to trade off false positives and false negatives in binary classifiers. We show how these optimizations improve bokeh rendering by correcting stereo depth mispredictions in sky regions. We detail techniques used to preserve edges, reject false positives, and ensure generalization to the diversity of sky scenes. Finally, we present a compact model and compare the performance of four popular residual architectures (ShuffleNet, MobileNetV2, ResNet-101, and ResNet-34-like) at constant computational cost.
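
The abstract does not spell out the weighting scheme itself. One conventional way to trade off false positives against false negatives in a binary classifier is a cross-entropy loss with separate weights on the two error terms; the sketch below is that generic construction under that assumption, not the authors' exact scheme.

```python
import numpy as np

def weighted_bce(y_true: np.ndarray, y_pred: np.ndarray,
                 fp_weight: float = 1.0, fn_weight: float = 1.0,
                 eps: float = 1e-7) -> float:
    """Binary cross-entropy with separate penalties for the two error types.

    y_true: ground-truth sky mask in {0, 1}; y_pred: predicted probabilities.
    fn_weight scales the loss on sky pixels the model misses (false
    negatives); fp_weight scales the loss on non-sky pixels the model labels
    as sky (false positives).
    """
    y_pred = np.clip(y_pred, eps, 1.0 - eps)
    fn_term = -fn_weight * y_true * np.log(y_pred)
    fp_term = -fp_weight * (1.0 - y_true) * np.log(1.0 - y_pred)
    return float(np.mean(fn_term + fp_term))
```

Raising fp_weight makes the predicted sky mask more conservative, which suits bokeh rendering, where labeling foreground as sky is usually the costlier error.
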


2013 ◽  
Vol 22 (23) ◽  
pp. 5738-5742 ◽  
Author(s):  
Hugo B. Harrison ◽  
Pablo Saenz-Agudelo ◽  
Serge Planes ◽  
Geoffrey P. Jones ◽  
Michael L. Berumen

2014 ◽  
Vol 8 (4) ◽  
pp. 1865-1877 ◽  
Author(s):  
Hyesook Lim ◽  
Nara Lee ◽  
Jungwon Lee ◽  
Changhoon Yim

2021 ◽  
pp. 1-22
Author(s):  
Patrick M. Kuhn ◽  
Nick Vivyan

Abstract To reduce strategic misreporting on sensitive topics, survey researchers increasingly use list experiments rather than direct questions. However, the complexity of list experiments may increase nonstrategic misreporting. We provide the first empirical assessment of this trade-off between strategic and nonstrategic misreporting. We field list experiments on election turnout in two different countries, collecting measures of respondents’ true turnout. We detail and apply a partition validation method which uses true scores to distinguish true and false positives and negatives for list experiments, thus allowing detection of nonstrategic reporting errors. For both list experiments, partition validation reveals nonstrategic misreporting that is: undetected by standard diagnostics or validation; greater than assumed in extant simulation studies; and severe enough that direct turnout questions subject to strategic misreporting exhibit lower overall reporting error. We discuss how our results can inform the choice between list experiment and direct question for other topics and survey contexts.
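
As a rough illustration of the partition-validation idea described above, the sketch below splits respondents by their validated (true) turnout and applies the usual list-experiment difference-in-means estimator within each partition: positive estimates among validated nonvoters indicate false positives, and shortfalls among validated voters indicate false negatives. The data layout and function names are assumptions, not the authors' code.

```python
import numpy as np

def diff_in_means(counts: np.ndarray, treated: np.ndarray) -> float:
    """List-experiment estimate of the sensitive-item rate: mean item count
    under the treatment (long) list minus under the control (short) list."""
    return counts[treated == 1].mean() - counts[treated == 0].mean()

def partition_validate(counts, treated, true_score):
    """Apply the estimator separately within true-score partitions.

    counts: reported item counts; treated: 1 if respondent saw the long list;
    true_score: validated turnout (1 = actually voted).
    Returns estimated reporting-error rates for the list experiment.
    """
    counts, treated, true_score = map(np.asarray, (counts, treated, true_score))
    voters, nonvoters = true_score == 1, true_score == 0
    # Among validated voters, any shortfall from 1 is false negatives.
    fn_rate = 1.0 - diff_in_means(counts[voters], treated[voters])
    # Among validated nonvoters, any positive estimate is false positives.
    fp_rate = diff_in_means(counts[nonvoters], treated[nonvoters])
    return {"false_negative_rate": fn_rate, "false_positive_rate": fp_rate}
```
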


Author(s):  
Siyue Wang ◽  
Xiao Wang ◽  
Pin-Yu Chen ◽  
Pu Zhao ◽  
Xue Lin

This paper proposes Characteristic Examples for effectively fingerprinting deep neural networks, featuring high robustness to pruning of the base model as well as low transferability to unassociated models. This is the first work to take both robustness and transferability into consideration when generating realistic fingerprints, whereas current methods rely on impractical assumptions and may incur large false-positive rates. To achieve a better trade-off between robustness and transferability, we propose three kinds of characteristic examples: vanilla C-examples, RC-examples, and LTRC-examples, to derive fingerprints from the original base model. To fairly characterize this trade-off, we propose the Uniqueness Score, a comprehensive metric that measures the difference between robustness and transferability and also serves as an indicator of the false-alarm problem. Extensive experiments demonstrate that the proposed characteristic examples achieve superior performance compared with existing fingerprinting methods. In particular, for VGG ImageNet models, using LTRC-examples yields a 4x higher Uniqueness Score than the baseline method and incurs no false positives.
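
The abstract describes the Uniqueness Score only as the difference between robustness and transferability. The sketch below takes that reading literally, measuring fingerprint match rates on pruned copies of the base model (robustness) and on unrelated models (transferability) and returning the gap; the interfaces and names are assumptions for illustration.

```python
import numpy as np

def match_rate(model, examples, expected_labels) -> float:
    """Fraction of characteristic examples whose predictions match the base
    model's labels. `model` is any callable mapping a batch of inputs to
    predicted labels (illustrative interface)."""
    preds = model(examples)
    return float(np.mean(preds == expected_labels))

def uniqueness_score(pruned_models, unrelated_models,
                     examples, expected_labels) -> float:
    """Robustness minus transferability, per the abstract's description.

    High robustness: pruned copies of the base model still match the
    fingerprint. Low transferability: unassociated models do not, so the
    fingerprint raises no false alarms. A larger gap means a more unique,
    lower-false-positive fingerprint.
    """
    robustness = np.mean([match_rate(m, examples, expected_labels)
                          for m in pruned_models])
    transferability = np.mean([match_rate(m, examples, expected_labels)
                               for m in unrelated_models])
    return robustness - transferability
```
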

