output probability
Recently Published Documents

TOTAL DOCUMENTS: 77 (FIVE YEARS: 14)
H-INDEX: 8 (FIVE YEARS: 1)

2021 ◽  
Vol 7 (1) ◽  
Author(s):  
Ernest Y.-Z. Tan ◽  
René Schwonnek ◽  
Koon Tong Goh ◽  
Ignatius William Primaatmaja ◽  
Charles C.-W. Lim

Abstract: Device-independent quantum key distribution (DIQKD) provides the strongest form of secure key exchange, using only the input–output statistics of the devices to achieve information-theoretic security. Although the basic security principles of DIQKD are now well understood, it remains a technical challenge to derive reliable and robust security bounds for advanced DIQKD protocols that go beyond the previous results based on violations of the CHSH inequality. In this work, we present a framework based on semidefinite programming that gives reliable lower bounds on the asymptotic secret key rate of any QKD protocol using untrusted devices. In particular, our method can in principle be utilized to find achievable secret key rates for any DIQKD protocol, based on the full input–output probability distribution or any choice of Bell inequality. Our method also extends to other DI cryptographic tasks.
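For context, a minimal sketch (not the paper's semidefinite-programming framework) of the earlier CHSH-based result the abstract refers to: the well-known analytic lower bound on the asymptotic key rate of the standard CHSH protocol, r ≥ 1 − h(Q) − h((1 + √(S²/4 − 1))/2), where S is the CHSH value, Q the quantum bit error rate, and h the binary entropy.

```python
# A minimal sketch, not the paper's SDP framework: the earlier analytic
# key-rate lower bound for the CHSH-based DIQKD protocol,
#   r >= 1 - h(Q) - h((1 + sqrt(S^2/4 - 1)) / 2),
# with S the CHSH value, Q the quantum bit error rate, h the binary entropy.
import math

def binary_entropy(p: float) -> float:
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def chsh_key_rate_lower_bound(S: float, Q: float) -> float:
    """Asymptotic secret key rate lower bound for the standard CHSH protocol."""
    if S <= 2.0:
        return 0.0  # no Bell violation, hence no device-independent security
    eve_term = binary_entropy((1 + math.sqrt(S ** 2 / 4 - 1)) / 2)
    return max(0.0, 1 - eve_term - binary_entropy(Q))

# Example: maximal quantum violation S = 2*sqrt(2) with a 2% error rate
print(chsh_key_rate_lower_bound(2 * math.sqrt(2), 0.02))  # ~0.86
```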


2021 ◽  
Vol 7 (1) ◽  
Author(s):  
Ryan L. Mann

Abstract: We establish a classical heuristic algorithm for exactly computing quantum probability amplitudes. Our algorithm is based on mapping output probability amplitudes of quantum circuits to evaluations of the Tutte polynomial of graphic matroids. The algorithm evaluates the Tutte polynomial recursively using the deletion–contraction property while attempting to exploit structural properties of the matroid. We consider several variations of our algorithm and present experimental results comparing their performance on two classes of random quantum circuits. Further, we obtain an explicit form for Clifford circuit amplitudes in terms of matroid invariants and an alternative efficient classical algorithm for computing the output probability amplitudes of Clifford circuits.
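A minimal sketch of the deletion–contraction recurrence the abstract describes, assuming a multigraph given as an edge list; the paper's heuristics for exploiting matroid structure and its mapping to circuit amplitudes are omitted.

```python
# A minimal sketch of the deletion-contraction recursion for the Tutte
# polynomial T(G; x, y): T = 1 for an edgeless graph, T = y*T(G - e) for a
# loop, T = x*T(G / e) for a bridge, and T = T(G - e) + T(G / e) otherwise.
from sympy import Integer, expand, symbols

x, y = symbols("x y")

def contract(edges, u, v):
    """Merge vertex v into u in the remaining edge list."""
    def merge(w):
        return u if w == v else w
    return [(merge(a), merge(b)) for a, b in edges]

def is_bridge(edges, i):
    """Edge i = (u, v) is a bridge iff v is unreachable from u without it."""
    u, v = edges[i]
    rest = edges[:i] + edges[i + 1:]
    adj = {}
    for a, b in rest:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    seen, stack = {u}, [u]
    while stack:
        w = stack.pop()
        for nxt in adj.get(w, ()):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return v not in seen

def tutte(edges):
    if not edges:
        return Integer(1)
    (u, v), rest = edges[0], edges[1:]
    if u == v:                       # loop: T(G) = y * T(G - e)
        return y * tutte(rest)
    if is_bridge(edges, 0):          # bridge: T(G) = x * T(G / e)
        return x * tutte(contract(rest, u, v))
    return tutte(rest) + tutte(contract(rest, u, v))  # delete + contract

# Example: a triangle has Tutte polynomial x**2 + x + y
print(expand(tutte([(1, 2), (2, 3), (1, 3)])))
```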


2021 ◽  
Author(s):  
Evander van Wolfswinkel ◽  
Jette Wielaard ◽  
Jules Lavalaye ◽  
Jorrit Hoff ◽  
Jan Booij ◽  
...  

Abstract: Purpose: Dopamine transporter (DAT) imaging with 123I-FP-CIT SPECT is used to support the diagnosis of Parkinson’s disease (PD) in clinically uncertain cases. Previous studies showed that automatic classification of 123I-FP-CIT SPECT images (marketed as DaTSCAN) is feasible using machine learning algorithms. However, these studies made limited use of data from routine clinical practice. This study aims to contribute to the discussion of whether artificial intelligence (AI) can be applied in clinical practice. Moreover, we investigated the need for hospital-specific training data. Methods: A convolutional neural network (CNN) named DaTNet-3 was designed and trained to classify DaTSCAN images as either normal or supportive of a dopaminergic deficit. Both a multi-site data set (n = 2412) from the Parkinson’s Progression Markers Initiative (PPMI) and an in-house data set containing clinical images (n = 932) obtained in routine practice at the St Antonius hospital (STA) were used for training and testing. STA images were labeled based on interpretation by nuclear medicine physicians. To investigate whether indeterminate scans affect classification accuracy, a threshold was applied to the output probability. Results: DaTNet-3 trained with STA data reached an accuracy of 89.0% in correctly identifying images of the clinical STA test set as either normal or with decreased striatal DAT binding (98.5% on the PPMI test set). When the output probability was thresholded, accuracy increased to 95.7%. This increase was not observed when DaTNet-3 was trained with PPMI data, indicating that the misclassified images were confidently assigned to the incorrect class. Conclusion: Based on the results of DaTNet-3, we conclude that automatic interpretation of DaTSCAN images with AI is feasible and robust. Further, we conclude that DaTNet-3 performs slightly better when it is trained with hospital-specific data. This difference increased when the output probability was thresholded. Therefore, we conclude that the usability of a data set increases if it contains indeterminate images.
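A minimal sketch of the thresholding step described above, assuming two-class softmax outputs; the threshold value, array shapes, and function name are illustrative, not details from the paper.

```python
# A minimal sketch, not the clinical pipeline: flag scans whose maximum class
# probability falls below a threshold as indeterminate instead of forcing a
# normal/deficit decision. The 0.8 threshold and shapes are assumptions.
import numpy as np

def classify_with_threshold(probs: np.ndarray, threshold: float = 0.8):
    """probs: (n_scans, 2) softmax outputs [p_normal, p_deficit]."""
    labels = []
    for p_normal, p_deficit in probs:
        if max(p_normal, p_deficit) < threshold:
            labels.append("indeterminate")        # left for physician review
        elif p_deficit > p_normal:
            labels.append("dopaminergic deficit")
        else:
            labels.append("normal")
    return labels

print(classify_with_threshold(np.array([[0.97, 0.03], [0.55, 0.45], [0.10, 0.90]])))
# ['normal', 'indeterminate', 'dopaminergic deficit']
```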


Author(s):  
Tsunato Nakai ◽  
Daisuke Suzuki ◽  
Takeshi Fujino

Deep neural networks (DNNs) have been applied across various industries. In particular, DNNs on embedded devices have attracted considerable interest because they allow real-time and distributed processing on site. However, adversarial examples (AEs), which add small perturbations to the input data of DNNs to cause misclassification, are serious threats to DNNs. In this paper, a novel black-box attack is proposed that crafts AEs based only on processing time, i.e., the side-channel leakage from DNNs on embedded devices. Unlike several existing black-box attacks that utilize the output probability, the proposed attack exploits the relationship between the number of activated nodes and processing time, without using training data, model architecture, parameters, substitute models, or output probability. The perturbations for AEs are determined from the differential processing time induced by the input data of the DNNs. The experimental results show that the AEs of the proposed attack effectively increase the number of activated nodes and cause misclassification into one of the incorrect labels for DNNs on a microcontroller unit. Moreover, these results indicate that the attack can evade gradient-masking and confidence-reduction countermeasures, which conceal the output probability to prevent several black-box attacks from crafting AEs. Finally, countermeasures against the attack are implemented and evaluated, clarifying that an activation function implemented with data-dependent timing leakage is the cause of the proposed attack.
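A minimal sketch in the spirit of the timing-only attack described above, assuming the attacker can repeatedly query the device and measure wall-clock inference time; `query_device`, the step size, and the iteration budget are placeholders, and the paper's actual perturbation-selection procedure may differ.

```python
# A minimal sketch of a timing-only black-box perturbation search: only
# wall-clock inference time is observed, as a proxy for the number of
# activated nodes. `query_device` stands in for the real embedded-DNN call.
import time
import numpy as np

def measure_time(query_device, x, repeats=20):
    start = time.perf_counter()
    for _ in range(repeats):
        query_device(x)                       # only the timing side channel is used
    return (time.perf_counter() - start) / repeats

def timing_guided_perturbation(query_device, x, eps=0.03, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    x_adv = x.copy()
    best = measure_time(query_device, x_adv)
    for _ in range(iters):
        idx = tuple(rng.integers(0, s) for s in x.shape)   # perturb one input element
        candidate = x_adv.copy()
        candidate[idx] = np.clip(candidate[idx] + rng.choice([-eps, eps]), 0.0, 1.0)
        elapsed = measure_time(query_device, candidate)
        if elapsed > best:                    # keep changes that lengthen processing time
            x_adv, best = candidate, elapsed
    return x_adv
```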


2021 ◽  
Vol 11 (2) ◽  
pp. 135-145
Author(s):  
Ying-Heng Yeo ◽  
Kin-Sam Yen

As edible bird’s nest (EBN) is an important export, cleanliness control is paramount. Automatic impurity detection is urgently needed to replace manual practices. However, an effective impurity detection algorithm has yet to be developed because of the unresolved inhomogeneous optical properties of EBN. The objective of this work is to develop a novel U-net based algorithm for accurate impurity detection. The algorithm leveraged the convolution mechanisms of U-net for precise and localized feature extraction. Output probability tensors were then generated from the deconvolution layers for impurity detection and positioning. The U-net based algorithm outperformed previous image-processing-based methods, with a higher impurity detection rate of 96.69% and a lower misclassification rate of 10.08%. The applicability of the algorithm was further confirmed by a reasonably high Dice coefficient of more than 0.8. In conclusion, the developed U-net based algorithm successfully mitigated intensity inhomogeneity in EBN and improved the impurity detection rate.
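A minimal sketch of how such an output probability tensor could be turned into a binary impurity mask and scored with the Dice coefficient mentioned above; the 0.5 threshold and array shapes are assumptions, not details from the paper.

```python
# A minimal sketch: threshold the U-net output probability map into a binary
# impurity mask and score it against the ground truth with the Dice coefficient.
import numpy as np

def dice_coefficient(pred_probs: np.ndarray, ground_truth: np.ndarray,
                     threshold: float = 0.5) -> float:
    """pred_probs: (H, W) per-pixel impurity probabilities; ground_truth: (H, W) binary mask."""
    pred_mask = pred_probs >= threshold
    gt_mask = ground_truth.astype(bool)
    intersection = np.logical_and(pred_mask, gt_mask).sum()
    denom = pred_mask.sum() + gt_mask.sum()
    return 1.0 if denom == 0 else 2.0 * intersection / denom
```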


2020 ◽  
Vol 2 (2) ◽  
Author(s):  
Matthias C. Caro ◽  
Ishaun Datta

Abstract: We characterize the expressive power of quantum circuits with the pseudo-dimension, a measure of complexity for probabilistic concept classes. We prove pseudo-dimension bounds on the output probability distributions of quantum circuits; the upper bounds are polynomial in circuit depth and number of gates. Using these bounds, we exhibit a class of circuit output states out of which at least one has exponential gate complexity of state preparation, and moreover demonstrate that quantum circuits of known polynomial size and depth are PAC-learnable.


2020 ◽  
Vol 10 (8) ◽  
pp. 2950
Author(s):  
Qiuyu Zhu ◽  
Zikuang He ◽  
Tao Zhang ◽  
Wennan Cui

Convolutional neural networks (CNNs) have achieved great success on computer vision tasks, especially image classification. With improvements in network structures and loss functions, image classification performance has kept rising. The classic Softmax + cross-entropy loss, which is calculated from the output probability of the ground-truth class, has been the norm for training neural networks for years. The network’s weights are then updated from the gradient of this loss. However, after several epochs of training, the back-propagation errors usually become almost negligible. In light of these considerations, we propose adding batch normalization with an adjustable scale after the network output to alleviate the vanishing gradient problem in deep learning. The experimental results show that our method can significantly improve the final classification accuracy across different network structures and also outperforms many other improved classification losses.
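A minimal PyTorch sketch of the idea described above, assuming a generic feature-extracting backbone: a BatchNorm layer with a learnable (adjustable) scale is inserted between the final linear layer and the Softmax + cross-entropy loss; the class name, layer sizes, and placement details are illustrative, not the authors' exact architecture.

```python
# A minimal sketch, assuming a generic backbone: batch normalization with a
# learnable (adjustable) scale is placed after the network output, before
# Softmax + cross-entropy. Names and dimensions are placeholders.
import torch.nn as nn

class ClassifierWithOutputBN(nn.Module):
    def __init__(self, backbone: nn.Module, feat_dim: int, num_classes: int):
        super().__init__()
        self.backbone = backbone
        self.fc = nn.Linear(feat_dim, num_classes)
        # affine=True keeps a learnable per-class scale (gamma) and shift (beta)
        self.out_bn = nn.BatchNorm1d(num_classes, affine=True)

    def forward(self, x):
        logits = self.fc(self.backbone(x))
        return self.out_bn(logits)            # normalized logits fed to the loss

# nn.CrossEntropyLoss applies log-softmax internally, so the normalized logits
# are passed to it directly during training.
criterion = nn.CrossEntropyLoss()
```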


2020 ◽  
Vol 34 (05) ◽  
pp. 8560-8567
Author(s):  
Tong Niu ◽  
Mohit Bansal

Many sequence-to-sequence dialogue models tend to generate safe, uninformative responses. There have been various useful efforts to eliminate them. However, these approaches either improve decoding algorithms during inference, rely on hand-crafted features, or employ complex models. In our work, we build dialogue models that are dynamically aware of which utterances or tokens are dull, without any feature engineering. Specifically, we start with a simple yet effective automatic metric, AvgOut, which calculates the average output probability distribution over all time steps on the decoder side during training. This metric directly estimates which tokens are more likely to be generated, making it a faithful evaluation of model diversity (i.e., for diverse models, the token probabilities should be more evenly distributed rather than peaked at a few dull tokens). We then leverage this novel metric to propose three models that promote diversity without losing relevance. The first model, MinAvgOut, directly maximizes the diversity score through the output distributions of each batch; the second model, Label Fine-Tuning (LFT), prepends to the source sequence a label continuously scaled by the diversity score to control the diversity level; the third model, RL, adopts reinforcement learning and treats the diversity score as a reward signal. Moreover, we experiment with a hybrid model that combines the loss terms of MinAvgOut and RL. All four models outperform their base LSTM-RNN model on both diversity and relevance by a large margin, and are comparable to or better than competitive baselines (also verified via human evaluation). Finally, our approaches are orthogonal to the base model, making them applicable as an add-on to other emerging, better dialogue models in the future.
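A minimal sketch of the AvgOut computation as described above, assuming decoder logits of shape (batch, time_steps, vocab_size); the averaging over the batch and the example diversity score are assumptions for illustration.

```python
# A minimal sketch of the AvgOut metric: average the decoder's per-step output
# probability distributions (here also averaged over the batch). A distribution
# peaked on a few dull tokens then signals low diversity.
import torch
import torch.nn.functional as F

def avg_out(decoder_logits: torch.Tensor) -> torch.Tensor:
    """decoder_logits: (batch, time_steps, vocab_size) raw decoder outputs."""
    probs = F.softmax(decoder_logits, dim=-1)   # per-step output distributions
    return probs.mean(dim=(0, 1))               # (vocab_size,) average distribution

# A diversity score could then penalize mass on known dull tokens, e.g.
#   diversity = 1 - avg_out(logits)[dull_token_ids].sum()
```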


2020 ◽  
Vol 10 (1) ◽  
Author(s):  
Kamaludin Dingle ◽  
Guillermo Valle Pérez ◽  
Ard A. Louis
