
Author(s): Anibal Pedraza, Oscar Deniz, Gloria Bueno

Abstract
The phenomenon of adversarial examples has become one of the most intriguing topics associated with deep learning. So-called adversarial attacks can fool deep neural networks with imperceptible perturbations. While the effect is striking, it has been suggested that such carefully selected injected noise does not necessarily appear in real-world scenarios. In contrast, some authors have looked for ways to generate adversarial noise in physical scenarios (traffic signs, shirts, etc.), showing that attackers can indeed fool the networks. In this paper we go beyond that and show that adversarial examples also appear in the real world without any attacker or maliciously selected noise involved. We demonstrate this using images from microscopy tasks as well as general object recognition with the well-known ImageNet dataset. We compare these natural adversarial examples with artificially generated ones using distance metrics and image quality metrics, and show that the natural adversarial examples are in fact at a greater distance from the originals than the artificially generated ones.
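The abstract mentions comparing natural and artificial adversarial examples via distance metrics and image quality metrics. The paper does not specify which metrics it uses, so the sketch below is a hypothetical illustration of two common choices for such a comparison: the L2 (Euclidean) distance between an original and a perturbed image, and the peak signal-to-noise ratio (PSNR), a standard image quality metric.

```python
import numpy as np

def l2_distance(original, perturbed):
    """Euclidean distance between two images, treated as flat vectors."""
    diff = original.astype(np.float64) - perturbed.astype(np.float64)
    return float(np.linalg.norm(diff))

def psnr(original, perturbed, max_val=255.0):
    """Peak signal-to-noise ratio in dB; higher means more similar images."""
    mse = np.mean((original.astype(np.float64) - perturbed.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return float(10.0 * np.log10(max_val ** 2 / mse))

# Toy example: a small random "image" plus a mild additive perturbation,
# standing in for an original / adversarial pair.
rng = np.random.default_rng(0)
original = rng.integers(0, 256, size=(8, 8)).astype(np.float64)
perturbed = np.clip(original + rng.normal(0.0, 2.0, size=(8, 8)), 0.0, 255.0)

print(l2_distance(original, perturbed))
print(psnr(original, perturbed))
```

Under the paper's finding, a natural adversarial example would show a larger L2 distance (and typically a lower PSNR) relative to its original than an artificially crafted one.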


