Parameter Discrepancy Hypothesis: Adversarial Attack for Graph Data

Author(s): Yiteng Wu, Wei Liu, Xinbang Hu, Xuqiao Yu


2020 · Vol 16 (1)
Author(s): Mona Lundin

This study explores the use of a new protocol in hypertension care, in which continuous patient-generated data reported through digital technology are presented in graphical form and discussed in follow-up consultations with nurses. The protocol is part of an infrastructure design project in which patients and medical professionals are co-designers. The study used interaction analysis, which made possible a detailed in situ examination of local variations in how nurses relate to the protocol. The findings show three distinct engagements: (1) teasing out an average blood pressure, (2) working around the protocol and graph data and (3) delivering an analysis. The graphical representations structured the consultations to a great extent, and nurses mostly referred to graphs showing blood pressure values, a measurement central to the medical discourse of hypertension. However, analysis of the data alone was not sufficient to engage patients: nurses' invisible work of inclusion, eliciting patients' narratives, played an important role here. A conclusion of the study is that nurses and patients both need to be more thoroughly introduced to using graph-based protocols if more productive consultations are to be established.


Sensors · 2021 · Vol 21 (11) · pp. 3922
Author(s): Sheeba Lal, Saeed Ur Rehman, Jamal Hussain Shah, Talha Meraj, Hafiz Tayyab Rauf, ...

Given the rapid growth of artificial intelligence (AI) and deep learning (DL) approaches, the security and robustness of deployed algorithms need to be guaranteed. The susceptibility of DL algorithms to adversarial examples is widely acknowledged: artificially crafted inputs cause DL models to misclassify instances that humans would consider benign, and such attacks have been demonstrated in practical, physical scenarios. Adversarial attacks and defenses, and the reliability of machine learning more broadly, have therefore drawn growing interest and have been a hot research topic in recent years. We introduce a framework that defends against the adversarial speckle-noise attack by combining adversarial training with a feature-fusion strategy that preserves correct classification labels. We evaluate and analyze adversarial attacks and defenses on retinal fundus images for the Diabetic Retinopathy recognition problem. On these images, which are prone to adversarial attacks, the proposed defensive model achieves 99% accuracy, demonstrating its robustness.

