Manipulating the Byzantine: Optimizing Model Poisoning Attacks and Defenses for Federated Learning

Author(s):  
Virat Shejwalkar ◽  
Amir Houmansadr

1968 ◽
Vol 8 (2) ◽  
pp. 240-263
Author(s):  
Azizur Rahman Khan

In the present decade there has been a great proliferation of multisectoral models for planning. Part of the incentive has certainly been the potential of their application in formulating actual plans. By now there have been so many different types of multisectoral models that it is useful to attempt some kind of classification according to whether or not they embody certain well-known features. The advantage of such a classification is that one gets a general idea about the structure of a model simply by knowing where it belongs in the classification. One broad principle of classification is based on whether the model simply provides a consistent plan or whether it also satisfies some criterion of optimality. A multisectoral consistency model provides an allocation of scarce resources (e.g., investment and foreign exchange) such that the sectoral output levels are consistent with some given consumption or income target, consistency in this context meaning that the supply of each sector's output is matched by the demand generated by intersectoral and final use at base-year relative prices. To the extent that the targets are flexible, there may be many such feasible plans. An optimizing model finds the "best" possible allocation of resources among sectors, the "best" being understood in the sense of maximizing a given preference function subject to the constraints that ensure that the plan is also feasible.
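The consistency/optimality distinction can be made concrete with a small numerical sketch. The Python snippet below (a hypothetical two-sector economy; the coefficient matrix, final-demand target, resource coefficients, and preference weights are illustrative assumptions, not taken from the article) first solves a Leontief consistency model, where gross outputs x satisfy x = Ax + f for a fixed final-demand target f, and then an optimizing variant that maximizes a linear preference function over final demand subject to the same intersectoral balance plus a resource constraint.

```python
# Minimal sketch of consistency vs. optimizing multisectoral models.
# All numbers below are hypothetical illustrations, not from the article.
import numpy as np
from scipy.optimize import linprog

# Input-output (Leontief) coefficients: A[i, j] = units of sector i's
# output needed to produce one unit of sector j's output.
A = np.array([[0.2, 0.3],
              [0.4, 0.1]])

# --- Consistency model ---------------------------------------------------
# Given a final-demand target f, find gross outputs x with x = A x + f,
# i.e. each sector's supply matches intersectoral plus final use.
f = np.array([100.0, 50.0])
x_consistent = np.linalg.solve(np.eye(2) - A, f)
print("Consistent gross outputs:", x_consistent)

# --- Optimizing model ----------------------------------------------------
# Instead of fixing f, maximize a preference function w @ f over final
# demand, subject to the same balance x = A x + f and a cap on total
# resource use r @ x <= R.  linprog minimizes, so the objective is -w.
w = np.array([1.0, 1.5])        # planner's preference weights (assumed)
r = np.array([0.5, 0.8])        # resource use per unit of gross output
R = 150.0                       # total resource availability

# Decision variables: x (2 gross outputs) followed by f (2 final demands).
c = np.concatenate([np.zeros(2), -w])                  # maximize w @ f
A_eq = np.hstack([np.eye(2) - A, -np.eye(2)])          # (I - A) x - f = 0
b_eq = np.zeros(2)
A_ub = np.hstack([r.reshape(1, 2), np.zeros((1, 2))])  # r @ x <= R
b_ub = np.array([R])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * 4)
x_opt, f_opt = res.x[:2], res.x[2:]
print("Optimal gross outputs:", x_opt)
print("Optimal final demands:", f_opt)
```

With flexible targets the consistency system admits many feasible plans; the linear program picks one of them by ranking feasible plans with the preference weights w.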


Author(s):  
Yao Deng ◽  
Tiehua Zhang ◽  
Guannan Lou ◽  
Xi Zheng ◽  
Jiong Jin ◽  
...  

Sensors ◽  
2021 ◽  
Vol 21 (11) ◽  
pp. 3922
Author(s):  
Sheeba Lal ◽  
Saeed Ur Rehman ◽  
Jamal Hussain Shah ◽  
Talha Meraj ◽  
Hafiz Tayyab Rauf ◽  
...  

Due to the rapid growth of artificial intelligence (AI) and deep learning (DL) approaches, the security and robustness of deployed algorithms need to be guaranteed. The susceptibility of DL algorithms to adversarial examples has been widely acknowledged: artificially crafted inputs that humans consider benign can cause DL models to produce incorrect predictions. Such threats have been demonstrated in practical, real-world physical scenarios. Thus, adversarial attacks and defenses, and the reliability of machine learning more broadly, have drawn growing interest and have become a hot topic of research in recent years. We introduce a framework that defends against the adversarial speckle-noise attack through adversarial training and a feature fusion strategy, preserving classification with correct labelling. We evaluate and analyze adversarial attacks and defenses on retinal fundus images for the Diabetic Retinopathy recognition problem, which is considered a state-of-the-art endeavor. The proposed defensive model achieves 99% accuracy on retinal fundus images, which are prone to adversarial attacks, demonstrating its robustness.
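To illustrate the general technique the abstract describes, here is a minimal sketch of a speckle-noise perturbation and an adversarial-training step that mixes clean and attacked images in each batch. The `speckle_attack` and `adversarial_training_step` helpers, the noise level `sigma`, and the PyTorch setup are illustrative assumptions; the paper's exact attack parameters and its feature-fusion architecture are not reproduced here.

```python
# Sketch of speckle-noise adversarial examples plus adversarial training.
# Helper names, sigma, and the training setup are hypothetical assumptions.
import torch
import torch.nn as nn

def speckle_attack(images: torch.Tensor, sigma: float = 0.1) -> torch.Tensor:
    """Apply multiplicative (speckle) Gaussian noise: x' = x + x * n."""
    noise = torch.randn_like(images) * sigma
    return torch.clamp(images + images * noise, 0.0, 1.0)

def adversarial_training_step(model: nn.Module,
                              images: torch.Tensor,
                              labels: torch.Tensor,
                              optimizer: torch.optim.Optimizer,
                              sigma: float = 0.1) -> float:
    """One step on a mixed batch of clean and speckle-noised images,
    so the model learns to keep the correct label under the attack."""
    criterion = nn.CrossEntropyLoss()
    attacked = speckle_attack(images, sigma)
    batch = torch.cat([images, attacked], dim=0)
    targets = torch.cat([labels, labels], dim=0)

    optimizer.zero_grad()
    loss = criterion(model(batch), targets)
    loss.backward()
    optimizer.step()
    return loss.item()
```

In a full pipeline this step would be wrapped in the usual epoch loop over a fundus-image dataset; the feature-fusion stage described in the abstract would sit inside the model itself.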


2010 ◽  
Vol 20 (5) ◽  
pp. 618-629 ◽  
Author(s):  
Cristian Grossmann ◽  
Guido Ströhlein ◽  
Manfred Morari ◽  
Massimo Morbidelli

2013 ◽  
Vol 96 (4) ◽  
pp. 15-28
Author(s):  
Koki Matsumura ◽  
Masaru Kawamoto
