Evaluating the Robustness of Defense Mechanisms based on AutoEncoder Reconstructions against Carlini-Wagner Adversarial Attacks

2020 ◽  
Vol 1 ◽  
pp. 6
Author(s):  
Petru Hlihor ◽  
Riccardo Volpi ◽  
Luigi Malagò

Adversarial examples represent a serious problem affecting the security of machine learning systems. In this paper we focus on a defense mechanism based on reconstructing images with an autoencoder before classification. We experiment with several types of autoencoders and evaluate the impact of strategies such as injecting noise into the input during training and into the latent space at inference time. We test the models on adversarial examples generated with the Carlini-Wagner attack, in a white-box scenario, on the stacked system composed of the autoencoder and the classifier.
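The reconstruction-before-classification idea above can be sketched with a toy linear stand-in for the autoencoder. This is a minimal illustration, not the paper's setup: the "autoencoder" is a rank-2 SVD projection, the "classifier" a nearest-centroid rule, and the perturbation an off-manifold direction rather than a Carlini-Wagner attack. All names and parameters here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two Gaussian classes in 10-D (stand-ins for image classes).
X0 = rng.normal(loc=-1.0, scale=0.5, size=(50, 10))
X1 = rng.normal(loc=+1.0, scale=0.5, size=(50, 10))
X = np.vstack([X0, X1])
mean = X.mean(axis=0)

# "Autoencoder": a rank-2 linear projection fit by SVD, a stand-in for a
# trained model that keeps only the main directions of the data manifold.
_, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
W = Vt[:2]                                  # encoder weights (2 x 10)

def reconstruct(x):
    z = (x - mean) @ W.T                    # encode into the latent space
    return z @ W + mean                     # decode back to the input space

# Nearest-centroid "classifier", stacked on top of the reconstruction.
c0, c1 = X0.mean(axis=0), X1.mean(axis=0)
def classify(x):
    return int(np.linalg.norm(x - c1) < np.linalg.norm(x - c0))
def defended_classify(x):
    return classify(reconstruct(x))

# An off-manifold perturbation: a direction orthogonal to the kept subspace.
v = rng.normal(size=10)
v -= (v @ W.T) @ W                          # remove in-subspace components
x = X0[0]
x_adv = x + 5.0 * v / np.linalg.norm(v)     # large off-manifold perturbation

# The reconstruction removes the off-manifold component, so the defended
# prediction on x_adv matches the defended prediction on the clean x.
print(defended_classify(x_adv) == defended_classify(x))
```

The point of the sketch is structural: any perturbation component outside the learned subspace is discarded by the encode/decode round trip, which is why the stacked system must be attacked as a whole (the white-box scenario the paper evaluates).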

2019 ◽  
Vol 95 (8) ◽  
Author(s):  
Felix Wesener ◽  
Britta Tietjen

Organisms are prone to different stressors and have evolved various defense mechanisms. One such defense mechanism is priming, where a mild preceding stress prepares the organism for an improved stress response. This improved response can vary strongly, and primed organisms have been found to respond with one of three strategies: a shorter delay to stress, a faster buildup of their response, or a more intense response. However, a universal comparative assessment of which response is superior under a given environmental setting is missing. We investigate the benefits of the three improved responses for microorganisms with an ordinary differential equation model, simulating the impact of an external stress on a microbial population that is either naïve or primed. We systematically assess the resulting population performance for different costs associated with priming and stress conditions. Our results show that, independent of stress type and priming costs, the stronger primed response is most beneficial for longer stress phases, while the faster and earlier responses increase population performance and survival probability under short stresses. Competition increases priming benefits and promotes the early stress response. This dependence on the ecological context highlights the importance of including primed response strategies in microbial stress ecology.
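A minimal forward-Euler sketch can illustrate the three primed strategies the abstract compares (earlier, faster, stronger). This is not the authors' model; the growth law, stress term, and all parameter values below are hypothetical illustrations of the qualitative setup: logistic growth minus a stress mortality that a delayed, ramping defense attenuates.

```python
import numpy as np

def simulate(delay, rate, intensity, t_end=50.0, dt=0.01):
    """Euler sketch of a population under a sustained stress pulse.

    dN/dt = r*N*(1 - N/K) - s(t)*(1 - d(t))*N, where the defense level
    d ramps up at speed `rate` toward `intensity` after `delay`.
    """
    r, K = 0.5, 1.0          # growth rate, carrying capacity (hypothetical)
    t_stress = 5.0           # stress onset
    s_max = 1.0              # stress-induced mortality rate
    N, d = 0.1, 0.0
    for step in range(int(t_end / dt)):
        t = step * dt
        stress = s_max if t >= t_stress else 0.0
        if t >= t_stress + delay:        # defense builds up after its delay
            d = min(intensity, d + rate * dt)
        dN = r * N * (1.0 - N / K) - stress * (1.0 - d) * N
        N = max(N + dN * dt, 0.0)
    return N

# Naive baseline vs. the three primed strategies (all numbers illustrative).
naive    = simulate(delay=3.0, rate=0.2, intensity=0.6)
earlier  = simulate(delay=1.0, rate=0.2, intensity=0.6)
faster   = simulate(delay=3.0, rate=0.6, intensity=0.6)
stronger = simulate(delay=3.0, rate=0.2, intensity=0.9)
print(round(naive, 3), round(earlier, 3), round(faster, 3), round(stronger, 3))
```

With this long, sustained stress phase the stronger response ends with the largest population, consistent with the abstract's finding that intensity pays off for longer stresses while timing matters most for short ones.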


2021 ◽  
Author(s):  
Tim Rudner ◽  
Helen Toner

This paper is the second installment in a series on “AI safety,” an area of machine learning research that aims to identify causes of unintended behavior in machine learning systems and develop tools to ensure these systems work safely and reliably. The first paper in the series, “Key Concepts in AI Safety: An Overview,” described three categories of AI safety issues: problems of robustness, assurance, and specification. This paper introduces adversarial examples, a major challenge to robustness in modern machine learning systems.


2021 ◽  
Author(s):  
Cor Steging ◽  
Silja Renooij ◽  
Bart Verheij

The justification of an algorithm’s outcomes is important in many domains, and in particular in the law. However, previous research has shown that machine learning systems can make the right decisions for the wrong reasons: despite high accuracies, not all of the conditions that define the domain of the training data are learned. In this study, we investigate what the system does learn, using state-of-the-art explainable AI techniques. With the use of SHAP and LIME, we are able to show which features impact the decision-making process and how the impact changes with different distributions of the training data. However, our results also show that even high accuracy and good relevant feature detection are no guarantee of a sound rationale. Hence, these state-of-the-art explainable AI techniques cannot be used to fully expose unsound rationales, further advocating the need for a separate method for rationale evaluation.


2021 ◽  
Vol 12 ◽  
Author(s):  
Zahra Iqbal ◽  
Mohammed Shariq Iqbal ◽  
Abeer Hashem ◽  
Elsayed Fathi Abd_Allah ◽  
Mohammad Israil Ansari

Plants are subjected to a plethora of environmental cues that cause extreme losses to crop productivity. Due to fluctuating environmental conditions, plants encounter difficulties in attaining full genetic potential for growth and reproduction. One such environmental condition is the recurrent attack on plants by herbivores and microbial pathogens. To surmount such attacks, plants have developed a complex array of defense mechanisms. The defense mechanism can be either preformed, where toxic secondary metabolites are stored; or can be inducible, where defense is activated upon detection of an attack. Plants sense biotic stress conditions, activate the regulatory or transcriptional machinery, and eventually generate an appropriate response. Plant defense against pathogen attack is well understood, but the interplay and impact of different signals to generate defense responses against biotic stress still remain elusive. The impact of light and dark signals on biotic stress response is one such area to comprehend. Light and dark alterations not only regulate defense mechanisms impacting plant development and biochemistry but also bestow resistance against invading pathogens. The interaction between plant defense and dark/light environment activates a signaling cascade. This signaling cascade acts as a connecting link between perception of biotic stress, dark/light environment, and generation of an appropriate physiological or biochemical response. The present review highlights molecular responses arising from dark/light fluctuations vis-à-vis elicitation of defense mechanisms in plants.


Author(s):  
Stephanie E. August ◽  
Audrey Tsaima

The role of artificial intelligence in US education is expanding. As education moves toward providing customized learning paths, the use of artificial intelligence (AI) and machine learning (ML) algorithms in learning systems increases. This can be viewed as growing metaphorical exoskeletons for instructors, enabling them to provide a higher level of guidance, feedback, and autonomy to learners. In turn, the instructor gains time to sense student needs and support authentic learning experiences that go beyond what AI and ML can provide. Applications of AI-based education technology support learning through automated tutoring, personalized learning, assessment of student knowledge, and automation of tasks normally performed by the instructor. This technology raises questions about how it is best used, what data provide evidence of the impact of AI and ML on learning, and future directions in interactive learning systems. Exploration of the use of AI and ML for both co-curricular and independent learning in content presentation and instruction; interactions, communications, and discussions; learner activities; assessment and evaluation; and co-curricular opportunities provides guidance for future research.


Author(s):  
Rebeen Rebwar Hama Amin ◽  
Dana Hassan ◽  
Masnida Hussin

DNS reflection/amplification attacks are a type of Distributed Denial of Service (DDoS) attack that takes advantage of vulnerabilities in the Domain Name System (DNS) and uses it as an attacking tool. This type of attack can quickly deplete the resources (i.e., computational and bandwidth) of the targeted system. Many defense mechanisms have been proposed to mitigate the impact of this type of attack. However, these defense mechanisms are centralized and cannot deal with a distributed attack. They also have a single point of deployment, which leads to a lack of computational resources to handle an attack of large magnitude. In this work, we present a new distributed-based defense mechanism (DDM) to counter reflection/amplification attacks. We measured the CPU counters of the machines on which we deployed our defense mechanism, which showed a 19.9% computational improvement. On top of that, our defense mechanism showed that it can protect the attack path from exhaustion during reflection/amplification attacks, without putting any significant traffic load on the network, by preventing every spoofed request from receiving a response.
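The core anti-spoofing idea, answering only requesters who can prove they receive traffic at their claimed source address, can be sketched with a DNS-cookie-style challenge. This is a generic illustration, not the paper's DDM: the secret, message formats, and handler below are hypothetical.

```python
import hmac
import hashlib

SECRET = b"per-node-secret"          # hypothetical per-resolver secret

def make_cookie(client_ip):
    """Cookie bound to the source address via an HMAC (DNS-cookie-style)."""
    return hmac.new(SECRET, client_ip.encode(), hashlib.sha256).hexdigest()[:16]

def handle_query(src_ip, cookie=None):
    """Drop spoofed traffic: a real client can echo the cookie that was
    sent to src_ip; an attacker spoofing a victim's address never sees it."""
    if cookie is None:
        # Small fixed-size challenge reply: no amplification payoff.
        return "challenge:" + make_cookie(src_ip)
    if hmac.compare_digest(cookie, make_cookie(src_ip)):
        return "answer"
    return "drop"

# Legitimate client: receives the cookie at its real address and echoes it.
c = handle_query("198.51.100.7", None).split(":", 1)[1]
print(handle_query("198.51.100.7", c))          # answered
# Spoofed request: the attacker must guess the victim's cookie.
print(handle_query("203.0.113.9", "0" * 16))    # dropped
```

Because the first reply is small and stateless, a reflector protected this way gives a spoofing attacker no amplification, which is the property the abstract's mechanism enforces along the attack path.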


E-Management ◽  
2021 ◽  
Vol 4 (2) ◽  
pp. 28-36
Author(s):  
A. A. Dashkov ◽  
Yu. O. Nesterova

In the 21st century, “trust” has become a category that manifests itself in a variety of ways and affects many areas of human activity, including the economy and business. With the development of information, communication, and end-to-end technologies, this influence is becoming more and more noticeable. A special place in digital technologies is occupied by human trust when interacting with artificial intelligence and machine learning systems. In this case, trust becomes a potential stumbling block for the further development of interaction between artificial intelligence and humans. Trust plays a key role in ensuring recognition in society and the continuous progress and development of artificial intelligence. The article considers human trust in artificial intelligence and machine learning systems from different sides. The main objectives of the paper are to structure existing research on this subject and to identify the most important ways to create trust among potential consumers of artificial intelligence products. The article investigates attitudes toward artificial intelligence in different countries as well as the need for trust among users of artificial intelligence systems, and analyses the impact of distrust on business. The authors identify the factors that are crucial in the formation of an initial level of trust and the development of continuous trust in artificial intelligence.


Entropy ◽  
2020 ◽  
Vol 22 (9) ◽  
pp. 999 ◽  
Author(s):  
Ian Fischer

Much of the field of Machine Learning exhibits a prominent set of failure modes, including vulnerability to adversarial examples, poor out-of-distribution (OoD) detection, miscalibration, and willingness to memorize random labelings of datasets. We characterize these as failures of robust generalization, which extends the traditional measure of generalization as accuracy or related metrics on a held-out set. We hypothesize that these failures to robustly generalize are due to the learning systems retaining too much information about the training data. To test this hypothesis, we propose the Minimum Necessary Information (MNI) criterion for evaluating the quality of a model. In order to train models that perform well with respect to the MNI criterion, we present a new objective function, the Conditional Entropy Bottleneck (CEB), which is closely related to the Information Bottleneck (IB). We experimentally test our hypothesis by comparing the performance of CEB models with deterministic models and Variational Information Bottleneck (VIB) models on a variety of different datasets and robustness challenges. We find strong empirical evidence supporting our hypothesis that MNI models improve on these problems of robust generalization.
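The per-example variational CEB objective can be written down compactly under diagonal-Gaussian assumptions. The sketch below is a simplified illustration, not the paper's exact parameterization: a closed-form KL between the encoder distribution e(z|x) and the backward distribution b(z|y), weighted by a hypothetical γ, minus the decoder log-likelihood log c(y|z).

```python
import numpy as np

def gaussian_kl(mu_e, var_e, mu_b, var_b):
    """KL(N(mu_e, var_e) || N(mu_b, var_b)) for diagonal Gaussians."""
    return 0.5 * np.sum(
        np.log(var_b / var_e) + (var_e + (mu_e - mu_b) ** 2) / var_b - 1.0,
        axis=-1,
    )

def ceb_loss(mu_e, var_e, mu_b, var_b, log_py_z, gamma=0.5):
    """Per-example variational CEB objective (sketch): the residual-
    information term KL(e(z|x) || b(z|y)) weighted by gamma, minus the
    log-likelihood of the label under the classifier head c(y|z)."""
    return gamma * gaussian_kl(mu_e, var_e, mu_b, var_b) - log_py_z

# Toy numbers: a 2-D latent, one example (all values hypothetical).
mu_e = np.array([0.4, -0.2]); var_e = np.array([0.5, 0.5])   # encoder e(z|x)
mu_b = np.array([0.3, -0.1]); var_b = np.array([1.0, 1.0])   # backward b(z|y)
loss = ceb_loss(mu_e, var_e, mu_b, var_b, log_py_z=-0.1)
print(float(loss))
```

The compression term vanishes exactly when the encoder already matches the label-conditional marginal, which is the MNI intuition: keep no information about x beyond what y explains.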


AI Magazine ◽  
2014 ◽  
Vol 35 (4) ◽  
pp. 105-120 ◽  
Author(s):  
Saleema Amershi ◽  
Maya Cakmak ◽  
William Bradley Knox ◽  
Todd Kulesza

Intelligent systems that learn interactively from their end-users are quickly becoming widespread. Until recently, this progress has been fueled mostly by advances in machine learning; however, more and more researchers are realizing the importance of studying users of these systems. In this article we promote this approach and demonstrate how it can result in better user experiences and more effective learning systems. We present a number of case studies that characterize the impact of interactivity, demonstrate ways in which some existing systems fail to account for the user, and explore new ways for learning systems to interact with their users. We argue that the design process for interactive machine learning systems should involve users at all stages: explorations that reveal human interaction patterns and inspire novel interaction methods, as well as refinement stages to tune details of the interface and choose among alternatives. After giving a glimpse of the progress that has been made so far, we discuss the challenges that we face in moving the field forward.

