Objective Metrics and Gradient Descent Algorithms for Adversarial Examples in Machine Learning

Author(s):  
Uyeong Jang ◽  
Xi Wu ◽  
Somesh Jha
2021 ◽  
Vol 7 (3) ◽  
pp. 41
Author(s):  
Emre Baspinar ◽  
Luca Calatroni ◽  
Valentina Franceschi ◽  
Dario Prandi

We consider Wilson-Cowan-type models for the mathematical description of orientation-dependent Poggendorff-like illusions. Our modelling improves on two previously proposed cortical-inspired approaches by embedding the sub-Riemannian heat kernel into the neuronal interaction term, in agreement with the intrinsically anisotropic functional architecture of V1, which is based on both local and lateral connections. For the numerical realisation of both models, we consider standard gradient descent algorithms combined with Fourier-based approaches for the efficient computation of the sub-Laplacian evolution. Our numerical results show that the sub-Riemannian kernel reproduces visual misperceptions and inpainting-type biases more strongly than the previous approaches.


2021 ◽  
Vol 1 (2) ◽  
pp. 252-273
Author(s):  
Pavlos Papadopoulos ◽  
Oliver Thornewill von Essen ◽  
Nikolaos Pitropakis ◽  
Christos Chrysoulas ◽  
Alexios Mylonas ◽  
...  

As the internet continues to be populated with new devices and emerging technologies, the attack surface grows exponentially. Technology is shifting towards a profit-driven Internet of Things market where security is an afterthought. Traditional defence approaches are no longer sufficient to detect both known and unknown attacks with high accuracy. Machine learning intrusion detection systems have proven successful at identifying unknown attacks with high precision. Nevertheless, machine learning models are themselves vulnerable to attack. Adversarial examples can be used to evaluate the robustness of a model before it is deployed, and they are critical to building models that remain robust in an adversarial environment. Our work evaluates the robustness of both traditional machine learning and deep learning models using the Bot-IoT dataset. Our methodology comprised two main approaches: label poisoning, which causes the model to misclassify, and the fast gradient sign method, which crafts perturbations that evade detection. The experiments demonstrated that an attacker could manipulate or circumvent detection with significant probability.
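The fast gradient sign method mentioned above can be sketched in a few lines. This is a minimal illustration on a hand-rolled logistic-regression classifier, not the paper's Bot-IoT models; the function names and the epsilon value are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad_loss_wrt_x(x, y, w, b):
    # Gradient of the binary cross-entropy loss with respect to the
    # *input* features (not the weights), for a logistic model.
    p = sigmoid(w @ x + b)
    return (p - y) * w

def fgsm_perturb(x, y, w, b, epsilon=0.1):
    # FGSM: step each feature by epsilon in the direction that
    # increases the loss: x_adv = x + epsilon * sign(dL/dx).
    return x + epsilon * np.sign(grad_loss_wrt_x(x, y, w, b))
```

Each feature moves by exactly epsilon, so the perturbation is bounded in the L-infinity norm while the model's loss on the true label increases, which is what lets crafted traffic records slip past a detector.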


2021 ◽  
Vol 12 (4) ◽  
pp. 185
Author(s):  
Wujian Yang ◽  
Jianghao Dong ◽  
Yuke Ren

Hydrogen energy vehicles are coming into increasingly wide use. To ensure the safety of hydrogenation stations, research into the detection of hydrogen leaks is required. Offline analysis of the collected data with machine learning is carried out using Spark SQL and Spark MLlib. In this study, to determine the safety status of a hydrogen refueling station, we performed calculation and analysis with multiple algorithm models: a multi-source data association prediction algorithm, a stochastic gradient descent algorithm, a deep neural network optimization algorithm, and others. We successfully analyzed the data, including the potential relationships, internal relationships, and operating laws among the data, to detect the safety statuses of hydrogen refueling stations.
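A gradient descent step of the stochastic kind listed above can be sketched as follows. This is a generic least-squares example in plain NumPy, not the paper's Spark MLlib pipeline; all names and hyperparameters are hypothetical.

```python
import numpy as np

def sgd_linear(X, y, lr=0.01, epochs=100, seed=0):
    """Fit a linear model y ~ X @ w by stochastic gradient descent.

    One randomly ordered sample at a time, as a stand-in for a
    distributed MLlib fit over sensor readings.
    """
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            # Gradient of 0.5 * (x_i . w - y_i)^2 with respect to w.
            w -= lr * (X[i] @ w - y[i]) * X[i]
    return w
```

Updating on one sample at a time keeps memory per step constant, which is why the stochastic variant scales to the streaming sensor data a refueling station produces.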

