behavioral cloning
Recently Published Documents


TOTAL DOCUMENTS: 48 (five years: 30)
H-INDEX: 5 (five years: 1)

2021 ◽ pp. 185-198 ◽ Author(s): Uppala Sumanth, Narinder Singh Punn, Sanjay Kumar Sonbhadra, Sonali Agarwal

2021 ◽ Vol 10 (2) ◽ Author(s): Vihan Karnala, Marianne Campbell

The purpose of this study is to understand how model architecture affects the efficacy of adversarial examples against machine learning systems used in self-driving applications. Prior research shows how to create adversarial examples and train models against them in many use cases; however, there is no established understanding of how a machine learning model's architecture affects the efficacy of adversarial examples. Data were collected in an experimental setting involving end-to-end self-driving models trained through behavioral cloning. Three model types were tested, based on popular frameworks for image-based machine learning algorithms. Results showed a statistically significant difference in the impact of adversarial examples among these models, meaning that certain model types and architectures are more susceptible to attack. The conclusion can therefore be made that model architecture does impact the efficacy of adversarial examples, although this finding is potentially limited to closed-loop, end-to-end systems in which the algorithm makes the entire decision. Future research should investigate which specific structures within models cause increased susceptibility to adversarial attacks.
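To make the kind of attack discussed above concrete, here is a minimal, hedged sketch (not the study's actual code or models): a single targeted FGSM-style perturbation against a toy linear steering-angle regressor standing in for an end-to-end driving model. All names (`predict`, `targeted_fgsm`, the weight and image arrays) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

w = rng.normal(scale=0.1, size=64)    # toy "model" weights: flattened pixels -> steering angle
x = rng.uniform(0.05, 0.95, size=64)  # clean flattened "camera image"


def predict(weights, image):
    """Stand-in for an end-to-end driving model: image -> steering angle."""
    return float(weights @ image)


def targeted_fgsm(weights, image, y_target, eps):
    """One FGSM step nudging the prediction toward y_target.

    For loss L = (w.x - y_target)^2, the input gradient is
    dL/dx = 2 * (w.x - y_target) * w; stepping against its sign
    decreases the loss, i.e. moves the prediction toward the target,
    while the perturbation stays within +/- eps per pixel.
    """
    grad = 2.0 * (predict(weights, image) - y_target) * weights
    return np.clip(image - eps * np.sign(grad), 0.0, 1.0)


y_clean = predict(w, x)
y_target = y_clean + 1.0  # e.g. force a harder right steer
x_adv = targeted_fgsm(w, x, y_target, eps=0.01)
y_adv = predict(w, x_adv)
```

Even this linear toy shows the core property the study measures: a small, bounded input change moves the model's output toward an attacker-chosen value, and how far it moves depends on the model's structure (here, on the magnitudes of the weights).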

