Arms Race in Adversarial Malware Detection: A Survey

2023 ◽  
Vol 55 (1) ◽  
pp. 1-35
Author(s):  
Deqiang Li ◽  
Qianmu Li ◽  
Yanfang (Fanny) Ye ◽  
Shouhuai Xu

Malicious software (malware) is a major cyber threat that has to be tackled with Machine Learning (ML) techniques because millions of new malware examples are injected into cyberspace on a daily basis. However, ML is vulnerable to attacks known as adversarial examples. In this article, we survey and systematize the field of Adversarial Malware Detection (AMD) through the lens of a unified conceptual framework of assumptions, attacks, defenses, and security properties. This not only leads us to map attacks and defenses to partial order structures, but also allows us to clearly describe the attack-defense arms race in the AMD context. We draw a number of insights, including: knowing the defender’s feature set is critical to the success of transfer attacks; the effectiveness of practical evasion attacks largely depends on the attacker’s freedom in conducting manipulations in the problem space; knowing the attacker’s manipulation set is critical to the defender’s success; and the effectiveness of adversarial training depends on the defender’s capability in identifying the most powerful attack. We also discuss a number of future research directions.
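
To make the arms-race dynamic concrete, the following is a minimal sketch of the kind of adversarial-training loop the survey's last insight refers to, assuming a toy linear detector over binary malware features and an attacker restricted to adding features; every name and parameter here is illustrative, not taken from any surveyed system.

```python
# Illustrative sketch only: adversarial training of a linear malware detector
# over binary feature vectors (e.g., API-call indicators). The attacker is
# restricted to *adding* features (0 -> 1 flips), a common functionality-
# preserving assumption; all names and parameters here are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n_features, k_budget, lr, epochs = 100, 5, 0.1, 50

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def strongest_additive_attack(x, w, k):
    """Greedily add the k absent features that most lower the malice score."""
    x_adv = x.copy()
    candidates = np.where(x_adv == 0)[0]
    # Features with the most negative weights push the score toward 'benign'.
    best = candidates[np.argsort(w[candidates])[:k]]
    x_adv[best[w[best] < 0]] = 1
    return x_adv

# Toy data: 1 = malware, 0 = benign.
X = rng.integers(0, 2, size=(200, n_features)).astype(float)
y = rng.integers(0, 2, size=200).astype(float)

w, b = np.zeros(n_features), 0.0
for _ in range(epochs):
    # Inner step: replace each malware sample with its strongest perturbation.
    X_train = np.array([
        strongest_additive_attack(x, w, k_budget) if label == 1 else x
        for x, label in zip(X, y)
    ])
    # Outer step: one gradient-descent update of the logistic loss.
    p = sigmoid(X_train @ w + b)
    w -= lr * (X_train.T @ (p - y) / len(y))
    b -= lr * np.mean(p - y)

print("trained weights (first 5):", np.round(w[:5], 3))
```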

2020 ◽  
Vol 14 ◽  
Author(s):  
Meghna Dhalaria ◽  
Ekta Gandotra

Purpose: This paper provides the basics of Android malware, its evolution, and the tools and techniques for malware analysis. Its main aim is to present a review of the literature on Android malware detection using machine learning and deep learning and to identify the research gaps. It provides the insights obtained through the literature and future research directions which could help researchers to come up with robust and accurate techniques for the classification of Android malware.

Design/Methodology/Approach: This paper provides a review of the basics of Android malware, its evolution timeline, and detection techniques. It covers the tools and techniques for analyzing Android malware statically and dynamically to extract features, and finally for classifying these features using machine learning and deep learning algorithms.

Findings: The number of Android users is expanding very fast due to the popularity of Android devices. As a result, Android users face greater risks due to the exponential growth of Android malware. Ongoing research aims to overcome the constraints of earlier approaches to malware detection. Because evolving malware is complex and sophisticated, earlier approaches such as signature-based and machine-learning-based detection cannot identify it in a timely and accurate manner. The findings from the review show various limitations of earlier techniques: longer detection time, high false positive and false negative rates, low accuracy in detecting sophisticated malware, and limited flexibility.

Originality/value: This paper provides a systematic and comprehensive review of the tools and techniques being employed for the analysis, classification, and identification of Android malicious applications. It includes the timeline of Android malware evolution and the tools and techniques for analyzing these applications statically and dynamically to extract features, which are finally used for detection and classification with machine learning and deep learning algorithms. On the basis of the detailed literature review, various research gaps are listed. The paper also provides future research directions and insights which could help researchers to come up with innovative and robust techniques for detecting and classifying Android malware.
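
As a hedged illustration of the static-analysis pipeline the abstract summarizes (feature extraction followed by machine-learning classification), the sketch below assumes permission lists have already been extracted from each APK (for example with a tool such as Androguard) and trains a random forest on them; the tiny dataset and its labels are invented for illustration.

```python
# Minimal sketch of a static-analysis pipeline: permission lists (assumed
# already extracted from each APK) are binarized and fed to a classic ML
# classifier. The in-line dataset and every label below are purely illustrative.
from sklearn.preprocessing import MultiLabelBinarizer
from sklearn.ensemble import RandomForestClassifier

apps = [
    ["INTERNET", "READ_CONTACTS", "SEND_SMS"],            # hypothetical malware
    ["INTERNET", "SEND_SMS", "RECEIVE_BOOT_COMPLETED"],   # hypothetical malware
    ["INTERNET", "ACCESS_NETWORK_STATE"],                 # hypothetical benign
    ["CAMERA", "ACCESS_FINE_LOCATION"],                   # hypothetical benign
]
labels = [1, 1, 0, 0]  # 1 = malicious, 0 = benign

binarizer = MultiLabelBinarizer()
X = binarizer.fit_transform(apps)          # one binary column per permission

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X, labels)

new_app = binarizer.transform([["INTERNET", "SEND_SMS"]])
print("malware probability:", clf.predict_proba(new_app)[0][1])
```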


Sensors ◽  
2021 ◽  
Vol 21 (11) ◽  
pp. 3922
Author(s):  
Sheeba Lal ◽  
Saeed Ur Rehman ◽  
Jamal Hussain Shah ◽  
Talha Meraj ◽  
Hafiz Tayyab Rauf ◽  
...  

Due to the rapid growth in artificial intelligence (AI) and deep learning (DL) approaches, the security and robustness of the deployed algorithms need to be guaranteed. The susceptibility of DL algorithms to adversarial examples has been widely acknowledged: artificially crafted inputs that humans consider benign lead DL models to produce incorrect predictions. Such adversarial threats also manifest in practical, real-world physical scenarios. Thus, adversarial attacks and defenses, together with the reliability of machine learning, have drawn growing interest and have been a hot research topic in recent years. We introduce a framework that provides a defensive model against the adversarial speckle-noise attack, combining adversarial training with a feature fusion strategy that preserves classification with correct labelling. We evaluate and analyze the adversarial attacks and defenses on retinal fundus images for the Diabetic Retinopathy recognition problem, which is considered a state-of-the-art endeavor. The results obtained on retinal fundus images, which are prone to adversarial attacks, reach 99% accuracy and show that the proposed defensive model is robust.
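
The sketch below is a hedged illustration of a speckle-noise perturbation of the kind the abstract mentions, namely multiplicative Gaussian noise applied to a normalized fundus image; the noise strength and the commented-out model calls are assumptions, not the authors' exact attack or defense.

```python
# Hedged sketch of a speckle-noise perturbation: multiplicative Gaussian noise
# applied to a normalized image, after which one would check whether the
# classifier's prediction flips. `model.predict` and the noise strength are
# placeholders, not the authors' exact method.
import numpy as np

def speckle_attack(image, strength=0.1, seed=0):
    """Return image + image * noise, clipped to the valid [0, 1] range."""
    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, strength, size=image.shape)
    return np.clip(image + image * noise, 0.0, 1.0)

# Toy stand-in for a normalized 224x224 RGB fundus image.
image = np.random.default_rng(1).random((224, 224, 3))
adversarial = speckle_attack(image, strength=0.15)

# In a real evaluation one would compare the model's outputs, e.g.:
# clean_pred = model.predict(image[None, ...])
# adv_pred   = model.predict(adversarial[None, ...])
print("mean absolute perturbation:", float(np.abs(adversarial - image).mean()))
```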


2022 ◽  
Vol 54 (7) ◽  
pp. 1-34
Author(s):  
Sophie Dramé-Maigné ◽  
Maryline Laurent ◽  
Laurent Castillo ◽  
Hervé Ganem

The Internet of Things is taking hold in our everyday life. Regrettably, the security of IoT devices is often overlooked. Among the vast array of security issues plaguing the emerging IoT, we focus on access control, as privacy, trust, and other security properties cannot be achieved without controlled access. This article classifies IoT access control solutions from the literature according to their architecture (e.g., centralized, hierarchical, federated, distributed) and examines the suitability of each architecture for access control purposes. Our analysis concludes that important properties such as auditability and revocation are missing from many proposals, while hierarchical and federated architectures are neglected by the community. Finally, we provide an architecture-based taxonomy and future research directions: a focus on hybrid architectures, usability, flexibility, privacy, and revocation schemes in serverless authorization.
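
As a purely illustrative aside, the toy sketch below shows a centralized authorization check that includes the two properties the survey finds most often missing, revocation and auditability; the policy format and all identifiers are invented, and no surveyed system is implied.

```python
# Toy sketch, not any surveyed system: a centralized IoT authorization server
# that evaluates policies, honours a revocation list, and keeps an audit log
# (the two properties, revocation and auditability, highlighted above).
import time

policies = {("sensor-42", "alice"): {"read"}, ("lock-7", "bob"): {"read", "actuate"}}
revoked_subjects = {"mallory"}
audit_log = []

def authorize(subject: str, device: str, action: str) -> bool:
    allowed = (subject not in revoked_subjects
               and action in policies.get((device, subject), set()))
    audit_log.append({"t": time.time(), "subject": subject,
                      "device": device, "action": action, "allowed": allowed})
    return allowed

print(authorize("alice", "sensor-42", "read"))    # True
print(authorize("mallory", "lock-7", "actuate"))  # False (revoked subject)
print(len(audit_log), "audit entries recorded")
```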


2021 ◽  
Vol 11 (22) ◽  
pp. 10809
Author(s):  
Hugo S. Oliveira ◽  
José J. M. Machado ◽  
João Manuel R. S. Tavares

With the widespread use of surveillance cameras and heightened awareness of public security, object and person Re-Identification (ReID), the task of recognizing objects across non-overlapping camera networks, has attracted particular attention in the computer vision and pattern recognition communities. Given an image or video of an object-of-interest (query), ReID aims to identify that object in images or video feeds taken from different cameras. After many years of great effort, object ReID remains a notably challenging task, mainly because an object’s appearance may change dramatically across camera views due to significant variations in illumination, pose or viewpoint, or cluttered backgrounds. With the advent of Deep Neural Networks (DNN), many network architectures achieving high performance have been proposed. With the aim of identifying the most promising methods for future robust ReID implementations, a review study is presented that focuses mainly on person and multi-object ReID and on auxiliary image-enhancement methods, which are crucial for robust object ReID, while highlighting the limitations of the identified methods. This is a very active field, as evidenced by the dates of the publications found. However, most works use data from very different datasets and genres, which is an obstacle to training and using widely generalized DNN models. Although performance has reached satisfactory levels on particular datasets, a clear trend was observed towards 3D Convolutional Neural Networks (CNN), attention mechanisms to capture object-relevant features, and generative adversarial training to overcome data limitations. There is still room for improvement, namely in using anonymized images from urban scenarios to comply with public privacy legislation. The main challenges that remain in the ReID field, and prospects for future research directions towards ReID in dense urban scenarios, are also discussed.
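
For readers new to the area, the following hedged sketch shows the core ReID matching step that the reviewed methods refine: embedding query and gallery crops with a CNN backbone and ranking the gallery by cosine similarity. The ResNet-50 backbone, tensor shapes, and random inputs are illustrative assumptions, not recommendations from the review.

```python
# Illustrative sketch of the core ReID matching step: embed the query and
# gallery images with a CNN backbone and rank gallery entries by cosine
# similarity. Backbone choice and tensor shapes are assumptions.
import torch
import torchvision.models as models

backbone = models.resnet50(weights=None)
backbone.fc = torch.nn.Identity()          # keep the 2048-d embedding
backbone.eval()

def embed(images: torch.Tensor) -> torch.Tensor:
    with torch.no_grad():
        feats = backbone(images)
    return torch.nn.functional.normalize(feats, dim=1)

query = torch.randn(1, 3, 256, 128)        # one query crop (person-sized)
gallery = torch.randn(10, 3, 256, 128)     # ten gallery crops

similarities = embed(query) @ embed(gallery).T    # row of cosine similarities
ranking = similarities.argsort(descending=True)[0]
print("gallery indices ranked by similarity:", ranking.tolist())
```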


2015 ◽  
Vol 44 (7) ◽  
pp. 2535-2557 ◽  
Author(s):  
YoungAh Park ◽  
Charlotte Fritz ◽  
Steve M. Jex

Given that many employees use e-mail for work communication on a daily basis, this study examined within-person relationships between day-level incivility via work e-mail (cyber incivility) and employee outcomes. Using resource-based theories, we examined two resources (i.e., job control, psychological detachment from work) that may alleviate the effects of cyber incivility on distress. Daily survey data collected over 4 consecutive workdays from 96 employees were analyzed using hierarchical linear modeling. Results showed that on days when employees experienced cyber incivility, they reported higher affective and physical distress at the end of the workday that, in turn, was associated with higher distress the next morning. Job control attenuated the concurrent relationships between cyber incivility and both types of distress at work, while psychological detachment from work in the evening weakened the lagged relationships between end-of-workday distress and distress the following morning. These findings shed light on cyber incivility as a daily stressor and on the importance of resources in both the work and home domains that can help reduce the incivility-related stress process. Theoretical and practical implications, limitations, and future research directions are discussed.


2022 ◽  
pp. 59-73
Author(s):  
Kwok Tai Chui ◽  
Patricia Ordóñez de Pablos ◽  
Miltiadis D. Lytras ◽  
Ryan Wen Liu ◽  
Chien-wen Shen

Software is an essential element of computers in today's digital era. Unfortunately, it faces challenges from various types of malware, which are designed for sabotage, criminal money-making, and information theft. To protect devices from malware, numerous malware detection algorithms have been proposed: earlier ones based on shallow learning and, in recent years, ones based on deep learning. With the availability of big data for model training and of affordable, high-performance computing services, deep learning has demonstrated its superiority in many smart city applications in terms of accuracy, error rate, and other metrics. This chapter conducts a systematic review of the latest developments in deep learning algorithms for malware detection. Some future research directions are suggested for further exploration.
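
To make the contrast with shallow learning concrete, here is a minimal, hypothetical sketch of one deep-learning approach of the kind such reviews cover: a small 1-D convolutional network classifying executables from their raw byte sequences, in the spirit of MalConv-style models. Layer sizes, the input length, and the fake batch are arbitrary illustrative choices.

```python
# Minimal sketch: a 1-D convolutional network over raw byte sequences
# (MalConv-style in spirit). All hyperparameters are illustrative.
import torch
import torch.nn as nn

class ByteCNN(nn.Module):
    def __init__(self, emb_dim=8):
        super().__init__()
        self.embed = nn.Embedding(257, emb_dim)        # 256 byte values + padding
        self.conv = nn.Conv1d(emb_dim, 64, kernel_size=16, stride=8)
        self.head = nn.Linear(64, 1)

    def forward(self, bytes_batch):
        x = self.embed(bytes_batch).transpose(1, 2)    # (B, emb_dim, L)
        x = torch.relu(self.conv(x)).max(dim=2).values # global max pooling
        return torch.sigmoid(self.head(x)).squeeze(1)  # malware probability

model = ByteCNN()
fake_batch = torch.randint(0, 257, (4, 4096))          # 4 fake "executables"
print(model(fake_batch))                               # 4 probabilities
```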


2023 ◽  
Vol 55 (1) ◽  
pp. 1-38
Author(s):  
Gabriel Resende Machado ◽  
Eugênio Silva ◽  
Ronaldo Ribeiro Goldschmidt

Deep Learning algorithms have achieved state-of-the-art performance in Image Classification. For this reason, they have been used even in security-critical applications, such as biometric recognition systems and self-driving cars. However, recent works have shown that those algorithms, which can even surpass human capabilities, are vulnerable to adversarial examples. In Computer Vision, adversarial examples are images containing subtle perturbations, generated by malicious optimization algorithms, that fool classifiers. As an attempt to mitigate these vulnerabilities, numerous countermeasures have been proposed recently in the literature. However, devising an efficient defense mechanism has proven to be a difficult task, since many approaches have turned out to be ineffective against adaptive attackers. Thus, this article aims to provide all readerships with a review of the latest research progress on Adversarial Machine Learning in Image Classification, written from a defender’s perspective. The article introduces novel taxonomies for categorizing adversarial attacks and defenses, and discusses possible reasons for the existence of adversarial examples. In addition, relevant guidance is provided to assist researchers in devising and evaluating defenses. Finally, based on the reviewed literature, the article suggests some promising paths for future research.
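
As a concrete, hedged example of the "subtle perturbations generated by malicious optimization algorithms" mentioned above, the sketch below implements the Fast Gradient Sign Method (FGSM), one of the simplest attacks in the adversarial-examples literature; the untrained model, epsilon, and label are placeholders, and a real evaluation would use a trained classifier.

```python
# Hedged illustration of a gradient-based adversarial attack: FGSM takes one
# step in the sign of the input gradient to increase the classification loss.
# The model, epsilon, and label below are placeholders.
import torch
import torch.nn as nn
import torchvision.models as models

model = models.resnet18(weights=None).eval()
loss_fn = nn.CrossEntropyLoss()

def fgsm(image: torch.Tensor, label: torch.Tensor, eps: float = 0.03):
    image = image.clone().requires_grad_(True)
    loss = loss_fn(model(image), label)
    loss.backward()
    # Step in the direction that increases the loss, then clip to valid pixels.
    return (image + eps * image.grad.sign()).clamp(0, 1).detach()

clean = torch.rand(1, 3, 224, 224)
target = torch.tensor([0])
adversarial = fgsm(clean, target)
print("max pixel change:", float((adversarial - clean).abs().max()))
```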


Author(s):  
Nag Nami ◽  
Melody Moh

Intelligent systems are capable of performing tasks on their own with minimal or no human intervention. With the advent of big data and IoT, these intelligent systems have made their way into most industries and homes. With its recent advancements, deep learning has created a niche in the technology space and is being actively used in big data and IoT systems globally. With this wider adoption, deep learning models have unfortunately become susceptible to attacks. Research has shown that many accurate, state-of-the-art models can be vulnerable to attacks using well-crafted adversarial examples. This chapter aims to provide a concise, in-depth understanding of attacks on and defenses of deep learning models. The chapter first presents the key architectures and application domains of deep learning and their vulnerabilities. Next, it illustrates prominent adversarial examples, including the algorithms and techniques used to generate these attacks. Finally, it describes challenges and mechanisms to counter these attacks and suggests future research directions.
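
To illustrate one family of countermeasures of the kind such chapters cover, the sketch below shows an input-transformation detector in the spirit of feature squeezing: an input is flagged when the model's prediction on a bit-depth-reduced copy diverges strongly from its prediction on the original. The model and threshold are hypothetical placeholders, not any chapter's prescribed defense.

```python
# Sketch of an input-transformation defense in the spirit of feature squeezing:
# compare predictions on the original input and a bit-depth-reduced copy, and
# flag large disagreement. Model and threshold are placeholders.
import torch
import torchvision.models as models

model = models.resnet18(weights=None).eval()

def reduce_bit_depth(image: torch.Tensor, bits: int = 4) -> torch.Tensor:
    levels = 2 ** bits - 1
    return torch.round(image * levels) / levels

def looks_adversarial(image: torch.Tensor, threshold: float = 0.5) -> bool:
    with torch.no_grad():
        p_orig = torch.softmax(model(image), dim=1)
        p_squeezed = torch.softmax(model(reduce_bit_depth(image)), dim=1)
    # L1 distance between the two prediction vectors.
    return float((p_orig - p_squeezed).abs().sum()) > threshold

sample = torch.rand(1, 3, 224, 224)
print("flagged as adversarial:", looks_adversarial(sample))
```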


Author(s):  
Enkhbold Nyamsuren ◽  
Han L.J. Van der Maas ◽  
Matthias Maurer

The Computerized Adaptive Practice (CAP) system describes a set of algorithms for assessing a player’s expertise and the difficulty of in-game problems, and for adapting the latter to the former. However, effective use of CAP requires that in-game problems are designed carefully and refined over time to avoid possible barriers to learning. This study proposes a methodology and three instruments for analyzing the problem set in CAP-enabled games. The instruments include a Guttman scale, a ranked order, and a Hasse diagram, which offer analysis at different levels of granularity and complexity. The methodology uses quantified difficulty measures to infer the topology of the problem set and is well suited for serious games that emphasize practice and repetitive play. The emphasis is on simplicity of use and on visualization of the problem space, so as to maximally support teachers and game developers in designing and refining CAP-enabled games. Two case studies demonstrate practical applications of the proposed instruments on empirical data. Future research directions are proposed to address potential drawbacks.
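
For intuition, the following is a simplified, hypothetical sketch of the Elo-style update that CAP-like systems use to estimate player ability and problem difficulty jointly from answer histories; the learning rate and the accuracy-only scoring rule are simplifications, not the exact CAP formulas.

```python
# Simplified, hypothetical sketch of an Elo-style update that jointly adjusts
# player ability and problem difficulty after each answer. Not CAP's exact rule.
import math

def expected_score(ability: float, difficulty: float) -> float:
    """Probability of a correct answer under a logistic (Rasch-like) model."""
    return 1.0 / (1.0 + math.exp(difficulty - ability))

def cap_update(ability, difficulty, correct, k=0.25):
    """Move ability and difficulty in opposite directions after one answer."""
    surprise = (1.0 if correct else 0.0) - expected_score(ability, difficulty)
    return ability + k * surprise, difficulty - k * surprise

ability, difficulty = 0.0, 0.5
for outcome in [True, True, False, True]:      # one player's toy answer history
    ability, difficulty = cap_update(ability, difficulty, outcome)
print(f"estimated ability={ability:.2f}, problem difficulty={difficulty:.2f}")
```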

