Adversarial Behavior
Recently Published Documents


TOTAL DOCUMENTS: 28 (last five years: 9)

H-INDEX: 4 (last five years: 1)

Author(s): Lily Xu

Green security concerns the protection of the world's wildlife, forests, and fisheries from poaching, illegal logging, and illegal fishing. Unfortunately, conservation efforts in green security domains are constrained by the limited availability of defenders, who must patrol vast areas to protect these resources from attackers. Artificial intelligence (AI) techniques have been developed for green security and other security settings, such as US Coast Guard patrols and airport screenings, but effective deployment of AI in these settings requires learning adversarial behavior and planning in complex environments where the true dynamics may be unknown. My research develops novel techniques in machine learning and game theory to enable the effective development and deployment of AI in these resource-constrained settings. Notably, my work has spanned the pipeline from learning in a supervised setting to planning in stochastic environments, through sequential planning in uncertain environments, to deployment in the real world. The overarching goal is to optimally allocate scarce resources under uncertainty for environmental conservation.
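
To make the resource-allocation idea concrete, here is a minimal sketch, not Xu's actual method: it greedily assigns a fixed budget of patrol hours across regions to maximize expected interceptions, assuming hypothetical predicted attack probabilities and a diminishing-returns detection model. All names, rates, and numbers are illustrative assumptions.

```python
import numpy as np

def allocate_patrols(attack_prob, budget_hours, step=1.0, detect_rate=0.1):
    """Greedily assign patrol hours to regions, one step at a time."""
    hours = np.zeros_like(attack_prob, dtype=float)

    def expected_interceptions(h):
        # Diminishing returns: more hours in a region detect a larger
        # fraction of that region's expected attacks.
        return float(np.sum(attack_prob * (1.0 - np.exp(-detect_rate * h))))

    remaining = budget_hours
    while remaining >= step:
        base = expected_interceptions(hours)
        # Marginal gain of spending one more step of patrol in each region.
        gains = [
            expected_interceptions(hours + step * np.eye(len(hours))[i]) - base
            for i in range(len(hours))
        ]
        hours[int(np.argmax(gains))] += step
        remaining -= step
    return hours

if __name__ == "__main__":
    predicted_attack_prob = np.array([0.6, 0.3, 0.1])  # hypothetical predictions
    print(allocate_patrols(predicted_attack_prob, budget_hours=20.0))
```

In practice the attack predictions would come from supervised learning on historical patrol data, and the allocation would need to account for how attackers respond to patrols; this sketch ignores both.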


Author(s): Kyle D. Christensen, Peter Dobias

This work reviews the development and testing of an intermediate force capability (IFC) concept-development hybrid wargame aimed at examining a maritime task force's ability to counter hybrid threats in the gray zone. IFCs offer a class of response between doing nothing and using lethal force in situations where lethal force would be politically unpalatable. The aim of the wargame was therefore to evaluate whether IFCs can make a difference to mission success against hybrid threats in the gray zone. This wargame series was particularly important because it used traditional game mechanics in a unique and innovative way to evaluate and assess IFCs. The results demonstrated that IFCs have a high probability of filling the gap between doing nothing and using lethal force, and that their presence provided engagement time and space for the maritime task force commander. The wargame also identified that developing robust IFC capabilities, effective not only against personnel but also against systems (trucks, cars, UAVs, etc.), can counter undesirable adversarial behavior.


2020, Vol. 2020 (3), pp. 384-403
Author(s): Camille Cobb, Lucy Simko, Tadayoshi Kohno, Alexis Hiniker

Online status indicators (OSIs), i.e., interface elements that communicate whether a user is online, can leak potentially sensitive information about users. In this work, we analyze 184 mobile applications to systematically characterize the existing design space of OSIs. We identified 40 apps with OSIs across a variety of genres and conducted a design review of the OSIs in each, examining both the Android and iOS versions of these apps. We found that OSI design decisions cluster into four major categories: appearance, audience, settings, and fidelity to actual user behavior. Fewer than half of these apps allow users to change the default settings for OSIs. Informed by our findings, we discuss: 1) how these design choices support adversarial behavior, 2) design guidelines for creating consistent, privacy-conscious OSIs, and 3) a set of novel design concepts for building future tools to augment users' ability to control and understand the presence information they broadcast. By connecting the common design patterns we document to prior work on privacy in social technologies, we contribute an empirical understanding of the systematic ways in which OSIs can make users more or less vulnerable to unwanted information disclosure.
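
As a rough illustration of the four design categories, the sketch below is hypothetical and not taken from the paper: it models an app's OSI configuration as a small data structure, with field names and defaults that are assumptions.

```python
# Hypothetical sketch (not from Cobb et al.): one way to represent an app's OSI
# design choices along the four categories identified in the paper.
from dataclasses import dataclass
from enum import Enum

class Audience(Enum):
    EVERYONE = "everyone"   # any user of the app can see the status
    CONTACTS = "contacts"   # only the user's contacts can see it
    NOBODY = "nobody"       # status is hidden from everyone

@dataclass
class OnlineStatusIndicator:
    appearance: str = "green_dot"            # how the indicator is rendered in the UI
    audience: Audience = Audience.CONTACTS   # who may observe the user's status
    user_configurable: bool = False          # settings: can the user change the default?
    fidelity: str = "realtime"               # how closely the signal tracks actual activity
```

Under this framing, the paper's finding that fewer than half of the apps let users change OSI defaults corresponds to the non-configurable case being the common one.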


Author(s): Thanh H. Nguyen, Arunesh Sinha, He He

Learning attacker behavior is an important research topic in security games, as security agencies are often uncertain about attackers' decision making. Previous work has focused on developing various behavioral models of attackers based on historical attack data. However, a clever attacker can manipulate its attacks to defeat such attack-driven learning, leading to ineffective defense strategies. We study attacker behavior deception with three main contributions. First, we propose a new model, named the partial behavior deception model, in which a deceptive attacker (among multiple attackers) controls a portion of the attacks. Our model captures real-world security scenarios, such as wildlife protection, in which multiple poachers are present. Second, we introduce a new scalable algorithm, GAMBO, to compute an optimal deception strategy for the deceptive attacker. Our algorithm employs projected gradient descent and uses the implicit function theorem to compute the gradient. Third, we conduct a comprehensive set of experiments, showing that attacker deception yields a significant benefit for the attacker and a significant loss for the defender.
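
The abstract does not specify GAMBO's objective or its implicit-function-theorem gradient, so the sketch below only illustrates the general projected-gradient pattern over a simplex of attack weights, with a toy objective and a finite-difference gradient as stand-ins. Everything here is an assumption for illustration, not the paper's algorithm.

```python
import numpy as np

def project_to_simplex(v):
    """Euclidean projection of v onto the probability simplex."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    idx = np.arange(1, len(v) + 1)
    rho = np.nonzero(u * idx > (css - 1.0))[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def projected_gradient_ascent(objective, x0, lr=0.05, iters=200, eps=1e-5):
    """Maximize `objective` over the simplex with finite-difference gradients."""
    x = project_to_simplex(np.asarray(x0, dtype=float))
    for _ in range(iters):
        grad = np.zeros_like(x)
        for i in range(len(x)):
            e = np.zeros_like(x)
            e[i] = eps
            grad[i] = (objective(x + e) - objective(x - e)) / (2.0 * eps)
        # Gradient step, then project back onto the feasible set.
        x = project_to_simplex(x + lr * grad)
    return x

if __name__ == "__main__":
    # Toy stand-in for a deception objective (hypothetical payoffs).
    payoff = np.array([0.7, 0.5, 0.1, 0.05])
    entropy_weight = 0.1

    def toy_objective(x):
        x = np.clip(x, 1e-9, 1.0)
        return float(payoff @ x - entropy_weight * (x * np.log(x)).sum())

    print(projected_gradient_ascent(toy_objective, x0=np.ones(4) / 4))
```

The projection step is what keeps the iterate a valid distribution of attack weights; GAMBO as described replaces the finite-difference gradient with an exact gradient obtained via the implicit function theorem.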


Author(s): Keith Paarporn, Brian Canty, Philip N. Brown, Mahnoosh Alizadeh, Jason R. Marden

Author(s): Roberto Pellungrini, Luca Pappalardo, Filippo Simini, Anna Monreale
