Adversarial Learning-based Data Augmentation for Rotation-robust Human Tracking

Author(s): Kexin Chen, Xue Zhou, Qidong Zhou, Hongbing Xu
2022, pp. 108414
Author(s): Jia Wang, Min Gao, Zongwei Wang, Chenghua Lin, Wei Zhou, ...
2021, Vol 547, pp. 1025-1044
Author(s): Dan Li, Changde Du, Shengpei Wang, Haibao Wang, Huiguang He
Electronics, 2022, Vol 11 (2), pp. 213
Author(s): Ghada Abdelmoumin, Jessica Whitaker, Danda B. Rawat, Abdul Rahman

An effective anomaly-based intelligent IDS (AN-Intel-IDS) must detect both known and unknown attacks. Hence, there is a need to train AN-Intel-IDS using dynamically generated, real-time data in an adversarial setting. Unfortunately, the public datasets available for training AN-Intel-IDS are ineluctably static, unrealistic, and prone to obsolescence. Further, the need to protect private data and conceal sensitive data features has limited data sharing, encouraging the use of synthetic data for training predictive and intrusion detection models. However, synthetic data can be unrealistic and potentially biased. Real-time data, on the other hand, are realistic and current, but they are inherently imbalanced because of the uneven distribution of anomalous and non-anomalous examples: normal examples are generally far more frequent than attack examples, leading to a skewed distribution. Because imbalanced data predominate in intrusion detection applications, they can lead to inaccurate predictions and degraded performance. Furthermore, the lack of real-time data produces potentially biased models that are less effective at predicting unknown attacks. Therefore, training AN-Intel-IDS with imbalanced and adversarial learning is instrumental to its efficacy and high performance. This paper investigates imbalanced learning and adversarial learning for training AN-Intel-IDS through a qualitative study. Using rapid review, structured reporting, and subgroup analysis, it surveys and synthesizes generative data augmentation techniques for addressing the uneven data distribution and generative adversarial techniques for producing synthetic yet realistic data in an adversarial setting.
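For orientation, the sketch below shows one common form of the generative adversarial augmentation the abstract refers to: a small GAN is fitted to the scarce minority-class (attack) rows of a tabular intrusion-detection dataset and then sampled to rebalance the training set. This is a minimal illustration in PyTorch, not the survey's own method; the feature dimension, network sizes, hyperparameters, and toy data are assumptions made for the example.

```python
# Minimal sketch: GAN-based oversampling of the minority ("attack") class
# for tabular intrusion-detection features. Dimensions, hyperparameters,
# and the toy data below are illustrative assumptions only.
import torch
import torch.nn as nn

LATENT_DIM, FEAT_DIM = 16, 32          # assumed latent and feature sizes

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM, 64), nn.ReLU(),
            nn.Linear(64, FEAT_DIM), nn.Tanh(),   # features assumed scaled to [-1, 1]
        )
    def forward(self, z):
        return self.net(z)

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(FEAT_DIM, 64), nn.LeakyReLU(0.2),
            nn.Linear(64, 1),                      # raw logit; the loss applies the sigmoid
        )
    def forward(self, x):
        return self.net(x)

def augment_minority(real_attacks: torch.Tensor, n_synthetic: int, epochs: int = 200):
    """Train G/D on real minority-class rows, then sample synthetic rows."""
    G, D = Generator(), Discriminator()
    opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
    bce = nn.BCEWithLogitsLoss()
    ones = torch.ones(real_attacks.size(0), 1)
    zeros = torch.zeros(real_attacks.size(0), 1)
    for _ in range(epochs):
        # Discriminator step: real minority rows vs. generated rows.
        fake = G(torch.randn(real_attacks.size(0), LATENT_DIM)).detach()
        loss_d = bce(D(real_attacks), ones) + bce(D(fake), zeros)
        opt_d.zero_grad(); loss_d.backward(); opt_d.step()
        # Generator step: try to fool the discriminator.
        fake = G(torch.randn(real_attacks.size(0), LATENT_DIM))
        loss_g = bce(D(fake), ones)
        opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    with torch.no_grad():
        return G(torch.randn(n_synthetic, LATENT_DIM))

# Toy usage: 50 scarce attack rows rebalanced with 500 synthetic ones.
real_attacks = torch.rand(50, FEAT_DIM) * 2 - 1
synthetic_attacks = augment_minority(real_attacks, n_synthetic=500)
print(synthetic_attacks.shape)  # torch.Size([500, 32])
```

In practice the synthetic rows would be concatenated with the real training set before fitting the detection model; conditional or Wasserstein variants are common refinements, but the adversarial generator-versus-discriminator structure above is the core idea.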


2020, Vol 1 (6)
Author(s): Yujui Chen, Tse-Wei Lin, Chiou-Ting Hsu

2020, Vol 43
Author(s): Myrthe Faber

Gilead et al. state that abstraction supports mental travel and that mental travel critically relies on abstraction. I propose an important addition to this theoretical framework, namely that mental travel might also support abstraction. Specifically, I argue that spontaneous mental travel (mind wandering), much like data augmentation in machine learning, provides the variability in mental content and context necessary for abstraction.


Author(s): Alex Hernández-García, Johannes Mehrer, Nikolaus Kriegeskorte, Peter König, Tim C. Kietzmann

2002, Vol 7 (1), pp. 31-42
Author(s): J. Šaltytė, K. Dučinskas

The Bayesian classification rule for observations of (second-order) stationary Gaussian random fields with different means and a common factorised covariance matrix is investigated. The influence of observed data augmentation on the Bayesian risk is examined for three widely applicable nonlinear spatial correlation models. An explicit expression of the Bayesian risk for the classification of augmented data is derived. A numerical comparison of these models, based on the variability of the Bayesian risk under the first-order neighbourhood scheme, is performed.
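For orientation, the classical non-spatial counterpart of this setting can be stated compactly. The sketch below assumes two Gaussian classes with equal priors and a common covariance matrix; the paper's expressions additionally account for spatial correlation among the augmented observations and the factorised covariance structure, and augmenting an observation with correlated neighbours presumably enters the risk through the resulting change in the effective Mahalanobis distance.

```latex
% Classical two-class Gaussian case with common covariance and equal priors
% (an orientation sketch only, not the paper's spatial derivation).
\[
  \text{allocate } x \text{ to class } 1 \iff
  \Big(x - \tfrac{1}{2}(\mu_1 + \mu_2)\Big)^{\top} \Sigma^{-1} (\mu_1 - \mu_2) \ge 0,
\]
\[
  R_{\mathrm{Bayes}} = \Phi\!\left(-\tfrac{\Delta}{2}\right),
  \qquad
  \Delta^{2} = (\mu_1 - \mu_2)^{\top} \Sigma^{-1} (\mu_1 - \mu_2),
\]
where $\Phi$ is the standard normal distribution function and $\Delta$ is the
Mahalanobis distance between the class means.
```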

