adversarial learning
Recently Published Documents


TOTAL DOCUMENTS: 768 (FIVE YEARS: 666)

H-INDEX: 21 (FIVE YEARS: 13)

Author(s): Haitong Yang, Guangyou Zhou, Tingting He

This article considers the task of text style transfer: transforming a sentence of one style into another while preserving its style-independent content. A dominant approach to text style transfer is to learn a good content representation of the text, define a fixed vector for every style, and recombine the two to generate text in the required style. In practice, however, many different words can convey the same style from different aspects. Using a single fixed vector to represent a style is therefore inefficient: it weakens the representation power of the style vector and limits the diversity of text generated within the same style. To address this problem, we propose a novel neural generative model called Adversarial Separation Network (ASN), which learns the content and style vectors jointly; the learnt vectors have strong representation power and good interpretability. In our method, adversarial learning enhances the model's capability to disentangle the two factors. To evaluate our method, we conduct experiments on two benchmark datasets. Experimental results show that our method performs style transfer better than strong comparison systems. We also demonstrate the strong interpretability of the learnt latent vectors.
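As an illustration of the adversarial disentanglement described above, here is a minimal PyTorch sketch (not the authors' ASN implementation; all module and variable names are hypothetical). A style discriminator tries to recover the style label from the content vector, while the encoder is trained to fool it, which pushes style information out of the content representation:

import torch
import torch.nn as nn
import torch.nn.functional as F

class StyleDiscriminator(nn.Module):
    """Predicts the style label from a (supposedly style-free) content vector."""
    def __init__(self, content_dim, n_styles):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(content_dim, content_dim), nn.ReLU(),
            nn.Linear(content_dim, n_styles))

    def forward(self, content):
        return self.net(content)

def disentangle_losses(content, style_labels, disc):
    # Discriminator step: learn to read the style out of the content vector.
    d_loss = F.cross_entropy(disc(content.detach()), style_labels)
    # Encoder step: make the content vector uninformative about style
    # by maximizing the discriminator's loss (note the sign flip).
    e_loss = -F.cross_entropy(disc(content), style_labels)
    return d_loss, e_loss

In alternating updates, the discriminator minimizes d_loss while the encoder minimizes e_loss alongside its reconstruction objective, so style information ends up carried only by the separate style vector.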


2022, Vol 27 (2), pp. 244-256
Author(s): Kainan Zhang, Zhi Tian, Zhipeng Cai, Daehee Seo

2022, Vol 122, pp. 108350
Author(s): Prashant W. Patil, Akshay Dudhane, Sachin Chaudhary, Subrahmanyam Murala

2022
Author(s): Jenny Yang, Andrew AS Soltan, Yang Yang, David A Clifton

Machine learning is becoming increasingly prominent in healthcare. Although its benefits are clear, growing attention is being given to how machine learning may exacerbate existing biases and disparities. In this study, we introduce an adversarial training framework capable of mitigating biases that may have been acquired through data collection or magnified during model development. For example, if one class is over-represented, or if errors and inconsistencies in clinical practice are reflected in the training data, a model can inherit these biases. To evaluate our adversarial training framework, we used the statistical definition of equalized odds. We evaluated our model on the task of rapidly predicting COVID-19 for patients presenting to hospital emergency departments, aiming to mitigate the regional (hospital) and ethnic biases present in the data. We trained our framework on a large, real-world COVID-19 dataset and demonstrated that adversarial training demonstrably improves outcome fairness (with respect to equalized odds) while still achieving clinically effective screening performance (NPV > 0.98). We compared our method to the benchmark set by related previous work, and performed prospective and external validation on four independent hospital cohorts. Our method can be generalized to any outcome, model, and definition of fairness.
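A minimal sketch of this kind of adversarial debiasing, assuming PyTorch (the abstract does not publish code, so all names here are illustrative). A gradient-reversal layer lets a single backward pass train the predictor to keep the protected attribute, e.g. hospital or ethnicity, unrecoverable from its output; feeding the true label y to the adversary alongside the prediction is what targets equalized odds rather than demographic parity:

import torch
import torch.nn as nn
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; flips the gradient sign on the backward pass."""
    @staticmethod
    def forward(ctx, x, lamb):
        ctx.lamb = lamb
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lamb * grad_output, None

class Adversary(nn.Module):
    """Tries to predict the protected group from (prediction, true label)."""
    def __init__(self, n_groups):
        super().__init__()
        self.net = nn.Linear(2, n_groups)

    def forward(self, p_hat, y, lamb=1.0):
        z = GradReverse.apply(torch.stack([p_hat, y], dim=1), lamb)
        return self.net(z)

# One combined step: the predictor minimizes its clinical loss while,
# via gradient reversal, maximizing the adversary's loss, e.g.
#   loss = F.binary_cross_entropy(p_hat, y) \
#        + F.cross_entropy(adversary(p_hat, y), group_labels)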


Electronics, 2022, Vol 11 (2), pp. 213
Author(s): Ghada Abdelmoumin, Jessica Whitaker, Danda B. Rawat, Abdul Rahman

An effective anomaly-based intelligent IDS (AN-Intel-IDS) must detect both known and unknown attacks. Hence, there is a need to train AN-Intel-IDS on dynamically generated, real-time data in an adversarial setting. Unfortunately, the public datasets available for training AN-Intel-IDS are ineluctably static, unrealistic, and prone to obsolescence. Further, the need to protect private data and conceal sensitive data features has limited data sharing, encouraging the use of synthetic data for training predictive and intrusion detection models. However, synthetic data can be unrealistic and potentially biased. Real-time data, on the other hand, are realistic and current, but inherently imbalanced due to the uneven distribution of anomalous and non-anomalous examples: non-anomalous (normal) examples are far more frequent than anomalous (attack) examples, leading to a skewed distribution. Although imbalanced data predominate in intrusion detection applications, they can lead to inaccurate predictions and degraded performance. Furthermore, the lack of real-time data produces potentially biased models that are less effective at predicting unknown attacks. Therefore, training AN-Intel-IDS using imbalanced learning and adversarial learning is instrumental to its efficacy and high performance. This paper investigates imbalanced learning and adversarial learning for training AN-Intel-IDS through a qualitative study. Using rapid review, structured reporting, and subgroup analysis, it surveys and synthesizes generative data-augmentation techniques for addressing uneven data distributions and generative adversarial techniques for producing synthetic yet realistic data in an adversarial setting.
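To make the generative augmentation the survey discusses concrete, here is a minimal GAN-style sketch in PyTorch (illustrative only; the feature count and layer sizes are assumptions, not taken from the paper). The generator is trained on minority (attack) records only, then sampled to rebalance the training set before fitting the detection model:

import torch
import torch.nn as nn

LATENT_DIM, N_FEATURES = 32, 41  # assumed sizes for a flow-record dataset

class Generator(nn.Module):
    """Maps random noise to a synthetic attack record."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM, 64), nn.ReLU(),
            nn.Linear(64, N_FEATURES))

    def forward(self, z):
        return self.net(z)

class Discriminator(nn.Module):
    """Scores whether a record looks like a real attack example."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_FEATURES, 64), nn.LeakyReLU(0.2),
            nn.Linear(64, 1))

    def forward(self, x):
        return self.net(x)

# After adversarial training on the minority class, draw
# Generator()(torch.randn(n, LATENT_DIM)) samples to even out the
# anomalous / non-anomalous class ratio before training the IDS.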


2022, pp. 108414
Author(s): Jia Wang, Min Gao, Zongwei Wang, Chenghua Lin, Wei Zhou, ...
