Recent Advances in Adversarial Training for Adversarial Robustness

Author(s):  
Tao Bai ◽  
Jinqi Luo ◽  
Jun Zhao ◽  
Bihan Wen ◽  
Qian Wang

Adversarial training is one of the most effective approaches for defending deep learning models against adversarial examples. Unlike other defense strategies, adversarial training aims to enhance the robustness of models intrinsically. During the past few years, adversarial training has been studied and discussed from various aspects and deserves a comprehensive review. In this survey, we systematically review, for the first time, the recent progress on adversarial training for adversarial robustness with a novel taxonomy. We then discuss the generalization problems in adversarial training from three perspectives and highlight the challenges that have not been fully tackled. Finally, we present potential future directions.
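For illustration, a minimal sketch of a standard PGD-based adversarial training loop is given below. This is one common instantiation of the general idea the survey covers, not the specific method of any single reviewed paper; the model, data loader, and hyperparameters are assumed placeholders.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Projected gradient descent: find a perturbation within an L-inf
    ball of radius eps that maximizes the classification loss."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()

def adversarial_training_epoch(model, loader, optimizer, device="cpu"):
    """One epoch of adversarial training: train on adversarial examples
    generated on the fly for each mini-batch (inner max, outer min)."""
    model.train()
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        x_adv = pgd_attack(model, x, y)          # inner maximization
        optimizer.zero_grad()
        loss = F.cross_entropy(model(x_adv), y)  # outer minimization
        loss.backward()
        optimizer.step()
```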

2021 ◽  
Vol 5 (4) ◽  
pp. 1-25
Author(s):  
Wei Jiang ◽  
Zhiyuan He ◽  
Jinyu Zhan ◽  
Weijia Pan ◽  
Deepak Adhikari

Great progress has been made in deep learning over the past few years, driving the deployment of deep learning-based applications in cyber-physical systems. But the lack of interpretability of deep learning models has led to potential security holes. Recent research has found that deep neural networks are vulnerable to well-designed input examples, called adversarial examples. The perturbations in such examples are often too small to detect, yet they completely fool deep learning models. In practice, adversarial attacks pose a serious threat to the success of deep learning. With the continuous development of deep learning applications, adversarial examples in different fields have also received attention. In this article, we summarize the methods of generating adversarial examples in computer vision, speech recognition, and natural language processing, and study the applications of adversarial examples. We also explore emerging research and open problems.
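As a concrete example of how such examples are generated in the computer-vision setting, a minimal sketch of the fast gradient sign method (FGSM), one of the simplest generation methods, is shown below; the model, labels, and epsilon value are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def fgsm_example(model, x, y, eps=0.03):
    """FGSM: perturb the input a small step in the direction of the
    sign of the loss gradient, producing a near-imperceptible change
    that can flip the model's prediction."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad = torch.autograd.grad(loss, x)[0]
    x_adv = x + eps * grad.sign()
    return x_adv.clamp(0, 1).detach()
```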


2021 ◽  
Vol 6 (5) ◽  
pp. 10-15
Author(s):  
Ela Bhattacharya ◽  
D. Bhattacharya

COVID-19 has emerged as the latest worrisome pandemic, reported to have had its outbreak in Wuhan, China. The infection spreads by means of human contact and, as a result, has caused massive infections across 200 countries around the world. Artificial intelligence has likewise contributed to managing the COVID-19 pandemic in various aspects within a short span of time. The deep neural networks explored in this paper have contributed to the detection of COVID-19 from imaging sources. This paper investigates the datasets, pre-processing, segmentation, feature extraction, classification, and test results, which can be useful for discovering future directions in the domain of automatic diagnosis of the disease using artificial intelligence-based frameworks.
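A minimal sketch of the kind of transfer-learning pipeline typically used for image-based COVID-19 detection is shown below. The backbone choice, class count, and hyperparameters are illustrative assumptions and are not drawn from any specific study reviewed above.

```python
import torch
import torch.nn as nn
from torchvision import models, transforms

# Pre-processing assumed for chest X-ray or CT images resized to 224x224.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Transfer learning: reuse a pretrained CNN, replace the final layer
# with a two-class head (COVID-19 vs. normal) and fine-tune it.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()
```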


2020 ◽  
Author(s):  
Saeed Nosratabadi ◽  
Amir Mosavi ◽  
Puhong Duan ◽  
Pedram Ghamisi ◽  
Filip Ferdinand ◽  
...  

Abstract This paper provides the state of the art of data science in economics. Advances in data science are investigated through a novel taxonomy of applications and methods, in three individual classes: deep learning models, ensemble models, and hybrid models. Application domains include the stock market, marketing, e-commerce, corporate banking, and cryptocurrency. The PRISMA method, a systematic literature review methodology, is used to ensure the quality of the survey. The findings reveal a trend toward hybrid models, as more than 51% of the reviewed articles applied a hybrid model. It is also found that, based on the RMSE accuracy metric, hybrid models had higher prediction accuracy than other algorithms. Nevertheless, the trends are expected to move toward the advancement of deep learning models.
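For clarity on the accuracy metric used in that comparison, the sketch below computes RMSE for two hypothetical forecasting models on the same test series; the numbers are invented for illustration only.

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root-mean-square error: sqrt(mean((y_true - y_pred)^2)).
    Lower RMSE means higher prediction accuracy."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

# Hypothetical predictions from two models on the same test series:
actual       = [101.2, 99.8, 103.5, 102.1]
single_model = [100.0, 101.0, 102.0, 103.0]
hybrid_model = [101.0, 100.0, 103.0, 102.5]

print(rmse(actual, single_model))  # ~1.22 (higher error)
print(rmse(actual, hybrid_model))  # ~0.35 (lower error, more accurate)
```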


Author(s):  
Jinquan Yan ◽  
Yinbiao He ◽  
Gang Li ◽  
Hao Yu

The ASME B&PV Code, Section III, is being used as the design acceptance criteria in the construction of China's third-generation AP1000 nuclear power plants. This is the first time that the ASME Code has been fully adopted by the Chinese nuclear power industry. In the past 6 years, a few improvements to the Code were found to be necessary to satisfy the various requirements originating from these new power plant (NPP) constructions. These improvements originate from a) the stress-strain curves needed in elastic-plastic analysis, b) the environmental fatigue issue, c) the confusion generated by the examination requirements after the hydrostatic test, and d) the safe-end welding problems. In this paper, the necessity of each of these proposed improvements to the ASME B&PV Code is further explained and discussed case by case. Hopefully, through these efforts, the near-future development direction and assignments of the ASME B&PV-III China International Working Group can be established.


2020 ◽  
Vol 2020 ◽  
pp. 1-9 ◽  
Author(s):  
Lingyun Jiang ◽  
Kai Qiao ◽  
Ruoxi Qin ◽  
Linyuan Wang ◽  
Wanting Yu ◽  
...  

In deep learning-based image classification, adversarial examples, in which small-magnitude perturbations are intentionally added to the input, may mislead deep neural networks (DNNs) into incorrect results, which means DNNs are vulnerable to them. Different attack and defense strategies have been proposed to better study the mechanisms of deep learning. However, such research typically addresses only one aspect, either attack or defense, so offensive and defensive performance are improved in isolation and it is difficult for the two to promote each other within the same framework. In this paper, we propose the Cycle-Consistent Adversarial GAN (CycleAdvGAN) to generate adversarial examples, which can learn and approximate the distributions of the original instances and adversarial examples, and in particular allows attacker and defender to confront each other and improve their abilities. For CycleAdvGAN, once the generators GA and GD are trained, GA can efficiently generate adversarial perturbations for any instance, improving the performance of existing attack methods, while GD can recover adversarial examples to clean instances, defending against existing attack methods. We apply CycleAdvGAN under semi-white-box and black-box settings on two public datasets, MNIST and CIFAR10. Through extensive experiments, we show that our method achieves state-of-the-art adversarial attack performance and also efficiently improves defense ability, realizing the integration of adversarial attack and defense. In addition, it improves the attack effect even when trained only on an adversarial dataset generated by a single kind of adversarial attack.
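To make the two-generator idea concrete, the sketch below shows cycle-consistency terms in the spirit of the description above: one generator maps clean inputs toward the adversarial domain and the other maps adversarial inputs back toward clean ones. This is a hedged illustration only; the paper's GAN discriminator terms, loss weights, and architectures are omitted, and the function names are assumptions.

```python
import torch
import torch.nn.functional as F

def cycle_consistency_loss(G_A, G_D, x_clean, x_adv):
    """G_A: clean -> adversarial domain, G_D: adversarial -> clean domain.
    The cycle terms encourage each generator to undo the other, so the
    learned mappings stay close to the respective data distributions."""
    fake_adv   = G_A(x_clean)      # clean  -> adversarial
    fake_clean = G_D(x_adv)        # adv    -> clean
    rec_clean  = G_D(fake_adv)     # clean  -> adv  -> clean
    rec_adv    = G_A(fake_clean)   # adv    -> clean -> adv
    return F.l1_loss(rec_clean, x_clean) + F.l1_loss(rec_adv, x_adv)
```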


2021 ◽  
Vol 40 ◽  
pp. 03030
Author(s):  
Mehdi Surani ◽  
Ramchandra Mangrulkar

Over the past years, the exponential growth of social media usage has given every individual the power to share their opinions freely. This has led to numerous threats, as users exploit their freedom of speech by spreading hateful comments, using abusive language, carrying out personal attacks, and sometimes even engaging in cyberbullying. However, determining abusive content is not a difficult task, and many social media platforms already have solutions in place; at the same time, many are searching for more efficient ways and solutions to overcome this issue. Traditional approaches explore machine learning models to identify negative content posted on social media. Shaming categories are explored, and content is labeled accordingly. Such categorization is easy to detect, as the contextual language used is direct. However, the use of irony to mock or convey contempt is also a part of public shaming and must be considered while categorizing the shaming labels. In this research paper, various shaming types, namely toxic, severe toxic, obscene, threat, insult, identity hate, and sarcasm, are predicted using deep learning approaches like CNN and LSTM. These models have been studied along with traditional models to determine which model gives the most accurate results.
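A minimal sketch of the kind of multi-label LSTM classifier such a study might use is shown below, with one output per shaming label. The vocabulary size, dimensions, and layer choices are illustrative assumptions, not the authors' reported configuration.

```python
import torch
import torch.nn as nn

class CommentLSTM(nn.Module):
    """Multi-label LSTM classifier for the seven shaming labels
    (toxic, severe toxic, obscene, threat, insult, identity hate,
    sarcasm). Token ids are assumed to come from a fitted tokenizer."""
    def __init__(self, vocab_size=20000, embed_dim=128, hidden=64, labels=7):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.lstm = nn.LSTM(embed_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, labels)

    def forward(self, token_ids):
        emb = self.embed(token_ids)      # (batch, seq_len, embed_dim)
        _, (h_n, _) = self.lstm(emb)     # final hidden state
        return self.head(h_n[-1])        # raw logits, one per label

# Multi-label training pairs a sigmoid with binary cross-entropy per label:
criterion = nn.BCEWithLogitsLoss()
```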

