Universal Adversarial Training Using Auxiliary Conditional Generative Model-Based Adversarial Attack Generation
- Authors
- Dingeto, Hiskias; Kim, Juntae
- Issue Date
- Aug-2023
- Publisher
- MDPI
- Keywords
- adversarial training; adversarial attacks; generative models; conditional generative adversarial network; auxiliary conditional generative adversarial networks
- Citation
- Applied Sciences, v.13, no.15, pp 1 - 17
- Pages
- 17
- Indexed
- SCIE
SCOPUS
- Journal Title
- Applied Sciences
- Volume
- 13
- Number
- 15
- Start Page
- 1
- End Page
- 17
- URI
- https://scholarworks.dongguk.edu/handle/sw.dongguk/20338
- DOI
- 10.3390/app13158830
- ISSN
- 2076-3417
- Abstract
- While machine learning has become the holy grail of modern-day computing, it has many security flaws that have yet to be resolved. Adversarial attacks are one such flaw, in which an attacker adds carefully crafted noise to the data samples a machine learning model takes as input, with the aim of fooling the model. Various adversarial training methods have been proposed that augment the training dataset with adversarial examples to defend against such attacks. However, a general limitation remains: a robust model can only protect itself against adversarial attacks that are known, or similar, to those it was trained on. To address this limitation, this paper proposes a Universal Adversarial Training algorithm that uses adversarial examples generated by an Auxiliary Classifier Generative Adversarial Network (AC-GAN) in parallel with other data augmentation techniques, such as the mixup method. This method builds on a previously proposed technique, adversarial training, in which adversarial examples produced by gradient-based methods are added to the training data. Our method adapts the AC-GAN architecture for adversarial example generation, making it more suitable for adversarial training by updating its loss terms, and evaluates its performance against various attacks in comparison with other robust adversarial models. The results indicate that generative models are well suited to boosting adversarial robustness through adversarial training. When tested against various attack types, our proposed model achieved an average accuracy of 97.48% on the MNIST dataset and 94.02% on the CelebA dataset, showing that generative models offer a promising route to improving adversarial security through adversarial training.
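As a point of reference for the mixup augmentation the abstract mentions alongside AC-GAN generation, the following is a minimal, self-contained sketch of the mixup step (a convex combination of two inputs and their labels, with the mixing weight drawn from a Beta distribution). This is an illustrative sketch only, not the authors' implementation; the function name, the `alpha` default, and the MNIST-shaped example are assumptions.

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=1.0, rng=None):
    """Mixup augmentation sketch: convex combination of two samples and labels.

    `alpha` parameterizes the Beta distribution from which the mixing
    weight lambda is drawn (an assumed default; the paper's value may differ).
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    lam = rng.beta(alpha, alpha)
    x = lam * x1 + (1.0 - lam) * x2   # mixed input
    y = lam * y1 + (1.0 - lam) * y2   # mixed (soft) label
    return x, y, lam

# Example: mix a pair of MNIST-shaped images with one-hot labels.
x1, x2 = np.zeros((28, 28)), np.ones((28, 28))
y1, y2 = np.eye(10)[3], np.eye(10)[7]
x_mixed, y_mixed, lam = mixup(x1, y1, x2, y2)
```

In an adversarial training loop of the kind the abstract describes, such mixed samples would be combined with AC-GAN-generated adversarial examples before each model update.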
- Files in This Item
- There are no files associated with this item.
- Appears in
Collections - College of Advanced Convergence Engineering > Department of Computer Science and Artificial Intelligence > 1. Journal Articles

Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.