Detailed Information

Cited 0 times in Web of Science · Cited 1 time in Scopus

Comparative Study of Adversarial Defenses: Adversarial Training and Regularization in Vision Transformers and CNNs (Open Access)

Authors
Dingeto, Hiskias; Kim, Juntae
Issue Date
Jul-2024
Publisher
MDPI AG
Keywords
machine learning; security; adversarial attack; adversarial robustness; adversarial defense; vision transformers; convolutional neural networks; regularization
Citation
Electronics, v.13, no.13, pp. 1-14
Pages
14
Indexed
SCIE
SCOPUS
Journal Title
Electronics
Volume
13
Number
13
Start Page
1
End Page
14
URI
https://scholarworks.dongguk.edu/handle/sw.dongguk/22704
DOI
10.3390/electronics13132534
ISSN
2079-9292
Abstract
Transformer-based models are driving a significant revolution in machine learning. Among these innovations, vision transformers (ViTs) stand out for applying transformer architectures to vision tasks. By demonstrating performance as good as, if not better than, traditional convolutional neural networks (CNNs), ViTs have captured considerable interest in the field. This study focuses on the resilience of ViTs and CNNs in the face of adversarial attacks. Such attacks, which introduce carefully crafted noise into a model's input to produce incorrect outputs, pose significant challenges to the reliability of machine learning models. Our analysis evaluated the adversarial robustness of CNNs and ViTs using regularization techniques and adversarial training methods. Adversarial training, in particular, represents the traditional approach to defending against these attacks. Despite its prominent use, our findings reveal that regularization techniques enable vision transformers and, in most cases, CNNs to enhance adversarial defenses more effectively. Testing on datasets such as CIFAR-10 and CIFAR-100, we demonstrate that vision transformers, especially when combined with effective regularization strategies, exhibit adversarial robustness even without adversarial training. Two main inferences can be drawn from our findings. First, they emphasize how effectively vision transformers can strengthen artificial intelligence defenses against adversarial attacks. Second, they show that regularization, which requires far fewer computational resources and covers a wide range of adversarial attacks, can be effective for adversarial defense. Understanding and improving a model's resilience to adversarial attacks is crucial for developing secure, dependable systems that can handle the complexity of real-world applications as artificial intelligence and machine learning technologies advance.
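To illustrate the kind of attack the abstract describes — input noise crafted to flip a model's prediction — here is a minimal sketch of the fast gradient sign method (FGSM) applied to a toy logistic classifier. This is not the paper's experimental setup (which uses ViTs/CNNs on CIFAR-10/100); the classifier, weights, and `fgsm` helper below are illustrative assumptions only.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, eps):
    """One FGSM step against a logistic classifier p(y=+1 | x) = sigmoid(w . x).

    y is a label in {-1, +1}. The loss is -log sigmoid(y * w.x); FGSM moves
    x by eps in the sign of the loss gradient: x' = x + eps * sign(dL/dx).
    """
    margin = y * np.dot(w, x)
    grad_x = -y * w * sigmoid(-margin)  # gradient of the loss w.r.t. the input
    return x + eps * np.sign(grad_x)

# Demo: a correctly classified point becomes misclassified after perturbation.
w = np.array([2.0, -1.0])
x = np.array([1.0, 0.5])        # w . x = 1.5 > 0, so predicted label is +1
x_adv = fgsm(x, y=+1.0, w=w, eps=0.9)
```

Here `w . x_adv` drops below zero, so the perturbed input is misclassified even though each coordinate moved by at most `eps`. Adversarial training hardens a model by folding such perturbed examples into training, whereas the regularization approaches compared in the paper penalize model complexity instead.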
Files in This Item
There are no files associated with this item.
Appears in
Collections
College of Advanced Convergence Engineering > Department of Computer Science and Artificial Intelligence > 1. Journal Articles


Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.

Related Researcher


Kim, Jun Tae
College of Advanced Convergence Engineering (Department of Computer Science and Artificial Intelligence)
