Detailed Information

Cited 0 times in Web of Science · Cited 1 time in Scopus

Comparative Study of Adversarial Defenses: Adversarial Training and Regularization in Vision Transformers and CNNs

Full metadata record
DC Field: Value
dc.contributor.author: Dingeto, Hiskias
dc.contributor.author: Kim, Juntae
dc.date.accessioned: 2024-08-08T13:32:27Z
dc.date.available: 2024-08-08T13:32:27Z
dc.date.issued: 2024-07
dc.identifier.issn: 2079-9292
dc.identifier.uri: https://scholarworks.dongguk.edu/handle/sw.dongguk/22704
dc.description.abstract: Transformer-based models are currently driving a significant revolution in machine learning. Among these innovations, vision transformers (ViTs) stand out for applying transformer architectures to vision tasks; by performing as well as, and often better than, traditional convolutional neural networks (CNNs), they have attracted considerable interest. This study examines the resilience of ViTs and CNNs to adversarial attacks, which add carefully crafted noise to a model's input to produce incorrect outputs and thus pose a significant challenge to model reliability. We evaluated the adversarial robustness of CNNs and ViTs using regularization techniques and adversarial training, the latter being the traditional approach to defending against such attacks. Despite the prominence of adversarial training, our findings reveal that regularization enables vision transformers, and in most cases CNNs, to strengthen adversarial defenses more effectively. On datasets such as CIFAR-10 and CIFAR-100, we demonstrate that vision transformers, especially when combined with effective regularization strategies, exhibit adversarial robustness even without adversarial training. Two main inferences can be drawn from our findings. First, vision transformers can effectively strengthen artificial intelligence defenses against adversarial attacks. Second, regularization, which requires far fewer computational resources and covers a wide range of adversarial attacks, can serve as an effective adversarial defense. Understanding and improving a model's resilience to adversarial attacks is crucial for developing secure, dependable systems that can handle the complexity of real-world applications as artificial intelligence and machine learning technologies advance.
dc.format.extent: 14
dc.language: English
dc.language.iso: ENG
dc.publisher: MDPI AG
dc.title: Comparative Study of Adversarial Defenses: Adversarial Training and Regularization in Vision Transformers and CNNs
dc.type: Article
dc.publisher.location: Switzerland
dc.identifier.doi: 10.3390/electronics13132534
dc.identifier.scopusid: 2-s2.0-85198397550
dc.identifier.wosid: 001269713900001
dc.identifier.bibliographicCitation: Electronics, v.13, no.13, pp. 1-14
dc.citation.title: Electronics
dc.citation.volume: 13
dc.citation.number: 13
dc.citation.startPage: 1
dc.citation.endPage: 14
dc.type.docType: Article
dc.description.isOpenAccess: Y
dc.description.journalRegisteredClass: scie
dc.description.journalRegisteredClass: scopus
dc.relation.journalResearchArea: Computer Science
dc.relation.journalResearchArea: Engineering
dc.relation.journalResearchArea: Physics
dc.relation.journalWebOfScienceCategory: Computer Science, Information Systems
dc.relation.journalWebOfScienceCategory: Engineering, Electrical & Electronic
dc.relation.journalWebOfScienceCategory: Physics, Applied
dc.subject.keywordPlus: NETWORK
dc.subject.keywordAuthor: machine learning
dc.subject.keywordAuthor: security
dc.subject.keywordAuthor: adversarial attack
dc.subject.keywordAuthor: adversarial robustness
dc.subject.keywordAuthor: adversarial defense
dc.subject.keywordAuthor: vision transformers
dc.subject.keywordAuthor: convolutional neural networks
dc.subject.keywordAuthor: regularization
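The abstract above contrasts adversarial training with regularization as defenses against adversarial attacks. As a minimal illustration of the kind of attack both defenses target, the sketch below implements the Fast Gradient Sign Method (FGSM) on a toy logistic model; the model, weights, and epsilon here are illustrative assumptions, not the paper's actual ViT/CNN setup or attack configuration.

```python
import numpy as np

def fgsm_perturb(x, y, w, b, eps):
    """Perturb input x by eps in the sign of the input gradient of the loss.

    For a logistic model with binary cross-entropy loss, the gradient of
    the loss with respect to the input is (p - y) * w, where p = sigmoid(x.w + b).
    """
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))  # model's predicted probability
    grad_x = (p - y) * w                    # d(BCE loss)/dx
    return x + eps * np.sign(grad_x)        # step that increases the loss

def prob(x, w, b):
    """Predicted probability of the positive class."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

# Toy example: a confidently classified point becomes less confident
# after a small FGSM perturbation.
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, -1.0])   # clean input, true label y = 1
y = 1.0

x_adv = fgsm_perturb(x, y, w, b, eps=0.3)
print(prob(x, w, b), prob(x_adv, w, b))  # adversarial probability is lower
```

Adversarial training would fold such perturbed inputs back into the training loop, whereas the regularization approaches the study favors aim to make the model's decision surface smoother without generating attacks at training time.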
Files in This Item
There are no files associated with this item.
Appears in Collections
College of Advanced Convergence Engineering > Department of Computer Science and Artificial Intelligence > 1. Journal Articles


Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.

Related Researcher


Kim, Jun Tae
College of Advanced Convergence Engineering (Department of Computer Science and Artificial Intelligence)
