Bridged adversarial training
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Kim, Hoki | - |
| dc.contributor.author | Lee, Woojin | - |
| dc.contributor.author | Lee, Sungyoon | - |
| dc.contributor.author | Lee, Jaewook | - |
| dc.date.accessioned | 2024-08-08T14:00:30Z | - |
| dc.date.available | 2024-08-08T14:00:30Z | - |
| dc.date.issued | 2023-10 | - |
| dc.identifier.issn | 0893-6080 | - |
| dc.identifier.issn | 1879-2782 | - |
| dc.identifier.uri | https://scholarworks.dongguk.edu/handle/sw.dongguk/22750 | - |
| dc.description.abstract | Adversarial robustness is considered a required property of deep neural networks. In this study, we discover that adversarially trained models can have significantly different characteristics in terms of margin and smoothness, even though they show similar robustness. Inspired by this observation, we investigate the effect of different regularizers and discover the negative effect of the smoothness regularizer on maximizing the margin. Based on these analyses, we propose a new method called bridged adversarial training that mitigates the negative effect by bridging the gap between clean and adversarial examples. We provide theoretical and empirical evidence that the proposed method provides stable and better robustness, especially for large perturbations. © 2023 Elsevier Ltd | - |
| dc.format.extent | 17 | - |
| dc.language | English | - |
| dc.language.iso | ENG | - |
| dc.publisher | Elsevier Ltd | - |
| dc.title | Bridged adversarial training | - |
| dc.type | Article | - |
| dc.publisher.location | Netherlands | - |
| dc.identifier.doi | 10.1016/j.neunet.2023.08.024 | - |
| dc.identifier.scopusid | 2-s2.0-85170099365 | - |
| dc.identifier.wosid | 001072083900001 | - |
| dc.identifier.bibliographicCitation | Neural Networks, v.167, pp 266 - 282 | - |
| dc.citation.title | Neural Networks | - |
| dc.citation.volume | 167 | - |
| dc.citation.startPage | 266 | - |
| dc.citation.endPage | 282 | - |
| dc.type.docType | Article | - |
| dc.description.isOpenAccess | Y | - |
| dc.description.journalRegisteredClass | scie | - |
| dc.description.journalRegisteredClass | scopus | - |
| dc.relation.journalResearchArea | Computer Science | - |
| dc.relation.journalResearchArea | Neurosciences & Neurology | - |
| dc.relation.journalWebOfScienceCategory | Computer Science, Artificial Intelligence | - |
| dc.relation.journalWebOfScienceCategory | Neurosciences | - |
| dc.subject.keywordAuthor | Adversarial defense | - |
| dc.subject.keywordAuthor | Adversarial robustness | - |
| dc.subject.keywordAuthor | Adversarial training | - |
| dc.subject.keywordAuthor | Neural networks | - |
