Cited 20 times
A defense method against backdoor attacks on neural networks
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Kaviani, Sara | - |
| dc.contributor.author | Shamshiri, Samaneh | - |
| dc.contributor.author | Sohn, Insoo | - |
| dc.date.accessioned | 2024-08-08T09:31:59Z | - |
| dc.date.available | 2024-08-08T09:31:59Z | - |
| dc.date.issued | 2023-03 | - |
| dc.identifier.issn | 0957-4174 | - |
| dc.identifier.issn | 1873-6793 | - |
| dc.identifier.uri | https://scholarworks.dongguk.edu/handle/sw.dongguk/20970 | - |
| dc.description.abstract | Due to the computational complexity of artificial neural networks (ANNs), there is an increasing demand for third parties and MLaaS (machine learning as a service) to take charge of the training procedure. Therefore, making ANNs robust against adversarial attacks has received a lot of attention. Backdoor attacks, which cause targeted misclassification while the accuracy on clean data is not affected, are among the most efficient attacks. In this paper, we propose a method called link-pruning with scale-freeness (LPSF), in which the dormant threatening links from the neurons in the input layer to other neurons of a feed-forward neural network are eliminated according to the information gained from a portion of clean input data, and the essential links are strengthened by changing the fully-connected networks to scale-free structures. To the best of our knowledge, it is the first defense method that makes the network significantly robust against backdoor attacks (BD) before the network is attacked. LPSF is evaluated on feed-forward neural networks with malicious MNIST, FMNIST, handwritten Chinese character, and HODA datasets. Through the LPSF strategy, we achieve a sufficiently high and stable accuracy on clean data and a 50%–94% reduction in attack success rate. | - |
| dc.format.extent | 14 | - |
| dc.language | English | - |
| dc.language.iso | ENG | - |
| dc.publisher | Elsevier Ltd | - |
| dc.title | A defense method against backdoor attacks on neural networks | - |
| dc.type | Article | - |
| dc.publisher.location | Netherlands | - |
| dc.identifier.doi | 10.1016/j.eswa.2022.118990 | - |
| dc.identifier.scopusid | 2-s2.0-85140061018 | - |
| dc.identifier.wosid | 000877846900002 | - |
| dc.identifier.bibliographicCitation | Expert Systems with Applications, v.213, pp 1 - 14 | - |
| dc.citation.title | Expert Systems with Applications | - |
| dc.citation.volume | 213 | - |
| dc.citation.startPage | 1 | - |
| dc.citation.endPage | 14 | - |
| dc.type.docType | Article | - |
| dc.description.isOpenAccess | Y | - |
| dc.description.journalRegisteredClass | scie | - |
| dc.description.journalRegisteredClass | scopus | - |
| dc.relation.journalResearchArea | Computer Science | - |
| dc.relation.journalResearchArea | Engineering | - |
| dc.relation.journalResearchArea | Operations Research & Management Science | - |
| dc.relation.journalWebOfScienceCategory | Computer Science, Artificial Intelligence | - |
| dc.relation.journalWebOfScienceCategory | Engineering, Electrical & Electronic | - |
| dc.relation.journalWebOfScienceCategory | Operations Research & Management Science | - |
| dc.subject.keywordAuthor | Feed-forward neural networks | - |
| dc.subject.keywordAuthor | Backdoor attacks | - |
| dc.subject.keywordAuthor | Scale-free networks | - |
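The abstract describes pruning "dormant threatening" input-layer links using statistics gathered from a portion of clean data. The paper itself is not reproduced here, so the following is only a minimal sketch of that general idea, not the authors' LPSF algorithm: it scores each input-to-hidden link by the mean input magnitude seen on clean data times the link's weight magnitude, then zeroes out the least important fraction. The matrix shapes, the importance score, and the pruning fraction are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy first-layer weight matrix: 8 input features -> 4 hidden neurons.
# (Shapes and values are illustrative, not taken from the paper.)
W = rng.normal(size=(8, 4))

# A small batch of "clean" inputs used to estimate link importance.
X_clean = rng.uniform(size=(32, 8))

# Assumed importance score for link (i, j): mean |x_i| over the clean
# batch times |W_ij|. Links that rarely carry signal on clean data and
# have small weights score low.
importance = np.abs(X_clean).mean(axis=0)[:, None] * np.abs(W)

# Prune (zero out) the least important fraction of input-layer links.
prune_frac = 0.5  # illustrative choice
threshold = np.quantile(importance, prune_frac)
mask = importance >= threshold
W_pruned = W * mask

print(f"kept {mask.sum()} of {W.size} links")
```

The scale-free rewiring half of LPSF (concentrating connectivity on hub neurons) is a separate structural change not shown in this sketch.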
Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.
