Defense against neural trojan attacks: A survey
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Kaviani, Sara | - |
| dc.contributor.author | Sohn, Insoo | - |
| dc.date.accessioned | 2023-04-27T19:40:33Z | - |
| dc.date.available | 2023-04-27T19:40:33Z | - |
| dc.date.issued | 2021-01-29 | - |
| dc.identifier.issn | 0925-2312 | - |
| dc.identifier.issn | 1872-8286 | - |
| dc.identifier.uri | https://scholarworks.dongguk.edu/handle/sw.dongguk/5439 | - |
| dc.description.abstract | Deep learning techniques have become significantly prevalent in many real-world problems, including a variety of detection, recognition, and classification tasks. Obtaining high-performance neural networks requires enormous training datasets, memory, and time-consuming computation, which has increased users' demand for outsourced training. As a result, machine-learning-as-a-service (MLaaS) providers or a third party can gain an opportunity to put the model's security at risk by training the model with malicious inputs. The malicious functionality inserted into the neural network by the adversary is activated only in the presence of specific inputs. These kinds of attacks on neural networks, called trojan or backdoor attacks, are very stealthy and hard to detect because they do not affect the network's performance on clean datasets. In this paper, we refer to two important threat models and focus on the detection and mitigation techniques against these types of attacks on neural networks that have been proposed recently. We summarize, discuss, and compare the defense methods and their corresponding results. (c) 2020 Elsevier B.V. All rights reserved. | - |
| dc.format.extent | 17 | - |
| dc.language | English | - |
| dc.language.iso | ENG | - |
| dc.publisher | ELSEVIER | - |
| dc.title | Defense against neural trojan attacks: A survey | - |
| dc.type | Article | - |
| dc.publisher.location | Netherlands | - |
| dc.identifier.doi | 10.1016/j.neucom.2020.07.133 | - |
| dc.identifier.scopusid | 2-s2.0-85096389546 | - |
| dc.identifier.wosid | 000599876700001 | - |
| dc.identifier.bibliographicCitation | NEUROCOMPUTING, v.423, pp 651 - 667 | - |
| dc.citation.title | NEUROCOMPUTING | - |
| dc.citation.volume | 423 | - |
| dc.citation.startPage | 651 | - |
| dc.citation.endPage | 667 | - |
| dc.type.docType | Article | - |
| dc.description.isOpenAccess | N | - |
| dc.description.journalRegisteredClass | scie | - |
| dc.description.journalRegisteredClass | scopus | - |
| dc.relation.journalResearchArea | Computer Science | - |
| dc.relation.journalWebOfScienceCategory | Computer Science, Artificial Intelligence | - |
| dc.subject.keywordAuthor | Deep learning | - |
| dc.subject.keywordAuthor | Trojan attacks | - |
| dc.subject.keywordAuthor | Backdoor attacks | - |
| dc.subject.keywordAuthor | Defense | - |
