Towards undetectable adversarial attack on time series classification
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Kim, Hoki | - |
| dc.contributor.author | Lee, Yunyoung | - |
| dc.contributor.author | Lee, Woojin | - |
| dc.contributor.author | Lee, Jaewook | - |
| dc.date.accessioned | 2025-05-13T05:00:19Z | - |
| dc.date.available | 2025-05-13T05:00:19Z | - |
| dc.date.issued | 2025-10 | - |
| dc.identifier.issn | 0020-0255 | - |
| dc.identifier.issn | 1872-6291 | - |
| dc.identifier.uri | https://scholarworks.dongguk.edu/handle/sw.dongguk/58319 | - |
| dc.description.abstract | Although deep learning models have shown superior performance for time series classification, prior studies have recently discovered that small perturbations can fool various time series models. This vulnerability poses a serious threat that can cause malfunctions in real-world systems, such as Internet-of-Things (IoT) devices and industrial control systems. To defend these systems against adversarial time series, recent studies have proposed a detection method using time series characteristics. In this paper, however, we reveal that this detection-based defense can be easily circumvented. Through an extensive investigation into existing adversarial attacks and generated adversarial time series examples, we discover that they tend to ignore the trends in local areas and add excessive noise to the original examples. Based on the analyses, we propose a new adaptive attack, called trend-adaptive interval attack (TIA), that generates a hardly detectable adversarial time series by adopting trend-adaptive loss and gradient-based interval selection. Our experiments demonstrate that the proposed method successfully maintains the important features of the original time series and deceives diverse time series models without being detected. | - |
| dc.format.extent | 17 | - |
| dc.language | English | - |
| dc.language.iso | ENG | - |
| dc.publisher | Elsevier Inc | - |
| dc.title | Towards undetectable adversarial attack on time series classification | - |
| dc.type | Article | - |
| dc.publisher.location | Netherlands | - |
| dc.identifier.doi | 10.1016/j.ins.2025.122216 | - |
| dc.identifier.scopusid | 2-s2.0-105003372992 | - |
| dc.identifier.wosid | 001481471300001 | - |
| dc.identifier.bibliographicCitation | Information Sciences, v.715, pp 1 - 17 | - |
| dc.citation.title | Information Sciences | - |
| dc.citation.volume | 715 | - |
| dc.citation.startPage | 1 | - |
| dc.citation.endPage | 17 | - |
| dc.type.docType | Article | - |
| dc.description.isOpenAccess | N | - |
| dc.description.journalRegisteredClass | scie | - |
| dc.description.journalRegisteredClass | scopus | - |
| dc.relation.journalResearchArea | Computer Science | - |
| dc.relation.journalWebOfScienceCategory | Computer Science, Information Systems | - |
| dc.subject.keywordAuthor | Adversarial attack | - |
| dc.subject.keywordAuthor | Detection | - |
| dc.subject.keywordAuthor | Time series | - |
| dc.subject.keywordAuthor | Deep learning | - |
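The abstract names two ingredients of the proposed TIA attack: gradient-based interval selection and a trend-adaptive loss. As a rough, hypothetical illustration only (the record contains no implementation details, so the helper names, window scheme, and penalty below are assumptions, not the paper's actual method), these ideas might be sketched as:

```python
import numpy as np

def select_interval(grad, width):
    """Pick the window of `width` steps with the largest total gradient
    magnitude (a simple stand-in for gradient-based interval selection)."""
    mag = np.abs(grad)
    sums = np.convolve(mag, np.ones(width), mode="valid")  # sliding-window sums
    start = int(np.argmax(sums))
    return start, start + width

def trend_penalty(x, x_adv):
    """Fraction of time steps whose local trend (sign of the first
    difference) is flipped by the perturbation — one plausible way to
    quantify the 'ignored local trends' the abstract mentions."""
    dx, dadv = np.diff(x), np.diff(x_adv)
    return float(np.mean(np.sign(dx) != np.sign(dadv)))

# Toy example: a synthetic series and a made-up saliency map
x = np.sin(np.linspace(0, 4 * np.pi, 64))
grad = np.zeros_like(x)
grad[20:28] = 1.0  # pretend the classifier is most sensitive here
s, e = select_interval(grad, 8)
x_adv = x.copy()
x_adv[s:e] += 0.05 * np.sign(grad[s:e])  # perturb only the selected interval
```

A perturbation confined to the high-saliency interval and scored low by `trend_penalty` would be in the spirit of the abstract's claim of deceiving models while preserving the original series' important features.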
