Black-box adversarial examples via frequency distortion against fault diagnosis systems
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Lee, Sangho | - |
| dc.contributor.author | Kim, Hoki | - |
| dc.contributor.author | Lee, Woojin | - |
| dc.contributor.author | Son, Youngdoo | - |
| dc.date.accessioned | 2025-03-10T02:03:00Z | - |
| dc.date.available | 2025-03-10T02:03:00Z | - |
| dc.date.issued | 2025-03 | - |
| dc.identifier.issn | 1568-4946 | - |
| dc.identifier.issn | 1872-9681 | - |
| dc.identifier.uri | https://scholarworks.dongguk.edu/handle/sw.dongguk/57875 | - |
| dc.description.abstract | Deep learning has significantly impacted prognostic and health management, but its susceptibility to adversarial attacks raises security risks for fault diagnosis systems. Previous research on the adversarial robustness of these systems is limited by unrealistic assumptions about prior model knowledge, which is often unobtainable in the real world, and by a lack of integration of domain-specific knowledge, particularly frequency information crucial for identifying unique characteristics of machinery states. To address these limitations and enhance robustness assessments, we propose a novel adversarial attack method that exploits frequency distortion. Our approach corrupts both frequency components and waveforms of vibration signals from rotating machinery, enabling a more thorough evaluation of system vulnerability without requiring access to model information. Through extensive experiments on two bearing datasets, including a self-collected dataset, we demonstrate the effectiveness of the proposed method in generating malicious yet imperceptible examples that remarkably degrade model performance, even without access to model information. In realistic attack scenarios for fault diagnosis systems, our approach produces adversarial examples that mimic unique frequency components associated with the deceived machinery states, leading to average performance drops approximately 13 and 19 percentage points higher than existing methods on the two datasets, respectively. These results reveal potential risks for deep learning models embedded in fault diagnosis systems, highlighting the need for enhanced robustness against adversarial attacks. | - |
| dc.format.extent | 10 | - |
| dc.language | English | - |
| dc.language.iso | ENG | - |
| dc.publisher | ELSEVIER | - |
| dc.title | Black-box adversarial examples via frequency distortion against fault diagnosis systems | - |
| dc.type | Article | - |
| dc.publisher.location | Netherlands | - |
| dc.identifier.doi | 10.1016/j.asoc.2025.112828 | - |
| dc.identifier.scopusid | 2-s2.0-85217358686 | - |
| dc.identifier.wosid | 001428077900001 | - |
| dc.identifier.bibliographicCitation | Applied Soft Computing, v.171, pp 1 - 10 | - |
| dc.citation.title | Applied Soft Computing | - |
| dc.citation.volume | 171 | - |
| dc.citation.startPage | 1 | - |
| dc.citation.endPage | 10 | - |
| dc.type.docType | Article | - |
| dc.description.isOpenAccess | N | - |
| dc.description.journalRegisteredClass | scie | - |
| dc.description.journalRegisteredClass | scopus | - |
| dc.relation.journalResearchArea | Computer Science | - |
| dc.relation.journalWebOfScienceCategory | Computer Science, Artificial Intelligence | - |
| dc.relation.journalWebOfScienceCategory | Computer Science, Interdisciplinary Applications | - |
| dc.subject.keywordPlus | NETWORKS | - |
| dc.subject.keywordPlus | MACHINE | - |
| dc.subject.keywordAuthor | Adversarial attack | - |
| dc.subject.keywordAuthor | Black-box setting | - |
| dc.subject.keywordAuthor | Fourier transform | - |
| dc.subject.keywordAuthor | Rotating machinery | - |
| dc.subject.keywordAuthor | Fault diagnosis | - |
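The abstract describes generating black-box adversarial examples by distorting the frequency content of vibration signals while keeping the waveform change imperceptible. The paper's exact algorithm is not reproduced in this record, so the sketch below is only an illustration of the general idea under assumed choices: energy is injected at hypothetical target frequency bins via the FFT, and the resulting waveform perturbation is projected into a small L-infinity ball so the example stays close to the original signal. The function name, bin selection, and budget `eps` are all assumptions, not the authors' method.

```python
import numpy as np

def frequency_distortion_attack(signal, target_freq_bins, eps=0.05, rng=None):
    """Illustrative black-box perturbation: boost energy at chosen
    frequency bins of a vibration signal, then bound the waveform
    change by an L-infinity budget eps so it stays imperceptible."""
    rng = np.random.default_rng(rng)
    spectrum = np.fft.rfft(signal)
    # Inject energy at bins meant to mimic a target fault state's
    # characteristic frequencies (hypothetical choice of bins).
    boost = np.zeros_like(spectrum)
    boost[target_freq_bins] = np.abs(spectrum).max() * rng.uniform(
        0.5, 1.0, size=len(target_freq_bins))
    perturbed = np.fft.irfft(spectrum + boost, n=len(signal))
    # Project the change back into an L-infinity ball of radius eps.
    delta = np.clip(perturbed - signal, -eps, eps)
    return signal + delta

# Example: a 1 kHz sine sampled at 8 kHz, nudged toward a 1.5 kHz component.
fs, n = 8000, 2048
t = np.arange(n) / fs
x = np.sin(2 * np.pi * 1000 * t)
bin_1500 = round(1500 * n / fs)
x_adv = frequency_distortion_attack(x, [bin_1500], eps=0.05, rng=0)
```

Because no model queries or gradients are used, the sketch is consistent with the black-box setting described in the abstract; in practice the target bins would be chosen from domain knowledge of the fault frequencies of the machinery state being mimicked.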
