Black-box adversarial examples via frequency distortion against fault diagnosis systems

Full metadata record
DC Field: Value
dc.contributor.author: Lee, Sangho
dc.contributor.author: Kim, Hoki
dc.contributor.author: Lee, Woojin
dc.contributor.author: Son, Youngdoo
dc.date.accessioned: 2025-03-10T02:03:00Z
dc.date.available: 2025-03-10T02:03:00Z
dc.date.issued: 2025-03
dc.identifier.issn: 1568-4946
dc.identifier.issn: 1872-9681
dc.identifier.uri: https://scholarworks.dongguk.edu/handle/sw.dongguk/57875
dc.description.abstract: Deep learning has significantly impacted prognostics and health management, but its susceptibility to adversarial attacks raises security risks for fault diagnosis systems. Previous research on the adversarial robustness of these systems is limited by unrealistic assumptions about prior model knowledge, which is often unobtainable in the real world, and by a lack of integration of domain-specific knowledge, particularly the frequency information crucial for identifying the unique characteristics of machinery states. To address these limitations and enhance robustness assessments, we propose a novel adversarial attack method that exploits frequency distortion. Our approach corrupts both the frequency components and the waveforms of vibration signals from rotating machinery, enabling a more thorough evaluation of system vulnerability without requiring access to model information. Through extensive experiments on two bearing datasets, including a self-collected dataset, we demonstrate the effectiveness of the proposed method in generating malicious yet imperceptible examples that remarkably degrade model performance, even without access to model information. In realistic attack scenarios for fault diagnosis systems, our approach produces adversarial examples that mimic the unique frequency components associated with the deceived machinery states, leading to average performance drops approximately 13 and 19 percentage points larger than those of existing methods on the two datasets, respectively. These results reveal potential risks for deep learning models embedded in fault diagnosis systems, highlighting the need for enhanced robustness against adversarial attacks.
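The abstract describes perturbing the frequency components of vibration signals without model access. The paper's exact procedure is not given in this record, so the following is only a minimal generic sketch of the idea: take a signal's one-sided spectrum, inject a small amount of energy at frequency bins assumed to characterize a target machinery state, and invert back to a waveform. The function name, `target_bins` parameter, and scaling rule are illustrative assumptions, not the authors' method.

```python
import numpy as np

def frequency_distortion_attack(signal, target_bins, epsilon=0.05, seed=None):
    """Illustrative sketch (not the paper's method): perturb selected
    frequency bins of a 1-D signal. Needs no model gradients, so it fits
    a black-box setting; `target_bins` stands in for frequencies assumed
    to be characteristic of the machinery state the attacker mimics."""
    rng = np.random.default_rng(seed)
    spectrum = np.fft.rfft(signal)                 # one-sided spectrum
    perturbed = spectrum.copy()
    # Scale the injected energy to the mean spectral magnitude so the
    # waveform change stays small (imperceptibility assumption).
    scale = epsilon * np.abs(spectrum).mean()
    for k in target_bins:
        phase = rng.uniform(0.0, 2.0 * np.pi)      # random phase per bin
        perturbed[k] += scale * np.exp(1j * phase)
    return np.fft.irfft(perturbed, n=len(signal))

# Usage: a 1 kHz-sampled, 50 Hz vibration-like tone with mild noise,
# perturbed at three hypothetical fault-related bins.
t = np.arange(0, 1, 1 / 1000)
x = np.sin(2 * np.pi * 50 * t) + 0.1 * np.random.default_rng(0).normal(size=t.size)
x_adv = frequency_distortion_attack(x, target_bins=[30, 31, 32], epsilon=0.5, seed=0)
```

Because the perturbation lives in a few narrow bins and is scaled to the spectrum, the time-domain change remains small even though the targeted frequency content shifts noticeably.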
dc.format.extent: 10
dc.language: English
dc.language.iso: ENG
dc.publisher: ELSEVIER
dc.title: Black-box adversarial examples via frequency distortion against fault diagnosis systems
dc.type: Article
dc.publisher.location: Netherlands
dc.identifier.doi: 10.1016/j.asoc.2025.112828
dc.identifier.scopusid: 2-s2.0-85217358686
dc.identifier.wosid: 001428077900001
dc.identifier.bibliographicCitation: Applied Soft Computing, v.171, pp. 1-10
dc.citation.title: Applied Soft Computing
dc.citation.volume: 171
dc.citation.startPage: 1
dc.citation.endPage: 10
dc.type.docType: Article
dc.description.isOpenAccess: N
dc.description.journalRegisteredClass: scie
dc.description.journalRegisteredClass: scopus
dc.relation.journalResearchArea: Computer Science
dc.relation.journalWebOfScienceCategory: Computer Science, Artificial Intelligence
dc.relation.journalWebOfScienceCategory: Computer Science, Interdisciplinary Applications
dc.subject.keywordPlus: NETWORKS
dc.subject.keywordPlus: MACHINE
dc.subject.keywordAuthor: Adversarial attack
dc.subject.keywordAuthor: Black-box setting
dc.subject.keywordAuthor: Fourier transform
dc.subject.keywordAuthor: Rotating machinery
dc.subject.keywordAuthor: Fault diagnosis
Files in This Item
There are no files associated with this item.
Appears in Collections
College of Engineering > Department of Industrial and Systems Engineering > 1. Journal Articles

Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.

Related Researcher

Lee, Woo Jin
College of Advanced Convergence Engineering (Department of Computer Science and Artificial Intelligence)
