Detailed Information


Black-box adversarial examples via frequency distortion against fault diagnosis systems

Authors
Lee, Sangho; Kim, Hoki; Lee, Woojin; Son, Youngdoo
Issue Date
Mar-2025
Publisher
Elsevier
Keywords
Adversarial attack; Black-box setting; Fourier transform; Rotating machinery; Fault diagnosis
Citation
Applied Soft Computing, v.171, pp. 1–10
Pages
10
Indexed
SCIE
SCOPUS
Journal Title
Applied Soft Computing
Volume
171
Start Page
1
End Page
10
URI
https://scholarworks.dongguk.edu/handle/sw.dongguk/57875
DOI
10.1016/j.asoc.2025.112828
ISSN
1568-4946
1872-9681
Abstract
Deep learning has significantly impacted prognostics and health management, but its susceptibility to adversarial attacks raises security risks for fault diagnosis systems. Previous research on the adversarial robustness of these systems is limited by unrealistic assumptions about prior model knowledge, which is often unobtainable in the real world, and by a lack of integration of domain-specific knowledge, particularly the frequency information crucial for identifying unique characteristics of machinery states. To address these limitations and enhance robustness assessments, we propose a novel adversarial attack method that exploits frequency distortion. Our approach corrupts both the frequency components and the waveforms of vibration signals from rotating machinery, enabling a more thorough evaluation of system vulnerability without requiring access to model information. Through extensive experiments on two bearing datasets, including a self-collected dataset, we demonstrate the effectiveness of the proposed method in generating malicious yet imperceptible examples that remarkably degrade model performance even in this black-box setting. In realistic attack scenarios for fault diagnosis systems, our approach produces adversarial examples that mimic the unique frequency components associated with the deceived machinery states, leading to average performance drops approximately 13 and 19 percentage points larger than those of existing methods on the two datasets, respectively. These results reveal potential risks for deep learning models embedded in fault diagnosis systems, highlighting the need for enhanced robustness against adversarial attacks.
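The core idea described in the abstract, i.e. distorting the frequency content of a vibration signal while keeping the time-domain change small, can be illustrated with a minimal sketch. This is not the paper's algorithm; the bin selection, the gain, and the epsilon bound are all hypothetical placeholders chosen for illustration.

```python
import numpy as np

def frequency_distortion_sketch(signal, target_bins, gain=2.0, eps=0.05):
    """Illustrative sketch (not the published method): amplify chosen
    frequency bins of a vibration signal, then bound the resulting
    time-domain perturbation so the adversarial example stays close
    to the clean waveform."""
    spectrum = np.fft.rfft(signal)
    # Inject energy at bins that mimic the frequency signature of the
    # target (deceived) machinery state. `target_bins` is hypothetical.
    spectrum[target_bins] *= gain
    perturbed = np.fft.irfft(spectrum, n=len(signal))
    # Clip the waveform change to an epsilon ball around the clean
    # signal, approximating the imperceptibility constraint.
    delta = np.clip(perturbed - signal, -eps, eps)
    return signal + delta
```

In a black-box setting such bins would have to be chosen from domain knowledge (e.g. known bearing fault frequencies) rather than from model gradients, which is what makes a frequency-based attack feasible without access to model internals.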
Files in This Item
There are no files associated with this item.
Appears in
Collections
College of Engineering > Department of Industrial and Systems Engineering > 1. Journal Articles


Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.

Related Researcher


Son, Young Doo
College of Engineering (Department of Industrial and Systems Engineering)
