Article type: Research Article
Authors: Shi, Jing [a] | Zhang, Xiao-Lin [a], [*] | Wang, Yong-Ping [a] | Gu, Rui-Chun [a] | Xu, En-Hui [b]
Affiliations: [a] School of Information Engineering, Inner Mongolia University of Science and Technology, Baotou, China | [b] China Nanhu Academy of Electronics and Information Technology, Jiaxing, China
Correspondence: [*] Corresponding author. Xiao-Lin Zhang, School of Information Engineering, Inner Mongolia University of Science and Technology, Baotou, China. E-mail: [email protected].
Abstract: Deep neural networks (DNNs) are susceptible to adversarial attacks, in part because adversarial samples are transferable: samples generated against one network can also deceive other, black-box models. However, existing transferable attacks tend to modify image input features directly and without selection so as to reduce prediction accuracy on the surrogate model, which lets the adversarial samples fall into that model's local optimum. In most cases the surrogate model differs significantly from the victim model, and while attacking multiple models simultaneously can improve transferability, gathering many diverse models is difficult and expensive. We instead simulate diverse models through frequency-domain transformation to narrow the gap between the source and victim models and thereby improve transferability. At the same time, we destroy the intermediate-layer features in the feature space that most influence the model's decision. Additionally, a smoothing loss is introduced to remove high-frequency perturbations. Extensive experiments demonstrate that our FM-FSTA attack generates better-hidden and more transferable adversarial samples, and achieves a high deception rate even against adversarially trained models. Compared with other methods, FM-FSTA improves the attack success rate under different defense mechanisms, revealing potential threats to current robust models. (An illustrative sketch of these components follows the article metadata below.)
Keywords: Deep neural networks, adversarial samples, transferable attacks
DOI: 10.3233/JIFS-234156
Journal: Journal of Intelligent & Fuzzy Systems, vol. Pre-press, no. Pre-press, pp. 1-14, 2024
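
The abstract only outlines the three components of FM-FSTA (frequency-domain model simulation, destruction of important intermediate-layer features, and a smoothing loss), so the following is a minimal, hedged PyTorch sketch of one attack iteration. Everything specific here is an assumption rather than the paper's actual method: the frequency transform is written with torch.fft (the paper may use a DCT or another transform), the feature-importance weights "weights" are assumed to be precomputed for a chosen intermediate layer, the smoothing term is written as total variation, and all hyper-parameter values are illustrative.

    import torch

    def register_feature_hook(layer):
        # Capture the output of a chosen intermediate layer on each forward pass.
        store = {}
        layer.register_forward_hook(lambda module, inputs, output: store.update(feat=output))
        return store

    def spectrum_augment(x, rho=0.5, sigma=16 / 255):
        # Simulate a "different" model by randomly rescaling the image spectrum.
        noise = torch.randn_like(x) * sigma                # small spatial noise
        spec = torch.fft.fft2(x + noise)                   # to the frequency domain
        mask = 1 + rho * (2 * torch.rand_like(x) - 1)      # random per-frequency scaling
        return torch.fft.ifft2(spec * mask).real           # back to the image domain

    def total_variation(delta):
        # Penalize high-frequency content in the perturbation (smoothing-loss stand-in).
        dh = (delta[..., 1:, :] - delta[..., :-1, :]).abs().mean()
        dw = (delta[..., :, 1:] - delta[..., :, :-1]).abs().mean()
        return dh + dw

    def fm_fsta_step(model, feat_store, weights, x, x_adv,
                     alpha=2 / 255, eps=8 / 255, n_spectra=4, lam=0.1):
        # One iteration: suppress "important" intermediate features, averaged over
        # several spectrum-augmented copies, plus a smoothing penalty.
        x_adv = x_adv.clone().detach().requires_grad_(True)
        loss = 0.0
        for _ in range(n_spectra):
            model(spectrum_augment(x_adv))                 # forward pass fills feat_store
            # weights is assumed to share the feature map's shape (hypothetical helper input)
            loss = loss + (weights * feat_store["feat"]).mean()
        loss = loss / n_spectra + lam * total_variation(x_adv - x)
        loss.backward()
        with torch.no_grad():
            x_adv = x_adv - alpha * x_adv.grad.sign()      # step down the feature loss
            x_adv = x + (x_adv - x).clamp(-eps, eps)       # project into the eps-ball
            return x_adv.clamp(0.0, 1.0)

Usage would be a standard iterative loop: start from x_adv = x, call fm_fsta_step repeatedly, and evaluate the resulting samples against held-out black-box models. Minimizing the weighted feature activations, rather than a cross-entropy on the logits, is what moves the attack into the feature space the abstract describes.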