
AdaNI: Adaptive Noise Injection to improve adversarial robustness

Authors:
Yuezun Li, Cong Zhang, Honggang Qi, Siwei Lyu
Affiliations:
College of Computer Science and Technology, Ocean University of China, China; University at Buffalo, SUNY, USA; University of Chinese Academy of Sciences, China
Keywords:
Adversarial examples; Adversarial robustness; Image classification
Journal:
Computer Vision and Image Understanding (CVIU)
ISSN:
1077-3142
Volume/Issue:
Vol. 238, Jan. 2024
Pages:
103855.1-103855.12
Abstract:
Deep Neural Networks (DNNs) have been proven vulnerable to adversarial perturbations, which limits their application in safety-critical scenarios such as video surveillance and autonomous driving. To counter this threat, a recent line of adversarial defense methods increases the uncertainty of DNNs by injecting random noise during both training and testing. However, existing defense methods usually inject noise uniformly across the network. We argue that the magnitude of the noise is highly correlated with the response of the corresponding features, and that randomness at important feature locations can further weaken adversarial attacks. We therefore propose a new method, AdaNI, which increases feature randomness via Adaptive Noise Injection to improve adversarial robustness. In contrast to existing methods, our method creates non-uniform random noise guided by the features and injects it into the DNN adaptively. Extensive experiments on several datasets (e.g., CIFAR10, CIFAR100, Mini-ImageNet), with comparisons to state-of-the-art defense methods, corroborate the efficacy of our method against a variety of powerful white-box attacks (e.g., FGSM, PGD, C&W, AutoAttack) and black-box attacks (e.g., transfer-based attacks, ZOO, Square Attack). Moreover, we adapt our method to improve the robustness of DeepFake detection, demonstrating its broader applicability.
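The abstract's core mechanism, scaling injected noise by the response of the corresponding features, can be illustrated with a short sketch. Below is a minimal PyTorch rendering of that idea, assuming Gaussian noise whose per-element standard deviation is proportional to the per-sample normalized feature magnitude; the module name AdaptiveNoiseInjection and the base_std hyperparameter are illustrative assumptions, not the authors' released implementation.

```python
# Hedged sketch of feature-guided adaptive noise injection, based only on
# the abstract: noise magnitude follows feature response, so important
# (high-response) feature locations receive more randomness.
# AdaptiveNoiseInjection and base_std are hypothetical, not the paper's code.
import torch
import torch.nn as nn

class AdaptiveNoiseInjection(nn.Module):
    def __init__(self, base_std: float = 0.1):
        super().__init__()
        self.base_std = base_std  # assumed global noise scale (hyperparameter)

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        # Normalize feature magnitudes to [0, 1] within each sample so that
        # stronger responses receive proportionally larger noise.
        flat = feat.abs().flatten(1)
        max_per_sample = flat.max(dim=1, keepdim=True).values.clamp_min(1e-12)
        guide = (flat / max_per_sample).view_as(feat)
        # Non-uniform noise: per-element std = base_std * normalized response.
        return feat + torch.randn_like(feat) * (self.base_std * guide)

# Hypothetical usage on an intermediate convolutional feature map. Because
# the forward pass always samples fresh noise, randomness is present at both
# training and test time, matching the abstract's description.
inject = AdaptiveNoiseInjection(base_std=0.1)
feat = torch.randn(8, 64, 16, 16)  # batch of conv features (B, C, H, W)
noisy_feat = inject(feat)
```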