College of Computer Science and Technology, Ocean University of China, China;
University at Buffalo, SUNY, USA;
University of Chinese Academy of Sciences;
Deep Neural Networks (DNNs) have been proven vulnerable to adversarial perturbations, which limits their application in safety-critical scenarios such as video surveillance and autonomous driving. To counter this threat, a recent line of adversarial defense methods increases the uncertainty of DNNs by injecting random noise during both training and testing. However, existing defense methods usually inject noise uniformly into DNNs. We argue that the magnitude of the injected noise should be correlated with the response of the corresponding features, and that randomness applied to important feature spots can further weaken adversarial attacks. We therefore propose a new method, AdaNI, which increases feature randomness via Adaptive Noise Injection to improve adversarial robustness. In contrast to existing methods, our method creates non-uniform random noise guided by the features and injects it into DNNs adaptively. Extensive experiments on several datasets (e.g., CIFAR10, CIFAR100, Mini-ImageNet) with comparisons to state-of-the-art defense methods corroborate the efficacy of our method against a variety of powerful white-box attacks (e.g., FGSM, PGD, C&W, AutoAttack) and black-box attacks (e.g., transfer-based attacks, ZOO, Square Attack). Moreover, we adapt our method to improve the robustness of DeepFake detection, demonstrating its broader applicability.
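To make the idea of feature-guided noise injection concrete, the following is a minimal PyTorch sketch, not the paper's actual AdaNI formulation: it assumes the per-element noise scale is simply proportional to the normalized feature magnitude, so strongly activated (important) feature spots receive more randomness, and the module stays active at both training and test time. The class name, the `base_std` parameter, and the normalization scheme are illustrative assumptions.

```python
import torch
import torch.nn as nn


class AdaptiveNoiseInjection(nn.Module):
    """Hypothetical sketch of feature-guided (non-uniform) noise injection.

    Noise magnitude is scaled by the normalized feature response, so more
    randomness lands on highly activated feature spots. This is an assumed
    simplification, not the exact AdaNI rule from the paper.
    """

    def __init__(self, base_std: float = 0.1):
        super().__init__()
        self.base_std = base_std  # assumed global noise level (illustrative)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Per-sample normalization of feature magnitudes to [0, 1].
        mag = x.abs()
        max_per_sample = mag.flatten(1).max(dim=1).values.clamp_min(1e-12)
        weight = mag / max_per_sample.view(-1, *([1] * (x.dim() - 1)))
        # Gaussian noise whose scale grows with the feature response.
        noise = torch.randn_like(x) * self.base_std * weight
        # Kept on in eval mode as well, since this class of defenses relies
        # on randomness at inference time.
        return x + noise


# Example: insert after an activation inside a convolutional block.
block = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=3, padding=1),
    nn.ReLU(),
    AdaptiveNoiseInjection(base_std=0.1),
)
features = block(torch.randn(4, 3, 32, 32))
```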