[BA] Improving NN Robustness using Relevance-guided Targeted Cutout
Due to the increasing popularity of machine learning in recent years, the robustness of trained models is more important than ever.
Relevance-guided Targeted Cutout is a data augmentation technique to improve the robustness of neural networks against adversarial examples as well as common corruptions.
By masking the pixels of a training image that are most relevant for its classification, the neural network is forced to take more pixels into account when classifying an image.
Both the size of the mask and the method used to predict pixel relevance can be chosen in many different ways.
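The masking step described above can be sketched as follows. This is a minimal illustration, not the thesis's implementation: it assumes a precomputed per-pixel relevance map (e.g. from a gradient- or saliency-based attribution method, which the abstract does not specify) and masks a square patch centred on the most relevant pixel.

```python
import numpy as np

def relevance_guided_cutout(image, relevance, mask_size=8, fill=0.0):
    """Zero out a square patch centred on the most relevant pixel.

    `relevance` is a hypothetical 2D per-pixel relevance map; how it is
    computed (saliency, gradients, etc.) is an assumption here.
    """
    h, w = relevance.shape
    # Locate the single most relevant pixel in the map.
    cy, cx = np.unravel_index(np.argmax(relevance), relevance.shape)
    half = mask_size // 2
    # Clip the patch to the image boundaries.
    y0, y1 = max(0, cy - half), min(h, cy + half)
    x0, x1 = max(0, cx - half), min(w, cx + half)
    out = image.copy()
    out[y0:y1, x0:x1, ...] = fill  # mask the most relevant region
    return out
```

During training, the augmented image would replace the original input so that the network cannot rely solely on the masked high-relevance region.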
The results show that, compared to a ResNet-20 baseline model, Relevance-guided Targeted Cutout improves robustness by at least 0.5% against Projected Gradient Descent attacks and by up to 85% against Jacobian-based Saliency Map attacks.
Furthermore, it exceeds the performance of common data augmentation techniques in at least one case of adversarial examples or common corruptions.