UBC Theses and Dissertations


Adaptive randomized smoothing: certifying multi-step defences against adversarial examples

Shaikh, Mohammed Shadab Salauddin

Abstract

We introduce an adaptive defense mechanism against adversarial examples called Adaptive Randomized Smoothing (ARS). ARS builds and improves upon the framework of Randomized Smoothing by leveraging two key properties of f-Differential Privacy: post-processing and adaptive composition. Our main contribution is using these two properties to extend the analysis of Randomized Smoothing to certify a test-time prediction's robustness using multiple steps instead of a single step. The additional step allows us to reduce the noise required for the prediction (classification) task. We present results for two steps only; however, our theory applies generally to more than two steps. We instantiate ARS on the image classification task to certify predictions of deep learning models against L∞-norm bounded adversarial examples. In this L∞ threat model, we enable the second step to learn and apply input-dependent masking that reduces the noise in the classification task. We design adaptivity benchmarks based on CIFAR-10 and CelebA, and show that ARS improves accuracy by 2 to 5 percentage points. On ImageNet, ARS improves accuracy by 1 to 3 percentage points over standard RS without adaptivity.
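The two-step procedure described above can be sketched as follows. This is an illustrative outline only, not the thesis's implementation: the function names (`mask_model`, `classifier`), the noise levels, and the Monte Carlo voting are assumptions standing in for the learned mask network, the base classifier, and the certified prediction rule; the f-DP accounting that justifies the reduced second-step noise is not shown.

```python
import numpy as np

def ars_predict(x, mask_model, classifier, sigma1=0.5, sigma2=0.25,
                n=100, rng=None):
    """Illustrative two-step Adaptive Randomized Smoothing prediction.

    Step 1: query a noisy copy of the input and post-process it into an
    input-dependent mask (post-processing costs no extra privacy budget).
    Step 2: classify masked, less-noisy copies and take a majority vote;
    under adaptive composition the mask lets the second step use a
    smaller noise level (sigma2 < sigma1). All names and noise levels
    here are hypothetical placeholders.
    """
    rng = rng or np.random.default_rng(0)

    # Step 1: one noisy query, used only to compute the mask.
    noisy = x + rng.normal(0.0, sigma1, size=x.shape)
    mask = mask_model(noisy)  # values in [0, 1], same shape as x

    # Step 2: Monte Carlo majority vote over masked noisy copies.
    votes = {}
    for _ in range(n):
        z = mask * x + rng.normal(0.0, sigma2, size=x.shape)
        c = classifier(z)
        votes[c] = votes.get(c, 0) + 1
    return max(votes, key=votes.get)
```

In the actual method the vote counts would also feed a statistical certification bound; here they only produce the predicted class.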


Rights

Attribution-NonCommercial-NoDerivatives 4.0 International