UBC Theses and Dissertations

Towards safeguarding convolutional neural networks against physical corruptions

Sharma, Abhijith

Abstract

Convolutional neural networks (CNNs) are integral to vision-based AI systems due to their remarkable performance on visual tasks. Although highly accurate, CNNs are vulnerable to natural and adversarial physical corruptions in the real world, which poses a serious security concern for safety-critical systems. Such corruptions often arise unexpectedly and degrade the model's performance. One of the most practical adversarial corruptions is the patch attack. Current patch attacks typically involve only a single adversarial patch; using multiple patches lets an attacker craft a stronger adversary by exploiting combinations of patches and their respective locations. Moreover, mitigating multiple patches is challenging in practice because the domain is still nascent. In recent years, the primary focus has been on adversarial attacks, yet natural corruptions (e.g., snow, fog, dust) are an omnipresent threat to CNN-based systems, with equally devastating consequences. Hence, it is essential to make CNNs resilient to both adversarial attacks and natural disturbances.

The contributions of this thesis are three-fold. First, we propose naturalistic support artifacts (NSAs) for robust prediction under natural corruption. NSAs are natural-looking objects generated through artifact training that have high visual fidelity in the scene. They are shown to be beneficial in scenarios where model parameters are inaccessible but adding artifacts to the scene is feasible. Second, we present three independent ways to attack with multiple patches: Split, Mono-Multi, and Poly-Multi attacks, showcasing the true potential of patch attacks. These multi-patch attacks are shown to overcome existing state-of-the-art defenses, posing a serious risk to CNN-based systems. Third, we present a novel, model-agnostic patch-mitigation technique based on total variation-based image resurfacing (TVR). TVR acts as a first line of defense against patch attacks by cleansing the image of suspicious perturbations, and it can nullify single- and multi-patch attacks in one scan of the image, making it the first defense to address multi-patch attacks. This thesis thereby moves one step closer to the goal of safe and robust CNN-based AI systems.
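The abstract does not spell out TVR's algorithm, so the sketch below only illustrates the quantity the defense is named after: the (anisotropic) total variation of an image, which tends to be anomalously high inside adversarial patches. The windowed scan, the threshold of mean + 2 standard deviations, and the mean-fill "resurfacing" step are illustrative assumptions for this sketch, not the thesis's actual method.

import numpy as np

def total_variation(img):
    """Anisotropic total variation: sum of absolute differences
    between vertically and horizontally adjacent pixels."""
    dv = np.abs(np.diff(img, axis=0)).sum()
    dh = np.abs(np.diff(img, axis=1)).sum()
    return float(dv + dh)

def resurface(img, window=16, thresh=None):
    """Scan a grayscale image in non-overlapping windows, flag windows
    whose local TV is anomalously high, and overwrite them with the
    window mean (a crude stand-in for resurfacing)."""
    out = img.copy()
    h, w = img.shape[:2]
    blocks, scores = [], []
    for i in range(0, h - window + 1, window):
        for j in range(0, w - window + 1, window):
            block = img[i:i + window, j:j + window]
            blocks.append((i, j))
            scores.append(total_variation(block))
    scores = np.array(scores)
    # Hypothetical anomaly threshold: mean + 2 std of local TV scores.
    if thresh is None:
        thresh = scores.mean() + 2 * scores.std()
    for (i, j), s in zip(blocks, scores):
        if s > thresh:
            out[i:i + window, j:j + window] = img[i:i + window, j:j + window].mean()
    return out

For a grayscale image loaded as a NumPy array, resurface(img) returns a copy in which high-TV windows have been blanked out in a single pass; the thesis's TVR presumably reconstructs such regions far more carefully than a mean fill.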

Rights

Attribution-NonCommercial-NoDerivatives 4.0 International